I mean compressor as half of a compression/decompression algorithm. A better way to word it: when you apply machine learning to a compression problem, you can do it losslessly…your decompressed output will be identical to the input, every time.
“NNCP” is a good search term to learn more, specifically about how this works.
This is not new knowledge and predates the current LLM fad.
See the Hutter Prize, which has had machine-learning-based compressors leading the rankings for some time: http://prize.hutter1.net/
It’s important to note that, when applied to compression, the model does produce a code (a.k.a. an encoding) that exactly reproduces the input. But on a different input, the same model is unlikely to achieve an impressive compression ratio.
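To make that concrete, here’s a toy sketch. A simple adaptive byte-frequency model stands in for the neural net (NNCP’s actual model is far more capable), and instead of running a real arithmetic coder, it just adds up the -log2(p) bits an ideal one would spend. Decompression is exact because the decoder maintains the identical counts:

```python
import math
from collections import Counter

def ideal_compressed_bits(data: bytes) -> float:
    counts = Counter(range(256))  # Laplace smoothing: every byte value starts at count 1
    total = 256
    bits = 0.0
    for b in data:
        bits += -math.log2(counts[b] / total)  # bits an ideal arithmetic coder spends here
        counts[b] += 1  # adapt; the decoder makes the identical update
        total += 1
    return bits

text = b"abracadabra " * 100
print(f"raw bytes: {len(text) * 8} bits")
print(f"modelled:  {ideal_compressed_bits(text):.0f} bits")
```

A model that predicts the data well spends very few bits; run the same model over data it doesn’t fit and the total can even exceed the raw size, which is the point above about results not transferring across inputs.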
But I don’t think it’s smart. Holding this for more than a day or two is irresponsible. You capture more risk on the up days than you will gain on the down days of the underlying ticker.
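Here’s a toy calculation of that asymmetry (assuming this is a daily-rebalanced -1x inverse position; the price moves are made up). The underlying round-trips back to where it started, but the inverse position still ends down:

```python
underlying = 100.0
inverse = 100.0  # daily-rebalanced -1x position
for r in (0.10, -1 / 11):  # underlying: +10%, then back to the start
    underlying *= 1 + r
    inverse *= 1 - r  # the inverse gains when the underlying falls

print(f"underlying: {underlying:.2f}")  # 100.00 -- a flat round trip
print(f"inverse:    {inverse:.2f}")     # 98.18  -- down ~1.8% anyway
```

Every up/down round trip bleeds a little, so the losses compound the longer you hold.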
Instead, invest in a business you expect to grow. Just ignore the failing ones.
Can you share sample code I can try or documentation I can follow for using an AMD GPU in that way (shared, virtualized, using only open-source drivers)?
AFAIK it’s only NVIDIA that allows containers shared access to a GPU on the host.
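For reference, the NVIDIA path I mean looks like this (a sketch assuming the NVIDIA Container Toolkit is installed on the host; the image tag is just an example):

```
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

Any number of containers can share the host GPU this way, with no passthrough or device-specific wiring in the image itself.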
With the majority of code being deployed in containers, you end up locked into the NVIDIA ecosystem even if you use OpenCL. So I guess people just use CUDA, since they’re bound by the container requirement anyway.
That’s from my experience using OpenGL headless. If I’m wrong, please correct me; I’d prefer to be GPU-agnostic.
I’ve been in this scenario and I didn’t wait for layoffs. I left and applied my skills where shit code is not tolerated, and quality is rewarded.
But in this hypothetical, we didn’t get this shit code because management encouraged the right behavior and gave people time to make things right. They’re going to keep the yes-men and fire the “unproductive” ones. (And I know full well that adding to the pile is not productive in the long run…but what does the management overseeing this mess think?)
To be fair, if you give me a shit codebase and expect me to add features with no time to fix what’s already there, I will also just add more shit on the pile. Because obviously that’s how you want your codebase to look.
There is value in just using something like this to break the spending habits of the population.
A lot of people may find that a portion of their spending wasn’t that necessary after all, and will keep cutting it even after the boycott ends. The businesses will then need to improve their services or lower prices to win customers back.
At least, that’s what I hope this achieves. The organizers might have varying goals.
In my current role, I mostly hire for “senior” roles, so the applicants (who are pre-screened before I see them) typically have 5+ years of experience. I ask about the code they’ve written, and then I ask some questions about how they would extend that code (to meet some new requirements). What I’m looking for is not so much a specific answer as “can we think through this problem together?”
That said, I’ve been the interviewer for “junior” roles…and there isn’t as much correlation between ability and experience as you might think. So there’s no reason to feel imposter syndrome. I’ve worked with extremely smart and talented developers who had no formal training.
I think all the stuff you’re doing sets a really good foundation for a career in software, if that’s where you want to go. One thing I might suggest is making a few contributions to open-source or team projects. It’s a useful way to learn how to read code and how to present code to others (or fit your ideas into an existing codebase).