LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They're good at that. Need something summarized? They can do that, too. Need a question answered? No can do.
LLMs can't generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they'll also happily generate complete bullshit answers, and to them a bullshit answer is no different from a real one.
They're text transformers marketed as general problem solvers because a) the market for text transformers isn't that big and b) general problem solvers are what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.
I know I sorted my feed by Top 6 Hours, but that doesn't mean I expect six hours' worth of text in a single image. Did they copy and paste three different job postings together? Did they use an LLM that had its stop token configured incorrectly? Is it an attempt at weeding out people who object to having their time wasted by corporate bullshit?
We may never know. What we do know is that this wall of text has more red flags than a Chinese military parade.
Given that prisons are an industry in the States and that inmates are one of their main sources of cheap labor, the high recidivism rate is there to maximize profits.
Because giving answers is not an LLM's job. An LLM's job is to generate text that looks like an answer. And then we try to coax that into generating correct answers as often as possible, with mixed results.
I remember talking to someone about where LLMs are and aren't useful. I pointed out that LLMs would be absolutely worthless for me as my work mostly consists of interacting with company-internal APIs, which the LLM obviously hasn't been trained on.
The other person insisted that that is exactly what LLMs are great at. They wouldn't explain how exactly the LLM was supposed to know how my company's internal software, which is a trade secret, is structured.
But hey, I figured I'd give it a go. So I fired up a local Llama 3.1 instance and asked it how to set up a local copy of ASDIS, one such internal system (name and details changed to protect the innocent). And Llama did give me instructions... on how to write the American States Data Information System, a Python frontend for a single MySQL table containing basic information about the member states of the USA.
Oddly enough, that's not what my company's ASDIS is. It's almost as if the LLM had no idea what I was talking about. Words fail to express my surprise at this turn of events.
Run a fairly large LLM on your CPU so you can get the finest of questionable problem solving at a speed fast enough to be workable but slow enough to be highly annoying.
This has the added benefit of filling dozens of gigabytes of storage that you probably didn't know what to do with anyway.
And this is the point where I'd step in as a creator and announce that Paul the Protagonist wasn't in a coma, all of the audience were, and that we've hallucinated the entire show.
I wouldn't call their Windows support stellar, either. There's only one error code for any and all problems and RTXes can be damn finicky if you're unlucky.
sfc /scannow does fix certain problems, just not nearly as many as the Microsoft support forum would like.
I do agree with you on the log, although that's often because whichever component is misbehaving just doesn't believe in error logs. I'm looking at you, Nvidia.
I can appreciate guns from a technical design standpoint. Some of them can look good. I'd even consider owning an inert USFA Zip .22 as an example of spectacularly bad product design. (I'm a UI/UX guy and the total lack of consideration for ergonomics is fascinating to me.)
I have no desire to own a functioning gun, though. Very few people really need one.
I'm kinda planning on teaching my team how to use interactive rebases to clean up the history before a merge request.
The first thing they'll learn is to make a temporary second branch so they can just toss their borked one if they screw up. I'm not going to deal with their git issues for them.
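That safety-branch-first workflow can be sketched as below. This is just an illustration in a throwaway repo; the branch names (`feature-x`, `backup/feature-x`) are made up, and the `GIT_SEQUENCE_EDITOR` trick only exists to make the rebase scriptable here — in practice you'd run `git rebase -i main` and edit the todo list by hand.

```shell
# Demo of "snapshot first, rebase second" in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name Demo

# Some history: one commit on main, two on a feature branch.
echo base > file && git add file && git commit -qm "base"
git checkout -qb feature-x
echo one >> file && git commit -qam "feature work"
echo two >> file && git commit -qam "fixup typo"

# 1. Snapshot the branch BEFORE rewriting history.
git branch backup/feature-x

# 2. Squash the second commit into the first. GIT_SEQUENCE_EDITOR
#    rewrites the rebase todo non-interactively for this demo;
#    interactively you'd just edit "pick" to "fixup" yourself.
GIT_SEQUENCE_EDITOR="sed -i '2s/^pick/fixup/'" git rebase -i main

# 3. If the rebase goes sideways mid-way:   git rebase --abort
#    If it finished but the result is junk: git reset --hard backup/feature-x

git log --oneline main..HEAD   # history is now a single clean commit
```

Once the cleaned-up branch is merged, the snapshot can be dropped with `git branch -D backup/feature-x`.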
Do you mean the Atari 2600? Because all Amigas had either a floppy drive (all of the desktop models) or onboard NVRAM (the CDTV and the CD32).