Large language models, explained with a minimum of math and jargon
There are a few things I've taken from that article on first reading:
Anyway, some pseudorandom babbling that I hope is at least as useful as a hallucinating AI.
I disagree that 3 is not a problem.
Unlike the industrial processes you compared it to, we cannot predict the output of an LLM with any certainty. This can and will be problematic, as our economy is built around predictable processes.
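To make that concrete, here is a minimal sketch of where the unpredictability comes from, assuming the usual temperature-based sampling step at the end of the model. The tokens and probabilities below are made up purely for illustration; only the sampling mechanism is the point.

    import numpy as np

    # Toy next-token distribution an LLM might assign after some prompt.
    # The tokens and logits are invented for illustration.
    tokens = ["reliable", "risky", "unproven", "profitable"]
    logits = np.array([2.1, 1.8, 1.7, 0.4])

    def sample_next_token(logits, temperature=0.8, rng=None):
        # Softmax with temperature, which is how most LLM decoders pick the next token.
        rng = rng or np.random.default_rng()
        scaled = logits / temperature            # lower temperature sharpens, higher flattens
        probs = np.exp(scaled - scaled.max())    # shifted for numerical stability
        probs /= probs.sum()
        return tokens[rng.choice(len(tokens), p=probs)]

    # The same prompt, decoded five times, can give five different continuations.
    for _ in range(5):
        print(sample_next_token(logits))

Decode greedily (temperature near zero) and the output becomes essentially repeatable, but that's rarely how these systems are deployed.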
That is true, but the comparison is perhaps inapt here. Humans are not predictable, nor are the weather, the actual outcomes of policy decisions, or any number of other things that are critical to a functioning society. We mostly cope by building systems that are somewhat resilient, that take imperfection into account, and that we adjust over time to tweak the results.
I think perhaps a better analogy than the oil refinery is economic or social policy. We are always fiddling with inputs and processes to get the results we want. We never have perfectly predictable outcomes, yet we somehow mostly manage to get things approximately right. And that's setting aside the fact that we can't really agree on what "correct" means, even though most of us would agree that 1920 was better than 1820 and that 2020 was better than 1920.
If we want AI to be the backbone of industry, then the current state of the art probably isn't suitable and the LLM/transformer systems may never be. But if we want other ways to browse a problem space for potential solutions, then maybe they fit the bill.
I don't know and I suspect we're still a decade away from really being able to tell whether these things are net positive or not. Just one more thing that we have difficulty predicting, so we have to be sure to hedge our bets.
(And I apologize if it seems I've just moved the goalposts. I probably did, but I'm not sure that I, or anyone else, knows enough at this point to lock them in place.)
Hallucinations come from training that weights the model toward producing an answer that looks satisfactory. A future AGI, or an LLM guided by one, might look at the human responses and work out why its answers weren't good enough, but current LLMs can't do that. I'll admit I don't know how the longer-memory versions work, but there's still no actual thinking; possibly they just wrap up the previously generated text along with the new request to nudge the next answer closer.
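For what it's worth, the "longer memory" chat setups are commonly described as doing exactly that wrapping-up: each new request is sent along with a transcript of the earlier turns, and the model simply continues the text. A rough sketch under that assumption (generate() is a hypothetical stand-in for whatever model sits behind the chat, not a real API):

    # generate() is a hypothetical placeholder; any LLM text-completion call would slot in here.
    def generate(prompt: str) -> str:
        raise NotImplementedError("call your model of choice here")

    def chat_turn(history: list[tuple[str, str]], user_message: str) -> str:
        # "Wrap up" every previous exchange, plus the new request, into one flat prompt.
        prompt = ""
        for user_text, model_text in history:
            prompt += f"User: {user_text}\nAssistant: {model_text}\n"
        prompt += f"User: {user_message}\nAssistant:"
        reply = generate(prompt)               # the model only ever sees this flat text
        history.append((user_message, reply))  # the "memory" is just this growing transcript
        return reply

In the simplest setups nothing persists inside the model between turns; the apparent memory lives entirely in the transcript that gets re-sent each time, though some systems also summarize or retrieve older material to fit it into the context window.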
I wonder how creative these things are. Somewhere between "hallucination" and fully verifiable correct answers based on current knowledge, there might be a "zone of creativity."
I would argue that there is no such thing as something created completely from nothing. Every advance builds on work that came before, often combining bodies of knowledge from disparate fields to discover new insights.
There is a flip side of the coin for #2, and it's something no one really wants to talk about; people actually get very emotional if you even suggest it. That is the consciousness issue.
Basically, if the claim is that machine learning is on the right path to explaining how our minds work, which is a claim I'm inclined to agree with, then it seems unreasonable to dismiss the idea that deep neural networks might already have some kind of qualitative conscious experience. I am not going to say for sure that they do have conscious experience; they might not. But I think it's wholly unreasonable to dismiss the possibility out of hand.
As it stands, we don't have any well-accepted theories of how consciousness arises at all. The issue is something science is not well equipped to address in its current state; we need fundamental philosophy to address it (I'm talking about academic philosophy, not woo-woo crystal stuff; I shouldn't need to say this).
The best we can do now is look for what are referred to as "neural correlates of consciousness": correlations between neural states and conscious experiences. But we don't have a way of explaining why those activity patterns produce the experiences they do. We have theories of how matter acts, not of what matter experiences. There is no connection between information processing and experience; that link just does not exist in our theoretical frameworks, and it's unlikely to appear simply from understanding more of the details of how information is processed in the brain. We need some way to link types of information processing to types of conscious experience. The closest we have is something like integrated information theory, but it's not fully accepted.
I agree that consciousness is a sensitive issue. I haven't refined my thinking on it far enough to really argue my position, but I suspect it's just one more aspect of the "mind of the gaps". As with the various "god of the gaps" creationist arguments, I think consciousness will end up falling into the same dead end. That is, we'll get far enough to start feeling comfortable with the idea that the gaps are only gaps in the record or in our understanding, not failures of theory.
Some current discussion of the matter is already starting to set up the relevant boundaries. We have ourselves as conscious beings. Over time we've come to accept that those with mental and intellectual disabilities are conscious. Some attempts to properly define consciousness leave us no choice but to conclude that, like intelligence, consciousness comes in degrees. That, in turn, opens the door to the possibility of consciousness in everything from crows and octopuses to butterflies and earthworms to bacteria and even plants.
I find it particularly interesting that the "degrees of consciousness" map pretty nicely to the "degrees of intelligence".
So if you were to ask me today if my old Fidelity chess computer was conscious, I'd say "to a low degree". Not because I claim any kind of special knowledge, but because I'd be willing to bet a small amount of money that we'll get to the point where the question can actually be answered with confidence and that the answer would likely be "to a low degree".
On your discussion of the neural correlates of consciousness, my opinion is that claiming this still tells us nothing about "what matter experiences" is itself a step into the "mind of the gaps". I'm happy enough to take those correlates as evidence that information processing and consciousness cannot be kept separate.