No, it's just that it doesn't know if it's right or wrong.
How "AI" learns is they go through a text - say blog post - and turn it all into numbers. E.g. word "blog" is 5383825526283. Word "post" is 5611004646463.
Over a huge amount of text, a pattern emerges: the second number almost always follows the first one. Basically statistics. And it does that for all the words and word combinations it finds - an immense amount of text is needed to find all those patterns. (Fun fact: that's why companies like OpenAI, which makes ChatGPT, need hundreds of millions of dollars to "train the model" - they need enough computing power, storage and memory to read the whole damn internet.)
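If you want to see the idea in code, here's a toy sketch. It's nothing like a real LLM (those use neural networks and subword tokens, not word counts), and the example texts and IDs are made up, but it shows the "words become numbers, count what follows what" part:

```python
# Toy sketch: map words to numbers ("tokens") and count which number
# tends to follow which. NOT how real LLMs are trained.
from collections import Counter, defaultdict

texts = [
    "i wrote a blog post about my blog",
    "read my blog post",
    "a new blog post is up",
]

# Assign each distinct word an arbitrary integer ID.
vocab = {}
def token_id(word):
    if word not in vocab:
        vocab[word] = len(vocab)
    return vocab[word]

# Count how often token B follows token A across all the texts.
follow_counts = defaultdict(Counter)
for text in texts:
    ids = [token_id(w) for w in text.split()]
    for a, b in zip(ids, ids[1:]):
        follow_counts[a][b] += 1

print(vocab)                         # e.g. {'i': 0, 'wrote': 1, 'a': 2, ...}
print(follow_counts[vocab["blog"]])  # "post" follows "blog" every time here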
So now, how do LLMs "understand"? They don't; it's just a bunch of numbers and statistics about which word (turned into that number, or "token" to be more precise) follows which other word.
So now. Why do they hallucinate?
When they get your question, they turn all the words in your prompt into numbers again, and then go look up in their huge databases which words are likely to follow your words.
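As a toy sketch again (the table and probabilities below are invented, and real models look at far more than the last word), answering a prompt is basically: turn the prompt into tokens, then keep appending whatever is statistically most likely to come next:

```python
# Toy continuation lookup with invented probabilities - not real model output.
next_word_probs = {
    "blog": {"post": 0.62, "about": 0.21, "is": 0.17},
    "post": {"about": 0.48, "is": 0.33, "today": 0.19},
}

def continue_prompt(prompt, steps=3):
    words = prompt.lower().split()
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # Pick the statistically most likely follower - no understanding involved.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(continue_prompt("my blog"))  # -> "my blog post about", pure statistics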
They add in a tiny bit of randomness: sometimes they replace the "closest" match with a synonym or a less likely match, so they even seem real.
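That randomness is usually controlled by something called "temperature". Here's a rough illustration with made-up probabilities, just to show the dice-rolling part:

```python
import random

# Toy sampling sketch: instead of always taking the single most likely word,
# draw from the distribution, so a less likely synonym sometimes gets picked.
options = {"post": 0.62, "entry": 0.21, "article": 0.17}  # invented numbers

def sample_next(options, temperature=1.0):
    # Temperature sharpens (<1) or flattens (>1) the distribution before sampling.
    weights = [p ** (1.0 / temperature) for p in options.values()]
    return random.choices(list(options.keys()), weights=weights, k=1)[0]

print(sample_next(options, temperature=0.2))  # almost always "post"
print(sample_next(options, temperature=1.5))  # "entry"/"article" show up more often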
They add "weights" so that they would rather pick one phrase over another, or e.g. give some topics very very small likelihoods - think pornography or something. "Tweaking the model".
But there's no knowledge as such; mostly it's statistics and dice rolling.
So the hallucination is not "wrong" to the model; it's just statistically likely that those words would follow, based on your words.

Did that help?
Much better integrated refactoring support. Much better source code integration support. Much better integrated debugging support. Much better integrated assistive (but not AI) support.
VS Code can do many of the things IntelliJ can, but not all of them, and many require fiddling with plugins.
Usually, JB is also faster (if your dev machine can run it, but in my experience most devs have beefy machines).
I would expect it to rise. I still think it's worth it, if it's a good tool for you. IntelliJ is really a good product, even if they do have their downsides. In a commercial environment, it's totally worth it to buy a licence per developer, if it makes them more productive.
I don't mind paying for tools that help me do my job. For several years I even had a personal licence for their "all products pack" thing. Their IDEs do a decent job.
There are better tools for specific things, but overall, as an IDE, it's pretty good and makes you effective. And especially if you have to use Windows, it integrates enough tools that you don't have to mess with Windows' crappy tooling that often.
That said, it's still a big, fat, slow IDE. For a while now I've been using neovim and my modernized Linux toolkit, and for the most part I'm happier with it than I was with IntelliJ, GoLand and the rest. Happy enough to not have a JetBrains licence any more.
And recently I've looked into Zed. It looks pretty neat so far, but it's still under development. Once things stabilise there, I might commit to it and switch over full time. It's got a few nice things that I miss from IntelliJ, but it's way, way more responsive.
Back on topic: I wanted to say I don't mind paying for IDEs, if they're good tools. But this is more of an ideological challenge and I'm always trying to keep myself from overreacting. So while I don't agree with you in general ("don't trust paid IDEs"), I might agree with you specifically ("don't fall for JetBrains' lure and Microsoft-like tactics").
As an aside, I see we're bringing the strangers thing over from Reddit. I hope more of the fun and funny stuff makes it over too; I miss some of the light shitposting.