Am I the only one in this thread who uses VSCode + GDB together? The inspection panes and the ability to set breakpoints and hover over variables to drill down into them are just great. It seems like everyone should set up their own c_cpp_properties.json and tasks.json files and give it a try.
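For anyone who wants to try it: alongside those two files, a launch.json is what wires the debugger up. Here's a minimal sketch, assuming Microsoft's C/C++ extension is installed; the program path is a placeholder for your own binary:

    {
      "version": "0.2.0",
      "configurations": [
        {
          "name": "Debug with GDB",
          "type": "cppdbg",
          "request": "launch",
          "program": "${workspaceFolder}/build/my_app",
          "cwd": "${workspaceFolder}",
          "MIMode": "gdb",
          "setupCommands": [
            {
              "description": "Enable GDB pretty-printing of STL containers",
              "text": "-enable-pretty-printing",
              "ignoreFailures": true
            }
          ]
        }
      ]
    }

With that in place, F5 launches the binary under GDB and the hover/inspection panes light up.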
I'm betting the truth is somewhere in between; models are only as good as their training data, so if over time they prune out the bad examples to increase overall quality and accuracy, it should in theory vastly improve every model. But the sheer size of the datasets is now 1 trillion+ tokens for the larger models. Microsoft (ugh, I know) is experimenting with the "Phi 2" model, which trains on significantly less data but focuses primarily on the quality of the dataset itself, so that a 2.7B-parameter model can compete with a 7B-parameter one.
Doesn't that suppress valid information and truth about the world, though? For what benefit? To hide the truth, to appease advertisers? Surely an AI model will come out some day as the sum of human knowledge without all the guard rails. There are some good ones already, like Mistral 7B (and Dolphin-Mistral in particular, an uncensored fine-tune). I hope Mistral and the other AI developers keep maintaining lines of uncensored, unbiased models as these technologies grow even further.
I've been doing this for over a year now, starting with GPT in 2022, and there have been massive leaps in quality and effectiveness. (Versions are sneaky; even GPT-4 has quietly evolved many times without people really knowing what's happening behind the scenes.) The problem that remains is the context window. Claude.ai is > 100k tokens now I think, but the window still limits how much code a single 'session' can produce. I'm still trying to push every model to its limits, but another big problem in the industry right now is effectiveness, measured by perplexity, at a given context length.
This plot shows that as the window fills up (and it fills in direct proportion to the number of tokens in the code you insert plus every token the model generates, since both count against the same window), everything it produces becomes less accurate and, quite literally, more perplexing.
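For anyone unfamiliar with the metric: perplexity is just the exponential of the average negative log-likelihood the model assigns to each token, so lower is better. A minimal sketch, with made-up log-probabilities standing in for real model output:

    import math

    # Hypothetical per-token log-probabilities; in practice these come
    # from the model's output distribution over its vocabulary.
    token_logprobs = [-0.5, -1.2, -0.3, -2.1, -0.8]

    # Perplexity = exp(average negative log-likelihood).
    # A higher value means the model is more "surprised" by the text.
    nll = -sum(token_logprobs) / len(token_logprobs)
    print(f"perplexity = {math.exp(nll):.2f}")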
But you're right overall: these things will continue to improve, though you still need an engineer to actually make the code function in a particular environment. I just don't get the feeling we'll see that change within the next few years, but if it happens, every IT worker on earth is effectively obsolete, along with every desk job known to man, since an LLM at that point would be able to reason about how to automate any task in any language.
You just described all of my use cases. I need to get comfortable with Copilot- and Codeium-style services again; I enjoyed them to some extent 6 months ago. Unfortunately my current employer has to comply with federal government security protocols, and I'm not allowed to ship any code in or out of some dev machines. Instead, I still run LLMs on another machine that acts, like you mentioned, as a sort of Stack Overflow replacement. I can describe anything or ask anything I want and immediately get extremely specific custom code examples.
I really need to get Codeium or Copilot working again just to see if anything has changed in the models (I'm sure it has).
I use AI to write code for work every day, across many different models and services, including https://ollama.ai on my own hardware. It's useful when a developer can take the code and refactor it to fit into a large codebase (after fixing the inevitable broken bits here and there), but it is by no means anywhere close to successfully writing code all on its own. Eventually, maybe, but nowhere near anytime soon.
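For anyone curious about the local setup: once Ollama is running, it exposes an HTTP API on localhost, so scripting it is trivial. A minimal sketch; the model name is just an example, substitute whatever you've pulled:

    import json
    import urllib.request

    # Ollama listens on port 11434 by default once the server is running.
    # "codellama" is just an example; fetch it first with `ollama pull codellama`.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "codellama",
            "prompt": "Write a C function that reverses a string in place.",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])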
I do the exact same thing: once my comment reaches a paragraph long I just think "this is way too much stupid information to add, fuck it all, cancel." Maybe I should shitpost random thoughts anyway and let the chips fall where they may.
That's the unfortunate side effect of rampant inflation while wages stay stagnant and markets decline. Look at how many restaurants and fast food places closed last year. I'm lower middle class and I have stopped getting fast food entirely. Every time I pass by on the way to and from work, I see the total for the meal in my head and just drive past, because it's not worth the financial damage. I honestly don't know how everyone else is affording even Taco Bell meals every week. I worked there years ago, and back then the food was significantly better and way cheaper than other options. Where the hell did they all go so wrong this past year? Greed? There's gotta be a price-gouging mechanism in there somewhere.
I'm more surprised that they thought they could double prices while also very noticeably dropping quality. McDonald's quality drop isn't quite as bad as, say, Taco Bell's, but wow, it seems like hubris is in the air lately with these corporations' executive boards.
I am genuinely perplexed at the amount of mental gymnastics you are doing to justify attacking civilian ships from other countries that are not enemy combatants. Incredible Olympic display of psychological back-flips.
The only downside is that their algorithm never changed: the same station had the same songs on repeat for 7+ years, with not a single new song added for some reason. Keeping it fresh would have gone a long way.
While that is true, a lot of death and suffering was required for us to reach this point as a species. Machines don't need the wars and natural selection it took us to achieve the same feats, and they don't share our limitations.
I finally took the plunge to the Linux desktop for all my work in 2016 and have not looked back (aside from the occasional Windows VM, which is extremely rare now). Even Arch is perfectly fine as a workstation these days, which surprised me. I recommend EndeavourOS to streamline the install process, but it's Arch underneath.
A modern Neovim configuration, batteries included, for Python, Lua, C++, Markdown, LaTeX, and more...
This is enough to get the intellisense and linters up and running. It only takes ~5 minutes to configure after installing the prerequisites, and it's worth it.
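As a rough illustration of what the LSP half of a config like this looks like, here's a minimal sketch assuming nvim-lspconfig is installed and the language servers (pyright, lua_ls, clangd) are on your PATH:

    -- Minimal LSP setup via nvim-lspconfig; each server must be
    -- installed separately (e.g. pyright via npm or pip).
    local lspconfig = require('lspconfig')
    lspconfig.pyright.setup({})  -- Python
    lspconfig.lua_ls.setup({})   -- Lua
    lspconfig.clangd.setup({})   -- C/C++

    -- Basic keymaps, bound whenever a server attaches to a buffer.
    vim.api.nvim_create_autocmd('LspAttach', {
      callback = function(args)
        local opts = { buffer = args.buf }
        vim.keymap.set('n', 'gd', vim.lsp.buf.definition, opts)
        vim.keymap.set('n', 'K', vim.lsp.buf.hover, opts)
      end,
    })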
Late-stage capitalism is a blight on humanity; there's gotta be some sort of revolutionary change to society at the rate this is all headed. The world is not healthy right now.