
  • Probably not going to go belly-up anytime soon, but the enshittification cycle still applies. At the moment, investors are pouring billions into the AI business, and as a result, companies can offer services for free while only gently nudging users towards the paid tiers.

    When interest rates rise during the next recession, investors won’t have access to cheap money anymore. Then the previously constant stream of funding dries up, AI companies start cutting back the free tier, and people start complaining about enshittification. During that period, the paid tiers also get restructured to squeeze more money out of paying customers. That hasn’t happened yet, but eventually it will. Just keep an eye on those interest rates.

  • Maybe in the future you could have an AI implant that handles all translation while you're talking to people; the idea has been explored in sci-fi many times. I think the Babel fish was the funniest way to implement it in a story.

    If that sort of translator becomes widespread, it would definitely change the status of learning languages. It would also mean you have to think about a potential man-in-the-middle attack. Can you trust the corporation that runs the AI? What if you want to discuss a topic that isn't approved by your local tyrannical dictatorship? A MITM attack could become a serious concern. Most people probably don't care that much, so they won't learn new languages, but some people really need to.

  • Problem solved! I don't need to think about this premium stuff anymore. Recently, I'd been playing with the idea of paying for premium, but that's no longer the case. Specifically, the family pack was the one that kinda made some limited sense in the past. I can see the kind of game Google is playing, and I'm not planning to participate.

  • The Last Airbender.

    If you just forget about the Avatar series for a while, and treat this as a bit of harmless fun, it’s not that bad. Well, it’s not good enough that I would watch it again, nor is it bad enough to warrant all the abysmal reviews. If you expect this movie to fit in with the series, though, all of the hate and anger is entirely justified.

    It all depends on how you watch this movie, and I would argue that there is a way to enjoy it. It’s not all bad.

  • The Internet is a pretty big place. There’s no such thing as an idea that is too stupid. There are always at least a few people who will turn that idea into a central tenet of their life. It could be too stupid for 99.999% of the population, but that still leaves about 5,000 people who are totally into it.

  • Glad I could help! This command is just so much nicer.

  • Now you know why it’s called the Disk Destroyer.

    Before using dd, I run lsblk first so that I can see what each disk is called. And before pressing enter, I double-check the names against the lsblk output.
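
    That workflow is roughly the following (the image name and /dev/sdX are placeholders, and the dd line is deliberately left commented out so nothing gets destroyed by accident):

    ```shell
    # Show every block device with its size and model, so the
    # target drive can be identified before doing anything destructive.
    lsblk -o NAME,SIZE,MODEL

    # Only after verifying the name above, write the image.
    # /dev/sdX is a placeholder; double-check it against the lsblk output!
    # dd if=image.iso of=/dev/sdX bs=4M status=progress conv=fsync
    ```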

  • The best thing about R is that it was made by statisticians. The worst thing about R is that it was made by statisticians.

  • My intuition says you’re right, but I’ve learned to question it from time to time. I don’t know any billionaires myself, nor have I read much about them, so I don’t really have any facts either way. Got any sources I should look into?

  • When diagnosing software-related tech problems, even with proper instructions there’s always the risk of finding outdated tips. You may be advised to press buttons that no longer exist in the version you’re currently using.

    With hardware though, that’s unlikely to happen, as long as the model numbers match. However, when relying on AI generated instructions, anything is possible.

  • That's a problem when you want to automate the curation and annotation process. So far, you could just dump all of your data into the model, but that might not be an option in the future, as more and more of the available training data has been generated by other LLMs.

    When that approach stops working, AI companies will need to figure out a way to get high-quality data, and that's when it becomes useful to have data that is verified to be written by actual people. That way, the AI doesn't even need to be able to curate the data, since humans have already done that to some extent. You could just prioritize the small amount of verified data while still using the vast amounts of unverified data for training.

  • Math problems are a unique challenge for LLMs, often resulting in bizarre mistakes. An LLM can look up formulas and constants, but it usually struggles to apply them correctly. For example, when counting the hours in a week, it might say it's calculating 7*24, which looks good, but somehow the answer still comes out as 10 🤯. Like, WTF? How did that happen? That specific problem isn't exactly hard, but the same phenomenon shows up in more complicated ones too. I could give some other examples, but this post is long enough as it is.

    For reliable results in math-related queries, I find it best to ask the LLM for the formulas and values, then perform the calculations myself. The LLM can typically look up that information reasonably accurately but will mess up the application. Just use the right tool for the right job, and you'll be ok.
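
    To make the division of labor concrete with the hours-in-a-week example: treat the formula as the only thing coming from the model, and do the arithmetic deterministically yourself.

    ```python
    # Pretend the LLM returned the formula "days * hours" for hours in a week.
    # Instead of trusting its arithmetic, evaluate the numbers ourselves.
    days_per_week = 7
    hours_per_day = 24

    hours_per_week = days_per_week * hours_per_day
    print(hours_per_week)  # 168, not 10
    ```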

  • There might be a way to mitigate that damage: categorize the training data by source. If it's verified to be written by a human, give it a bigger weight; if not, it's probably contaminated by AI output, so give it a smaller weight. Humans still exist, so clean data can still be obtained. Quantity remains a problem, though, since these models are really thirsty for data.
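
    A minimal sketch of that weighting idea (the weight values and the `verified_human` flag are made up for illustration, not any real pipeline):

    ```python
    import random

    # Each training document carries a flag saying whether its origin was
    # verified to be human-written; the weights below are arbitrary choices.
    corpus = [
        {"text": "hand-written essay", "verified_human": True},
        {"text": "scraped forum post", "verified_human": False},
        {"text": "blog spam, likely AI-generated", "verified_human": False},
    ]

    HUMAN_WEIGHT = 5.0       # prioritize the small pool of verified data
    UNVERIFIED_WEIGHT = 1.0  # still use the vast unverified pool, just less

    weights = [HUMAN_WEIGHT if doc["verified_human"] else UNVERIFIED_WEIGHT
               for doc in corpus]

    # Sample a training batch with verified data over-represented.
    batch = random.choices(corpus, weights=weights, k=8)
    print(sum(doc["verified_human"] for doc in batch), "of 8 samples verified")
    ```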

  • I've even tried to use Gemini to find a particular YouTube video matching specific criteria. Unsurprisingly, it gave me a bunch of videos, none of which were even close to what I was looking for.

  • Oh absolutely. Cyberpunk was meant to feel alien and revolting, but nowadays it is beginning to feel surprisingly familiar. Still revolting though, just like the real world.

  • Copilot wrote me some code that totally does not work. I pointed out the bug and told it exactly how to fix the problem. It said it had fixed it, then gave me the exact same buggy trash code again. Yes, it can be pretty awful; LLMs fail in some totally absurd and unexpected ways. It knows the documentation of every function, yet somehow still fails at some trivial tasks. It's just bizarre.

  • Fair enough, and that’s actually really good. You’re going to be one of the few who actually go through the trouble of making an account on a forum, asking a single question, and never visiting the place again after getting the answer. People like you are the reason the internet has an answer to just about anything.