I've written Haskell quite a bit, and I don't fully understand why this is called Haskell style. Haskell code looks nothing like this; the syntax is completely different. For Haskell's own syntax I think the style works fine, since I've never noticed anything weird. But this code in "Haskell style" looks absolutely insane.
Honestly, as a guy, I'm more and more starting to feel the same. There's so much dumb or hurtful shit happening under the premise of "boys being boys". It's making me hate basically everything about men and being male. Luckily there are plenty of us who are chill, so I look for those. But seeing some men being impulsive, aggressive, dominant, offensive assholes is definitely becoming more and more frustrating. It's painful to look back at how much of this behaviour was just programmed into our brains as normal.
I'm struggling to see what alternative you are envisioning here, but maybe I'm misunderstanding you. People and companies pay for the work they need done. When not enough people do that work, the salaries paid for it increase. When there's a large pool of people willing to do a job but not enough people and companies who need that job done, the salaries drop. The capitalist job market, from a theoretical perspective, seems to regulate itself such that people choose jobs that are desired by society.
Now there are obviously some downsides to this, because the gap between income levels is way too large in my opinion. Regulation is required so the CEOs don't earn like 200 times as much as the cleaning people. But in the end any system that exists needs to make sure that people do work that benefits society. And certain jobs are just more desired by society than others. Not everyone who likes drawing can become an artist, because in the end society just doesn't need that many artists. So any system should penalize people who try to do a job that society simply doesn't need at that moment and incentivize jobs that have shortages.
I'm aro/ace and I don't really say anything more than LGBT or LGBT+ myself. I'm not really a fan of the whole alphabet soup acronym; it doesn't make conversation any easier. I don't speak for everyone though, some people clearly like the name including everyone. Personally I tend to even omit the + or Q after the first time I say it, because otherwise it's still a mouthful.
As a software engineer/data scientist who has spent quite a while finding some good AI work, this sounds like absolute bullshit. Most companies don't need AI. Prompt engineer seems like a niche thing; I can't imagine that most companies really need someone who does that. It really frustrates me that these bullshit articles keep coming out without any sense or reason. AI is cool technology (imo), but currently it's just the latest bait for CEOs, managers, etc. Somehow these kinds of people are just so vulnerable to hype words without ever thinking for more than a second about how to use the technology or whether it's even useful.
Honestly I don't feel like it's right to post this with their face and name on the internet. I know that they put it on Tinder themselves, but the reach of Tinder is very different from that of Lemmy (and Reddit, judging by the watermark).
If you give up you'll never achieve anything. This guy is a hero. He puts himself in danger just to show that there are still people out there willing to stand against Putin. It gives the Kremlin a headache because they have to come up with some bullshit reason again to ban him from participating. It reminds all the Russians that their system is not a real democracy. He doesn't stand a chance of actually winning, but it still communicates to everyone that there are plenty of people in Russia who support change.
Damn. I cannot play Rocket League without my DS3. Somehow any other controller feels horrible in that game. Luckily I'm used to wired and don't play RL too often anyway, but it's still sobering to see something like that reach EOL (sort of).
Personally I still use Windows for gaming and some other programs that work better under Windows. I've tried to switch, but it was just a bit too unstable to depend on for me. For me none of this shit has happened tho. No forced Cortana, no sudden Candy Crush install, no Edge fucking with my browsing. I'd rather switch to Linux full time instead of dual booting, because M$ is still pulling all these moves on others, but sometimes convenience does win.
Water. And I like water, so no issue there. I don't regularly drink alcohol on a weekday, and any soda or other garbage is banned entirely from my house
I dislike it because it is usually used by the kind of people or media that live from buzzword to buzzword. IoT, Cloud, Big Data, Crypto, Web 3.0, AI, etc. I'm quite interested in deep learning and have done some research in the field as well. Personally, I don't think AI is necessarily a misnomer; the term has been used forever, even for simple stuff like a naive Bayes classifier, A*, or decision trees. It's just so unfortunate to see this insanely impressive technology being used as the newest marketing gimmick. Or used in unethical and irresponsible ways because of greed (looking at you, "Open"AI). A car doesn't need AI, a fridge doesn't need AI, most things don't need AI. And AI is certainly not at the level where it makes sense to yeet 30% of your employees either.
I don't hate AI or the awesome technology behind it; I hate that it has become a buzzword and a tool for lawless billionaires to do whatever they please.
I think these models struggle with this because they don't process text as individual characters, but as tokens that often span parts of a word. The model never sees the actual characters within a token; it can only infer a token's contents if the training data happens to contain information about them. It can get it right, but that depends on how much it can infer from training data and context. It's probably a bit like trying to infer what an English word sounds like when you've only ever heard 10% of the dictionary spoken aloud, and knowing the pronunciation isn't actually that important to you.
More info can be found here: https://platform.openai.com/tokenizer
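As a toy sketch of the idea (the vocabulary and token ids below are completely made up, not from any real tokenizer): the model only ever receives opaque token ids, so nothing in its input reveals the characters inside a token.

```python
# Hypothetical mini-vocabulary; real tokenizers have ~100k entries.
VOCAB = {"straw": 101, "berry": 102, "s": 1, "t": 2}

def tokenize(text, vocab):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for end in range(len(text), i, -1):  # try the longest piece first
            piece = text[i:end]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = end
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return tokens

ids = tokenize("strawberry", VOCAB)
print(ids)  # -> [101, 102]
# The model sees [101, 102]: nothing in those ids says how many r's
# "strawberry" contains. It can only know that if the training data
# happened to spell the word out character by character somewhere.
```

Real tokenizers (like OpenAI's BPE encodings at the link above) are more involved, but the consequence is the same: character-level questions have to be inferred indirectly rather than read off the input.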