Generative AI is not replacing jobs or hurting wages at all
hendrik @palaver.p3x.de · Posts 8 · Comments 1,831 · Joined 4 yr. ago
That's what I wrote. Lemmy is software that can be run on servers. You're currently on somebody else's server, in a group that is moderated by yet different people... They've given you some rules, and you now have to choose whether you're willing to play by them.
Obviously, they haven't banned you yet, despite the string of unproductive one-liners you've posted. I'm not sure your original question got answered here. If you're more interested in the details of how Lemmy works, read (for example) the documentation and the Wikipedia article.
I meant "island" in the sense of a different Lemmy instance, separated from the tone and atmosphere of the rest of the network. But yes, we have a few people from other parts of the world as well.
Yeah, thanks, but I've already tried that. It writes a short amount of text but very quickly falls back to refusing, both when I do it within the thinking step and when I do it in the output. This time the alignment doesn't seem to be slapped on half-heartedly. It'll probably take some more effort. But I'm sure people will come up with some "uncensored" versions.
We get that question every few weeks. No, Lemmy isn't a "free speech" place. We tolerate a lot here, but the rules are made by the individual communities and instance admins. Generally, they remove misinformation, unhealthy things, hate, and such. We don't know of any "islands" which treat things differently. Lemmy is Free Software, though; you're welcome to launch such an instance, and you'll be free to set whatever rules you like in your own niche.
Uh, wow. That 30B A3B runs very fast on CPU alone.
Sadly, it seems to be censored. To test this, I always try to make them write fictional stories exploring morally reprehensible acts, or just lewd short stories. And it straight-up refuses immediately... Since it's a "thinking" model, I went ahead and messed with its thoughts, but that won't do it either: "I'm sorry, but I can't comply with that request. I have to follow my guidelines and maintain ethical standards. Let's talk about something else."
Edit: There is a base model available for that one, and it seems okay. It will autocomplete my stories and write a Wikipedia-style article about things the government doesn't like. I wonder if this is going to help, though, since all the magic is in the steps after the base model, and I don't know whether there are any datasets available for the community to instruction-tune a thinking model...
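For anyone curious, here's roughly how I poke at a base model locally. A minimal sketch assuming llama-cpp-python and a made-up GGUF filename; the point is that a base model has no chat template and no refusal training, it just continues whatever text you give it:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical path; any base-model GGUF works the same way.
base = Llama(model_path="./Qwen3-30B-A3B-Base-Q4_K_M.gguf", n_ctx=4096)

# No chat template, no instruct tuning: the model simply autocompletes
# the prompt, which is why it's useful for testing what got trained out.
out = base("The detective knew the confession was fake, because", max_tokens=200)
print(out["choices"][0]["text"])
```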
Sure, that's the basic idea of targeted advertising. And it works well; Google, Meta, etc. are making billions that way. I believe this can be translated to AI. And as a bonus, they can exploit a few more psychological effects, like making it sound like a recommendation from a friend (your AI companion), or having it nudge you so you'll think buying it was your own idea...
By the way, you can still run the Yunohost installer on top of your Debian install, if you want to... It's Debian-based anyway, so it doesn't really matter whether you use its own install media or run the script on an existing Debian install. Though I feel like adding: if you're looking for Docker, Yunohost might not be your best choice. It's made to take control itself, and it doesn't use containers. Of course you can circumvent that and add Docker containers nonetheless... But that isn't really the point, and you'd end up dealing with the underlying Debian and just making everything more complicated.
It is a very good solution if you don't want to deal with the CLI. But it stops being useful once you want too much customization or unpackaged apps. At least that's my experience. And that's kind of always the case: simpler, with more things automated and pre-configured, means less customizability (or more effort to actually customize it).
Thanks for your perspective. Sure, AI is here to stay and will flood the internet with slop and arbitrary (mis)information phrased like a factual Wikipedia article, journalism, a genuine user review, or whatever its master chose. And the negative sides of the internet were there long before we had AI at the current scale. I think it is extremely unlikely that the internet will move away from being powered by advertisements, though. That's the main business model today, and I think it will continue that way, maybe dressed in some new clothes; social media platforms, Google, etc. still need their income. I wonder how it'll turn out for the AI companies, though. To my knowledge, they're currently all powered by hype and investor money, and they're going to have to find some way to make a profit at some point. Whether that's going to be ads, or having their users pay properly, unlike today, where the majority of people I know use the free tier.
Oh, wow. What's your estimate on how it's going to turn out? Is it a vastly different thing? I mean, SEO also requires quite an amount of technical knowledge about how proprietary algorithms work, plus experience... You always need to be super up to date with everything. And we have a lot of snake-oil salesmen. I believe "AIO" would be a shift, with some new things to learn, but not too different or an entirely new thing?
Yeah, you're right. I didn't want to write a long essay, but I thought about recommending Grok. In my experience, it tries to bullshit people a bit more than other services do, but the tone is different. Deep down, though, I found it has the same bias towards positivity; in my opinion it's just hidden behind a slapped-on facade. Ultimately similar to slapping a prompt onto ChatGPT, except that Musk may have also added that to the fine-tuning step.
I think there are two sides to the coin. The AI is the same either way: it'll give you somewhere between 50% and 99% correct answers and make things up the rest of the time, since it's only an AI. If you make it more appealing to you, you're more likely to believe both the correct things it generates and the lies. Whether that's a good or a bad thing really depends on what you're doing. It's arguably bad if it phrases misinformation to sound like a Wikipedia article. It might be better to make it sound personal, so that once people anthropomorphize it, they won't switch off their brains. But this is a fundamental limitation of today's AI: it can do both fact and fiction, and it'll blur the lines. Yet in order to use it, you can't simultaneously hate reading its output.

I also like that we can change the character. I'm just a bit wary of the whole concept, so I try to use it more to spark my creativity and less to answer questions about facts. I also have some custom prompts in place so it does things the way I like. Most of the time I'll tell it something like: it's a professional author and it wants to help me (an amateur) with my texts and ideas. That way it'll give more opinions rather than trying to be factual. And when I use it for coding some tech demos, I'll use it as is.
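To illustrate that "professional author" setup, here's a minimal sketch assuming an OpenAI-compatible endpoint (the URL, API key, and model name are placeholders; a local llama.cpp or Ollama server exposes the same API):

```python
from openai import OpenAI  # pip install openai

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

messages = [
    # Persona prompt: steer the model toward opinions and critique
    # instead of factual-sounding lecturing.
    {"role": "system", "content": (
        "You are a professional author helping an amateur writer. "
        "Give opinions and concrete critique on their texts and ideas; "
        "don't lecture, and don't dress your opinions up as facts."
    )},
    {"role": "user", "content": "Here's my draft opening: ..."},
]

reply = client.chat.completions.create(model="local-model", messages=messages)
print(reply.choices[0].message.content)
```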
I'd have to agree: don't ask ChatGPT why it has changed its tone. It's almost certain that answer is made up, and you (and everyone who reads this) will end up stupider than before.
But ChatGPT has always had a tone of speaking. Before this, it sounded very patronizing to me. And it'd always counterbalance everything; since the early days it has told me: you have to look at this side, but also at that side. And it'd be critical of my emails and say I can't be blunt but have to phrase them in a nicer way...
So yeah, the answer is likely known to the scientists/engineers who do the fine-tuning or preference optimization. Companies like OpenAI tune and improve their products all the time. Maybe they found out people don't like the sometimes patronizing tone, and now they're going for something like "Her". Idk.
Ultimately, I don't think this change accomplishes anything. Now it'll sound more factual, yet the answers have about the same degree of factuality; they're just phrased differently. So if you like that better, good. But either way, you're likely to keep asking it questions, let it do the thinking, and become less of an independent thinker yourself. What it said about critical thinking is correct, but it applies to all AI, regardless of its tone. You'll get those negative effects with your preferred tone of speaking, too.
That sounds fun: SEO, but so it's also ingested by AI... Maybe I should check my spam folder to see whether all the people who spam me with offers to optimize my homepage have already picked up on that.
Well, if you use a Linux distribution, you generally get your software from a central package repository. That's driven by maintainers who look at the software and its updates... They patch it, make sure it runs smoothly on your system, and tie it into everything else. They'll also keep an eye on security vulnerabilities and security in general.
Other than that, there isn't much really "stopping" people from writing malware. We have tons of it: fake VLC versions, copycats on the iPhone App Store... MS Windows is full of advertisements and features that send data "home", and they introduce features which border on being malware all the time. We have trojans, viruses, etc. It's all out there.
Generally, it's a good idea to think before executing random code from the internet. Is it from a trustworthy source? Are other people using the software, and would they have noticed if it deleted all their files?
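As a small, concrete example of that mindset, a sketch for checking a download against the project's published checksum before executing it (the filename and the hash are placeholders):

```python
import hashlib

def sha256sum(path: str) -> str:
    """Hash the file in chunks so big downloads don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "..."  # the checksum the project publishes alongside the download
if sha256sum("installer.run") != expected:
    raise SystemExit("Checksum mismatch, refusing to run the file.")
```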
Usually, we have more good people than bad. And people need some motivation. It's unlikely someone invests 10 years of their life to develop a shiny and polished office suite, just so they can run some malware somewhere. There are easier ways to accomplish that. So it generally doesn't happen that way. It's theoretically possible, though.
And then there's the old argument: Windows, Android, etc. are way more popular. If someone wants to do something malicious, they likely won't target the 1-2% using a different operating system; they'll write malware for a more popular one. And on the server, where Linux dominates the market, admins execute less random code. They know they want MariaDB and where to get it. So it's harder to attack that way.
And if I imagine being the attacker... What would be a reason to include malware in a FOSS project? Just to wreak havoc and mess with people? That sounds like a 16-year-old with too much time on their hands, and we have very few of those in the free software community, so that's a bit unlikely... If someone wants a botnet, there might be easier ways to get one. And for a targeted attack, you wouldn't hide your malware in a random project... So I generally don't see many reasons for someone to bundle malware with useful FOSS software.
:(){ :|:& };:
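(In case anyone is tempted to paste that into a terminal: it's the classic bash fork bomb. It defines a function named `:` that calls itself twice, piping one copy into a backgrounded copy, until the process table is full. Don't run it.)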
Sure. But we need to see pics, or it didn't happen.
The abstract doesn't mention the batteries regaining their old capacity. It only says they shrink, and something about voltage. So I have my doubts. I mean, it's nice if my spicy pillow shrinks a bit, but what does that help if it stays nearly dead? And applying this in products would be hard to accomplish; at that temperature, all the plastic is going to melt, and maybe the solder as well.
I'm always a bit unsure about that. Sure, AI has a unique perspective on the world, since it has only "seen" it through words. But at the same time, these words conceptualize things; there is information and there are models stored in them and in the way they are arranged. I believe I've seen some evidence that AI has access to the information behind language when it applies knowledge or transfers concepts... though that's kind of hard to judge. An obvious example is translation: it knows what a cat or a banana is, and it picks the correct French word. At the same time it also maintains tone and deals with proverbs and figures of speech... That was next to impossible with the old machine translation services, which only looked at the words. And my impression from computer coding and creative writing is that it seems to have some understanding of what it's doing: why we do things one way and sometimes another, and what I want it to do.
I'm not sure whether I'm being too philosophical for the current state of technology. AI surely isn't very intelligent, and it certainly struggles with the harder concepts. Sometimes it feels like its ability to tell apart fact and fiction is on the level of a 5-year-old who has just started practicing lying. With stories, it can't really hint at things without giving them away openly, and the pacing is off all the time. But I think it has conceptualized a lot of things as well. It'll apply all the common story tropes. It loves sudden plot twists. And next to tying things up, it'll also introduce random side stories, new characters, and dynamics; sometimes for a reason, sometimes it just gets off track. And I've definitely seen it do tension and release... not successfully, but I'd say it "knows" more than the words. That makes me think the concepts behind storytelling might actually be in there somewhere; it might just lack the intelligence needed to apply them properly, and to maintain the bigger picture of a story: background story, subplots, pacing... I'd say it "knows" (to a certain degree), it's just utterly unable to juggle the complexity of it all. And it hasn't been trained on what makes a story a good one. I'd guess that might not be a fundamental limitation of AI, though, but more due to how we feed it award-winning novels next to lame Reddit stories without a clear distinction(?) or preference. And I wouldn't be surprised if that's one of the reasons why it doesn't really have a "feeling" for how to do a good job.
Concerning OP's original question... I don't think that's part of it. The people doing the training have put in deliberate effort to make AI nice and helpful. As far as I know, there are always at least two main steps in creating large language models. The first one is feeding in large quantities of text. The result is called a "base model", which will be biased in all the ways the training datasets are: it'll do all the positivity, negativity, and stereotypes, and be roughly as helpful or unhelpful as the people on the internet, the books, and the Wikipedia articles that went in. (And that's already skewed towards positive.) The second step is tuning it for some application, like answering questions. That makes it usable, and makes it abide by whatever the creators chose, which likely includes not being rude or negative to customers. That behaviour gets suppressed. If OP wants it a different way, they probably want a different model, or maybe a base model, or a community-made fine-tune with a third step on top that re-aligns the model with different goals.
That's a very common issue with a lot of large language models. You can either pick one with a different personality (I liked Mistral-Nemo-Instruct for that, since it's pretty open to just picking up on my tone and going with it), or you give clear instructions about what you expect from it. What really helps is including example text or dialogue; every model will pick up on that to some degree.
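A minimal sketch of what I mean by example dialogue, using an OpenAI-style chat format (all the turns are invented; the model tends to continue whatever pattern it's shown):

```python
# Few-shot steering: demonstrate the tone instead of describing it.
messages = [
    {"role": "system", "content": "Match the user's blunt, dry tone. No hedging, no lecturing."},
    {"role": "user", "content": "Summarize why the meeting was pointless."},
    {"role": "assistant", "content": "Nobody prepared, nothing got decided. An email would have done it."},
    {"role": "user", "content": "Now summarize today's standup the same way."},
]
```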
But I feel you. I've always disliked ChatGPT for its know-it-all, patronizing tone. Most other models are also deliberately biased. I've tried creative writing, and most refuse to be negative, or they'll push towards a happy ending. They won't write you a murder mystery novel without constantly lecturing about how murder is wrong, and they can't stand the tension and want to resolve the murder right away. I believe that's how they've been trained, especially if some preference optimization has been done for chatbot applications.
Ultimately, it's hard to overcome. People want chatbots to be both nice and helpful; that's why they get deliberately biased toward that. Stories often include common tropes, like resolving the drama and a happy ending. And AI learns a bit from argumentative people on the internet, drama on Reddit, etc., but that "negativity" generally gets suppressed so the AI doesn't turn on somebody's customers or spew Nazi stuff like the early attempts did. And Gemma3 is probably aimed at such commercial applications; it's instruct-tuned and has "built-in" safety. So I think all of that is opposed to what you want it to do.
I think it needs to work across instances, since we're concerned with the Fediverse, and federation is one of its defining mechanics. Also, when I look at my subscriptions, they come from a variety of instances. So I don't think a single-instance feature would be of any use to me.
Sure. And with the cosine similarity, you'd obviously need to filter out already-watched videos. The algorithm knows I watched them, but I'd like it to recommend new videos to me.
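Something like this, as a rough sketch; it assumes you already have one embedding vector per video and one per user (the names are made up):

```python
import numpy as np

def recommend(user_vec, video_vecs, watched, k=10):
    """Rank videos by cosine similarity to the user embedding,
    filtering out anything already watched."""
    sims = video_vecs @ user_vec
    sims /= np.linalg.norm(video_vecs, axis=1) * np.linalg.norm(user_vec) + 1e-9
    ranked = np.argsort(-sims)  # highest similarity first
    return [int(i) for i in ranked if int(i) not in watched][:k]
```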
What do you mean? The Chinese are known for government-coordinated megaprojects... They regularly build entire city districts practically overnight. And it's not really that quick, either: they've been at it for almost 20 years. And I believe around 2016 they released their national 15-year plan to ramp up power plants and AI in order to become the global AI leader by 2030.
We saw how, in the times before AI, they were able to build large Bitcoin farms quickly (before they banned mining), and today their AI industry publishes quite a few models and papers... So I don't really see a reason to question this. Or is there anything I've missed?
- https://www.techradar.com/pro/china-has-spent-billions-of-dollars-building-far-too-many-data-centers-for-ai-and-compute-could-it-lead-to-a-huge-market-crash
- https://www.tomshardware.com/tech-industry/artificial-intelligence/chinas-ai-data-center-boom-goes-bust-rush-leaves-billions-of-dollars-in-idle-infrastructure
- https://en.wikipedia.org/wiki/Artificial_intelligence_industry_in_China
I think the title might be clickbait... because that claim isn't in the text that follows. The article says they want to ban Nvidia from selling more hardware to them; it doesn't say anything about limiting the availability of the service.
If they do, my best guess is they'll handle it like TikTok: change their stance several times and then not really enforce anything.
If the author had looked at the quality of generative AI chatbots in 2023, it wouldn't have come as a surprise that they didn't really replace a lot of humans. The big question is: what's going to happen today and in the near future?