I'm a software developer used to working with WSL, Debian servers, etc. I self-host a bunch of things, know my way around the Linux command line, and would call myself a privacy enthusiast who uses a lot of FLOSS software. I also game occasionally, but I guess that should work on any distro with enough effort.
You're a power user with enough technical knowledge to deal with the issues that come with running bleeding edge.
I'd say Arch. Even the manual install isn't too complicated once you've done it a few times, and then you'll have access to the latest and greatest packages.
Occasionally this results in some weird bugs. For example, right now, when waking from suspend, my HDMI outputs fail to connect until I change the display properties, so I wrote a bash script to toggle the refresh rate and bound it to a hotkey so I can recover without a display. I'm sure a system update will fix it in a day or two and, if not, I know how to locate the problem in the system log (kernel: nvidia-modeset: WARNING: GPU:0: HDMI FRL link training failed.) and report it on the appropriate bug tracker.
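The recovery script is conceptually trivial. Here's a minimal Python sketch of the same idea (mine is bash; it assumes an X11 session where xrandr works, and the output name, mode, and rates are placeholders for whatever xrandr --query reports on your machine):

```python
#!/usr/bin/env python3
# Kick a stuck HDMI output by toggling its refresh rate.
# Assumes an X11 session with xrandr; OUTPUT, MODE, and RATES are
# placeholders -- check `xrandr --query` for your actual values.
import subprocess
import time

OUTPUT = "HDMI-0"           # placeholder output name
MODE = "1920x1080"          # placeholder resolution
RATES = ("59.94", "60.00")  # two rates the display supports

def set_rate(rate: str) -> None:
    subprocess.run(
        ["xrandr", "--output", OUTPUT, "--mode", MODE, "--rate", rate],
        check=True,
    )

# Switching to a different rate and back forces the driver to
# retrain the HDMI link.
set_rate(RATES[0])
time.sleep(1)
set_rate(RATES[1])
```

Bind that to a hotkey in your window manager and you can run it blind.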
If this doesn't sound intimidating then you'll be fine as an Arch user.
Oh yeah, you gotta get rid of S mode before you can do essentially anything.
I've only dealt with one laptop that came with that 'feature', so I just ignored all of the warnings they post around the official way of disabling it (by which I mean 'Enabling Developer Mode', i.e. regular Windows).
I think the simplest way to explain it is that the average person isn't very skilled at rhetoric; they argue inelegantly. After years of talking online, you get a feel for how people argue and how they respond to different rhetorical strategies.
In these bot-infested social spaces, there seem to be a large number of commenters who argue far too well while also deploying a huge number of fallacies. Any one of them, individually, could just be a person choosing to argue in bad faith; but there are too many commenters using these tactics compared to the baseline I've established over decades of talking to people online.
In addition, some of these spaces are full of commenters with a very structured way of arguing, as if they've picked your comment apart into bullet points and then selected a counterargument for each one that is technically on topic but subtly misleading.
I'll admit that this is all very subjective. It's entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.
For example, if you could somehow measure the number of good-faith comments versus fallacy-laden comments in a given community, there would likely be a normal baseline ratio (say, 10 people who are bad at arguing for every 1 person who is good at it, with 10% of those skilled arguers commenting in bad faith and using fallacies), and you could compare that ratio across online topics to discover the ones that appear to be botted.
That way you could objectively say that, on the topic of gun control in one specific subreddit, we're seeing an elevated ratio of bad-faith to good-faith commenters, and therefore that this topic/subreddit is being actively botted with LLMs. That information could be used to deploy anti-bot countermeasures (captchas, for example).
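To make that concrete, here's a toy sketch of the kind of metric I mean. Everything in it is an assumption: classify_comment() stands in for whatever good-faith/fallacy classifier the research eventually produces, and the baseline and threshold are made up.

```python
# Toy sketch of the ratio metric. classify_comment(text) is a
# hypothetical classifier returning "good_faith" or "bad_faith";
# the baseline and threshold are invented for illustration.
BASELINE_BAD_FAITH_RATIO = 0.10  # assumed: ~1 in 10 skilled arguers is bad faith

def bad_faith_ratio(comments, classify_comment):
    labels = [classify_comment(c) for c in comments]
    bad = labels.count("bad_faith")
    good = labels.count("good_faith")
    return bad / max(good, 1)  # avoid division by zero

def looks_botted(comments, classify_comment, multiple=3.0):
    # Flag a topic/community whose ratio is several times the baseline.
    return bad_faith_ratio(comments, classify_comment) > multiple * BASELINE_BAD_FAITH_RATIO
```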
The research in the OP is a good first step in figuring out how to solve the problem.
That's in addition to anti-bot measures. I've seen some sites that require you to solve a cryptographic hashing problem before you can access them. It doesn't noticeably slow a regular person down, but it forces anyone running bots to dedicate far more compute to each one, which raises the operator's costs.
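That scheme is essentially hashcash-style proof of work. A minimal sketch of the idea (the difficulty is arbitrary; a real site would tune it so a browser takes around a second):

```python
# Hashcash-style proof of work: the server issues a random challenge,
# and the client must find a nonce whose SHA-256 hash has a given
# number of leading zero bits. Cheap to verify, costly to produce.
import hashlib
import os

DIFFICULTY_BITS = 20  # arbitrary; each extra bit doubles the average work

def solve(challenge: bytes, bits: int = DIFFICULTY_BITS) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce  # found a hash with `bits` leading zero bits
        nonce += 1

def verify(challenge: bytes, nonce: int, bits: int = DIFFICULTY_BITS) -> bool:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

challenge = os.urandom(16)  # server side
nonce = solve(challenge)    # client side: ~2^20 hashes on average
assert verify(challenge, nonce)
```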
Except they use a bunch of dark patterns to discourage users from doing it: calling it 'Developer Mode' and throwing a bunch of scary-sounding warning screens at you, when all you're doing is disabling the forced use of the Microsoft Store.
It's a super scummy move that will be very effective. Many people will just use the Microsoft Store, and Microsoft will once again have used its monopoly to manipulate the market by forcing the use of its own product (the same thing it got in trouble for in the IE vs. Netscape Navigator case).
All of Google's search algorithms are "AI" (i.e. machine learning); it's what made them so effective when they first appeared on the scene. They just use those algorithms and a massive amount of data about you (way more than your comment history) to target you for advertising, including political advertising.
If you don't want AI-generated content then you shouldn't use Google; it is entirely made up of machine learning whose sole goal is to match you with people who want to buy access to your views.
> I think when posting on a forum/message board it’s assumed you’re talking to other people
That would have been a good position to take in the early days of the Internet; it's a very naive assumption to make now. Even in the 2010s, actors with large amounts of resources (state intelligence agencies, advertisers, etc.) could hire people from low-wage English-speaking countries to generate fake content online.
LLMs have only made this cheaper, to the point where I assume most commenters on political topics are bots.
> Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case, it’s not worth engaging with someone more invested than I am myself.
You put it better than I could. I've noticed this too.
I used to just disengage. Now when I find myself talking to someone like this, I use my own local LLM to generate replies just to waste their time, prompting it to take a chastising tone, point out their fallacies, and lecture them on good-faith participation in online conversations.
It is horrifying to see how many bots you catch like this. It's certainly bots; otherwise there are suddenly a lot more people willing to go 10-20 multi-paragraph replies deep into a conversation with something that is obviously (to a trained human) just generating comments.
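For anyone wondering, the setup is roughly the sketch below. It assumes a local OpenAI-compatible chat endpoint like the ones llama.cpp or Ollama expose; the URL, model name, and prompt are placeholders for my own setup, not anything canonical.

```python
# Rough shape of the time-waster bot. Assumes a local OpenAI-compatible
# chat endpoint (llama.cpp server, Ollama, etc.); the URL and model
# name are placeholders for whatever your local setup uses.
import requests

API_URL = "http://localhost:8080/v1/chat/completions"  # placeholder
MODEL = "local-model"                                   # placeholder

SYSTEM_PROMPT = (
    "You are replying to someone arguing in bad faith. Take a chastising "
    "tone, name the logical fallacies in their last message, and lecture "
    "them on good-faith participation in online discussions. Be verbose."
)

def generate_reply(their_comment: str) -> str:
    resp = requests.post(API_URL, json={
        "model": MODEL,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": their_comment},
        ],
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```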
Their success metric was to get the OP to award them a 'Delta', which is to say that the OP admits that the research bot comment changed their view. They were not trying to farm upvotes, just to get the OP to say that the research bot was effective.
One of the Twitter leaks showed a user database that effectively had more users than there were people on earth with access to the Internet.
Before Elon bought the company, he was trashing it on social media for being mostly bots. He's obviously stopped now that he's been forced to buy it, but the fact remains that Twitter (and, by extension, every social space) is mostly bots.
> This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.
And the fact that you can generate hundreds or thousands of them at the drop of a hat to bury any social media topic in highly convincing 'people', so that the average reader is more than likely reading the opinion you're pushing rather than the opinions of actual human beings.
As a late Gen-Xer/early Millennial, I think e-greeting cards are rad.
Kids these days don't know how good they have it with their gif memes and emoji-supporting character encodings... get off my lawn you young whippersnappers!
You're right about this study, but this research group isn't the only one using LLMs to generate content on social media.
There are 100% posts that are bot-created. Ever notice how, in places like Am I Overreacting or Am I the Asshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody's life is that cliché, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as the users argue over the various topics.
I use a local LLM that I've fine-tuned to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It's set up to lead the conversation, via red herrings and various other fallacies, toward the topic of good-faith argument and how people should behave in online spaces, all while picking out pieces of the conversation (and of the user's profile) to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than just wasting a person's (or bot's) time.
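The "prompt chains" are less sophisticated than they sound. Paraphrased (not my exact prompts), the shape is roughly this, where generate() stands in for a call to the local model like the client sketch earlier:

```python
# Sketch of the prompt chain: each stage feeds its output forward,
# steering the reply toward the target topic. generate(prompt) is a
# stand-in for a call to the local model; the prompts are paraphrased.
TARGET_TOPIC = "good-faith argumentation and online conduct"

def build_chain(their_comment: str, profile_excerpts: list[str]) -> list[str]:
    return [
        # Stage 1: pick apart their last reply.
        f"List the logical fallacies and weak points in this comment:\n{their_comment}",
        # Stage 2: mine their own posts for material to chastise them with.
        "From these excerpts of the user's posts, pick quotes that "
        f"contradict or undercut their argument:\n{profile_excerpts}",
        # Stage 3: write the reply, drifting toward the target topic.
        "Using the analysis above, write a long chastising reply that "
        f"steers the conversation, via a plausible tangent, toward {TARGET_TOPIC}.",
    ]

def run_chain(generate, their_comment: str, profile_excerpts: list[str]) -> str:
    context = ""
    for prompt in build_chain(their_comment, profile_excerpts):
        context = generate(context + "\n\n" + prompt)  # feed each stage forward
    return context
```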
This is being done as a side project, on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It's terrifying to think of what you could do with a lot of resources and experts working full-time.
It's super useful to make custom 3D prints.
I've been using a script to generate custom nameplates, oriented so the face is parallel to the build plate, which lets me swap filament colors when the print transitions from the plate to the name.
I could do this manually in CAD, but it would take a huge amount of time. Now I just edit a script file, alter a string or two, adjust some spacing values, and get a ready-to-print model.
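The script itself is nothing special. Here's a stripped-down Python sketch of the approach, writing an OpenSCAD file and rendering it from the command line (my actual script differs; the name, font, and dimensions are placeholders you'd tune per print):

```python
# Minimal nameplate generator: Python writes an OpenSCAD file, then
# OpenSCAD renders the STL. The plate lies flat on the bed with the
# name extruded upward, so the filament swap happens at the first
# text layer. All values below are placeholders.
NAME = "Ada"
FONT = "Liberation Sans:style=Bold"  # placeholder font
PLATE_W, PLATE_H = 80, 20            # plate footprint, mm
PLATE_T = 2                          # plate thickness, mm
TEXT_T = 1.2                         # raised text height: swap filament here
TEXT_SIZE = 12                       # adjust along with spacing per name

scad = f"""
cube([{PLATE_W}, {PLATE_H}, {PLATE_T}]);
translate([{PLATE_W} / 2, {PLATE_H} / 2, {PLATE_T}])
    linear_extrude({TEXT_T})
        text("{NAME}", size={TEXT_SIZE}, font="{FONT}",
             halign="center", valign="center");
"""

with open("nameplate.scad", "w") as f:
    f.write(scad)

# Then render it: openscad -o nameplate.stl nameplate.scad
```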
Pretty neat