My instance has "Rule 3: No AI Slop. This is a platform for humans to interact" and it's enforced pretty vigorously.
As far as "how":
Sometimes it's obvious. In those cases, the posts are removed and the account behind them investigated. If the account has a pattern of it, they get a one-way ticket to Ban City.
Sometimes they're not obvious, but the account owner will slip up and admit to it in another post. Found a handful that way, and you guessed it, straight to Ban City.
Sometimes it's difficult at the individual-post level unless there are telltale signs. Typically I have to look for patterns across different posts by the same account and account for writing styles. This is more difficult and time-consuming, but I've caught a few this way (and let some slide that were likely AI-generated but not close enough to the threshold to ban).
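Just to give the flavor of that pattern-matching (this is not my actual process; the features and thresholds here are made up for the example): humans vary a lot from post to post, so an account whose writing style is eerily uniform across many posts stands out.

```python
from statistics import mean, pstdev

def style_features(post: str) -> tuple[float, float]:
    """Two crude stylometric features: average word length and
    average sentence length (in words)."""
    words = post.split()
    sentences = [s for s in post.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return mean(len(w) for w in words), len(words) / max(len(sentences), 1)

def suspiciously_uniform(posts: list[str], min_posts: int = 5) -> bool:
    # Humans are sloppy; near-zero spread in sentence length across
    # many posts is one (weak) flag. Thresholds are arbitrary.
    _, sent_lens = zip(*(style_features(p) for p in posts))
    return len(posts) >= min_posts and pstdev(sent_lens) < 1.0
```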
I hate the consumer AI crap (it has its place, but in every consumer product is not one of them), but sometimes, if I'm desperate, I'll try to get one of them to generate a post similar to the one I'm evaluating. If it comes back very close, I'll assume the post I'm evaluating was AI-generated and remove it while looking at the user's other content, changing their account status to Nina Ban Horn if appropriate.
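For the curious, the comparison step boils down to something like this sketch. The `generate_similar()` call is a placeholder for whatever chatbot you poke at (not a real API), and the 0.6 cutoff is arbitrary:

```python
import difflib

def generate_similar(topic: str) -> str:
    """Placeholder: ask whatever consumer chatbot is handy to write
    a post on the same topic. Not a real API call."""
    raise NotImplementedError

def looks_regenerated(suspect_post: str, topic: str, threshold: float = 0.6) -> bool:
    # SequenceMatcher ratio is a crude similarity proxy (0.0-1.0).
    generated = generate_similar(topic)
    ratio = difflib.SequenceMatcher(None, suspect_post, generated).ratio()
    return ratio >= threshold
```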
If an account has a high frequency of posts that seem inorganic, the Eye of Sauron will be upon them.
User reports are extremely helpful as well.
I've even banned accounts that post legit news articles but use AI to summarize the article in the post body; that violates Rule 3 (no AI slop) and Rule 6 (misinformation), since AI has no place near the news.
If you haven't noticed, this process is quite tedious and absolutely does not scale with a small team. My suggestion: if something seems AI-generated, do the legwork yourself (as described above) and report them; be as descriptive in the report as possible to save the mod/admin quite a bit of work.
I was getting "December 2019" vibes back in November, so I started stocking up on N95s and toilet paper. This isn't exactly what I was preparing for, but I'm prepared nonetheless.
For a website, forum, blog, etc., at least the damage caused by poor security would be limited to just that platform. Unfortunate, but contained. With federation, that poor security becomes everyone else's problem as well. Hence my gripe lol.
It's been so long since I set up my instance that I honestly don't recall what the default "Registration mode" is.
I'm but a small drop in the larger fediverse, but I do develop a frontend for Lemmy. I actually coded the "Registration" section in the admin panel to nag you if the config is insecure. lol
It will still let you do it, just with a persistent nag message on that page.
Basically, yeah. Not all admins would defederate, so they probably wouldn't be completely cut off, but they would definitely have a very reduced audience for their, uh, antics.
Yup, and I've probably still got a lot of those instances on my federation blocklist.
One of my ongoing gripes with the fediverse is that people run instances with little/no oversight and leave registrations wide open. It's just irresponsible to have open registrations when you don't have an admin available 24/7.
Not sure if you're being sarcastic (the internet has ruined my sarcasm detector), but the "good guy with a gun" is just a mythological figure invented by the gun lobby.
I'm not saying good people don't pack heat, just that the "good guy with a gun" is about as real as the Easter Bunny.
So let’s say instance A and B are defederated from each other, but both are federated with instance C. After a user from A posts something on C does every user from B get to downvote everything?
Yes. Instance A will not see the downvotes from instance B, but instance C would. Also, anyone federated with all 3 would see the downvotes from B for content posted by someone on A.
The only defense is that mods and admins can see the votes, and if something like that is suspected, they can take action (ban the accounts, report the behavior to admins, consider defederating from instance B, etc.). Seeing a pattern of mass downvotes coming only from a particular instance would be a red flag for most admins.
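As a rough sketch of what that red flag looks like in practice (the vote-record shape here is made up for the example, not Lemmy's actual schema, and the thresholds are arbitrary):

```python
from collections import defaultdict

def flag_brigades(votes, min_votes=50, down_ratio=0.95):
    """votes: iterable of (voter_instance, target_instance, is_downvote).
    Hypothetical record shape; returns (voter, target) pairs where
    nearly all activity is downvotes aimed at one instance."""
    stats = defaultdict(lambda: [0, 0])  # (voter, target) -> [downvotes, total]
    for voter, target, is_down in votes:
        counts = stats[(voter, target)]
        counts[1] += 1
        if is_down:
            counts[0] += 1
    return [pair for pair, (down, total) in stats.items()
            if total >= min_votes and down / total >= down_ratio]
```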
This scenario is less likely than what we see in practice, though, since the overhead of creating an instance and the "all eggs in one basket" problem make it easy to take action against: admins would quickly coordinate to block that instance. Tools like Fediseer would also be used to censure it and bring its behavior to light.
In the wild, it's far more common for them to just spin up a bunch of accounts across "good" instances (particularly those without registration applications) and coordinate.
For AD, there's Samba and SSSD. If you want something way more granular, you can do LDAP + Kerberos. I've had the latter running my stack since 2015. I've even got all my DHCP, DNS, Asterisk, XMPP, Matrix, and Postfix/Dovecot config/users backed by LDAP, so I've basically got the equivalent of an AD + Exchange + Cisco Unified Communications server going.
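If anyone wants a starting point, here's a minimal sssd.conf sketch for the LDAP + Kerberos combo. The domain name, URIs, and realm are placeholders, and a real deployment needs TLS and access-control settings on top of this:

```ini
# /etc/sssd/sssd.conf (must be chmod 600, owned by root)
[sssd]
services = nss, pam
domains = example.lan

[domain/example.lan]
id_provider = ldap            ; identities (users/groups) from LDAP
auth_provider = krb5          ; passwords verified against the KDC
ldap_uri = ldaps://ldap.example.lan
ldap_search_base = dc=example,dc=lan
krb5_server = kdc.example.lan
krb5_realm = EXAMPLE.LAN
cache_credentials = true      ; allow offline logins
```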
Fair point on GPO, though. With SELinux/AppArmor, proper group setups, and a good base configuration, is GPO really needed? It's also way easier in Linux to just make a secured base image and deploy it to a fleet of PCs. Tools like Ansible can be (and are) used for config and state management in mass deployments, mostly filling the same role as GPO; see the sketch below.
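To make the Ansible-as-GPO point concrete, a playbook along these lines (host group and the specific policy are made up for the example) enforces a setting across a fleet much like a GPO would:

```yaml
# Sketch: enforce a baseline setting fleet-wide, GPO-style.
- hosts: workstations
  become: true
  tasks:
    - name: Disable SSH password authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: Restart sshd

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```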
Been a while since I looked into the GPO equivalent, but in general, Linux doesn't try to micromanage endpoints to quite that degree (e.g. THOU SHALT NOT CHANGE THE DESKTOP WALLPAPER).
I just put them in, set it to 150 (lowest it will go), and set the timer for like 3 minutes. At that low of a temp, it really doesn't take long to warm the plates. If you're really in a hurry, you could move the top rack as high as it will go, put the plate there, and set it to broil for 1-2 minutes.
My instance has "Rule 3: No AI Slop. This is a platform for humans to interact" and it's enforced pretty vigorously.
As far as "how":
If you haven't noticed, this process is quite tedious and absolutely cannot scale under a small team. My suggestion: if something seems AI generated, do the legwork yourself (as described above) and report them; be as descriptive in the report as possible to save the mod/admin quite a bit of work.