Posts: 9 · Comments: 1,289 · Joined: 2 yr. ago

  • Damn, wasn't expecting this one. RIP.

  • https://www.politico.eu/article/elon-musk-ukraine-starlink-russia-crimea-war-drone-submarine-attack-sabotage/

    Isaacson writes that Musk reportedly panicked when he heard about the planned Ukrainian attack, which was using Starlink satellites to guide six drones packed with explosives towards the Crimea coast.

    After speaking to the Russian ambassador to the United States — who reportedly told him an attack on Crimea would trigger a nuclear response — Musk took matters into his own hands and ordered his engineers to turn off Starlink coverage “within 100 kilometers of the Crimean coast.”

    This caused the drones to lose connectivity and wash “ashore harmlessly,” effectively sabotaging the offensive mission.

    Ukraine’s reaction was immediate: Officials frantically called Musk and asked him to turn the service back on, telling him that the “drone subs were crucial to their fight for freedom.”

    They had access. Musk revoked it from the Crimea region after the fact.

  • They previously had access, and were then denied further access when critical regions were geofenced. That's effectively the same as having the service cut off.

  • And yet, warning signs are still ignored by most people, and travesties like this continue to occur almost daily in this country.

    I'd argue that we don't know as much as we'd like to think we do.

  • Tell that to the Ukrainian soldiers who had their Starlink access cut off during a critical moment in the war with Russia, or to the people injured or killed by Tesla's half-baked Autopilot, which Elon refuses to admit is not safe for public use. Both of those decisions were spearheaded directly by Elon.

  • It’s not my belief – just look at the raw data and what psychologists say. We don’t know everything, obviously, but we know quite enough.

    Which psychologists say that we know enough? I've never heard that claimed by a professional in the field before.

    e: I’m not talking about what psychologists can learn, I’m talking about us, the public.

    It's not up to the public to learn the intricacies of psychopathy. You and I aren't the ones who need or can make use of that data.

  • Elon is like Trump.

    In that they're both assholes who have influence over people's lives? Yeah, exactly. Literally my point.

    We shouldn't let them operate in the shadows. They need to be exposed. Every single time.

  • We’ve learnt all we can from these psychopaths, to be honest.

    What makes you believe this?

  • I feel like this was an exception to the rule, since he was still at large for a while and it was imperative for public safety that people knew who he was and what he looked like.

    That said, yes, the media does need to pull back on publishing details about the killers in these situations.

  • Because the things he says and does affect real people, and it's important that this behavior is known so that he doesn't get away with his shitty misdeeds in secrecy.

  • Which one: the Flipper Zero, or the Bluetooth spamming function?

    Flipper Zero is a thing because it's a very capable device for hackers and tinkerers. It can be used as an intro to coding and pen-testing.

    The Bluetooth spam is a thing because some dev is an asshole.

  • It's not open source. At least, not in the conventional sense. The source is available, but it's not under a traditional OSS license.

  • They've been trying, but the existing ISPs have ironclad contracts with most cities they operate in, making it very hard for anybody else to bring competition to those markets.

  • planned manslaughter

    Uhh, I think you need to look up the definition of "manslaughter".

    If it's premeditated, it's not manslaughter.

  • I'm not defending it, dipshit. I'm explaining how generative AI training works.

    The fact that you can't see that is what's really concerning.

  • It knows what naked people look like, and it knows what children look like. It doesn't need naked children to fill in those gaps.

    Also, these models are trained on images scraped from the clear net. Somebody would have had to manually add CSAM to the training data, and it would be easily traced back to them if they did. The likelihood of actual CSAM being included in any mainstream AI's training material is slim to none.

  • Some things should be censored, and I don't think that's too hot of a take, either. Any material that encourages intolerance of others should not be accepted in any civil culture.

  • Working from home, I'd imagine. Since covid, I think I've only put on real pants maybe 8 times.

  • With growth comes quality, though. Right now, almost every community/instance is supplied with content by only a small handful of users. That means fewer things to engage with on the platform, and more opportunity for people to spin a narrative with their content.