Isaacson writes that Musk reportedly panicked when he heard about the planned Ukrainian attack, which was using Starlink satellites to guide six drones packed with explosives towards the Crimea coast.
After speaking to the Russian ambassador to the United States — who reportedly told him an attack on Crimea would trigger a nuclear response — Musk took matters into his own hands and ordered his engineers to turn off Starlink coverage “within 100 kilometers of the Crimean coast.”
This caused the drones to lose connectivity and wash “ashore harmlessly,” effectively sabotaging the offensive mission.
Ukraine’s reaction was immediate: Officials frantically called Musk and asked him to turn the service back on, telling him that the “drone subs were crucial to their fight for freedom.”
They had access. Musk revoked it from the Crimea region after the fact.
They previously had access, and then were denied further access by geofencing of critical regions. That's effectively the same as having the service cut off.
Tell that to the Ukrainian soldiers who had their Starlink access cut off during a critical moment in the war with Russia, or to the people injured or killed by Tesla's half-baked Autopilot that Elon refuses to admit is not safe for public use. Both of those decisions were spearheaded by Elon, directly.
I feel like this was an exception to the rule, since he was still at large for a while and it was imperative to the public safety that people knew who he was and what he looked like.
That said, yes, the media does need to pull back on publishing details about the killers in these situations.
Because the things he says and does affect real people, and it's important that this behavior is known so that he doesn't get away with his shitty misdeeds in secrecy.
They've been trying, but the existing ISPs have ironclad contracts with most cities they operate in, making it very hard for anybody else to bring competition to those markets.
It knows what naked people look like, and it knows what children look like. It doesn't need naked children to fill in those gaps.
Also, these models are trained on images scraped from the clear net. Somebody would have had to manually add CSAM to the training data, and it would be easily traced back to them if they did. The likelihood of actual CSAM being included in any mainstream AI's training material is slim to none.
Some things should be censored, and I don't think that's too hot of a take, either. Any material that encourages intolerance of others should not be accepted in any civil culture.
With growth comes quality, though. Right now, almost every community/instance is supplied with content by only a small handful of users. This means fewer things to engage with on the platform, and more opportunity for people to spin a narrative with their content.
Damn, wasn't expecting this one. RIP.