
  • I didn't misinterpret what you were saying; everything I said applies to the specific case you lay out. If illegal networks were somehow entirely destroyed, someone would just make them again. That's my point: there's no way around that, there's just holding people accountable when they do it. IMO that takes the form of restitution to the people, proportional to profits.

  • I understand that you are familiar with the buzzword "LLM", but let me introduce you to a different one: transformers.

    Virtually all modern successful AIs are based on transformers, LLMs included. I agree that LLMs currently amount to a Chinese-room-inspired parlor trick, but the money involved has no doubt advanced all transformer-based AI research, both directly (what works for LLMs may generalize) and indirectly (the market demand for LLMs in consumer products has created a demand for power and compute hardware).

    We have transformer-based AI to thank for our understanding of the COVID-19 spike protein, and for developing a safe and effective vaccine in a timely manner.

    The massive demand for energy has convinced Microsoft, Meta, and others to invest in their own modern nuclear power plants, representing a monumental step forward in sustainable energy generation that we have been trying to convince the US government to take for decades.

    Modern AI is being used to solve the hardest problems of nuclear fusion. If we can finally crack that nut, there's no telling what's possible.

    But specifically when it comes to LLMs, profitable or not, people obviously find them useful. People wouldn't be using them in place of search engines, or doing all their homework with them, if they didn't find them useful. My only argument is that any AI trained on public content without consent should be required to effectively buy a license from, or pay royalties to, the public. If McDonald's is going to replace their front counters with AI trained on public content, then they should have to pay taxes proportional to how much use they get from that AI.

    In the theoretical extreme, if someone trains an AI on the general public's data, and is able to create an AI that somehow replaces every job on earth, then congrats, we now live in a post-work society, we just need to reach out and take it rather than letting one person capitalize infinitely.

    And at the end of the day, if you honestly believe the profits from AI are non-existent, then what are you worried about? All those companies putting all their eggs in the LLM basket are going to disappear overnight when the AI bubble finally pops, right?

  • Destroying it is both not an option, and an objectively regressive suggestion to even make.

    Destruction isn't possible because even if you deleted every bit of information from every hard drive in the world, now that we know it's possible, someone would recreate it all in a matter of months.

    Regressive because you're literally suggesting that we destroy a new technology because we're afraid of what it will do to the technology it replaces. Meanwhile, there's a very decent chance that AI is our best shot at solving the energy/climate crises through advancing nuclear tech, as well as surviving the next pandemic via groundbreaking protein-folding tech.

    I realize AI tech makes people uncomfortable (for...so many reasons), but becoming old fashioned conservatives in response is not a solution.

    I would take it a step further than public domain, though. I would also require any profits from illegally trained AI to be licensed from the public. If you're going to use an AI to replace workers, then you need to pay taxes to the people, proportional to what you would have been paying the workers it replaces.

  • He knows exactly what he's doing. It's multiple groups of shitheads using each other to gain power, not just one big group of elites who are all trustworthy buddies.

    My guess is that Musk will need to buy a merc army before he's able to establish his own nation in space. Might already have one.

  • TIL. Unfortunately I feel like we live in a post-precedent world. As though somehow they're going to say, "sure, any normal jury would go with the eggshell rule...but you're no normal jury, are you? You're special. You can see right through that EsTaBlIsHeD pReCeDeNt hogwash and make the REAL right call 😈."

    And somehow it will work.

  • OP is misquoting and misinterpreting what was said here.

    "We cannot associate a product serial number with a customer unless that customer has voluntarily registered their product on our site"

    But they said that wouldn't even apply if he had registered, because his bag was a V1, and

    "we did not implement unique serial numbers until V2"

  • Yeah, it's not a Mastodon issue any more than racist speech is an issue with our ability to vocalize as humans.

    Similarly, the solution to people saying racist things isn't for all speech to be policed by a central authority, it's for societies themselves to learn to identify and reject racism.

  • "The problem in many cases isn't that they don't literally see it but that they aren't aware of what constitutes racism a lot of the time."

    I agree with this part, "in many cases" sure,

    "That's the primary issue here."

    ...but I think this is a strong claim to make unless you have data to back it up.

    I believe you and I are likely speaking from our own anecdotal experience on the platform, and for all we know, most people are in instance bubbles and are also speaking from their own perspectives.

    If the "primary issue" is "why do some people not report seeing racism?" and the two possible explanations are either "they see it but are not aware" and "they actually never see it", then unless we have accurate data from all those bubbles, we can't make any claims about which is the real explanation.

    But if you have data on this, that would change everything.

  • Comparing the "racism" present on a federated service to that on a centralized one doesn't make sense. You can say certain instances of the service fail to adequately moderate racism, but there are so many niche pockets of mastodon that most people are exposed to, and moderated by, completely different groups.

    To make a slightly more nerdy analogy, it's like someone saying "the Windows desktop experience is better than Linux." Well, Linux doesn't come with a desktop interface, so that statement doesn't make sense. Which of the dozens of desktop environments/distros are you talking about? I'm sure the criticism is fair, but it doesn't contain enough information to make any real claim.

    So it's not unreasonable for one person to say "I see racism on Mastodon" and many others to say "I never see it", and not just because of the races of the people involved. "Mastodon" is federated software built on a shared protocol, not any one of the various communities that run it.

  • lol if only. We're just going to see more fences around neighborhoods, more police in the "nice" parts of town, more general segregation between the upper and middle class, and if the CEOs are actually scared, they'll just spend all their time in other countries that are safer.