Posts: 15 · Comments: 825 · Joined: 2 yr. ago

  • I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, if you haven't already. The EFF is a digital rights group that most recently won a historic case: border guards now need a warrant to search your phone.

    People are trying to conjure up new rights to take away another piece of the public's right of access to information, and to fashion themselves as a new owner class. Artists and everyone else should accept that others have the same rights as they do; they can't take those opportunities from other people just because it's their turn now.

    There's already a model trained on just Creative Commons licensed data, but you don't see them promoting it. That's because it was never about the data; it's an attack on their status. When asked about generators that didn't use their art, they came out overwhelmingly against them, with the same condescending and reductive takes they've been using this whole time.

    I believe that generative art, warts and all, is a vital new form of art that is shaking things up, challenging preconceptions, and getting people angry - just like art should.

  • Their problem was that they smashed too many looms and not enough capitalists. AI training isn't just for big corporations. We shouldn't applaud people who put up barriers that will make it prohibitively expensive for regular people to keep up. This will only help the rich and give corporations control over a public technology.

  • The new version of Midjourney has a real overfitting problem. If I remember correctly, someone found out v6 was trained partially on Stockbase image pairs, so they went to Stockbase, found some images, and used those exact tags in their prompts. The output greatly resembled the training data, and that's what ignited this whole thing.

    Edit: I found the image I saw a few days ago. They need to go back and retrain their model, IMO. When the output is this close to the training data, it has to be hurting the creativity of the model. This should only happen with images that haven't been de-duped in the training set, so I don't know what's going on here.

  • I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, if you haven't already. The EFF is a digital rights group that most recently won a historic case: border guards now need a warrant to search your phone.

    AI training isn't only for mega-corporations. We can already train open source models, and Mozilla and LAION have already committed to training AI anyone can use. We shouldn't put up barriers that only benefit the ultra-wealthy and hand corporations a monopoly on a public technology by making it prohibitively expensive for regular people to keep up. Mega-corporations already own datasets, and have the money to buy more. And that's before they make users sign predatory ToS granting them exclusive access to user data, effectively selling our own data back to us. Regular people, who could have had access to a competitive, corporate-independent tool for creativity, education, entertainment, and social mobility, would instead be left worse off than where they started.