  • Help me understand what you mean by "reductionism". What parts do you believe I'm simplifying or overlooking? Also, could you explain why you think being alive is essential for understanding? Shifting the goalposts makes it difficult to discuss this productively. I've also provided evidence for my claims, while I haven't seen any from you. If we focus on sharing evidence to clarify our arguments, we can both avoid bad-faith maneuvering.

    Besides, that article doesn’t really support your statement; it just shows that a neural network can link words to pictures, which we already knew.

    It does. By showing that the model can learn those associations in what is, from a human's perspective, a very limited amount of time, it shows that the model has clearly experienced the world.
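
    To make that concrete, here's a minimal sketch of the mechanism (toy data and dimensions I made up, plain numpy; not the actual experiment from the article): noisy "images" are drawn around hidden concept prototypes, each word gets one embedding vector, and a CLIP-style contrastive loss aligns them.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim, words = 8, ["cat", "dog", "car"]
    prototypes = rng.normal(size=(len(words), dim))   # hidden "visual" concepts

    def sample_batch(n):
        """Return n (image, word-index) pairs: noisy views of the prototypes."""
        idx = rng.integers(len(words), size=n)
        imgs = prototypes[idx] + 0.1 * rng.normal(size=(n, dim))
        return imgs / np.linalg.norm(imgs, axis=1, keepdims=True), idx

    W = 0.01 * rng.normal(size=(dim, len(words)))     # one caption embedding per word

    for step in range(300):
        imgs, idx = sample_batch(32)
        caps = W[:, idx].T                            # caption embedding for each pair
        logits = imgs @ caps.T                        # every image scored against every caption
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)             # softmax; matched pairs sit on the diagonal
        grad = (p - np.eye(len(idx))).T @ imgs / len(idx)
        for j, k in enumerate(idx):                   # scatter gradients back onto the words
            W[:, k] -= 0.5 * grad[j]

    # Images the model never saw during training match the right word.
    test_imgs, test_idx = sample_batch(5)
    print((test_imgs @ W).argmax(axis=1) == test_idx)   # -> [ True  True  True  True  True]
    ```

    A few hundred noisy examples are enough for the word vectors to line up with the right image clusters, which is the small-scale version of the association the article demonstrates.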

  • When people say that the model "understands", it means just that, not that it is human, and not that it does so exactly as humans do. Judging its capabilities by how closely it mimics humans is pointless, just like judging a boat by how well it can do the breaststroke. The value lies in its performance and output, not in imitating human cognition.

  • The misconception that this is stealing is understandable, but it misses the mark. The model is used to create novel works, and it consists of an original analysis of the training images in comparison with one another, not of the images themselves. Neither analysis nor creation constitutes theft.

    While the mechanisms for learning differ, it's unfair to deny that this is learning when the model can produce output that doesn't appear anywhere in its training set (see the sketch at the end of this comment). If that's not learning, what is?

    We also don't need to compare quality. Art's value transcends technical skill. The subjective nature of quality and the limitations of generative models make such comparisons pointless. Rather than a threat to tradition, I see this as a tool with its own challenges and possibilities.

    You should check out this article by the EFF, and this one by the Association of Research Libraries. I think we can have a nuanced discussion without simplistic arguments.
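
    Here's the sketch I promised above: a deliberately tiny stand-in for a generative model, a character-level bigram chain "trained" on a handful of words I made up. It's nothing like a diffusion model internally; the only point is that a model built purely from statistics of its training data readily produces strings that appear nowhere in that data.

    ```python
    import random
    from collections import defaultdict

    corpus = ["painting", "painter", "pointer", "printer", "pattern"]

    # "Training": count which character follows which ('^' and '$' mark start/end).
    transitions = defaultdict(list)
    for word in corpus:
        for a, b in zip("^" + word, word + "$"):
            transitions[a].append(b)

    def generate(rng):
        """Sample a new string from the learned transition statistics."""
        out, ch = "", "^"
        while True:
            ch = rng.choice(transitions[ch])
            if ch == "$":
                return out
            out += ch

    samples = {generate(random.Random(seed)) for seed in range(200)}
    novel = sorted(s for s in samples if s not in corpus)
    print(novel[:10])   # blends of the training words that appear in none of them
    ```

    Typically most of the sampled strings are novel: the chain stores only transition statistics, never the words themselves.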

  • His generic AI-company-CEO roleplay is a straw man. This part is where it devolves from a straw man into outright caricature.

    If you want to hear some real arguments, I suggest you read this article by Kit Walsh, a senior staff attorney at the EFF, and this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries.

  • The article dealt with Stable Diffusion because it was the only open model that allowed people to study it. If there were more problems with Stable Diffusion, we'd have heard of them by now. That's precisely what open-source development offers here: by making AI accessible, we maximize public participation and understanding, foster responsible development, and prevent harmful attempts at control.

    As it stands, she was much better informed than you are and is an expert in law to boot. You, on the other hand, are making a sweeping generalization that runs straight into an appeal to ignorance. It's dangerous to assert a proposition just because it has not been proven false.

  • This part does:

    It’s not surprising that the complaints don’t include examples of substantially similar images. Research regarding privacy concerns suggests it is unlikely that a diffusion-based model will produce outputs that closely resemble one of the inputs.

    According to this research, there is a small chance that a diffusion model will store information that makes it possible to recreate something close to an image in its training data, provided that the image in question is duplicated many times during training. But the chances of an image in the training data set being duplicated in output, even from a prompt specifically designed to do just that, are literally less than one in a million.

    The linked paper goes into more detail.

    On the note of output, I think you're responsible for any infringing works you produce, whether you used Photoshop, copy & paste, or a generative model. Also, specific instances will need to be evaluated individually, and there might be models that don't qualify: Midjourney's new model is so poorly trained that it's downright easy to get these bad outputs.
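
    For evaluating a specific model or output, here's a simplified sketch of the kind of near-duplicate check the memorization research relies on. The actual studies use stronger measures (e.g. distances between learned embeddings); the random arrays, the 8x8 average hash, and the 0.9 threshold here are my own illustrative assumptions.

    ```python
    import numpy as np

    def average_hash(img: np.ndarray, size: int = 8) -> np.ndarray:
        """Downscale to size x size by block-averaging, then threshold at the mean."""
        h, w = img.shape
        blocks = img[: h - h % size, : w - w % size]
        blocks = blocks.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
        return (blocks > blocks.mean()).ravel()

    def similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Fraction of matching hash bits (1.0 means identical hashes)."""
        return float((average_hash(a) == average_hash(b)).mean())

    rng = np.random.default_rng(0)
    training_set = [rng.random((64, 64)) for _ in range(100)]    # stand-in grayscale images
    generated = training_set[17] + 0.01 * rng.random((64, 64))   # a near-copy, for the demo

    flagged = [i for i, img in enumerate(training_set) if similarity(generated, img) > 0.9]
    print(flagged)   # [17] -> this output would be flagged as likely memorized
    ```

    A heavily duplicated training image is exactly the risky case the quoted research identifies, and it's the case a check like this catches.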