If AI spits out stuff it's been trained on, doesn't it follow that AI-generated CSAM can only be produced if the AI has been trained on CSAM?
This article even explicitly says as much.
My question is: why aren't OpenAI, Google, Microsoft, Anthropic... being sued for possession of CSAM? By that logic, it's clearly in their training datasets.