Not every board has this feature; some newer ones do, and for older ones you can buy a POST beeper that plugs into the speaker header. Your manual will tell you if you have one.
Tbh I think you're making a lot of assumptions and ignoring the point of this paper. The small model was used to quickly demonstrate generative degradation over iterations when the model was trained on its own output data. OPT-125M was chosen precisely because of its small size, so they could show the phenomenon in fewer iterations. The point still stands: this shows that data poisoning exists, and a model being much bigger doesn't mean it's immune to the effect, just that it will take longer. I suspect that with companies continually scraping the web and other sources for data, like Reddit, which this article mentions has struck a deal with Google to let its models train on Reddit posts, this process won't actually take that long, as more and more of those posts become AI-generated themselves.
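For what it's worth, the loop being described is easy to sketch. Below is a minimal, illustrative version assuming Hugging Face transformers/datasets: each generation of the model is fine-tuned only on text sampled from the previous generation. The sample counts, sequence lengths, and training settings are placeholders I picked for brevity, not the paper's actual setup; the degradation would be measured separately (e.g. perplexity on held-out human text).

```python
# Sketch of iterative self-training ("model collapse") with a small model.
# Assumptions: facebook/opt-125m as in the paper; all hyperparameters are
# illustrative, not the authors' configuration.
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

def generate_corpus(model, n_samples=200, max_new_tokens=128):
    """Sample synthetic text from the current generation of the model."""
    model.eval()
    texts = []
    for _ in range(n_samples):
        inputs = tokenizer("", return_tensors="pt")  # unconditional sampling
        with torch.no_grad():
            out = model.generate(**inputs, do_sample=True,
                                 max_new_tokens=max_new_tokens)
        texts.append(tokenizer.decode(out[0], skip_special_tokens=True))
    return texts

def finetune_on(model, texts):
    """Fine-tune the model on text produced by its previous generation."""
    def tok(batch):
        enc = tokenizer(batch["text"], truncation=True,
                        max_length=256, padding="max_length")
        # Mask padding positions out of the loss.
        enc["labels"] = [[t if t != tokenizer.pad_token_id else -100 for t in ids]
                         for ids in enc["input_ids"]]
        return enc
    ds = Dataset.from_dict({"text": texts}).map(tok, batched=True,
                                                remove_columns=["text"])
    args = TrainingArguments(output_dir="gen", num_train_epochs=1,
                             per_device_train_batch_size=4, report_to=[])
    Trainer(model=model, args=args, train_dataset=ds).train()
    return model

# Each generation trains only on the previous generation's output; with a
# 125M-parameter model the quality drop shows up after just a few loops.
for generation in range(5):
    synthetic = generate_corpus(model)
    model = finetune_on(model, synthetic)
```

A bigger model run through the same loop should show the same drift, just over more iterations, which is the point the paper is making.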
I think it's a fallacy to assume that a giant model is therefore "higher quality" and resistant to data poisoning
I have, I've noticed. I've noticed that I'm no longer screaming out in a sea of people and being ignored or ridiculed. We're in a smaller pond here and the waters are much clearer
No, no, it was totally worth it. If you had done them all in one go, the gradient would have been the same on all the models; this way it's much more interesting and each kid gets their own color.
It's pretty argumentative tbh