And then when they get home and none of the very specific programs they need, which only run on Windows (Photoshop, Microsoft Office, or something similar), work on Linux, they'll come back to you and ask what is wrong with their computer.
I don't think it is even remotely close to being the same thing. I'm sorry, but we shouldn't be affording companies the ability to profit off other people's creations without their consent, regardless of how current copyright law works.
Acting as though a human writing a summary is the same thing as a vast network of computers processing data hundreds if not thousands of times faster than a human is foolish. Perhaps it is also foolish to try to apply our current copyright laws (which already favour large corporations rather than individual creators) to this slew of new technology, but simply ignoring the fundamental difference between the two is no way of going about it. We need copyright reform, we need protections for creators, and we need to stop acting as though machine learning algorithms are remotely comparable to humans in their capabilities, responsibilities, and rights.
There is a perfectly reasonable way of doing this ethically: use content that people have provided to the model of their own volition, with their consent either volunteered or paid for, not content scraped from an epub, regardless of whether you bought it or downloaded it from LibGen.
There are already companies training machine learning models ethically in this manner, and if creators do not want their content used as training data, it should not be.
My main point is that if people don't want their content used to train LLMs, they should absolutely have the option to opt out.
Training databases should be ethically sourced from opt-in programs, which some companies, such as Adobe, are already running.
I'm actually surprised by the comments in here. This technology is incredibly disruptive to authors; if they are correct that their intellectual property has been misused by these companies to train LLMs, then they absolutely should have the right to prevent that.
You can be pro-AI and pro-advancement and still respect creators' intellectual property rights, including the right not to have all their content stolen by megacorporations and used to generate profits while decimating entire industries.
The only reason people use Windows is that they never actually choose it. Imagine if every PC sold had a Linux option and a Windows option that cost an extra $100. What do you think people would buy?
I think they would buy Windows, because the software they need to do their job only runs on Windows.
I can’t believe people smart enough to acquire the wealth for that excursion
You do not need to be smart to acquire wealth.
Of the five people on the sub, I am confident that four were born into wealth, and I can't really find any information on the fifth.
The Dawoods (father and son) were only wealthy because their father/grandfather was wealthy.
Stockton Rush was also born into wealth; his family made their money from oil and shipping.
Can't find a lot of information about Hamish Harding, but he was flying aeroplanes at 13 and went to a prestigious private school called The King's School, so it's safe to say he was also born into considerable wealth.
Well, the company has the training data, so I would imagine that will be part of the discovery phase of the lawsuit.
It will be a very quick case if OpenAI provides its training data and there is no data from LibGen or Z-Library included in it.