No, I'd say it has more to do with improved usability and better overall design leaving them unable to fix issues when they do occur. There isn't one specific company or system to blame. Nearly everything has, for better or worse, been boiled down into a webapp where there is minimal potential for error.
It's also not really fair to compare Gen Z to Millennials, as Millennials have had nearly twice as much time to figure things out.
I wasn't referring to the headline but to the situation in general. What I meant was that the regulators expected the companies to be forced to pay up rather than just dropping Canadian news altogether.
The issue is the marketing. If they only marketed language models for the things they can actually be trusted with (summarization, cleaning up text, writing assistance, entertainment, etc.), there wouldn't be nearly as much debate.
The creators of the image generation models have done a much better job of this, partially because the limitations can be seen visually rather than requiring a fact check on every generation. They also aren't claiming that they're going to revolutionize all of society, which helps.
No, but that's not really a concern. Unless the frame generation is significantly affecting the real frame rate, you will get smoother motion with latency similar to running without it. It's probably not ideal for competitive games where you want motion to be 1:1, but it's good enough for more casual ones.
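Rough numbers for the usual 2x interpolation setup, where a generated frame can only be shown once the next real frame exists; the 5% overhead figure is an assumption for illustration, not a measured value:

```python
# Back-of-the-envelope sketch of frame generation timing. The 2x interpolation
# model and the 5% overhead figure are assumptions for illustration only.

def frame_gen_estimate(real_fps: float, overhead: float = 0.05) -> dict:
    """Estimate presented fps and added latency for 2x frame interpolation."""
    effective_real_fps = real_fps * (1 - overhead)  # real frames lost to the interpolation pass
    real_frame_time_ms = 1000 / effective_real_fps
    presented_fps = effective_real_fps * 2          # one generated frame per real frame
    # An interpolated frame can only be built once the *next* real frame exists,
    # so roughly one real frame time of extra latency gets introduced.
    added_latency_ms = real_frame_time_ms
    return {
        "presented_fps": round(presented_fps, 1),
        "added_latency_ms": round(added_latency_ms, 1),
    }

print(frame_gen_estimate(60))   # ~114 fps presented, ~17.5 ms extra latency
print(frame_gen_estimate(120))  # ~228 fps presented, ~8.8 ms extra latency
```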
There isn't anything conclusive yet because there's still very little legal precedent. There was a case where someone made a comic that was essentially machine-generated art with text over it, and another where the creation was completely unguided. In both cases protection was denied because not enough human input was involved.
There has yet to be a case involving a greater amount of human input, such as using a method like ControlNet to guide composition.
I think it will eventually come down to proving that a work involved significant human guidance rather than just luck.
By that definition of copying, Google is infringing on millions of copyrights through their search engine, and anyone viewing a copyrighted work online is also making an unauthorized copy. These companies are using data from public sources that others have access to. They are doing no more copying than a normal user viewing a webpage.
What laws, specifically? The only ones I can find refer to limits on redistribution, which isn't happening here. If the models were able to reproduce the contents of the books, that would be a separate issue that would need to be resolved. But I can't find anything that would prohibit training.
Obviously restricting the input will cause the model to overfit, but that's not an issue for most models, where billions of samples are used. In the case of Stable Diffusion, this paper had a ~0.03% success rate extracting training data after 500 attempts on each image, roughly 6.23E-5% per generation. And that was on a targeted set with the highest number of duplicates in the dataset.
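For scale, here's the arithmetic behind that per-generation figure, using the rounded numbers quoted above (so it lands near, not exactly on, the paper's 6.23E-5% value):

```python
# Sanity check on the extraction numbers above. The inputs are the rounded
# figures quoted in the comment, so the result lands near, not exactly on,
# the paper's 6.23E-5% value.

per_image_success = 0.03 / 100   # ~0.03% of the targeted images were extracted
attempts_per_image = 500         # generations attempted per targeted image

per_generation = per_image_success / attempts_per_image
print(f"{per_generation:.1e}")          # 6.0e-07 as a probability
print(f"{per_generation * 100:.1e} %")  # 6.0e-05 %, i.e. on the order of 6.23E-5%
```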
The reason they were sold doesn't matter; as long as the material isn't being redistributed, copyright isn't being violated.
Is that illegal, though? As long as the model isn't reproducing the original, copyright isn't being violated. Maybe in the future there will be laws against it, but as of now the grounds for a lawsuit are shaky at best.
LLMs only predict the next token. Sometimes those predictions are correct, sometimes they're incorrect. Larger models trained on a greater number of examples make better predictions, but they are always just predictions. This is why incorrect responses often sound plausible even when they don't make logical sense.
Fixing hallucinations is more about decreasing inaccuracies than about fixing an actual problem with the model itself.
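If it helps, here's a toy picture of what that next-token prediction step looks like; the vocabulary and scores below are invented for illustration, real models do the same thing over tens of thousands of tokens:

```python
# Minimal toy sketch of "predicting the next token": the model outputs a score
# (logit) for every token in its vocabulary, the scores become probabilities,
# and one token is sampled. The vocabulary and logits here are made up.

import math
import random

vocab = ["Paris", "London", "Berlin", "banana"]
logits = [4.0, 2.5, 2.0, -1.0]   # hypothetical scores for "The capital of France is"

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]

for tok, p in zip(vocab, probs):
    print(f"{tok:>7}: {p:.3f}")
print("sampled:", next_token)
# "Paris" is by far the most likely pick, but "banana" still has nonzero
# probability, which is why confident-sounding wrong answers are always possible.
```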
I think the shift will be from the boomers who are completely helpless to the zoomers who are overly confident but lack the actual knowledge or ability to fix anything on their own.
Ancestral samplers are deterministic for a fixed seed, btw, but I think because each step builds on the previous one it's more obvious when that determinism is broken by optimizations.
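A toy version of what I mean, with a fake denoiser standing in for the real model; the point is just that the per-step noise comes from a seeded RNG, so the result repeats until an optimization changes how many draws happen or in what order:

```python
# Toy illustration of why an "ancestral" sampler is still deterministic for a
# fixed seed: fresh noise is injected at every step, but that noise comes from
# a seeded RNG, so the whole trajectory is reproducible. The denoising step
# below is a made-up stand-in, not a real diffusion model.

import numpy as np

def fake_denoise(x, sigma):
    # placeholder for the model's prediction at noise level sigma
    return x * (1 - 0.1 * sigma)

def ancestral_sample(seed, steps=10):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(4)            # initial latent
    sigmas = np.linspace(1.0, 0.0, steps)
    for i in range(steps - 1):
        x = fake_denoise(x, sigmas[i])
        if sigmas[i + 1] > 0:
            # the "ancestral" part: new noise every step, from the same seeded RNG
            x = x + sigmas[i + 1] * rng.standard_normal(4)
    return x

a = ancestral_sample(seed=42)
b = ancestral_sample(seed=42)
print(np.allclose(a, b))   # True: same seed, same result
# Anything that changes how many RNG draws happen, or in what order, shifts
# every later step, which is why small optimizations are so visible here.
```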
You can install any extension you want on the Dev version, and on some forks like Mull, by setting a custom extension collection. It's a bit of a pain but it works.
DuckDuckGo doesn't have anywhere near the capacity to collect data that Google does, and their ads are keyword-based rather than influenced by other data. Their search engine is really the only thing I'd recommend using, though, since their add-on and browser don't offer anything that others don't.
People are scared because it will make consolidation of power much easier and make many of the comfier jobs irrelevant. You can't strike for better wages when your employer is already trying to get rid of you.
The idealist solution is UBI but that will never work in a country where corporations have a stranglehold on the means of production.
Hunger shouldn't be a problem in a world where we produce more food with less labor than at any time in history, but it still is, because everything must have a monetary value and not everyone can pay enough to be worth feeding.
No sources, and even taking their numbers at face value they could continue running ChatGPT for another 30 years. I doubt they're anywhere near a net profit, but they're far from bankruptcy.
Fun fact: the X-nm naming convention doesn't actually describe the size of the process. It comes from traditional scaling predictions in which feature dimensions shrink to roughly 70% of the previous node every 2 years.
https://en.wikipedia.org/wiki/3_nm_process
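For the curious, the arithmetic behind that 70% figure, plus the node-name sequence it produces (the printed values are approximate; the real marketing names got rounded):

```python
# Back-of-the-envelope version of the "70% scaling" rule: shrinking linear
# dimensions to ~0.7x halves the area of a feature, which is where the roughly
# 2x density per generation comes from. The node names are just labels
# continuing that sequence, not measured gate sizes.

shrink = 0.7
area_ratio = shrink ** 2
print(f"area per feature: {area_ratio:.2f}x")      # ~0.49x
print(f"density gain:     {1 / area_ratio:.2f}x")  # ~2.04x per generation

# Following the naming sequence downward: each "node" label is ~0.7x the last.
node = 22.0
for _ in range(5):
    node *= shrink
    print(f"{node:.1f} nm")   # 15.4, 10.8, 7.5, 5.3, 3.7 (close to 16/14, 10, 7, 5, 3)
```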