Fiber connections are symmetrical, meaning the download speed is the same as the upload speed.
A gigabit fiber connection gives you 1 gigabit down and 1 gigabit up. A "gigabit" cable connection gives you 1.something gigabit down (it allows for spikes... Usually) and like 20-50 megabits upload.
Fiber ISPs may still limit your upload speeds but that's not a limitation of the technology. It's them oversubscribing their (back end) bandwidth.
Cable Internet really can't give you gigabit uploads without dedicating half the available channels to that purpose, and that would actually interfere with their ability to oversubscribe lines. It's complicated... But just know that the DOCSIS standards are basically hacks (that will soon run into physical limitations that prevent them from providing more than 10 Gbps down) in comparison to fiber.
The DOCSIS 4.0 standard claims to be able to handle 10 Gbps down and 6 Gbps up. Realistically, that's never going to happen. Instead, cable companies will use it to give people 5 Gbps connections with 100 megabit uploads because they're bastards.
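To put rough numbers on the channel math: cable reaches "gigabit" by bonding many channels together, and the per-channel rates below are approximate DOCSIS 3.0 figures (a back-of-the-envelope sketch, not exact plant values):

```python
# Rough DOCSIS 3.0 channel-bonding arithmetic (approximate per-channel rates).
# Downstream 6 MHz QAM-256 channels carry roughly 38 Mbps each;
# upstream channels are fewer and slower, which is why uploads lag so far behind.

DOWNSTREAM_MBPS_PER_CHANNEL = 38  # approximate
UPSTREAM_MBPS_PER_CHANNEL = 27    # approximate

downstream = 32 * DOWNSTREAM_MBPS_PER_CHANNEL  # a 32-channel bonding group
upstream = 8 * UPSTREAM_MBPS_PER_CHANNEL       # a typical 8-channel upstream group

print(f"Downstream: ~{downstream} Mbps")  # ~1216 Mbps -> marketed as "gigabit"
print(f"Upstream:   ~{upstream} Mbps")    # ~216 Mbps, shared across the whole node
```

Giving everyone gigabit uploads would mean reassigning a big chunk of those downstream channels to upstream, which is exactly the trade-off cable operators won't make.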
From a copyright perspective, you don't need to ask for permission to train an AI. It's no different than taking a bunch of books you bought second-hand and throwing them into a blender. Since you're not distributing anything when you do that, you're not violating anyone's copyright.
When the AI produces something though, that's when it can run afoul of copyright. But only if it matches an existing copyrighted work closely enough that a judge would say it's a derivative work.
You can't copyright a style (writing, art, etc.) but you can violate a copyright if you copy, say, a mouse in the style of Mickey Mouse. So then the question, from a legal perspective, becomes: do we treat AI like a Xerox copier or do we treat it like an artist?
If we treat it like an artist the company that owns the AI will be responsible for copyright infringement whenever someone makes a derivative work by way of a prompt.
If we treat it like a copier, the person who wrote the prompt would be responsible (if they then distribute whatever was generated).
More interesting superpowers. I want stuff like the powers in Sagrada Reset. Not stereotypical stuff like strength, speed, invisibility, etc.
I also want to see a shapeshifter character as one of the good guys! Why are they always the bad guys?
Give people some superpowers with interesting limitations and make them work together in clever ways to accomplish amazing feats that would never be possible on their own.
Ooh! Get an Arduino/electronics starter kit! You'll learn how computers worked in the 80s. Then you'll be able to move on up to, say, Python in no time 🙂
The big difference is that updates in Linux happen in the background and aren't very intrusive. Your hard drive will be used here and there as it unpacks packages, but the difference between, say, apt and Windows Update is stark. Windows Update slows everything down quite a lot.
Dumb Restrictions on Media will always be Dumb Restrictions on Media.
We the people mostly won the DRM wars of the early 2000s. You do not want to legitimize that technology. It only helps big corporations/evil monopolies. It will never be a good thing for humanity as a whole.
When something is stolen, the one who originally held it no longer has it. In other words, stealing covers physical things.
Copying is what you're talking about and this isn't some pointless pedantic distinction. It's an actual, real distinction that matters from both a legal/policy standpoint and an ethical one.
Stop calling copying stealing! This battle was won by everyday people (Internet geeks) against Hollywood and the music industry in the early 2000s. Don't take it away from us. Let's not go back to the "you wouldn't download a car" world.
I dunno. It's better than their old, non-AI slop 🤷
Before, I didn't really understand what they were trying to communicate. Now, thanks to AI, I know they weren't really trying to communicate anything at all. They were just checking off a box ✅
My argument is that the LLM is just a tool. It's up to the person that used that tool to check for copyright infringement. Not the maker of the tool.
Big company LLMs were trained on hundreds of millions of books. They're using an algorithm that's built on that training. To say that their output is somehow a derivative of hundreds of millions of works is true! However, how do you decide the amount you have to pay each author for that output? Because they don't have to pay for the input; only the distribution matters.
My argument is that it's far too diluted to matter. Far too many books were used to train it.
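A back-of-the-envelope illustration of that dilution (the royalty pool and book count here are made-up figures, purely for the sake of argument):

```python
# Hypothetical payout if a royalty pool were split equally across every
# book in the training set. Both numbers are invented for illustration.
royalty_pool_usd = 1_000_000_000     # assume a $1B pool (made-up figure)
books_in_training_set = 100_000_000  # "hundreds of millions of books"

per_book_share = royalty_pool_usd / books_in_training_set
print(f"${per_book_share:.2f} per book")  # $10.00 per book, once, ever
```

And that's with a generously large pool; attribute any single output to its "sources" and each author's share of that one output rounds to nothing.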
If you train an AI with Stephen King's works and nothing else, then yeah: maybe you have a copyright argument to make when you distribute the output of that LLM. But even then, probably not, because the output isn't going to be identical. It'll just be similar. You can't copyright a style.
Having said that, with the right prompt it would be easy to use that Stephen King LLM to violate his copyright. The point I'm making is that until someone actually does use such a prompt no copyright violation has occurred. Even then, until it is distributed publicly it really isn't anything of consequence.
This just proves that Google's AI is a cut above the rest!