Not once did I claim that LLMs are sapient, sentient or even have any kind of personality. I didn't even use the overused term "AI".
LLMs, for example, are something like... a calculator. But for text.
A calculator for pure numbers is a pretty simple device, one whose logic can be designed by a human directly.
When we want to create a solver for systems that aren't as easily defined, we have to resort to other methods. E.g. "machine learning".
Basically, instead of designing all the logic entirely by hand, we create a system which can end up in a finite, yet still nearly infinite, number of states, each of which produces different behavior. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for something a human mind can't even break down into building blocks, due to the sheer complexity of the given system (such as a natural language).
And like a calculator that can derive that 2 + 3 is 5, despite the fact that the number 5 is never mentioned in the input, or that this particular formula was not part of the suite of tests used to verify that the calculator works correctly, a machine learning model can figure out that "apple slices + batter = apple pie", assuming it has been tuned (aka trained) right.
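If you want a rough picture of what that tuning process looks like, here's a toy sketch (made-up model and data, nothing like a real LLM): a tiny model learns addition purely from example pairs, without ever seeing "2 + 3" itself.

```python
import random

# Toy "model": y = w1*a + w2*b. We want it to learn addition (w1 = w2 = 1)
# purely from examples, without ever being told the rule directly.
w1, w2 = random.random(), random.random()
lr = 0.01  # how hard we nudge the weights on each mistake

# Training data: sums of small numbers. "2 + 3" is deliberately left out.
data = [(a, b, a + b) for a in range(10) for b in range(10)
        if (a, b) not in [(2, 3), (3, 2)]]

for epoch in range(200):
    for a, b, target in data:
        error = (w1 * a + w2 * b) - target
        # Slowly tune the model: shift each weight to reduce the error a bit.
        w1 -= lr * error * a
        w2 -= lr * error * b

# It has never seen "2 + 3", yet it generalizes from the pattern it picked up.
print(round(w1 * 2 + w2 * 3))  # -> 5
```

Real models have billions of weights instead of two, but the "nudge it until it stops being wrong" loop is the same general idea.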
Learning is, essentially, "algorithmic copy-paste". The vast majority of things you know, you've learned from other people or other people's works. What makes you more than a copy-pasting machine is the ability to extrapolate from that acquired knowledge to create new knowledge.
And currently existing models can often do the same! Sometimes they make pretty stupid mistakes, but they often do, in fact, manage to end up with brand new information derived from old stuff.
I've tortured various LLMs with short stories, questions and riddles, which I've written specifically for the task and which I've asked the models to explain or rewrite. Surprisingly, they often get things either mostly or absolutely right, despite the fact it's novel data they've never seen before. So, there's definitely some actual learning going on. Or, at least, something incredibly close to it, to the point it's nigh impossible to differentiate it from actual learning.
It's illegal if you copy-paste someone's work verbatim. It's not illegal to, for example, summarize someone's work and write a short version of it.
As long as overfitting doesn't happen and the machine learning model actually learns general patterns instead of memorizing training data, it should be perfectly capable of generating data that's not copied verbatim from humans. Whom, exactly, is a model plagiarizing if it generates a summarized version of some work you give it, particularly if that work is novel and was created or published after the model was trained?
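Very roughly, the way you'd check for that (a sketch with made-up names and an arbitrary threshold) is to compare how the model does on the data it was tuned on versus data it has never seen:

```python
def generalization_gap(model, train_set, heldout_set, evaluate):
    """Compare performance on the data the model was trained on vs. data it
    has never seen. A big gap is the classic sign of memorization
    (overfitting); a small gap suggests it learned the general pattern."""
    return evaluate(model, train_set) - evaluate(model, heldout_set)

# Hypothetical usage; `my_model`, the datasets and `accuracy` are placeholders.
# gap = generalization_gap(my_model, train_set, heldout_set, accuracy)
# print("probably memorizing" if gap > 0.2 else "probably generalizing")
```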
I have a 120 gig SSD. The system takes up around 60 gigs + BTRFS snapshots and their overhead. I have around 15 gigs of wiggle room, on average. Trying to squeeze some /home stuff in there doesn't really seem that reasonable, to be honest.
As long as you don't re-format the partition. Not all installers are created equal, so it might be more complicated to re-install the OS without wiping the partition entirely. Or it might be just fine. I don't really install linux often enough to know that. ¯\_(ツ)_/¯
I have BTRFS on /, which lives on an SSD, and ext4 on an HDD, which is /home. BTRFS can do snapshots, which is very useful in case an update (or my own stupidity) bricks the system. Meanwhile, /home is filled with junk like cache files, games, etc. which doesn't really make sense to snapshot, but that's actually secondary. Spinning rust is slow and BTRFS makes it even worse (at least on my hardware), which, in itself, is enough to avoid using it.
SteamOS also uses an immutable filesystem and the system updates as a whole. Because of that, there is no risk of something updating separately and breaking compatibility.
It's fairly common for things to update on regular linux distros and break e.g. anticheat support in Proton or some other thing.
Another thing SteamOS does, at least on the Steam Deck, is use two OS partitions. The updates are always installed to the inactive one, so there's always one image that's known to work. Even if an update fails, the device will simply boot into the intact OS image. Regular distros usually don't have much in terms of fail-safes, so if things break, they have to be fixed manually.
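The general shape of that A/B scheme, as a toy sketch (not SteamOS's actual update code, just the idea):

```python
# Two OS image slots; only one is ever booted from. Updates always go to the
# other slot, so a botched update can't take down the working system.
slots = {"A": "known-good image", "B": "older image"}
active = "A"

def apply_update(new_image, install_succeeded):
    global active
    inactive = "B" if active == "A" else "A"
    slots[inactive] = new_image
    if install_succeeded:
        # Only switch the boot target once the new image is actually in place.
        active = inactive
    # On failure, nothing changes: the device keeps booting the intact slot.

apply_update("fresh update", install_succeeded=False)
print(active, "->", slots[active])  # still slot A, still the known-good image
```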
Basically, SteamOS is trying to be as reliable and "hands-off" of an OS as possible, to provide the best console-like experience.
Even looking at power delivery alone, there are still different voltages and wattages, as well as cable specs. Nothing really changes. You still end up having different cables for different devices, essentially.
USB-C is an interface that can be used for a variety of different things. There are different "levels" of power delivery, there's thunderbolt, there's DisplayPort-over-USB-C, etc. And for things to work, the devices on both ends of the cable and the cable itself must comply with any given standard.
For example, on some laptops you can't use the USB-C port that supports thunderbolt to charge the device, nor the port that supports power delivery to connect thunderbolt devices. While using the same physical interface, the ports are not interchangeable. Even if you're connecting everything right, nothing is going to work if the cable you're using isn't specced properly (and trying to figure out the spec of a cable you have, considering they rarely have any labeling, is, definitely, fun).
If anything, USB-C makes everything harder and more convoluted, because instead of using different ports and plugs for different standards, it's now one port for nigh everything under the sun. If you want things to work, nowadays, you have to hunt down cable and port specs to ensure everything is mutually compatible.
In the past you could slap together an adapter by chopping up some old cable and splicing it onto a new power supply. And things would work, even if the voltage or power ratings didn't match exactly, or even at all (although things would usually work much worse then).
I've jury-rigged an adapter for my laptop, which uses a 65w, 20v power brick, to run off a 45w, 16v one, when mine died and I needed to access the files. It worked, as long as I wasn't doing anything too computationally intensive on the thing.
If the laptops used USB-C, that very likely would not have worked at all. Chances are, the manufacturer of the smaller laptop would've bundled the cheapest power brick that covers the needs of the machine, so it would've most likely been 45w, 15v over power delivery. And mine would've been 65w, 20v over power delivery. And since everything in the USB-C world has to talk to each other and agree beforehand, chances are, nothing would even try to work, even if it realistically could.
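Roughly what that "agree beforehand" step amounts to, as a toy sketch (real USB power delivery negotiation is a lot more involved; the profiles below are made up for illustration):

```python
# The charger advertises (volts, watts) profiles; the device asks for what it
# needs. If nothing matches, no "real" power flows at all.
def negotiate(charger_profiles, device_needs):
    needed_volts, needed_watts = device_needs
    for volts, watts in charger_profiles:
        if volts == needed_volts and watts >= needed_watts:
            return (volts, watts)
    return None  # no agreement -> the charger won't even try

small_laptop_brick = [(5, 15), (9, 27), (15, 45)]  # tops out at 15v / 45w
my_laptop_needs = (20, 65)                         # wants 20v / 65w

print(negotiate(small_laptop_brick, my_laptop_needs))  # -> None, no charging
```

Compare that to the jury-rigged adapter above, where a mismatched brick still shoves out whatever voltage it has and the laptop limps along anyway.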
Virtual desktops are arranged on a one dimensional axis. Whether it's vertical, horizontal or fucking diagonal doesn't really change how they work. It's just fluff and animations, really.
I did. The first couple months were... An experience. But after getting used to all the different ways things work (many of which are, honestly, way better), it's quite, quite nice.
Some of my hardware even works better: the drawing tablet's drivers don't crash and the audio latency is much lower!
You see, adding pictures of women with white canes facing right, limes and pregnant men is a very important and time consuming job! Standardizing encoding for some human language people use is just not as important!
There are "Video Rooms". They're in beta too.
Also, screen sharing is done via the same platform-agnostic web APIs every other Electron-based app uses.
I got rid of screen capture induced lag by switching to Wayland.
Element has been working for me and my friends. At the moment, it just embeds Jitsi within the client to do group calls (which works fine. Jitsi isn't bad by any means), but native group calls are being worked on and are currently in beta!
Most music I have is from "Pay what you want" albums from Ponies@Dawn, VibePoniez, A State Of Sugar, etc.
When I come across artists I like, I tend to check out their other tracks and grab the ones I like.