
  • I'm sorry. Now it gets completely false...

    Read the first paragraph of the Wikipedia article on machine learning, or the introduction of any of the literature on the subject. The "generalization" includes that model-building capability. They go into a bit of detail later. They specifically mention "to unseen data". And "learning" is also there. I don't think the Wikipedia article is particularly good at explaining it, but at least the first sentences lay down what it's about.

    And what do you think language and words are for? To transport information. There is semantics... Words have meanings. They name things, abstract and concrete concepts. The word "hungry" isn't just a funny accumulation of lines and arcs, which statistically get followed by other specific lines and arcs... There is more to it. (a meaning.)

    And this is what makes language useful. And the generalization and prediction capabilities are what make ML useful.

    How do you learn as a human, when not from words? I mean there are a few other possibilities. But an efficient way is to use language. You sit in school or uni and someone at the front of the room speaks a lot of words... You read books and they also contain words?! And language is super useful. A lion mother also teaches her cubs how to hunt, without words. But humans have language and it's really a step up in what we can pass down to following generations. We record knowledge in books, can talk about abstract concepts, feelings, ethics, theoretical concepts. We can write down how gravity and physics and nature work, just with words. That's all possible with language.

    I can look up whether there is a good article explaining how learning concepts works and why that's the fundamental thing that makes machine learning a field of science... I mean ultimately I'm not a science teacher... And my literature is all in German and I returned it to the library a long time ago. Maybe I can find something.

    Are you by any chance familiar with the concept of embeddings, or vector databases? I think that showcases that it's not just letters and words in the models. These vectors / embeddings that the input gets converted to match concepts. They point at the concept of "cat" or "presidential speech". And you can query these databases: point at "presidential speech" and find a representation of it in that area. Store the speech with that key and find it later by querying what Obama said at his inauguration... That's oversimplified, but maybe it visualizes a bit more that it's not just letters and words in the models, but the actual meanings that get stored. Words get converted into a (multidimensional) vector space and the model operates there. These word representations are called "embeddings", and transformer models, the current architecture for large language models, use them.
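    To make that less abstract, here's a tiny toy sketch of the idea. The 3-dimensional vectors and the words are invented just for illustration; real models learn hundreds or thousands of dimensions from data:

```python
import math

# Hand-made toy "embeddings" (invented numbers, purely illustrative).
embeddings = {
    "cat":     [0.9, 0.8, 0.1],
    "dog":     [0.8, 0.9, 0.2],
    "tractor": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way, near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, store):
    # "Query the database": return the stored word closest to the query.
    return max((w for w in store if w != query),
               key=lambda w: cosine_similarity(store[w], store[query]))

print(nearest("cat", embeddings))  # → dog
```

    The point is only that similarity between meanings becomes measurable geometry: "cat" ends up near "dog" and far from "tractor".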

    Edit: Here you are: https://arxiv.org/abs/2304.00612

  • Also the software needs to be efficient. Use less RAM and fewer CPU cycles. And I don't think the ActivityPub protocol in itself is very efficient. I'd like to see those aspects compared to older federated technologies like NNTP or email.

    But I'd agree on the points at the top. Content should get compressed and cached on demand. Neither transferred every time from the original instance, nor transferred without a user ever viewing it. Caching on demand or a DHT (P2P) storage backend could do that.
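    Just to illustrate the "on demand" part with a minimal sketch: fetch_from_origin here is a made-up stand-in for a federation request, not a real API.

```python
# Toy on-demand cache: content is fetched from the origin instance only
# the first time someone actually requests it, then served locally.
origin_fetches = 0

def fetch_from_origin(post_id):
    # Stand-in for federation traffic; each call counts as one transfer.
    global origin_fetches
    origin_fetches += 1
    return f"content of {post_id}"

cache = {}

def get_post(post_id):
    if post_id not in cache:     # cache miss: transfer once from origin
        cache[post_id] = fetch_from_origin(post_id)
    return cache[post_id]        # cache hit: no extra transfer

get_post("p1")
get_post("p1")
get_post("p2")
print(origin_fetches)  # → 2 (two distinct posts, not three requests)
```

    Same idea, just with eviction and compression on top, would avoid both problems: no repeated transfers, and nothing transferred that nobody views.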

  • iptables or nftables. Or firewalld depending on the Linux distro and version you use.

    Sometimes the Arch Wiki has some good info on specific configurations. I mean it's not that easy to write firewall rules on the command line. But it's no rocket science either.
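    As a rough sketch, a minimal nftables ruleset could look like this. The port number and the default-drop policy are just examples; adjust to the services you actually run:

```shell
# Example starting point, not a complete setup: drop incoming traffic
# by default, allow loopback, established connections and SSH.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input iif lo accept
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input tcp dport 22 accept
```
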

  • Hmm. It's kind of just a VPN. It tunnels your traffic and terminates it at some server with those IPs. It's just that NordVPN etc. make you share an IP with other users and don't offer port forwarding. But the rest of Hoppy isn't necessarily unique, it's just a specific configuration of a VPN.

    I rented a VPS and installed WireGuard myself. And created the firewall rules to forward (some) incoming traffic to my home server. That's the same thing Hoppy does. Just that Hoppy does the setup of the firewall and WireGuard for you.

    But I'm not aware of any similar services that do it automatically. Maybe something like pagekite.net comes close.

    So I don't know if that's the correct solution to what you're doing, but I'd say one alternative would be to rent any small server, install WireGuard both there and on the RasPi, connect them and configure WireGuard on the RasPi so all outgoing traffic goes through the tunnel. And then configure the three or so firewall rules on the VPS to make it forward incoming traffic on all ports to the RasPi.
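    The VPS-side rules might look roughly like this with iptables. The interface names and the tunnel address are placeholders from a typical setup; yours will differ:

```shell
# Assume the VPS's public interface is eth0, the WireGuard interface is
# wg0, and the RasPi's tunnel address is 10.0.0.2 — all just examples.
sysctl -w net.ipv4.ip_forward=1   # let the VPS route packets at all
# 1) send incoming TCP traffic on the public interface to the RasPi
iptables -t nat -A PREROUTING -i eth0 -p tcp -j DNAT --to-destination 10.0.0.2
# 2) allow the forwarded traffic through
iptables -A FORWARD -i eth0 -o wg0 -d 10.0.0.2 -j ACCEPT
# 3) rewrite the source so replies come back through the tunnel
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```
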

  • SXMO?

    I think closest to your idea is speech recognition and an AI assistant. You can give it commands that way.

    I don't think there's much that includes fat fingers and touchscreens and is possible without a graphical UI.

    You could buy an old Nokia from the 90s with lots of text menus. That won't speed things up but it's certainly fewer icons, more text, and as a bonus you can feel the physical buttons without looking at the phone.

    Or a BlackBerry with a QWERTY keyboard on it. Or use convergence and attach a proper keyboard via USB and install Termux.

    Theoretically you could have it project a "holographic" keyboard onto the desk in front of you. Or use VR glasses.

    Or hold it sideways and type with 8 fingers simultaneously, like on a stenotype keyboard.

    Those would be ways to improve on the keyboard / input method and allow you to use the CLI in its current form. I mean the CLI itself is already available, it's just cumbersome to use. I'd say a speech assistant is more like it, if you want an entirely different concept and not just a better keyboard and/or larger screen.

  • Hmm. I'm not really sure where to go with this conversation. That contradicts what I've learned in undergraduate computer science about machine learning. And what seems to be consensus in science... But I'm also not a CS teacher.

    We deliberately choose model size, training parameters and implement some trickery to prevent the model from simply memorizing things. That is to force it to form models of concepts. And that is what we want and what makes machine learning interesting/usable in the first place. You can see that by asking them to apply their knowledge to something they haven't seen before. And we can look a bit inside at the vectors, activations and so on. For example, a cat is more closely related to a dog than to a tractor. And it has learned the rough concept of cat, its attributes and so on. It knows that it's an animal, has fur, maybe has a gender. That the concept "software update" doesn't apply to a cat. This is a model of the world the AI has developed. They learn all of that and people regularly probe them and find out they do.

    Doing maths with an LLM is silly. Using an expensive computer to do billions of calculations to maybe get a result that could be done by a calculator, or 10 CPU cycles on any computer, is just wasting energy and money. And there's a good chance that it'll make something up. That's correct. And a side-effect of intended behaviour. However... It seems to have memorized its multiplication tables. And I remember reading a paper specifically about LLMs and how they've developed concepts of some small numbers/amounts. There are certain parts that get activated that form a concept of small amounts. Like what 2 apples are. Or five of them. As I remember, it just works for very small amounts. And it wasn't straightforward but had weird quirks. But it's there. Unfortunately I can't find that source anymore or I'd include it. But there's more research along those lines.

    And I totally agree that predicting token by token is how LLMs work. But how they work and what they can do are two very different things. More complicated things like learning and "intelligence" emerge from those more simple processes. And they're just a means of doing something. It's consensus in science that ML can learn and form models. It's also kind of in the name of machine learning. You're right that it's very different from what and how we learn. And there are limitations due to the way LLMs work. But learning and "intelligence" (with a fitting definition) is something all AI does. LLMs just can't learn from interacting with the world (it needs to be stopped and re-trained on a big computer for that) and it doesn't have any "state of mind". And it can't think backwards or do other things that aren't possible by generating token after token. But there isn't any comprehensive study on which tasks are and aren't possible with this way of "thinking". At least not that I'm aware of.
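    To illustrate the token-by-token part (and only that), here's a toy bigram "language model". A real LLM replaces the lookup table with a huge neural network, but the generation loop has the same shape: predict a next token, append it, repeat.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; the "model" is just bigram counts from it.
corpus = "the cat sat on the mat . the cat sat on the rug .".split()

next_counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_counts[a][b] += 1

def generate(token, steps):
    out = [token]
    for _ in range(steps):
        # Greedy decoding: always pick the most frequent continuation.
        token = next_counts[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

print(generate("the", 4))  # → the cat sat on the
```

    Nothing here "knows" anything, which is the point: whatever capabilities an LLM has beyond this must emerge from what the network learned, not from the generation loop itself.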

    (And as a sidenote: "Coming up with (wrong) things" is something we want. I type in a question and want it to come up with a text that answers it. Sometimes I want creative ideas. Sometimes it should tell the truth and not be creative with that. And sometimes we want it to lie or withhold the truth. Like in every prompt of any commercial product that instructs it not to reveal those internal instructions to the user. We definitely want all of that. But we still need to figure out a good way to guide it. For example, not to get too creative with simple maths.)

    So I'd say LLMs are limited in what they can do. And I'm not at all believing Elon Musk. I'd say it's still not clear if that approach can bring us AGI. I have some doubts whether that's possible at all. But narrow AI? Sure. We see it learn and do some tasks. It can learn and connect facts and apply them. Generally speaking, LLMs are in fact an elaborate form of autocomplete. But in the process they learned concepts and something akin to reasoning skills and a form of simple intelligence. Being fancy autocomplete doesn't rule that out and we can see it happening. And it is unclear whether fancy autocomplete is all you need for AGI.

  • That is an interesting analogy. In the real world it's kinda similar. The construction workers also don't have a "desire" (so to speak) to connect the cities. It's just that their boss told them to do so. And it happens to be their job to build roads. Their desire is probably to get through the day and earn a decent living. And further along the chain, not even their boss nor the city engineer necessarily "wants" the road to go in a certain direction.

    Talking about large language models instead of simpler forms of machine learning makes it a bit complicated. Since it's an elaborate trick. Somehow making them want to predict the next token makes them learn a bit of maths and concepts about the world. The "intelligence", the ability to answer questions and do something akin to "reasoning", emerges in the process.

    I'm not that sure. Sure the weights of an ML model in itself don't have any desire. They're just numbers. But we have more than that. We give it a prompt, build chatbots and agents around the models. And these are more complex systems with the capability to do something. Like do (simple) customer support or answer questions. And in the end we incentivise them to do their job as we want, albeit in a crude and indirect way.

    And maybe this is skipping half of the story and directly jumping to philosophy... But we as humans might be machines, too. And what we call desires is a result from simpler processes that drive us. For example surviving. And wanting to feel pleasure instead of pain. What we do on a daily basis kind of emerges from that and our reasoning capabilities.

    It's kind of difficult to argue. Because everything also happens within a context. The world around us shapes us, and at the same time we're part of bigger dynamics and also shape our world. And large language models or the whole chatbot/agent are pretty simplistic things. They can just do text and images. They don't have consciousness or the ability to remember/learn/grow with every interaction, as we do. And they do simple, singular tasks (as of now) and aren't completely embedded in a super complex world.

    But I'd say that an LLM answering a question correctly (which it can do), and why it does so given the way supervised learning works... and the construction worker building the road towards the other city, and how that relates to his basic instincts as a human... are kind of similar concepts. They're both results of simpler mechanisms that are completely unrelated to the goal the whole entity is working towards. (I mean not directly related... i.e. needing money to pay for groceries, and paving the road.)

    I hope this makes some sense...

  • I'm not sure if I can recommend anything. You probably need to let it go to sleep properly. I believe there once used to be separate settings for when the display turns off and when the screen lock activates. Either I'm confusing Android with something else... Or they removed that setting now that everyone uses a fingerprint reader. Maybe there is an app to control the screen lock. I didn't find one with a very quick search.

  • If you put it on standby properly, the processor and other components will enter a low power mode. If you just turn the display black, everything keeps running. And just idling draws more power than the dedicated power-save modes that just wake the processor and important components every now and then to check if something happened.

  • I'd agree. Either have a "Register" link that leads you to a website explaining how to choose an instance and register there. Or maybe a drop-down menu with choices of instances and you can put in custom text if your instance isn't amongst the defaults. That's certainly not ideal as it prefers some instances over others, but maybe okay. Regardless, the onboarding process could be easier.

    (And do away with the passwords, I think they're an annoying concept and should go away for good in the future.)

  • Isn't the reward function in reinforcement learning something like a desire it has? I mean training works because we give it some function to minimize/maximize... A goal that it strives for?! Sure it's a mathematical way of doing it and in no way as complex as the different and sometimes conflicting desires and goals I have as a human... But nonetheless I think I'd consider this as a desire and a reason to do something at all, or machine learning wouldn't work in the first place.
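    A minimal sketch of that idea: a two-armed bandit, where the reward function is the only "goal" the agent has. All the numbers are invented; the point is just that the value estimates drift toward whatever the rewards favour.

```python
# Toy reinforcement learning sketch (two-armed bandit, invented numbers).
def reward(action):
    # The reward function IS the "desire" here: action 0 is simply
    # defined as the good one.
    return 1.0 if action == 0 else 0.0

values = [0.0, 0.0]   # the agent's current estimate of each action's worth
alpha = 0.1           # learning rate

for step in range(100):
    action = step % 2                       # try both actions in turn
    r = reward(action)
    values[action] += alpha * (r - values[action])  # nudge toward reward

best = max(range(2), key=lambda a: values[a])
print(best, [round(v, 3) for v in values])
```

    After training, the agent "prefers" action 0, but only in the sense that its numbers say so; the preference was put there entirely by the reward function.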

  • And it doesn't have any internal state of mind. It can't "remember" or learn anything from experience. You need to always feed everything into the context or stop and retrain it to incorporate "experiences". So I'd say that rules out consciousness without further systems extending it.

  • You could self-host an S3-compatible storage bucket with something like MinIO or Garage.

    S3 backends are available in a lot of software and it's kinda made for a similar use-case. I don't know which projects have caching available in a way that aligns with your setup. But I found these two easy to set up.
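    For example, MinIO can be spun up with a single Docker command, roughly like this. The credentials, ports and paths are placeholders; check the MinIO docs for the current flags before relying on it:

```shell
# Single-node MinIO sketch: S3 API on 9000, web console on 9001.
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=change-me-please \
  -v /srv/minio-data:/data \
  quay.io/minio/minio server /data --console-address ":9001"
```
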

    • fail2ban / brute forcing prevention
    • quick, frequent updates(!)
    • containerization / virtualization
    • secure passwords, better keys
    • firewall
    • a hardened operating system (distribution)
    • SELinux / Apparmor / ... / OpenBSD
    • not installing unnecessary stuff
    • An admin who is an expert and knows what they're doing.