
  • Yeah, I wouldn't use Louis' video as the only source, nor Techlore's. It looks to me like the person has autism and/or struggles with mental health issues, and it shows in the way they treat people and deal with things. That doesn't necessarily make it better or worse, or malicious. The conversation style is definitely not 'normal' and contains a fair amount of manic behaviour. It's probably more nuanced than that. I think the whole story is a bit sad, and I like GrapheneOS as a project.

  • Yeah, I think the BSDs led the way with some things, like jails. But those aren't distribution formats. Sandboxing is part of things like Flatpak, though, and we also have chroots and systemd-nspawn. I think I misread the comment and it was more shitposting than anything of substance.

  • It's probably not of any benefit to you as a user if the software is also available in the package manager.

    It helps if you want something that isn't available in your distro, or a different or maybe multiple specific versions, or if you want to contain some stuff and use the additional permissions system. But you don't get support from the distro maintainers this way, and it isn't tied into the rest of the system any more. I always use the packaged versions if available.

    Other than that, software developers can use it to publish a single build on their homepage that works on every distro.

  • I think the main point is that this makes it unavailable in F-Droid and leaves everyone else unable to build on, use, or adapt it for their own use-cases, except for the specific ones outlined by FUTO. It's source-available software, not free software. And it has other downsides, too. Once YouTube starts cracking down on third-party apps and the companies behind them, it's gone for good. yt-dl has demonstrated that free software offers more resilience in those cases.

    And I'd argue it's ineffective. Having a license forbid malicious use will only stop the honest people from using it. The bad players will probably not care. But that's debatable.

  • Sorry, it was probably more me having a bad day. I was a bit grumpy that day, because I hadn't had much sleep.

    I'm seeing lots of ...let's say... uninformed articles about AI. People usually anthropomorphise language models. (Because they do the thing they're supposed to do very well, that is: write text that sounds like text.) People bring in other (unrelated) concepts. But generally, evidence doesn't support their claims. Like the 'consciousness' claims in the Lemoine case last year. Maybe I get annoyed too easily. But my claim is that it is very important not to spread confusion about AI.

    I didn't see that the article was written by two high-profile AI researchers. I'm going to bookmark it, because it has lots of good references to papers and articles in it.

    But I have to disagree on almost every conclusion in it:

    • They begin by claiming that fixing (all) current flaws like hallucinations would amount to superintelligence, without backing that up at all.
    • The next paragraph is titled as if they were about to define AGI, but they just broaden the set of tasks narrow AI can do. I'd agree that it's impressive what can be done with current AI tech. But you'd need to show me the distinguishing factors and prove AI is past them. The way they do it just makes it a wide variant of narrow AI. (And I'd argue it's not that wide at all, compared to the things a human does every day.)
    • I think their example showcasing emergent abilities of ML is flawed. When doing arithmetic, there is a sharp threshold where you stop memorizing individual numbers and results, get a grasp of how the decimal system works, understand the concept of addition, and push past memorized multiplication tables. I'd argue it's not gradual like they claim. I get that this can't be backed up by studying current models yet. But it could well be that models are still too small, or that you'd need to teach them maths in a more effective way than just feeding them the words of every book on earth plus Wikipedia.
    • The story of AI history is fascinating: how people first tried to build AI with formal reasoning and semantic networks, constructed vast connected knowledge databases, went through two "AI winters", and nowadays we just dump the internet into an LLM and that's the approach that works.

    What I would like to have been part of that article:

    • How can we make sure we're not anthropomorphizing, but that it's really the thing itself that has general intelligence?
    • What are some quantitative and qualitative measurements for AGI, and how does the current state-of-the-art AI perform on them? They address this in the section "Metrics", but they just criticise current metrics and note that it passed the bar exam etc. What are the implications? What would proper metrics to back up a claim like theirs look like? They just did away with the metrics. What are they basing their conclusion on, then?
    • If defining general intelligence is difficult: what is the lower bound for AGI? What is considered the upper bound at which we're sure it's AGI?
    • What about the lack of a state of mind in transformer models? The model is trained and then it is the way it is, until OpenAI improves it a few months later and incorporates new information into the next iteration. It's unable to transition into a new state while running: it gets some tokens as input, calculates, and then produces output. There is no internal state that could store something or change (see the first sketch after this list). This is one of the main points ruling out consciousness. But it also limits the tasks it can do at all, doesn't it? It needs either prior knowledge, or every bit of information has to fit into the context window, or it has to retrieve it somehow, for example from a vector database. The authors mention "in-context learning" early on, but it's not clear whether that covers every task, and to what scale. Without more information or new scientific advancements, I doubt it. Most importantly:
    • It can't learn anything while running. It can't 'remember'. This is a requirement per definition of AGI: aren't intelligent entities supposed to be able to learn?
    • Are there tasks that can't be done by transformer models? One example I read about: they are feed-forward models; there is nothing in them that works backwards. The example task is writing a joke. You first need to come up with the punchline and then write the build-up to it. But once you tell it, you tell the build-up first, then the punchline. A transformer model starts writing at the beginning and only comes up with the punchline once it gets to that point in the text. Are there many real-world tasks for intelligence that inherently require you to think the other way round, backwards? Can you map them so you can tackle them with forwards-thinking (second sketch below)? If not, transformer models are unable to do those tasks; hence they're not AGI. But still, there are tasks similar to the joke example that LLMs obviously do better than you'd expect.
    • Are we talking about LLMs or agents? An LLM embedded in a larger system can do more: have its text fed back in, do reasoning and then give a final answer, store/'remember' information in a vector database, or be instructed to fact-check its output and rephrase it after providing its own critique (third sketch below). But from the article it's completely unclear what they're talking about. It seems like they only refer to a plain LLM like ChatGPT.
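
    First sketch: to make the "no internal state" point concrete, here's a toy Python sketch. Nothing in it is a real model; `transformer_step` just stands in for the forward pass. The only "memory" is the token list we pass in:

    ```python
    # Toy sketch of autoregressive generation. The step function is pure:
    # same weights + same tokens always give the same next token, and
    # nothing persists between calls (the weights never change either).

    def transformer_step(weights: int, tokens: list[int]) -> int:
        # stand-in for the real forward pass; deterministic, no side effects
        return hash((weights, tuple(tokens))) % 50_000

    def generate(weights: int, prompt: list[int], n: int = 10) -> list[int]:
        tokens = list(prompt)
        for _ in range(n):
            tokens.append(transformer_step(weights, tokens))
        # drop this list and everything from this run is gone for good
        return tokens
    ```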
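
    Second sketch: a toy illustration of mapping the backwards joke task onto forwards generation, by deciding the punchline first and then generating towards it. The `llm` function is a made-up placeholder, not any real API:

    ```python
    # Toy two-pass workaround for the joke problem: pick the ending first,
    # then write forwards towards it. `llm` is a placeholder, not a real API.

    def llm(prompt: str) -> str:
        return "generated text for: " + prompt  # imagine a real model call

    def tell_joke(topic: str) -> str:
        punchline = llm(f"Invent a punchline about {topic}.")
        buildup = llm(f"Write a short setup that leads into: {punchline}")
        return f"{buildup} {punchline}"  # told forwards, planned backwards
    ```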
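
    Third sketch: roughly what I mean by an agent as opposed to a plain LLM. Again a toy: `llm` is a placeholder, and the plain list stands in for a vector database; this is not how any particular product works:

    ```python
    # Toy agent scaffolding: the LLM itself stays stateless, but the code
    # around it feeds output back in and keeps notes between turns.

    def llm(prompt: str) -> str:
        return "draft for: " + prompt  # placeholder for a real model call

    memory: list[str] = []  # stand-in for a vector database

    def agent(task: str) -> str:
        context = "\n".join(memory)
        draft = llm(f"Context:\n{context}\n\nTask: {task}")
        critique = llm(f"Fact-check this draft and list its errors:\n{draft}")
        final = llm(f"Rewrite the draft, fixing these issues:\n{critique}\n\n{draft}")
        memory.append(f"{task} -> {final}")  # the 'remembering' happens out here
        return final
    ```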

    And my personal experience doesn't align with the premise either. The article wants to tell me we're already at AGI. I've fooled around with ChatGPT and had lots of fun with the smaller Llama models at home. But I completely fail to have them do really useful tasks from my everyday life. They handle constrained and narrowed-down tasks like drafting an email or a text, or doing the final touches. Exactly like I'd expect from narrow AI. And I always need to intervene and point them in the right direction. It's my intelligence, and me guiding ChatGPT, that makes the result usable. And still it often gets facts wrong while wording them in a way that sounds good. I sometimes see people use summary bots here, or use an LLM to summarize a paper for a Lemmy post. More often than not, the result is riddled with inaccuracies and false information, like someone who didn't understand the paper but had to hand in something for their assignment. That's why I don't understand the conclusion of the article. I don't see AGI around me.

    I really don't like confusion being spread about AI. I think it is going to have a large impact on our lives, but people need to know the facts. Currently some people fear for their jobs, some are afraid of an impending doom... the robot apocalypse. Other people hype it up enormously, and investors eagerly throw hundreds of millions of dollars at anything that has 'AI' in its name. And yet other people aren't aware of the limitations, and of the false information they spread by using it as a tool. I don't think this is healthy.

    To end on a positive note: current LLMs are very useful and I'm glad we have them. I can make them do useful stuff, but I need to constrain them and have them work on a well-defined, specific task. Exactly what I'd expect from narrow AI. Emergent abilities are a thing: an LLM isn't just text autocomplete; there are concepts and models of real-world facts inside. I think researchers will tackle issues like the 'hallucinations' and make AI way smarter and more useful. Some people predict AGI will be in reach within the next decade or so.


  • I found something by Samsung, called the Galaxy SmartTag. They don't seem to consider privacy at all, and they only work with Samsung smartphones, so they won't talk to my trusty de-googled GrapheneOS Pixel phone. But people expect Google to release their own alternative soon; I don't think we know any details about it yet. I was just thinking... people seem to like the AirTags, and they sell well. I could also find some use-cases for a (proper) Find My network. Guess we're going to find out soon. In theory there is nothing preventing the location data from being end-to-end encrypted. But I doubt Google will choose privacy as their unique selling point.

  • I have a related question: does anyone know an alternative to the AirTags? I suppose we need to wait for the Google alternative they're already working on? But I somehow expect Google services will be needed on the device to make it work.

    I know there are some Bluetooth-only ones out there. But they only work in close proximity. And I once tried one and it needed a new coin cell battery every 6 weeks...

  • (Wow. That really is a bad article. And even though the author managed to ramble on for quite a few pages, they somehow completely failed to address the interesting and well-discussed arguments.)

    [Edit: I disagree -strongly- with the article]

    We've discussed this in June 2022, after the Google engineer Blake Lemoine claimed his company's artificial intelligence chatbot LaMDA was a self-aware person. We've discussed both intelligence and consciousness.

    And my -personal- impression is: if you use ChatGPT for the first time, it blows you away. It's been a 'living in the future' moment for me, and I see how you'd write an excited article about it. But once you've used it for a few days, you'll see that every 6th-grade teacher can tell whether homework assignments were done by a sentient being or by an LLM. And ChatGPT isn't really useful for that many tasks. Drafting things, coming up with creative ideas or giving something the final touch, yes. But it's definitely limited, and not something 'general'. I'd say it does some of my tasks so badly, it's going to be years before we can talk about 'general' intelligence.

  • How about a refurbished Lenovo Yoga?

    https://www.afbshop.de/notebooks/alle-notebooks/33755/lenovo-thinkpad-x380-yoga-13-3-zoll-core-i5-8350u-at-1-7-ghz-8gb-ram-250gb-ssd-fhd-1920x1080-touch-webcam-win10pro

    Runs Linux well, comes with a pen, and you can fold it up as a tablet or use the keyboard to take notes. The battery might not last as long as an Android tablet's, but it's way more powerful and fits perfectly in your budget if bought second-hand / refurbished.

    Edit: I bought a Yoga 460 back then and used it during lectures, but soon realized typing notes was faster than a pen, except for calculus and linear algebra; you should learn LaTeX or use a pen for those. Otherwise, I'd recommend a Linux convertible over an Android device.

    I think on Linux, rnote or xournal++ work well for taking notes or annotating PDFs.

  • I think I would like to see Amazon, Google, Netflix etc. pay for the free and open source projects they use to make money and resell in their AWS and database offerings.

    I -personally- don't miss a store for end users. Market share for Linux on the desktop is slim anyway; that's not where you'd earn a considerable amount of money.

    And I like things like the value-for-value model. So maybe instead include donation links in the package managers and in the metadata used by gnome-software etc. (I think it's called PackageKit.)