Posts: 21 · Comments: 1,048 · Joined: 6 mo. ago

  • The kind of basic plug shown in the article is really poor for anything other than concrete. In my 15 years of professionally hanging things on walls, floors, and ceilings, I always recommend the Fischer DuoPower as the best all-around general-purpose anchor. As long as you’re using the correct length, these won’t pull out without taking a big chunk of the wall with them.

    Now, if you want the absolute strongest anchor out there - especially for mounting something like a TV stand to drywall - I’d go with a so-called molly anchor. Just keep in mind, these are permanent. Once it’s in, it stays in.

  • I’d start by looking up the ones you recognize, even if you don’t know their names yet. It’s hard to memorize plants you don’t even remember seeing, but if you research the ones you commonly stumble upon - ones you can point to and start attaching names and info to - then the rest builds up organically over time. A book with pictures, written by a local, would be a good start. The same goes for birds as well.

  • I get the feeling that many Americans are under the illusion that most Europeans live in big cities like Paris or Amsterdam. And while it may be true that people in those cities have different shopping habits compared to Americans in similarly sized cities, that doesn’t reflect the reality for all - or even most - Europeans. For me and most of my friends, going to the supermarket once or twice a week by car has always been the norm.

  • LLMs are AI. While they’re not generally intelligent, they still fall under the umbrella of artificial intelligence. AGI (Artificial General Intelligence) is a subset of AI. Sentience, on the other hand, has nothing to do with it. It’s entirely conceivable that even an AGI system could lack any form of subjective experience while still outperforming humans on most - if not all - cognitive tasks.

  • Images generated by AI are only “fake” if you falsely present them as actual photographs or as digital art made by a human. There’s nothing inherently fake about AI-generated images as long as they’re correctly labeled.

    Also, suggesting that all information provided by generative AI is false is just as bizarre. It makes plenty of errors and shouldn’t be blindly trusted, but the majority of its answers are factually correct.

    This kind of ideological, blanket hatred toward generative AI isn’t productive. It’s a tool - nothing more, nothing less - and it should be treated as such. Not as what you hoped it would be or what marketing hype wants you to believe it is or will become.

  • I just ran the numbers for the first time ever, and it adds up to 34 months of living expenses - which I realize is a pretty privileged place to be. However, I’m by no means rich; I just live well below my means and invest all my savings.
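
    For anyone who wants to run the same numbers, it’s just total savings divided by average monthly spending. A rough Python sketch (the figures below are made-up placeholders, not my actual numbers):

        # How many months of expenses could my savings cover?
        savings = 51_000          # total accessible savings (hypothetical figure)
        monthly_expenses = 1_500  # average spending per month (hypothetical figure)

        runway_months = savings / monthly_expenses
        print(f"Runway: {runway_months:.0f} months")   # -> Runway: 34 months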

  • If I agree with the moral logic behind it, then yes - it’ll upset me even if I’m not personally affected. If I hear someone shouting slurs at a black person, I’ll obviously take issue with it, despite not being black myself.

    On the other hand, if I hear someone say, for example, “this thing is retarded,” then even if society broadly considers that offensive, I still wouldn’t personally have a problem with it - because I don’t agree with the reasoning behind that judgment.

  • I wouldn’t say that staying calm makes you smarter - rather, getting caught up in emotions makes you dumber. When you’re calm, you have access to your highest reasoning abilities, whereas when you’re emotionally charged, those capabilities are diminished. That’s one of the main reasons I spend so much time criticizing reactivity and hostility online, even when it’s directed at causes I also oppose. It doesn’t matter whether the anger is justified or not - you quite literally can’t think straight when you’re angry. And we need you all to think straight.

  • There’s no such thing as “actual AI.” AI is just a broad term that encompasses all artificial intelligence systems. A chess engine, ChatGPT, and HAL 9000 are all examples of AI - despite being fundamentally different. A chess engine is a narrow AI, ChatGPT is a large language model, and HAL 9000 would qualify as AGI.

    It could be argued that AGI is inevitable - assuming general intelligence isn’t substrate-dependent (meaning it doesn’t require a biological brain) and that we don’t destroy ourselves before we get there. But the truth is, nobody knows how difficult it is to create AGI, or whether we’re anywhere close. There’s a lot of hype around generative AI right now because it remotely resembles what AGI might look like - but that doesn’t guarantee it’s taking us any closer. It could be a stepping stone - or a total dead end.

    So what I hear you asking is: “Can’t we just use task-specific narrow AI instead of creating AGI?” And yes, we could - but we’re never going to stop improving these systems. And every step of progress brings us closer to AGI, whether that’s the goal or not. The only things that might stop us are hitting a fundamental wall (like substrate dependence) or wiping ourselves out.

    There’s also the economic incentive. AGI would be the ultimate wealth generator. All the incentives point toward building it. It’s a winner-takes-all scenario: if you’re the first to create a true AGI, your competition will likely never catch up - because from that point on, the AGI can improve itself. And then the improved version can further improve itself, and so on. That’s how you get to the singularity: an intelligence explosion that leads to Artificial Superintelligence (ASI) - a level of intelligence far beyond human comprehension.
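
    To make the “improves itself, then the improved version improves itself again” loop concrete, here’s a toy Python sketch. The growth rule and every number in it are invented purely for illustration - nobody knows what the real dynamics of recursive self-improvement would look like:

        # Toy model of an "intelligence explosion" (purely illustrative).
        # Assumption: each generation's gain is proportional to the square of its
        # current capability, i.e. smarter systems are disproportionately better
        # at making themselves smarter. Nothing here is a prediction.
        capability = 1.0   # arbitrary units; 1.0 roughly "human-engineer level"
        rate = 0.1         # invented improvement coefficient per generation

        for generation in range(1, 19):
            capability += rate * capability ** 2   # the self-improvement step
            print(f"gen {generation:2d}: capability {capability:12.4g}")

    Growth stays modest for about a dozen generations, then runs away around generation 14 or 15 - the “slow, slow, then sudden” shape the intelligence-explosion argument describes. Shrink the made-up rate and the curve stays flat for longer, but the runaway still arrives eventually; only the timing shifts.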