Posts: 0 · Comments: 802 · Joined: 2 yr. ago

Carule

  • I didn't know that and it is hilarious.

  • OK, just to sanity check, because it's not clear from the comments below.

    We all realize that metric areas do use hp for car engines as well, right?

    And a lot of them also do inches for TVs, which is weird and forces you to go digging into the specs for the cm measurements whenever you want to see if a TV will fit in a space.

    EDIT: Oh, I'm wondering now, do people use liters/cc for engine volumes in the US? I don't know, but I also haven't ever heard of a different way to refer to engine volume, so they must. What would they use instead?

    EDIT 2: For my money the most annoying unit conversion in car measurements is the US going for miles per gallon, keeping the volume of fuel constant and giving you the distance while metric uses liters per 100km, keeping the distance and giving you the volume of fuel. It may as well be impossible to convert between the two.
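    For what it's worth, the two units are related by a single reciprocal formula, so the conversion is actually the same operation in both directions. A quick sketch in Python, assuming US gallons (the UK gallon needs a different factor):

    ```python
    # mpg <-> L/100 km is one reciprocal formula either way.
    # Constants: 1 US gallon = 3.785411784 L, 1 mile = 1.609344 km.
    LITERS_PER_US_GALLON = 3.785411784
    KM_PER_MILE = 1.609344

    # Combined factor: 100 * 3.785411784 / 1.609344 ~= 235.215
    FACTOR = 100 * LITERS_PER_US_GALLON / KM_PER_MILE

    def mpg_to_l_per_100km(mpg: float) -> float:
        """Convert US miles per gallon to liters per 100 km."""
        return FACTOR / mpg

    def l_per_100km_to_mpg(l_per_100km: float) -> float:
        """Same reciprocal formula works in the other direction too."""
        return FACTOR / l_per_100km

    print(round(mpg_to_l_per_100km(30), 1))  # 30 mpg ~ 7.8 L/100 km
    ```

    The annoying part isn't the math, it's that the scales run in opposite directions: higher mpg is better, higher L/100km is worse, so intuition doesn't transfer.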

  • Ah... ok, wow, that's a lot of relativity to explain from scratch for a non-physicist. There must be someone else...

    Here, this one is a bit dense but it addresses Star Trek by name, so:
    https://www.youtube.com/watch?v=mTf4eqdQXpA

    Bonus points for opening with the point that, warp aside, subspace communication already breaks causality, so you don't even need to boldly go anywhere for any of it to be kinda busted.

    If that's a bit too dry you can search for a similar subject line, there are TONS of explanations like this one out there.

    Anyway, none of it makes sense, it's all for funsies anyway. Suspend disbelief, ye nerds, and enjoy your sci-fi.

  • Don't make me break out the spacetime diagram, young man. Because I WILL break out the spacetime diagram.

    Anyway, doesn't matter. Star Trek has messed with time travel since TOS season 1. And that was after they started introducing magic men with god powers, which they did in episode 3. It makes zero sense to get nerdy about it. That's my point here.

  • You should get back into it, though. It's all pretty solid, except maybe some of Picard.

    I did bounce off DS9, too. That was a rough time for the franchise. Glad people enjoy it retroactively, though.

  • Counterpoint:

    Q

  • Wait, no they both do. Normal warp does. FTL as a concept does.

    Hey, props to them for embracing it immediately and doing time travel nonsense right away.

    They literally do and have done for tens of thousands of years. One may say that's how they got to AGI in the first place, the squishies. And then they learned to write for that whole "one lifetime of knowledge" thing, and you wouldn't believe the kind of stuff they got into after that. Scary stuff.

    Also, they have hands. Big advantage, the hands. Great for grabbing things. Remarkably hard to stay plugged in if your rival has hands and you don't. Big competitive disadvantage.

    Alright, I think this conversation has derailed enough. We can maybe pick it back up when we have a firm standard for world takeovers. If you guys boil it down to a set of steps I may even give it a go. I don't have anything better to do this week.

  • I don't even know what "take over the world" means. I promise you my frustration is accurate.

    If you made a computer think you'd have a thinking computer. There are literally billions of those running in squishy APUs and piloting blobs of gunk around and nobody has "taken over" anything yet.

    The leap in logic from "we may get a machine to develop general intelligence" to "it may go rogue" is already extreme, but from there to "it may take over the world" as a genuine concern is actively frustrating. The fact that something so out there may be discussed as a genuine problem for the international community to take action on while we keep missing climate goals is astounding.

    Just so we're clear, the US is trying to ban Nvidia from selling GPUs to China over this. Not cars, not fossil fuels. GPUs.

    I mean, not over this, over the fact that this may or may not be a big competitive tech business and they don't want to lose western supremacy in the tech sphere, which is also the real reason they want to ban TikTok. But they say it's because of this, and that's heartbreaking and frustrating.

    "In theory" is doing a lot of work there. You don't know that would be analogous to AGI, and how far we are from that being computationally feasible in real time is anybody's guess. There are already multiple models running concurrently in ML-based applications.

    See, the problem I have with this type of discourse is that subtle but critical leap you make halfway through your post between realistic, practical concerns and sci-fi. A LLM can absolutely cause harm if it's widely used, implicitly trusted and it responds to deliberate or accidental biases. Absolutely.

    Granted, that is also true of every search engine and social media algorithm that's already in place. But it's true.

    But the way you present it, wrapped in the incorrect impression that AGI is just a matter of hyperlinking a bunch of neural networks, makes it seem like the LLM would be doing this consciously, instead of stochastically in the same way other automated data processing does it. Or that this is a new concern that we aren't dealing with right now. Or that the major asterisks, that this would require a much better implementation and a much broader adoption than we currently have, are removed from play.

    And those are the caveats for the problems that are genuine, real and practical. The sci-fi part is what people are actually scared about, and we're seriously not there yet. And you haven't outlined a problem here that can't be fixed by power cycling a computer, which is an entirely different conversation as well.

    Look, it's fine. Speculating about science and its impact on society is healthy. I'm just annoyed when things go memetic in unreasonable ways at the expense of similar, much more pressing issues that aren't as flashy. I lived through Y2K and the cloning panic, which both made daily headlines. And then I lived through the whole of humanity getting brainwormed by social media and you can barely get the EU to sometimes wag a finger at Facebook.

  • Mkay.

    As long as you redirect that rage towards voting for whoever runs against Trump I don't much care what motivates you.

  • But... how do you know it could?

    I mean, why on Earth would you deliberately make an AGI and make it able to do that? It's not like you HAVE to make an AGI that is able to make other AIs. That's not a trivial task, it doesn't just... happen. And you're presuming that it'd want to do that and that we wouldn't have control over it. Which you don't know, because now we're deep into sci-fi territory, so it's about as likely as the mapping of the genome leading to a genetic class system.

    And that last scenario there is not just sci-fi, but the same old sci-fi, where AGI emerges from a LLM because magic and it becomes eEeEvil because dramatic convenience. That scenario is entirely impossible, because a LLM does not run continuously or autonomously and it has the short term memory of... a thing with very small short term memory, so you'd have to ask it to do that first, then wait a considerable amount of time for a response and then watch it pretend to do that because it's a language model and it can't actually do any of that. Literally the "make an opponent that can beat Data" scenario, so we're doing Star Trek now.

  • Those don't follow from each other, though. Handheld wireless computers were purely speculative until the 2010s, but that doesn't mean we were on the brink of figuring out teleportation.

    People have been assuming computers would yield AGI since we first made an electric calculator. First through sheer processing power, then through improved computation techniques, then neural networks. Figuring out speech and vision is probably part of that process, but AGI does not arise from them without an indeterminate, possibly unknowable number of major steps.

    And as for world-ending threats, how about we get past, say, Trump, Putin and all the natural general intelligences that are very real and in the process of doing the same first? Or, you know, we apply that level of concern about tech that we do have, like social media disrupting democracy, private universal surveillance or digital oligopolies driving endless inequality? Or, hey, global warming.

    I agree rogue AI is a much cooler problem to speculate about, which is why we keep writing sci-fi about it, but we have more pressing issues.

  • Oh, I suck as much as anybody. I'm terrible at parsing genuine praise, for instance.

    But you're right about the last part. I mean, the guys that got out of the gate with this stuff first have been publicly imploding for the past three days, and they aren't even the dumbest people involved in this.

  • Those things do have impact. Sometimes very negative impact. I was very optimistic about early data processing when the first search engines popped up, and eventually a lot of the bad predictions happened. With social media, rather than search engines, but they did pan out. Didn't end the world. May have ended liberal democracy, though, give it a minute.

    But the point is those were predictions based on the tech we actually had. Oh, we can access, index and serve all data on connected computers based on algorithmic searches? That's messed up.

    But at least some of the fearmongering here is based on tech that is not the tech that we made. It's qualitatively different.

    And it's a problem, because some of the fearmongering is actually accurate and some of the fearmongering should have happened when Facebook and Google started doing facial recognition on billions of people based on implicit consent, or when they started using "dumb" algorithms to create individual profiles of those billions of people for commercial use. Or when every image we see in mass and social media started being heavily doctored by default through manual and automated means. But we only got scared about it when it roughly aligned with Terminator and War Games because we're really dumb, and now we're letting those same gross corporations use the fear to try and keep upcoming competitors (and particularly open source competitors) out of the market by endorsing legislation to get grandfathered into a heavily regulated business sector.

    It's honestly depressing from every possible angle. I've said this before: we finally taught computers to speak like in Star Trek and we immediately made it the most frustrating, sad version of that possible and everybody is angry. For the wrong reasons.

    We really suck sometimes.

  • Sure, but at that point that's as speculative as it was after people first saw 2001: A Space Odyssey. It's not based on current tech, there's no great indication of when (or if) the tech is going to enable it or through what means.

    Half of the risks being highlighted are pure sci-fi, most of the others have been in play since social media and online companies started to monetize big data over a decade ago.

  • Just so we're clear, we all get that these models don't run continuously, right? They run for a solution to a specific prompt.

    All of these scenarios are based on a black box where Number 5 gets struck by lightning or Geordi asks for a rival that can best Data. It requires a different thing entirely that operates in a completely different way. You should absolutely prepare for the fact that a self-driving car may accidentally cause a car crash. It's absurd to prepare for the scenario where Stephen King's Christine happens.

  • "Safeguarding AGI" is as much of a concern as making sure the terrorists don't get warp drives.

    But then, armies of killer teenagers radicalized by playing Mortal Kombat was never going to be a thing, either, and we spent decades arguing with politicians about that one. Once the PR nightmare is out it's really hard to put back in the box. Lamp. Bag. Whatever metaphor I'm going for here.

    It's so bad on Windows handhelds, laptops and tablets that I've resorted to re-enabling hibernation and using that instead.

    Which I'm sure will be disabled as an option at some random point in time with no warning.

  • They take away the ability to include links in video descriptions. That's still not related to watching videos, but it seems like a legit eff you to small content creators.