Posts: 1 · Comments: 394 · Joined: 2 yr. ago

  • But speeding tickets are the most common type of infraction, and I think that's probably a good example of a systemic issue.

    There are areas in this country where the speed limit is set artificially low, specifically so that police can always issue tickets capriciously.

    The Atlanta beltway, for example, would literally grind the city to a halt if everyone adhered to the speed limit signs, and it's actively dangerous to attempt to do so as an individual.

    That's not a people issue, it's a systems issue.

  • Like, I don't really care one way or another, but are we not on Omegle's side on this one?

    Like, yes, Omegle was a cesspit. We all knew that. It was basically 4chan with video chat. But, like, this case seems like a parenting failure more than anything, right?

    I don't know that I see why this is Omegle's fault really, and it's kinda dumb they had to shut down over it.

  • How is this done in other countries that's better? Like, I would think that assigning children to particular schools based on geography is pretty universal. What makes this a particularly American failing?

    It does sound like this district is managed by jerks, but that doesn't make this some sort of systemic, American issue.

  • You missed the point of my "can be wrong" bit. The focus was on the final clause of "and recognize that it was wrong".

    But I'm kinda confused by your last post. You say that only computer scientists are giving it feedback on its "correctness" and therefore it can't truly be conscious, but that's trivially untrue and clearly irrelevant.

    First, feedback on correctness can be driven by end users. Anyone can tell ChatGPT "I don't like the way you did that," and it would be trivially easy to add that to a feedback loop that influences the model over time (a minimal sketch is at the end of this comment).

    Second, find me a person whose only feedback loop was internal. People are told "no, that's wrong" or "you've messed that up" all the time. That's what makes us grow as people. That is arguably the core underpinning of what makes something intelligent: the ability to take ideas from other people (computer scientists or no) and have them influence the way you think about things.

    Like, it seems like you think that the "consciousness program" you describe would count as an intelligence, but then say it doesn't because it's only getting its external information from computer scientists, which seems like a distinction without a difference.
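
    A minimal sketch of the loop I have in mind, assuming a made-up `record_feedback` store and a hypothetical `base_model.fine_tune` method (none of this is a real API, just the shape of the idea):

    ```python
    # Sketch: collect end-user critiques and periodically fold them back
    # into training. `base_model.fine_tune` is hypothetical, not a real API.
    feedback_log = []

    def record_feedback(prompt: str, response: str, critique: str) -> None:
        """Store a (prompt, response, user critique) triple for later training."""
        feedback_log.append({"prompt": prompt, "response": response,
                             "critique": critique})

    def periodic_update(base_model) -> None:
        """Once enough critiques accumulate, fold them into the model."""
        if len(feedback_log) >= 1000:  # arbitrary batch threshold
            base_model.fine_tune(feedback_log)  # hypothetical method
            feedback_log.clear()
    ```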

  • I think literally all those things are scenarios that a driving AI would be able to measure and heuristically say, "in scenarios like this that were in my training set, this is what often followed." Like, do you think the training set has no instances of people pulling out of blind spots illegally? Of course that's a scenario the model would have been trained on.

    And secondarily, those are all scenarios that "real intelligences" fail on very very regularly, so saying AI isn't a real intelligence because it might fail in those scenarios doesn't logically follow.

    But I think what you are trying to argue is that AI drivers aren't as good as an "actual intelligence" driver, which is immaterial to the point I'm making, and is ultimately super quantifiable. As the data comes in, we will know in a very objective way whether an AI driver is safer on average than a human. That's quantifiable. But regardless of the answer, it has no bearing on whether the AI is in fact "intelligent" or not. Blind people are intelligent, but I don't want a blind person driving me around either.
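
    To show just how quantifiable that is, here's a toy comparison of crash rates per million miles. Every figure is a placeholder, not real fleet data:

    ```python
    # Toy comparison of crash rates per million miles driven.
    # All figures below are made up for illustration.
    ai_crashes, ai_miles = 120, 300e6          # hypothetical AI fleet
    human_crashes, human_miles = 2000, 4000e6  # hypothetical human baseline

    ai_rate = ai_crashes / (ai_miles / 1e6)
    human_rate = human_crashes / (human_miles / 1e6)

    print(f"AI:    {ai_rate:.2f} crashes per million miles")
    print(f"Human: {human_rate:.2f} crashes per million miles")
    print("AI safer on average" if ai_rate < human_rate else "humans safer on average")
    ```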

    The previous guy and I agreed that you could trivially write a wrapper around it that gives it an internal monologue and a feedback loop (roughly sketched at the end of this comment). So that limitation is artificial and easy to overcome, and it has been done in a number of different studies.

    And it's also trivially easy to have the results of its actions go into that feedback loop and influence its weights and models.

    And is having wants and desires necessary to be an "intelligence"? That's getting into the philosophy side of the house, but I would argue that's superfluous.
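
    For the curious, the wrapper is roughly this shape; `llm` and `take_action` are stand-ins for a model call and an effector, not any particular study's implementation:

    ```python
    # Sketch of an "internal monologue" wrapper around a text model.
    def llm(prompt: str) -> str:
        # Stand-in: swap in a real model call. Returns canned text so the
        # sketch runs end to end.
        return f"(canned model output for: {prompt[:40]!r})"

    def take_action(action: str) -> str:
        return f"(pretend we executed: {action})"  # stub effector

    def agent_step(goal: str, monologue: list[str]) -> str:
        """One think-act-observe cycle; the observation feeds the next cycle."""
        context = "\n".join(monologue[-10:])  # recent internal monologue
        thought = llm(f"Goal: {goal}\nPrior thoughts:\n{context}\nThink step by step:")
        monologue.append(thought)
        action = llm(f"Given this reasoning:\n{thought}\nWhat is the next action?")
        result = take_action(action)
        monologue.append(f"Result: {result}")  # the feedback loop
        return result
    ```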

  • Okay, two things.

    First, that's just not true. Current driving models track all moving objects around them and what they're doing, including pedestrians and objects like balls. And that counts towards "things happening in the moment". Everything in sensor range is stuff happening "in the moment".

    Second, and more philosophically, humans also don't know how to react to situations they've never seen before; they just make a best guess based on prior experience. That's, like, arguably the definition of intelligence. The only real difference is that humans are better at it.

  • Skipping over the first two points, which I think we're in agreement on.

    To the last, it sounds like you're saying, "it can't be intelligent because it is wrong sometimes, and doesn't have a way to intrinsically know it was wrong." My argument to that would be, neither do people. When you say something that is incorrect, it requires external input from some other source to alert you to that fact for correction.

    That event could then be added to your "training set", as it were, aiding you in not making the mistake in the future. The same thing can be done with the AI: that one addition to the training set might be "just enough to bridge that final gap" to the right answer.

    Maybe it's slower at changing. Maybe it doesn't make the exact decisions or changes a human would make. But does that mean it's not "intelligent"? The same might be said for a dolphin or an octopus or an orangutan, all of which are widely considered to be intelligent.

  • I don't really get the "what we are calling AI isn't actual AI" take, as it seems to me to presuppose a definition of intelligence.

    Like, yes, ChatGPT and the like are stochastic machines built to generate reasonable-sounding text. We all get that. But can you prove to me that isn't how actual "intelligence" works at its core?

    And you can argue that actual intelligence requires memories or long-running context, but it's trivial to jerry-rig a framework around ChatGPT that does exactly that (and it has been done already a few times; a sketch is at the end of this comment).

    Idk man, I have yet to see one of these videos actually take the time to explain what makes something "intelligent" and why that is the definition of intelligence that they believe is the correct one.

    Whether something is "actually" AI seems much more a question for a philosophy major than a computer science major.
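
    The jerry-rigged memory framework really is about this simple in outline; `chat_model` here is a stand-in for any chat-completion call, and the whole trick is just "persist the history and resend it every turn":

    ```python
    import json
    from pathlib import Path

    HISTORY = Path("chat_history.json")  # long-running context on disk

    def chat_with_memory(user_msg: str, chat_model) -> str:
        """Resend accumulated history each turn so the model 'remembers'."""
        messages = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
        messages.append({"role": "user", "content": user_msg})
        reply = chat_model(messages)  # stand-in for any chat-completion call
        messages.append({"role": "assistant", "content": reply})
        HISTORY.write_text(json.dumps(messages))
        return reply
    ```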

  • That math sounded wrong to me, so I ran it.

    Twelve billion is a 12 with nine zeros after it; 120,000 is a 12 with four zeros. The twelves cancel, the four zeros knock the nine down to five, and you're left with $100,000 per person. Divided by 3 years, that's $33,333 per person per year.

    So, yeah, your math didn't math I'm afraid. Probably still a good bit cheaper than most people's rent in NYC, but still very expensive.
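
    In code, for anyone who wants to rerun it:

    ```python
    total = 12_000_000_000  # $12 billion
    people = 120_000
    years = 3

    per_person = total / people              # 100000.0
    per_person_per_year = per_person / years
    print(f"${per_person_per_year:,.0f} per person per year")  # $33,333
    ```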

  • Fair. That's a clever solution to getting around the problem of needing to duplicate your set up.

    It is a big step up in complexity though, as you now need an IR receiver as well as an IR blaster, some sort of physical button(s) on the device to put it into "learning" mode to detect what signal it needs to duplicate (and to indicate whether it's learning volume up or down), and all the additional development overhead each of those entails (rough sketch at the end of this comment).

    You'd probably see a good jump in the parts cost too, especially since adding more controls and sensors increases the complexity of the enclosure you'd put all this in, meaning you'd probably need some CAD work done as well. Or someone willing to do some precision woodworking.

    All told it's probably about three to five times harder than just knowing the correct IR sequences up front and baking them into the product, so you'd see a commensurate increase in price.
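
    A rough sketch of that learning flow; `read_ir_pulses` and `send_ir_pulses` are hypothetical hardware helpers, not any real library:

    ```python
    # Sketch of the IR "learning mode". The pulse helpers below are
    # hypothetical stand-ins for whatever receiver/blaster driver you use.
    learned: dict[str, list[int]] = {}  # button name -> captured pulse timings

    def learn_button(name: str) -> None:
        """Wait for the user to press the original remote, then store the signal."""
        print(f"Learning '{name}': point the original remote here and press now...")
        learned[name] = read_ir_pulses(timeout_s=10)  # hypothetical receiver call

    def replay(name: str) -> None:
        """Blast a previously captured IR sequence at the entertainment system."""
        send_ir_pulses(learned[name])  # hypothetical blaster call

    # e.g. learn_button("volume_up") once, then replay("volume_up") from then on
    ```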

  • Gotcha. The tricky part with that is gonna be that it's specific to the model of your entertainment system.

    Means that if whoever you have working on it is non-local, they'll need to find a duplicate of your entertainment system to test on to make sure it works, which is obviously not super feasible.

    If a local buddy asked me to build something like that, I had the time, and I charged fair market value for the work, you're probably looking at a couple grand.

    If it was a good buddy and I only charged for parts, it'd probably be only a hundred bucks or so?

    I wouldn't even really consider doing it as a remote job, as getting a copy of your receiver is more trouble than it's worth I think. Depends on the receiver to some degree though I guess.

  • Yeah, I should work on my reading comprehension.

    I deff read the prompt as having a bunch of rechargeable AA style Li-ion batteries, and how to utilize them without having to swap out to a new pair or whatever.

    Deff don't want to do this with a bunch of disparately sized smart batteries providing power over USB. Very different problem.

  • Or, I mean, it could just wire all the positives together and all the negatives together and hook that right into your target device.

    It'd be the same output voltage regardless. A little less internal resistance, and a lower step-down in the later phases, but neither should make a difference to what you're powering (rough numbers below).

    Kinda like how there were those converters for the Game Boy back in the day that let you put C batteries into it. Same principle.
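
    To put rough numbers on that; the cell values are typical-ish placeholders, not measurements:

    ```python
    # Cells in parallel: voltage stays the same, capacities add, and the
    # internal resistance drops to the parallel combination.
    cells = [
        {"voltage": 1.5, "capacity_mah": 2000, "r_internal_ohm": 0.15},
        {"voltage": 1.5, "capacity_mah": 2500, "r_internal_ohm": 0.12},
    ]

    pack_voltage = cells[0]["voltage"]  # same as any single cell
    pack_capacity = sum(c["capacity_mah"] for c in cells)
    pack_resistance = 1 / sum(1 / c["r_internal_ohm"] for c in cells)

    print(f"{pack_voltage} V, {pack_capacity} mAh, {pack_resistance:.3f} ohm")
    ```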