Israel Tried to Prevent Civilian Casualties More Than Any Nation in History
PrinceWith999Enemies @lemmy.world — 2 posts, 606 comments, joined 2 yr. ago
I was gifted the tests multiple times. I didn’t take the test because of my own data privacy concerns.
The thing I am concerned about is not necessarily 23andMe selling the data, but rather the company being sold off and another company coming in and being allowed to do what they want with it. I’ve seen that happen before with other data-collecting companies, and I’m not sure to what extent the policies put in place by the collecting company apply to the new company that buys them and their IP.
I imagine in this case that it would result in a massive class action suit, but for me the risk of having the data made available to, for example, insurance companies who could then deny coverage was just too high, when the main payoff for me would be to find out my family comes from Ireland but that I’m also 5% Jewish.
Wojcicki’s stock carries supervoting privileges, giving her effective control of the company. She said she has never sold a share.
Whoops!
The joke isn’t the primaries, dude. The joke is that they’re complaining about the US candidates being two bad choices, when they have no choices.
You know the joke is that they’re not a democracy, right?
Who were the candidates again? Was it a close election? I wasn’t following so I’m not sure which candidates won the primaries for their parties.
How’s the Chinese election looking? Who are the major candidates and who is in the lead?
Thank you!
I was thinking that VNs were more like “walking simulators,” where most of the action that takes place is pretty scripted but you explore the world and the story being presented by the devs. Games like Life is Strange and Firewatch were my introduction to the genre, and I found that I really enjoyed playing them. Firewatch was my first and, not having read anything about it or the genre, I kept being afraid of dying. It took me a ridiculously long time to figure out it was just telling you a story with some interactive elements. There was also a company that was publishing comics that had audio and (minimal) animation, which I thought was a fantastic innovation. I had some really good horror comics from them, but I don’t know if they got acquired or are still in business.
Anyway, I’m going to look into that one. I do like gothic horror and period work.
I don’t know. I’ve seen things far worse than a vaguely Pride-aligned Treebeard.
This is a great write up - thank you!
I’m not a big fan of anime as an art form or storytelling style, but I’m going to be looking into these titles.
Okay, I think I understand where we disagree. There isn’t a “why” either in biology or in the types of AI I’m talking about. In a more removed sense, a CS team at MIT said “I want this robot to walk. Let’s try letting it learn by sensor feedback” whereas in the biological case we have systems that say “Everyone who can’t walk will die, so use sensor feedback.”
But going further - do you think a gazelle isn’t weighing risks while grazing? Do you think the complex behaviors of an ant colony aren’t weighing risks when deciding to migrate or to send off additional colonies? They’re indistinguishable mathematically - it’s just that one system is learning over evolutionary time and the other, at least in principle, is able to learn within its own lifetime.
Is the goal of reproductive survival not externally imposed? I can’t think of any example of something more externally imposed, in all honesty. I as a computer scientist might want to write a chatbot that can carry on a conversation, but I, as a human, also need to learn how to carry on a conversation. Can we honestly say that the latter is self-directed when all of society is dictating how and why it needs to occur?
Things like risk assessment are already well mathematically characterized. The adaptive processes we write to learn and adapt to these environmental factors are directly analogous to what’s happening in neurons and genes. I’m really just not seeing the distinction.
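The “risk assessment is already well characterized mathematically” point can be sketched with a toy expected-value model (all numbers here are illustrative assumptions of mine, not from any study): a grazing animal “decides” whether to keep feeding by weighing energy gain against predation risk, and nothing about that decision requires introspection, which is why the same math fits animals and machines alike.

```python
# Toy forage-or-flee model. The parameters (gain, probabilities, costs)
# are made-up illustrative values, not empirical data.

def expected_value(gain, p_predator, cost_if_attacked):
    """Expected net payoff of continuing to graze."""
    return gain - p_predator * cost_if_attacked

def should_keep_grazing(gain, p_predator, cost_if_attacked, leave_value=0.0):
    # Keep grazing only while the expected payoff beats the safe alternative
    # (here, leaving is assumed to be worth 0).
    return expected_value(gain, p_predator, cost_if_attacked) > leave_value

# Low perceived risk: keep grazing.
print(should_keep_grazing(gain=5.0, p_predator=0.01, cost_if_attacked=100.0))  # True
# Predator scent detected -- the risk estimate jumps, so leave.
print(should_keep_grazing(gain=5.0, p_predator=0.2, cost_if_attacked=100.0))   # False
```

Whether the probability estimate was tuned by natural selection or by gradient descent doesn’t change the structure of the computation.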
I feel like this person could finally implement an artificial general intelligence, given an infinite amount of time and memory space.
My problem isn’t that he’s a Mad Titan, but that the plot makes Ready Player One look like Les Misérables. It’s basically a concept script you’d expect to see coming out of the writers’ room on 30 Rock, where Tracy Jordan wears a six-armed alien outfit.
We all know GoT died the death it did because they had absolutely no idea how to wrap it up and just wanted to be done with it. The MCU money should have been more than enough to do a proper job of transitioning the storyline, but they felt the need to do something blockbusting with it. I would rather have had a Watchmen-style conclusion where some people move into retirement homes while the next generation comes forward, but their need to go over the top just turned it into a ludicrous script.
I really don’t care that much. I was getting a bit tired of the franchise anyway (although the new GotG was pretty great), but it always kind of sucks when you can tell that the creatives involved just don’t care anymore. Contrast that with something like the final episode of MASH.
Got it. As someone who has developed computational models of complex biological systems, I’d like to know specifically what you believe the differences to be.
I think we’re misaligned on two things. First, I’m not saying doing something quicker than a human can is what comprises “intelligence.” There’s an uncountable number of things that can do some function faster than a human brain, including components of human physiology.
My point is that intelligence as I define it involves adaptation for problem solving on the part of a complex system in a complex environment. The speed isn’t really relevant, although it’s obviously an important factor in artificial intelligence, which has practical and economic incentives.
So I again return to my question of whether we consider a dog or a dolphin to be “intelligent,” or whether only humans are intelligent. If it’s the latter, then we need to be much more specific than I’ve been in my definition.
So what do you call it when a newborn deer learns to walk? Is that “deer learning?”
I’d like to hear more about your idea of a “desired outcome” and how it applies to a single celled organism or a goldfish.
I think the real problem would be ecosystem collapse.
Ecosystems evolve as complex, interdependent systems with nonlinearities. What happens when you kill off 50% of pollinators in a single instant? 50% of plankton? 50% of grasses? The problem with nonlinear systems is that killing off half of A and half of B won’t have a linear effect if the relationship depends on having minimum levels of A. Assume the snap is random, such that we kill off half of all plants and, on top of that, half of all Rhizobium bacteria, which fix nitrogen for many plant species. Now we’re potentially killing off all plants that depend on having a stable population of Rhizobium bacteria, which will have a cascading effect throughout the already devastated ecosystem. It’s all about tipping points and sigmoid curves and such.
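The tipping-point argument can be made concrete with a toy two-species model (every parameter here is an illustrative assumption, not an empirical value): plants persist only if their nitrogen-fixing bacteria stay above a minimum density, so halving both populations can push the system past the threshold and cascade to zero, even though each 50% cut looks survivable in isolation.

```python
# Toy mutualism model with a hard dependency threshold. Populations are
# fractions of carrying capacity; all constants are made up for illustration.

def step(plants, bacteria, threshold=0.6, growth=0.1):
    # Below the threshold the mutualism fails and plants decline sharply;
    # above it, plants grow back toward carrying capacity.
    if bacteria < threshold:
        plants *= 0.5
    else:
        plants = min(1.0, plants * (1 + growth))
    # Bacteria track plant abundance (they live on plant roots).
    bacteria = min(1.0, 0.9 * bacteria + 0.1 * plants)
    return plants, bacteria

def run(plants, bacteria, steps=50):
    for _ in range(steps):
        plants, bacteria = step(plants, bacteria)
    return plants, bacteria

print(run(1.0, 1.0))  # intact system: stays at carrying capacity
print(run(0.5, 0.5))  # both halved at once: collapses toward zero
```

The point isn’t the specific numbers; it’s that with a threshold in the loop, “half” is not half the damage.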
The truth is that it was a completely stupid idea, and it was what finally broke my love of the Marvel franchise. Either you have runaway ecosystem collapses, or the populations will simply return to their original levels to hit their ecological carrying capacities again. Kill off half of termites, and you’ll probably be back to the same level of termites in a decade or less. Even with people (using the word inclusively across all technological species), you’d have a population surge that within less than a century or so would be brought back to carrying capacity. Populations self-regulate via interaction with their ecosystems. You’re either going to end up with 100% extinctions or system recovery to current levels within a very brief period via normal reproduction and evolutionary dynamics.
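The “back to carrying capacity in a decade” side of the dilemma is just logistic growth. A minimal sketch, with parameters that are illustrative guesses rather than data for any real species (for a fast breeder like termites the intrinsic growth rate r would be high, so recovery is quick):

```python
# Discrete logistic-growth model: halve a population and count the years
# until it is back to 99% of carrying capacity K. Parameters are made up.

def years_to_recover(n0, K=1.0, r=0.8, target=0.99):
    """Years until the population returns to target * K."""
    n, years = n0, 0
    while n < target * K:
        n = n + r * n * (1 - n / K)  # logistic growth step
        years += 1
    return years

print(years_to_recover(0.5))  # halved population -> 4 years with these numbers
```

Slower breeders just stretch the timescale; the equilibrium they return to is the same.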
It was a massive effort, undertaken by an immortal and massively intelligent being, that is inherently flawed because the Marvel writers apparently never took Biology 101-102. I’m not saying it was GoT season 8 levels of bad, but after watching those last couple of movies I not only never rewatched them, but I checked out of the MCU pretty much entirely after having rewatched the previous movies multiple times each.
I’d like to offer a different perspective. I’m a greybeard who remembers the AI Winter, when the term had so overpromised and underdelivered (think expert systems and some of the work of Minsky) that using it was a guarantee your project would not be funded. That’s when terms like “machine learning” and “intelligent systems” started to come into fashion.
The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.
What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.
And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.
My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus. Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. IIRC, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.
I know that technology continues to improve, especially in driver-assist modes. However, previous iterations also tried to make it easier by doing things like having traditional steering at one speed and four-wheel steering at much lower speeds. None of those experiments were successful as commercial products.
I do agree that the more the car is doing it for you, the more realistic it is. It’s just that my car can already park itself with two-wheel steering, and as much as I like automated everything and am cost-neutral on most things, I don’t see four-wheel steering bringing enough to the table to be worth the additional manufacturing and maintenance complexity.
I’m more than happy to be proven wrong, and maybe they did it right this time. But at this point I can really only see it in specialized applications - forklifts, aircraft maintenance vehicles, that kind of thing.
YBI still.
Transporting the container to the medical bay or science lab would permit the use of force fields whose emitters can be highly focused, permitting containment of the container in question without interfering with your lunch or those of your crewmates. I’m not sure what emitter configurations are available in your mess hall, but the labs clearly are able to handle such and do so regularly.
We also have learned through unfortunate and perhaps overly-repeated experiences to not make assumptions about unknown cultures. I can’t even recall the number of captain’s logs I’ve read where little blinking lights or some weird rock thing or glowing space object turned out to be intelligent.
I think you are required to document the incident and report it to your commanding officer.
Wow. Just wow. I have no words for this.