FR
Posts 1 · Comments 681 · Joined 2 yr. ago

  • AI is also going to run into a wall because it needs continual updates with more human-made data, but the supply of all that is going to dry up once the humans who create new content have been driven out of business.

    It's almost like AIs have been developed and promoted by people who have no ability to think about anything but their profits for the next 12 months.

  • I honestly don't get why so many people are so reckless and impatient on the roads. I've seen some people being really fucking stupid around cyclists and motorcyclists. One incident haunts me, because I know someone would have been severely injured, maybe killed, if I hadn't been quick enough to get out of the way of an impatient person overtaking in a stupid place.

    And it's just like... why? Just leave home a few minutes earlier!

  • I did not know the exact wording of this guidance, but this is basically the strategy I use. I've always figured that because I prepare for my journeys, I am never in such a rush that I need to put someone else's life at risk in order to pass them quicker. It's not like it's going to make a difference to my day if I arrive at my destination 2 minutes later, but it'll make a huge difference to someone else's day if I rush past a cyclist when it's not safe.

  • This kind of thing is precisely why there's a solid argument for the system some companies are starting to use: transparent, public (within the office) lists of everyone's pay, with set pay bands based on job role. I read an article about a company that did this, and apparently it improved staff retention and job satisfaction, because not only was everybody paid fairly based on the job they did, but they all knew they were paid fairly, had a means of challenging it if their pay didn't reflect the work they were actually doing, and knew what was expected of them to get a promotion and pay rise. It meant quieter, less assertive people weren't punished, because pay was based not on who could best advocate for themselves, but on how long they'd worked there and what measurable work they'd done.

  • Not even then. I think the thing that's easy to forget about shareholders is that they're not doing this because they're evil and get off on watching people suffer. They're doing it because their own personal inadequacies are so vast that the only way they can cope with life is by trying to fill that enormous emotional hole with money. As a result, even when every other person on the planet has been crushed and ground into paste, and just one person with this mindset finally owns everything... it still won't be enough for them. They will still be left with that unfillable emotional hole. They will still be empty inside.

  • I've definitely thought about modding Freelancer, but haven't quite found the right ones yet. I tried Discovery (I think it was), and felt that the changes to the enemy AI and equipment (such as constantly using shield batteries and nanobots) just made gameplay more frustrating than enjoyable, because it made every single battle challenging - no more just chilling out while hauling random stuff through trade lanes. I'd really love a mod that adds new systems, planets, locations, ships, etc without dramatically changing the gameplay to be exclusively about the combat.

  • Agreed! I think a lot of games benefit from trying to do one thing really well, rather than multiple things badly, and Freelancer is unapologetic about focusing on doing the in-ship stuff well. Games that try to do both the in-ship and not-in-ship elements end up either with both being done badly, or one just feeling like it serves little purpose in the game.

  • I still have a soft spot for Freelancer, despite all the years that have gone by (and, aside from some minor UI issues, it still plays perfectly on a modern PC), and it still looks remarkably nice for its age, too. The story is pretty linear, and the characters aren't hugely memorable (despite some voice acting from George Takei, John Rhys-Davies, and Jennifer Hale), but it's just fun to play. It can be challenging if you want to venture into areas less travelled, but because progress through the game is largely dependent on the money you earn (in-game), if you just want a chill evening, you can just trade goods.

    And like... this is a game I've been playing on and off for 20 years, and occasionally I still find something new. I played it a couple of months ago, committing to docking with every planet and station... and discovered a new trade route that was both shorter and more profitable than the one I had been using. It probably only cut 10 minutes off my three-stage trade run around the entire map, but it was still kind of exciting to go "oooh, I never realised this was an option!" All because I visited a station I don't usually visit.

  • Yeah, I think you could be right there, actually. My instinct on this from the start has been that it would prevent the grieving process from completing properly. There's a thing called the Gestalt cycle of experience, which describes the normal, natural mechanism by which a person goes through a new experience, whether it's good or bad, and a lot of unhealthy behaviour patterns stem from a part of that cycle being interrupted. In the most basic terms, you need to go through the cycle for everything that happens in your life, reaching closure so that you're ready for the next experience to begin, and when that doesn't happen properly, it creates unhealthy patterns that influence everything that happens after that.

    Now I suppose, theoretically, there's a possibility that being able to talk to an AI replica of a loved one might give someone a chance to say things they couldn't say before the person died, which could aid in gaining closure... but we already have methods for doing that, like talking to a photo of them or to their grave, or writing them a letter, etc. Because the AI creates the sense of the person still being "there", it seems more likely to prevent closure - because that concrete ending is blurred.

    Also, your username seems really fitting for this conversation. :)

  • I absolutely, 100% agree with you. Everything I have seen about the development of AI so far has suggested that the vast majority of its uses are grotesque. The few edge cases where it is useful and helpful don't outweigh the massive harm it's doing.

  • Given the husband is likely going to die in a few weeks, and the wife is likely already grieving for the man she is shortly going to lose, I think that still places both of them into the "vulnerable" category, and the owner of this technology approached them while they were in this vulnerable state. So yes, I have concerns, and the fact that the owner is allegedly a friend of the family (which just means they were the first vulnerable couple he had easy access to, in order to experiment on) doesn't change the fact that there are valid concerns about the exploitation of grief.

    With the way AI techbros have been behaving so far, I'm not willing to give any of them the benefit of the doubt about claims of wanting to help rather than make money - for instance, by using a vulnerable couple to experiment on while building a "proof of concept" that can then be used to sell this to other vulnerable people.

  • I also suspect, based on the accuracy of the AIs we have seen so far, that such an AI's interpretation of the deceased's personality would not be very accurate: it would likely hallucinate memories or facts about the person, or make them "say" things they never would have said when they were alive. At best it would be very Uncanny Valley, and at worst it would be very, very upsetting for the bereaved person.

  • "Think of how many family recipes could be preserved. Think of the stories that you can be retold in 10 years. Think of the little things that you'd easily forget as time passes."

    An AI isn't going to magically know these things, because these aren't AIs based on brain scans preserving the person's entire mind and memories. They can only learn from the data they're given. And fortunately, there's a much cheaper way for someone to preserve family recipes and other memories that their loved ones would like to hold onto: they could write them down, or record a video. No AI needed.

  • Sure, you should be free to make one. But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!), there are valid questions about whether that will cause them harm rather than help - and grieving people do not always make the most rational decisions. They can very easily be convinced that interacting with AI-you would be good for them, even when it actually prolongs their grief and makes them feel worse. Grieving people are vulnerable, and I don't think AI companies should be free to prey on the vulnerable, which is a very, very realistic outcome of this technology. Because that is what companies do.

    So I think you need to ask yourself not whether you should have the right to make an AI version of yourself for those who survive your death... but whether you're comfortable with the very likely outcome that an abusive company will use their memories of you to exploit their grief and prolong their suffering. Do you want to do that to people you care about?

  • Just gonna say that I agree with you on this. Humans have evolved over millions of years to emotionally respond to their environment. There's certainly evidence that many of the mental health problems we see today, particularly at the scale we see them, are in part due to the fact that we evolved to live in a very different way from our present lifestyles. And that's not about living in cities rather than caves, but more to do with the amount of work we do each day, the availability and accessibility of essential resources, the sense of community and connectedness with small social groups, and so on.

    We know that death has been a constant of our existence for as long as life has existed, so it logically follows that dealing with death and grief is something we've evolved to do. Namely, we evolved to grieve for a member of our "tribe", and then move on. We can't let go immediately, because we need to be able to maintain relationships across brief separations, but holding on forever to a relationship that can never be continued would make any creature unable to focus on the needs of the present and future.

    AI simulacra of the deceased give the illusion of maintaining that relationship. It is entirely possible that this will artificially prolong the grieving process, when the natural cycle of grieving is to eventually reach a point of acceptance. I don't know for sure that's what would happen... but I would want to be absolutely sure it's not going to cause harm before unleashing this AI on the general public, particularly on vulnerable people (which grieving people are).

    Although I say that about all AI, so maybe I'm biased by the ridiculous ideology that new technologies should be tested and regulated before vulnerable people are experimented on.