Posts
7
Comments
1,354
Joined
2 yr. ago

  • „We made a huge mistake when we passed the Civil Rights Act in the 1960s.”

    He's absolutely correct on this one though.

    There was still way too much discrimination left in all sorts of laws despite the Civil Rights Act.

  • My cousin wasn't using any ML model. Their software probably did a geometric projection and that's it. Then they'd search for the proposed owner of the fingerprint and pull the real prints to compare against. That's something that ML models cannot take away from police as long as hallucinating is possible.
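    For illustration only, here is a minimal sketch of what purely geometric matching (no ML) could look like: rigidly aligning two sets of fingerprint feature points with a 2D Kabsch-style rotation-plus-translation fit and scoring the residual distance. This is an assumption about the general technique, not the actual police software; all names here are hypothetical.

    ```python
    import math

    def centroid(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def align_score(a, b):
        """Rigidly align point set b onto a (2D Kabsch) and return the
        mean residual distance. Lower score = better geometric match.
        Assumes corresponding points are already paired up by index."""
        ca, cb = centroid(a), centroid(b)
        # Cross-covariance terms of the centered point sets
        sxx = sum((p[0]-cb[0])*(q[0]-ca[0]) + (p[1]-cb[1])*(q[1]-ca[1])
                  for p, q in zip(b, a))
        sxy = sum((p[0]-cb[0])*(q[1]-ca[1]) - (p[1]-cb[1])*(q[0]-ca[0])
                  for p, q in zip(b, a))
        theta = math.atan2(sxy, sxx)  # optimal rotation angle
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        residuals = []
        for p, q in zip(b, a):
            # rotate b's centered point, then translate onto a's centroid
            x, y = p[0]-cb[0], p[1]-cb[1]
            rx = cos_t*x - sin_t*y + ca[0]
            ry = sin_t*x + cos_t*y + ca[1]
            residuals.append(math.hypot(rx - q[0], ry - q[1]))
        return sum(residuals) / len(residuals)
    ```

    The point of the sketch: such a score is deterministic geometry, so a low residual is checkable evidence, and the final call is still made by a human comparing the real prints, which is exactly why hallucination-prone models don't add much here.
    
    
    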

  • Then the research is bogus as well. Projections like those have been in wide use at police stations around the globe at least since my cousin bragged about having this when he started his police training. That was 2008.

  • Ok Google, switch off the TV. (The TV that is in the same room as you according to the Google Home app.)
    "I'm sorry, I don't know how to help with that"
    Ok Google, switch off the TV in the living room
    "There was a problem. Please try again in a few seconds"
    Gnah! Ok Google, switch off the TV in the living room!!!
    "Okay, a 10-hour video of switch-off sounds is now playing on the device 'bedroom'"
    What. The. Fuck.?!

  • Yeah, like, we live in a world where faking a celebrity's voice and having it respond to everything you say, completely life-like, takes a matter of minutes, while the "smart" speaker in your house talks like a robocall from the 90s and doesn't understand a single thing unless you adhere to a very specific command syntax.

  • Can we not pretend that "asshole" is some objective measurement? Someone who looks like a person you really wouldn't want in your life can be completely irrelevant to others. So you can't judge all mods just because the person you find offensive supposedly has to be offensive to everyone else too.

  • That's something only lawmakers can fix.

    Performance monitoring tools can't be kept on a need-to-know basis at my workplace. The article is talking about our works councils, right? Those have to be informed whenever any such tool is to be used, and many tools even need their approval before they can be used at all.

  • Since I'm a team leader at Deutsche Telekom (the parent company of T-Mobile, btw), here's what the AI basically does: You know the whole "some calls may be recorded for training purposes" thing, right? Depending on the topics your team handles and how many calls that brings with it, it's rather tiring and time-consuming to listen to all of them. The AI analyzes the calls and tries to point out those that are worth listening to... or rather: those it believes are worth listening to. Its analysis doesn't carry any weight of its own; the team leader still does all the real analysis, feedback, etc. So if the AI is full of shit, the employee doesn't get punished. If the AI is weirdly biased against someone, there are no repercussions beyond this tool being less useful to me.