
Posts 0 · Comments 4,358 · Joined 2 yr. ago

  • With any model in use, currently, that is impossible to meet. All models are trained on real images.

    yes but if i go to thispersondoesnotexist.com and generate a random person, is that going to resemble the likeness of any given real person close enough to perceptibly be them?

    You are literally using the schizo argument right now. "If an artist creates a piece depicting no specific person, but his understanding of people is inherently based on the facial structures of other people that he knows and recognizes, then he must be stealing their likeness."

  • that's great, i didn't ask what the shitty name of the shitty article was. I want to know what fucking happened.

    (also, as of right now it appears to be identical to when i first made the comment)

    This shocking moment at a GOP town hall in Idaho is a foreboding sign

  • my two biggest concerns are accessibility and privacy. I want it to be both accessible and private; card readers accomplish this to a significant and universally accepted degree. I can understand a subscription service being based on something a little different, or an auto-charge mechanism similar to tesla superchargers, but that should be an option, not the sole means of interacting with it, because otherwise it lends itself to really shitty behavior on the side of the company operating it.

    The subscription thing is understandable, but the unfortunate reality is that this is going to be a contract law problem rather than any other kind of problem: you are legally agreeing to a monthly payment model at some point in the checkout, otherwise it wouldn't be legal. Shitty laws and consumer protection problems, really.

    My credit card does rotate the numbers a bit but I keep meaning to find one that can generate different virtual cards per service so I can turn a virtual card off when they get abusive with it or when they leak it

    it's funny actually, i wonder if checks will see a comeback; with all the shitty services that exist now, they're a very explicit way of paying for a transaction. I don't really have a huge problem with auto-charging systems, most of the time banks can even unfuck some really funny shit if you need them to, though that gets into a different world very quickly. The subscription problem is an interesting one though; my solution is just to avoid them at all costs, because they're a parasitic drain. So far i've succeeded in that. Also, having a minimal number of subscriptions really helps you keep them in check, because there are only so many that can exist.

  • Permanently Deleted

  • and yet, in most cases, it pretty roughly aligns with popular vote sentiment. The only difference is that congress would have a significantly different makeup; whether or not that changes much is a different question.

  • sort of. There are arguments that private ownership of these videos is also weird and shitty, however i think impersonation and identity theft are going to be the two most broadly applicable instances of relevant law here. Otherwise i can see issues cropping up.

    Other people do not have any inherent rights to your likeness; you should not simply be able to pretend to be someone else. That's considered identity theft/fraud when we do it with legally identifying papers, and i think it's a similar case here.

  • revenge porn, simple as. Creating fake revenge porn of real people is still to some degree revenge porn, and i would argue it's stealing someone's identity/impersonation.

    To be clear, your example is a sketch of johnny depp; i'm talking about a video of a person that resembles the likeness of another person, where the entire video is manufactured. Those are, fundamentally, two different things.

  • what is the law’s position on AI-generated child porn?

    the simplest possible explanation here is that any porn created based on images of children is de facto illegal. If it's trained explicitly on adults, and you prompt it for child porn, that's a grey area, probably going to follow the precedent for drawn art rather than real content.

  • Is the output a grey area, even if it seems like real rape?

    on a base semantic and mechanical level, no, not at all. They aren't real people, there aren't any victims involved, and there aren't any perpetrators. You might even be able to argue the opposite, that this is actually a net positive, because it prevents people from consuming real abuse.

    Now another hypothetical. A person closes their eyes and imagines raping someone. “Real” rape. Is that a grey area?

    until you can either publicly display your own or someone else's process of thought, or read people's minds, this is definitionally an impossible question to answer. So the default is no, because it's not possible for it to be based in any frame of reality.

    Let’s build on that. Let’s say this person is a talented artist, and they draw out their imagined rape scene, which we are 100% certain is a non-consensual scene imagined by the artist. Is this a grey area?

    assuming it depicts no real persons or identities, no, there is nothing necessarily wrong about this; in fact i would defer back to the first answer for this one.

    We can build on that further. What if they take the time to animate this scene? Is that a grey area?

    this is the same as the previous question; media format makes no difference, it's telling the same story.

    When does the above cross into a problem?

    most people would argue, and i think precedent would probably agree, that this would start to be a problem when explicit external influences are a part of the motivation, rather than an explicitly internally motivated process. There is necessarily a morality line that must be crossed for it to become more of a negative thing than a positive one. The question is how to define that line in regard to AI.

  • Watching videos of rape doesn’t create a new victim. But we consider it additional abuse of an existing victim.

    is this a legal thing? I'm not familiar with the laws surrounding sexual abuse, on account of the fact that i don't frequently sexually abuse people, but if this is an established legal precedent that's definitely a good argument to use.

    However, on a mechanical level, a recounting of an instance isn't necessarily a 1:1 retelling of that instance. A video of rape, for example, isn't abuse any more so than the act of rape within it; of course, because it's rape, the nonconsensual recording and sharing of it means distribution could be considered a crime of its own, same with possession, however i'm not sure that interacting with the video is necessarily abuse in its own right, based on semantics. The video most certainly contains abuse, and the watcher of the video may or may not like that; i'm not sure whether that should influence it, because that's an external value. Something like "X person thought about raping Y person, and got off to it" would also be abuse under the same pretense at a certain point. There is certainly some interesting nuance here.

    If i watch someone murder someone else, at what point do i become an accomplice to murder, rather than an additional victim in the chain? That's the sort of litmus test this is going to require.

    That’s the gray area. AI is trained on images of abuse (we know it’s in there somewhere).

    to be clear, this would be a statistically minimal amount of abuse; the vast majority of adult content is going to be legally produced and sanctioned, made public by the creators of those videos for the purposes of generating revenue. I guess the real question here is what percent of X is still considered "original" enough to count as the same thing.

    Like we're talking probably less than 1% of all public porn, but still a significant margin, is non-consensual (we will use this as the base), and the AI is trained on this set to produce a minimally alike, or entirely distinct, image from the feature set provided. So you could theoretically create a formula to determine how far removed you are from the original content in that 1% of cases (a toy sketch of that arithmetic is below). I would imagine this is going to be a lot closer to 0 than to any significant number, unless you start including external factors, like intentionally deepfaking someone into it for example. That would be my primary concern.
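    A minimal back-of-the-envelope sketch of that "formula", assuming a generated image draws roughly uniformly on some number of training examples; the uniform-influence assumption, the fraction p, and the count k are made-up placeholders, not how any real model actually attributes influence:

```python
# Toy arithmetic only: if a fraction p of the training set is tainted material
# and one output draws roughly uniformly on k training examples, the expected
# "tainted" share of that output is p, while the chance that at least one
# tainted example influenced it is 1 - (1 - p)**k.
p = 0.01      # assumed fraction of tainted training data (~1%, per the comment above)
k = 10_000    # assumed number of examples meaningfully influencing one output

expected_share = p
prob_any_influence = 1 - (1 - p) ** k

print(f"expected tainted share of one output: {expected_share:.2%}")
print(f"probability at least one tainted example had any influence: {prob_any_influence:.2%}")
```

    Under these made-up numbers the two results pull in opposite directions: the expected contribution per output is tiny, but almost every output has some nonzero connection to the tainted subset, which is roughly the grey area being argued about.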

    That’s the gray area. AI is trained on images of abuse (we know it’s in there somewhere). So at what point can we say the modified images are okay because the abused person has been removed enough from the data?

    another important concept here is human behavior, as it's conceptually similar to the AI in question; there are clear, strict laws regarding most of these things in real life, but we aren't talking about real life. What if i had someone in my family who got raped at some point in their life, and this has happened to several other members of my family or friends of mine, and i decide to write a book loosely based on the experiences of these individuals? (the book isn't necessarily going to be based on those specific instances, however it will most certainly be influenced by them)

    There's a hugely complex, hugely messy set of questions and answers that need to be given about this. A lot of people are operating on a set of principles much too simple to be able to make any conclusive judgement about this sort of thing, which is why this kind of discussion is ultimately important.

  • It can generate combinations of things that it is not trained on, so not necessarily a victim. But of course there might be something in there, I won’t deny that.

    the low-down of it is quite simple: if the content is public, available for anybody to consume, and copyright permits it (i don't see why it shouldn't in most cases, although if you make porn for money, you probably hold exclusive rights to it, and you probably have a decent position to begin from, though a lengthy uphill battle nonetheless), there's not really an argument against that. The biggest problem is identity theft and impersonation, more so than stealing work.

  • i have no problem with ai porn assuming it's not based on any real identities; if it is, i think that should be considered identity theft or impersonation or something.

    Outside of that, it's more complicated, but i don't think it's a net negative; people will still thrive in the porn industry. It's been around for as long as it's been possible, and i don't see why it wouldn't continue.

  • Permanently Deleted

  • the hard answer: the voting populace is the single stupidest form of combined intelligence to ever exist; i'm pretty sure 3 children under the age of 7 in a room would have a higher average IQ than any state in america, when measuring the voting populace.

    Voting is a joke. People don't take it seriously, it's all vibes based, and those vibes are horrendously unreliable and meaningless.

    the soft answer: it is, for now. It will change, just give it time. It's inevitable.

  • i'm really not sure what the non-tesla supercharger experience looks like, and that concerns me. Though i assume it must be a somewhat solved problem, if people are using it anyway.

    just use a web browser. I verified I can login through my web browser to manage my account, my products, my payment method. Not the car features though. I have no idea if this is new, but just use any web browser

    that's good; there should definitely be some features available for the car, but you can at least handle payment. Though i would still prefer not needing to provide my payment info to a third party anywhere except at time of transaction. It just opens me up to more bullshit.

  • The performance of hardware acceleration in Jellyfin is markedly worse in my experience. My A380 can handle 2-3x more streams in Plex than it can in Jellyfin.

    i've never used plex or benchmarked it, so it's possible that it does; i wonder if anybody else has reproduced that behavior, since i know a lot of people do plex/jellyfin benchmarks these days. I'd be surprised if that hadn't happened yet. It shouldn't be any faster or slower if you're using the exact same transcoding settings, since it's all physically limited by the hardware, so it's possible the settings were the difference. It could theoretically be bad drivers or bad support i guess, but that would be a separate issue. (a rough sketch of how to check the raw hardware ceiling directly is below.)
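    A minimal sketch of how you could sanity-check the raw hardware ceiling outside of either Plex or Jellyfin, assuming an Intel card with QSV and an ffmpeg build that supports it; the file name, codec pair, bitrate, and stream count are placeholders i made up, not settings from either project:

```python
# Run the same QSV transcode N times in parallel and time it, so the result
# reflects the GPU/driver alone rather than either server's defaults.
import subprocess
import time

SAMPLE = "sample_4k_h264.mkv"   # hypothetical test file
STREAMS = 4                     # how many simultaneous transcodes to attempt

cmd = [
    "ffmpeg", "-y",
    "-hwaccel", "qsv", "-hwaccel_output_format", "qsv",
    "-c:v", "h264_qsv", "-i", SAMPLE,
    "-c:v", "hevc_qsv", "-b:v", "8M",
    "-f", "null", "-",           # discard output; only throughput matters here
]

start = time.time()
procs = [
    subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    for _ in range(STREAMS)
]
for p in procs:
    p.wait()
print(f"{STREAMS} parallel QSV transcodes finished in {time.time() - start:.1f}s")
```

    If that number differs meaningfully from what Plex or Jellyfin manage with the same codecs and bitrate, the gap is probably in the servers' settings or their ffmpeg builds rather than the silicon.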

    Maybe one day Jellyfin can offer that as a paid option, a la Nabu Casa for Homeassistant.

    definitely a possibility, but then again there are several ways of solving this problem in a homelab-universal manner, so maybe they should offer a more generic service instead.