• KillingTimeItself@lemmy.dbzer0.com

> Watching videos of rape doesn’t create a new victim. But we consider it additional abuse of an existing victim.

Is this a legal thing? I’m not familiar with the laws surrounding sexual abuse, on account of the fact that I don’t frequently sexually abuse people, but if this is an established legal precedent, that’s definitely a good argument to use.

However, on a mechanical level, a recounting of an instance isn’t necessarily a 1:1 retelling of that instance. A video of rape, for example, isn’t abuse any more than the act of rape within it is. The nonconsensual recording and distribution of it (because it’s rape) could reasonably be considered crimes of their own, same with possession. But whether merely interacting with the video is abuse in its own right, I’m not sure, based on semantics. The video most certainly contains abuse; the watcher may or may not like that, and I’m not sure whether that should matter, because it’s an external value. By the same reasoning, “X person thought about raping Y person, and got off to it” would also count as abuse at a certain point. There is certainly some interesting nuance here.

If I watch someone murder someone else, at what point do I become an accomplice to the murder, rather than an additional victim in the chain? That’s the sort of litmus test this is going to require.

> That’s the gray area. AI is trained on images of abuse (we know it’s in there somewhere).

To be clear, this would be a statistically minimal amount of abuse; the vast majority of adult content is legally produced and sanctioned, made public by its creators for the purpose of generating revenue. I guess the real question here is: what percentage of X is still considered “original” enough to count as the same thing?

Like, we’re talking probably less than 1% of all public porn, though still a significant margin, being nonconsensual (we’ll use this as the base), and the AI is trained on that set to produce an image that is minimally alike, or entirely distinct from, the feature set provided. So you could theoretically create a formula to determine how far removed a given output is from the original content in that 1% of cases. I would imagine the result is going to be a lot closer to 0 than to any significant number, unless you start including external factors, like intentionally deepfaking someone into it, for example. That would be my primary concern.
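To make that idea concrete, here’s a toy sketch of what such a “removal” formula might look like: score a generated output by its distance from the nearest item in a reference set of known-bad material. Everything here is hypothetical; the random vectors stand in for whatever image embeddings a real system would use, and cosine similarity is just one arbitrary choice of metric.

```python
import numpy as np

def removal_score(generated: np.ndarray, reference_set: np.ndarray) -> float:
    """Return 1 minus the max cosine similarity to any reference vector.

    A score near 0.0 means the output is (near-)identical to something
    in the reference set; higher scores mean it shares fewer features.
    """
    gen = generated / np.linalg.norm(generated)
    refs = reference_set / np.linalg.norm(reference_set, axis=1, keepdims=True)
    return float(1.0 - (refs @ gen).max())

# Stand-in embeddings: a hypothetical flagged subset and one generated output.
rng = np.random.default_rng(0)
reference = rng.normal(size=(1000, 128))
sample = rng.normal(size=128)
print(removal_score(sample, reference))  # well above 0 for unrelated vectors
```

Of course, where you draw the threshold on a score like that is exactly the legal/ethical question, not a technical one.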

> That’s the gray area. AI is trained on images of abuse (we know it’s in there somewhere). So at what point can we say the modified images are okay because the abused person has been removed enough from the data?

Another important concept here is human behavior, since it’s conceptually similar to the AI in question. There are clear, strict laws regarding most of these things in real life, but we aren’t talking about real life. What if someone in my family got raped at some point in their life, and this has happened to several other members of my family, or friends of mine, and I decide to write a book loosely based on the experiences of these individuals? (The book isn’t necessarily going to be based on those specific instances, but it will most certainly be influenced by them.)

There’s a hugely complex, hugely messy set of questions and answers that need to be worked out about this. A lot of people are operating on a set of principles much too simple to make any conclusive judgment about this sort of thing, which is why this kind of discussion is ultimately important.