• 0 Posts
  • 8 Comments
Joined 13 days ago
Cake day: June 4th, 2025

  • The classic example we already have of this is when you are stopped at a side road about to enter the main road, and a car coming towards you on the main road signals to turn in.

    Many people take the fact that the other car has its turn signal on as a guarantee that it’s safe to emerge, but any good driving instructor will tell you to wait until the car actually begins to turn before you emerge yourself.

    They had their signal on but that doesn’t mean they’re actually going to DO what the signal said they would.

    Same with the front brake light. It would be like, “Well, their front brake light came on, so I assumed it was safe to step into the crosswalk.” NO. They could have just tapped the brake for a second; that doesn’t mean they saw you, or that they will actually stop.


  • tiramichu@sh.itjust.works to Fuck AI@lemmy.world · Rules for Thee and Not for Me (+26/−9 · edited 12 days ago)

    They both use copyrighted material, yes (and I agree that is bad), but let’s work this argument through.

    Before we get into this, I’d like to say I personally think AI is an absolute hell on earth which is causing tremendous societal damage. I wish we could un-invent AI and pretend it never happened, and the world would be better for that. But my personal views on AI are not going to factor into this argument.

    I feel the argument here, and a view shared by many, is that since the AI was trained unethically on copyrighted material, any manner in which that AI is used is equally unethical.

    My argument would be that the origin of a tool - be that ethical or unethical, good or evil - does not itself preclude judgment on the individuals later using that tool, for how they choose to use it.

    When you ask an AI to generate an image, unless you specify otherwise it will create an amalgam based on its entire training set. The output image, even though it will be derived from work of many artists and photographers, will not by default be directly recognisable as the work of any single person.

    When you use an AI to clone someone’s voice, on the other hand, that doesn’t even depend on data held within the model; it is done by you yourself feeding in a bunch of samples for the model to copy, directing the AI to impersonate that individual directly.

    As end users we don’t have any control over how the model was trained, but what we can choose is how that model is used, and to me that makes a lot of difference.

    We can use the tool to generate general things without impersonating anyone in particular, or we can use it to directly target and impersonate specific artists or individuals.

    There’s certainly plenty of hypocrisy in a person using a model trained on stolen copyrighted material to generate images while at the same time complaining of someone doing the same to their voice, but our cathartic schadenfreude at saying “fuck you, you got what’s coming” shouldn’t mean we don’t look objectively at these two activities in terms of their impact.

    Fundamentally, generating a generic image and cloning someone’s voice are tremendously different in scope, in the directness of their targeting, and in the level of infringement and harm caused. And so although nobody is innocent here, one activity is still far worse morally than the other, and by a very large amount.