Greg Rutkowski, a digital artist known for his epic fantasy style, opposes AI art, yet his name and style have been used extensively by AI art generators without his consent. In response, Stability AI removed his work from the training data for Stable Diffusion 2.0. The community, however, has since created a LoRA model that emulates Rutkowski’s style against his wishes. Some argue this is unethical; others justify it on the grounds that Rutkowski’s art was already widely used to train Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.

  • ParsnipWitch@feddit.de · 1 year ago

    We will probably all have to get used to this soon because I can see the same happening to authors, journalists and designers. Perhaps soon programmers, lawyers and all kinds of other people as well.

    It’s interesting how people on Lemmy pretend to be all against big corporations and capitalism and then happily indulge in the process of making artists jobless because “Muh technology cool!” I don’t know the English word to describe this situation. In German I would say “Tja…”

    • raccoona_nongrata@beehaw.org · 1 year ago

      I don’t think I would mind it so much if we lived in a society where your value and quality of life weren’t tied directly to your economic output. As things stand, it is quite tragic: you’re taking skills and sensibilities that someone worked hard to develop and make a living with (not an easy task even in ideal circumstances) and turning them into a profit generator for someone else.

      If it truly were about human expression and creativity, and if we lived in a society where everyone had their basic needs met and was encouraged to explore, that would be one thing.

      To be charitable, I think part of the disconnect with people who can’t acknowledge the harm being done is that they’re jumping ahead to defend that utopian vision, even if not consciously. They’re arguing from the hypothetical perspective of some future state.

      I don’t think that’s a practical or helpful way to address the harm, but that’s how I rationalize the motivation without assuming the worst about everyone who is in favor of it, even if I strongly disagree.