Greg Rutkowski, a digital artist known for his epic fantasy style, opposes AI art, but his name and style have been frequently used by AI art generators without his consent. In response, Stability AI removed his work from the training dataset for Stable Diffusion 2.0. However, the community has now created a tool that emulates Rutkowski's style against his wishes, using a LoRA model. While some argue this is unethical, others justify it on the grounds that Rutkowski's art was already widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.
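For context on what "a LoRA model" means here: a style LoRA is a small set of extra weights layered onto a base model at inference time. Below is a minimal sketch using the Hugging Face diffusers library; the LoRA repository id is a hypothetical placeholder, not the actual community tool discussed in the article.

```python
# Sketch: loading a community style LoRA on top of Stable Diffusion 1.5
# with Hugging Face diffusers. The LoRA repo id is a made-up placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A LoRA injects a small low-rank weight update into the base model's attention layers.
pipe.load_lora_weights("someuser/example-style-lora")  # hypothetical repo id

image = pipe(
    "a castle on a cliff at dawn, dramatic lighting",
    cross_attention_kwargs={"scale": 0.8},  # how strongly the LoRA is applied
).images[0]
image.save("styled.png")
```

The scale parameter is what lets users dial a style emulation up or down relative to the base model.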

  • raccoona_nongrata@beehaw.org · 1 year ago

    Sure, but those individuals are responsible for their proportional contribution to that 100,000 years, which can be a lot to a human being, sometimes a life’s work.

    If you stopped feeding new data to Stable Diffusion, it would not progress or advance the human timeline of art; it would just stagnate. It might have a broader scope than if you fed it cave drawings, but it would never contribute anything itself.

    People don’t want their work and contribution scooped up by a machine that then shoves them aside with literally no compensation.

    If we create a society where no one has to work, we can revisit the question, but that’s nowhere on the horizon.

    • Deniz Opal@syzito.xyz · 1 year ago

      @raccoona_nongrata

      Actually, this is how we are training some models now.

      The models are separated and fed different versions of the source data; then we kick off a process of feeding each one content created by the other models, forming a loop. It has proven very effective. It is also the case that this generation of AI-created content becomes the next generation's training data, simply by existing. What you are saying is absolutely false: generated content DOES have a lot of value as source data.
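      A toy, purely illustrative sketch of the kind of loop described above, with made-up stand-ins for the models (this is not any specific lab's pipeline): two "models" start from disjoint slices of the source data, and each round they are additionally fed the other's generated output.

      ```python
      # Toy illustration only: train() and generate() are stand-ins, not real models.
      import random

      def train(corpus):
          # Stand-in for a training run: the "model" is just the data it has seen.
          return list(corpus)

      def generate(model, n):
          # Stand-in for sampling from a trained model.
          return [f"gen({random.choice(model)})" for _ in range(n)]

      source = [f"artwork_{i}" for i in range(10)]
      split_a, split_b = source[:5], source[5:]    # separated versions of the source data

      model_a, model_b = train(split_a), train(split_b)

      for _ in range(3):                           # the cross-feeding loop
          out_a, out_b = generate(model_a, 2), generate(model_b, 2)
          model_a = train(split_a + out_b)         # each model also sees the other's output
          model_b = train(split_b + out_a)
      ```

      Real pipelines of this kind would add filtering and curation of the generated samples, which the sketch omits.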