Visual artists fight back against AI companies for repurposing their work

Three visual artists are suing the makers of artificial-intelligence image generators to protect their copyrights and careers.

  • FooBarrington@lemmy.world

    They cannot make something new. By nature, they can only mimic.

    Explain it to me from a mathematical point of view. How can I know based on the structure of GANs or Transformers that they, by nature, can only mimic? Please explain it mathematically, since you’re referring to their nature.

    The randomness they use to combine different pieces of work is not creativeness. It’s brute force. It’s doing the math a million times until it looks right.

    This betrays a lack of understanding on your part. What is the difference between creativeness and brute force? The rate at which acceptable results are found while navigating the latent space. Transformers and GANs do not brute-force in any capacity. Where do you get the idea that they generate millions of variations until they get it right?

    Humans fundamentally do not work that way. When an engineer sees a design and thinks “I can improve that”, they do so because they understand the mechanism.

    Define understanding for me. AI can, for example, automatically optimise algorithms (it’s a fascinating field, finding a more efficient implementation without changing results). This should be impossible if you’re correct. Why does it work? Why can they optimise without understanding, and why can’t this be used in other areas?
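
    (A toy illustration, not from any particular optimiser, of what “finding a more efficient implementation without changing results” can look like; the function names below are made up for this example.)

    ```python
    # Two implementations that always agree, one asymptotically cheaper.
    # Automated program optimisation searches for rewrites like this.
    def sum_to_n_slow(n: int) -> int:
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_to_n_fast(n: int) -> int:
        return n * (n + 1) // 2  # closed form: O(1) instead of O(n)

    assert all(sum_to_n_slow(n) == sum_to_n_fast(n) for n in range(200))
    ```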

    Modern AIs do not understand anything. They brute-force their way to valid output, and in some cases, like with code, science, or an engineering problem, there might be one single best solution, which an AI can find faster than a human.

    Again, define understanding. They provably build internal models of whatever task you’re training them on. How is that not a form of understanding?

    But art DOES NOT HAVE a single correct “solution”.

    Then it seems great that an AI doesn’t always give the same result for the same input, no?

    • MentalEdge

      The brute-forcing doesn’t happen when you generate the art. It happens when you train the model.

      You fiddle with the numbers until it produces only results that “look right”. That doesn’t make it not brute-forcing.

      Human inspiration and creativity, meanwhile, are intuitive processes. And we understand why 2+2 is four.

      Writing a piece of code that takes two values and sums them does not mean the code comprehends math.

      In the same way, training a model to generate sound or visuals does not mean it understands the human experience.
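
      (For concreteness, a minimal sketch of the kind of summing code described above, added purely as an illustration: it returns correct sums while encoding nothing about what numbers or addition mean.)

      ```python
      # Produces correct sums, but contains no representation of
      # what a number or "+" actually is.
      def add(a: float, b: float) -> float:
          return a + b

      print(add(2, 2))  # 4
      ```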

      As for current models generating different results for the same prompt… no. They don’t. They generate variations, but the same prompt won’t get you Dalí in one iteration, then Monet in the next.

      • FooBarrington@lemmy.world

        The brute-forcing doesn’t happen when you generate the art. It happens when you train the model.

        So it’s the same as a human - they also generate art until they get something that “looks right” during training. How is it different when an AI does it?

        But you’ll have to explain where this brute-forcing happens. What are the inputs and outputs of the process? Because the NN doesn’t generate all possible outputs until the correct one is found, which is what brute-forcing is. Maybe you could argue that GANs are kinda doing this, but it’s still very much a directed process, which is entirely different from real brute-forcing.
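
        (To make the distinction concrete, a toy sketch that is my own illustration rather than anything from this thread: brute force enumerates candidates until one passes a test, while gradient-based training takes directed steps toward lower loss.)

        ```python
        import random

        # Toy objective: find w that minimises (w - 3) ** 2.

        def brute_force(step=0.001, tol=1e-6):
            # Enumerate candidates until one is "good enough":
            # literally doing the math a million times until it looks right.
            w = -1000.0
            while w < 1000.0:
                if (w - 3) ** 2 < tol:
                    return w
                w += step
            return None

        def gradient_descent(lr=0.1, steps=100):
            # Directed search: each step follows the gradient of the loss.
            # Neural-network training proceeds this way, not by enumeration.
            w = random.uniform(-1000.0, 1000.0)
            for _ in range(steps):
                grad = 2 * (w - 3)  # derivative of (w - 3) ** 2
                w -= lr * grad
            return w
        ```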

        Human inspiration and creativity, meanwhile, are intuitive processes. And we understand why 2+2 is four.

        You’re using more words without defining them.

        Writing a piece of code that takes two values and sums them does not mean the code comprehends math.

        But we’re not writing code to generate art. We’re writing code to train a model to generate art. As I’ve already mentioned, NNs provably can build an accurate model of whatever you’re training them on - how is this not a form of comprehension?

        In the same way, training a model to generate sound or visuals does not mean it understands the human experience.

        Please prove you need to understand the human experience to be able to generate meaningful art.

        As for current models generating different results for the same prompt… no. They don’t. They generate variations, but the same prompt won’t get you Dalí in one iteration, then Monet in the next.

        Of course they can, depending on your prompt and temperature.
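
        (A generic sketch of temperature-controlled sampling; the style names and scores are invented for illustration. The only point is that the same input need not map to the same output once sampling with a nonzero temperature is involved.)

        ```python
        import math
        import random

        def sample_with_temperature(logits, temperature=1.0):
            # Softmax with temperature: higher temperature flattens the
            # distribution, so repeated runs pick more varied options.
            scaled = [x / temperature for x in logits]
            m = max(scaled)  # subtract the max for numerical stability
            weights = [math.exp(s - m) for s in scaled]
            return random.choices(range(len(logits)), weights=weights, k=1)[0]

        styles = ["Dalí-like", "Monet-like", "photorealistic"]
        logits = [2.0, 1.0, 0.5]
        print(styles[sample_with_temperature(logits, temperature=0.1)])  # almost always the top style
        print(styles[sample_with_temperature(logits, temperature=2.0)])  # noticeably more varied
        ```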

        • MentalEdge

          You are drawing parallels where I don’t think there are any, and are asking me to prove things I consider self-evident.

          I’m no longer interested in elaborating, and I don’t think you’d understand me if I did.

          • FooBarrington@lemmy.world

            This is what it always comes down to - you have this fuzzy feeling that AI art is not real art, but the deeper you dig, the harder it gets to draw a real distinction. This is because your arguments aren’t rooted in actual definitions, so instead of clearly explaining the difference between A and B, you handwave it away by appealing to C, which you also don’t explain.

            I once held positions similar to yours, but after analysing the topic much more deeply I arrived at my current positions. I can clearly answer all the questions I posed to you. You should consider whether your inability to do the same says anything about your own position.

            • MentalEdge

              I am able to answer your questions for myself. I have lost interest in doing so for you.

              • FooBarrington@lemmy.world

                But can you do so from the ground up, without handwaving towards the next unexplained reason? That’s what you’ve done here so far.

                • MentalEdge

                  Yes.

                  I once held a view similar to the one you present now. I consider my current opinion more advanced, just as you do yours.

                  You ask for elaboration and verbal definitions; I’ve been concise because I do not wish to spend time on this.

                  It is clear we cannot proceed further without me doing so. I have decided I won’t.