• LemmysMum@lemmy.world
    1 year ago

    Then perhaps you should look at using them so you can allay your fears with knowledge.

    • tb_@lemmy.world
      1 year ago

      I should look into using said algorithms?

      I know what they can do, but if that’s through ripping off the work of others I’m not sure I like it.

      Would you pay an artist if you knew their work was traced?

      • LemmysMum@lemmy.world
        1 year ago

        Here, first you need tools (these are FOSS):
        https://www.dexerto.com/tech/how-to-install-stable-diffusion-2124809/
        https://www.youtube.com/watch?v=nBpD-RbglPw
        https://www.youtube.com/watch?v=SYNd0vAt5jk

        Then you’ll need to know the basics of using Stable Diffusion:
        https://www.youtube.com/watch?v=nBpD-RbglPw

        You’ll want access to community resources:
        https://civitai.com

        That’ll get you started and should see you sorted for the next month of learning.

        Once you’ve got the basics of Stable Diffusion (one of many image-generation programs) under control, you can start using custom-trained models to get the styles you want and to produce results closer to what you’re after. They won’t be good at first; most will be trash. Then you’ll need to learn about ControlNet, which will introduce you to wireframe posing, depth maps, SoftEdge, Canny, and a dozen other pre-processing tools. Once you start getting things that look close to what you want, you’ll move on to multi-pass processing, img2img generation, and full and selective inpainting, and you’ll start using tools like ADetailer to help generate better-looking hands, faces, and eyes. After that you’ll need Latent Couple and Composable LoRA so you can make accurate scene placements and style divisions. Don’t worry about the plethora of other, more complex tools; you won’t need those at the start.
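To give a feel for what img2img and the denoising strength slider actually do, here's a cartoon of the idea in plain Python. This is NOT the real Stable Diffusion sampler (the real thing predicts noise with a neural network in latent space); `toy_img2img` and the fixed `target` standing in for the model's prediction are invented for illustration.

```python
import random

def toy_img2img(init, target, strength=0.6, steps=20, seed=0):
    """Cartoon of img2img: partially noise an existing latent, then
    iteratively step it toward a predicted clean result. Here a fixed
    `target` list stands in for the model's prediction. Low strength
    keeps more of the original image; strength=1.0 starts from pure
    noise, like plain txt2img.
    """
    rng = random.Random(seed)
    # Noise the starting latent in proportion to denoising strength.
    x = [(1 - strength) * v + strength * rng.gauss(0, 1) for v in init]
    # Only run the last `strength` fraction of the denoising schedule.
    n = max(1, int(steps * strength))
    for t in range(n, 0, -1):
        # Each step nudges the latent toward the predicted result.
        x = [xi + (1 / t) * (ti - xi) for xi, ti in zip(x, target)]
    return x
```

The point of the sketch: at low strength the init image mostly survives, at high strength it is mostly replaced, which is exactly the trade-off you tune when inpainting or doing multi-pass refinement.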

        • Adalast@lemmy.world
          1 year ago

          I know quite a lot on this topic. Please read my comments elsewhere in this thread for a thorough breakdown of the issues and how AI actually works. Btw, the source of my authority on all of this: a Master’s degree in art, work in a professional art field, a BS in Applied Mathematics, and building AIs as a hobby. I live in literally every aspect of this debate.

          TL;DR: AI models are never trained directly on source material. Sources are fed into statistical-analysis algorithms that completely break them down, deriving information computers can understand, in a process called vectorization. The AI is then trained on those vectors. When a prompt is given, the algorithm splits it into pieces in a process called tokenization. From that input, through an algorithm beyond the scope of this comment, an output is produced that statistically satisfies the model. So even during use, the AI never directly works on human inputs.
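The tokenize-then-vectorize step above can be sketched in a few lines. The four-word vocabulary and 2-D embedding table here are made up purely for illustration; a real text encoder like the CLIP model used by Stable Diffusion has a subword vocabulary of tens of thousands of tokens and much higher-dimensional embedding vectors.

```python
# Toy vocabulary and embedding table (invented for illustration).
VOCAB = {"a": 0, "cat": 1, "on": 2, "mars": 3}
EMBED = {0: [0.1, 0.9], 1: [0.8, 0.2], 2: [0.4, 0.4], 3: [0.7, 0.1]}

def tokenize(prompt):
    """Map each known word in the prompt to an integer token id."""
    return [VOCAB[w] for w in prompt.lower().split() if w in VOCAB]

def vectorize(token_ids):
    """Look up the embedding vector for each token id."""
    return [EMBED[i] for i in token_ids]

ids = tokenize("A cat on Mars")   # integer ids, not the words themselves
vecs = vectorize(ids)             # vectors the model actually consumes
```

The model downstream only ever sees `vecs` — lists of numbers — which is the sense in which neither training nor inference operates directly on the human-readable input.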