I’ve been diving into AI-assisted workflows and found them an incredible fount of creativity. My recent efforts have been toward RPG-style characters like you’d see in a D&D game, and this guy came from the idea of a royal guard of an ancient, Egyptian/African-esque city. The AI gave me a variation with just the shield and I really liked the aspect of not killing but defending. If anyone is curious about the workflow I’d be happy to share :)

    • SpeakingColors@beehaw.orgOP · 1 year ago

      For sure! Often I’ll come in with a visual idea already, or I’ll iterate on one with the AI giving inspiration. If I have a strong idea I’ll sketch out the composition and the elements I know I want - for really tricky poses, like fingers, I’ll sometimes take a photo of myself doing them. Then I throw that into Stable Diffusion with img2img to turn my sketch/photograph into something more fully featured, or something I hadn’t thought of but really like (you can also set how “dreamy” the AI should be, i.e. how much it should vary from the input material).

      There’s a lot of detail I could get into but the “assistance” is fleshing out a composition -> I go in and correct anatomical mistakes or elements I want to change specifically -> run it through again if it needs it.
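
      To make the “dreamy” knob a bit more concrete: in img2img the denoising strength (0.0–1.0) controls how far your input is pushed back toward noise before being re-generated, which in practice means only a fraction of the sampling steps actually run. A rough illustrative sketch (exact behavior varies by UI and sampler, so treat the numbers as a mental model, not gospel):

```python
def img2img_schedule(num_inference_steps: int, strength: float):
    """Illustrative sketch: img2img skips the early denoising steps.

    strength=0.0 -> image comes back nearly untouched
    strength=1.0 -> full re-generation from noise
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    # How many denoising steps actually run on the input latent
    steps_to_run = int(num_inference_steps * strength)
    # Index of the first timestep used (a later start = less noise added)
    start_step = num_inference_steps - steps_to_run
    return start_step, steps_to_run

# e.g. 30 steps at strength 0.4: most of the sketch's composition survives
print(img2img_schedule(30, 0.4))  # (18, 12)
```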

    • SpeakingColors@beehaw.orgOP · 1 year ago

      I appreciate that! I’ve been shying away from posting stuff on here, as I don’t really know how people take AI art in communities that aren’t explicitly about AI. For a while I had my own judgements about how the models get trained, so I’d understand. But thank you!

  • kat@feddit.de · 1 year ago

    I’m a little bit conflicted honestly, because I’m not a fan of the idea of A.I. art, in general. But I also have to admit this result looks really good. Very cool character and perspective! I also like your take with him being a defender/carrying no weapon.

    I think maybe it needs some more work in the surroundings, because some of the buildings don’t make sense (e.g. missing walls around his shield).

    • SpeakingColors@beehaw.orgOP · 1 year ago

      I hear you - when this stuff was blowing up I couldn’t shake that it was trained off artists’ work they didn’t consent to having in the datasets. Sure, it’s similar to how human artists work (for music and art the prevailing recommendation for me, or any artist, was to consume material relevant to your art; for visual art they really just want you to constantly keep your eyes open for shapes and form), but it felt closer to plagiarism than inspiration. Some generations can be very close to an individual style (especially if the model was trained specifically on that artist), but I found that generations omitting an artist’s name ended up creating something compelling yet not tied to one artist specifically - still undoubtedly a conglomeration of the multitudes it was trained on (including photography). It’s muddy water for sure, and the angle of AI replacing workers in general is still relevant - but I also think it empowers people like me, who have the visual ideas but can use the help making them fully fleshed out.

      The crux, for me, feels like “when you can see whatever you want, what do you want to see?” A lot of our AI woes are reflections of questionable human behavior (racist chat models, AI for war, deepfakes and dishonesty).

      How do you feel about it?

      • kat@feddit.de · 1 year ago

        but I also think it empowers people like me who have the visual ideas but can use the help making them fully fleshed out

        I totally get that. In fact, I feel similar because I often have ideas or concepts for visual art, but then I lack the drawing/painting skill to actually realize them (I try to improve, but I rarely find the time to practice). In that way, AI assistance could be seen as a tool to make expression via art more accessible to the masses (especially since not everybody can afford or has the resources to visit an art school).

        I think my conflict comes mainly from the immense respect and admiration I have for people who are able to create powerful, realistic images from scratch (e.g. people like Martina Fačková). Maybe I just need to get used to it, I mean calculators exist and that’s not a bad thing, because they make our lives easier.

        A lot of our AI woes are reflections of questionable human behavior (racist chat models, AI for war, deepfakes and dishonesty).

        I find it very interesting, because it poses the question of how much of this behavior is unintentionally woven into AI-assisted images. When you look at artists like Martina Fačková, every little detail in her art is intentional, and thereby also an expression of herself and her views on the world. But with AI art, while the broader composition and concept are intentional, a lot of the details are generated based on how the model was trained (of course you can decide whether you want to go with it or do another pass, so in that sense it’s maybe still a little bit of self-expression).

        Interestingly, looking at your picture I can see bits and pieces of war/apocalypse scenery, e.g. the bird in the sky almost looks like a plane, and the ruined buildings in the background (please note that this is not meant as criticism, just an observation). Now that I think of it, I’m also reminded of this Captain America movie poster. Maybe the AI had some Marvel movie posters in its training data ;-).

    • SpeakingColors@beehaw.orgOP · 1 year ago

      Thank you! Essentially I’ll come in with a visual idea - some sketches already, or I’ll do one with the AI in mind (keeping the lines simple so it doesn’t get confused). I generate a batch of images with img2img and cherry-pick the ones that fit closest to the idea, or are surprising and wonderful. Then I rework those for anatomical errors or other things I want to fix or omit -> send it back through img2img if it needs it, or to inject detail -> upscale and put it as my desktop/phone wallpaper :P

      (I’m using Automatic 1111, which is a web UI for Stable Diffusion, btw)

      • Anamana@feddit.de · 1 year ago

        Oh img2img sounds awesome. Never heard about it before. How do you rework the anatomical errors and stuff? By hand? Or all within Automatic 1111?

        • SpeakingColors@beehaw.orgOP · 1 year ago

          Img2img is one of many ways to constrain the AI’s efforts to your compositional desires - it’s rad. You can control the amount of “dreaming” the AI does on the base image to get subtle changes, or a radically different image based on the elements of the previous one (sometimes with trippy, cool results; often with horrendous mutations if the desired image is supposed to be humanoid xD).

          Inpainting is another tool, it’s like a precise img2img on an area you mask. Hands are often the most garbled thing from the AI, so a brute force technique is to img2img the hands - but the process works a lot better if you help the AI out and manually fix the hands. So I’ll throw the image into photoshop, make a list (if I remember :P) of everything I need to fix, address them directly and then toss it back into Automatic 1111. Often the shading and overall style are hard things for me to get right so I’ll inpaint over my edits to get the style and shading back.
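
          In case it helps to picture what the mask is doing: inpainting is basically img2img restricted to the masked pixels - regenerate where the mask is on, keep the original everywhere else. A toy sketch of just that compositing idea (flat pixel lists, not the actual diffusion math):

```python
def composite_with_mask(original, generated, mask):
    """Toy illustration of what an inpainting mask does:
    wherever the mask is on (1), take the newly generated pixel;
    everywhere else, keep the original untouched.

    original, generated, mask: equal-length flat lists of pixels.
    """
    if not (len(original) == len(generated) == len(mask)):
        raise ValueError("images and mask must be the same size")
    return [g if m else o for o, g, m in zip(original, generated, mask)]

# Keep the image, but let the AI redraw only the masked "hand" region:
image     = [10, 20, 30, 40]
redraw    = [99, 98, 97, 96]
hand_mask = [ 0,  1,  1,  0]   # 1 = regenerate, 0 = keep
print(composite_with_mask(image, redraw, hand_mask))  # [10, 98, 97, 40]
```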

  • Hundun@beehaw.org · 1 year ago

    I’ve been meaning to get into AI-assisted graphic workflows, but haven’t found anything useful aside from basic tutorials on how to set up SD and use it with basic prompts.

    Can you share some learning resources, perhaps a workflow one could steal?

    • SpeakingColors@beehaw.orgOP · 1 year ago

      I replied to a previous comment about the “assistance” part, which is sorta an abridged version of my workflow (“workflow” is also a term used in ComfyUI, a visual layout that processes the image sequentially through modules). It’s super fun, I highly recommend it! Feel free to PM me anytime, I’d be glad to help!

      Really it was just looking up terms and areas of Automatic 1111 I was unsure of and finding various sites and guides. Civitai has LOTS of guides, often written by model makers or people with lots of hours in the field - it’s also my main resource for LoRAs and models, and there’s tons of info on there. The most helpful ones were settings and workflows for actual image generation (I can definitely find some links for you there) to get quality results without too much “and if I change this, what happens?” But honestly I love poking around like that, so I still spend hours tweaking just to see what happens xD