- cross-posted to:
- imageai@sh.itjust.works
one of my favourite things about stablediffusion is that you can get weird dream-like worlds and architectures. how about a garden of tiny autumn trees?
Truly enchanting. Would you share the workflow or at least the model you used?
people always ask, but i just say it’s a blend of models that i refuse to name, because they’re more NSFW than the images i care to make. hint: animal people.
as for the flow, pretty basic for this one: adjusting positive/negative prompts, prompt weights, and seeds until i get the general aesthetic i’m after. usually i’m specifically trying to find a look between paint and reality for a dream-like feel. once i’m close, i controlnet/inpaint whatever needs changing to finish the image. in this one, two of the four pumpkins were inpainted to cover debris.
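for anyone who’d rather poke at this kind of flow in code than in a UI, here’s a rough sketch using the python `diffusers` library. the model id, prompts, and numbers are all stand-ins (i’m still not naming the real blend), not my actual settings:

```python
import torch
from diffusers import StableDiffusionPipeline

# placeholder model id; swap in whatever checkpoint/blend you actually use
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# a fixed seed keeps the composition stable while you tweak the prompts
generator = torch.Generator("cuda").manual_seed(1234)

image = pipe(
    prompt="a garden of tiny autumn trees, oil painting, dream-like",
    negative_prompt="photo, blurry, low quality",  # push away from plain realism
    guidance_scale=7.5,        # overall weight given to the prompt
    num_inference_steps=30,
    generator=generator,
).images[0]
image.save("garden.png")

# note: per-word weights like (autumn:1.2) are a UI feature (A1111/ComfyUI);
# in diffusers you would pull in the compel library for that
```

the seed is the important bit: keep it fixed, nudge the prompts and weights, and you can watch the same composition drift toward the look you want.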
back on reddit, i posted edits of my own artwork to show how much control you have over image context and content, but any sub outside of stablediffusion gave me a lot of scorn and hate for using the tools of the devil. i got plenty of compliments on the original artwork, but usually as a backhanded way to insult the A.I. stuff, while ignoring why i used my own originals as the base for the edits in the first place.
but it all boils down to “inpainting and controlnet are awesome,” and knowing when to use which tool to get the look you’re going for. i often edit things in a paint program when i need to make big changes. love the creative things people do with these tools, hope to see more on lemmy.
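since inpainting does the heavy lifting at the end, here’s a similarly hand-wavy sketch of it with diffusers. the mask marks what gets repainted and everything else is left untouched, which is exactly how the debris-covering pumpkins worked. model id and file names are placeholders again:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# placeholder inpainting checkpoint; pick one matched to your base model
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("garden.png").convert("RGB")
mask = Image.open("pumpkin_mask.png").convert("RGB")  # white = repaint, black = keep

result = pipe(
    prompt="a small pumpkin among fallen autumn leaves",
    image=init_image,
    mask_image=mask,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
result.save("garden_fixed.png")
```

controlnet is the same idea taken further: instead of a mask, you feed in an edge map or depth map so the new content follows the structure you give it.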