
The way I approach it: I use prompts to get the image that feels closest to what I want and is manageable to edit (not gonna use the word "photoshop" here because that gives way too much credit to what I'm doing).


I use whatever tools help me get to that ideal image. I use the lasso selection tool to transform the colour of specific areas (playing around with hues, contrast, colour layering, etc.), liquify to fix anatomical anomalies where necessary, and plain old paint brushes to add or delete certain elements. In the case of Xiang, I basically doodled her purple hair over the original generated image. If you trust your artistic abilities, you can try drawing over and fixing any malformed fingers or other elements. After that, I put the crude edit through image2image AI regeneration.


The image after the first edit and regeneration will be close to what you want, but usually you're not quite done yet. So repeat the previous step: fine-tune the specifics and regenerate again. By the end of it, you'll probably have a couple hundred duds before getting that ideal image.

A few tips: when going the image2image route, you'll encounter a strength slider that goes from 0 to 100. Closer to 0 means you want the AI to stick closer to the reference image you've provided; towards the opposite end, you're giving the AI more interpretive liberty. Also, I don't really recommend the base Stable Diffusion model; Realistic Vision and ICBINP produce better realistic images imo.
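For anyone curious what that strength slider actually does under the hood: in Stable Diffusion-style image2image backends (diffusers, for example), strength roughly decides how many denoising steps get run on a noised-up copy of your reference image. Here's a rough sketch of that mapping; the function name and the 0-100 slider conversion are my own illustration, not any particular UI's code:

```python
def img2img_steps(slider_value: int, num_inference_steps: int = 50) -> int:
    """Approximate how many denoising steps image2image actually runs.

    slider_value: the 0-100 strength slider from the UI, mapped here to
    the 0.0-1.0 strength value backends typically use internally.
    Low strength -> few steps -> the output hugs the reference image;
    high strength -> nearly the full schedule -> more interpretive liberty.
    """
    strength = slider_value / 100.0
    if not 0.0 <= strength <= 1.0:
        raise ValueError("slider_value must be between 0 and 100")
    # Roughly what diffusers-style pipelines do: skip the early steps
    # of the schedule and only denoise the tail end of it.
    return min(int(num_inference_steps * strength), num_inference_steps)

print(img2img_steps(50))   # half the schedule: 25 steps
print(img2img_steps(100))  # full 50 steps: the reference barely matters
```

So a crude doodle-over edit usually wants a mid-to-low strength: enough steps for the AI to clean up your brushwork, but not so many that it paints over the parts you fixed.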


To the moderators: sorry about the constantly oversized images. I post most of my stuff from my Android tablet, which doesn't offer the simple luxury of resizing images on this site.



