Took way more time than I’d like. I did so many inpainting passes and manual paint-overs to fix weird things. I wonder if people just brute-force the generations or do the same as I do.
Workflow (txt2img)
Positive: (masterpiece, best quality:1.3), (flat color, watercolor:1.1), a girl sitting on a sidewalk, leaning on wall, 1girl, pink shirt, black skirt, (frills:0.8), black thighhighs, black choker, frilled shirt collar, lolita fashion, medium breasts, detailed background, nice hands, perfect hands, face
Negative: embedding:easynegative, embedding:ng_deepnegative_v1_75t, (worst quality, low quality:1.3), jpeg artifacts, signature, watermark, bad anatomy, bad proportions, disfigured, amputee, disembodied limb, severed limb, missing limb, missing arms, extra arms, extra legs, extra hands, missing finger, fewer digits, extra digits, (cropped head:1.5), realistic, monochrome, 3d, loli, maid, apron, huge breasts, big breasts, flat chest, hat
Checkpoint: Anything V5 PrtRE, Seed: 1020843393444063, Steps: 20, Sampler: DPM++ 2M Karras, Size: 768x512, Upscalers: 4x-UltraSharp and RealESRGAN_x4plus_anime_6B
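For anyone who wants to script the base generation outside ComfyUI, here is a minimal sketch using the Hugging Face diffusers library. This is not the ComfyUI graph actually used, so outputs will differ; the checkpoint and embedding file paths are placeholders, and note that plain diffusers ignores A1111-style (word:1.3) attention weights unless you add a library like compel.

```python
# Minimal txt2img sketch with diffusers (NOT the ComfyUI graph used here,
# so results will differ). File paths below are placeholders.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "anything-v5-PrtRE.safetensors",   # placeholder path to the checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# "DPM++ 2M Karras" = multistep DPM-Solver with Karras sigmas
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

# Negative textual-inversion embeddings (placeholder filenames)
pipe.load_textual_inversion("easynegative.safetensors", token="easynegative")
pipe.load_textual_inversion("ng_deepnegative_v1_75t.safetensors",
                            token="ng_deepnegative_v1_75t")

generator = torch.Generator("cuda").manual_seed(1020843393444063)
image = pipe(
    prompt="(masterpiece, best quality:1.3), ...",  # full prompt above
    negative_prompt="easynegative, ng_deepnegative_v1_75t, ...",
    num_inference_steps=20,
    width=768,
    height=512,
    generator=generator,
).images[0]
image.save("base.png")
```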
This workflow only produces the base image. The final image went through countless img2img passes, inpaintings, and manual fixes. The process is roughly like this:
Base gen > inpainting and manual painting > 2x upscale (4x-UltraSharp + RealESRGAN_x4plus_anime_6B) > inpainting and manual painting > 2x upscale (4x-UltraSharp) > finishing touches
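In code terms, that staging might look roughly like the sketch below. This assumes a diffusers version whose inpaint pipeline accepts a standard (non-inpaint) checkpoint; upscale_2x is a hypothetical stand-in (a plain resize where the real workflow runs ESRGAN-family models at 4x and downscales to 2x), and the manual paint-overs happen in an external editor, so they appear only as comments.

```python
# Rough sketch of the staged pipeline. upscale_2x is a hypothetical helper:
# a Lanczos resize stands in for the ESRGAN models so the sketch stays
# runnable. Masks would be redrawn by hand at each stage in practice.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

def upscale_2x(img: Image.Image, model: str) -> Image.Image:
    # Placeholder for an ESRGAN upscaler (e.g. 4x-UltraSharp run at 4x,
    # then downscaled to 2x); here it's just a plain resize.
    return img.resize((img.width * 2, img.height * 2), Image.LANCZOS)

inpaint = StableDiffusionInpaintPipeline.from_single_file(
    "anything-v5-PrtRE.safetensors", torch_dtype=torch.float16
).to("cuda")

prompt = "(masterpiece, best quality:1.3), ..."  # same prompt as the base gen
image = Image.open("base.png")                   # 768x512 txt2img output
mask = Image.open("mask.png")                    # white = area to regenerate

# base gen > inpainting (+ manual painting in an external editor)
image = inpaint(prompt=prompt, image=image, mask_image=mask).images[0]
image = upscale_2x(image, "4x-UltraSharp + x4plus_anime_6B")  # first 2x pass
# > inpainting and manual painting again, at the higher resolution
image = inpaint(prompt=prompt, image=image, mask_image=mask).images[0]
image = upscale_2x(image, "4x-UltraSharp")                    # second 2x pass
# > finishing touches by hand
```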
Note that I use ComfyUI, so the results might differ on other UIs.
I always render like 16-32 inpaintings at a time because it is so hard to get right. You either need a good GPU or patience, and I am stuck with patience lol
I usually do 3-4 inpaints at a time, slowly modifying the denoise, prompts, and inpainted area. Sometimes I can’t get the result I want even after a lot of generations, so I have to make a rough drawing as a hint.
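Scripted, that candidate sweep might look like this, reusing the inpaint pipeline, prompt, image, and mask from the sketch above; strength is diffusers' analogue of the denoise slider.

```python
# Render a small batch of inpaint candidates while sweeping denoise strength,
# mirroring the "3-4 inpaints at a time" loop described above.
for i, denoise in enumerate([0.3, 0.4, 0.5, 0.6]):
    out = inpaint(
        prompt=prompt,
        image=image,        # working image, with a rough hint painted in
        mask_image=mask,    # white = region to regenerate
        strength=denoise,   # diffusers' analogue of the denoise slider
        num_inference_steps=20,
    ).images[0]
    out.save(f"candidate_{i}_denoise_{denoise:.1f}.png")
```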
soo qt <3
Be careful of posting anything like this in lolita fashion communities though. The crazy neurotic people there apparently don’t like generative AI very much.
Sometimes I visit that sub to look at some irl designs. Though most of them are rather “cosplay”-like instead of the more practical ones.
It’s kind of expected tbh. People outside of the AI community hate AI art as of now. I understand their hate, as I was one of them; I thought actual artists were going to lose their jobs. It took a while for me to shake that opinion.

I just recently tried to learn more about it, after reading some opinions about AI art. One particular comment that stood out to me was that “despite the support for artists against AI art, most people won’t pay for artists’ works anyway” (heavy paraphrasing). That one is very much the truth. People rarely respect artists’ work, often undermining their effort.

So I started to look at the other side of AI art: what positive things could it bring? I joined a Stable Diffusion subreddit to lurk in the community and learn more about it. As I learned, I could see more potential for using AI art to actually help artists. Not only that, it’d also help indie game devs speed up their work and reduce expenses. Thanks to that, I had a change of opinion.