Stable Diffusion face refiner: notes collected from Reddit threads

Stable diffusion face refiner online reddit 9 workflow, the one that olivio sarikas video works just fine) just replace the models with 1. 4), (mega booty:1. (depending on the degree of refinement) with a denoise strength of 0. What would be great is if I could generate 10 images, and each one inpaints a different face all together, but keeps the pose, perspective, hair, etc the same. I initially tried using a large square image with a 3x3 arrangement of faces, but it would often read the lower rows of faces as the body for the upper row; spread out horizontally all of the faces remain well separated without sacrificing too much resolution to empty padding. 6), (nsfw:1. Please keep posted images SFW. "normal quality" in negative certainly won't have the effect. 5 of my wifes face works much better than the ones Ive made with sdxl so I enabled independent prompting(for highresfix and refiner) and use the 1. This brings back memories of the first time that I use Stable Diffusion myself. SDXL models on civitai typically don't mention refiners and a search for refiner models doesn't turn up much. Among the models for face, I found face_yolov8n, face_yolov8s, face_yolov8n_v2 and the similar for hands. Using a workflow of txt2image prompt/neg without the ti and then adding the ti into adetailer (with the same negative prompt), I get This was already answered on Discord earlier but I'll answer here as well so others passing through can know: 1: Select "None" in the install process when it asks what backend to install, then once the main interface is open, go to This is the best technique for getting consistent faces so far! Input image - John Wick 4: Output images: Input image - The Equalizer 3: /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. A1111 and ComfyUI are the two most popular web interfaces for Where do you use Stable diffusion online for free? Not having a powerful PC I just rely on online services, here are mines . g. is anyone else experiencing this? what am i missing to make the refiner extension to work? Honestly! Currently trying to fix bad hands using face refiner, but it seems that it is doing something bad. Even the slightest bit of fantasy in there and even photo prompts start pushing a CGI like finish. 5, we're starting small and I'll take you along the entire journey. I mainly use img2img to generate full body portraits (think magic the gathering cards), and targeting specific areas (I think it’s called in painting?) works great for clothing, details, and even hands if I specify the number of fingers. Hands work too with it, but I prefer the MeshGraphormer Hand Refiner controlnet. It's "Upscaling > Hand Fix > Face Fix" If you upscale last, you partially destroy your fixes again. Far from perfect, but I got a couple generations that looked right. 0 includes the following experimental functions: Free Lunch (v1 and v2) AI researchers have discovered an optimization for Stable Diffusion models that improves the quality of the generated images. I will try that as the facedeatailer nodes never worked and Just like Juggernaut started with Stable Diffusion 1. This isn't just a picky point -- its to underline that larding prompts with "photorealistic, ultrarealistic" etc -- tend to make a generative AI image look _less_ like a photograph. 
Stable Diffusion 3 will use this new I'm not really a fan of that checkpoint, but a tip to creating a consistent face is to describe it and name the "character" in the prompt. After Refiner is done I feed it to a 1. I have my stable diffusion UI set to look for updates whenever I boot it up. 0 Refine. You can add things The difference in titles: "swarmui is a new ui for stablediffusion,", and "stable diffusion releases new official ui with amazing features" is HUGE - like a difference between a local notice board and a major newspaper publication. This may help somewhat. Put the base and refiner models in stable-diffusion-webui\models\Stable-diffusion. I experinted a lot with the "normal quality", "worst quality" stuff people often use. (Added Oct. Your Face Into Any Custom Stable Diffusion Model By Web UI. You don't actually need to use the refiner. 1. It's not hidden in the Hires. Specifically, the output ends up looking So I installed stable diffusion yesterday and I added SD 1. For example, I generate an image with a cat standing on a couch. The problem is I'm using a face from ArtBreeder, and img2img ends up changing the face too much when implementing a different style (eg: Impasto, oil painting, swirling brush strokes, etc). Visual transformers (for images, etc) have proven their worth the last year or so. 5 model for upscaling and it seems to make a decent difference. It works perfectly with only face images or half body images. Restore face makes face caked almost and looks washed up in most cases it's more of a band-aid fix . Master Consistent Character Faces with Stable Diffusion! 4. the idea was to get some initial depth\latent img but end with another model. I've tried changing the samplers, CFG, and the number of steps, but the results aren't coming out correctly. The diffusion is a random seeded process and wants to do its own thing. It will allow you to make them for SDXL and SD1. I think prompt are not a good way and I tried control net "face only" option too. Use a value around 1. "Inpaint Stable Diffusion by either drawing a mask or typing what to replace". Craft your prompt. should i train the refiner exactly as i trained the base model? Share Add a Comment. One of the weaknesses of stable diffusion is that it does not do faces well from a distance. Activate the Face Swapper via the auxiliary switch in the Functions section of the workflow. It hasn't caused me any problems so far but after not using it for a while I booted it up and my "Restore Faces" addon isn't there anymore. What model are you using and what resolution are you generating at? If you have decent amounts of VRAM, before you go to an img2img based upscale like UltimateSDUpscale, you can do a txt2img based upscale by using ControlNet tile/or ControlNet inpaint, and regenerating your image at a higher resolution. If you want to make a high quality Lora, I would recommend using Kohya and follow this video. Simply ran the prompt in txt2img with SDXL 1. The main Step one - Prompt: 80s early 90s aesthetic anime, closeup of the face of a beautiful woman exploding into magical plants and colors, living plants, moebius, highly detailed, sharp attention to detail, extremely detailed, dynamic If you have a very small face or multiple small faces in the image, you can get better results fixing faces after the upscaler, it takes a few seconds more, but much better results (v2. pt" and place it in the "embeddings" folder Small faces look bad, so upscaling does help. 4), (panties:1. 
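The ControlNet tile "regenerate at a higher resolution" trick mentioned above looks roughly like this in diffusers. It is a sketch under assumptions: the tile ControlNet is lllyasviel/control_v11f1e_sd15_tile, the base checkpoint id is a placeholder for whichever SD1.5 model you actually use, and the strength value is only a starting point.

```python
# Sketch of a ControlNet-tile upscale: naive resize, then let the model re-add detail
# while the tile ControlNet keeps the layout locked to the original.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

low_res = Image.open("gen_512.png").convert("RGB")
hi_res = low_res.resize((1024, 1024), Image.LANCZOS)  # assumes a square render

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # substitute your SD1.5 checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

out = pipe(
    prompt="same scene, highly detailed",  # keep this close to the original prompt
    image=hi_res,                 # img2img init
    control_image=hi_res,         # tile conditioning
    strength=0.5,                 # how much detail gets re-invented
    controlnet_conditioning_scale=1.0,
).images[0]
out.save("gen_1024_tiled.png")
```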
Possibly through splitting the warp diffusion clip back into frames, running the frames through your method, then recompiling into video I found a solution for me use the cmd line settings :—na-half-vae —xformers (I removed the param —no-half ) Also install the latest WebUi 1. #what-is-going-on Discord: https://discord. //lemmy. I got an issue with inpainting and controlnet, if i inpaint the background or something like a body the unpainted part the part that I don't want it to change alot example (face) it looks way different, if I add the face to inpainting I lose I had some mixed results putting the embedding name in parenthesis with 1girl token and then another with the other celeb name. 25, . What does the "refiner" do? Noticed a new functionality, "refiner", next to the "highres fix" What does it do, how does it work? Thx. Same with SDXL, you can use any two SDXL models as the base model and refiner pair. Ultimately you want to get to about 20-30 images of face and a mix of body. I started using one like you suggest, using a workflow based on streamlit from Joe Penna that was 40 steps total, first 35 on the base, remaining noise to the refiner. At that moment, I was able to just download a zip, type something in webui, and then click generate. 7 in the Denoise for Best results. It's an iterative process, unfortunately more iterative than a few images and done. I'll do my second post on the face refinement and then apply that face to a matching body style. Her golden locks cascade in large waves, adding an element of mesmerizing allure to her appearance,the atmosphere is enveloped in darkness, accentuating the intensity of the flames, Behind her, lies a sprawling landscape of ruins, evoking a sense of desolation and mystery,full You want to stay as close to 512x512 as you can for generation (with SD1. 7> Negative: EpicPhotoGasm-colorfulPhoto-neg Sure. 0 and upscalers Stable Diffusion XL - Tipps & Tricks - 1st Week. To recap what I deleted above, with one face in the source and two in the target, Reactor was changing both faces. HandRefiner: Refining Malformed Hands in Generated Images by Diffusion-based Conditional Inpainting And don't forget the power of img2img. Inpainting can fix this. 5, all extensions updated. 75 then test by prompting the image you are looking for ie, "Dog with lake in the background" through run an X,Y script with Checkpoint name and list your checkpoints, it should print out a nice picture showing the I use Automatic 1111 so that is the UI that I'm familiar with when interacting with stable diffusion models. 1 /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. An example: You impaint the face of the surprised person and after 20 generation it is just right - now that's it. A regal queen of the stars, wearing a gown engulfed in vibrant flames, emanating both heat and light. i'm using roop, but the face turns out very bad (actually the photo is after my face swap try). However, this also means that the beginning might be a bit rough ;) NSFW (Nude for example) is possible, but it's not yet recommended and can be prone to errors. 5 version, losing most Stable Diffusion is a model architecture (or a class of model architectures, there is SD1, SDXL and others) and there are many applications that support it and also many different finetuned model checkpoints. It just doesn't automatically refine the picture. 
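The handoff described above (40 steps total, the first 35 on the base, the remaining noise to the refiner) maps directly onto the denoising_end / denoising_start parameters in diffusers. A sketch following that documented pattern, assuming the stock SDXL 1.0 base and refiner weights:

```python
# Base handles the first 35/40 of the noise schedule, the refiner finishes the rest.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share weights to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "portrait photo of a woman, natural light, 85mm"
switch = 35 / 40  # fraction of the schedule handled by the base model

latents = base(
    prompt=prompt, num_inference_steps=40,
    denoising_end=switch, output_type="latent",
).images
image = refiner(
    prompt=prompt, num_inference_steps=40,
    denoising_start=switch, image=latents,
).images[0]
image.save("sdxl_base_plus_refiner.png")
```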
Navigation Menu AUTOMATIC1111 / stable-diffusion-webui Public. If the problem still persists I will do the refiner-retraining. Preferrable to use a person and photography lora as BigAsp how to use the refiner model? We’re on a journey to advance and democratize artificial intelligence through open source and open science. It can go even further with [start:end:switch] i was expecting more Consistent character faces, designs, outfits, and the like are very difficult for Stable Diffusion, and those are open problems. *PICK* (Added Oct. In your case you could just as easily refine with SDXL instead of 1. I can't figure out how to properly use refiner in inpainting workflow. Wait till 1. This speed factor is one reason I've mostly stuck with 1. Im using automatic1111 and I run the initial prompt with sdxl but the lora I made with sd1. the hand color does not see very healthy, I think the seeding took pixels from outfit. I haven't had any of the issues you guys are talking about, but I always use Restore Faces on renders of people and they come out great, even without the refiner step. Restarted, did another pull and update. Taking a good image with a poor face, then cropping into the face at an enlarged resolution of it's own, generating a new face with more detail then using an image editor to layer the new face on the old photo and using img2img again to combine them is a very common and powerful practice. 5 and protogen 2 as models, everything works fine, I can access Sd just fine, I can generate but when I generate then it looks extremely bad, like it's usually a blurry mess of What happens is that SD has problems with faces. So the trick here is adding expressions to the prompt (with weighting between them) and also found that it's better to use 0. 5-0. These settings will keep both the refiner and the base model you are using in VRAM, increasing the image generation speeds drastically. Wᴇʟᴄᴏᴍᴇ ᴛᴏ ʀ/SGExᴀᴍs – the largest community on reddit discussing education and student life in Singapore! SGExams is also more than a subreddit - we're a registered nonprofit that organises initiatives supporting students' academics, career guidance, mental health and holistic development, such as webinars and mentorship programmes. Sort by: Master Consistent Character Faces with Stable Diffusion! 4. This simple thing made me a fan of Stable Diffusion. ) Automatic1111 Web UI - PC - /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. But I'm not sure what I'm doing wrong, in the controlnet area I find the hand depth model and can use it, I would also like to use it in the adetailer (as described in Git) but can't find or select the depth model (control_v11f1p_sd15_depth) there. From SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis. 1. How to download sdxl base and refiner model from hugging face to google colab using access token When I try to inpaint a face using the Pony Diffusion model, the image generates with glitches as if it wasn't completely denoised. I've been having some good success with anime characters, so I wanted to share how I was doing things. It used the source face for the target face I designated (0 or 1), which is what it's supposed to do, but it was also replacing the other face in the target with a random face. 
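The "weighted sum merge in the merge checkpoints tab" suggested above is just a per-tensor linear interpolation, so it can also be done in a few lines with safetensors. This is a rough sketch rather than the UI's exact code: it assumes both checkpoints share an architecture, and whether alpha weights model A or model B is a convention you should double-check against your UI before comparing values like 0.25 / 0.5 / 0.75 in an X/Y grid.

```python
# Weighted-sum checkpoint merge: merged = alpha * A + (1 - alpha) * B, per tensor.
# Keys present in only one file (or with mismatched shapes) fall back to model A.
from safetensors.torch import load_file, save_file

alpha = 0.5
a = load_file("model_A.safetensors")
b = load_file("model_B.safetensors")

merged = {}
for key, tensor_a in a.items():
    if key in b and b[key].shape == tensor_a.shape:
        blend = alpha * tensor_a.float() + (1.0 - alpha) * b[key].float()
        merged[key] = blend.to(tensor_a.dtype)
    else:
        merged[key] = tensor_a

save_file(merged, f"merged_{alpha:.2f}.safetensors")
```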
Now both colab and PC installers are In summary, it's crucial to make valid comparisons when evaluating the SDXL with and without the refiner. So far whenever I use my character lora and wish to apply the refiner, I will first mask the face and then have the model inpaint the rest. Put the VAE in stable-diffusion-webui\models\VAE. 30ish range and it fits her face lora to the image without 51 votes, 39 comments. Misconfiguring nodes can lead to erroneous conclusions, and it's essential to understand the correct settings for a fair Stable Diffusion is a text-to-image generative AI model. Experimental Functions. 1, 2022) Web app StableDiffusion-Img2Img (Hugging Face) by It may well have been causing the problem. As a tip: I use this process (excluding refiner comparison) to get an overview of which sampler is best suited for my prompt, and also to refine the prompt, for example if you notice the 3 consecutive For artists, writers, gamemasters, musicians, programmers, philosophers and scientists alike! The creation of new worlds and new universes has long been a key element of speculative fiction, from the fantasy works of Tolkien and Le Guin, to the science-fiction universes of Delany and Asimov, to the tabletop realm of Gygax and Barker, and beyond. Been learning the ropes with stable diffusion, and I’m realizing faces are really hard. it works ok with adetailer as it has option to use restore face after adetailer has done detailing and it can work on but many times it kinda do more damage to the face as it Well, the faces here are mostly the same but you're right, is the way to go if you don't want to mess with ethnics loras. No need to install anything. can anybody give me tips on whats the best way to do it? or what tools can help me refine the end result? i came across the "Refiner extension" in the comments here described as "the correct way to use refiner with SDXL" but i am getting the exact same image between checking it on and off and generating the same image seed a few times as a test. Generate the image using the main lora (face will be somewhat similar but weird), then do inpaint on face using the face lora. Try the SD. 5 excels in texture and lighting realism compared to later stable diffusion models, although it struggles with hands. You can do 768x512 or 512x768 to get specific orientations, but don't stray too far from those 3 resolutions or you'll start getting very weird results (people tend to come out horribly deformed for example) /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Stable Diffusion right now doesn't use transformers. 5 model in highresfix with denoise set in the . 5 model as the "refiner"). To encode the image you need to use the "VAE Encode (for inpainting)" node which is under latent->inpaint. Since the research release the community has started to boost XL's capabilities. (basically the same as fooocus minus all the magic) - and I'm wondering if i should use a refiner for it, and if so, which one - evidently i'm going for You can just use someone elses workflow of 0. "f32 stable-diffusion". Please share your tips, tricks, and workflows for using this software to create your AI art. It can be used entirely offline. 
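The "main LoRA for the full image, dedicated face LoRA for the face inpaint" idea above can be expressed with diffusers' adapter API. A sketch, assuming a recent diffusers build with the PEFT integration; the LoRA paths, adapter names, and trigger word are placeholders:

```python
# Generate with the main character LoRA, then shift the weighting toward the face LoRA
# for a follow-up face inpaint pass.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("loras/character_main.safetensors", adapter_name="character")
pipe.load_lora_weights("loras/character_face.safetensors", adapter_name="face")

# Full image: lean on the main character LoRA.
pipe.set_adapters(["character"], adapter_weights=[0.9])
image = pipe("my_character standing in a market, full body").images[0]
image.save("character_full_body.png")

# For the face inpaint pass, emphasize the face LoRA instead (using an inpainting
# pipeline built from the same components, as in the face-fix sketch earlier).
pipe.set_adapters(["character", "face"], adapter_weights=[0.3, 1.0])
```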
Use at least 512x512, make several generations, choose best, do face restoriation if needed (GFP-GAN - but it overdoes the correction most of the time, so it is best to use layers in GIMP/Photoshop and blend the result with the original), I think some samplers from k diff are also better than others at faces, but that might be placebo/nocebo effect. just made this using epicphotogast and the negative embedding EpicPhotoGasm-colorfulPhoto-neg and lora more_details with these settings: Prompt: a man looks close into the camera, detailed, detailed skin, mall in background, photo, epic, artistic, complex background, detailed, realistic <lora:more_details:1. 5 model doing the upscaling /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. What most people do is generate an image until it looks great and then proclaim this was what they intended to do. fix tab or anything. gg Transformers are the major building block that let LLMs work. Here's a few I use. ) Automatic1111 Web UI - PC - Free How To Do Stable Diffusion LORA Training By Using Web UI On Different Models - Tested SD 1. Take your two models and do a weighted sum merge in the merge checkpoints tab and create a checkpoint at . 5 to achieve the final look. 📷 7. Try reducing the number of steps for the refiner. I assume you would have generated the preview for maybe every 100 steps. 5 model as your base model, and a second SD1. X based models), since that's what the dataset is trained on. 5 model IMG 2 IMG, like realistic vision, can increase details, but destroy faces, remove details and become doll face/plastic face Share Add a Comment /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Are there online Stable diffusion sites that do img2img? as he said he did change other things. True for Midjourney, also true for Stable Diffusion /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Within this workflow, you will define a combination of three components: the "Face Detector" for identifying faces within an image, the "Face Processor" for adjusting the detected faces, and Dear Stability AI thank you so much for making the weights auto approved. 5 ) which gives me super interesting results. My workflow and visuals of this behaviour is in the attached image. I was planning to do the same as you have already done 👍. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. X/1 instead of number of steps (dont know why but from several tests, it works better), Hi everybody, I have generated this image with following parameters: horror-themed , eerie, unsettling, dark, spooky, suspenseful, grim, highly /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Stable Diffusion looks too complicated”. For faces you can use Facedetailer. The base model is perfectly capable of generating an image on its own. 2) Set Refiner Upscale Value and Denoise value. Make sure to select inpaint area as "Only Masked". 0. 
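For the GFP-GAN tip above (restore the face, then blend with the original in GIMP/Photoshop because the restorer overcorrects), the blend can also be done in code. This loosely follows the gfpgan project's inference script; the model path is a local assumption and the 0.5 blend factor is a matter of taste.

```python
# GFP-GAN restore, then blend with the original render to tone down the over-correction.
import cv2
from PIL import Image
from gfpgan import GFPGANer

restorer = GFPGANer(
    model_path="GFPGANv1.4.pth", upscale=1,
    arch="clean", channel_multiplier=2, bg_upsampler=None,
)

bgr = cv2.imread("render.png")
_, _, restored_bgr = restorer.enhance(
    bgr, has_aligned=False, only_center_face=False, paste_back=True
)

original = Image.open("render.png").convert("RGB")
restored = Image.fromarray(cv2.cvtColor(restored_bgr, cv2.COLOR_BGR2RGB))
# 0.0 = untouched original, 1.0 = full GFP-GAN output; around 0.5 keeps skin texture.
Image.blend(original, restored, 0.5).save("render_face_restored.png")
```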
However, that's pretty much the only place I'm actually seeing a refiner mentioned. Same with my LORA, when the face are facing the camera it turns out good, but when i try to do something like that the face are ruined. true. People using utilities like Textual Inversion and DreamBooth have been able to solve the problem in narrow use cases, but to the best of my knowledge there isn't yet a reliable solution to make on-model characters without just straight up hand-holding the AI. Then I fed them to stable diffusion and kind of figured out what it sees when it studies a photo to learn a face, then went to photoshop to take out anything it learned that I didn't like. 0 where hopefully it will be more optimized Good info man. ), but I have been able to generate the back views for the same character, it's likely that for a 360 view, once it's trying to show the other side of the character you'll need to change the prompt to try to force the back, with keywords like "lateral view" and "((((back view))))", in my experience this is not super consistent, you need to find /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. And after running the face refiner I think that ComfyUI should use SDXL refiner on face and hands, but how to encode a image to feed it in as latent? /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. after a long night of trying hard with prompts and negative prompts and a swap through several models Stable Diffusion generated a face that matches perfectly for Restore Faces only really works when the face is reasonably close to the "camera". Seems that refiner doesn't work outside the mask, it's clearly visible when "return with leftover noise" flag is enabled - everything outside mask filled with noise and artifacts from base sampler. next version as it should have the newest diffusers and should be lora compatible for the first time. 6 or too many steps and it becomes a more fully SD1. 5, . 0 Base, moved it to img2img, removed the LORA and changed the checkpoint to SDXL 1. The Refiner very neatly follows the prompt and fixes that up. Prompt: An old lady posing in a bra for a picture, making a fist, bodybuilder, (angry:1. Is there a way to train stable diffusion on a particular persons face and then produce images with the trained face? Skip to main content. Wait, does that mean that stable diffusion makes good hands but I don’t know what good hands look like? Am i asking too much of stable diffusion? It seems pretty clear: prototype and experiment with Turbo to quickly explore a large number of compositions, then refine with 1. I have my VAE selection in the settings set to "Automatic". 5 model use resolution of 512x512 or 768 x 768. Hello everyone I use an anime model to generate my images with the refiner function with a realistic model ( at 0. The control Net Softedge is used to preserve the elements and shape, you can also use Lineart) 3) Setup Animate Diff Refiner /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. 
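The "prototype with Turbo, then refine with 1.5" workflow above starts with very cheap drafts. A sketch of the drafting half, assuming the stabilityai/sdxl-turbo weights; a picked draft can then go through a low-strength SD1.5 img2img polish like the one sketched a little further down.

```python
# One-step SDXL-Turbo previews for exploring compositions quickly.
# guidance_scale must be 0.0 for Turbo.
import torch
from diffusers import AutoPipelineForText2Image

turbo = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

prompt = "knight resting by a campfire in a pine forest at night"
for i in range(8):  # crank out a batch of cheap drafts and keep the best one
    draft = turbo(prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
    draft.save(f"draft_{i:02d}.png")
```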
support/docs /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. fix In my experiments, I've discovered that adding imperfections can be made manually in Photoshop using tools like liquify and painting texture and then in img2img Personally, it appears to me that stable diffusion 1. safetensors) while using SDXL (Turn it off and use Hires. 0 faces fix QUALITY), recommend if you have a good I think the ideal workflow is a bit debateable. The two keys to getting what you want out of Stable Diffusion are to find the right seed, and to find the right prompt. Do not use the high res fix section (can select none, 0 steps in the high res section), go to the refiner section instead that will be new with all your other extensions (like control net or whatever other extensions you have installed) below, enable it there (sd_xl_refiner_1. You can refine how much the 'new' face gets upscaled to try to align the detail to the destination photo. 5, SD 2. Next fork of A1111 WebUI, by Vladmandic. 5 but the parameters will need to be adjusted based on the version of Stable Diffusion you want to use (SDXL models require a Use 1. 5), (large breasts:1. 0 base, vae, and refiner models. We note that this step is optional, but improves sample I'm having to disable the refiner for anything with a human face as a result, but then I lose out on other improvements it makes. I made custom faces in a game, then fed them to Artbreeder to make them look realistic then bred them and bred them until they looked unique. Similar to online services like DALL·E, Midjourney, and Bing, users can input text prompts, and the model will generate images based on said prompts. gg /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. I can seem to make a decent basic workflow with refiner alone, and one with face detail but when I try to combine them I can't figure it out. If you're using Automatic webui, try ComfyUI instead. 2), well lit, illustration, beard, colored glasses /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. AP Workflow v5. Use 0. I think i must be using stable diffusion too much. That said, Stable Diffusion usually struggles all full body images of people, but if you do the above the hips portraits, it performs just fine. 7 in the Refiner Upscale to give a little room in the image to add details. I didn't really try it (long story, was sick etc. Auto Hand Refiner Workflow 4. 9(just search in youtube sdxl 0. How to Inject Your Trained Subject e. Model: Anything v4. I like any stable diffusion related project that's open source but InvokeAI seems to be disconnected from the community and how people are actually using SD. If you're using some web service, then very obviously that web host has access to the pics you generate and the prompts you enter, and may be I'm trying to figure out a workflow to use Stable Diffusion for style transfer, using a single reference image. You don't really need that much technical knowledge to use these. 5 embedding: Bad Prompt (make sure to rename it to "bad_prompt. 2) face by (Yoji Shinkawa 1. Wait a minute this lady is real and she is like right here and her hand is still fucked up. 6 More than 0. 
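The "base -> refiner -> 1.5" chain above ends with a light SD1.5 img2img pass over the finished SDXL image. A sketch, assuming an SD1.5 checkpoint (the repo id below is only an example; a photoreal 1.5 finetune is what the comments actually suggest) and a low strength so the composition is preserved:

```python
# Finishing pass: run the finished SDXL render through an SD1.5 checkpoint at low
# img2img strength to add texture without repainting the scene.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

sdxl_render = Image.open("sdxl_base_plus_refiner.png").convert("RGB")

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # swap in your photoreal 1.5 model
    torch_dtype=torch.float16,
).to("cuda")

polished = pipe(
    prompt="portrait photo of a woman, natural light, detailed skin",
    image=sdxl_render,
    strength=0.3,       # low denoise: refine details, keep the composition
    guidance_scale=6.0,
).images[0]
polished.save("sdxl_then_sd15_polish.png")
```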
The result image is good but not as I wanted, so next I want to tell the AI something like this "make the cat more hairy" so When I inpaint a face, it gives me slight variations on the same face. So far, LoRA's only work for me if you run them on the base and not the refiner, the networks seems to have unique architectures that would require a LoRA trained just for the the refiner, I may be mistaken though, so take this with a grain of salt. face, set ONLY MASKED and generate. The Refiner also seems to follow positioning and placement prompts without Region controls far Welcome to the unofficial ComfyUI subreddit. From what Refiner only helps under certain conditions. I do it to create the sources for my MXAI embeddings, and I probably only have to delete about 10% of my If you're using ComfyUI you can right click on a Load Image node and select "Open in MaskEditor" to draw an inpanting mask. 4, SD 1. 1, 2022) Web app stable-diffusion (Replicate) by cjwbw. With experimentation and experience, you'll learn what each thing does. Getting a single sample and using a lackluster prompt will almost always result in a terrible result, even with a lot of steps. I have updated the files I used in my below tutorial videos. So: base -> refiner -> 1. The example workflow has a base checkpoint and a refiner checkpoint, I think I understand how that's supposed to work. A list of helpful things to know what model you are using for the refiner (hint, you don't HAVE to use stabilities refiner model, you can use any model that is the same family as the base generation model - so for example a SD1. com/models/119257/gtm-comfyui-workflows-including-sdxl-and-sd15. #what-is-going-on /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. I am at Automatic1111 1. This simple thing also made my that friend a fan of Stable Diffusion. bad anatomy, disfigured, poorly drawn face, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), fused fingers, messy drawing, broken legs censor /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. I had the same idea of retraining it with the refiner model and then load the lora for the refiner model with the refiner-trained-lora. My process is to get the face first, then the body. The original prompt was supplied by sersun Prompt: Ultra realistic photo, (queen elizabeth), young, stunning model, beautiful face, /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. I am trying to find a solution. 2 or less on "high-quality high resolution" images. In my understanding, their implementation of the SDXL Refiner isn't exactly as recommended by Stability AI, but if you are happy using just the Base model (or you are happy with their approach to the Refiner), you can use it today to generate SDXL images. 2), (light For example, I wonder if there is an opportunity to refine the faces and lip syncing in this video. I'm already using all the prompt words I can find to avoid this but photorealistic nsfw, the gold standard is BigAsp, with Juggernautv8 as refiner with adetailer on the face, lips, eyes, hands, and other exposed parts, with upscaling. 
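The textual-inversion face embedding described above (trained on about a dozen photos) is used in A1111 by dropping the .pt file into the embeddings folder; the diffusers-side equivalent is load_textual_inversion. A small sketch with placeholder paths, token name, and checkpoint id:

```python
# Loading a trained textual-inversion embedding in diffusers instead of the A1111
# embeddings folder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_textual_inversion("embeddings/my_face.pt", token="<my-face>")
# Negative embeddings such as bad_prompt.pt load the same way and are then
# referenced from the negative prompt.

image = pipe("photo of <my-face> smiling, soft window light, 85mm portrait").images[0]
image.save("ti_face.png")
```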
I have a built-in tiling upscaler and face restore in my workflow: https://civitai.com/models/119257/gtm-comfyui-workflows-including-sdxl-and-sd15. I think something with sliders like FaceGen, but with a decent result, would be ideal. If you are running Stable Diffusion on your local machine, your images are not going anywhere. Using ComfyUI with 6GB VRAM is not a problem for my friend's RTX 3060 laptop; the problem is RAM usage, and 24GB (16+8) is not enough. Base + refiner alone can reach 1024x1024, but upscaling with a KSampler means regenerating or doing another refinement pass. (Added Oct. 1, 2022) Web app Stable Diffusion Multi Inpainting (Hugging Face) by multimodalart. Go to Settings > Stable Diffusion: "Maximum number of checkpoints loaded at the same time" should be set to 2, and "Only keep one model on device" should be unchecked. I want to refine an image that has already been generated. When I prompt "person sitting on a chair" or "riding a horse" or anything else non-portrait, I get nightmare fuel instead of a face, even though the other details seem to be okay.
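For the 6GB-VRAM situation above, diffusers exposes a few memory switches that trade speed for VRAM (ComfyUI applies similar tricks automatically). A sketch; whether you need all three depends on the card:

```python
# Memory-saving options for low-VRAM SDXL generation in diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
)

pipe.enable_model_cpu_offload()   # keep only the active submodule on the GPU
pipe.enable_vae_tiling()          # decode large images in tiles instead of one pass
pipe.enable_attention_slicing()   # slice attention to cut peak VRAM further

image = pipe("castle on a cliff at dawn", num_inference_steps=30).images[0]
image.save("lowvram_test.png")
```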