ComfyUI load workflow examples (Reddit)

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art, and please keep posted images SFW.

Keyboard shortcuts:
- Ctrl + S: Save workflow
- Ctrl + O: Load workflow
- Ctrl + A: Select all nodes
- Alt + C: Collapse/uncollapse selected nodes
- Ctrl + M: Mute/unmute selected nodes
- Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
- Delete/Backspace: Delete selected nodes
- Ctrl + Backspace: Delete the current graph

Study this workflow and notes to understand the basics of ComfyUI, SDXL, and the Refiner workflow.

Still working on the whole thing, but I got the idea down. Yes, 8GB card: the ComfyUI workflow loads both SDXL base & refiner models, a separate XL VAE, 3 XL LoRAs, plus Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model and input from the same base SDXL model, and they all work together. Just my two cents. Upcoming tutorial: SDXL LoRA, plus using 1.5 LoRAs with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

You can then load or drag the following image in ComfyUI to get the workflow (ComfyUI Examples). It's simple and straight to the point. I couldn't find the workflows to directly import into Comfy.

Jul 6, 2024: Download the first image on this page and drop it in ComfyUI to load the Hi-Res Fix workflow. After studying the nodes and edges, you will know exactly what Hi-Res Fix is.

Initial Input block: will load images in two ways, 1) direct load from HDD, 2) load from a folder (picks the next image when one is generated). Prediffusion: this creates a very basic image from a simple prompt and sends it as a source. I think it was 3DS Max.

This workflow can use LoRAs and ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more. It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but that can be changed to whatever you like.

ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag'n'dropping a picture from that repo. Adding the same JSONs to the main repo would only add more hell to the commit history and would just be an unnecessary duplicate of the already existing examples repo. I was confused by the fact that I saw in several YouTube videos by Sebastian Kamph and Olivio Sarikas that they simply drop PNGs into an empty ComfyUI.

The problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL for fixing it. Or go through searching Reddit; the ComfyUI manual needs updating, imo.

You can then load or drag the following image in ComfyUI to get the workflow: Load Image Node.

The EXIF data won't capture the entire workflow, but to quickly see an overview of a generated image, this is the best you can currently get. Using the Comfy image saver node will add EXIF fields that can be read by IIB, so you can view the prompt for each image without needing to drag/drop every single one.

The API workflows are not the same format as an image workflow: you'll create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button. The workflow in the example is passed into the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead.
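To make that concrete, here is a minimal sketch of such a script: it loads an API-format workflow from a file and queues it on ComfyUI's HTTP API. It assumes a default local server at 127.0.0.1:8188 and a file named workflow_api.json exported with the "Save (API Format)" button (which appears once dev mode is enabled in the settings); the filename is a placeholder, not something the comments above prescribe.

```python
import json
from urllib import request

# Load the API-format workflow from a file instead of pasting it inline.
# "workflow_api.json" is a placeholder name; use whatever you exported
# with the "Save (API Format)" button.
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# ComfyUI's /prompt endpoint expects {"prompt": <api-format workflow>}.
payload = json.dumps({"prompt": workflow}).encode("utf-8")

req = request.Request(
    "http://127.0.0.1:8188/prompt",  # default local ComfyUI address
    data=payload,
    headers={"Content-Type": "application/json"},
)
with request.urlopen(req) as resp:
    # The server replies with a prompt_id you can use to poll /history.
    print(resp.read().decode("utf-8"))
```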
Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

For ComfyUI there should be license information for each node, in my opinion: "Commercial use: yes, no, needs license", and a workflow using non-commercial nodes should show some warning in red. This could lead users to put more pressure on developers.

Here's a quick example where the lines from the scribble actually overlap with the pose. They do overlap. I'll do you one better, and send you a PNG you can directly load into Comfy. You can see it's a bit chaotic in this case, but it works.

You can just use someone else's workflow of 0.9 (just search on YouTube for "SDXL 0.9 workflow"; the one from that Olivio Sarikas video works just fine) and just replace the models with 1.0 and upscalers.

I can load workflows from the example images through localhost:8188; this seems to work fine.

That's a bit presumptuous, considering you don't know my requirements. I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560MB/s.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there. Then restart ComfyUI. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. If you have any of those generated images in the original PNG, you can just drop them into ComfyUI and the workflow will load.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

You can encode then decode back to a normal KSampler with a 1.5 model with LCM, 4 steps and 0.2 denoise, to fix the blur and soft details. You can just use the latent without decoding and encoding to make it much faster, but it causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it.

Eh, if you build the right workflow, it will pop out 2K and 8K images without the need for a lot of RAM. Just load your image and prompt, and go.

You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. If you have the SDXL 1.0 Base and Refiner models downloaded and saved in the right place, it should work out of the box.

Of course, with so much power also comes a steep learning curve, but it is well worth it, IMHO. Instead, I created a simplified 2048x2048 workflow. I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture).

Flux.1 ComfyUI install guidance, workflow and example (Aug 2, 2024, 6 min read). This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. It covers the following topics: Introduction to Flux.1; Overview of different versions of Flux.1; Flux hardware requirements; How to install and use Flux.1 with ComfyUI.

Flux Dev: you can find the Flux Dev diffusion model weights here; put the flux1-dev.sft file in your ComfyUI/models/unet/ folder. Flux Schnell is a distilled 4-step model; you can find the Flux Schnell diffusion model weights here, and this file also goes in your ComfyUI/models/unet/ folder.
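As a quick sanity check on the file placement described in that guide, a few lines of Python can confirm the weights are where ComfyUI looks for them. This is only a convenience sketch: the ComfyUI root path and the flux1-schnell.sft filename are assumptions, so adjust them to your install.

```python
from pathlib import Path

# Adjust this to wherever your ComfyUI checkout lives (assumption).
comfyui_root = Path("ComfyUI")
unet_dir = comfyui_root / "models" / "unet"

# flux1-dev.sft is the filename mentioned in the guide above; the
# schnell filename here is an assumption based on the same pattern.
for name in ("flux1-dev.sft", "flux1-schnell.sft"):
    target = unet_dir / name
    status = "found" if target.is_file() else "MISSING"
    print(f"{target}: {status}")
```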
Somebody suggested that the previous version of this workflow was a bit too messy, so this is an attempt to address the issue while guaranteeing room for future growth (the different segments of the Bus can be moved horizontally and vertically to enlarge each section/function).

I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions. I see YouTubers drag images into ComfyUI and they get a full workflow, but when I do it, I can't seem to load any workflows.

Create animations with AnimateDiff.

ComfyUI needs a stand-alone node manager, imo, something that can do the whole install process and make sure the correct install paths are being used for modules.

But let me know if you need help replicating some of the concepts in my process. If anyone else is reading this and wanting the workflows, here are a few simple SDXL workflows, using the new OneButtonPrompt nodes and saving the prompt to a file (I don't guarantee tidiness).

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like. WAS suite has some workflow stuff in its GitHub links somewhere as well. And if you copy it into ComfyUI, it will output a text string which you can then plug into your CLIP Text Encode node, and it is then used as your SD prompt.

If the term "workflow" is something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. Moved my workflow host to: https://openart.ai/profile/neuralunk?sort=most_liked. Hope you like some of them :)

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img.

Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows: the SDXL Default ComfyUI workflow, the Img2Img ComfyUI workflow, the ControlNet Depth ComfyUI workflow, the Upscaling ComfyUI workflow, Merging 2 Images together, and more.

Many of the workflow examples can be copied either visually or by downloading a shared file containing the workflow. This repo contains examples of what is achievable with ComfyUI. Besides, by recording the precise "workflow" (= the collection of interconnected nodes), you even get reasonably good reproducibility: if you load the workflow and change nothing (including the seed), you should get exactly the same result.

I can load ComfyUI through 192.168.1.1:8188, but when I try to load a flow through one of the example images, it just does nothing. Any ideas on this?

Really happy with how this is working. This is just a simple node build off what's given and some of the newer nodes that have come out.

And another general difference is that when you set 20 steps and 0.8 denoise in A1111, it won't actually run 20 steps but rather decreases that amount to 16.
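A couple of lines make that arithmetic concrete. This is only an illustration of the behavior the comment describes, not code from A1111 itself:

```python
steps = 20
denoise = 0.8

# A1111 img2img (as described above): the sampler only runs a fraction
# of the requested steps, scaled by the denoising strength.
effective_steps = int(steps * denoise)
print(effective_steps)  # 16

# ComfyUI's KSampler, by contrast, runs all requested steps over the
# portion of the noise schedule selected by the denoise setting.
```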
The image blank can be used to copy (clipspace) to both of the load image nodes; from there you just paint your masks, set your prompts (only the base negative prompt is used in this flow), and go.

I use a Google Colab VM to run ComfyUI, so every time I reconnect I have to load a presaved workflow to continue where I started. It is not much of an inconvenience when I'm at my main PC, but when I'm doing it from a work PC or a tablet, it is a hassle to get my previous workflow back.

The best workflow examples are through the GitHub examples pages. I tried to find either of those two examples, but I have so many damn images I couldn't find them.

https://youtu.be/ppE1W0-LJas - the tutorial. You need to select the directory your frames are located in (i.e., where you extracted the frames zip file, if you are following along with the tutorial). image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation.

This is done using WAS nodes. The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder, so that for each queued gen it loads the 001 image from the folder, and for the next gen grabs the 002 image from the same folder?

I had to place the image into a zip, because people have told me that Reddit strips .pngs of metadata. Same workflow as the image I posted, but with the first image being different (second pic).

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. This is a more complex example, but it also shows you the power of ComfyUI.

It's just not intended as an upscale from the resolution used in the base model stage. My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs.

I built a free website where you can share & discover thousands of ComfyUI workflows -- https://comfyworkflows.com/. Run any ComfyUI workflow with zero setup (free & open source).

You should now be able to load the workflow, which is here. It'll load a basic SDXL workflow that includes a bunch of notes explaining things. I might do an issue in ComfyUI about that.

Also notice that you can download that image and drag'n'drop it into your ComfyUI to load that workflow, and you can also drag'n'drop images onto a Load Image node to load them quicker. If you asked about how to put it into the PNG, then you just need to create the PNG in ComfyUI and it will automatically contain the workflow as well. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.
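As a sketch of what "the PNG contains the workflow" means in practice: ComfyUI saves the graph into PNG text chunks, which is what drag-and-drop reads back. With Pillow you can inspect those chunks yourself; the filename below is a placeholder, and an image that passed through a site that strips metadata (as described above) will simply come back empty.

```python
import json
from PIL import Image  # pip install pillow

# "example.png" is a placeholder; point this at an image saved by ComfyUI.
img = Image.open("example.png")

# ComfyUI stores the graph as PNG text chunks: "workflow" holds the full
# editor graph and "prompt" holds the executable node inputs.
for key in ("workflow", "prompt"):
    raw = img.info.get(key)
    if raw is None:
        print(f"{key}: not present (metadata may have been stripped)")
    else:
        data = json.loads(raw)
        print(f"{key}: {len(data)} top-level entries")
```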
A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

One trick I learned yesterday that makes sharing workflows easier when they include pictures and videos: use the Load Video (Path) node, post your video source online (on Imgur, for example), and link to it via that node with a simple URL.

I can't load workflows from the example images using a second computer. Help, pls?

I recently switched from A1111 to ComfyUI to mess around with AI-generated images.

I actually just released an open source extension that will convert any native ComfyUI workflow into executable Python code that will run without the server. I even have a working SDXL example in raw Python on the readme.

Thank you u/AIrjen! Love the variant generator, super cool. It's nothing spectacular, but it gives good, consistent results.

If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted. So, I just made this workflow in ComfyUI.

Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images, and use stuff entirely in latent space if you want. Nobody needs all that, LOL.

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.