ComfyUI workflow examples (from Reddit)

The AP Workflow wouldn't exist without the incredible work done by all the node authors out there.

I have a client who has asked me to produce a ComfyUI workflow as a backend for a front-end mobile app (which someone else is developing using React). He wants a basic faceswap workflow.

Is there a workflow with all features and options combined together that I can simply load and use?

To make random (but realistic) examples: the moment you start to want ControlNet in 2 different workflows out of your 10, or you need to fix 4 workflows out of 10 that use the Efficiency Nodes because v2.0, released yesterday, removes the on-board switch to include/exclude the XY Plot input, or you need to manually copy some generation parameters...

ControlNet Depth ComfyUI workflow.

It covers the following topics: ComfyUI Examples. (Same seed, etc., etc.)

https://youtu.be/ppE1W0-LJas - the tutorial.

Both of the workflows in the ComfyUI article use a single image as input/prompt for the video creation and nothing else.

WAS suite has some workflow stuff in its GitHub links somewhere as well.

That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

This guide is about how to set up ComfyUI on your Windows computer to run Flux.

Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Nodes include: LoadOpenAIModel.

You would feel less of a need to build some massive super workflow because you've created yourself a subseries of tools with your existing workflows.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc.?

And then the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow.

There you just search the custom node and install it. ComfyUI's inpainting and masking aren't perfect.
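A front-end app using ComfyUI as a backend typically talks to ComfyUI's small HTTP API rather than the browser UI. The sketch below is a hedged illustration, not the client's actual pipeline: it assumes a local instance on the default port (8188) and a workflow exported with "Save (API Format)" (a JSON dict mapping node ids to `class_type`/`inputs`); the node ids, checkpoint name, and prompt text are toy placeholders, not a real faceswap graph.

```python
# Hypothetical sketch of queueing a job on a local ComfyUI instance.
# Assumes ComfyUI runs on its default port (8188) and the workflow was
# exported in API format. Node ids and inputs below are illustrative only.
import json
import urllib.request
import uuid

def build_payload(workflow: dict, client_id: str) -> dict:
    """Wrap an API-format workflow the way the /prompt endpoint expects."""
    return {"prompt": workflow, "client_id": client_id}

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """POST the workflow to a running ComfyUI server; returns the JSON reply."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps(build_payload(workflow, uuid.uuid4().hex)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Toy two-node fragment in API format (not a runnable faceswap graph):
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a portrait photo", "clip": ["1", 1]}},
}
payload = build_payload(workflow, "demo-client")
print(sorted(payload))  # ['client_id', 'prompt']
```

The mobile client never needs to see the graph: the backend keeps the workflow JSON as a template, patches a few input fields per request, and calls `queue_prompt` (which is only defined, not invoked, above, since it needs a live server).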
It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you.

It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server.

A good place to start if you have no idea how any of this works is the: ...

No, because it's not there yet.

ComfyUI Fooocus Inpaint with Segmentation Workflow. Hi Antique_Juggernaut_7, this could help me massively. Second pic.

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

Flux.1 ComfyUI install guidance, workflow and example.

You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

Welcome to the unofficial ComfyUI subreddit.

Civitai has a few workflows as well.

AP Workflow 9.2. It upscales the second image up to 4096x4096 (4xultrasharp) by default for simplicity, but that can be changed to whatever. You feed it an image, and it runs through OpenPose, Canny, lineart, whatever you decide to include.

These people are exceptional.

Svelte is a radical new approach to building user interfaces.

Please share your tips, tricks, and workflows for using this software to create your AI art. That's the one I'm referring to.

Hi there. So: I played for a few days with ComfyUI and SDXL 1.0, did some experiments, and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself: MoonRide workflow v1. My primary goal was to fully utilise the 2-stage architecture of SDXL, so I have base and refiner models working as stages in latent space.

SDXL can indeed generate a nude body, and the model itself doesn't stop you from fine-tuning it towards whatever spicy stuff there is with a dataset, at least by the looks of it.

It provides a workflow for SDXL (base + refiner).

A lot of people are just discovering this technology and want to show off what they created.

The best external source will be the @comfyui-chat website, which I believe is from the official ComfyUI team.
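A converter that turns workflow.json into a standalone script first has to recognize which of ComfyUI's two save formats it was handed: the UI save (a graph with "nodes" and "links" arrays) or the API export (a flat dict of node id to "class_type"/"inputs"). This is a minimal sketch under those assumptions, with toy workflow dicts standing in for real exports:

```python
# Sketch: tell a UI-format workflow save apart from an API-format export.
# The shape details are assumptions based on common ComfyUI save files;
# the two dicts below are toy stand-ins, not real exported workflows.

ui_style = {"nodes": [{"id": 3, "type": "KSampler"}], "links": []}
api_style = {"3": {"class_type": "KSampler", "inputs": {"seed": 42}}}

def workflow_kind(wf: dict) -> str:
    """Classify a parsed workflow JSON as UI-format or API-format."""
    if "nodes" in wf and "links" in wf:
        return "ui"
    if wf and all(isinstance(v, dict) and "class_type" in v for v in wf.values()):
        return "api"
    return "unknown"

def node_types(wf: dict) -> list:
    """List the node class names used, whichever format we were given."""
    if workflow_kind(wf) == "ui":
        return sorted({n["type"] for n in wf["nodes"]})
    return sorted({n["class_type"] for n in wf.values()})

print(workflow_kind(ui_style), node_types(api_style))  # ui ['KSampler']
```

Listing the node class names up front is also a cheap way to warn users which custom node packs a shared workflow depends on before they try to run it.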
The workflow posted here relies heavily on useless third-party nodes from unknown extensions.

Jul 28, 2024: You can adopt ComfyUI workflows to show only the needed input params in the Visionatrix UI (see docs: https://visionatrix.github.io/VixFlowsDocs/ComfyUI2VixMigration.html).

This repo contains examples of what is achievable with ComfyUI.

Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.

For your all-in-one workflow, use the Generate tab.

You may need to do some external searching, as most missing custom nodes that are outdated relative to the latest ComfyUI can't be detected or shown by the Manager.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. True.

I meant using an image as input, not video. But standard A1111 inpaint works mostly the same as this ComfyUI example you provided.

I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance the complexity with ease of use for end users.

Adding the same JSONs to the main repo would only add more hell to the commit history and just unnecessarily duplicate the already existing examples repo.

Only the LCM Sampler extension is needed, as shown in this video.

The example images on the top are using the "clip_g" slot on the SDXL encoder on the left, but using the default workflow CLIPText on the right.

I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic.

Create animations with AnimateDiff. Two workflows included.

Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow; creating programmatic experiments for various prompt/parameter values.

ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag'n'dropping a picture from that repo.
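The file placement above can be scripted. This sketch rehearses the move inside a throwaway sandbox directory so nothing real is touched; the paths are stand-ins, and you would point at your actual ComfyUI root (e.g. ~/ComfyUI) and downloaded weights file when doing it for real.

```python
# Rehearse "put flux1-dev.sft into ComfyUI/models/unet/" in a temp sandbox.
# All paths here are placeholders for the demo, not a real install.
import shutil
import tempfile
from pathlib import Path

sandbox = Path(tempfile.mkdtemp())
unet_dir = sandbox / "ComfyUI" / "models" / "unet"
unet_dir.mkdir(parents=True)                  # a real install already has this

downloaded = sandbox / "flux1-dev.sft"        # stand-in for the downloaded weights
downloaded.write_bytes(b"")                   # empty placeholder file

shutil.move(str(downloaded), str(unet_dir / "flux1-dev.sft"))
print((unet_dir / "flux1-dev.sft").exists())  # True
```

After the move, the model shows up in ComfyUI's UNET loader the next time the server scans its model folders (a restart or refresh may be needed).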
If you needed clarification, all you had to do was ask, not this rude outburst of fury.

The second workflow is called "advanced" and it uses an experimental way to combine prompts for the sampler.

Infinite Zoom.

Please keep posted images SFW.

You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment.

Everything else is the same.

If you see a few red boxes, be sure to read the Questions section on the page.

While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is.

But it separates the LoRA into another workflow (and it's not based on SDXL either).

This is an example of an image that I generated with the advanced workflow.

This is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc.

Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows.

Merging 2 images together. How it works: download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.

But it is extremely light as we speak, so much so...
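The "images contain metadata" mechanism is just PNG text chunks: ComfyUI stores the graph as JSON in the image's tEXt chunks, commonly under keys like "workflow" and "prompt" (those key names are an assumption here). A minimal stdlib sketch of pulling the chunks out; it ignores the compressed zTXt/iTXt variants, which is one reason a stripped or re-encoded image "doesn't load anything":

```python
# Sketch: extract tEXt chunks from a PNG, where ComfyUI-style images keep
# their workflow JSON. Chunk keys ("workflow") are assumed, and compressed
# zTXt/iTXt chunks are deliberately ignored in this minimal version.
import json
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert data[:8] == PNG_SIG, "not a PNG"
    out, pos = {}, 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = body.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return out

def _chunk(ctype: bytes, body: bytes) -> bytes:
    """Build one PNG chunk (used here only to fake a tiny test image)."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Fake a minimal PNG carrying a workflow, then read it back:
wf_json = json.dumps({"1": {"class_type": "KSampler", "inputs": {}}})
fake_png = (PNG_SIG
            + _chunk(b"tEXt", b"workflow\x00" + wf_json.encode("latin-1"))
            + _chunk(b"IEND", b""))
print(json.loads(png_text_chunks(fake_png)["workflow"])["1"]["class_type"])
# KSampler
```

This also explains why a screenshot or a site that recompresses uploads breaks drag-and-drop loading: the pixels survive, but the text chunks carrying the workflow do not.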
Hey everyone! I got a lot of interest in the documentation we did of 1600+ ComfyUI nodes and wanted to share the workflow + nodes we used to do so using GPT-4.

You can find the Flux Dev diffusion model weights here.

In this guide I will try to help you with starting out using this, and give you some starting workflows to work with.

Now, because I'm not actually an asshole, I'll explain some things.

The first one is very similar to the old workflow and is just called "simple". It's nothing spectacular, but it gives good, consistent results without...

Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use: img2img, ControlNet, upscale, and all.

Motion LoRAs w/ Latent Upscale: this workflow by Kosinkadink is a good example of Motion LoRAs in action.

In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be...

To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI.

Img2Img ComfyUI workflow.

I built a free website where you can share and discover thousands of ComfyUI workflows: https://comfyworkflows.com/. Share, discover, and run thousands of ComfyUI workflows.

AP Workflow 9.0 for ComfyUI.

That being said, here's a 1024x1024 comparison also.

I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.

Whereas traditional frameworks like React and Vue do the bulk of their work in the browser, Svelte shifts that work into a compile step that happens when you build your app.

To make the differences somewhat easier to see, the above image is at 512x512.

Aug 2, 2024: Flux Dev.
You can then load or drag the following image in ComfyUI to get the workflow:

You sound very angry.

Breakdown of workflow content.

SDXL Default ComfyUI workflow.

Or through searching Reddit; the ComfyUI manual needs updating, IMO. But mine do include workflows, for the most part, in the video description.

But this workflow should also help people learn about modular layouts, control systems, and a bunch of modular nodes I use in conjunction to create good images. It'll add nodes as needed if you enable LoRAs or ControlNet or want it refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to...

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

Say, for example, you made a ControlNet workflow for copying the pose of an image.

An example of the images you can generate with this workflow:

4 - The best workflow examples are through the GitHub examples pages.

However, we need it, unless there's a slight possibility that some other alternative or someone's node pack can do the same process.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

Upscaling ComfyUI workflow.

It would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back.
This is just a simple node build off what's given and some of the newer nodes that have come out.

It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting (easily done from the Fast Muter nodes) and Context Switches.

While I normally dislike providing workflows, because I feel it's better to teach someone to catch a fish than to give them one...

AnimateDiff in ComfyUI is an amazing way to generate AI videos.

I want a ComfyUI workflow that's compatible with SDXL, with base model, refiner model, hi-res fix, and one LoRA, all in one go.

For the AP Workflow 9.0, I worked closely with u/Kijai, u/glibsonoran, u/tzwm, and u/rgthree to test new nodes, optimize parameters (don't ask me about SUPIR), develop new features, and correct bugs.