Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art, and please keep posted images SFW. In this guide I will try to help you get started and give you some starting workflows to work with.

It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but that can be changed to whatever you like; ignore the prompts and setup. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. It's nothing spectacular, but it gives good, consistent results. I tried to keep the noodles under control and organized so that extending the workflow isn't a pain. But for a base to start at, it'll work. Let me know if you need help replicating some of the concepts in my process.

ControlNet and T2I-Adapter ComfyUI workflow examples: note that in these examples the raw image is passed directly to the ControlNet/T2I adapter. Each ControlNet/T2I adapter needs the image passed to it to be in a specific format, like depth maps, canny maps, and so on, depending on the specific model, if you want good results.

SDXL pipeline: it provides a workflow for SDXL (base + refiner). I want a ComfyUI workflow that's compatible with SDXL, with base model, refiner model, hi-res fix, and one LoRA, all in one go. It'll add nodes as needed if you enable LoRAs or ControlNet or want it refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. For your all-in-one workflow, use the Generate tab.

If you don't have t5xxl_fp16.safetensors or clip_l.safetensors already in your ComfyUI/models/clip/ directory, you can find them on: this link. You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32GB of RAM.

It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server. Potential use cases include: streamlining the process for creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values (a sketch of that idea follows below). It is pretty amazing, but man, the documentation could use some TLC, especially on the example front.

In the Custom ComfyUI Workflow drop-down of the plugin window, I chose the real_time_lcm_sketching_api.json workflow.

ComfyUI needs a standalone node manager IMO, something that can do the whole install process and make sure the correct install paths are being used for modules. Going to python_embedded and using python -m pip install compel got the nodes working.

I built a free website where you can share & discover thousands of ComfyUI workflows: https://comfyworkflows.com :)

It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units which are represented as nodes.

I just set up ComfyUI on my new PC this weekend; it was extremely easy. Just follow the instructions on GitHub for linking your models directory from A1111; it's literally as simple as pasting the directory into the extra_model_paths.yaml.

I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. I'll also share the inpainting methods I use to correct any issues that might pop up. To get started with AI image generation, check out my guide on Medium.
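On the "programmatic experiments" use case above: here is a minimal sketch of driving ComfyUI over its HTTP API rather than clicking Queue in the UI. It assumes a ComfyUI instance running on the default port (8188) and a workflow exported via "Save (API Format)"; the filename workflow_api.json and the node id "3" are illustrative placeholders that depend on your own export.

```python
import json
import urllib.request

# Load a workflow exported via "Save (API Format)" in ComfyUI.
# The node id "3" is a placeholder: check your own export for the
# actual id of the KSampler (or whichever node you want to sweep).
with open("workflow_api.json") as f:
    workflow = json.load(f)

def queue_prompt(prompt, server="127.0.0.1:8188"):
    """POST the graph to ComfyUI's /prompt endpoint and return the prompt id."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(f"http://{server}/prompt", data=data)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

# A simple experiment: sweep CFG values, bumping the seed each time.
for cfg in (5.0, 7.0, 9.0):
    workflow["3"]["inputs"]["cfg"] = cfg
    workflow["3"]["inputs"]["seed"] += 1
    print("queued:", queue_prompt(workflow))
```

Each iteration lands in the normal ComfyUI queue, so you can watch the runs progress in the web UI while the script keeps submitting variants.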
With ComfyUI, a lot of errors occur that I can't seem to understand or figure out, and only sometimes, if I try to place the models in the default location, does it work. And the IPAdapter models, I don't know; I just don't think they work, because I can transfer a few models to the regular location, run the workflow, and it works perfectly. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. I'm using ComfyUI portable and had to install it into the embedded Python install. It also seems that the order you install things in can make the difference.

Release: AP Workflow 9.0 for ComfyUI - now featuring the SUPIR next-gen upscaler, IPAdapter Plus v2 nodes, a brand new Prompt Enricher, DALL-E 3 image generation, an advanced XYZ Plot, two types of automatic image selectors, and the capability to automatically generate captions for an image directory.

(For 12 GB of VRAM, the max is about 720p resolution.)

It's completely free and open-source, but donations would be much appreciated; you can find the download as well as the source at https://github.com/ImDarkTom/ComfyUIMini. It uses the built-in ComfyUI API to send data back and forth between the ComfyUI instance and the interface (a minimal sketch of that API follows below). You can then load or drag the following image in ComfyUI to get the workflow.

I am building this around the [Coherent Facial Expressions](…

The goal of r/ArtificialIntelligence is to provide a gateway to the many different facets of the Artificial Intelligence community, and to promote discussion relating to the ideas and concepts that we know of as AI.

Is there a way to load the workflow from an image from within ComfyUI? I am very interested in shifting from automatic1111 to working with ComfyUI. I have seen a couple of templates on GitHub and some more on civitAI; can anyone recommend the best source for ComfyUI templates? Is there a good set for doing standard tasks from automatic1111? Is there a version of Ultimate SD Upscale that has been ported to ComfyUI? Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

I have no idea why the OP didn't bother to mention that this would require the same amount of storage space as 17 SDXL checkpoints, mainly for a garbage-tier SD1.5 model I don't even want.

My actual workflow file is a little messed up at the moment, and I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. ComfyUI's inpainting and masking aren't perfect. I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture). I also had issues with this workflow with unusually-sized images.

They depend on complex pipelines and/or Mixture of Experts (MoE) that enrich the prompt in many different ways. My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above without human intervention. AP Workflow 5.0 is the first step in that direction.

EDIT: For example, this workflow shows the use of the other prompt windows. The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain to the user what exactly is happening in the workflows being generated.
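Since the post above mentions using the built-in ComfyUI API to send data back and forth, here is a hedged sketch of the receive side: polling the /history endpoint until a queued prompt finishes, then collecting the output image filenames. The endpoint and response shape follow ComfyUI's bundled API example scripts; the polling interval and timeout here are arbitrary choices.

```python
import json
import time
import urllib.request

def get_history(prompt_id, server="127.0.0.1:8188"):
    """Fetch execution records for a queued prompt from /history."""
    with urllib.request.urlopen(f"http://{server}/history/{prompt_id}") as resp:
        return json.loads(resp.read())

def wait_for_images(prompt_id, server="127.0.0.1:8188", timeout=300):
    """Poll until the prompt finishes, then return output image filenames.

    A prompt id only appears in /history once execution has completed,
    so its presence doubles as a done-flag.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        history = get_history(prompt_id, server)
        if prompt_id in history:
            outputs = history[prompt_id]["outputs"]
            return [img["filename"]
                    for node in outputs.values()
                    for img in node.get("images", [])]
        time.sleep(1)
    raise TimeoutError(f"prompt {prompt_id} did not finish in {timeout}s")
```

Paired with the queue_prompt sketch earlier, this is roughly all a lightweight mobile or web front-end needs to drive a remote ComfyUI instance.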
Thank you u/AIrjen! Love the variant generator, super cool.

A basic SDXL image generation pipeline with two stages (a first pass and an upscale/refiner pass) and optional optimizations. I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance complexity with ease of use. This is just a simple node build off what's given and some of the newer nodes that have come out.

I couldn't find the workflows to directly import into Comfy. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

I recently switched from A1111 to ComfyUI to mess around with AI-generated images. A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).

Under "./ComfyUI" you will find the file extra_model_paths.yaml.example; edit it with your favorite editor.

Flux Dev is the one I'm referring to: Flux.1 ComfyUI install guidance, workflow and example.

If I understand correctly, the best (or maybe the only) way to do it is with the plugin, using ComfyUI instead of A4. I looked into the code, and when you save your workflow you are actually "downloading" the .json file, so it goes to your default browser download folder.

Add the SuperPrompter node to your ComfyUI workflow. Connect the SuperPrompter node to other nodes in your workflow as needed. Configure the input parameters according to your requirements. Execute the workflow to generate text based on your prompts and parameters.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

Hi u/Critical_Design4187, it's definitely an active work in progress, but the goal of the project is to be able to support/run all types of workflows.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. Thanks for the responses, though; I was unaware that the metadata of the generated files contains the entire workflow (see the sketch below for how to inspect it).

You can construct an image generation workflow by chaining different blocks (called nodes) together.

A directory of shareable workflows, for example:
- Merge 2 images together with this ComfyUI workflow
- ControlNet Depth ComfyUI workflow (use ControlNet Depth to enhance your SDXL images)
- Animation workflow (a great starting point for using AnimateDiff)
- ControlNet workflow (a great starting point for using ControlNet)
- Inpainting workflow (a great starting point)

But it separates the LoRA to another workflow (and it's not based on SDXL either); see the second pic. https://youtu.be/ppE1W0-LJas - the tutorial.

I downloaded the example IPAdapter workflow from GitHub and rearranged it a little bit to make it easier to look at, so I can see what the heck is going on. With it (or any other "built-in" workflow located in the native_workflow directory), I always get this error. Thank you very much!
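On the incomplete-metadata problem above: ComfyUI embeds the workflow in PNG text chunks, and you can inspect them yourself. A small sketch, assuming Pillow is installed; the filename is a placeholder. Note that sites that re-encode uploads (Reddit included) strip these chunks, which is why a workflow picture downloaded from a post often loads nothing.

```python
import json
from PIL import Image  # pip install pillow

def extract_workflow(png_path):
    """Return the ComfyUI graph embedded in a PNG, or None if it was stripped.

    ComfyUI's image saver writes two PNG text chunks: "workflow" (the
    editable graph the UI loads on drag-and-drop) and "prompt" (the
    API-format graph). Pillow exposes PNG text chunks via Image.info.
    """
    info = Image.open(png_path).info
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None

wf = extract_workflow("ComfyUI_00001_.png")
if wf is None:
    print("no embedded workflow (metadata was stripped)")
else:
    # UI-format graphs have a "nodes" list; API-format is a flat dict.
    print("nodes:", len(wf.get("nodes", wf)))
```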
I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

AnimateDiff in ComfyUI is an amazing way to generate AI videos. [If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.] 2/ Run the step 1 workflow ONCE; all you need to change is where the original frames are and the dimensions of the output that you wish to have.

What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. ComfyUI is a completely different conceptual approach to generative art.

In the standalone Windows build you can find this file in the ComfyUI directory. Rename it by editing the extra_model_paths.yaml.example (text) file, then saving it as .yaml. Next, we need to advise ComfyUI about the above folder, and again that requires some basic Linux skills; else, https://www.bing.com is your friend.

This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. It covers the following topics: an introduction to Flux.1, an overview of the different versions of Flux.1, Flux hardware requirements, and how to install and use Flux.1 with ComfyUI. Thanks.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format.

It looks freaking amazing! Anyhow, here is a screenshot and the .json of the file I just used.

All the adapters that load images from directories that I found (Inspire Pack and WAS Node Suite) seem to sort the files by name and don't give me an option to sort them by anything else. I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder so that for each queued gen it loads the 001 image from the folder, and for the next gen grabs the 002 image from the same folder? My goal is that I start the ComfyUI workflow and the workflow loads the latest image in a given directory and works with it (a sketch of both options follows below).

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not. It has backwards compatibility with running existing workflows.

Well, I feel dumb. If anyone else is reading this and wanting the workflows, here's a few simple SDXL workflows, using the new OneButtonPrompt nodes, saving the prompt to file (I don't guarantee tidiness):

For some workflow examples, and to see what ComfyUI can do, you can check out this repo; it contains common workflows for generating AI images with ComfyUI.

The problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL when it comes to fixing it. I found that sometimes simply uninstalling and reinstalling will do it.

It would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back.
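For the images-from-a-folder question above, here is a plain-Python sketch of both behaviors: pick the most recent image, or pick the Nth image by name order (001, 002, ...). This is not an existing node; you would wrap something like it in a small custom node or a driver script. The folder name input_frames is hypothetical.

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def list_images(folder):
    """All image files in a folder, or an empty list if it doesn't exist."""
    base = Path(folder)
    if not base.is_dir():
        return []
    return [p for p in base.iterdir() if p.suffix.lower() in IMAGE_EXTS]

def latest_image(folder):
    """The most recently modified image, for 'always work on the newest file'."""
    return max(list_images(folder), key=lambda p: p.stat().st_mtime, default=None)

def image_by_index(folder, index):
    """The Nth image sorted by name, wrapping around at the end of the folder."""
    files = sorted(list_images(folder))
    return files[index % len(files)] if files else None

print(latest_image("input_frames"))
print(image_by_index("input_frames", 0))  # 001 on the first queued gen
```

Feeding the queue counter in as `index` gives the 001-then-002 behavior; in practice the Inspire Pack's directory loaders expose a similar index input, just limited to name order.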
I was just using Sytan’s workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. But standard A1111 inpainting works mostly the same as this ComfyUI example you provided.

You can find the Flux Dev diffusion model weights here. Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.

Share, discover, & run thousands of ComfyUI workflows. How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Not a specialist, just a knowledgeable beginner.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

I stopped the process at 50GB, then deleted the custom node and the models directory.

I was confused by the fact that I saw, in several YouTube videos by Sebastian Kamph and Olivio Sarikas, that they simply drop PNGs into an empty ComfyUI.

Run any ComfyUI workflow with ZERO setup (free & open source). The launcher works with the .json files saved via ComfyUI, but it also lets you export any project in a new type of file format called "launcher.json", which is designed to have 100% reproducibility.

Somebody suggested that the previous version of this workflow was a bit too messy, so this is an attempt to address the issue while guaranteeing room for future growth (the different segments of the Bus can be moved horizontally and vertically to enlarge each section/function). But mine do include workflows, for the most part, in the video description.

Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.