Why is ComfyUI faster? Collected Reddit comments
So before abandoning SDXL completely, consider first trying out ComfyUI! I recommend downloading rgthree's custom nodes; there's a feature that gives you a button to bypass or mute groups faster.

Many here do not seem to be aware that ComfyUI uses massively lower VRAM compared to A1111.

For those of you familiar with FL Studio, and specifically with Patcher, you might know what I'm about to describe. I use that to simplify my workflow. Basically, in Patcher you can string plugins together in much the same way as in ComfyUI.

I always hated these node-based programming substitutes, because they just take so much longer to accomplish the same thing.

I switched to ComfyUI after Automatic1111 broke yet again for me after the SDXL update. It is more optimised out of the box, and so can run faster on less VRAM. Comfy is more of a backend than the other options, meaning a much steeper learning curve, but it updates faster and gives you access to the bleeding edge of SD.

To verify whether I'm full of it, go generate something and check the console for your iterations per second. I *don't use* the --cpu option, and these are the results I got using the default ComfyUI workflow and the v1-5-pruned-emaonly.ckpt model. I have no idea why that is, but it just is. Finally, drop that picture you generated back into ComfyUI and press generate again while checking the iterations per second. Take it easy! 👍

Here's the thing: ComfyUI is very intimidating at first, so I completely understand why people are put off by it.

Why is everyone saying Automatic1111 is really slow with SDXL? I have it and it even runs 1-2 seconds faster than my custom 1.5 setup. I expect it will be faster.

Nodes! Because you can move them around and connect them however you want, you can also tell it to save out an image at any point along the way, which is great, because I often forget that stuff.

I have tried it (a) with one copy of SDXL running on each GPU and (b) with two copies of SDXL running per GPU.

Even if there's an issue with my installation or the implementation of the refiner in SD.next (still experimental), ComfyUI's performance is significantly faster than what you are reporting.

I started on A1111. All of that is easier and faster than rebuilding workflows, or reusing templates, or rebuilding templates when needed.

After trying everything I can finally use ComfyUI (on my computer it is faster than A1111, for SDXL in particular).

A "fork" of A1111 would mean taking a copy of it and modifying the copy with the intent of providing an alternative that can replace the original. It also is much, much faster than Automatic1111.

On the one hand, ext4 is much faster for some operations; on the other, file corruption on NTFS is basically non-existent and has been for decades.

ComfyUI already has a detailer in the Impact Pack plugin.

So I've been running ComfyUI inside a Python venv and getting OK speeds for my ancient 6 GB GPU.

UPDATE: In Automatic1111, my 3060 (12 GB) can generate a 20 base-step, 10 refiner-step 1024x1024 Euler a image in just a few seconds over a minute.

What I see in tutorials and shared workflows over and over again is that people will first upscale their image with a 4x upscaler and then downscale it by 0.5.
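That 4x-then-downscale pattern ends at the same pixel count as a single 2x pass. Here is a minimal sketch of the arithmetic, using plain Lanczos resizing as a stand-in for the ESRGAN-style model upscalers the workflows actually use; the file name is a placeholder.

```python
# Sketch only: a real workflow runs a model upscaler, not Image.resize.
from PIL import Image

img = Image.open("input.png")          # e.g. a 1024x1024 generation
w, h = img.size

# Path A: 4x up, then scale the result by 0.5 -> final size is 2x the original.
up4 = img.resize((w * 4, h * 4), Image.LANCZOS)
path_a = up4.resize((w * 2, h * 2), Image.LANCZOS)

# Path B: go straight to 2x.
path_b = img.resize((w * 2, h * 2), Image.LANCZOS)

print(path_a.size == path_b.size)      # True: both end at 2048x2048
```

The detour can still be worth it when a 4x model invents finer detail than a 2x model would, but it costs roughly four times the upscaling work for the same output resolution, which is the trade-off the later comment about just using a 2x upscaler is getting at.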
Sampling method on ComfyUI: LCM. CFG scale: from 1 to 2. Sampling steps: 4.

Loading v1-5-pruned-emaonly prints: Global Step: 840000, model_type EPS, adm 0, making attention of type 'vanilla' with 512 in_channels.

I recall when Vlad's fork (SD.Next) was said to run much faster than Automatic1111.

ComfyUI always says that its workflow describes how SD works, but it simply does not. What the ComfyUI devs say and what people do with custom nodes are different things. The ComfyUI authors are trying to confuse and mislead people into trusting this. It is not.

So I'm getting issues with my ComfyUI when loading this custom SDXL Turbo model.

And it's quite a bit faster than ADetailer (like so many other things in Comfy). I agree about the LoRAs though, that is definitely a pain if you've got a lot.

"(Composition) will be different between ComfyUI and A1111 due to various reasons." Is it faster?

As you get comfortable with ComfyUI, you can experiment and try editing a workflow.

When ComfyUI just starts, the first image generation will always be fast (1 minute at best), but the second generation (no changes to settings or parameters) and so on will always be slower, almost 1 minute slower.

[Please Help] Why is a bigger image faster to generate? This is a workflow I made yesterday and I've noticed that the second KSampler is about 7x faster, even…

Better to generate a large quantity of images, but for editing this is not really efficient. However, when SDXL was released, it was most usable in ComfyUI, so I forced myself to use it, and I've never looked back.

I have an M1 MacBook Air with 8 GPU cores (vs. the standard 7).

I think instead of using ComfyUI, it would be much faster to actually create a proper program. If it allowed more control then more people would be interested, but it just replaces dropdown menus and windows with nodes.

I need help (I just want to install normal SD, not SDXL).

That makes no sense.

SSD-1B: a distilled SDXL model that is 50% smaller and 60% faster without much quality loss!

I'm mainly using ComfyUI on my home computer for generating images. It's just the nature of how the GPU works that makes it so much faster. Workflows are much more easily reproducible and versionable.

It also seems like ComfyUI is way too intense on using heavier weights on (words:1.2) and just gives weird results.

ComfyUI uses the CPU for seeding, A1111 uses the GPU.
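That last seeding claim is easy to demonstrate outside either UI: with PyTorch, the same seed gives different starting noise depending on which device's RNG you use, which alone is enough for identical settings to produce different images. A minimal sketch, not either UI's actual code path.

```python
import torch

seed, shape = 42, (1, 4, 64, 64)       # latent shape of a 512x512 SD 1.5 image

cpu_gen = torch.Generator("cpu").manual_seed(seed)
cpu_noise = torch.randn(shape, generator=cpu_gen)

if torch.cuda.is_available():
    gpu_gen = torch.Generator("cuda").manual_seed(seed)
    gpu_noise = torch.randn(shape, generator=gpu_gen, device="cuda")
    # Same seed, different RNG stream: the tensors will not match.
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))
```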
I switched to ComfyUI from A1111 last year and haven't looked back; in fact I can't remember the last time I used A1111. It is how ComfyUI works, not how SD works. Plus, Comfy is faster, and with the ready-made workflows a lot of things can be simplified; I'm learning what works and how from them.

I have a question: how hard is it to learn ComfyUI? I started with Easy Diffusion, then moved to Automatic1111, but recently I installed ComfyUI, drag-and-dropped a workflow from Google (Sytan's workflow), and it is…

I switched from Forge to Comfy yesterday and I already love it.

Question - Help: I am upscaling a long sequence (batch / batch count) of images…

Having used ComfyUI quite a bit, I got to try Forge yesterday and it is great! It has been noticeably faster unless I want to use SDXL + refiner. Switching models takes forever in Forge compared to Comfy.

I keep hearing that A1111 uses the GPU to feed the noise-creation part, and ComfyUI uses the CPU.

I don't like ComfyUI, because in my opinion user-friendly software is more important for regular use.

The quality compared to FP8 is really close.

MAI Coffee: an exploration of how far I could push local video models today. It's actually meant to be watched on a phone screen…

On my machine, Comfy is only marginally faster than 1111.

Notably faster. In the end, it turned out Vlad enabled by default some optimization that wasn't enabled by default in Automatic1111.

Is there any website or YouTube video where I can get a full guide to its interface and workflow: how to create workflows for inpainting, ControlNet and so on?

Learn ComfyUI faster - Question: How can I proceed? I watched some videos and managed to install ComfyUI, but when I try to load workflows I found on the web or install custom nodes, I get errors about missing nodes and I can't install them from the manager.

However, with that being said, I prefer Comfy because you have more flexibility and you can really dial in your images.

Tested the failed LoRAs with A1111 and they were great.

From there I can alter things one at a time, or again regional-prompt, switching checkpoints, samplers, apps, upscalers and detailers all in one tab.

When I first saw ComfyUI I was scared by how many options there are to set. So from what I can tell, ComfyUI seems to be vastly more powerful than even Draw Things (which has a lot of configuration settings).

Most of my work is inpainting, upscaling, Krita editing, rerunning, and model swapping to get very exact results.

I spent many hours learning ComfyUI and I still don't really see the benefits.

VFX artists are also typically very familiar with node-based tools. I heard that ComfyUI generates faster. Want to use latent space? Again, one button.

If it's 2x faster with hyperthreading enabled, I'll eat my keyboard.

For example, you can do side-by-side comparisons of workflows: one with only the base model and one with base + LoRA, and see the difference.
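You can also run that base vs. base + LoRA comparison outside any UI. A sketch with the diffusers library, fixing the seed so only the LoRA changes; the model id, prompt, and LoRA path are placeholders, not anything mentioned above.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt, seed = "a lighthouse at dusk, oil painting", 1234

def render(tag):
    # Re-create the generator each time so both runs start from the same noise.
    g = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, num_inference_steps=25, generator=g).images[0]
    image.save(f"compare_{tag}.png")

render("base")
pipe.load_lora_weights("path/to/your_lora.safetensors")  # hypothetical LoRA file
render("base_plus_lora")
```

With the seed pinned, any difference between the two PNGs is attributable to the LoRA rather than to the noise.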
I wanted to get two environments running because my workflow tends towards lots of isolated…

What I can say is that I (RTX 2060 6 GB, 32 GB RAM, Windows 11) get vastly better performance on SD Forge with Flux Dev compared to Comfy (using the recommended settings).

Why are there such big speed differences when generating between ComfyUI, Automatic1111 and other solutions? And why is it so different for each GPU? A friend of mine, for example, is doing this on a GTX 960 (what a…).

ComfyUI's unique workflow is very attractive, but the speed on a Mac M1 is frustrating. How can I fix that?

I'm on an 8 GB RTX 2070 Super card. Do I have to use another workflow, or why are the images not rendered instantly, or why do I have these image issues? I provide here a link to the model from the Civitai site, the result image, and my ComfyUI workflow in a screenshot.

Recently started playing with ComfyUI and I found it is a bit faster than A1111.

ComfyUI Weekly Update: faster VAE, speed increases, early inpaint models and more. I'm into it.

This is why I and many others are unable to generate at all on A1111, or only in like 4 minutes, whereas in ComfyUI it's just 30 seconds.

I see a lot of stuff for running it in Automatic1111, but can it be used with ComfyUI?

A few weeks ago I did a "spring-cleaning" on my PC and completely wiped my Anaconda environments, packages, etc.

Save up for an Nvidia card, and it doesn't have to be the 4090. Some of the ones with 16 GB of VRAM are pretty cheap now.

So, as long as you don't expect ComfyUI not to break occasionally, sure, give it a go.

The main difference between ComfyUI and Automatic1111 is that Comfy uses a non-destructive workflow (comfyanonymous.github.io).

Forge is built on top of the A1111 web UI, as you said. They've been trying to unify the community around ComfyUI for a while, and in that respect, lllyasviel trying to distance his project from ComfyUI while using their code must be like using their own hand to slap their face, since it undermines…

Then go disable Hyperthreading in the UEFI.

I've been scheduling prompts on hundreds of images for AnimateDiff for a long time, with giant batches of 1000+ frames. ComfyUI wasn't designed for AnimateDiff and long batches, yet it's the best platform for it thanks to the community. Now I've been on ComfyUI for a few months and I won't turn on A1111 anymore.

When I upload them, the prompts get automatically detected and displayed, but not the resources used. Thank you for your response.

I think ComfyUI remains far more efficient in loading when it comes to the model / refiner, so it can pump things out faster. ComfyUI also uses xformers by default, which is non-deterministic.

No idea why, but I get like 7.13 s/it on ComfyUI and on WebUI I get like 173 s/it.

I have yet to find anything that I could do in A1111 that I can't do in ComfyUI, including X/Y/Z plots.

My system is more powerful than yours, but not enough to justify this enormous…

No spaghetti, no figuring out why this latent needs these 4 nodes and why one of them didn't work since the last update.
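One commonly cited reason iterating feels quicker in a graph UI, which ties into the non-destructive-workflow and efficient-model-loading points above, is that node outputs can be cached, so changing only a downstream setting does not re-run the expensive upstream steps. A toy illustration of that caching idea, not ComfyUI's actual executor:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=None)
def load_checkpoint(name):
    time.sleep(2)                      # pretend this is the slow part
    return f"weights<{name}>"

@lru_cache(maxsize=None)
def encode_prompt(model, prompt):
    time.sleep(0.5)
    return f"cond<{model}:{prompt}>"

def sample(model, cond, seed):
    return f"image<{cond}@{seed}>"

def run(prompt, seed):
    model = load_checkpoint("sd15.safetensors")   # hypothetical file name
    cond = encode_prompt(model, prompt)
    return sample(model, cond, seed)

run("a castle", seed=1)   # slow: everything executes
run("a castle", seed=2)   # fast: only the sampler re-runs, upstream results are cached
```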
At the end of the day, I'm faster with A1111: better UI shortcuts, a better inpainting tool, better use of copy/paste via the clipboard when you want to work with Photoshop. The only cool thing is that you can repeat the same task from the…

Hi :) I am using AnimateDiff in ComfyUI to output videos, but the speed feels very slow.

"flux1-dev-bnb-nf4" is a new Flux model that is nearly 4 times faster than the Flux Dev version and 3 times faster than the Flux Schnell version.

When you build on top of software made by someone else, there are many ways to do it.

On my rig, it's about 50% faster, so I tend to mass-generate images on ComfyUI, then bring any images I need to fine-tune over to A1111 for inpainting and the like.

I did try using SDXL 1.0 on my RTX 2060 laptop (6 GB VRAM) on both A1111 and ComfyUI. A1111 took forever to generate an image without the refiner, the UI was very laggy, and even after removing all the extensions nothing really changed; the image always got stuck at 98% and I don't know why.

Sorry to say that it won't be much faster, even if you overclock the CPU.

If someone needs more context, please do ask.

…but there are many anecdotes on this subreddit that ComfyUI is much faster than A1111, without much info to back them up.

ComfyUI is really good for more "professional" use and lets you do much more, if you know what you are doing, but it's harder to navigate through each setting: if you want to tweak, you have to move around the screen a lot, zoom in, zoom out, etc.

Very nice, working well, way faster than the previous method I was using.

I was using Automatic1111 before; now I use ComfyUI as my main image generator. But the speed difference is far more noticeable on lower-VRAM setups, as ComfyUI is way more efficient when it comes to using RAM and VRAM.

It's possible, I suppose, that there's something…

Sure, my paintbrush never crashed after an update, but then ComfyUI doesn't get crimped in my bag, my LoRAs don't need cleaning, and a PNG is quite a bit cheaper than canvas.

ComfyUI is also trivial to extend with custom nodes. But you can achieve this faster in A1111, considering the workflow of ComfyUI.

So from that aspect, they'll never give the same results unless you set A1111 to use the CPU for the seed. Hope I didn't crush your dreams.

I don't do a lot of just prompt-and-run work (which ComfyUI is just as good at, if not much better). While the Kohya samples were very good, the ComfyUI tests were awful. Then I tested my previous LoRAs with ComfyUI and they sucked as well.

ComfyUI is much better suited for studio use than the other GUIs available now. But one of the really cool things it has is a separate tab for a "Control Surface".

UPDATE 2: If you meant s/it, I suggest you edit your comment, even though it will leave me looking…
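Several numbers quoted in this thread mix up s/it and it/s (the UPDATE 2 comment is about exactly that), and the two readings differ by a factor of the square of the value, so the conversion is worth spelling out. A trivial helper:

```python
def to_iterations_per_second(value, unit):
    """unit is 's/it' (seconds per iteration) or 'it/s' (iterations per second)."""
    return 1.0 / value if unit == "s/it" else value

print(to_iterations_per_second(7.13, "s/it"))   # ~0.14 it/s (each step takes ~7 seconds)
print(to_iterations_per_second(7.13, "it/s"))   # 7.13 it/s (about 50x faster than the above)
```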
I also recently tried Fooocus and found it lacked customisation personally, but I appreciate the awesome inpainting they have, and their Midjourney-inspired prompt handling is really cool.

I think the noise is also generated differently: A1111 uses the GPU by default and ComfyUI uses the CPU by default.

I am running ComfyUI on a machine with 2x RTX 4090 and am trying to use the ComfyUI_NetDist custom node to run multiple copies of the ComfyUI server, each using a separate GPU, to speed up batch generation.

I'm starting to make my way towards ComfyUI from A1111. I like the web UI more, but ComfyUI just gets things done quicker, and I can't figure out why; it's breaking my brain.

The shots were all ComfyUI, with editing in Premiere.

I guess the GPU would be faster; I have no evidence, just a guess.

Hey everyone! I'm excited to share the latest update to my free workflow for ComfyUI, "Fast Creator v1.4". This update includes new features and improvements to make your image creation process faster and more efficient.

With my 8 GB RX 6600, which could only run SDXL under SD.Next (out of memory after 1-2 runs, and only at the default 1024x1024), I was able to use this in ComfyUI, but only at 512x512 or 768x512 / 512x768 (memory errors even…).

Turbo SDXL LoRA - Stable Diffusion XL faster than light. My Civitai page: https… A few seconds = 1 image. Tested on ComfyUI.

I've tried everything, reinstalled drivers, reinstalled the app, and still can't get WebUI to run quicker.

The only thing I needed was to drop the --no-half option, and then it generates images as fast as in ComfyUI.

But then I realized, shouldn't it be possible (and faster) to link the output of one into the next instead? In this example I have the first 3 samplers executing once each, showing the 1-, 2- and 3-step sample stages, and the last one is unlinked and just doing a standard 3-step sample.

Even though I keep hearing people focus the discussion on the time it takes to generate the image (and yes, ComfyUI is faster, I have a 3060), I would like people to discuss whether the image quality is better in one or the other.
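For the 2x RTX 4090 setup above, the usual pattern is one ComfyUI server per GPU, with something like ComfyUI_NetDist (or your own script) farming jobs out to them. A sketch of just the launch step, assuming a local ComfyUI checkout; the install path is a placeholder, --port is a standard ComfyUI flag, and CUDA_VISIBLE_DEVICES is the generic way to pin a process to one GPU.

```python
import os
import subprocess
import sys

COMFY_DIR = "/opt/ComfyUI"   # adjust to your checkout

def launch(gpu_index, port):
    # Each server only sees one GPU and listens on its own port.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_index))
    return subprocess.Popen(
        [sys.executable, "main.py", "--port", str(port)],
        cwd=COMFY_DIR,
        env=env,
    )

servers = [launch(0, 8188), launch(1, 8189)]
for proc in servers:
    proc.wait()
```

A dispatcher (NetDist, or a small script hitting each server's HTTP API) can then alternate queued prompts between ports 8188 and 8189.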
Asked Reddit what was going on and everyone blindly copy-pasted the same thing over and over.

Comfy is basically a backend with a very light frontend, while A1111 is a very heavy frontend.

ComfyUI is amazing. It's the first UI that's actually been able to challenge A1111's dominance in any kind of way; the other UIs mostly fly under the radar. So it might be counterintuitive, but it's only getting as much loud negative attention as it is because it's getting just as much or more quiet love.

Is this more or less accurate? While obviously it seems like ComfyUI has a big learning curve, my goal is to actually make pretty decent stuff, so if I have to put the time investment into Comfy, that's fine with me.

From what I gather, only A1111 and its derivatives can correctly append metadata like prompts, CFG scale and the checkpoints/LoRAs used, while ComfyUI cannot, at least not the resources. You should be able to drop images into ComfyUI as well, though, and it will load up the workflow.

Shouldn't you be able to reach the same-ish result faster if you just upscale with a 2x upscaler?

#ComfyUI #Ultimate Upscale - a faster upscale, same quality.

Forge's memory management is sublime, on the other hand.

A non-destructive workflow is a workflow where you can reverse and redo something earlier in the pipeline after working on later steps.

At the moment there are 3 ways ComfyUI is distributed: 1. Standalone: everything is contained in the zip, so you could use it on a brand new system; easier to install and run, but tends… You can try launching it with:…

Comfy is faster than A1111 though, and you have a lot of creative freedom to play around with latents, mix and match models, and do other crazy stuff in a workflow that can be built and re-used.

It's still 30 seconds slower than ComfyUI at the same 1366x768 resolution and 105 steps.

Don't know if inpainting works with SDXL, but ComfyUI inpainting works with SD 1.5. But I'm getting better results, based on my abilities / lack thereof.

I started with A1111, switched for a few weeks to SD.Next, then back.

Before 1.6 I couldn't run SDXL in A1111, so I was using ComfyUI. I had previously used ComfyUI with SDXL 0.9 and it was quite fast on my 8 GB VRAM GPU (RTX 3070 Laptop). I tried ComfyUI early on, but I wasn't mentally prepared; it was too different.

I used ComfyUI for a while, but on Linux with my AMD card I found I was constantly getting OOM driver freezes and graphical glitches.

Tbh, I am more interested in why the LoRA is so much different.

The weights are also interpreted differently. For instance, (word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111.
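On those weighting comments: the usual explanation is that A1111 rescales the conditioning after applying emphasis so its overall mean is preserved, while ComfyUI's default weighting scheme does not include that normalization, so the same numeric weight pushes harder. The toy numbers below only illustrate that normalization difference; neither UI literally computes it this way, and the arrays are made up.

```python
import torch

torch.manual_seed(0)
emb = torch.rand(4, 8)                                      # toy "token embeddings"
weights = torch.tensor([1.0, 1.4, 1.0, 1.0]).unsqueeze(1)   # emphasize the second token

scaled_only = emb * weights                                 # no rescale afterwards

scaled_then_renormalized = emb * weights
scaled_then_renormalized *= emb.mean() / scaled_then_renormalized.mean()  # restore the original mean

print((scaled_only - emb).norm())                # larger shift from the unweighted embedding
print((scaled_then_renormalized - emb).norm())   # smaller shift: the rescale softens the emphasis
```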
Just check your VRAM and be sure optimizations like xformers are set up correctly, because other UIs like ComfyUI already enable those, so you don't really feel the higher VRAM usage of SDXL.

Is there anyone in the same situation as me?

Yes, I don't have a Mac to test on, so the speed is not optimized for it.

I can't thank you enough.

I just dropped in the Marigold depth estimator, as it now has an LCM model that is much faster. If you don't want to deal with this, you can use any of the other depth estimators, or ideally render a depth pass from your 3D software and use that.

Maybe I am doing it wrong, but ComfyUI inpainting is a bit awkward to use. You have to draw a mask, save the image with the mask, then upload it to the UI again to inpaint.

Comfy does launch faster than Auto1111, but the UI will start to freeze if you run a batch or have multiple generations going at the same time.

I don't really care about getting the same image from both of them, but if you check closely, the Automatic1111 one is almost perfect (you don't have to know the model, it is almost real), while the ComfyUI one looks as if I had reduced the LoRA weight or something.

If I restart the app, then it will be faster again, but again, the second generation and so on will be slower again.

I only go back to Forge, which is the same as Automatic1111 but with some improvements, for inpainting and X/Y/Z plots, because I think it's faster and more convenient.
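A quick way to act on the "check your VRAM and make sure xformers is set up" advice at the top of this block, outside either UI. This is only a debugging sketch: it confirms the xformers package imports and reports how much VRAM is currently in use on the GPU.

```python
import torch

try:
    import xformers  # noqa: F401
    print("xformers available:", xformers.__version__)
except ImportError:
    print("xformers not installed; attention will fall back to slower kernels")

if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()   # bytes free / total on the current device
    print(f"VRAM: {(total - free) / 2**30:.1f} GiB used of {total / 2**30:.1f} GiB")
```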