ComfyUI remove background (collected Reddit threads)

- u2netp: faster processing with a slight quality trade-off.
- u2net_human_seg: optimized for human subjects.

Damn, almost there: I pull out a Remove Background node and an Image Bound node from WAS, but the WAS Remove BG node can't output the mask, and that's where I'm stuck.

An intro to three methods for removing backgrounds in ComfyUI, workflows included.

I'd like to know the best way to composite a studio shot of my subject onto an AI-generated background (one I may already have), considering both options for the background: starting from a prompt, or starting from an image.

Welcome to the unofficial ComfyUI subreddit. You can select a transparent or a solid-color background.

Preparation work (not in ComfyUI): take a clip and remove the background (with any video editor that has a rotobrush or, as in my case, with RunwayML); extract the frames from the clip (in my case with ffmpeg); copy the frames into the corresponding input folder (important: saved as 000XX.png).

How do I remove a background in ComfyUI? Here's the thing: ComfyUI is very intimidating at first, so I completely understand why people are put off by it. My workflow currently removes the background from an Animate Anyone video with the rembg node, and then I want to layer it frame by frame onto the SVD frames.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

I'm trying to do something very simple: change the background of an image.

In the base folder of ComfyUI (the one that contains the ComfyUI folder), you'll also see a folder called python_embeded.

To fix this, what I really want is to put the "subject" inside the prediffusion group; does that sound right? Or would that not fix it either, and what I actually need is the compositing approach? Thanks! Things I tried: using the mask to inpaint (a few different ways) to basically remove the woman, then add her back in via composite.
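The model notes above can be wired up with the rembg library's `new_session`/`remove` API. This is a minimal sketch, not the ComfyUI node's actual code; the `MODEL_FOR_SUBJECT` mapping and `pick_model` helper are illustrative names of my own, and the rembg call assumes you have `rembg` installed.

```python
# Sketch: choosing a rembg model per subject type, per the notes above.
# pick_model() is a hypothetical helper; the rembg import is done lazily
# so the pure-Python part works without the third-party package.

MODEL_FOR_SUBJECT = {
    "general": "u2net",         # high-quality, general-purpose
    "fast": "u2netp",           # faster, slight quality trade-off
    "human": "u2net_human_seg", # optimized for human subjects
}

def pick_model(subject: str) -> str:
    """Return a rembg model name for a subject type, defaulting to u2net."""
    return MODEL_FOR_SUBJECT.get(subject, "u2net")

def remove_background(png_bytes: bytes, subject: str = "general") -> bytes:
    """Remove the background from PNG bytes with rembg, if installed."""
    from rembg import remove, new_session  # third-party, imported lazily
    session = new_session(pick_model(subject))
    return remove(png_bytes, session=session)
```

Reusing one session across many images avoids reloading the ONNX model each time, which matters for batch runs.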
(From a Blender thread:) Go to Edit mode and select all by pressing "A" (everything turns yellow as all polygons are selected), then press "U" and choose Unwrap, then go back to Object mode.

Not at all. It's true that Bria removes the background by masking, but I was referring to background replacement, which is done automatically here. Bria only removes it; if you want to replace it, you'll need to inpaint the background again. Welcome to the unofficial ComfyUI subreddit.

The mask is derived from the alpha channel of the processed image.

I've also used ComfyUI to do a style transfer to videos and images with our brand style.

rembg-comfyui-node is on GitHub (Jcd1230/rembg-comfyui-node).

I'm trying to figure this out, but I can't find anyone else making a workflow like this.

SECOND UPDATE, HOLY COW I LOVE COMFYUI EDITION: look at that beauty! Spaghetti no more.

But I found something that could refresh this project, with better results and better maneuverability: in this project you can choose which ONNX model to use, and different models give different results.

PramaLLC: "You can use our BEN model commercially without any problem."

I understand how outpainting is supposed to work in ComfyUI. Testing IC-Light for background replacement.

The solution is: don't load Runpod's ComfyUI template; load Fast Stable Diffusion instead.
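The point that "the mask is derived from the alpha channel of the processed image" can be shown with a tiny pure-Python sketch (real nodes do this on image tensors; the function names here are my own, not the node's API):

```python
# After background removal, the subject has alpha 255 and the removed
# background has alpha 0, so the alpha plane *is* the mask.

def alpha_to_mask(rgba_pixels):
    """Extract the alpha channel from (r, g, b, a) tuples as a mask."""
    return [a for (_r, _g, _b, a) in rgba_pixels]

def invert_mask(mask):
    """Invert an 8-bit mask, e.g. to select the background instead."""
    return [255 - v for v in mask]
```

Inverting the derived mask is the usual trick when you want to inpaint the background while protecting the subject.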
A lot of people are just discovering this technology and want to show off what they created.

I had been looking for a good, reliable solution to use in ComfyUI and stumbled on this one. Testing IC-Light makes life easier for background replacement in ComfyUI.

The preview chooser / image chooser node is a custom node that pauses the flow while you choose which image (or latent) to pass on to the rest of the workflow. So you get the preview and a button to continue the workflow, but no mask, and you'd need to add a Save Image node after it in your workflow.

ComfyFlow: from ComfyUI workflow to web app, in seconds.

Many thanks to the author of rembg-comfyui-node for his very nice work; this is a very useful tool.

ComfyUI-DragNUWA: DragNUWA enables users to manipulate backgrounds or objects within images directly, and the model seamlessly translates these actions into camera movements or object motions, generating the corresponding video.

I use InvokeAI, which has a function to add latent noise to masks during img2img.

I've seen people say ComfyUI is better than A1111 and gives better results, so I wanted to give it a try, but I can't find a good guide on installing it on an AMD GPU. The resources conflict: the original ComfyUI GitHub page says to install DirectML (and to run it somehow if you already have A1111), while other places say you need Miniconda/Anaconda.

A typical product workflow: start from an existing picture or generate a product, segment the subject via SAM, then generate a new background.
Please share your tips. Windows does things in the background to manage memory and swap.

To make Spider-Man blend in even better with the background, I'm going to increase the threshold in the Matte tab to remove some of Spider-Man's edges.

The only references I've been able to find mention this inpainting model being used from raw Python or Auto1111.

Custom node: LoRA Caption in ComfyUI. I made them and posted them last week.

Where things got a bit crazy was trying to avoid having the KSampler run when nothing was detected, because ComfyUI doesn't really support branching workflows, as far as I know.

ComfyUI-Inspyrenet-Rembg (john-mnz): a ComfyUI node for background removal implementing InSPyReNet, the best method to date.

New to ComfyUI. Notice the flip from s/it to it/s.

I've got it hooked up in an SDXL flow and I'm bruising my knuckles on SDXL. I have basically hit the wall of what I can do with ComfyUI, because for me the more complex Visio-style noodle progress doesn't actually make the image better, just a little different.

That would give you your Xenomorph and your lady with the same background, or they would blend together, I think.

Use "ImageCompositeMasked" to remove the background from the character image and align it with the background image.
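Conceptually, a masked composite like ImageCompositeMasked blends foreground over background wherever the mask is set. A pure-Python sketch of that per-pixel math (the real node works on batched tensors; this is just the idea, with my own function name):

```python
def composite_masked(fg, bg, mask):
    """Per-pixel composite: mask 255 keeps the foreground, mask 0 keeps
    the background, with linear blending in between (mask is 0-255)."""
    out = []
    for f, b, m in zip(fg, bg, mask):
        a = m / 255.0
        out.append(tuple(round(fc * a + bc * (1 - a))
                         for fc, bc in zip(f, b)))
    return out
```

Feathered (non-binary) mask edges are what make the pasted subject sit naturally instead of having a hard cut-out border.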
I switched to ComfyUI from A1111 last year and haven't looked back; in fact I can't remember the last time I used A1111.

I successfully removed the background from an image and turned a suitcase into a mask, but the background I want is being applied to the suitcase as a texture. I took a picture with the product in the center, and now I want to change the background to anything I want.

Please keep posted images SFW.

Nodes: Remove Image Background (ABG).

I was wondering if anyone knows whether video object removal is possible in ComfyUI, like at https://anieraser.media, maybe using an inpainting technique.

Hi, I'm new to ComfyUI and not too familiar with the tech involved.

60+ workflows, including Remove Background + batch remove. Product backgrounds in ComfyUI (YouTube). ComfyUI Tutorial: Background and Light control using IPAdapter (YouTube).

The easiest way would be to replace the background with a different image in the style I want, yet I wanted to do it in one go in ComfyUI, because the fusion would be interesting if it's done in one pass.

I usually get around 1.30s/it with A1111 (1 picture @ 768x512), but I'll regularly get 1.00it/s with ComfyUI.

I just tried it in Comfy, and it doesn't make the background alpha channel transparent unless the background of the source image was already a neutral gray.

Remove 3/4 of the stick figures in the pose image.

Look for models that favor landscapes, or general models like DreamShaper-style ones: anything not focused on photorealistic people, which a solid 50-80 percent are.

So I've masked the background and generated the image, but I have two problems: 1) what's outside the mask also changed (the guy in the foreground); 2) the edges are very bad.
I used Bria AI for the background removal.

I'm looking for a way to set up a generation pipeline that reliably produces subjects on a flat, solid white background. Generate one character at a time and remove the background with the Rembg Background Removal Node for ComfyUI. Your best bet is model and prompt.

I'm looking for a workflow (or tutorial) that enables removal of an object or region (generative fill) in an image.

The available models are:
- u2net: a pre-trained model for general use.

This ComfyUI workflow lets you remove or replace backgrounds, which is a must for anyone wanting to enhance their products by either removing a background or replacing it with something new.

ABG: an anime background remover node for ComfyUI, based on a Hugging Face space; works the same as the ABG extension in Automatic1111.

Alternatively, you can remove images whose backgrounds are very similar to each other. Positive prompts with wide lenses, detailed background, 35mm, etc. help.

What node is the fastest way to remove a background? Looking for something efficient.
If you manually installed a venv/conda for ComfyUI, it's assumed you already know how to do this, so I'm assuming you're using Windows and a portable ComfyUI.

Belittling their efforts will get you banned. Cheers o/

What's the best background remover for Stable Diffusion? (I use ComfyUI.)

Type experiments: ControlNet and IPAdapter in ComfyUI.

- isnet-general-use: balanced performance for various subjects.
- silueta: enhanced edge detection for finer details.

Anyway, thanks for sharing; I'll go play with them now. I'll probably stick to ComfyUI from here on out, unless A1111 gets some crazy feature that ComfyUI doesn't have.

Swapping out the background while keeping the subject can easily be done with the remove-background and compose-image nodes in WAS Suite, for example.

I'm trying to use IPAdapter with only a cutout of an outfit rather than a whole image.

I am open-sourcing my human segmentation dataset for creating a truly open background-remover model.

- u2net: general-purpose, high-quality background removal.

I always get this slight green colour in the background.

To create a seamless workflow in ComfyUI that can render any image and produce a clean mask (with accurate hair detail) for compositing onto any background, you will need nodes designed for high-quality image processing and precise masking.
I want to fill the background (preview image). Welcome to the unofficial ComfyUI subreddit.

Now I'll go to the Fusion tab and use the Delta Keyer to remove the green background from the second layer.

As you can see, his little grin has disappeared and his face is not the same.

LINK TO THE WORKFLOW IMAGE.

Rembg is a tool to remove image backgrounds. The node utilizes the Remover class. It takes an image tensor as input and returns two outputs: the image with the background removed, and a mask.

So the object fits better.

I agree with anybunnywww.

Generating separate background and character images: I'm trying to create an image with a character and a background around it, but if I use a small image size like 512x512, the result only includes the character and very little background. When I try to change the size of the image, for example to 1080x1920, it changes again.

Does anyone have tips for inpainting / replacing backgrounds such that the generated background image can be informed a bit by the matted foreground elements?
I have a situation where I'm using 3D renders of birds (just using alpha channels + Canny for the silhouettes and discarding the RGB renders from Blender) to generate some geese, matting them out, and then compositing them.

Thought it was really cool, so I wanted to share it here. Workflow attached.

Pretty sure lowvram and medvram are not only useless but actively an obstruction, at least in some setups.

If you want something to make a mask for you, Segment Anything will make a mask based on anything you name within the image.

EDIT: fill it with red (255-0-0) on a black background, then select the "red" channel.

For my task, I'm copy-and-pasting a subject image (transparent PNG) into a background, but then I want to do something to make it look like the subject was naturally in the scene.

I'm trying to figure out if I can use ComfyUI to segment wires/cables in the background of images so that I can remove them.

To do this, I have to select the green color using the dropper tool.
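The "select the green color with the dropper" step is a chroma key: any pixel close enough to the sampled key colour becomes transparent. A minimal pure-Python sketch (a per-channel tolerance box, far cruder than a real Delta Keyer, which works in a colour-difference space and handles spill):

```python
def chroma_key(pixels, key=(0, 255, 0), tol=60):
    """Return RGBA tuples: pixels within `tol` of the key colour on every
    channel become fully transparent, everything else stays opaque."""
    out = []
    for (r, g, b) in pixels:
        near = all(abs(c - k) <= tol for c, k in zip((r, g, b), key))
        out.append((r, g, b, 0 if near else 255))
    return out
```

Raising `tol` removes more of the green edge fringe but starts eating into the subject, which is the same trade-off as raising the Matte threshold mentioned above.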
The first is a tool that automatically removes the background of loaded images (we can do this with WAS), BUT it also allows dynamic repositioning, the way you would do it in Krita.

I took the dive into AI last month thinking it would be a nice tool for my 3D CAD modelling workflow.

LayerDiffusion does work for img2img, but not very usefully as far as I can see.

Any Photoshop users that can help me? I need to remove the background of this pic and apply a filter to make it white instead of black.

Testing IC-Light for background replacement.

Hi everyone, I am new to ComfyUI and I would like to know if there is a way to generate a character on one hand and a background on the other, such as a city, and reach a point where both are merged in the KSampler, respecting the same lighting as much as possible rather than simply joining them.

The first time, there was a note saying there was a new ComfyUI and I should update from the bat file.

(Character background swap, color correction, and more!)

Also, the Nodes Library. There's also a website that removes backgrounds for free, and it's 100x better than the Stable Diffusion version.

My menu node, or whatever it's called, is missing whenever I launch ComfyUI.

Within that, you'll find RNPD-ComfyUI.ipynb.

The current workflow I have uses a mask that works and changes the background, but the edges of the product look out of place; it just looks like a bad Photoshop job.

Just select the primary image and it will copy it out with the background made into a transparent alpha layer (if that's the correct term).
I saw there's a node for image background removal called rembg background removal, but it doesn't work with a video.

I was going to make a post regarding your tutorial "ComfyUI Fundamentals - Masking - Inpainting".

I know it's counterintuitive, but since the background prompt applies to the full picture, so does OpenPose.

I may check some simple workflows there, plus the links to the ComfyUI blog and other useful resources.

If there is only one figure and you want to remove the background, any iPhone will also do.

This is a workflow for creating SillyTavern characters. Not sure how much difference the IPAdapter would make anyway. That's it.

Please share your tips, tricks, and workflows for using this software to create your AI art.

As you can see, we can understand a number of things Krea is doing here.

I've been using A1111 for over a year now and I've never seen it flip to it/s.

AI-powered removal using multiple models.

Keep a green background for video editing later. Repeat the two previous steps for all characters.

Download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Some models have people in almost all training samples.

The question: in ComfyUI, how do you persist your random / wildcard / generated prompt for your images so that you can understand the specifics of the true prompt that created the image?
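Since rembg itself is image-only, the usual answer to "it doesn't work with a video" is the frame loop described elsewhere in these threads: split the clip to numbered frames, remove the background from each, and reassemble. A stdlib-only sketch of that loop; `remove_fn`, `load`, and `save` are placeholders for whatever backend and I/O you actually use:

```python
def frame_name(i, digits=5):
    """Zero-padded frame filename matching the 000XX.png convention."""
    return f"{i:0{digits}d}.png"

def process_frames(n_frames, remove_fn, load, save):
    """Run background removal frame by frame, in order. load/save are
    callables so this sketch stays independent of any image library."""
    for i in range(n_frames):
        name = frame_name(i)
        save(name, remove_fn(load(name)))
```

Keeping the zero-padded naming consistent on both extract and save is what lets ffmpeg stitch the processed frames back into a clip in the right order.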
Testing IC-Light. I was wondering if I can use two checkpoints in ComfyUI: I have a workflow where I create an image with one checkpoint, remove the background, and in the same workflow use another checkpoint to create the background, then merge them.

I have yet to find anything I could do in A1111 that I can't do in ComfyUI, including XYZ plots.

It also works with "green" (0-255-0).

I began from scratch, starting to read the official manuals on how to operate ComfyUI from the official ComfyUI repo.

While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component, which allowed me to move all the mask-logic nodes behind the scenes.

Although the goal is to create wallpapers, it can really be used to expand anything, in any direction, by any amount.

When I save my final PNG image out of ComfyUI, it automatically includes my ComfyUI data/prompts, etc., so that any image made from it, when dragged back into Comfy, sets ComfyUI back up with all the prompts and data just like the moment I originally created the image.

Sorry, I am at work right now so I can't screenshot my workflow, but it is basic: rembg to remove the background, use the result as a mask, invert the mask, grow the mask and blur it, then go into a regular inpainting workflow.

A fully open-source background remover optimized for images with humans.

I am very new to ComfyUI and SD.
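The "grow mask" step in that rembg-then-inpaint recipe is a morphological dilation: expand the masked region a few pixels so the inpaint overlaps the original edge (blurring afterwards feathers it). A small pure-Python sketch on a binary 2D grid, with my own function name, not the actual node's implementation:

```python
def grow_mask(mask, r=1):
    """Dilate a binary 2D mask by r pixels (Chebyshev neighbourhood),
    like a 'grow mask' node run before inpainting."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # A pixel turns on if any pixel within r of it was on.
            if any(mask[yy][xx]
                   for yy in range(max(0, y - r), min(h, y + r + 1))
                   for xx in range(max(0, x - r), min(w, x + r + 1))):
                out[y][x] = 1
    return out
```

Without the grow + blur, the inpainted background tends to leave a visible halo exactly on the rembg cut line.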
It might be a couple of extra steps, but if you really wanted to combine the backgrounds, you could use Photoshop to remove the backgrounds from the input images and then put a similar background in each. So when the LoRA is prompted with 'background, north, east', it won't have a seam between what it learned from the north and east images. The idea was to make the sections of background in the different training images more consistent.

Transparent background.

Welcome to /r/Tattooremoval.

A fully open-source background remover optimized for images with humans. An update to my previous SillyTavern Character Expression Workflow.

Is there any way to export an MP4 animation with a transparent background? It works well for masking and removing the background, especially with a plain background in the prompts.

Run all the cells, and when you run the ComfyUI cell you can connect to port 3001 like you would with any other Stable Diffusion, from the "My Pods" tab. That will only run Comfy.

Created by Studio ComfyUI: batch generation with 294 styles from the style chooser.

But to do this you need a background that is stable (dance room, wall, gym, etc.).

Tried to remove it a few days ago, and to my surprise everything is almost twice as fast as before (iterations on a basic process went from 3.2 to 5.7, for example).

The background to the question: so this is my ComfyUI week. I did so.

It takes an image tensor as input and returns two outputs: the image with the background removed, and a mask.
There's something I don't get. Welcome to the unofficial ComfyUI subreddit. Does it ring a bell?

I was hoping to get transparency like shown in the Forge video; so far Comfy seems to only have background or foreground removal, not that.

The new queue preview looks much better and gives the option to delete items, even if that doesn't quite work because the generated image remains there.

So with the separated subject/background fields, I thought this workflow had found a solution to fix this, but that's not really what those fields do, right?

From there on, whenever I turn ComfyUI on, it just looks for missing files, downloads them, and then does it again.

You could prompt "foreground female face", or use a Remove Background node before segmenting.

None of the generations had a background, so I easily placed them all the way I wanted in Photoshop, but without the scenery it looks fake.

My only current issue is as follows: it always blurs the background. Tried both "Background Detail" and "hotarublurbk" with no effect.

You can also easily upload and share your own ComfyUI workflows, so that others can build on top of them.
:) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.

The only other choice would be to generate backgrounds with no characters in them, and replace backgrounds in a photo-manipulation program like Photoshop.

Here's an example: in Krea, you can see the useful ROTATE/DIMENSION tool on the dog image I pasted.

- u2net_cloth_seg: specialized for clothing segmentation.

Thanks, I used the Remove Background node and it did the same; well, maybe it just made the background black.

I'm aiming to do this for a photo client who may want to be able to swap out backgrounds, but backgrounds that actually look like they fit the pose.

I am very well aware of how to inpaint/outpaint in ComfyUI; I use Krita.

The only commercial piece is the BEN+Refiner, but BEN_BASE is perfectly fine for commercial use. It's under the Apache 2.0 license.

I found chflame163/ComfyUI_LayerStyle, which does what I want with the Image Blend node, but it only works on one image at a time.

Remove Background + batch remove.

He just updated it yesterday, I believe, to allow you to control the strength of the noise. I haven't updated my install yet to use this, but I will for sure when I do.

Sorry if I seemed greedy, but for upscale image comparison I think the best tool is Upscale.media.
img2img with low denoise: this is the simplest solution, but unfortunately it doesn't work, because significant subject and background detail is lost in the encode/decode process.

Hi everyone. I've tested a lot of different AI rembg methods (BRIA, U2Net, IsNet, SAM, OPEN RMBG, ...), but in all of them...

Install rembg[gpu] (recommended) or rembg, depending on GPU support, into your ComfyUI virtual environment.

Hi there. Hello u/Ferniclestix, great tutorials; I've watched most of them, really helpful for learning the ComfyUI basics.

Thanks for the information, I will look into it. This way you automate the background removal on video. This works pretty well, but sometimes the clothing of the person and the background are too similar, and the rembg node also removes a chunk of the person.

PRO-TIP: avoid overlap between boxes.

For some reason, it always shows up with white/red backgrounds (both of which I removed in Photoshop).

I want to generate two images simultaneously, one of a background and one of a character, and then I want to inpaint the character onto the background.

ComfyUI node for background removal, implementing InSPyReNet. There is a lot of missing information here; has this actually been ported to ComfyUI, and where is the link to the Comfy custom node?

But we will introduce three methods used in ComfyUI to remove backgrounds.

I want to completely remove that green colour and make it transparent. The second image is an example of my desired results using Photoshop manually.
The ComfyUI-Wiki is an online quick-reference manual that serves as a guide to ComfyUI. :)

Import Model -> select your obj file.

I want to create an image of a character in 3D/photorealistic style while having the background in painting style. I have a background in tech, Python, and text editing.

We are here to give you support and information on your tattoo removal journey.

Applying "denoise: 0.5" to reduce it.

I'm trying to achieve a selfie look, not a professional photoshoot look.

And above all, BE NICE.

I am trying to learn how to change the background of a product image.

Keep up the good work; I will keep an eye on this and look forward to any progress.

I want to draw a seed image or two on a transparent background, then generate a solid background.

Like, you mention you don't want the background in the images, and there are resources to remove backgrounds. Is that doable but too time-consuming, or would it create weird artifacts or distortions or something?

Custom node: LoRA Caption in ComfyUI.

This is a ComfyUI implementation of my original tutorial on using mosaic tiles to expand an image and create wallpapers.

I want to create a character with Animate Anyone and the background with SVD.

I am trying to generate random portraits with dynamic prompts, and I am removing the background with the rembg node.
It was daunting at first, but I settled on Stable Diffusion XL and ComfyUI.

Once the results are displayed, choose the image to save from [Select images to save] and run the process again.

And then you remove the noise in 5-6 steps.

ComfyUI only saves data available during queuing of the prompt; while useful, these data are not absolute and in many cases won't be able to generate the same image again.

It works if it's the outfit on a colored background; however, the background color also heavily influences the image generated once it is put through IPAdapter.

Do you have any ideas how to do this? How can I apply it? Maybe it would be better to wait for the development process? I see many websites offering sophisticated video object removal solutions.

The easiest way is to have OpenPose set to the background prompt. So for example: you have a person, remove the background, and then use a color fill (white) and make the background black; you have a really good starting point.

IC-Light workflow for background and light changes that allows you to keep object details like "TEXT".

Created by yu. What this workflow does: it replaces the background of the person with a transparent or a specified color.

Rembg Background Removal Node for ComfyUI: you can choose which onnx model to use! Also, I read the manual installation and troubleshooting section. Search your nodes for "rembg". It's just a node that removes the background.

Found this workflow for AnimateDiff that handles foreground and background separately.
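Replacing a removed background with a specified color, as the workflow above describes, is plain alpha compositing. A hedged NumPy sketch, with function and parameter names of my own choosing rather than anything from the actual node:

```python
import numpy as np

def composite_on_color(rgba: np.ndarray, bg_color=(255, 255, 255)) -> np.ndarray:
    """Blend an RGBA cutout over a solid background color.

    rgba: (H, W, 4) uint8; alpha 0 = background removed, 255 = subject.
    Returns an (H, W, 3) uint8 image.
    """
    rgb = rgba[..., :3].astype(np.float32)
    alpha = rgba[..., 3:4].astype(np.float32) / 255.0
    bg = np.array(bg_color, dtype=np.float32)
    out = rgb * alpha + bg * (1.0 - alpha)  # standard "over" operator
    return out.round().astype(np.uint8)

# Demo: one opaque subject pixel, one fully transparent pixel,
# composited over a green background.
cutout = np.array([[[200, 10, 10, 255], [0, 0, 0, 0]]], dtype=np.uint8)
result = composite_on_color(cutout, bg_color=(0, 255, 0))
```

The white-fill/black-background trick from the comment above is the same operation with a constant-white subject and a black `bg_color`.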
You can either try to train with fewer steps, and likely get a poor character training.

I had some success with ControlNet, but I don't want to prescribe any specific shapes that the image should take: it should be whatever the model determines as fitting.

How can I remove a background in Comfy? WAS (custom nodes pack) has a node to remove the background, and it works fantastically.

I was able to remove the background and put in another environment with the node you posted before.

I've used ComfyUI to rotoscope the actor and modify the background to look like a different style of living room, so it doesn't look like we're shooting in the same location for every video.

Is there any way to remove the background and make it transparent using a mask? I want to remove the background with a mask and then save it to my computer as a .png.

I'm using ComfyUI and I need to remove the background of a video animation I generated.

But I found something that could refresh this project to better results with better maneuverability! In this project, you can choose the onnx model you want to use; different models have different effects! Choosing the right model for you will give you better results!

I like how they both turned out, but I can't for the life of me wrap my head around a way to composite them all together, exactly how they are now, on a coherent background.

It attempts to create consistent characters with various outfits, poses, and facial expressions, saving the images into sorted output folders.

I can't figure out how to get it back again.
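One of the questions above asks how to make the background transparent using a mask and save the result as a PNG. That amounts to attaching the mask as an alpha channel; a small NumPy sketch (the names are mine, and actual saving would go through a PNG writer such as Pillow's `Image.fromarray(rgba, "RGBA").save(...)`):

```python
import numpy as np

def apply_mask_as_alpha(rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Turn an (H, W, 3) image plus an (H, W) mask into an (H, W, 4) RGBA
    array that is transparent wherever the mask is 0."""
    if mask.dtype == bool:
        mask = mask.astype(np.uint8) * 255
    return np.dstack([rgb, mask]).astype(np.uint8)

# Demo: 2x2 black image; keep only the two diagonal pixels.
img = np.zeros((2, 2, 3), dtype=np.uint8)
keep = np.array([[255, 0], [0, 255]], dtype=np.uint8)
rgba = apply_mask_as_alpha(img, keep)
```

In ComfyUI terms this is what a "Join Image with Alpha"-style step does before a save node writes the PNG; masks with soft (0-255) edges pass through unchanged and become partial transparency.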
In Photoshop I would just select Color Range, eyedrop the color, set the fuzziness range, and it would remove it completely.
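Photoshop's Color Range with a fuzziness setting is roughly a per-pixel color-distance threshold against the sampled color. A simplified NumPy equivalent for keying out a green background; the key color and tolerance are illustrative placeholders, and a soft ramp around the threshold would give cleaner edges than this hard cut:

```python
import numpy as np

def chroma_key(rgb: np.ndarray, key=(0, 255, 0), fuzz: float = 60.0) -> np.ndarray:
    """Return an alpha channel that is 0 where a pixel lies within `fuzz`
    (Euclidean RGB distance) of the key color, and 255 elsewhere."""
    dist = np.linalg.norm(rgb.astype(np.float32) - np.array(key, dtype=np.float32),
                          axis=-1)
    return np.where(dist <= fuzz, 0, 255).astype(np.uint8)

# Demo: a near-green pixel (removed) next to a reddish pixel (kept).
frame = np.array([[[0, 250, 5], [200, 40, 40]]], dtype=np.uint8)
alpha = chroma_key(frame)
```

Raising `fuzz` behaves like Photoshop's fuzziness slider: more shades of the key color fall below the threshold and become transparent.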