
ComfyUI command-line arguments: tips collected from Reddit

  • What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. You can construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks a workflow down into rearrangeable elements so you can easily make your own. Beyond Stable Diffusion, it also supports other platforms like Flux, AuraFlow, PixArt, etc.
  • ComfyUI: an extremely powerful Stable Diffusion GUI with a graph/nodes interface for advanced users that gives you precise control over the diffusion process without coding anything, and it now supports ControlNets.
  • Where do the arguments go? Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file. Open the .bat file with Notepad, make your changes, then save it. Every time you run the .bat file, it will load the arguments. Command line arguments can be put in the bat files used to run ComfyUI, separated by a space after each one. For instance, my comfy.bat just contains: python C:\Users\***\ComfyUI\main.py plus the arguments.
  • The front page says "Use --preview-method auto to enable previews", so I spent ages online looking for how/where to enter this command. The answer is the same as above: append it to the launch command in the .bat file.
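The excerpts above twice lead into "For example, this is mine:" without the example surviving the page, so here is roughly what an edited run_nvidia_gpu.bat from the portable build looks like. The stock file is just the python line plus a pause; the two appended flags are illustrative examples from these tips, not a recommended set:

    REM run_nvidia_gpu.bat (portable build), with example flags appended
    .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --preview-method auto --lowvram
    pause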
  • If you cd into your Comfy directory, you can just run: python main.py -h, and you should see the arguments that can be passed on the command line, i.e. ~/ComfyUI$ python3 main.py -h on Linux.
  • I know SDXL and ComfyUI are the hotness right now, but I'm still kind of in the kiddie pool in terms of SD; I'm just getting comfortable with Automatic1111, using different models, VAEs, upscalers, etc. So, for those of us behind the curve, I have a question about arguments. Are there any guides that explain all the possible COMMANDLINE_ARGS that could be set in webui-user.bat and what they do? After having issues from the last update, I realized my args are just thrown together from random thread suggestions and troubleshooting, but I really have no full understanding of what all the possible args are and what they do. Are there additional arguments to use?
  • Hi r/comfyui, we worked with Dr.Lt.Data to create a command line tool to improve the ergonomics of using ComfyUI. Some main features are: automatically install ComfyUI dependencies, launch and run workflows from the command line, and install and manage custom nodes via cm-cli (ComfyUI-Manager as a CLI). Options: --install-completion (install completion for the current shell) and --show-completion (show completion for the current shell, to copy it or customize the installation).
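For the authoritative list on your own install, ask ComfyUI itself. The command below assumes the portable layout named earlier; the flags in the REM comment are simply ones that recur throughout these threads, not an exhaustive list:

    cd C:\ComfyUI_windows_portable
    .\python_embeded\python.exe -s ComfyUI\main.py -h

    REM flags that come up repeatedly in these posts: --listen, --port,
    REM --lowvram, --normalvram, --highvram, --novram, --cpu,
    REM --preview-method auto, --use-split-cross-attention,
    REM --disable-smart-memory, --directml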
  • Wherever you launch ComfyUI from is where you need to set the launch options, like so: python main.py --listen --use-split-cross-attention
  • Hey all, is there a way to set a command line argument on startup for ComfyUI to use the second GPU in the system? With Auto1111 you add the following to the webui-user.bat file: set CUDA_VISIBLE_DEVICES=1
  • You can prefix the start command with CUDA_VISIBLE_DEVICES=0 to force ComfyUI to use that specific card. You have to run two instances of it and use the --port argument to set a different port for each; I have 2 instances, 1 for each graphics card.
  • I just moved my ComfyUI machine to my IoT VLAN (now at …10:8188, previously …10:7862). It has --listen and --port set, but since the move Auto1111 works and Kohya works, yet Comfy has been unreachable.
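Putting those pieces together, a two-GPU setup could use a pair of launch scripts along these lines, each run in its own terminal. The device indices and port numbers are illustrative, and this uses the system-Python launch style from the tip above; a portable install would use its embedded python line instead:

    REM gpu0.bat: first card, default port
    set CUDA_VISIBLE_DEVICES=0
    python main.py --listen --port 8188

    REM gpu1.bat: second card, different port
    set CUDA_VISIBLE_DEVICES=1
    python main.py --listen --port 8189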
  • The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better. On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.
  • I use an 8GB GTX 1070 without ComfyUI launch options, and I can see from the console output that it chooses NORMAL_VRAM by default for me. (Another setup: I'm running a GTX 1660 Ti, 6GB of VRAM.)
  • For some reason, when I launch Comfy it now sets my VRAM to normal_vram; before, it automatically set it to high_vram. I don't know why this changed, nothing changed on my PC. This has been driving me crazy trying to figure it out. (You can set the mode explicitly with --normalvram or --highvram.)
  • Scoured the internet and came across multiple posts saying to add the arguments --xformers --medvram. But these arguments did not work for me: --xformers gave me a minor bump in performance (8s/it vs 11s/it), but it is still taking about 10 minutes per image.
  • I am not sure what kind of settings ComfyUI used to achieve such optimization, but if you are using Auto1111, you could disable live preview and enable xformers (what I did before switching to ComfyUI). While I primarily utilize PyTorch cross attention (SDP), I also tested xformers to no avail.
  • If you go to the folder where you installed the webui, there should be a file called webui-user.bat. Right click and edit the file, adding --xformers to the set COMMANDLINE_ARGS line. From one posted checklist: 5) add --xformers to the webui-user.bat command arguments, 6) add a model and run webui-user.bat, 7) other things: used Firefox with hardware acceleration disabled in settings. On previous attempts I also tried --opt-channelslast --force-enable-xformers, but in this last run I got 28 it/s without them for some reason. That helped the speed a bit.
  • I ran some tests this morning. On Linux with the latest ComfyUI, I am getting 3.53 it/s for SDXL and approximately 4.55 it/s for SD1.5 while creating a 896x1152 image via the Euler-A sampler.
  • With the video models: 2.86s/it on a 4070 with the 25 frame model, 2.75s/it with the 14 frame model. Seems very hit and miss; most of what I'm getting looks like 2D camera pans.
  • I don't find ComfyUI faster. I can make an SDXL image in Automatic1111 in 4.2 seconds with TensorRT (the same image takes 5.6 seconds in ComfyUI), and I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it.
  • Using ComfyUI was a better experience: the images took around 1:50 to 2:25 minutes at 1024x1024 / 1024x768, all with the refiner. Here are some examples I generated using ComfyUI + SDXL 1.0 with refiner.
  • Learning the intricacies of Web UI launching arguments, addressing xformers errors, delving into CUDA, and more became my daily routine. Then, a miraculous moment unfolded when I installed Stable Diffusion Forge.
  • I am trying out using SDXL in ComfyUI, but it doesn't seem to work. Can you let me know how to fix this issue? I have the following arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory. However, I kept getting a black image. I am using ComfyUI with its default settings, I tested with different SDXL models, and I tested without the Lora, but the result is always the same.
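On the Automatic1111 side, the edit described above lands in webui-user.bat. This is the stock file with only the arguments line filled in; the flag pair is the example debated in these posts, not a recommendation:

    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS=--xformers --medvram

    call webui.bat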
  • I have read the section of the GitHub page on installing ComfyUI. It said to follow the instructions for manually installing on Windows. I downloaded the Windows 7-Zip file and, once unzipped, ended up with a large folder of files.
  • Also, I don't know when it changed, but ComfyUI is not a conda environment anymore; it depends on a python_embeded package, and generating a venv from it results in no tkinter. I didn't quite understand the part where you can use the venv folder from another webui, like A1111, to launch it instead and bypass all the requirements to launch ComfyUI.
  • The first was installed using the ComfyUI_windows_portable install, and it runs through Automatic1111: I had installed the ComfyUI extension in Automatic1111 and was running it within Automatic1111, so that is how I was running ComfyUI. I noticed that in the terminal window you could launch ComfyUI directly in a browser with a URL link.
  • Had the same issue. Did Update All in ComfyUI Manager, but when it restarted I just got a blank screen and nothing loaded, not even the Manager. Went to Updates as suggested below and ran the ComfyUI and Python Dependencies batch files, but that didn't work for me. Update ComfyUI and all your custom nodes, and make sure you are using the correct models. I've ensured both CUDA 11.8 and PyTorch 2.1 are updated and used by ComfyUI. (A healthy start logs lines like "FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json" and "got prompt".)
  • I tried installing the dependencies by running the pip install in the terminal window in the ComfyUI folder. Go to your FizzNodes folder ("D:\Comfy\ComfyUI\custom_nodes\ComfyUI_FizzNodes" for me) and run this, making sure to adapt the beginning to match where you put your ComfyUI folder: "D:\Comfy\python_embeded\python.exe -s -m pip install -r requirements.txt". It is actually written on the FizzNodes GitHub.
  • Replace ComfyUI-VideoHelperSuite\videohelpersuite\load_images_nodes.py with the new load_images_nodes.py, and ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py with the new nodes.py. (By the way, you can and should, if you understand Python, do a git diff inside ComfyUI-VideoHelperSuite to review what's changed.)
  • It seems that the path always looks to the root of ComfyUI, not relative to the custom_node folder "comfyui-popup_preview".
  • On vacation for a few days, I installed ComfyUI portable on a USB key and plugged it into a laptop that wasn't too powerful (just the minimum 4 gigabytes of VRAM). I think for me, at least for now with my current laptop, using ComfyUI is the way to go.
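The FizzNodes command above generalizes to any custom node that ships a requirements.txt: install it with the portable build's embedded Python rather than your system Python. A sketch, with "SomeCustomNode" standing in for whichever node you are fixing and the usual portable path assumed:

    cd C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\SomeCustomNode
    C:\ComfyUI_windows_portable\python_embeded\python.exe -s -m pip install -r requirements.txt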
  • The biggest tip for Comfy: you can turn most node settings into an input by RMB, "convert to input", then connect a primitive node to that input. And then connect the same primitive node to 5 other nodes to change them all in one place instead of in each node.
  • Workflows are much more easily reproducible and versionable. Just download the workflow image, drag it inside ComfyUI, and you'll have the same workflow you see above. (Though sometimes: I downloaded the workflow picture and dragged it to ComfyUI, but it doesn't load anything; looks like the metadata is not complete. The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.)
  • In ComfyUI Manager, activate "Nickname" on the "Badge" dropdown list. If the author set their package nickname, you will see it on the top-right of each node. This is only possible if the node's author set the nickname parameter in their info.
  • GitHub - Suzie1/ComfyUI_Comfyroll_CustomNodes: custom nodes for SDXL and SD1.5 including Multi-ControlNet, LoRA, Aspect Ratio, Process Switches, and many more nodes (e.g. the CR Seamless Checker). Image Chooser is described on Reddit, but the GitHub link is gone (?!). GitHub - gokayfem/ComfyUI-Texture-Simple: visualize your textures inside ComfyUI.
  • Kosinkadink, developer of ComfyUI-AnimateDiff-Evolved, has updated the custom node with new functionality in the AnimateDiff Loader Advanced node that can reach a higher number of frames. Now it can also save the animations in formats other than GIF.
  • Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning. It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt simply is "Caption the image" or "Describe the image", Florence2 wins.
  • I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area composition ones. The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery. TLDR, workflow: link.
  • I learned about MeshGraphormer from this YouTube video of Scott Detweiler, but felt like simple inpainting does not do the trick for me, especially with SDXL. Wanted to share my approach: generate multiple hand-fix options and then choose the best.
  • SD1.5, SD2.1, and SDXL are all trained on different resolutions, so models for one will not work with the others. Separately, for Flux text encoders: you can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32GB of RAM.
  • From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion: "The training requirements of our approach consists of 24,602 A100-GPU hours – compared to Stable Diffusion 2.1's 200,000 GPU hours."
  • Hello, community! I'm happy to announce I have finally finished my ComfyUI SD Krita plugin. This is a plugin that allows users to run their favorite features from ComfyUI while being able to work on a canvas. Supports: basic txt2img, basic img2img, and inpainting (with auto-generated transparency masks).
  • Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins, so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support gives serious potential to them… wonder if Comfy and Invoke will somehow work together or if things will stay fragmented between all the various tools. And the new interface is also an improvement, as it's cleaner and tighter. (A counterpoint: any idea why the quality is much better in Comfy? I like InvokeAI, it's more user-friendly, and although I aspire to master Comfy, it is disheartening to see a much easier UI give sub-par results.)
  • Although ComfyUI and A1111 ultimately do the same thing, they are not targeting the same audience. A1111 is probably easier to start with: everything is siloed and it's easy to get results. ComfyUI is meant for people who like node-based editors (and are rigorous enough not to get lost in their own architecture); it was written with experimentation in mind, so it's easier to do different things in it. VFX artists are also typically very familiar with node-based UIs, as they are very common in that space, and ComfyUI is much better suited for studio use than other GUIs available now. Anything that works well gets adopted by the larger community and finds its way into other Stable Diffusion software eventually: the clever tricks discovered from using ComfyUI will be ported to the Automatic1111-WebUI, and hopefully some of the most important extensions, such as Adetailer, will be ported to ComfyUI (most of them already are if you are using the DEV branch, by the way). That said, one user finally gave up on ComfyUI nodes and wanted their extensions back in A1111.
  • ComfyUI is also trivial to extend with custom nodes. I think a function must always have "self" as its first argument; I'm not sure why, and I don't know if it's specific to Comfy or a general rule for Python. (It's general Python: instance methods receive the instance as their first parameter, conventionally named self.) Anyway, whenever you define a method, never forget the self argument! I have barely scratched the surface, but through personal experience, you will go much further.
  • There are some important arguments about how artists could be exploited, but that's also not a new problem. Anyone that has made art "for exposure", had their work ripped off, or received a pittance while the gallery/auction house/art collector enjoys the lion's share of the profits, knows this.
  • I have ComfyUI installed on my computer with a lot of custom nodes, Loras, and ControlNets, and I would like to have Automatic1111 installed as well. Unfortunately, I don't have much space left on my computer, so I am wondering if I could install a version of Automatic1111 that uses the Loras and ControlNets from ComfyUI.
  • I want to set up a new PC build; is this config sufficient for ComfyUI and A1111? CPU: AMD Ryzen 7 5700X, 8 cores / 16 threads @ 3.8 GHz; RAM: 32GB DDR4 @ 3200MHz.
  • Wondering if there are ways to speed up Comfy generations? I use it on an RTX 4090, mostly with SD1.5 models, with LCM at 4 steps and 0.2 denoise to fix the blur and soft details. You can just pass the latent along without decoding and encoding to make it much faster, but it causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it. You can encode then decode back to a normal ksampler with a 1.0 denoise.
  • On AMD, I somehow got it to magically run despite the lack of clarity and explanation on the GitHub and literally no video tutorial on it. With ComfyUI, my command-line arguments are "--directml --use-split-cross-attention --lowvram". The most important thing is to use tiled VAE for decoding, which ensures no out-of-memory at that step. With this combo it now rarely gives out-of-memory (unless you try crazy things); before, I couldn't even generate with SDXL on ComfyUI at all.
  • Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend. The aim of that page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore. After playing around with it for a while, here are 3 basic workflows that work with older models (here, AbsoluteReality).
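To close the loop on the AMD tip, here is the quoted launch line written out for the portable build. This is a sketch that assumes a DirectML-capable install (i.e. the torch-directml requirements are already in the embedded Python), not a guaranteed recipe:

    REM AMD / DirectML launch, flags taken verbatim from the post above
    .\python_embeded\python.exe -s ComfyUI\main.py --directml --use-split-cross-attention --lowvram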