ComfyUI upscale models: tips and workflows collected from Reddit

Picking a model: I usually use 4x-UltraSharp for realistic material and 4x-AnimeSharp for anime. Others get good results from 4x-Ultramix_restore together with Ultimate SD Upscale, or from NMKD's 4x Superscalers. When purely upscaling, the best upscaler is called LDSR; the downside is that it takes a very long time. Note that the same model sometimes appears twice under different names, for example "4xESRGAN" as used by chaiNNer and "4x_ESRGAN" as used by Automatic1111.

Installing: in ComfyUI Manager, click Install Models, search for "upscale" and click Install on the models you want. Alternatively, download them yourself (the Upscale Wiki Model Database is the usual source) and put them in the models/upscale_models folder. Load them with the Load Upscale Model node and apply them with the Upscale Image (using Model) node, whose inputs are the upscale model and the pixel images to be upscaled and whose output is the upscaled images. If a downloaded workflow complains about a missing upscale model, the creator probably renamed the file on their machine; the solution is to click the node that calls the upscale model and pick one you actually have (or just use another model loader and select another model).

Choosing the final size: these upscale models always upscale at a fixed ratio, so you can't directly load an image, select a model (4xUltrasharp, for example) and select a final resolution (from 1024 to 1500, for example) the way other UIs do with their 1x-4x multiplier slider ("latent upscale by" exists, but that is not the same as upscaling the image with a model). Those UIs simply downscale the result with a regular scale method afterwards, and you can do the same: use the ImageScale node after the model if you want a specific smaller size, or the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model. For example, starting from a 512x512 image, a 4x model followed by "upscale by" 0.5 gives a 1024x1024 final image (512 * 4 * 0.5 = 1024).
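Put together, the whole pixel-space chain is small. Below is a minimal sketch in ComfyUI's API prompt format, posted to the default local server; the input filename, the model filename (4x-UltraSharp.pth) and the 1500px target are placeholder assumptions, not values from the posts above.

```python
import json
import urllib.request

# Upscale with a fixed-ratio model, then downscale to an exact final size.
prompt = {
    "1": {"class_type": "LoadImage",               # file must be in ComfyUI/input
          "inputs": {"image": "example.png"}},
    "2": {"class_type": "UpscaleModelLoader",      # from models/upscale_models
          "inputs": {"model_name": "4x-UltraSharp.pth"}},
    "3": {"class_type": "ImageUpscaleWithModel",   # fixed 4x ratio
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "ImageScale",              # pick the exact final size
          "inputs": {"image": ["3", 0], "upscale_method": "bicubic",
                     "width": 1500, "height": 1500, "crop": "disabled"}},
    "5": {"class_type": "SaveImage",
          "inputs": {"images": ["4", 0], "filename_prefix": "upscaled"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",                # default ComfyUI address
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```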
Pixel upscale vs latent upscale: one approach does an image upscale and the other a latent upscale, and they behave differently. An image upscale is less detailed, but more faithful to the image you upscale. A latent upscale looks much more detailed, but gets rid of the detail of the original image; that's because latent upscaling essentially turns the base image into noise (blur), which the following sampler then reinterprets. Both are normally followed by a sampling pass whose denoise value drastically changes the result. Besides model upscalers there are also plain scale methods that upscale latents with less distortion, the standard ones being bicubic, bilinear and bislerp. I rarely use upscale by model on its own because of the odd artifacts you can get, and for a plain hires-fix pass, messing around with an upscale model is arguably pointless.

If you don't want the distortion, decode the latent, upscale the image, then encode it again for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it. The same round trip is mandatory when switching model families: in testing I found that you CANNOT pass latent data from SD1.5 to SDXL or vice versa, or you get a garbage result (which makes sense once you look a bit into the tensors). VAE-decode to an image, then VAE-encode to a latent using the next model you're going to process with.
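As a concrete illustration, the decode/upscale/encode bridge looks like this as a fragment of an API-format prompt. The node ids ("sdxl_ckpt", "sd15_ckpt", "sampler1", "upscaler") are hypothetical stand-ins for nodes that would already exist in your workflow.

```python
# Hand a finished SDXL latent over to an SD1.5 model via pixel space.
bridge = {
    "decode":  {"class_type": "VAEDecode",
                "inputs": {"samples": ["sampler1", 0],   # SDXL latent
                           "vae": ["sdxl_ckpt", 2]}},    # decode with the SDXL VAE
    "upscale": {"class_type": "ImageUpscaleWithModel",   # optional model upscale
                "inputs": {"upscale_model": ["upscaler", 0],
                           "image": ["decode", 0]}},
    "encode":  {"class_type": "VAEEncode",
                "inputs": {"pixels": ["upscale", 0],
                           "vae": ["sd15_ckpt", 2]}},    # re-encode with the SD1.5 VAE
}
```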
Refining after the upscale: for the best results, diffuse again with a low denoise, either tiled or via Ultimate SD Upscale (without scaling!). This replicates the SD upscale / Ultimate SD upscale scripts from A1111, and the Ultimate SD Upscale node tiles the image if you don't have enough VRAM, so you don't run out of memory. We get good results from Ultimate SD upscales with a few ControlNet tile passes and tile sizes around 1024px, but if you are going for fine details, don't upscale in 1024x1024 tiles on an SD1.5 model unless the model is specifically trained on such large sizes, since SD1.5 training was done at a low resolution; upscaling on larger tiles will be less detailed and more blurry, and you will need more denoise, which in turn starts altering the result too much. A typical setup: generate an image you like, then mute the first KSampler and unmute the Ultimate SD Upscale part. I get good results using stepped upscalers and the Ultimate SD upscaler this way; I run SDXL-based models from the start through three chained Ultimate SD Upscale nodes, and with a seams-fix settings config that works, the last stage takes time but runs well (custom nodes involved: Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale; by default the second image gets upscaled up to 4096x4096 with 4xUltraSharp, for simplicity, but that can be changed to whatever).

A cheap recipe that works: upscale x1.5 to x2 with a plain latent upscale (no model needed), then sample again at denoise around 0.5 (you don't need that many steps). From there you can use a 4x upscale model and run a sample again at low denoise if you want higher resolution. If you use Iterative Upscale, it might be better to approach it by adding noise between passes, using techniques like noise injection or an unsampler hook; reusing the same seed is probably not necessary and can cause bad artifacting via the "burn-in" problem when you stack same-seed samplers.

Speed and memory: where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and that was with tiling, so my VRAM usage was moderate throughout. I don't bother going over 4k usually, as you get diminishing returns on render times with only 8GB of VRAM, but it should work with 8GB provided your SDXL model and upscale model are not super huge (e.g. use a 2x upscaler model), and if you build the right workflow it will pop out 2k and 8k images without the need for a lot of RAM. One known pitfall: the interaction between the way Comfy's memory management loads checkpoint models (this still happens if smart memory is disabled) and Ultimate Upscale bypassing torch's garbage collection, because it's basically a janky wrapper for an A1111 extension.
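The first half of that recipe as an API-format fragment: a cheap latent upscale followed by a low-denoise second pass. Node ids ("sampler1", "ckpt", "pos", "neg") are hypothetical upstream nodes, and the numbers are just the ballpark values from the comments above (note the fresh seed to avoid the burn-in effect).

```python
# Latent upscale x1.5, then resample at denoise ~0.5 with fewer steps.
second_pass = {
    "up":  {"class_type": "LatentUpscaleBy",
            "inputs": {"samples": ["sampler1", 0],
                       "upscale_method": "bislerp", "scale_by": 1.5}},
    "ks2": {"class_type": "KSampler",
            "inputs": {"model": ["ckpt", 0],
                       "positive": ["pos", 0], "negative": ["neg", 0],
                       "latent_image": ["up", 0],
                       "seed": 1234,            # different seed than pass one
                       "steps": 14, "cfg": 7.0,
                       "sampler_name": "dpmpp_2m", "scheduler": "normal",
                       "denoise": 0.5}},
}
```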
Checkpoints: I love to go with an SDXL model for the initial image and a good SD1.5 model for the diffusion after scaling. SDXL is indeed better, but it's not yet mature, as models for it are only just appearing, and the same goes for loras; SD1.5 is not necessarily an inferior model, it's in a mature state where almost all the models and loras are based on it, so you get better quality and speed with it. For SD1.5 I'd go for Photon, RealisticVision or epiCRealism. The realistic SDXL model that worked best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely, and you can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. Though, from what someone else stated, it comes down to use case.

Turbo and LCM: these are trained to make any model produce higher quality images at very low steps like 4 or 5, but the restore functionality that adds detail doesn't work well with lightning/turbo models, and having played around with them, all the low-step fast models require very low cfg, so it's difficult to make them follow prompts strongly, especially when you want to go against the model's natural bias. In the saved workflow the factor is at 4, with 10 steps (Turbo model), which is like a 60% denoise. For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results. You could also try a standard checkpoint with, say, 13 and 30 steps.

Skin detail: back by popular demand, here is a version of my infinite skin detail workflow that works without any external tools: RV 5.1 and LCM (FWIW, used together with the PatchModelAddDownscale node) for 12 samples at 768x1152, then a 2x image upscale model, consistently getting the best skin and hair details I've ever seen. It also runs very efficiently, managing a 1.5x upscale on 8GB NVIDIA GPUs without any major VRAM issues and going as high as 2.5x on 10GB cards.

Head to head: looking through the ComfyUI nodes I noticed a new one called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4X Upscale Model), so I decided to pit it against Ultimate SD Upscale. I can understand that with Ultimate Upscale one could add more details through adding steps/noise or whatever you'd like to tweak on the node, but from what I've generated so far, the model upscale edges it slightly. The test workflow is kept very simple (Load image, Upscale, Save image; no attempts to fix jpg artifacts), uses the exact same latent input and destination size, and all comparisons are done with default node settings and fixed seeds; the workflow is pasted separately rather than bound to the image metadata, because my setup is very custom, and you can load the example images in ComfyUI to get the workflow.

Other variations people mentioned: an img2img method that uses the Blip Model Loader from WAS to set the positive caption; for video, 4x-UltraSharp for realistic and 4x-AnimeSharp for anime footage, with the mm_sd_v15_v2.ckpt motion model and Kosinkadink's AnimateDiff-Evolved; and, translated from a Japanese post by Hakana-dori, a ComfyUI version of clarity-upscaler, which is not a single extension but a combination of ControlNet, LoRA and various other pieces working together (previously covered for A1111 and Forge). I also tried Sytan's SDXL workflow (credit to it; I reverse engineered it mostly because I'm new to ComfyUI and wanted to figure it all out) with the last part replaced by a 2-step upscale using the refiner model via Ultimate SD upscale, done after the refined image is upscaled and encoded into a latent, but it didn't work out.
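For that Turbo-style refinement pass, the sampler settings carry most of the weight: few steps, very low CFG, moderate denoise. Here is a hedged sketch of the KSampler node in API format; the ids and exact numbers are illustrative, and I'm assuming the "dpmpp_2a" mentioned above is what ComfyUI lists as dpmpp_2s_ancestral.

```python
# Low-step, low-CFG refinement pass for a Turbo/LCM-style checkpoint.
turbo_pass = {
    "class_type": "KSampler",
    "inputs": {"model": ["turbo_ckpt", 0],
               "positive": ["pos", 0], "negative": ["neg", 0],
               "latent_image": ["upscaled_latent", 0],
               "seed": 42,
               "steps": 10, "cfg": 1.5,      # turbo models want very low CFG
               "sampler_name": "dpmpp_2s_ancestral",
               "scheduler": "normal",
               "denoise": 0.6},              # "10 steps ... like a 60% denoise"
}
```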
Common questions: hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out, and like many XL users I'm very much a beginner here. Is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from there (there is a face detailer node, and there are also "face detailer" workflows for faces specifically). Is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111? (You can: latent upscale it, or use a model upscale, then VAE-encode it again and run it through a second sampler. The resolution may be okay as-is, but you can usually get something better.) I have made a workflow with an upscaler in it and it works fine; the only thing is that it upscales everything, which is not worth the wait for most outputs, so would it be better to do an iterative upscale? And what are the most modern, best ComfyUI solutions for detailing/refining (keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image) and for upscaling (increasing the resolution and sharpness at the same time)? I am curious both which nodes are best for this and which models; I am also looking for good upscaler models to be used for SDXL, since I have been using 4x-ultrasharp for as long as I can remember and the existing threads are a year or more old.

What people have tried: the first option is a model upscaler, which works straight off your image output; you can download those from the Upscale Wiki Model Database, which has dozens of models listed, a popular one being ESRGAN 4x. For upscaling I mainly used the chaiNNer application with models from that database, but also the fast stable diffusion Automatic1111 Google Colab and the replicate website's super-resolution collection. I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as the NMKD ones, the Ultimate SD Upscale node, "hires fix" (yuck!), and the Iterative Latent Upscale via pixel space node (mouthful)) and even bought a license from Topaz to compare the results with Faststone (which is great btw for this type of work; always wanted to integrate one myself). After generating my images I usually do hires fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x. Others upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved), with multiple LoRAs that can easily be turned on/off (currently configured for up to three LoRAs, but easy to extend); or generate an SD1.5 image and upscale it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode and colour matching, where the tile values can be changed via the "Downsample" value, which has its own documentation in the workflow itself. I also did some testing of KSampler schedulers used during an upscale pass; I wanted to know what difference they make, and they do. Working on larger latents, the challenge is to keep the model generating an image that is still relatively coherent with the original low-resolution image, so I have a custom image resizer that ensures the input image matches the output dimensions, and the 16:9 aspect ratio stays the same from the empty latent to anywhere else image sizes are used. PS: if someone has access to Magnific AI, please can you upscale and post results for 256x384 at jpg quality 5 and at jpg quality 0.

Setup notes: I've been using Stability Matrix and also installed ComfyUI portable. All the models are located in M:\AI_Tools\StabilityMatrix-win-x64\Data\Models, while the animatediff_models and clip_vision folders are placed in M:\AI_Tools\StabilityMatrix-win-x64\Data\Packages\ComfyUI\models, and I'm facing an issue with sharing the model folder between the two. Remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation notes (ComfyUI can also point at external folders through its extra_model_paths.yaml config). For AMD cards on Windows there is DirectML: pip install torch-directml, then launch ComfyUI with python main.py --directml. On the A1111 side, the 1.6.0-RC pre-release finally fixed the high VRAM issue; with the --medvram-sdxl flag at startup it takes only about 7.5GB of VRAM even when swapping in the refiner. I also tried the llite custom nodes with lllite models and was impressed: good for depth and openpose so far, the models are super small with good results, but I found a tile model and could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.
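For the Stability Matrix sharing problem, the clean route is the extra_model_paths.yaml file mentioned above; another workaround is to link the shared folders into ComfyUI's models directory. Below is a rough Python sketch using the paths from these posts; the folder-name mapping is an assumption you should verify against both directory layouts.

```python
import os
from pathlib import Path

shared = Path(r"M:\AI_Tools\StabilityMatrix-win-x64\Data\Models")
comfy = Path(r"M:\AI_Tools\StabilityMatrix-win-x64\Data\Packages\ComfyUI\models")

# Guessed mapping from StabilityMatrix folder names to ComfyUI folder names.
mapping = {"ESRGAN": "upscale_models",
           "StableDiffusion": "checkpoints",
           "Lora": "loras"}

for src_name, dst_name in mapping.items():
    src, dst = shared / src_name, comfy / dst_name
    if src.is_dir() and not dst.exists():
        # Symlinks on Windows may need admin rights; a directory junction
        # (cmd: mklink /J) works without them.
        os.symlink(src, dst, target_is_directory=True)
        print(f"linked {dst} -> {src}")
```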
ControlNet tile: do you have ComfyUI Manager? It's probably the best way to install ControlNet, because manual installation is easy to get wrong (it didn't work out when I tried it, and in A1111 the ControlNet setup is its own thing anyway). In the Manager, select Install Models, scroll down to the ControlNet models and download the second ControlNet tile model; it specifically says in the description that you need this for tile upscale.

Faces: I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts. If I feel I need to add detail, I'll do some image-blend work and advanced samplers to inject the old face back into the process. For comparison, in A1111 I drop the ReActor output image in the img2img tab, keep the same latent size, use a tile ControlNet model, choose the Ultimate SD Upscale script and scale it up, and with a denoise setting around 0.25 I get a good blending of the face without changing the image too much; I haven't been able to replicate that in Comfy yet.

Finally, on flexibility: in those other UIs I can use my favorite upscaler (like NMKD's 4xSuperscalers) but I'm not forced to have them only multiply by 4x, and the downscale tricks above give ComfyUI the same freedom. Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images and use stuff entirely in latent space if you want. Edit: you could try the workflows to see it for yourself; if you check the description on YouTube, there is a GitHub repo set up with sample images and workflow JSONs as well as links to the LoRAs and upscale models, including a few example workflows that make very detailed 2K images of real people (cosplayers in my case) using LoRAs with fast renders (10 minutes on a laptop RTX 3060), and if you want a better grounding in making your own ComfyUI systems, consider checking out the tutorials.
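The "small steps, small denoise" loop is easy to sketch. upscale_image and resample below are hypothetical helpers standing in for the ComfyUI nodes (a pixel-space resize plus a low-denoise sampling pass); the PIL stub only makes the skeleton runnable, it is not a model upscaler.

```python
from PIL import Image

def upscale_image(img: Image.Image, factor: float) -> Image.Image:
    # Stand-in for ImageUpscaleWithModel / ImageScaleBy: plain resampling.
    w, h = img.size
    return img.resize((round(w * factor), round(h * factor)), Image.LANCZOS)

def resample(img: Image.Image, denoise: float) -> Image.Image:
    # Placeholder: in ComfyUI this would be a KSampler pass at low denoise.
    return img

def stepped_upscale(img: Image.Image, target_scale: float = 2.0,
                    step_scale: float = 1.2, denoise: float = 0.1) -> Image.Image:
    """Upscale in small increments, lightly re-diffusing after each step."""
    scale = 1.0
    while scale < target_scale:
        factor = min(step_scale, target_scale / scale)
        img = resample(upscale_image(img, factor), denoise)
        scale *= factor
    return img
```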