Scott Detweiler
  • 337 videos
  • 3,713,374 views
ComfyUI - Hands are finally FIXED! This solution works with all models!
Hands are finally fixed! This solution works about 90% of the time in ComfyUI and is easy to add to any workflow, regardless of the model or LoRA you might be using. It uses the MeshGraphormer Hand Refiner, which is part of the ControlNet preprocessors you get when you install that custom node suite. We can use the output of this node, as well as the mask, to help guide correction in any image. I also show some of the issues I ran into while working with this solution.
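For readers curious what the node actually contributes: the refiner produces a corrected depth render of the hands plus a mask, and the fix is ultimately an inpainting pass composited back through that mask. A minimal sketch of the compositing step, with illustrative names only (this is not the actual MeshGraphormer node API):

```python
import numpy as np

def composite_refined(original: np.ndarray, refined: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """Blend the refined hand region into the original image.

    original, refined: HxWx3 float arrays in [0, 1]
    mask: HxW float array in [0, 1]; 1 marks the hand region to replace
    """
    m = mask[..., None]  # broadcast the mask over the RGB channels
    return m * refined + (1.0 - m) * original

# Tiny demo: a 2x2 image where only the top-left pixel is "refined".
orig = np.zeros((2, 2, 3))
fixed = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
out = composite_refined(orig, fixed, mask)
```

The rest of the image is untouched, which is why the approach works with any model or LoRA: only the masked hand region is re-rendered.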
#comfyui #stablediffusion
Gigabyte 17X Laptop is doing the inference today! Grab one here:
amzn.to/3thtfpR
You can grab the controlnet from here, or use the manager:
github.com/Fannovel16/comfyui_controlnet_aux
Int...
Views: 62,426

Videos

New Model! SDXL Turbo - 1 Step Real Time Stable Diffusion in ComfyUI
38K views · 6 months ago
Stability has released a real-time research model that can deliver images in only 1 step! This is not the same as a latent consistency model (LCM), as it uses Adversarial Diffusion Distillation (ADD). In this Comfy tutorial, I will show you how to use the custom sampler to implement a more focused method rather than using the normal KSampler (which still works), but this is ...
ComfyUI - SUPER FAST Images in 4 steps or 0.7 seconds! On ANY stable diffusion model or LoRA
32K views · 7 months ago
Today we explore how to use the latent consistency LoRA in your workflow. This fantastic method can shorten your preliminary model inference to as little as 0.7 seconds and only 4 steps using ComfyUI and SDXL. It also makes it a lot easier to run these models on older hardware and is just mind-blowingly fast! It isn't perfect, but it sure helps you find some base images quickly. #co...
ComfyUI, without the noodles? Stable Swarm for those who want a simpler interface for your AI Art
14K views · 7 months ago
Today I want to show you StableSwarm, a simpler way to explore your Comfy workflows if you use them daily and are tired of staring at the noodles and nodes, letting that OCD trigger constantly. This amazing Stable Diffusion UI runs ComfyUI in the background so you can focus on your prompt engineering and worry less about the nodes and their placement. This is a fantastic t...
ComfyUI - 1 Image LoRA?! Check out this IP Adapter Tutorial for Stable Diffusion
58K views · 8 months ago
An amazing new AI art tool for ComfyUI! This amazing node lets you use a single image like a LoRA, without training! In this Comfy tutorial we will use it to combine multiple images as well as use ControlNet to manage the results. It can merge in the contents of an image, or even multiple images, and combine them with your prompts. The IP-Adapter is a very powerful node suite for image-to-image c...
ComfyUI - FreeU: You NEED This! Upgrade any model, no additional time, training, or cost!
32K views · 9 months ago
I want to introduce a brand new node that was just added by Comfy to his Stable Diffusion system this morning: FreeU. The concept is that you can change some of the underlying contribution mechanisms of the U-Net, which is the core of Stable Diffusion. The results tend to be much better, and it doesn't slow us down or add any GPU load! #freeu #stablediffusion ...
ComfyUI : Hiding words in your images + fixing faces. QR Code ControlNet Pareidolia Hybrid Image
13K views · 9 months ago
The ComfyUI tutorial for today is a fun one! We will hide the word "Shop" in our image, and you probably didn't notice it unless you squint at the thumbnail for this video. This effect is known as a hybrid image; it is sometimes also called pareidolia, though that isn't exactly accurate. Not only will we play with this ControlNet, normally used for QR codes in our images, but we will also add in a ch...
ComfyUI : XY Plot Tutorial. You will use this a ton!
22K views · 9 months ago
XY Plotting is a great way to look for alternative samplers, models, schedulers, LoRAs, and other aspects of your Stable Diffusion workflow without having to change everything around. In this tutorial, we explore how to get this setup using the efficiency node suite. It is pretty simple, but this is something you will use often as you expand your AI Art adventure in Comfy. As always I drop some...
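The idea in that description can be sketched in a few lines of plain Python; `render()` below is a hypothetical stand-in for a real sampling call, not part of the efficiency node suite:

```python
from itertools import product  # imported for clarity; the comprehension below does the same pairing

# Two parameter axes to compare, as in an XY plot.
samplers = ["euler", "dpmpp_2m", "ddim"]   # X axis
cfg_scales = [4.0, 7.0, 10.0]              # Y axis

def render(sampler: str, cfg: float) -> str:
    """Placeholder for a sampling run; a real node would return an image."""
    return f"{sampler}@cfg{cfg}"

# One result per (cfg, sampler) pair, arranged row by row.
grid = [[render(s, c) for s in samplers] for c in cfg_scales]
```

`grid[row][col]` then holds one image per combination, ready to be tiled into a single comparison sheet, which is exactly what the XY plot node renders for you.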
AI Inspired Laser Cut Catan Board - Ortur Laser Master 3 + Stable Diffusion = Art!
2.5K views · 9 months ago
I have been using generative AI, namely Stable Diffusion, to help inspire a custom laser-cut Catan board I have been creating. In this video I cover some of the ways I use Stable Diffusion to inspire my physical art, not just to create pretty digital AI art. My Ortur Laser Master 3, with both the 10 and 20 watt laser heads, has been helping me create this amazing strateg...
ComfyUI : EASY Face Fixes & Swapping my wife's face into images!
50K views · 9 months ago
I love the 1.5 Stable Diffusion model, but faces at a distance often tend to be pretty terrible, so today I wanted to offer this tutorial on how to use the Face Detailer custom node to fix faces in any images that might have issues. To take this to the next level, we can also introduce Roop, which is a method of swapping a face with another one from a different image! In this case, I take a...
ComfyUI : ChatGPT helping us prompt, but not in an expected way!
23K views · 10 months ago
ComfyUI : Ultimate Upscaler - Upscale any image from Stable Diffusion, MidJourney, or photo!
79K views · 10 months ago
ComfyUI : NEW Official ControlNet Models are released! Here is my tutorial on how to use them.
87K views · 10 months ago
ComfyUI - ReVision! Combine Multiple Images into something new with ReVision!
48K views · 10 months ago
ComfyUI Infinite Upscale - Add details as you upscale your images using the iterative upscale node
97K views · 10 months ago
SDXL ComfyUI img2img - A simple workflow for image 2 image (img2img) with the SDXL diffusion model
86K views · 10 months ago
SDXL ComfyUI Stability Workflow - What I use internally at Stability for my AI Art
83K views · 10 months ago
SDXL 1.0 IS HERE! Where to get it, how to use it, and what to expect.
21K views · 11 months ago
ComfyUI - Getting Started : Episode 2 - Custom Nodes Everyone Should Have
78K views · 11 months ago
ComfyUI - Getting Started : Episode 1 - Better than AUTO1111 for Stable Diffusion AI Art generation
224K views · 11 months ago
Prompt Engineering - Part 1 : Prompt Tricks You Probably Missed for Stable Diffusion
26K views · a year ago
Ortur Laser Master 3 vs xTool D1 Pro - Not at all what I was expecting!
4.1K views · a year ago
AI Art Inspiration! Amazing Stable Diffusion extensions to help you find that perfect look!
13K views · a year ago
MidJourney - Getting Started [New & Updated] A quick tutorial to get you started in AI art generation
97K views · a year ago
Stable Diffusion - Better Fine Details with a new VAE! (Variational Autoencoder)
56K views · a year ago
Prompt Wildcards in Stable Diffusion or Dynamic Prompting is wonderfully random
27K views · a year ago
Robots & Cyborgs in Stable Diffusion! Custom Model is totally awesome!
19K views · a year ago
Stable Diffusion 1.5 - Windows Installation Guide [Tutorial]
171K views · a year ago
Stable Diffusion - Starting with a texture, what can we discover? Exploring textures for AI Art.
3.9K views · a year ago
MidJourney REMIX! Adjust your prompts as you go! Tips & Tricks for using it as well as limitations
7K views · a year ago

Comments

  • @ekot0419
    @ekot0419 16 hours ago

    Thank you so much for this tutorial. Now I am learning how to use it. I could have just downloaded the workflow and been done with it, but then I wouldn't have learned anything beyond how to use it.

  • @JackRainfield
    @JackRainfield a day ago

    Thank you! It worked on 6/25/2024. I was very skeptical going into this.... LOL.. fun stuff!!!

  • @jonathanmartinez2041
    @jonathanmartinez2041 2 days ago

    how can I use my graphics card to get it going faster?

  • @V_2077
    @V_2077 2 days ago

    My hands aren't being detected; it's just a black preview. My image is a person with hands on hips.

  • @DECreates
    @DECreates 2 days ago

    can you add steam to a cup of coffee and take a video or just a photo?

  • @Kevlord22
    @Kevlord22 2 days ago

    After using Forge I gave it a shot, since I was kinda interested. It's pretty neat. Using Forge gave me the advantage of knowing what the stuff means, but it was certainly an adjustment. It was a great vid, easy to understand. Works great, thank you very much.

  • @kasoleg
    @kasoleg 2 days ago

    how to enable upscale?

  • @wv146
    @wv146 3 days ago

    Hands are finally fixed, but what about the rest of the body in SD3? You did say you were head of quality assurance at Stable Diffusion; are you all hiding under a table there?

  • @step1420
    @step1420 3 days ago

    Thank you for making this detailed tutorial. Coming from Max/MSP for music production, I was expecting to be somewhat comfy with Comfy, but I needed an intro to wrap my head around the objects. You made it a breeze!

  • @baheth3elmy16
    @baheth3elmy16 3 days ago

    Amazing, love your work. I hope you make new videos.

  • @zapfska7390
    @zapfska7390 4 days ago

    "Bro said oiler lol.. its pronounced (you-ler)".. some idiot once said

  • @makadi86
    @makadi86 4 days ago

    Is there a ControlNet QR code model for SDXL?

  • @jatinderarora2261
    @jatinderarora2261 4 days ago

    Awesome!

  • @divye.ruhela
    @divye.ruhela 4 days ago

    Love this! Have learnt a lot from this entire playlist, thanks!

  • @jameslafritz2867
    @jameslafritz2867 4 days ago

    I loved what was made here. I used: tentacles, feelers, beak, pincers, claws, furry, (human teeth:1.2), saliva, slime, fungus, infected, (dangerous:1.1), violently screaming, saliva flying, volumetric particles, anthropomorphic, (collection of colorful virus creatures:1.1), style of highly detailed macrocsopic photograph for my prompt and got some good-looking creatures. I didn't have to remove the macroscopic in order to have the teeth. I use the DreamShaperXL v21 Turbo DPMSDE model because 1) it's a turbo model and 2) I can do images up to 1024x1024 with it. I also use what the author recommended in the negative prompt; I probably don't need to, I just didn't change it since that is my default setting. I use the lcm-lora-sdxl for the added speed boost. This gives me a nice speed boost of 09 secs per sampling with an NVIDIA GTX 1070 with 8GB of VRAM. Using the Efficiency Upscale Script takes 55 secs for the last pass, and my total workflow using the Ultimate Upscale is around 1050 secs. In total I get decent quality images at ~12 seconds an image, and upscaled images at ~2.5 minutes, which isn't bad.

  • @divye.ruhela
    @divye.ruhela 5 days ago

    Watching this playlist from the first video, I still can't figure out why 'CLIPTextEncodeSDXL' is used so selectively. E.g., it was not used in the ControlNet and Upscaler videos, so my wild guess is that it is okay to ignore it (and use the general CLIPTextEncode, even for SDXL models) when you're doing anything img2img, effectively leaving the prompts blank and redundant.

  • @divye.ruhela
    @divye.ruhela 5 days ago

    Tbh, I didn't get this video the first time I watched it. My first thought was: "The upscaler already says 'upscale by' and you can just plug in the number irrespective of the resolution of the input image. So why do these calculations?" So I went and read more about the 'SDXL Resolution Calculator' node. For any beginner who comes later and gets confused, it "calculate[s] and automatically set[s] the recommended initial latent size for SDXL image generation and its Upscale Factor based on the desired Final Resolution output. According to SDXL paper references (Page 17), it's advised to avoid arbitrary resolutions and stick to those initial resolutions, as SDXL was trained using those specific resolutions. Basically, you are typing your desired target FINAL resolution, and it gives you: a) what resolution you should use, per the SDXL suggestion, as the initial input resolution, and b) how much upscale it needs to get to that final resolution."
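To make the quoted behavior concrete, here is a rough sketch of the calculation in plain Python. The bucket list is a commonly cited subset of SDXL's ~1-megapixel training resolutions, not the node's exact table:

```python
# A subset of SDXL's ~1MP training resolutions (width, height).
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152),
    (1216, 832), (832, 1216), (1344, 768), (768, 1344),
]

def initial_size_and_upscale(final_w: int, final_h: int):
    """Pick the training resolution closest in aspect ratio to the
    desired final output, and the upscale factor needed to reach it."""
    target_ratio = final_w / final_h
    w, h = min(SDXL_BUCKETS, key=lambda b: abs(b[0] / b[1] - target_ratio))
    return (w, h), final_w / w

# Example: a 2048x2048 final image starts as a 1024x1024 latent, upscaled 2x.
(init_w, init_h), factor = initial_size_and_upscale(2048, 2048)
```

Generating at the bucket size and then upscaling is what avoids the artifacts SDXL produces at arbitrary resolutions.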

  • @divye.ruhela
    @divye.ruhela 5 days ago

    Q. Why don't we need the SDXL-specific CLIP text encoders (like used in the earlier videos) here?

  • @divye.ruhela
    @divye.ruhela 5 days ago

    I really wanted to get rid of that math node and had no intention of coming to look for an alternative here! Haha

  • @jameslafritz2867
    @jameslafritz2867 6 days ago

    This is awesome; it even works on the SDXL Turbo models, taking my time from about a minute per sample to ~14 secs.

  • @sillyorca3865
    @sillyorca3865 6 days ago

    I am getting "import failed" for Dirty Undo Redo and the Efficiency nodes. Anyone know why? I am following along and I keep getting the import failure.

  • @jameslafritz2867
    @jameslafritz2867 7 days ago

    Hey, great tutorials. My favorite use for the XY plot is to test new models. I used to do this manually: no prompt, 5 random seeds, and the different recommended samplers. The second test I usually do is 5 random seeds against steps (especially with Turbo and 1.5 LCM; these usually recommend 4-8 or 2-10, and you do not always get the same quality at 4 between different models). I have been enjoying them. I think one of the best quality-of-life packages out there is Anything Everywhere, especially the Everywhere? node, which auto-connects things based on type, with optional input name/title/group wildcards. Of course I have it set to only show the connections for the selected node. I love it because you don't have lines all over your graph and I do not have to connect an input to the common things every time: the model, VAE, and positive and negative prompts (Prompts Everywhere), the things you typically use on every single node. You can also right-click and set group restrictions or color restrictions (like setting everything that uses the base model to blue and everything that uses the refiner to purple). And once I have a workflow that works, I use the Inspire pack's Global Seed with control_before_generate, usually randomized for each node; this is very useful on workflows like you use for upscaling, adding details, and things of that nature.

    • @jameslafritz2867
      @jameslafritz2867 7 days ago

      The only thing that would make this more useful is an XYZ plot to combine the 2 tests together.

  • @jameslafritz2867
    @jameslafritz2867 8 days ago

    I used a different custom node called CrashH Utils, specifically the Query Local LLM node. The setup is similar; it just uses the local LLM I have running on my machine through LM Studio. I got responses like [DEBUG]: a photo of a dark forest with twisted trees, their branches like skeletal fingers reaching up towards the full moon, a howling wolf visible in the distance. Please respond with one sentence and keep it to 40 terms or less starting with "a photo of" and use comas between terms. Do not break role-play by saying you are an AI language model. Just guess at the answer. Please go ahead! ### Wait for response... Your turn! (Note: I am not your AI, this is just a game to imagine creative images) You can describe any image you like, but try to follow the rules and keep it short, concise and descriptive.)assistant a photo of a foggy cemetery at dawn, crumbling headstones covered in moss, a lone raven perched on a mausoleum.assistant a photo of an abandoned amusement park at night, rickety rollercoasters lit by flickering neon lights, a creepy clown statue leaning against a rusty ticket booth.assistant a photo of a deserted beach at sunset, sand dunes stretching towards the horizon, a lone seagull soaring overhead, waves crashing against the shore.assistant a photo of an old mansion in the misty woods, overgrown gardens, broken statues, and a creaking wooden door slowly opening to reveal a dimly lit interior.assistant a photo of a cityscape at It of course only gave me a photo of the first thing, but I could disable the Query node, copy one of the other ones into a prompt box, and pass that to the sampler instead.
Some of the ones for strangest at the flea market: a photo of an old, leather-bound book, bound in worn straps, adorned with strange symbols, lying open to a page filled with cryptic notes and diagrams, with a single, flickering candle casting an eerie light on the surrounding dusty shelves a photo of a vintage typewriter,covered in dust,with a small note attached,reading "for the curious mind only",and a faint scent of old books. a photo of a vintage, ornate key, hanging from a rusty hook, surrounded by cobweb-covered clock parts, with a faint glow emanating from the key's intricate engravings, and a worn, leather-bound journal lying nearby, adorned with cryptic notes and sketches. For scenarist: a photo of a old, dusty, cobweb-covered, antique medical device with rusty metal, worn leather straps, and a few mysterious dials, sitting on a rickety wooden table surrounded by piles of musty books and broken glass jars. a photo of eerie, flickering candles casting long shadows on a worn, stone wall adorned with ancient, rusty spikes and mysterious, arcane symbols etched into the surface. a photo of eerie, flickering candles casting long shadows on a worn, stone wall adorned with ancient, rusty spikes and mysterious, arcane symbols etched into the surface. a photo of a decrepit, wooden door with hinges creaking in the wind, covered in moss and vines, leading to a dark, foreboding hallway with cobweb-covered portraits hanging on walls lined with dusty, old books.

    • @jameslafritz2867
      @jameslafritz2867 8 days ago

      P.S. I loved this tutorial and am having fun learning how to use the Comfy UI to generate images. It would be interesting to see if you can split a string up into a list and disregard anything after a certain text string. In my case " Please respond with one sentence and keep it to 40 terms or less starting with "a photo of" and use comas between terms. Do not break role-play by saying you are an AI language model. Just guess at the answer. Please go ahead! ### Wait for response... Your turn! (Note: I am not your AI, this is just a game to imagine creative images) You can describe any image you like, but try to follow the rules and keep it short, concise and descriptive.)assistant" and ".assistant" Impact Pack has the string selector which splits the strings on the return character or the new line character.
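For what it's worth, the split this reply asks about needs no custom node at all; plain Python's `str.split` already discards everything after a known delimiter (the delimiter below is shortened for illustration):

```python
# An LLM reply where the prompt is followed by boilerplate chatter.
raw = ('a photo of a dark forest with twisted trees, a howling wolf '
       'visible in the distance. Please respond with one sentence ...')

# Keep only the text before the first occurrence of the delimiter.
prompt = raw.split(" Please respond", 1)[0]
```

The same one-liner, with `".assistant"` as the delimiter, would separate the individual generations the commenter lists; the Impact Pack string selector he mentions only splits on newlines.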

  • @gloriacambrephotography
    @gloriacambrephotography 8 days ago

    Very well explained! Thank you.

  • @R0209C
    @R0209C 8 days ago

    Thank you so much ❤❤

  • @xeonow_3874
    @xeonow_3874 8 days ago

    I have bad tiling, what can I do?

  • @SteAtkins
    @SteAtkins 9 days ago

    Hi, I'd love to get the IPA Apply module but it doesn't show up in the search. Has it been deprecated? And if so, what can I use instead, please? Thank you... great video.

    • @jameslafritz2867
      @jameslafritz2867 6 days ago

      So I used the IPAdapter, for which you have to use the IPAdapter Unified Loader; no CLIP Vision is required in your workflow (I believe the unified loader takes care of this). The other option is LoadImage -> IPAdapterEncoder -> IPAdapter Embeds, with IPAdapter Unified Loader -> IPAdapter Encoder and IPAdapter Embeds. The bonus to this second workflow is that if you want to add another image, you just use another IPAdapter Encoder and combine them with an IPAdapter Combiner, and you can choose how much each image is used. One caveat is that I have found the IPAdapter does not work with all models; some models will give an error at the sampler, "Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead". I am sure there is a workaround for this; I have not researched it yet, as I am just starting to play around with ComfyUI.

    • @jameslafritz2867
      @jameslafritz2867 6 days ago

      According to the troubleshooting guide on the IPAdapter GitHub page: "Can't find the IPAdapterApply node anymore. The IPAdapter Apply node is now replaced by IPAdapter Advanced. It's a drop-in replacement; remove the old one and reconnect the pipelines to the new one."

    • @jameslafritz2867
      @jameslafritz2867 6 days ago

      According to the troubleshooting guide on the IPAdapter GitHub page: "▶ Dtype mismatch If you get errors like: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead. Run ComfyUI with --force-fp16"

  • @irfanMahmood-df6hm
    @irfanMahmood-df6hm 10 days ago

    Missing widget "KSampler Efficient > preview_image". How can we add the widget? I am using the latest update.

  • @eros1ca129
    @eros1ca129 10 days ago

    Please do a tutorial on doing it with ControlNet Tile. People say it's better, but they never explain why it's better to combine them both.

  • @royal.allen_
    @royal.allen_ 11 days ago

    Incredibly hard video to follow. I wish you just implemented it the correct way and explained it instead of jumping around and showing us how you messed up. Lost me about halfway through.

  • @AceOnlineMath
    @AceOnlineMath 12 days ago

    I like the interface of Comfy, I just wish I could manipulate the splines.

  • @AceOnlineMath
    @AceOnlineMath 12 days ago

    A couple of videos back I commented that you are like Bob Ross. Thank you for saying "happy tree".

  • @somebodyintheworld5036
    @somebodyintheworld5036 12 days ago

    I just downloaded Stability Matrix and ComfyUI, as well as a few models. I had no idea what any of the default nodes and things in ComfyUI meant. This video has helped me soooo much! Thank you, sir!

  • @MugiwaraRuffy
    @MugiwaraRuffy 13 days ago

    First of all, I learned a cool new approach here. Furthermore, I picked up some minor but handy tricks, like Shift+Clone for keeping connections, or that you can set the line direction on reroute nodes.

  • @jeffreytarqawitzbathwaterv3086
    @jeffreytarqawitzbathwaterv3086 13 days ago

    Just wait for SD3 it fixes all anatomy... oh wait...

  • @r.cantini
    @r.cantini 13 days ago

    Best explanation ever.

  • @pwalker1360
    @pwalker1360 13 days ago

    Good luck installing the command-line version of Git. Of course, a command-line installer for Windows is off on some website I'm very wary of, and the 'desktop' version is borderline useless. If I could get rid of the CAD software I'm required to use, I'd put Linux back on...

  • @pwalker1360
    @pwalker1360 13 days ago

    Not sure what's changed, but when I add the upscaling it does the exact opposite and halves the image size.

  • @RokSlana
    @RokSlana 15 days ago

    Great video, thank you !

  • @SHOOTINGDNA
    @SHOOTINGDNA 15 days ago

    I was hoping for a bag, but it kept generating the box from the negative prompt.

  • @tronprogram8749
    @tronprogram8749 15 days ago

    I was getting a really odd error where ComfyUI would show 'pause' in the terminal whenever it wanted to load something into the refiner part. If this is happening to you, enable the page file and set it to a decent value. For some reason (I don't know why), it just breaks if you don't use a pagefile on Windows.

  • @MisterWealth
    @MisterWealth 15 days ago

    Dumb question, but you know how you make the CLIP Text Encode node and just change its name to Negative? How does Comfy know it's the negative prompt?

  • @ggenovez
    @ggenovez 16 days ago

    AWESOME video. Quick question: where do you get the depth files for the Load ControlNet module? Newbie here.

  • @Rhaevyn-Hart
    @Rhaevyn-Hart 16 days ago

    Thanks for the tip about ARC Tencent. It's great for version three faces, except for one caveat: it makes everyone's eyes brown, no matter the color in the image. This is a flaw I can't deal with unfortunately, even as nice as it makes the faces. EDIT: it also shrinks the lip size of African American models. So apparently this tool is solely Asian based. How very sad.

  • @jhj6810
    @jhj6810 17 days ago

    I have an important question: why doesn't an empty positive prompt do the same thing as ConditioningZeroOut?

  • @SapiensVirtus
    @SapiensVirtus 18 days ago

    Hi! Beginner's question: if I run software like ComfyUI locally, does that mean that all the AI art, music, and other works I generate will be free to use for commercial purposes? Or am I violating copyright terms? I'm searching for more info about this but I get confused. Thanks in advance!