Beginner · Image Gen · 12 min read · 2025-03-01

ComfyUI Complete Setup: RTX 5080 Edition

Install, configure, and benchmark your first ComfyUI workflow with optimal VRAM settings for RTX 5080 and other consumer GPUs.

What you'll learn

Install ComfyUI on Windows from scratch
Configure Python, CUDA, and PyTorch correctly
Launch with GPU-optimized flags for your VRAM
Install and manage custom nodes via ComfyUI Manager
Run your first SDXL workflow in under 30 minutes
Read benchmarks for RTX 5080, 4090, and 3080

ComfyUI is the most powerful node-based interface for running local AI image and video generation. This guide gets you from zero to a working workflow on Windows with an NVIDIA GPU — tested live on an RTX 5080 and RTX 3080 16GB.

16GB VRAM required · 12 min setup time · 3.2s per SDXL image · Free and open source

Hardware Requirements

| GPU | VRAM | Best Use Case |
|---|---|---|
| RTX 5080 | 16GB | Full quality, all models, fast batch |
| RTX 4090 | 24GB | Full quality, FLUX FP16, large batch |
| RTX 3080 16GB | 16GB | Full quality, slower on large models |
| RTX 3080 10GB | 10GB | Reduced resolution, GGUF models |
| RTX 3060 | 12GB | Standard models, SD1.5 and SDXL |
| GTX 1660 Ti | 6GB | SD1.5 only, very slow |
Minimum: 6GB VRAM for SD1.5. FLUX and LTX Video require at least 12GB. For best results with modern models, 16GB is the sweet spot.

Step 1 — Install Prerequisites

You need three things before cloning ComfyUI: Python 3.10, Git, and the CUDA Toolkit.

Python 3.10.x

Download from python.org — use exactly 3.10.x, not 3.11 or 3.12. Many custom nodes have dependency conflicts with newer versions.

bash
python --version
Do NOT install Python 3.11 or 3.12. Custom nodes like AnimateDiff and VideoHelperSuite require 3.10.x. Using a newer version will cause cryptic import errors.
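The version requirement is strict enough to be worth checking in code. A minimal sketch (a hypothetical helper, not part of ComfyUI) that validates the interpreter before you continue:

```python
import sys

def is_supported_python(version_info=None):
    """True when the interpreter is the 3.10.x that ComfyUI custom nodes expect."""
    vi = sys.version_info if version_info is None else version_info
    return tuple(vi[:2]) == (3, 10)

# Check the running interpreter, or pass a version tuple explicitly
print(is_supported_python((3, 10, 11)))  # True
print(is_supported_python((3, 12, 1)))   # False
```

Run it inside the same shell you'll use for ComfyUI, so you're checking the interpreter that will actually be picked up by `python main.py`.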

Git

bash
git --version

CUDA Toolkit

Download CUDA 12.1 from [nvidia.com/cuda-downloads](https://developer.nvidia.com/cuda-downloads). Match your driver version.

bash
nvcc --version

Step 2 — Clone ComfyUI

bash
cd C:\
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
Installing at the root avoids Windows path length issues that break certain custom node installs. Deeply nested paths cause silent failures.

Step 3 — Install Dependencies

bash
python -m venv venv


.\venv\Scripts\activate

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

pip install -r requirements.txt

The PyTorch install will download ~2.5GB. This is the step most people mess up — make sure you're using the cu121 index URL, not the default PyPI version.
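You can tell which build you got from the version string itself: CUDA wheels carry a local version label like `+cu121`, while the default PyPI wheel on Windows reports `+cpu`. A small illustrative check (the helper is mine, not a PyTorch API; compare it against `python -c "import torch; print(torch.__version__)"`):

```python
def is_cuda_wheel(torch_version: str) -> bool:
    # PyTorch encodes the build in the version label: "2.2.2+cu121" vs "2.2.2+cpu"
    return "+cu" in torch_version

print(is_cuda_wheel("2.2.2+cu121"))  # True  -> CUDA build, GPU inference works
print(is_cuda_wheel("2.2.2+cpu"))    # False -> reinstall using the cu121 index URL
```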

Step 4 — Install ComfyUI Manager

ComfyUI Manager adds a one-click node installer directly inside the UI. It's essential.

bash
cd custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager
cd ..

After this, every time you find a workflow that needs missing nodes, ComfyUI Manager will detect and install them automatically.

Install Manager BEFORE downloading any models. When you load a workflow that requires missing custom nodes, Manager will prompt you to install them all at once.

Step 5 — Download Your First Model

Place models in ComfyUI/models/checkpoints/

Recommended starter: an SDXL checkpoint such as SDXL Base 1.0 (sd_xl_base_1.0.safetensors on Hugging Face), the model family used for the first workflow and benchmarks in this guide. Drop the file into ComfyUI/models/checkpoints/.

For FLUX (requires 12GB+ VRAM), download the FP8 checkpoint variant; the full FP16 weights only fit comfortably on 24GB cards.
Step 6 — Launch ComfyUI

bash
python main.py                          # default: automatic memory management

python main.py --gpu-only --highvram    # 16GB+ VRAM (RTX 5080, 4090, 3080 16GB)

python main.py --gpu-only               # 12GB VRAM (RTX 3060)

python main.py --lowvram                # 10GB and below

Open your browser to: http://127.0.0.1:8188

ComfyUI runs as a local server. Bookmark 127.0.0.1:8188 — it's faster than typing it each time.
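The flag choice above reduces to a VRAM threshold. A tiny helper (hypothetical; the tiers mirror this guide's recommendations, not official ComfyUI logic) that picks launch flags for a given card:

```python
def launch_flags(vram_gb: int) -> str:
    """Suggest ComfyUI launch flags by VRAM, per this guide's tiers (assumption)."""
    if vram_gb >= 16:           # RTX 5080, 4090, 3080 16GB
        return "--gpu-only --highvram"
    if vram_gb >= 12:           # RTX 3060 12GB
        return "--gpu-only"
    return "--lowvram"          # 10GB and below

print(launch_flags(16))  # --gpu-only --highvram
print(launch_flags(12))  # --gpu-only
print(launch_flags(10))  # --lowvram
```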

RTX 5080 Optimal Settings

These settings max out quality on 16GB VRAM with headroom for LoRAs:

Model:       FLUX Dev FP8 or SDXL
Batch size:  2–4 images
Resolution:  1024x1024 (SDXL) / 768x512 (video)
Steps:       20–30
CFG Scale:   7.0 (images) / 3.0 (video)
Sampler:     dpmpp_2m
Scheduler:   karras
VAE:         Built-in (baked into checkpoint)
Precision:   fp16

Installing Essential Custom Nodes

In ComfyUI Manager, search and install these first:

  1. ComfyUI-Impact-Pack — face detailing, segmentation, upscaling
  2. ComfyUI_IPAdapter_plus — image prompt control
  3. ComfyUI-AnimateDiff-Evolved — video animation
  4. ComfyUI-VideoHelperSuite — video input/output (required for LTX)
  5. rgthree-comfy — better node organization and groups

Your First Workflow

  1. Click Load in the top menu
  2. Select default_workflow.json
  3. In the Load Checkpoint node, select your downloaded model
  4. Click Queue Prompt
You should see your first image in 15–30 seconds on an RTX 5080.
While testing prompts, use batch size 1 and 15 steps. Once you find a direction you like, bump to 25 steps and batch 4 for the final run.
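The draft-then-final workflow pays off because generation time scales roughly linearly with steps and batch size (an approximation, ignoring fixed per-run overhead). A quick cost sketch using the SDXL benchmark figure from this guide:

```python
# Rough cost model: time scales ~linearly with steps and batch size (assumption)
SDXL_SEC_PER_IMAGE_AT_20_STEPS = 3.2  # RTX 5080 benchmark from this guide

def estimate_seconds(steps: int, batch: int) -> float:
    return SDXL_SEC_PER_IMAGE_AT_20_STEPS * (steps / 20) * batch

print(f"draft (15 steps, batch 1): {estimate_seconds(15, 1):.1f}s")  # 2.4s
print(f"final (25 steps, batch 4): {estimate_seconds(25, 4):.1f}s")  # 16.0s
```

A draft iteration costs seconds, so you can explore many prompts before committing to the longer final run.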

Performance Benchmarks

Tested on RTX 5080 16GB, --gpu-only --highvram:

| Model | Resolution | Steps | Time |
|---|---|---|---|
| SDXL FP16 | 1024×1024 | 20 | ~3.2s |
| FLUX Dev FP8 | 1024×1024 | 20 | ~8.1s |
| SD 1.5 | 512×512 | 20 | ~0.8s |
| LTX Video 2.3 | 768×512, 97 frames | 25 | ~5.4s |
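For planning large batch runs, the per-image times translate directly into throughput. Simple arithmetic on the benchmark numbers above:

```python
# Seconds per image from the RTX 5080 benchmark table
benchmarks = {"SDXL FP16": 3.2, "FLUX Dev FP8": 8.1, "SD 1.5": 0.8}

for model, secs in benchmarks.items():
    # 3600 seconds per hour divided by seconds per image
    print(f"{model}: ~{3600 / secs:.0f} images/hour")
```

So an overnight SDXL run at these speeds yields on the order of a thousand images per hour, which is why batch size and step count matter more than raw single-image latency.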

Common Issues

"CUDA out of memory" — Add --lowvram to launch command, reduce batch size to 1, or switch to a GGUF quantized model. Black images — Your VAE doesn't match the model. Download the correct VAE for your checkpoint and load it manually with a VAELoader node. Slow generation — Check --gpu-only flag is set and your venv is activated. Without the venv, you'll use the system Python and possibly CPU inference. Nodes missing — Open ComfyUI Manager → click "Install Missing Custom Nodes". This auto-detects and installs everything a workflow needs.

You're ready to run your first workflow. Next up — training your own LoRA to lock a character or style into any model.