
ComfyUI-LTXVideo

ComfyUI-LTXVideo is a collection of custom nodes and workflows for ComfyUI that extend support for the LTX-2 video generation model. It makes LTX-2 features accessible inside ComfyUI, provides example pipelines (text-to-video, image-to-video, video-to-video), and includes instructions for required model files, upscalers, and low-VRAM usage.

Introduction

ComfyUI-LTXVideo — Detailed Introduction

ComfyUI-LTXVideo is an open-source extension package created by Lightricks that adds a set of powerful custom nodes and example workflows to ComfyUI, specifically to enable and streamline usage of the LTX-2 video-generation family of models inside ComfyUI.

Purpose and scope
  • Integrates LTX-2 capabilities into the ComfyUI node ecosystem so users can run text-to-video (T2V), image-to-video (I2V), and video-to-video (V2V) pipelines within ComfyUI.
  • Provides ready-made workflows, helper nodes, and recommended configurations to simplify model loading, upscaling, temporal processing, and control (poses, edges, depth, camera moves).
Key features
  • Example workflows: packaged JSON workflows for full and distilled T2V/I2V models, V2V detailer, and IC-LoRA pipelines.
  • Required-model list: clear guidance on which LTX-2 checkpoints, spatial and temporal upscalers, LoRAs, and text-encoder files to download and where to place them within ComfyUI folders.
  • Low-VRAM support: includes the nodes in low_vram_loaders.py and launch recommendations (e.g., the --reserve-vram flag) for running large models on GPUs with limited memory (a launch example follows this list).
  • Easy installation via Comfy Manager: users can install the custom nodes from ComfyUI’s Manager (search “LTXVideo”) and the package will register an “LTXVideo” node category.
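
The low-VRAM recommendation above usually comes down to how ComfyUI is launched. A minimal sketch, assuming a local ComfyUI checkout and that roughly 2 GB of VRAM should be held back for the OS and other software (tune the value to your GPU):

    python main.py --reserve-vram 2

The --reserve-vram argument takes an amount in GB to keep free; combined with the package's low-VRAM loader nodes, it helps larger LTX-2 checkpoints fit on GPUs below the recommended spec.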
System requirements & recommendations
  • CUDA-compatible GPU (recommendation: 32 GB+ VRAM for full models; low-VRAM loaders and offloading options provided for smaller setups).
  • Significant disk space: suggested 100+ GB for models and caches.
  • Required model files include multiple LTX-2 checkpoints (full and distilled variants), spatial and temporal upscalers, a distilled LoRA, and a Gemma text encoder. The README lists exact filenames and target folders within ComfyUI (a folder-layout sketch follows this list).
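
Exact filenames change between LTX-2 releases, so the layout below is only an illustrative sketch of ComfyUI's standard model folders with placeholder comments; the README's table is the authoritative source for which file goes where:

    ComfyUI/
      models/
        checkpoints/       # e.g., LTX-2 full and distilled checkpoints
        loras/             # e.g., distilled LoRA and IC-LoRA files
        text_encoders/     # e.g., Gemma text-encoder weights
        upscale_models/    # spatial/temporal upscalers, if the README points here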
Usage workflow
  1. Install ComfyUI and open the Manager.
  2. Install the ComfyUI-LTXVideo custom nodes via the Manager (search “LTXVideo”).
  3. Restart ComfyUI; the LTXVideo nodes appear in the node menu.
  4. Load an example workflow (provided in ComfyUI/custom_nodes/ComfyUI-LTXVideo/example_workflows) and ensure required model files are downloaded to the specified folders.
  5. Run generation; if configured, missing model files are downloaded on first use (an API-based alternative to the browser UI is sketched after this list).
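
Runs can also be queued without the browser UI by posting a workflow to ComfyUI's HTTP API. The sketch below assumes a ComfyUI server on the default port (8188) and a workflow re-exported in API format via "Save (API Format)"; the filename ltx2_t2v_api.json is a placeholder, not a file shipped with the package:

    import json
    import urllib.request

    # Placeholder path: an LTX-2 workflow exported from ComfyUI in API format,
    # not the UI-format JSON shipped in example_workflows.
    with open("ltx2_t2v_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)

    # Queue the job on a locally running ComfyUI instance (default port 8188).
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))  # response includes the assigned prompt_id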
Integration and ecosystem
  • LTX-2 is supported in ComfyUI core; this project supplies the additional nodes, convenience wrappers, and workflows needed to exploit LTX-2’s advanced video features.
  • The project references Hugging Face-hosted model artifacts and Lightricks resources (model and documentation links), making it part of the broader LTX ecosystem (a download sketch follows this list).
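
For scripted downloads of the Hugging Face-hosted artifacts, the huggingface_hub client is the usual route. The repo_id, filename, and target folder below are placeholders rather than the exact values from the README:

    from huggingface_hub import hf_hub_download

    # Placeholder repo_id / filename: substitute the exact values the
    # ComfyUI-LTXVideo README lists for the checkpoint you need.
    path = hf_hub_download(
        repo_id="Lightricks/LTX-2",
        filename="ltx-2-checkpoint.safetensors",
        local_dir="ComfyUI/models/checkpoints",
    )
    print("Downloaded to", path)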
Who should use it
  • ComfyUI users who want to experiment with or produce video using LTX-2 models.
  • Researchers and creators needing example pipelines, LoRA integrations, and upscaling/temporal refinement tools within ComfyUI.
  • Users with high-VRAM GPUs for full-quality pipelines, and those with smaller GPUs who can apply the provided low-VRAM strategies.
Further resources
  • The repository README enumerates models, example workflows, and installation steps. Lightricks’ LTX-2 main repo, Hugging Face model pages, and open-source documentation are recommended for deeper model details and downloads.

Information

  • Website: github.com
  • Authors: Lightricks
  • Published date: 2024/11/21
