Overview
Deep-Live-Cam is an open-source project that enables real-time face swapping and one-click video deepfake generation using a single source image. It targets both creatives (content creators, entertainers) and technical users by offering pre-built binaries for a fast start as well as manual installation instructions for custom setups.
Features
- Real-time webcam/live mode: apply a source face to a live camera feed and use tools like OBS to stream the swapped output.
- One-click video deepfakes: select a source face and target video to generate a swapped output automatically.
- Mouth Mask: option to preserve the original mouth region for more accurate lip-sync and natural motion (a compositing sketch follows this list).
- Many-faces / Face mapping: support multiple source faces or mapping different faces to multiple subjects in a scene.
- Pre-built quickstart packages: convenience builds for Windows and macOS (Apple Silicon) to avoid complex manual setup.
- Multiple execution backends: supports CUDA (NVIDIA), CoreML (Apple Silicon), DirectML (Windows), and OpenVINO (Intel) to leverage hardware acceleration.
- Built-in content checks and ethical notices: project includes mechanisms and disclaimers to discourage processing of explicit/sensitive media and encourages consent for real-person uses.
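As an illustration of the Mouth Mask feature, the following is a generic sketch of mouth-region preservation via feathered mask compositing in OpenCV. It is not the project's actual code; the mouth_points array (mouth-outline landmarks in frame coordinates) is an assumed input from a face-landmark detector.

```python
import cv2
import numpy as np

def preserve_mouth(original, swapped, mouth_points, feather=15.0):
    """Composite the original mouth region back onto the swapped frame.

    mouth_points: (N, 2) int array of mouth-outline landmarks (assumed input).
    """
    # Hard mask over the mouth polygon.
    mask = np.zeros(original.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(mouth_points.astype(np.int32)), 255)
    # Feather the mask edge so the composite blends smoothly.
    mask = cv2.GaussianBlur(mask, (0, 0), feather)
    alpha = (mask.astype(np.float32) / 255.0)[..., None]
    blended = original * alpha + swapped * (1.0 - alpha)
    return blended.astype(np.uint8)
```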
How it works (high level)
Deep-Live-Cam builds on existing face-detection/face-alignment and face-restoration models. Components referenced in the project include GFPGAN for face enhancement and an ONNX-based face swapper (inswapper). The pipeline extracts facial features from a single source image, maps them onto detected faces in target frames, optionally preserves the mouth region, and post-processes the result for visual quality.
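A minimal sketch of that flow, using the insightface and GFPGAN libraries the project builds on, might look like the following. Model names and paths are illustrative, and the exact wiring inside Deep-Live-Cam may differ.

```python
import cv2
import insightface
from insightface.app import FaceAnalysis
from gfpgan import GFPGANer

# 1. Detect and align faces (insightface bundles detection + landmarks).
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

# 2. Load the ONNX face swapper and the GFPGAN face restorer.
swapper = insightface.model_zoo.get_model("models/inswapper_128_fp16.onnx")
restorer = GFPGANer(model_path="models/GFPGANv1.4.pth", upscale=1)

source_face = app.get(cv2.imread("source.jpg"))[0]
frame = cv2.imread("frame.jpg")

# 3. Swap every detected face in the frame, then enhance the result.
for target_face in app.get(frame):
    frame = swapper.get(frame, target_face, source_face, paste_back=True)
_, _, frame = restorer.enhance(frame, paste_back=True)
cv2.imwrite("swapped.jpg", frame)
```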
Installation & Usage (summary)
- Quickstart: download the pre-built binaries for Windows or macOS (Apple Silicon) for the fastest setup.
- Manual install: clone the repo, place required model files (e.g., GFPGANv1.4.pth, inswapper_128_fp16.onnx) into a models folder, create a Python venv, and install requirements.
- Run examples: python run.py launches the GUI; CLI flags enable headless processing (--source, --target, etc.).
- GPU/accelerator notes: each execution provider needs the matching runtime and onnxruntime variant (CUDA/cuDNN for NVIDIA, onnxruntime-silicon for Apple CoreML, onnxruntime-directml for Windows DirectML, onnxruntime-openvino for Intel OpenVINO).
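For example, a headless run might be scripted like this. The --source/--target flags come from the project's CLI; the -o output flag is an assumption based on the roop lineage, so check python run.py --help for the exact names.

```python
import subprocess

# Swap the face from face.jpg onto every face in clip.mp4, headlessly.
subprocess.run(
    [
        "python", "run.py",
        "--source", "face.jpg",   # single source image
        "--target", "clip.mp4",   # video to process
        "-o", "swapped.mp4",      # assumed output flag; verify with --help
    ],
    check=True,  # raise CalledProcessError on a non-zero exit
)
```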
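Under the hood, the execution provider is a standard onnxruntime concept; the sketch below shows generic provider selection with a CPU fallback. The model path is illustrative, and which providers appear depends on the installed onnxruntime variant.

```python
import onnxruntime as ort

# Providers compiled into the installed onnxruntime build.
available = ort.get_available_providers()

# Prefer a hardware-accelerated provider when present, then fall back to CPU.
preferred = [p for p in ("CUDAExecutionProvider",      # NVIDIA CUDA
                         "CoreMLExecutionProvider",    # Apple Silicon
                         "DmlExecutionProvider",       # Windows DirectML
                         "OpenVINOExecutionProvider")  # Intel OpenVINO
             if p in available]

session = ort.InferenceSession(
    "models/inswapper_128_fp16.onnx",  # illustrative model path
    providers=preferred + ["CPUExecutionProvider"],
)
print("Running on:", session.get_providers()[0])
```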
Ethical and legal considerations
Deep-Live-Cam is powerful and can be misused. The repository includes explicit disclaimers asking users to obtain consent from real people and to label deepfaked outputs when shared. The maintainers state that built-in checks prevent processing of nudity and other graphic or sensitive content, and they reserve the right to add watermarks or to shut the project down if legally required. Users remain responsible for complying with local laws and platform policies.
Reception & coverage
The project was widely covered in the tech press and on social media for its ease of use and viral potential. Coverage highlighted both creative use cases and concerns about impersonation, fraud, and deepfake misuse.
Credits & contributors
The repository lists the GitHub owner/maintainer and many community contributors, and it builds on prior open-source work, notably the roop project and insightface components.
Suitable use-cases
- Creative content and character animation for streaming or video production.
- Research/demonstration of real-time face-swapping technology (with ethics/consent).
- Rapid prototyping for AIGC workflows that need live face-replacement demos.
Limitations
- Output quality depends on the source image, the target video's lighting and pose, and available hardware acceleration.
- Potential legal/ethical restrictions depending on how outputs are distributed or used.
