While enterprise SaaS platforms dominate the AI video localization market, a quiet revolution is happening on GitHub. Open-source developers are building powerful video translation and dubbing pipelines—most run locally for full control and no vendor lock-in, while a few offer cloud-native options for those without GPU hardware. If you have the technical know-how (and, for local tools, some decent GPU power), these projects let you bypass recurring SaaS fees and customize every step of the pipeline.
This guide breaks down the top open-source video dubbing tools available today, with their standout features, pros, and cons—so you can choose the right one for your use case. Star counts (as of 2026) are noted for community activity, but the best tool for you depends on your workflow, hardware, and priorities.
The typical open-source dubbing pipeline: Transcribe → Translate → Synthesize → Sync. Most tools handle the full flow; some specialize in one stage.
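The four stages can be sketched as a chain of functions. Everything below is an illustrative stub: the function names and segment schema are invented for this sketch, not any specific tool's API. Real pipelines plug Whisper, an MT or LLM step, and a TTS engine into these slots.

```python
def transcribe(video_path: str) -> list[dict]:
    # Stub for ASR (e.g. Whisper): timed segments with source-language text.
    return [{"start": 0.0, "end": 2.5, "text": "hello world"}]

def translate(segments: list[dict], target_lang: str) -> list[dict]:
    # Stub for MT/LLM translation: replace each segment's text.
    return [{**s, "text": f"[{target_lang}] {s['text']}"} for s in segments]

def synthesize(segments: list[dict]) -> list[dict]:
    # Stub for TTS: attach a synthesized-audio reference per segment.
    return [{**s, "audio": f"tts_{i}.wav"} for i, s in enumerate(segments)]

def sync(segments: list[dict]) -> list[dict]:
    # Stub for alignment: record the time slot each clip must fit.
    for s in segments:
        s["slot"] = s["end"] - s["start"]
    return segments

def dub(video_path: str, target_lang: str) -> list[dict]:
    # Transcribe -> Translate -> Synthesize -> Sync.
    return sync(synthesize(translate(transcribe(video_path), target_lang)))
```

Specialized tools replace one of these functions; full-featured pipelines own the whole chain.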
Need professional dubbing with human review? Open-source gives you control—Videodubbing.com adds the polish.
Jump to
| Section | Tools |
|---|---|
| Full-featured pipelines | VideoLingo, pyVideoTrans, KrillinAI |
| Voice cloning & lip-sync | Linly-Dubbing, ViDubb |
| Lightweight & specialized | videoTranslator, AutoDub, Bluez-Dubbing |
| Cutting-edge & cloud | InfiniteTalk, Ariel, Dubbie |
| Niche workflows | GhostCut, Chenyme-AAVT, open-dubbing, youtube-auto-dub |
| Quick comparison | All 15 tools at a glance |
| Which tool to choose? | Recommendations by use case |
Full-featured pipelines
1. VideoLingo
Repository: github.com/Huanshere/VideoLingo (~16k stars)

VideoLingo — 3-step translation, Streamlit UI, GPT-SoVITS
If your primary goal is cinematic, Netflix-quality subtitles alongside your dubbing, VideoLingo is one of the most robust open-source options. It is an all-in-one video translation and localization tool designed to eliminate stiff machine translations and messy multi-line subtitles.
Pros: VideoLingo uses a unique “Translate-Reflect-Adaptation” 3-step process to ensure translations are contextually coherent and rival professional subtitle teams. It strictly enforces Netflix-standard, single-line subtitles for a clean viewer experience. It features an easy-to-use Streamlit UI with one-click startup and progress resumption if your process gets interrupted. Deep integration with yt-dlp lets you pull source material directly from the web. Supports GPT-SoVITS, Azure, OpenAI, Fish-TTS, and voice cloning for dubbing.
Cons: Because it uses WhisperX for alignment, transcription performance can degrade with heavy background noise. It cannot reliably distinguish between multiple characters or speakers yet, and dubbing may occasionally face timing issues due to natural differences in speech rates between languages.
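The single-line constraint can be approximated with a greedy word-boundary wrap. The 42-character limit below is the commonly cited Netflix guideline, and this helper is a sketch of the idea, not VideoLingo's actual implementation:

```python
def split_single_line(text: str, max_chars: int = 42) -> list[str]:
    # Greedy word-boundary wrap: each returned string becomes its own
    # single-line cue rather than a second line inside the same cue.
    words, lines, cur = text.split(), [], ""
    for w in words:
        candidate = f"{cur} {w}".strip()
        if len(candidate) <= max_chars:
            cur = candidate
        else:
            lines.append(cur)
            cur = w  # a single word longer than max_chars passes through as-is
    if cur:
        lines.append(cur)
    return lines
```

In practice each cue also needs its own start/end time, which is where LLM-aware segmentation earns its keep over naive wrapping.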
2. pyVideoTrans
Repository: github.com/jianchang512/pyvideotrans (~16k stars)

pyVideoTrans — CLI/GUI, packaged .exe, multi-role dubbing
pyVideoTrans is a modular, hybrid local-and-cloud toolkit designed for fully automatic video translation—from multi-role diarization to final video compilation. It appeals to both power users who want granular control and newcomers who prefer a packaged setup.
Pros: Offers a pre-packaged Windows .exe that bundles all dependencies, Python, and FFmpeg—eliminating the notorious “dependency hell” for non-technical users. For developers, it uses uv (a fast Rust-based Python package manager) for rapid, isolated installs. Supports both GUI and CLI with headless batch processing. Modular tasks map to pipeline stages: --task stt for transcription, --task sts for translation, --task tts for synthesis, and --task vtv for the full video pipeline. Integrates Faster-Whisper, FunASR, Whisper.cpp, plus LLM translation via Ollama, DeepSeek, ChatGPT, Gemini, and DeepL. TTS options include Edge-TTS, GPT-SoVITS, ChatTTS, and CosyVoice. Optional Demucs vocal separation preserves background music.
Cons: Requires NVIDIA GPU with CUDA 12.8 and cuDNN 9.11 for GPU acceleration. Memory issues (RAM and VRAM) can cause silent process termination during heavy workloads. macOS users report friction with libz.1.dylib and Qwen3-TTS. Default HEVC (libx265) encoding can cause playback issues on older devices—switch to libx264 if needed.
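The `--task` values above come from the project's documentation, but the rest of this invocation (the `cli.py` entry point and `-i` flag) is assumed for illustration; check the tool's own `--help` before scripting against it. A batch run might be queued like this:

```python
def batch_commands(videos: list[str], task: str = "vtv") -> list[list[str]]:
    # Documented task names: stt (transcribe), sts (translate),
    # tts (synthesize), vtv (full video pipeline).
    valid = {"stt", "sts", "tts", "vtv"}
    if task not in valid:
        raise ValueError(f"unknown task {task!r}, expected one of {sorted(valid)}")
    # Entry point and input flag are hypothetical placeholders.
    return [["python", "cli.py", "--task", task, "-i", v] for v in videos]
```

Each command list can then be handed to `subprocess.run` for unattended, headless processing.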
3. KrillinAI
Repository: github.com/krillinai/KrillinAI (~10k stars)

KrillinAI — Shorts, Reels, TikTok, Apple Silicon
KrillinAI is built specifically for content creators, with a strong focus on cross-platform compatibility. It is a minimalist but highly capable tool that handles video translation, voice cloning, and dubbing—with a major advantage: native support for both landscape and portrait video formats.
Pros: Perfectly optimized for short-form platforms like TikTok, YouTube Shorts, Instagram Reels, Douyin, and Bilibili. Automatically adjusts subtitle layouts for platform-specific UI overlays. One-click installation without complex environment setup. Intelligent terminology replacement for accurate professional vocabulary. High-quality voice cloning via CosyVoice. Native Apple Silicon support via WhisperKit—a standout for macOS users who are often left out by CUDA-only tools.
Cons: Running the full pipeline (Whisper, LLM segmentation, TTS) requires capable hardware. Highly automated workflows mean less granular, frame-by-frame control compared to professional video editors. Early desktop GUI iterations have had stability bugs; macOS users may need to bypass Gatekeeper for unsigned builds.
Voice cloning & lip-sync
4. Linly-Dubbing (Linly-Talker)
Repository: github.com/Kedreamix/Linly-Dubbing (~3k stars)

Linly-Dubbing — Demucs, UVR5, CosyVoice, Linly-Talker
Linly-Dubbing is a comprehensive multi-language video translation tool that brings advanced audio engineering into the open-source space. It doesn’t just translate words—it rebuilds the audio track and can modify the visual output to match.
Pros: Integrates Demucs and UVR5 for expert vocal separation from background music and sound effects. Powerful “one-shot” voice cloning requiring only 3–10 seconds of original audio. Lip-sync capabilities via Linly-Talker keep dubbed audio visually consistent with mouth movements. Supports WhisperX, FunASR (strong for Chinese), and multiple TTS engines: CosyVoice, GPT-SoVITS, Edge TTS, XTTS. Translation via Qwen reduces reliance on expensive cloud APIs. Colab script available for cloud execution without local setup.
Cons: Heavy integration of multiple AI models (UVR5, LLMs, CosyVoice, lip-sync) makes it resource-intensive. Setup is complex—strict PyTorch 2.3.1, FFmpeg 7.0.2, CUDA 11.8 or 12.1. Linux users often hit library pathing errors (e.g., libcudnn_ops_infer.so.8). Steep learning curve for users unfamiliar with Conda and complex Python environments.
Lightweight & specialized
5. videoTranslator (by davy1ex)
Repository: github.com/davy1ex/videoTranslator (~50 stars)

videoTranslator — M2M100, Edge TTS, 10 languages
For users who want a lightweight, straightforward command-line tool without a heavy graphical interface, videoTranslator is a highly effective utility.
Pros: Supports 10 languages out of the box. Uses Facebook’s M2M100 models for advanced translation. Implements Edge TTS for free, natural-sounding voice generation. Progress saving, automatic audio timing synchronization, and optional CUDA GPU acceleration. No complex GUI dependencies—ideal for scripting and automation.
Cons: RVC (Retrieval-based Voice Conversion) for human-like voice matching is still in development. Until then, users are limited to standard TTS voices, which can sound robotic compared to premium voice cloning. Requires manual FFmpeg installation and comfort with command-line execution.
6. ViDubb
Repository: github.com/medahmedkrichen/ViDubb (~100 stars)

ViDubb — Wav2Lip, speaker diarization, emotion analysis
ViDubb focuses on delivering a complete, high-quality audio-visual dubbing experience with strong emphasis on synchronization between the new audio track and the video.
Pros: Speaker diarization (separating different speakers), optional emotion analysis, and audio/video mixing. Integrates Wav2Lip models for lip-sync synchronization, matching the translated voiceover to the original actor’s mouth movements. Gradio web app provides a user-friendly interface. Voice and emotion cloning with multilingual support.
Cons: Installation is highly technical—Anaconda, specific Conda environments, CUDA configuration, and manual download/placement of Wav2Lip pre-trained models. Wav2Lip can produce blurry or pixelated lower-face artifacts on some footage.
7. AutoDub (by shyhirt)
Repository: github.com/shyhirt/AutoDub (~200 stars)

AutoDub — Whisper, Ollama, XTTS v2, 100+ languages
AutoDub is built for users who need fully local, offline processing—no data leaves your machine. Ideal for journalists, enterprises, or anyone with strict privacy or compliance requirements.
Pros: Fully offline pipeline using Whisper for transcription, Ollama for local LLM translation (100+ languages), and XTTS v2 for voice cloning. No API keys, no cloud dependencies. Supports 100+ languages with complete data sovereignty. Lightweight and focused—no heavy visual modification or complex UI.
Cons: Voice quality depends on XTTS v2, which is capable but not as refined as CosyVoice or GPT-SoVITS. Requires local GPU for reasonable performance. No lip-sync or visual modification—audio replacement only.
8. Bluez-Dubbing
Repository: github.com/Globluez/bluez-dubbing (~25 stars)

Bluez-Dubbing — WhisperX, VAD alignment, Netflix-style subtitles
Bluez-Dubbing is a modular, production-ready system built for developers who want REST API access, CLI tools, and a web UI. It focuses on sophisticated audio synchronization and flexible deployment.
Pros: End-to-end pipeline with WhisperX for ASR (word-level timestamps, speaker diarization, 50+ languages), M2M-100 or deep-translator for translation, Chatterbox or Edge TTS for voice synthesis. VAD-based duration alignment with pyrubberband time-stretching for natural pacing. MelBand RoFormer for vocal isolation while preserving background audio. Netflix-style burned-in subtitle rendering. Supports video dubbing with or without subtitles, audio-only translation, or subtitling-only modes. Apache 2.0 licensed.
Cons: Smaller community and fewer stars than the heavyweights. Documentation and troubleshooting may be sparser. Requires familiarity with FastAPI and Python for customization.
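Duration alignment of this kind reduces to computing a per-segment time-stretch rate. The sketch below assumes pyrubberband's convention that a rate above 1 shortens the clip; the clamp bounds are arbitrary values chosen for illustration, not Bluez-Dubbing's defaults:

```python
def stretch_rate(slot_s: float, synthesized_s: float,
                 min_rate: float = 0.8, max_rate: float = 1.3) -> float:
    # rate > 1 speeds the clip up (shorter output); rate < 1 slows it down.
    # Clamping trades a perfect slot fit for natural-sounding pacing.
    if slot_s <= 0:
        raise ValueError("slot duration must be positive")
    rate = synthesized_s / slot_s
    return max(min_rate, min(max_rate, rate))
```

An unclamped rate would make fast-talking segments fit exactly but sound rushed, which is why pacing-aware pipelines cap the stretch.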
Cutting-edge & cloud
9. InfiniteTalk
Repository: github.com/MeiGen-AI/InfiniteTalk (~5k stars)

InfiniteTalk — Full body/head sync, Wan2.1 diffusion
InfiniteTalk represents the cutting edge: it doesn’t just replace audio—it regenerates the video so that head movements, body posture, and facial expressions sync with the dubbed speech. Built for sparse-frame video dubbing.
Pros: Synchronizes lips, head, body, and expressions with the new audio—eliminating visual dissonance entirely. Supports infinite-length video generation. Built on Wan2.1 diffusion model with FP8 quantization and multi-GPU support. Can function as image-to-video with audio input. Apache 2.0 licensed.
Cons: Extremely resource-intensive (14B+ parameter model). Long videos can exhibit color shifts and identity drift. FusionX distillation trades quality for speed. Requires significant VRAM and technical expertise to deploy.
10. Ariel (by Google Marketing Solutions)
Repository: github.com/google-marketing-solutions/ariel (~100 stars)

Ariel — Google Cloud, Gemini, ~$0.50/30s
Ariel is Google’s open-source dubbing solution, designed for video ads and scalable multilingual campaigns. It runs entirely in the cloud—no local GPU required.
Pros: Cloud-native architecture abstracts hardware entirely. Uses Demucs for separation, pyannote for diarization, faster-whisper and Gemini 1.5 Flash for transcription and translation, Google Cloud TTS or ElevenLabs for voice. Cost-effective: roughly $0.50 for a 30-second video. 40+ languages. Installable via pip (gtech-ariel). Apache 2.0 licensed.
Cons: Requires Google Cloud Platform setup (service accounts, IAM, Cloud Run). Data is processed in the cloud—not suitable for strict offline or privacy-first use cases. Optimized for ad workflows rather than general content.
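Extrapolating from the cited figure gives a quick budget check. The linear model here is an assumption for illustration; actual GCP billing depends on the models and voices selected:

```python
def estimate_cost(duration_s: float, rate_per_30s: float = 0.50,
                  n_languages: int = 1) -> float:
    # Linear extrapolation from the cited ~$0.50 per 30-second figure.
    return round(duration_s / 30 * rate_per_30s * n_languages, 2)
```

By this rough model, dubbing a 5-minute video into three languages would run on the order of fifteen dollars.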
11. Dubbie
Repository: github.com/DubbieHQ/dubbie (~200 stars)

Dubbie — Editor workflow, ~$0.10/min, 23 languages
Dubbie is an open-source AI dubbing studio with a video-editor-like interface. It targets cost-conscious creators and small agencies with pay-per-use pricing around $0.10/min.
Pros: Video-editor workflow: edit translations, regenerate audio segments, adjust segment timing to fix AI mistakes, preview in real-time. Uses Whisper for transcription, LLM for segmentation and translation, Azure/OpenAI TTS for voice. Extracts and preserves background music. 23 languages. Built with NextJS, TypeScript, Firebase—familiar stack for web developers. AGPL-3.0 licensed.
Cons: Still in early development; not at feature parity with enterprise tools. Cloud-based TTS incurs API costs. AGPL license may affect commercial deployment. Best for personal use and small teams.
Niche workflows
12. GhostCut (JollyToday)
Repository: github.com/JollyToday/GhostCut-auto_video_translation (~165 stars)

GhostCut — Inpainting, remove & translate burned-in text
GhostCut fills a niche that most dubbing tools ignore: content with hard-coded or burned-in subtitles. It uses video inpainting to detect and erase embedded text, then translates and re-embeds subtitles in the target language—or dubs the audio—while preserving style and position.
Pros: Handles hard subtitles automatically—no need for clean source files or manual masking. Smart text and watermark removal via OCR and inpainting. Full pipeline: remove embedded text → translate → re-embed or dub. Supports 40+ languages. Works on content where the original project files are unavailable (e.g., downloaded videos, screen recordings). API support for integration into custom workflows.
Cons: May rely on cloud/API services for some processing rather than fully local execution. Smaller community than the major tools. Best suited to content that already has hard subtitles—less relevant if you’re working with clean video and soft subs from the start.
13. Chenyme-AAVT
Repository: github.com/chenyme/Chenyme-AAVT (~3k stars)

Chenyme-AAVT — GPT-4/4o + Whisper, batch processing
Chenyme-AAVT is a streamlined, fully automatic video and audio translation tool optimized for batch processing. It prioritizes LLM-powered translation over complex UIs or visual modification—ideal for high-volume workflows.
Pros: Lightweight, API-heavy pipeline: Whisper for ASR, GPT-4/GPT-4o (or KIMI, DeepSeek) for translation, then FFmpeg merge. No multi-role assignment UI or lip-sync—just fast, unattended batch processing. Streamlit WebUI for easy use. GPU acceleration and VAD support. Multiple subtitle format outputs with editing and preview. Flexible deployment: Windows (CPU/GPU), Docker, PyPI. MIT licensed.
Cons: Focused on subtitle translation and merge rather than full voice dubbing—output is primarily translated subtitles with optional TTS. Relies on cloud LLM APIs for best results. Less suitable for users who need voice cloning or lip-sync.
14. open-dubbing (Softcatala)
Repository: github.com/Softcatala/open-dubbing (~373 stars)

open-dubbing — Coqui TTS, NLLB, Apertium, European languages
open-dubbing is a minimalist AI dubbing system built around open-source models. It runs locally and supports multiple TTS engines, with a focus on European languages—especially Catalan.
Pros: Runs entirely on open-source models locally. Supports Coqui TTS, MMS, Edge TTS, and OpenAI TTS. Whisper for source language detection. Multiple translation engines: Meta’s NLLB and Apertium. Gender-aware voice assignment for natural-sounding output. Live demo at softcatala.org/doblatge for English/Spanish to Catalan. Apache 2.0 licensed.
Cons: Experimental—documentation notes that errors can accumulate at each step (ASR, translation, TTS). Best documented for Catalan; other language pairs may have less coverage. Smaller community than the major tools.
15. youtube-auto-dub
Repository: github.com/mangodxd/youtube-auto-dub (~210 stars)

youtube-auto-dub — Whisper, Edge-TTS, music preservation
youtube-auto-dub is a purpose-built pipeline for YouTube video dubbing. It automates the full workflow with a focus on preserving background music and achieving smart audio-video synchronization.
Pros: Built specifically for YouTube: transcribes with Whisper, translates with Google Translate, generates dubbed audio with Edge-TTS. Smart audio-video synchronization. Background music preservation—critical for music-heavy or documentary-style content. Free TTS (Edge-TTS). Straightforward pipeline for creators who primarily work with YouTube content.
Cons: YouTube-specific—less optimized for other platforms or general video files. Google Translate for translation (no LLM context awareness). Relies on Edge-TTS for voice (no voice cloning). Smaller scope than full-featured tools like pyVideoTrans or VideoLingo.
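Music preservation comes down to separating the original music bed, then mixing the dubbed speech back over it. A toy sample-wise mixer, with plain Python lists standing in for real audio arrays and arbitrary gain defaults:

```python
def mix_tracks(music: list[float], speech: list[float],
               music_gain: float = 0.8, speech_gain: float = 1.0) -> list[float]:
    # Pad the shorter track with silence, then sum sample-wise with
    # per-track gains, clipping to the [-1, 1] float-audio range.
    n = max(len(music), len(speech))
    music = music + [0.0] * (n - len(music))
    speech = speech + [0.0] * (n - len(speech))
    return [max(-1.0, min(1.0, m * music_gain + s * speech_gain))
            for m, s in zip(music, speech)]
```

Real pipelines do the same thing on NumPy arrays produced by a separator like Demucs, often ducking the music gain further while speech is active.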
Quick comparison
Compare all 15 tools at a glance.
| Tool | Best For | Key Strength | Setup Complexity |
|---|---|---|---|
| VideoLingo | Netflix-style subtitles, polished output | 3-step translation, Streamlit UI | Medium |
| pyVideoTrans | Power users, batch processing, Windows | Modular CLI/GUI, packaged .exe | Low (Windows) / Medium (dev) |
| KrillinAI | Shorts, Reels, TikTok, portrait video | Cross-platform, Apple Silicon | Low |
| Linly-Dubbing | Vocal separation, lip-sync, voice cloning | Demucs/UVR5, Linly-Talker | High |
| videoTranslator | CLI automation, lightweight | Edge TTS, M2M100, no GUI | Low |
| ViDubb | Lip-sync, emotion preservation | Wav2Lip, speaker diarization | High |
| AutoDub | Privacy, offline, compliance | 100% local, Ollama + XTTS | Medium |
| Bluez-Dubbing | API/CLI automation, developers | REST API, VAD alignment | Medium |
| InfiniteTalk | Generative video dubbing | Full body/head sync, diffusion | Very high |
| Ariel | Video ads, cloud-native | Google-backed, ~$0.50/30s | Low (cloud) |
| Dubbie | Editor workflow, cost-conscious | ~$0.10/min, edit segments | Low |
| GhostCut | Hard subtitles, burned-in text | Inpainting, remove & translate | Medium |
| Chenyme-AAVT | Batch processing, content farms | GPT-4/4o + Whisper, lightweight | Low |
| open-dubbing | Local, European languages | Coqui TTS, NLLB, Apertium | Medium |
| youtube-auto-dub | YouTube-specific dubbing | Whisper + Edge-TTS, music preservation | Low |
Which tool should you choose?
- Cinematic subtitles and a great UI? VideoLingo is the frontrunner.
- Social media creator churning out Shorts and Reels? KrillinAI will streamline your workflow.
- Windows user who wants minimal setup? pyVideoTrans’s packaged .exe gets you running fast.
- Absolute control over vocal separation, voice cloning, and lip-sync? Linly-Dubbing or ViDubb—if you’re willing to invest in setup.
- Lightweight CLI for scripting or automation? videoTranslator fits the bill.
- Strict privacy or offline requirements? AutoDub is built for you.
- REST API or pipeline automation? Bluez-Dubbing offers modular, production-ready tooling.
- Cutting-edge generative video (body + head + lips sync)? InfiniteTalk—if you have the GPU power.
- Cloud-native, no local GPU? Ariel (Google) is built for video ads and scalable campaigns.
- Editor-style workflow with low per-minute cost? Dubbie provides an open-source studio at ~$0.10/min.
- Content with burned-in or hard subtitles? GhostCut removes embedded text, translates, and re-embeds—or dubs—in one pipeline.
- High-volume batch processing? Chenyme-AAVT offers a lightweight, API-heavy pipeline with GPT-4/4o.
- Local-only, European languages? open-dubbing runs on open-source models with Coqui TTS and NLLB.
- YouTube-focused with music preservation? youtube-auto-dub is built for that workflow.
When open-source isn’t enough
Open-source tools excel at control, cost, and customization—but they require technical investment. If you need human-in-the-loop quality assurance, emotion-preserving AI, or enterprise workflows with native reviewer support, a managed platform may be a better fit. Videodubbing.com combines AI dubbing with professional editing, multi-speaker timelines, and human review—ideal when accuracy and brand consistency are non-negotiable.
Need professional dubbing with human review? Try AI dubbing with a professional editor.