Open Source · Apache 2.0

Dub Any Video
Into Any Language

Modular end-to-end AI dubbing pipeline. WhisperX speech recognition, neural translation, and voice synthesis—completely open source.

Star on GitHub · See How It Works
English · Français · Español · Deutsch · 日本語 · 中文 · 한국어 · العربية · हिंदी · Português
50+ Languages · 4 AI Models · 3 Output Modes · 0 Vendor Lock-in

From Original to Dubbed in Seconds

Original · Chinese
Source Video
Any video or audio file, in any language
PROCESSING
AI Pipeline
ASR → Translate → TTS → Merge
Dubbed · English
Dubbed Result
Natural voice + burned-in subtitles

Everything You Need for
Video Localization

Professional-grade dubbing without the professional-grade price tag.

End-to-End Pipeline

Complete workflow from video input to dubbed output with burned-in subtitles. Upload a video, pick your target languages, and get back a fully localized file. No external tools, no manual steps.

Multiple Modes

Video dubbing with subtitles, audio-only translation, or subtitling-only mode. Maximum flexibility for every use case.

Modular Architecture

Swap ASR, translation, and TTS models independently. Use our defaults or plug in your own.
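That pluggable design can be sketched as a structural interface; the names below (`ASRBackend`, `run_asr`, the segment dict shape) are illustrative, not the project's actual API:

```python
from typing import Protocol

class ASRBackend(Protocol):
    """Shape any ASR engine must satisfy (hypothetical interface)."""
    def transcribe(self, audio_path: str) -> list[dict]:
        """Return segments like {"start": 0.0, "end": 1.2, "text": "..."}."""
        ...

class DummyASR:
    """Stand-in backend, used here only to show the swap."""
    def transcribe(self, audio_path: str) -> list[dict]:
        return [{"start": 0.0, "end": 1.0, "text": "hello"}]

def run_asr(backend: ASRBackend, audio_path: str) -> list[dict]:
    # Any object with a matching transcribe() works: WhisperX behind
    # a thin wrapper, a cloud API client, or a local test double.
    return backend.transcribe(audio_path)
```

Because `Protocol` uses structural typing, swapping in a different engine is just passing a different object — no registration or inheritance required.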

Smart Sync

VAD-based duration alignment and pyrubberband time-stretching for seamless voice replacement.
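The core of that alignment is choosing a stretch rate so synthesized speech fits the original speech slot. A minimal sketch, with illustrative clamp thresholds (the project's actual limits may differ):

```python
def stretch_rate(synth_dur: float, slot_dur: float,
                 min_rate: float = 0.75, max_rate: float = 1.5) -> float:
    """Rate for a time-stretcher so a synth_dur-second clip fits a
    slot_dur-second gap. Rates above 1.0 speed speech up; clamping
    keeps voices natural (thresholds here are illustrative)."""
    if slot_dur <= 0:
        return 1.0
    return max(min_rate, min(max_rate, synth_dur / slot_dur))

# With pyrubberband the clip would then be stretched roughly like:
#   import pyrubberband as pyrb
#   stretched = pyrb.time_stretch(y, sr, stretch_rate(synth, slot))
```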

Netflix-Style Subtitles

Professional subtitle rendering with multiple styles — Netflix, bold-desktop, or mobile-optimized. Subtitles are burned directly into the video with pixel-perfect typography and positioning.
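Burned-in styling of this kind is typically done with FFmpeg's `subtitles` filter and libass `force_style` overrides. A sketch that builds (but does not run) such a command — the style strings are illustrative, not the project's actual presets:

```python
# Illustrative libass force_style presets; the real ones may differ.
STYLES = {
    "netflix": "FontName=Arial,FontSize=24,PrimaryColour=&H00FFFFFF,Outline=0,Shadow=1",
    "mobile":  "FontName=Arial,FontSize=30,Bold=1,Outline=2",
}

def burn_cmd(video: str, srt: str, out: str, style: str = "netflix") -> list[str]:
    """Build an ffmpeg command that burns srt into video using the
    subtitles filter's force_style option."""
    vf = f"subtitles={srt}:force_style='{STYLES[style]}'"
    return ["ffmpeg", "-y", "-i", video, "-vf", vf, "-c:a", "copy", out]
```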

50+ Languages

Major world languages with automatic detection. From Mandarin to Arabic, Hindi to Portuguese.

Four Steps to a
Dubbed Video

Each component runs as an independent microservice. Scale what you need.

STEP 01

Speech Recognition

WhisperX extracts speech with word-level timestamps and speaker diarization
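Word-level timestamps are what make downstream subtitle cues possible. A sketch of the grouping step, assuming word dicts shaped like WhisperX's alignment output (the thresholds are illustrative):

```python
def words_to_captions(words: list[dict], max_gap: float = 0.6,
                      max_len: int = 42) -> list[dict]:
    """Group word-level timestamps (e.g. {"word": "hi", "start": 0.0,
    "end": 0.2}) into caption cues, breaking on pauses longer than
    max_gap seconds or when a cue exceeds max_len characters."""
    cues: list[dict] = []
    for w in words:
        if (cues and w["start"] - cues[-1]["end"] <= max_gap
                and len(cues[-1]["text"]) + 1 + len(w["word"]) <= max_len):
            cues[-1]["text"] += " " + w["word"]
            cues[-1]["end"] = w["end"]
        else:
            cues.append({"text": w["word"], "start": w["start"], "end": w["end"]})
    return cues
```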

STEP 02

Translation

M2M-100 or deep-translator converts text while preserving context and timing
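Preserving timing means translating only the text of each segment while leaving its timestamps untouched. A minimal sketch — `translate` is any callable, e.g. a wrapper around M2M-100 or deep-translator:

```python
from typing import Callable

def translate_segments(segments: list[dict],
                       translate: Callable[[str], str]) -> list[dict]:
    """Translate each segment's text while keeping start/end
    timestamps intact, so TTS can target the same time slots."""
    return [{**seg, "text": translate(seg["text"])} for seg in segments]
```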

STEP 03

Voice Synthesis

Chatterbox voice cloning or Edge TTS generates natural speech in the target language
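For the Edge TTS path, each target language must be mapped to a concrete voice. A sketch with a tiny illustrative voice table (Edge TTS ships many more voices, and the project's mapping may differ):

```python
# Illustrative subset of Edge TTS neural voices.
EDGE_VOICES = {
    "fr": "fr-FR-DeniseNeural",
    "es": "es-ES-ElviraNeural",
    "de": "de-DE-KatjaNeural",
}

def pick_voice(lang: str, default: str = "en-US-AriaNeural") -> str:
    """Map a target language code (e.g. 'fr' or 'fr-CA') to a voice."""
    return EDGE_VOICES.get(lang.lower().split("-")[0], default)

# Synthesis itself is async in the edge-tts package, roughly:
#   import asyncio, edge_tts
#   asyncio.run(edge_tts.Communicate("Bonjour", pick_voice("fr")).save("out.mp3"))
```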

STEP 04

Merge & Output

Intelligent audio alignment, background mixing, and subtitle burning via FFmpeg
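The mixing step can be expressed as an FFmpeg filtergraph that ducks the original track under the dubbed voice. A sketch that builds (but does not run) such a command — the ducking volume and filter layout are illustrative:

```python
def merge_cmd(video: str, dub: str, out: str, bg_vol: float = 0.25) -> list[str]:
    """Build an ffmpeg command that lowers the original audio to
    bg_vol and mixes the dubbed voice track over it."""
    filt = (f"[0:a]volume={bg_vol}[bg];"
            f"[bg][1:a]amix=inputs=2:duration=first[a]")
    return ["ffmpeg", "-y", "-i", video, "-i", dub,
            "-filter_complex", filt, "-map", "0:v", "-map", "[a]",
            "-c:v", "copy", out]
```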

REST API &
CLI Tools

Integrate into your workflow with a comprehensive REST API or use the CLI for batch processing.

FastAPI Backend

High-performance async API with automatic OpenAPI documentation

Web UI Included

Intuitive interface with live SSE progress tracking and preview
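SSE progress events arrive as `data:` lines on a long-lived response. A minimal consumer sketch — the `{"step": ..., "pct": ...}` payload shape is an assumption, not the project's documented schema:

```python
import json

def parse_sse(lines: list[str]) -> list[dict]:
    """Parse Server-Sent-Events 'data:' lines into progress dicts,
    skipping blank keep-alive lines between events."""
    events = []
    for line in lines:
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events
```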

Microservice Architecture

Scale ASR, translation, and TTS independently as demand grows

# Dub a video to French with Netflix subtitles

curl -X POST 'https://globluez.com/v1/dub' \
  --data-urlencode 'video_url=./video.mp4' \
  --data-urlencode 'target_langs=fr' \
  --data-urlencode 'asr_model=whisperx' \
  --data-urlencode 'tts_model=edge_tts' \
  --data-urlencode 'subtitle_style=netflix'

# Response streams progress via SSE
# Final output: ./outs/{run_id}/fr/final_video.mp4
Powered By
WhisperX
PyTorch
FastAPI
FFmpeg
M2M-100
Edge TTS
Chatterbox
Silero VAD
pyannote
Python 3.11
pyrubberband
yt-dlp

Ready to Go
Global?

Start dubbing your content in minutes. Self-hosted, completely free, no vendor lock-in.