Installation

BallonsTranslator can be installed on Windows, macOS, and Linux. Choose the method that best fits your needs and technical comfort level.

System Requirements

Minimum Requirements

  • OS: Windows 10+, macOS 10.14+, or Linux (Ubuntu 20.04+ recommended)
  • RAM: 8GB
  • Storage: 5GB free space (for application + models)
  • Python: 3.8 - 3.12 (if running from source)

Recommended for Best Performance

  • GPU: NVIDIA GPU with 4GB+ VRAM (GTX 1060 or better)
  • RAM: 16GB or more
  • Storage: SSD for faster model loading
GPU acceleration is optional but highly recommended. The application will work on CPU-only systems, but processing will be significantly slower.
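As a quick sanity check before installing, the Python-version and disk-space requirements above can be verified with a short standard-library script (a sketch; the 5GB figure and 3.8 - 3.12 range are taken from the requirements list):

```python
import shutil
import sys

def check_requirements(path=".", min_free_gb=5, min_py=(3, 8), max_py=(3, 12)):
    """Return pass/fail results for the basic install requirements."""
    free_gb = shutil.disk_usage(path).free / 2**30
    py = sys.version_info[:2]
    return {
        "python_ok": min_py <= py <= max_py,  # 3.8 - 3.12
        "disk_ok": free_gb >= min_free_gb,    # at least 5GB free
        "free_gb": round(free_gb, 1),
    }

print(check_requirements())
```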

Windows Installation

Option 1: Pre-packaged Version

The easiest way to get started on Windows without installing Python manually.

1. Download the Package

Download BallonsTranslator_dev_src_with_gitpython.7z from the download link in the project README.

2. Extract the Archive

Extract the .7z file using 7-Zip to your preferred location.
# The extracted folder will contain:
# - launch_win.bat
# - scripts/
# - ballontranslator/
# - and other files

3. Run the Application

Double-click launch_win.bat to start BallonsTranslator.
On first launch, the application will automatically download required libraries and AI models. This may take 10-30 minutes depending on your internet connection.

4. Manual Model Download (if needed)

If automatic downloads fail, manually download the data folder and ballontrans_pylibs_win.7z from the same location and extract them to the program directory.
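After a manual download, you can confirm the models landed in the right place. A minimal sketch, assuming the data folder sits directly inside the program directory as described above:

```python
import os

def models_present(root="."):
    """True if the extracted `data` folder exists and is non-empty."""
    data_dir = os.path.join(root, "data")
    return os.path.isdir(data_dir) and bool(os.listdir(data_dir))

# Run from the program directory:
print(models_present())
```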
Windows 7 Users: The pre-packaged version does not work on Windows 7. Please install Python 3.8 and use Option 2 instead.

Updating the Pre-packaged Version

To update to the latest version:
# Navigate to the installation directory and run:
scripts\local_gitpull.bat

Option 2: Run from Source

For more control and easier updates, run directly from source code.

1. Install Prerequisites

Install Python and Git:
  • Python: Download from python.org (version 3.8 - 3.12)
    Do NOT use Python from the Microsoft Store - download from python.org directly.
  • Git: Download from git-scm.com
During Python installation, make sure to check “Add Python to PATH”.
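Once both are installed, a quick check from a new terminal confirms they are on PATH (the fallbacks are only there so the check degrades gracefully in shells where one alias is missing):

```shell
# Both commands should print a version; if not, revisit the PATH step above.
python --version || python3 --version   # expect Python 3.8 - 3.12
git --version || echo "git not on PATH - install from git-scm.com"
```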

2. Clone the Repository

Open Command Prompt or PowerShell and run:
git clone https://github.com/dmMaze/BallonsTranslator.git
cd BallonsTranslator

3. Launch the Application

python launch.py
On first run, this will:
  • Install PyTorch and other dependencies
  • Download required AI models (~2GB)
  • Set up the environment

Updating from Source

python launch.py --update

macOS Installation

1. Install Prerequisites

Install Homebrew (if not already installed):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Install Python and Git:
brew install python@3.11 git

2. Clone and Launch

git clone https://github.com/dmMaze/BallonsTranslator.git
cd BallonsTranslator
python3 launch.py
Apple Silicon (M1/M2/M3) Users: The application will automatically use Metal acceleration for better performance. No additional configuration needed.
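If you want to confirm that Metal (MPS) is actually visible to PyTorch, a small guarded probe works. This is a sketch: it returns False instead of raising when torch has not been installed yet.

```python
import importlib.util

def mps_available():
    """True if PyTorch is installed and reports a usable MPS (Metal) backend."""
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    mps = getattr(torch.backends, "mps", None)
    return mps is not None and mps.is_available()

print(mps_available())
```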

Building macOS Application (Advanced)

You can build a standalone .app bundle, but this is experimental and may have issues. See the macOS app build guide for details.
Running from source is currently the most stable method on macOS.

Linux Installation

1. Install Dependencies

Ubuntu/Debian:
sudo apt update
sudo apt install python3 python3-pip git
Fedora:
sudo dnf install python3 python3-pip git
Arch Linux:
sudo pacman -S python python-pip git

2. Clone and Launch

git clone https://github.com/dmMaze/BallonsTranslator.git
cd BallonsTranslator
python3 launch.py

GPU Acceleration Setup

NVIDIA CUDA

For NVIDIA GPUs, CUDA acceleration is enabled automatically when PyTorch with CUDA support is installed.
# Force reinstall PyTorch with CUDA support:
python launch.py --reinstall-torch
The launcher script automatically installs PyTorch with CUDA 11.8 support, which is compatible with most modern NVIDIA GPUs.
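To verify that the installed PyTorch build can actually see your NVIDIA GPU, a guarded probe like the following helps (a sketch; it returns None when torch is absent rather than failing):

```python
import importlib.util

def cuda_status():
    """Return (available, device_name), or None if torch is not installed."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    if not torch.cuda.is_available():
        return (False, None)
    return (True, torch.cuda.get_device_name(0))

print(cuda_status())
```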

AMD GPUs (Windows)

BallonsTranslator supports AMD GPUs on Windows through two methods: ZLUDA, a CUDA compatibility layer (steps below), and an experimental native ROCm nightly build (described at the end of this section).

1. Update GPU Drivers

Update to the latest AMD drivers (24.12.1 or newer recommended). Download and install AMD HIP SDK.

2. Download ZLUDA

Download ZLUDA and extract to C:\zluda

3. Configure Environment Variables

Add these to your system PATH:
  • C:\zluda
  • %HIP_PATH%bin

4. Replace CUDA Libraries

Copy and rename files from C:\zluda:
cublas.dll → cublas64_11.dll
cusparse.dll → cusparse64_11.dll
nvrtc.dll → nvrtc64_112_0.dll
Replace the files in: BallonsTranslator\ballontrans_pylibs_win\Lib\site-packages\torch\lib\
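The copy-and-rename step above can be scripted. A sketch (the C:\zluda and torch lib paths are the ones given in this guide and may differ on your machine):

```python
import os
import shutil

# ZLUDA DLL -> name expected by the CUDA 11 PyTorch build (from the list above)
RENAMES = {
    "cublas.dll": "cublas64_11.dll",
    "cusparse.dll": "cusparse64_11.dll",
    "nvrtc.dll": "nvrtc64_112_0.dll",
}

def patch_torch_libs(zluda_dir, torch_lib_dir):
    """Copy ZLUDA DLLs into torch's lib folder under their CUDA names."""
    for src_name, dst_name in RENAMES.items():
        src = os.path.join(zluda_dir, src_name)
        if os.path.isfile(src):
            shutil.copy2(src, os.path.join(torch_lib_dir, dst_name))

# Example, using the paths from this guide:
# patch_torch_libs(
#     r"C:\zluda",
#     r"BallonsTranslator\ballontrans_pylibs_win\Lib\site-packages\torch\lib",
# )
```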

5. First Run

Launch the application and set OCR/Detection to use CUDA.
Keep Image Inpainting on CPU mode when using ZLUDA.
The first run will compile PTX files (5-10 minutes). Subsequent runs will be faster.
Version Compatibility:

Windows | HIP SDK | ZLUDA
11      | 7.1.1   | 3.9.6
10/11   | 6.4.2   | 3.9.5
10/11   | 6.2.4   | 3.9.5
Native ROCm nightly: Requires Python 3.12, HIP SDK 6.4+, and AMD 2026.1.1 drivers. Only works with newer AMD GPUs (RX 7000 series and newer).
Supported GPUs: RX 7900, RX 7800, RX 7700, RX 7600, RX 9070, RX 9060, PRO W7900, W7800, W7700
# Launch with AMD ROCm support:
launch_win_amd_nightly.bat

Apple Silicon (M1/M2/M3)

Metal acceleration is enabled automatically on Apple Silicon Macs. No additional setup required.

Troubleshooting

Installation Issues

Models fail to download
Solution: Manually download the data folder (see "Manual Model Download" above) and extract it to the BallonsTranslator source directory.
Python is not recognized / the launch script fails
Solution:
  1. Ensure Python is installed from python.org (not Microsoft Store)
  2. Check that Python is added to PATH during installation
  3. Try running python3 instead of python
PyTorch installation fails
Solution:
# Reinstall PyTorch manually:
python -m pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu118
Application crashes on startup
Solution: Run from the command line to see error messages:
python launch.py --debug
Check the logs in the terminal for specific errors.
Known Issue: Some third-party input methods (IMEs) may cause display issues in the text editor. Workaround: Use the system default input method when editing text. See issue #76.

Performance Issues

Processing is slow
Check GPU acceleration:
  1. Verify your GPU is detected
  2. Ensure CUDA/ROCm is properly installed
  3. Set Detection and OCR modules to use CUDA in settings
High memory (RAM) usage
Enable on-demand loading (Settings panel):
  • Check “Load models on demand” to reduce memory usage
Out-of-memory (VRAM) errors
Solutions:
  1. Close other GPU-intensive applications
  2. Reduce batch size in settings
  3. Enable “Low VRAM mode” for certain translators (like Sakura-13B)
  4. Process smaller images or resize them first
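When diagnosing VRAM problems, it helps to know how much free memory the GPU actually has. A guarded sketch using torch (returns None when torch or CUDA is unavailable):

```python
import importlib.util

def free_vram_mb():
    """(free, total) GPU memory in MiB, or None without torch/CUDA."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    if not torch.cuda.is_available():
        return None
    free, total = torch.cuda.mem_get_info()
    return free // 2**20, total // 2**20

print(free_vram_mb())
```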

Verifying Installation

After installation, verify everything works:
# Check version and commit:
python launch.py

# Look for output like:
# Python version: 3.11.x
# Version: 1.4.0
# Branch: dev
# Commit hash: <commit-hash>
If the application launches successfully, you’re ready to proceed to the Quick Start Guide!

Next Steps

Quick Start

Learn how to translate your first comic in under 5 minutes