Installation
BallonsTranslator can be installed on Windows, macOS, and Linux. Choose the method that best fits your needs and technical comfort level.
System Requirements
Minimum Requirements
- OS: Windows 10+, macOS 10.14+, or Linux (Ubuntu 20.04+ recommended)
- RAM: 8GB (16GB recommended for better performance)
- Storage: 5GB free space (for application + models)
- Python: 3.8 - 3.12 (if running from source)
Recommended for Best Performance
- GPU: NVIDIA GPU with 4GB+ VRAM (GTX 1060 or better)
- RAM: 16GB or more
- Storage: SSD for faster model loading
GPU acceleration is optional but highly recommended. The application will work on CPU-only systems, but processing will be significantly slower.
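To check whether acceleration is actually available, you can ask PyTorch directly (assumes PyTorch is already installed; on CPU-only systems this prints False):

```shell
# Prints True if PyTorch detects a CUDA-capable GPU, False otherwise
python -c "import torch; print(torch.cuda.is_available())"
```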
Windows Installation
Option 1: Pre-packaged Version (Recommended for Beginners)
This is the easiest way to get started on Windows without installing Python manually.
Download the Package
Download BallonsTranslator_dev_src_with_gitpython.7z from the project's official download links.
Extract the Archive
Extract the .7z file using 7-Zip to your preferred location.
Run the Application
Double-click launch_win.bat to start BallonsTranslator. On first launch, the application automatically downloads the required libraries and AI models; this may take 10-30 minutes depending on your internet connection.
Updating the Pre-packaged Version
To update to the latest version:
Option 2: Run from Source
For more control and easier updates, run directly from the source code.
Install Prerequisites
Install Python and Git:
- Python: Download from python.org (version 3.8 - 3.12)
- Git: Download from git-scm.com
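With Python and Git installed, a typical setup looks like the following sketch (the repository URL is the upstream project; the launch script name follows the project layout):

```shell
# Clone the repository and enter it
git clone https://github.com/dmMaze/BallonsTranslator.git
cd BallonsTranslator

# Install the Python dependencies
pip install -r requirements.txt

# Start the application (models are downloaded on first run)
python launch.py
```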
Updating from Source
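A sketch of the usual update flow for a Git checkout (assumes no local modifications to the source tree):

```shell
cd BallonsTranslator
git pull                          # fetch the latest source
pip install -r requirements.txt   # pick up any new or changed dependencies
```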
macOS Installation
Running from Source (Recommended)
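A minimal setup sketch using Homebrew (package names are assumptions; adjust to your environment):

```shell
# Install prerequisites
brew install python git

# Clone and run from source
git clone https://github.com/dmMaze/BallonsTranslator.git
cd BallonsTranslator
pip3 install -r requirements.txt
python3 launch.py
```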
Apple Silicon (M1/M2/M3) Users: The application will automatically use Metal acceleration for better performance. No additional configuration needed.
Building macOS Application (Advanced)
You can build a standalone .app bundle, but this is experimental and may have issues. See the macOS app build guide for details.
Linux Installation
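On Ubuntu-style distributions, installation from source can be sketched as follows (package names are assumptions; use your distribution's equivalents):

```shell
# Install prerequisites
sudo apt update
sudo apt install -y python3 python3-pip python3-venv git

# Clone and run from source, isolated in a virtual environment
git clone https://github.com/dmMaze/BallonsTranslator.git
cd BallonsTranslator
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python launch.py
```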
GPU Acceleration Setup
NVIDIA CUDA
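CUDA acceleration requires a CUDA-enabled PyTorch build; a typical manual install looks like this (the cu121 index URL is only an example — pick the matching URL from the selector on pytorch.org):

```shell
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```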
For NVIDIA GPUs, CUDA acceleration is enabled automatically when PyTorch with CUDA support is installed.
AMD ROCm (Windows)
BallonsTranslator supports AMD GPUs through two methods.
Method 1: ZLUDA (Compatible with more GPUs)
Update GPU Drivers
Update to the latest AMD drivers (24.12.1 or newer recommended).
Install HIP SDK
Download and install the AMD HIP SDK.
Download ZLUDA
Download ZLUDA and extract it to C:\zluda.
Replace CUDA Libraries
Copy and rename the files from C:\zluda, replacing the matching files in BallonsTranslator\ballontrans_pylibs_win\Lib\site-packages\torch\lib\.
Use the HIP SDK and ZLUDA versions that match your Windows version:
| Windows | HIP SDK | ZLUDA |
|---|---|---|
| 11 | 7.1.1 | 3.9.6 |
| 10/11 | 6.4.2 | 3.9.5 |
| 10/11 | 6.2.4 | 3.9.5 |
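Per ZLUDA's usual PyTorch instructions, the copy-and-rename step looks like the sketch below (the exact DLL names are an assumption — verify them against the project wiki for your ZLUDA version):

```shell
:: Windows cmd syntax; run from the folder containing BallonsTranslator
copy C:\zluda\cublas.dll   BallonsTranslator\ballontrans_pylibs_win\Lib\site-packages\torch\lib\cublas64_11.dll
copy C:\zluda\cusparse.dll BallonsTranslator\ballontrans_pylibs_win\Lib\site-packages\torch\lib\cusparse64_11.dll
copy C:\zluda\nvrtc.dll    BallonsTranslator\ballontrans_pylibs_win\Lib\site-packages\torch\lib\nvrtc64_112_0.dll
```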
Method 2: Native ROCm (RDNA3+ only)
Supported GPUs: RX 7900, RX 7800, RX 7700, RX 7600, RX 9070, RX 9060, PRO W7900, W7800, W7700
Apple Silicon (M1/M2/M3)
Metal acceleration is enabled automatically on Apple Silicon Macs. No additional setup is required.
Troubleshooting
Installation Issues
Models fail to download automatically
Solution: Manually download the data folder (linked from the project page) and extract it into the BallonsTranslator source directory.
'Python not found' error
Solution:
- Ensure Python is installed from python.org (not Microsoft Store)
- Check that Python is added to PATH during installation
- Try running python3 instead of python
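A quick way to confirm which interpreter is on your PATH:

```shell
# Confirm the interpreter and that its version is supported (3.8 - 3.12)
python3 --version
```

On Windows, `py -3 --version` works even when `python3` is not on the PATH.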
PyTorch installation fails
Solution:
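When the automatic install fails, installing PyTorch manually first often helps (the CUDA index URL is an example; use the selector on pytorch.org to get the right one):

```shell
# CPU-only build (smallest download, works everywhere)
pip install torch torchvision

# Or, for an NVIDIA GPU, a CUDA 12.x build:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```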
Application crashes on startup
Solution: Run the application from the command line and check the terminal output for the specific error.
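For a source install, that amounts to (script name per the project layout):

```shell
cd BallonsTranslator
python launch.py   # errors and tracebacks appear in this terminal
```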
Third-party Input Method causes display bugs
Known Issue: Some third-party input methods (IMEs) may cause display issues in the text editor.
Workaround: Use the system default input method when editing text. See issue #76.
Performance Issues
Slow processing speed
Check GPU acceleration:
- Verify your GPU is detected
- Ensure CUDA/ROCm is properly installed
- Set Detection and OCR modules to use CUDA in settings
- Check “Load models on demand” to reduce memory usage
Out of memory errors
Solutions:
- Close other GPU-intensive applications
- Reduce batch size in settings
- Enable “Low VRAM mode” for certain translators (like Sakura-13B)
- Process smaller images or resize them first
Verifying Installation
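A quick sanity check for a source install (assumes PyTorch was installed in the steps above):

```shell
# Confirm PyTorch imports and report GPU availability
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# Launch the app; the main window should open without errors
python launch.py
```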
If the application window opens and the models download without errors, the installation is complete.
Next Steps
Quick Start
Learn how to translate your first comic in under 5 minutes
