Run powerful local AI models directly on your Android device using Termux and Ollama — completely offline and free.
This guide walks you through:
- Installing Termux
- Setting up required packages
- Installing Ollama
- Running a local AI model
Requirements:
- Android device (recommended: 6GB+ RAM for better performance)
- Stable internet connection (for initial setup and model download)
- Basic familiarity with terminal commands
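To see whether your device meets the RAM recommendation, you can check from inside Termux itself; a minimal sketch that works in any Linux-style shell:

```shell
# Read total RAM from /proc/meminfo and report it in MB
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "Total RAM: $((total_kb / 1024)) MB"
```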
Download Termux from F-Droid (recommended):
👉 https://f-droid.org/packages/com.termux/
⚠️ Avoid the Play Store version — it is outdated.
Open Termux and run:

```bash
pkg update && pkg upgrade -y
```

Grant storage permission:

```bash
termux-setup-storage
```

Install the Ollama package:

```bash
pkg install ollama
```

Ollama requires a background service to run:

```bash
ollama serve
```

💡 Keep this session running.
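If you would rather not keep a dedicated session open for the server, it can also be started in the background; a sketch (the log path here is an arbitrary choice, not an Ollama default):

```shell
# Start the Ollama server detached from the current session,
# sending its output to a log file in the home directory
nohup ollama serve > "$HOME/ollama.log" 2>&1 &
echo "server PID: $!"
```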
Open a new Termux session:
- Swipe from the left
- Tap "New Session"
Link the Ollama binary:
```bash
ln -s $PREFIX/bin/ollama $PREFIX/bin/serve
```

Run a model (example):

```bash
ollama run llama3.2:1b
```

| Model | Size | Notes |
|---|---|---|
| llama3.2:1b | Small | Fast, low resource usage |
| llama3:8b | Medium | Better quality |
| mistral | Medium | Balanced performance |
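Once a model is pulled and `ollama serve` is running, you can also query it over Ollama's local REST API (it listens on port 11434 by default); a sketch:

```shell
# Send a single non-streaming generation request to the local Ollama server.
# Assumes `ollama serve` is running and llama3.2:1b has been pulled.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.2:1b", "prompt": "Say hello in one word.", "stream": false}' \
  || echo "Is 'ollama serve' running?"
```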
Performance tips:
- Close background apps
- Use smaller models if your device lags
- Keep device cool (AI workloads heat phones fast)
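As a rough rule of thumb for picking a model size from the table above (the RAM thresholds below are assumptions, not official guidance):

```shell
# Suggest a model based on device RAM in MB (thresholds are rough assumptions)
suggest_model() {
  if [ "$1" -ge 8000 ]; then echo "llama3:8b"
  elif [ "$1" -ge 4000 ]; then echo "mistral"
  else echo "llama3.2:1b"
  fi
}

suggest_model 6000   # prints "mistral"
```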
Troubleshooting:
- Ensure network is active before downloading models
- Restart Termux or re-run install steps
- Use a smaller model like `llama3.2:1b`
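Before re-running the install steps, a quick check of what is actually on the PATH can narrow the problem down; a sketch:

```shell
# Verify the needed binaries are installed and on PATH
for cmd in ollama curl; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: $(command -v "$cmd")"
  else
    echo "$cmd: missing - try: pkg install $cmd"
  fi
done
```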
Notes:
- First run downloads the model (can be large)
- Works fully offline after download
- Performance depends on your device hardware
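After downloading, you can check how much storage the models take. Ollama's default model directory on Linux is under `$HOME/.ollama`; whether the Termux build uses the same path is an assumption here:

```shell
# Report disk usage of the Ollama model directory, if it exists
du -sh "$HOME/.ollama" 2>/dev/null || echo "no models downloaded yet"
```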
Author: n4od
GitHub: https://github.com/n4od
Feel free to fork, improve, and submit pull requests.
MIT License