GitHub Template Repository - Use this template to create your own customized Vivado Docker environment!
This is a GitHub template repository that provides a complete Docker configuration for building a Vyges development environment with Xilinx Vivado tools. Fork this template to create your own customized version.
- Click "Use this template" above, or visit https://github.com/vyges/xilinx-tools/generate
- Clone your new repository
- Customize patches if needed (see Custom Patches section below)
- Build your Docker image
# 1. Fork this repository to your GitHub account
# 2. Clone your forked repository
git clone https://github.com/YOUR_USERNAME/xilinx-tools.git
cd xilinx-tools
# 3. Customize patches if needed
# 4. Build your image
./build.sh
Add your own patches to the patches/ directory:
# Create custom patch files
patches/
├── vivado-2025.1-postinstall.patch
├── vivado-2025.2-postinstall.patch
├── ubuntu-24.04-vivado-2025.1-postinstall.patch
└── your-custom-patch.patch   # Add your patches here
Note: Environment variables can be overridden via command line during build (see Configuration section below).
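A custom patch is just a unified diff against the installed file you want to fix. The sketch below shows one way to produce one; the file name `settings64.sh` and the appended line are illustrative stand-ins, not files this repository ships.

```shell
# Sketch: generating a custom post-install patch. File names and the appended
# line are illustrative; replace them with the file you actually need to fix.
set -eu
mkdir -p patches
# Stand-in for a Vivado file you want to modify after install.
printf 'export XILINX_VIVADO=/tools/Xilinx/2025.1/Vivado\n' > settings64.sh
cp settings64.sh settings64.sh.orig
printf '# local workaround\n' >> settings64.sh
# Unified diff; diff exits 1 when files differ, so guard it under `set -e`.
diff -u settings64.sh.orig settings64.sh > patches/your-custom-patch.patch || true
```

The resulting file drops straight into patches/ alongside the versioned patches above.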
This template provides:
- ✅ Complete Dockerfile with Ubuntu 24.04 + Vivado 2025.1
- ✅ Automated build script (build.sh) with caching and monitoring
- ✅ Download script (download-installer.sh) for enterprise/internal networks
- ✅ Patch system for post-install fixes
- ✅ Health monitoring with built-in health checks
- ✅ Comprehensive logging and build recovery
- ✅ Multi-organization support for internal networks
- ✅ Production-ready configuration
This GitHub template repository provides a complete Docker configuration for building a Vyges development environment with Xilinx Vivado tools. The Docker image includes:
- Ubuntu 24.04 LTS base image
- Xilinx Vivado and Vitis (configurable version) with device support for 7 Series (Artix-7, Kintex-7, Spartan-7, Virtex-7) and Zynq-7000 SoCs only (see Install configuration below)
- Development tools and dependencies for IP development
- Pre-configured environment optimized for Vyges workflows
- Built-in health monitoring and container management
- Automated build system with caching and recovery
- Docker or Podman installed and running
- Access to Xilinx Vivado installer files
- wget command available (for download script)
- Minimum 300GB free disk space (see Disk Space Requirements below)
- Server machine that does not sleep or suspend (required for 3.5+ hour builds)
- Proper system limits configured (see System Limits section below)
For Running the Container (after build):
- RAM: 64GB+ minimum, 128GB+ recommended (184GB image requires significant memory)
- CPU: 8+ cores recommended (container loading and Vivado operations are CPU-intensive)
- Storage: NVMe SSD required (HDD will cause severe performance issues)
- Available Space: 100-200GB for container runtime and temporary files
- Startup Time: 15-30+ minutes to load the 184GB image (even with 384GB RAM)
- Docker: Minimum 20.10+, Recommended 24.0+ (tested with Docker 28.3.3 and Buildx v0.26.1)
- Podman: Minimum 4.0+, Recommended 4.9+ (✅ successfully tested with Podman 4.9.3)
Your Container Runtime Benefits:
- Podman 4.9.3: Latest features with rootless containers and Docker compatibility
- Docker Compatibility: Podman can run Docker commands seamlessly
- Performance: Optimized layer caching and resource management
- Security: Enhanced security with rootless containers
- Ubuntu 24.04: Full compatibility and optimization
# Install Podman from Ubuntu repositories
sudo apt update
sudo apt install podman
# Verify installation
podman --version
# Remove old Docker (if installed via apt)
sudo apt remove docker docker-engine docker.io containerd runc
# Remove Snap Docker (if you want to replace it)
sudo snap remove docker
# Install Docker from official repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Verify installation
docker --version
docker buildx version
# Install from Ubuntu repositories (may be older but stable)
sudo apt update
sudo apt install docker.io docker-compose
# Verify installation
docker --version
# Install Docker via Snap
sudo snap install docker
# Verify installation
docker --version
All examples in this README use docker commands, but they work identically with podman:
# These commands are equivalent:
docker build -t vyges-vivado .
podman build -t vyges-vivado .
docker run --rm vyges-vivado echo "test"
podman run --rm vyges-vivado echo "test"
The build process requires proper system limits to handle large files and operations. The build script automatically optimizes these, but you can pre-configure them:
# Check current limit
ulimit -n
# Set recommended limit (build script will do this automatically)
ulimit -n 65536
# Make permanent in ~/.bashrc or /etc/security/limits.conf
echo "ulimit -n 65536" >> ~/.bashrc
# Check current limit
ulimit -f
# Set to unlimited (recommended for large builds)
ulimit -f unlimited
# Make permanent
echo "ulimit -f unlimited" >> ~/.bashrc
# Check current limits
ulimit -a
# Set memory limits (adjust based on your system)
ulimit -v unlimited # Virtual memory
ulimit -m unlimited # Physical memory
The build script automatically performs these optimizations:
- Open Files: Sets limit to 65,536 (required for large file operations)
- File Size: Checks and warns if limit is too low
- Memory Management: Monitors memory usage during build
- Cache Management: Optimizes container build cache
- Resource Monitoring: Real-time monitoring of CPU, memory, and disk usage
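The open-files optimization can be sketched as follows; this mirrors what the text says build.sh does, but the script's real logic may differ.

```shell
# Sketch of the automatic open-files optimization (illustrative, not the
# actual build.sh implementation).
raise_nofile_limit() {
  want=65536
  cur=$(ulimit -n)
  # Some shells report "unlimited"; treat that as already sufficient.
  case "$cur" in unlimited) cur=$want ;; esac
  if [ "$cur" -lt "$want" ]; then
    # Raising the soft limit can fail if the hard limit is lower; warn, don't abort.
    ulimit -n "$want" 2>/dev/null || echo "WARN: could not raise open-file limit above $cur"
  fi
  echo "open-file limit: $(ulimit -n)"
}
raise_nofile_limit
```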
# Increase shared memory limits (for large builds)
echo "kernel.shmmax = 68719476736" >> /etc/sysctl.conf
echo "kernel.shmall = 4294967296" >> /etc/sysctl.conf
sysctl -p
# Optimize memory management
echo "vm.swappiness = 10" >> /etc/sysctl.conf
echo "vm.dirty_ratio = 15" >> /etc/sysctl.conf
echo "vm.dirty_background_ratio = 5" >> /etc/sysctl.conf
sysctl -p
The build script includes comprehensive system monitoring:
# Monitor build progress (in another terminal)
./build.sh --progress
# Monitor system resources (in another terminal)
./build.sh --monitor
- Machine Information: Hostname, OS, kernel, architecture
- CPU Details: Model, cores, speed
- Memory: Total and available RAM
- Storage: Disk space and inode usage
- Container Runtime: Version and capabilities
- System Limits: All ulimit values
- File System: Mount points and permissions
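A minimal sketch of collecting that kind of machine summary with standard Linux tools (the script's actual output format may differ):

```shell
# Print a machine summary like the one the build script logs before building.
machine_summary() {
  echo "Host:   $(hostname)"
  echo "Kernel: $(uname -r)  Arch: $(uname -m)"
  echo "CPUs:   $(nproc)"
  # /proc/meminfo reports kB; convert to GiB.
  echo "RAM:    $(awk '/MemTotal/{printf "%.1fG", $2/1048576}' /proc/meminfo)"
  # -P forces POSIX single-line output so awk's NR==2 is reliable.
  echo "Disk:   $(df -h -P . | awk 'NR==2{print $4 " free"}')"
  echo "Open-file limit: $(ulimit -n)"
}
machine_summary
```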
The script provides intelligent build time estimation based on:
- CPU Cores: Parallel processing capability
- RAM: Memory pressure and swapping risk
- Storage: I/O performance and available space
- Container Runtime: Docker vs Podman performance characteristics
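A toy version of such a heuristic, with invented thresholds (the script's actual estimator is different; this only illustrates the idea of discounting a pessimistic baseline per favorable factor):

```shell
# Illustrative build-time estimator: start from a pessimistic baseline and
# subtract an hour for each favorable factor. Thresholds are made up.
estimate_build_hours() {
  cores=$1; ram_gb=$2; ssd=$3   # ssd: 1 = NVMe/SSD, 0 = HDD
  est=6                         # pessimistic baseline in hours
  if [ "$cores" -ge 8 ];   then est=$((est - 1)); fi
  if [ "$ram_gb" -ge 64 ]; then est=$((est - 1)); fi
  if [ "$ssd" -eq 1 ];     then est=$((est - 1)); fi
  echo "$est"
}
estimate_build_hours 16 128 1   # well-provisioned server: prints 3
```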
# "Too many open files" error
ulimit -n 65536
# "File too large" error
ulimit -f unlimited
# "Out of memory" during build
# Check available RAM and consider increasing swap
free -h
swapon --show
# Check all current limits
ulimit -a
# Check system-wide limits
cat /proc/sys/fs/file-max
cat /proc/sys/kernel/shmmax
# Check available resources
free -h
df -h
The Docker build process requires significant disk space due to multiple stages:
- Vivado Installer: ~120GB (original tar file)
- Ubuntu Base Image: ~2-3GB (downloaded during build)
- Package Installation: ~5-10GB (apt packages and dependencies)
- Vivado Installation: ~120GB (extracted and installed)
- Final Image: ~120-150GB (compressed Docker image)
- Minimum Free Space: 300GB (build only)
- Recommended Free Space: 500GB (build + runtime)
- Peak Usage During Build: ~250GB
- Runtime Requirements: Additional 50-100GB for container operations
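The 300GB minimum above can be enforced with a preflight check before starting a build; `check_free_space` is a hypothetical helper, not part of the repository.

```shell
# Fail early if the build drive lacks the required free space.
check_free_space() {
  required_gb=$1
  # GNU df: -BG reports in GiB, --output=avail prints just the free column.
  avail_gb=$(df -BG --output=avail . | tail -1 | tr -dc '0-9')
  if [ "$avail_gb" -lt "$required_gb" ]; then
    echo "ERROR: only ${avail_gb}GB free on $(pwd); need ${required_gb}GB" >&2
    return 1
  fi
  echo "OK: ${avail_gb}GB free"
}
# Before a real build you would call: check_free_space 300
check_free_space 1
```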
- Vivado Installer: Can be deleted after successful build
- Docker Build Cache: Can be cleaned with docker system prune
- Final Image Size: ~184GB (verified with successful build)
- Use the --no-cache flag for clean builds
- Clean the Docker system regularly: docker system prune -a
- Consider building on a dedicated drive with sufficient space
- Monitor disk usage during build: df -h
# Download Ubuntu 24.04 base image before building (AMD supports Ubuntu 24.04 LTS)
docker pull ubuntu:24.04
# Verify the image is cached locally
docker images ubuntu:24.04
# Build with explicit cache usage
docker build --cache-from ubuntu:24.04 -t vyges-vivado .
# Use buildx for advanced caching and parallel builds
docker buildx build --cache-from type=local,src=/tmp/.buildx-cache -t vyges-vivado .
# Buildx with parallel layer building (Docker 28.3.3 feature)
docker buildx build --cache-from type=local,src=/tmp/.buildx-cache \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--platform linux/amd64 \
  -t vyges-vivado .
The Dockerfile is optimized for layer caching:
- Base Image: ubuntu:24.04 (cached separately; aligns with AMD installer support for Ubuntu 24.04 LTS; Docker Hub does not tag sub-minor versions like 24.04.2)
- Package Installation: Single RUN command for all packages
- Vivado/Vitis Installation: Runs the installer's ConfigGen, then scripts/filter_install_config_7series_zynq.sh to restrict the config to 7 Series and Zynq-7000 SoCs, then a batch Install; separate layers for the installer, the installation, and cleanup
- Patches: Applied in separate layers
For Enterprise/Internal Networks: Use the provided script to download from your internal Xilinx installer repository:
# Download from internal network
./download-installer.sh -i "https://internal.example.com/xilinx"
# Download specific version
./download-installer.sh -v 2024.2 -i "https://internal.example.com/xilinx"
# Download with update
./download-installer.sh -u "FPGAs_AdaptiveSoCs_Unified_SDI_2025.1_0530_0145_update.tar" -i "https://internal.example.com/xilinx"
For Public/Individual Use:
Download the Xilinx Vivado installer manually from the Xilinx website and place it in the vivado-installer/ directory.
Required files (tar only): You need only the .tar archive (e.g. FPGAs_AdaptiveSoCs_Unified_SDI_2025.2_1114_2157.tar). Optionally add the .tar.digests file (same name with a .digests suffix) to verify integrity. The build does not use the .bin self-extractor or other download variants; only the .tar is extracted and used for the batch install.
The build follows AMD's batch-mode installation flow: the installer is run with ConfigGen to generate a valid config for the installed version, then a filter script restricts that config to the desired modules, then Install runs. See Batch-Mode Installation Flow.
- Flow: ConfigGen → scripts/filter_install_config_7series_zynq.sh → Install (with --location /tools/Xilinx)
- Install path: /tools/Xilinx (set via xsetup --location; no config-file editing)
- Edition: Vivado ML Standard by default (no Enterprise license required). Override with --build-arg VIVADO_EDITION="Vivado ML Enterprise" if you have an Enterprise license.
- Products: Vivado, Vitis Embedded Development
- Device families enabled: Artix-7, Kintex-7, Spartan-7, Virtex-7 (7 Series), Zynq-7000 All Programmable SoC
- Other families: Disabled (UltraScale+, etc.)
- Post-install: The build runs installLibs.sh (under /tools/Xilinx/<version>/Vitis/scripts) as root to install OS packages (e.g. libtinfo5, libncurses5) required by the tools
- Debugging: The exact config used for the install is (1) logged during the image build (printed in the build output), (2) stored in the image at /opt/vyges/install_config_used.txt, and (3) after a successful build, saved as install_config_used.txt in the directory where you ran the build (so you can inspect it without running a container)
This avoids hand-maintained config files that can break when the installer's module names change (e.g. 2025.2). To change device support, edit scripts/filter_install_config_7series_zynq.sh and add or remove module names in the ENABLE list. For reference: the installer .tar extracts with a top-level directory; with --strip-components=1 the xsetup binary is at ./xsetup in the extract dir. Module names (7 Series, Zynq-7000, etc.) are those produced by ConfigGen; you can inspect a generated config or use vivado-installer/file_listing.txt (from tar -tvf <installer>.tar) to confirm layout.
Reference: xsetup batch options. From the extracted installer dir, run ./xsetup --help. Key options: -b ConfigGen (generate config; then edit and use -b Install -c <file>), -p product (e.g. Vivado), -e edition (e.g. Vivado ML Standard), -c config file. Example: ./xsetup -b ConfigGen then choose product/edition; config is written to ~/.Xilinx/install_config.txt. Install: ./xsetup -a 3rdPartyEULA,XilinxEULA -b Install -c install_config.txt.
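As a concrete illustration of the filter step, here is a minimal stand-in for scripts/filter_install_config_7series_zynq.sh: given a ConfigGen-style Modules= line, it disables every family outside an ENABLE list. The config contents and module names below are invented for the example; the repository's real script and the installer's real names may differ.

```shell
# Fake ConfigGen output for demonstration only.
cat > install_config.txt <<'EOF'
Edition=Vivado ML Standard
Modules=Artix-7:1,Kintex-7:1,Zynq-7000:1,Virtex UltraScale+:1
EOF

ENABLE='Artix-7|Kintex-7|Spartan-7|Virtex-7|Zynq-7000'
awk -F= -v keep="$ENABLE" '
  /^Modules=/ {
    n = split($2, mods, ",")
    out = ""
    for (i = 1; i <= n; i++) {
      name = mods[i]; sub(/:[01]$/, "", name)       # strip the :0/:1 flag
      flag = (name ~ "^(" keep ")$") ? 1 : 0        # keep-list => enabled
      out = out (i > 1 ? "," : "") name ":" flag
    }
    print "Modules=" out; next
  }
  { print }                                          # pass other lines through
' install_config.txt > install_config.filtered.txt
cat install_config.filtered.txt
```

Families in the ENABLE list keep their :1 flag; everything else (here, the UltraScale+ entry) is flipped to :0.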
- File Size: ~120GB
- Download Time: 2-6+ hours depending on network speed
- Server Requirement: Use a machine that does not sleep or suspend
- Network Stability: Ensure stable connection for large file downloads
- Ensure you have at least 300GB free disk space before building
- Use a server machine that does not sleep or suspend
- Build time: 3.5+ hours for complete process
Build Time Breakdown:
- Copying 120GB installer: 30-60 minutes
- Running Vivado installer: 2-3 hours
- Applying patches and finalizing: 15-30 minutes
- Total estimated time: 3.5-4.5 hours
Why Server Machine is Required:
- No sleep/suspend: Prevents build interruption during long operations
- Stable power: Ensures continuous operation for 3.5+ hours
- Network stability: Maintains connection for large file operations
- Resource availability: Consistent CPU/memory allocation
# Make script executable (first time only)
chmod +x build.sh
# Standard build with caching (includes automatic system optimization)
./build.sh
# Clean build (no cache)
./build.sh -c
# Force pull base image
./build.sh -p
# Custom image name
./build.sh -i my-vivado-image
# Show all options
./build.sh -h
Build Script Features:
- Automatic System Optimization: Sets ulimit -n to 65,536, checks file size limits
- Real-time Monitoring: Monitors CPU, memory, disk usage during build
- Build Time Estimation: Intelligent estimation based on your system specs
- Progress Tracking: Detailed logging and progress monitoring
- Error Handling: Comprehensive error detection and recovery
- Resource Monitoring: Background monitoring with ./build.sh --monitor
# Build with default settings
docker build -t vyges-vivado .
# Build with custom Vivado version
docker build --build-arg VIVADO_VERSION=2025.2 -t vyges-vivado .
# Build with custom Ubuntu mirror
docker build --build-arg UBUNTU_MIRROR=mirror.example.com/ubuntu -t vyges-vivado .
# Clean build (recommended for first-time builds)
docker build --no-cache -t vyges-vivado .
# Build with logs saved to file (recommended for long builds)
docker build --no-cache -t vyges-vivado . 2>&1 | tee build.log
# Build with logs saved to file (no terminal output)
docker build --no-cache -t vyges-vivado . > build.log 2>&1
# Build with timestamped log file
docker build --no-cache -t vyges-vivado . 2>&1 | tee "build-$(date +%Y%m%d-%H%M%S).log"
Build Time: Expect 3.5-6 hours for complete builds depending on your system and network speed. Verified: 5h49m for a successful build on Ubuntu 24.04 with Podman 4.9.3.
# 1. Pre-download base image (saves 2-3 minutes)
docker pull ubuntu:24.04
# 2. Build with caching enabled (default)
docker build -t vyges-vivado .
# 3. Build with explicit cache usage
docker build --cache-from ubuntu:24.04 -t vyges-vivado .
# 4. For clean builds (no cache)
docker build --no-cache -t vyges-vivado .
Expected Time Savings:
- First Build: 3.5-6 hours (full build) - Verified: 5h49m
- Subsequent Builds: 2-4 hours (cached layers)
- Base Image Cached: 2-3 minutes saved
- Package Layer Cached: 5-10 minutes saved
- Installer Layer Cached: 30-60 minutes saved (120GB file copy)
Podman 4.9.3 + BuildKit Benefits:
- Parallel Layer Building: Multiple layers build simultaneously
- Advanced Caching: Better cache hit rates and management
- Resource Optimization: Improved memory and disk usage
- Docker Compatibility: Seamless Docker command compatibility
- Modern BuildKit: Latest build engine with optimizations
With Podman, you can use these advanced build commands (Docker commands work identically):
docker build -t vyges-vivado .
# Create a buildx builder instance
docker buildx create --name vyges-builder --use
# Build with advanced caching
docker buildx build --cache-from type=local,src=/tmp/.buildx-cache \
--cache-to type=local,dest=/tmp/.buildx-cache \
  -t vyges-vivado .
# Build with parallel layers and resource constraints
docker buildx build \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--platform linux/amd64 \
--memory=8g \
--memory-swap=8g \
  -t vyges-vivado .
After reconnecting to your remote machine, run these commands in order:
# 1. Check if build image exists
docker images | grep vyges-vivado
# 2. Check build logs
ls -lh build*.log
# 3. Verify image functionality
docker run --rm vyges-vivado echo "Build verification test"
# 4. Check Vivado installation
docker run --rm vyges-vivado /tools/Xilinx/2025.1/Vivado/bin/vivado -version
Image tags: A successful build tags the image as both vyges-vivado:latest and vyges-vivado:<VIVADO_VERSION>-<YYYYMMDD> (e.g. vyges-vivado:2025.2-20260214). Use the version-date tag when you need an explicit build identifier.
✅ Success Indicators:
- Image appears in the docker images list (with both latest and version-date tags)
- Log file is large (>100MB) and contains a completion message
- Container runs without errors
- Vivado binary responds with version information
❌ Failure Indicators:
- No image found
- Small or missing log file
- Container fails to start
- Vivado binary not found or fails
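The checklist can be rolled into one helper; `verify_build` is hypothetical (not part of the repository), and the image name and Vivado path follow this README's defaults.

```shell
# One-shot verification of the success indicators above.
verify_build() {
  img=${1:-vyges-vivado}
  # 1. Image exists?
  docker images --format '{{.Repository}}' | grep -qx "$img" \
    || { echo "FAIL: image $img not found"; return 1; }
  # 2. Container starts?
  docker run --rm "$img" true \
    || { echo "FAIL: container does not start"; return 1; }
  # 3. Vivado responds with version info?
  docker run --rm "$img" /tools/Xilinx/2025.1/Vivado/bin/vivado -version \
    | grep -qi '^vivado' || { echo "FAIL: vivado not responding"; return 1; }
  echo "all checks passed"
}
# Usage: verify_build vyges-vivado
```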
# Shows output on terminal AND saves to file
docker build --no-cache -t vyges-vivado . 2>&1 | tee build.log
- Pros: See progress in real-time + save logs
- Cons: Slightly slower due to tee overhead
- Use Case: Most builds, especially long ones
# Saves logs to file, no terminal output
docker build --no-cache -t vyges-vivado . > build.log 2>&1
- Pros: Fastest, clean logs
- Cons: No real-time progress visibility
- Use Case: Background builds, CI/CD
# Creates unique log file for each build
docker build --no-cache -t vyges-vivado . 2>&1 | tee "build-$(date +%Y%m%d-%H%M%S).log"
- Pros: Multiple builds don't overwrite logs
- Cons: More log files to manage
- Use Case: Multiple builds, debugging
# Watch log file in real-time
tail -f build.log
# Monitor disk space during build
watch -n 30 'df -h'
# Check Docker build progress
docker system df
# Resume interrupted build (if using --cache-from)
docker build --cache-from vyges-vivado -t vyges-vivado .
# Check Docker system disk usage
docker system df
# Check image build history
docker history vyges-vivado
After reconnecting to your remote machine, check build status:
# List all Docker images
docker images
# Look for your image
docker images | grep vyges-vivado
# Check image details
docker inspect vyges-vivado
# View the log file if it exists
ls -la build*.log
# Check log file size (should be substantial if build completed)
ls -lh build*.log
# View end of log file for completion message
tail -50 build.log
# Search for completion indicators
grep -i "successfully built\|build completed\|finished" build.log
# Check if build process is still running
docker ps -a
# Check Docker system disk usage
docker system df
# Look for any running containers
docker ps
# Check Docker build cache and layers
docker history vyges-vivado
# Test if the image can run
docker run --rm vyges-vivado echo "Image works!"
# Check Vivado installation
docker run --rm vyges-vivado ls -la /tools/Xilinx/
# Test Vivado binary
docker run --rm vyges-vivado /tools/Xilinx/2025.1/Vivado/bin/vivado -version
- Log Rotation: Use timestamped logs for multiple builds
- Storage: Ensure sufficient space for logs (logs can be several GB)
- Cleanup: Archive old logs: gzip build-*.log
# Start build in background with completion notification
nohup docker build --no-cache -t vyges-vivado . > build.log 2>&1 && \
echo "Build completed successfully!" && \
notify-send "Docker Build Complete" "Vyges Vivado image built successfully!" &
# Get the background process ID
echo $! > build.pid
# Monitor background process
tail -f build.log
# Create a monitoring script
cat > monitor_build.sh << 'EOF'
#!/bin/bash
BUILD_LOG="build.log"
IMAGE_NAME="vyges-vivado"
echo "Monitoring build: $BUILD_LOG"
echo "Target image: $IMAGE_NAME"
while true; do
# Check if build process is still running
if ! pgrep -f "docker build.*$IMAGE_NAME" > /dev/null; then
# Check if image was created
if docker images | grep -q "$IMAGE_NAME"; then
echo "✅ Build completed successfully!"
echo "Image details:"
docker images | grep "$IMAGE_NAME"
break
else
echo "❌ Build failed or was interrupted"
echo "Last 20 lines of log:"
tail -20 "$BUILD_LOG"
break
fi
fi
echo "Build still running... $(date)"
sleep 30
done
EOF
chmod +x monitor_build.sh
./monitor_build.sh
# Send email notification when build completes
docker build --no-cache -t vyges-vivado . > build.log 2>&1 && \
echo "Build completed at $(date)" | mail -s "Docker Build Success" [email protected] || \
echo "Build failed at $(date)" | mail -s "Docker Build Failed" [email protected]
- RAM: 64GB+ minimum, 128GB+ recommended (the 184GB image requires substantial memory)
- CPU: 8+ cores recommended (container loading is extremely CPU-intensive)
- Storage: NVMe SSD required, additional 100-200GB for container runtime
- Startup Time: 15-30+ minutes to load the 184GB image (verified with 384GB RAM)
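A small preflight sketch against these recommendations; `runtime_preflight` is illustrative and not shipped with the repository, and the thresholds are the README's suggestions, not hard requirements.

```shell
# Warn if the host falls short of the recommended runtime resources.
runtime_preflight() {
  cores=$(nproc)
  ram_gb=$(awk '/MemTotal/{printf "%d", $2/1048576}' /proc/meminfo)
  [ "$cores" -ge 8 ]   || echo "WARN: only $cores cores (8+ recommended)"
  [ "$ram_gb" -ge 64 ] || echo "WARN: only ${ram_gb}GB RAM (64GB+ recommended)"
  echo "preflight done: ${cores} cores, ${ram_gb}GB RAM"
}
runtime_preflight
```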
# Interactive shell (be patient during startup)
docker run -it vyges-vivado
# Mount current directory
docker run -it -v $(pwd):/workspace vyges-vivado
# Run specific command
docker run -it vyges-vivado vivado -version
# Run with resource limits (recommended for large images)
docker run -it --memory=32g --cpus=8 vyges-vivado
# Run in background with resource monitoring
docker run -d --name vivado-container --memory=32g --cpus=8 vyges-vivado tail -f /dev/null
Run the container and print Vivado's version (no GUI, no project). This may take a few minutes.
podman run --rm vyges-vivado /tools/Xilinx/2025.1/Vivado/bin/vivado -version
podman run --rm vyges-vivado /tools/Xilinx/2025.2/Vivado/bin/vivado -version
Expected output (Vivado 2025.1):
vivado v2025.1 (64-bit)
Tool Version Limit: 2025.05
SW Build 6140274 on Wed May 21 22:58:25 MDT 2025
...
Copyright 1986-2022 Xilinx, Inc. All Rights Reserved.
Copyright 2022-2025 Advanced Micro Devices, Inc. All Rights Reserved.
Confirm Vivado runs in batch mode (exits 0). Allow a few minutes for container and Vivado to start:
podman run --rm vyges-vivado /tools/Xilinx/2025.1/Vivado/bin/vivado -mode batch -source /dev/stdin <<< "exit 0"
Success: exit code 0 and log lines ending with Exiting Vivado at .... Example:
****** Vivado v2025.1 (64-bit)
**** SW Build 6140274 on Wed May 21 22:58:25 MDT 2025
...
Sourcing tcl script '/tools/Xilinx/2025.1/Vivado/scripts/Vivado_init.tcl'
2 Beta devices matching pattern found, 0 enabled.
source /dev/stdin
INFO: [Common 17-206] Exiting Vivado at Fri Feb 13 18:46:37 2026...
Failure: non-zero exit or license/error messages.
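A wrapper that reduces the smoke test to an explicit PASS/FAIL; `smoke_test` is hypothetical (not part of the repository), with the image name and Vivado path taken from the examples above. It pipes the Tcl via stdin instead of a here-string so it also works in plain sh.

```shell
# Run the batch-mode smoke test and report PASS or FAIL based on exit code.
smoke_test() {
  rt=${1:-podman}   # container runtime: podman or docker
  if echo "exit 0" | "$rt" run --rm vyges-vivado \
       /tools/Xilinx/2025.1/Vivado/bin/vivado -mode batch -source /dev/stdin
  then echo PASS
  else echo FAIL
  fi
}
# Usage: smoke_test podman   (or: smoke_test docker)
```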
To inspect the image and run Vivado manually:
podman run -it --rm vyges-vivado bash
# inside container:
/tools/Xilinx/2025.1/Vivado/bin/vivado -version
# or add to PATH and use vivado -mode batch -source ...
exit
The 184GB container image presents unique challenges for runtime performance:
- Image Loading: Container runtime loads 184GB image layers into memory
- Vivado Runtime: Vivado itself requires 8-16GB RAM for typical operations
- Total RAM Usage: 64GB+ RAM required for smooth operation (verified with 384GB system)
- Memory Pressure: Monitor with free -h and docker stats; expect high usage during startup
- Startup Time: 15-30+ minutes to initialize the 184GB container (even with high-end hardware)
- Vivado Operations: CPU-intensive synthesis and simulation
- Recommended: 8+ CPU cores for responsive performance
- Monitoring: Use htop or docker stats to monitor CPU usage; expect sustained high CPU during startup
- Container Layers: 184GB of data requires massive I/O operations
- Temporary Files: Vivado creates large temporary files during operation
- NVMe SSD Required: HDD will cause severe performance degradation (hours to load)
- Available Space: Ensure 100-200GB free space for container operations
# Set memory and CPU limits (realistic for 184GB image)
docker run -it --memory=32g --cpus=8 --name vivado-dev vyges-vivado
# Monitor resource usage
docker stats vivado-dev
# Check container resource limits
docker inspect vivado-dev | grep -A 10 "Resources"
# Use tmpfs for temporary files (faster I/O)
docker run -it --tmpfs /tmp --tmpfs /var/tmp vyges-vivado
# Mount SSD storage for better performance
docker run -it -v /fast-storage:/workspace vyges-vivado
# Use overlay2 storage driver (default, but verify)
docker info | grep "Storage Driver"
# Pre-allocate memory for better performance (realistic for 184GB image)
docker run -it --memory=32g --memory-swap=32g vyges-vivado
# Monitor memory usage
docker exec vivado-container free -h
docker exec vivado-container cat /proc/meminfo
# Start container in background (realistic resource allocation)
docker run -d --name vivado-dev --memory=32g --cpus=8 vyges-vivado tail -f /dev/null
# Attach to running container
docker exec -it vivado-dev bash
# Stop and remove when done
docker stop vivado-dev
docker rm vivado-dev
# Create persistent workspace
docker run -it --name vivado-dev -v $(pwd):/workspace vyges-vivado
# Commit changes to new image
docker commit vivado-dev my-vivado-custom
# Save custom image
docker save my-vivado-custom -o my-vivado-custom.tar
Based on Real-World Testing (384GB RAM, High-End Hardware):
- Container Startup: 15-30+ minutes (even with 384GB RAM)
- Memory Usage: 3-4GB+ during loading process (podman process)
- CPU Usage: Sustained 30-40% CPU during startup
- Storage I/O: Massive I/O operations during image loading
- Patience Required: This is normal for a 184GB container image
When the container is working correctly, you should see:
# Check environment variables
podman run --rm localhost/vyges-vivado env | grep -i vivado
# Expected output:
XILINX_VIVADO=/tools/Xilinx/2025.1/Vivado
PATH=/tools/Xilinx/2025.1/Vitis/bin:/tools/Xilinx/2025.1/Vivado/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
VIVADO_BASE_VERSION=2025.1
VIVADO_VERSION=2025.1
Additional verification commands:
# Check Vivado installation
podman run --rm localhost/vyges-vivado ls -la /tools/Xilinx/2025.1/Vivado/bin/
# Test Vivado version
podman run --rm localhost/vyges-vivado /tools/Xilinx/2025.1/Vivado/bin/vivado -version
# Check Vitis installation
podman run --rm localhost/vyges-vivado ls -la /tools/Xilinx/2025.1/Vitis/bin/
Basic functionality tests:
# Test container startup
podman run --rm localhost/vyges-vivado echo "✅ Container works"
# Check environment variables (should show Vivado paths)
podman run --rm localhost/vyges-vivado env | grep -i vivado
# Test Vivado version
podman run --rm localhost/vyges-vivado /tools/Xilinx/2025.1/Vivado/bin/vivado -version | head -5
# Count Vivado binaries
podman run --rm localhost/vyges-vivado ls -la /tools/Xilinx/2025.1/Vivado/bin/ | wc -l
Expected results when working correctly:
- Environment variables show proper Vivado paths
- Vivado version command returns version information
- Multiple Vivado binaries are available in the bin directory
# Monitor container resources
docker stats --no-stream
# Monitor specific container
docker stats vivado-dev
# Monitor system resources
htop
iotop
# Monitor podman process specifically
ps aux | grep podman
top -p $(pgrep podman)
# Check container startup time
time docker run --rm vyges-vivado echo "startup test"
# Check Vivado startup time
time docker run --rm vyges-vivado /tools/Xilinx/2025.1/Vivado/bin/vivado -version
# Monitor disk I/O during operations
iostat -x 1
# Check if image is fully loaded
docker images | grep vyges-vivado
# Verify container runtime performance
docker system df
docker system events
# Check for resource constraints
ulimit -a
free -h
# Monitor memory usage
docker stats --no-stream
free -h
# Check for memory leaks
docker exec vivado-container ps aux --sort=-%mem
# Restart container if needed
docker restart vivado-dev
# Monitor CPU usage
htop
docker stats --no-stream
# Check for CPU-intensive processes
docker exec vivado-container top -o %CPU
# Limit CPU usage if needed
docker update --cpus=2 vivado-dev
The Docker image includes a built-in health check that monitors the Vivado installation:
# Check container health status
docker ps
# View detailed health check information
docker inspect --format='{{json .State.Health}}' <container_name>
# Run health check manually
docker exec <container_name> /tools/Xilinx/2025.1/Vivado/bin/vivado -version
Health Check Details:
- Interval: Every 30 seconds
- Timeout: 10 seconds per check
- Start Period: 60 seconds after container starts
- Retries: 3 consecutive failures before marking unhealthy
- Check: Verifies Vivado binary is accessible and executable
Health Status:
- 🟢 healthy: Vivado is working correctly
- 🔴 unhealthy: Vivado is not accessible or failing
- 🟡 starting: Container is in its initial startup phase
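These parameters map onto a Dockerfile HEALTHCHECK instruction along these lines; this is a sketch matching the values listed above, and the image's actual instruction may differ.

```dockerfile
# Probe Vivado every 30s; allow 60s for startup; 3 failures -> unhealthy.
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
  CMD /tools/Xilinx/2025.1/Vivado/bin/vivado -version || exit 1
```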
- VIVADO_VERSION: Vivado version to install (default: 2025.1)
- VIVADO_UPDATE: Optional update file to install
- INTERNAL_DOWNLOAD_URL: Internal download URL for organizational use
- UBUNTU_MIRROR: Custom Ubuntu package mirror URL
For organizations with internal Xilinx installers:
1. Set the internal download URL:
   export INTERNAL_DOWNLOAD_URL="https://internal.example.com/xilinx"
2. Download the installer:
   ./download-installer.sh -i "$INTERNAL_DOWNLOAD_URL"
3. Build the image:
   docker build -t vyges-vivado .
xilinx-tools/
├── Dockerfile                # Docker image definition
├── build.sh                  # Automated build script with caching
├── download-installer.sh     # Installer download script (enterprise use)
├── vivado-installer/         # Directory for installer files (.tar and optional .tar.digests)
│                             # Optional: file_listing.txt from tar -tvf <installer>.tar for xsetup path and layout
├── patches/                  # Post-install patches
│   ├── vivado-2025.1-postinstall.patch              # Vivado 2025.1 fixes
│   ├── vivado-2025.2-postinstall.patch              # Vivado 2025.2 fixes
│   └── ubuntu-24.04-vivado-2025.1-postinstall.patch # Ubuntu-specific fixes (optional)
├── scripts/
│   └── filter_install_config_7series_zynq.sh  # Restricts ConfigGen output to 7 Series + Zynq (batch mode)
├── entrypoint.sh             # Container entrypoint
├── logs/                     # Build log files (created automatically)
├── exports/                  # Exported Docker images (created after build)
└── README.md                 # This file
The Docker image applies patches to fix known issues:
- File: vivado-${VIVADO_VERSION}-postinstall.patch
- Purpose: Fixes specific to the Vivado version (e.g. X11 workarounds, device enablement)
- Status: Required - the build will fail if it is missing
- File: ubuntu-${UBUNTU_VERSION}-vivado-${VIVADO_VERSION}-postinstall.patch
- Purpose: OS-specific fixes for particular Ubuntu releases
- Status: Optional - the build continues if it is missing
- X11 Workaround: Disables problematic X11 locale support code
- U280 Device: Enables beta device support for the Alveo U280
- Ubuntu 24.04.3: No specific patches needed (newer release)
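The required-vs-optional selection described above can be sketched as follows; `apply_patches` is illustrative (the Dockerfile's actual commands may differ), and echoes stand in for the real `patch` invocations.

```shell
# Required patch must exist; the optional OS-specific patch is applied only
# when present, and its absence is not an error.
apply_patches() {
  vivado_ver=$1; ubuntu_ver=$2
  req="patches/vivado-${vivado_ver}-postinstall.patch"
  opt="patches/ubuntu-${ubuntu_ver}-vivado-${vivado_ver}-postinstall.patch"
  if [ ! -f "$req" ]; then
    echo "ERROR: required patch $req missing" >&2
    return 1
  fi
  echo "applying $req"
  if [ -f "$opt" ]; then echo "applying $opt"; fi
  return 0
}

mkdir -p patches
touch patches/vivado-2025.1-postinstall.patch
apply_patches 2025.1 24.04
```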
- Ensure you've run download-installer.sh first (for enterprise use)
- Check that installer files are in the vivado-installer/ directory
- Use the .tar file only, not the .bin. The build extracts the .tar; the .bin and other variants are not used.
- Verify file names match the expected patterns (e.g. FPGAs_AdaptiveSoCs_Unified_SDI_2025.2_1114_2157.tar)
- For public use, download manually from the Xilinx website
The config that was actually passed to the installer is (1) printed in the build log during the Docker/Podman build, (2) saved in the directory where you ran ./build.sh as install_config_used.txt after a successful build, and (3) stored in the image at /opt/vyges/install_config_used.txt. To view from an existing image: podman run --rm vyges-vivado cat /opt/vyges/install_config_used.txt. Use this when debugging device selection or install failures.
- Check network connectivity
- Verify internal URLs are accessible
- Ensure proper authentication for internal networks
- Check that Docker has sufficient disk space
- Ensure at least 300GB of free space is available
- Monitor with `df -h` during the build
- Clean the Docker cache: `docker system prune -a`
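The space requirement can be verified up front. A minimal sketch, where `require_free_space` is a hypothetical helper; the 300GB default follows the recommendation above, and GNU `df` is assumed:

```shell
# Fail fast if the filesystem holding the build context does not have
# enough free space for the Vivado extraction and image layers.
require_free_space() {
  local need_gb="${1:-300}" path="${2:-.}"
  local free_gb
  # GNU df: report available space in whole gigabytes, strip non-digits.
  free_gb=$(df -BG --output=avail "$path" | tail -1 | tr -dc '0-9')
  if [ "$free_gb" -lt "$need_gb" ]; then
    echo "ERROR: only ${free_gb}GB free at $path; need ${need_gb}GB" >&2
    return 1
  fi
  echo "OK: ${free_gb}GB free at $path"
}
```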
- Verify all required files are present
- Check Docker logs for specific error messages
- System Limits Issues: Use the automated build script for automatic optimization
  - Run `./build.sh` instead of manual `docker build` commands
  - The script automatically sets `ulimit -n 65536` and applies other optimizations
  - Check current system limits with `ulimit -a`
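The limit adjustment the script performs can be reproduced manually. A sketch of the idea; the exact set of optimizations applied by `build.sh` may differ:

```shell
# Raise the open-file limit before invoking the container build, since
# Vivado extraction opens many files at once. Falls back with a warning
# if the hard limit prevents raising it.
ulimit -n 65536 2>/dev/null || echo "WARN: could not raise nofile limit"
nofile_limit=$(ulimit -n)
echo "Effective nofile limit: $nofile_limit"
```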
- Disk Space Errors: Common causes include:
  - Insufficient space for Vivado installer extraction
  - Docker build cache consuming too much space
  - The system running out of inodes or disk space
- Connection Issues: SSH disconnections, network timeouts
  - Always use log redirection: `docker build ... 2>&1 | tee build.log`
  - Check the log file for the last completed step
  - Resume the build from the last successful layer if possible
- System Crashes/Reboots
  - The Docker build cache may be preserved
  - Check `docker images` for partial builds
  - Consider using `--cache-from` to resume the build
- Manual Interruption (Ctrl+C)
  - The build cache is preserved
  - Resume with: `docker build --cache-from vyges-vivado -t vyges-vivado .`
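Putting the resume advice together, a hypothetical wrapper that prints the resume command (including the recommended log redirection) rather than running it, so it can be reviewed first:

```shell
# Print the command to resume an interrupted build, reusing the prior
# image as a cache source and capturing output to a log file.
resume_build_cmd() {
  local image="${1:-vyges-vivado}"
  echo "docker build --cache-from ${image} -t ${image} . 2>&1 | tee build-resume.log"
}
```

Example: `eval "$(resume_build_cmd)"` runs it after inspection.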
- Container shows as "unhealthy"
  - Verify that the Vivado installation completed successfully
  - Check that `/tools/Xilinx/${VIVADO_VERSION}/Vivado/bin/vivado` exists
  - Ensure proper file permissions on the Vivado binary
  - Review the health check logs: `docker inspect <container_name>`
- Health check timing out
  - Vivado may take longer to start on slower systems
  - Consider increasing the timeout values in the Dockerfile if needed
- The image runs as root (required for Vivado installation)
- Consider security implications for production use
- Internal download URLs should use HTTPS when possible
- Review and validate all downloaded files
- Inspiration: This project was inspired by the work of ESnet SmartNIC team and their xilinx-tools-docker repository
- ESnet License: ESnet SmartNIC License - Copyright (c) 2022, The Regents of the University of California, through Lawrence Berkeley National Laboratory
- Xilinx Vivado Documentation
- Docker Best Practices
- Vivado Installation Guide
- Xilinx Vivado Installation, Licensing
- Vivado / FPGA tutorial (YouTube)
- Real Digital Urbana board: Spartan-7 FPGA educational platform (DDR3, SD, Bluetooth, USB, HDMI, Pmods, etc.)
- Real Digital Boolean board: Spartan-7 FPGA educational platform; works with the free Vivado edition
- Digilent vivado-boards: Board definition files for Digilent FPGA boards (board interfaces, presets, constraints). Copy the `new/board_files` content into the container at `/tools/Xilinx/Vivado/<version>/data/boards` (e.g. `/tools/Xilinx/Vivado/2025.2/data/boards`); once in that directory, boards show up automatically in Vivado IP Integrator and board selection. See Digilent's "Installing Vivado, Vitis, and Digilent Board Files" guide for details.
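For a local checkout of the `vivado-boards` repository, the copy step can be sketched as a small helper; `install_board_files` is a hypothetical name, and the destination layout follows the paths given above:

```shell
# Install Digilent board files from a local vivado-boards checkout into
# a Vivado installation tree so boards appear in IP Integrator.
install_board_files() {
  local src="$1"          # e.g. vivado-boards/new/board_files
  local vivado_dir="$2"   # e.g. /tools/Xilinx/Vivado/2025.2
  mkdir -p "$vivado_dir/data/boards"
  cp -r "$src/." "$vivado_dir/data/boards/"
  echo "Installed board files into $vivado_dir/data/boards"
}
```

Inside a running container you would instead use `docker cp` to place the files at the same destination path.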
For issues or questions:
- Check the troubleshooting section above
- Review Docker build logs
- Verify file permissions and availability
- Check network connectivity for downloads
Maintained by: Vyges Team
Last Updated: August 2025
Vivado Version: 2025.1
Build Status: ✅ Successfully Verified (August 31, 2025)
Tested With: Podman 4.9.3 on Ubuntu 24.04 LTS
Build Time: 5h49m (184GB final image)
# Check current Docker disk usage
docker system df
# Clean up build cache (most aggressive)
docker builder prune -a
# Clean up everything (images, containers, networks, build cache)
docker system prune -a
# Clean up only build cache
docker builder prune
# Clean up only dangling images
docker image prune
# Clean up only stopped containers
docker container prune

Warning: `docker system prune -a` will remove ALL unused images, containers, networks, and build cache. Use with caution!
If you see a large build cache (e.g., 236.8GB), here's how to handle it:
# 1. First, check what's in the build cache
docker system df
# 2. Clean ONLY the build cache (safest option)
docker builder prune
# 3. If you want to be more aggressive with build cache
docker builder prune -a
# 4. For complete cleanup (removes everything unused)
docker system prune -a

Build Cache vs Images:
- Build Cache: Temporary layers from failed or interrupted builds
- Images: Successfully built Docker images
- Dangling Images: Images with `<none>` tags (can be safely removed)
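To avoid reaching for the wrong prune command, a hypothetical helper can map a cleanup level to the matching command from the list above; it only prints the command so it can be reviewed before running:

```shell
# Map a cleanup level to the corresponding Docker prune command.
# Prints the command instead of executing it.
cleanup_cmd() {
  case "$1" in
    cache)     echo "docker builder prune" ;;
    cache-all) echo "docker builder prune -a" ;;
    dangling)  echo "docker image prune" ;;
    stopped)   echo "docker container prune" ;;
    all)       echo "docker system prune -a" ;;
    *) echo "usage: cleanup_cmd {cache|cache-all|dangling|stopped|all}" >&2
       return 1 ;;
  esac
}
```

Example: `eval "$(cleanup_cmd cache)"` runs the safest option after inspection.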