A real-time, interactive 3D volume renderer for medical imaging data (NIFTI), built from first principles using the WebGPU API. This project explores advanced visualization techniques by applying computational geometry and calculus directly on the GPU to enhance anatomical structures.
Live Link: https://webgpu-mri.vercel.app/
3D slicing along the z-axis
Visualising the lateral ventricles and deep cerebral venous plexus.
Transverse section showing the optic chiasm and pathway.
This project was born from a desire to bridge the gap between clinical medicine and cutting-edge computer graphics. As a medical student, I found that standard 2D slice-by-slice viewing of MRI scans was often insufficient for understanding complex 3D anatomical relationships.
This renderer is my solution: a high-performance, web-native tool that provides an intuitive, interactive way to explore volumetric medical data. It moves beyond simple visualization to perform real-time analysis, using the power of the GPU to reveal details that might be missed in a traditional viewer.
The entire rendering pipeline is built from scratch, demonstrating a fundamental understanding of modern GPU architecture, 3D mathematics, and shader programming.
- Real-Time Volume Ray-Marching: Implements a custom ray-marching algorithm in a WGSL shader to render volumetric data interactively in the browser.
- First-Principles WebGPU Engine: Built directly on the WebGPU API without relying on third-party libraries for the core rendering logic. This includes a custom render pipeline manager and resource handling.
- Advanced Geometric Analysis: Performs on-the-fly analysis of the volume data directly on the GPU.
- Gradient-Based Edge Detection: Uses the first derivative (gradient) to highlight boundaries between different tissue types.
- Hessian-Based Curvature Analysis: Uses the second derivative (Hessian) to distinguish between different shapes (e.g., planes, tubes, spheres), allowing for more sophisticated tissue classification and visualization.
- Interactive Transfer Function: A GUI allows for real-time control over the opacity and color mapping of different tissues, enabling dynamic exploration of the data.
- NIFTI File Support: Includes Python scripts and JavaScript helpers to parse, process, and load data from the NIFTI file format, a standard in medical imaging research.
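The core ideas behind the features above can be sketched on the CPU. The snippet below is an illustrative Python sketch only (the actual implementation lives in the WGSL ray-marching shader); all function names and the toy transfer function are assumptions, not code from this repository. It marches a single ray through a volume, estimates the gradient with central differences (the first-derivative edge detection described above), and composites samples front-to-back:

```python
import numpy as np

def gradient(vol, x, y, z):
    """Central-difference gradient (first derivative) at a voxel."""
    return np.array([
        (vol[x + 1, y, z] - vol[x - 1, y, z]) * 0.5,
        (vol[x, y + 1, z] - vol[x, y - 1, z]) * 0.5,
        (vol[x, y, z + 1] - vol[x, y, z - 1]) * 0.5,
    ])

def transfer(intensity):
    """Toy transfer function: intensity -> (rgb, alpha). Hypothetical mapping."""
    alpha = float(np.clip((intensity - 0.2) * 2.0, 0.0, 1.0))
    return np.array([intensity, intensity, intensity]), alpha

def march(vol, origin, direction, step=1.0, n_steps=64):
    """Front-to-back alpha compositing along one ray through the volume."""
    color = np.zeros(3)
    acc_alpha = 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        x, y, z = np.round(pos).astype(int)
        # Stay one voxel inside the bounds so central differences are valid
        if not all(1 <= c < s - 1 for c, s in zip((x, y, z), vol.shape)):
            break
        rgb, alpha = transfer(vol[x, y, z])
        # Gradient magnitude emphasizes tissue boundaries (edge detection)
        g = np.linalg.norm(gradient(vol, x, y, z))
        alpha *= min(1.0, 0.3 + g * 5.0)
        # Front-to-back "over" compositing
        color += (1.0 - acc_alpha) * alpha * rgb
        acc_alpha += (1.0 - acc_alpha) * alpha
        if acc_alpha > 0.99:  # early ray termination
            break
        pos += d * step
    return color, acc_alpha
```

In the real engine this loop runs per pixel on the GPU, and the transfer function is driven interactively from the GUI rather than hard-coded.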
To enhance the diagnostic utility of the renderer, I engineered a deep learning pipeline to automatically isolate brain tissue from non-brain anatomy (skull, eyes, background).
- Model: Custom U-Net Convolutional Neural Network (CNN).
- Training: Built in TensorFlow/Keras, utilizing a Curriculum Learning strategy. The model was first pre-trained on 2D Sobel edge maps to learn structural gradients, then fine-tuned on full 3D volumetric data for spatial coherence.
- Data: Proprietary "Golden Set" manually segmented using ITK-SNAP to ensure high-fidelity ground truth.
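As a sketch of the curriculum's first stage, a 2D Sobel edge map can be derived per slice (shown here with SciPy's `ndimage.sobel`; the exact preprocessing used in the training pipeline may differ, and this function name is hypothetical):

```python
import numpy as np
from scipy import ndimage

def sobel_edge_map(slice2d):
    """Per-slice Sobel gradient magnitude, normalized to [0, 1]."""
    gx = ndimage.sobel(slice2d, axis=0, mode="reflect")  # vertical gradient
    gy = ndimage.sobel(slice2d, axis=1, mode="reflect")  # horizontal gradient
    mag = np.hypot(gx, gy)                               # gradient magnitude
    peak = mag.max()
    return mag / peak if peak > 0 else mag
```

Pre-training on such maps biases the network toward structural gradients before it sees full 3D volumes.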
The custom U-Net model demonstrates robust segmentation capabilities, achieving 97% accuracy on the validation dataset. This high-fidelity segmentation allows for precise isolation of brain tissue, providing a reliable mask for the visualization engine.
Edge detection using a Sobel filter.
Full brain-tissue mask produced after fine-tuning.
Note: The repository currently reflects the integration of the pre-trained model into the visualization engine. The source code for the model architecture and training pipeline is being refactored and will be uploaded shortly.
This project serves as the foundational rendering engine for a much larger goal: creating a true "digital twin" of human anatomy. The vision is to build comprehensive, interactive models that fuse medical imaging with other biological data to enable:
- Surgical Simulation: Allowing surgeons to plan and rehearse complex procedures.
- Biomechanical Modeling: Simulating tissue behavior under different conditions.
- Enhanced Diagnostics: Providing clinicians with a more intuitive and data-rich view of patient anatomy.
- Core Rendering: WebGPU, WGSL (WebGPU Shading Language)
- Frontend & Application Logic: JavaScript (ES6 Modules)
- Build Tooling: Vite
- Matrix Math: wgpu-matrix (or gl-matrix)
- GUI: dat.gui
- Data Pre-processing: Python, NumPy, NiBabel
The repository is organized to separate the core rendering engine from the application logic and shaders, promoting modularity and clarity.
webgpu-mri/
├── public/
│   └── sub-001/              # Contains the NIFTI data assets
├── src/
│   ├── engine/
│   │   ├── engine.js         # Core WebGPU engine setup and state management
│   │   └── renderPipeline.js # Manages the GPU render pipeline and resources
│   ├── shaders/
│   │   └── raytracer.wgsl    # The heart of the project: the volume ray-marching shader
│   ├── utils/
│   │   └── helpers.js        # Utility functions
│   ├── viewer_scripts/
│   │   └── main.py           # Python scripts for NIFTI data pre-processing
│   ├── main.js               # Main application entry point
│   └── style.css             # Application styles
├── README.md
└── package.json
To get a local copy up and running, follow these simple steps.
- Node.js and npm (or yarn) installed.
- A modern web browser with WebGPU support (e.g., Chrome, Edge, Firefox Nightly).
- Clone the repository:
git clone https://github.com/Bahdmanbabzo/webgpu-mri.git
- Navigate to the project directory:
cd webgpu-mri
- Install NPM packages:
npm install
- Run the development server:
npm run dev
- Open your browser and navigate to the local address provided by Vite (usually
http://localhost:5173).
Distributed under the MIT License. See LICENSE for more information.
Oserebameh Beckley
- LinkedIn: linkedin.com/in/oserebameh-beckley
- GitHub: github.com/Bahdmanbabzo



