While 3D Gaussian Splatting (3DGS) has revolutionized novel view synthesis, its performance is often bottlenecked by heuristic densification strategies: fixed rules frequently lead to suboptimal Gaussian distributions and noticeable artifacts in under-reconstructed regions. State-GS is an adaptive Gaussian optimization framework driven by state-aware feedback. To overcome the limitations of traditional densification, our method establishes a closed-loop system that autonomously evaluates, schedules, and recycles Gaussian primitives. By dynamically relocating non-contributing Gaussians to increase local sampling density, State-GS rapidly breaks through reconstruction blind spots, effectively mitigating under-reconstruction artifacts and accelerating convergence. This feedback regulation mechanism also lets our approach autonomously determine the appropriate number of Gaussians.
- 🔄 Closed-Loop Feedback Optimization: We replace fixed heuristic rules with an autonomous, closed-loop feedback framework that refines the densification process, ensuring robust convergence toward high-fidelity representations.
- 📊 Fine-Grained State Perception: Our mechanism categorizes Gaussian primitives into four distinct states—Converged, Active, Exploratory, and Redundant—utilizing positional gradients and opacity as real-time feedback signals.
- 🎯 Dynamic Regulation & Recycling: The system proactively identifies ineffective Gaussians and relocates them to hard-to-reconstruct regions. By increasing local sampling density to rapidly break through blind spots, this adaptive recycling enhances scene integrity, fine-detail reconstruction, and optimizes the overall primitive count.
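To make the state perception concrete, here is a minimal illustrative sketch of how primitives might be bucketed into the four states from the two feedback signals. The threshold names and values (`grad_hi`, `opacity_lo`) are hypothetical placeholders, not the criteria used in the paper:

```python
# Illustrative sketch of state-aware classification (hypothetical thresholds;
# the actual criteria used by State-GS may differ).
def classify_gaussian(pos_grad: float, opacity: float,
                      grad_hi: float = 2e-4, opacity_lo: float = 0.005) -> str:
    """Bucket a Gaussian primitive from its positional-gradient magnitude
    and opacity, the two real-time feedback signals described above."""
    if opacity < opacity_lo:
        return "Redundant"      # barely contributes -> candidate for recycling
    if pos_grad >= grad_hi and opacity < 0.5:
        return "Exploratory"    # large gradient, weak opacity: still searching
    if pos_grad >= grad_hi:
        return "Active"         # large gradient: still being optimized
    return "Converged"          # small gradient, stable opacity

# Redundant primitives would then be relocated to under-reconstructed regions.
states = [classify_gaussian(g, o) for g, o in
          [(1e-5, 0.9), (5e-4, 0.9), (5e-4, 0.1), (1e-4, 0.001)]]
print(states)  # ['Converged', 'Active', 'Exploratory', 'Redundant']
```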
- CUDA: 12.9
- Python: 3.10
- PyTorch: 2.8.0+cu128
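Before building the CUDA extensions, you can sanity-check your environment with a short script (illustrative only; the version numbers follow the list above):

```python
import sys

# Quick environment sanity check (versions follow the list above).
def check_python(min_version=(3, 10)) -> bool:
    return sys.version_info >= min_version

print("Python OK:", check_python())
try:
    import torch  # installed via requirements.txt
    print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed yet -- run the pip commands below first")
```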
```shell
pip install -r requirements.txt
pip install lib3rd/simple-knn
pip install lib3rd/fused-ssim
pip install lib3rd/diff-gaussian-rasterization
```

Depending on your data source, please format your directories as follows:
If you collected your own images and need to run COLMAP first, place them in an input folder inside your dataset directory:
```
<dataset_location>
└── input
    ├── <image_0>
    ├── <image_1>
    └── ...
```
If you already have a calibrated dataset (e.g., processed via COLMAP), structure it like this:
```
<dataset_location>
├── images
│   ├── <image_0>
│   ├── <image_1>
│   └── ...
└── sparse
    └── 0
        ├── cameras.bin
        ├── images.bin
        └── points3D.bin
```
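As a convenience, the calibrated layout above can be checked with a short script before launching training (an illustrative helper, not part of the repository):

```python
from pathlib import Path

# Expected entries of a calibrated (COLMAP-processed) dataset, per the tree above.
REQUIRED = ["images", "sparse/0/cameras.bin", "sparse/0/images.bin", "sparse/0/points3D.bin"]

def missing_entries(dataset_dir: str) -> list[str]:
    """Return the relative paths that are absent from the dataset directory."""
    root = Path(dataset_dir)
    return [rel for rel in REQUIRED if not (root / rel).exists()]

# Example: report anything missing before launching training.
problems = missing_entries("<dataset_location>")
if problems:
    print("Missing:", problems)
```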
You can run the pipeline using either run.py (for single datasets) or run_all_dataset.py (for batch processing).
```shell
python run.py -s <path_to_dataset> [options]
python run_all_dataset.py [options]
```

| Argument | Description |
|---|---|
| `-s` | Path to the source dataset (either a pre-processed COLMAP directory or custom raw images). |
| `-i` | Specifies the target GPU ID for computation. |
| `-c` | Executes COLMAP to extract camera poses and generate a sparse point cloud (custom datasets only). |
| `-g` | Initiates the Gaussian model training process. |
| `-m` | Evaluates the trained Gaussian model. |
💡 Tip: Advanced hyperparameters and detailed training configurations can be customized within the `args/` directory.
For a Single Custom Dataset:

```shell
# 1. Run COLMAP to extract camera poses
python run.py -s <path_to_dataset> -c
# 2. Train the Gaussian model
python run.py -s <path_to_dataset> -g
# 3. Evaluate the trained model
python run.py -s <path_to_dataset> -m
```

For Standard Benchmark Datasets (e.g., Tanks & Temples, Deep Blending, Mip-NeRF360):
```shell
# Train on all datasets
python run_all_dataset.py -g
# Evaluate on all datasets
python run_all_dataset.py -m
```

Our work is partially based on the following outstanding open-source projects:
We appreciate their contributions to the community.
If you find this project useful for your research or work, please consider citing our paper:
@article{state-gs,
title={},
author={},
journal={},
year={}
}