OpenSportsLib is a modular Python library for sports video understanding.
It provides a unified framework to train, evaluate, and run inference for key temporal understanding tasks in sports video, including:
- Action classification
- Action localization / spotting
- Action retrieval
- Action description / captioning
OpenSportsLib is designed for researchers, ML engineers, and sports analytics teams who want reproducible and extensible workflows for sports video AI. Key features:
- Unified workflow for training and inference
- Modular design for adding new tasks, datasets, and models
- Config-driven experiments for reproducibility
- Support for multiple modalities and sports workflows
- Research-friendly while still usable in applied settings
Useful links:
- Documentation: https://opensportslab.github.io/opensportslib/
- PyPI: https://pypi.org/project/opensportslib/
- Issues: https://github.com/OpenSportsLab/opensportslib/issues
Install the stable release from PyPI:

```shell
pip install opensportslib
```

For the latest pre-release:

```shell
pip install --pre opensportslib
```

With optional extras:

```shell
pip install "opensportslib[localization]"
pip install "opensportslib[py-geometric]" -f https://pytorch-geometric.com/whl/torch-2.10.0+cu128.html
```

Requires Python 3.12+.
OpenSportsLib uses external annotation files, datasets, and pretrained checkpoints.
Public assets are hosted under the OpenSportsLab Hugging Face organization:
https://huggingface.co/OpenSportsLab
Use it as the main entry point to find:
- datasets
- annotation files
- extracted features
- pretrained models and checkpoints
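As a sketch of how those assets are laid out: Hugging Face browse URLs follow the standard `org/name` scheme, with datasets under a `datasets/` prefix. The repo names you pass in are whatever you find on the organization page (the helper below is ours, not part of OpenSportsLib):

```python
def hub_url(repo_name: str, repo_type: str = "dataset") -> str:
    """Build a browse URL under the OpenSportsLab Hugging Face organization."""
    org = "OpenSportsLab"
    prefix = "datasets/" if repo_type == "dataset" else ""
    return f"https://huggingface.co/{prefix}{org}/{repo_name}"
```

For programmatic downloads, `huggingface_hub.snapshot_download` accepts the same `repo_id` (`"OpenSportsLab/<name>"`) and `repo_type` arguments.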
Verify the installation:

```python
import opensportslib
print("OpenSportsLib imported successfully")
```

Train a classification model:

```python
from opensportslib import model

myModel = model.classification(
    config="/path/to/classification.yaml"
)
myModel.train(
    train_set="/path/to/train_annotations.json",
    valid_set="/path/to/valid_annotations.json",
    pretrained="/path/to/pretrained.pt",  # optional
)
```

Run inference and evaluation:

```python
from opensportslib import model

myModel = model.classification(
    config="/path/to/classification.yaml"
)
metrics = myModel.infer(
    test_set="/path/to/test_annotations.json",
    pretrained="/path/to/checkpoints/final_model",
    predictions="/path/to/predictions.json"
)
print(metrics)
```

Other tasks follow the same pattern, e.g. localization:

```python
from opensportslib import model

myModel = model.localization(
    config="/path/to/localization.yaml"
)
```

Task overview:
- Action classification: classify clips or event-centered samples into predefined categories.
- Action localization / spotting: predict when key events happen in long, untrimmed sports videos.
- Action retrieval: search and retrieve relevant clips or moments from a collection of sports videos.
- Action description / captioning: generate text descriptions for sports events and temporal segments.
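The `train_set`/`valid_set`/`test_set` arguments point at JSON annotation files. The exact schema is defined by the documentation and example configs; the structure below is purely illustrative (the field names `video`, `start`, `end`, and `label` are assumptions, not the library's schema):

```python
import json
import os
import tempfile

# Hypothetical clip-level annotations; consult the docs for the real schema.
annotations = [
    {"video": "match_001.mp4", "start": 12.5, "end": 17.0, "label": "goal"},
    {"video": "match_001.mp4", "start": 40.0, "end": 44.5, "label": "corner"},
]

# Write the file the way you would hand it to train()/infer()
path = os.path.join(tempfile.mkdtemp(), "train_annotations.json")
with open(path, "w") as f:
    json.dump(annotations, f, indent=2)

# Sanity-check the round trip before launching a long training run
with open(path) as f:
    loaded = json.load(f)
print(len(loaded))
```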
- Prepare your dataset in the expected format
- Select or create a YAML config
- Initialize the task specific model
- Train on your annotations
- Run inference on new data
- Extend the pipeline with your own datasets or models
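The second step above refers to a YAML config. A hypothetical minimal example is sketched below; every key here is an assumption, and the real schemas live in the example configs shipped with the project:

```yaml
# Hypothetical classification config; see examples/configs/ for real ones.
task: classification
model:
  backbone: resnet50
  num_classes: 17
training:
  epochs: 50
  batch_size: 32
  learning_rate: 1.0e-4
```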
Use this README for a fast start, then go deeper through:
- Full documentation: https://opensportslab.github.io/opensportslib/
- Example configs: examples/configs/
- Quickstart scripts: examples/quickstart/
- Contribution guide: CONTRIBUTING.md
- Developer guide: DEVELOPERS.md
For contributors who want to work from source:
git clone https://github.com/OpenSportsLab/opensportslib.git
cd opensportslib
```shell
pip install -e .
```

With optional extras:

```shell
pip install -e ".[localization]"
pip install -e ".[py-geometric]" -f https://pytorch-geometric.com/whl/torch-2.10.0+cu128.html
```

If you prefer conda:
```shell
conda create -n osl python=3.12 pip
conda activate osl
pip install -e .
```

When contributing:
- Make sure you are branching from `dev`
- Create your feature or fix branch from `dev`
- Open a pull request back into `dev`
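The `dev`-based branch flow can be sketched end to end. The commands below run in a throwaway repository so they are safe to try; the branch name `feature/my-fix` is an example:

```shell
# Set up a throwaway repo with a dev branch
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -q -b dev
git -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m "init dev"

# Create a feature branch from dev; the PR later targets dev
git checkout -q -b feature/my-fix
git branch --show-current
```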
We welcome contributions to OpenSportsLib.
Please check CONTRIBUTING.md and DEVELOPERS.md. These documents describe:
- how to add models and datasets
- coding standards
- training pipeline structure
- how to run and test the framework
OpenSportsLib is available under dual licensing:
- AGPL-3.0 for research, academic, and community use.
- For proprietary or commercial deployment, please refer to LICENSE-COMMERCIAL.
If you use OpenSportsLib in your research, please cite the project.
```bibtex
@misc{opensportslib,
  title={OpenSportsLib},
  author={OpenSportsLab},
  year={2026},
  howpublished={\url{https://github.com/OpenSportsLab/opensportslib}}
}
```

OpenSportsLib is developed within the broader OpenSportsLab effort for sports video understanding.