A high-throughput and memory-efficient inference and serving engine for LLMs
Python · 65.7k stars · 12k forks
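A minimal offline-inference sketch using vLLM's documented Python entry points (`LLM` and `SamplingParams`); the model name is just an example:

```python
from vllm import LLM, SamplingParams

# Load a model and generate completions in one process; the engine
# handles request batching and KV-cache paging internally.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)
```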
A Transformers-compatible library for applying compression algorithms such as quantization and sparsity to LLMs, producing checkpoints optimized for deployment with vLLM
Python · 2.4k stars · 323 forks
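A sketch of the library's one-shot quantization flow; import paths and argument names have shifted between releases, so treat this as illustrative rather than exact:

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# Quantize linear layers to 4-bit weights with GPTQ, calibrating on a
# small dataset; the resulting checkpoint loads directly in vLLM.
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"]),
    max_seq_length=2048,
    num_calibration_samples=512,
)
```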
Common recipes to run vLLM
Jupyter Notebook · 282 stars · 103 forks
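One of the most common recipes is serving a model through vLLM's OpenAI-compatible HTTP API and querying it with the standard `openai` client (the model name and port below are examples):

```python
# Start a server first, e.g.:  vllm serve Qwen/Qwen2.5-1.5B-Instruct
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
)
print(resp.choices[0].message.content)
```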
A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM
Python · 166 stars · 22 forks
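For intuition, here is a toy draft-then-verify loop showing the idea behind speculative decoding. Both "models" are stand-ins rather than the library's API, and a real engine verifies all draft tokens in a single target-model forward pass instead of one call per token:

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]

def draft_model(ctx, k):
    # Cheap proposer: guesses k tokens ahead.
    return [random.choice(VOCAB) for _ in range(k)]

def target_model(ctx):
    # Deterministic stand-in for one greedy step of the expensive model.
    return VOCAB[hash(tuple(ctx)) % len(VOCAB)]

def speculative_step(ctx, k=4):
    accepted = []
    for tok in draft_model(ctx, k):
        if target_model(ctx + accepted) == tok:
            accepted.append(tok)  # draft guess verified, keep going
        else:
            accepted.append(target_model(ctx + accepted))  # correct and stop
            break
    return accepted

ctx = ["the"]
for _ in range(5):
    ctx += speculative_step(ctx)
print(" ".join(ctx))
```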
An intelligent request router for mixture-of-models deployments
Go · 2.5k stars · 326 forks
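The router itself is written in Go; the following conceptual Python sketch (hypothetical model names, keyword matching standing in for real semantic classification) shows the basic control flow of routing a request to the best backend:

```python
ROUTES = {"code": "coder-model", "math": "math-model"}
DEFAULT = "general-model"

def route(prompt: str) -> str:
    # A production router classifies the request semantically (e.g. with
    # an embedding model); keywords suffice to illustrate the idea.
    text = prompt.lower()
    if any(w in text for w in ("def ", "compile", "stack trace", "bug")):
        return ROUTES["code"]
    if any(w in text for w in ("integral", "prove", "equation")):
        return ROUTES["math"]
    return DEFAULT

print(route("Fix this bug in my parser"))  # -> coder-model
```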
Community-maintained hardware plugin for vLLM on Apple Silicon
TPU inference for vLLM, with unified JAX and PyTorch support.
Community-maintained hardware plugin for vLLM on Ascend
Community-maintained hardware plugin for vLLM on Spyre
Community-maintained hardware plugin for vLLM on Intel Gaudi
A framework for efficient inference with omni-modal models
Code for vLLM's CI and performance benchmark infrastructure
vLLM XPU kernels for Intel GPUs