Docling-Graph turns documents into validated Pydantic objects, then builds a directed knowledge graph with explicit semantic relationships.
This transformation enables high-precision use cases in chemistry, finance, and legal domains, where AI must capture exact entity connections (compounds and reactions, instruments and dependencies, properties and measurements) rather than rely on approximate text embeddings.
This toolkit supports two extraction paths: local VLM extraction via Docling, and LLM-based extraction routed through LiteLLM for local runtimes (vLLM, Ollama) and API providers (Mistral, OpenAI, Gemini, IBM WatsonX), all orchestrated through a flexible, config-driven pipeline.
- **Input Formats**: Ingest PDFs, images, DoclingDocument, Markdown, URLs, and more.
- **Data Extraction**: Extract structured data using a VLM or LLM, with intelligent chunking and flexible processing modes.
- **Graph Construction**: Convert validated Pydantic models into NetworkX directed graphs with semantic relationships, stable node IDs, and rich edge metadata.
- **Export**: Save graphs in multiple formats, including Neo4j-compatible CSV and Cypher for bulk import.
- **Visualization**: Explore graphs with interactive HTML pages and detailed Markdown reports.
- **Batch Optimization**: Provider-specific batching with real tokenizers and improved GPU utilization for faster inference and better memory handling.
- **LiteLLM Abstraction**: A single interface to local and remote LLM providers (vLLM, Mistral, OpenAI, WatsonX, etc.) via LiteLLM.
- **Trace Capture**: Comprehensive debug data via `TraceData`, capturing pages, chunks, and intermediate schemas and graphs.
- **Multi-Stage Extraction**: Define `extraction_stage` in templates to control multi-pass extraction.
- **Interactive Template Builder**: Guided workflows for building Pydantic templates.
- **Ontology-Based Templates**: Match content to the best Pydantic template using semantic similarity.
- **External OCR Engine**: Pass a custom OCR engine URL to convert documents before graph creation.
- **Graph Database Integration**: Export data straight into Neo4j, ArangoDB, and similar databases.
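To make the export features concrete, here is a minimal sketch of how labeled nodes and typed edges can be serialized to Cypher `CREATE` statements for bulk import. The data shapes and the `to_cypher` helper below are illustrative assumptions, not docling-graph's actual export API:

```python
# Illustrative node/edge shapes: (id, label, properties) and (src, dst, type).
# These mirror a directed graph with stable node IDs and typed edges, but are
# NOT docling-graph's internal representation.
nodes = [
    ("person:doe-1980", "Person", {"last_name": "Doe"}),
    ("org:acme", "Organization", {"name": "Acme"}),
]
edges = [("org:acme", "person:doe-1980", "EMPLOYS")]

def to_cypher(nodes, edges):
    """Serialize labeled nodes and typed edges to Cypher statements."""
    stmts = []
    for node_id, label, props in nodes:
        fields = {"id": node_id, **props}
        prop_str = ", ".join(f"{k}: {v!r}" for k, v in fields.items())
        stmts.append(f"CREATE (:{label} {{{prop_str}}})")
    for src, dst, rel in edges:
        stmts.append(
            f"MATCH (a {{id: {src!r}}}), (b {{id: {dst!r}}}) "
            f"CREATE (a)-[:{rel}]->(b)"
        )
    return stmts

for stmt in to_cypher(nodes, edges):
    print(stmt)
```

The same node/edge data also maps naturally onto the CSV bulk-import format, where nodes and relationships go into separate files keyed by the stable IDs.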
- Python 3.10 or higher
- uv package manager
```bash
# Clone the repository
git clone https://github.com/IBM/docling-graph
cd docling-graph

# Install with uv
uv sync  # Core + LiteLLM + VLM
```

For detailed installation instructions, see the Installation Guide.
```bash
export OPENAI_API_KEY="..."      # OpenAI
export MISTRAL_API_KEY="..."     # Mistral
export GEMINI_API_KEY="..."      # Google Gemini

# IBM WatsonX
export WATSONX_API_KEY="..."     # IBM WatsonX API key
export WATSONX_PROJECT_ID="..."  # IBM WatsonX project ID
export WATSONX_URL="..."         # IBM WatsonX URL (optional)
```

```bash
# Initialize configuration
uv run docling-graph init
```
```bash
# Convert a document from a URL
uv run docling-graph convert "https://arxiv.org/pdf/2207.02720" \
  --template "docs.examples.templates.rheology_research.ScholarlyRheologyPaper" \
  --processing-mode "many-to-one"

# Visualize results
uv run docling-graph inspect outputs
```

```python
from docling_graph import run_pipeline, PipelineContext
from docs.examples.templates.rheology_research import ScholarlyRheologyPaper

# Create configuration
config = {
    "source": "https://arxiv.org/pdf/2207.02720",
    "template": ScholarlyRheologyPaper,
    "backend": "llm",
    "inference": "remote",
    "processing_mode": "many-to-one",
    "provider_override": "mistral",
    "model_override": "mistral-medium-latest",
    "use_chunking": True,
}

# Run the pipeline - returns data directly, no files written to disk
context: PipelineContext = run_pipeline(config)

# Access results
graph = context.knowledge_graph
models = context.extracted_models
metadata = context.graph_metadata

print(f"Extracted {len(models)} model(s)")
print(f"Graph: {graph.number_of_nodes()} nodes, {graph.number_of_edges()} edges")
```

For debugging, use `--debug` with the CLI to save intermediate artifacts to disk; see Trace Data & Debugging. For more examples, see Examples.
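Because the resulting knowledge graph is a NetworkX directed graph, standard NetworkX traversal applies to it. A minimal sketch of inspecting such a graph — the node IDs and attribute names below are illustrative assumptions, not docling-graph's exact schema:

```python
import networkx as nx

# Stand-in for a pipeline-built knowledge graph (attribute names assumed).
graph = nx.DiGraph()
graph.add_node("org:acme", label="Organization", name="Acme")
graph.add_node("person:doe-1980", label="Person", last_name="Doe")
graph.add_edge("org:acme", "person:doe-1980", type="EMPLOYS")

# Walk typed edges with their metadata.
for src, dst, data in graph.edges(data=True):
    print(f"{src} -[{data.get('type')}]-> {dst}")

# Filter nodes by label to pull out one entity type.
people = [n for n, d in graph.nodes(data=True) if d.get("label") == "Person"]
```

Any NetworkX algorithm (shortest paths, subgraph extraction, centrality) can be applied to the graph the same way.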
Templates define both the extraction schema and the resulting graph structure.
```python
from pydantic import BaseModel, Field
from docling_graph.utils import edge

class Person(BaseModel):
    """Person entity with a stable ID."""
    model_config = {
        'is_entity': True,
        'graph_id_fields': ['last_name', 'date_of_birth']
    }

    first_name: str = Field(description="Person's first name")
    last_name: str = Field(description="Person's last name")
    date_of_birth: str = Field(description="Date of birth (YYYY-MM-DD)")

class Organization(BaseModel):
    """Organization entity."""
    model_config = {'is_entity': True}

    name: str = Field(description="Organization name")
    employees: list[Person] = edge("EMPLOYS", description="List of employees")
```

For complete guidance, see:
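A template can be sanity-checked before running the pipeline by instantiating it directly with Pydantic. The sketch below uses a standalone copy of the entities above, without docling-graph's `edge()` helper, and made-up sample data:

```python
from pydantic import BaseModel

# Standalone copy of the template entities for validation only; the
# sample data is invented for illustration.
class Person(BaseModel):
    model_config = {'is_entity': True, 'graph_id_fields': ['last_name', 'date_of_birth']}
    first_name: str
    last_name: str
    date_of_birth: str

class Organization(BaseModel):
    model_config = {'is_entity': True}
    name: str
    employees: list[Person] = []

org = Organization(
    name="Acme",
    employees=[{"first_name": "Jane", "last_name": "Doe", "date_of_birth": "1980-01-02"}],
)
# Nested dicts are validated and coerced into Person instances.
print(org.employees[0].last_name)
```

If a field is missing or has the wrong type, Pydantic raises a `ValidationError` at construction time, which catches schema mistakes early.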
Comprehensive documentation can be found on the Docling Graph documentation page.
The documentation follows the docling-graph pipeline stages:
- Introduction - Overview and core concepts
- Installation - Setup and environment configuration
- Schema Definition - Creating Pydantic templates
- Pipeline Configuration - Configuring the extraction pipeline
- Extraction Process - Document conversion and extraction
- Graph Management - Exporting and visualizing graphs
- CLI Reference - Command-line interface guide
- Python API - Programmatic usage
- Examples - Working code examples
- Advanced Topics - Performance, testing, error handling
- API Reference - Detailed API documentation
- Community - Contributing and development guide
We welcome contributions! Please see:
- Contributing Guidelines - How to contribute
- Development Guide - Development setup
- GitHub Workflow - Branch strategy and CI/CD
```bash
# Clone and set up
git clone https://github.com/IBM/docling-graph
cd docling-graph

# Install with dev dependencies
uv sync --extra dev

# Run pre-commit checks
uv run pre-commit run --all-files
```

MIT License - see LICENSE for details.
- Powered by Docling for advanced document processing
- Uses Pydantic for data validation
- Graph generation powered by NetworkX
- Visualizations powered by Cytoscape.js
- CLI powered by Typer and Rich
Docling Graph has been brought to you by IBM.