4 changes: 2 additions & 2 deletions .devcontainer/ollama/devcontainer.json
@@ -9,10 +9,10 @@
"ghcr.io/devcontainers/features/docker-in-docker:2": {},
"ghcr.io/azure/azure-dev/azd:latest": {},
"ghcr.io/prulloac/devcontainer-features/ollama:1": {
"pull": "qwen3.5:9b"
"pull": "gemma4:e2b"
}
},
"postCreateCommand": "uv sync && cp .env.sample.ollama .env",
"postCreateCommand": "uv sync && uv run prek install && cp .env.sample.ollama .env",
"forwardPorts": [6277, 6274],
"portsAttributes": {
"6277": {
2 changes: 1 addition & 1 deletion .env-sample
@@ -7,7 +7,7 @@ AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_CHAT_DEPLOYMENT=your-deployment-name

# Ollama Configuration
- OLLAMA_MODEL=qwen3.5:9b
+ OLLAMA_MODEL=gemma4:e2b
OLLAMA_ENDPOINT=http://localhost:11434/v1
OLLAMA_API_KEY=no-key-needed

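The `OLLAMA_*` variables above are consumed by the agents through an OpenAI-compatible client. A minimal stdlib-only sketch of the lookup pattern this repo uses (the `load_ollama_config` helper is hypothetical; the defaults mirror the sample file):

```python
import os


def load_ollama_config() -> dict:
    """Read the Ollama settings, falling back to the sample-file defaults."""
    return {
        "model": os.environ.get("OLLAMA_MODEL", "gemma4:e2b"),
        "endpoint": os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434/v1"),
        # Ollama ignores the key, but OpenAI-compatible clients require one.
        "api_key": os.environ.get("OLLAMA_API_KEY", "no-key-needed"),
    }


config = load_ollama_config()
print(config["model"])
```

With the `.env` copied by the Codespace setup, each value is overridden by the environment; otherwise the defaults keep the agents pointed at a local Ollama server.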
2 changes: 1 addition & 1 deletion .env.sample.ollama
@@ -2,7 +2,7 @@
API_HOST=ollama

# Ollama Configuration
- OLLAMA_MODEL=qwen3.5:9b
+ OLLAMA_MODEL=gemma4:e2b
OLLAMA_ENDPOINT=http://localhost:11434/v1
OLLAMA_API_KEY=no-key-needed

2 changes: 1 addition & 1 deletion README.md
@@ -36,7 +36,7 @@ You can run this project virtually by using GitHub Codespaces. Click one of the

[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Azure-Samples/python-mcp-demos?devcontainer_path=.devcontainer/ollama/devcontainer.json)

- The Ollama Codespace pre-installs Ollama and pulls the `qwen3.5:9b` model, and copies `.env.sample.ollama` as your `.env` file. Note that the 64GB memory requirement will consume your Codespace quota faster.
+ The Ollama Codespace pre-installs Ollama and pulls the `gemma4:e2b` model, and copies `.env.sample.ollama` as your `.env` file. Note that the 64GB memory requirement will consume your Codespace quota faster.

Once the Codespace is open, open a terminal window and continue with the deployment steps.

2 changes: 1 addition & 1 deletion agents/agentframework_http.py
@@ -39,7 +39,7 @@
client = OpenAIResponsesClient(
base_url=os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434/v1"),
api_key=os.getenv("OLLAMA_API_KEY", "no-key-needed"),
model_id=os.environ.get("OLLAMA_MODEL", "qwen3.5:9b"),
model_id=os.environ.get("OLLAMA_MODEL", "gemma4:e2b"),
)
elif API_HOST == "openai":
client = OpenAIResponsesClient(
4 changes: 2 additions & 2 deletions agents/agentframework_learn.py
@@ -32,7 +32,7 @@
client = OpenAIResponsesClient(
base_url=os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434/v1"),
api_key=os.getenv("OLLAMA_API_KEY", "no-key-needed"),
model_id=os.environ.get("OLLAMA_MODEL", "qwen3.5:9b"),
model_id=os.environ.get("OLLAMA_MODEL", "gemma4:e2b"),
)
elif API_HOST == "openai":
client = OpenAIResponsesClient(
@@ -58,7 +58,7 @@ async def http_mcp_example() -> None:
tools=[mcp_server],
) as agent,
):
query = "How to create an Azure storage account using az cli?"
query = "What are the available hosting options for a Python web app on Azure? Compare them briefly."
result = await agent.run(query)
print(result.text)

4 changes: 2 additions & 2 deletions agents/langchainv1_github.py
@@ -36,7 +36,7 @@
)
elif API_HOST == "ollama":
model = ChatOpenAI(
model=os.environ.get("OLLAMA_MODEL", "qwen3.5:9b"),
model=os.environ.get("OLLAMA_MODEL", "gemma4:e2b"),
base_url=os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434/v1"),
api_key=SecretStr(os.getenv("OLLAMA_API_KEY", "no-key-needed")),
use_responses_api=True,
@@ -92,7 +92,7 @@ async def main():
agent = create_agent(
model,
tools=filtered_tools,
prompt="You help users research GitHub repositories. Search and analyze information.",
system_prompt="You help users research GitHub repositories. Search and analyze information.",
)

query = "Make a list of last 5 issues from the 'PrefectHQ/FastMCP' repository that discuss auth."
2 changes: 1 addition & 1 deletion agents/langchainv1_http.py
@@ -38,7 +38,7 @@
)
elif API_HOST == "ollama":
base_model = ChatOpenAI(
model=os.environ.get("OLLAMA_MODEL", "qwen3.5:9b"),
model=os.environ.get("OLLAMA_MODEL", "gemma4:e2b"),
base_url=os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434/v1"),
api_key=SecretStr(os.getenv("OLLAMA_API_KEY", "no-key-needed")),
use_responses_api=True,
8 changes: 4 additions & 4 deletions pyproject.toml
@@ -7,14 +7,14 @@ requires-python = "==3.13.*"
dependencies = [
"fastmcp>=3.0.0",
"debugpy>=1.8.0",
"langchain-core>=0.3.0",
"langchain-core>=1.2.26",
"mcp>=1.3.0",
"azure-identity>=1.25.1",
"msgraph-sdk>=1.0.0",
"dotenv-azd>=0.1.0",
"langchain>=1.0.0",
"langchain-openai>=1.0.1",
"langchain-mcp-adapters>=0.1.11",
"langchain>=1.2.15",
"langchain-openai>=1.1.12",
"langchain-mcp-adapters>=0.2.2",
"azure-ai-agents>=1.1.0",
"agent-framework-core==1.0.0rc5",
"azure-cosmos>=4.9.0",
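After raising these minimum bounds, it can be useful to confirm that the resolved environment actually satisfies them. A stdlib-only sketch (the `check_minimums` helper is hypothetical; bounds are taken from the hunk above, and the naive tuple compare handles plain `X.Y.Z` versions only — use the `packaging` library for pre-releases like `1.0.0rc5`):

```python
from importlib import metadata

# Minimum versions from the updated pyproject.toml dependency list.
MINIMUMS = {
    "langchain-core": "1.2.26",
    "langchain": "1.2.15",
    "langchain-openai": "1.1.12",
    "langchain-mcp-adapters": "0.2.2",
}


def check_minimums(minimums: dict[str, str]) -> dict[str, str]:
    """Report 'ok', 'too old', or 'missing' for each pinned package."""

    def as_tuple(version: str) -> tuple[int, ...]:
        # Numeric-only compare; non-numeric segments are ignored.
        return tuple(int(part) for part in version.split(".") if part.isdigit())

    report = {}
    for name, floor in minimums.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            report[name] = "missing"
            continue
        report[name] = "ok" if as_tuple(installed) >= as_tuple(floor) else "too old"
    return report


print(check_minimums(MINIMUMS))
```

In a freshly synced checkout, `uv sync` against the updated lockfile should make every entry report `ok`.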
2 changes: 1 addition & 1 deletion spanish/README.md
@@ -35,7 +35,7 @@ Puedes ejecutar este proyecto de forma virtual usando GitHub Codespaces. Haz cli

[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Azure-Samples/python-mcp-demos?devcontainer_path=.devcontainer/ollama/devcontainer.json)

- El Codespace de Ollama pre-instala Ollama y descarga el modelo `qwen3.5:9b`, y copia `.env.sample.ollama` como tu archivo `.env`. Ten en cuenta que el requisito de 64GB de memoria consumirá tu cuota de Codespace más rápido.
+ El Codespace de Ollama pre-instala Ollama y descarga el modelo `gemma4:e2b`, y copia `.env.sample.ollama` como tu archivo `.env`. Ten en cuenta que el requisito de 64GB de memoria consumirá tu cuota de Codespace más rápido.

Una vez abierto el Codespace, abre una terminal y continúa con los pasos de despliegue.

32 changes: 16 additions & 16 deletions uv.lock
