UNS-480 [FEAT] Add Gemini LLM adapter for Google AI Studio #1890
jaseemjaskp wants to merge 4 commits into main
Conversation
Add a new LLM adapter for Google's Gemini models using LiteLLM's gemini/ provider prefix. The adapter follows the established SDK adapter pattern and is auto-discovered by register_adapters().
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info: Configuration used: Organization UI · Review profile: CHILL · Plan: Pro
📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (1)
Summary by CodeRabbit
Walkthrough: Adds a Gemini LLM integration: a new Pydantic parameter model (…)
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
| Filename | Overview |
|---|---|
| unstract/sdk1/src/unstract/sdk1/adapters/base1.py | Adds GeminiLLMParameters class with safe copy-on-write validate(), idempotent gemini/ prefix logic, and ValueError guard for empty model — correctly placed between AnthropicLLMParameters and AnyscaleLLMParameters. |
| unstract/sdk1/src/unstract/sdk1/adapters/llm1/gemini.py | New GeminiLLMAdapter following the Anthropic adapter pattern exactly; all required abstract methods implemented; get_json_schema() inherited from BaseAdapter will resolve to llm1/static/gemini.json correctly via get_provider() == "gemini". |
| unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/gemini.json | JSON schema for UI form; model default is now bare "gemini-2.0-flash" (without prefix), consistent with the description telling users the prefix is added automatically; temperature max of 2 matches BaseChatCompletionParameters constraint. |
| frontend/public/icons/adapter-icons/Gemini.png | Binary asset — official Gemini sparkle icon added; path matches the string returned by GeminiLLMAdapter.get_icon(). |
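Based on the file overviews above, the new adapter's shape can be sketched as follows. This is an illustrative approximation only, not the actual `llm1/gemini.py`: the ID value, icon path string, and the `get_json_schema_path` helper name are assumptions; only `get_provider()` returning `"gemini"` and the `llm1/static/gemini.json` resolution are stated in the review summary.

```python
class GeminiLLMAdapter:
    """Hedged sketch of the adapter described in the review table."""

    @staticmethod
    def get_id() -> str:
        return "gemini|assumed-id"  # hypothetical ID format

    @staticmethod
    def get_provider() -> str:
        # Per the review, BaseAdapter.get_json_schema() resolves the schema
        # file from this value: llm1/static/<provider>.json
        return "gemini"

    @staticmethod
    def get_icon() -> str:
        # Assumed to mirror frontend/public/icons/adapter-icons/Gemini.png
        return "/icons/adapter-icons/Gemini.png"

    @classmethod
    def get_json_schema_path(cls) -> str:
        # Hypothetical helper showing how provider -> schema path resolves.
        return f"llm1/static/{cls.get_provider()}.json"
```

With `get_provider()` returning `"gemini"`, the schema lookup lands on `llm1/static/gemini.json`, which matches the file added in this PR.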
Sequence Diagram
```mermaid
sequenceDiagram
    participant UI as Frontend UI
    participant RA as register_adapters()
    participant GA as GeminiLLMAdapter
    participant GP as GeminiLLMParameters
    participant LL as LiteLLM (gemini/ provider)
    UI->>RA: discover adapters in llm1/
    RA->>GA: import gemini.py, call get_id() + get_metadata()
    RA-->>UI: adapter registered
    UI->>GA: get_json_schema()
    GA-->>UI: llm1/static/gemini.json (rendered form)
    UI->>GP: validate(adapter_metadata)
    GP->>GP: copy metadata, validate_model()
    GP->>GP: prepend gemini/ prefix if absent
    GP->>GP: GeminiLLMParameters(**result_metadata).model_dump()
    GP-->>UI: validated params dict
    UI->>LL: completion(model="gemini/gemini-2.0-flash", ...)
    LL-->>UI: LLM response
```
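The validate-then-complete flow in the diagram can be sketched end to end. This is a hedged stand-in: `validate` approximates the copy-on-write, idempotent-prefix behavior described above, and `completion` here is a local stub, not `litellm.completion`.

```python
from copy import deepcopy


def validate(adapter_metadata: dict) -> dict:
    # Copy-on-write: the caller's metadata dict is never mutated.
    result = deepcopy(adapter_metadata)
    model = (result.get("model") or "").strip()
    if model and not model.startswith("gemini/"):
        result["model"] = f"gemini/{model}"  # idempotent prefixing
    return result


def completion(model: str, messages: list) -> dict:
    # Stub standing in for the LiteLLM call; returns a canned response.
    return {"model": model, "choices": [{"message": {"content": "ok"}}]}


params = validate({"model": "gemini-2.0-flash", "temperature": 0.2})
resp = completion(
    model=params["model"],
    messages=[{"role": "user", "content": "hi"}],
)
```

Running `validate` a second time on its own output leaves the model string unchanged, which is the idempotence the review calls out.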
Reviews (4): Last reviewed commit: "Merge branch 'main' into feature/UNS-480..."
Actionable comments posted: 1
🧹 Nitpick comments (1)
unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/gemini.json (1)
Lines 33-53: Prefer `integer` for discrete numeric inputs.
`max_tokens`, `timeout`, and `max_retries` are discrete/count-like settings. Using `integer` is clearer and prevents fractional inputs from slipping through lenient validators/clients. Proposed schema tightening:
```diff
 "max_tokens": {
-  "type": "number",
+  "type": "integer",
   "minimum": 0,
-  "multipleOf": 1,
   "title": "Maximum Output Tokens",
   "description": "Maximum number of output tokens to limit LLM replies, the maximum possible differs from model to model."
 },
 "timeout": {
-  "type": "number",
+  "type": "integer",
   "minimum": 0,
-  "multipleOf": 1,
   "title": "Timeout",
   "default": 600,
   "description": "Timeout in seconds"
 },
 "max_retries": {
-  "type": "number",
+  "type": "integer",
   "minimum": 0,
-  "multipleOf": 1,
   "title": "Max Retries",
   "description": "Maximum number of retries"
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/gemini.json` around lines 33 - 53, The schema uses "type": "number" for discrete/count fields; change the types for max_tokens, timeout, and max_retries to "integer" (keeping their existing constraints like minimum, multipleOf, default, titles and descriptions) so validators/clients reject fractional values—update the entries for the keys "max_tokens", "timeout", and "max_retries" in the Gemini schema accordingly.
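To see why the reviewer prefers `integer`, a toy comparison helps. These two checks only mimic how a lenient client might treat the two schema types; real JSON Schema validators have their own coercion rules:

```python
def passes_number_multiple_of_one(value) -> bool:
    # Approximates "type": "number" with "multipleOf": 1 — a float like
    # 600.0 is accepted as long as it has no fractional part.
    return (
        isinstance(value, (int, float))
        and not isinstance(value, bool)
        and value % 1 == 0
    )


def passes_strict_integer(value) -> bool:
    # Approximates a strict "type": "integer" check that rejects floats
    # outright, making the discrete/count intent explicit.
    return isinstance(value, int) and not isinstance(value, bool)
```

Under the lenient check, `600.0` sails through for a retry count; the strict integer check rejects it, which is the tightening the comment proposes.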
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@unstract/sdk1/src/unstract/sdk1/adapters/base1.py`:
- Around line 653-658: The validate_model function should fail fast on missing
or blank model values instead of returning "gemini/"; update
validate_model(adapter_metadata) to read the raw model, strip whitespace, and if
it's empty or missing raise a ValueError (or appropriate ValidationError) with a
clear message; otherwise preserve the existing prefixing logic (if model already
startswith "gemini/" return it, else return "gemini/{model}"). Ensure the check
uses adapter_metadata.get("model") and applies .strip() before prefixing so
blank strings are rejected.
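Applied literally, the suggested change could look like the sketch below. The function body follows the prompt above (strip, fail fast on blank, preserve the prefixing logic); it is not the actual code in `base1.py`, and the error message wording is an assumption.

```python
def validate_model(adapter_metadata: dict) -> str:
    """Fail fast on a missing/blank model instead of returning 'gemini/'."""
    model = (adapter_metadata.get("model") or "").strip()
    if not model:
        raise ValueError("Gemini adapter: 'model' must be a non-empty string")
    if model.startswith("gemini/"):
        return model  # already prefixed; pass through unchanged
    return f"gemini/{model}"
```

A blank or whitespace-only model now raises instead of producing the bogus `"gemini/"` value the reviewer flagged.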
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 36d4675d-43c9-4c73-92cf-aa2667969cf6
⛔ Files ignored due to path filters (1)
frontend/public/icons/adapter-icons/Gemini.png is excluded by !**/*.png
📒 Files selected for processing (3)
- unstract/sdk1/src/unstract/sdk1/adapters/base1.py
- unstract/sdk1/src/unstract/sdk1/adapters/llm1/gemini.py
- unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/gemini.json
…model, update default model
Frontend Lint Report (Biome): ✅ All checks passed! No linting or formatting issues found.
Test Results: Summary
Runner Tests - Full Report
SDK1 Tests - Full Report
What
Why
How
- `GeminiLLMParameters` class in `base1.py` with `api_key` field, `validate()` (idempotent `gemini/` prefix), and `validate_model()`
- `GeminiLLMAdapter` in `llm1/gemini.py` following the established Anthropic adapter pattern (single API key, no api_base)
- `gemini.json` JSON schema for the UI configuration form with fields: adapter_name, api_key, model, temperature, max_tokens, timeout, max_retries
- Adapter icon (`Gemini.png`)
- Auto-discovered by `register_adapters()` — no manual registration or `__init__.py` changes needed

Can this PR break any existing features? If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled)
…`base1.py`. No existing code is modified. All 98 existing SDK tests pass without regressions.

Database Migrations
Env Config
Relevant Docs
Related Issues or PRs
Dependencies Versions
`litellm` package for `gemini/` provider routing.

Notes on Testing
- Verified auto-discovery via `register_adapters()`
- `model` field is free-text (not enum dropdown) — LiteLLM handles routing errors for unsupported model strings

Checklist
I have read and understood the Contribution Guidelines.