migrate remaining providers to pydantic-ai #7732
base: main
Conversation
```python
from marimo._server.models.completion import UIMessage as ServerUIMessage
from marimo._utils.http import HTTPStatus


TIMEOUT = 30
```
Removed the timeout because, imo, we can let users cancel their own requests.
Pull request overview
This pull request migrates the remaining AI providers to use pydantic-ai, unifying the AI provider infrastructure and fixing several reported issues with provider compatibility. The migration simplifies the codebase by replacing custom provider implementations with pydantic-ai's standardized approach.
Key Changes:
- Migrated OpenAI, Azure OpenAI, and custom providers to use pydantic-ai instead of direct SDK calls
- Introduced a new `CustomProvider` class to handle OpenAI-compatible and other custom providers (see the sketch below)
- Removed legacy provider abstractions like `CompletionProvider` and provider-specific conversion functions
- Updated dependencies to include pydantic-ai with OpenAI support in the recommended and dev extras
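For context, here is a minimal, hedged sketch of how an OpenAI-compatible endpoint can be driven through pydantic-ai. This is not marimo's actual `CustomProvider` implementation; the model name, `base_url`, and `api_key` below are placeholders.

```python
# Minimal sketch, assuming pydantic-ai's OpenAIModel/OpenAIProvider interfaces;
# the endpoint, key, and model name are placeholders, not marimo defaults.
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

model = OpenAIModel(
    "gpt-4o-mini",  # model name as exposed by the provider
    provider=OpenAIProvider(
        base_url="https://my-llm-host.example.com/v1",  # any OpenAI-compatible server
        api_key="sk-...",
    ),
)
agent = Agent(model)
result = agent.run_sync("Say hello in one short sentence.")
print(result.output)
```

A single `Agent`-style interface like this is what allows the endpoint handlers to drop provider-specific type checks, as described in the file summary below.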
Reviewed changes
Copilot reviewed 7 out of 8 changed files in this pull request and generated no comments.
Summary per file:
| File | Description |
|---|---|
| pyproject.toml | Updated dependencies to replace standalone openai>=1.55.3 with pydantic-ai-slim[openai]>=1.39.0 in recommended and dev extras |
| pixi.lock | Updated lock file to reflect dependency changes with new SHA and editable flag |
| tests/_server/api/endpoints/test_ai.py | Refactored tests to mock pydantic-ai provider methods instead of the OpenAI SDK directly; simplified test structure (see the test sketch after this table) |
| tests/_server/ai/test_providers.py | Removed tests for legacy provider implementations; added tests for new provider capabilities (extended thinking, responses API support) |
| tests/_ai/test_chat_convert.py | Removed tests for deprecated conversion functions that are now handled by pydantic-ai; updated ChatMessage initialization to use an empty list for parts |
| marimo/_server/ai/providers.py | Major refactor: introduced PydanticProvider base class, OpenAIClientMixin, and CustomProvider for flexible provider support; removed legacy CompletionProvider abstraction |
| marimo/_server/api/endpoints/ai.py | Simplified endpoint handlers by removing provider type checks and using unified pydantic-ai interface |
| marimo/_ai/_convert.py | Removed provider-specific tool conversion functions (OpenAI, Anthropic, Google) as pydantic-ai handles these internally |
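As a rough illustration of the testing approach described for tests/_server/api/endpoints/test_ai.py, pydantic-ai ships a `TestModel` that avoids real network calls. This is only a hedged sketch; marimo's actual tests may patch different layers (e.g. provider methods), as noted in the table above.

```python
# Hedged sketch using pydantic_ai.models.test.TestModel, which returns canned
# responses instead of calling a real provider.
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel


def test_agent_runs_without_network() -> None:
    agent = Agent(TestModel())
    result = agent.run_sync("hello")
    assert result.output  # TestModel produces a non-empty canned answer
```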
📝 Summary
Fixes #7428,
Fixes #6369,
Fixes #6228,
Fixes #7036.
Should fix #5340
Should fix #7040
Also supports other providers that are not OpenAI-compatible (like Mistral), though they may require updating the config; see the sketch below.
I did not test with Azure, and my config is quite simple. This also removes the old chat_messages from types.
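For non-OpenAI-compatible providers, pydantic-ai exposes native model classes. A hedged sketch for Mistral (assuming the mistral extra of pydantic-ai is installed and a MISTRAL_API_KEY environment variable is set) might look like the following; the exact config keys marimo expects may differ.

```python
# Hedged sketch: drive Mistral through pydantic-ai's native MistralModel.
# Assumes the mistral extra is installed and MISTRAL_API_KEY is set in the env.
from pydantic_ai import Agent
from pydantic_ai.models.mistral import MistralModel

agent = Agent(MistralModel("mistral-small-latest"))
result = agent.run_sync("Say hi")
print(result.output)
```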
Todo:
- OpenAI [screenshot]
- Deepseek [screenshot]
- Mistral [screenshot]
- inline completion [screenshot]
🔍 Description of Changes
📋 Checklist