diff --git a/.vscode/mcp.json b/.vscode/mcp.json index 9a50026f2..017e4db63 100644 --- a/.vscode/mcp.json +++ b/.vscode/mcp.json @@ -8,7 +8,22 @@ }, "gallery": "https://api.mcp.github.com", "version": "1.0.0" - } + }, + "aignostics": { + "type": "stdio", + "command": "uv", + "args": [ + "run", + "--with", + "aignostics[mcp]", + "python", + "-m", + "aignostics.mcp" + ], + "env": { + "AIGNOSTICS_API_ROOT": "https://platform.aignostics.com" + } + }, }, "inputs": [ { @@ -18,4 +33,4 @@ "password": true } ] -} \ No newline at end of file +} diff --git a/CLAUDE.md b/CLAUDE.md index 388c1fd52..bec168abd 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -50,6 +50,7 @@ Every module has detailed CLAUDE.md documentation. For module-specific guidance, * [src/aignostics/notebook/CLAUDE.md](src/aignostics/notebook/CLAUDE.md) - Marimo notebook integration * [src/aignostics/qupath/CLAUDE.md](src/aignostics/qupath/CLAUDE.md) - QuPath bioimage analysis * [src/aignostics/system/CLAUDE.md](src/aignostics/system/CLAUDE.md) - System diagnostics +* [src/aignostics/mcp/CLAUDE.md](src/aignostics/mcp/CLAUDE.md) - MCP server for LLM integration * [tests/CLAUDE.md](tests/CLAUDE.md) - Test suite documentation ## Development Commands diff --git a/README.md b/README.md index c61a73da6..3dd5e9d78 100644 --- a/README.md +++ b/README.md @@ -222,6 +222,15 @@ Choose your preferred interface for working with the Aignostics Platform. 
Each i | **Use when** | Building custom analysis pipeline in Python for repeated usage and processing large datasets (10s-1000s of slides) | | **Get started** | Run example notebooks or call the Aignostics Platform API from your Python scripts | +### 🤖 MCP Server (AI/LLM Integration) + +| | | +|---|---| +| **What it is** | MCP server enabling AI assistants like Claude and GitHub Copilot to interact with the Aignostics Platform | +| **Best for** | Users who want to query runs and analyze readout data through natural language | +| **Use when** | Exploring results conversationally, performing analyses on readout data, or getting quick summaries of runs | +| **Get started** | Configure your AI assistant | + > 💡 Launchpad and CLI handle authentication automatically. Python Library requires manual setup (see [authentication section](#example-notebooks-interact-with-the-aignostics-platform-from-your-python-notebook-environment)). ## Launchpad: Run your first computational pathology analysis in 10 minutes from your desktop @@ -608,6 +617,84 @@ Self-signed URLs for files in google storage buckets can be generated using the [required credentials](https://cloud.google.com/docs/authentication/application-default-credentials) for the Google Storage Bucket** +## MCP Server: Use AI assistants to interact with the Aignostics Platform + +The **Aignostics MCP Server** enables AI assistants like Claude and GitHub Copilot to interact with the Aignostics Platform via natural language. Query your application runs, analyze cell and slide readout data with SQL, and explore results through conversational AI. + +> 💡 If you haven't logged in before, run `uvx aignostics user login` first. The MCP server uses your cached credentials automatically. 
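The client configurations below are plain JSON, and a stray trailing comma is the most common reason a configured server silently fails to load. A quick way to sanity-check an entry before restarting your client, using only the Python standard library (the `mcpServers`/`command`/`args` keys mirror the snippets in this section):

```python
import json

# The Claude Desktop entry shown below, as a string; a malformed config
# (for example a trailing comma) makes json.loads raise an error instead
# of failing silently inside the client.
config_text = """
{
  "mcpServers": {
    "aignostics": {
      "command": "uvx",
      "args": ["--from", "aignostics[mcp]", "python", "-m", "aignostics.mcp"]
    }
  }
}
"""

config = json.loads(config_text)
server = config["mcpServers"]["aignostics"]
print(server["command"])         # uvx
print(" ".join(server["args"]))  # --from aignostics[mcp] python -m aignostics.mcp
```

The same check applies to the VS Code and Claude Code variants, which differ only in the top-level key and file location.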
+ +### Claude Desktop + +Add to your Claude Desktop configuration file: + +**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json` +**Windows**: `%APPDATA%\Claude\claude_desktop_config.json` +**Linux**: `~/.config/Claude/claude_desktop_config.json` + +```json +{ + "mcpServers": { + "aignostics": { + "command": "uvx", + "args": ["--from", "aignostics[mcp]", "python", "-m", "aignostics.mcp"] + } + } +} +``` + +Restart Claude Desktop after saving the configuration. + +### VS Code with GitHub Copilot + +Add to your VS Code settings (`settings.json`) or workspace `.vscode/settings.json`: + +```json +{ + "github.copilot.chat.mcp.servers": { + "aignostics": { + "command": "uvx", + "args": ["--from", "aignostics[mcp]", "python", "-m", "aignostics.mcp"] + } + } +} +``` + +### Claude Code (CLI) + +Add to `~/.claude/claude_mcp_config.json`: + +```json +{ + "mcpServers": { + "aignostics": { + "command": "uvx", + "args": ["--from", "aignostics[mcp]", "python", "-m", "aignostics.mcp"] + } + } +} +``` + +### Example Conversations + +Once configured, you can interact with the platform through natural language: + +``` +You: Show me my recent runs in the Aignostics Platform +Assistant: Here are your 5 most recent runs... + +You: What's the status of run abc-123? +Assistant: This run completed with 10 items processed... + +You: Download the readouts and show me the cell distribution +Assistant: Downloaded 2 files. Here's the summary: + - Total cells: 45,231 + - Carcinoma cells: 12,456 (27.5%) + ... + +You: How many cells are in carcinoma regions? +Assistant: There are 23,456 cells in carcinoma regions. +``` + ## Next Steps Now that you have an overview of the Aignostics Python SDK and its interfaces, here are some recommended next steps to deepen your understanding and get the most out of the platform: @@ -635,7 +722,11 @@ Aignostics Platform offers key features designed to maximize value for its users 4. 
**High-throughput processing with incremental results delivery:** Submit up to 500 whole slide images (WSI) in one batch request. Access results for individual slides as they completed processing, without having to wait for the entire batch to finish. 5. **Standard formats:** Support for commonly used image formats in digital pathology such as pyramidal DICOM, TIFF, and SVS. Results provided in standard formats like QuPath GeoJSON (polygons), TIFF (heatmaps) and CSV (measurements and statistics). -### Registration and User Access +### **MCP (Model Context Protocol)** +Open protocol enabling AI assistants to interact with external tools and data sources. The Aignostics MCP Server allows LLMs like Claude and GitHub Copilot to query runs and analyze readout data. + +**MCP Server** +See Aignostics MCP Server.Registration and User Access To start using the Aignostics Platform and its advanced applications, your organization must be registered by our business support team: @@ -767,9 +858,12 @@ Python library for seamless integration of the Aignostics Platform with enterpri **Aignostics Console** Web-based user interface for managing organizations, applications, quotas, users, and monitoring platform usage. -**Aignostics Launchpad** +**Aignostics Launchpad** Graphical desktop application (available for Mac OS X, Windows, and Linux) that allows users to run computational pathology applications on whole slide images and inspect results with QuPath and Python Notebooks. +**Aignostics MCP Server** +MCP (Model Context Protocol) server that enables AI assistants like Claude and GitHub Copilot to interact with the Aignostics Platform, query application runs, and analyze readout data through natural language. + **Aignostics Platform** Comprehensive cloud-based service providing standardized, secure interface for accessing advanced computational pathology applications without requiring specialized expertise or complex infrastructure. 
@@ -866,7 +960,7 @@ Laboratory systems that can be integrated with the Aignostics Platform for workf ### M -**Marimo** +**Marimo** Modern notebook environment supported by the Aignostics Platform as an alternative to Jupyter. **Metadata** diff --git a/pyproject.toml b/pyproject.toml index 002a7d2b0..c5a0c8cb4 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -157,6 +157,10 @@ marimo = [ "shapely>=2.1.0,<3", ] qupath = [] +mcp = [ + "mcp>=1.0.0,<2", + # duckdb is already a main dependency +] [dependency-groups] dev = [ "autodoc-pydantic>=2.2.0,<3", diff --git a/src/aignostics/CLAUDE.md b/src/aignostics/CLAUDE.md index d08c02342..c0d857402 100644 --- a/src/aignostics/CLAUDE.md +++ b/src/aignostics/CLAUDE.md @@ -16,6 +16,7 @@ This file provides a comprehensive overview of all modules in the Aignostics SDK | **notebook** | Marimo notebook server | ❌ | ✅ | ✅ | | **qupath** | QuPath integration | ✅ | ✅ | ✅ | | **system** | System information | ✅ | ✅ | ✅ | +| **mcp** | MCP server for LLM integration | ❌ | ❌ | ✅ | ## Module Descriptions @@ -122,6 +123,16 @@ This file provides a comprehensive overview of all modules in the Aignostics SDK - **CLI**: `info` command for system diagnostics - **Dependencies**: `utils` (logging) +### 🤖 mcp + +**MCP server for LLM integration** + +- **Core Features**: Run management tools, readout querying via DuckDB SQL, compound workflow skills +- **Operating Mode**: Local stdio for Claude Desktop, VS Code with GitHub Copilot, and Claude Code +- **Tools**: 12 tools across tiers (Core, Query, Auth, Skills) +- **Dependencies**: `platform` (auth), `mcp` package, `duckdb` +- **Run**: `python -m aignostics.mcp` + ## Module Interaction Patterns ### Architecture: Service Layer with Dual Presentation Layers @@ -222,6 +233,7 @@ utils.locate_implementations(BaseService) - **notebook** → `utils`, `marimo` (external) - **qupath** → `utils` - **system** → All modules (for health checks) +- **mcp** → `platform`, `mcp` (external), `duckdb`, `fastmcp` 
(optional external) ### Shared Resources @@ -276,6 +288,7 @@ For detailed information about each module, see: - [notebook/CLAUDE.md](notebook/CLAUDE.md) - Marimo notebook integration - [qupath/CLAUDE.md](qupath/CLAUDE.md) - QuPath integration - [system/CLAUDE.md](system/CLAUDE.md) - System diagnostics +- [mcp/CLAUDE.md](mcp/CLAUDE.md) - MCP server for LLM integration ## Development Guidelines diff --git a/src/aignostics/mcp/CLAUDE.md b/src/aignostics/mcp/CLAUDE.md new file mode 100644 index 000000000..950f74a05 --- /dev/null +++ b/src/aignostics/mcp/CLAUDE.md @@ -0,0 +1,303 @@ +# CLAUDE.md - MCP Module + +This file provides comprehensive guidance to Claude Code and human engineers when working with the `mcp` module in this repository. + +## Module Overview + +The MCP (Model Context Protocol) module provides an MCP server that enables LLMs to interact with the Aignostics Platform via natural language. It exposes run management and readout querying capabilities through standardized MCP tools. + +### Core Responsibilities + +**Run Management:** + +- List and inspect application runs +- Check run status, statistics, and item details +- Download artifacts and readouts +- **Flexible identification**: All tools accept either run IDs (UUID) or external IDs (item identifiers) + +**Readout Analysis (powered by DuckDB):** + +- Query slide-level and cell-level readout data using SQL +- Get schema information for available columns +- Summarize cell distributions and tissue regions +- Perform complex analytical queries with full SQL support + +**High-Level Skills:** + +- Compound operations that combine multiple tool calls +- Optimized for common LLM workflows +- Error handling with helpful guidance + +### Operating Mode + +The server operates in **local stdio mode**, suitable for: +- Claude Desktop +- VS Code with GitHub Copilot +- Claude Code (CLI) + +Authentication uses cached tokens from the Aignostics SDK (`~/.aignostics/token.json`). 
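A minimal pre-flight check for that cached token can be sketched with the standard library. `has_cached_token` is a hypothetical helper, not part of the SDK, and it deliberately checks only for the file's presence: the token file's internal format is owned by the SDK, and an existing file may still hold an expired token (which the server refreshes on first use).

```python
import tempfile
from pathlib import Path

def has_cached_token(cache_dir: Path = Path.home() / ".aignostics") -> bool:
    """Return True if an SDK token cache file exists in cache_dir.

    Presence-only check; the token's contents and validity are left
    to the SDK's own refresh logic.
    """
    return (cache_dir / "token.json").is_file()

# Demo against a throwaway directory rather than the real ~/.aignostics:
with tempfile.TemporaryDirectory() as tmp:
    cache = Path(tmp)
    (cache / "token.json").write_text("{}")
    found = has_cached_token(cache)
missing = has_cached_token(Path("definitely-not-a-cache-dir"))
print(found, missing)  # True False
```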
+ +## Architecture & Design Patterns + +### Module Structure + +``` +src/aignostics/mcp/ +├── __init__.py # Public exports (mcp, run_server) +├── __main__.py # Entry point for `python -m aignostics.mcp` +├── _server.py # MCP server implementation (stdio transport) +├── _settings.py # Environment configuration +├── README.md # User documentation +├── CLAUDE.md # This file +└── skills/ # Claude Code workflow skills + ├── aignostics-quickstart/SKILL.md + ├── analyze-readouts/SKILL.md + └── troubleshoot-run/SKILL.md +``` + +### Authentication Architecture + +The server uses cached authentication from the Aignostics SDK: + +``` +┌──────────────────────────────────────────────────────┐ +│ Local stdio server │ +│ ┌──────────────────────────────────────────────────┐ │ +│ │ MCP Tool Handler │ │ +│ │ - Calls _get_client() │ │ +│ │ - Returns Client() instance │ │ +│ │ - Client uses cached token from disk │ │ +│ │ (~/.aignostics/token.json) │ │ +│ │ - Token auto-refreshes if expired │ │ +│ └──────────────────────────────────────────────────┘ │ +│ ↓ │ +│ Platform Client Layer │ +│ (aignostics.platform.Client) │ +│ ↓ │ +│ Aignostics Platform API │ +└──────────────────────────────────────────────────────┘ +``` + +**Auth Retry Pattern:** +- `@_retry_on_auth_failure` decorator on all tools +- Handles `UnauthorizedException` from expired tokens +- Clears cached token and retries operation once +- Transparent to tool caller + +### Run ID vs External ID Resolution + +All tools that accept a `run_id` parameter actually accept either: + +- **Run ID (UUID)**: The platform-assigned identifier for an entire run +- **External ID**: A user-provided identifier for an item/slide within a run + +The `_resolve_run_id()` helper function handles this transparently. + +### Tool Design Pattern + +Each tool follows a consistent pattern: + +```python +@mcp.tool() +def tool_name(required_param: str, optional_param: str | None = None) -> str: + """Tool description for LLM. 
+ + Args: + required_param: Description of parameter. + optional_param: Optional description. + + Returns: + Markdown-formatted result string. + """ + client = _get_client() # Gets client with cached token + # Implementation + return "## Result\n\nMarkdown content..." +``` + +### DuckDB Integration + +The module uses DuckDB for high-performance SQL querying: + +```python +# Direct CSV querying without loading into memory +con = duckdb.connect() +table = f"read_csv_auto('{cache_path}', header=true, skip=1)" +result = con.execute(f"SELECT * FROM {table} WHERE ...") +``` + +### Caching Strategy + +Readouts are downloaded to a visible location in the user's home directory: + +``` +~/aignostics_readouts/ +└── {run_id}/ + ├── slide_readouts.csv + └── cell_readouts.csv +``` + +- Default path: `~/aignostics_readouts/{run_id}/` +- Configurable via `AIGNOSTICS_MCP_READOUTS_DIR` environment variable + +## Tools Reference + +The server exposes 12 tools organized by tier: + +| Tier | Tools | Purpose | +|------|-------|---------| +| Core | `list_runs`, `get_run_status`, `get_run_items` | Basic run operations | +| Query | `query_readouts_sql`, `get_readout_schema`, `query_slide_readouts`, `query_cell_readouts`, `summarize_cells`, `download_readouts` | Data analysis | +| Auth | `get_current_user` | Authentication info | +| Skills | `run_summary`, `readout_analysis` | Compound workflows | + +## Usage + +### Running the Server + +```bash +# Run the local stdio server +uv run python -m aignostics.mcp + +# With environment specification +AIGNOSTICS_API_ROOT=https://platform.aignostics.com uv run python -m aignostics.mcp +``` + +### Claude Desktop Configuration + +```json +{ + "mcpServers": { + "aignostics": { + "command": "uv", + "args": ["run", "--with", "aignostics[mcp]", "python", "-m", "aignostics.mcp"], + "env": { + "AIGNOSTICS_API_ROOT": "https://platform.aignostics.com" + } + } + } +} +``` + +### VS Code with GitHub Copilot + +```json +{ + "github.copilot.chat.mcp.servers": { + 
"aignostics": { + "command": "uv", + "args": ["run", "--with", "aignostics[mcp]", "python", "-m", "aignostics.mcp"], + "env": { + "AIGNOSTICS_API_ROOT": "https://platform.aignostics.com" + } + } + } +} +``` + +### Claude Code + +```json +{ + "mcpServers": { + "aignostics": { + "command": "uv", + "args": ["run", "--with", "aignostics[mcp]", "python", "-m", "aignostics.mcp"], + "env": { + "AIGNOSTICS_API_ROOT": "https://platform.aignostics.com" + } + } + } +} +``` + +## Environment Configuration + +| Variable | Description | Default | +|----------|-------------|---------| +| `AIGNOSTICS_API_ROOT` | Platform API URL | `https://platform.aignostics.com` | +| `AIGNOSTICS_MCP_READOUTS_DIR` | Readout cache directory | `~/aignostics_readouts` | +| `AIGNOSTICS_CACHE_DIR` | Auth token cache directory | `~/.aignostics` | + +## Dependencies + +The MCP module requires: + +- `mcp>=1.0.0,<2` - MCP server framework +- `duckdb` - SQL query engine (already a main SDK dependency) + +Install with: + +```bash +uv sync --extra mcp +``` + +## Testing + +### Manual Testing + +```python +from aignostics.mcp._server import list_runs, query_readouts_sql + +# Test basic functionality +print(list_runs(limit=3)) + +# Test SQL queries (works with run ID or external ID) +print(query_readouts_sql("your-run-id", "SELECT COUNT(*) FROM cells")) +print(query_readouts_sql("slide_001.svs", "SELECT COUNT(*) FROM cells")) # by external ID +``` + +### Verifying Tool Registration + +```python +from aignostics.mcp._server import mcp + +print(f"Registered tools: {len(mcp._tool_manager._tools)}") +for name in sorted(mcp._tool_manager._tools): + print(f" - {name}") +``` + +## Common Patterns + +### Workflow: Analyze a Run + +``` +1. list_runs(limit=5) → Find a run with succeeded items +2. run_summary(run_id) → Get overview and available artifacts +3. download_readouts(run_id) → Cache the data locally +4. get_readout_schema(run_id) → See available columns +5. query_readouts_sql(...) 
→ Run custom analysis +``` + +### Workflow: Troubleshoot Failures + +``` +1. get_run_status(run_id) → Check termination reason +2. get_run_items(run_id) → See which items failed +3. Look at error messages → Identify USER_ERROR vs SYSTEM_ERROR +``` + +## Lint Suppressions + +The module uses several ruff noqa directives: + +```python +# ruff: noqa: S608 - SQL injection (intentional for LLM queries) +# ruff: noqa: S110 - try-except-pass (graceful degradation) +# ruff: noqa: C901 - Complexity (compound tools are inherently complex) +# ruff: noqa: PLR0914, PLR1702 - Local variables and nesting +``` + +These are intentional design choices for an MCP tool that: +- Must allow arbitrary SQL queries +- Should gracefully handle partial failures +- Combines multiple operations in compound tools + +## Future Enhancements + +Potential improvements: + +1. **Resources**: Add MCP resources for application/version discovery +2. **Prompts**: Pre-built prompt templates for common analyses +3. **Streaming**: Stream large query results +4. **Cache management**: Tool to clear/refresh cached readouts +5. **Run submission**: Tool to submit new runs (currently read-only) diff --git a/src/aignostics/mcp/README.md b/src/aignostics/mcp/README.md new file mode 100644 index 000000000..7d9af1a98 --- /dev/null +++ b/src/aignostics/mcp/README.md @@ -0,0 +1,469 @@ +# Aignostics MCP Server + +An MCP (Model Context Protocol) server that enables LLMs like Claude to interact with the Aignostics Platform via natural language. Query application runs, analyze cell and slide readout data, and explore pathology results through conversational AI. 
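The SQL analysis runs on DuckDB directly over the downloaded readout CSVs. To get a feel for the query surface without the platform, here is a stand-in sketch that swaps DuckDB for the standard library's `sqlite3` and uses hypothetical column names (the real columns come from the schema tools):

```python
import csv
import io
import sqlite3

# Hypothetical miniature of a cell readout CSV; actual column names
# and types come from get_readout_schema and will differ.
cell_csv = """CELL_CLASS,IN_CARCINOMA,NUCLEUS_AREA
carcinoma,true,41.2
lymphocyte,false,18.9
carcinoma,true,39.5
"""

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE cells (CELL_CLASS TEXT, IN_CARCINOMA TEXT, NUCLEUS_AREA REAL)"
)
rows = list(csv.DictReader(io.StringIO(cell_csv)))
con.executemany(
    "INSERT INTO cells VALUES (:CELL_CLASS, :IN_CARCINOMA, :NUCLEUS_AREA)", rows
)

(count,) = con.execute(
    "SELECT COUNT(*) FROM cells WHERE IN_CARCINOMA = 'true'"
).fetchone()
print(count)  # 2
```

DuckDB itself skips the load step entirely (`read_csv_auto` queries the file in place), which is why it is the engine used here.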
+ +## What It Can Do + +### Run Management +- **List runs** - View your recent application runs with status and statistics +- **Check status** - Get detailed information about specific runs including item counts, errors, and termination reasons +- **View items** - Inspect individual items within a run and their processing states +- **Download readouts** - Fetch slide and cell readout CSV files to a local cache +- **Flexible identification** - All tools accept either run IDs (UUID) or external IDs (item/slide names) + +### Readout Analysis (Powered by DuckDB) +- **SQL queries** - Run arbitrary SQL against cell and slide readout data +- **Schema inspection** - View available columns and their types +- **Cell summaries** - Get distribution statistics by cell class and tissue region +- **Filtered queries** - Query cells with complex filter expressions + +### Authentication +- **User info** - Verify authentication and view organization details + +## What It Cannot Do + +- **Submit new runs** - This server is read-only for analysis +- **Cancel or delete runs** - No modification of existing runs +- **Upload files** - Cannot upload WSI files to the platform +- **Modify readout data** - Read-only access to downloaded readouts +- **Access other organizations' data** - Scoped to your authenticated user + +## Installation + +The MCP server is included with the Aignostics SDK: + +```bash +# Install with MCP support +pip install "aignostics[mcp]" + +# Or with uv +uv add "aignostics[mcp]" +``` + +## Authentication + +The MCP server needs access to the Aignostics Platform API. There are two ways to authenticate: + +### Option 1: Pre-authenticate via CLI (Recommended) + +Run this once before using the MCP server: + +```bash +# Login to staging (default) +aignostics user login + +# Login to production +AIGNOSTICS_API_ROOT=https://platform.aignostics.com aignostics user login +``` + +The token is cached at `~/.aignostics/token.json` and auto-refreshes. 
The MCP server will use this cached token automatically. + +### Option 2: Custom Token Cache Location + +If you need tokens stored in a different location (e.g., for containerized environments), set `AIGNOSTICS_CACHE_DIR`: + +```json +{ + "mcpServers": { + "aignostics": { + "command": "python", + "args": ["-m", "aignostics.mcp"], + "env": { + "AIGNOSTICS_API_ROOT": "https://platform-staging.aignostics.com", + "AIGNOSTICS_CACHE_DIR": "/path/to/secure/cache" + } + } + } +} +``` + +Then login once with the same cache dir: +```bash +AIGNOSTICS_CACHE_DIR=/path/to/secure/cache aignostics user login +``` + +### Option 3: Refresh Token via Environment (CI/Automated) + +For fully automated setups where interactive login isn't possible: + +**Set in shell profile** (`.bashrc`, `.zshrc`) - token is never in config files: +```bash +export AIGNOSTICS_REFRESH_TOKEN="$(cat ~/.aignostics/.token | cut -d: -f1)" +``` + +**Or use a wrapper script:** +```bash +#!/bin/bash +# ~/.local/bin/aignostics-mcp-wrapper.sh +export AIGNOSTICS_REFRESH_TOKEN=$(cat ~/.aignostics/.token 2>/dev/null | cut -d: -f1) +exec python -m aignostics.mcp +``` + +```json +{ + "mcpServers": { + "aignostics": { + "command": "/home/you/.local/bin/aignostics-mcp-wrapper.sh", + "env": { + "AIGNOSTICS_API_ROOT": "https://platform-staging.aignostics.com" + } + } + } +} +``` + +> **Note**: Most users don't need Options 2 or 3. If you've logged in via `aignostics user login`, the SDK automatically uses the cached token. These options are only for isolated environments or CI systems. + +### Why Can't the Server Handle Login? + +The initial OAuth2 login requires opening a browser for user interaction. Since MCP servers run as background processes without a UI, browser-based login isn't possible. Once authenticated (via either method above), the server handles token refresh automatically. 
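For tooling that prefers Python over shell, the token extraction done by the wrapper script can be mirrored directly. This is a sketch: the colon-delimited layout of `~/.aignostics/.token` (refresh token before the first colon) is assumed from the `cut -d: -f1` invocation above, and `read_refresh_token` is a hypothetical helper, not an SDK API.

```python
import tempfile
from pathlib import Path

def read_refresh_token(token_file: Path):
    """Python equivalent of `cut -d: -f1` in the wrapper script above.

    Returns the text before the first colon, or None when the file is
    missing (mirroring the script's `2>/dev/null`).
    """
    try:
        return token_file.read_text().split(":", 1)[0].strip() or None
    except FileNotFoundError:
        return None

# Demo against a throwaway file, not the real cache:
tmp = Path(tempfile.mkdtemp()) / ".token"
tmp.write_text("my-refresh-token:remaining-fields\n")
print(read_refresh_token(tmp))  # my-refresh-token
```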
+ +--- + +## Client Configuration + +### Claude Desktop + +Add to your Claude Desktop configuration file: + +**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json` +**Windows**: `%APPDATA%\Claude\claude_desktop_config.json` +**Linux**: `~/.config/Claude/claude_desktop_config.json` + +```json +{ + "mcpServers": { + "aignostics": { + "command": "uv", + "args": ["run", "--with", "aignostics[mcp]", "python", "-m", "aignostics.mcp"], + "env": { + "AIGNOSTICS_API_ROOT": "https://platform-staging.aignostics.com" + } + } + } +} +``` + +**For production:** +```json +{ + "mcpServers": { + "aignostics": { + "command": "uv", + "args": ["run", "--with", "aignostics[mcp]", "python", "-m", "aignostics.mcp"], + "env": { + "AIGNOSTICS_API_ROOT": "https://platform.aignostics.com" + } + } + } +} +``` + +**If installed locally (e.g., in a virtual environment):** +```json +{ + "mcpServers": { + "aignostics": { + "command": "/path/to/venv/bin/python", + "args": ["-m", "aignostics.mcp"], + "env": { + "AIGNOSTICS_API_ROOT": "https://platform-staging.aignostics.com" + } + } + } +} +``` + +### VS Code with GitHub Copilot + +Add to your VS Code settings (`settings.json`): + +```json +{ + "github.copilot.chat.mcp.servers": { + "aignostics": { + "command": "uv", + "args": ["run", "--with", "aignostics[mcp]", "python", "-m", "aignostics.mcp"], + "env": { + "AIGNOSTICS_API_ROOT": "https://platform-staging.aignostics.com" + } + } + } +} +``` + +Or in your workspace `.vscode/settings.json` for project-specific configuration. 
+ +### Claude Code (CLI) + +Add to your Claude Code MCP configuration at `~/.claude/claude_mcp_config.json`: + +```json +{ + "mcpServers": { + "aignostics": { + "command": "uv", + "args": ["run", "--with", "aignostics[mcp]", "python", "-m", "aignostics.mcp"], + "env": { + "AIGNOSTICS_API_ROOT": "https://platform-staging.aignostics.com" + } + } + } +} +``` + +**Using a local development install:** +```json +{ + "mcpServers": { + "aignostics": { + "command": "uv", + "args": ["run", "python", "-m", "aignostics.mcp"], + "cwd": "/path/to/python-sdk", + "env": { + "AIGNOSTICS_API_ROOT": "https://platform-staging.aignostics.com" + } + } + } +} +``` + +--- + +## Available Tools + +### Core Tools + +| Tool | Description | +|------|-------------| +| `list_runs` | List recent runs with optional limit and application filter | +| `get_run_status` | Get detailed status, statistics, and termination info for a run | +| `get_run_items` | List all items in a run with their states and errors | +| `download_readouts` | Download slide and cell readout CSVs to local cache | + +### Query Tools + +| Tool | Description | +|------|-------------| +| `query_readouts_sql` | Execute arbitrary SQL on `slides` and `cells` tables | +| `get_readout_schema` | View available columns and types for readout data | +| `query_slide_readouts` | Query slide-level measurements with optional column selection | +| `query_cell_readouts` | Query cell data with filters, column selection, and limits | +| `summarize_cells` | Get cell distribution by class and tissue region | + +### Compound Skills + +| Tool | Description | +|------|-------------| +| `run_summary` | Complete run overview with items, errors, and available artifacts | +| `readout_analysis` | Download readouts and generate statistical summary | + +### Authentication + +| Tool | Description | +|------|-------------| +| `get_current_user` | Show authenticated user email and organization | + +--- + +## Run ID vs External ID + +All tools that accept a 
`run_id` parameter actually accept either: + +- **Run ID (UUID)**: The platform-assigned identifier for an entire run (e.g., `a1b2c3d4-e5f6-7890-abcd-ef1234567890`) +- **External ID**: A user-provided identifier for an item/slide within a run (e.g., `slide_001.svs`, `patient123/sample_A`) + +The server automatically resolves external IDs by searching for runs containing an item with that identifier. This allows you to work with human-readable names instead of UUIDs: + +``` +You: Show me the status of slide_001.svs +Claude: [calls get_run_status("slide_001.svs")] + Found run abc-123... with slide_001.svs. Status: TERMINATED... + +You: Analyze the readouts for patient123/biopsy_A +Claude: [calls readout_analysis("patient123/biopsy_A")] + Downloaded readouts for run xyz-789... +``` + +**Note**: If multiple runs contain items with the same external ID, the most recent run is used. + +--- + +## Example Conversations + +### Getting Started +``` +You: Show me my recent runs +Claude: [calls list_runs] Here are your 5 most recent runs... + +You: What's the status of run abc-123? +Claude: [calls get_run_status] This run has completed with 10 items processed... +``` + +### Analyzing Readouts +``` +You: Download the readouts for run abc-123 and show me the cell distribution +Claude: [calls readout_analysis] Downloaded 2 files. Here's the summary: + - Total cells: 45,231 + - Carcinoma cells: 12,456 (27.5%) + - Lymphocytes: 8,234 (18.2%) + ... + +You: How many cells are in carcinoma regions? +Claude: [calls query_readouts_sql with "SELECT COUNT(*) FROM cells WHERE IN_CARCINOMA = true"] + There are 23,456 cells in carcinoma regions. + +You: Show me the average nucleus area by cell class +Claude: [calls query_readouts_sql with appropriate SQL] + Here's the breakdown... +``` + +### Troubleshooting +``` +You: Why did run xyz-789 fail? 
+Claude: [calls run_summary] This run terminated with 2 user errors: + - Item 1: Invalid file format - the uploaded file is not a valid WSI + - Item 2: Resolution metadata missing +``` + +--- + +## Downloaded Readouts + +Readouts are downloaded to a visible location in your home directory: + +``` +~/aignostics_readouts/{run_id}/ +├── slide_readouts.csv +└── cell_readouts.csv +``` + +**Custom location:** Set `AIGNOSTICS_MCP_READOUTS_DIR` to change where files are stored: + +```json +{ + "mcpServers": { + "aignostics": { + "command": "python", + "args": ["-m", "aignostics.mcp"], + "env": { + "AIGNOSTICS_MCP_READOUTS_DIR": "/path/to/your/readouts" + } + } + } +} +``` + +The files persist between sessions. To refresh data for a run, delete its directory. + +--- + +## Environment Variables + +| Variable | Description | Default | +|----------|-------------|---------| +| `AIGNOSTICS_API_ROOT` | Platform API URL | `https://platform.aignostics.com` | +| `AIGNOSTICS_MCP_READOUTS_DIR` | Directory for downloaded readouts | `~/aignostics_readouts` | +| `AIGNOSTICS_CACHE_DIR` | Directory for auth token cache | `~/.aignostics` | +| `AIGNOSTICS_REFRESH_TOKEN` | Refresh token for non-interactive auth | None (uses cached token) | + +--- + +## Troubleshooting + +### "Authentication required" errors +Ensure you've logged in via one of: +```bash +# Option 1: CLI login (browser-based) +aignostics user login + +# Option 2: Set refresh token in your MCP config +# Add AIGNOSTICS_REFRESH_TOKEN to the env section +``` + +### "Run not found" errors +- Verify the run ID or external ID is correct +- Ensure you're connected to the right environment (staging vs production) +- If using an external ID, check that an item with that name exists in your runs + +### "No readouts found" errors +- The run may not have completed successfully +- Check `get_run_status` to see if items succeeded + +### SQL query errors +- Use `get_readout_schema` to see available columns +- Column names are case-sensitive 
(e.g., `CELL_CLASS`, not `cell_class`) + +--- + +## Claude Code Skills + +The `skills/` directory contains workflow guides specifically for **Claude Code** (the CLI tool). These are not MCP tools - they're markdown-based instructions that guide Claude Code through common workflows when users invoke them with slash commands. + +### Available Skills + +| Skill | Trigger | Purpose | +|-------|---------|---------| +| `aignostics-quickstart` | `/aignostics-quickstart` | Introduction to the platform and available tools. Use when new to Aignostics. | +| `analyze-readouts` | `/analyze-readouts` | Step-by-step guide for analyzing cell and slide readout data with SQL examples. | +| `troubleshoot-run` | `/troubleshoot-run` | Diagnose failed runs, understand error types (USER_ERROR vs SYSTEM_ERROR), and resolve issues. | + +### When Are Skills Used? + +Skills are **Claude Code-specific** and are triggered when: +1. A user invokes them via slash command in Claude Code (e.g., typing `/analyze-readouts`) +2. Claude Code detects the skill is relevant based on the user's question + +**Skills vs MCP Tools:** +- **MCP Tools** (`list_runs`, `query_readouts_sql`, etc.) - Executable functions the LLM calls to interact with the platform +- **Skills** - Workflow documentation that teaches the LLM *how* to use the tools effectively for specific tasks + +Think of skills as "recipes" that combine multiple tool calls into coherent workflows. + +### Skill File Format + +Each skill is a markdown file with YAML frontmatter: + +```markdown +--- +name: skill-name +description: When to use this skill (used for auto-detection) +--- + +# Skill Title + +Workflow instructions, examples, and tips... +``` + +The `description` field helps Claude Code automatically suggest the skill when relevant. 
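Because the frontmatter shown above is flat `key: value` pairs, the metadata that drives auto-detection can be read without a YAML dependency. The parser below is a sketch that handles only that flat form; nested YAML would need a real parser such as PyYAML.

```python
def parse_skill_frontmatter(text: str) -> dict:
    """Extract flat `key: value` frontmatter from a SKILL.md file.

    Expects the `---` ... `---` fence shown above; returns an empty
    dict when no frontmatter is present.
    """
    meta = {}
    lines = text.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            key, sep, value = line.partition(":")
            if sep:
                meta[key.strip()] = value.strip()
    return meta

sample = """---
name: analyze-readouts
description: Step-by-step guide for analyzing readout data
---

# Analyze Readouts
"""
meta = parse_skill_frontmatter(sample)
print(meta["name"])  # analyze-readouts
```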
+ +--- + +## Development + +### Running the server directly +```bash +# From the SDK repository +uv run python -m aignostics.mcp + +# With environment variable +AIGNOSTICS_API_ROOT=https://platform-staging.aignostics.com uv run python -m aignostics.mcp +``` + +### Testing tool functions +```python +from aignostics.mcp._server import list_runs, query_readouts_sql + +# Test listing runs +print(list_runs(limit=3)) + +# Test SQL query - works with run ID or external ID +print(query_readouts_sql("your-run-id", "SELECT COUNT(*) FROM cells")) +print(query_readouts_sql("slide_001.svs", "SELECT COUNT(*) FROM cells")) # by external ID +``` + +--- + +## License + +MIT License - See the main SDK license for details. diff --git a/src/aignostics/mcp/__init__.py b/src/aignostics/mcp/__init__.py new file mode 100644 index 000000000..0128a521a --- /dev/null +++ b/src/aignostics/mcp/__init__.py @@ -0,0 +1,25 @@ +"""MCP Server for Aignostics Platform. + +This module provides a Model Context Protocol (MCP) server that allows LLMs +to interact with the Aignostics platform via natural language queries. + +The server operates in local stdio mode, suitable for Claude Desktop, VS Code +with GitHub Copilot, and Claude Code. + +Usage: + # Run the server + python -m aignostics.mcp + +Programmatic usage: + from aignostics.mcp import run_server + + run_server() + +Authentication: + The server uses cached authentication tokens from the Aignostics SDK. + Users must first authenticate via `aignostics user login`. +""" + +from ._server import mcp, run_server + +__all__ = ["mcp", "run_server"] diff --git a/src/aignostics/mcp/__main__.py b/src/aignostics/mcp/__main__.py new file mode 100644 index 000000000..afa8659d1 --- /dev/null +++ b/src/aignostics/mcp/__main__.py @@ -0,0 +1,22 @@ +"""Entry point for running MCP server as a module. 
+ +Usage: + # Run the local stdio server (for Claude Desktop, VS Code, Claude Code) + python -m aignostics.mcp + + # With environment specification + AIGNOSTICS_API_ROOT=https://platform.aignostics.com python -m aignostics.mcp +""" + +from __future__ import annotations + + +def main() -> None: + """Run the MCP server in local stdio mode.""" + from ._server import run_server + + run_server() + + +if __name__ == "__main__": + main() diff --git a/src/aignostics/mcp/_server.py b/src/aignostics/mcp/_server.py new file mode 100644 index 000000000..abd2582c8 --- /dev/null +++ b/src/aignostics/mcp/_server.py @@ -0,0 +1,881 @@ +"""MCP Server implementation for Aignostics Platform. + +Uses DuckDB for high-performance SQL querying of readout data. + +Note: SQL injection warnings (S608) are intentionally suppressed - this MCP tool +is designed to allow LLMs/users to run arbitrary SQL queries on local CSV data. +""" +# ruff: noqa: S608, S110, C901, PLR0914, PLR1702 + +from __future__ import annotations + +from collections.abc import Callable +from functools import wraps +from itertools import islice +from pathlib import Path +from typing import ParamSpec, TypeVar + +import duckdb +import requests +from aignx.codegen.exceptions import UnauthorizedException + +from aignostics import platform +from aignostics.platform._authentication import remove_cached_token +from aignostics.platform._client import Client + +from ._settings import configure_environment, get_readout_cache_path + +# Type variables for the retry decorator +P = ParamSpec("P") +R = TypeVar("R") + +# Lazy import for mcp to avoid import errors if not installed +try: + from mcp.server.fastmcp import FastMCP +except ImportError as e: + _msg = "MCP server requires the 'mcp' package. 
Install with: uv add 'mcp[cli]' or pip install 'mcp[cli]'" + raise ImportError(_msg) from e + +# Initialize MCP server +mcp = FastMCP("aignostics-readouts") + +# Configure environment on module load +configure_environment() + + +def _clear_client_cache() -> None: + """Clear the cached API client instances. + + This forces re-authentication on the next API call. + """ + Client._api_client_cached = None + Client._api_client_uncached = None + + +def _retry_on_auth_failure(func: Callable[P, R]) -> Callable[P, R]: + """Decorator that retries once on authentication failure. + + If an UnauthorizedException is raised (e.g., expired token), this decorator: + 1. Removes the cached token file + 2. Clears the cached API client instances + 3. Retries the operation once + + This handles the case where the cached token has expired and needs refresh. + + Args: + func: The function to wrap. + + Returns: + The wrapped function that handles auth failures gracefully. + """ + + @wraps(func) + def wrapper(*args: P.args, **kwargs: P.kwargs) -> R: + try: + return func(*args, **kwargs) + except UnauthorizedException: + # Token expired or invalid - clear caches and retry once + remove_cached_token() + _clear_client_cache() + return func(*args, **kwargs) + + return wrapper + + +def _get_client() -> platform.Client: + """Get an authenticated platform client. + + Returns: + Authenticated Platform client instance. + """ + return platform.Client() + + +def _resolve_run_id(client: platform.Client, identifier: str) -> str: + """Resolve a run_id or external_id to a run_id. + + Accepts either: + - A run_id (UUID) - used directly + - An external_id (item identifier) - finds the run containing that item + + Args: + client: Authenticated platform client. + identifier: Either a run_id or an external_id. + + Returns: + The resolved run_id. + + Raises: + platform.NotFoundException: If no matching run is found. 
+ """ + # First, try to use it directly as a run_id + try: + run = client.runs(identifier) + run.details() # Validate it exists + return identifier + except platform.NotFoundException: + pass + + # Not a valid run_id, try to find a run by external_id + runs = list(client.runs.list(external_id=identifier, page_size=1)) + if runs: + return runs[0].run_id + + # No match found + msg = f"No run found with run_id or external_id: {identifier}" + raise platform.NotFoundException(msg) + + +# ============================================================================= +# TIER 1: Core Tools +# ============================================================================= + + +@mcp.tool() +@_retry_on_auth_failure +def list_runs( + limit: int = 10, + app_id: str | None = None, +) -> str: + """List recent application runs. + + Args: + limit: Maximum number of runs to return (default 10). + app_id: Optional application ID to filter by. + + Returns: + Markdown table of runs with ID, application, version, state, and item counts. + """ + client = _get_client() + + runs_iter = client.runs.list(application_id=app_id) if app_id else client.runs.list() + runs = list(islice(runs_iter, limit)) + + if not runs: + return "No runs found." + + lines = ["| Run ID | Application | Version | State | Items |", "|--------|-------------|---------|-------|-------|"] + + for run in runs: + details = run.details() + stats = details.statistics + items_summary = f"{stats.item_succeeded_count}/{stats.item_count} succeeded" + # Show full run_id so LLM can use it directly with other tools + lines.append( + f"| {run.run_id} | {details.application_id} | " + f"{details.version_number[:15]}... | {details.state.value} | {items_summary} |" + ) + + return "\n".join(lines) + + +@mcp.tool() +@_retry_on_auth_failure +def get_run_status(run_id: str) -> str: + """Get detailed status of a specific run. + + Args: + run_id: The run ID or external ID (item identifier) to check. 
+ + Returns: + Detailed status including state, statistics, and any errors. + """ + client = _get_client() + + try: + resolved_id = _resolve_run_id(client, run_id) + run = client.runs(resolved_id) + details = run.details() + except platform.NotFoundException: + return f"Run not found: {run_id}" + + lines = [ + f"## Run Status: {resolved_id}", + "", + f"- **Application:** {details.application_id}", + f"- **Version:** {details.version_number}", + f"- **State:** {details.state.value}", + ] + + if details.termination_reason: + lines.append(f"- **Termination Reason:** {details.termination_reason.value}") + if details.error_message: + lines.append(f"- **Error:** {details.error_message}") + + if details.statistics: + stats = details.statistics + lines.extend( + [ + "", + "### Item Statistics", + f"- Total: {stats.item_count}", + f"- Succeeded: {stats.item_succeeded_count}", + f"- Processing: {stats.item_processing_count}", + f"- Pending: {stats.item_pending_count}", + f"- User Errors: {stats.item_user_error_count}", + f"- System Errors: {stats.item_system_error_count}", + f"- Skipped: {stats.item_skipped_count}", + ] + ) + + return "\n".join(lines) + + +@mcp.tool() +@_retry_on_auth_failure +def get_run_items(run_id: str) -> str: + """Get all items in a run with their states. + + Args: + run_id: The run ID or external ID (item identifier) to get items for. + + Returns: + List of items with their states and any errors. + """ + client = _get_client() + + try: + resolved_id = _resolve_run_id(client, run_id) + run = client.runs(resolved_id) + items = list(run.results()) + except platform.NotFoundException: + return f"Run not found: {run_id}" + + if not items: + return "No items found in this run." 
+ + lines = [ + f"## Items in Run: {resolved_id}", + "", + "| # | External ID | State | Output | Error |", + "|---|-------------|-------|--------|-------|", + ] + + max_id_len = 30 + max_error_len = 50 + for i, item in enumerate(items, 1): + external_id = item.external_id[-max_id_len:] if len(item.external_id) > max_id_len else item.external_id + error = "" + if item.error_message: + if len(item.error_message) > max_error_len: + error = item.error_message[:max_error_len] + "..." + else: + error = item.error_message + lines.append(f"| {i} | ...{external_id} | {item.state.value} | {item.output.value} | {error} |") + + return "\n".join(lines) + + +# ============================================================================= +# TIER 2: Artifact/Readout Tools +# ============================================================================= + + +@mcp.tool() +@_retry_on_auth_failure +def download_readouts(run_id: str, output_dir: str | None = None) -> str: + """Download slide and cell readouts for a run. + + Args: + run_id: The run ID or external ID (item identifier) to download readouts for. + output_dir: Optional output directory. Uses cache if not specified. + + Returns: + Paths to the downloaded files. 
+ """ + client = _get_client() + + try: + resolved_id = _resolve_run_id(client, run_id) + run = client.runs(resolved_id) + items = list(run.results()) + except platform.NotFoundException: + return f"Run not found: {run_id}" + + downloaded = [] + + for item in items: + if item.output != platform.ItemOutput.FULL: + continue + + for artifact in item.output_artifacts: + if "readout" not in artifact.name: + continue + + # Determine readout type + if "slide" in artifact.name: + readout_type = "slide" + elif "cell" in artifact.name: + readout_type = "cell" + else: + continue + + # Determine output path + if output_dir: + out_path = Path(output_dir) / f"{readout_type}_readouts.csv" + out_path.parent.mkdir(parents=True, exist_ok=True) + else: + out_path = get_readout_cache_path(resolved_id, readout_type) + + # Download + if artifact.download_url: + response = requests.get(artifact.download_url, timeout=300) + response.raise_for_status() + out_path.write_bytes(response.content) + downloaded.append(f"- {readout_type}: {out_path} ({len(response.content):,} bytes)") + + if not downloaded: + return "No readouts found in this run. The run may not have completed successfully." + + return "## Downloaded Readouts\n\n" + "\n".join(downloaded) + + +@mcp.tool() +@_retry_on_auth_failure +def query_readouts_sql(run_id: str, sql: str) -> str: + """Execute an arbitrary SQL query on the readout data. + + This is a powerful tool for complex analysis. The readout tables are available as: + - 'slides' - slide-level measurements (typically 1 row with many columns) + - 'cells' - cell-level data (many rows with cell features) + + Args: + run_id: The run ID or external ID (item identifier) to query readouts for. + sql: SQL query to execute. Use 'slides' and 'cells' as table names. + Example: "SELECT CELL_CLASS, COUNT(*) as n FROM cells GROUP BY CELL_CLASS" + + Returns: + Query results as markdown table, or error message. 
+ """ + # Resolve identifier to run_id + client = _get_client() + try: + resolved_id = _resolve_run_id(client, run_id) + except platform.NotFoundException: + return f"Run not found: {run_id}" + + # Ensure readouts are downloaded + slide_path = get_readout_cache_path(resolved_id, "slide") + cell_path = get_readout_cache_path(resolved_id, "cell") + + if not slide_path.exists() or not cell_path.exists(): + download_readouts(resolved_id) + + if not slide_path.exists() and not cell_path.exists(): + return f"No readouts found for run {run_id}. Download readouts first." + + try: + # Create connection with the readout tables + con = duckdb.connect() + + # Register tables if files exist + if slide_path.exists(): + con.execute(f"CREATE VIEW slides AS SELECT * FROM read_csv_auto('{slide_path}', header=true, skip=1)") + if cell_path.exists(): + con.execute(f"CREATE VIEW cells AS SELECT * FROM read_csv_auto('{cell_path}', header=true, skip=1)") + + # Execute the user's query + result = con.execute(sql) + rows = result.fetchall() + columns = result.description + + if not rows: + return "Query returned no results." 
+ + # Format as markdown + col_names = [col[0] for col in columns] + header = "| " + " | ".join(col_names) + " |" + separator = "| " + " | ".join(["---"] * len(col_names)) + " |" + + lines = [header, separator] + max_rows = 100 + for i, row in enumerate(rows): + if i >= max_rows: + lines.append(f"\n*Showing first {max_rows} of {len(rows)} rows*") + break + row_str = "| " + " | ".join(str(v) if v is not None else "" for v in row) + " |" + lines.append(row_str) + + return "\n".join(lines) + + except Exception as e: + # Provide helpful error with available columns + error_msg = f"SQL Error: {e}\n\n" + + try: + con = duckdb.connect() + if cell_path.exists(): + cell_table = f"read_csv_auto('{cell_path}', header=true, skip=1)" + cols = con.execute(f"DESCRIBE SELECT * FROM {cell_table}").fetchall() + error_msg += f"**Available cell columns:** {', '.join(c[0] for c in cols[:20])}...\n" + if slide_path.exists(): + slide_table = f"read_csv_auto('{slide_path}', header=true, skip=1)" + cols = con.execute(f"DESCRIBE SELECT * FROM {slide_table}").fetchall() + error_msg += f"**Available slide columns:** {', '.join(c[0] for c in cols[:20])}...\n" + except Exception: + pass + + return error_msg + + +@mcp.tool() +@_retry_on_auth_failure +def get_readout_schema(run_id: str, readout_type: str = "cell") -> str: + """Get the schema (column names and types) of a readout file. + + Args: + run_id: The run ID or external ID (item identifier) to get schema for. + readout_type: Type of readout ('slide' or 'cell', default 'cell'). + + Returns: + Table schema with column names and types. 
+ """ + # Resolve identifier to run_id + client = _get_client() + try: + resolved_id = _resolve_run_id(client, run_id) + except platform.NotFoundException: + return f"Run not found: {run_id}" + + cache_path = get_readout_cache_path(resolved_id, readout_type) + + if not cache_path.exists(): + download_readouts(resolved_id) + + if not cache_path.exists(): + return f"No {readout_type} readouts found for run {run_id}." + + try: + con = duckdb.connect() + result = con.execute(f"DESCRIBE SELECT * FROM read_csv_auto('{cache_path}', header=true, skip=1)") + rows = result.fetchall() + + lines = [ + f"## {readout_type.title()} Readout Schema", + "", + "| Column | Type |", + "|--------|------|", + ] + lines.extend(f"| {row[0]} | {row[1]} |" for row in rows) + + lines.append(f"\n*Total columns: {len(rows)}*") + return "\n".join(lines) + + except Exception as e: + return f"Error reading schema: {e}" + + +@mcp.tool() +@_retry_on_auth_failure +def query_slide_readouts(run_id: str, columns: str | None = None) -> str: + """Query slide-level readout measurements. + + Args: + run_id: The run ID or external ID (item identifier) to query readouts for. + columns: Comma-separated list of columns to include (optional). + + Returns: + Slide readout data as markdown. + """ + # Resolve identifier to run_id + client = _get_client() + try: + resolved_id = _resolve_run_id(client, run_id) + except platform.NotFoundException: + return f"Run not found: {run_id}" + + cache_path = get_readout_cache_path(resolved_id, "slide") + + if not cache_path.exists(): + download_readouts(resolved_id) + + if not cache_path.exists(): + return f"No slide readouts found for run {run_id}." 
+ + try: + con = duckdb.connect() + + if columns: + col_list = ", ".join(c.strip() for c in columns.split(",")) + sql = f"SELECT {col_list} FROM read_csv_auto('{cache_path}', header=true, skip=1)" + else: + sql = f"SELECT * FROM read_csv_auto('{cache_path}', header=true, skip=1)" + + result = con.execute(sql) + rows = result.fetchall() + col_names = [col[0] for col in result.description] + + # For slide readouts (usually 1 row), show as key-value pairs + max_metrics = 50 + if len(rows) == 1: + lines = ["## Slide Readouts", ""] + row = rows[0] + shown = 0 + for key, value in zip(col_names, row, strict=True): + if shown >= max_metrics: + lines.append(f"\n*Showing first {max_metrics} of {len(col_names)} measurements*") + break + if value is not None: + lines.append(f"- **{key}:** {value}") + shown += 1 + return "\n".join(lines) + + # Multiple rows - show as table + header = "| " + " | ".join(col_names) + " |" + separator = "| " + " | ".join(["---"] * len(col_names)) + " |" + lines = ["## Slide Readouts", "", header, separator] + for row in rows: + row_str = "| " + " | ".join(str(v) if v is not None else "" for v in row) + " |" + lines.append(row_str) + return "\n".join(lines) + + except Exception as e: + return f"Error querying slide readouts: {e}" + + +@mcp.tool() +@_retry_on_auth_failure +def query_cell_readouts( + run_id: str, + filter_expr: str | None = None, + columns: str | None = None, + limit: int = 100, +) -> str: + """Query cell-level readout data with optional filtering. + + Args: + run_id: The run ID or external ID (item identifier) to query readouts for. + filter_expr: SQL WHERE clause (e.g., "IN_CARCINOMA = true", "CELL_CLASS = 'Carcinoma cell'"). + columns: Comma-separated list of columns to include. + limit: Maximum number of rows to return (default 100). + + Returns: + Filtered cell data as markdown table. 
+ """ + # Resolve identifier to run_id + client = _get_client() + try: + resolved_id = _resolve_run_id(client, run_id) + except platform.NotFoundException: + return f"Run not found: {run_id}" + + cache_path = get_readout_cache_path(resolved_id, "cell") + + if not cache_path.exists(): + download_readouts(resolved_id) + + if not cache_path.exists(): + return f"No cell readouts found for run {run_id}." + + try: + con = duckdb.connect() + table = f"read_csv_auto('{cache_path}', header=true, skip=1)" + + # Get total count + total_result = con.execute(f"SELECT COUNT(*) FROM {table}").fetchone() + total = total_result[0] if total_result else 0 + + # Build query + col_clause = columns or "*" + where_clause = f"WHERE {filter_expr}" if filter_expr else "" + + # Get filtered count if filtering + if filter_expr: # noqa: SIM108 + filtered_result = con.execute(f"SELECT COUNT(*) FROM {table} {where_clause}").fetchone() + filtered = filtered_result[0] if filtered_result else 0 + else: + filtered = total + + # Execute main query + sql = f"SELECT {col_clause} FROM {table} {where_clause} LIMIT {limit}" + result = con.execute(sql) + rows = result.fetchall() + col_names = [col[0] for col in result.description] + + # Format output + header = f"## Cell Readouts\n\n*Showing {len(rows)} of {filtered:,} cells" + if filter_expr: + header += f" (filtered from {total:,} total)" + header += "*\n" + + if not rows: + return header + "\nNo cells match the filter criteria." + + table_header = "| " + " | ".join(col_names) + " |" + separator = "| " + " | ".join(["---"] * len(col_names)) + " |" + lines = [header, table_header, separator] + for row in rows: + row_str = "| " + " | ".join(str(v) if v is not None else "" for v in row) + " |" + lines.append(row_str) + + return "\n".join(lines) + + except Exception as e: + return f"Error querying cell readouts: {e}\n\nUse get_readout_schema() to see available columns." 
+ + +@mcp.tool() +@_retry_on_auth_failure +def summarize_cells(run_id: str, group_by: str = "CELL_CLASS") -> str: + """Get summary statistics of cell readouts. + + Args: + run_id: The run ID or external ID (item identifier) to summarize. + group_by: Column to group by (default: CELL_CLASS). + + Returns: + Summary statistics including counts and distributions. + """ + # Resolve identifier to run_id + client = _get_client() + try: + resolved_id = _resolve_run_id(client, run_id) + except platform.NotFoundException: + return f"Run not found: {run_id}" + + cache_path = get_readout_cache_path(resolved_id, "cell") + + if not cache_path.exists(): + download_readouts(resolved_id) + + if not cache_path.exists(): + return f"No cell readouts found for run {run_id}." + + try: + con = duckdb.connect() + table = f"read_csv_auto('{cache_path}', header=true, skip=1)" + + # Get total count + total_result = con.execute(f"SELECT COUNT(*) FROM {table}").fetchone() + total = total_result[0] if total_result else 0 + + lines = [ + f"## Cell Summary for Run {resolved_id[:8]}...", + "", + f"**Total Cells:** {total:,}", + "", + ] + + # Group by specified column + try: + result = con.execute(f""" + SELECT {group_by}, COUNT(*) as count + FROM {table} + GROUP BY {group_by} + ORDER BY count DESC + """) + rows = result.fetchall() + + lines.extend([f"### Distribution by {group_by}", ""]) + for value, count in rows: + pct = count / total * 100 + lines.append(f"- **{value}:** {count:,} ({pct:.1f}%)") + except Exception: + lines.append(f"*Column '{group_by}' not found or invalid*") + + # Add tissue region distribution + try: + # Get columns that start with IN_ + cols_result = con.execute(f"DESCRIBE SELECT * FROM {table}") + all_cols = [row[0] for row in cols_result.fetchall()] + tissue_cols = [c for c in all_cols if c.startswith("IN_")] + + if tissue_cols: + lines.extend(["", "### Cells by Tissue Region"]) + for col in tissue_cols: + try: + count_result = con.execute(f"SELECT SUM(CAST({col} AS 
INTEGER)) FROM {table}").fetchone() + count = count_result[0] if count_result else None + if count is not None and total > 0: + pct = count / total * 100 + region = col.replace("IN_", "").replace("_", " ").title() + lines.append(f"- **{region}:** {count:,} ({pct:.1f}%)") + except Exception: + pass + except Exception: + pass + + return "\n".join(lines) + + except Exception as e: + return f"Error summarizing cells: {e}" + + +# ============================================================================= +# TIER 3: Authentication Info +# ============================================================================= + + +@mcp.tool() +@_retry_on_auth_failure +def get_current_user() -> str: + """Get information about the currently authenticated user. + + Returns: + User email and organization information. + """ + client = _get_client() + + try: + me = client.me() + return f"**User:** {me.user.email}\n**Organization:** {me.organization.name}" + except Exception as e: + return f"Not authenticated or error: {e}" + + +# ============================================================================= +# SKILLS: High-Level Compound Operations +# ============================================================================= + + +@mcp.tool() +@_retry_on_auth_failure +def run_summary(run_id: str) -> str: + """Get a comprehensive summary of a run including status, items, and errors. + + This is a high-level skill that combines multiple queries into one. + + Args: + run_id: The run ID or external ID (item identifier) to summarize. + + Returns: + Complete run summary with all details. 
+ """ + client = _get_client() + + try: + resolved_id = _resolve_run_id(client, run_id) + run = client.runs(resolved_id) + details = run.details() + items = list(run.results()) + except platform.NotFoundException: + return f"Run not found: {run_id}" + + lines = [ + f"# Run Summary: {resolved_id}", + "", + "## Overview", + f"- **Application:** {details.application_id}", + f"- **Version:** {details.version_number}", + f"- **State:** {details.state.value}", + ] + + if details.termination_reason: + lines.append(f"- **Termination:** {details.termination_reason.value}") + if details.error_message: + lines.append(f"- **Error:** {details.error_message}") + + # Statistics + if details.statistics: + stats = details.statistics + lines.extend( + [ + "", + "## Statistics", + f"- **Total Items:** {stats.item_count}", + f"- **Succeeded:** {stats.item_succeeded_count}", + f"- **Failed:** {stats.item_user_error_count + stats.item_system_error_count}", + f"- **Skipped:** {stats.item_skipped_count}", + ] + ) + + # Item details + max_error_preview = 100 + if items: + lines.extend(["", "## Items"]) + for i, item in enumerate(items, 1): + status_icon = "✓" if item.termination_reason == platform.ItemTerminationReason.SUCCEEDED else "✗" + name = item.external_id.split("/")[-1][:40] + lines.append(f"{i}. {status_icon} `{name}`") + if item.error_message: + if len(item.error_message) > max_error_preview: + error_short = item.error_message[:max_error_preview] + "..." 
+ else: + error_short = item.error_message + lines.append(f" - Error: {error_short}") + + # Available artifacts + successful_items = [it for it in items if it.output == platform.ItemOutput.FULL] + if successful_items: + artifact_names: set[str] = set() + for item in successful_items: + artifact_names.update(art.name for art in item.output_artifacts) + lines.extend(["", "## Available Artifacts"]) + lines.extend(f"- {name}" for name in sorted(artifact_names)) + + return "\n".join(lines) + + +@mcp.tool() +@_retry_on_auth_failure +def readout_analysis(run_id: str) -> str: + """Download readouts and generate a complete analysis. + + This is a high-level skill that downloads readouts and provides statistics. + + Args: + run_id: The run ID or external ID (item identifier) to analyze. + + Returns: + Downloaded file paths and statistical summary. + """ + # Resolve identifier to run_id first + client = _get_client() + try: + resolved_id = _resolve_run_id(client, run_id) + except platform.NotFoundException: + return f"Run not found: {run_id}" + + # Download readouts + download_result = download_readouts(resolved_id) + + if "No readouts found" in download_result: + return str(download_result) + + lines = [download_result, ""] + + # Add cell summary + cell_summary = summarize_cells(resolved_id) + lines.append(cell_summary) + + # Add slide summary (key metrics only) + slide_path = get_readout_cache_path(resolved_id, "slide") + if slide_path.exists(): + try: + con = duckdb.connect() + table = f"read_csv_auto('{slide_path}', header=true, skip=1)" + + lines.extend(["", "## Key Slide Metrics"]) + + key_metrics = [ + "ABSOLUTE_AREA", + "ABSOLUTE_AREA_VALID_TISSUE", + "ABSOLUTE_AREA_CARCINOMA", + "ABSOLUTE_AREA_STROMA", + ] + for metric in key_metrics: + try: + value_result = con.execute(f"SELECT {metric} FROM {table}").fetchone() + value = value_result[0] if value_result else None + if value is not None: + lines.append(f"- **{metric.replace('_', ' ').title()}:** {value:,.0f} μm²") + 
except Exception: + pass + except Exception: + pass + + return "\n".join(lines) + + +# ============================================================================= +# Server Entry Point +# ============================================================================= + + +def run_server() -> None: + """Run the MCP server.""" + mcp.run(transport="stdio") + + +if __name__ == "__main__": + run_server() diff --git a/src/aignostics/mcp/_settings.py b/src/aignostics/mcp/_settings.py new file mode 100644 index 000000000..1bd07da72 --- /dev/null +++ b/src/aignostics/mcp/_settings.py @@ -0,0 +1,90 @@ +"""Settings for the MCP server.""" + +from __future__ import annotations + +import os +from enum import StrEnum +from pathlib import Path + + +class Environment(StrEnum): + """Platform environment options.""" + + PRODUCTION = "production" + STAGING = "staging" + + +# Environment API roots +ENV_API_ROOTS = { + Environment.PRODUCTION: "https://platform.aignostics.com", + Environment.STAGING: "https://platform-staging.aignostics.com", +} + +DEFAULT_ENVIRONMENT = Environment.PRODUCTION + +# Default cache directory for downloaded readouts +# Can be overridden via AIGNOSTICS_MCP_READOUTS_DIR environment variable +DEFAULT_CACHE_DIR = Path.home() / "aignostics_readouts" + + +def get_readouts_dir() -> Path: + """Get the readouts directory from environment or use default. + + The directory can be configured via AIGNOSTICS_MCP_READOUTS_DIR. + Default is ~/aignostics_readouts (visible in home directory). + + Returns: + Path to the readouts directory. + """ + env_dir = os.environ.get("AIGNOSTICS_MCP_READOUTS_DIR") + if env_dir: + return Path(env_dir) + return DEFAULT_CACHE_DIR + + +def configure_environment(env: Environment | None = None) -> str: + """Configure the SDK to use the specified environment. + + Args: + env: Environment to use. If None, uses AIGNOSTICS_API_ROOT env var + or defaults to production. + + Returns: + The configured API root URL. 
+ """ + if env is None: + # Check if already configured via environment + existing = os.environ.get("AIGNOSTICS_API_ROOT") + if existing: + return existing + env = DEFAULT_ENVIRONMENT + + api_root = ENV_API_ROOTS[env] + os.environ["AIGNOSTICS_API_ROOT"] = api_root + return api_root + + +def get_cache_dir() -> Path: + """Get the cache directory, creating it if needed. + + Returns: + Path to the MCP cache directory. + """ + cache_dir = get_readouts_dir() + cache_dir.mkdir(parents=True, exist_ok=True) + return cache_dir + + +def get_readout_cache_path(run_id: str, readout_type: str) -> Path: + """Get the cache path for a specific readout file. + + Args: + run_id: The run ID. + readout_type: Type of readout ('slide' or 'cell'). + + Returns: + Path to the cached readout file. + """ + cache_dir = get_cache_dir() / run_id + cache_dir.mkdir(parents=True, exist_ok=True) + return cache_dir / f"{readout_type}_readouts.csv" diff --git a/src/aignostics/mcp/pyproject.toml b/src/aignostics/mcp/pyproject.toml new file mode 100644 index 000000000..3ee1d6e0b --- /dev/null +++ b/src/aignostics/mcp/pyproject.toml @@ -0,0 +1,71 @@ +[project] +name = "aignostics-mcp" +version = "0.1.0" +description = "MCP server for Aignostics Platform - enables LLMs to analyze readout data via natural language" +readme = "README.md" +authors = [ + { name = "Aignostics GmbH", email = "support@aignostics.com" }, +] +license = { text = "MIT" } + +keywords = [ + "aignostics", + "mcp", + "model-context-protocol", + "llm", + "claude", + "pathology", + "readouts", + "duckdb", +] + +classifiers = [ + "Development Status :: 3 - Alpha", + "Intended Audience :: Developers", + "Intended Audience :: Science/Research", + "Programming Language :: Python :: 3", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", + "Programming Language :: Python :: 3.13", + "License :: OSI Approved :: MIT License", + "Topic :: Scientific/Engineering :: Artificial Intelligence", + "Topic :: 
Scientific/Engineering :: Medical Science Apps.", +] + +requires-python = ">=3.11" + +dependencies = [ + "aignostics>=1.0.0", # Main SDK for platform access + "mcp>=1.0.0,<2", # MCP server framework + "duckdb>=1.0.0", # SQL querying (may already be in aignostics) + "requests>=2.28.0", # HTTP client for downloads +] + +[project.optional-dependencies] +dev = [ + "ruff>=0.4.0", + "pytest>=8.0.0", +] + +[project.scripts] +aignostics-mcp = "aignostics.mcp:run_server" + +[project.urls] +Homepage = "https://github.com/aignostics/python-sdk" +Documentation = "https://github.com/aignostics/python-sdk/tree/main/src/aignostics/mcp" +Repository = "https://github.com/aignostics/python-sdk" + +[build-system] +requires = ["hatchling"] +build-backend = "hatchling.build" + +[tool.hatch.build.targets.wheel] +packages = ["src/aignostics/mcp"] + +[tool.ruff] +line-length = 120 +target-version = "py311" + +[tool.ruff.lint] +select = ["E", "F", "W", "I", "UP", "S", "B", "C4", "SIM"] +ignore = ["S608", "S110"] # SQL injection and try-except-pass (intentional for MCP) diff --git a/src/aignostics/mcp/skills/aignostics-quickstart/SKILL.md b/src/aignostics/mcp/skills/aignostics-quickstart/SKILL.md new file mode 100644 index 000000000..92c0c6642 --- /dev/null +++ b/src/aignostics/mcp/skills/aignostics-quickstart/SKILL.md @@ -0,0 +1,104 @@ +--- +name: aignostics-quickstart +description: Quick introduction to the Aignostics Platform MCP tools. Use when user is new to Aignostics or asks how to get started with the platform. +--- + +# Aignostics Platform Quick Start + +Welcome! This skill introduces you to the Aignostics Platform tools. + +## What is Aignostics? + +Aignostics provides AI/ML applications for computational pathology - analyzing whole slide images (WSI) from tissue samples to detect cancer, classify cells, and generate quantitative readouts. 
+ +## Available Tools + +### Core Tools (Start Here) +| Tool | Purpose | +|------|---------| +| `list_runs` | See your recent runs | +| `get_run_status` | Check a specific run's progress | +| `get_current_user` | Verify your authentication | + +### Readout Analysis +| Tool | Purpose | +|------|---------| +| `download_readouts` | Download results to local cache | +| `query_readouts_sql` | Run SQL queries on readout data | +| `get_readout_schema` | See available columns | +| `summarize_cells` | Quick cell distribution stats | +| `query_cell_readouts` | Filter and view cell data | +| `query_slide_readouts` | View slide-level metrics | + +### Compound Skills +| Tool | Purpose | +|------|---------| +| `run_summary` | Complete run overview with items and errors | +| `readout_analysis` | Download + analyze in one step | + +## Getting Started + +### Step 1: Check Authentication +``` +→ get_current_user() +``` +Should show your email and organization. + +### Step 2: List Your Runs +``` +→ list_runs(limit=5) +``` +Shows recent runs with their status. + +### Step 3: Explore a Successful Run +Find a run with succeeded items, then: +``` +→ run_summary(run_id) +→ readout_analysis(run_id) +``` + +### Step 4: Query the Data +Use SQL for custom analysis: +``` +→ query_readouts_sql(run_id, "SELECT * FROM cells LIMIT 5") +``` + +## What's in the Readouts? + +### Cell Readouts (`cells` table) +Each row is one detected cell with: +- `CELL_CLASS` - Classification (e.g., "Carcinoma cell", "Lymphocyte") +- `X`, `Y` - Position in the slide +- `IN_*` columns - Which tissue region (CARCINOMA, STROMA, VESSEL, etc.) +- `NUCLEUS_*` columns - Morphological features (area, roundness, etc.) 
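+
+For example, assuming the columns listed above exist in your application's readouts (always confirm with `get_readout_schema` first, since column sets vary by application), a single query can combine classification, tissue region, and morphology:
+
+```sql
+SELECT CELL_CLASS,
+       COUNT(*) AS n,
+       ROUND(AVG(NUCLEUS_AREA), 2) AS avg_nucleus_area
+FROM cells
+WHERE IN_CARCINOMA = true
+GROUP BY CELL_CLASS
+ORDER BY n DESC
+```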
+ +### Slide Readouts (`slides` table) +One row with ~4500 measurements including: +- `ABSOLUTE_AREA_*` - Tissue areas in μm² +- `RELATIVE_AREA_*` - Percentages +- Various quality control metrics + +## Common First Questions + +**"How many cells were detected?"** +```sql +SELECT COUNT(*) as total_cells FROM cells +``` + +**"What types of cells are there?"** +```sql +SELECT CELL_CLASS, COUNT(*) as count +FROM cells GROUP BY CELL_CLASS ORDER BY count DESC +``` + +**"How much of the tissue is carcinoma?"** +```sql +SELECT + ROUND(ABSOLUTE_AREA_CARCINOMA * 100.0 / ABSOLUTE_AREA_VALID_TISSUE, 1) as carcinoma_pct +FROM slides +``` + +## Next Steps + +- Use `/analyze-readouts` for detailed data analysis workflows +- Use `/troubleshoot-run` if you encounter errors diff --git a/src/aignostics/mcp/skills/analyze-readouts/SKILL.md b/src/aignostics/mcp/skills/analyze-readouts/SKILL.md new file mode 100644 index 000000000..afb735efc --- /dev/null +++ b/src/aignostics/mcp/skills/analyze-readouts/SKILL.md @@ -0,0 +1,124 @@ +--- +name: analyze-readouts +description: Analyze cell and slide readouts from Aignostics Platform runs. Use when user asks about readout data, cell distributions, tissue analysis, or wants to explore ML inference results. +--- + +# Analyzing Aignostics Readouts + +This skill guides you through analyzing readout data from the Aignostics Platform. + +## Prerequisites + +Ensure the MCP server is connected. The following tools are available: +- `list_runs` - List recent application runs +- `get_run_status` - Check run status and statistics +- `download_readouts` - Download CSV readouts to local cache +- `query_readouts_sql` - Run arbitrary SQL queries (most powerful) +- `get_readout_schema` - Inspect available columns +- `summarize_cells` - Quick cell distribution summary + +## Workflow + +### 1. Find a Run with Results + +``` +First, list recent runs to find one with successful items: +→ list_runs(limit=5) + +Look for runs with "X/Y succeeded" where X > 0. 
+``` + +### 2. Check Run Details + +``` +Get full details including available artifacts: +→ run_summary(run_id) + +This shows items, errors, and available artifact types. +``` + +### 3. Download and Explore Schema + +``` +Download readouts and check what columns are available: +→ download_readouts(run_id) +→ get_readout_schema(run_id, "cell") +→ get_readout_schema(run_id, "slide") +``` + +### 4. Analyze with SQL + +The `query_readouts_sql` tool is your most powerful option. It exposes: +- `cells` table - cell-level data (many rows) +- `slides` table - slide-level measurements (typically 1 row) + +**Common Queries:** + +Cell distribution by class: +```sql +SELECT CELL_CLASS, COUNT(*) as count +FROM cells +GROUP BY CELL_CLASS +ORDER BY count DESC +``` + +Cells in carcinoma regions: +```sql +SELECT CELL_CLASS, COUNT(*) as total, + SUM(CASE WHEN IN_CARCINOMA THEN 1 ELSE 0 END) as in_carcinoma +FROM cells +GROUP BY CELL_CLASS +``` + +Average nucleus size by cell type: +```sql +SELECT CELL_CLASS, + ROUND(AVG(NUCLEUS_AREA), 2) as avg_area, + ROUND(AVG(NUCLEUS_ROUNDNESS), 3) as avg_roundness +FROM cells +GROUP BY CELL_CLASS +ORDER BY avg_area DESC +``` + +Slide-level tissue breakdown: +```sql +SELECT + ABSOLUTE_AREA_VALID_TISSUE as tissue_area, + ABSOLUTE_AREA_CARCINOMA as carcinoma_area, + ROUND(ABSOLUTE_AREA_CARCINOMA * 100.0 / ABSOLUTE_AREA_VALID_TISSUE, 2) as carcinoma_pct +FROM slides +``` + +## Tips + +1. **Start broad, then narrow**: Use `summarize_cells()` first, then drill down with SQL +2. **Check schema first**: Column names vary by application - always check with `get_readout_schema()` +3. **Use SQL for complex analysis**: The generic SQL tool supports JOINs, window functions, CTEs +4. **Tissue regions**: Columns starting with `IN_` indicate which tissue region a cell belongs to +5. 
**Nucleus features**: `NUCLEUS_*` columns contain morphological measurements + +## Common Questions + +**"How many cells are in the carcinoma region?"** +```sql +SELECT COUNT(*) FROM cells WHERE IN_CARCINOMA = true +``` + +**"What's the cell type breakdown in stroma vs carcinoma?"** +```sql +SELECT + CELL_CLASS, + SUM(CASE WHEN IN_CARCINOMA THEN 1 ELSE 0 END) as in_carcinoma, + SUM(CASE WHEN IN_STROMA THEN 1 ELSE 0 END) as in_stroma +FROM cells +GROUP BY CELL_CLASS +ORDER BY in_carcinoma DESC +``` + +**"Show me the largest cells"** +```sql +SELECT CELL_CLASS, CELL_ID, NUCLEUS_AREA, X, Y +FROM cells +ORDER BY NUCLEUS_AREA DESC +LIMIT 10 +``` diff --git a/src/aignostics/mcp/skills/troubleshoot-run/SKILL.md b/src/aignostics/mcp/skills/troubleshoot-run/SKILL.md new file mode 100644 index 000000000..897440880 --- /dev/null +++ b/src/aignostics/mcp/skills/troubleshoot-run/SKILL.md @@ -0,0 +1,102 @@ +--- +name: troubleshoot-run +description: Troubleshoot failed or problematic Aignostics Platform runs. Use when user asks about run errors, failures, or why items didn't process successfully. +--- + +# Troubleshooting Aignostics Runs + +This skill guides you through diagnosing and understanding run failures. + +## Quick Diagnosis + +### 1. Get Run Overview + +``` +→ run_summary(run_id) +``` + +This shows: +- Run state (PENDING, PROCESSING, TERMINATED) +- Termination reason (ALL_ITEMS_PROCESSED, CANCELED_BY_USER, CANCELED_BY_SYSTEM) +- Statistics (succeeded/failed/skipped counts) +- Per-item status with error previews + +### 2. 
Check Item Details + +``` +→ get_run_items(run_id) +``` + +Shows each item with: +- State and output status +- Error messages (truncated) + +## Common Issues + +### User Errors (USER_ERROR) + +These are problems with the input data that the user can fix: + +| Error Pattern | Likely Cause | Solution | +|---------------|--------------|----------| +| "cannot be processed" | Invalid file format | Check file is valid DICOM/SVS | +| "unsupported format" | Wrong image type | Verify supported formats | +| "resolution too low" | Image quality | Use higher resolution scan | +| "corrupt file" | File damaged | Re-upload the file | + +### System Errors (SYSTEM_ERROR) + +These are platform-side issues: + +| Error Pattern | Likely Cause | Action | +|---------------|--------------|--------| +| "timeout" | Processing took too long | Contact support | +| "internal error" | Platform issue | Retry or contact support | +| "resource exhausted" | Memory/compute limits | May need smaller batch | + +### Skipped Items (SKIPPED) + +Items marked as skipped were intentionally not processed: +- Duplicate detection +- Previous successful processing +- User-configured skip rules + +## Workflow for Failed Runs + +1. **Check overall statistics** + ``` + → get_run_status(run_id) + ``` + Look at the item counts - are ALL items failing or just some? + +2. **Identify the pattern** + ``` + → get_run_items(run_id) + ``` + - All items same error? → Likely configuration or application issue + - Some items succeed, some fail? → Likely input data quality varies + - Random failures? → Possible transient system issue + +3. **For USER_ERROR items** + - Review the input files + - Check file formats and quality + - Verify files meet application requirements + +4. **For SYSTEM_ERROR items** + - Note the error message + - Check if it's a known issue + - Consider retrying the run + - Contact support if persistent + +## Retrying Failed Items + +Currently, you need to: +1. 
Create a new run with only the failed items +2. Or contact support for a partial retry + +## Getting Help + +If you can't resolve the issue: +1. Note the run_id and error messages +2. Check the item external_ids to identify problematic files +3. Contact Aignostics support with this information diff --git a/src/aignostics/platform/_cli.py b/src/aignostics/platform/_cli.py index 649783601..63d729b6d 100644 --- a/src/aignostics/platform/_cli.py +++ b/src/aignostics/platform/_cli.py @@ -85,6 +85,27 @@ def whoami( logger.exception(message) console.print(message, style="error") sys.exit(1) + + +@cli_user.command("token") +def token() -> None: + """Print the cached authentication token. + + Outputs the raw JWT token to stdout for use in scripts or environment variables. + If no valid token is cached, triggers a login flow. + + Example usage: + export AIGNOSTICS_TOKEN=$(aignostics user token) + """ + service = _get_service() + try: + token_value = service.get_token() + # Print raw token to stdout (no formatting) for easy capture + print(token_value) + except Exception as e: + message = f"Error getting token: {e!s}" + logger.exception(message) + print(message, file=sys.stderr) sys.exit(1) diff --git a/src/aignostics/platform/_client.py b/src/aignostics/platform/_client.py index 02f8dd014..3d770048f 100644 --- a/src/aignostics/platform/_client.py +++ b/src/aignostics/platform/_client.py @@ -267,6 +267,51 @@ def run(self, run_id: str) -> Run: """ return Run(self._api, run_id) + @classmethod + def _from_token(cls, token: str) -> "Client": + """Create a client with a pre-validated token. + + This method is used by the remote MCP server to create a client + using a token that has already been validated and stored server-side. + + Unlike the normal Client() constructor which goes through the + authentication flow, this method directly uses the provided token. + + Args: + token: A valid OAuth access token. + + Returns: + A Client instance configured to use the provided token. 
+ + Note: + This is an internal method used by the MCP server. + External code should use the normal Client() constructor. + """ + ca_file = os.getenv("REQUESTS_CA_BUNDLE") + + def token_provider() -> str: + return token + + config = _OAuth2TokenProviderConfiguration( + host=settings().api_root, + ssl_ca_cert=ca_file, + token_provider=token_provider, + ) + config.proxy = getproxies().get("https") + + api_client = ApiClient(config) + api_client.user_agent = user_agent() + api = PublicApi(api_client) + + # Create instance without going through __init__ + instance = cls.__new__(cls) + instance._api = api # noqa: SLF001 - Intentional for factory method + instance.applications = Applications(api) + instance.runs = Runs(api) + instance.versions = Versions(api) + + return instance + @staticmethod def get_api_client(cache_token: bool = True) -> PublicApi: """Create and configure an authenticated API client. diff --git a/src/aignostics/platform/_service.py b/src/aignostics/platform/_service.py index dcc16e439..81f7f7290 100644 --- a/src/aignostics/platform/_service.py +++ b/src/aignostics/platform/_service.py @@ -311,3 +311,18 @@ def get_user_info(relogin: bool = False) -> UserInfo: message = f"Error during login: {e!s}" logger.exception(message) raise + + @staticmethod + def get_token() -> str: + """Get the cached authentication token. + + Returns the raw JWT token for use in external systems (e.g., MCP clients). + If no valid token is cached, triggers a login flow. + + Returns: + str: The JWT access token. + + Raises: + RuntimeError: If token retrieval fails. 
+ """ + return get_token(use_cache=True) diff --git a/tests/aignostics/mcp/__init__.py b/tests/aignostics/mcp/__init__.py new file mode 100644 index 000000000..d11eb0bb9 --- /dev/null +++ b/tests/aignostics/mcp/__init__.py @@ -0,0 +1 @@ +"""Tests for the MCP module.""" diff --git a/tests/aignostics/mcp/server_test.py b/tests/aignostics/mcp/server_test.py new file mode 100644 index 000000000..d9c2eb1fc --- /dev/null +++ b/tests/aignostics/mcp/server_test.py @@ -0,0 +1,540 @@ +"""Tests for MCP server module.""" + +from __future__ import annotations + +import os +from typing import TYPE_CHECKING +from unittest.mock import MagicMock, patch + +import pytest +from aignx.codegen.exceptions import UnauthorizedException + +from aignostics import platform + +if TYPE_CHECKING: + from pathlib import Path +from aignostics.mcp._server import ( + _clear_client_cache, + _resolve_run_id, + _retry_on_auth_failure, + download_readouts, + get_current_user, + get_readout_schema, + get_run_items, + get_run_status, + list_runs, + query_cell_readouts, + query_readouts_sql, + query_slide_readouts, + run_summary, + summarize_cells, +) + +# ============================================================================= +# _clear_client_cache Tests +# ============================================================================= + + +@pytest.mark.unit +def test_clear_client_cache_clears_cached_api_clients() -> None: + """Test that cached API client instances are cleared.""" + from aignostics.platform._client import Client + + # Set up cached clients + Client._api_client_cached = MagicMock() + Client._api_client_uncached = MagicMock() + + _clear_client_cache() + + assert Client._api_client_cached is None + assert Client._api_client_uncached is None + + +# ============================================================================= +# _retry_on_auth_failure Tests +# ============================================================================= + + +@pytest.mark.unit +def 
test_retry_on_auth_failure_returns_result_on_success() -> None: + """Test that function result is returned when no auth failure.""" + call_count = 0 + + @_retry_on_auth_failure + def successful_func() -> str: + nonlocal call_count + call_count += 1 + return "success" + + result = successful_func() + assert result == "success" + assert call_count == 1 + + +@pytest.mark.unit +def test_retry_on_auth_failure_retries_once_on_unauthorized() -> None: + """Test that function is retried once on UnauthorizedException.""" + call_count = 0 + + @_retry_on_auth_failure + def auth_failing_then_success() -> str: + nonlocal call_count + call_count += 1 + if call_count == 1: + raise UnauthorizedException(status=401, reason="Unauthorized") + return "success_after_retry" + + with ( + patch("aignostics.mcp._server.remove_cached_token") as mock_remove, + patch("aignostics.mcp._server._clear_client_cache") as mock_clear, + ): + result = auth_failing_then_success() + + assert result == "success_after_retry" + assert call_count == 2 + mock_remove.assert_called_once() + mock_clear.assert_called_once() + + +@pytest.mark.unit +def test_retry_on_auth_failure_raises_on_second_failure() -> None: + """Test that exception is raised if retry also fails.""" + call_count = 0 + + @_retry_on_auth_failure + def always_failing() -> str: + nonlocal call_count + call_count += 1 + raise UnauthorizedException(status=401, reason="Always unauthorized") + + with ( + patch("aignostics.mcp._server.remove_cached_token"), + patch("aignostics.mcp._server._clear_client_cache"), + pytest.raises(UnauthorizedException), + ): + always_failing() + + assert call_count == 2 # Original call + one retry + + +# ============================================================================= +# _resolve_run_id Tests +# ============================================================================= + + +@pytest.mark.unit +def test_resolve_run_id_returns_identifier_when_valid_run_id() -> None: + """Test that identifier is returned 
directly when it's a valid run ID.""" + mock_client = MagicMock() + mock_run = MagicMock() + mock_client.runs.return_value = mock_run + + result = _resolve_run_id(mock_client, "valid-run-id-123") + + assert result == "valid-run-id-123" + mock_client.runs.assert_called_once_with("valid-run-id-123") + mock_run.details.assert_called_once() + + +@pytest.mark.unit +def test_resolve_run_id_searches_by_external_id_when_not_run_id() -> None: + """Test that external_id search is used when identifier is not a run ID.""" + mock_client = MagicMock() + mock_run = MagicMock() + mock_run.run_id = "found-run-id-456" + + # First call (direct lookup) raises NotFoundException + mock_client.runs.return_value.details.side_effect = platform.NotFoundException("Not found") + # Second call (list by external_id) returns result + mock_client.runs.list.return_value = iter([mock_run]) + + result = _resolve_run_id(mock_client, "slide_001.svs") + + assert result == "found-run-id-456" + mock_client.runs.list.assert_called_once_with(external_id="slide_001.svs", page_size=1) + + +@pytest.mark.unit +def test_resolve_run_id_raises_not_found_when_no_match() -> None: + """Test that NotFoundException is raised when neither lookup succeeds.""" + mock_client = MagicMock() + + # Direct lookup fails + mock_client.runs.return_value.details.side_effect = platform.NotFoundException("Not found") + # External ID search returns empty + mock_client.runs.list.return_value = iter([]) + + with pytest.raises(platform.NotFoundException) as exc_info: + _resolve_run_id(mock_client, "nonexistent-id") + + assert "No run found" in str(exc_info.value) + + +# ============================================================================= +# MCP Tool Tests +# ============================================================================= + + +@pytest.fixture +def mock_client() -> MagicMock: + """Create a mock platform client and patch _get_client to return it. + + Yields: + Mock platform client instance. 
+ """ + client = MagicMock() + with patch("aignostics.mcp._server._get_client", return_value=client): + yield client + + +@pytest.mark.unit +def test_list_runs_returns_markdown_table(mock_client: MagicMock) -> None: + """Test that list_runs returns a formatted markdown table.""" + # Set up mock run data + mock_run = MagicMock() + mock_run.run_id = "run-123-abc" + mock_details = MagicMock() + mock_details.application_id = "test-app" + mock_details.version_number = "1.0.0-test-version" + mock_details.state = platform.RunState.TERMINATED + mock_details.statistics = MagicMock() + mock_details.statistics.item_succeeded_count = 5 + mock_details.statistics.item_count = 5 + mock_run.details.return_value = mock_details + + mock_client.runs.list.return_value = iter([mock_run]) + + result = list_runs(limit=1) + + assert "Run ID" in result + assert "Application" in result + assert "run-123-abc" in result + assert "test-app" in result + assert "5/5 succeeded" in result + + +@pytest.mark.unit +def test_list_runs_returns_message_when_empty(mock_client: MagicMock) -> None: + """Test that list_runs returns appropriate message when no runs exist.""" + mock_client.runs.list.return_value = iter([]) + + result = list_runs() + + assert result == "No runs found." 
+ + +@pytest.mark.unit +def test_get_run_status_returns_detailed_status(mock_client: MagicMock) -> None: + """Test that get_run_status returns formatted status information.""" + mock_run = MagicMock() + mock_details = MagicMock() + mock_details.application_id = "heta-app" + mock_details.version_number = "2.0.0" + mock_details.state = platform.RunState.TERMINATED + mock_details.termination_reason = platform.RunTerminationReason.ALL_ITEMS_PROCESSED + mock_details.error_message = None + mock_details.statistics = MagicMock() + mock_details.statistics.item_count = 10 + mock_details.statistics.item_succeeded_count = 8 + mock_details.statistics.item_processing_count = 0 + mock_details.statistics.item_pending_count = 0 + mock_details.statistics.item_user_error_count = 1 + mock_details.statistics.item_system_error_count = 1 + mock_details.statistics.item_skipped_count = 0 + mock_run.details.return_value = mock_details + + mock_client.runs.return_value = mock_run + + result = get_run_status("test-run-id") + + assert "Run Status" in result + assert "heta-app" in result + assert "2.0.0" in result + assert "TERMINATED" in result + assert "ALL_ITEMS_PROCESSED" in result + assert "Total: 10" in result + assert "Succeeded: 8" in result + + +@pytest.mark.unit +def test_get_run_status_handles_not_found(mock_client: MagicMock) -> None: + """Test that get_run_status handles non-existent runs gracefully.""" + mock_client.runs.return_value.details.side_effect = platform.NotFoundException("Not found") + mock_client.runs.list.return_value = iter([]) + + result = get_run_status("nonexistent-run") + + assert "Run not found" in result + + +@pytest.mark.unit +def test_get_current_user_returns_user_info(mock_client: MagicMock) -> None: + """Test that get_current_user returns formatted user information.""" + mock_me = MagicMock() + mock_me.user.email = "test@example.com" + mock_me.organization.name = "Test Organization" + mock_client.me.return_value = mock_me + + result = get_current_user() + + 
assert "test@example.com" in result + assert "Test Organization" in result + + +@pytest.mark.unit +def test_get_current_user_handles_auth_error(mock_client: MagicMock) -> None: + """Test that get_current_user handles authentication errors gracefully.""" + mock_client.me.side_effect = Exception("Authentication failed") + + result = get_current_user() + + assert "Not authenticated" in result or "error" in result.lower() + + +# ============================================================================= +# get_run_items Tests +# ============================================================================= + + +@pytest.mark.unit +def test_get_run_items_returns_item_table(mock_client: MagicMock) -> None: + """Test that get_run_items returns a formatted table of items.""" + mock_run = MagicMock() + mock_item = MagicMock() + mock_item.external_id = "slide_001.svs" + mock_item.state = platform.ItemState.TERMINATED + mock_item.output = platform.ItemOutput.FULL + mock_item.error_message = None + mock_run.results.return_value = [mock_item] + + mock_client.runs.return_value = mock_run + + result = get_run_items("test-run-id") + + assert "Items in Run" in result + assert "External ID" in result + assert "slide_001.svs" in result + assert "TERMINATED" in result + + +@pytest.mark.unit +def test_get_run_items_shows_error_messages(mock_client: MagicMock) -> None: + """Test that get_run_items displays error messages for failed items.""" + mock_run = MagicMock() + mock_item = MagicMock() + mock_item.external_id = "bad_slide.svs" + mock_item.state = platform.ItemState.TERMINATED + mock_item.output = platform.ItemOutput.NONE + mock_item.error_message = "File format not supported" + mock_run.results.return_value = [mock_item] + + mock_client.runs.return_value = mock_run + + result = get_run_items("test-run-id") + + assert "File format not supported" in result + + +@pytest.mark.unit +def test_get_run_items_handles_empty_run(mock_client: MagicMock) -> None: + """Test that get_run_items 
handles runs with no items.""" + mock_run = MagicMock() + mock_run.results.return_value = [] + + mock_client.runs.return_value = mock_run + + result = get_run_items("empty-run-id") + + assert "No items found" in result + + +# ============================================================================= +# download_readouts Tests +# ============================================================================= + + +@pytest.mark.unit +def test_download_readouts_downloads_files(mock_client: MagicMock, tmp_path: Path) -> None: + """Test that download_readouts downloads readout files to specified directory.""" + mock_run = MagicMock() + mock_item = MagicMock() + mock_item.output = platform.ItemOutput.FULL + + mock_artifact = MagicMock() + mock_artifact.name = "slide_readout.csv" + mock_artifact.download_url = "https://example.com/slide_readout.csv" + mock_item.output_artifacts = [mock_artifact] + + mock_run.results.return_value = [mock_item] + mock_client.runs.return_value = mock_run + + with patch("aignostics.mcp._server.requests.get") as mock_get: + mock_response = MagicMock() + mock_response.content = b"col1,col2\nval1,val2" + mock_get.return_value = mock_response + + result = download_readouts("test-run-id", output_dir=str(tmp_path)) + + assert "Downloaded Readouts" in result + assert "slide" in result + assert (tmp_path / "slide_readouts.csv").exists() + + +@pytest.mark.unit +def test_download_readouts_handles_no_readouts(mock_client: MagicMock) -> None: + """Test that download_readouts handles runs without readouts.""" + mock_run = MagicMock() + mock_item = MagicMock() + mock_item.output = platform.ItemOutput.NONE + mock_item.output_artifacts = [] + mock_run.results.return_value = [mock_item] + + mock_client.runs.return_value = mock_run + + result = download_readouts("test-run-id") + + assert "No readouts found" in result + + +# ============================================================================= +# SQL Query Tools Tests (with real DuckDB) +# 
============================================================================= + + +@pytest.fixture +def mock_client_with_readouts(tmp_path: Path) -> MagicMock: + """Create mock client with sample readout CSV files for SQL query testing. + + This fixture: + 1. Creates sample slide and cell readout CSV files + 2. Patches _get_client to return a mock client + 3. Patches AIGNOSTICS_MCP_READOUTS_DIR to point to tmp_path + + Yields: + Mock platform client instance with readout files available. + """ + # Create sample readout files + run_dir = tmp_path / "test-run-id" + run_dir.mkdir() + + slide_path = run_dir / "slide_readouts.csv" + slide_path.write_text("# Header comment\nABSOLUTE_AREA,TISSUE_AREA\n1000,800\n") + + cell_path = run_dir / "cell_readouts.csv" + cell_path.write_text( + "# Header comment\nCELL_CLASS,X,Y,IN_CARCINOMA\nTumor,100,200,true\nTumor,150,250,true\nStroma,300,400,false\n" + ) + + # Create mock client + client = MagicMock() + client.runs.return_value.details.return_value = MagicMock() + + with ( + patch("aignostics.mcp._server._get_client", return_value=client), + patch.dict(os.environ, {"AIGNOSTICS_MCP_READOUTS_DIR": str(tmp_path)}), + ): + yield client + + +@pytest.mark.unit +def test_query_readouts_sql_executes_query(mock_client_with_readouts: MagicMock) -> None: + """Test that query_readouts_sql executes SQL and returns results.""" + result = query_readouts_sql("test-run-id", "SELECT COUNT(*) as total FROM cells") + + assert "total" in result + assert "3" in result # 3 cells in test data + + +@pytest.mark.unit +def test_query_readouts_sql_handles_invalid_query(mock_client_with_readouts: MagicMock) -> None: + """Test that query_readouts_sql handles SQL errors gracefully.""" + result = query_readouts_sql("test-run-id", "SELECT nonexistent_column FROM cells") + + assert "Error" in result or "error" in result.lower() + + +@pytest.mark.unit +def test_get_readout_schema_returns_columns(mock_client_with_readouts: MagicMock) -> None: + """Test that 
get_readout_schema returns column information.""" + result = get_readout_schema("test-run-id", "cell") + + assert "Schema" in result + assert "CELL_CLASS" in result + assert "Column" in result + assert "Type" in result + + +@pytest.mark.unit +def test_query_slide_readouts_returns_data(mock_client_with_readouts: MagicMock) -> None: + """Test that query_slide_readouts returns slide-level data.""" + result = query_slide_readouts("test-run-id") + + assert "Slide Readouts" in result + assert "ABSOLUTE_AREA" in result + assert "1000" in result + + +@pytest.mark.unit +def test_query_cell_readouts_returns_filtered_data(mock_client_with_readouts: MagicMock) -> None: + """Test that query_cell_readouts returns filtered cell data.""" + result = query_cell_readouts("test-run-id", filter_expr="CELL_CLASS = 'Tumor'", limit=10) + + assert "Cell Readouts" in result + assert "Tumor" in result + assert "2" in result # 2 tumor cells + + +@pytest.mark.unit +def test_summarize_cells_returns_distribution(mock_client_with_readouts: MagicMock) -> None: + """Test that summarize_cells returns cell distribution statistics.""" + result = summarize_cells("test-run-id", group_by="CELL_CLASS") + + assert "Cell Summary" in result + assert "Total Cells" in result + assert "3" in result # 3 total cells + assert "Tumor" in result + assert "Stroma" in result + + +# ============================================================================= +# Compound Tool Tests (Skills) +# ============================================================================= + + +@pytest.mark.unit +def test_run_summary_returns_complete_summary(mock_client: MagicMock) -> None: + """Test that run_summary returns a comprehensive run overview.""" + mock_run = MagicMock() + mock_details = MagicMock() + mock_details.application_id = "heta" + mock_details.version_number = "1.0.0" + mock_details.state = platform.RunState.TERMINATED + mock_details.termination_reason = platform.RunTerminationReason.ALL_ITEMS_PROCESSED + 
mock_details.error_message = None + mock_details.statistics = MagicMock() + mock_details.statistics.item_count = 2 + mock_details.statistics.item_succeeded_count = 2 + mock_details.statistics.item_user_error_count = 0 + mock_details.statistics.item_system_error_count = 0 + mock_details.statistics.item_skipped_count = 0 + mock_run.details.return_value = mock_details + + mock_item = MagicMock() + mock_item.external_id = "slide.svs" + mock_item.output = platform.ItemOutput.FULL + mock_item.termination_reason = platform.ItemTerminationReason.SUCCEEDED + mock_item.error_message = None + mock_artifact = MagicMock() + mock_artifact.name = "cell_readout.csv" + mock_item.output_artifacts = [mock_artifact] + mock_run.results.return_value = [mock_item] + + mock_client.runs.return_value = mock_run + + result = run_summary("test-run-id") + + assert "Run Summary" in result + assert "heta" in result + assert "Statistics" in result + assert "Items" in result + assert "Available Artifacts" in result diff --git a/tests/aignostics/mcp/settings_test.py b/tests/aignostics/mcp/settings_test.py new file mode 100644 index 000000000..4e6b69c16 --- /dev/null +++ b/tests/aignostics/mcp/settings_test.py @@ -0,0 +1,154 @@ +"""Tests for MCP settings module.""" + +from __future__ import annotations + +import os +from pathlib import Path +from unittest import mock + +import pytest + +from aignostics.mcp._settings import ( + DEFAULT_CACHE_DIR, + DEFAULT_ENVIRONMENT, + ENV_API_ROOTS, + Environment, + configure_environment, + get_cache_dir, + get_readout_cache_path, + get_readouts_dir, +) + +# ============================================================================= +# Environment Enum Tests +# ============================================================================= + + +@pytest.mark.unit +def test_environment_enum_values() -> None: + """Test that Environment enum has expected values.""" + assert Environment.PRODUCTION == "production" + assert Environment.STAGING == "staging" + + 
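The equality checks above (`Environment.PRODUCTION == "production"`) imply a string-valued enum. A minimal sketch of how `_settings.py` might define it — only the production API root is confirmed elsewhere in this PR (`.vscode/mcp.json`); the staging URL below is a labeled placeholder, not the real endpoint:

```python
from enum import Enum


class Environment(str, Enum):
    """Deployment environments; the str mixin makes members compare equal to their values."""

    PRODUCTION = "production"
    STAGING = "staging"


# Hypothetical mapping: the production root matches this PR's mcp.json,
# the staging root is a placeholder assumption.
ENV_API_ROOTS = {
    Environment.PRODUCTION: "https://platform.aignostics.com",
    Environment.STAGING: "https://staging.example.invalid",  # placeholder
}
```

Keeping the mapping keyed by the enum (rather than by raw strings) is what lets `test_env_api_roots_has_all_environments` verify coverage with a simple `for env in Environment` loop.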
+@pytest.mark.unit +def test_env_api_roots_has_all_environments() -> None: + """Test that ENV_API_ROOTS has entries for all environments.""" + for env in Environment: + assert env in ENV_API_ROOTS + + +@pytest.mark.unit +def test_default_environment_is_production() -> None: + """Test that default environment is production.""" + assert DEFAULT_ENVIRONMENT == Environment.PRODUCTION + + +# ============================================================================= +# get_readouts_dir Tests +# ============================================================================= + + +@pytest.mark.unit +def test_get_readouts_dir_returns_default_when_env_not_set() -> None: + """Test that default directory is returned when env var not set.""" + with mock.patch.dict(os.environ, {}, clear=True): + os.environ.pop("AIGNOSTICS_MCP_READOUTS_DIR", None) + result = get_readouts_dir() + assert result == DEFAULT_CACHE_DIR + + +@pytest.mark.unit +def test_get_readouts_dir_returns_env_var_when_set() -> None: + """Test that env var value is returned when set.""" + custom_dir = "/custom/readouts/dir" + with mock.patch.dict(os.environ, {"AIGNOSTICS_MCP_READOUTS_DIR": custom_dir}): + result = get_readouts_dir() + assert result == Path(custom_dir) + + +# ============================================================================= +# configure_environment Tests +# ============================================================================= + + +@pytest.mark.unit +def test_configure_environment_uses_existing_env_var() -> None: + """Test that existing AIGNOSTICS_API_ROOT is preserved.""" + existing_url = "https://custom.api.example.com" + with mock.patch.dict(os.environ, {"AIGNOSTICS_API_ROOT": existing_url}): + result = configure_environment() + assert result == existing_url + + +@pytest.mark.unit +def test_configure_environment_sets_production_by_default() -> None: + """Test that production API root is set when no env var exists.""" + with mock.patch.dict(os.environ, {}, clear=True): + 
os.environ.pop("AIGNOSTICS_API_ROOT", None) + result = configure_environment() + assert result == ENV_API_ROOTS[Environment.PRODUCTION] + assert os.environ["AIGNOSTICS_API_ROOT"] == ENV_API_ROOTS[Environment.PRODUCTION] + + +@pytest.mark.unit +def test_configure_environment_sets_staging_when_specified() -> None: + """Test that staging API root is set when specified.""" + with mock.patch.dict(os.environ, {}, clear=True): + os.environ.pop("AIGNOSTICS_API_ROOT", None) + result = configure_environment(Environment.STAGING) + assert result == ENV_API_ROOTS[Environment.STAGING] + + +# ============================================================================= +# get_cache_dir Tests +# ============================================================================= + + +@pytest.mark.unit +def test_get_cache_dir_creates_directory(tmp_path: Path) -> None: + """Test that cache directory is created if it doesn't exist.""" + custom_dir = tmp_path / "test_cache" + assert not custom_dir.exists() + + with mock.patch.dict(os.environ, {"AIGNOSTICS_MCP_READOUTS_DIR": str(custom_dir)}): + result = get_cache_dir() + assert result == custom_dir + assert custom_dir.exists() + + +# ============================================================================= +# get_readout_cache_path Tests +# ============================================================================= + + +@pytest.mark.unit +def test_get_readout_cache_path_returns_correct_path_for_slide(tmp_path: Path) -> None: + """Test that correct path is returned for slide readouts.""" + run_id = "test-run-123" + with mock.patch.dict(os.environ, {"AIGNOSTICS_MCP_READOUTS_DIR": str(tmp_path)}): + result = get_readout_cache_path(run_id, "slide") + expected = tmp_path / run_id / "slide_readouts.csv" + assert result == expected + + +@pytest.mark.unit +def test_get_readout_cache_path_returns_correct_path_for_cell(tmp_path: Path) -> None: + """Test that correct path is returned for cell readouts.""" + run_id = "test-run-456" + with 
mock.patch.dict(os.environ, {"AIGNOSTICS_MCP_READOUTS_DIR": str(tmp_path)}): + result = get_readout_cache_path(run_id, "cell") + expected = tmp_path / run_id / "cell_readouts.csv" + assert result == expected + + +@pytest.mark.unit +def test_get_readout_cache_path_creates_run_directory(tmp_path: Path) -> None: + """Test that run-specific directory is created.""" + run_id = "new-run-789" + run_dir = tmp_path / run_id + assert not run_dir.exists() + + with mock.patch.dict(os.environ, {"AIGNOSTICS_MCP_READOUTS_DIR": str(tmp_path)}): + get_readout_cache_path(run_id, "cell") + assert run_dir.exists() diff --git a/uv.lock b/uv.lock index c55d02c52..654c9030e 100644 --- a/uv.lock +++ b/uv.lock @@ -98,6 +98,9 @@ marimo = [ { name = "matplotlib" }, { name = "shapely" }, ] +mcp = [ + { name = "mcp" }, +] pyinstaller = [ { name = "pyinstaller" }, ] @@ -188,6 +191,7 @@ requires-dist = [ { name = "marimo", marker = "extra == 'marimo'", specifier = ">=0.18.4,<1" }, { name = "marshmallow", specifier = ">=3.26.2" }, { name = "matplotlib", marker = "extra == 'marimo'", specifier = ">=3.10.7,<4" }, + { name = "mcp", marker = "extra == 'mcp'", specifier = ">=1.0.0,<2" }, { name = "nicegui", extras = ["native"], specifier = ">=3.5.0,<4" }, { name = "openslide-bin", specifier = ">=4.0.0.10,<5" }, { name = "openslide-python", specifier = ">=1.4.3,<2" }, @@ -225,7 +229,7 @@ requires-dist = [ { name = "urllib3", specifier = ">=2.6.3,<3" }, { name = "wsidicom", specifier = ">=0.28.1,<1" }, ] -provides-extras = ["pyinstaller", "jupyter", "marimo", "qupath"] +provides-extras = ["pyinstaller", "jupyter", "marimo", "qupath", "mcp"] [package.metadata.requires-dev] dev = [ @@ -2522,6 +2526,15 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" }, ] 
+[[package]] +name = "httpx-sse" +version = "0.4.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/0f/4c/751061ffa58615a32c31b2d82e8482be8dd4a89154f003147acee90f2be9/httpx_sse-0.4.3.tar.gz", hash = "sha256:9b1ed0127459a66014aec3c56bebd93da3c1bc8bb6618c8082039a44889a755d", size = 15943, upload-time = "2025-10-10T21:48:22.271Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/d2/fd/6668e5aec43ab844de6fc74927e155a3b37bf40d7c3790e49fc0406b6578/httpx_sse-0.4.3-py3-none-any.whl", hash = "sha256:0ac1c9fe3c0afad2e0ebb25a934a59f4c7823b60792691f779fad2c5568830fc", size = 8960, upload-time = "2025-10-10T21:48:21.158Z" }, +] + [[package]] name = "humanize" version = "4.14.0" @@ -3717,6 +3730,31 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/30/ac/5ce64a1d4cce00390beab88622a290420401f1cabf05caf2fc0995157c21/mbstrdecoder-1.1.4-py3-none-any.whl", hash = "sha256:03dae4ec50ec0d2ff4743e63fdbd5e0022815857494d35224b60775d3d934a8c", size = 7933, upload-time = "2025-01-18T10:07:29.562Z" }, ] +[[package]] +name = "mcp" +version = "1.25.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "anyio" }, + { name = "httpx" }, + { name = "httpx-sse" }, + { name = "jsonschema" }, + { name = "pydantic" }, + { name = "pydantic-settings" }, + { name = "pyjwt", extra = ["crypto"] }, + { name = "python-multipart" }, + { name = "pywin32", marker = "sys_platform == 'win32'" }, + { name = "sse-starlette" }, + { name = "starlette" }, + { name = "typing-extensions" }, + { name = "typing-inspection" }, + { name = "uvicorn", marker = "sys_platform != 'emscripten'" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/d5/2d/649d80a0ecf6a1f82632ca44bec21c0461a9d9fc8934d38cb5b319f2db5e/mcp-1.25.0.tar.gz", hash = "sha256:56310361ebf0364e2d438e5b45f7668cbb124e158bb358333cd06e49e83a6802", size = 605387, upload-time = "2025-12-19T10:19:56.985Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/e2/fc/6dc7659c2ae5ddf280477011f4213a74f806862856b796ef08f028e664bf/mcp-1.25.0-py3-none-any.whl", hash = "sha256:b37c38144a666add0862614cc79ec276e97d72aa8ca26d622818d4e278b9721a", size = 233076, upload-time = "2025-12-19T10:19:55.416Z" }, +] + [[package]] name = "mdit-py-plugins" version = "0.5.0" @@ -6055,16 +6093,16 @@ wheels = [ [[package]] name = "referencing" -version = "0.37.0" +version = "0.36.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "attrs" }, { name = "rpds-py" }, { name = "typing-extensions", marker = "python_full_version < '3.13'" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/22/f5/df4e9027acead3ecc63e50fe1e36aca1523e1719559c499951bb4b53188f/referencing-0.37.0.tar.gz", hash = "sha256:44aefc3142c5b842538163acb373e24cce6632bd54bdb01b21ad5863489f50d8", size = 78036, upload-time = "2025-10-13T15:30:48.871Z" } +sdist = { url = "https://files.pythonhosted.org/packages/2f/db/98b5c277be99dd18bfd91dd04e1b759cad18d1a338188c936e92f921c7e2/referencing-0.36.2.tar.gz", hash = "sha256:df2e89862cd09deabbdba16944cc3f10feb6b3e6f18e902f7cc25609a34775aa", size = 74744, upload-time = "2025-01-25T08:48:16.138Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/2c/58/ca301544e1fa93ed4f80d724bf5b194f6e4b945841c5bfd555878eea9fcb/referencing-0.37.0-py3-none-any.whl", hash = "sha256:381329a9f99628c9069361716891d34ad94af76e461dcb0335825aecc7692231", size = 26766, upload-time = "2025-10-13T15:30:47.625Z" }, + { url = "https://files.pythonhosted.org/packages/c1/b1/3baf80dc6d2b7bc27a95a67752d0208e410351e3feb4eb78de5f77454d8d/referencing-0.36.2-py3-none-any.whl", hash = "sha256:e8699adbbf8b5c7de96d8ffa0eb5c158b3beafce084968e2ea8bb08c6794dcd0", size = 26775, upload-time = "2025-01-25T08:48:14.241Z" }, ] [[package]] @@ -7067,6 +7105,19 @@ wheels = [ { url = 
"https://files.pythonhosted.org/packages/bf/a4/66c1fd4f8fab88faf71cee04a945f9806ba0fef753f2cfc8be6353f64508/sphinxext_opengraph-0.13.0-py3-none-any.whl", hash = "sha256:936c07828edc9ad9a7b07908b29596dc84ed0b3ceaa77acdf51282d232d4d80e", size = 1004152, upload-time = "2025-08-29T12:20:29.072Z" }, ] +[[package]] +name = "sse-starlette" +version = "3.1.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "anyio" }, + { name = "starlette" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/da/34/f5df66cb383efdbf4f2db23cabb27f51b1dcb737efaf8a558f6f1d195134/sse_starlette-3.1.2.tar.gz", hash = "sha256:55eff034207a83a0eb86de9a68099bd0157838f0b8b999a1b742005c71e33618", size = 26303, upload-time = "2025-12-31T08:02:20.023Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b7/95/8c4b76eec9ae574474e5d2997557cebf764bcd3586458956c30631ae08f4/sse_starlette-3.1.2-py3-none-any.whl", hash = "sha256:cd800dd349f4521b317b9391d3796fa97b71748a4da9b9e00aafab32dda375c8", size = 12484, upload-time = "2025-12-31T08:02:18.894Z" }, +] + [[package]] name = "stack-data" version = "0.6.3" @@ -7861,83 +7912,61 @@ wheels = [ [[package]] name = "wrapt" -version = "2.0.1" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/49/2a/6de8a50cb435b7f42c46126cf1a54b2aab81784e74c8595c8e025e8f36d3/wrapt-2.0.1.tar.gz", hash = "sha256:9c9c635e78497cacb81e84f8b11b23e0aacac7a136e73b8e5b2109a1d9fc468f", size = 82040, upload-time = "2025-11-07T00:45:33.312Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/98/60/553997acf3939079dab022e37b67b1904b5b0cc235503226898ba573b10c/wrapt-2.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:0e17283f533a0d24d6e5429a7d11f250a58d28b4ae5186f8f47853e3e70d2590", size = 77480, upload-time = "2025-11-07T00:43:30.573Z" }, - { url = 
"https://files.pythonhosted.org/packages/2d/50/e5b3d30895d77c52105c6d5cbf94d5b38e2a3dd4a53d22d246670da98f7c/wrapt-2.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:85df8d92158cb8f3965aecc27cf821461bb5f40b450b03facc5d9f0d4d6ddec6", size = 60690, upload-time = "2025-11-07T00:43:31.594Z" }, - { url = "https://files.pythonhosted.org/packages/f0/40/660b2898703e5cbbb43db10cdefcc294274458c3ca4c68637c2b99371507/wrapt-2.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c1be685ac7700c966b8610ccc63c3187a72e33cab53526a27b2a285a662cd4f7", size = 61578, upload-time = "2025-11-07T00:43:32.918Z" }, - { url = "https://files.pythonhosted.org/packages/5b/36/825b44c8a10556957bc0c1d84c7b29a40e05fcf1873b6c40aa9dbe0bd972/wrapt-2.0.1-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:df0b6d3b95932809c5b3fecc18fda0f1e07452d05e2662a0b35548985f256e28", size = 114115, upload-time = "2025-11-07T00:43:35.605Z" }, - { url = "https://files.pythonhosted.org/packages/83/73/0a5d14bb1599677304d3c613a55457d34c344e9b60eda8a737c2ead7619e/wrapt-2.0.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4da7384b0e5d4cae05c97cd6f94faaf78cc8b0f791fc63af43436d98c4ab37bb", size = 116157, upload-time = "2025-11-07T00:43:37.058Z" }, - { url = "https://files.pythonhosted.org/packages/01/22/1c158fe763dbf0a119f985d945711d288994fe5514c0646ebe0eb18b016d/wrapt-2.0.1-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ec65a78fbd9d6f083a15d7613b2800d5663dbb6bb96003899c834beaa68b242c", size = 112535, upload-time = "2025-11-07T00:43:34.138Z" }, - { url = "https://files.pythonhosted.org/packages/5c/28/4f16861af67d6de4eae9927799b559c20ebdd4fe432e89ea7fe6fcd9d709/wrapt-2.0.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:7de3cc939be0e1174969f943f3b44e0d79b6f9a82198133a5b7fc6cc92882f16", size = 115404, upload-time = "2025-11-07T00:43:39.214Z" }, - { url = 
"https://files.pythonhosted.org/packages/a0/8b/7960122e625fad908f189b59c4aae2d50916eb4098b0fb2819c5a177414f/wrapt-2.0.1-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:fb1a5b72cbd751813adc02ef01ada0b0d05d3dcbc32976ce189a1279d80ad4a2", size = 111802, upload-time = "2025-11-07T00:43:40.476Z" }, - { url = "https://files.pythonhosted.org/packages/3e/73/7881eee5ac31132a713ab19a22c9e5f1f7365c8b1df50abba5d45b781312/wrapt-2.0.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:3fa272ca34332581e00bf7773e993d4f632594eb2d1b0b162a9038df0fd971dd", size = 113837, upload-time = "2025-11-07T00:43:42.921Z" }, - { url = "https://files.pythonhosted.org/packages/45/00/9499a3d14e636d1f7089339f96c4409bbc7544d0889f12264efa25502ae8/wrapt-2.0.1-cp311-cp311-win32.whl", hash = "sha256:fc007fdf480c77301ab1afdbb6ab22a5deee8885f3b1ed7afcb7e5e84a0e27be", size = 58028, upload-time = "2025-11-07T00:43:47.369Z" }, - { url = "https://files.pythonhosted.org/packages/70/5d/8f3d7eea52f22638748f74b102e38fdf88cb57d08ddeb7827c476a20b01b/wrapt-2.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:47434236c396d04875180171ee1f3815ca1eada05e24a1ee99546320d54d1d1b", size = 60385, upload-time = "2025-11-07T00:43:44.34Z" }, - { url = "https://files.pythonhosted.org/packages/14/e2/32195e57a8209003587bbbad44d5922f13e0ced2a493bb46ca882c5b123d/wrapt-2.0.1-cp311-cp311-win_arm64.whl", hash = "sha256:837e31620e06b16030b1d126ed78e9383815cbac914693f54926d816d35d8edf", size = 58893, upload-time = "2025-11-07T00:43:46.161Z" }, - { url = "https://files.pythonhosted.org/packages/cb/73/8cb252858dc8254baa0ce58ce382858e3a1cf616acebc497cb13374c95c6/wrapt-2.0.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:1fdbb34da15450f2b1d735a0e969c24bdb8d8924892380126e2a293d9902078c", size = 78129, upload-time = "2025-11-07T00:43:48.852Z" }, - { url = "https://files.pythonhosted.org/packages/19/42/44a0db2108526ee6e17a5ab72478061158f34b08b793df251d9fbb9a7eb4/wrapt-2.0.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = 
"sha256:3d32794fe940b7000f0519904e247f902f0149edbe6316c710a8562fb6738841", size = 61205, upload-time = "2025-11-07T00:43:50.402Z" }, - { url = "https://files.pythonhosted.org/packages/4d/8a/5b4b1e44b791c22046e90d9b175f9a7581a8cc7a0debbb930f81e6ae8e25/wrapt-2.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:386fb54d9cd903ee0012c09291336469eb7b244f7183d40dc3e86a16a4bace62", size = 61692, upload-time = "2025-11-07T00:43:51.678Z" }, - { url = "https://files.pythonhosted.org/packages/11/53/3e794346c39f462bcf1f58ac0487ff9bdad02f9b6d5ee2dc84c72e0243b2/wrapt-2.0.1-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7b219cb2182f230676308cdcacd428fa837987b89e4b7c5c9025088b8a6c9faf", size = 121492, upload-time = "2025-11-07T00:43:55.017Z" }, - { url = "https://files.pythonhosted.org/packages/c6/7e/10b7b0e8841e684c8ca76b462a9091c45d62e8f2de9c4b1390b690eadf16/wrapt-2.0.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:641e94e789b5f6b4822bb8d8ebbdfc10f4e4eae7756d648b717d980f657a9eb9", size = 123064, upload-time = "2025-11-07T00:43:56.323Z" }, - { url = "https://files.pythonhosted.org/packages/0e/d1/3c1e4321fc2f5ee7fd866b2d822aa89b84495f28676fd976c47327c5b6aa/wrapt-2.0.1-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:fe21b118b9f58859b5ebaa4b130dee18669df4bd111daad082b7beb8799ad16b", size = 117403, upload-time = "2025-11-07T00:43:53.258Z" }, - { url = "https://files.pythonhosted.org/packages/a4/b0/d2f0a413cf201c8c2466de08414a15420a25aa83f53e647b7255cc2fab5d/wrapt-2.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:17fb85fa4abc26a5184d93b3efd2dcc14deb4b09edcdb3535a536ad34f0b4dba", size = 121500, upload-time = "2025-11-07T00:43:57.468Z" }, - { url = "https://files.pythonhosted.org/packages/bd/45/bddb11d28ca39970a41ed48a26d210505120f925918592283369219f83cc/wrapt-2.0.1-cp312-cp312-musllinux_1_2_riscv64.whl", hash = 
"sha256:b89ef9223d665ab255ae42cc282d27d69704d94be0deffc8b9d919179a609684", size = 116299, upload-time = "2025-11-07T00:43:58.877Z" }, - { url = "https://files.pythonhosted.org/packages/81/af/34ba6dd570ef7a534e7eec0c25e2615c355602c52aba59413411c025a0cb/wrapt-2.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a453257f19c31b31ba593c30d997d6e5be39e3b5ad9148c2af5a7314061c63eb", size = 120622, upload-time = "2025-11-07T00:43:59.962Z" }, - { url = "https://files.pythonhosted.org/packages/e2/3e/693a13b4146646fb03254636f8bafd20c621955d27d65b15de07ab886187/wrapt-2.0.1-cp312-cp312-win32.whl", hash = "sha256:3e271346f01e9c8b1130a6a3b0e11908049fe5be2d365a5f402778049147e7e9", size = 58246, upload-time = "2025-11-07T00:44:03.169Z" }, - { url = "https://files.pythonhosted.org/packages/a7/36/715ec5076f925a6be95f37917b66ebbeaa1372d1862c2ccd7a751574b068/wrapt-2.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:2da620b31a90cdefa9cd0c2b661882329e2e19d1d7b9b920189956b76c564d75", size = 60492, upload-time = "2025-11-07T00:44:01.027Z" }, - { url = "https://files.pythonhosted.org/packages/ef/3e/62451cd7d80f65cc125f2b426b25fbb6c514bf6f7011a0c3904fc8c8df90/wrapt-2.0.1-cp312-cp312-win_arm64.whl", hash = "sha256:aea9c7224c302bc8bfc892b908537f56c430802560e827b75ecbde81b604598b", size = 58987, upload-time = "2025-11-07T00:44:02.095Z" }, - { url = "https://files.pythonhosted.org/packages/ad/fe/41af4c46b5e498c90fc87981ab2972fbd9f0bccda597adb99d3d3441b94b/wrapt-2.0.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:47b0f8bafe90f7736151f61482c583c86b0693d80f075a58701dd1549b0010a9", size = 78132, upload-time = "2025-11-07T00:44:04.628Z" }, - { url = "https://files.pythonhosted.org/packages/1c/92/d68895a984a5ebbbfb175512b0c0aad872354a4a2484fbd5552e9f275316/wrapt-2.0.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:cbeb0971e13b4bd81d34169ed57a6dda017328d1a22b62fda45e1d21dd06148f", size = 61211, upload-time = "2025-11-07T00:44:05.626Z" }, - { url = 
"https://files.pythonhosted.org/packages/e8/26/ba83dc5ae7cf5aa2b02364a3d9cf74374b86169906a1f3ade9a2d03cf21c/wrapt-2.0.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:eb7cffe572ad0a141a7886a1d2efa5bef0bf7fe021deeea76b3ab334d2c38218", size = 61689, upload-time = "2025-11-07T00:44:06.719Z" }, - { url = "https://files.pythonhosted.org/packages/cf/67/d7a7c276d874e5d26738c22444d466a3a64ed541f6ef35f740dbd865bab4/wrapt-2.0.1-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:c8d60527d1ecfc131426b10d93ab5d53e08a09c5fa0175f6b21b3252080c70a9", size = 121502, upload-time = "2025-11-07T00:44:09.557Z" }, - { url = "https://files.pythonhosted.org/packages/0f/6b/806dbf6dd9579556aab22fc92908a876636e250f063f71548a8660382184/wrapt-2.0.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c654eafb01afac55246053d67a4b9a984a3567c3808bb7df2f8de1c1caba2e1c", size = 123110, upload-time = "2025-11-07T00:44:10.64Z" }, - { url = "https://files.pythonhosted.org/packages/e5/08/cdbb965fbe4c02c5233d185d070cabed2ecc1f1e47662854f95d77613f57/wrapt-2.0.1-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:98d873ed6c8b4ee2418f7afce666751854d6d03e3c0ec2a399bb039cd2ae89db", size = 117434, upload-time = "2025-11-07T00:44:08.138Z" }, - { url = "https://files.pythonhosted.org/packages/2d/d1/6aae2ce39db4cb5216302fa2e9577ad74424dfbe315bd6669725569e048c/wrapt-2.0.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:c9e850f5b7fc67af856ff054c71690d54fa940c3ef74209ad9f935b4f66a0233", size = 121533, upload-time = "2025-11-07T00:44:12.142Z" }, - { url = "https://files.pythonhosted.org/packages/79/35/565abf57559fbe0a9155c29879ff43ce8bd28d2ca61033a3a3dd67b70794/wrapt-2.0.1-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:e505629359cb5f751e16e30cf3f91a1d3ddb4552480c205947da415d597f7ac2", size = 116324, upload-time = "2025-11-07T00:44:13.28Z" }, - { url = 
"https://files.pythonhosted.org/packages/e1/e0/53ff5e76587822ee33e560ad55876d858e384158272cd9947abdd4ad42ca/wrapt-2.0.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:2879af909312d0baf35f08edeea918ee3af7ab57c37fe47cb6a373c9f2749c7b", size = 120627, upload-time = "2025-11-07T00:44:14.431Z" }, - { url = "https://files.pythonhosted.org/packages/7c/7b/38df30fd629fbd7612c407643c63e80e1c60bcc982e30ceeae163a9800e7/wrapt-2.0.1-cp313-cp313-win32.whl", hash = "sha256:d67956c676be5a24102c7407a71f4126d30de2a569a1c7871c9f3cabc94225d7", size = 58252, upload-time = "2025-11-07T00:44:17.814Z" }, - { url = "https://files.pythonhosted.org/packages/85/64/d3954e836ea67c4d3ad5285e5c8fd9d362fd0a189a2db622df457b0f4f6a/wrapt-2.0.1-cp313-cp313-win_amd64.whl", hash = "sha256:9ca66b38dd642bf90c59b6738af8070747b610115a39af2498535f62b5cdc1c3", size = 60500, upload-time = "2025-11-07T00:44:15.561Z" }, - { url = "https://files.pythonhosted.org/packages/89/4e/3c8b99ac93527cfab7f116089db120fef16aac96e5f6cdb724ddf286086d/wrapt-2.0.1-cp313-cp313-win_arm64.whl", hash = "sha256:5a4939eae35db6b6cec8e7aa0e833dcca0acad8231672c26c2a9ab7a0f8ac9c8", size = 58993, upload-time = "2025-11-07T00:44:16.65Z" }, - { url = "https://files.pythonhosted.org/packages/f9/f4/eff2b7d711cae20d220780b9300faa05558660afb93f2ff5db61fe725b9a/wrapt-2.0.1-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:a52f93d95c8d38fed0669da2ebdb0b0376e895d84596a976c15a9eb45e3eccb3", size = 82028, upload-time = "2025-11-07T00:44:18.944Z" }, - { url = "https://files.pythonhosted.org/packages/0c/67/cb945563f66fd0f61a999339460d950f4735c69f18f0a87ca586319b1778/wrapt-2.0.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:4e54bbf554ee29fcceee24fa41c4d091398b911da6e7f5d7bffda963c9aed2e1", size = 62949, upload-time = "2025-11-07T00:44:20.074Z" }, - { url = "https://files.pythonhosted.org/packages/ec/ca/f63e177f0bbe1e5cf5e8d9b74a286537cd709724384ff20860f8f6065904/wrapt-2.0.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = 
"sha256:908f8c6c71557f4deaa280f55d0728c3bca0960e8c3dd5ceeeafb3c19942719d", size = 63681, upload-time = "2025-11-07T00:44:21.345Z" }, - { url = "https://files.pythonhosted.org/packages/39/a1/1b88fcd21fd835dca48b556daef750952e917a2794fa20c025489e2e1f0f/wrapt-2.0.1-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:e2f84e9af2060e3904a32cea9bb6db23ce3f91cfd90c6b426757cf7cc01c45c7", size = 152696, upload-time = "2025-11-07T00:44:24.318Z" }, - { url = "https://files.pythonhosted.org/packages/62/1c/d9185500c1960d9f5f77b9c0b890b7fc62282b53af7ad1b6bd779157f714/wrapt-2.0.1-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e3612dc06b436968dfb9142c62e5dfa9eb5924f91120b3c8ff501ad878f90eb3", size = 158859, upload-time = "2025-11-07T00:44:25.494Z" }, - { url = "https://files.pythonhosted.org/packages/91/60/5d796ed0f481ec003220c7878a1d6894652efe089853a208ea0838c13086/wrapt-2.0.1-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:6d2d947d266d99a1477cd005b23cbd09465276e302515e122df56bb9511aca1b", size = 146068, upload-time = "2025-11-07T00:44:22.81Z" }, - { url = "https://files.pythonhosted.org/packages/04/f8/75282dd72f102ddbfba137e1e15ecba47b40acff32c08ae97edbf53f469e/wrapt-2.0.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:7d539241e87b650cbc4c3ac9f32c8d1ac8a54e510f6dca3f6ab60dcfd48c9b10", size = 155724, upload-time = "2025-11-07T00:44:26.634Z" }, - { url = "https://files.pythonhosted.org/packages/5a/27/fe39c51d1b344caebb4a6a9372157bdb8d25b194b3561b52c8ffc40ac7d1/wrapt-2.0.1-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:4811e15d88ee62dbf5c77f2c3ff3932b1e3ac92323ba3912f51fc4016ce81ecf", size = 144413, upload-time = "2025-11-07T00:44:27.939Z" }, - { url = "https://files.pythonhosted.org/packages/83/2b/9f6b643fe39d4505c7bf926d7c2595b7cb4b607c8c6b500e56c6b36ac238/wrapt-2.0.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = 
"sha256:c1c91405fcf1d501fa5d55df21e58ea49e6b879ae829f1039faaf7e5e509b41e", size = 150325, upload-time = "2025-11-07T00:44:29.29Z" }, - { url = "https://files.pythonhosted.org/packages/bb/b6/20ffcf2558596a7f58a2e69c89597128781f0b88e124bf5a4cadc05b8139/wrapt-2.0.1-cp313-cp313t-win32.whl", hash = "sha256:e76e3f91f864e89db8b8d2a8311d57df93f01ad6bb1e9b9976d1f2e83e18315c", size = 59943, upload-time = "2025-11-07T00:44:33.211Z" }, - { url = "https://files.pythonhosted.org/packages/87/6a/0e56111cbb3320151eed5d3821ee1373be13e05b376ea0870711f18810c3/wrapt-2.0.1-cp313-cp313t-win_amd64.whl", hash = "sha256:83ce30937f0ba0d28818807b303a412440c4b63e39d3d8fc036a94764b728c92", size = 63240, upload-time = "2025-11-07T00:44:30.935Z" }, - { url = "https://files.pythonhosted.org/packages/1d/54/5ab4c53ea1f7f7e5c3e7c1095db92932cc32fd62359d285486d00c2884c3/wrapt-2.0.1-cp313-cp313t-win_arm64.whl", hash = "sha256:4b55cacc57e1dc2d0991dbe74c6419ffd415fb66474a02335cb10efd1aa3f84f", size = 60416, upload-time = "2025-11-07T00:44:32.002Z" }, - { url = "https://files.pythonhosted.org/packages/73/81/d08d83c102709258e7730d3cd25befd114c60e43ef3891d7e6877971c514/wrapt-2.0.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:5e53b428f65ece6d9dad23cb87e64506392b720a0b45076c05354d27a13351a1", size = 78290, upload-time = "2025-11-07T00:44:34.691Z" }, - { url = "https://files.pythonhosted.org/packages/f6/14/393afba2abb65677f313aa680ff0981e829626fed39b6a7e3ec807487790/wrapt-2.0.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:ad3ee9d0f254851c71780966eb417ef8e72117155cff04821ab9b60549694a55", size = 61255, upload-time = "2025-11-07T00:44:35.762Z" }, - { url = "https://files.pythonhosted.org/packages/c4/10/a4a1f2fba205a9462e36e708ba37e5ac95f4987a0f1f8fd23f0bf1fc3b0f/wrapt-2.0.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:d7b822c61ed04ee6ad64bc90d13368ad6eb094db54883b5dde2182f67a7f22c0", size = 61797, upload-time = "2025-11-07T00:44:37.22Z" }, - { url = 
"https://files.pythonhosted.org/packages/12/db/99ba5c37cf1c4fad35349174f1e38bd8d992340afc1ff27f526729b98986/wrapt-2.0.1-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7164a55f5e83a9a0b031d3ffab4d4e36bbec42e7025db560f225489fa929e509", size = 120470, upload-time = "2025-11-07T00:44:39.425Z" }, - { url = "https://files.pythonhosted.org/packages/30/3f/a1c8d2411eb826d695fc3395a431757331582907a0ec59afce8fe8712473/wrapt-2.0.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e60690ba71a57424c8d9ff28f8d006b7ad7772c22a4af432188572cd7fa004a1", size = 122851, upload-time = "2025-11-07T00:44:40.582Z" }, - { url = "https://files.pythonhosted.org/packages/b3/8d/72c74a63f201768d6a04a8845c7976f86be6f5ff4d74996c272cefc8dafc/wrapt-2.0.1-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:3cd1a4bd9a7a619922a8557e1318232e7269b5fb69d4ba97b04d20450a6bf970", size = 117433, upload-time = "2025-11-07T00:44:38.313Z" }, - { url = "https://files.pythonhosted.org/packages/c7/5a/df37cf4042cb13b08256f8e27023e2f9b3d471d553376616591bb99bcb31/wrapt-2.0.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:b4c2e3d777e38e913b8ce3a6257af72fb608f86a1df471cb1d4339755d0a807c", size = 121280, upload-time = "2025-11-07T00:44:41.69Z" }, - { url = "https://files.pythonhosted.org/packages/54/34/40d6bc89349f9931e1186ceb3e5fbd61d307fef814f09fbbac98ada6a0c8/wrapt-2.0.1-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:3d366aa598d69416b5afedf1faa539fac40c1d80a42f6b236c88c73a3c8f2d41", size = 116343, upload-time = "2025-11-07T00:44:43.013Z" }, - { url = "https://files.pythonhosted.org/packages/70/66/81c3461adece09d20781dee17c2366fdf0cb8754738b521d221ca056d596/wrapt-2.0.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:c235095d6d090aa903f1db61f892fffb779c1eaeb2a50e566b52001f7a0f66ed", size = 119650, upload-time = "2025-11-07T00:44:44.523Z" }, - { url = 
"https://files.pythonhosted.org/packages/46/3a/d0146db8be8761a9e388cc9cc1c312b36d583950ec91696f19bbbb44af5a/wrapt-2.0.1-cp314-cp314-win32.whl", hash = "sha256:bfb5539005259f8127ea9c885bdc231978c06b7a980e63a8a61c8c4c979719d0", size = 58701, upload-time = "2025-11-07T00:44:48.277Z" }, - { url = "https://files.pythonhosted.org/packages/1a/38/5359da9af7d64554be63e9046164bd4d8ff289a2dd365677d25ba3342c08/wrapt-2.0.1-cp314-cp314-win_amd64.whl", hash = "sha256:4ae879acc449caa9ed43fc36ba08392b9412ee67941748d31d94e3cedb36628c", size = 60947, upload-time = "2025-11-07T00:44:46.086Z" }, - { url = "https://files.pythonhosted.org/packages/aa/3f/96db0619276a833842bf36343685fa04f987dd6e3037f314531a1e00492b/wrapt-2.0.1-cp314-cp314-win_arm64.whl", hash = "sha256:8639b843c9efd84675f1e100ed9e99538ebea7297b62c4b45a7042edb84db03e", size = 59359, upload-time = "2025-11-07T00:44:47.164Z" }, - { url = "https://files.pythonhosted.org/packages/71/49/5f5d1e867bf2064bf3933bc6cf36ade23505f3902390e175e392173d36a2/wrapt-2.0.1-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:9219a1d946a9b32bb23ccae66bdb61e35c62773ce7ca6509ceea70f344656b7b", size = 82031, upload-time = "2025-11-07T00:44:49.4Z" }, - { url = "https://files.pythonhosted.org/packages/2b/89/0009a218d88db66ceb83921e5685e820e2c61b59bbbb1324ba65342668bc/wrapt-2.0.1-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:fa4184e74197af3adad3c889a1af95b53bb0466bced92ea99a0c014e48323eec", size = 62952, upload-time = "2025-11-07T00:44:50.74Z" }, - { url = "https://files.pythonhosted.org/packages/ae/18/9b968e920dd05d6e44bcc918a046d02afea0fb31b2f1c80ee4020f377cbe/wrapt-2.0.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c5ef2f2b8a53b7caee2f797ef166a390fef73979b15778a4a153e4b5fedce8fa", size = 63688, upload-time = "2025-11-07T00:44:52.248Z" }, - { url = 
"https://files.pythonhosted.org/packages/a6/7d/78bdcb75826725885d9ea26c49a03071b10c4c92da93edda612910f150e4/wrapt-2.0.1-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:e042d653a4745be832d5aa190ff80ee4f02c34b21f4b785745eceacd0907b815", size = 152706, upload-time = "2025-11-07T00:44:54.613Z" }, - { url = "https://files.pythonhosted.org/packages/dd/77/cac1d46f47d32084a703df0d2d29d47e7eb2a7d19fa5cbca0e529ef57659/wrapt-2.0.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2afa23318136709c4b23d87d543b425c399887b4057936cd20386d5b1422b6fa", size = 158866, upload-time = "2025-11-07T00:44:55.79Z" }, - { url = "https://files.pythonhosted.org/packages/8a/11/b521406daa2421508903bf8d5e8b929216ec2af04839db31c0a2c525eee0/wrapt-2.0.1-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:6c72328f668cf4c503ffcf9434c2b71fdd624345ced7941bc6693e61bbe36bef", size = 146148, upload-time = "2025-11-07T00:44:53.388Z" }, - { url = "https://files.pythonhosted.org/packages/0c/c0/340b272bed297baa7c9ce0c98ef7017d9c035a17a6a71dce3184b8382da2/wrapt-2.0.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:3793ac154afb0e5b45d1233cb94d354ef7a983708cc3bb12563853b1d8d53747", size = 155737, upload-time = "2025-11-07T00:44:56.971Z" }, - { url = "https://files.pythonhosted.org/packages/f3/93/bfcb1fb2bdf186e9c2883a4d1ab45ab099c79cbf8f4e70ea453811fa3ea7/wrapt-2.0.1-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:fec0d993ecba3991645b4857837277469c8cc4c554a7e24d064d1ca291cfb81f", size = 144451, upload-time = "2025-11-07T00:44:58.515Z" }, - { url = "https://files.pythonhosted.org/packages/d2/6b/dca504fb18d971139d232652656180e3bd57120e1193d9a5899c3c0b7cdd/wrapt-2.0.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:949520bccc1fa227274da7d03bf238be15389cd94e32e4297b92337df9b7a349", size = 150353, upload-time = "2025-11-07T00:44:59.753Z" }, - { url = 
"https://files.pythonhosted.org/packages/1d/f6/a1de4bd3653afdf91d250ca5c721ee51195df2b61a4603d4b373aa804d1d/wrapt-2.0.1-cp314-cp314t-win32.whl", hash = "sha256:be9e84e91d6497ba62594158d3d31ec0486c60055c49179edc51ee43d095f79c", size = 60609, upload-time = "2025-11-07T00:45:03.315Z" }, - { url = "https://files.pythonhosted.org/packages/01/3a/07cd60a9d26fe73efead61c7830af975dfdba8537632d410462672e4432b/wrapt-2.0.1-cp314-cp314t-win_amd64.whl", hash = "sha256:61c4956171c7434634401db448371277d07032a81cc21c599c22953374781395", size = 64038, upload-time = "2025-11-07T00:45:00.948Z" }, - { url = "https://files.pythonhosted.org/packages/41/99/8a06b8e17dddbf321325ae4eb12465804120f699cd1b8a355718300c62da/wrapt-2.0.1-cp314-cp314t-win_arm64.whl", hash = "sha256:35cdbd478607036fee40273be8ed54a451f5f23121bd9d4be515158f9498f7ad", size = 60634, upload-time = "2025-11-07T00:45:02.087Z" }, - { url = "https://files.pythonhosted.org/packages/15/d1/b51471c11592ff9c012bd3e2f7334a6ff2f42a7aed2caffcf0bdddc9cb89/wrapt-2.0.1-py3-none-any.whl", hash = "sha256:4d2ce1bf1a48c5277d7969259232b57645aae5686dba1eaeade39442277afbca", size = 44046, upload-time = "2025-11-07T00:45:32.116Z" }, +version = "1.17.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/95/8f/aeb76c5b46e273670962298c23e7ddde79916cb74db802131d49a85e4b7d/wrapt-1.17.3.tar.gz", hash = "sha256:f66eb08feaa410fe4eebd17f2a2c8e2e46d3476e9f8c783daa8e09e0faa666d0", size = 55547, upload-time = "2025-08-12T05:53:21.714Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/52/db/00e2a219213856074a213503fdac0511203dceefff26e1daa15250cc01a0/wrapt-1.17.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:273a736c4645e63ac582c60a56b0acb529ef07f78e08dc6bfadf6a46b19c0da7", size = 53482, upload-time = "2025-08-12T05:51:45.79Z" }, + { url = 
"https://files.pythonhosted.org/packages/5e/30/ca3c4a5eba478408572096fe9ce36e6e915994dd26a4e9e98b4f729c06d9/wrapt-1.17.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5531d911795e3f935a9c23eb1c8c03c211661a5060aab167065896bbf62a5f85", size = 38674, upload-time = "2025-08-12T05:51:34.629Z" }, + { url = "https://files.pythonhosted.org/packages/31/25/3e8cc2c46b5329c5957cec959cb76a10718e1a513309c31399a4dad07eb3/wrapt-1.17.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:0610b46293c59a3adbae3dee552b648b984176f8562ee0dba099a56cfbe4df1f", size = 38959, upload-time = "2025-08-12T05:51:56.074Z" }, + { url = "https://files.pythonhosted.org/packages/5d/8f/a32a99fc03e4b37e31b57cb9cefc65050ea08147a8ce12f288616b05ef54/wrapt-1.17.3-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:b32888aad8b6e68f83a8fdccbf3165f5469702a7544472bdf41f582970ed3311", size = 82376, upload-time = "2025-08-12T05:52:32.134Z" }, + { url = "https://files.pythonhosted.org/packages/31/57/4930cb8d9d70d59c27ee1332a318c20291749b4fba31f113c2f8ac49a72e/wrapt-1.17.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8cccf4f81371f257440c88faed6b74f1053eef90807b77e31ca057b2db74edb1", size = 83604, upload-time = "2025-08-12T05:52:11.663Z" }, + { url = "https://files.pythonhosted.org/packages/a8/f3/1afd48de81d63dd66e01b263a6fbb86e1b5053b419b9b33d13e1f6d0f7d0/wrapt-1.17.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d8a210b158a34164de8bb68b0e7780041a903d7b00c87e906fb69928bf7890d5", size = 82782, upload-time = "2025-08-12T05:52:12.626Z" }, + { url = "https://files.pythonhosted.org/packages/1e/d7/4ad5327612173b144998232f98a85bb24b60c352afb73bc48e3e0d2bdc4e/wrapt-1.17.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:79573c24a46ce11aab457b472efd8d125e5a51da2d1d24387666cd85f54c05b2", size = 82076, upload-time = "2025-08-12T05:52:33.168Z" }, + { url = 
"https://files.pythonhosted.org/packages/bb/59/e0adfc831674a65694f18ea6dc821f9fcb9ec82c2ce7e3d73a88ba2e8718/wrapt-1.17.3-cp311-cp311-win32.whl", hash = "sha256:c31eebe420a9a5d2887b13000b043ff6ca27c452a9a22fa71f35f118e8d4bf89", size = 36457, upload-time = "2025-08-12T05:53:03.936Z" }, + { url = "https://files.pythonhosted.org/packages/83/88/16b7231ba49861b6f75fc309b11012ede4d6b0a9c90969d9e0db8d991aeb/wrapt-1.17.3-cp311-cp311-win_amd64.whl", hash = "sha256:0b1831115c97f0663cb77aa27d381237e73ad4f721391a9bfb2fe8bc25fa6e77", size = 38745, upload-time = "2025-08-12T05:53:02.885Z" }, + { url = "https://files.pythonhosted.org/packages/9a/1e/c4d4f3398ec073012c51d1c8d87f715f56765444e1a4b11e5180577b7e6e/wrapt-1.17.3-cp311-cp311-win_arm64.whl", hash = "sha256:5a7b3c1ee8265eb4c8f1b7d29943f195c00673f5ab60c192eba2d4a7eae5f46a", size = 36806, upload-time = "2025-08-12T05:52:53.368Z" }, + { url = "https://files.pythonhosted.org/packages/9f/41/cad1aba93e752f1f9268c77270da3c469883d56e2798e7df6240dcb2287b/wrapt-1.17.3-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ab232e7fdb44cdfbf55fc3afa31bcdb0d8980b9b95c38b6405df2acb672af0e0", size = 53998, upload-time = "2025-08-12T05:51:47.138Z" }, + { url = "https://files.pythonhosted.org/packages/60/f8/096a7cc13097a1869fe44efe68dace40d2a16ecb853141394047f0780b96/wrapt-1.17.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:9baa544e6acc91130e926e8c802a17f3b16fbea0fd441b5a60f5cf2cc5c3deba", size = 39020, upload-time = "2025-08-12T05:51:35.906Z" }, + { url = "https://files.pythonhosted.org/packages/33/df/bdf864b8997aab4febb96a9ae5c124f700a5abd9b5e13d2a3214ec4be705/wrapt-1.17.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6b538e31eca1a7ea4605e44f81a48aa24c4632a277431a6ed3f328835901f4fd", size = 39098, upload-time = "2025-08-12T05:51:57.474Z" }, + { url = 
"https://files.pythonhosted.org/packages/9f/81/5d931d78d0eb732b95dc3ddaeeb71c8bb572fb01356e9133916cd729ecdd/wrapt-1.17.3-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:042ec3bb8f319c147b1301f2393bc19dba6e176b7da446853406d041c36c7828", size = 88036, upload-time = "2025-08-12T05:52:34.784Z" }, + { url = "https://files.pythonhosted.org/packages/ca/38/2e1785df03b3d72d34fc6252d91d9d12dc27a5c89caef3335a1bbb8908ca/wrapt-1.17.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3af60380ba0b7b5aeb329bc4e402acd25bd877e98b3727b0135cb5c2efdaefe9", size = 88156, upload-time = "2025-08-12T05:52:13.599Z" }, + { url = "https://files.pythonhosted.org/packages/b3/8b/48cdb60fe0603e34e05cffda0b2a4adab81fd43718e11111a4b0100fd7c1/wrapt-1.17.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0b02e424deef65c9f7326d8c19220a2c9040c51dc165cddb732f16198c168396", size = 87102, upload-time = "2025-08-12T05:52:14.56Z" }, + { url = "https://files.pythonhosted.org/packages/3c/51/d81abca783b58f40a154f1b2c56db1d2d9e0d04fa2d4224e357529f57a57/wrapt-1.17.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:74afa28374a3c3a11b3b5e5fca0ae03bef8450d6aa3ab3a1e2c30e3a75d023dc", size = 87732, upload-time = "2025-08-12T05:52:36.165Z" }, + { url = "https://files.pythonhosted.org/packages/9e/b1/43b286ca1392a006d5336412d41663eeef1ad57485f3e52c767376ba7e5a/wrapt-1.17.3-cp312-cp312-win32.whl", hash = "sha256:4da9f45279fff3543c371d5ababc57a0384f70be244de7759c85a7f989cb4ebe", size = 36705, upload-time = "2025-08-12T05:53:07.123Z" }, + { url = "https://files.pythonhosted.org/packages/28/de/49493f962bd3c586ab4b88066e967aa2e0703d6ef2c43aa28cb83bf7b507/wrapt-1.17.3-cp312-cp312-win_amd64.whl", hash = "sha256:e71d5c6ebac14875668a1e90baf2ea0ef5b7ac7918355850c0908ae82bcb297c", size = 38877, upload-time = "2025-08-12T05:53:05.436Z" }, + { url = 
"https://files.pythonhosted.org/packages/f1/48/0f7102fe9cb1e8a5a77f80d4f0956d62d97034bbe88d33e94699f99d181d/wrapt-1.17.3-cp312-cp312-win_arm64.whl", hash = "sha256:604d076c55e2fdd4c1c03d06dc1a31b95130010517b5019db15365ec4a405fc6", size = 36885, upload-time = "2025-08-12T05:52:54.367Z" }, + { url = "https://files.pythonhosted.org/packages/fc/f6/759ece88472157acb55fc195e5b116e06730f1b651b5b314c66291729193/wrapt-1.17.3-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a47681378a0439215912ef542c45a783484d4dd82bac412b71e59cf9c0e1cea0", size = 54003, upload-time = "2025-08-12T05:51:48.627Z" }, + { url = "https://files.pythonhosted.org/packages/4f/a9/49940b9dc6d47027dc850c116d79b4155f15c08547d04db0f07121499347/wrapt-1.17.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:54a30837587c6ee3cd1a4d1c2ec5d24e77984d44e2f34547e2323ddb4e22eb77", size = 39025, upload-time = "2025-08-12T05:51:37.156Z" }, + { url = "https://files.pythonhosted.org/packages/45/35/6a08de0f2c96dcdd7fe464d7420ddb9a7655a6561150e5fc4da9356aeaab/wrapt-1.17.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:16ecf15d6af39246fe33e507105d67e4b81d8f8d2c6598ff7e3ca1b8a37213f7", size = 39108, upload-time = "2025-08-12T05:51:58.425Z" }, + { url = "https://files.pythonhosted.org/packages/0c/37/6faf15cfa41bf1f3dba80cd3f5ccc6622dfccb660ab26ed79f0178c7497f/wrapt-1.17.3-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:6fd1ad24dc235e4ab88cda009e19bf347aabb975e44fd5c2fb22a3f6e4141277", size = 88072, upload-time = "2025-08-12T05:52:37.53Z" }, + { url = "https://files.pythonhosted.org/packages/78/f2/efe19ada4a38e4e15b6dff39c3e3f3f73f5decf901f66e6f72fe79623a06/wrapt-1.17.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0ed61b7c2d49cee3c027372df5809a59d60cf1b6c2f81ee980a091f3afed6a2d", size = 88214, upload-time = "2025-08-12T05:52:15.886Z" }, + { url = 
"https://files.pythonhosted.org/packages/40/90/ca86701e9de1622b16e09689fc24b76f69b06bb0150990f6f4e8b0eeb576/wrapt-1.17.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:423ed5420ad5f5529db9ce89eac09c8a2f97da18eb1c870237e84c5a5c2d60aa", size = 87105, upload-time = "2025-08-12T05:52:17.914Z" }, + { url = "https://files.pythonhosted.org/packages/fd/e0/d10bd257c9a3e15cbf5523025252cc14d77468e8ed644aafb2d6f54cb95d/wrapt-1.17.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:e01375f275f010fcbf7f643b4279896d04e571889b8a5b3f848423d91bf07050", size = 87766, upload-time = "2025-08-12T05:52:39.243Z" }, + { url = "https://files.pythonhosted.org/packages/e8/cf/7d848740203c7b4b27eb55dbfede11aca974a51c3d894f6cc4b865f42f58/wrapt-1.17.3-cp313-cp313-win32.whl", hash = "sha256:53e5e39ff71b3fc484df8a522c933ea2b7cdd0d5d15ae82e5b23fde87d44cbd8", size = 36711, upload-time = "2025-08-12T05:53:10.074Z" }, + { url = "https://files.pythonhosted.org/packages/57/54/35a84d0a4d23ea675994104e667ceff49227ce473ba6a59ba2c84f250b74/wrapt-1.17.3-cp313-cp313-win_amd64.whl", hash = "sha256:1f0b2f40cf341ee8cc1a97d51ff50dddb9fcc73241b9143ec74b30fc4f44f6cb", size = 38885, upload-time = "2025-08-12T05:53:08.695Z" }, + { url = "https://files.pythonhosted.org/packages/01/77/66e54407c59d7b02a3c4e0af3783168fff8e5d61def52cda8728439d86bc/wrapt-1.17.3-cp313-cp313-win_arm64.whl", hash = "sha256:7425ac3c54430f5fc5e7b6f41d41e704db073309acfc09305816bc6a0b26bb16", size = 36896, upload-time = "2025-08-12T05:52:55.34Z" }, + { url = "https://files.pythonhosted.org/packages/02/a2/cd864b2a14f20d14f4c496fab97802001560f9f41554eef6df201cd7f76c/wrapt-1.17.3-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:cf30f6e3c077c8e6a9a7809c94551203c8843e74ba0c960f4a98cd80d4665d39", size = 54132, upload-time = "2025-08-12T05:51:49.864Z" }, + { url = "https://files.pythonhosted.org/packages/d5/46/d011725b0c89e853dc44cceb738a307cde5d240d023d6d40a82d1b4e1182/wrapt-1.17.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = 
"sha256:e228514a06843cae89621384cfe3a80418f3c04aadf8a3b14e46a7be704e4235", size = 39091, upload-time = "2025-08-12T05:51:38.935Z" }, + { url = "https://files.pythonhosted.org/packages/2e/9e/3ad852d77c35aae7ddebdbc3b6d35ec8013af7d7dddad0ad911f3d891dae/wrapt-1.17.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:5ea5eb3c0c071862997d6f3e02af1d055f381b1d25b286b9d6644b79db77657c", size = 39172, upload-time = "2025-08-12T05:51:59.365Z" }, + { url = "https://files.pythonhosted.org/packages/c3/f7/c983d2762bcce2326c317c26a6a1e7016f7eb039c27cdf5c4e30f4160f31/wrapt-1.17.3-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:281262213373b6d5e4bb4353bc36d1ba4084e6d6b5d242863721ef2bf2c2930b", size = 87163, upload-time = "2025-08-12T05:52:40.965Z" }, + { url = "https://files.pythonhosted.org/packages/e4/0f/f673f75d489c7f22d17fe0193e84b41540d962f75fce579cf6873167c29b/wrapt-1.17.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:dc4a8d2b25efb6681ecacad42fca8859f88092d8732b170de6a5dddd80a1c8fa", size = 87963, upload-time = "2025-08-12T05:52:20.326Z" }, + { url = "https://files.pythonhosted.org/packages/df/61/515ad6caca68995da2fac7a6af97faab8f78ebe3bf4f761e1b77efbc47b5/wrapt-1.17.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:373342dd05b1d07d752cecbec0c41817231f29f3a89aa8b8843f7b95992ed0c7", size = 86945, upload-time = "2025-08-12T05:52:21.581Z" }, + { url = "https://files.pythonhosted.org/packages/d3/bd/4e70162ce398462a467bc09e768bee112f1412e563620adc353de9055d33/wrapt-1.17.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d40770d7c0fd5cbed9d84b2c3f2e156431a12c9a37dc6284060fb4bec0b7ffd4", size = 86857, upload-time = "2025-08-12T05:52:43.043Z" }, + { url = "https://files.pythonhosted.org/packages/2b/b8/da8560695e9284810b8d3df8a19396a6e40e7518059584a1a394a2b35e0a/wrapt-1.17.3-cp314-cp314-win32.whl", hash = 
"sha256:fbd3c8319de8e1dc79d346929cd71d523622da527cca14e0c1d257e31c2b8b10", size = 37178, upload-time = "2025-08-12T05:53:12.605Z" }, + { url = "https://files.pythonhosted.org/packages/db/c8/b71eeb192c440d67a5a0449aaee2310a1a1e8eca41676046f99ed2487e9f/wrapt-1.17.3-cp314-cp314-win_amd64.whl", hash = "sha256:e1a4120ae5705f673727d3253de3ed0e016f7cd78dc463db1b31e2463e1f3cf6", size = 39310, upload-time = "2025-08-12T05:53:11.106Z" }, + { url = "https://files.pythonhosted.org/packages/45/20/2cda20fd4865fa40f86f6c46ed37a2a8356a7a2fde0773269311f2af56c7/wrapt-1.17.3-cp314-cp314-win_arm64.whl", hash = "sha256:507553480670cab08a800b9463bdb881b2edeed77dc677b0a5915e6106e91a58", size = 37266, upload-time = "2025-08-12T05:52:56.531Z" }, + { url = "https://files.pythonhosted.org/packages/77/ed/dd5cf21aec36c80443c6f900449260b80e2a65cf963668eaef3b9accce36/wrapt-1.17.3-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:ed7c635ae45cfbc1a7371f708727bf74690daedc49b4dba310590ca0bd28aa8a", size = 56544, upload-time = "2025-08-12T05:51:51.109Z" }, + { url = "https://files.pythonhosted.org/packages/8d/96/450c651cc753877ad100c7949ab4d2e2ecc4d97157e00fa8f45df682456a/wrapt-1.17.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:249f88ed15503f6492a71f01442abddd73856a0032ae860de6d75ca62eed8067", size = 40283, upload-time = "2025-08-12T05:51:39.912Z" }, + { url = "https://files.pythonhosted.org/packages/d1/86/2fcad95994d9b572db57632acb6f900695a648c3e063f2cd344b3f5c5a37/wrapt-1.17.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5a03a38adec8066d5a37bea22f2ba6bbf39fcdefbe2d91419ab864c3fb515454", size = 40366, upload-time = "2025-08-12T05:52:00.693Z" }, + { url = "https://files.pythonhosted.org/packages/64/0e/f4472f2fdde2d4617975144311f8800ef73677a159be7fe61fa50997d6c0/wrapt-1.17.3-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:5d4478d72eb61c36e5b446e375bbc49ed002430d17cdec3cecb36993398e1a9e", size = 108571, upload-time = 
"2025-08-12T05:52:44.521Z" }, + { url = "https://files.pythonhosted.org/packages/cc/01/9b85a99996b0a97c8a17484684f206cbb6ba73c1ce6890ac668bcf3838fb/wrapt-1.17.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:223db574bb38637e8230eb14b185565023ab624474df94d2af18f1cdb625216f", size = 113094, upload-time = "2025-08-12T05:52:22.618Z" }, + { url = "https://files.pythonhosted.org/packages/25/02/78926c1efddcc7b3aa0bc3d6b33a822f7d898059f7cd9ace8c8318e559ef/wrapt-1.17.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e405adefb53a435f01efa7ccdec012c016b5a1d3f35459990afc39b6be4d5056", size = 110659, upload-time = "2025-08-12T05:52:24.057Z" }, + { url = "https://files.pythonhosted.org/packages/dc/ee/c414501ad518ac3e6fe184753632fe5e5ecacdcf0effc23f31c1e4f7bfcf/wrapt-1.17.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:88547535b787a6c9ce4086917b6e1d291aa8ed914fdd3a838b3539dc95c12804", size = 106946, upload-time = "2025-08-12T05:52:45.976Z" }, + { url = "https://files.pythonhosted.org/packages/be/44/a1bd64b723d13bb151d6cc91b986146a1952385e0392a78567e12149c7b4/wrapt-1.17.3-cp314-cp314t-win32.whl", hash = "sha256:41b1d2bc74c2cac6f9074df52b2efbef2b30bdfe5f40cb78f8ca22963bc62977", size = 38717, upload-time = "2025-08-12T05:53:15.214Z" }, + { url = "https://files.pythonhosted.org/packages/79/d9/7cfd5a312760ac4dd8bf0184a6ee9e43c33e47f3dadc303032ce012b8fa3/wrapt-1.17.3-cp314-cp314t-win_amd64.whl", hash = "sha256:73d496de46cd2cdbdbcce4ae4bcdb4afb6a11234a1df9c085249d55166b95116", size = 41334, upload-time = "2025-08-12T05:53:14.178Z" }, + { url = "https://files.pythonhosted.org/packages/46/78/10ad9781128ed2f99dbc474f43283b13fea8ba58723e98844367531c18e9/wrapt-1.17.3-cp314-cp314t-win_arm64.whl", hash = "sha256:f38e60678850c42461d4202739f9bf1e3a737c7ad283638251e79cc49effb6b6", size = 38471, upload-time = "2025-08-12T05:52:57.784Z" }, + { url = 
"https://files.pythonhosted.org/packages/1f/f6/a933bd70f98e9cf3e08167fc5cd7aaaca49147e48411c0bd5ae701bb2194/wrapt-1.17.3-py3-none-any.whl", hash = "sha256:7171ae35d2c33d326ac19dd8facb1e82e5fd04ef8c6c0e394d7af55a55051c22", size = 23591, upload-time = "2025-08-12T05:53:20.674Z" }, ] [[package]]