Conversation


@herniqeu herniqeu commented Dec 26, 2025

  • Replace list.pop(0) with deque.popleft() for O(1) queue dequeue
  • Cache compiled regex patterns in ResourceTemplate for URI matching
  • Cache field info mapping in FuncMetadata via lazy property
  • Throttle expired task cleanup with interval-based execution

These optimizations target high-frequency operations in message queuing, resource lookups, tool calls, and task store access.

Motivation and Context

How Has This Been Tested?

Breaking Changes

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation update

Checklist

  • I have read the MCP Documentation
  • My code follows the repository's style guidelines
  • New and existing tests pass locally
  • I have added appropriate error handling
  • I have added or updated documentation as needed

Additional context

from mcp.shared.experimental.tasks.store import TaskStore
from mcp.types import Result, Task, TaskMetadata, TaskStatus

CLEANUP_INTERVAL_SECONDS = 1.0


I guess this would be a breaking change? Setting a throttling limit on the cleanup may not resolve all expired tasks and is timing-dependent, but this makes sense if the tasks are accessed very frequently (though I'm not sure what the benchmarks on this look like). How often does this happen/are there use cases that would be addressed with throttling cleanup?
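For concreteness, the interval-based throttling under discussion typically looks something like the sketch below. `CLEANUP_INTERVAL_SECONDS` comes from the diff above; everything else is hypothetical:

```python
import time

CLEANUP_INTERVAL_SECONDS = 1.0  # constant from the diff


class ThrottledCleanupSketch:
    """Hypothetical sketch: run the expired-task sweep at most once per
    interval instead of on every store access."""

    def __init__(self) -> None:
        self._last_cleanup = 0.0
        self.sweeps = 0  # counter for illustration only

    def _maybe_cleanup(self) -> None:
        now = time.monotonic()
        if now - self._last_cleanup < CLEANUP_INTERVAL_SECONDS:
            return  # throttled: skip the sweep this time
        self._last_cleanup = now
        self.sweeps += 1  # real code would remove expired tasks here
```

This trades completeness for throughput, which matches the concern above: an expired task can linger for up to one interval after expiry before the next sweep removes it.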


nit: iiuc, these changes optimize repeated calls to pre_parse_json. I'm not sure what the protocol is for optimization-related tests, but it could be useful to add a small benchmark to the description: a ~20-field pydantic model with pre_parse_json called repeatedly (50k-100k times, before vs. after).
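A micro-benchmark along those lines might look like the sketch below; it uses a 20-field dataclass as a stand-in for the pydantic model, and the function names are hypothetical (they mimic the cached vs. uncached field-info mapping, not pre_parse_json itself):

```python
import timeit
from dataclasses import fields, make_dataclass

# Stand-in for a ~20-field pydantic model.
Model = make_dataclass("Model", [(f"field_{i}", int, 0) for i in range(20)])


def field_map_uncached() -> dict:
    # Rebuilds the name -> type mapping on every call (the "before" case).
    return {f.name: f.type for f in fields(Model)}


_CACHE = None


def field_map_cached() -> dict:
    # Builds the mapping once and reuses it (the "after" case).
    global _CACHE
    if _CACHE is None:
        _CACHE = {f.name: f.type for f in fields(Model)}
    return _CACHE


if __name__ == "__main__":
    n = 50_000
    before = timeit.timeit(field_map_uncached, number=n)
    after = timeit.timeit(field_map_cached, number=n)
    print(f"uncached: {before:.3f}s  cached: {after:.3f}s over {n} calls")
```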

