An agent calls `tools/list` and receives structured definitions of every operation you’ve exposed - like interactive API documentation that agents can parse and reason about.
How discovery works
Agents send a simple JSON-RPC request to the `/mcp` endpoint:
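A minimal sketch of that request in TypeScript, assuming a hypothetical gateway URL and bearer token (the actual endpoint path and auth scheme depend on your deployment; see Authentication below):

```typescript
// Minimal sketch: discover tools from an MCP endpoint via JSON-RPC.
// The URL and auth header are placeholders, not Grand Central specifics.
async function discoverTools(endpoint: string, token: string) {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json",
      // Hypothetical; your deployment may use a subscription key instead.
      Authorization: `Bearer ${token}`,
    },
    // Standard MCP JSON-RPC request for tool discovery.
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
  });
  const { result } = await response.json();
  return result.tools; // array of { name, description, inputSchema }
}
```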
From OpenAPI to MCP tools
Grand Central automatically translates your OpenAPI specifications into MCP tool definitions. The tool name comes from `operationId`, descriptions combine the `summary` and `description` fields, and input schemas are generated from parameter definitions. This means your existing API documentation becomes agent-readable without additional work.
Here’s how the mapping works:
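For example, here is a sketch of the translation for a hypothetical `getAccountBalance` operation (abbreviated and illustrative - Grand Central’s exact output format may differ):

```typescript
// Hypothetical OpenAPI operation (abbreviated) - names are illustrative.
const openApiOperation = {
  operationId: "getAccountBalance", // -> becomes the MCP tool name
  summary: "Get account balance",
  description: "Returns the current balance for an account.",
  parameters: [
    { name: "accountId", in: "path", required: true, schema: { type: "string" } },
  ],
};

// The MCP tool definition derived from it (sketch):
const mcpTool = {
  name: openApiOperation.operationId,
  // summary and description fields are combined into one description.
  description: `${openApiOperation.summary}. ${openApiOperation.description}`,
  // Parameter definitions become a JSON Schema for the tool's input.
  inputSchema: {
    type: "object",
    properties: { accountId: { type: "string" } },
    required: ["accountId"],
  },
};
```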
The result is a `getAccountBalance` tool with full type safety and validation rules. Better OpenAPI documentation means better AI agent behavior - agents understand context, constraints, and expected responses without guessing.
Performance and caching
Tool discovery is fast because Grand Central caches aggressively. Tool definitions change infrequently - only when new operations are approved through the access workflow - so responses are served from memory most of the time. Typical discovery latency is 150-300ms, fast enough that agents can call it on startup without noticeable delay.

Cache invalidation happens automatically when tool configurations change. When new tools are added or existing ones are updated, the cache refreshes within seconds. Agents don’t need to manage cache invalidation - they can safely cache discovery results locally for 5-10 minutes to reduce repeated calls.

Agents can also filter tools client-side based on their needs. If an agent only handles payment operations, it can filter the tool list for names matching `payment*` or `validate*`. Authentication requirements, parameter complexity, and tool tags (if configured) provide additional filtering criteria.
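Putting those two suggestions together, here is a sketch of short-lived local caching plus a name-prefix filter, reusing the hypothetical `discoverTools` helper from the earlier example:

```typescript
// Sketch: cache discovery results locally and filter client-side.
// The TTL and prefixes are illustrative, not prescribed by Grand Central.
let cachedTools: { name: string; description: string }[] | null = null;
let cachedAt = 0;
const TTL_MS = 5 * 60 * 1000; // within the 5-10 minute range suggested above

async function getPaymentTools(endpoint: string, token: string) {
  if (!cachedTools || Date.now() - cachedAt > TTL_MS) {
    cachedTools = await discoverTools(endpoint, token); // from the earlier sketch
    cachedAt = Date.now();
  }
  // Keep only tools whose names match payment* or validate*.
  return cachedTools.filter(
    (t) => t.name.startsWith("payment") || t.name.startsWith("validate"),
  );
}
```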
Discovery performance at scale
Grand Central’s discovery endpoint handles 100+ concurrent agents without degradation. Whether you have 10 agents or 1,000, discovery latency stays consistent. The system tracks cache hit rates (typically >80%) and automatically scales when usage spikes. For deployments with 20 to 100 tools per server (the typical range), discovery responses are small enough (under 50KB) that network latency dominates response time. If your backend APIs are well-documented with rich OpenAPI specifications, tool definitions become self-explanatory - agents understand context without trial-and-error.

Next steps
- Tool Invocation - How agents execute tools and handle responses
- Authentication - Subscription keys, JWT tokens, and user context
- Best Practices - Patterns for efficient tool discovery and caching