Tool discovery is how AI agents learn what your APIs can do. When an agent connects to Grand Central’s MCP server, it calls tools/list and receives structured definitions of every operation you’ve exposed - like interactive API documentation that agents can parse and reason about.

How discovery works

Agents send a simple JSON-RPC request to the /mcp endpoint:
{
  "jsonrpc": "2.0",
  "method": "tools/list",
  "id": 1
}
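Issuing this call from Python might look like the following sketch. The endpoint URL is a placeholder, and the helper names are illustrative - they are not part of any Grand Central SDK:

```python
import json
import urllib.request

def build_tools_list_request(rpc_id=1):
    """Construct the JSON-RPC 2.0 payload for tool discovery."""
    return {"jsonrpc": "2.0", "method": "tools/list", "id": rpc_id}

def discover_tools(mcp_url, rpc_id=1):
    """POST a tools/list request to the /mcp endpoint and return the tool list."""
    payload = json.dumps(build_tools_list_request(rpc_id)).encode("utf-8")
    req = urllib.request.Request(
        mcp_url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["result"]["tools"]
```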
Grand Central responds with tool definitions generated from your OpenAPI specifications. Each tool includes a name, human-readable description, and JSON Schema defining expected parameters:
{
  "jsonrpc": "2.0",
  "result": {
    "tools": [
      {
        "name": "getAccountBalance",
        "description": "Retrieve current account balance and available credit for a customer account",
        "inputSchema": {
          "type": "object",
          "properties": {
            "accountId": {
              "type": "string",
              "description": "Account identifier (format: ACC-XXXXXX)",
              "pattern": "^ACC-[0-9]{6}$"
            }
          },
          "required": ["accountId"]
        }
      }
    ]
  },
  "id": 1
}
Notice the rich descriptions and validation rules - these help agents understand when to use tools and how to construct valid requests. The pattern constraint prevents malformed account IDs, and the detailed description explains what data the tool returns.
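Those validation rules can be enforced client-side before an agent ever invokes the tool. A minimal hand-rolled sketch is below; a production agent would more likely use a full JSON Schema validator such as the `jsonschema` package, which covers far more keywords than `required` and `pattern`:

```python
import re

def validate_args(input_schema, args):
    """Check required fields and string pattern constraints from a tool's
    inputSchema. Returns a list of error messages (empty when valid)."""
    errors = []
    for field in input_schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    for field, spec in input_schema.get("properties", {}).items():
        if field in args and "pattern" in spec:
            # JSON Schema patterns are unanchored searches, hence re.search
            if not re.search(spec["pattern"], str(args[field])):
                errors.append(f"{field} does not match {spec['pattern']}")
    return errors

schema = {
    "type": "object",
    "properties": {
        "accountId": {"type": "string", "pattern": "^ACC-[0-9]{6}$"}
    },
    "required": ["accountId"],
}
```

With this in place, `validate_args(schema, {"accountId": "ACC-12"})` reports the pattern violation, while a well-formed ID like `ACC-123456` passes cleanly.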

From OpenAPI to MCP tools

Grand Central automatically translates your OpenAPI specifications into MCP tool definitions. The tool name comes from operationId, descriptions combine summary and description fields, and input schemas are generated from parameter definitions. This means your existing API documentation becomes agent-readable without additional work. Here’s how the mapping works:
# Your OpenAPI specification
paths:
  /accounts/{accountId}/balance:
    get:
      operationId: getAccountBalance  # → Tool name
      summary: Get account balance  # → Short description
      description: |
        Returns current balance and available credit for the specified account.
        Requires user authentication via JWT token.  # → Detailed description
      
      parameters:
        - name: accountId  # → Input schema property
          in: path
          required: true
          schema:
            type: string
            pattern: ^ACC-[0-9]{6}$
          description: Account identifier  # → Property description
      
      responses:
        '200':
          description: Balance information  # → Output schema description
          content:
            application/json:
              schema:
                type: object
                properties:
                  currentBalance:
                    type: number
                  availableCredit:
                    type: number
When this operation passes automated security validation and gets approved, it becomes the getAccountBalance tool with full type safety and validation rules. Better OpenAPI documentation means better AI agent behavior - agents understand context, constraints, and expected responses without guessing.
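The mapping described above can be approximated in a few lines. This sketch handles only parameter-based operations with simple schemas; the actual translation layer presumably also covers request bodies and the full range of schema keywords:

```python
def operation_to_tool(operation):
    """Translate one OpenAPI operation object into an MCP tool definition:
    operationId -> name, summary + description -> description,
    parameters -> inputSchema properties."""
    properties, required = {}, []
    for param in operation.get("parameters", []):
        prop = dict(param.get("schema", {}))
        if "description" in param:
            prop["description"] = param["description"]
        properties[param["name"]] = prop
        if param.get("required"):
            required.append(param["name"])
    description = " ".join(
        part for part in (operation.get("summary"), operation.get("description"))
        if part
    ).strip()
    return {
        "name": operation["operationId"],
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }
```

Feeding this function the `getAccountBalance` operation from the example above yields a tool definition shaped like the discovery response shown earlier.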

Performance and caching

Tool discovery is fast because Grand Central caches aggressively. Tool definitions change infrequently - only when new operations are approved through the access workflow - so responses are served from memory most of the time. Typical discovery latency is 150-300ms, fast enough that agents can call it on startup without noticeable delay.

Cache invalidation happens automatically when tool configurations change: when new tools are added or existing ones are updated, the cache refreshes within seconds. Agents don't need to manage server-side invalidation - they can safely cache discovery results locally for 5-10 minutes to reduce repeated calls.

Agents can also filter tools client-side based on their needs. If an agent only handles payment operations, it can filter the tool list for names matching payment* or validate*. Authentication requirements, parameter complexity, and tool tags (if configured) provide additional filtering criteria.
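On the agent side, the local caching and name filtering described above might look like this sketch. The `fetch` callable, class name, and 5-minute TTL are assumptions for illustration, not a Grand Central API:

```python
import time
from fnmatch import fnmatch

class ToolCache:
    """Cache discovery results locally and filter them by name pattern."""

    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch        # callable that performs tools/list
        self._ttl = ttl_seconds    # 5 minutes, within the suggested 5-10 min window
        self._tools = None
        self._fetched_at = 0.0

    def tools(self):
        """Return cached tool definitions, refreshing when the TTL expires."""
        if self._tools is None or time.monotonic() - self._fetched_at > self._ttl:
            self._tools = self._fetch()
            self._fetched_at = time.monotonic()
        return self._tools

    def matching(self, *patterns):
        """Return tools whose names match any glob pattern, e.g. 'payment*'."""
        return [t for t in self.tools()
                if any(fnmatch(t["name"], p) for p in patterns)]
```

A payments-only agent could then call `cache.matching("payment*", "validate*")` and ignore the rest of the catalog while staying within the discovery endpoint's cache-friendly usage pattern.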

Discovery performance at scale

Grand Central’s discovery endpoint handles 100+ concurrent agents without degradation. Whether you have 10 agents or 1,000, discovery latency stays consistent. The system tracks cache hit rates (typically >80%) and automatically scales when usage spikes. For deployments with 20 to 100 tools per server (typical range), discovery responses are small enough (under 50KB) that network latency dominates response time. If your backend APIs are well-documented with rich OpenAPI specifications, tool definitions become self-explanatory - agents understand context without trial-and-error.

Next steps