Grand Central’s MCP servers turn your banking and payment APIs into tools that AI agents can discover and use - without writing custom integrations for every agent platform. Your existing APIs become AI-ready. The platform handles protocol translation, authentication, rate limiting, and audit logging. You control which operations to expose as MCP tools through the admin portal - many common operations are pre-configured and ready to enable immediately. No MCP expertise required.

What is the Model Context Protocol?

The Model Context Protocol (MCP) is an open standard that lets AI agents interact with external systems through a unified interface. Instead of building custom connectors for Claude, Copilot, ChatGPT, and every future AI platform, you expose your APIs once via MCP and every compatible agent can use them. Three mechanisms do the work:
  • Tool discovery - works like API documentation for agents: they call tools/list and receive structured definitions of available operations, with parameters and expected responses.
  • Tool invocation - executes those operations using JSON-RPC over HTTP, passing arguments and returning results in a format agents understand.
  • User context propagation - maintains identity throughout the request chain, ensuring that when an agent calls getAccountBalance, it retrieves data for the authenticated user, not arbitrary accounts.

Why MCP for financial services?

Security and compliance aren’t optional in banking. When AI agents need access to customer accounts, payment systems, or transaction history, you can’t just hand out API keys and hope for the best. Grand Central’s MCP implementation provides zero-trust tool exposure - only operations you explicitly enable become accessible to agents. User context propagates through the entire request chain, ensuring agents can’t access data outside their authorization scope. Every tool invocation gets logged with full request/response details for audit trails that satisfy SOC 2, GDPR, and HIPAA requirements.

You control access through the admin portal. Browse your API catalog, select operations to expose as MCP tools, and apply security policies - all through a self-service interface. Automated validation checks security implications, data classification, and compliance requirements as you configure tools. Many common banking operations (account queries, balance checks, transaction lookups) are pre-approved and ready to enable immediately. Custom operations go through automated security scanning before becoming available. This self-service model means you control exactly what AI agents can do, with platform support when you need it.

Your team doesn’t need to become MCP experts. Grand Central handles protocol implementation, authentication flows, and error handling. Developers work with the same REST APIs and subscription keys they already use. You configure MCP servers through the self-service admin portal by selecting operations from your API catalog - no custom code required. When Claude Desktop, Copilot Studio, or custom agents connect, they automatically discover available tools and start working.

Infrastructure scales without operational overhead. Grand Central manages MCP server deployments, handles demand spikes, and maintains enterprise SLAs. Your team focuses on building AI agent features, not managing protocol gateways or debugging connection issues at 2 AM.

How it works: Protocol basics

MCP uses JSON-RPC 2.0 over HTTP - the same request/response pattern you’re already familiar with, just structured for AI agents. When an agent connects to Grand Central’s MCP server, it starts by discovering available tools:
POST https://your-instance.example.com/mcp
{
  "jsonrpc": "2.0",
  "method": "tools/list",
  "id": 1
}
Grand Central responds with tool definitions generated from your OpenAPI specifications. Each tool includes a name (from the operationId), a human-readable description, and an input schema defining the required parameters:
{
  "jsonrpc": "2.0",
  "result": {
    "tools": [
      {
        "name": "getCustomerProfile",
        "description": "Retrieve customer account details and preferences",
        "inputSchema": {
          "type": "object",
          "properties": {
            "customerId": { 
              "type": "string",
              "description": "Unique customer identifier"
            }
          },
          "required": ["customerId"]
        }
      }
    ]
  },
  "id": 1
}
When the agent needs customer data, it invokes the tool with arguments:
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "getCustomerProfile",
    "arguments": { "customerId": "CUST-789123" }
  },
  "id": 2
}
Grand Central validates authentication, checks rate limits, calls your backend API, and returns the response. The agent receives structured data it can reason about and present to users.
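The result follows the MCP tools/call response shape: a content array carrying the tool output, with an optional isError flag. The sketch below is illustrative - the profile fields shown are examples, not a fixed Grand Central schema:
{
  "jsonrpc": "2.0",
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{ \"customerId\": \"CUST-789123\", \"name\": \"Alex Example\", \"preferredChannel\": \"email\" }"
      }
    ],
    "isError": false
  },
  "id": 2
}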

Connecting an AI agent

Grand Central provides a secure MCP endpoint that sits between AI agents and your banking APIs. Agents connect to the endpoint, authenticate with subscription keys, and discover available tools automatically. To connect an AI agent, you’ll need credentials from your Grand Central administrator: the MCP endpoint URL and a subscription key. For Claude Desktop on macOS, add this configuration to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
  "mcpServers": {
    "grandcentral": {
      "url": "https://your-instance.example.com/mcp",
      "headers": {
        "Ocp-Apim-Subscription-Key": "your-subscription-key-here"
      }
    }
  }
}
Restart Claude Desktop and tools appear automatically in the MCP panel. The agent calls tools/list on startup to discover what’s available, then invokes tools as needed during conversations. User-scoped operations optionally accept JWT tokens in the Authorization header to maintain user identity through the request chain.
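For those user-scoped calls, the same configuration can carry a bearer token alongside the subscription key. This is a sketch, assuming your administrator has enabled JWT pass-through for the tools you use; the token value is a placeholder:
{
  "mcpServers": {
    "grandcentral": {
      "url": "https://your-instance.example.com/mcp",
      "headers": {
        "Ocp-Apim-Subscription-Key": "your-subscription-key-here",
        "Authorization": "Bearer your-jwt-token-here"
      }
    }
  }
}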

Real-world examples

Customer service agents handle thousands of routine questions about account balances, transaction history, and product details. Instead of building custom integrations for every AI platform, expose getAccountBalance, getTransactionHistory, and getProductCatalog as MCP tools. Claude Desktop, Copilot Studio, and custom agents can all use them immediately. When a customer asks “What’s my checking account balance?”, the agent calls the tool with their authenticated context and returns the answer in seconds.

Lending advisors need real-time credit calculations to help customers understand loan options. Expose calculateLoanEligibility and getInterestRates as tools. When an advisor asks “Can this customer qualify for a $50,000 mortgage?”, the AI agent invokes the tool with customer income, credit score, and loan parameters, receives structured results, and explains options in plain language. The platform team controls exactly which data fields the tool can access, preventing exposure of sensitive internal scoring algorithms.

Payment operations teams deal with IBAN validation, fraud checks, and compliance workflows. Expose validateIBAN and assessFraudRisk as tools. Agents can validate international bank account numbers before processing transfers, reducing error rates and manual verification steps. Rate limiting prevents abuse - if someone tries to validate 10,000 IBANs in a minute, they hit quota limits and receive HTTP 429 responses instead of overwhelming backend systems.
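To make the payments case concrete, an IBAN check is just another tools/call. The sketch below assumes validateIBAN takes a single iban parameter - your actual input schema comes from the OpenAPI operation you exposed, and the IBAN shown is a standard test value:
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "validateIBAN",
    "arguments": { "iban": "DE89370400440532013000" }
  },
  "id": 3
}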

How to get started

Log into the Grand Central admin portal and navigate to the MCP Tools section. You’ll see your API catalog with operations tagged by data sensitivity and common use cases. Many standard banking operations (account balances, transaction history, customer profiles) are pre-configured and ready to enable with a single click.

Enable tools for your use case. Start with read-only operations on low-sensitivity data - getAccountBalance is safer than deleteAccount. Look for operations that customer service agents frequently need or that would benefit from automation. For each tool you enable, configure rate limits, authentication requirements, and which subscriptions have access. Changes take effect immediately.

For custom operations not yet in the catalog, the portal walks you through adding them. Upload or link your OpenAPI specification, select operations to expose, and the system runs automated security validation. Operations classified as low-risk (read-only, public data) enable immediately. Higher-risk operations (write access, PII exposure) get flagged for review - platform support typically approves within 1-3 business days.

Generate subscription keys through the portal for each environment (development, staging, production). Configure your AI agent by adding Grand Central as an MCP server with your endpoint URL and subscription key. Restart the agent and tools appear automatically. Test with tool discovery (tools/list) to verify connectivity - a sketch of that call appears at the end of this section - then try invoking a read-only tool with test data.

Platform support is available when you need help with security policies, performance optimization, or complex integrations. But for standard use cases, you’re in control from start to finish.

Current implementation focuses on tools - operations that agents can invoke with parameters and receive structured responses. MCP also supports resources (data that agents can read) and prompts (templated interactions), but Grand Central currently exposes APIs as tools only. This covers the vast majority of use cases: retrieving data, performing calculations, and triggering backend operations.
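The connectivity test mentioned above is the same discovery call from earlier, with your subscription key supplied as a header. In this sketch, the endpoint URL and key are placeholders for the values your administrator provides:
POST https://your-instance.example.com/mcp
Ocp-Apim-Subscription-Key: your-subscription-key-here
Content-Type: application/json

{
  "jsonrpc": "2.0",
  "method": "tools/list",
  "id": 1
}
A response listing the tools enabled for your subscription confirms the key and endpoint are working.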

Next steps

  • Architecture - Understand how MCP servers integrate with Grand Central’s infrastructure
  • Getting Started - Step-by-step guide to requesting access and connecting your first agent
  • Tool Discovery - How agents find and understand available operations
  • Best Practices - Patterns for secure, scalable agent implementations
For questions about MCP implementation or to request access, contact your Grand Central administrator. Official MCP protocol specification: modelcontextprotocol.io