Grand Central’s MCP capabilities evolve based on user feedback. Submit feature requests through the admin portal if you need functionality beyond current features - your use case might inform the roadmap.
MCP protocol limitations
Grand Central supports tools only - the function-calling capability of MCP. The platform does not currently support MCP resources (contextual data like files or databases that agents can query), prompts (pre-defined prompt templates), or sampling (AI model invocation through the MCP server). This means your agent can invoke backend API operations but cannot fetch static resources or use server-provided prompts. Workaround: Include necessary context directly in tool responses (e.g., getCustomer returns both profile data and suggested next steps, as in the sketch below). Store prompts in your agent’s configuration rather than expecting them from the server. Use external MCP servers alongside Grand Central if you need resources or prompts.
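A minimal sketch of the first workaround, assuming a Node-style TypeScript agent; the endpoint, header name, response shape, and the customer_listTickets tool are all hypothetical:

```typescript
// Hypothetical getCustomer handler: because MCP resources and prompts are
// unavailable, the tool response carries the contextual data and guidance
// the agent would otherwise fetch separately. Names and endpoint are
// illustrative, not Grand Central APIs.
interface CustomerToolResult {
  profile: { id: string; name: string; tier: string };
  suggestedNextSteps: string[]; // server-side guidance in lieu of MCP prompts
}

async function getCustomer(customerId: string): Promise<CustomerToolResult> {
  const res = await fetch(`https://api.example.com/customers/${customerId}`, {
    headers: { "api-key": process.env.GC_API_KEY ?? "" }, // assumed header name
  });
  const profile = (await res.json()) as CustomerToolResult["profile"];
  return {
    profile,
    suggestedNextSteps: [
      "Verify contact details before making account changes.",
      "Check open tickets with customer_listTickets.",
    ],
  };
}
```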
All tools share a single global namespace across every API exposed through Grand Central. Tool names must be unique across the entire platform - you cannot have getProfile() in both the Customer API and the Account API; a name collision causes a registration failure. The platform team enforces consistent prefixing: customer_getProfile(), account_getProfile(). This prevents ambiguity when agents discover hundreds of tools.
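Because a collision only surfaces at registration time, it can be worth checking names locally before submitting a configuration. A sketch; the helper and registration list are hypothetical:

```typescript
// Pre-registration collision check mirroring the platform's global-namespace
// rule: every tool name must be unique across all APIs.
function assertUniqueToolNames(tools: { name: string }[]): void {
  const seen = new Set<string>();
  for (const { name } of tools) {
    if (seen.has(name)) {
      throw new Error(`Tool name collision: ${name} is already registered`);
    }
    seen.add(name);
  }
}

assertUniqueToolNames([
  { name: "customer_getProfile" }, // Customer API, prefixed
  { name: "account_getProfile" },  // Account API, same verb, no collision
]);
```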
Only HTTP transport is supported (JSON-RPC 2.0 over HTTP). Server-Sent Events (SSE) and WebSocket transports are not available. This doesn’t impact most use cases - HTTP works fine for request/response patterns. If you need streaming responses or real-time updates, implement polling or use webhooks outside MCP.
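Concretely, a tool invocation is a single HTTP POST carrying a JSON-RPC 2.0 envelope. The method and params shape below follow the MCP specification; the endpoint URL and auth header name are assumptions:

```typescript
// One JSON-RPC 2.0 request, one response - no streaming transport involved.
const response = await fetch("https://grandcentral.example.com/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "api-key": process.env.GC_API_KEY ?? "", // subscription key (assumed header)
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: { name: "customer_getProfile", arguments: { customerId: "C-1001" } },
  }),
});
const result = await response.json(); // single response per request
```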
Access and governance
New tools require automated validation and potential manual review. You cannot instantly expose all backend APIs as MCP tools - low-risk operations complete validation in 1 to 3 days, while high-risk operations may require manual review extending to 3 to 5 business days. Validation checks security implications (what data gets exposed? who can access it?), rate-limiting requirements (expected load? appropriate limits?), compliance considerations (PII handling? audit requirements?), and tool naming and documentation quality (clear descriptions? proper prefixes?). Workaround: Plan ahead when designing agent features that need new tools, and submit configurations early through the admin portal with clear business justification to speed up validation.
Custom tool creation has constraints. While the admin portal allows self-service tool enablement from OpenAPI specs, you cannot bypass automated validation or directly modify MCP server infrastructure. This ensures security reviews happen, compliance requirements are met, and tool namespace conflicts are prevented: multiple teams working independently could otherwise create colliding tool names or insecure configurations. Centralized governance trades convenience for safety. Workaround: Provide detailed configurations through the admin portal with business justification, expected usage patterns, and security requirements. The more context you provide upfront, the faster validation completes.
Security constraints
API key authentication is the primary authentication mechanism (subscription keys). JWT authentication is available as an optional extension (see the Authentication docs), but by default, tool access is tied to the API key, not individual end-users. All agents sharing a key have the same permissions - there is no tool-level access control. Workaround: Use different API keys for different applications/environments (dev vs prod, support agent vs customer-facing agent). Implement user-level authorization in your agent logic (check JWT claims before invoking sensitive tools). Configure scoped tools through the admin portal (e.g., getMyProfile that only returns the authenticated user’s data vs getAnyProfile that requires admin permissions).
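A sketch of the user-level authorization workaround. The claim names and sensitive-tool list are assumptions, and jwtDecode (from the jwt-decode package) only decodes the token - signature verification must happen elsewhere, e.g. via the optional JWT extension:

```typescript
import { jwtDecode } from "jwt-decode";

interface Claims {
  roles?: string[]; // assumed claim shape
}

// Gate sensitive tools in agent code, since every agent sharing an API key
// has identical permissions at the platform level.
function canInvoke(toolName: string, userJwt: string): boolean {
  const sensitiveTools = new Set(["getAnyProfile", "account_closeAccount"]);
  if (!sensitiveTools.has(toolName)) return true;
  const claims = jwtDecode<Claims>(userJwt);
  return claims.roles?.includes("admin") ?? false;
}
```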
Coarse-grained permissions mean an API key grants access to all tools in the MCP server. You cannot give an agent access to only specific tools - if the API key unlocks an MCP server with 10 tools, the agent can invoke all 10. Workaround: Configure separate MCP servers through the portal for different permission levels (basic-tools vs admin-tools). Implement tool filtering in your agent’s prompt engineering (instruct agents not to call certain tools). Use tool descriptions to guide agent behavior (“ADMIN ONLY: Do not use unless explicitly authorized”).
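A sketch of client-side tool filtering, leaning on the “ADMIN ONLY” description convention above. Note this is guidance for the model, not enforcement - the key still unlocks every tool server-side:

```typescript
// Tool shape follows the MCP tools/list result; the role flag is an assumption.
interface Tool {
  name: string;
  description?: string;
}

// Narrow the tool list before handing it to the model; tools without a
// description pass through unfiltered.
function filterToolsForRole(tools: Tool[], isAdmin: boolean): Tool[] {
  return tools.filter((t) => isAdmin || !t.description?.startsWith("ADMIN ONLY"));
}
```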
Performance constraints
Rate limits are enforced per API key, with typical limits of 100 requests/minute (standard tier) or 500 requests/minute (premium tier). Burst allowances tolerate short-term spikes (20 requests/second for 2 to 3 seconds), but sustained high-volume agents hit limits. Workaround: Cache tool discovery results (call tools/list once at startup, not per request), cache tool responses in conversation context (don’t call getCustomerProfile five times in one conversation), batch operations when the backend supports it, or adjust rate limits through the admin portal.
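A caching sketch covering the first two workarounds; listTools and callTool stand in for whichever MCP client library you actually use:

```typescript
declare function listTools(): Promise<{ name: string }[]>;
declare function callTool(name: string, args: object): Promise<unknown>;

let discoveredTools: { name: string }[] | null = null;
const responseCache = new Map<string, unknown>();

// Discover tools once at startup instead of per request.
async function getTools(): Promise<{ name: string }[]> {
  if (!discoveredTools) discoveredTools = await listTools(); // tools/list, once
  return discoveredTools;
}

// Memoize results for the life of a conversation - only safe for
// read-only tools whose results don't change mid-conversation.
async function cachedCall(name: string, args: object): Promise<unknown> {
  const key = `${name}:${JSON.stringify(args)}`;
  if (!responseCache.has(key)) responseCache.set(key, await callTool(name, args));
  return responseCache.get(key);
}
```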
Response time variability depends on backend API performance. Tool discovery typically completes in under 500ms, simple reads in 1 to 3 seconds, complex operations in 5 to 10 seconds, and report generation in 30+ seconds. Agents must handle this variability without timing out or confusing users. Workaround: Set realistic user expectations (“Generating your report, this takes about 30 seconds…”), implement reasonable timeouts (30 to 60 seconds for most operations), show progress indicators for slow operations, and use async patterns for long-running tasks where possible.
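One way to implement the timeout guidance with only the standard library; the helper name is ours:

```typescript
// Abort rather than hang when a backend is slow; default matches the
// 30-60 second guidance above.
async function callWithTimeout<T>(
  work: (signal: AbortSignal) => Promise<T>,
  timeoutMs = 60_000,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await work(controller.signal);
  } finally {
    clearTimeout(timer); // always clear so fast calls don't leak timers
  }
}

// Usage: pass the signal into fetch so the HTTP request is actually cancelled.
// const report = await callWithTimeout((signal) =>
//   fetch(mcpEndpoint, { method: "POST", body: payload, signal }),
// );
```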
Audit logging overhead adds latency (typically 50-200ms per request) because all tool invocations are logged for compliance. This is a deliberate trade-off: audit trail vs. raw performance. The logging enables security investigations, compliance reporting, and usage analytics. No workaround - audit logging is mandatory.
Monitoring and observability
Built-in analytics are basic. The platform dashboard shows total tool invocations per day, overall error rates, rate limit consumption, and authentication failures. It does not provide per-tool usage breakdowns, latency percentiles (P50/P95/P99), custom dimensions/tags, or real-time dashboards with sub-minute granularity. Workaround: Implement your own metrics collection in agent code (track which tools get called, response times, and error types), export data from the dashboard for deeper analysis, or use correlation IDs in requests to trace end-to-end flows.
No automatic alerting. The platform doesn’t push notifications when things go wrong - you must manually check the dashboard for errors, rate limit hits, or anomalies. Workaround: Implement your own alerting based on agent metrics (alert when the error rate exceeds 5%, rate-limit rejections exceed 10% of requests, or authentication failures spike), review the Grand Central dashboard periodically (weekly at minimum), or configure monitoring alerts through the admin portal for critical use cases. A sketch of homegrown metrics follows.
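The 5% threshold below follows the guidance above; the 20-call minimum and console.warn sink are placeholders for your real monitoring pipeline:

```typescript
// Homegrown per-tool metrics to fill the dashboard's gaps
// (no per-tool breakdown, no alerting).
const stats = new Map<string, { calls: number; errors: number }>();

function recordCall(tool: string, ok: boolean): void {
  const s = stats.get(tool) ?? { calls: 0, errors: 0 };
  s.calls += 1;
  if (!ok) s.errors += 1;
  stats.set(tool, s);
  const errorRate = s.errors / s.calls;
  // Wait for a minimum sample size before alerting on the error rate.
  if (s.calls >= 20 && errorRate > 0.05) {
    console.warn(`ALERT: ${tool} error rate ${(errorRate * 100).toFixed(1)}%`);
  }
}
```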
Integration limitations
No webhook support means Grand Central cannot push notifications to your agent. Agents must poll for updates rather than receiving real-time events. If your use case needs to react immediately to backend changes (new customer signup, payment processed, account status changed), MCP alone won’t work. Workaround: Implement polling with reasonable intervals (every 30 to 60 seconds for time-sensitive data), use external event systems alongside MCP (Azure Event Grid, AWS EventBridge), or request tools that support long-polling patterns where appropriate.
No native batch operations - each tool call is individual. You cannot invoke getCustomers([id1, id2, id3]) to fetch three customers in one request; you must make three separate calls: getCustomer(id1), getCustomer(id2), getCustomer(id3). This increases latency and consumes more rate limit capacity. Workaround: Enable search/list tools through the portal that return multiple results (searchCustomers with filters), implement parallel requests in your agent to reduce total wall-clock time (see the sketch below), or check if specific backends support bulk operations that can be exposed.
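The parallel-request workaround in miniature; callTool again stands in for your MCP client, and each call still consumes rate-limit budget individually:

```typescript
declare function callTool(name: string, args: object): Promise<unknown>;

// Three individual calls issued concurrently: total latency approaches the
// slowest single call rather than the sum of all three.
const ids = ["C-1001", "C-1002", "C-1003"];
const customers = await Promise.all(
  ids.map((id) => callTool("getCustomer", { customerId: id })),
);
```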
No per-tool pricing visibility makes cost optimization difficult if your subscription has usage-based pricing. The dashboard shows total cost, but doesn’t break it down by tool - you can’t tell if generateReport costs 10x more than getCustomer. Workaround: Track tool usage in your application code, export usage data from the dashboard to analyze per-tool costs, and use expensive operations sparingly based on business value.
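A sketch of per-tool cost attribution in application code; every price below is a made-up placeholder:

```typescript
// Pair your own usage counters with assumed unit costs, since the dashboard
// only reports a total.
const unitCost: Record<string, number> = {
  getCustomer: 0.001,
  generateReport: 0.01, // hypothetically 10x the cheap read
};
const usage = new Map<string, number>();

function chargeFor(tool: string): void {
  usage.set(tool, (usage.get(tool) ?? 0) + 1);
}

function estimatedSpend(): number {
  let total = 0;
  for (const [tool, calls] of usage) total += calls * (unitCost[tool] ?? 0);
  return total;
}
```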