The MCP (Model Context Protocol) Integration in Definable.ai enables AI agents to connect to external services and tools through a standardized protocol. It provides intelligent server recommendations, multi-tool orchestration, and seamless OAuth-based authentication for accessing third-party platforms like Gmail, Slack, GitHub, and hundreds of other services.

What is MCP?

Model Context Protocol (MCP) is an open standard that enables Large Language Models to securely connect to external data sources and tools. Think of it as a universal adapter that lets AI agents interact with any service that implements the protocol. Key Benefits:
  • Universal Standard: One protocol to connect to hundreds of services
  • Secure: OAuth-based authentication with proper access controls
  • Contextual: AI agents can access real-time data from connected services
  • Extensible: Easy to add new MCP servers and tools
  • Intelligent: AI automatically discovers and uses relevant tools

MCP System Architecture

Core Components

1. MCP Servers

MCP Servers are external services that implement the Model Context Protocol. Properties:
  • Toolkit Name: Human-readable name (e.g., “Gmail”, “Slack”)
  • Toolkit Slug: URL-safe identifier (e.g., “gmail”, “slack”)
  • Auth Scheme: Authentication method (OAuth2, API Key, etc.)
  • Tools: Collection of functions the server provides
  • Logo: Visual identifier for the service
Example MCP Servers:
  • Gmail MCP Server: List Emails, Send Email, Search Emails, Read Email, Delete Email
  • Slack MCP Server: Send Message, List Channels, Create Channel, Upload File
  • GitHub MCP Server: Create Issue, List Repos, Create PR, Comment on Issue
  • Notion MCP Server: Create Page, Update Database, Search Content
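The server properties above can be modeled as a simple record. This is a hedged sketch only; the field names mirror the bullet list, not the actual Definable.ai schema.

```python
from dataclasses import dataclass, field

@dataclass
class MCPServer:
    # Field names follow the properties listed above; illustrative only.
    toolkit_name: str          # human-readable name, e.g. "Gmail"
    toolkit_slug: str          # URL-safe identifier, e.g. "gmail"
    auth_scheme: str           # authentication method, e.g. "OAUTH2"
    logo_url: str              # visual identifier for the service
    tools: list[str] = field(default_factory=list)  # functions the server provides

# Example record for the Gmail MCP server described above
gmail = MCPServer(
    toolkit_name="Gmail",
    toolkit_slug="gmail",
    auth_scheme="OAUTH2",
    logo_url="https://example.com/gmail.png",  # placeholder URL
    tools=["list_emails", "send_email", "search_emails", "read_email", "delete_email"],
)
```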

2. MCP Tools

Individual functions provided by MCP servers. Tool Structure:
  • Name: Function name (e.g., “send_email”)
  • Slug: Unique identifier (e.g., “gmail_send_email”)
  • Description: What the tool does
  • Parameters: Input schema for the tool
  • Server: Parent MCP server
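A tool definition with the structure above might look like the following. The parameter schema here follows JSON Schema conventions, which MCP tools commonly use; the exact shape in Definable.ai may differ.

```python
# Illustrative tool definition matching the structure listed above
send_email_tool = {
    "name": "send_email",               # function name
    "slug": "gmail_send_email",         # unique identifier
    "description": "Send an email from the connected Gmail account",
    "parameters": {                     # input schema (JSON Schema style)
        "type": "object",
        "properties": {
            "to": {"type": "string"},
            "subject": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["to", "subject", "body"],
    },
    "server": "gmail",                  # parent MCP server
}
```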

3. MCP Sessions (Instances)

User connections to MCP servers. Session Properties:
  • Instance ID: Unique identifier for this connection
  • Server ID: Which MCP server is connected
  • User ID: Who owns this connection
  • Organization ID: Which org this belongs to
  • Status: pending, active, inactive
  • Name: Custom name for the instance
  • Connected Account ID: Composio account reference

4. Intelligent Agent

An AI agent that analyzes user queries and recommends relevant MCP servers. Agent Capabilities:
  • Analyzes user intent from natural language
  • Searches database for relevant MCP servers
  • Understands tool descriptions and capabilities
  • Maintains conversation context
  • Returns server recommendations in structured format

5. MCP Playground Factory

Orchestrates conversations with multiple MCP servers. Features:
  • Multi-Server Support: Connect to multiple MCPs simultaneously
  • Conversation Memory: Maintains 30-message history (configurable)
  • Tool Metadata: Tracks pagination tokens and result counts
  • Structured Prompting: Better context for AI responses
  • Model Flexibility: Supports OpenAI, Anthropic, DeepSeek, Gemini

MCP Modes

Mode 1: Intelligent Discovery

The user chats naturally while the AI discovers and recommends MCP servers. Benefits:
  • No manual server selection needed
  • Natural language interaction
  • AI suggests relevant tools
  • Seamless connection flow

Mode 2: Direct MCP Usage

The user explicitly provides the MCP instance IDs to use. Benefits:
  • Full control over which servers to use
  • Faster execution (no discovery phase)
  • Multi-server orchestration
  • Complex workflows

Authentication Flow

OAuth Connection Process

Key Steps:
  1. Initiate: User clicks “Connect” on MCP server
  2. Prepare: System creates pending session record
  3. Redirect: User redirected to OAuth provider
  4. Authorize: User grants permissions
  5. Callback: OAuth provider redirects back
  6. Activate: Session marked as active
  7. Instance: MCP instance auto-created
  8. Ready: User can now use MCP tools
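The eight steps above can be sketched as two handlers: one that creates the pending session and redirects, and one that activates it on callback. The function names, state values, and redirect URL are illustrative, not the real Definable.ai API.

```python
# Hedged sketch of the OAuth session lifecycle described above.
def initiate_connection(server_slug: str, user_id: str, org_id: str):
    """Steps 1-3: create a pending session record and build the redirect URL."""
    session = {
        "server": server_slug,
        "user_id": user_id,
        "org_id": org_id,
        "status": "pending",  # step 2: pending session record
    }
    # step 3: user is redirected to the OAuth provider (placeholder URL)
    redirect_url = f"https://auth.example.com/oauth/{server_slug}"
    return session, redirect_url

def handle_oauth_callback(session: dict, connected_account_id: str) -> dict:
    """Steps 5-8: provider redirects back; activate the session."""
    session["status"] = "active"                          # step 6
    session["connected_account_id"] = connected_account_id  # Composio reference
    return session  # step 8: ready for tool use
```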

Tool Execution

Tool Call Lifecycle

Event Types:
  1. Tool Call Started - AI initiates a tool call with input parameters
  2. Tool Call Completed - Tool execution finished with results, including success status, output data, and optional pagination metadata
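The two event types might carry payloads like these. The event names and field layout are illustrative; only the pagination metadata keys (nextPageToken, resultSizeEstimate) are taken from this document.

```python
# Illustrative payloads for the two tool call lifecycle events
started = {
    "event": "tool_call_started",
    "tool": "gmail_search_messages",
    "input": {"query": "is:unread", "max_results": 10},
}

completed = {
    "event": "tool_call_completed",
    "tool": "gmail_search_messages",
    "success": True,
    "output": {"messages": ["..."]},
    # Pagination metadata as described in the next section
    "metadata": {"nextPageToken": "abc123", "resultSizeEstimate": 42},
}
```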

Pagination Support

MCP tools that return large datasets support pagination. Metadata Tracking:
  • nextPageToken: Token for next page of results
  • resultSizeEstimate: Total number of results available
The system automatically stores this metadata and uses it when users request “more” results.
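A consumer of this metadata can drain paginated results by threading nextPageToken through successive calls. This is a generic sketch: `call_tool` stands in for whatever executes the MCP tool, and the `pageToken`/`items` keys are assumptions about the response shape.

```python
def fetch_all(call_tool, params: dict, max_pages: int = 5) -> list:
    """Collect results across pages using the nextPageToken convention above.

    `call_tool` is a placeholder for the function that executes an MCP tool
    and returns a dict with "items" and an optional "nextPageToken".
    """
    results, token = [], None
    for _ in range(max_pages):
        # Attach the page token only after the first request
        page = call_tool({**params, **({"pageToken": token} if token else {})})
        results.extend(page["items"])
        token = page.get("nextPageToken")
        if not token:  # no more pages
            break
    return results
```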

Conversation Management

Memory System

Conversation Storage:
  • Format: Array of messages, each with role, content, and created_at
  • Intelligent Mode Limit: Last 30 exchanges (60 messages max)
  • Direct Mode Limit: Last 30 messages (configurable via memory_size parameter)
  • TTL: 30 minutes of inactivity
  • Structure: Chronological order
Mode-Specific Behavior:
  • Intelligent Discovery Mode: Uses last 30 conversation exchanges for context
  • Direct MCP Mode: Passes memory_size=30 to playground factory (adjustable)
History Usage: The system automatically includes conversation history in each request, enabling contextual interactions such as:
  • “Show me more” - AI remembers previous query
  • “Send that to Slack” - AI knows what “that” refers to
  • “What was the first email about?” - AI recalls earlier context
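The trimming behavior described above (keep only the most recent messages, up to the mode's limit) reduces to a one-line sliding window. The function name is illustrative.

```python
from collections import deque

def trim_history(messages: list, memory_size: int = 30) -> list:
    # Keep only the most recent `memory_size` messages; older ones are dropped,
    # matching the auto-cleanup behavior described above.
    return list(deque(messages, maxlen=memory_size))
```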

Structured Prompting

The MCP Playground uses structured prompts for better context management. Prompt Components:
  • System Instructions: Agent identity, capabilities, tool usage principles
  • Current Message: User’s current query with timestamp
  • Conversation History: Previous exchanges in chronological order
  • Tool Guidelines: How to analyze tool structure and use parameters correctly
Benefits:
  • Better Context: Clear separation of roles and information
  • Improved Parsing: LLMs understand structured formats well
  • Explicit Instructions: Detailed guidelines for tool usage
  • History Awareness: Full conversation context included
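Assembling the four prompt components might look like the following. The section markers and function signature are illustrative; the actual playground prompt format is not shown in this document.

```python
from datetime import datetime, timezone

def build_prompt(system: str, history: list[dict], user_message: str) -> str:
    """Sketch of a structured prompt with the four components listed above."""
    lines = ["<system>", system, "</system>", "<history>"]
    # Conversation history in chronological order
    for msg in history:
        lines.append(f"{msg['role']}: {msg['content']}")
    lines += [
        "</history>",
        # Current message with timestamp
        f"<message timestamp='{datetime.now(timezone.utc).isoformat()}'>",
        user_message,
        "</message>",
    ]
    return "\n".join(lines)
```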

Supported Models

The MCP Playground supports multiple LLM providers with hardcoded defaults for each mode:
Provider     Supported Models   Mode Default                       Best For
OpenAI       gpt-*, o1-*        -                                  General purpose, reasoning
Anthropic    claude-*           -                                  Tool use, long context
DeepSeek     deepseek-*         Intelligent Mode (deepseek-chat)   Cost-effective, database queries
Google       gemini-*           Direct Mode (gemini-2.5-flash)     Fast tool execution
Default Behavior:
  • Intelligent Discovery Mode: Uses deepseek-chat if no model specified
  • Direct MCP Mode: Uses gemini-2.5-flash if no model specified
  • Users can override by specifying any supported model in the request

Use Cases

Email Management

Scenario: Manage a Gmail inbox with natural language
Example Workflow:
  1. User: “Show me urgent emails from last week”
  2. AI calls gmail_search_messages with appropriate query
  3. AI: “You have 8 urgent emails from last week…”
  4. User: “Forward the one from Sarah to my team Slack”
  5. AI calls gmail_get_message and slack_send_message
  6. AI: “Forwarded Sarah’s email to #team-updates”
MCP Servers Used: Gmail, Slack

Project Management

Scenario: Sync GitHub issues with project management tools
Example Workflow:
  1. User: “What issues are open in our main repo?”
  2. AI calls github_list_issues
  3. AI: “You have 24 open issues. 5 are labeled as bugs…”
  4. User: “Create a Notion page for the critical bugs”
  5. AI calls github_list_issues (filter:critical+bug) and notion_create_page
  6. AI: “Created Notion page with 5 critical bugs”
MCP Servers Used: GitHub, Notion, Slack

Data Analysis

Scenario: Analyze data from multiple sources
Example Workflow:
  1. User: “Show me this week’s sales from Stripe”
  2. AI calls stripe_list_charges
  3. AI: “This week you processed $45,230 in 156 transactions”
  4. User: “Export to Google Sheets”
  5. AI calls google_sheets_append_rows
  6. AI: “Added to ‘Weekly Sales’ sheet”
MCP Servers Used: Stripe, Google Sheets, Slack

Customer Support

Scenario: Access customer data across platforms
Example Workflow:
  1. User: “Find customer info for john@example.com”
  2. AI calls hubspot_search_contacts and stripe_search_customers
  3. AI: “Found John Smith - Enterprise plan, joined Jan 2024…”
  4. User: “What tickets does he have?”
  5. AI calls zendesk_search_tickets
  6. AI: “3 open tickets, 2 are about billing…”
MCP Servers Used: HubSpot, Stripe, Zendesk, Gmail

Best Practices

For Users

  1. Connect Relevant Servers: Only connect MCP servers you’ll actually use
  2. Use Natural Language: Let the AI discover appropriate tools
  3. Provide Context: Reference previous conversations for continuity
  4. Be Specific: Clear requests get better results
  5. Review Permissions: Understand what access you’re granting

For Developers

  1. Server Selection: Choose MCP servers that match your use case
  2. Error Handling: Handle OAuth failures and tool errors gracefully
  3. Pagination: Implement pagination for large result sets
  4. Rate Limiting: Respect MCP server rate limits
  5. Security: Never store or log sensitive OAuth tokens

Tool Usage Guidelines

  1. Comprehensive Searches: Don’t limit to single criteria
    • Bad: Search only “urgent” emails
    • Good: Search “urgent OR important OR high-priority OR from:boss”
  2. Efficient Calls: Break large operations into smaller calls
    • Bad: Process 1000 items in one call
    • Good: Process in batches of 50-100
  3. Metadata Usage: Leverage pagination metadata
    • Check nextPageToken for more results
    • Use resultSizeEstimate to inform user
  4. Parameter Analysis: Understand all tool parameters
    • Read tool descriptions carefully
    • Use only relevant parameters
    • Validate input before calling
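Guideline 2 (break large operations into batches of 50-100) can be sketched as a generic batching helper; the function name is illustrative.

```python
def batched(items: list, size: int = 50):
    """Yield successive batches, per guideline 2 above (batches of 50-100)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Instead of processing 1000 items in one call, process 20 batches of 50:
# for batch in batched(all_items, 50):
#     process(batch)
```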

Security Considerations

OAuth Security

  • Scopes: Only request necessary permissions
  • Token Storage: Tokens managed by Composio, never stored locally
  • Expiration: Tokens automatically refreshed
  • Revocation: Users can disconnect anytime

Data Privacy

  • No Storage: MCP responses not permanently stored
  • Session Privacy: Conversations cleared after TTL
  • Org Isolation: MCP sessions scoped to organizations
  • User Isolation: Users only see their own connections

Access Control

  • RBAC: Requires mcp:write and mcp:read permissions
  • Feature Flags: mcp_sessions quota enforcement
  • Org Scoping: All operations org-scoped
  • User Verification: JWT authentication required

Performance Optimization

Connection Management

  • Connection Reuse: MCP connections maintained during chat
  • Cleanup: Automatic connection closure after response
  • Timeout: 30-second timeout for MCP operations
  • Parallel Tools: Multiple MCP servers used simultaneously

Memory Management

  • Intelligent Mode History: 60 messages max (30 exchanges)
  • Direct Mode History: 30 messages (configurable via memory_size)
  • TTL Cleanup: Inactive sessions cleared after 30 minutes
  • Metadata Pruning: Tool metadata cleared after use
  • Auto Cleanup: Old messages automatically trimmed when limit reached

Streaming Optimization

  • Chunk Size: 10-token chunks for smooth streaming
  • Event Filtering: Only stream relevant events
  • Tool Events: Separate tool call events from content
  • Buffer Management: Clear buffers after sending
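The chunking and buffer-management points above can be sketched as a generator that emits 10-token chunks and flushes the remainder. The token join is simplified; a real streamer would preserve token boundaries as the model emits them.

```python
def chunk_tokens(tokens, chunk_size: int = 10):
    """Emit tokens in fixed-size chunks for smooth streaming, per the notes above."""
    buffer = []
    for tok in tokens:
        buffer.append(tok)
        if len(buffer) == chunk_size:
            yield "".join(buffer)
            buffer.clear()  # clear buffer after sending
    if buffer:
        yield "".join(buffer)  # flush any remainder at end of stream
```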

Troubleshooting

Connection Issues

Problem: OAuth fails or times out
Solutions:
  • Check internet connectivity
  • Verify MCP server is operational
  • Try different browser (clear cache)
  • Check Composio service status

Tool Call Failures

Problem: Tool returns an error or times out
Solutions:
  • Verify permissions were granted
  • Check tool parameters are correct
  • Ensure data exists (e.g., email to fetch)
  • Review MCP server documentation

Session Problems

Problem: Session shows as “pending” indefinitely
Solutions:
  • Check OAuth callback URL is correct
  • Verify no firewall blocking callback
  • Try disconnecting and reconnecting
  • Check Composio logs for errors

Performance Issues

Problem: Slow responses or timeouts
Solutions:
  • Reduce number of simultaneous MCP servers
  • Limit history size (use smaller memory_size)
  • Break large operations into smaller chunks
  • Check MCP server latency

Limitations

Current Limitations

  • Model Support: Limited to 4 providers (OpenAI, Anthropic, DeepSeek, Gemini)
  • Timeout: 30-second max for MCP operations
  • History Limits:
    • Intelligent Mode: 60 messages (30 exchanges)
    • Direct Mode: 30 messages default (configurable)
  • Concurrent MCPs: Performance degrades with too many servers
  • OAuth Only: Limited to OAuth-based authentication

Future Enhancements

  • API key authentication support
  • Longer conversation history
  • MCP server caching
  • Custom MCP server creation
  • Advanced tool composition
  • Workflow automation

Next Steps

Ready to connect your first MCP server? Check out the Quick Start Guide or explore the API Reference.