Building a Supabase MCP Server for AI Agents

by Hoshang Mehta

Supabase makes it easy to build apps. But when you connect AI agents directly to your Supabase database, you're opening a door to security risks, performance issues, and compliance problems that can derail your entire project.

I've watched teams deploy agents with direct Supabase access, only to discover agents bypassing Row Level Security (RLS) policies, querying production tables during peak traffic, or accessing customer data they shouldn't see. The worst part? Supabase's built-in security features—RLS, service role keys, and connection pooling—don't protect you from agent-specific risks.

An MCP server on top of Supabase gives you the control you need. It turns your Supabase database into a secure, governed data layer that agents can safely query. Instead of giving agents raw database access, you expose sandboxed views through MCP tools that enforce access boundaries, optimize queries, and provide audit trails.

This guide shows you how to build a Supabase MCP server from scratch. You'll learn how to use Supabase's RLS policies, create sandboxed views, build MCP tools, and deploy a production-ready server that keeps your data secure.


Why Supabase Needs an MCP Server

Supabase is built for application developers. Its security model—RLS policies, service role keys, and anon keys—works perfectly for apps. But agents break that model in ways that create serious risks.

Problem 1: RLS Bypass Through Service Role

What happens: You give agents a service role key to bypass RLS, thinking it's necessary for agent queries.

Example: A support agent needs to query customer data. You use the service role key because RLS blocks agent queries. The agent can now access every table, every row, every column—bypassing all RLS policies.

Impact: Complete security bypass. Agents can access data they shouldn't, violating compliance requirements and creating security incidents.

Solution: MCP server uses service role keys only for specific, sandboxed views. Agents never get direct service role access.

Problem 2: Connection Pool Exhaustion

What happens: Agents create hundreds of database connections, exhausting Supabase's connection pool.

Example: An agent framework spawns 50 concurrent agent instances, each opening a Supabase connection. Your connection pool (default: 60 connections) is exhausted. Customer-facing apps can't connect.

Impact: Application downtime, customer complaints, revenue loss.

Solution: MCP server manages a single connection pool and routes all agent queries through it. One pool, unlimited agents.

Problem 3: Query Performance on Production

What happens: Agents write expensive queries that slow down your production Supabase instance.

Example: An analytics agent runs a query that scans 10 million rows without indexes. The query takes 30 seconds, locks tables, and causes timeouts across your application.

Impact: Production performance degradation, customer-facing slowdowns, increased Supabase costs.

Solution: MCP server routes queries to read replicas (if available) or optimizes queries through materialized views and indexes.

Problem 4: No Audit Trail

What happens: You can't track which agents accessed which data, when, or why.

Example: A compliance auditor asks: "Which agents accessed customer PII in the last 30 days?" You check Supabase logs, but they don't show agent context—just raw SQL queries.

Impact: Failed compliance audits, inability to investigate security incidents, regulatory fines.

Solution: MCP server logs every query with agent context, user identity, and access patterns. Complete audit trail.

Problem 5: Prompt Injection Through Queries

What happens: Agents generate SQL queries from user prompts, creating SQL injection risks.

Example: A user asks an agent: "Show me all customers where email = 'test@example.com' OR 1=1--". The agent generates SQL that bypasses filters.

Impact: Data breaches, unauthorized access, security incidents.

Solution: MCP server uses parameterized queries and validates all inputs. No raw SQL from agents.


Understanding Supabase's Security Model

Before building an MCP server, you need to understand how Supabase security works.

Row Level Security (RLS)

RLS policies control row-level access based on user identity. When a user queries a table, Supabase checks RLS policies to determine which rows they can see.

Example RLS policy:

-- Users can only see their own customers
CREATE POLICY "Users see own customers"
ON customers
FOR SELECT
USING (auth.uid() = user_id);

Problem for agents: Agents don't have user identities. They can't use auth.uid(). RLS policies block agent queries.

Solution: Create service role views that enforce access boundaries without relying on RLS.

Service Role Keys

Service role keys bypass RLS entirely. They have full database access.

When to use: Administrative operations, migrations, background jobs.

When NOT to use: Agent queries. Service role keys give agents unlimited access.

Solution: Use service role keys only in your MCP server, never expose them to agents.

Anon Keys

Anon keys work with RLS. They're safe for public access, but only if RLS policies are correctly configured.

Problem for agents: Agents can't use anon keys because they don't have user identities for RLS.

Solution: MCP server uses service role keys internally but exposes only sandboxed views to agents.

Connection Pooling

Supabase provides connection pooling through PgBouncer. Default pool size: 60 connections.

Problem: Each agent instance might open a connection. Multiple agents = connection exhaustion.

Solution: MCP server manages a single connection pool. All agents share the pool.


Architecture: Three-Layer Pattern for Supabase

The Supabase MCP server uses a three-layer architecture:

Agent → MCP Tool → Sandboxed View → Service Role → Supabase Database

Layer 1: Service Role Isolation

Purpose: Use service role keys safely, only within the MCP server.

What it does:

  • MCP server holds service role keys
  • Agents never see service role keys
  • All queries use service role internally
  • Access boundaries enforced by views

Why it matters: Service role keys provide the access agents need, but views limit what they can actually query.

Layer 2: RLS-Protected Views

Purpose: Create views that enforce access boundaries without relying on RLS.

What it does:

  • Creates SQL views that filter rows and columns
  • Views use service role access but limit data exposure
  • Views can join data across tables safely
  • Views optimize queries with indexes

Why it matters: Views define exactly what agents can access. No RLS needed, but same security guarantees.

Layer 3: MCP Tool Builder

Purpose: Turn views into MCP tools that agents can use.

What it does:

  • Generates MCP tool definitions from views
  • Validates inputs to prevent SQL injection
  • Handles errors gracefully
  • Provides natural language interfaces

Why it matters: Agents use tools, not views directly. Tools make agent interactions natural and safe.


Layer 1: Service Role Isolation

The MCP server uses Supabase service role keys internally. Agents never see these keys.

Setting Up Service Role Access

Step 1: Get Your Service Role Key

In Supabase Dashboard:

  1. Go to Settings → API
  2. Copy your service_role key (keep it secret!)

Step 2: Store Keys Securely

Never commit service role keys to code. Use environment variables:

# .env
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key

Step 3: Initialize Supabase Client

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
  {
    auth: {
      autoRefreshToken: false,
      persistSession: false
    }
  }
)

Why this matters: Service role keys bypass RLS, giving you full database access. But you'll use views to limit what agents can actually query.

Connection Pooling

Supabase clients manage connections automatically, but you should configure pooling for agent workloads.

Option 1: Use Supabase's Connection Pooler

The supabase-js client talks to Supabase over HTTP (PostgREST), so it doesn't hold Postgres connections itself. Pooling matters when you open direct Postgres connections. Supabase exposes a pooled endpoint on port 6543 (the exact connection string is under Settings → Database); point your Postgres driver at it instead of the direct port 5432 endpoint:

import { Pool } from 'pg'

// SUPABASE_POOLER_URL is the pooled connection string from the dashboard,
// e.g. postgres://postgres.<ref>:<password>@<region>.pooler.supabase.com:6543/postgres
const pool = new Pool({
  connectionString: process.env.SUPABASE_POOLER_URL,
  max: 20, // cap this server's concurrent connections
})

Option 2: Use PgBouncer Directly

If you're running your own PgBouncer:

import { Pool } from 'pg'

const pool = new Pool({
  host: 'your-pgbouncer-host',
  port: 5432,
  database: 'postgres',
  user: 'postgres',
  password: process.env.SUPABASE_DB_PASSWORD, // the database password, not the service role key
  max: 20, // Max connections in pool
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
})

Why this matters: Connection pooling prevents connection exhaustion. One pool handles all agent queries.


Layer 2: RLS-Protected Views

Create SQL views that enforce access boundaries. Views use service role access but limit data exposure.

Creating Sandboxed Views

Step 1: Identify What Agents Need

Before creating views, identify what data agents actually need:

Example: A support agent needs:

  • ✅ Customer name, email, plan, signup date
  • ✅ Recent product usage (last 30 days)
  • ✅ Open support tickets
  • ❌ Credit card numbers
  • ❌ Internal sales notes
  • ❌ Other customers' data

Step 2: Create the View

Create a view that includes only what agents need:

-- Customer Support View (Sandboxed)
CREATE VIEW customer_support_view AS
SELECT 
  id,
  name,
  email,
  plan_name,
  signup_date,
  subscription_status,
  last_login_date,
  active_users_30d,
  open_tickets_count
FROM customers
WHERE is_active = true
  AND signup_date >= CURRENT_DATE - INTERVAL '2 years';  -- GDPR: only last 2 years
  -- Excludes: credit_card_number, internal_notes, ssn, etc.

Step 3: Add Indexes

Add indexes to optimize queries:

-- Index for common queries
CREATE INDEX idx_customer_support_email ON customers(email) WHERE is_active = true;
CREATE INDEX idx_customer_support_signup ON customers(signup_date) WHERE is_active = true;

Step 4: Lock Down Access

The MCP server queries the view with the service role, which already has access. Don't grant the view to anon or authenticated: that would expose it through Supabase's public API to anyone holding those keys. If the server connects as a dedicated Postgres role, grant only that:

-- Grant only to the role the MCP server connects as (mcp_server is a hypothetical name)
GRANT SELECT ON customer_support_view TO mcp_server;

-- Keep the view out of the public API
REVOKE ALL ON customer_support_view FROM anon, authenticated;

Why this matters: Views define exactly what agents can access. Underlying tables remain protected.
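
A quick sanity check after creating the view: query a column you excluded and confirm Postgres rejects it.

-- Should fail: the view never exposes this column
SELECT credit_card_number FROM customer_support_view;
-- ERROR:  column "credit_card_number" does not exist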

Advanced View Patterns

Pattern 1: Customer-Scoped Views

Views that filter by customer from conversation context:

-- Customer-scoped view (uses function parameter)
CREATE OR REPLACE FUNCTION get_customer_context(customer_id_param UUID)
RETURNS TABLE (
  id UUID,
  name TEXT,
  email TEXT,
  plan_name TEXT,
  subscription_status TEXT,
  open_tickets_count INTEGER
) AS $$
BEGIN
  RETURN QUERY
  SELECT 
    c.id,
    c.name,
    c.email,
    c.plan_name,
    c.subscription_status,
    COUNT(t.id) as open_tickets_count
  FROM customers c
  LEFT JOIN support_tickets t ON c.id = t.customer_id AND t.status = 'open'
  WHERE c.id = customer_id_param
    AND c.is_active = true
  GROUP BY c.id, c.name, c.email, c.plan_name, c.subscription_status;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
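
From the MCP server, this function is callable with supabase-js's rpc method:

// SECURITY DEFINER runs with the function owner's privileges,
// so pass only a validated customer id from conversation context
const { data, error } = await supabase.rpc('get_customer_context', {
  customer_id_param: customerId,
})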

Pattern 2: Multi-Table Joins

Views that join data across multiple Supabase tables:

-- Unified customer view (joins customers + orders + support)
CREATE VIEW customer_360_view AS
SELECT 
  c.id as customer_id,
  c.name as customer_name,
  c.email,
  c.plan_name,
  COALESCE(o.order_count, 0) as order_count,
  COALESCE(o.total_revenue, 0) as total_revenue,
  COALESCE(s.open_tickets, 0) as open_tickets,
  s.last_ticket_date
FROM customers c
LEFT JOIN (
  SELECT 
    customer_id, 
    COUNT(*) as order_count, 
    SUM(amount) as total_revenue
  FROM orders
  WHERE status = 'completed'
  GROUP BY customer_id
) o ON c.id = o.customer_id
LEFT JOIN (
  SELECT 
    customer_id, 
    COUNT(*) as open_tickets, 
    MAX(created_at) as last_ticket_date
  FROM support_tickets
  WHERE status = 'open'
  GROUP BY customer_id
) s ON c.id = s.customer_id
WHERE c.is_active = true;

Pattern 3: Pre-Aggregated Views

Views that pre-aggregate data for performance:

-- Pre-aggregated customer metrics
CREATE MATERIALIZED VIEW customer_metrics_aggregated AS
SELECT 
  customer_id,
  customer_name,
  email,
  plan_name,
  -- Pre-aggregated metrics
  total_revenue,
  order_count,
  avg_order_value,
  active_users_30d,
  feature_adoption_score
FROM customers_aggregated
WHERE is_active = true;

-- Index for fast lookups
CREATE INDEX idx_customer_metrics_customer ON customer_metrics_aggregated(customer_id);

-- Refresh periodically (e.g., on a schedule; see the pg_cron example later)
REFRESH MATERIALIZED VIEW customer_metrics_aggregated;

Pattern 4: RLS-Like Filtering Without RLS

Views that filter by user context (without using RLS):

-- Tenant-scoped view (filters by tenant_id from context)
CREATE OR REPLACE FUNCTION get_tenant_customers(tenant_id_param UUID)
RETURNS TABLE (
  id UUID,
  name TEXT,
  email TEXT,
  plan_name TEXT
) AS $$
BEGIN
  RETURN QUERY
  SELECT 
    c.id,
    c.name,
    c.email,
    c.plan_name
  FROM customers c
  WHERE c.tenant_id = tenant_id_param
    AND c.is_active = true;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

Layer 3: MCP Tool Builder

Turn views into MCP tools that agents can use. Tools provide natural language interfaces and input validation.

What Are MCP Tools?

MCP (Model Context Protocol) tools are functions that agents can call. They abstract database queries behind natural language interfaces.

Tool structure:

{
  "name": "get_customer_info",
  "description": "Get customer information for support context. Returns customer details, subscription status, recent usage, and open tickets.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "email": {
        "type": "string",
        "description": "Customer email address"
      }
    },
    "required": ["email"]
  }
}

Creating Tools from Views

Step 1: Define the Tool

Describe what the tool should do:

Tool: get_customer_info
Purpose: Get customer information for support context
Input: Customer email
Output: Customer details, subscription status, recent usage, open tickets
View: customer_support_view

Step 2: Create Tool Handler

Create the tool handler that queries the view:

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

async function getCustomerInfo(email: string) {
  // Validate input
  if (!email || !email.includes('@')) {
    throw new Error('Invalid email format')
  }

  // Query view (parameterized to prevent SQL injection)
  const { data, error } = await supabase
    .from('customer_support_view')
    .select('*')
    .eq('email', email)
    .limit(1)
    .single()

  if (error) {
    throw new Error(`Query failed: ${error.message}`)
  }

  return data
}

Step 3: Create MCP Tool Definition

Create the MCP tool definition:

const tools = [
  {
    name: 'get_customer_info',
    description: 'Get customer information for support context. Returns customer details, subscription status, recent usage, and open tickets.',
    inputSchema: {
      type: 'object',
      properties: {
        email: {
          type: 'string',
          description: 'Customer email address'
        }
      },
      required: ['email']
    }
  }
]

Step 4: Implement MCP Server

Implement the MCP server that handles tool calls:

import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js'

const server = new Server(
  {
    name: 'supabase-mcp-server',
    version: '1.0.0',
  },
  {
    capabilities: {
      tools: {},
    },
  }
)

// List available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: tools.map(tool => ({
    name: tool.name,
    description: tool.description,
    inputSchema: tool.inputSchema,
  })),
}))

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params

  if (name === 'get_customer_info') {
    const email = args?.email as string
    if (!email) {
      throw new Error('Email is required')
    }

    const result = await getCustomerInfo(email)
    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify(result, null, 2),
        },
      ],
    }
  }

  throw new Error(`Unknown tool: ${name}`)
})

// Start server
const transport = new StdioServerTransport()
await server.connect(transport)

Input Validation

Always validate inputs to prevent SQL injection and invalid queries:

function validateEmail(email: string): boolean {
  const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/
  return emailRegex.test(email)
}

function validateUUID(uuid: string): boolean {
  const uuidRegex = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i
  return uuidRegex.test(uuid)
}

async function getCustomerInfo(email: string) {
  // Validate input
  if (!validateEmail(email)) {
    throw new Error('Invalid email format')
  }

  // Query view (Supabase client handles parameterization)
  const { data, error } = await supabase
    .from('customer_support_view')
    .select('*')
    .eq('email', email)
    .limit(1)
    .single()

  if (error) {
    throw new Error(`Query failed: ${error.message}`)
  }

  return data
}
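
The UUID validator pairs with id-based lookups. A sketch of a hypothetical get_customer_by_id handler:

async function getCustomerById(id: string) {
  // Reject anything that isn't a well-formed UUID before it reaches the query
  if (!validateUUID(id)) {
    throw new Error('Invalid customer id')
  }

  const { data, error } = await supabase
    .from('customer_support_view')
    .select('*')
    .eq('id', id)
    .single()

  if (error) {
    throw new Error(`Query failed: ${error.message}`)
  }

  return data
}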

Why this matters: Input validation prevents SQL injection and ensures queries are safe. Supabase client handles parameterization, but validation adds an extra layer of security.


Step-by-Step Implementation

Here's how to build a Supabase MCP server step by step:

Step 1: Set Up Supabase Project

Time: 15 minutes

Steps:

  1. Create a Supabase project (or use existing)
  2. Get your service role key from Settings → API
  3. Store keys in environment variables

Commands:

# Create .env file
echo "SUPABASE_URL=https://your-project.supabase.co" > .env
echo "SUPABASE_SERVICE_ROLE_KEY=your-service-role-key" >> .env

Step 2: Create Sandboxed Views

Time: 1-2 hours per view

Steps:

  1. Identify what data agents need
  2. Create view with only needed columns
  3. Add filters (active customers, date ranges, etc.)
  4. Add indexes for performance
  5. Test the view

Example:

-- Create view
CREATE VIEW customer_support_view AS
SELECT 
  id,
  name,
  email,
  plan_name,
  subscription_status,
  active_users_30d,
  open_tickets_count
FROM customers
WHERE is_active = true
  AND signup_date >= CURRENT_DATE - INTERVAL '2 years';

-- Add indexes
CREATE INDEX idx_customer_support_email ON customers(email) WHERE is_active = true;

Step 3: Build MCP Server

Time: 2-3 hours

Steps:

  1. Initialize Node.js project
  2. Install MCP SDK and Supabase client
  3. Create tool handlers
  4. Implement MCP server
  5. Test with agent framework

Commands:

# Initialize project
npm init -y
npm install @modelcontextprotocol/sdk @supabase/supabase-js
npm install -D typescript @types/node tsx

# Create server file
touch src/server.ts

Example server:

// src/server.ts
import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js'
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

const server = new Server(
  {
    name: 'supabase-mcp-server',
    version: '1.0.0',
  },
  {
    capabilities: {
      tools: {},
    },
  }
)

// List tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'get_customer_info',
      description: 'Get customer information for support context',
      inputSchema: {
        type: 'object',
        properties: {
          email: { type: 'string', description: 'Customer email' }
        },
        required: ['email']
      }
    }
  ]
}))

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params

  if (name === 'get_customer_info') {
    const { data, error } = await supabase
      .from('customer_support_view')
      .select('*')
      .eq('email', args?.email)
      .single()

    if (error) throw new Error(error.message)

    return {
      content: [{ type: 'text', text: JSON.stringify(data) }]
    }
  }

  throw new Error(`Unknown tool: ${name}`)
})

const transport = new StdioServerTransport()
await server.connect(transport)
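
Run it locally to confirm it starts (assuming the variables from Step 1 are exported in your shell):

npx tsx src/server.ts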

Step 4: Deploy MCP Server

Time: 30 minutes

Steps:

  1. Deploy the server to a hosting platform (Railway, Fly.io, Vercel). Note: the stdio transport above is for local use; a hosted server needs the MCP SDK's HTTP-based transport.
  2. Configure environment variables
  3. Test server endpoint
  4. Get server URL for agent frameworks

Example (Vercel):

# Install Vercel CLI
npm i -g vercel

# Deploy
vercel --prod

Step 5: Connect Agents

Time: 15 minutes per agent

Steps:

  1. Get MCP server URL
  2. Add to agent framework configuration
  3. Test agent can use tools
  4. Verify access boundaries work

Example (Claude Desktop, which launches stdio servers from claude_desktop_config.json):

{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["tsx", "/path/to/your/src/server.ts"],
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_SERVICE_ROLE_KEY": "your-service-role-key"
      }
    }
  }
}

Step 6: Add Monitoring

Time: 1 hour

Steps:

  1. Set up query logging
  2. Track query performance
  3. Monitor costs
  4. Set up alerts

Example:

// Add logging to tool handlers
async function getCustomerInfo(email: string) {
  const startTime = Date.now()
  
  const { data, error } = await supabase
    .from('customer_support_view')
    .select('*')
    .eq('email', email)
    .single()

  const duration = Date.now() - startTime

  // Log query
  console.log({
    tool: 'get_customer_info',
    email,
    duration,
    error: error?.message
  })

  if (error) throw new Error(error.message)
  return data
}
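
console.log works for development. For the audit trail described in Problem 4, persist events instead; a sketch writing to a hypothetical agent_audit_log table:

// Assumes a table you create:
// agent_audit_log(tool text, input jsonb, duration_ms int, error text, created_at timestamptz default now())
await supabase.from('agent_audit_log').insert({
  tool: 'get_customer_info',
  input: { email },
  duration_ms: duration,
  error: error?.message ?? null,
})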

Real-World Examples

Let me show you real-world implementations:

Example 1: Support Agent MCP Server

Requirements:

  • Support agents answer customer questions
  • Access customer data for support context
  • Cannot access other customers' data
  • Cannot access financial data

Implementation:

1. Sandboxed View:

CREATE VIEW customer_support_view AS
SELECT 
  id,
  name,
  email,
  plan_name,
  subscription_status,
  last_login_date,
  active_users_30d,
  open_tickets_count
FROM customers
WHERE is_active = true
  AND signup_date >= CURRENT_DATE - INTERVAL '2 years';

CREATE INDEX idx_customer_support_email ON customers(email) WHERE is_active = true;

2. MCP Tool:

{
  name: 'get_customer_info',
  description: 'Get customer information for support context',
  inputSchema: {
    type: 'object',
    properties: {
      email: { type: 'string', description: 'Customer email address' }
    },
    required: ['email']
  }
}

3. Tool Handler:

async function getCustomerInfo(email: string) {
  const { data, error } = await supabase
    .from('customer_support_view')
    .select('*')
    .eq('email', email)
    .single()

  if (error) throw new Error(error.message)
  return data
}

Result: Support agents get complete customer context without accessing other customers' data or financial information.

Example 2: Analytics Agent MCP Server

Requirements:

  • Analytics agents generate insights
  • Access aggregated customer data
  • Cannot access PII
  • Cannot access individual customer records

Implementation:

1. Sandboxed View (Pre-aggregated):

CREATE MATERIALIZED VIEW customer_analytics_view AS
SELECT 
  plan_name,
  subscription_status,
  COUNT(*) as customer_count,
  AVG(active_users_30d) as avg_active_users,
  SUM(total_revenue) as total_revenue,
  AVG(order_count) as avg_orders
FROM customers_aggregated
WHERE is_active = true
GROUP BY plan_name, subscription_status
HAVING COUNT(*) >= 10;  -- Minimum aggregation threshold

CREATE INDEX idx_customer_analytics_plan ON customer_analytics_view(plan_name);

2. MCP Tool:

{
  name: 'get_customer_analytics',
  description: 'Get aggregated customer analytics (no PII)',
  inputSchema: {
    type: 'object',
    properties: {
      plan_name: { type: 'string', description: 'Plan name to filter by (optional)' }
    }
  }
}

3. Tool Handler:

async function getCustomerAnalytics(planName?: string) {
  let query = supabase
    .from('customer_analytics_view')
    .select('*')

  if (planName) {
    query = query.eq('plan_name', planName)
  }

  const { data, error } = await query

  if (error) throw new Error(error.message)
  return data
}

Result: Analytics agents get insights without accessing individual customer data or PII.

Example 3: Multi-Tenant Agent MCP Server

Requirements:

  • Agents access tenant data
  • Complete isolation between tenants
  • Agents can only access their tenant's data

Implementation:

1. Sandboxed View (Tenant-scoped):

CREATE OR REPLACE FUNCTION get_tenant_customers(tenant_id_param UUID)
RETURNS TABLE (
  id UUID,
  name TEXT,
  email TEXT,
  plan_name TEXT,
  subscription_status TEXT
) AS $$
BEGIN
  RETURN QUERY
  SELECT 
    c.id,
    c.name,
    c.email,
    c.plan_name,
    c.subscription_status
  FROM customers c
  WHERE c.tenant_id = tenant_id_param
    AND c.is_active = true;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE INDEX idx_customers_tenant ON customers(tenant_id) WHERE is_active = true;

2. MCP Tool:

{
  name: 'get_tenant_customers',
  description: 'Get customers for current tenant',
  inputSchema: {
    type: 'object',
    properties: {
      tenant_id: { type: 'string', description: 'Tenant ID (from conversation context)' }
    },
    required: ['tenant_id']
  }
}

3. Tool Handler:

async function getTenantCustomers(tenantId: string) {
  const { data, error } = await supabase
    .rpc('get_tenant_customers', { tenant_id_param: tenantId })

  if (error) throw new Error(error.message)
  return data
}

Result: Complete tenant isolation. Agents can only access their tenant's data.


Supabase-Specific Best Practices

Here are best practices specific to Supabase:

1. Use Connection Pooling

Route direct Postgres connections through Supabase's pooler (port 6543) rather than the database's direct port. The supabase-js client goes through the HTTP API, so this applies whenever your MCP server uses a Postgres driver:

import { Pool } from 'pg'

// Pooled connection string from Settings → Database (port 6543)
const pool = new Pool({
  connectionString: process.env.SUPABASE_POOLER_URL,
  max: 20,
})

Why: Prevents connection exhaustion. One pool handles all agent queries.

2. Never Expose Service Role Keys

Service role keys should only exist in your MCP server, never in agent code or client-side applications.

Why: Service role keys bypass RLS. Exposing them gives unlimited database access.

3. Use Materialized Views for Analytics

For analytical queries, use materialized views and refresh them periodically:

-- Create materialized view
CREATE MATERIALIZED VIEW customer_metrics AS
SELECT 
  customer_id,
  COUNT(*) as order_count,
  SUM(amount) as total_revenue
FROM orders
GROUP BY customer_id;

-- Refresh periodically (via cron or Supabase Edge Function)
REFRESH MATERIALIZED VIEW customer_metrics;

Why: Materialized views pre-aggregate data, making analytical queries fast and cheap.
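
One way to schedule the refresh, assuming the pg_cron extension is enabled (Database → Extensions):

-- Refresh every 15 minutes (add CONCURRENTLY if the view has a unique index)
SELECT cron.schedule(
  'refresh-customer-metrics',
  '*/15 * * * *',
  $$REFRESH MATERIALIZED VIEW customer_metrics$$
);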

4. Leverage Supabase Edge Functions for Complex Logic

For complex business logic, use Supabase Edge Functions instead of SQL:

// Edge Function: get_customer_context
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts'
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'

serve(async (req) => {
  const { email } = await req.json()
  
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
  )

  const { data, error } = await supabase
    .from('customer_support_view')
    .select('*')
    .eq('email', email)
    .single()

  if (error) {
    return new Response(JSON.stringify({ error: error.message }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' }
    })
  }

  return new Response(JSON.stringify(data), {
    headers: { 'Content-Type': 'application/json' }
  })
})

Why: Edge Functions provide more flexibility than SQL views and can handle complex business logic.

5. Monitor Supabase Usage

Track Supabase usage to prevent cost overruns:

  • Database size: Monitor table sizes and growth
  • Query performance: Track slow queries
  • Connection usage: Monitor connection pool usage
  • API requests: Track API request volume

Why: Supabase costs scale with usage. Monitoring helps you optimize costs and prevent surprises.
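
For slow queries, pg_stat_statements (enabled by default on Supabase) is a good starting point:

-- Top 10 queries by average execution time
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;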

6. Use Supabase Realtime for Live Updates

If agents need real-time data, use Supabase Realtime:

const channel = supabase
  .channel('customer-updates')
  .on('postgres_changes', 
    { event: 'UPDATE', schema: 'public', table: 'customers' },
    (payload) => {
      // Handle real-time update
      console.log('Customer updated:', payload.new)
    }
  )
  .subscribe()

Why: Realtime provides live updates without polling, reducing query volume and costs.


Common Mistakes to Avoid

Here are mistakes I've seen teams make:

Mistake 1: Exposing Service Role Keys to Agents

What happens: Teams give agents service role keys directly, thinking it's necessary for queries.

Why it's a problem: Service role keys bypass RLS, giving agents unlimited database access.

The fix: Use service role keys only in your MCP server. Agents never see them.

Mistake 2: Not Using Views

What happens: Teams give agents access to raw Supabase tables instead of views.

Why it's a problem: No access boundaries. Agents can access any data, any column, any row.

The fix: Always create sandboxed views. Agents query views, not tables.

Mistake 3: Ignoring Connection Pooling

What happens: Each agent instance opens a Supabase connection, exhausting the connection pool.

Why it's a problem: Connection exhaustion prevents other applications from connecting.

The fix: Use connection pooling. MCP server manages a single pool for all agents.

Mistake 4: Not Validating Inputs

What happens: Tools accept any input without validation, creating SQL injection risks.

Why it's a problem: SQL injection can lead to data breaches and unauthorized access.

The fix: Validate all inputs. Check formats, ranges, and types before executing queries.

Mistake 5: Using RLS for Agents

What happens: Teams try to use RLS policies for agents, but agents don't have user identities.

Why it's a problem: RLS policies block agent queries because agents can't use auth.uid().

The fix: Use service role keys with sandboxed views. Views enforce access boundaries without RLS.

Mistake 6: Not Monitoring Performance

What happens: Teams deploy MCP server and don't monitor query performance.

Why it's a problem: Performance issues go unnoticed. Costs spike. Users get frustrated.

The fix: Monitor query performance, costs, and access patterns from day one. Set up alerts for anomalies.

Mistake 7: Static Views

What happens: Views are created once and never updated.

Why it's a problem: As agents evolve, views become outdated. Over-permissive or under-permissive access.

The fix: Review and update views regularly. Remove unused views, optimize existing views, add new views as needed.
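
Updating in place is usually one statement. Note that CREATE OR REPLACE VIEW can only add columns at the end; to drop or reorder columns, drop and recreate the view:

-- Expose one more column without breaking existing tools
CREATE OR REPLACE VIEW customer_support_view AS
SELECT
  id,
  name,
  email,
  plan_name,
  subscription_status,
  active_users_30d,
  open_tickets_count,
  last_login_date  -- newly added, at the end
FROM customers
WHERE is_active = true
  AND signup_date >= CURRENT_DATE - INTERVAL '2 years';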


Where Pylar Fits In

Pylar makes building a Supabase MCP server practical. Here's how:

Supabase Integration: Pylar connects directly to Supabase. You provide your Supabase URL and service role key, and Pylar handles connection pooling, query optimization, and error handling. No infrastructure to manage.

Sandboxed Views: Pylar's SQL IDE lets you create sandboxed views that define exactly what agents can access. Views can join data across multiple Supabase tables, filter rows and columns, and enforce compliance requirements. Views are version-controlled and auditable.

MCP Tool Builder: Pylar automatically generates MCP tools from your views. Describe what you want in natural language, and Pylar creates the tool definition, parameter validation, and query logic. No backend engineering required.

RLS Alternative: Pylar views provide RLS-like security without relying on RLS policies. Views use service role access but limit data exposure, giving you the security guarantees of RLS without the complexity.

Query Optimization: Pylar views are optimized for agent queries. Pylar suggests indexes, pre-aggregations, and query optimizations that keep Supabase costs low and performance fast.

Access Control: Pylar enforces access control on every query. Before executing a query, Pylar validates role, context, and permissions. Queries that violate access boundaries are rejected.

Monitoring: Pylar Evals tracks query performance, costs, and access patterns. You can see exactly how agents are using your Supabase layer, which queries are most expensive, and where optimization opportunities exist.

Framework-Agnostic: Pylar tools work with any MCP-compatible framework. Whether you're using Claude Desktop, LangGraph, OpenAI, n8n, or any other framework, Pylar provides the same safe layer.

Pylar is the Supabase MCP server you don't have to build. Instead of building custom view management, tool generation, and connection pooling, you build views and tools in Pylar. The MCP server is built in.


Frequently Asked Questions

Do I need a read replica for Supabase agents?

Not necessarily. Materialized views, pre-aggregation, and indexes handle most agent workloads. A read replica helps if analytics agents run heavy scans against production.

Can I use RLS policies with agents?

Generally no. RLS keys off a user identity (auth.uid()), which agents don't have. Service role access plus sandboxed views gives the same guarantees.

How do I handle write operations?

This guide covers reads. If agents must write, apply the same pattern: expose narrow, validated MCP tools (or SECURITY DEFINER functions) rather than table access, and log every call.

What if I need real-time data?

Use Supabase Realtime subscriptions (best practice 6) instead of having agents poll.

How do I optimize view performance?

Index the columns your tools filter on, pre-aggregate with materialized views, and keep row counts down with filters in the view definition.

Can I use views with existing Supabase setup?

Yes. Views sit on top of your existing tables without changing them or your app's RLS policies.

How do I test the MCP server?

Run it locally over stdio, call each tool from an MCP-compatible client, and verify the boundaries: queries for excluded columns or out-of-scope rows should fail or return nothing.

What if I need to update a view?

Use CREATE OR REPLACE VIEW to add columns; to drop or reorder columns, drop and recreate the view. Review views regularly as agent needs change.

How do I monitor the MCP server?

Log every tool call with agent context, duration, and errors (see Step 6), and watch Supabase's dashboard for slow queries and connection usage.

Can I use this pattern with other databases?

Yes. The three-layer pattern (isolated credentials, sandboxed views, MCP tools) applies to any SQL database; only the client library changes.


Building a Supabase MCP server isn't optional—it's essential. Without it, agents can bypass RLS, exhaust connection pools, and create security risks. Start with service role isolation, add sandboxed views, create tools, and monitor continuously.

The three-layer pattern gives you the foundation. Isolate agent queries with connection pooling, govern access through views, and provide agent-friendly interfaces through tools. With proper implementation, agents become secure, performant business tools rather than Supabase risks.
