You've built your MCP tool and tested it. Now comes the moment of truth: connecting it to an actual AI agent. This is where your governed data becomes actionable—where agents can actually use your tools to answer questions and make decisions.
This tutorial will walk you through publishing your Pylar tool and connecting it to OpenAI's Agent Builder. We'll cover everything from generating your credentials to testing your tool in a custom GPT. By the end, you'll have a working agent that can query your data safely, and you'll understand why this approach scales better than building custom integrations.
Why This Approach Works
Before we dive into the steps, let me explain why publishing tools to agent builders matters and how it fits into Pylar's approach.
The Problem: Custom Integrations Don't Scale
Most teams start by building custom integrations for each agent builder. They write code to connect to OpenAI, then write different code for LangGraph, then different code for Zapier. Each integration is a separate codebase to maintain, test, and update.
I've watched teams spend weeks building these integrations, only to discover that:
- Each agent builder has different APIs and requirements
- Updates to one integration don't help the others
- Testing becomes a nightmare (you need to test each integration separately)
- Maintenance costs compound over time
The result? Teams end up maintaining multiple codebases for what should be the same functionality.
Why MCP Tools Solve This
MCP (Model Context Protocol) is an emerging standard for AI agent tooling. When you publish a Pylar tool, you get an MCP server that any MCP-compatible agent builder can connect to.
This means:
- Publish once, use everywhere: One tool works with OpenAI, LangGraph, Claude Desktop, Cursor, Zapier, and more
- Automatic updates: When you update a tool in Pylar, all connected agents get the update automatically
- No custom code: No need to write integration code for each platform
- Standard protocol: MCP is becoming the standard, so new agent builders work out of the box
Think of it like REST APIs for web services. Once you have an API endpoint, any client can connect to it. MCP tools work the same way—once you publish, any MCP-compatible agent builder can use your tools.
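To make the "standard protocol" point concrete: under the hood, MCP speaks JSON-RPC 2.0 over HTTP, so every compatible agent builder discovers your tools with the same request shape. Here's a minimal sketch in Python (the URL and token are the placeholder values from this tutorial, and real clients also perform an initialize handshake first, so treat this as an illustration rather than a complete client):

```python
import json

# Placeholder credentials from this tutorial -- substitute your real values.
MCP_URL = "https://mcp.publish.pylar.ai/mcp"
TOKEN = "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."

# MCP is JSON-RPC 2.0: every MCP-compatible client discovers tools with the
# same standard method, regardless of which agent builder it belongs to.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # standard MCP method for tool discovery
}
headers = {
    "Authorization": TOKEN,
    "Content-Type": "application/json",
}

print(json.dumps(list_tools_request))
```

In practice you'd use an MCP client library rather than hand-rolled HTTP. The point is that the request is identical for every platform, which is exactly why one published server works everywhere.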
How Publishing Fits into the Bigger Picture
Publishing is the final step in the "Data → Views → Tools → Agents" flow:
- Data: Your raw data sources (databases, warehouses, SaaS tools)
- Views: Governed SQL queries that define what agents can access
- Tools: MCP tools that wrap your views and define how agents interact with them
- Agents: AI agents that call your tools to get answers
When you publish, you're making your tools available to agents. The agents don't need to know about your database, your views, or your SQL queries—they just call your tools and get answers.
Step-by-Step Guide
Let's publish your tool and connect it to OpenAI's Agent Builder. I'll walk you through each step, from generating credentials to testing your agent.
Prerequisites
Before we start, make sure you have:
- ✅ An MCP tool created and tested in Pylar (if you don't have one, follow the previous tutorial: "How to Build Your First MCP Tool on a Data View")
- ✅ An OpenAI Platform account with access to the Agent Builder
- ✅ Your tool tested and working (use the "Test Run" button to verify)
If you haven't created a tool yet, go back and do that first. You need a working tool before you can publish it.
Step 1: Publish Your Tool in Pylar
First, let's publish your tool in Pylar to generate the connection credentials.
1. Navigate to Your Project: In Pylar, go to the project that contains your MCP tool.
2. Click "Publish": In the right sidebar, you'll see a "Publish" button. Click it.
3. Generate Token: A popup window will appear. Click "Generate Token" to create a secure authorization token.
4. Copy Your Credentials: Once the token is generated, you'll see two critical values:
   - MCP HTTP Stream URL: https://mcp.publish.pylar.ai/mcp
   - Authorization Bearer Token: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
Important: Copy both values and store them securely. You'll need them to connect to OpenAI. Never share your Bearer Token publicly—it provides access to your data.
Step 2: Open OpenAI Agent Builder
Now let's connect your tool to OpenAI's Agent Builder.
1. Navigate to OpenAI Platform: Go to platform.openai.com and sign in.
2. Open Agent Builder: Navigate to the Agent Builder section. This is where you create and configure custom GPTs and agent workflows.
3. Create or Open an Agent: Either create a new agent or open an existing one where you want to use your Pylar tool.
Step 3: Connect Your MCP Server
Now we'll connect Pylar's MCP server to your OpenAI agent.
1. Find the Connect Option: In the Agent Builder interface, look for "Connect", "Add Tool", or "MCP Server" options. This is typically in the tools or integrations section.
2. Click "Connect": Click the button to add a new MCP server connection.
3. Enter Your Credentials: You'll be asked for:
   - MCP Server URL: Paste your MCP HTTP Stream URL: https://mcp.publish.pylar.ai/mcp
   - Authorization Token: Paste your Bearer Token (the full value, including the "Bearer " prefix)
4. Save the Connection: Click "Save" or "Connect" to establish the connection.
What happens: OpenAI connects to Pylar's MCP server and discovers your published tools. You should see your tools appear in the list of available tools.
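If you later want to wire the same connection up programmatically, OpenAI's Responses API also accepts remote MCP servers as tools. A sketch of the equivalent configuration (field names follow OpenAI's MCP tool type at the time of writing; check their current API reference before relying on them):

```python
import os

# Tool config describing the Pylar MCP server for OpenAI's Responses API.
pylar_mcp_tool = {
    "type": "mcp",
    "server_label": "pylar",  # any label you like; it identifies this server
    "server_url": "https://mcp.publish.pylar.ai/mcp",
    "headers": {
        # Read the token from the environment rather than hard-coding it.
        # PYLAR_MCP_TOKEN is a variable name chosen for this sketch.
        "Authorization": os.environ.get("PYLAR_MCP_TOKEN", "Bearer <your-token>"),
    },
}

# You would then pass tools=[pylar_mcp_tool] to client.responses.create(...).
print(pylar_mcp_tool["server_label"])
```

The Agent Builder UI is doing essentially this on your behalf: it stores the server URL and auth header, then advertises the discovered tools to the model.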
Step 4: Configure Tool Access
Once your tools are connected, you can configure how your agent uses them.
1. Review Your Tools: You'll see a list of your Pylar MCP tools. Each tool shows:
   - Function name (e.g., get_customer_by_email)
   - Description (what the tool does)
   - Parameters (what inputs it needs)
2. Select Tools: Choose which tools you want your agent to have access to. You don't need to enable all tools—just the ones your agent needs.
3. Set Approval Status (optional): For sensitive operations, you can require manual approval before the tool runs:
   - Auto-approve: Tools run automatically when the agent needs them
   - Require approval: Tools need manual approval before execution
4. Click "Add": Confirm your selections to make the tools available to your agent.
Pro tip: Start with auto-approve to test your tools. Once you're confident they work correctly, you can adjust approval settings based on your needs.
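The same access choices can be expressed in OpenAI's programmatic MCP tool config, which supports restricting the visible tools and setting the approval policy. A hedged sketch (allowed_tools and require_approval follow OpenAI's documented MCP tool fields; the tool name is the example from this tutorial):

```python
# MCP tool config that enables only one Pylar tool and auto-approves calls.
pylar_mcp_tool = {
    "type": "mcp",
    "server_label": "pylar",
    "server_url": "https://mcp.publish.pylar.ai/mcp",
    # Expose only the tools this agent actually needs.
    "allowed_tools": ["get_customer_by_email"],
    # "never" = auto-approve; switch to "always" to require manual review.
    "require_approval": "never",
}
```

This mirrors the pro tip above: start permissive while testing, then tighten allowed_tools and require_approval once you know how the agent behaves.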
Step 5: Test Your Agent
Now let's test that your agent can actually use your Pylar tools.
1. Open Preview: Click the "Preview" button in the top right corner of the Agent Builder.
2. Ask a Test Question: Type a question that requires your agent to use your Pylar tool. Example questions:
   - "Get customer information for customer@example.com"
   - "What's the engagement score for login events?"
   - "Show me active customers ordered by signup date"
3. Watch the Agent Work: The agent will:
   - Recognize it needs to use your Pylar tool
   - Call the tool with appropriate parameters
   - Execute the query against your view
   - Return the results and use them to answer your question
4. Verify the Results: Check that:
   - The agent successfully called your tool
   - The results are correct
   - The agent used the data to answer your question
What to look for: In the preview, you should see the agent's thought process, including when it decides to use your tool and what parameters it passes.
Step 6: Deploy Your Agent
Once testing looks good, deploy your agent.
1. Save Your Agent: Save your agent configuration in the Agent Builder.
2. Deploy: Deploy your agent to make it available for use. The exact steps depend on your OpenAI setup (custom GPT, agent workflow, etc.).
3. Test in Production: After deployment, test your agent again to make sure everything works in the production environment.
Congratulations: Your Pylar tool is now connected to an OpenAI agent. The agent can query your data through your governed views, and you have complete control over what data it can access.
Real-World Examples
Let me show you how different teams would use this in practice.
Example 1: Customer Support Agent
A support team has published a get_customer_info_by_email tool and wants to create a support agent that can quickly look up customer information.
Setup:
- Publish the tool in Pylar
- Connect to OpenAI Agent Builder
- Create a custom GPT for support
- Enable the get_customer_info_by_email tool
How it works: When a customer calls, the support agent asks the OpenAI agent: "What's the status for customer@example.com?" The agent calls the Pylar tool, gets the customer info, and provides a helpful response with subscription status, last login, and account details.
Value: Support agents get instant customer context without switching between systems or writing SQL queries.
Example 2: Sales Intelligence Agent
A sales team has published a get_customer_revenue_summary tool and wants an agent that helps prepare for customer meetings.
Setup:
- Publish the tool in Pylar
- Connect to OpenAI Agent Builder
- Create a custom GPT for sales
- Enable the get_customer_revenue_summary tool
How it works: Before a meeting, the sales rep asks: "Give me a summary for customer 12345." The agent calls the Pylar tool, gets revenue data, order history, and key metrics, then provides a comprehensive briefing.
Value: Sales reps get complete customer context in seconds, helping them prepare better and close more deals.
Example 3: Product Analytics Agent
A product team has published multiple tools (get_users_by_engagement_status, get_feature_usage_stats, etc.) and wants an agent that can answer product questions.
Setup:
- Publish all tools in Pylar
- Connect to OpenAI Agent Builder
- Create a custom GPT for product analytics
- Enable all relevant tools
How it works: The product manager asks: "Show me all dormant users and their last activity." The agent calls the appropriate Pylar tool, gets the data, and provides insights about user engagement patterns.
Value: Product teams can explore their data naturally, asking questions and getting answers without writing SQL or building dashboards.
Notice how each team uses the same publishing process but creates agents tailored to their specific needs. That's the power of MCP tools—publish once, use everywhere, customize per team.
Common Pitfalls & Tips
I've seen teams make these mistakes when publishing and connecting tools. Here's how to avoid them.
Pitfall 1: Not Testing Before Connecting
Don't connect untested tools to production agents. I've seen teams publish tools that fail when agents try to use them.
Why this matters: A tool that fails in production creates a bad experience. The agent returns an error, users get frustrated, and you have to debug under pressure.
How to avoid it: Always test your tools in Pylar first using the "Test Run" button. Verify they work with real data before connecting them to agents.
Pitfall 2: Sharing Bearer Tokens Publicly
Your Bearer Token is like a password—it provides access to your data. Never share it publicly or commit it to version control.
Why this matters: If someone gets your Bearer Token, they can access your published tools and query your data. This is a security risk.
How to avoid it:
- Store tokens in password managers
- Use environment variables for programmatic access
- Never commit tokens to Git
- Regenerate tokens if they're exposed
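For programmatic access, the env-var advice above can be as simple as a small loader that refuses to run with a missing or malformed token. A sketch (PYLAR_MCP_TOKEN is a variable name chosen for this example, not something Pylar mandates):

```python
import os


def load_token(env=os.environ):
    """Fetch the Pylar Bearer Token from the environment.

    Set PYLAR_MCP_TOKEN in your shell or a secrets manager -- never
    hard-code it in source files or commit it to version control.
    """
    token = env.get("PYLAR_MCP_TOKEN", "")
    if not token.startswith("Bearer "):
        raise RuntimeError("PYLAR_MCP_TOKEN is missing or malformed")
    return token
```

Failing fast like this also catches the common mistake of pasting the token without its "Bearer " prefix.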
Pitfall 3: Not Understanding What Gets Published
When you publish, all tools in your project become available. Make sure you're comfortable with that.
Why this matters: If you have test tools or tools you're still developing, they'll be accessible to agents. This can cause confusion or errors.
How to avoid it: Only publish tools you're ready to use in production. Keep test tools in separate projects, or don't publish until tools are ready.
Pitfall 4: Not Configuring Tool Access Properly
In OpenAI, you can control which tools agents can use. Don't enable tools your agent doesn't need.
Why this matters: Enabling unnecessary tools can confuse agents. They might try to use the wrong tool or get overwhelmed by too many options.
How to avoid it: Only enable the tools your agent actually needs. Start with one or two tools, then add more as needed.
Pitfall 5: Not Testing After Connection
Just because you connected the tool doesn't mean it works. Always test in the Agent Builder preview.
Why this matters: Connection issues, parameter mismatches, or configuration problems might not show up until you actually try to use the tool.
How to avoid it: Use the Preview feature in OpenAI Agent Builder to test your agent with real questions. Verify the agent can successfully call your tools and get correct results.
Best Practices Summary
Here's a quick checklist for publishing and connecting tools:
- ✅ Test before publishing: Verify tools work with real data
- ✅ Store tokens securely: Never share Bearer Tokens publicly
- ✅ Only publish production tools: Keep test tools separate
- ✅ Enable only needed tools: Don't overwhelm agents with unnecessary options
- ✅ Test after connecting: Use Preview to verify everything works
- ✅ Monitor usage: Use Pylar Evals to see how agents use your tools
- ✅ Iterate based on usage: Refine tools based on how agents actually use them
Next Steps
You've published your Pylar tool and connected it to OpenAI's Agent Builder. That's the final piece of the puzzle. Now you can:
1. Create more agents: Build additional agents for different use cases, each using different combinations of your tools.
2. Monitor usage: Use Pylar Evals to see how agents are using your tools, identify patterns, and optimize based on real usage.
3. Iterate and improve: As you learn how agents use your tools, refine them to better match their needs. Remember, updates reflect automatically—no need to republish.
4. Connect to other platforms: The same MCP credentials work with other agent builders too. Try connecting to LangGraph, Claude Desktop, or Zapier.
The key is to start simple and iterate. Your first connection doesn't need to be perfect—it just needs to work. You can always refine your tools and agent configurations based on real usage.
If you want to keep going, the next step is monitoring how agents use your tools with Evals. That's where you'll see the real value—understanding how agents interact with your data and identifying opportunities to improve.
Frequently Asked Questions
Do I need to republish when I update a tool?
No. When you update a tool in Pylar, connected agents get the change automatically. There is no republish step.
Can I connect the same tool to multiple agents?
Yes. Publish once, then connect the same MCP URL and Bearer Token to as many agents as you need.
What if my Bearer Token gets exposed?
Regenerate it in Pylar right away. The old token stops working, and you update your connected agents with the new one.
Can I see how agents are using my tools?
Yes. Pylar Evals shows how agents call your tools, so you can spot patterns and optimize based on real usage.
What if my agent can't find my tool?
Check that the tool is published, that the MCP URL and Bearer Token are entered correctly, and that the tool is enabled in the Agent Builder's tool list.
Can I use different tools for different agents?
Yes. When connecting each agent, you choose which tools to enable, so each agent only sees the tools you select.
What if I need to update my view after publishing?
Update the view in Pylar. Tools wrap views, so the change flows through to connected agents automatically.
Related Posts
How to Build Your First Data View in Pylar
Get started with Pylar in minutes. We'll walk you through creating your first SQL view, connecting a data source, and setting up basic access controls.
How to Build Your First MCP Tool on a Data View
Turn your data view into an MCP tool that agents can actually use. This step-by-step guide shows you how to publish a view as a tool in under 10 minutes.
Using Pylar with BigQuery, Snowflake, and Postgres
Pylar works with all the major data sources. Learn how to connect BigQuery, Snowflake, and Postgres, and what to consider when building views across different systems.
