How to Build a Full Internal Agent Workflow in Under 1 Hour

by Hoshang Mehta

You've read the tutorials on connecting data sources, creating views, and building MCP tools. Now it's time to connect all the dots and get a working agent running end to end. This guide will walk you through building a complete internal agent workflow in under 60 minutes.

This tutorial ties everything together: connecting a data source, creating a view, building an MCP tool, publishing it, connecting it to an agent builder, and testing the result. By the end, you'll have a working agent that can answer questions about your data. No shortcuts, no assumptions—just a complete workflow you can follow.


Here's What We'll Build

We'll build a Customer Support Agent that can answer questions about customers, orders, and support tickets. This agent will:

  • Answer questions like "What's the status of customer 12345?"
  • Look up customer information by ID or email
  • Provide order history and customer health status
  • Help support teams get instant customer context

Example interactions:

  • "Get customer information for customer ID 12345"
  • "What's the status of customer@example.com?"
  • "Show me customers who haven't placed an order in 90 days"

This is a practical use case that most teams can relate to, and you can adapt it to your specific needs.


Why This Approach Works

Before we start, let me explain why building a complete workflow matters and how it fits into Pylar's approach.

How This Fits into the Bigger Picture

This tutorial ties together Pylar's "Data → Views → Tools → Agents" flow:

  1. Data: Connect your data source (10 minutes)
  2. Views: Create a project and a governed SQL view (17 minutes)
  3. Tools: Build an MCP tool on your view (10 minutes)
  4. Publish: Generate credentials and publish (5 minutes)
  5. Agents: Connect to an agent builder (10 minutes)
  6. Test: Verify everything works (5 minutes)
  7. Monitor: Set up Evals (3 minutes)

Total: 60 minutes from data source to working agent.

The key insight: Each step builds on the previous one. You can't skip steps—you need the complete workflow to have a working agent.


Prerequisites

Before we start, make sure you have:

  • ✅ A Pylar account (sign up at app.pylar.ai if you don't have one)
  • ✅ Access to at least one data source (Postgres, MySQL, BigQuery, Snowflake, or any supported database)
  • ✅ Database credentials (host, database name, username, password)
  • ✅ Ability to whitelist IP addresses (for databases/warehouses)
  • ✅ An agent builder account (OpenAI Platform, Claude Desktop, Cursor, or any MCP-compatible builder)
  • ✅ 60 minutes of focused time

Important: If you don't have database credentials or can't whitelist IPs, invite a team member who can help with the connection step.


The Complete Workflow (60 Minutes)


Step 1: Connect Your Data Source (10 minutes)

First, let's connect Pylar to your database. I'll use Postgres as an example, but the process is similar for other databases.

1.1: Navigate to Connections

  1. Log in to Pylar: Go to app.pylar.ai and sign in (or sign up if you don't have an account)
  2. Open Connections: Click "Connections" in the left sidebar
  3. Select Database Type: Click on PostgreSQL (or your database type)

Connections page in Pylar showing available data sources

1.2: Enter Connection Details

Fill in your database credentials:

  • Host: Your Postgres hostname or IP (e.g., db.example.com or 192.168.1.100)
  • Port: 5432 (or your custom port)
  • Database Name: The database you want to connect (e.g., production, analytics)
  • Username: Your Postgres username
  • Password: Your Postgres password
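Before typing these into Pylar, it can save a debugging cycle to confirm the credentials work from your own machine. Here's a rough Python sketch using psycopg2; every value below is a placeholder, and the actual connect call is left commented out since it needs network access to your database:

```python
# Assemble the same parameters you'll enter in Pylar's connection form.
# All values below are placeholders; substitute your own.
conn_params = {
    "host": "db.example.com",
    "port": 5432,
    "dbname": "production",
    "user": "pylar_reader",
    "password": "change-me",
}

# A quick summary of what you're about to test (password omitted).
dsn = "host={host} port={port} dbname={dbname} user={user}".format(**conn_params)
print(dsn)

# To actually test the connection, uncomment (requires psycopg2 installed
# and network access to the database):
#   import psycopg2
#   with psycopg2.connect(**conn_params) as conn:
#       with conn.cursor() as cur:
#           cur.execute("SELECT 1")
```

If `SELECT 1` succeeds from your machine, any remaining failure inside Pylar is almost certainly the IP whitelisting covered next.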

1.3: Whitelist Pylar IP

Critical: Before submitting, whitelist Pylar's IP address: 34.122.205.142

For cloud-hosted Postgres:

  • AWS RDS: Add inbound rule allowing PostgreSQL (port 5432) from 34.122.205.142
  • Google Cloud SQL: Add authorized network 34.122.205.142/32
  • Azure Database: Add client IP 34.122.205.142 in firewall settings

For self-hosted Postgres: Edit pg_hba.conf and add:

host    all    all    34.122.205.142/32    md5

Then reload: sudo systemctl reload postgresql

1.4: Test and Save

  1. Click "Submit" to test the connection
  2. If successful, give it a schema name (e.g., postgres_production)
  3. Save the connection

Time check: You should be at ~10 minutes. If the connection fails, check your credentials and IP whitelisting.


Step 2: Create a Project (2 minutes)

Projects organize your views and tools.

  1. Click "Create Project": From the dashboard, click "Create Project"
  2. Name it: Give it a descriptive name (e.g., "Customer Support Agent")
  3. Add description (optional): "Internal agent for customer support queries"
  4. Click "Create"

Time check: ~12 minutes total.


Step 3: Create Your First View (15 minutes)

Now let's create a view that combines customer and order data. This view will be what the agent queries.

3.1: Open the SQL IDE

  1. In your project, click "Create View"
  2. SQL IDE opens: You'll see the SQL editor with your data sources available

SQL IDE in Pylar showing the query editor

3.2: Write Your Query

Let's create a simple customer view that includes order information:

-- Customer Support View
-- Combines customer data with order history for support queries

SELECT 
  c.customer_id,
  c.email,
  c.name,
  c.signup_date,
  c.status as customer_status,
  COUNT(o.order_id) as total_orders,
  SUM(o.amount) as total_revenue,
  MAX(o.order_date) as last_order_date,
  CASE 
    WHEN MAX(o.order_date) < CURRENT_DATE - INTERVAL '90 days' THEN 'Inactive'
    WHEN MAX(o.order_date) < CURRENT_DATE - INTERVAL '30 days' THEN 'At Risk'
    ELSE 'Active'
  END as customer_health
FROM postgres_production.customers c
LEFT JOIN postgres_production.orders o 
  ON c.customer_id = o.customer_id
WHERE c.status = 'active'
GROUP BY c.customer_id, c.email, c.name, c.signup_date, c.status;

What this does:

  • Joins customers with orders
  • Calculates total orders and revenue per customer
  • Identifies last order date
  • Adds a customer health status based on recency

Adjust for your schema: Modify table names, column names, and join conditions to match your database structure.
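If you want to sanity-check the health bucketing before relying on it in SQL, here's a small Python sketch of the same logic. The thresholds and labels mirror the CASE expression in the view above; the sample dates are made up:

```python
from datetime import date, timedelta

def customer_health(last_order_date: date, today: date) -> str:
    """Mirror the view's CASE expression: no order in 90+ days is
    Inactive, 30+ days is At Risk, anything more recent is Active."""
    if last_order_date < today - timedelta(days=90):
        return "Inactive"
    if last_order_date < today - timedelta(days=30):
        return "At Risk"
    return "Active"

today = date(2024, 6, 1)
print(customer_health(date(2024, 5, 20), today))  # Active
print(customer_health(date(2024, 4, 15), today))  # At Risk
print(customer_health(date(2024, 1, 1), today))   # Inactive
```

Checking the boundaries this way makes it easier to spot off-by-one issues (e.g., whether "exactly 30 days ago" should count as At Risk) before they're baked into the view.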

3.3: Test Your Query

  1. Click "Run Query": Test the query to make sure it works
  2. Review results: Check that data looks correct
  3. Fix any errors: Adjust the query if needed

3.4: Save Your View

  1. Click "Save View"
  2. Name it: customer_support_view
  3. Add description: "Unified customer view with order history for support queries"
  4. Save

Time check: ~27 minutes total. If you're running into SQL issues, simplify the query—you can always refine it later.


Step 4: Build an MCP Tool (10 minutes)

Now let's turn your view into an MCP tool that agents can use.

4.1: Create Tool with AI

  1. Click on your view: Select customer_support_view in the sidebar
  2. Click "Create MCP Tool"
  3. Select "Create with AI"

Create MCP tool interface in Pylar

  4. Write a prompt:

    "Create a tool to get customer information by customer ID or email. The tool should return customer details, order history, and health status."

  5. Review generated tool: Pylar's AI will configure:

    • Tool name (e.g., get_customer_info)
    • Description
    • Parameters (customer_id, email)
    • Query logic
  6. Make adjustments (if needed):

    • Update parameter names
    • Refine description
    • Adjust query filters
  7. Click "Save Tool"
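Conceptually, the generated tool boils down to a name, a description, and a parameter schema. The structure below is an illustrative sketch only, not Pylar's actual internal format:

```python
# Hypothetical sketch of what an MCP tool definition captures.
# Field names here are illustrative, not Pylar's actual schema.
tool = {
    "name": "get_customer_info",
    "description": "Look up a customer by ID or email; returns details, "
                   "order history, and health status.",
    "parameters": {
        "customer_id": {"type": "string", "required": False},
        "email": {"type": "string", "required": False},
    },
}

# Since both parameters are optional, a sensible tool should still insist
# that at least one identifier is supplied per call.
def validate_call(args: dict) -> bool:
    return bool(args.get("customer_id") or args.get("email"))

print(validate_call({"customer_id": "12345"}))  # True
print(validate_call({}))                        # False
```

Keeping both parameters optional but requiring at least one is what lets the agent answer both "customer ID 12345" and "customer@example.com" style questions with a single tool.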

4.2: Test Your Tool

  1. Click "Test Run" on your tool
  2. Enter test parameters:
    • customer_id: 12345 (or a real customer ID from your database)
    • Or email: customer@example.com
  3. Click "Run Test"
  4. Verify results: Check that the tool returns correct data

Test tool interface showing results in Pylar

If the test fails:

  • Check parameter names match your query placeholders
  • Verify the query works in SQL IDE
  • Review error messages

Time check: ~37 minutes total.


Step 5: Publish Your Tool (5 minutes)

Now let's publish your tool to make it available to agents.

5.1: Publish

  1. Click "Publish": In the right sidebar, click "Publish"
  2. Generate Token: Click "Generate Token" in the popup
  3. Copy Credentials: You'll see two values:
    • MCP HTTP Stream URL: https://mcp.publish.pylar.ai/mcp
    • Authorization Bearer Token: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

Authorization Bearer Token generation screen showing credentials

Important: Copy both values and store them securely. You'll need them to connect to your agent builder.

5.2: Store Credentials Safely

  • Password manager: Store in 1Password, LastPass, etc.
  • Secure notes: Keep in a secure location
  • Never commit to Git: Don't put tokens in version control
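In practice, "never commit to Git" usually means loading the token from the environment at runtime rather than hardcoding it. A minimal sketch; the PYLAR_MCP_TOKEN variable name is my own choice, not a Pylar convention:

```python
import os

def load_mcp_token() -> str:
    """Read the Bearer token from the environment instead of hardcoding it.

    PYLAR_MCP_TOKEN is a made-up variable name; pick whatever fits your
    team's secrets conventions.
    """
    token = os.environ.get("PYLAR_MCP_TOKEN")
    if not token:
        raise RuntimeError(
            "PYLAR_MCP_TOKEN is not set; export it before running the agent."
        )
    return token
```

Then `export PYLAR_MCP_TOKEN="Bearer eyJ..."` locally (or inject it via your secrets manager in deployment), and the token never touches version control.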

Time check: ~42 minutes total.


Step 6: Connect to Agent Builder (10 minutes)

Now let's connect your tool to an agent builder. I'll use OpenAI Platform as an example, but the process is similar for other builders.

6.1: Open Agent Builder

  1. Navigate to OpenAI Platform: Go to platform.openai.com
  2. Sign in: Log in to your account
  3. Open Agent Builder: Navigate to the Agent Builder section

6.2: Create or Open an Agent

  1. Create new agent: Click "Create" or "New Agent"
  2. Name it: "Customer Support Assistant"
  3. Add description: "Helps answer questions about customers and orders"

6.3: Connect MCP Server

  1. Find Connect option: Look for "Connect", "Add Tool", or "MCP Server" in the tools section
  2. Click "Connect": Add a new MCP server connection
  3. Enter credentials:
    • MCP Server URL: Paste https://mcp.publish.pylar.ai/mcp
    • Authorization Token: Paste your full Bearer Token (including "Bearer ")
  4. Save: Click "Save" or "Connect"

What happens: OpenAI connects to Pylar's MCP server and discovers your published tools. You should see your tool (get_customer_info) appear in the list.
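Under the hood, tool discovery is a request against the MCP stream URL. If you ever want to check the connection outside an agent builder, here's a rough Python sketch; the body follows MCP's JSON-RPC shape (a `tools/list` method), but treat the exact fields as an approximation of the protocol rather than official Pylar documentation:

```python
import json

MCP_URL = "https://mcp.publish.pylar.ai/mcp"

def build_tools_list_request(bearer_token: str):
    """Build the headers and JSON-RPC body for an MCP tools/list call."""
    headers = {
        "Authorization": bearer_token,  # full value, including "Bearer "
        "Content-Type": "application/json",
    }
    body = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
    return headers, body

headers, body = build_tools_list_request("Bearer <your-token>")
print(json.dumps(body))

# Sending it would then look like (requires the requests package):
#   import requests
#   resp = requests.post(MCP_URL, headers=headers, json=body)
#   print(resp.json())
```

A response listing get_customer_info confirms the same discovery that OpenAI performs when you click "Connect".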

Pylar tools connected to OpenAI Agent Builder

6.4: Configure Tool Access

  1. Review tools: You'll see your Pylar tool in the list
  2. Enable tool: Make sure it's selected/enabled
  3. Set approval (optional): Choose auto-approve or require approval
  4. Save: Confirm your selections

Time check: ~52 minutes total.


Step 7: Test Your Agent (5 minutes)

Now let's verify that your agent can actually use your tool.

7.1: Open Preview

  1. Click "Preview": In the Agent Builder, click the "Preview" button
  2. Test interface opens: You'll see a chat interface

7.2: Ask Test Questions

Try these questions:

  1. "Get customer information for customer ID 12345"

    • The agent should call your get_customer_info tool
    • It should return customer details, order history, and health status
  2. "What's the status of customer@example.com?"

    • The agent should use the email parameter
    • It should return the customer's information
  3. "Show me customers who haven't ordered in 90 days"

    • The agent might need to call the tool multiple times or you might need to refine the tool
    • This tests how the agent handles your data
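Behind the scenes, each of these questions becomes an MCP tool call. As a rough sketch of the wire format for the first question (the JSON-RPC shape follows the MCP convention of a `tools/call` method; treat the exact fields as an approximation):

```python
# Approximate JSON-RPC payload the agent sends when it invokes the tool.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_customer_info",
        "arguments": {"customer_id": "12345"},
    },
}

print(call["params"])
```

Seeing the request this way makes the failure modes concrete: a wrong tool name or a misspelled argument key fails here, before your SQL ever runs.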

7.3: Verify Results

Check that:

  • ✅ Agent successfully calls your tool
  • ✅ Results are correct
  • ✅ Agent uses the data to answer questions
  • ✅ No errors occur

If something fails:

  • Check tool is enabled in Agent Builder
  • Verify credentials are correct
  • Review error messages in Agent Builder
  • Test tool again in Pylar

Time check: ~57 minutes total.


Step 8: Set Up Evals (3 minutes)

Finally, let's set up monitoring so you can see how agents use your tool.

8.1: Access Evals

  1. In Pylar, click "Eval" in the top-right corner
  2. Dashboard opens: You'll see the Evaluation Dashboard

Evals dashboard showing tool performance metrics

8.2: Review Metrics

Once agents start using your tool, you'll see:

  • Total Count: How many times the tool was called
  • Success Count: Successful calls
  • Error Count: Failed calls
  • Success Rate: Percentage of successful calls
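The success rate is simply successes over total calls. A trivial sketch for computing it from the counts Evals reports:

```python
def success_rate(success_count: int, error_count: int) -> float:
    """Percentage of successful calls; 0.0 when there are no calls yet."""
    total = success_count + error_count
    return 0.0 if total == 0 else round(100.0 * success_count / total, 1)

print(success_rate(95, 5))  # 95.0
print(success_rate(0, 0))   # 0.0
```

Tracking this number over time is more useful than any single reading: a sudden drop usually means a schema change or a tool that needs refinement.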

8.3: Monitor Usage

  • Check regularly: Review Evals weekly or daily
  • Look for errors: Investigate any failures
  • Track patterns: See how agents use your tool
  • Optimize: Refine tools based on usage

Time check: ~60 minutes total. You're done!


Real-World Example: What You Just Built

You've built a complete customer support agent workflow. Here's what it can do:

Example Interactions

User: "What's the status of customer 12345?"

Agent:

  1. Calls get_customer_info with customer_id: 12345
  2. Gets customer data from your view
  3. Responds: "Customer 12345 (customer@example.com) has placed 5 orders totaling $1,250. Last order was 15 days ago. Status: Active."

User: "Which customers are at risk?"

Agent:

  1. Calls get_customer_info multiple times (or you could create a separate tool for this)
  2. Filters for customers with health status "At Risk"
  3. Responds: "Here are customers at risk: [list of customers]"

How This Helps Your Team

  • Faster support: Support agents get customer context instantly
  • Better answers: Agents have complete customer history
  • Reduced errors: No manual SQL queries or data lookups
  • Scalable: Works for any number of customers

Common Pitfalls & Tips

I've seen teams make these mistakes when building their first workflow. Here's how to avoid them.

Pitfall 1: Skipping the Test Step

Don't skip testing your tool before publishing. I've seen teams publish tools that fail in production.

Why this matters: A tool that fails in production creates a bad experience. The agent returns errors, users get frustrated, and you have to debug under pressure.

How to avoid it: Always test your tool in Pylar first using "Test Run". Verify it works with real data before connecting to agents.

Pitfall 2: Not Whitelisting IP First

Don't try to connect before whitelisting Pylar's IP. This is the #1 reason connections fail.

Why this matters: Without whitelisting, Pylar can't connect to your database. You'll spend time debugging credentials when the real issue is network access.

How to avoid it: Always whitelist 34.122.205.142 before connecting. Do it first, then test the connection.

Pitfall 3: Complex Queries on First Try

Don't start with complex queries. Start simple, then add complexity.

Why this matters: Complex queries are harder to debug. If something breaks, you won't know which part is the problem.

How to avoid it: Start with a simple SELECT query. Once it works, add joins, filters, and aggregations incrementally.

Pitfall 4: Not Testing with Real Data

Don't test with fake data. Use real data from your database.

Why this matters: Fake data might not reveal real issues. Real data shows actual problems you'll encounter in production.

How to avoid it: Test with actual customer IDs, emails, or other real identifiers from your database.

Pitfall 5: Forgetting to Monitor

Don't build the agent and forget about it. Monitor usage with Evals.

Why this matters: You won't know if the agent is working well or if there are issues. Problems compound over time if you don't catch them early.

How to avoid it: Set up Evals and check them regularly. Review success rates, errors, and usage patterns.

Best Practices Summary

Here's a quick checklist:

  • Whitelist IP first: Always whitelist 34.122.205.142 before connecting
  • Start simple: Begin with simple queries, add complexity later
  • Test thoroughly: Test tools with real data before publishing
  • Monitor usage: Set up Evals and check regularly
  • Iterate: Refine based on real usage, not assumptions
  • Document: Note what works and what doesn't for future reference

Next Steps

You've built a complete internal agent workflow. That's a huge accomplishment. Now you can:

  1. Expand the agent: Add more tools for different use cases (order history, support tickets, etc.)

  2. Refine based on usage: Use Evals to see how agents use your tools and optimize them

  3. Build more agents: Create agents for different teams (sales, product, finance)

  4. Connect more data sources: Add BigQuery, Snowflake, or SaaS tools to enrich your views

  5. Share with your team: Deploy the agent so your team can actually use it

The key is to start simple and iterate. Your first agent doesn't need to be perfect—it just needs to work. You can always refine it later based on real usage.

If you want to keep going, the next step is expanding your agent with more tools and data sources. That's where you'll see the real value—agents that can answer complex questions across multiple systems.


Frequently Asked Questions

What if I don't have customer and order data?

Use whatever tables you do have. The workflow is identical; only the view's SQL changes. Swap in your own tables, columns, and join conditions in Step 3.

Can I use multiple data sources?

Yes. Connect additional sources (BigQuery, Snowflake, SaaS tools, etc.) and join them in a single view. The rest of the workflow is unchanged.

What if my tool doesn't work in the agent?

Work backwards: confirm the tool is enabled in the agent builder, check that the MCP URL and Bearer token were pasted correctly, review the builder's error messages, and re-run "Test Run" in Pylar to confirm the tool itself still works.

How do I add more tools to my agent?

Create additional views and tools in Pylar and publish them. The agent builder discovers all of your published tools through the same MCP connection.

Can I use this with other agent builders?

Yes. Any MCP-compatible builder works, including Claude Desktop and Cursor. The connection step is the same: paste the MCP URL and Bearer token.

What if I need to update my view?

Edit the view in the SQL IDE, re-run the query, and save. Then re-test the tools built on it to make sure they still return what you expect.

How do I know if my agent is working well?

Check the Evals dashboard. A high success rate and few errors are good signs; recurring failures point to tools that need refinement.

Can I build this in less than 60 minutes?

Often, yes. The longest steps are connecting the data source and writing the view; if your credentials and IP whitelisting are already sorted and your schema is simple, you can move faster.
