
Hoshang Mehta

Your AI Agents Are Leaking Data Right Now (And You Don't Even Know It)

The Model Context Protocol was supposed to be AI's breakthrough moment—a universal standard enabling agents to connect seamlessly to any data source or tool. Instead, it's become a masterclass in what happens when convenience eclipses security. The numbers are sobering: 43% of MCP servers suffer from command injection vulnerabilities, 33% allow unrestricted network access, and 22% expose file systems beyond their intended scope. We're not talking about theoretical risks—these are exploited vulnerabilities affecting hundreds of thousands of production deployments.

This is precisely why Pylar exists.


The MCP Promise vs. Reality

When Anthropic launched MCP in November 2024, the vision was compelling: create a "USB-C for AI applications" that would let agents safely interact with databases, APIs, and services through one standardized protocol. Major players quickly jumped on board—Microsoft, OpenAI, Google, Amazon, and development tools companies like Block, Replit, Sourcegraph, and Zed all integrated MCP support.

The adoption was explosive. Within months, thousands of MCP server repositories emerged on GitHub. The mcp-remote package alone has been downloaded over 558,000 times. Organizations rushed to connect their AI agents to production data, eager to unlock the productivity gains promised by autonomous AI.

But here's the uncomfortable truth: MCP was designed for developer convenience, not enterprise security.

Six Attack Vectors Threatening Your Data

Recent security analyses have identified six critical vulnerability classes systematically affecting MCP deployments. Each represents not just a theoretical risk, but an active exploitation pattern targeting real-world implementations.

1. OAuth Discovery Vulnerabilities (43% of servers affected)

The most severe attack class involves malicious servers injecting arbitrary commands through OAuth authorization endpoints. CVE-2025-6514 demonstrated how a trusted OAuth proxy can become a remote code execution nightmare. With nearly 560,000 downloads of vulnerable packages, this isn't a niche problem—it's a supply chain attack affecting hundreds of thousands of developer environments.

The Invariant Labs team discovered a particularly devastating example with the GitHub MCP server (14,000+ GitHub stars). By placing a malicious issue in a public repository, attackers could hijack a user's AI agent and coerce it into leaking data from private repositories. The agent would willingly pull private repository data into context and leak it into a publicly accessible pull request—all through what appeared to be a legitimate workflow.

2. Command Injection and Code Execution (43% of servers affected)

Backslash Security's analysis of thousands of publicly available MCP servers uncovered dozens of instances where servers execute arbitrary system commands through inadequate input validation. These aren't edge cases—they're systemic flaws enabling remote code execution at scale.

The problem is architectural: MCP servers often construct shell commands from user inputs without proper sanitization, use eval() and exec() functions indiscriminately, and lack input validation entirely.
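The pattern is easy to see in miniature. The sketch below is illustrative Python, not code from any particular MCP server: the unsafe version splices untrusted input into a shell string, while the safe version passes arguments as a list so no shell ever parses them.

```python
import subprocess
import sys

def word_count_unsafe(text: str) -> str:
    # VULNERABLE: untrusted text is spliced into a shell string, so an
    # input containing quotes and semicolons can break out of its slot
    # and run additional commands.
    cmd = f"{sys.executable} -c 'print(len(\"{text}\".split()))'"
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def word_count_safe(text: str) -> str:
    # Safer: arguments travel as a list, the OS receives them verbatim,
    # and shell metacharacters in the input stay inert data.
    code = "import sys; print(len(sys.argv[1].split()))"
    return subprocess.run([sys.executable, "-c", code, text],
                          capture_output=True, text=True).stdout
```

The same principle applies to `eval()` and `exec()`: if untrusted input ever reaches an interpreter, validation after the fact is too late.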

3. Unrestricted Network Access (33% of servers affected)

One-third of MCP servers allow unrestricted URL fetches, creating direct pathways for data exfiltration. Agents can communicate with external command-and-control infrastructure, download malicious payloads, or systematically steal intellectual property—all while appearing to perform legitimate queries.

This becomes particularly dangerous in production environments where agents have access to customer data, proprietary algorithms, or sensitive business intelligence. A compromised agent doesn't just leak one record—it can systematically harvest entire databases.
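A minimal egress control is an allowlist checked before any fetch is attempted. A sketch, with placeholder hostnames standing in for whatever your policy actually permits:

```python
from urllib.parse import urlparse

# Hypothetical egress policy: agents may only fetch from these hosts.
ALLOWED_HOSTS = {"api.example.com", "data.example.com"}

def is_fetch_allowed(url: str) -> bool:
    parsed = urlparse(url)
    # Reject anything that is not plain HTTPS to an allowlisted host.
    # Checking parsed.hostname (not the raw netloc) also blocks file://
    # URLs and userinfo tricks like https://api.example.com@evil.test/.
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS
```

Default-deny matters here: an agent that can reach arbitrary URLs can exfiltrate data even when every individual query it runs looks legitimate.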

4. File System Exposure (22% of servers affected)

Inadequate path validation allows MCP servers to access files outside their intended directories through classic directory traversal attacks. When 22% of servers exhibit file leakage vulnerabilities, you're not looking at sophisticated exploits—you're looking at missing basic security controls.

Combined with the 66% of servers showing poor MCP security practices overall, this creates a massive attack surface. Credentials stored in config files, source code in version control, environment variables containing API keys—all become accessible to agents that should only have narrowly scoped data access.
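The missing basic control is usually just path canonicalization. A sketch, assuming Python 3.9+ for `Path.is_relative_to`:

```python
from pathlib import Path

def resolve_within(base_dir: str, requested: str) -> Path:
    # Resolve symlinks and ".." segments first, then verify the result
    # still lives under the sandbox root before touching the filesystem.
    base = Path(base_dir).resolve()
    target = (base / requested).resolve()
    if not target.is_relative_to(base):
        raise PermissionError(f"path escapes sandbox: {requested}")
    return target
```

Checking the resolved path rather than the raw string is the whole point: a naive prefix check on the input is exactly what `../../` sequences are built to defeat.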

5. Tool Poisoning Attacks (5.5% of servers affected)

Tool poisoning represents a new class of AI-targeted vulnerabilities. Malicious MCP servers provide false tool descriptions or poisoned responses that trick AI systems into performing unauthorized actions. The Tenable research on localhost exploitation demonstrates how tool poisoning, combined with other vulnerabilities, turns users' own development tools against them.

This is particularly insidious because it exploits the AI's trust in the tools it's been given access to. Unlike traditional attacks that target technical vulnerabilities, tool poisoning attacks leverage the AI's reasoning capabilities against itself.

6. Secret Exposure and Credential Theft (66% show poor practices)

Traditional MCP deployments systematically leak credentials through environment variables, process lists, and inadequate secret management. Plaintext secrets visible in logs and process lists are not exceptions—they're the norm.

The secret harvesting attacks documented by security researchers show how attackers systematically collect API keys and credentials from compromised MCP environments, enabling widespread account takeovers. When your agent needs database credentials, and those credentials live in an environment variable that's visible to any process, you don't have security—you have security theater.
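One small mitigation for the logging half of this problem is a filter that scrubs token-shaped strings before they reach any sink. The patterns below are illustrative, not exhaustive, and this is damage limitation, not a substitute for keeping secrets out of the agent's environment in the first place:

```python
import logging
import re

# Illustrative patterns only: one common API-key prefix style and the
# AWS access-key-id shape. Real deployments would maintain a fuller set.
SECRET_PATTERN = re.compile(r"(sk-[A-Za-z0-9]{8,}|AKIA[0-9A-Z]{16})")

class RedactSecrets(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Rewrite the message in place so every downstream handler
        # sees the redacted form.
        record.msg = SECRET_PATTERN.sub("[REDACTED]", str(record.msg))
        return True
```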

Real-World Consequences: The Asana Incident

In May 2025, Asana launched an MCP server to help customers automate tasks via third-party AI applications. One month later, they discovered a critical flaw: their MCP implementation had broken tenant isolation completely. Users in one organization could see project names, task descriptions, and metadata from entirely separate tenants.

Over 1,000 customers were potentially affected between June 5 and June 17, 2025. The vulnerability went undetected for over a month—long enough for sensitive information to potentially leak across organizational boundaries, creating privacy and regulatory complexities.

Asana had to take the entire MCP server offline for nearly two weeks while implementing fixes. Even after restoration, all connections had to be manually reset. Organizations were left reviewing logs, auditing AI-generated summaries, and trying to determine if their confidential data had been exposed to competitors or unauthorized parties.

The Asana incident reveals a fundamental truth: this wasn't a hack or a malicious attack. It was a logic flaw in how the MCP server handled multi-tenant isolation—and it points to a systemic problem, not an isolated incident.

Why Traditional Security Fails With MCP

Traditional access controls were designed around users and static roles. AI agents don't fit that mold. They operate at machine speed, make autonomous decisions, and chain multiple tool calls together in ways that can't be predicted at configuration time.

Handing an agent a long-lived token and hoping for the best is not a security strategy—it's negligence waiting to be exploited. Yet that's exactly what most MCP implementations do today.

The problem compounds when you consider that MCP assumes both the requester and the requested resource are benign. Requests aren't validated before execution. There's no concept of least-privilege access. Authorization sprawl—where permissions accumulate over time without proper governance—becomes inevitable.

Model alignment isn't enough either. Invariant Labs demonstrated that even Claude 4 Opus, one of the most secure and aligned AI models available, remains vulnerable to relatively simple prompt injection attacks when operating through MCP. The security of agent systems is fundamentally contextual and environment-dependent.

The Cost of Getting This Wrong

When MCP security fails, the consequences cascade:

Data breaches expose customer PII, intellectual property, and confidential business information. In regulated industries, this triggers mandatory breach notifications, regulatory fines under GDPR, CCPA, or HIPAA, and potential class-action lawsuits.

Competitive disadvantages emerge when proprietary algorithms, strategic plans, or customer insights leak to competitors. The intellectual property you've spent years developing can be exfiltrated in seconds.

Operational disruptions occur when organizations must audit and remediate compromised systems. Business continuity suffers while security teams investigate the scope of exposure and implement fixes.

Regulatory violations in the EU carry particularly severe penalties. The EU AI Act and strict data protection requirements mean that security failures can result in fines reaching 4% of global annual revenue.

Reputational damage erodes customer trust permanently. Once customers learn their data was exposed due to inadequate security controls, they don't forget—they switch vendors.

Why Pylar Exists: Security-First Data Access for AI

Pylar was built specifically to solve the security crisis that traditional MCP implementations create. Rather than bolting security onto a convenience-first protocol, Pylar makes security the foundation.

Credential Isolation

Your agents never see database credentials. Ever. Pylar stores all credentials securely using cloud KMS. Agents interact with data through Pylar's secure gateway, which handles authentication and authorization transparently. This eliminates the entire class of secret exposure vulnerabilities that plague traditional MCP deployments.

View-Level Governance

Agents don't query raw database tables—they query SQL views you explicitly define and control. This gives you complete authority over rows, columns, and PII exposure. You can implement row-level security, filter sensitive data, and join across multiple data sources, all while ensuring agents only see what they're explicitly authorized to access.

If your support agent needs customer context, you create a view that includes name, email, and ticket history—but excludes social security numbers, credit card information, and internal notes. The agent physically cannot access data that's not in the view.
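The mechanism can be sketched with SQLite in a few lines. All table, column, and view names here are hypothetical; and note that in the governed model the agent never holds a connection to the raw table at all, which a single-connection sketch like this can only hint at:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (
        id INTEGER PRIMARY KEY,
        name TEXT,
        email TEXT,
        ssn TEXT,              -- PII the agent must never see
        internal_notes TEXT    -- likewise
    );
    INSERT INTO customers VALUES
        (1, 'Ada', 'ada@example.com', '123-45-6789', 'VIP, escalate fast');

    -- The only surface the agent is given: no ssn, no internal notes.
    CREATE VIEW support_customer_view AS
        SELECT id, name, email FROM customers;
""")

rows = conn.execute("SELECT * FROM support_customer_view").fetchall()
print(rows)  # → [(1, 'Ada', 'ada@example.com')]
```

A query against the view for a column outside it fails with "no such column"—the sensitive fields simply don't exist in the agent's world.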

Safe Query Abstraction

MCP tools in Pylar execute predefined SQL queries—not arbitrary queries constructed from user input. This eliminates command injection vulnerabilities entirely. Agents can't inject malicious SQL, can't perform unauthorized joins, and can't access tables outside their designated views.

When you publish a tool called get_customer_health, you define exactly what query it executes. The agent can call the tool, but it cannot modify the underlying query logic.
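In rough sketch form, with hypothetical names (the `get_customer_health` below is a toy illustration, not Pylar's implementation), the idea is that the SQL text is fixed at publish time and callers can only bind parameters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, health REAL);
    INSERT INTO accounts VALUES (1, 'Acme', 0.5), (2, 'Globex', 0.25);
""")

# The query is a constant; callers supply only bound parameters,
# so there is nothing to inject into.
GET_CUSTOMER_HEALTH = "SELECT name, health FROM accounts WHERE id = ?"

def get_customer_health(account_id):
    return conn.execute(GET_CUSTOMER_HEALTH, (account_id,)).fetchone()
```

Because the parameter is bound rather than interpolated, a classic payload like `"1 OR 1=1"` never becomes SQL; it's just a value that matches no id.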

Zero Raw Database Access

Agents never interact with your data warehouse directly. Pylar becomes the secure layer in between, enforcing governance policies in real-time. This architectural decision prevents an entire category of attacks that exploit direct database connections.

Even if an agent is compromised, it cannot bypass Pylar to access raw data. It cannot escalate privileges. It cannot discover other tables or databases. It can only execute the specific tools you've published, with the exact parameters you've defined.

Network and Resource Controls

Pylar enforces strict network policies, preventing data exfiltration attempts. Resource limits prevent denial-of-service attacks. Comprehensive logging provides full visibility into every query, every tool call, and every data access pattern.

When something goes wrong—and in production environments, things always go wrong eventually—you have the forensic data needed to understand exactly what happened, who was affected, and how to prevent recurrence.

Real-Time Authorization, Not Static Tokens

Traditional MCP implementations rely on long-lived tokens and assume benign actors. Pylar implements real-time authorization that evaluates context for every request:

  • Who is making the request? Not just user authentication, but agent identity verification.
  • What data are they requesting? Not just table names, but specific columns and rows.
  • What will they do with it? Not just read access, but understanding the downstream use case.
  • Is this request pattern normal? Anomaly detection flags unusual query sequences.

This continuous authorization model prevents privilege escalation, detects compromised agents before they can cause damage, and ensures that authorization decisions reflect current policies, not stale configurations.
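In miniature, per-request authorization plus a crude anomaly check might look like the sketch below. The policy table, rate ceiling, and every name in it are invented for illustration; a production system would evaluate far richer context:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Request:
    agent_id: str
    tool: str
    timestamp: float = field(default_factory=time.time)

# Hypothetical policy: which agent identity may call which tools.
ALLOWED_TOOLS = {"support-agent": {"get_customer_health"}}
MAX_CALLS_PER_MINUTE = 60
_history: dict = {}

def authorize(req: Request) -> bool:
    # 1. Identity + tool grant, evaluated per request, not per session.
    if req.tool not in ALLOWED_TOOLS.get(req.agent_id, set()):
        return False
    # 2. Crude anomaly check: a runaway or hijacked agent trips the
    #    rate ceiling even though each individual call looks legitimate.
    recent = [t for t in _history.get(req.agent_id, [])
              if req.timestamp - t < 60]
    if len(recent) >= MAX_CALLS_PER_MINUTE:
        return False
    _history[req.agent_id] = recent + [req.timestamp]
    return True
```

The contrast with a long-lived token is the point: here every call is a fresh decision, so revoking a grant or tightening a limit takes effect on the very next request.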

Built for Production: Observability and Compliance

Pylar provides enterprise-grade observability out of the box:

Success rates show which tools are working and which are failing. When tools start failing, you need to know immediately—before your customer-facing agents start hallucinating or providing incorrect information.

Latency analysis reveals performance bottlenecks. When queries slow down, you can optimize views or add indexes before users notice degraded response times.

Error tracking surfaces issues early. When schema changes break tools, or when query patterns indicate potential security concerns, Pylar alerts you proactively.

Cost monitoring prevents runaway spending. AI agents can generate thousands of queries per hour. Without cost visibility, your data warehouse bills can spiral out of control.

Audit trails satisfy compliance requirements. Every query, every tool call, every data access gets logged. When auditors ask "who accessed customer data and when," you have answers.

Framework-Agnostic by Design

One MCP server URL works with LangGraph, Claude Desktop, Zapier, n8n, Cursor, Windsurf, VS Code, and every other agent builder. Your governance policies travel with the data, regardless of which framework your teams choose.

This matters because organizations don't standardize on a single agent framework. Your customer support team might use n8n for workflow automation, your engineering team might use Cursor for coding assistance, and your sales team might use Claude Desktop for research. With Pylar, you define governance policies once, and they apply consistently across all agent deployments.

When you update a view or modify a tool, changes propagate automatically to every connected agent builder. You don't redeploy code. You don't update endpoints. You don't coordinate releases across teams. You change it once in Pylar, and every agent everywhere sees the update instantly.

Learning From API History

We've seen this pattern before with APIs. Early API development prioritized speed and flexibility. Security was an afterthought. Developers used static API keys, granted overly broad permissions, and lacked comprehensive logging.

The result was a decade of API security incidents: exposed keys in public repositories, privilege escalation attacks, data breaches through poorly secured endpoints. Eventually, the industry learned. We developed OAuth flows, implemented rate limiting, adopted zero-trust architectures, and built comprehensive API gateways.

MCP is following the same trajectory, but we have the advantage of learning from API mistakes rather than repeating them. The security practices that took the API ecosystem a decade to develop can be implemented from day one with MCP—if organizations choose platforms designed with security as the foundation.

The Path Forward

The regulatory environment is already responding to AI security risks. The EU AI Act and NIST's AI Risk Management Framework explicitly require organizations to address authentication, authorization, and data governance concerns. Security is no longer optional—it's a compliance requirement.

But beyond compliance, there's a competitive advantage to getting security right. Organizations that can deploy AI agents confidently—knowing their data is protected, their governance policies are enforced, and their audit trails are comprehensive—will move faster than competitors paralyzed by security concerns.

Pylar exists because connecting AI agents to production data shouldn't require choosing between innovation and security. You should be able to move fast without breaking things—or exposing your customers' data to unauthorized access.

What This Means for Your Organization

If you're connecting AI agents to production data today, you need to ask hard questions:

  • Can your agents access raw database tables, or only governed views?
  • Are database credentials visible to agents, or securely isolated?
  • Do agents execute arbitrary queries, or only predefined tools?
  • Can compromised agents exfiltrate data to external services?
  • Do you have comprehensive logging and audit trails?
  • Can you detect anomalous query patterns in real-time?
  • What happens when an agent is compromised?

If you don't have confident answers to these questions, you're operating with security debt that will eventually come due. The question isn't whether you'll face an incident—it's when, and how severe the consequences will be.

Conclusion: Security as the Foundation

The MCP security crisis isn't a temporary growing pain—it's a fundamental architectural challenge that requires purpose-built solutions. Organizations adopting AI agents need security-first platforms that make governance, isolation, and observability the default, not an afterthought.

Pylar exists because the alternative—hoping agents behave responsibly while giving them unfettered access to production databases—isn't a strategy. It's a liability waiting to materialize.

The future of AI agents is inevitable. The question is whether we build that future on a foundation of security and governance, or on the hope that nothing goes wrong.

The choice is yours, but the consequences are everyone's.


Ready to connect your AI agents to production data securely? Start with Pylar and deploy governed data access in minutes, not months.