The latest MCP specification update hardens enterprise infrastructure with enhanced security and moves AI agents from pilot to production.
To mark its first year, the open source project created by Anthropic released a revised specification this week aimed at solving operational pain points that keep generative AI agents in pilot mode. This update adds support for long-running workflows and tighter security controls with help from Amazon Web Services (AWS), Microsoft, and Google Cloud.
The market is moving away from brittle, bespoke integrations. For enterprises, this is an opportunity to deploy agentic AI that can read and write corporate data stores without incurring significant technical debt.
MCP progresses from “developer curiosity” to practical infrastructure
The story has moved from experimental chatbots to structural integration. Since September, the registry has grown 407% and now houses nearly 2,000 servers.
“One year after Anthropic introduced Model Context Protocol, MCP has gone from a developer curiosity to a practical way to connect AI to the systems where the work and data reside,” said Satyajith Mundakkal, Global CTO at Hexaware, following this latest specification update.
Microsoft is already “signaling the transition by adding native MCP support to Windows 11,” effectively moving the standard directly into the operating system layer.
This software standardization will occur in parallel with aggressive hardware scale-up. Mundakkal emphasizes “building unprecedented infrastructure,” citing OpenAI’s multi-gigawatt “Stargate” program as an example. “These are clear signs that AI capabilities and the data they rely on are rapidly expanding,” he says.
MCP is the plumbing that feeds data into these large-scale computing resources. As Mundakkal put it: “The value of AI is determined by the data that can be safely reached.”
Until now, hooking an LLM to a database has been a mostly synchronous affair: the model calls a tool and waits for the response. That works when a chatbot checks the weather, but fails when you want to migrate a codebase or analyze medical records.
The new “Tasks” feature (SEP-1686) changes this. It provides a standard way for servers to track long-running work, letting clients poll for status and cancel jobs if things go wrong. Operations teams automating infrastructure migrations need agents that can run for hours without timing out, and support for states like working and input_required finally brings that resiliency to agent workflows.
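The polling lifecycle described above can be sketched in a few lines of Python. This is a minimal simulation of SEP-1686-style semantics, not the official SDK: the class names, method names, and status strings (beyond `working` and `input_required`, which the article cites) are assumptions for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch of a Tasks (SEP-1686) lifecycle.
# Names are hypothetical, not the real MCP SDK API.

@dataclass
class Task:
    task_id: str
    status: str = "working"   # e.g. "working", "input_required", "completed", "cancelled"
    result: object = None

class TaskServer:
    """Simulates a server that tracks long-running work instead of blocking."""
    def __init__(self):
        self._tasks = {}

    def create_task(self, task_id, steps):
        # `steps` stands in for the real long-running job's progress.
        self._tasks[task_id] = (Task(task_id), iter(steps))
        return task_id

    def poll(self, task_id):
        """Advance the simulated work one step and report current status."""
        task, steps = self._tasks[task_id]
        if task.status != "working":
            return task
        try:
            task.result = next(steps)
        except StopIteration:
            task.status = "completed"
        return task

    def cancel(self, task_id):
        """Clients can abort a long-running job if things go wrong."""
        task, _ = self._tasks[task_id]
        if task.status == "working":
            task.status = "cancelled"
        return task

# Client side: poll until a terminal state instead of blocking on one call.
server = TaskServer()
tid = server.create_task("migrate-db", steps=["schema copied", "rows copied"])
task = server.poll(tid)
while task.status == "working":
    task = server.poll(tid)
print(task.status)  # completed
```

The point of the pattern is that the client never holds a connection open for hours; it checks back in, and can cancel, on its own schedule.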
MCP specification updates improve security
For CISOs in particular, AI agents often look like a large, uncontrolled attack surface, and the risks are already visible: security researchers found that approximately 1,800 MCP servers were exposed on the public internet by mid-2025, suggesting that supposedly private infrastructure is routinely left reachable.
“If done poorly, (MCP) integration can lead to sprawl and expand the attack surface,” Mundakkal warns.
To address this, the maintainers tackled the friction of Dynamic Client Registration (DCR). The fix is URL-based client registration (SEP-991): clients identify themselves with a URL pointing to a self-managed metadata document, eliminating the administrative bottleneck of pre-registering every client.
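The idea can be shown with a short sketch: the client_id is itself a URL, and the server resolves it to a metadata document the client hosts. This is an assumption-laden illustration, with the metadata fields mirroring common OAuth client metadata rather than the exact MCP schema, and an in-memory dict standing in for HTTPS fetches.

```python
import json

# Illustrative sketch of URL-based client registration (SEP-991).
# HOSTED_DOCS stands in for documents fetched over HTTPS; the field
# names are OAuth-style assumptions, not the official MCP schema.

HOSTED_DOCS = {
    "https://agent.example.com/mcp-client.json": json.dumps({
        "client_name": "Finance Report Agent",
        "redirect_uris": ["https://agent.example.com/callback"],
    })
}

def register_client(client_id_url):
    """Resolve a URL-style client_id to its self-managed metadata."""
    raw = HOSTED_DOCS.get(client_id_url)
    if raw is None:
        raise ValueError("client metadata document not reachable")
    metadata = json.loads(raw)
    # The server validates redirect URIs against the hosted document
    # instead of a manually pre-registered database entry.
    return {"client_id": client_id_url, **metadata}

client = register_client("https://agent.example.com/mcp-client.json")
print(client["client_name"])  # Finance Report Agent
```

Because the metadata lives at a URL the client controls, there is no admin queue to clear before a new agent can connect, which is precisely the bottleneck SEP-991 targets.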
Next is URL-mode elicitation (SEP-1036). This lets, for example, a server processing a payment bounce the user to a secure browser window to enter credentials. The agent never sees the password; it only receives the resulting token. Keeping core credentials isolated this way is non-negotiable for PCI compliance.
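The separation can be sketched as follows. This is a simplified simulation of the out-of-band flow, with hypothetical class and method names; the real protocol exchange differs, but the invariant is the same: the password flows only through the browser leg, and the agent leg sees only an opaque token.

```python
import secrets

# Sketch of the URL-elicitation pattern (SEP-1036). Names are
# illustrative; the point is the credential/token separation.

class PaymentProvider:
    """Simulates the out-of-band credential flow."""
    def __init__(self):
        self._sessions = {}

    def start_elicitation(self):
        """Server hands back a URL for the human to open in a browser."""
        session_id = secrets.token_hex(8)
        self._sessions[session_id] = None
        return session_id, f"https://pay.example.com/auth?session={session_id}"

    def user_completes_login(self, session_id, password):
        # Happens in the user's browser -- the agent never sees `password`.
        assert password, "credentials validated out-of-band"
        token = "tok_" + secrets.token_hex(8)
        self._sessions[session_id] = token
        return token

    def redeem(self, session_id):
        """Agent side: all it ever receives is an opaque token."""
        return self._sessions[session_id]

provider = PaymentProvider()
session, url = provider.start_elicitation()
provider.user_completes_login(session, password="hunter2")  # browser leg
token = provider.redeem(session)                            # agent leg
print(token.startswith("tok_"))  # True
```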
Harish Peri, SVP at Okta, believes this will provide “the oversight and access controls needed to build a secure and open AI ecosystem.”
A less-heralded feature of the specification update is “Sampling with Tools” (SEP-1577). Servers used to be passive data fetchers; now they can run their own agentic loops using the client’s tokens. Imagine a “research server” that spawns sub-agents to examine documents and compile a report, with no custom client code required. It simply brings the inference closer to the data.
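The inversion described above can be sketched like this: the server drives the reasoning loop by repeatedly calling back into the client’s model. The `sample` callback and the message shapes here are illustrative stand-ins for the real protocol, assumed for the sake of the example.

```python
# Sketch of "Sampling with Tools" (SEP-1577): the server runs its own
# loop by sampling the client's model, rather than only returning data.
# The callback signature and dict shapes are illustrative assumptions.

def research_server(question, sample):
    """Server-side loop: keep sampling the client's model until done."""
    notes = []
    prompt = question
    for _ in range(5):                 # bound the loop defensively
        reply = sample(prompt)         # inference runs on the client's tokens
        if reply.get("done"):
            return {"answer": reply["text"], "notes": notes}
        notes.append(reply["text"])    # intermediate sub-agent finding
        prompt = f"Continue: {reply['text']}"
    return {"answer": None, "notes": notes}

# A fake client-side model for demonstration: finishes on the third call.
calls = {"n": 0}
def fake_sample(prompt):
    calls["n"] += 1
    if calls["n"] < 3:
        return {"done": False, "text": f"finding {calls['n']}"}
    return {"done": True, "text": "summary report"}

result = research_server("Summarise the contracts", fake_sample)
print(result["answer"])  # summary report
```

Note who pays and who controls: the server owns the loop and stays next to the data, while the client supplies (and can meter) the model calls.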
However, wiring these connections is only step one. Mayur Upadhyaya, CEO of APIContext, asserts, “The first year of MCP implementation showed that enterprise AI doesn’t start with a rewrite, it starts with a publish.”
The next hurdle, he argues, is visibility: “The next wave is about visibility. Enterprises need to monitor MCP uptime and validate authentication flows as rigorously as they monitor APIs today.”
The MCP roadmap reflects this, with updates aimed at making debugging more “reliable and observable.” Trouble arises when teams treat an MCP server as a “set it and forget it” deployment. Mundakkal agrees, noting that the first year’s takeaway is to “combine MCP with strong identity, RBAC, and observability from day one.”
Star-studded industry lineup adopting MCP for infrastructure
A protocol is only as strong as its adopters. In the year since the original specification was released, the MCP registry has grown to nearly 2,000 servers. Microsoft uses it to bridge GitHub, Azure, and M365; AWS is baking it into Bedrock; Google Cloud supports it across Gemini.
This reduces vendor lock-in: a Postgres connector built for MCP should, in theory, work across Gemini, ChatGPT, or internal Anthropic agents without a rewrite.
The “plumbing” phase of generative AI is settling down, and open standards are winning the connectivity debate. Technology leaders should audit internal APIs for MCP readiness, focus on publishing rather than rewriting, and ensure new URL-based registrations align with current IAM frameworks.
Monitoring protocols should also be established immediately. The latest MCP specification updates are backward compatible with existing infrastructure, but adopting the new features is the only realistic way to put agents to work in regulated, mission-critical workflows while keeping them secure.

