Cisco AI Routers Solve Data Center Interconnect Challenges

By versatileai · October 10, 2025 · 7 min read

Cisco has become the latest major company to announce purpose-built routing hardware to connect AI workloads distributed across multiple facilities, as the race to dominate AI data center interconnect technology becomes increasingly intense.

The networking giant announced its 8223 Routing System on October 8, introducing what it claims is the industry’s first 51.2 Tbit/s fixed router specifically designed to link data centers running AI workloads.

At its core is the new Silicon One P200 chip, Cisco’s answer to a challenge that increasingly constrains the AI industry: what happens when there’s no room left to grow?

A three-way battle for scale supremacy?

Cisco is not alone in recognizing the opportunity. Broadcom fired the first salvo with its “Jericho 4” StrataDNX switch/router chip in mid-August; the part began sampling then and likewise offers 51.2 Tb/s of total bandwidth, backed by HBM memory for deep packet buffering to manage congestion.

Two weeks after Broadcom’s announcement, Nvidia announced its Spectrum-XGS scale-across network. The name is quite cheeky considering Broadcom’s “Trident” and “Tomahawk” switch ASICs belong to the StrataXGS family.

Nvidia secured CoreWeave as a major customer but provided limited technical details about the Spectrum-XGS ASIC. Now, Cisco is rolling out its own components for the scale-across networking market, setting up a three-way competition among the networking heavyweights.

The problem: AI is too big for one building

To understand why multiple vendors are flooding into this space, consider the scale of modern AI infrastructure. Training large language models or running complex AI systems requires thousands of high-performance processors working together, generating enormous amounts of heat and consuming large amounts of power.

Data centers are hitting hard limits not only on available space, but also on how much power they can supply and how much heat they can remove.

“AI computing is exceeding the capacity of even the largest data centers, increasing the need for reliable and secure connectivity between data centers hundreds of miles apart,” said Martin Rand, executive vice president of Cisco’s Common Hardware Group.

The industry has traditionally addressed capacity challenges through two approaches: scale-up (adding more functionality to individual systems) or scale-out (connecting more systems within the same facility).

But both strategies are reaching their limits. Data centers are running out of physical space, power grids can’t provide enough power, and cooling systems can’t dissipate heat fast enough.

This requires a third approach. “Scale-across” distributes AI workloads across multiple data centers in different cities or even different states. But it creates a new problem: the connectivity between those facilities becomes a significant bottleneck.

Why traditional routers aren’t enough

AI workloads behave differently from typical data center traffic. Training runs generate large, bursty traffic patterns: intense data movement followed by periods of relative quiet. If the networks connecting data centers can’t absorb these spikes, everything slows down, wasting expensive compute resources along with time and money.

Traditional routing equipment was not designed for this. Most routers prioritize either raw speed or advanced traffic management, but it’s difficult to provide both at the same time while maintaining reasonable power consumption. For AI data center interconnect applications, organizations need all three: speed, intelligent buffering, and efficiency.

Cisco’s answer: the 8223 system

Cisco’s 8223 system represents a departure from general-purpose routing equipment. Housed in a compact 3-rack unit chassis, it delivers 64 ports of 800 Gigabit connectivity. This is currently the highest density available for fixed routing systems. More importantly, it can process more than 20 billion packets per second and scale interconnect bandwidth to 3 exabytes per second.
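
As a rough sanity check on those headline numbers, here is a back-of-the-envelope calculation using only the figures quoted above (port count and per-port speed); it is illustrative arithmetic, not additional Cisco data:

```python
# Back-of-the-envelope check of the 8223's headline throughput figure.
ports = 64                    # 800G ports on the 3RU chassis
port_speed_gbps = 800         # gigabits per second per port

total_tbps = ports * port_speed_gbps / 1_000   # convert Gb/s to Tb/s
print(f"Aggregate capacity: {total_tbps:.1f} Tb/s")   # -> 51.2 Tb/s
```

Sixty-four ports at 800 Gb/s each works out to exactly the 51.2 Tb/s figure claimed for the P200-based system.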

The system features deep buffering capabilities enabled by the P200 chip. Think of a buffer as a temporary holding area for data, like a reservoir that takes in water during heavy rain. When AI training traffic spikes, the 8223’s buffers absorb the surge, preventing the congestion that would otherwise leave expensive GPU clusters sitting idle while they wait for data.
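
To make the reservoir analogy concrete, here is a minimal, hypothetical simulation of a deep buffer absorbing a traffic burst. Every number in it is invented for illustration and bears no relation to the 8223’s actual buffer sizing or scheduling:

```python
# Toy model: a deep buffer smoothing a bursty AI-training traffic pattern.
# All figures are illustrative, not Cisco specifications.
drain_rate = 100          # units the link can forward per time step
buffer_limit = 500        # capacity of the deep buffer
arrivals = [20, 30, 400, 450, 25, 10, 5]   # bursty offered load per step

queued, dropped = 0, 0
for step, arriving in enumerate(arrivals):
    queued += arriving
    if queued > buffer_limit:               # overflow -> drops and retransmits
        dropped += queued - buffer_limit
        queued = buffer_limit
    queued = max(0, queued - drain_rate)    # the link drains the buffer each step
    print(f"step {step}: queued={queued}, dropped so far={dropped}")
```

Rerun the same burst with a shallow buffer (say, buffer_limit = 100) and far more traffic is dropped, which in an AI cluster translates directly into retransmissions and idle GPUs.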

Power efficiency is another important benefit. As a 3RU system, the 8223 achieves what Cisco describes as “switch-like power efficiency” while retaining full routing functionality, which matters when data centers are already straining their power budgets.

The system also supports 800G coherent optics, enabling connectivity between facilities over up to 1,000 kilometers. This is essential for geographically distributing AI infrastructure.
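
Distance still costs latency, of course. A quick estimate, assuming the common rule of thumb of roughly 5 microseconds per kilometre for light in optical fibre (about two-thirds of c) and the 1,000 km span quoted above:

```python
# Rough one-way propagation delay over a 1,000 km coherent optical span.
# Assumes ~5 microseconds per km; ignores transceiver, FEC, and routing
# latency, so real end-to-end figures will be somewhat higher.
span_km = 1_000
us_per_km = 5

one_way_ms = span_km * us_per_km / 1_000
print(f"One-way propagation delay: ~{one_way_ms:.0f} ms "
      f"(~{2 * one_way_ms:.0f} ms round trip)")
```

Propagation delay of this order is unavoidable, which is one reason scale-across designs lean on deep buffering and congestion management rather than trying to eliminate the distance itself.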

Industry adoption and real-world applications

Major hyperscalers have already implemented this technology. Microsoft, an early adopter of Silicon One, recognized the value of this architecture across multiple use cases.

“The common ASIC architecture makes it easy to scale from initial use cases to multiple roles in DC, WAN, and AI/ML environments,” said Dave Maltz, technical fellow and corporate vice president of Azure Networking at Microsoft.

Alibaba Cloud plans to use the P200 as the foundation for extending its eCore architecture. Dennis Cai, vice president and head of network infrastructure at Alibaba Cloud, said the chip “enables expansion into the core network, replacing traditional chassis-based routers with clusters of P200-powered devices.”

Lumen is also exploring how this technology fits into its network infrastructure plans. Dave Ward, Lumen’s chief technology officer and head of product, said the company is “exploring how the new Cisco 8223 technology fits into our plans to enhance network performance and deploy superior service to our customers.”

Programmability: Future-proof your investment

One aspect of AI data center interconnection infrastructure that is often overlooked is adaptability. AI networking requirements are rapidly evolving, with new protocols and standards emerging regularly.

Legacy hardware typically requires replacement or expensive upgrades to support new features. The P200’s programmability addresses this challenge.

Organizations can update the silicon to support new protocols without replacing hardware, which matters when individual routing systems represent a significant capital investment and AI networking standards are still in flux.

Security considerations

Connecting data centers hundreds of miles apart creates security challenges. The 8223 includes line-rate encryption with post-quantum resilient algorithms to address concerns about future threats from quantum computing, and integration with Cisco’s observability platforms provides detailed network monitoring so issues can be identified and resolved quickly.

Can Cisco compete?

Cisco enters a contested field: Broadcom and Nvidia have already staked out positions in the scale-across networking market. However, the company brings advantages of its own, including a long-standing presence in enterprise and service provider networks, a mature Silicon One portfolio launched in 2019, and relationships with major hyperscalers already using its technology.

The 8223 will initially ship with open source SONiC support, and IOS XR will also be available in the future. The P200 will be available on multiple platform types including modular systems and the Nexus portfolio.

That deployment flexibility could prove crucial as organizations try to avoid vendor lock-in while building distributed AI infrastructure.

It remains to be seen whether Cisco’s approach will become the industry standard for AI data center interconnection, but the fundamental problem all three vendors are addressing – connecting distributed AI infrastructure efficiently – will become increasingly pressing as AI systems continue to expand beyond the limits of a single facility.

The true winner may ultimately not be determined by technical specifications alone, but by which vendor can offer the most complete ecosystem of software, support, and integration capabilities around their silicon.

See also: Cisco: Protecting your enterprise in the age of AI

Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expos in Amsterdam, California, and London. This comprehensive event is part of TechEx and co-located with other major technology events such as the Cyber Security Expo. Click here for more information.

AI News is brought to you by TechForge Media. Learn about other upcoming enterprise technology events and webinars.
