Eridu, an AI networking startup founded by serial entrepreneur Drew Perkins, emerged from stealth on March 10, 2026, with an oversubscribed $200 million Series A to build clean-sheet network switches with custom silicon designed from the ground up for AI data centers. The company argues that existing networking hardware — from Broadcom, Nvidia, Cisco, and Marvell — is hitting an architectural ceiling that incremental improvements can’t fix, and that connecting millions of GPUs requires a fundamentally different approach to switch design.
Key Takeaway: Eridu’s $200M bet signals that AI networking is splitting into its own hardware category — and the startup’s clean-sheet approach to custom silicon could disrupt incumbents the same way Infinera disrupted optical networking a decade ago.
Who Is Behind Eridu and Why Does It Matter?
Drew Perkins isn’t a first-time founder chasing an AI trend. In the late 1980s he co-created the Point-to-Point Protocol (PPP), the link-layer standard that carried IP over dial-up and serial lines for decades. According to TechCrunch, his track record includes:
- Lightera Networks: Co-founded, sold to Ciena for over $500 million in 1999
- Infinera: Co-founded, IPO’d, sold to Nokia for $2.3 billion in 2025
- Gainspeed: Co-founded, also acquired by Nokia
His co-founder, Omar Hassen (Chief Product Officer), comes from networking chip design at Broadcom and Marvell — the two companies whose silicon currently dominates data center switching.
The $200M Series A was led by Socratic Partners, with legendary VC John Doerr, Hudson River Trading, Capricorn Investment Group, and Matter Venture Partners participating. Notably, TSMC’s investing arm (VentureTech Alliance) is among the investors, signaling a fabrication partnership for Eridu’s custom silicon. According to TechCrunch, MediaTek, Bosch Ventures, and TDK Ventures also participated, bringing total funding to $230 million.
“My phone has been ringing off the hook,” Perkins told TechCrunch. “It’s been a fun time raising money for this venture… we’re very oversubscribed.”
The Problem: Networking Can’t Keep Up with GPU Compute
Eridu’s thesis boils down to a math problem that every AI infrastructure team is facing.
According to Perkins in his Network World interview: “GPU compute and memory bandwidth are improving by roughly 10x per year, while data center switches from Broadcom, Marvell, Cisco, etc. are still only improving 2–3x every 2–3 years.”
| Technology | Improvement Rate | Scale Target |
|---|---|---|
| GPU compute (Nvidia roadmap) | ~10x per year | Millions of GPUs per cluster |
| Memory bandwidth | ~10x per year | HBM4 and beyond |
| Network switching (incumbent) | 2–3x every 2–3 years | 51.2T per chip (Broadcom Tomahawk 5) |
That widening gap means the network is increasingly the bottleneck — not the GPUs themselves. A typical cloud data center connects roughly 100,000 servers using tens of gigabits each. AI data centers connect millions of GPUs requiring hundreds of gigabits each, with synchronized all-to-all communication patterns that punish any network imperfection.
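A back-of-the-envelope calculation makes that gap concrete. The sketch below assumes 25 Gb/s per cloud server and 400 Gb/s per GPU (illustrative round numbers based on the article’s “tens” and “hundreds” of gigabits, not figures from Eridu or any vendor):

```python
# Back-of-the-envelope comparison of aggregate east-west bandwidth.
# Per-node rates are illustrative assumptions, not published specs.

def aggregate_tbps(nodes: int, gbps_per_node: float) -> float:
    """Total injection bandwidth of a fabric, in terabits per second."""
    return nodes * gbps_per_node / 1_000

cloud = aggregate_tbps(nodes=100_000, gbps_per_node=25)      # typical cloud DC
ai = aggregate_tbps(nodes=1_000_000, gbps_per_node=400)      # AI GPU cluster

print(f"Cloud fabric: {cloud:,.0f} Tb/s")
print(f"AI fabric:    {ai:,.0f} Tb/s ({ai / cloud:.0f}x more)")
```

Even with conservative assumptions, the AI fabric must carry two orders of magnitude more aggregate traffic, and all-to-all collectives mean much of it hits the network simultaneously.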
Promode Nedungadi, Eridu’s CTO, told Network World that the problem is getting worse, not better: “Techniques like mixture-of-experts models and the disaggregation of inference into separate prefill and decode stages all require more data movement. The amount of data being moved per token is growing.”
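Nedungadi’s point can be made concrete with a rough model of the all-to-all “dispatch and combine” traffic a mixture-of-experts layer generates: every token’s activations are shipped to its top-k experts and back, so traffic scales with the routing factor k. All parameters below are illustrative, not drawn from Eridu or any published model:

```python
# Rough sketch of why mixture-of-experts inflates per-token data movement:
# per-layer all-to-all volume scales with tokens * top_k * hidden size.
# Every number here is an illustrative assumption.

def moe_alltoall_bytes(tokens: int, top_k: int, hidden_dim: int,
                       bytes_per_elem: int = 2) -> int:
    """Bytes moved in one MoE layer's dispatch + combine all-to-all."""
    one_way = tokens * top_k * hidden_dim * bytes_per_elem
    return 2 * one_way  # dispatch activations to experts, combine results back

baseline = moe_alltoall_bytes(tokens=8192, top_k=1, hidden_dim=4096)
top8 = moe_alltoall_bytes(tokens=8192, top_k=8, hidden_dim=4096)

print(f"top-1 routing: {baseline / 2**20:.0f} MiB per layer")
print(f"top-8 routing: {top8 / 2**20:.0f} MiB per layer")
```

Multiply that per-layer volume by dozens of layers and thousands of training steps per hour, and the “more data movement per token” trend becomes a fabric-design problem, not a tuning problem.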
What Is Eridu Actually Building?
Eridu is developing a clean-sheet network switch built around custom silicon — new ASICs designed from scratch for AI traffic patterns rather than adapting general-purpose switching chips.
According to Perkins: “There’s no doubt that we are developing our own silicon. We’re developing the most advanced silicon in the networking sector, bar none, period, and that’s absolutely necessary. You don’t get to an order-of-magnitude higher scale using off-the-shelf silicon.”
The Technical Approach
While Eridu hasn’t disclosed detailed specifications or a GA date, the public details from their Network World and TechCrunch interviews reveal the architecture:
- Custom silicon with chiplet architecture: Leveraging TSMC’s advanced packaging and chiplet-based design to break through single-die limitations
- On-chip integration: Moving networking functions that currently require separate optical connections onto the chip itself, reducing hops, latency, and power consumption
- Clean-sheet system design: Complete switch systems — not just chips — that replace traditional tiered network architectures
“We believe you need to be on a different technology arc than what the mainstream technology is,” Hassen told Network World. “You’ve got to take advantage of everything you can from chiplet-based architecture, clean-sheet design, and advanced packaging.”
Three Scales of AI Networking
Perkins described three distinct networking challenges that Eridu is targeting:
| Scale | Definition | Current State |
|---|---|---|
| Scale-up | GPU-to-GPU interconnects within a training domain | NVLink, NVSwitch (proprietary Nvidia) |
| Scale-out | Broader cluster fabric connecting training domains | Spectrum-X, Broadcom switches, Cisco Silicon One |
| Scale-across | Linking data centers across cities and continents | Emerging — standards bodies beginning to address |
The scale-across layer is particularly interesting. As we covered in our analysis of Meta’s $135 billion Nvidia Spectrum-X deployment, hyperscalers are building unified architectures spanning multiple data centers. Eridu sees this as an underserved opportunity.
The Competitive Landscape: Who Is Eridu Taking On?
Eridu is entering one of the most fiercely competitive markets in semiconductors. Here’s how the main players stack up:
| Company | Approach | AI Networking Product | Market Position |
|---|---|---|---|
| Broadcom | Merchant silicon + custom ASICs | Tomahawk 5 (51.2T), Jericho3-AI | Dominant — supplies most hyperscalers, $100B+ AI chip TAM by 2027 per Reuters |
| Nvidia | Vertically integrated platform | Spectrum-X switches + SuperNIC | Growing — adopted by Meta, Oracle, xAI |
| Cisco | New AI-specific ASIC | Silicon One G200 (AI networking) | Launched Feb 2026, 28% faster AI job completion per Reuters |
| Marvell | Merchant silicon + custom compute | Teralynx, custom AI accelerators | $300M+ Ethernet switch business in FY2026, per Next Platform |
| Eridu | Clean-sheet custom silicon | Unannounced — targeting order-of-magnitude improvement | Pre-revenue, $230M funded |
Eridu’s argument is that all of these incumbents are iterating on the same underlying switch architecture — higher-speed SerDes, bigger buffers, more ports — rather than fundamentally rethinking how an AI network switch should work. It’s the classic disruptor argument: incumbents optimize the existing curve while a startup jumps to a new one.
Whether Eridu can execute is the open question. Custom networking silicon is a multi-year, capital-intensive endeavor. Infinera succeeded in optical with a similar clean-sheet approach, but the AI networking market moves faster and has deeper-pocketed incumbents.
What This Means for the AI Networking Market
Eridu’s $200M raise is part of a broader pattern: AI networking is becoming its own distinct market category, separate from enterprise networking.
According to PitchBook’s Q4 2025 VC trends report, DevOps infrastructure drew the most VC capital at $1.8 billion, driven by “feed the GPU” economics. AI networking rides the same wave, and it’s attracting capital at a pace not seen since the optical networking boom of 2000.
The evidence is stacking up:
- Meta spending $135 billion on AI infrastructure, with Spectrum-X Ethernet as the fabric
- Broadcom projecting over $100 billion in AI chip sales by 2027
- Cisco launching a dedicated AI networking chip (Silicon One G200) for the first time
- Nvidia acquiring Enfabrica, another AI networking startup, for $900 million
- Eridu raising $230M to build clean-sheet switch silicon
This isn’t incremental growth. It’s a market inflection where the rules of network hardware design are being rewritten for a new class of workload.
What Network Engineers Should Watch For
As a network engineer, you might look at a pre-revenue startup building custom silicon and think it’s irrelevant to your career today. It’s not. Here’s why:
1. The Skills Are the Same — the Scale Is Different
Eridu’s switches will still run Ethernet. They’ll still participate in leaf-spine Clos fabrics. They’ll still use a BGP underlay, RoCE transport, and PFC for lossless operation with ECN for congestion signaling. The fundamental protocols don’t change — but the scale, the traffic patterns, and the telemetry requirements do.
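The Clos arithmetic behind those fabrics transfers directly, whatever silicon sits in the box. A minimal two-tier leaf-spine sizing sketch, using a hypothetical 64-port switch (not an Eridu specification):

```python
# Minimal two-tier leaf-spine sizing sketch. The same Clos arithmetic
# applies to Broadcom, Nvidia, Cisco, or clean-sheet silicon alike.
# Port counts and cluster sizes here are hypothetical examples.

import math

def size_fabric(gpus: int, ports_per_switch: int,
                oversub: float = 1.0) -> tuple[int, int]:
    """Return (leaves, spines) for a two-tier leaf-spine Clos fabric.

    Each leaf splits its ports between downlinks (to GPUs) and uplinks
    (to spines); oversub is the downlink:uplink bandwidth ratio.
    """
    down = math.floor(ports_per_switch * oversub / (1 + oversub))
    up = ports_per_switch - down
    leaves = math.ceil(gpus / down)
    spines = math.ceil(leaves * up / ports_per_switch)
    return leaves, spines

# 8,192 GPUs on 64-port switches at 1:1 (non-blocking): 32 down / 32 up
leaves, spines = size_fabric(gpus=8192, ports_per_switch=64)
print(f"{leaves} leaves, {spines} spines")
```

The design levers — port radix, oversubscription ratio, tier count — are exactly what CCIE-level fabric design teaches; AI clusters just push each one to its limit.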
If you’re studying for CCIE Enterprise Infrastructure or CCIE Data Center, the fabric design, QoS, and troubleshooting skills you’re building are directly applicable to AI networking. As we explored in AI Network Automation: Your CCIE Insurance Policy, the CCIE foundation is becoming more valuable, not less.
2. Vendor Diversification Is Accelerating
For the past decade, Broadcom merchant silicon powered most data center switches regardless of brand. Eridu, Cisco’s Silicon One, Nvidia’s Spectrum-X, and Marvell’s Teralynx are all fragmenting that near-monopoly. Network engineers who understand multiple platforms — not just one vendor’s CLI — will be in higher demand.
As we covered in Every Networking Vendor Is Now an AI Company, the vendor landscape is reshuffling around AI workloads, and engineers who can evaluate and deploy across platforms command premium salaries.
3. The Job Market Is Expanding
Every new entrant in AI networking creates engineering jobs — not just at the startup itself, but at the hyperscalers evaluating and deploying the technology, the system integrators building the data centers, and the managed service providers operating them. Eridu’s 100+ employees today will grow significantly as they approach product launch.
According to Dell’Oro Group, Ethernet has overtaken InfiniBand as the leading fabric for AI scale-out networks, with more than double its share. That expansion creates thousands of roles for engineers who understand both traditional networking and AI-specific requirements.
The Bottom Line: Architecture Matters Again
For years, data center networking felt commoditized — the same Broadcom silicon in every switch, the same leaf-spine topology, the same BGP underlay. The AI infrastructure buildout is changing that. Architecture choices matter again because the workloads are fundamentally different from anything traditional Ethernet was designed for.
Eridu may succeed or it may not — building custom networking silicon is one of the hardest things in semiconductors. But its $200M raise and the pedigree of its founders tell us something important: the smartest money in tech believes that current networking architecture isn’t good enough for AI at scale, and whoever solves that problem will capture an enormous market.
For network engineers, the message is clear: the same CCIE skills that built the internet and the cloud are now the foundation for AI infrastructure — but you need to extend them into RoCE, lossless Ethernet, and AI workload telemetry to stay at the cutting edge.
Frequently Asked Questions
What is Eridu and what does the AI networking startup do?
Eridu is an AI networking startup founded by Drew Perkins (co-founder of Infinera and Lightera) that emerged from stealth in March 2026 with $200M in Series A funding. The company is building clean-sheet network switches with custom silicon designed specifically for AI data center workloads.
How is AI data center networking different from cloud networking?
AI data centers connect millions of GPUs requiring massive east-west bandwidth for synchronized all-to-all communication during training. Cloud data centers typically serve around 100,000 servers with more modest per-node bandwidth. AI workloads demand lossless RDMA fabrics with microsecond-scale congestion reaction — fundamentally different from traditional cloud networking.
Who are Eridu’s competitors in AI networking?
Eridu competes with Nvidia (Spectrum-X), Broadcom (Tomahawk/Jericho), Cisco (Silicon One), Marvell (Teralynx), and Arista in the AI networking space. Each takes a different approach: Nvidia bundles switches with GPUs, Broadcom sells merchant silicon, and Eridu is building clean-sheet custom ASICs.
What skills do network engineers need for AI networking jobs?
AI networking roles require expertise in RoCE (RDMA over Converged Ethernet), lossless Ethernet fabric design with PFC/ECN, leaf-spine Clos topologies at massive scale, adaptive routing, and network telemetry for GPU workload optimization. CCIE-level foundation in switching and routing translates directly.
Is Eridu a real competitor to Broadcom and Nvidia?
It’s too early to say. Eridu is pre-revenue with no disclosed product specs or GA date. However, Drew Perkins’ track record (Infinera’s $2.3B exit to Nokia, Lightera’s $500M exit to Ciena) and the TSMC partnership give the company credibility. Custom silicon development takes 2–3 years minimum, so real competitive validation won’t come until late 2027 or 2028 at earliest.
Want to position your networking career for the AI infrastructure wave? Contact us on Telegram @phil66xx for a free skills assessment and personalized study plan.