Microsoft’s MOSAIC technology replaces traditional laser-based optical cables with MicroLED-powered interconnects that cut data center networking power consumption by up to 68%. Announced in March 2026, MOSAIC uses hundreds of parallel low-speed channels on medical-grade imaging fiber to deliver 800 Gbps throughput over 50 meters — ten times the reach of copper — while consuming only 3.1–5.3W per link compared to 9.8–12W for conventional optics. With a working proof-of-concept transceiver built in collaboration with MediaTek, Microsoft targets commercialization by late 2027.
Key Takeaway: MOSAIC’s “Wide-and-Slow” architecture eliminates the laser bottleneck in AI data center networking, delivering the same bandwidth at less than half the power by trading a few high-speed channels for hundreds of slow, cheap, reliable MicroLED channels on imaging fiber.
Why Does Data Center Networking Power Matter for AI?
Electricity accounts for 46% of total spending at enterprise data centers and 60% at service provider facilities, according to IDC. AI data center energy consumption is growing at a compound annual rate of 44.7%, projected to reach 146 terawatt-hours by 2027. Networking interconnects — the cables connecting GPUs, switches, and storage — are a significant and growing portion of that power budget.
In a typical NVIDIA NVL72 pod interconnecting 72 B200 GPUs, optical link power adds roughly 20 kW per rack. At 100,000-GPU scale, an optical link fails somewhere in the cluster every 6–12 hours, according to the MOSAIC research paper. These constraints push engineers toward copper cables, which confine all 72 GPUs to a single rack and drive rack power density to 120 kW, requiring complex liquid cooling and causing deployment delays.
| Interconnect Type | Reach | Power (800G) | Failure Rate | Temperature Sensitivity |
|---|---|---|---|---|
| Copper (DAC) | ~2 meters | Passive (0W) | Very low | Low |
| Laser-based optics (AOC) | 100+ meters | 9.8–12W | High (FIT ~hundreds) | High — dust/heat sensitive |
| MOSAIC MicroLED | 50 meters | 3.1–5.3W | Very low (FIT <20) | Low — temperature-stable |
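To put those figures in rack-scale terms, here is a quick back-of-the-envelope sketch in Python. The ~20 kW per-rack optical figure comes from the NVL72 discussion above and the per-link ranges come from the table; the assumption that rack-level savings scale directly with per-link savings is my own simplification, not a figure from the paper:

```python
# Back-of-the-envelope: per-rack optical power savings if MOSAIC-class
# links replace conventional 800G optics at the same link count.

CONVENTIONAL_W = (9.8, 12.0)   # per-link power range, 800G laser optics (table above)
MOSAIC_W = (3.1, 5.3)          # per-link power range, MOSAIC (table above)
RACK_OPTICAL_KW = 20.0         # approx. optical link power per NVL72-class rack (article)

# Fractional savings per link, pairing the low/high ends of each range
savings = [1 - m / c for m, c in zip(MOSAIC_W, CONVENTIONAL_W)]
print(f"Per-link power reduction: {savings[1]:.0%}-{savings[0]:.0%}")  # ~56%-68%

# Scale the per-link reduction to the rack-level optical budget
for frac in savings:
    print(f"Rack optical power: {RACK_OPTICAL_KW:.0f} kW -> "
          f"~{RACK_OPTICAL_KW * (1 - frac):.1f} kW ({frac:.0%} saved)")
```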
“Power is the biggest bottleneck in AI datacenters today,” said Neil Shah, VP for research and partner at Counterpoint Research, in an interview with NetworkWorld. “Microsoft’s use of inexpensive MicroLEDs is a good approach which could keep the thermal bottleneck in check within the power-hungry AI data center, thereby reducing TCO for hyperscalers and eventually CIOs renting the infrastructure.”
For CCIE Data Center candidates, understanding the relationship between interconnect power budgets, rack density constraints, and topology design decisions is becoming essential knowledge. The days when cabling was “just plumbing” are over.
How Does Microsoft MOSAIC Actually Work?

MOSAIC flips the conventional “Narrow-and-Fast” (NaF) optical interconnect model on its head with a “Wide-and-Slow” (WaS) architecture. Instead of pushing data through 8 channels at 100 Gbps each (laser-based 800G), MOSAIC distributes the same 800 Gbps across 400+ channels running at just 2 Gbps each. This architectural shift eliminates the need for power-hungry components that dominate traditional optical links.
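The arithmetic behind this trade is easy to sanity-check. The short sketch below uses only the channel counts and rates quoted above (the 1.6 Tbps scaling path is discussed later in this article):

```python
# Sanity-check the "Wide-and-Slow" arithmetic described above.

# Narrow-and-Fast: conventional laser-based 800G optics
naf_channels, naf_rate_gbps = 8, 100
# Wide-and-Slow: MOSAIC
was_channels, was_rate_gbps = 400, 2

assert naf_channels * naf_rate_gbps == 800   # 8 fast lanes
assert was_channels * was_rate_gbps == 800   # 400 slow lanes, same aggregate

# Scaling path mentioned later in the article: raising the per-channel
# rate to 4 Gbps on the same 400-channel array reaches 1.6 Tbps.
print(was_channels * 4, "Gbps")  # 1600
```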
The Three Core Components
1. Directly Modulated MicroLEDs — MOSAIC replaces communication-grade lasers (consuming tens to hundreds of milliwatts each) with MicroLEDs originally designed for display technology. Each MicroLED measures just a few microns across and consumes only a few hundred microwatts — 100x to 1,000x less than a laser. A single monolithically integrated MicroLED array packs 400+ emitters within 1 mm², using simple ON/OFF (NRZ) modulation at 2 Gbps per channel. According to the MOSAIC SIGCOMM paper, MicroLEDs are inherently temperature-stable and dust-insensitive — two major reliability pain points for lasers.
2. Multicore Imaging Fiber — Borrowed from medical endoscopy, imaging fiber bundles thousands of individual fiber cores inside a single cable. “Imaging fiber looks like a standard fiber, but inside it has thousands of cores,” wrote Paolo Costa, Microsoft partner research manager and MOSAIC’s lead researcher. “That was the missing piece. We finally had a way to carry thousands of parallel channels in one cable.” Each MicroLED’s signal maps to multiple fiber cores, which simplifies alignment and packaging.
3. Low-Power Analog Backend — By running each channel at only 2 Gbps with NRZ encoding, MOSAIC eliminates the need for power-hungry DSP (digital signal processing), ADC/DAC converters, and CDR (clock and data recovery) circuits that dominate traditional optical transceiver power budgets. The clock signal travels on a dedicated control channel (adding only 0.25% overhead for 400 channels), and simple analog equalization compensates for chromatic dispersion.
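To make the light-source and clocking numbers concrete, here is an illustrative sketch. The per-device drive powers are assumed mid-range values within the “few hundred microwatts” and “tens to hundreds of milliwatts” figures quoted above, not measurements from the paper:

```python
# Illustrative light-source and clocking math for a 400-channel MOSAIC link.

EMITTERS = 400
PER_EMITTER_W = 300e-6   # assumed ~300 uW per MicroLED ("a few hundred microwatts")
LASER_W = 50e-3          # assumed ~50 mW per laser ("tens to hundreds of milliwatts")

# Per-device ratio: ~167x less power per MicroLED, within the article's
# "100x to 1,000x" range.
print(f"Laser vs. MicroLED per device: {LASER_W / PER_EMITTER_W:.0f}x")

print(f"MicroLED array drive: {EMITTERS * PER_EMITTER_W * 1e3:.0f} mW total")  # 120 mW
print(f"Eight lasers:         {8 * LASER_W * 1e3:.0f} mW total")               # 400 mW

# One dedicated clock channel shared by 400 data channels:
print(f"Clock overhead: {1 / EMITTERS:.2%}")   # 0.25%, as stated in the text
```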
Power Breakdown: Where the Savings Come From
| Component | Traditional Optics (800G) | MOSAIC (800G) |
|---|---|---|
| DSP / CDR | 3.5W | 0W (eliminated) |
| Light source + drivers | 4.7W (lasers) | 1.2W (MicroLEDs) |
| Digital backend | Included in DSP | 0.4W |
| Host interface | 0.2–2.4W | 0.2–2.4W |
| MCU / DC-DC | 1.4W | 1.3W |
| Total (end-to-end) | 9.8–12W | 3.1–5.3W |
The DSP elimination alone saves 3.5W, roughly a third (29–36%) of a traditional optical link’s power budget. For a 1.6 Tbps link, MOSAIC projects 10.6W versus 23–25W for conventional designs, according to the SIGCOMM paper.
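The totals in the table are just component sums, which makes them easy to reproduce. A minimal sketch using only the figures above:

```python
# Reconstruct the end-to-end totals in the power-breakdown table by
# summing component budgets. Host-interface power is a range, so the
# totals are ranges too.

traditional = {
    "dsp_cdr": 3.5,
    "light_source": 4.7,
    "host_interface": (0.2, 2.4),
    "mcu_dcdc": 1.4,
}
mosaic = {
    "dsp_cdr": 0.0,
    "light_source": 1.2,
    "digital_backend": 0.4,
    "host_interface": (0.2, 2.4),
    "mcu_dcdc": 1.3,
}

def total(budget: dict) -> tuple[float, float]:
    lo = sum(v[0] if isinstance(v, tuple) else v for v in budget.values())
    hi = sum(v[1] if isinstance(v, tuple) else v for v in budget.values())
    return lo, hi

lo_t, hi_t = total(traditional)
lo_m, hi_m = total(mosaic)
print(f"Traditional: {lo_t:.1f}-{hi_t:.1f} W")                      # 9.8-12.0 W
print(f"MOSAIC:      {lo_m:.1f}-{hi_m:.1f} W")                      # 3.1-5.3 W
print(f"Reduction:   {1 - hi_m / hi_t:.0%}-{1 - lo_m / lo_t:.0%}")  # 56%-68%
print(f"DSP/CDR share of traditional budget: {3.5 / hi_t:.0%}-{3.5 / lo_t:.0%}")  # 29%-36%
```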
How Does MOSAIC Compare to Co-Packaged Optics (CPO)?
Co-packaged optics (CPO) is the industry’s other major play to cut interconnect power. NVIDIA and Broadcom are advancing CPO as the preferred path, with NVIDIA’s CPO-based switches promising up to 3.5x lower power consumption than pluggable transceivers, slated for commercial availability in 2026. CPO integrates optical transceivers directly into the switch or NIC package, shortening the electrical traces between the host chip and the optics.
According to recent industry estimates cited in the MOSAIC paper, CPO reduces power consumption by 25–30% compared to pluggable transceivers. MOSAIC achieves 56–68% reduction — and the two approaches are complementary, not competing. When combined with CPO packaging, MOSAIC’s advantages amplify because the shorter chip-to-chip electrical paths allow direct MicroLED modulation without high-speed conversion circuitry.
| Feature | Pluggable Optics | Co-Packaged Optics (CPO) | MOSAIC MicroLED |
|---|---|---|---|
| Power reduction vs. pluggable | Baseline | 25–30% | 56–68% |
| Light source | Lasers | Lasers | MicroLEDs |
| Reach | 100+ m | 100+ m | Up to 50 m |
| Laser supply chain risk | Yes | Yes | No |
| Temperature sensitivity | High | Medium | Low |
| CPO-compatible | N/A | N/A | Yes |
| Commercial availability | Now | 2026 | Late 2027 |
There is a critical supply chain angle here. Laser supply shortages are expected to persist through 2027, according to Naresh Singh, senior director analyst at Gartner. “Microsoft’s MicroLED technology can come as a good alternative, in this context,” Singh said. By using commodity MicroLED and CMOS sensor manufacturing — both mature, high-volume supply chains — MOSAIC sidesteps the laser bottleneck entirely.
What Are the Limitations and Open Questions?

MOSAIC is not a silver bullet. Counterpoint Research’s Shah identified several challenges that could limit widespread adoption, per NetworkWorld:
Chromatic dispersion limits reach. MicroLEDs have broad spectral widths (tens of nanometers versus sub-picometer for lasers), making them susceptible to chromatic dispersion over distance. MOSAIC’s 50-meter sweet spot works for intra-facility connectivity but cannot replace long-haul laser optics.
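A rough worked example shows why this reach ceiling exists, and why it only bites broad-spectrum sources. The dispersion coefficient and spectral width below are illustrative assumptions for this sketch, not figures from the MOSAIC paper:

```python
# Chromatic dispersion spreads a pulse by roughly dt = D * L * d_lambda.
# Constants here are illustrative assumptions, NOT values from the paper.

D_PS_NM_KM = 100.0   # assumed dispersion coefficient, ps/(nm*km)
SPECTRAL_NM = 20.0   # assumed MicroLED spectral width ("tens of nanometers")
LINK_KM = 0.050      # MOSAIC's 50 m reach

spread_ps = D_PS_NM_KM * LINK_KM * SPECTRAL_NM   # ~100 ps of pulse spreading

for rate_gbps in (2, 100):
    bit_ps = 1e3 / rate_gbps                     # bit period in ps
    print(f"{rate_gbps:>3} Gbps: spread is {spread_ps / bit_ps:.0%} "
          f"of the {bit_ps:.0f} ps bit period")
# At 2 Gbps the spread (~20% of a bit) is correctable with simple analog
# equalization; at laser-class 100 Gbps it would be 10x the bit period,
# which is why narrow-linewidth lasers dominate long-reach optics.
```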
Bandwidth ceiling risk. MOSAIC’s current design point is 400G–800G. By the 2027–2028 deployment window, the industry may have moved to 1.6T or 3.2T targets. However, the architecture is designed to scale: increasing channel count or boosting per-channel rates to 4–8 Gbps can reach 1.6 Tbps and beyond. Simulations in the SIGCOMM paper show 8 Gbps per channel is achievable at 10 meters.
Ecosystem adoption uncertainty. Without buy-in from NVIDIA or AMD for GPU-side integration, scalability remains uncertain. Standardization is another hurdle — traditional optical interconnects have benefited from Multi-Source Agreements (MSAs) that define transceiver standards. “Recent interconnect offerings have to aim for some standardization to drive faster and sustained adoption,” Gartner’s Singh noted.
Infrastructure changes. Specialized cabling (imaging fiber) and potential rack design changes add cost beyond the MicroLED components themselves. The drop-in QSFP/OSFP compatibility helps, but imaging fiber is not standard data center cabling today.
Despite these challenges, the MediaTek proof-of-concept demonstrates manufacturing feasibility, and Microsoft’s parallel deployment of Hollow Core Fiber (HCF) for inter-data-center links shows a comprehensive strategy — MOSAIC for short-range intra-facility, HCF for long-distance.
What Does This Mean for Data Center Network Topology?
MOSAIC’s 50-meter reach at copper-like power levels opens topology options that were previously impractical. Current data center fabrics use Top-of-Rack (ToR) switches because copper cables cannot span beyond 2 meters at high speeds. This forces a specific leaf-spine architecture with ToR switches as an intermediate layer.
With 50-meter MicroLED reach, according to Data Center Knowledge, several topology changes become feasible:
- ToR switch elimination — servers connect directly to Row Switches or End-of-Row (EoR) switches, reducing latency, hardware cost, and single points of failure (a reach feasibility sketch follows this list)
- GPU fabric disaggregation — instead of confining 72 GPUs to one rack (as in NVL72 today), MicroLED links enable GPU-to-GPU connectivity across multiple racks at low power
- Advanced topologies — multi-dimensional torus, dragonfly, and hypercube topologies become practical when 50-meter reach removes the copper distance constraint
- Memory disaggregation — MOSAIC’s low latency (no FEC or DSP processing, only nanoseconds of delay) supports separating memory pools from compute resources, reducing dependence on costly HBM3e stacking
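As a quick feasibility check on the ToR-elimination point above, the sketch below asks whether a server-to-EoR cable run stays inside MOSAIC’s 50-meter reach. The rack pitch and routing-slack factor are generic planning assumptions, not numbers from the article:

```python
# Rough feasibility check for ToR elimination: how long a row can an
# End-of-Row switch serve within a given cable reach?

RACK_PITCH_M = 0.6     # assumed width per rack position
ROUTING_FACTOR = 1.5   # assumed slack for tray routing, bends, and drops
REACH_M = 50.0         # MOSAIC reach

def max_racks_per_row(reach_m: float = REACH_M) -> int:
    """Largest row (in racks) whose far end is still reachable from an EoR switch."""
    return int(reach_m / (RACK_PITCH_M * ROUTING_FACTOR))

print(max_racks_per_row())      # ~55 racks -- far longer than a typical row
print(max_racks_per_row(2.0))   # copper DAC's ~2 m: ~2 racks, hence ToR switches
```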
“This architectural shift enables Microsoft to scale its Azure GPU clusters more densely than rivals such as AWS and Google Cloud, which remain tethered to power-intensive, heat-sensitive laser systems,” said Ron Westfall, VP and analyst at HyperFrame Research, in an interview with Data Center Knowledge.
Frank Rey, Microsoft’s general manager of Azure Hyperscale Networking, framed the two technologies as complementary: “HCF for long-distance inter-datacenter links, MOSAIC for in-facility GPU and server connectivity.”
What Should CCIE Data Center Candidates Know?
CCIE Data Center candidates increasingly need to understand physical-layer constraints driving fabric design decisions. The MOSAIC announcement signals a broader shift: data center networking innovation is moving from the control plane to the physical layer, driven by AI power density requirements.
Key areas to understand:
- Power-per-bit as a design constraint — GPU fabric topology decisions now start with the power budget, not just bandwidth requirements. A 68% power reduction per link changes the math on rack density, cooling design, and switch placement (see the energy-per-bit sketch after this list)
- Copper vs. optics vs. MicroLED trade-offs — the three-way comparison (reach, power, reliability, cost) is now a practical design exercise, not just theory
- NX-OS and ACI implications — as ToR elimination becomes feasible, leaf-spine fabric designs on Nexus 9000 platforms may evolve toward flatter architectures with fewer switching tiers
- VXLAN EVPN fabric scaling — longer physical reach means larger Layer 2 domains and different VXLAN segment sizing calculations
- HBM and memory architecture — understanding how interconnect capabilities affect GPU memory disaggregation is becoming relevant for data center design conversations
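Power-per-bit is easy to quantify from the per-link figures already given. A minimal sketch converting the power-breakdown table’s ranges into energy per bit:

```python
# Energy per bit, the metric behind "power-per-bit as a design constraint".
# Uses the per-link figures from the power-breakdown table above.

def pj_per_bit(power_w: float, rate_gbps: float) -> float:
    # W / (Gbit/s) = 1e-9 J/bit; multiply by 1e3 to express in pJ/bit
    return power_w / rate_gbps * 1e3

for name, (lo, hi) in {"Traditional 800G": (9.8, 12.0),
                       "MOSAIC 800G": (3.1, 5.3)}.items():
    print(f"{name}: {pj_per_bit(lo, 800):.1f}-{pj_per_bit(hi, 800):.1f} pJ/bit")
# Traditional: ~12.2-15.0 pJ/bit; MOSAIC: ~3.9-6.6 pJ/bit
```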
The convergence of optical innovation, AI compute density, and power constraints is reshaping what “data center networking” means. CCIE DC candidates who understand these physical-layer economics will have an edge in design discussions that increasingly start with watts, not just Gbps.
The Bigger Picture: Microsoft’s Dual-Layer Optical Strategy
Microsoft is not betting on a single optical technology. The company is deploying a dual-layer strategy: Hollow Core Fiber (HCF) for long-distance inter-data-center connectivity and MOSAIC MicroLED for short-range intra-facility links.
HCF, acquired through Microsoft’s 2022 purchase of University of Southampton spin-off Lumenisity, transmits light through air rather than glass. Microsoft reports up to 47% faster data transmission and 33% lower latency versus conventional single-mode fiber, based on published Southampton research. HCF is already in production across Azure regions.
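The physics behind those HCF numbers is straightforward: light propagates at nearly c through air but at c/n through glass. A quick check, using a textbook group index for silica (an assumption for this sketch, not a Microsoft figure):

```python
# Propagation delay per km in solid-core vs. hollow-core fiber.

C_KM_PER_MS = 299_792.458 / 1e3   # speed of light, km per millisecond
N_SILICA = 1.468                   # assumed group index of solid-core SMF
N_AIR = 1.0003                     # hollow core is effectively air

delay_glass = N_SILICA / C_KM_PER_MS   # ms per km
delay_air = N_AIR / C_KM_PER_MS

print(f"Solid-core SMF: {delay_glass * 1e3:.2f} us/km")   # ~4.90 us/km
print(f"Hollow core:    {delay_air * 1e3:.2f} us/km")     # ~3.34 us/km
print(f"Latency reduction: {1 - delay_air / delay_glass:.0%}")  # ~32%, near the reported 33%
```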
Together, these technologies represent a comprehensive optical networking overhaul:
| Layer | Technology | Range | Status |
|---|---|---|---|
| Intra-rack | Copper DAC | <2 m | Current standard |
| Intra-facility | MOSAIC MicroLED | Up to 50 m | PoC complete, 2027 target |
| Inter-DC | Hollow Core Fiber (HCF) | Long-haul | In production (Azure) |
“Overall, I see Microsoft capitalizing on the AI boom by owning the underlying physical efficiency of the cloud,” said HyperFrame Research’s Westfall, “preparing its infrastructure to be the fastest and most cost-effective to operate at scale.”
Frequently Asked Questions
What is Microsoft MOSAIC and how does it reduce data center power?
MOSAIC is a MicroLED-based optical interconnect developed by Microsoft Research in Cambridge, UK. It replaces traditional laser-based fiber optic cables with hundreds of parallel low-speed MicroLED channels transmitted through multicore imaging fiber. According to Microsoft’s SIGCOMM 2025 paper, this “Wide-and-Slow” architecture reduces networking power consumption by 56–68% compared to conventional 800 Gbps optical links.
How does MOSAIC compare to co-packaged optics (CPO)?
CPO integrates laser-based transceivers directly into switch or NIC packages, reducing power by 25–30% versus pluggable transceivers. MOSAIC achieves 56–68% reduction by eliminating lasers entirely. The two approaches are complementary — MOSAIC is fully compatible with CPO configurations and achieves even greater savings when combined, since shorter chip-to-chip paths enable direct MicroLED modulation.
When will MicroLED data center cables be commercially available?
Microsoft expects to commercialize MOSAIC with industry partners by late 2027. A working proof-of-concept transceiver has been miniaturized to thumb-size in collaboration with MediaTek, fitting standard QSFP/OSFP form factors compatible with existing data center equipment.
Does MOSAIC work with existing data center equipment?
Yes. MOSAIC fits standard QSFP/OSFP transceiver form factors and is compatible with existing PCIe electrical interfaces, according to the SIGCOMM 2025 paper. It functions as a drop-in replacement for current optical cables without requiring modifications to servers, switches, or NICs.
What does MOSAIC mean for CCIE Data Center candidates?
CCIE DC candidates should understand how power-per-bit constraints are reshaping GPU fabric topology decisions, the three-way trade-off between copper, laser optics, and MicroLED interconnects, and how technologies like MOSAIC enable architectural changes such as ToR elimination and GPU disaggregation on Nexus 9000 and ACI platforms.
Ready to fast-track your CCIE journey? Contact us on Telegram @firstpasslab for a free assessment.
