STMicroelectronics just entered high-volume production of its PIC100 silicon photonics platform — the manufacturing technology behind the 800G and 1.6T optical modules going into every major AI data center buildout. For network engineers, this is the plumbing layer beneath your VXLAN EVPN overlays and BGP fabrics, and understanding it is becoming essential as data centers push past 400G.

Key Takeaway: Silicon photonics and co-packaged optics are the technologies enabling AI data center fabrics to scale to 800G/1.6T per link while cutting power consumption by up to 70% — and network engineers who understand the optical layer will design better fabrics and troubleshoot faster.

What Is Silicon Photonics and Why Should Network Engineers Care?

Traditional optical transceivers use III-V semiconductor materials (indium phosphide, gallium arsenide) manufactured on specialized processes. Silicon photonics does something fundamentally different: it builds optical components — waveguides, modulators, photodetectors — directly on standard silicon wafers using CMOS manufacturing processes.

According to STMicro’s official announcement (March 9, 2026), the PIC100 platform is now in high-volume production on 300mm wafers — the same wafer size used for mainstream processor manufacturing. This matters because:

  • Cost scales with volume — CMOS manufacturing is the most mature semiconductor process on the planet
  • Integration density — multiple optical channels on a single chip
  • Path to CPO — silicon photonics enables co-packaged optics, the next major architectural shift

STMicro plans to quadruple production capacity by 2027, with further expansion in 2028. The company’s roadmap includes PIC100 TSV (through-silicon via) technology enabling near-packaged and co-packaged optics integration.

Who’s Using PIC100?

According to ST’s blog (March 2026), PIC100 is used by “hyperscalers for optical transceivers.” While STMicro doesn’t name specific customers, hyperscalers — Google, Amazon, Microsoft, Meta — are the primary buyers of 800G and 1.6T optical modules for AI training fabrics.

STMicro manufactures the silicon photonics die; module vendors (Coherent, Lumentum, InnoLight) integrate it with electronic DSPs from companies like Marvell to create complete transceiver modules.

What Are Co-Packaged Optics and Why Do They Change Everything?

Co-packaged optics (CPO) is the architectural evolution that silicon photonics enables. Instead of plugging transceivers into the front panel of a switch (the model we’ve used for decades), CPO places the optical engine directly on or adjacent to the switch ASIC package.

Pluggable vs. Near-Packaged vs. Co-Packaged

| Architecture | Optical Engine Location | Power per 1.6T Link | Deployments |
|---|---|---|---|
| Pluggable (OSFP/QSFP-DD) | Front-panel module | ~30W | Mainstream today |
| Near-packaged optics (NPO) | On the board, near ASIC | ~15-20W | Early 2027+ |
| Co-packaged optics (CPO) | Inside ASIC package | ~9W | 2028-2030 |

According to Siemens Semiconductor Packaging research (February 2026), NVIDIA’s analysis shows transitioning from pluggable to CPO in 1.6T networks reduces link power from 30W to 9W — a 70% reduction. At data center scale with thousands of links, that’s megawatts of power savings.
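The fabric-wide impact of the 30W-to-9W figure cited above is simple arithmetic. A minimal sketch (the 100,000-link fabric size is a hypothetical example, not a reported deployment):

```python
# Illustrative arithmetic: fabric-wide savings from moving 1.6T links
# from pluggable optics to co-packaged optics (CPO).

PLUGGABLE_W = 30  # per 1.6T link with a pluggable module (figure cited above)
CPO_W = 9         # per 1.6T link with CPO (figure cited above)

def fabric_savings_mw(num_links: int) -> float:
    """Total power saved in megawatts across num_links optical links."""
    return num_links * (PLUGGABLE_W - CPO_W) / 1_000_000

# A hypothetical AI fabric with 100,000 optical links:
print(fabric_savings_mw(100_000))  # prints 2.1 (megawatts saved)
```

At 21W saved per link, every 100,000 links recovers roughly 2 MW — capacity that can go to GPUs instead of optics.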

Why CPO Matters for Fabric Design

The power savings are significant, but the architectural impact goes deeper:

Eliminated front-panel bottleneck — current switches are limited by how many transceivers you can physically fit in the front panel. CPO removes this constraint, enabling higher-radix switches with more ports per rack unit.

Reduced latency — shorter electrical traces between ASIC and optical engine mean lower serialization delay. For RDMA/RoCE workloads in AI training clusters, every microsecond matters.

Changed operational model — with pluggable optics, you can hot-swap a failed transceiver in minutes. CPO optical engines are permanently attached to the switch package or board, so a failure means replacing the entire line card or switch. This is a fundamental operational tradeoff that network engineers need to plan for.

The Deployment Timeline

According to Yole Group analysis cited by the Institution of Electronics (2026), large-scale CPO deployments are expected between 2028 and 2030. The timeline:

  • 2024-2026 — Pluggable optics dominate (OSFP, QSFP-DD at 400G/800G)
  • 2026-2027 — Silicon photonics-based pluggables ramp (PIC100 modules)
  • 2027-2028 — Near-packaged optics enter early production
  • 2028-2030 — CPO enters volume production for hyperscale AI fabrics

For network engineers, this means pluggable optics will be your primary interface for the next 2-3 years. But CPO planning is already happening at hyperscalers — and understanding the implications affects how you design fabrics today.

How Does the 800G to 1.6T Transition Change Fabric Design?

The jump from 400G to 800G — and then to 1.6T — isn’t just about faster links. It fundamentally changes spine-leaf fabric mathematics.

Higher Radix, Fewer Cables

A 51.2Tbps switch ASIC (the current generation) offers different port configurations:

| Configuration | Ports | Per-Port Speed | Total Bandwidth |
|---|---|---|---|
| 128-port | 128 | 400G | 51.2T |
| 64-port | 64 | 800G | 51.2T |
| 32-port | 32 | 1.6T | 51.2T |

The total switch bandwidth is the same, but the 64×800G configuration uses half the cables of 128×400G for the same bisection bandwidth. With 1.6T, it’s a quarter of the cables. At hyperscale — where a single fabric might have 100,000+ cables — this reduces physical complexity, weight, and airflow obstruction dramatically.
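The port-count math above follows directly from the fixed ASIC bandwidth. A quick sketch:

```python
# Sketch: port (and therefore cable) counts for a fixed 51.2 Tbps switch ASIC
# at different per-port speeds. Total bandwidth is constant; each doubling of
# port speed halves the number of cables for the same bisection bandwidth.

ASIC_GBPS = 51_200  # 51.2 Tbps ASIC expressed in Gbps

def ports(per_port_gbps: int) -> int:
    """Number of ports the ASIC can expose at a given per-port speed."""
    return ASIC_GBPS // per_port_gbps

for speed in (400, 800, 1600):
    print(f"{speed}G: {ports(speed)} ports")
# 400G: 128 ports, 800G: 64 ports, 1600G: 32 ports
```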

Impact on AI Training Fabrics

AI training clusters generate massive east-west traffic between GPU nodes. As we covered in our RoCE vs InfiniBand comparison, GPU-to-GPU communication requires lossless, low-latency connectivity. The 800G/1.6T transition enables:

  • Larger single-tier fabrics — 800G leaf-spine fabrics can support more GPU nodes before requiring a multi-tier design
  • Lower oversubscription — higher per-port bandwidth means closer to 1:1 oversubscription ratios for AI workloads
  • Adaptive routing at scale — 800G/1.6T links combined with packet spraying and adaptive routing help mitigate the ECMP polarization issues seen at 400G
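The “larger single-tier fabrics” point is standard Clos math: in a nonblocking two-tier leaf-spine built from identical N-port switches, half of each leaf’s ports face servers and half face spines. A sketch of that sizing (textbook Clos arithmetic, not vendor-specific):

```python
# Sketch: maximum server-facing ports in a nonblocking (1:1) two-tier
# leaf-spine built from identical N-port switches. Each leaf splits its
# ports half-down/half-up; each spine connects once to every leaf, so the
# fabric supports up to N leaves.

def max_hosts_two_tier(ports_per_switch: int) -> int:
    """Server-facing ports at 1:1 oversubscription in a two-tier Clos."""
    downlinks_per_leaf = ports_per_switch // 2
    max_leaves = ports_per_switch  # limited by spine port count
    return downlinks_per_leaf * max_leaves

print(max_hosts_two_tier(128))  # 128-port (400G) switches -> 8192 hosts
print(max_hosts_two_tier(64))   # 64-port (800G) switches  -> 2048 hosts
```

Note the tradeoff: configuring the same 51.2T ASIC as 64×800G shrinks the maximum single-tier host count versus 128×400G, but each host port carries twice the bandwidth — which is usually the right trade for GPU nodes with 800G NICs.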

PAM4 Signaling: What Engineers Need to Know

Both 800G and 1.6T use PAM4 (Pulse Amplitude Modulation 4-level) signaling, which carries 2 bits per symbol instead of the 1 bit per symbol used in NRZ (Non-Return-to-Zero) signaling at lower speeds. This doubles the data rate per lane but introduces:

  • Tighter signal integrity requirements — PAM4 has a 9.5dB SNR penalty vs. NRZ
  • Higher sensitivity to fiber quality — dirty connectors, tight bends, and substandard patch cords that worked at 100G may fail at 400G/800G
  • FEC dependency — Forward Error Correction is mandatory at 800G/1.6T, adding ~100ns of latency
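The lane math behind these speeds is worth internalizing. A sketch, using the standard 53.125 GBd baud rate of 100G-per-lane interfaces (the 8-lane module layout is the common 800G-DR8 arrangement):

```python
# Sketch: per-lane signaling math for NRZ vs PAM4.
# PAM4 carries 2 bits per symbol, so the same baud rate yields twice the
# line rate of NRZ (1 bit per symbol) -- at the cost of SNR margin.

def line_rate_gbps(baud_gbd: float, bits_per_symbol: int) -> float:
    """Per-lane line rate in Gbps for a given symbol rate and modulation."""
    return baud_gbd * bits_per_symbol

BAUD = 53.125  # GBd, standard for 100G-per-lane electrical/optical lanes

nrz = line_rate_gbps(BAUD, 1)    # 53.125 Gb/s per lane with NRZ
pam4 = line_rate_gbps(BAUD, 2)   # 106.25 Gb/s per lane with PAM4
print(pam4, 8 * pam4)            # 8 PAM4 lanes -> an 800G interface (pre-FEC line rate)
```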

For troubleshooting: when you see CRC errors or FEC uncorrectable frames on an 800G link, the root cause is usually physical layer — fiber contamination, connector issues, or exceeding the optical power budget. Clean your connectors before opening a TAC case.

What Does This Mean for the CCIE Data Center Track?

The CCIE Data Center blueprint focuses on ACI, VXLAN EVPN, and Nexus platform architecture — all of which run on top of these optical interconnects. While the exam doesn’t test optical engineering, understanding the physical layer gives you:

Better Troubleshooting

When a VXLAN tunnel between leaf and spine fails, knowing whether it’s a control-plane issue (BGP EVPN) or a physical-layer issue (optical power, PAM4 signal integrity) cuts your troubleshooting time in half. Useful NX-OS commands for isolating physical-layer problems:

show interface transceiver detail
show interface counters errors
show logging | include CRC|FEC
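Once you’ve collected transceiver readings, a small script can flag weak links across the fabric. A hypothetical sketch — the dictionary of readings is an assumed input format (e.g. scraped from the commands above), not actual NX-OS output parsing, and the -10 dBm threshold is an example; real receive-power budgets vary per optic type:

```python
# Hypothetical sketch: flag interfaces whose Rx optical power is below a
# threshold, given pre-collected readings. Input format and threshold are
# assumptions for illustration.

RX_POWER_MIN_DBM = -10.0  # example floor; check the datasheet for your optic

def weak_links(rx_power_dbm: dict) -> list:
    """Return interface names whose receive power is below the threshold."""
    return [intf for intf, dbm in rx_power_dbm.items() if dbm < RX_POWER_MIN_DBM]

readings = {"Ethernet1/1": -2.3, "Ethernet1/2": -11.8, "Ethernet1/3": -4.0}
print(weak_links(readings))  # prints ['Ethernet1/2']
```

A link sitting near the bottom of its power budget may pass traffic while racking up FEC-corrected errors — exactly the marginal-link case worth catching before it becomes an outage.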

Smarter Fabric Design

When designing a leaf-spine fabric, your optics choice affects cost, power, and reach:

| Optic Type | Reach | Power | Use Case |
|---|---|---|---|
| 400G-DR4 | 500m | ~12W | Intra-row leaf-spine |
| 400G-FR4 | 2km | ~12W | Cross-building |
| 800G-DR8 | 500m | ~18W | AI spine uplinks |
| 800G-FR4 | 2km | ~16W | DCI short-haul |

Choosing DR4 vs FR4 at each tier of the fabric is a design decision that affects your power budget, cabling infrastructure, and failure domain — exactly the kind of architectural thinking CCIE candidates need.
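The power-budget side of that decision is easy to quantify from the per-module wattages above. A sketch (the pod composition is a hypothetical example, and the wattages are the approximate figures from the table):

```python
# Sketch: optics power budget for a fabric tier, using the approximate
# per-module wattages from the table above.

OPTIC_W = {"400G-DR4": 12, "400G-FR4": 12, "800G-DR8": 18, "800G-FR4": 16}

def tier_power_w(bill_of_materials: dict) -> int:
    """Total optics power for a {optic_type: module_count} mapping."""
    return sum(OPTIC_W[optic] * count for optic, count in bill_of_materials.items())

# Hypothetical pod: 512 x 800G-DR8 spine uplinks + 64 x 800G-FR4 DCI links
print(tier_power_w({"800G-DR8": 512, "800G-FR4": 64}))  # prints 10240 (watts)
```

Ten kilowatts of optics in a single pod is why the CPO power numbers earlier in this article get hyperscalers’ attention.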

Career Positioning

As we noted in our Broadcom AI chip market analysis, the data center semiconductor TAM is approaching $94B by 2028. Engineers who understand the full stack — from silicon photonics to VXLAN EVPN — are the architects hyperscalers and enterprises are competing to hire.

Frequently Asked Questions

What is silicon photonics and why does it matter for data centers?

Silicon photonics converts electrical signals to light on a standard silicon chip, enabling optical transceivers to be manufactured using CMOS processes. This reduces cost, increases density, and enables co-packaged optics — placing the optical engine directly on the switch ASIC for massive power and latency savings.

What is STMicro’s PIC100 platform?

PIC100 is STMicroelectronics’ silicon photonics manufacturing platform, now in high-volume 300mm wafer production. It supports 800Gbps and 1.6Tbps optical interconnects for AI data center deployments. STMicro plans to quadruple production capacity by 2027.

What is co-packaged optics (CPO) and when will it be deployed?

CPO places the optical transceiver engine directly inside or adjacent to the switch ASIC package, eliminating the front-panel pluggable module. NVIDIA reports CPO reduces 1.6T link power from 30W to 9W. Industry analysts expect large-scale CPO deployments between 2028 and 2030.

How does the 800G to 1.6T transition affect spine-leaf fabric design?

Higher per-port bandwidth means fewer uplinks needed for the same bisection bandwidth, enabling higher-radix switches with more server-facing ports. A 51.2Tbps switch with 64×800G ports offers the same bandwidth as 128×400G — but in half the physical connections, reducing cabling complexity.

Do CCIE candidates need to understand silicon photonics?

Not at the manufacturing level, but understanding optical layer basics — transceiver types, reach budgets, PAM4 signaling, and the CPO vs. pluggable tradeoff — directly improves your fabric design and troubleshooting skills. These are the kinds of architectural decisions that separate CCIE-level engineers from CCNP-level ones.


The physical layer beneath your VXLAN EVPN fabric is undergoing its biggest transformation in a decade. Silicon photonics and co-packaged optics will reshape how data centers are built — and network engineers who understand both the optical and protocol layers will be the architects who design them.

Ready to fast-track your CCIE journey? Contact us on Telegram @phil66xx for a free assessment.