Samsung and AMD have officially expanded their strategic partnership beyond the Radio Access Network into 5G Core, private networks, and edge AI — marking a pivotal shift from lab verification to commercial deployment. Announced at MWC 2026 in Barcelona, this collaboration now puts AMD EPYC processors at the heart of Samsung’s entire telecom software stack, delivering commercial-grade AI-powered vRAN performance without dedicated hardware accelerators. For service provider engineers, this signals that cloud-native, software-defined architecture is no longer a future roadmap item — it is the production reality operators are deploying today.

Key Takeaway: Samsung’s AI-RAN on AMD EPYC achieved commercial-grade multi-cell performance without hardware accelerators, proving that x86-based vRAN is production-ready and expanding the partnership into 5G Core and edge AI platforms that will reshape how service providers build and operate networks.

What Exactly Did Samsung and AMD Announce at MWC 2026?

Samsung Electronics announced new breakthroughs with AMD across its entire network portfolio — 5G Core, virtualized RAN (vRAN), and private networks — on March 2, 2026. According to Samsung’s official press release (2026), this achievement “marks a key milestone for both companies that move forward from the joint verification stage to commercial deployments.” The partnership is no longer a proof-of-concept; Videotron, one of Canada’s major telecommunications operators, has already selected Samsung to deploy 5G Non-Standalone (NSA) and 4G LTE Core gateway solutions powered by AMD EPYC 9005 Series CPUs.

At MWC 2026, Samsung demonstrated AI-RAN running on AMD EPYC processors with successful multi-cell testing results from Samsung’s R&D Lab. This matters because multi-cell testing validates scalable deployments — single-cell demos are table stakes, but multi-cell proves the architecture can handle real-world cell density and interference patterns. The key technical claim: commercial-grade performance using a fully virtualized software stack on AMD CPUs without additional accelerator cards.

| Announcement | Details | Significance |
|---|---|---|
| Videotron 5G Core | AMD EPYC 9005 Series powering 5G NSA + 4G LTE Core gateway | First commercial core deployment on AMD silicon |
| AI-RAN Multi-Cell | vRAN on AMD EPYC with no hardware accelerators | Proves software-only approach scales beyond single cell |
| Network in a Server (NIS) | Edge AI platform on AMD CPU for enterprise private networks | Consolidates RAN + AI onto single COTS server |
| Open Telco AI | AMD Instinct GPUs training telco-specific AI models with AT&T | Industry-wide push for telecom-grade AI |

Samsung’s Keunchul Hwang, EVP and Head of Technology Strategy Group, stated that the collaboration “emphasizes what’s possible when AI-native, open and virtualized architectures meet advanced compute innovations.” AMD’s Derek Dicker, Corporate VP of the Enterprise Business Group, confirmed that “latest generation EPYC processors deliver the performance, efficiency and scalability that network operators and enterprises need.”

Why Is This Partnership Expanding Beyond the RAN Now?

The timing reflects a broader industry shift: operators are moving from isolated vRAN pilots to full-stack cloud-native deployments spanning RAN, core, and edge simultaneously. According to Light Reading’s Omdia analyst Gabriel Brown (2026), Samsung and AMD’s collaboration now delivers “tangible benefits as they move from verification into deployment,” demonstrating that software-based solutions on AMD CPUs achieve commercial-grade performance without hardware accelerators.

Three converging forces are driving this expansion:

  1. Economic pressure on operators. Telecom capital expenditure is under scrutiny. Running RAN, core, and edge AI on common x86 infrastructure (AMD EPYC) eliminates separate hardware platforms, reducing procurement complexity and maintenance costs. Samsung’s January 2026 milestone — completing its first commercial vRAN call on a single HPE COTS server running Wind River cloud platform — proved consolidation works in production.

  2. AI workload demands at the edge. Operators want to monetize 5G infrastructure with AI services, not just connectivity. Samsung’s Network in a Server (NIS) runs video analytics, sensor detection, and Integrated Sensing and Communication (ISAC) workloads alongside RAN functions — all on a single AMD-powered server. A major Japanese operator has already validated these use cases in real-world environments.

  3. Open ecosystem momentum. According to Grand View Research (2026), the Open RAN market reached $6.53 billion in 2025 and is projected to hit $45.09 billion by 2033 at a 26.8% CAGR. Samsung’s open ecosystem approach — supporting multiple chipset partners — aligns with operator demand for vendor diversity. Orange Group expanded its vRAN and Open RAN deployment with Samsung across Europe in February 2026, moving from pilot to field deployment in France.

For engineers working in service provider environments, this shift means the infrastructure you manage is increasingly software on general-purpose compute, not proprietary ASIC-based platforms.

How Does Samsung’s AI-RAN Architecture Actually Work?

Samsung’s AI-RAN architecture runs virtualized RAN functions as containerized workloads on AMD EPYC processors, using a cloud-native software stack that eliminates the need for dedicated Layer 1 (L1) hardware accelerators. This is the critical technical differentiator — most competing vRAN implementations still rely on FPGAs or custom ASICs for compute-intensive L1 processing (FFT, channel estimation, LDPC encoding/decoding).

[Figure: Samsung AMD AI-RAN Technical Architecture]

The Software Stack

The architecture layers look like this from bottom to top:

  1. Hardware layer: AMD EPYC 9005 Series (Zen 5 architecture) or EPYC 8005 for edge-optimized deployments. The 8005 specifically targets telco edge with support for wide thermal operating ranges and NEBS-compliant form factors.

  2. Cloud platform: Wind River or equivalent Kubernetes-based container orchestration. Samsung’s first commercial vRAN call used Wind River’s cloud platform on HPE ProLiant servers.

  3. Virtualized network functions: Samsung’s vRAN software handles L1/L2/L3 processing entirely in software. The AI component uses the same AMD CPU for real-time RAN optimization — beam management, scheduling, interference mitigation — without offloading to separate AI accelerators.

  4. AI overlay: Samsung’s NIS platform enables additional AI workloads (video analytics, ISAC, anomaly detection) to run alongside RAN functions on the same physical server.

Why No Accelerator Matters

Traditional vRAN deployments use Intel FlexRAN with FPGA assist or NVIDIA’s Aerial platform with GPU acceleration. Samsung’s approach eliminates this dependency entirely. According to Samsung (2026), this “underscores Samsung’s ongoing shift toward software-driven architectures designed to reduce hardware dependency and provide operators with greater choice and adaptability.”

The implication for CCIE SP engineers: troubleshooting vRAN performance issues will increasingly involve Linux kernel tuning, DPDK configuration, CPU pinning, and NUMA topology optimization — not hardware accelerator firmware updates.
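To make that concrete, here is a minimal sketch of the kind of host-level inspection this troubleshooting involves, assuming a Linux server (the function names are illustrative, not part of any Samsung or AMD tooling). It reads the NUMA topology from sysfs and pins the current process to specific cores, which is the same mechanism (`sched_setaffinity`) that tools like `taskset` and CPU-manager policies use under vRAN workloads:

```python
import os
from pathlib import Path

def numa_topology():
    """Map each NUMA node to its local CPU list, read from sysfs (Linux)."""
    nodes = {}
    base = Path("/sys/devices/system/node")
    for entry in sorted(base.glob("node[0-9]*")):
        # e.g. "0-31,64-95" on a dual-CCD EPYC part
        nodes[entry.name] = (entry / "cpulist").read_text().strip()
    return nodes

def pin_to_cpus(cpus):
    """Pin the current process to a set of CPU IDs (like `taskset -cp`)."""
    os.sched_setaffinity(0, set(cpus))
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    print("NUMA nodes:", numa_topology())
    print("Pinned to:", pin_to_cpus([0]))
```

In a real deployment the pinning is done declaratively (kernel `isolcpus`, Kubernetes CPU manager), but verifying it from the host with exactly these sysfs paths is a routine first step when an L1 thread misses its deadline budget.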

The EPYC 8005 Edge Play

According to Network World (2026), AMD’s EPYC 8005 processors are “designed for edge environments a telco will face” with high compute density for vRAN workloads, support for wide thermal operating ranges enabling OEMs to certify NEBS-compliant platforms, and small-form-factor system support for outdoor and ruggedized deployments. This is AMD’s direct answer to Intel’s Granite Rapids Xeon 6, which is also making inroads with Samsung’s competitors.

What Is Network in a Server and Why Should SP Engineers Care?

Network in a Server (NIS) is Samsung’s fully virtualized next-generation Edge AI platform that consolidates multiple network functions and AI workloads onto a single commercial off-the-shelf (COTS) server powered by AMD EPYC CPUs. According to Samsung (2026), NIS helps “operators easily incorporate AI into their networks, reduce operational complexity and unlock new opportunities.”

[Figure: Samsung AMD Partnership Industry Impact]

At MWC 2026, Samsung demonstrated NIS with use cases validated by a major Japanese operator in real-world environments:

| Use Case | Technology | SP Engineering Relevance |
|---|---|---|
| Video analytics | AI inference on edge compute | QoS policy for real-time video streams |
| Sensor/radar detection | ISAC (Integrated Sensing and Communication) | New RAN signaling protocols and interference management |
| Hyperconnectivity | Next-gen device density | Capacity planning for massive IoT deployments |

For service provider engineers, NIS represents a fundamental shift in how edge infrastructure is designed. Instead of dedicated appliances for each function — a separate RAN unit, a separate MEC server, a separate AI inference box — everything runs as containerized workloads on a single platform. This is the same architectural pattern that drove the transition from hardware-based to software-based MPLS in the core, now extending to the RAN edge.

The operational model changes dramatically. Instead of managing separate hardware lifecycle for RAN, compute, and AI, operators manage a unified Kubernetes cluster. Network function upgrades become container image pulls. Scaling is horizontal — add another COTS server — rather than forklift upgrades.

How Does the Open Telco AI Initiative Fit In?

AMD is a founding participant in Open Telco AI, a GSMA-led global initiative launched at MWC 2026 to build telecom-specific AI models that general-purpose LLMs cannot match. According to Network World (2026), the initiative “addresses the limitations of general-purpose AI models like large language models when applied to telecom-specific tasks such as network operations, standards interpretation, and troubleshooting.”

The collaboration structure:

  • AT&T contributes Open Telco models (training data from real operator networks)
  • AMD provides compute via Instinct GPUs running the ROCm open software stack
  • TensorWave offers hosting infrastructure for model training
  • AMD Enterprise AI Suite serves as the production deployment layer with Kubernetes-native container orchestration

This is significant because telecom AI isn’t a generic chatbot problem. Network fault correlation, traffic prediction, anomaly detection, and automated remediation require models trained on actual telecom data — BGP state changes, MPLS label distributions, RAN KPI time series, and 3GPP signaling traces. Open Telco AI is building these purpose-built models on AMD’s GPU infrastructure.

For CCIE SP candidates, this means understanding how AI/ML integrates with traditional SP protocols is becoming a differentiator. The MWC 2026 AI-native 6G discussions we covered earlier this month laid out the roadmap; the Samsung-AMD-GSMA collaboration is the execution.

What Does This Mean for the Competitive Landscape?

Samsung’s AMD partnership directly parallels Nokia’s relationship with NVIDIA in the vRAN space. According to Network World (2026), “the partnership with Samsung is similar to the one Nokia has with Nvidia.” This creates a clear two-camp dynamic in the telecom infrastructure market:

| Vendor Alliance | RAN Silicon | AI Acceleration | Core Platform |
|---|---|---|---|
| Samsung + AMD | AMD EPYC (CPU-only vRAN) | AMD Instinct GPUs (Open Telco AI) | Cloud-native on AMD EPYC 9005 |
| Nokia + NVIDIA | NVIDIA Grace (ARM-based) | NVIDIA Aerial + GPU | NVIDIA-accelerated stack |
| Ericsson | Intel Xeon / custom ASIC | Mixed | Traditional + cloud-native |

Samsung’s approach is unique because it achieves commercial-grade vRAN without any accelerator, relying purely on AMD CPU performance. Nokia’s NVIDIA partnership leans heavily on GPU acceleration for L1 processing. Ericsson maintains a hybrid approach with both custom silicon and x86 options.

For operators, this competition drives vendor diversity — exactly what Open RAN was designed to enable. For engineers, it means the skillset varies depending on which vendor stack your operator deploys. Samsung/AMD environments will demand deep Linux, container orchestration, and x86 performance tuning skills. Nokia/NVIDIA environments will require GPU programming awareness and NVIDIA’s CUDA/Aerial SDK knowledge.

Intel’s position is notable: its Granite Rapids Xeon 6 generation is also pushing into vRAN, and Samsung’s multi-chipset partner strategy suggests it isn’t exclusively locked to AMD. The SP career landscape is increasingly defined by which vendor ecosystem you specialize in.

What Skills Should CCIE SP Engineers Develop Now?

The Samsung-AMD expansion signals that three skill clusters are becoming essential for service provider engineers working with modern 5G infrastructure, beyond the traditional MPLS, BGP, and IS-IS foundation that CCIE SP certification covers.

1. Cloud-Native Network Function Management

Every Samsung product announced at MWC 2026 — vRAN, 5G Core, NIS — runs as containerized workloads on Kubernetes. Engineers need to understand:

  • Kubernetes orchestration for network functions (not just IT workloads)
  • Helm charts and operators for CNF lifecycle management
  • Service mesh (Istio/Envoy) for inter-CNF communication
  • Container networking (Multus, SR-IOV, DPDK) for high-performance data plane
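As a concrete example of that last bullet, attaching an SR-IOV virtual function to a CNF pod via Multus typically looks like the following. This is a hedged sketch, not Samsung's actual configuration: the attachment name `vran-midhaul` and the resource pool `intel.com/sriov_vfio` are placeholders that vary by cluster and NIC vendor, while the `NetworkAttachmentDefinition` CRD and `k8s.v1.cni.cncf.io/resourceName` annotation are standard Multus constructs:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: vran-midhaul                                    # hypothetical name
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_vfio  # placeholder VF pool
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "vlan": 100
  }'
```

A CNF pod then requests this secondary interface with the annotation `k8s.v1.cni.cncf.io/networks: vran-midhaul`, keeping the default CNI for management traffic while the data plane bypasses the kernel stack.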

2. x86 Performance Engineering for Telecom

Samsung’s accelerator-free approach means CPU performance tuning is critical:

  • CPU pinning and isolation (isolcpus, irqbalance) for real-time L1 processing
  • NUMA topology awareness for memory-local packet processing
  • DPDK and SR-IOV configuration for line-rate packet handling
  • Huge pages allocation and management for vRAN memory requirements
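Much of this tuning can be audited from `/proc` and `/sys` before touching any vendor software. The sketch below (assuming a Linux host; the helper names are illustrative) checks two of the items above, huge page provisioning and kernel CPU isolation:

```python
from pathlib import Path

def hugepage_summary():
    """Collect the Huge* fields from /proc/meminfo (Linux)."""
    fields = {}
    for line in Path("/proc/meminfo").read_text().splitlines():
        key, _, value = line.partition(":")
        if key.startswith("Huge"):
            fields[key] = value.strip()   # e.g. "Hugepagesize: 2048 kB"
    return fields

def isolated_cpus():
    """Return the kernel's isolated CPU list, or '' when isolcpus is unset."""
    path = Path("/sys/devices/system/cpu/isolated")
    return path.read_text().strip() if path.exists() else ""

if __name__ == "__main__":
    print(hugepage_summary())
    print("isolated cpus:", isolated_cpus() or "(none)")
```

On a tuned vRAN host you would expect a non-empty isolated CPU list (cores reserved for L1 threads) and `HugePages_Total` sized for the DPDK memory pools; on a stock server both typically come back empty or zero.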

3. AI/ML Operations for Network Automation

The Open Telco AI initiative and Samsung’s NIS platform both require:

  • Understanding AI inference at the edge (what runs where, resource allocation)
  • Telco-specific data pipelines (KPI collection, event correlation)
  • Integration with existing network automation workflows (Ansible, Terraform)
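To ground the anomaly-detection bullet, here is a toy sketch of the simplest edge-inference pattern on a RAN KPI time series: flag any sample that deviates sharply from its trailing window. Real NIS-class models are far richer; this only illustrates the data-pipeline shape (the function and values are invented for illustration):

```python
from statistics import mean, stdev

def zscore_anomalies(samples, window=12, threshold=3.0):
    """Return indices where a KPI sample deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady PRB-utilization-style readings with one spike (e.g. interference)
kpi = [50, 51, 49, 50, 52, 50, 49, 51, 50, 50, 51, 49, 95, 50]
print(zscore_anomalies(kpi))  # -> [12]: only the spike is flagged
```

The operational skill is less the statistics than the plumbing: collecting KPIs at the right granularity, correlating them with events, and wiring the verdicts into automation tools such as Ansible.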

According to the CCIE SP salary data we published, engineers combining traditional SP skills with cloud-native competency command the highest premiums. The Samsung-AMD trajectory makes this dual skillset even more valuable.

How Big Is the Open RAN Market Opportunity?

The Open RAN market is growing rapidly, creating sustained demand for engineers who understand disaggregated, software-defined RAN architecture. According to Grand View Research (2026), the global Open RAN market was valued at $6.53 billion in 2025 and is projected to reach $45.09 billion by 2033, growing at a compound annual growth rate of 26.8%.

| Metric | Value | Source |
|---|---|---|
| Open RAN market size (2025) | $6.53 billion | Grand View Research (2026) |
| Projected market size (2033) | $45.09 billion | Grand View Research (2026) |
| CAGR (2026-2033) | 26.8% | Grand View Research (2026) |
| Samsung vRAN commercial deployments | Active (Videotron, Japanese operator, Orange) | Samsung (2026) |
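As a quick sanity check, the compound annual growth rate implied by the two endpoints can be recomputed directly; the small gap versus the reported 26.8% reflects rounding in the published figures and Grand View's 2026 base year:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

# $6.53B (2025) -> $45.09B (2033), i.e. 8 years of compounding
implied = cagr(6.53, 45.09, 8)
print(f"{implied:.1%}")  # -> 27.3%, in line with the reported ~26.8% CAGR
```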

Samsung’s deployment momentum illustrates this growth. In the first quarter of 2026 alone:

  • Videotron (Canada): 5G NSA + 4G LTE Core on AMD EPYC 9005
  • Orange Group (Europe): Expanded vRAN and Open RAN from pilot to production in France
  • Major Japanese operator: NIS edge AI use cases validated in live networks
  • First commercial vRAN call: Completed January 2026 on single HPE COTS server

According to GSMA Intelligence (2026), performance has been the primary barrier holding back Open RAN adoption — but Samsung’s accelerator-free commercial-grade results directly address this concern. The Huawei 2T optical wavelength announcement at the same MWC 2026 show underscores how much innovation is converging in the service provider space simultaneously.

What Does This Mean for Network Architecture Long-Term?

The Samsung-AMD partnership signals that the service provider network is converging onto a unified compute platform where RAN, core, and AI workloads share the same x86 infrastructure managed through cloud-native orchestration. Samsung’s January 2026 commercial vRAN call consolidated multiple RAN and network functions onto a single COTS server — this is the template for how operators will build networks in the 5G Advanced and 6G era.

Three architectural shifts will accelerate:

  1. Compute-centric network design. Network planning moves from “which boxes go where” to “how much compute capacity at each site.” Edge, regional, and central DCs all run the same AMD EPYC platform with different workload mixes.

  2. AI-native operations. Samsung’s ISAC demonstrations and Open Telco AI models indicate that AI will be embedded in the network fabric, not bolted on as a separate management layer. Autonomous network concepts move from L2 (conditional automation) toward L4 (high automation).

  3. Hardware vendor diversification. Samsung’s multi-chipset partner strategy and the Open RAN disaggregation model mean operators can mix and match silicon vendors. This creates a competitive dynamic that benefits engineers — more vendor options mean more roles for people who understand integration and interoperability.

Frequently Asked Questions

What is Samsung’s AI-RAN and how does it work with AMD processors?

Samsung’s AI-RAN is a virtualized radio access network that runs AI and radio functions on the same AMD EPYC processor without dedicated hardware accelerators. At MWC 2026, Samsung demonstrated successful multi-cell testing results from its R&D Lab, achieving commercial-grade performance on standard COTS servers. The architecture uses a fully virtualized software stack where L1 processing — traditionally handled by FPGAs or ASICs — runs entirely on AMD’s Zen 5 cores.

Why did Samsung choose AMD EPYC for its 5G network products?

AMD EPYC processors deliver the compute density, power efficiency, and thermal flexibility that telecom edge deployments require. The EPYC 9005 Series powers Samsung’s 5G Core gateway deployed by Videotron in Canada, while the EPYC 8005 targets edge environments with NEBS compliance and wide thermal operating ranges. According to AMD’s Derek Dicker (2026), EPYC processors deliver “the performance, efficiency and scalability that network operators and enterprises need.”

How does the Samsung-AMD partnership affect CCIE Service Provider certification?

The shift to software-defined, cloud-native telecom architecture expands the skillset CCIE SP candidates need. Beyond traditional MPLS and Segment Routing, understanding containerized network functions, Kubernetes orchestration, and CPU performance tuning for vRAN becomes increasingly relevant. The CCIE SP lab exam still focuses on IOS-XR and traditional protocols, but employers increasingly value candidates who bridge legacy and cloud-native skills.

What is Samsung’s Network in a Server (NIS)?

NIS is a fully virtualized edge AI platform running on AMD CPUs that consolidates multiple network functions onto a single COTS server. Samsung demonstrated NIS at MWC 2026 with use cases validated by a major Japanese operator, including video analytics, ISAC-based sensor and radar detection, and hyperconnectivity for next-generation devices. It represents the convergence of RAN, MEC, and AI inference into a single platform.

What is the projected market size for Open RAN by 2033?

According to Grand View Research (2026), the global Open RAN market is projected to reach $45.09 billion by 2033, growing at a 26.8% CAGR from its $6.53 billion valuation in 2025. Samsung is among the vendors leading deployments, with active commercial rollouts at Videotron (Canada), Orange (Europe), and operators in Japan.


Ready to fast-track your CCIE journey? Contact us on Telegram @firstpasslab for a free assessment.