SoftBank just deployed AI-driven autonomous routing on its commercial mobile network — and the results prove that intent-based networking isn’t just a blueprint concept anymore. Their “Autonomous Thinking Distributed Core Routing” technology, announced at MWC Barcelona 2026 on March 11, uses AI agents paired with the CAMARA Quality on Demand (QoD) API to dynamically select optimal network paths based on real-time traffic analysis. In field trials, it cut average latency from 41.9ms to 27.4ms with 99.7% traffic control accuracy.
Key Takeaway: SoftBank’s production deployment is the first real proof that AI-driven intent-based networking works at carrier scale — and every core concept maps directly to the CCIE Service Provider blueprint.
What Exactly Did SoftBank Build?
SoftBank’s Autonomous Thinking Distributed Core Routing is a system where AI agents continuously analyze communication conditions and autonomously switch between two routing paradigms in real time. When an application needs raw throughput for bulk data transfer, traffic flows through the conventional centralized mobile core via User Plane Function (UPF) nodes. When that same application suddenly needs low latency — say, a cloud gaming session switches from loading assets to real-time gameplay — the AI agent detects the shift and reroutes traffic through SRv6 MUP (Segment Routing over IPv6 Mobile User Plane) for the shortest possible path.
The critical piece is the decision layer. According to SoftBank’s press release (March 2026), the AI agent uses the CAMARA QoD API to understand what performance parameters each application requires. It doesn’t just react to congestion — it anticipates latency requirements based on traffic characteristics and proactively selects the optimal path before quality degrades.
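As a rough mental model, the decision layer maps per-flow quality requirements (declared through a QoD-style API) to one of the two forwarding modes. The class name, threshold logic, and path labels below are hypothetical illustrations, not SoftBank's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical model of the AI agent's routing decision.
# Names and thresholds are illustrative, not SoftBank's logic.

@dataclass
class FlowRequirements:
    max_latency_ms: float   # declared via a QoD-style API
    bulk_transfer: bool     # throughput-oriented traffic

def select_path(req: FlowRequirements, core_latency_ms: float) -> str:
    """Pick the SRv6 MUP shortest path when the centralized core
    cannot meet the declared latency budget."""
    if req.bulk_transfer:
        return "centralized-upf"           # efficiency-optimized
    if core_latency_ms > req.max_latency_ms:
        return "srv6-mup"                  # latency-optimized shortest path
    return "centralized-upf"

# Example: cloud gaming session declaring a sub-40ms budget
gaming = FlowRequirements(max_latency_ms=40.0, bulk_transfer=False)
print(select_path(gaming, core_latency_ms=41.9))  # -> srv6-mup
```

The key point the sketch captures: the trigger is the application's declared intent, not observed congestion alone.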
Here’s the architecture breakdown:
| Component | Role | Protocol/Standard |
|---|---|---|
| AI Agent | Analyzes traffic patterns, selects routing mode | Proprietary ML model |
| CAMARA QoD API | Standardized interface for quality requirements | CAMARA Project (Linux Foundation) |
| Centralized UPF | Traditional mobile core routing for efficiency | 3GPP 5G Core |
| SRv6 MUP | Shortest-path distributed routing for low latency | draft-ietf-dmm-srv6-mobile-uplane |
| Broadcom Jericho2 | Hardware forwarding for SRv6 | Line-rate silicon |
| ArcOS (Arrcus) | Network operating system for SRv6 MUP | Commercial NOS |
This isn’t a lab demo. SoftBank deployed SRv6 MUP on its commercial 5G network in December 2025, becoming the first carrier worldwide to do so, according to their December 2025 press release. The Autonomous Thinking Distributed Core Routing layer adds the AI decision-making on top of that existing SRv6 MUP infrastructure.
Why the CAMARA QoD API Changes Everything for SP Engineers
The CAMARA Project, hosted under the Linux Foundation, is building standardized network APIs that abstract telecom complexity for developers. The Quality on Demand API is arguably the most impactful one for service provider engineers. According to GSMA (2026), 73 operator groups representing 285 networks and almost 80% of mobile subscribers worldwide have committed to GSMA Open Gateway, which includes QoD capabilities.
Here’s why this matters more than traditional QoS:
Traditional QoS (what you study for CCIE SP today): You configure static DSCP markings, queuing policies, and traffic shaping on a per-interface or per-class basis. The network enforces pre-defined policies regardless of what the application actually needs at any given moment.
QoD API approach (where the industry is heading): The application — or an AI agent acting on its behalf — tells the network what performance parameters it needs right now. The network dynamically adjusts. No static policy configurations. No manual intervention.
According to Telco Magazine (2026), the QoD API “allows an AI agent or developer to request specific performance parameters from the network, such as stable latency and jitter reduction.” T-Mobile already offers QoD through its DevEdge platform. CableLabs is developing intent-based QoD extensions that move beyond fixed profiles toward dynamic, real-time quality negotiation.
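To make the contrast concrete, here is a sketch of what a QoD session request body looks like. The structure follows the spirit of the published CAMARA QoD specification, but the operator URL, addresses, and profile name are placeholders, and real deployments expose their own profile catalogs:

```python
import json

# Sketch of a CAMARA Quality-on-Demand session request payload.
# The endpoint URL, IP addresses, and profile name are placeholders;
# consult an operator's CAMARA QoD documentation for exact values.

QOD_ENDPOINT = "https://api.example-operator.com/quality-on-demand/v0/sessions"

payload = {
    "device": {"ipv4Address": {"publicAddress": "198.51.100.10"}},
    "applicationServer": {"ipv4Address": "203.0.113.5"},
    "qosProfile": "QOS_L",   # low-latency profile from the operator's catalog
    "duration": 3600,        # seconds the requested quality should hold
}

request_body = json.dumps(payload)
print(request_body)
```

Compare this with a traditional MQC policy: the application names the outcome it needs, and the network decides how to deliver it, instead of the engineer pre-wiring classes and queues per interface.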
For CCIE SP candidates, this is a critical evolution to understand. The fundamentals of QoS — queuing theory, scheduling algorithms, congestion management — don’t go away. But the control plane is shifting from CLI-configured policies to API-driven intent. SoftBank’s deployment proves this transition is happening now, not five years from now.
How SRv6 MUP Replaces Traditional Mobile Core Routing
To appreciate what SoftBank accomplished, you need to understand how conventional mobile networks route traffic versus SRv6 MUP. If you’ve studied segment routing versus MPLS TE, the concepts will be familiar.
Conventional Mobile Core (GTP-U)
In a standard 4G/5G network, user traffic is encapsulated in GTP-U tunnels from the base station (eNodeB in 4G, gNodeB in 5G) through one or more core gateways (S-GW/P-GW in 4G, UPF in 5G) to the data network. Every packet traverses a centralized core path, even if the destination server is physically close to the radio tower. Latency is the cost of centralization.
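Beyond the detour through the core, the tunnel itself carries fixed per-packet cost. With an IPv4 outer header, the minimum GTP-U encapsulation overhead works out to:

```python
# Per-packet encapsulation overhead in the conventional GTP-U core.
# IPv4 outer header assumed; an IPv6 outer header adds 40 bytes instead
# of 20, and GTP-U extension headers add further bytes on top.
OUTER_IPV4 = 20    # outer IPv4 header, bytes
OUTER_UDP = 8      # outer UDP header (GTP-U runs over UDP/2152)
GTP_U_HEADER = 8   # mandatory GTP-U header

overhead = OUTER_IPV4 + OUTER_UDP + GTP_U_HEADER
print(overhead)    # -> 36 bytes added to every user packet
```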
SRv6 MUP Architecture
SRv6 MUP eliminates GTP-U tunneling entirely. Instead, it encodes mobile user session information directly into SRv6 segment identifiers. Traffic can take the shortest path from radio to destination without passing through centralized UPF nodes. According to SoftBank’s MPLS World Congress presentation (2022), the architecture requires “no change to 5G” — it plugs into the existing 3GPP framework.
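The core idea — session state lives in the SID itself rather than in a tunnel — can be sketched as bit packing. The 48/16/64 locator/function/argument split below is an assumption for illustration, not the encoding defined in draft-ietf-dmm-srv6-mobile-uplane:

```python
import ipaddress

# Illustrative only: pack a mobile session identifier into the
# low-order bits of an SRv6 SID. The locator/function/argument
# split is deployment-specific; this layout is an assumption.

def session_to_sid(locator: str, function: int, session_id: int) -> str:
    loc = int(ipaddress.IPv6Address(locator))   # locator occupies the high bits
    sid = loc | (function << 64) | session_id   # function + session in low bits
    return str(ipaddress.IPv6Address(sid))

print(session_to_sid("fd00:db8:1::", 0x0040, 0x1234))  # -> fd00:db8:1:40::1234
```

Because the SID is an ordinary routable IPv6 address, any transit router forwards it with plain longest-prefix matching — no tunnel state, no GTP-U awareness required in the middle of the network.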
The performance difference is significant. From SoftBank’s JANOG57 field trial (February 2026):
| Metric | Conventional Core | SRv6 MUP + AI Routing | Improvement |
|---|---|---|---|
| Average Latency | 41.9ms | 27.4ms | 35% reduction |
| Cloud Gaming SLA (<40ms) | Marginal pass | Comfortable margin | Stable compliance |
| AI Traffic Control Accuracy | N/A | 99.7% | — |
That 35% latency reduction comes entirely from path optimization — no hardware upgrades, no spectrum changes. The AI agent’s 99.7% accuracy means it correctly identified traffic type and selected the appropriate routing mode in virtually every case during the trial.
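The headline 35% figure is the rounded relative reduction from the trial numbers:

```python
# Sanity check of the JANOG57 trial figures quoted above
before_ms, after_ms = 41.9, 27.4
reduction = (before_ms - after_ms) / before_ms
print(f"{reduction:.1%}")  # prints 34.6%, reported rounded as 35%
```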
Intent-Based Networking: From Blueprint to Production
The CCIE Service Provider blueprint includes intent-based networking under the programmability and automation sections. Until SoftBank’s announcement, most real-world examples were vendor demos or controlled PoCs. This deployment changes that narrative.
Here’s how SoftBank’s implementation maps to intent-based networking principles:
- Intent Declaration: Applications express quality requirements through the CAMARA QoD API (e.g., “I need sub-40ms latency for this session”)
- Translation: The AI agent translates intent into network-level decisions (centralized UPF vs. SRv6 MUP path)
- Automated Fulfillment: Routing changes happen autonomously — no human operator configures anything per-session
- Continuous Verification: The AI agent monitors whether the selected path continues to meet the declared intent, re-routing if conditions change
This is textbook intent-based networking. And it’s running on a commercial carrier network serving real customers.
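The four stages above can be sketched as a toy closed loop. The path names, latency figures, and probe are simulated stand-ins, not SoftBank's controller:

```python
import random

# Toy closed-loop sketch of the four intent-based-networking stages.
# The telemetry probe and path set are simulated assumptions.
random.seed(0)  # make the simulated probe deterministic

def measure_latency(path: str) -> float:
    """Simulated telemetry probe (a real system would measure)."""
    return {"centralized-upf": 41.9, "srv6-mup": 27.4}[path] + random.uniform(-2, 2)

def fulfill_intent(max_latency_ms: float, current_path: str) -> str:
    """Translation + fulfillment: keep or switch paths to satisfy the intent."""
    if measure_latency(current_path) <= max_latency_ms:
        return current_path                  # verification passed, keep path
    return "srv6-mup" if current_path == "centralized-upf" else "centralized-upf"

# Intent declaration: sub-40ms for this session
path = "centralized-upf"
for _ in range(3):                           # continuous verification loop
    path = fulfill_intent(40.0, path)
print(path)  # -> srv6-mup
```

Even this toy loop shows the defining property: no step requires a human to touch a configuration; the declared intent drives every path change.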
At MWC 2026, SoftBank wasn’t alone in pushing AI-native networking. As we covered in our MWC 2026 recap, India’s Communications Minister Jyotiraditya Scindia described the industry entering the “IQ era” where AI transforms networks into “adaptive systems capable of real-time transactions, predictive maintenance, and intelligent resource allocation.” Multiple vendors at the Autonomous Network Summit converged on AI-enabled operations as the next SP operational model.
What This Means for CCIE SP Candidates
If you’re studying for CCIE Service Provider, SoftBank’s deployment validates that the blueprint topics you’re studying have direct operational relevance. Here’s the practical takeaway:
Skills That Map Directly
- Segment Routing (SRv6): SoftBank’s entire architecture depends on SRv6 MUP. Understanding SRv6 SID structures, network programming, and traffic engineering policies is foundational.
- QoS and Traffic Engineering: The AI agent is making the same decisions a human engineer would — choosing between efficiency-optimized and latency-optimized paths. Understanding queuing, scheduling, and congestion management helps you understand what the AI is optimizing.
- 5G Core Architecture: Knowing how UPF nodes, gNodeBs, and the N3/N6 interfaces work lets you understand why eliminating GTP-U tunneling matters.
- Network Automation and Programmability: The CAMARA QoD API is a REST API. Understanding API-driven network operations is no longer optional for SP engineers.
Skills to Add
- AI/ML Fundamentals: You don’t need to build the models, but you need to understand what traffic classification ML models do and how they integrate with routing decisions.
- CAMARA/Open Gateway APIs: Familiarize yourself with the CAMARA Project documentation. QoD is just the start — location, device status, and number verification APIs are also in the framework.
- SRv6 MUP Specifics: Read draft-ietf-dmm-srv6-mobile-uplane to understand how mobile session state maps to SRv6 SIDs.
The Bigger Picture: AI-Native Carrier Networks
SoftBank’s roadmap goes beyond routing optimization. According to their MWC 2026 keynote, the company is transitioning from a “traditional carrier that carries data” to an “AI infrastructure orchestrator.” Their Telco AI Cloud vision positions the network as a “central nervous system” that doesn’t just transport data — it understands and acts on it.
They’re also participating in the OCUDU initiative under the Linux Foundation for open, distributed AI-RAN infrastructure, and they demonstrated Autonomous Agentic AI-RAN (AgentRAN) at MWC 2026 in collaboration with Northeastern University, Keysight, and zTouch Networks. This system uses Large Telecom Models (LTMs) to autonomously manage radio access network operations.
For SP engineers, the trajectory is clear: manual CLI-based network management is being supplemented — not replaced, not yet — by AI agents that handle real-time optimization decisions. The engineers who understand both the underlying protocols (SRv6, BGP, MPLS) and the AI-driven automation layer will be the most valuable in this transition.
SoftBank plans to expand SRv6 MUP service areas throughout 2026 and evolve the AI agent to learn from more application traffic patterns. Their goal: application providers simply deploy low-latency apps on SoftBank’s MEC servers, and optimal network control happens autonomously.
Frequently Asked Questions
What is SoftBank’s Autonomous Thinking Distributed Core Routing?
It’s an AI-driven technology that uses AI agents and the CAMARA QoD API to analyze traffic characteristics in real time and autonomously select optimal network routes. It dynamically switches between centralized UPF paths and SRv6 MUP shortest-path routing depending on latency requirements. In field trials, it achieved 99.7% traffic control accuracy and reduced average latency by 35%.
What is the CAMARA QoD API and why does it matter for network engineers?
CAMARA Quality on Demand (QoD) is an open-source network API defined by the Linux Foundation’s CAMARA Project. It lets developers and AI agents request specific performance parameters like stable latency and throughput from the network programmatically. According to GSMA (2026), over 285 operator networks worldwide support the Open Gateway framework that includes QoD — making it a de facto industry standard SP engineers need to understand.
How does SRv6 MUP compare to traditional MPLS traffic engineering?
SRv6 MUP replaces GTP-U tunneling in mobile networks with SRv6 segment routing, eliminating centralized UPF dependencies. Unlike MPLS TE, which relies on RSVP-TE signaling and often centralized path computation via a PCE, SRv6 MUP enables distributed edge-based routing decisions encoded in IPv6 extension headers. SoftBank’s December 2025 commercial deployment proved it works at production scale with lower operational cost than traditional mobile core architectures.
Is intent-based networking tested on the CCIE SP exam?
The CCIE Service Provider v5 blueprint includes intent-based networking, programmability, and network automation. SoftBank’s deployment demonstrates exactly how these concepts work in production — AI agents translating application intent into automated routing decisions using standardized APIs and SRv6 forwarding.
What latency improvement did SoftBank achieve with AI routing?
In field trials reported at JANOG57 (February 2026) on SoftBank’s commercial mobile network, average latency dropped from 41.9ms to 27.4ms — a 35% reduction. This comfortably meets the sub-40ms requirement for cloud gaming, whereas the conventional core path was marginal.
Ready to fast-track your CCIE journey? Contact us on Telegram @phil66xx for a free assessment.