F5 upgraded to Gold Membership in the Cloud Native Computing Foundation (CNCF) on March 26, 2026, during KubeCon + CloudNativeCon Europe in Amsterdam. This move signals F5’s deepening investment in Kubernetes-native networking, open source application delivery, and AI inference infrastructure — areas where network engineers increasingly need hands-on expertise.
Key Takeaway: F5’s CNCF Gold Membership accelerates the convergence of traditional application delivery controllers with Kubernetes-native networking, making Gateway API, OpenTelemetry, and service mesh skills essential for network engineers in 2026 and beyond.
Why Did F5 Upgrade to CNCF Gold Membership?
F5’s upgrade from Silver to Gold Member reflects a strategic bet on cloud native infrastructure as the default platform for modern workloads, including AI inference. According to the CNCF 2025 Annual Cloud Native Survey, 98% of organizations have adopted cloud native technologies, with 82% running Kubernetes in production. F5 — the corporate sponsor of NGINX and a contributor to Kubernetes Ingress, Gateway API, and OpenTelemetry — is positioning itself at the center of this ecosystem.
“Expanding to Gold Membership in the CNCF reflects our dedication to fostering innovation and collaboration in the cloud native ecosystem,” said Kunal Anand, Chief Product Officer at F5, in the official CNCF announcement. “F5 holds a deep heritage of open source from its careful stewardship of the NGINX project.”
For network engineers, this matters because F5 hardware and software already dominate enterprise load balancing and application delivery. When the company that runs your BIG-IP fleet doubles down on Kubernetes, your skill requirements shift accordingly.

What Are CNCF Membership Tiers and Why Do They Matter?
CNCF membership operates on three tiers — Silver, Gold, and Platinum — each representing different levels of investment and influence over the cloud native ecosystem. Silver members join the community and access benefits. Gold members gain closer collaboration on key projects. Platinum members receive a guaranteed Governing Board seat with full voting rights and twice-yearly strategy reviews with CNCF leadership, according to the CNCF Membership Overview 2025.
| Tier | Key Benefits | Notable Members |
|---|---|---|
| Silver | Community access, event discounts, project participation | Startups, regional SIs, emerging vendors |
| Gold | Deeper project collaboration, enhanced visibility, co-marketing | F5, Viettel, mid-tier enterprise vendors |
| Platinum | Governing Board seat, voting rights, strategic reviews | Google, AWS, Microsoft, Red Hat, Cisco |
CNCF currently supports nearly 800 members across these tiers. The foundation hosts critical infrastructure projects including Kubernetes, Prometheus, Envoy, and OpenTelemetry — the same projects that increasingly define how network traffic flows in production environments.
For network engineers, tracking which vendors hold Platinum and Gold positions reveals where the industry is investing. When F5 upgrades, it signals that cloud networking and Kubernetes-native traffic management are becoming core enterprise requirements, not edge cases.
How Does F5’s NGINX Fit Into the Kubernetes Ecosystem?
F5 acquired NGINX in 2019 for approximately $670 million, gaining control of the world’s most widely deployed web server and reverse proxy. NGINX powers roughly 34% of all web servers globally, according to W3Techs (2026). Inside Kubernetes, the NGINX Ingress Controller provides Layer 7 load balancing, SSL/TLS termination, and content-based routing for containerized applications.
The Kubernetes ecosystem recently underwent a significant shift. In November 2025, the Kubernetes project announced the retirement of the community-maintained ingress-nginx controller, citing maintenance challenges and security concerns. This creates a clear opening for F5’s commercial NGINX Ingress Controller, which uses Custom Resource Definitions (CRDs) like VirtualServer and Policy instead of the annotation-heavy approach of the legacy project.
| Feature | Community ingress-nginx (Retired) | F5 NGINX Ingress Controller |
|---|---|---|
| Configuration | Annotations (nginx.ingress.kubernetes.io/) | CRDs (VirtualServer, Policy) |
| Protocol Support | HTTP/HTTPS primarily | HTTP, gRPC, TCP, UDP |
| WAF Integration | Limited | NGINX App Protect built-in |
| Gateway API | Partial support | Full Gateway API conformance |
| Commercial Support | Community only | F5 enterprise support |
Network engineers managing hybrid cloud environments should note this transition. If your organization runs ingress-nginx today, migration planning to either F5 NGINX Ingress Controller or another conformant implementation is now a near-term operational requirement.
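To make the CRD-based approach concrete, here is a minimal VirtualServer sketch for the F5 NGINX Ingress Controller. The hostname and service names are illustrative placeholders, and the BIG-IP mappings in the comments are an analogy, not an official equivalence:

```yaml
# Minimal VirtualServer for the F5 NGINX Ingress Controller.
# Hostname and service names are illustrative placeholders.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
  namespace: default
spec:
  host: app.example.com      # roughly analogous to a virtual server / VIP hostname
  upstreams:
    - name: backend          # roughly analogous to a BIG-IP pool
      service: webapp-svc    # Kubernetes Service backing the upstream
      port: 80
  routes:
    - path: /
      action:
        pass: backend        # forward matching traffic to the upstream
```

Applied with `kubectl apply -f`, a resource like this replaces a stack of vendor-specific annotations with one typed object, which is the main operational difference between the two controllers.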
What Is BIG-IP Next for Kubernetes?
BIG-IP Next for Kubernetes extends F5’s traditional ADC capabilities into container environments, providing a single control point for ingress, egress, security, and visibility. According to F5’s product documentation, it addresses a fundamental gap: Kubernetes’ native networking architecture does not inherently support multi-network integration or non-HTTP/HTTPS protocols.
BIG-IP Next for Kubernetes centralizes ingress and egress management, enforces network policies, and provides deep traffic visibility — capabilities that network engineers already manage on traditional BIG-IP hardware. The key difference is deployment context: these functions now run as Kubernetes-native workloads, managed through Kubernetes APIs rather than TMSH or the BIG-IP GUI.
For engineers preparing for CCIE Enterprise Infrastructure or managing multi-cloud networking, BIG-IP Next represents the bridge between legacy ADC knowledge and cloud native operations. Your understanding of virtual servers, pools, iRules, and health monitors translates directly — the orchestration layer changes from CLI/GUI to Kubernetes manifests and Helm charts.
Why Is the Kubernetes Gateway API a Big Deal for Network Engineers?
The Kubernetes Gateway API is the next-generation routing specification that replaces the legacy Ingress resource with a role-based, protocol-flexible, and extensible model. F5 is a key contributor to this specification, and their CNCF Gold Membership deepens their influence on its direction. The Gateway API introduces three core resource types: GatewayClass (infrastructure provider), Gateway (cluster operator), and HTTPRoute/TCPRoute/GRPCRoute (application developer).
| Concept | Legacy Ingress | Gateway API |
|---|---|---|
| Role Separation | Single resource, single owner | GatewayClass → Gateway → Route (multi-role) |
| Protocol Support | HTTP/HTTPS only | HTTP, TCP, UDP, gRPC, TLS passthrough |
| Cross-Namespace | Not supported | Built-in ReferenceGrant mechanism |
| Extensibility | Annotations (vendor-specific) | Policy attachment model (standardized) |
| Status Reporting | Minimal | Rich status conditions per resource |
For network engineers accustomed to configuring virtual servers, VIPs, and routing policies on traditional load balancers, the Gateway API provides a familiar mental model wrapped in Kubernetes-native semantics. The role separation mirrors how networking teams already operate — infrastructure teams define the gateway (analogous to provisioning a BIG-IP), while application teams define routes (analogous to creating pool members and virtual servers).
Engineers pursuing CCIE DevNet Expert or working in network automation roles should add Gateway API to their study list. It’s becoming the default API for all Layer 4-7 traffic management in Kubernetes.
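To make the role separation concrete, the sketch below shows a Gateway and an HTTPRoute working together across namespaces. All names are assumptions for illustration, including the `gatewayClassName` value (NGINX Gateway Fabric, for example, commonly registers a class named `nginx`):

```yaml
# Sketch of the Gateway API role split; all names are illustrative.
# Cluster operator: defines the shared entry point (like provisioning a BIG-IP).
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: nginx    # assumed class name from the infrastructure provider
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All          # let routes in other namespaces attach
---
# Application team: binds a route to the shared gateway from its own namespace.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: team-a
spec:
  parentRefs:
    - name: shared-gateway
      namespace: infra
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: app-svc      # the team's own Service; like adding pool members
          port: 80
```

The split mirrors the BIG-IP analogy in the text: the infrastructure side owns the Gateway, and application teams attach routes to it without touching shared configuration.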

How Does AI Inference Drive Cloud Native Networking Demand?
AI inference workloads are accelerating cloud native infrastructure investment. “Inference relies on scalable infrastructure, which is a fundamentally cloud native challenge enabled by CNCF technologies,” said Jonathan Bryce, Executive Director of CNCF, in the F5 Gold Membership announcement. Bryce specifically cited F5’s leadership on NGINX, Gateway API, and OpenTelemetry as “necessary for delivering secure, scalable AI inference workloads reliably to production.”
The connection between AI and networking runs deep. AI inference endpoints require:
- Low-latency load balancing — distributing requests across GPU-backed pods with health-aware routing
- Protocol flexibility — gRPC for model serving (TensorFlow Serving, Triton Inference Server), HTTP/2 for API gateways
- Observability — OpenTelemetry traces and metrics across the entire inference pipeline
- Security — mTLS between services, WAF at ingress, rate limiting per client
These are networking problems. Every item on that list maps directly to skills network engineers already possess — load balancing, protocol management, monitoring, and security policy enforcement. The platform is different (Kubernetes instead of Cisco IOS), but the engineering principles are identical.
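As a sketch of what gRPC-aware routing for model serving can look like under the Gateway API, the manifest below routes the KServe v2 inference protocol's gRPC service toward a Triton-style backend. The gateway name, Service name, and port are assumptions for illustration:

```yaml
# Illustrative GRPCRoute for an inference backend; names and ports are assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: inference-route
spec:
  parentRefs:
    - name: inference-gateway    # a Gateway with a gRPC-capable listener
  rules:
    - matches:
        - method:
            service: inference.GRPCInferenceService   # KServe v2 protocol service
      backendRefs:
        - name: triton-svc       # Service fronting GPU-backed inference pods
          port: 8001             # Triton's conventional gRPC port
```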
According to CNCF, Kubernetes has become “the standard AI platform.” For network engineers watching the cloud networking market shift, this means your Kubernetes networking skills have a direct line to the fastest-growing infrastructure category in enterprise IT.
What Should Network Engineers Do Right Now?
Network engineers should treat F5’s CNCF Gold Membership as a signal to accelerate cloud native skill development. The convergence of traditional ADC vendors with Kubernetes-native networking is not a future trend — it’s happening in production environments today. Here’s a prioritized action plan:
- Deploy a Kubernetes lab with NGINX Ingress Controller — Install K3s or kind locally, deploy the F5 NGINX Ingress Controller, and configure VirtualServer CRDs. This is the hands-on equivalent of configuring virtual servers on BIG-IP.
- Study the Gateway API specification — Read the official Gateway API docs and implement GatewayClass, Gateway, and HTTPRoute resources. Focus on the role-based model and cross-namespace routing.
- Instrument with OpenTelemetry — Deploy an OpenTelemetry Collector in your lab and export traces/metrics from NGINX. This builds the observability muscle that AI inference environments demand.
- Bridge to certification — Map these skills to your CCIE preparation. CCIE Enterprise Infrastructure covers SD-WAN and DNA Center automation that uses similar API-driven paradigms. CCIE DevNet Expert directly tests programmability concepts that align with Kubernetes orchestration.
- Track CNCF project graduations — Monitor which projects move from Sandbox to Incubating to Graduated status. These transitions predict which technologies will become enterprise defaults within 12-24 months.
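For the OpenTelemetry step, a minimal Collector configuration that receives OTLP and prints telemetry for inspection can look like the following; the endpoints shown are the conventional defaults:

```yaml
# Minimal OpenTelemetry Collector config: receive OTLP, batch, print to stdout.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # default OTLP/gRPC port
      http:
        endpoint: 0.0.0.0:4318   # default OTLP/HTTP port
processors:
  batch: {}                      # batch telemetry before export
exporters:
  debug: {}                      # write received telemetry to the Collector's log
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```

Once this pipeline works locally, swapping the `debug` exporter for a real backend (Prometheus, Jaeger, or a vendor endpoint) is a one-section change.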
The network automation career path is increasingly defined by your ability to operate across traditional CLI-driven devices and API-driven cloud native platforms. F5’s CNCF investment confirms that even the most traditional networking vendors see Kubernetes as the future control plane.
Frequently Asked Questions
What does F5’s CNCF Gold Membership mean for network engineers?
F5’s upgrade signals deeper investment in Kubernetes-native networking tools like NGINX Ingress Controller and Gateway API. Network engineers should expect tighter integration between traditional ADC capabilities and cloud native infrastructure, making skills in both domains increasingly valuable.
What is the difference between CNCF Gold and Platinum membership?
Gold members get closer collaboration on CNCF projects and community initiatives. Platinum members receive a guaranteed Governing Board seat with full voting rights and twice-yearly strategy reviews with CNCF leadership. Platinum members include Google, AWS, Microsoft, Red Hat, and Cisco.
Is Kubernetes knowledge required for CCIE certification?
While Kubernetes isn’t directly tested on CCIE lab exams, understanding container networking, ingress controllers, and service mesh is increasingly relevant for enterprise and automation tracks. The CCIE DevNet Expert track covers programmability concepts that overlap with Kubernetes orchestration.
What is the Kubernetes Gateway API and why should I learn it?
Gateway API is the next-generation Kubernetes routing standard replacing the legacy Ingress resource. It provides role-based configuration, cross-namespace routing, and protocol-level flexibility that mirrors how networking teams already operate. F5 is a key contributor to this specification.
How does F5 BIG-IP Next for Kubernetes work?
BIG-IP Next for Kubernetes provides a single control point for container ingress/egress, security, and visibility. It bridges traditional F5 ADC capabilities with Kubernetes-native workflows, supporting non-HTTP protocols and multi-network integration that Kubernetes doesn’t handle natively.
Ready to fast-track your CCIE journey? Contact us on Telegram @firstpasslab for a free assessment.
