You can build a fully functional VXLAN EVPN leaf-spine fabric on EVE-NG using free Nexus 9000v images — no physical Nexus switches or expensive hardware required. This guide walks through the complete stack from underlay IGP to L3VNI inter-VXLAN routing, with every NX-OS command you need and verification steps at each stage.
Key Takeaway: VXLAN EVPN is the dominant fabric technology on the CCIE Data Center v3.1 blueprint, and with Cisco ACI shifting toward NDFC-managed NX-OS fabrics, hands-on CLI-based VXLAN EVPN skills are now non-negotiable for passing the lab exam.
What Hardware Do You Need for a VXLAN EVPN Lab?
A 2-spine, 4-leaf VXLAN EVPN lab requires approximately 48-64 GB RAM on your EVE-NG host. Each Nexus 9000v node requires 8 GB RAM and 2 vCPUs, and you’ll also need lightweight host nodes for end-to-end traffic testing.
EVE-NG Host Requirements
| Component | Minimum | Recommended |
|---|---|---|
| RAM | 48 GB | 64 GB |
| CPU | 8 cores (VT-x/AMD-V) | 12+ cores |
| Storage | 100 GB SSD | 200 GB NVMe |
| EVE-NG Version | Community (free) | Pro (optional) |
Per-Node Resource Allocation
| Node Type | RAM | vCPUs | Quantity | Total RAM |
|---|---|---|---|---|
| Nexus 9000v (Spine) | 8 GB | 2 | 2 | 16 GB |
| Nexus 9000v (Leaf) | 8 GB | 2 | 4 | 32 GB |
| Linux Host (Alpine/Ubuntu) | 512 MB | 1 | 2 | 1 GB |
| Total | — | — | 8 nodes | ~49 GB |
According to the EVE-NG documentation (2026), nested virtualization (running EVE-NG inside VMware or KVM) adds approximately 10-15% overhead. For the smoothest experience, bare-metal installation on a dedicated server is recommended.
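If you are unsure whether your host CPU exposes those extensions, a quick check on any Linux machine (a result of 0 means hardware virtualization is absent or disabled in BIOS/UEFI):
# Count CPU flags for Intel VT-x (vmx) or AMD-V (svm)
grep -Ec '(vmx|svm)' /proc/cpuinfo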
If you’re evaluating which lab platform to use, EVE-NG Community Edition is free and handles Nexus 9000v images well. The same qcow2 images also work in GNS3 and Cisco CML.
How Do You Import Nexus 9000v Images into EVE-NG?
Download the Nexus 9000v qcow2 image from Cisco’s software download page (requires a valid Cisco account) and place it in the correct EVE-NG directory. The image filename must follow EVE-NG’s naming convention.
Step-by-Step Image Import
# Create the image directory on your EVE-NG server
mkdir -p /opt/unetlab/addons/qemu/nxosv9k-10.4.3/
# Copy or download the qcow2 image into the directory
# Rename the image to match EVE-NG's expected format
mv nxosv9k-10.4.3.qcow2 /opt/unetlab/addons/qemu/nxosv9k-10.4.3/virtioa.qcow2
# Fix permissions
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
Important: The image must be named virtioa.qcow2 inside its directory. Use NX-OS 10.3.x or 10.4.x for the best VXLAN EVPN feature support — older versions may lack features like ingress replication or distributed anycast gateway.
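Before creating a node, sanity-check the placement (the path matches the 10.4.3 directory created above; adjust it if you imported a different release):
# The directory should contain exactly one file named virtioa.qcow2
ls -lh /opt/unetlab/addons/qemu/nxosv9k-10.4.3/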
What Does the Lab Topology Look Like?
The topology uses a standard Clos (leaf-spine) architecture with 2 spines and 4 leaves. Spines serve as BGP route reflectors for the EVPN overlay, while leaves act as VTEPs (VXLAN Tunnel Endpoints) hosting tenant workloads.
┌──────────┐ ┌──────────┐
│ Spine-1 │ │ Spine-2 │
│ Lo0: .1 │ │ Lo0: .2 │
│ AS 65000 │ │ AS 65000 │
└────┬┬┬┬──┘ └──┬┬┬┬────┘
││││ ││││
┌──────────┘│││ ┌─────┘│││
│ ┌─────┘││ │ ┌───┘││
│ │ ┌───┘│ │ │ ┌─┘│
│ │ │ │ │ │ │ │
┌────┴┐ ┌──┴──┴┐ ┌┴────┴┐ ┌┴──┴──┐
│Leaf1│ │Leaf2 │ │Leaf3 │ │Leaf4 │
│Lo0:3│ │Lo0:.4│ │Lo0:.5│ │Lo0:.6│
└──┬──┘ └──┬───┘ └──┬───┘ └──┬───┘
│ │ │ │
[Host1]               [Host2]
VLAN 10    VLAN 10    VLAN 20    VLAN 20
IP Addressing Plan
| Device | Loopback0 (Router-ID) | Loopback1 (VTEP) |
|---|---|---|
| Spine-1 | 10.0.0.1/32 | — |
| Spine-2 | 10.0.0.2/32 | — |
| Leaf-1 | 10.0.0.3/32 | 10.0.1.3/32 |
| Leaf-2 | 10.0.0.4/32 | 10.0.1.4/32 |
| Leaf-3 | 10.0.0.5/32 | 10.0.1.5/32 |
| Leaf-4 | 10.0.0.6/32 | 10.0.1.6/32 |
Point-to-point links use /30 subnets from 10.10.0.0/16: 10.10.1.0/30 through 10.10.4.0/30 on Spine-1's links, and 10.10.5.0/30 onward on Spine-2's, following the same leaf order. Loopback0 serves as the BGP and OSPF router-ID, while Loopback1 is the VTEP source interface for NVE.
How Do You Configure the Underlay with OSPF?
The underlay provides IP reachability between all loopback addresses — this is the foundation everything else builds on. Configure OSPF with point-to-point network type on all fabric links to eliminate DR/BDR elections and reduce LSA overhead.
Spine-1 Underlay Configuration
feature ospf
router ospf UNDERLAY
router-id 10.0.0.1
interface loopback0
ip address 10.0.0.1/32
ip router ospf UNDERLAY area 0.0.0.0
interface Ethernet1/1
description to Leaf-1
no switchport
ip address 10.10.1.1/30
ip ospf network point-to-point
ip router ospf UNDERLAY area 0.0.0.0
no shutdown
interface Ethernet1/2
description to Leaf-2
no switchport
ip address 10.10.2.1/30
ip ospf network point-to-point
ip router ospf UNDERLAY area 0.0.0.0
no shutdown
interface Ethernet1/3
description to Leaf-3
no switchport
ip address 10.10.3.1/30
ip ospf network point-to-point
ip router ospf UNDERLAY area 0.0.0.0
no shutdown
interface Ethernet1/4
description to Leaf-4
no switchport
ip address 10.10.4.1/30
ip ospf network point-to-point
ip router ospf UNDERLAY area 0.0.0.0
no shutdown
Leaf-1 Underlay Configuration
feature ospf
router ospf UNDERLAY
router-id 10.0.0.3
interface loopback0
ip address 10.0.0.3/32
ip router ospf UNDERLAY area 0.0.0.0
interface loopback1
description VTEP Source
ip address 10.0.1.3/32
ip router ospf UNDERLAY area 0.0.0.0
interface Ethernet1/1
description to Spine-1
no switchport
ip address 10.10.1.2/30
ip ospf network point-to-point
ip router ospf UNDERLAY area 0.0.0.0
no shutdown
interface Ethernet1/2
description to Spine-2
no switchport
ip address 10.10.5.2/30
ip ospf network point-to-point
ip router ospf UNDERLAY area 0.0.0.0
no shutdown
Repeat for Leaf-2 through Leaf-4 with appropriate IP addresses.
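For reference, here is the full Leaf-2 underlay. The Spine-1 link address (10.10.2.2/30) comes from the addressing plan; the Spine-2 link address (10.10.6.2/30) is an assumption that extends the pattern set by Leaf-1's 10.10.5.0/30 link:
feature ospf
router ospf UNDERLAY
router-id 10.0.0.4
interface loopback0
ip address 10.0.0.4/32
ip router ospf UNDERLAY area 0.0.0.0
interface loopback1
description VTEP Source
ip address 10.0.1.4/32
ip router ospf UNDERLAY area 0.0.0.0
interface Ethernet1/1
description to Spine-1
no switchport
ip address 10.10.2.2/30
ip ospf network point-to-point
ip router ospf UNDERLAY area 0.0.0.0
no shutdown
interface Ethernet1/2
description to Spine-2
no switchport
ip address 10.10.6.2/30
ip ospf network point-to-point
ip router ospf UNDERLAY area 0.0.0.0
no shutdown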
Underlay Verification
Before proceeding, verify full loopback reachability:
Spine-1# show ip ospf neighbors
OSPF Process ID UNDERLAY VRF default
Total number of neighbors: 4
Neighbor ID Pri State Up Time Address Interface
10.0.0.3 1 FULL/ - 00:05:12 10.10.1.2 Eth1/1
10.0.0.4 1 FULL/ - 00:05:10 10.10.2.2 Eth1/2
10.0.0.5 1 FULL/ - 00:05:08 10.10.3.2 Eth1/3
10.0.0.6 1 FULL/ - 00:05:06 10.10.4.2 Eth1/4
Leaf-1# ping 10.0.1.4 source 10.0.1.3
PING 10.0.1.4 (10.0.1.4): 56 data bytes
64 bytes from 10.0.1.4: icmp_seq=0 ttl=253 time=3.2 ms
Verify that every leaf can ping every other leaf’s Loopback1 (VTEP) address. If this fails, VXLAN tunnels will not form.
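Two more quick checks are worth running on every device before moving on (commands only; output timing will vary):
show ip ospf interface brief
show ip route ospf-UNDERLAY
Every fabric link should report a point-to-point network type, and every remote loopback should appear as an OSPF /32 route.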
How Do You Configure BGP EVPN Overlay?
The BGP EVPN overlay uses iBGP with spines as route reflectors. All devices share ASN 65000, and spines reflect EVPN routes (Type-2 MAC/IP, Type-5 IP Prefix) between leaves. This is the control plane for VXLAN — it distributes MAC addresses and host routes across the fabric.
Enable Required Features on All Devices
feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn
Spine-1 BGP Configuration (Route Reflector)
router bgp 65000
router-id 10.0.0.1
address-family l2vpn evpn
retain route-target all
neighbor 10.0.0.3
remote-as 65000
update-source loopback0
address-family l2vpn evpn
send-community both
route-reflector-client
neighbor 10.0.0.4
remote-as 65000
update-source loopback0
address-family l2vpn evpn
send-community both
route-reflector-client
neighbor 10.0.0.5
remote-as 65000
update-source loopback0
address-family l2vpn evpn
send-community both
route-reflector-client
neighbor 10.0.0.6
remote-as 65000
update-source loopback0
address-family l2vpn evpn
send-community both
route-reflector-client
Key detail: The retain route-target all command on spines ensures that route reflectors keep all EVPN routes regardless of local import policy. Without this, spines would drop routes for VNIs they don’t participate in.
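Spine-2 mirrors this configuration with its own router-ID. A condensed sketch, showing only the first neighbor (the remaining three repeat the same stanza):
router bgp 65000
router-id 10.0.0.2
address-family l2vpn evpn
retain route-target all
neighbor 10.0.0.3
remote-as 65000
update-source loopback0
address-family l2vpn evpn
send-community both
route-reflector-client
! Repeat the neighbor stanza for 10.0.0.4, 10.0.0.5, and 10.0.0.6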
Leaf-1 BGP Configuration
router bgp 65000
router-id 10.0.0.3
neighbor 10.0.0.1
remote-as 65000
update-source loopback0
address-family l2vpn evpn
send-community both
neighbor 10.0.0.2
remote-as 65000
update-source loopback0
address-family l2vpn evpn
send-community both
BGP EVPN Verification
Leaf-1# show bgp l2vpn evpn summary
BGP summary information for VRF default, address family L2VPN EVPN
BGP router identifier 10.0.0.3, local AS number 65000
BGP table version is 1, L2VPN EVPN config peers 2, capable peers 2
Neighbor V AS MsgRcvd MsgSent InQ OutQ Up/Down State/PfxRcd
10.0.0.1 4 65000 45 42 0 0 00:10:23 0
10.0.0.2 4 65000 44 41 0 0 00:10:20 0
Both spine neighbors should show State/PfxRcd with a number (or 0 if no VNIs configured yet). If the state shows Idle or Active, check your loopback reachability and update-source settings.
How Do You Configure L2VNI for Layer 2 Extension?
L2VNI maps VLANs to VXLAN Network Identifiers, enabling Layer 2 stretching across the fabric. This is how hosts in the same VLAN on different leaves communicate at Layer 2 — EVPN distributes their MAC addresses via Type-2 routes. According to Cisco’s VXLAN configuration guide (2026), ingress replication is the recommended BUM handling method for most deployments.
Leaf-1 L2VNI Configuration
! Create VLANs and map to VN segments
vlan 10
vn-segment 100010
vlan 20
vn-segment 100020
! Configure EVPN instance for each VNI
evpn
vni 100010 l2
rd auto
route-target import auto
route-target export auto
vni 100020 l2
rd auto
route-target import auto
route-target export auto
! Configure NVE interface (VTEP)
interface nve1
no shutdown
host-reachability protocol bgp
source-interface loopback1
member vni 100010
ingress-replication protocol bgp
member vni 100020
ingress-replication protocol bgp
! Configure host-facing interface
interface Ethernet1/5
switchport
switchport access vlan 10
no shutdown
Apply the same L2VNI configuration on all leaves (Leaf-1 through Leaf-4), adjusting the host-facing interface VLAN as needed. For our topology, Leaf-1 and Leaf-2 host VLAN 10, while Leaf-3 and Leaf-4 host VLAN 20.
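For example, on Leaf-3 the host-facing port goes into VLAN 20 instead (Ethernet1/5 as the host port is carried over from Leaf-1 and assumed here):
interface Ethernet1/5
switchport
switchport access vlan 20
no shutdown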
L2VNI Verification
Leaf-1# show nve peers
Interface Peer-IP State LearnType Uptime Router-Mac
--------- --------------- ----- --------- -------- -----------------
nve1 10.0.1.4 Up CP 00:02:15 5004.0000.1b08
nve1 10.0.1.5 Up CP 00:02:10 5005.0000.1b08
nve1 10.0.1.6 Up CP 00:02:08 5006.0000.1b08
Leaf-1# show vxlan
Vlan VN-Segment
==== ==========
10 100010
20 100020
Leaf-1# show bgp l2vpn evpn
Network Next Hop Metric LocPrf Weight Path
Route Distinguisher: 10.0.0.3:32777 (L2VNI 100010)
*>l[2]:[0]:[0]:[48]:[0050.0000.0001]:[0]:[0.0.0.0]/216
10.0.1.3 100 32768 i
*>l[3]:[0]:[32]:[10.0.1.3]/88
10.0.1.3 100 32768 i
If show nve peers shows peers in Up state with CP (control plane) learning, your EVPN overlay is working. Type-2 routes carry MAC addresses, and Type-3 routes handle ingress replication for BUM traffic.
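Once hosts begin sending traffic, two more checks are useful (commands only):
show mac address-table vlan 10
show l2route evpn mac all
Remote MACs in the address table point to nve1 with the remote VTEP's IP, and the l2route output lists every EVPN-learned MAC with its next-hop VTEP.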
For more detail on EVPN multi-homing and ESI configurations, check our dedicated guide on ESI LAG with Nexus.
How Do You Configure L3VNI for Inter-VXLAN Routing?
L3VNI enables routing between different VNIs (subnets) using a tenant VRF and symmetric IRB (Integrated Routing and Bridging). Each leaf performs distributed routing — traffic between VLAN 10 and VLAN 20 is routed locally at the ingress leaf rather than hairpinning through a centralized router.
Leaf-1 L3VNI Configuration
! Enable SVI and anycast gateway features (leaf switches only)
feature interface-vlan
feature fabric forwarding
! Create tenant VRF
vrf context TENANT-1
vni 50000
rd auto
address-family ipv4 unicast
route-target import auto
route-target import auto evpn
route-target export auto
route-target export auto evpn
! Create L3VNI VLAN (transit VLAN — no hosts)
vlan 500
vn-segment 50000
! L3VNI SVI
interface Vlan500
no shutdown
vrf member TENANT-1
ip forward
no ip redirects
! Distributed anycast gateway for VLAN 10
interface Vlan10
no shutdown
vrf member TENANT-1
ip address 192.168.10.1/24
fabric forwarding mode anycast-gateway
no ip redirects
! Distributed anycast gateway for VLAN 20
interface Vlan20
no shutdown
vrf member TENANT-1
ip address 192.168.20.1/24
fabric forwarding mode anycast-gateway
no ip redirects
! Enable anycast gateway MAC (same on ALL leaves)
fabric forwarding anycast-gateway-mac 0001.0001.0001
! Add L3VNI to NVE interface
interface nve1
member vni 50000 associate-vrf
! Advertise tenant VRF in BGP
router bgp 65000
vrf TENANT-1
address-family ipv4 unicast
advertise l2vpn evpn
Critical: The fabric forwarding anycast-gateway-mac must be identical on every leaf. This is what makes the distributed gateway work — every leaf responds to ARP for the gateway IP with the same MAC address, so hosts always use their local leaf as the default gateway.
L3VNI Verification
Leaf-1# show vrf TENANT-1
VRF-Name VRF-ID State Reason
TENANT-1 3 Up --
Leaf-1# show nve vni
Codes: CP - Control Plane, DP - Data Plane
Interface VNI Multicast-group State Mode Type [BD/VRF]
--------- -------- ---------------- ----- ---- ---- --------
nve1 100010 UnicastBGP Up CP L2 [10]
nve1 100020 UnicastBGP Up CP L2 [20]
nve1 50000 n/a Up CP L3 [TENANT-1]
Leaf-1# show bgp l2vpn evpn route-type 5
Network Next Hop Metric LocPrf Weight Path
Route Distinguisher: 10.0.0.3:3
*>l[5]:[0]:[0]:[24]:[192.168.10.0]/224
10.0.1.3 100 32768 i
*>i[5]:[0]:[0]:[24]:[192.168.20.0]/224
10.0.1.5 100 0 i
Type-5 routes carry IP prefixes for a tenant VRF across the fabric. When routes from remote leaves appear in the L2VPN EVPN table, inter-subnet routing is operational.
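You can also confirm that a remote host's /32, learned from its EVPN Type-2 route, has landed in the tenant routing table (command only; the remote host must have sent traffic so its leaf has learned the ARP entry):
show ip route 192.168.20.10 vrf TENANT-1
Expect a BGP route via the remote VTEP (10.0.1.5) tagged with the L3VNI segment ID 50000.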
End-to-End Test
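First give each host an address and point it at the anycast gateway. A minimal sketch for Host-1, assuming a Linux node whose fabric-facing interface is eth0 (the interface name varies by image):
# Host-1: address in VLAN 10, default gateway = the anycast gateway SVI
ip addr add 192.168.10.10/24 dev eth0
ip link set eth0 up
ip route add default via 192.168.10.1
# Host-2 mirrors this with 192.168.20.10/24 via 192.168.20.1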
From Host-1 (VLAN 10, 192.168.10.10) connected to Leaf-1, ping Host-2 (VLAN 20, 192.168.20.10) connected to Leaf-3:
Host-1$ ping 192.168.20.10
PING 192.168.20.10 (192.168.20.10): 56 data bytes
64 bytes from 192.168.20.10: seq=0 ttl=62 time=8.5 ms
64 bytes from 192.168.20.10: seq=1 ttl=62 time=3.2 ms
The TTL of 62 (default 64 minus two routed hops) confirms the packet was routed once at the ingress leaf (Leaf-1) and once more at the egress leaf (Leaf-3) after VXLAN decapsulation: symmetric IRB in action.
What Are the Most Common VXLAN EVPN Lab Troubleshooting Issues?
The most common issue is mismatched VNI-to-VLAN mappings or missing nv overlay evpn — without this global command, no EVPN routes are exchanged even if BGP sessions are up.
Quick Troubleshooting Checklist
| Symptom | Check | Fix |
|---|---|---|
| NVE peers not forming | show nve peers | Verify Loopback1 reachability via ping from VTEP source |
| BGP EVPN session idle/active | show bgp l2vpn evpn summary | Verify loopback reachability and update-source loopback0; confirm feature nv overlay and nv overlay evpn are enabled |
| No Type-2 routes | show bgp l2vpn evpn route-type 2 | Verify evpn block under vni and send-community both |
| L3VNI routing fails | show vrf TENANT-1 | Check vni 50000 under VRF and member vni 50000 associate-vrf on NVE |
| Same RD on multiple leaves | show bgp l2vpn evpn | Use rd auto (auto-generates unique RD per switch); identical manual RDs break EVPN, as noted by Cisco Community (2025) |
| Anycast gateway not responding | show ip arp vrf TENANT-1 | Verify fabric forwarding anycast-gateway-mac is identical on all leaves |
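When a symptom does not fit the table, a fast triage that walks the dependency chain (underlay, then BGP, then NVE peers, then EVPN routes) usually finds the break. From Leaf-1, commands only:
ping 10.0.1.4 source 10.0.1.3
show bgp l2vpn evpn summary
show nve peers
show bgp l2vpn evpn route-type 2
show nve vni
Work top to bottom: the first command that fails marks the layer where the problem lives, and nothing above it can work until that layer is fixed.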
Where Does This Fit in CCIE Data Center v3.1 Preparation?
VXLAN EVPN covers Section 3.0 (Data Center Fabric Connectivity) of the CCIE DC v3.1 blueprint — the largest weighted section in the lab exam. According to INE’s lab guide analysis (2026), all VXLAN EVPN topics can be fully practiced using Nexus 9000v virtualization, making this lab directly relevant to exam preparation.
The CCIE DC v3.1 lab tests candidates on:
- Underlay design: OSPF/IS-IS for loopback reachability
- eBGP vs iBGP overlay: Understanding when to use each model
- L2VNI and L3VNI: Stretching Layer 2 and routing between tenants
- vPC with VXLAN: Dual-homing hosts to leaf pairs (advanced topic)
- Multi-site EVPN: Border gateway configuration for data center interconnect
This lab covers the first three topics. For career planning around CCIE Data Center, NX-OS VXLAN EVPN skills are increasingly valuable as the industry transitions away from proprietary fabric controllers.
Frequently Asked Questions
How much RAM do I need to run a VXLAN EVPN lab on EVE-NG?
Each Nexus 9000v requires 8 GB RAM and 2 vCPUs. A minimal 2-spine, 4-leaf topology with 2 host nodes needs approximately 48-64 GB RAM on your EVE-NG host, plus overhead for the hypervisor itself.
Is VXLAN EVPN on the CCIE Data Center v3.1 exam?
Yes. VXLAN EVPN is a core topic in Section 3.0 (Data Center Fabric Connectivity) of the CCIE DC v3.1 blueprint. According to INE (2026), all VXLAN EVPN topics can be fully practiced using Nexus 9000v virtualization.
Should I use OSPF or IS-IS for the VXLAN EVPN underlay?
Either works. OSPF is more common in Cisco documentation and lab guides, while IS-IS is preferred in large-scale deployments. For CCIE DC lab prep, master OSPF first since most Cisco reference designs use it, then learn IS-IS as an alternative.
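For reference, a minimal IS-IS underlay sketch for Leaf-1 under the same addressing. The NET value is an arbitrary example (any NET with a unique system-ID per device works), and medium p2p plays the role that ip ospf network point-to-point plays in the OSPF version:
feature isis
router isis UNDERLAY
net 49.0001.0000.0000.0003.00
interface loopback0
ip router isis UNDERLAY
interface Ethernet1/1
description to Spine-1
no switchport
ip address 10.10.1.2/30
medium p2p
ip router isis UNDERLAY
no shutdown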
What is the difference between L2VNI and L3VNI?
L2VNI extends Layer 2 VLANs across the VXLAN fabric for bridging (same subnet). L3VNI enables inter-VXLAN routing between different subnets using a tenant VRF. Most production fabrics use both: L2VNI for stretched VLANs and L3VNI for inter-subnet traffic.
Can I use GNS3 or CML instead of EVE-NG for this lab?
Yes. The same Nexus 9000v qcow2 images work in GNS3 and Cisco CML. The NX-OS configurations are identical regardless of the platform. EVE-NG is popular because it’s free (Community Edition) and supports browser-based access.
Building this lab from scratch teaches you the VXLAN EVPN stack in a way that reading documentation alone never will. Every configuration line maps to a concept tested on the CCIE Data Center lab exam — underlay reachability, control plane distribution, and data plane encapsulation.
Ready to accelerate your CCIE Data Center preparation? Contact us on Telegram @phil66xx for a free assessment of your lab readiness and a personalized study plan.