A Cisco Catalyst 8000V running on a $1/day AWS t3.medium instance gives you a production-grade hybrid cloud lab that connects to your on-prem CML or EVE-NG environment via IPsec VPN with BGP. This is the fastest way for a network engineer to get hands-on with cloud networking using real infrastructure instead of slides and diagrams.

Key Takeaway: Building a hybrid cloud lab with AWS VPC and Cisco Catalyst 8000V costs under $2/day, teaches cloud networking fundamentals through a network engineer’s lens, and maps directly to CCIE EI v1.1 blueprint topics — making it the single best investment for bridging traditional and cloud networking skills.

What Will You Build in This Lab?

The complete lab architecture connects an on-premises network (running in CML or EVE-NG on your local machine) to AWS through a Cisco Catalyst 8000V acting as the cloud-side router. The topology includes:

On-Prem Lab (CML/EVE-NG)                          AWS Cloud
┌─────────────────────┐                    ┌──────────────────────────────┐
│  CSR1000v / IOSv    │                    │  VPC: 10.100.0.0/16         │
│  Loopback0: 1.1.1.1 │                    │                              │
│  ASN 65001          │     IPsec VPN      │  ┌────────────────────────┐  │
│                     │◄──────────────────►│  │ Catalyst 8000V (cEdge) │  │
│  Lab Prefix:        │     + BGP (eBGP)   │  │ Public: 10.100.1.0/24  │  │
│  192.168.0.0/16     │                    │  │ Private: 10.100.2.0/24 │  │
└─────────────────────┘                    │  │ ASN 65002              │  │
                                           │  └────────────────────────┘  │
                                           │              │               │
                                           │       Transit Gateway        │
                                           │        ┌─────┴─────┐        │
                                           │     VPC-A       VPC-B       │
                                           │   10.200.0.0  10.201.0.0    │
                                           └──────────────────────────────┘

By the end, you’ll have BGP exchanging routes between your physical lab and multiple AWS VPCs through a Transit Gateway — the exact architecture used in enterprise hybrid cloud deployments.
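One prerequisite hidden in that diagram: none of the four prefixes (10.100.0.0/16, 10.200.0.0/16, 10.201.0.0/16, 192.168.0.0/16) may overlap, or hybrid routing breaks in confusing ways. Here's a hypothetical helper (plain POSIX awk, no AWS dependency) to sanity-check any two CIDR blocks before you commit to an addressing plan:

```shell
#!/usr/bin/env bash
# Hypothetical helper: print "yes" if two CIDR blocks overlap, "no" otherwise.
overlaps() {  # usage: overlaps 10.100.0.0/16 10.200.0.0/16
  awk -v a="$1" -v b="$2" '
    function ip2n(s,  p) { split(s, p, "."); return ((p[1]*256 + p[2])*256 + p[3])*256 + p[4] }
    BEGIN {
      split(a, A, "/"); split(b, B, "/")
      # Two blocks overlap iff they share the same network under the shorter mask
      len  = (A[2] + 0 < B[2] + 0) ? A[2] + 0 : B[2] + 0
      size = 2 ^ (32 - len)
      print ((int(ip2n(A[1]) / size) == int(ip2n(B[1]) / size)) ? "yes" : "no")
    }'
}

overlaps 10.100.0.0/16 192.168.0.0/16   # prints: no
overlaps 10.100.0.0/16 10.100.2.0/24    # prints: yes (it is a subnet of the VPC)
```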

What Do You Need Before Starting?

Before deploying anything in AWS, make sure you have these prerequisites ready:

  • AWS Account with a payment method (free tier covers some resources, but EC2 charges apply)
  • On-prem lab environment — CML, EVE-NG, or GNS3 with a router image that supports IKEv2 and BGP (CSR1000v, IOSv, or a C8000V image; note that IOSvL2 is a Layer 2 switch image and can't terminate IPsec)
  • Public IP address on your home/lab network (or a NAT traversal solution)
  • Cisco Smart Account (for BYOL licensing — free to create at software.cisco.com)
  • AWS CLI installed and configured (optional but speeds up deployment)

Total estimated cost for a weekend lab session: $2-5 (EC2 instance + data transfer).

How Do You Create the AWS VPC and Subnets?

The VPC is your cloud-side network boundary — the equivalent of a VRF in Cisco terms. Every subnet inside the VPC gets a virtual router at the .1 address of its CIDR, which handles L3 forwarding. According to AWS networking documentation, the VPC route table functions like a static routing table that you can augment with BGP through Transit Gateway.

Step 1: Create the VPC

Navigate to the VPC console or use the CLI:

aws ec2 create-vpc --cidr-block 10.100.0.0/16 --tag-specifications \
  'ResourceType=vpc,Tags=[{Key=Name,Value=hybrid-lab-vpc}]'

Step 2: Create two subnets

The public subnet hosts the Catalyst 8000V’s outside interface (facing the internet for VPN termination). The private subnet simulates a workload network:

# Public subnet for C8000V
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.100.1.0/24 \
  --availability-zone us-east-1a --tag-specifications \
  'ResourceType=subnet,Tags=[{Key=Name,Value=public-csr}]'

# Private subnet for workloads
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.100.2.0/24 \
  --availability-zone us-east-1a --tag-specifications \
  'ResourceType=subnet,Tags=[{Key=Name,Value=private-workload}]'

Step 3: Create and attach an Internet Gateway

aws ec2 create-internet-gateway --tag-specifications \
  'ResourceType=internet-gateway,Tags=[{Key=Name,Value=hybrid-lab-igw}]'

aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>

Step 4: Configure the route table

Add a default route pointing to the Internet Gateway for the public subnet. The private subnet’s route table should point on-prem prefixes to the Catalyst 8000V’s ENI:

# Public subnet route table — default route to IGW
aws ec2 create-route --route-table-id <rtb-id> \
  --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>
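The four steps above can be strung together into one script. This is a sketch, assuming AWS credentials and a default region are already configured; it also creates a dedicated public route table and associates it, since create-route needs an explicit route-table ID:

```shell
#!/usr/bin/env bash
# Sketch: Steps 1-4 end to end, capturing resource IDs with --query.
set -euo pipefail

VPC_ID=$(aws ec2 create-vpc --cidr-block 10.100.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=hybrid-lab-vpc}]' \
  --query 'Vpc.VpcId' --output text)

PUB_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.100.1.0/24 \
  --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)

PRIV_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.100.2.0/24 \
  --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)

IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"

RTB_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RTB_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RTB_ID" --subnet-id "$PUB_SUBNET"

echo "vpc=$VPC_ID public=$PUB_SUBNET private=$PRIV_SUBNET igw=$IGW_ID"
```

Running it requires an AWS account and credentials, so treat it as a starting point rather than a turnkey deployment.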

Cloud-to-Cisco translation table:

AWS Concept               Cisco Equivalent
VPC (10.100.0.0/16)       VRF with a /16 address space
Subnet (10.100.1.0/24)    VLAN / SVI on a /24 segment
Route Table               Static routing table (no dynamic protocols natively)
Internet Gateway          Default route to upstream ISP
Security Group            Stateful ACL (permit return traffic automatically)
Network ACL               Stateless extended ACL (inbound + outbound rules)
Elastic IP                NAT static translation for public reachability

How Do You Deploy Cisco Catalyst 8000V from AWS Marketplace?

The Catalyst 8000V (C8000V) is the successor to the CSR 1000v and runs the same IOS-XE code. According to Cisco’s Catalyst 8000V ordering guide, the supported AWS instance types start at t3.medium (2 vCPU, 4 GB RAM) and scale up to c5n.9xlarge for high-throughput deployments.

Step 1: Find the AMI in AWS Marketplace

Search for “Cisco Catalyst 8000V” in the AWS Marketplace. Choose the BYOL listing if you have a Smart Account license, or Pay-As-You-Go for a simpler (but more expensive) option.

Step 2: Launch the instance

  • Instance type: t3.medium ($0.042/hour on-demand in us-east-1)
  • VPC: Select your hybrid-lab-vpc
  • Subnet: Public subnet (10.100.1.0/24)
  • Auto-assign Public IP: Disable (we’ll use an Elastic IP)
  • Security Group: Create a new one with these rules:
Type              Protocol   Port   Source               Purpose
SSH               TCP        22     Your IP/32           Management access
Custom UDP        UDP        500    Your public IP/32    IKEv2
Custom UDP        UDP        4500   Your public IP/32    IPsec NAT-T
Custom Protocol   ESP (50)   All    Your public IP/32    IPsec ESP
ICMP              ICMP       All    10.0.0.0/8           Lab ping tests
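The same security group can be built from the CLI. A sketch follows; MY_IP and the group name c8kv-lab-sg are placeholders, not values from any existing deployment:

```shell
# Sketch: create the lab security group and add the VPN/management rules.
MY_IP="203.0.113.10"   # placeholder (RFC 5737 documentation range) — use your real public IP
SG_ID=$(aws ec2 create-security-group --group-name c8kv-lab-sg \
  --description "C8000V hybrid lab" --vpc-id "<vpc-id>" \
  --query 'GroupId' --output text)

aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22   --cidr "$MY_IP/32"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol udp --port 500  --cidr "$MY_IP/32"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol udp --port 4500 --cidr "$MY_IP/32"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol 50  --cidr "$MY_IP/32"   # ESP
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --ip-permissions 'IpProtocol=icmp,FromPort=-1,ToPort=-1,IpRanges=[{CidrIp=10.0.0.0/8}]'
```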

Step 3: Add a second network interface

After launch, create and attach a second ENI in the private subnet (10.100.2.0/24). This gives the C8000V two interfaces — GigabitEthernet1 (public) and GigabitEthernet2 (private), just like a physical router with WAN and LAN interfaces.

Step 4: Assign an Elastic IP

aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id <instance-id> --allocation-id <eip-alloc-id>

Step 5: SSH into the router

ssh -i your-key.pem ec2-user@<elastic-ip>

You should see the familiar IOS-XE prompt. Verify the interfaces:

Router# show ip interface brief
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet1       10.100.1.x      YES DHCP   up                    up
GigabitEthernet2       10.100.2.x      YES DHCP   up                    up

Cost optimization tip: Stop the instance when you’re not labbing. A stopped instance costs $0 for compute — you only pay for the EBS volume (~$0.08/GB/month for gp3). An 8 GB root volume costs about $0.64/month when stopped.

How Do You Configure the IPsec VPN Tunnel?

The IPsec tunnel connects your on-prem lab router to the Catalyst 8000V in AWS. I’m using IKEv2 with pre-shared key for simplicity, but you can substitute certificate-based authentication for a more production-like setup.
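Before pasting the configs, it's worth generating your own pre-shared key rather than reusing the sample one shown below. Plain OpenSSL does the job — nothing Cisco-specific here:

```shell
# Generate a random 24-byte key, base64-encoded (32 characters)
PSK=$(openssl rand -base64 24)
echo "$PSK"
```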

On the AWS Catalyst 8000V (cloud side):

! Crypto configuration
crypto ikev2 proposal HYBRID-LAB
 encryption aes-cbc-256
 integrity sha256
 group 14
!
crypto ikev2 policy HYBRID-LAB
 proposal HYBRID-LAB
!
crypto ikev2 keyring ONPREM-KEY
 peer ONPREM
  address <your-public-ip>
  pre-shared-key Str0ngP@ssw0rd!
!
crypto ikev2 profile HYBRID-LAB
 match identity remote address <your-public-ip> 255.255.255.255
 authentication remote pre-share
 authentication local pre-share
 keyring local ONPREM-KEY
!
crypto ipsec transform-set AES256-SHA256 esp-aes 256 esp-sha256-hmac
 mode tunnel
!
crypto ipsec profile HYBRID-LAB
 set transform-set AES256-SHA256
 set ikev2-profile HYBRID-LAB
!
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source GigabitEthernet1
 tunnel destination <your-public-ip>
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile HYBRID-LAB
!

On the on-prem router (CML/EVE-NG side):

! Mirror configuration — swap addresses
crypto ikev2 proposal HYBRID-LAB
 encryption aes-cbc-256
 integrity sha256
 group 14
!
crypto ikev2 policy HYBRID-LAB
 proposal HYBRID-LAB
!
crypto ikev2 keyring AWS-KEY
 peer AWS
  address <elastic-ip>
  pre-shared-key Str0ngP@ssw0rd!
!
crypto ikev2 profile HYBRID-LAB
 match identity remote address <elastic-ip> 255.255.255.255
 authentication remote pre-share
 authentication local pre-share
 keyring local AWS-KEY
!
crypto ipsec transform-set AES256-SHA256 esp-aes 256 esp-sha256-hmac
 mode tunnel
!
crypto ipsec profile HYBRID-LAB
 set transform-set AES256-SHA256
 set ikev2-profile HYBRID-LAB
!
interface Tunnel0
 ip address 172.16.0.2 255.255.255.252
 tunnel source GigabitEthernet1
 tunnel destination <elastic-ip>
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile HYBRID-LAB
!

Verify the tunnel:

Router# show crypto ikev2 sa
 Tunnel-id Local                 Remote                fvrf/ivrf            Status
 1         10.100.1.x/500        <your-ip>/500         none/none            READY

Router# ping 172.16.0.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)

How Do You Configure BGP Over the VPN Tunnel?

Static routes work, but BGP is how production hybrid clouds exchange routes. eBGP between the cloud and on-prem routers lets you add new VPCs or lab segments without manually updating route tables on both sides.

On the AWS Catalyst 8000V:

router bgp 65002
 bgp log-neighbor-changes
 neighbor 172.16.0.2 remote-as 65001
 !
 address-family ipv4
  network 10.100.0.0 mask 255.255.0.0
  network 10.100.2.0 mask 255.255.255.0
  neighbor 172.16.0.2 activate
 exit-address-family
!
ip route 10.100.0.0 255.255.0.0 Null0

On the on-prem router:

router bgp 65001
 bgp log-neighbor-changes
 neighbor 172.16.0.1 remote-as 65002
 !
 address-family ipv4
  network 192.168.0.0 mask 255.255.0.0
  neighbor 172.16.0.1 activate
 exit-address-family
!
ip route 192.168.0.0 255.255.0.0 Null0

Verify BGP adjacency and route exchange:

Router# show bgp ipv4 unicast summary
Neighbor        V    AS   MsgRcvd  MsgSent  TblVer  InQ OutQ Up/Down  State/PfxRcd
172.16.0.2      4  65001      15       17       3    0    0 00:05:32  1

Router# show ip route bgp
B     192.168.0.0/16 [20/0] via 172.16.0.2, 00:05:32

Now your AWS VPC knows about 192.168.0.0/16 (on-prem lab), and your lab knows about 10.100.0.0/16 (AWS VPC). The route exchange is dynamic — add a new network statement on either side and it propagates automatically.

Important AWS step: Update the VPC route table to point on-prem prefixes (192.168.0.0/16) to the Catalyst 8000V’s ENI. AWS route tables don’t learn from BGP natively — you need this static entry:

aws ec2 create-route --route-table-id <private-rtb-id> \
  --destination-cidr-block 192.168.0.0/16 \
  --network-interface-id <c8000v-private-eni-id>

Also disable source/destination checking on the C8000V instance (required for routing):

aws ec2 modify-instance-attribute --instance-id <instance-id> \
  --no-source-dest-check

How Do You Extend to Transit Gateway for Multi-VPC Connectivity?

Transit Gateway is where this lab goes from “cool demo” to “enterprise architecture practice.” TGW centralizes routing between your transit VPC (with the Catalyst 8000V) and additional spoke VPCs — exactly how multi-cloud networking works in production.

Step 1: Create the Transit Gateway

aws ec2 create-transit-gateway --description "hybrid-lab-tgw" \
  --options "AmazonSideAsn=64512,AutoAcceptSharedAttachments=enable,DefaultRouteTableAssociation=enable,DefaultRouteTablePropagation=enable,DnsSupport=enable"

Step 2: Create two spoke VPCs

# Spoke VPC-A
aws ec2 create-vpc --cidr-block 10.200.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=spoke-vpc-a}]'

# Spoke VPC-B
aws ec2 create-vpc --cidr-block 10.201.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=spoke-vpc-b}]'

Step 3: Attach all three VPCs to the Transit Gateway

aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id <tgw-id> \
  --vpc-id <transit-vpc-id> --subnet-ids <public-subnet-id>

aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id <tgw-id> \
  --vpc-id <spoke-vpc-a-id> --subnet-ids <spoke-a-subnet-id>

aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id <tgw-id> \
  --vpc-id <spoke-vpc-b-id> --subnet-ids <spoke-b-subnet-id>
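The three attachment commands differ only by ID, so a loop tidies them up. All angle-bracket values are placeholders from your own deployment:

```shell
#!/usr/bin/env bash
# Sketch: attach the transit VPC and both spokes to the TGW in one pass.
set -euo pipefail

TGW_ID="<tgw-id>"
declare -A ATTACH=(
  ["<transit-vpc-id>"]="<public-subnet-id>"
  ["<spoke-vpc-a-id>"]="<spoke-a-subnet-id>"
  ["<spoke-vpc-b-id>"]="<spoke-b-subnet-id>"
)

for vpc in "${!ATTACH[@]}"; do
  aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id "$TGW_ID" \
    --vpc-id "$vpc" --subnet-ids "${ATTACH[$vpc]}"
done
```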

Step 4: Update route tables

The spoke VPCs need a route to the on-prem prefix (192.168.0.0/16) pointing to the Transit Gateway. The transit VPC needs routes to the spoke VPC CIDRs pointing to TGW as well.

With TGW’s default route table propagation enabled, all three VPC CIDRs (10.100.0.0/16, 10.200.0.0/16, 10.201.0.0/16) are automatically available via TGW. For on-prem reachability, add a static route in the TGW route table:

aws ec2 create-transit-gateway-route --transit-gateway-route-table-id <tgw-rtb-id> \
  --destination-cidr-block 192.168.0.0/16 \
  --transit-gateway-attachment-id <transit-vpc-attachment-id>

Step 5: Test end-to-end

From an instance in Spoke VPC-A, you should be able to ping your on-prem lab addresses via the path: Spoke VPC-A → TGW → Transit VPC → C8000V → IPsec Tunnel → On-prem router → Lab network.

This is the same traffic flow used in production Cisco SD-WAN Cloud OnRamp deployments. For a detailed comparison of how this maps to Azure and GCP, see our multi-cloud networking comparison.

How Do You Optimize Costs for This Lab?

Running a cloud lab doesn’t have to drain your wallet. According to AWS pricing (2026), here are the real numbers:

Resource                              Running Cost                Stopped Cost
t3.medium (C8000V)                    $0.042/hour (~$1/day)       $0/hour
EBS gp3 (8 GB root)                   $0.64/month                 $0.64/month
Elastic IP (attached)                 $0.005/hour                 $0.005/hour
Data transfer (first 100 GB/month)    Free outbound to internet   n/a
Transit Gateway attachment            $0.05/hour per attachment   $0.05/hour until deleted

Cost-saving strategies:

  1. Stop when not labbing — A stopped instance costs nothing for compute. Only the EBS volume and Elastic IP continue billing.
  2. Use Spot Instances — For non-persistent lab sessions, Spot pricing can reduce the C8000V compute cost by 60-90%. Be aware that AWS can reclaim a Spot Instance with only a two-minute warning.
  3. Schedule with Lambda — Create a CloudWatch Events rule to stop the instance at midnight and start it in the morning.
  4. Use BYOL — Pay-As-You-Go adds Cisco licensing fees on top of EC2 costs. BYOL with a free Smart Account evaluation license eliminates this.
  5. Tear down TGW when not needed — Transit Gateway charges per attachment per hour. Delete spoke VPC attachments after each lab session.
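If you'd rather not stand up Lambda for strategy 3, a plain cron job on any always-on host with AWS CLI credentials does the same thing. The instance ID below is a placeholder:

```shell
# crontab entries: stop the C8000V at midnight, start it at 08:00 (host local time)
0 0 * * * aws ec2 stop-instances  --instance-ids i-0123456789abcdef0
0 8 * * * aws ec2 start-instances --instance-ids i-0123456789abcdef0
```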

A typical weekend lab session (8 hours Saturday + 8 hours Sunday) costs roughly a dollar for compute and the public IP, provided you delete Transit Gateway attachments between sessions (each attachment adds $0.05/hour). That’s cheaper than a coffee.
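That estimate can be sanity-checked with the on-demand rates from the table above. This is a rough sketch; real totals vary by region and by how long attachments stay up:

```shell
#!/usr/bin/env bash
# Back-of-envelope weekend cost from the on-demand rates quoted above.
HOURS=16      # 8h Saturday + 8h Sunday
EC2=0.042     # t3.medium, per hour
EIP=0.005     # public IPv4, per hour (billed while allocated)
TGW=0.05      # per TGW attachment, per hour

awk -v h="$HOURS" -v ec2="$EC2" -v eip="$EIP" -v tgw="$TGW" 'BEGIN {
  base = h * (ec2 + eip)
  printf "compute + public IP: $%.2f\n", base                        # $0.75
  printf "if 3 TGW attachments stay up: $%.2f\n", base + 3 * h * tgw # $3.15
}'
```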

What Troubleshooting Steps Should You Know?

These are the most common issues I’ve hit building this lab, with fixes:

IPsec tunnel won’t establish:

  • Verify security group allows UDP 500, UDP 4500, and ESP (protocol 50) from your public IP
  • Check that your home router isn’t blocking outbound ESP — some ISP routers do. Use NAT-T (UDP 4500) if ESP is blocked
  • Verify the Elastic IP is correctly associated to GigabitEthernet1

BGP session stuck in Active state:

  • Confirm the tunnel interface is up/up first (show interface Tunnel0)
  • Check that the BGP neighbor address matches the remote tunnel IP exactly
  • Verify no ACL is blocking TCP 179 on the tunnel interface

Can’t reach instances in spoke VPCs from on-prem:

  • Confirm source/destination check is disabled on the C8000V instance
  • Verify the spoke VPC route tables have a route to 192.168.0.0/16 via TGW
  • Check that the TGW route table has a static route for 192.168.0.0/16 pointing to the transit VPC attachment
  • Verify security groups on spoke instances allow ICMP from 192.168.0.0/16

For lab environment options to run the on-prem side, see our comparison of CML vs INE vs GNS3.

Frequently Asked Questions

How much does it cost to run a Cisco Catalyst 8000V lab in AWS?

A t3.medium instance with BYOL licensing costs approximately $0.042/hour, or about $1/day for 24-hour operation. Stop the instance when not labbing to reduce costs to near zero — you only pay for EBS storage at approximately $0.08/GB/month. A 16-hour weekend lab session runs roughly a dollar in compute.

Can I use the free Cisco CSR 1000v instead of Catalyst 8000V?

Cisco has transitioned from CSR 1000v to Catalyst 8000V (C8000V). The C8000V runs the same IOS-XE code and supports the same features. According to Cisco’s AWS deployment guide, both BYOL and Pay-As-You-Go AMIs are available on the AWS Marketplace. The BYOL AMI on t3.medium is the most cost-effective for lab use.

What AWS instance type should I use for Cisco Catalyst 8000V?

For lab purposes, t3.medium (2 vCPU, 4 GB RAM) is sufficient and the minimum supported type. According to Cisco’s ordering guide, supported types include t3.medium, c5.large through c5.9xlarge, and c5n.large through c5n.9xlarge. Use c5 or c5n instances for production throughput testing.

Does this lab help prepare for the CCIE Enterprise Infrastructure exam?

Yes. The CCIE EI v1.1 blueprint includes SD-WAN overlay to cloud, BGP peering design, and hybrid network architecture. This lab provides hands-on experience with IPsec VPN, eBGP, Transit Gateway hub-spoke topology, and cloud networking fundamentals — all directly testable concepts. For overall CCIE preparation strategy, see our first-attempt pass guide.

Can I extend this lab to include Cisco SD-WAN Cloud OnRamp?

Yes. Once the Catalyst 8000V is running in AWS, you can register it with vManage as a cEdge router and enable Cloud OnRamp for Multicloud. This extends the lab into a full SD-WAN fabric-to-cloud deployment, which is the architecture covered in Cisco Live 2026 session BRKENT-2283.


Ready to fast-track your CCIE journey and master hybrid cloud networking? Contact us on Telegram @phil66xx for a free assessment.