Cisco ASR 9000 System Architecture (BRKARC-2003)
Javed Asghar, Technical Marketing Architect (Speaker)
Dennis Cai, Distinguished Engineer, Technical Marketing (Question Manager), CCIE #6621, R&S, Security
Swiss Army Knife Built for the Edge Routing World: Cisco ASR9000 Market Roles

1. High-End Aggregation & Transport (Carrier Ethernet, Cable/MSO, Mobile Backhaul, Multiservice Edge)
   1. Mobile Backhaul
   2. L2/Metro Aggregation
   3. CMTS Aggregation
   4. Video Distribution & Services

2. DC Gateway Router (Web/OTT, DC Gateway, Broadband Gateway)
   1. DC Interconnect
   2. DC WAN Edge
   3. Web/OTT

3. Services Router (Large Enterprise WAN)
   1. Business Services
   2. Residential Broadband
   3. Converged Edge/Core
   4. Enterprise WAN
Other ASR9000 or Cisco IOS XR Sessions … you might be interested in
• BRKSPG-2904: ASR-9000/IOS-XR Understanding Forwarding, Troubleshooting the System and XR Operations
• TECSPG-3001: Advanced ASR 9000 Operation and Troubleshooting
• BRKSPG-2202: Deploying Carrier Ethernet Services on ASR9000
• BRKARC-2024: The Cisco ASR9000 nV Technology and Deployment
• BRKMPL-2333: E-VPN & PBB-EVPN: the Next Generation of MPLS-based L2VPN
• BRKARC-3003: ASR 9000 New Scale Features: FlexibleCLI (Configuration Groups) & Scale ACLs
• BRKSPG-3334: Advanced CG NAT44 and IOS XR Deployment Experience
Agenda
• ASR9000 Hardware Overview
• Line Card System Architecture
  • Typhoon, Trident, SIP-700, VSM
  • Tomahawk
  • Interface QoS Capability
• Switch Fabric Architecture
  • Bandwidth and Redundancy Overview
  • Fabric QoS: Virtual Output Queuing
• Packet Flow and Control Plane Architecture
  • Unicast
  • Multicast
  • L2
• IOS-XR Overview
  • 32-bit and 64-bit OS
  • IOS-XRv 9000 Virtual Forwarder
  • Netconf/Yang
ASR9000 Hardware Overview
ASR9k Chassis Portfolio Offers Maximum Flexibility: Physical and Virtual
• Compact & Powerful Access/Aggregation: small footprint with full IOS-XR feature capabilities for distributed environments (BNG, pre-aggregation, etc.)
• Flexible Service Edge: optimized for ESE and MSE with high multi-dimensional scale for medium to large sites
• High Density Service Edge and Core: scalable, ultra-high-density service routers for large, high-growth sites

Chassis            Form Factor     Capacity
ASR 9001 / 9001-S  Fixed 2RU       120 Gbps
ASR 9904           2 LC            2.4 Tbps
ASR 9006           4 LC / 10RU     3.2 Tbps
ASR 9010           8 LC / 21RU     6.4 Tbps
ASR 9910           8 LC / 21RU     24 Tbps
ASR 9912           10 LC / 30RU    30 Tbps
ASR 9922           20 LC / 44RU    60 Tbps

Also in the portfolio: nV satellites (ASR 9000v, ASR 901/903) and the x86-based IOS XRv virtual router. Target roles: MSE, E-MSE, Peering, P/PE, CE, Mobility.
Edge Linecard Silicon Slice Evolution

Past: Trident class (120G/slot)
• Trident NPU: 90nm, 15 Gbps
• Octopus: 130nm, 60 Gbps
• Santa Cruz: 130nm, 90 Gbps
• PowerPC dual-core 1.2 GHz CPU

Now: Typhoon class (360G/slot)
• Typhoon NPU: 55nm, 60 Gbps
• Skytrain: 65nm, 60 Gbps
• Sacramento: 65nm, 220 Gbps
• PowerPC quad-core 1.5 GHz CPU

Future: Tomahawk class (800G/slot)
• Tomahawk NPU: 28nm, 240 Gbps
• Tigershark: 28nm, 200 Gbps
• SM15: 28nm, 1.20 Tbps
• x86 6-core 2 GHz CPU
ASR 9001 Compact Chassis
• 2RU, side-to-side airflow; front-to-back airflow option with airflow baffles (4RU total, requires V2 fan)
• Shipping since IOS-XR 4.2.1 (May 2012)
• Two sub-slots with MPAs; supported MPAs: 20x1GE, 2x10GE, 4x10GE, 1x40GE
• Fixed 4x10G SFP+ ports
• Redundant (AC or DC) power supplies, field replaceable
• Fan tray, field replaceable
ASR-9001 System Architecture Overview
Diagram summary: two on-board 2x10GE SFP+ port pairs and two MPA bays (2x10GE/4x10GE, 20xGE, 1x40GE) each feed a Typhoon NPU; each Typhoon connects through an FIA to a single switch fabric ASIC. The RP CPU and LC CPU communicate over an internal EOBC.
The "RP CPU" and "linecard CPU" use the same architecture as the larger systems; a single crossbar ASIC suffices simply because of the smaller bandwidth requirement.
ASR 9001-S Compact Chassis
• 2RU, side-to-side airflow; front-to-back airflow option with airflow baffles (4RU total, requires V2 fan)
• Shipping since IOS-XR 4.3.1 (May 2013)
• Two sub-slots with MPAs; supported MPAs: 20x1GE, 2x10GE, 4x10GE, 1x40GE
• Pay as you grow: low entry cost, SW-license upgradable to a full 9001
• 60G of bandwidth is disabled in software; a SW license enables it
ASR-9001S System Architecture Overview
Diagram summary: identical to the ASR 9001 (two on-board 2x10GE SFP+ port pairs and two MPA bays, each feeding a Typhoon NPU and FIA into a single switch fabric ASIC, with RP CPU and LC CPU on an internal EOBC), except that one Typhoon slice is disabled by default and upgradable via license. As on the 9001, the RP and linecard CPUs share the larger systems' architecture, with a single crossbar ASIC due to the smaller bandwidth requirement.
Cisco ASR 9006 Overview
Side-to-back airflow, 10RU; front-to-back airflow with airflow baffles, 13RU, vertical

Feature            Description
Total Capacity     3.68T
Capacity per Slot  920G
Slots              6 slots: 4 line cards and 2 RSPs
Rack size          10RU
Power              1 power shelf, 4 power modules; 2.1 kW DC / 3.0 kW AC supplies
Fan                Side-to-side airflow (optional baffle for front-to-back); 2 fan trays, FRU
RSPs               Integrated fabric, 1+1 redundancy
Line cards         Tomahawk, Typhoon, VSM, SIP-700 & SPAs
Cisco ASR 9010 Overview

Feature            Description
Total Capacity     7.36T
Capacity per Slot  920G
Slots              10 slots: 8 line cards and 2 RSPs
Rack size          21RU
Power              2 power trays; 2.1 kW DC / 3.0 kW AC supplies, or 4.4 kW DC / 6.0 kW AC supplies
Fan                Front-to-back airflow; 2 fan trays, FRU
RSPs               Integrated fabric, 1+1 redundancy
Line cards         Tomahawk, Typhoon, VSM, SIP-700 & SPAs
Cisco ASR 9904 Overview
Front-to-back airflow with airflow baffles, 10RU

Feature            Description
Total Capacity     6T
Capacity per Slot  3T
Slots              4 slots: 2 line cards and 2 RSPs
Rack size          6RU
Power              1 power tray, 4 power modules; 2.1 kW DC / 3.0 kW AC supplies
Fan                Side-to-side airflow (front-to-back optional); 1 fan tray, FRU
RSPs               Integrated fabric, 1+1 redundancy
Line cards         Tomahawk, Typhoon, SIP-700, VSM
SW                 XR 5.1.0 (September 2013)
Cisco ASR 9912 Overview

Features           Description
Total Capacity     30T
Capacity per Slot  3T
Slots              10-slot chassis
Rack Size          30RU
Power              4 power trays; 2.1 kW DC / 3.0 kW AC supplies, or 4.4 kW DC / 6.0 kW AC supplies
Fan                2 fan trays, front-to-back airflow
RP                 1+1 RP redundancy
Fabric (SFC)       6+1 fabric redundancy
SW                 XR 4.3.2, shipping
Cisco ASR 9922 Overview

Features           Description
Total Capacity     60T
Capacity per Slot  3T
Slots              20 line cards, 2 RPs, 7 SFCs
Rack Size          44RU (full rack)
Power              4 power trays; 2.1 kW DC / 3.0 kW AC supplies, or 4.4 kW DC / 6.0 kW AC supplies
Fan                4 fan trays, front-to-back airflow
RP                 1+1 RP redundancy
Fabric (SFC)       6+1 fabric redundancy
Line cards         Tomahawk, Typhoon, VSM
Cisco ASR 9910 Overview

Feature            Description
Total Capacity     24T
Capacity per Slot  3T
Slots              10 slots: 8 line cards and 2 RSPs
Rack size          21RU
Power              2 power trays; 4.4 kW DC / 6.0 kW AC supplies
Fan                Front-to-back airflow; 2 fan trays, FRU
RSPs               Integrated fabric, 1+1 redundancy
Fabric Cards       5 fabric cards on the rear of the chassis for additional capacity; 230G per FC at FCS; up to 6+1 redundancy using the RSPs' integrated fabric
Line cards         Tomahawk, Typhoon, VSM, SIP-700
SW                 Target IOS XR 6.1 (Q1CY16)

Note: this information may change until FCS.
Cisco ASR 9910 Details
Greater capacity, greater flexibility: start with 2 RSPs, then add fabric cards to scale beyond 460G per slot. The ASR 9910 ships with sufficient power and cooling to support high-density 100G line cards, and arrives with IOS XR 6.1 at feature parity with the rest of the ASR 9000 family.
Chassis layout: 2 fan trays, 8 line card slots, 2 RSPs, 5 fabric cards, 2 power trays.
This information may change until FCS.
ASR-9910 Mid-plane Architecture
A new mid-plane architecture for greater flexibility: line cards LC0-LC7 (1G/10G/40G/100G) connect across the mid-plane to RSP0 and RSP1 (230G of fabric each) plus fabric cards 0-4 (230G each), with fabric control on the line side. The chassis carries two fan trays (FT0/FT1) and two power trays.
ASR 9000 Route Switch Processor
Common to ASR 9904, ASR 9006, and ASR 9010. Shares internal hardware with the RP for feature parity on IOS XR. Integrated multi-stage switch fabric, TR and SE memory options, time and synchronization support.

                 RSP440                RSP880
Availability     Q1CY12                Q1CY15
Processor        Four cores, 2.1 GHz   Eight cores, 2.2 GHz
NPU Bandwidth    60G                   240G
Fabric Planes    5                     7
Fabric Capacity  440G                  880G
Memory           6G (TR) / 12G (SE)    16G (TR) / 32G (SE)
SSD              2x 16GB slim SATA     2x 32GB slim SATA
LC Support       Typhoon/Trident       Tomahawk/Typhoon
ASR 9900 Route Processor
Common to ASR 9912 and ASR 9922. Built for massive control plane scale: ultra-high-speed control plane with a multi-core Intel CPU, huge scale through high-memory options, time and synchronization support.

                 RP1                   RP2
Availability     Q1CY12                Q1CY15
Processor        Four cores, 2.1 GHz   Eight cores, 2.2 GHz
NPU Bandwidth    60G                   240G
Fabric Planes    5                     7
Memory           6G (TR) / 12G (SE)    16G (TR) / 32G (SE)
SSD              2x 16GB slim SATA     2x 32GB slim SATA
LC Support       Typhoon               Tomahawk/Typhoon
Route Switch Processors and Route Processors
RSPs are used in the ASR9904/9006/9010; RPs in the ASR9922/9912.

                  RSP440 (9904/9006/9010)    RP1 (9922/9912)            RSP880 (9904/9006/9010)     RP2 (9922/9912)
Generation        2nd-gen RP & fabric ASIC   2nd-gen RP & fabric ASIC   3rd-gen RP & fabric ASIC    3rd-gen RP & fabric ASIC
Processors        Intel x86, 4 cores 2.27GHz Intel x86, 4 cores 2.27GHz Ivy Bridge EP, 8 cores 2GHz Ivy Bridge EP, 8 cores 2GHz
RAM               -TR: 6GB / -SE: 12GB       -TR: 6GB / -SE: 12GB       -TR: 16GB / -SE: 32GB       -TR: 16GB / -SE: 32GB
SSD               2x 16GB slim SATA          2x 16GB slim SATA          2x 32GB slim SATA           2x 32GB slim SATA
nV EOBC ports     2x 1G/10G SFP+             2x 1G/10G SFP+             4x 1/10G SFP+               4x 1/10G SFP+
Punt BW           10GE                       10GE                       40GE                        40GE
Switch fabric BW  220G+220G (9006/9010),     660G+110G                  450G+450G (9006/9010),      1.61T+230G
                  385G+385G (9904); fabric   (separate fabric card)     800G+800G (9904)            (separate fabric card)
                  integrated on RSP
ASR 9900 Switch Fabric Cards (ASR 9922 / ASR 9912)
Common to ASR 9912 and ASR 9922: 7 fabric card slots; decoupled, multi-stage switch fabric hardware; true HW separation between control and data plane; add bandwidth per slot easily and independently; similar architecture to CRS.

                             SFC110                 SFC2
Availability                 Q1CY12                 Q1CY15
Fabric Capacity per SFC      110G                   230G
Fabric Capacity per LC Slot  660G N+1 / 770G N+0    1.38T N+1 / 1.61T N+0
Fabric Redundancy            N+1                    N+1
LC Support                   Typhoon                Tomahawk/Typhoon

In-service upgrade is supported.
New ASR-9922 and ASR-9006 V2 Fans
• ASR 9922-FAN-V2: IOS XR 5.2.2; ASR 9006-FAN-V2: IOS XR 5.3.0
• The V2 fan provides higher cooling capacity for ultra-high-density cards: motor and blade shape optimized to produce higher CFM, new material capable of dissipating more heat, and in-service upgrade
• PAYG power: N+1 redundancy for DC, N+N redundancy for AC
• Typhoon and Tomahawk LCs are supported with V2; plan for the future beyond 1T per slot with V3
Line Card   Used BW      Consumption at 27C   Consumption at 40C
Typhoon     360G / slot  2.8 W/G              3.5 W/G
Tomahawk    1T / slot    1.6 W/G              1.9 W/G

Power supply  AC      DC      Chassis
V2            3.0 kW  2.1 kW  ASR9006, ASR9010, ASR9904, ASR9912, ASR9922
V3            6.0 kW  4.4 kW  ASR9010, ASR9912, ASR9922
Benefit of Power Reductions for 100G
4 Tbps, Tomahawk vs. Typhoon, in a provider-owned European data center:
• At 4x the 100G density of Typhoon, Tomahawk saves about $3,460 per port per year in power cost, or $1,384,000 over 10 years.
• Assumptions: 9922 with 20x 2x100G line cards vs. 9912 with 5x 8x100G; 8-year facility amortization with 1.7 PUE; 40C max; France power (USD $0.19 per kWh); N:N power redundancy.
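The slide's roll-up can be sanity-checked with a quick back-of-the-envelope calculation using its own inputs (the energy-cost helper below is an illustrative formula, not the full facility model behind the slide):

```python
# Back-of-the-envelope check of the power-savings roll-up, using the
# slide's own figures: 40x 100G ports (9922 with 20x 2x100G line cards),
# $3,460 saved per port per year, over a 10-year horizon.
PORTS = 40
SAVINGS_PER_PORT_PER_YEAR = 3460   # USD, from the slide
YEARS = 10

total_savings = PORTS * SAVINGS_PER_PORT_PER_YEAR * YEARS

def annual_energy_cost_usd(watts, pue=1.7, usd_per_kwh=0.19):
    """Illustrative $/year for a given draw under the slide's assumptions
    (1.7 PUE, $0.19/kWh); excludes redundancy and chassis commons."""
    return watts * pue * 8760 / 1000 * usd_per_kwh

print(total_savings)  # 1384000
```

Multiplying out confirms the headline number: 40 ports x $3,460 x 10 years = $1,384,000.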
Line Card System Architecture: 1. Typhoon, Trident, SIP-700, VSM; 2. Tomahawk; 3. Interface QoS
Modular SPA Linecard (SIP-700)
• 20 Gbps, feature-rich, high-scale, low-speed interfaces
• Quality of service: 128K queues, 128K policers, H-QoS, color policing
• Scalability: distributed control and data plane; 20 Gbps, 4 SPA bays; L3 interface, route, and session-protocol scale sized for MSE needs
• High availability: IC stateful switchover capability, MR-APS, IOS-XR base for high scale and reliability
• Powerful & flexible QFP processor: flexible uCode architecture for feature richness
• L2 + L3 services: FR, PPP, HDLC, MLPPP, LFI; L3VPN, MPLS, Netflow, 6PE/6VPE
• SPA support: ChOC-3/12/48 (STM1/4/16); POS OC-3/STM1, OC-12/STM4, OC-48/STM16, OC-192/STM64; ChT1/E1, ChT3/E3, CEoPs, ATM
ASR 9000 Ethernet Line Card Overview
• First-generation LCs (-L, -B, -E) use the Trident NPU: 15 Gbps, ~15 Mpps, bi-directional. Cards: A9K-40G, A9K-4T, A9K-8T/4, A9K-2T20G, A9K-8T, A9K-16T/8.
• Second-generation LCs (-TR, -SE) use the Typhoon NPU: 60 Gbps, ~45 Mpps, bi-directional. Cards: A9K-MOD160, A9K-MOD80, A9K-24x10GE, A9K-2x100GE (A9K-1x100G), A9K-36x10GE. MPAs: 20x1GE, 2x10GE, 4x10GE, 8x10GE, 1x40GE, 2x40GE.
• Suffixes: -L low queue, -B medium queue, -E large queue, -TR transport optimized, -SE service edge optimized.
ASR 9000 80-360G "Typhoon Class" Linecards: Hyper Intelligence & Service Scalability
Port options: 2x100GE, 24x10GE, 36x10GE, and modular 80G & 160G.
• High control plane scale: 4M IPv4 or 2M IPv6 FIB per line card; 2M MACs learned in hardware
• High performance: line-rate performance on all line cards
• End-to-end internal system QoS; efficient multicast replication
• Micro-CPU based forwarding chip
• Feature flexibility, future-proof: programmable forwarding tables
Network Processor Architecture Details
The NPU complex pairs a multi-core forwarding chip with four external memories: TCAM, lookup memory, stats memory, and frame memory.
• TCAM: VLAN tag, QoS, and ACL classification
• Stats memory: interface statistics, forwarding statistics, etc.
• Frame memory: packet buffers and queues
• Lookup memory: forwarding tables (FIB, MAC, adjacencies)
• TR vs. SE: the TCAM, frame, and stats memories differ in size, giving different per-LC QoS, ACL, and logical-interface scale; the lookup memory is the same size, so system-wide scale is identical and mixing different LC variants does not impact it.
(-TR: transport optimized, -SE: service edge optimized)
MAC Learning and Sync
Hardware-based MAC learning runs at ~4 Mpps per NP:
1. The NP learns the source MAC address in hardware (~4 Mpps).
2. The NP floods a MAC notification message (in the data plane) to all other NPs in the system to sync the address system-wide. Both MAC notification and MAC sync are done entirely in hardware.
Diagram summary: a data packet arriving at an NP on one line card triggers a notification that is replicated through the FIAs and switch fabric to every other NP (and the LC/RP CPUs) in the system.
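The two-step flow above can be illustrated with a toy model (this is not Cisco code; the classes and field names are hypothetical stand-ins for the hardware tables):

```python
# Toy model of hardware MAC learning and system-wide sync: an NP learns a
# source MAC locally (step 1), then floods a notification so every other
# NP installs the same entry (step 2).
class NP:
    def __init__(self, name):
        self.name = name
        self.mac_table = {}                    # MAC -> (learning NP, port)

    def receive_frame(self, src_mac, port, all_nps):
        if src_mac not in self.mac_table:
            self.mac_table[src_mac] = (self.name, port)   # step 1: HW learn
            for np in all_nps:                            # step 2: MAC notification
                if np is not self:
                    np.mac_table[src_mac] = (self.name, port)

nps = [NP(f"NP{i}") for i in range(4)]
nps[0].receive_frame("00:11:22:33:44:55", port=7, all_nps=nps)
print(all("00:11:22:33:44:55" in np.mac_table for np in nps))  # True
```

In the real system the notification travels through the FIAs and switch fabric as a data-plane message, so no CPU is involved in either step.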
Typhoon LC: 24x10G Ports Architecture
• Eight groups of 3x10GE SFP+ ports; each group (30G) is served by one Typhoon NPU
• Typhoon NPU bandwidth and performance: 120G and 90 Mpps uni-directional, i.e. 60G and 45 Mpps full duplex (each direction, ingress/egress)
• Each pair of NPUs (2x 30G) connects to an FIA at 60G; each FIA connects at 90G into the LC-local fabric complex (next-gen switch fabric ASIC)
• The LC fabric connects via 8x55G links to the RSP440 switch fabrics on RSP0 and RSP1
• Non-blocking everywhere: there is no congestion point in the path
Typhoon Line Card Architectures 36x10G Port LC
2x100G Port LC
MOD80 LC
MOD160 LC
Trident vs. Typhoon: Major Feature Scale

Metric                      Trident                        Typhoon
FIB routes (v4/v6)          1.3M / 650K*                   4M / 2M
Multicast FIB               32K                            128K
MAC addresses               512K*                          2M
L3 VRFs                     4K                             8K
Bridge domains / VFI        8K                             64K
PW                          64K                            128K
L3 subif / LC               4K                             8K (TR) / 20K (SE)
L2 interfaces (EFPs) / LC   4K (-L) / 16K (-B) / 32K (-E)  16K (TR) / 64K (SE)
MPLS labels                 256K                           1M
Queue scale (per NP)        64K egress + 32K ingress       192K egress + 64K ingress
Policer scale (per NP)      32K / 64K (-L,-B / -E)         32K / 256K (-TR / -SE)

* A scale-profile configuration is required to reach maximum FIB or MAC scale on the Trident line card. On Typhoon, FIB and MAC have dedicated memory.
ASR9k Full-Mesh IPv4 Performance
Setup: ASR9010 with 2x RSP-440 and 8x Typhoon 24x10G line cards = 192 ports, IXIA Tx/Rx on every port; a full mesh of flows = ~72,000 flows (36K full-duplex flows).
A9k-24x10GE L3 Multicast Scale Performance: Setup and Profile
Scale profile:
1. Unidirectional multicast
2. 2K (S,G) mroutes
3. 24 OIFs per (S,G) mroute
Setup: an ASR 9010 with two A9k-24x10GE cards. An IXIA multicast source feeds an L3 sub-interface source-facing 10G port; IXIA multicast receivers (IGMP joins) sit on L3 sub-interface receiver-facing ports (ports 0 through 23).
A9k-24x10GE L3 Multicast Scale Throughput Performance Results
Chart summary (scale: 2K (S,G), 24 OIFs per (S,G)): aggregate Rx throughput in Gbps and Mfps plotted against frame sizes from 64B to 9,000B. Throughput reaches the full 240G aggregate at larger frame sizes, with forwarding rates ranging from ~357 Mfps at 64B frames down to ~3 Mfps at 9,000B frames.
Virtualized Services Module (VSM) Overview
The VSM hosts service VMs (Service-1 through Service-4 in the diagram) on a VMM OS/hypervisor inside the ASR 9000.
• Data-center compute: 4x Intel 10-core x86 CPUs, plus 2 Typhoon NPUs for hardware network processing
• 120 Gbps of raw processing throughput
• HW acceleration: 40 Gbps of hardware-assisted crypto throughput; hardware assist for regex matching
• Virtualization hypervisor (KVM); service VM life-cycle management integrated into IOS-XR
• Services: CGN (shipping), IPsec (shipping), SecGW (Q2/Q3 CY15), firewall (on radar), anti-DDoS* (Q1 CY15), 3rd-party apps*
VSM Details
Diagram summary: the virtualized services sub-module (x86 CPUs with DDR3 DIMMs and crypto engines) connects through a data-path switch to two Typhoon NPUs on the router infrastructure sub-module; the NPUs attach via the LC-local fabric complex to the RSP/RP switch fabrics (RSP/RP 0 and 1), with 10GE/FCoE SFP+ connectivity.
New Tomahawk LCs
• Tomahawk 8x100GE CPAK line card: LAN-only shipping in 5.3.0; LAN/WAN/OTN in April 2015 (5.3.1)
• Tomahawk 4x100GE CPAK line card: LAN/WAN/OTN in April 2015 (5.3.1)
• Flex 100G CPAK for investment protection: start with 10GE and upgrade to 100GE in the future
• CPAK options: 100GE LR4, 100GE SR10, 100GE ER4, 10x10GE (CPAK 10x10-LR), 2x40GE
Tomahawk LC: 8x100GE Architecture (Slice-Based)
Each slice pairs a PHY (CPAK: 100G, 40G, or 10G; MACsec Suite B+, G.709 OTN, clocking), a Tomahawk NP (L2/L3/L4 lookups, all VPN types, all feature processing, multicast replication, QoS, ACL, etc.), and a Tigershark FIA (VoQ buffering, fabric credits, multicast hashing, scheduling for fabric and egress port), connected at 240G per stage. The four FIAs feed a central crossbar (SM15 XBAR: FPOE, auto-spread, DWRR, RBH, replication), and an Ivy Bridge CPU complex runs the line card.
Per-slice power management saves 100-200W per disabled slice:

  PE1(admin-config)# hw-module power disable slice [0-3] location <node>
Tomahawk vs. Typhoon Hardware Capability

Metric                             Tomahawk-SE Scale                      Typhoon-SE Scale
MPLS labels                        1M                                     1M
MAC addresses                      2M at FCS (6M future)                  2M
FIB routes (v4/v6), search memory  4M(v4) / 2M(v6) in XR; 10M(v4) / 5M(v6) possible in a future release   4M (v4+v6)
VRFs                               16K                                    8K/LC
Bridge domains                     256K/LC                                64K/LC
TCAM                               80Mb                                   40Mb
Packet buffer                      12GB/NPU, 200ms                        2GB/NPU, 100ms
EFPs                               256K/LC (64K/NP)                       64K/LC
L3 subif (incl. BNG)               128K (64K/NP)                          20K/LC (system)
Egress queues                      1M/NPU (4M for 8x100GE)                256K/NPU
Policers                           512K/NPU                               256K

Note: actual available LC/NPU capacity depends on software release and is subject to change.
Tomahawk Line Card CPU
CPU subsystem: Intel Ivy Bridge EN, 6 cores at ~2 GHz with 3 DIMMs; integrated acceleration engine for crypto, pattern matching, and compression; 1x 32GB SSD.

HW Parameter     Typhoon LC CPU            Tomahawk-SE Linecard
Processor        P4040, 4 cores, 1.5 GHz   Ivy Bridge EN, 6 cores, 2 GHz
LC CPU memory    4GB                       24GB
Cache (per core)                           L1: 32KB instruction, L2: 256KB, L3: 2.5MB
Tomahawk Modular LCs: MOD400 & MOD200
• MPAs supported: 2x100GE CPAK, 1x100GE CPAK, 20x10GE SFP+, and all Typhoon MPAs
• Flexibility to use CPAK 10G/40G/100G optics on the 100G MPAs
• Two flavors for flexibility: MOD400 has 2 Tomahawk ASICs (FCS August 2015); MOD200 has 1 Tomahawk ASIC (FCS October 2015)
• Example loadout: MPA #1 with 2x 100G CPAKs, MPA #2 with 20x 10G SFP+

MOD200 Support Matrix
      Comb 1       Comb 2       Comb 3
EP0   2x100G MPA   20x10G MPA   1x100G or any Typhoon MPA
EP1   None         None         1x100G or any Typhoon MPA

MOD400 Support Matrix
      Comb 1       Comb 2       Comb 3       Comb 4
EP0   2x100G MPA   20x10G MPA   2x100G MPA   1x100G or any Typhoon MPA
EP1   2x100G MPA   20x10G MPA   20x10G MPA   2x100G MPA or 20x10G MPA
Tomahawk Line Card Architectures 8x100G/80x10G Port LC
MOD400 LC
4x100G/40x10G Port LC
MOD200 LC
12x100G Tomahawk LC “Skyhammer”
High Level Differences: Tomahawk 8x100G and 12x100G 8x100GE “Octane”
12x100GE “Skyhammer”
8-port 100GE with 4 NPU slices
12-port 100GE with 6 NPU slices
CPAK Optics
QSFP28 Optics
5-Fabrics (7-Fabric in roadmap for Nov/Dec 2015 FCS)
7-Fabric card
Compatible with all 99xx-series and 90xx-series chassis
Support only for 99xx-series chassis
Full L3VPN/L2VPN feature support
Only L3 features in Ph-1, L2 support in Ph-2
Has external TCAM for High Qos/ACL scale
Has no external TCAM . Will use only 5Mb internal TCAM
Total TCAM entries: 192K
Total TCAM entries: 32K
32K sub-interfaces
1K sub-interfaces
4M V4 Route, 2M V6 routes
Under discussion
Tomahawk TCAM Optimized Scale

Metric                       Tomahawk-TR Scale             Tomahawk-SE Scale          Skyhammer Scale
MPLS labels                  1M                            1M                         1M
MAC addresses                2M (6M future)                2M (6M future)             2M (6M future)
FIB routes (v4/v6)           4M(v4)/2M(v6) in XR (10M/5M future across the column)
Mroute/MFIB (v4/v6)          128K/32K (256K/64K future)
VRF                          8K (16K future)
Bridge domains               64K
TCAM (ACL space v4/v6)       1/4 of SE                     TCAM (80Mbit)              Internal TCAM (5Mbit)
Packet buffer                100ms (6G/NPU)                200ms (12G/NPU)            100ms
EFPs                         16K/LC                        128K/LC (64K/NP)           N/A
L3 subif (incl. BNG)         8K                            128K (64K/NP)              1K
IP/PPP/LAC sessions per LC   16K                           256K (64K/NP)              N/A
Egress queues                8 queues/port + nV Sat queues 1M/NPU (4M for 8x100GE)    8 queues/port
Policers                     32K/NPU                       512K/NPU                   -
QoS/ACL (v4/v6)              16K v4 or 4K v6 ACEs/LC       98K v4 / 16K v6 ACEs       24K v4 / 1.5K v6
ASR9k Optical Solution
Before: an IPoDWDM LC inside the router. Now: ASR9k + NCS2k optical shelf as an integrated nV optical system, where the 100GE line card's SR10 CFP/CPAK transceiver connects over a low-cost interconnect to an SR10 CXP/CPAK coherent transponder in the NCS2k.
400G IPoDWDM LC Overview (Tomahawk-Based LC), FCS Mid-CY15
• Tomahawk-based linecard with feature and scale parity with the other -TR and -SE Tomahawk cards
• 2x CFP2-based DWDM ports (100G, 200G); BPSK, QPSK, 16QAM modulation options
• 96 channels, ITU-T 50 GHz spacing; FlexSpectrum support
• HD FEC, SD FEC (3,000+ km without regeneration)
• 20x 10GE SFP+ ports (SR, LR, ZR, CWDM, DWDM)
• Flexible port options up to 400 Gbps total capacity: 2x 200G DWDM (CFP2), or 2x 100G DWDM (CFP2) + 20x 10G (SFP+), or 1x 100G + 1x 200G DWDM (CFP2) + 10x 10G (SFP+)
• OTN and pre-FEC FRR
• Target FCS: mid-CY15 (5.3.2) for 100G DWDM; 10G gray ports and 200G DWDM in a future release
400G IPoDWDM LC Specific HW Components
Diagram summary: two slices, each with a 200G-capable coherent CFP2, an Etna HD-FEC FPGA, a Tomahawk NP (with NP memory, VoQ, and TCAM), and an FIA feeding the SM15 crossbar. 20x 10GE SFP+ ports attach through X240 PHYs and a SerDes mux, and an Ivy Bridge CPU complex runs the card.
Tomahawk Per-Slice MACsec PHY Capability
• MACsec security standards compliance: IEEE 802.1AE-2006; IEEE 802.1AEbn-2011 (256-bit key); IEEE 802.1AEbw-2013 (extended packet numbering)
• Security suites supported: AES-GCM-128, 128-bit key (32-bit packet number); AES-GCM-256, 256-bit key (32-bit packet number); AES-GCM-XPN-128 and AES-GCM-XPN-256, providing an extended 64-bit packet number counter
• Unique security attributes per security association (SA): 10G port = 32 SAs; 40G port = 128 SAs; 100G port = 256 SAs
• Per-slice port combinations supported (CPAK): 2x100G, 20x10G, 4x40G, 1x100G + 10x10G, 2x40G + 10x10G, 2x40G + 1x100G
• All Tomahawk LC variants support MACsec: 8x100G, 4x100G, MOD-400, MOD-200
ASR9k MACsec Phase 1 (XR 5.4.0): SP/CE/DCI/Enterprise Use Cases
• Use case #1: link MACsec in an MPLS/IP topology (MACsec links on the CE-PE and PE-P hops)
• Use case #2: link MACsec over LAG members, with member-link inheritance
• Use case #3: CE port-mode MACsec over L2VPN; MKA runs between the ASR9k DCI/CE endpoints (port mode) across the L2VPN CE/WAN
• Use case #4: VLAN clear-tags MACsec over L2VPN; MKA between ASR9k DCI/CE endpoints with vlan clear-tags configured on each side of the L2VPN
Secure L2VPN as a Service: PW/EVPN/PBB (Any L2) Circuit Encryption Using MACsec
Diagram summary: a clear frame (DA | SA | VLAN | payload | FCS) enters EFP1 on 10G port 1. The frame is encrypted inside the PHY on port 2, which inserts the SecTAG and ICV (DA | SA | VLAN | SecTAG | payload | ICV | FCS). The NPU then adds the VC label for the encrypted EoMPLS PW, and the frame bypasses MACsec on port 3 toward the core. This uses a PHY secure-channel loopback plus clear tags (Cisco IPR).
Tomahawk MACsec Raw Performance (AES-GCM-256)

Raw performance (full duplex)
Per LC slice    200 Gbps
Per LC slot     800 Gbps
Per chassis     ASR9904 = 1.6 Tbps; ASR9006 = 3.2 Tbps; ASR9010 = 6.4 Tbps; ASR9912 = 8 Tbps; ASR9922 = 16 Tbps

Raw scale
Total MACsec ports per system   10G = 1,600; 40G = 320; 100G = 160
Total MACsec SAs per system     10G Tx/Rx SAs = 51,200; 40G Tx/Rx SAs = 40,960; 100G Tx/Rx SAs = 40,960
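The per-system SA totals follow directly from the per-port SA limits and the port counts; a quick cross-check:

```python
# Cross-check of the per-system MACsec SA totals quoted above:
# total SAs = ports-per-system x SAs-per-port, for each port speed.
SA_PER_PORT = {"10G": 32, "40G": 128, "100G": 256}
PORTS_PER_SYSTEM = {"10G": 1600, "40G": 320, "100G": 160}

total_sas = {speed: PORTS_PER_SYSTEM[speed] * SA_PER_PORT[speed]
             for speed in SA_PER_PORT}
print(total_sas)  # {'10G': 51200, '40G': 40960, '100G': 40960}
```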
Customer-Profile Tomahawk Performance
• Generally targets >3.3x the performance of a Typhoon system
• Customer benchmark: 2x 100GE per Tomahawk NPU vs. 6x 10GE per Typhoon NPU

Customer profiles tested:
• Web/OTT 1: LSR (top-label swap + bundle + TE tunnel + ingress marking + egress marking/queuing) and LER (MPLS imposition + ingress/egress QoS + ingress Netflow + ingress/egress bundle + RSVP-TE/FRR)
• Web/OTT 2: LSR (label swap + bundle + TE tunnel + ingress marking + egress marking/queuing) and LER (IP + bundle + TE tunnel + ingress marking + egress marking/queuing + ingress Netflow + recursive lookup)
• Tier-1 ISP/Peering 1: IPv4 & v6 recursive + in/out ACL + uRPF + ingress Netflow + ingress MAC accounting
• Tier-1 SP/Peering 2: IP-to-IP (recursive) + QoS (out) + ACL (in+out) + Netflow (1:5K, in) + bundle

Measured Typhoon line-rate (LR) frames at 6x10G, with the performance-increase ratio versus Tomahawk: 274B at 25.9 Mpps (3.8x), 282B at 13.3 Mpps (5.5x), 218B at 28.8 Mpps (3.8x), 235B at 11.9 Mpps (6.3x), 383B at 9.3 Mpps (4.2x), and iMix (>3.3x). Corresponding Tomahawk LR frames included 460B IP at 52 Mpps, 286B IP at 81.6 Mpps, and 160B IP at 126 Mpps.
Tomahawk Baseline Performance
• Generally targets 3.3x the performance of a Typhoon system
• Benchmark set: 2x 100GE per Tomahawk NPU vs. 6x 10GE per Typhoon NPU

NPU         LC BW   Pkt fwd capacity   IPv4 NR + Rx ACL      IPv6 NR + Rx ACL      IPv4 + in NF (1:1K)   IPv6 in policing      L2 xconnect + out shaping   VPLS + QoS
Tomahawk    240G    149 Mpps           128B Eth @ 169 Mpps   196B Eth @ 115 Mpps   225B Eth @ 102 Mpps   233B Eth @ 104 Mpps   123B Eth @ 174.6 Mpps       449B Eth @ 53.3 Mpps
Typhoon     60G     45 Mpps            178B Eth @ 37.8 Mpps  300B Eth @ 23.4 Mpps  205B Eth @ 33.4 Mpps  316B Eth @ 22.3 Mpps  148B Eth @ 44.6 Mpps        445B Eth @ 16.1 Mpps
Perf ratio  4x      3.3x               4.48x                 4.48x                 3.07x                 4.42x                 3.91x                       3.3x
Tomahawk Roadmap

Jan 2015 (XR 5.3.0):     8x100GE LAN PHY* (line card); RSP880, RP2, SFC2 (commons)
April 2015 (XR 5.3.1):   8x100GE OTN*, MOD400, 4x100GE OTN
August 2015 (XR 5.3.2):  400G IPoDWDM, MOD200
Nov 2015 (XR 6.0.0):     8x100G 7-fab
Mar 2016 (XR 6.1.0):     12x100G

Chassis support: most cards target all platforms (90xx, 9904, 9910, 9912, 9922); the 12x100G and 8x100G 7-fab target the 9910, 9912, and 9922.
* Oversubscribed on the 9010/9006 with a single RSP
Tomahawk Line Card Port QoS Overview
• 4-priority interface queuing: PQ1, PQ2, and PQ3 are strict priority; PQ4 covers the remaining queues (CBWFQs)
• Configurable QoS policies using the IOS XR MQC CLI
• A QoS policy is applied to an interface attachment point: main interface (L2 or L3), sub-interface, or sub-interface (EFP); physical, bundle, or logical*
• An MQC policy applied to a physical port takes effect for traffic across all sub-interfaces on that port, and will NOT coexist with an MQC policy on a sub-interface**; a given physical port carries either a port-based or a sub-interface-based policy
• The QoS policy is programmed into hardware microcode and the queueing ASIC on the line card NPU
* Some logical interfaces can take a QoS policy, for example PWHE and BVI.
** A simple flat QoS policy on the main interface can coexist with sub-interface-level H-QoS in the ingress direction.
Tomahawk Line Card Port QoS Overview (cont.)
• A dedicated queueing ASIC, the TM (traffic manager), sits alongside each NPU for QoS functions (NPU -> TM -> FIA -> switch fabric ASIC)
• -SE and -TR LC versions differ in queue buffer/memory size and number of queues
• 5-level hierarchical queuing/scheduling: port groups (L0), ports (L1), logical ports (L2), sub-interfaces (L3), classes (L4); egress & ingress, shaping and policing
• Three strict-priority scheduling levels with priority propagation
• Flexible & granular classification and marking: full layer 2, full layer 3/4 for IPv4, IPv6, and MPLS
5-Level Hierarchy QoS Queuing Overview
Hierarchy: L0 port group -> L1 port -> L2 logical port (e.g. EFP, S-VLAN or VLANs) -> L3 sub-interfaces (e.g. C-VLANs) -> L4 classes.
• The L0 and L1 schedulers are automated by the TM and are not user configurable
• L2, L3, and L4 can be flexibly mapped to a parent-level scheduler
• The hierarchy levels used are determined by how many nested levels the policy-map applied to a given sub-interface is configured with
• Up to 16 classes per child/grandchild level (L4)
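The level structure can be illustrated with a toy nesting (these are not the TM's data structures; the interface, shaper, and class names are hypothetical examples):

```python
# Illustrative mapping of an applied MQC policy onto the 5-level TM
# hierarchy: L0/L1 are fixed by hardware, while the policy occupies
# L2/L3 (logical port / sub-interface) and L4 (classes).
hierarchy = {
    "L0: port group": {
        "L1: port TenGigE0/0/0/0": {
            "L2: logical port (EFP / S-VLAN)": {
                "L3: sub-interface .100 (parent shaper)": {
                    "L4: classes": ["VOICE (priority)",
                                    "VIDEO (bandwidth percent)",
                                    "class-default"],
                }
            }
        }
    }
}

def levels(node):
    """Count nesting depth: 5 scheduler levels, L0 through L4."""
    if isinstance(node, dict):
        return 1 + max(levels(v) for v in node.values())
    return 0

print(levels(hierarchy))  # 5
```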
Queue/Scheduler Hierarchy MQC Capabilities
• L1: shape (PIR, or port shaper)
• L2: 2-parameter: shape, BRR weight (Bw or BwR)
• L3: 3-parameter: shape, bandwidth, BRR weight
• L4: 2-parameter: shape, BRR weight (Bw or BwR), priority (P1/P2/P3 plus normal-priority queues), WRED/queue-limit
QoS Classification Criteria (L2 interfaces/EFPs or L3 interfaces)
• L2 header fields: inner/outer CoS, inner/outer VLAN, DEI, source/destination MAC address*
• L3 header fields: outer EXP, DSCP/ToS, TTL, TCP flags, source/destination L4 ports, protocol, source/destination IPv4 address*
• Internal marking: discard-class, qos-group
Notes: match-all and match-any are both supported; a class takes at most 8 match statements, each with at most 8 match entries; not all header fields can be combined in one MQC policy-map (see details on the next slide).
Tomahawk 3-Level Policing
• Supports grandparent policing; only a 1R2C policer is allowed at the grandparent level
• Color-aware policing: 1R2C and MEF 2R3C (no IETF 2R3C). The color value can be taken from one of: outer CoS, EXP, precedence, qos-group, DSCP, or discard-class. Only one conform-color value per policer; all other values are treated as exceed-color.
• Conform-aware coupling between child and parent, and between parent and grandparent:
  - Child level: the incoming packet's color field value is used for color matching (parent or child remarking does not affect color matching)
  - Parent level: any child-level marking is effective when matching the color at the parent level
  - Grandparent level: supports only 1R2C without color awareness

Child           Parent           Grandparent      Result
No policer      No policer       Policer          Allowed
Policer         No policer       Policer          Allowed
No policer      Policer          Policer          Allowed
Policer         Policer          Policer          Allowed
Coupled child   Coupled parent   Policer          Allowed
Coupled child   No policer       Coupled parent   Rejected
Policer         Coupled child    Coupled parent   Rejected
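A minimal sketch of the color-aware 2R3C behavior described above, in the style of a MEF/RFC 2698 trTCM (illustrative only, not the NPU implementation; bucket names and the single-packet API are invented for the example):

```python
# Minimal color-aware two-rate three-color policer sketch: a committed
# bucket (CIR/CBS) and a peak bucket (PIR/EBS). A packet pre-colored
# yellow or red can never be "promoted" to a better color.
class TrTcm:
    def __init__(self, cir, cbs, pir, ebs):
        self.cir, self.pir = cir, pir      # token fill rates (bytes/sec)
        self.cbs, self.ebs = cbs, ebs      # bucket depths (bytes)
        self.tc, self.tp = cbs, ebs        # current token counts
        self.last = 0.0

    def color(self, size, now, pre_color="green"):
        dt, self.last = now - self.last, now
        self.tc = min(self.cbs, self.tc + self.cir * dt)   # refill committed
        self.tp = min(self.ebs, self.tp + self.pir * dt)   # refill peak
        if pre_color != "red" and self.tp >= size:
            if pre_color == "green" and self.tc >= size:
                self.tc -= size
                self.tp -= size
                return "green"
            self.tp -= size
            return "yellow"
        return "red"

p = TrTcm(cir=1000, cbs=1500, pir=2000, ebs=3000)
print(p.color(1000, now=0.0))  # "green": both buckets have tokens
```

Successive bursts then drain first the committed bucket (yielding yellow) and finally the peak bucket (yielding red), which is the three-color behavior the slide refers to.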
Shaping and Policing Overhead Accounting
• By default, the L2 frame length is used, without the preamble and IFG; this is the same for ingress and egress and for all QoS actions
• A QoS policy can be configured to take into account arbitrary L1 framing and L2 overhead
Ethernet frame layout (bytes): inter-frame gap 12 | preamble 7 | SFD 1 | DA 6 | SA 6 | VLAN 4 | length/type 2 | payload 46-1500 | FCS 4
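The effect of overhead accounting is easy to quantify: with small frames, the 20 bytes of preamble, SFD, and IFG are a large fraction of the wire time, so a shaper that counts only the L2 frame under-estimates wire usage. A quick sketch:

```python
# Why L1 overhead accounting matters for a shaper: by default the L2 frame
# length is counted without the 20B of preamble + SFD + IFG, so small
# frames consume noticeably more wire bandwidth than the shaper "sees".
PREAMBLE_SFD_IFG = 12 + 7 + 1   # bytes of L1 overhead per Ethernet frame

def wire_bps(frame_len_bytes, pps, extra_overhead=0):
    """Bits/sec consumed for a stream of frames at a given packet rate."""
    return (frame_len_bytes + extra_overhead) * 8 * pps

# 64B frames at 10,000 pps: the default view is 5.12 Mbps, but the wire
# actually carries 6.72 Mbps once the L1 overhead is counted.
default_view = wire_bps(64, 10_000)
with_l1 = wire_bps(64, 10_000, PREAMBLE_SFD_IFG)
print(default_view, with_l1)  # 5120000 6720000
```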
Switch Fabric Architecture
1. Bandwidth and Redundancy Overview 2. Fabric QoS – Virtual Output Queuing
Cisco ASR 9000 High-Level System Architecture "At a Glance"
Diagram summary: on each linecard, PHYs (EP0/EP1) feed NPs, which connect through FIAs and a SerDes crossbar toward the SM15 switch fabrics on the RSPs/RPs; each linecard and each RSP/RP has its own CPU.
• Scalable & flexible slice-based data plane
• Multi-stage (1, 2, 3) fabric operation
• Fully distributed & redundant system
• Switch fabric integrated on the RSP, or on separate fabric cards
ASR 9006/9010 Switch Fabric Overview
A 3-stage, non-blocking fabric with separate unicast and multicast crossbars. Each Typhoon linecard's FIAs connect through its local fabric stage (stage 1 on the ingress LC, stage 3 on the egress LC) to the active-active RSP440 fabric stage 2 (unicast crossbar plus multicast crossbar, with an arbiter on each RSP).
• Fabric frame format: super-frame
• Fabric load balancing: unicast is per-packet; multicast is per-flow
• Fabric bandwidth: 8x55 Gbps = 440 Gbps/slot with dual RSPs; 4x55 Gbps = 220 Gbps/slot with a single RSP
ASR9k End-to-End System QoS Overview
• End-to-end priority (P1, P2, 2x best-effort) propagation
• Unicast VoQs with backpressure; unicast and multicast separation
1. Ingress port QoS on the ingress NP
2. 4 VoQs per SFP (switch fabric port) virtual port in the entire system; up to 8K VoQs per Tomahawk-generation FIA (vs. 4K per Typhoon-generation FIA)
3. 4-priority egress destination queues (DQs) per SFP (VQI) virtual port, aggregated at the egress port rate
4. Egress port QoS on the egress NP
ASR9k Virtual Output Queuing (VoQ) System Architecture VoQ Components (Where are they)
Egress NPU Backpressure and VoQ in Action Result is No Head of Line Blocking (HOLB)
ASR9904 RSP880 Switch Fabric Architecture
Active/active 3-stage fabric, scaling to 1.6Tbps LCs
[Diagram: Tomahawk linecard FIAs connect to the SM15 fabric and arbiter on each RSP880 at 2x 5x115Gbps ~ 1.15Tbps per linecard]
• Fabric frame format: super-frame
• Fabric load balancing: unicast is per-packet, multicast is per-flow
• Fabric bandwidth: 10x115Gbps ~ 1.15Tbps/slot with dual RSP; 5x115Gbps ~ 575Gbps/slot with single RSP
ASR90xx – RSP880 and Mixed LC Operation
[Diagram: Typhoon and Tomahawk linecards sharing the 3-stage SM15 fabric and arbiter on the RSP880]
• Fabric frame format: super-frame
• Fabric load balancing: unicast is per-packet, multicast is per-flow
• Typhoon LC fabric bandwidth: 8x55Gbps = 440Gbps/slot with dual RSP; 4x55Gbps = 220Gbps/slot with single RSP
• Tomahawk LC fabric bandwidth: 8x115Gbps = 920Gbps/slot with dual RSP; 4x115Gbps = 460Gbps/slot with single RSP
ASR90xx – RSP440 and Mixed LC Operation
[Diagram: Typhoon and Tomahawk linecards behind the 3-stage fabric and arbiters on RSP0/RSP1 (RSP440)]
• Fabric frame format: super-frame
• Fabric load balancing: unicast is per-packet, multicast is per-flow
• Both Typhoon and Tomahawk LCs run at RSP440 fabric rates: 8x55Gbps = 440Gbps/slot with dual RSP; 4x55Gbps = 220Gbps/slot with single RSP
ASR99xx Switch Fabric Card (FC2) Overview
6+1 all-active 3-stage fabric planes, scaling to 1.6Tbps LCs
[Diagram: Tomahawk linecard FIAs connect to the SM15 fabric cards at 5x 2x115G (120G raw) bi-directional = 1.15Tbps per linecard]
• Fabric frame format: super-frame
• Fabric load balancing: unicast is per-packet, multicast is per-flow
• Fabric bandwidth: 10x115Gbps ~ 1.15Tbps/slot with 5x FC2; 8x115Gbps ~ 920Gbps/slot with 4x FC2
ASR9922/12 – SFC2 and Mixed-Generation LCs
When using 3rd-generation fabric cards (SFC v2):
[Diagram: Typhoon and Tomahawk linecards on the shared SM15 fabric planes]
• Tomahawk LC: (5-1)x2x115G bi-directional = 920Gbps (protected)
• Typhoon LC: (5-1)x2x55G bi-directional = 440Gbps (protected)
• Fabric lanes 6 & 7 are used only toward 3rd-generation, 7-fabric-plane linecards
ASR9K Tomahawk Fabric Redundancy and BW Allocation
Summary for 8x100G LCs with RSP880s/SFCv2:

  System            Fabric Redundancy   Per-Slot Fabric BW   Per 8x100G LC Data BW   QoS/Priority Protection
  ASR9010/ASR9006   Dual RSP880s        920G                 800G                    Y
  ASR9010/ASR9006   Single RSP880       460G                 400G                    Y
  ASR9904           Dual RSP880s        1.15T                800G                    Y
  ASR9904           Single RSP880       575G                 500G                    Y
  ASR9922/ASR9912   5x SFCv2            1.15T                800G                    Y
  ASR9922/ASR9912   4x SFCv2            920G                 800G                    Y
  ASR9922/ASR9912   3x SFCv2            690G                 600G                    Y
  ASR9922/ASR9912   2x SFCv2            460G                 400G                    Y
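The per-slot figures above are simply (number of active fabric links) x (115 Gbps per Tomahawk-generation link), which a quick check reproduces:

```python
# Per-slot fabric bandwidth = active fabric links x per-link rate.
# Tomahawk-generation fabric links run at 115 Gbps each.

LINK_GBPS = 115

def slot_bw(links):
    return links * LINK_GBPS

assert slot_bw(8) == 920    # ASR9010/9006 with dual RSP880 (8 links)
assert slot_bw(4) == 460    # ASR9010/9006 with single RSP880
assert slot_bw(10) == 1150  # ASR9904 dual RSP880 / ASR99xx with 5x SFCv2
assert slot_bw(5) == 575    # ASR9904 with single RSP880
```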
Packet Flow and Control Plane Architecture 1. Unicast 2. Multicast 3. L2
ASR9000 Fully Distributed Control Plane
• LPTS (Local Packet Transport Service): control-plane policing
[Diagram: control packets received on the linecard NPs pass the LPTS lookup and are punted through the FIA and punt switch/FPGA to the LC CPU, or across the switch fabric to the RP CPU]
• RP CPU: routing, MPLS, IGMP, PIM, HSRP/VRRP, etc.
• LC CPU: ARP, ICMP, BFD, NetFlow, OAM, etc.
Layer 3 Control Plane Overview
[Diagram: on the RP, LDP, RSVP-TE, and static labels feed the LSD, while BGP, OSPF, ISIS, and EIGRP feed the RIB; RIB, LSD, ARP, and AIB state is downloaded over the internal EOBC to the SW FIB, FIB, and adjacency tables on the LC CPU and NPU]
• AIB: Adjacency Information Base
• RIB: Routing Information Base
• FIB: Forwarding Information Base
• LSD: Label Switch Database
• Hierarchical FIB table structure enables prefix-independent convergence: TE FRR, IP FRR, BGP, link bundles
IOS-XR 2-Stage Lookup Packet Flow (Unicast)
• The ingress lookup yields the packet's egress port and applies ingress features
• The egress lookup performs the packet rewrite and applies egress features
• All ingress packets are switched by the central switch fabric
[Diagram: ingress Typhoon NP → ingress FIA → switch fabric → egress FIA → egress Typhoon NP → 100GE MAC/PHY]
L3 Unicast Forwarding Packet Flow (Simplified)

Ingress NPU (from wire):
• Packet classification (TCAM): rxIDB provides the source interface info
• L3 FIB lookup, key = (VRF-ID, IP DA): the L3FIB entry points to the rx-adj (next hop) and the SFP (switch fabric port of the egress NPU); Rx LAG hashing resolves a LAG ID to a member
• Packet rewrite: system headers are added, with ECH type L3_UNICAST
• ACL and QoS lookups happen in parallel

Egress NPU (to wire):
• ECH type L3_UNICAST tells the egress NPU to perform an L3FIB lookup: txIDB provides the destination interface info, tx-adj the next hop
• Tx LAG hashing, then LAG and packet rewrite to the wire
• ACL and QoS lookups happen before the rewrite
• The ECH type tells the egress NPU which type of lookup to execute
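The Rx/Tx LAG-hashing steps above can be sketched as a flow-tuple hash that pins each flow to one bundle member, preserving per-flow packet ordering. The CRC-based hash here is illustrative, not the actual NPU algorithm:

```python
# Hedged sketch of LAG member selection: hash the flow tuple, take it
# modulo the number of bundle members. Same tuple -> same member, so a
# flow's packets stay in order across the bundle.

import zlib

def lag_member(src_ip, dst_ip, proto, src_port, dst_port, num_members):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_members

# The same flow always maps to the same bundle member:
m1 = lag_member("10.0.0.1", "10.0.0.2", 6, 1024, 80, 4)
m2 = lag_member("10.0.0.1", "10.0.0.2", 6, 1024, 80, 4)
```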
L3 Multicast Control Plane
1. Incoming joins (IGMP, PIM, MLD, MLDP, etc.)
2. The NPU punts joins directly to the RP, bypassing the LC CPU
3. Each protocol updates multicast info (VRF, route, olist, etc.) to the MRIB
4. The MRIB downloads all multicast info (VRF, route, olist, etc.) to the MFIB on each LC
5. The MFIB programs the hardware with multicast info: Typhoon NPU, FIA, and LC fabric

L2 Multicast Control Plane
1. Incoming joins (IGMP, PIM, MLD, etc.)
2. The NPU punts joins directly to the RP, bypassing the LC CPU
3. Each snooping protocol updates the L2FIB (mrouter, port-list, input port, BD, etc.)
4. The RP L2FIB downloads L2 multicast info to each LC L2FIB
5. The LC L2FIB programs the hardware with multicast info: Typhoon NPU, FIA, and LC fabric

Multicast Replication Model Overview: 2-Stage Replication
Multicast replication in the ASR9k is like an SSM tree, with a 2-stage model:
1. Fabric-to-LC replication
2. Egress NP OIF replication
The ASR9k doesn't use the inferior "binary tree" or "root unary tree" replication models.
FGID (Slot Mask)

10-slot chassis:

  Logical Slot   Physical Slot   Slot Mask (Binary)   Slot Mask (Hex)
  LC7            9               1000000000           0x0200
  LC6            8               0100000000           0x0100
  LC5            7               0010000000           0x0080
  LC4            6               0001000000           0x0040
  RSP0           5               0000100000           0x0020
  RSP1           4               0000010000           0x0010
  LC3            3               0000001000           0x0008
  LC2            2               0000000100           0x0004
  LC1            1               0000000010           0x0002
  LC0            0               0000000001           0x0001

6-slot chassis:

  Logical Slot   Physical Slot   Slot Mask (Binary)   Slot Mask (Hex)
  LC3            5               0000100000           0x0020
  LC2            4               0000010000           0x0010
  LC1            3               0000001000           0x0008
  LC0            2               0000000100           0x0004
  RSP1           1               0000000010           0x0002
  RSP0           0               0000000001           0x0001

FGID Calculation Examples (10-slot chassis):

  Target Linecards   FGID Value
  LC6                0x0100
  LC1 + LC5          0x0002 | 0x0080 = 0x0082
  LC0 + LC3 + LC7    0x0001 | 0x0008 | 0x0200 = 0x0209
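The FGID examples above are plain bitwise ORs of the target slots' masks, which can be sketched as (10-slot chassis masks):

```python
# FGID = bitwise OR of the slot masks of all target linecards/RSPs,
# using the 10-slot chassis mask table above.

SLOT_MASK = {
    "LC7": 0x0200, "LC6": 0x0100, "LC5": 0x0080, "LC4": 0x0040,
    "RSP0": 0x0020, "RSP1": 0x0010, "LC3": 0x0008, "LC2": 0x0004,
    "LC1": 0x0002, "LC0": 0x0001,
}

def fgid(*slots):
    value = 0
    for slot in slots:
        value |= SLOT_MASK[slot]
    return value

assert fgid("LC6") == 0x0100
assert fgid("LC1", "LC5") == 0x0082
assert fgid("LC0", "LC3", "LC7") == 0x0209
```

The fabric uses this bitmap for the first replication stage: one copy per set bit, i.e. one copy per target slot.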
ASR9k NG-MVPN (MLDP/P2MP-TE) Packet Flow – Imposition PE
ASR9k NG-MVPN (MLDP/P2MP-TE) Packet Flow – Disposition PE
2-Stage Forwarding Model: VPLS Packet Flow Example (Known Unicast)
[Diagram: the customer Ethernet frame (ETH) arrives at the ingress Typhoon NP, which adds the VC label, local result, and fabric header; after the switch fabric, the egress Typhoon NP performs the L2 rewrite and pushes the LDP transport label, so the frame leaves the wire-side port as LDP | VC | ETH]
IOS-XR Architecture
1. 32-bit and 64-bit OS 2. IOS-XRv 9000 Virtual Forwarder 3. Netconf/Yang Programmability
Cisco IOS – A Recap
Operational Infra
Distributed Infra
Distributed Infra
SNMP XML NetFlow
ACL QoS LPTS
XR Code v2
BGP OSPF PIM
SNMP XML NetFlow
XR Code v1
ACL QoS LPTS
XML
NetFlow
Mgmt Plane
SNMP
LPTS
ACL
QoS
PIM
Data Plane
BGP
Control Plane
Cisco Virtual IOS-XR
BGP OSPF PIM
IOS “Blob”
Cisco IOS-XR
OSPF
IOSd
Hosted App 2
Cisco IOS-XE Hosted App 1
Cisco IOS
Distributed Infra
Kernel
Kernel
Kernel
Kernel
Kernel
Linux-BinOS
QNX, 32bit
Linux, 64bit
Linux, 64bit
Linux, 64bit
Virtualization Layer
1990s
System Admin
2000s
2003-14
Present Day
Incremental Development, with Industry leading investment protection
32-bit XR Key Concepts
• Two-stage commit
• Config history database
• Rollback
• Atomic vs. best-effort commits
• Multiple config sessions
• Etc.
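As a sketch of how these concepts come together on the XR CLI, a two-stage commit session looks roughly like this (output elided; the static route is purely illustrative):

```
RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# router static address-family ipv4 unicast 0.0.0.0/0 192.0.2.1
RP/0/RP0/CPU0:router(config)# commit
! Stage two: the candidate config takes effect only at commit time,
! and the commit is recorded in the config history database.
RP/0/RP0/CPU0:router# show configuration commit list
RP/0/RP0/CPU0:router# rollback configuration last 1
```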
XR Command Modes

Exec – normal operations: monitoring, routing, and CEF
  RP/0/RP0/CPU0:router# show ipv4 interfaces brief
  RP/0/RP0/CPU0:router# show install active
  RP/0/RP0/CPU0:router# show running-config
  RP/0/RP0/CPU0:router# show cef summary location 0/5/CPU0

Config – configuration for the L3 node
  RP/0/RP0/CPU0:router(config)# router bgp 100
  RP/0/RP0/CPU0:router(config)# taskgroup admins
  RP/0/RP0/CPU0:router(config)# policy-map foo
  RP/0/RP0/CPU0:router(config)# mpls ldp
  RP/0/RP0/CPU0:router(config)# ipv4 access-list block-junk

Admin – chassis operations, outside of SDRs
  RP/0/RP0/CPU0:router(admin)# show controllers fabric plane all (CRS)
  RP/0/RP0/CPU0:router(admin)# show controllers fabric clock (12K)

Admin Config
  RP/0/RP0/CPU0:router(admin-config)# sdr backbone location 0/5/* pairing reflector location 0/3/* 0/4/*
  RP/0/RP0/CPU0:router(admin-config)# config-register 0x0
  RP/0/RP0/CPU0:router(admin-config)# install add (also available in SDR)
Cisco IOS XR (Virtualized)
• Distributed system architecture: scale, fault isolation, control-plane expansion
• Enhanced OS infrastructure: scale, SMP support, 64-bit CPU support, open-source apps
• Virtualization: ISSU, control and admin plane separation, simplified SDR architecture
• ISSU and HA architecture: Zero Packet Loss (ZPL) and Zero Topology Loss (ZTL) to avoid service outages
• Fault management: carrier-class fault handling, correlation, and alarm management
• Available since IOS XR 5.0 on NCS6K; rolling out to other platforms – NCS4K, ASR9K, Skywarp, Fretta, Sunstone, and Rosco
[Diagram: XR code instances (v1, v2) with SNMP/XML/NetFlow, ACL/QoS/LPTS, and BGP/OSPF/PIM, plus a System Admin plane, each on a 64-bit Linux kernel over a virtualization layer]
IOS-XRv 9000: Efficient Virtual Forwarder
High-end, feature-rich data plane on x86 – an innovative virtual forwarder

x86-optimized SW-based hardware assists:
• 2x10G line-rate FDX with PCIe pass-through
• SW hierarchical traffic manager with 3-level H-QoS, 512K queues at 20G FDX performance per core
• SW policers that are color-aware and nearly 4x faster than DPDK-based SW routers
• SW TCAM with logical super key and heuristic-cuts algorithms
• Data plane optimized for fast convergence: a 20Gbps+ forwarder for IMIX traffic, with features, on a single socket

[Diagram: IOS-XRv control plane over the virtual forwarder – RX & interface classification, forwarding & feature path (TCAM, PLU, packet replication), hierarchical QoS scheduler, traffic manager & TX]
• High-speed interface classification and fine-grained load balancing
• Portable 64-bit C code (including to ARM-based platforms)
• 30Gbps+ throughput on a single CPU core; ½ million queues; 3-layer H-QoS
• Common code base with the Cisco nPower X family
• Features: ACLs, uRPF, marking, policing, IPv4, IPv6, MPLS, segment routing, BFD
NETCONF/YANG – Supported on XR

NETCONF – NETwork CONFiguration Protocol
• Network management protocol – defines management operations
• Initial focus on configuration, but extended for monitoring operations
• First standard: RFC 4741 (December 2006); latest revision: RFC 6241 (June 2011)
• Does not define the content of management operations

YANG – Yet Another Next Generation
• Data modeling language that defines the NETCONF payload
• Defined in the context of NETCONF, but not tied to NETCONF
• Addresses gaps in SMIv2 (the SNMP MIB language)
• Previous failed attempt: SMIng
• First approved standard: RFC 6020 (October 2010)

[Diagram: YANG-modeled data carried as the NETCONF payload]
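To make the NETCONF/YANG split concrete: NETCONF defines the operations (here, a <get-config> of the running datastore), while YANG models the data carried inside. A minimal RPC built with the Python standard library:

```python
# Build a minimal NETCONF <get-config> RPC (RFC 6241 base namespace)
# with the standard library. YANG models would describe the structure
# of the data returned inside <rpc-reply>.

import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": "101"})
get_config = ET.SubElement(rpc, f"{{{NC}}}get-config")
source = ET.SubElement(get_config, f"{{{NC}}}source")
ET.SubElement(source, f"{{{NC}}}running")   # read the running datastore

xml_bytes = ET.tostring(rpc)
```

In practice a NETCONF client library (e.g. ncclient) handles the transport, framing, and message-id bookkeeping; this only shows the payload shape.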
Complete Your Online Session Evaluation •
Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
•
Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect. Don't forget: Cisco Live sessions will be available for on-demand viewing after the event at CiscoLive.com/Online
Thank you