mcRNC Architecture
Nokia Solutions and Networks Academy

Legal notice - Intellectual Property Rights
All copyrights and intellectual property rights for Nokia Solutions and Networks training documentation, product documentation and slide presentation material, all of which are forthwith known as Nokia Solutions and Networks training material, are the exclusive property of Nokia Solutions and Networks. Nokia Solutions and Networks owns the rights to copying, modification, translation, adaptation or derivatives including any improvements or developments. Nokia Solutions and Networks has the sole right to copy, distribute, amend, modify, develop, license, sublicense, sell, transfer and assign the Nokia Solutions and Networks training material. Individuals can use the Nokia Solutions and Networks training material for their own personal self-development only; those same individuals cannot subsequently pass on that same Intellectual Property to others without the prior written agreement of Nokia Solutions and Networks. The Nokia Solutions and Networks training material cannot be used outside of an agreed Nokia Solutions and Networks training session for development of groups without the prior written agreement of Nokia Solutions and Networks.
Objectives

After this module the student should be able to:
• Describe the hardware architecture and functional units of mcRNC
• Explain mcRNC Configuration
• List mcRNC Hardware Items
• Explain mcRNC Data Flow
Contents
• Introduction to mcRNC and Differences with IPA2800 RNC
• mcRNC Architecture and Functional Units
• mcRNC Configuration and Hardware Items
• mcRNC Data Flow
Introduction to mcRNC and Differences with IPA2800 RNC
mcRNC Benefits
• Full WCDMA feature set support
• High data and voice capacity
• High reliability and availability
• Easy installation and maintenance
• Saving rollout cost and easy capacity upgrades
• Lowest RNC power consumption
• Future proof product to Single RAN
The new WCDMA Radio Network Controller is called the Multicontroller RNC (mcRNC). It is optimized for an all-IP network environment and provides an optimal total cost of ownership to the operator. In addition to the mcRNC, the Multicontroller platform also provides the common platform for the mcBSC and mcTC in GSM networks and a smooth upgrade path from GSM to WCDMA. The mcRNC configurations are based on easily installable, standard-sized, compact hardware modules. The modular and compact design results in high flexibility and scalability and efficient utilization of the available site space. Multicontroller modules are extremely easy to install, operate and maintain. The minimum mcRNC configuration consists of two Multicontroller RNC hardware modules. Additional capacity is delivered through capacity licenses or, if the capacity limit of the existing hardware configuration is exceeded, by adding more hardware modules to the network element configuration. Configurations with two, four, six or eight hardware modules will be supported.
RNC IPA2800 vs mcRNC

RNC2600:
• ATM and/or IP connectivity
• Two 2.10 m high 60 cm x 60 cm cabinets

mcRNC:
• IP connectivity only
• 4U (U = 44.45 mm) high boxes that can be installed in a standard 19" ETSI rack or on a desktop

Both offer the same capacity and service availability (except ATM).
RNC IPA2800 vs mcRNC, cont.

The next generation RNC program (hereafter mcRNC) defines a cost-efficient Radio Network Controller (RNC) based on a new platform for future business needs. The new product replaces the IPA2800-based RNC in the long term. The requirement for the next generation RNC is to provide higher capacity on a smaller footprint with reduced product costs.
Differences with IPA2800 RNC
• There is no support for ATM interfaces planned in mcRNC. Due to this, there is no support for dual Iub and the related features like transport fallback to ATM.
• Integrated OMS is not supported; a standalone OMS is expected for the operation of mcRNC.
• The interface between the OMS and mcRNC is changed to BTSOM, instead of the EMT that is used in the IPA2800 RNC before RU30.
• The site solution for mcRNC may be different from that of IPA2800 RNC.
• The database solution is different in mcRNC: it makes use of an SQL-based database engine, while the IPA2800 RNC uses a proprietary database engine based on object collections.
• The resource management principles used in mcRNC are different from those of the IPA2800 RNC.
• There is no dedicated plug-in unit HW for a specific functional unit as in the classic RNC.
• The redundancy solution in mcRNC is more fine-grained than that of IPA2800 RNC.
mcRNC Comparison with RNC2600
• Small size
• Low HW price
• Easy installation (75% shorter commissioning time)
• Improved product architecture enabling easy fault diagnostics and bug fixing as well as shorter release lead times
• Low power consumption
• Flexible network building and topology
• Built-in IP interfaces only
mcRNC Interfaces
The mcRNC provides logical interfaces for the mobile services switching center (MSC), the multimedia gateway (MGW), other RNCs, NetAct, base transceiver stations (BTSs), the serving GPRS support node (SGSN) and the cell broadcast center (CBC).

Interface descriptions:
• Iu-CS: Logical interface between the radio network controller (RNC) and the circuit switched core network
• Iu-PS: Logical interface between the RNC and the packet core network
• Iur: Logical interface for the interconnection of two neighboring RNCs
• Iub: Logical interface between the RNC and the WBTS
• Iu-BC: Logical interface between the RNC and the cell broadcast center (CBC)
• Iu-PC: Logical interface between the RNC and the Stand-alone SMLC (SAS)
• O&M: Proprietary management interface between the network management system (NMS) and the RNC
mcRNC management interfaces
Network management interface (RNC-NetAct)
The mcRNC has a management interface to Nokia Solutions and Networks' management system, i.e. NetAct, via a standalone Operation & Management Server (OMS). A proprietary BTSO&M protocol is used between the mcRNC and the OMS, and NWI3 is used between the OMS and NetAct. The Data Communications Network (DCN) architecture provides connections for the implementation of O&M functions from the mcRNC to the operation support system (NetAct). A common transport protocol is provided for the DCN network and IP is used as a flexible solution for network management. The following network-internal management interfaces are used:
• CORBA and SOAP/HTTP based NWI3 interface for the interconnection of NetAct and OMS
• BTS O&M interface for the OMS - RNC and OMS - BTS interconnection
The O&M traffic is secured by the IPsec protocol between OMS/RNC and NetAct and by HTTPS between RNC and BTS.
mcRNC Configuration and Hardware
Configuration and Dimensioning
BCN-A1 configurations: Step S1-A1, Step S5-A1
BCN-B2 configurations: Step S1-B2, Step S3-B2
Configuration and Dimensioning, cont.
• BCN-A1 modules (as available since Multicontroller RNC 2.0)
  - Octeon+ processor
  - 1 Gigabit Ethernet network connectivity
• BCN-B2 modules (introduced with Multicontroller RNC 3.0)
  - Octeon II processor
  - 1 and 10 Gigabit Ethernet network connectivity
General about Multicontroller RNC configurations

The Multicontroller RNC can be flexibly configured to meet the capacity requirements of individual customers because of its modular structure. When the capacity needs to be increased, the system can be easily expanded by adding new modules to the existing configuration. The capacity of the network element depends on the number of controller modules in the system. Two reference capacity steps are used in Multicontroller RNC. They differ in the number of Multicontroller RNC modules used: Multicontroller RNC capacity step 1 employs 2 Multicontroller RNC modules, while capacity step 5 uses 6 Multicontroller RNC modules. Possible controller modules are either type mc01 or mc02. The difference between mc01 and mc02 is that mc01 has a hard disk AMC and mc02 does not.
mcRNC HW Release 1 Support

mcRNC capacity targets with BCN-A1 HW in RU40 (mcRNC HW Rel1), for configurations S1-A1 / S5-A1:

NE level performance
• Number of subscribers per RNC coverage area: 340000 / 1380000
• AMR Busy Hour Call Attempts: 340000 / 1380000
• PS BHCA (HSPA): 485000 / 1940000
• NAS busy hour call attempts on top of maximum call capacity: 1290000 / 5250000
• AMR Erlangs: 8500 / 34500
• AMR Erlangs (including soft handover): 11900 / 48300

NE level capacity
• Iub max total UP throughput (CS+PS, FP, UL+DL) / Mbps: 1290 / 5190
• Iub max total HSDPA UP throughput (CS+PS, FP, DL): 910 / 3660
• Iub max total HSDPA UP throughput (CS+PS, FP, UL): 380 / 1530

Connectivity
• Max number of cells: 1410 / 3110
• Max number of BTS sites: 470 / 1020
• Max number of RRC connected UEs: 195000 / 780000
mcRNC HW Release 2 Support

mcRNC capacity targets with BCN-B2 HW in RU40 (mcRNC HW Rel2), for configurations S1-B2 / S3-B2:

NE level performance
• Number of subscribers per RNC coverage area: 760000 / 2140000
• AMR Busy Hour Call Attempts: 760000 / 2140000
• PS BHCA (HSPA): 1400000 / 3500000
• NAS busy hour call attempts on top of maximum call capacity: 3050000 / 7680000
• AMR Erlangs: 19000 / 53500
• AMR Erlangs (including soft handover): 26600 / 74900

NE level capacity
• Iub max total UP throughput (CS+PS, FP, UL+DL) / Mbps: 2640 / 7520
• Iub max total HSDPA UP throughput (CS+PS, FP, DL): 1850 / 5260
• Iub max total HSDPA UP throughput (CS+PS, FP, UL): 790 / 2260

Connectivity
• Max number of cells: 2600 / 6600
• Max number of BTS sites: 520 / 1320
• Max number of RRC connected UEs: 352000 / 1000000

A small sketch comparing per-module capacity across these configurations follows below.
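The capacity targets above scale with the number of BCN modules, but not strictly linearly. The following is a minimal sketch, assuming the table values above and the module counts given elsewhere in this document (S1 = 2 boxes, S5-A1 = 6 boxes, S3-B2 = 4 boxes); the data structure and helper name are illustrative only, not an NSN dimensioning tool:

```python
# Illustrative only: capacity figures copied from the tables above.
# configuration ID -> (number of BCN modules, AMR Erlangs, Iub total UP throughput in Mbps)
CONFIGS = {
    "S1-A1": (2, 8500, 1290),
    "S5-A1": (6, 34500, 5190),
    "S1-B2": (2, 19000, 2640),
    "S3-B2": (4, 53500, 7520),
}

def per_module(config_id):
    """Return (AMR Erlangs, Iub Mbps) per BCN module for a configuration."""
    modules, erlangs, mbps = CONFIGS[config_id]
    return erlangs / modules, mbps / modules

for cid in CONFIGS:
    erl, mbps = per_module(cid)
    print(f"{cid}: {erl:.0f} AMR Erlangs and {mbps:.0f} Mbps Iub throughput per module")
```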
BCN-A module (HW release 1)
• Dimensions (H x W x D): 178 mm (4U) x 444 mm x 450 mm
• Weight, fully equipped: approx. 25-30 kg (depends on the configuration)
BCN-B module (HW release 2)
• Dimensions (H x W x D): 178 mm (4U) x 444 mm x 450 mm
• Weight, fully equipped: approx. 25-30 kg (depends on the configuration)
Control Module Functional Part
The main processing power of the controller module comes from the cutting-edge processor technology used. From the HW point of view the processor environments are identical, so SW can allocate any kind of processing type to any of these processors. The processor uses hardware acceleration for various tasks. With these features, the same hardware can be used for processing of user, control, transport and management plane functions.

PCI Express (PCIe) interconnecting
Communication between the add-in cards, LMP, hard disk controller and AMC modules takes place through a PCIe switch.

Local management processor (LMP)
The LMP is a central component on the motherboard that is mainly responsible for the following functions:
• Hardware management of the controller module (in cooperation with the virtual carrier management controller (VCMC))
• Ethernet switch and interface management
• Offers services for USB mass storage devices
• Performs the function of a console server and provides direct access to the serial consoles of processors
BCN-A Front View
BCN = Box Controller Node = mcRNC Module
BCN-B Front View
BCN = Box Controller Node = mcRNC Module
Front panel connector labels (from the figure): 2 x AMC bay; USB 2.0 (Type B, target); SAS cross-connect; LMP serial port (RS-232); network interfaces and inter-module interfaces - 2 x 10 GE (SFP+), 7 x 1 GE/10 GE (SFP+), 10 x 1 GE (SFP); reset; indicator LEDs; synchronization interface - 2 x in/out (RJ45); NE management interface - 2 x 1 GE (SFP); trace - 1 x 1 GE (SFP); alarm input interface - 8 x voltage input (RJ45); module management interface - 1 x 10/100M/1GE (RJ45); eSW/FW update interface - 2 x USB 2.0 (Type A, host).
BCN Rear View (figure): FAN 1 - FAN 6, PSU 1 and PSU 2 (AC or DC version).
mcRNC Field Replaceable Units from service point of view

FRUs: mcRNC module, processor add-in card, power unit (AC/DC), AMC HDD, SFP transceivers, main fan, aux fan, air filter, AMC filler, power cords, cables. The table lists for each FRU its access point (front or rear) and whether it is hot swappable: the mcRNC module and the processor add-in card are not hot swappable, the other FRUs are.
mcRNC Hardware Architecture
• The mcRNC consists of a maximum of eight 4U rack mount boxes, interconnected by 10 Gbps XAUI cables.
• Each BCN (Box Controller Node) contains a motherboard with a management processor and 8 separate add-on cards containing Octeon processors that are connected to the motherboard through PCIe connectors.
• There are two releases of mcRNC hardware:
  - BCN-A (HW release 1) containing Octeon+ add-on cards
  - BCN-B (HW release 2) containing Octeon II add-on cards
• There are 3 physical switches in every box:
  - One for external network communication
  - One for internal network communication
  - One for local management
The major architectural change with the mcRNC is the move from a multi-subrack blade system to a few identical rack mount modules. Depending on the capacity needs, one mcRNC can consist of two up to several modules. A Multicontroller module is tightly integrated and has only a few field-replaceable parts. The key enablers of this approach are IP/Ethernet technology and advanced CPU technology. They simplify the network element architecture, especially as IP proliferates in mobile networks. The new hardware and software platform allows new, optimized placement of the RNC functionalities in the system. A key principle in the design of the mcRNC is to simplify the processing and implement the services that are required by customers. Simplicity also contributes to performance by eliminating unnecessary complexity in data processing.
Overview of the hardware architecture
• Octeon processor comparison:
  - Cavium Octeon+ CN5650: 64-bit, 12 cores, 800 MHz, 4 x 2MB DDR DIMMs
  - Cavium Octeon II CN6680: 64-bit, 32 cores, 48 GHz, 4 x 8MB DDR DIMMs
• The same Octeon hardware can be used for processing of user, control, transport and management plane functions.
Motherboard and Processor Add-in Cards (figure, BCN-A motherboard): dual fan module, processor add-in card (~40 mm), power supply, AMC slot.
Front panel LEDs

LED labels in the figure: P0 - P8, NS, A1, A2 (next to the AMC bay), plus per-port Speed and Link/Activity LEDs for the SFP/SFP+ ports (SFP+1 - SFP+6, SFP5 - SFP10).

Front panel Ethernet interface LEDs
• Link/Activity for all Ethernet ports:
  - Green: Ethernet link is detected
  - Green blink: port receives or sends frames
• Link speed, SFP+ ports:
  - Amber: 10GE
  - No light: 1GE
• Link speed, SFP, Trace, LAN1/LAN2 and MGT ports:
  - Amber: 1GE
  - No light: 100Base-T
Box Controller Node Ethernet interfaces (BCN-A) - provided interfaces and supported standards

Figure labels: 6x SFP+ (BCN interconnect; five of these ports are required for connecting the BCN modules), 16x SFP (UTRAN interfaces), SFP (EM, DCN), 1x RJ-45 (HW maintenance).

• Provided network interfaces for UTRAN traffic (i.e. Iu, Iur, Iub interfaces)
  - 6x 10 GE: 10GBASE-SR/LR, SFP+ (LC-type connector), four of these ports reserved for internal connections
  - 16x 1 GE: 1000BASE-SX/LX/TX, SFP (LC-type or RJ-45)
• Provided network interfaces for NetAct/Element Manager connectivity
  - 1x 1 GE: 1000BASE-SX/LX/TX, SFP (LC-type or RJ-45)
  - 2nd SFP reserved for future use
• Provided network interfaces for local HW maintenance & service terminal
  - 1x 1 GE: 1000BASE-TX, RJ-45
Box Controller Node Ethernet interfaces (BCN-B) - provided interfaces and supported standards
• 2x USB: software download
• 1x RJ-45: hardware maintenance (debugging interfaces)
• 1x SFP: tracing
• 9x SFP+: 7x BCN interconnect, 2x UTRAN interfaces
• 10x SFP: UTRAN interfaces (SFP13 - SFP22)
• 10GE external ports: SFP+ 11, SFP+ 12
• SFP: EM, DCN
• 4x RJ-45: alarm and sync interfaces, not used by mcRNC
BCN-A Interfaces

Interface type - number of interfaces - printed label:
• Backplane ports (internal 10GE): 6 interfaces, labels SFP1 - SFP6
• External 1GE: 16 interfaces, labels SFP7 - SFP22
• External 10GE: 0 interfaces
• Trace port: 1 interface

External 1GE network connectivity is implemented based on the following standards:
• 1000Base-TX, electrical transmission via SFP with RJ-45 connector
• 1000Base-SX/LX, optical transmission via SFP with LC-type connector
External 10GE network connectivity is implemented based on the following standards:
• 10GBASE-SR acc. IEEE 802.3-2008 Clause 49 and 52.5
• 10GBASE-LR acc. IEEE 802.3-2008 Clause 49 and 52.6
BCN-B Interfaces

Interface type - number of interfaces - printed label:
• Backplane ports (internal 10GE): 7 interfaces, labels SFP0 - SFP6
• External 1GE: 10 interfaces, labels SFP13 - SFP22
• External 10GE: 2 interfaces, labels SFP+ 11 and SFP+ 12
• Trace port: 1 interface

External 1GE network connectivity is implemented based on the following standards:
• 1000Base-TX, electrical transmission via SFP with RJ-45 connector
• 1000Base-SX/LX, optical transmission via SFP with LC-type connector
External 10GE network connectivity is implemented based on the following standards:
• 10GBASE-SR acc. IEEE 802.3-2008 Clause 49 and 52.5
• 10GBASE-LR acc. IEEE 802.3-2008 Clause 49 and 52.6
Hardware management – controller module level
At the controller module level, the central hardware management entity is called the node manager. The node manager consists of the virtual carrier management controller (VCMC) and specific management software running on the local management processor (LMP). Each add-in card, as well as each AMC, contains a module management controller (MMC) which is connected to the VCMC through the local intelligent platform management bus (IPMB-L). Under the control of the VCMC, the MMCs perform hardware management operations on the processor add-in cards and AMCs. The MMCs are connected to the add-in card processors or the AMC processors through a universal asynchronous receiver/transmitter (UART) serial interface.
Hardware management – network element level
At the network element level, the central hardware management entity is called the system management software (SMS). The network element, consisting of one or more controller modules, contains one active and one standby SMS entity, which provides the system manager functionality for the network element. In multimodule configurations, the system manager entities are located in different controller modules. The system manager in one controller module can access a node manager located in another controller module through external inter-module Ethernet cabling. The active system manager is able to control any controller module within one network element. The control is performed by the node manager.
Power distribution principles
There are two power supply units (PSU) per BCN module and two power distribution units (PDU) per cabinet. The PDUs in the cabinet are optional. Either DC or AC PDUs and PSUs can be used, but both PSUs in any one BCN module must be of the same type. The supported options for the input voltages are 230 VAC for the mains power and -48 VDC / -60 VDC for battery feed. The DC PSU in BCN is BDFE-B, and the AC PSU is BAFE-B. The PDUs are called BDPDU-A for DC power feed and BAPDU-A for AC power feed, respectively. In cabinet installations, the power feed input can be connected from the site power feed directly to the PSUs, or from the site power feed to the PDUs and from the PDUs to the PSUs. The outputs of the DC PDU and the AC PDU are protected by circuit breakers. Each PDU has 8 output channels; each group of 4 outputs is independent and can act as the redundant feed for the other group. Each PSU has one input, and the PSU provides protection against surges and transients in the power feed cables. To ensure 2N redundancy for the power distribution lines, the two PSUs in a BCN module provide two mutually redundant input feeds (PSU A and PSU B). Each input is capable of supplying the entire BCN module's power feed. For further details about the BCN power supply, refer to the Installation Site Requirements document. The power distribution principle is illustrated in the following figure; a small illustrative sketch of the 2N feed rule follows the figure.
Power Distribution Units (figure): DC PDU, AC PDU, international power cable.
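As a rough illustration of the 2N feed rule described above (two PSUs per module, each fed independently, and eight outputs per PDU), the sketch below checks a cabling plan for redundancy. The data structure and function name are assumptions made for illustration only, not an NSN planning tool:

```python
# Illustrative sketch: verify that a BCN power cabling plan follows the 2N rule
# described above (the two PSUs of a module should be fed from different PDUs),
# and that no PDU is asked for more than its eight outputs.
from collections import Counter

# plan: module name -> (PDU feeding PSU A, PDU feeding PSU B); hypothetical example data
plan = {
    "BCN-1": ("PDU-A", "PDU-B"),
    "BCN-2": ("PDU-A", "PDU-B"),
    "BCN-3": ("PDU-A", "PDU-A"),   # not 2N: both PSUs on the same PDU
}

def check_plan(plan, outputs_per_pdu=8):
    outputs = Counter()
    for module, (feed_a, feed_b) in plan.items():
        if feed_a == feed_b:
            print(f"{module}: WARNING - both PSUs fed from {feed_a}, no 2N feed redundancy")
        outputs[feed_a] += 1
        outputs[feed_b] += 1
    for pdu, used in outputs.items():
        if used > outputs_per_pdu:
            print(f"{pdu}: WARNING - {used} outputs needed, only {outputs_per_pdu} available")

check_plan(plan)
```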
Mechanics and electromechanics
• The NSN CAB216SET-B 19-inch cabinet is recommended as the rack mount enclosure for BCN modules
  - Fulfils requirements concerning earthquake, mechanical and electric shock, electromagnetic radiation and safety
• Temperature control using three dual fans with rotation speed control
  - Two dual fans for the temperature control of all elements on the motherboard
  - One dual fan for the temperature control of the AMC modules
• A removable air filter is used on the front side for filtering inlet air
• Normal dual in-line memory modules (DIMMs) can be used because of the space between modules
• Contains two mid-size AMC bays
  - Field-replaceable AMCs offer the possibility of expanding the BCN functionality
Hardware items
Processor add-in card (BOC-A) - Octeon+
Memory module for BOC-A processor add-in card (BDM2G-A)
Processor add-in card (BMPP2-B) - Octeon II variant B
Add-in filler card (BFC-A)
• Dummy module with no electrical components
• Placed in empty card slots to ensure proper cooling of the BCN module
Hard disk drive carrier AMC (HDSAM-A) • AMC (HDSAM-A) is a mid-size (single-width, 4 HP) AMC module
• Provides serial attached SCSI (SAS) storage in the system
• HDSAM-A is equipped with a 2.5-inch small form factor serial attached SCSI (SAS) hard disk drive
• Hard disk drive needs to be acquired separately
BCN AMC filler (BAMF-A) • AMC filler is a dummy module with no electrical components
• Empty AMC bays must always be equipped with AMC fillers • To ensure proper cooling of the BCN module
• AMC filler acts also as an EMC shield
AC power distribution unit (BAPDU-A)
• HW dimensions: 90 mm (2U) x 485 mm x 230 mm
• Used in 19-inch cabinet installations
• Takes the input power from the site power supply (180-264 V)
• Eight circuit breakers installed in the front panel
• One PDU provides eight outputs
  - Can provide power for up to eight BCNs if the two PSUs in each module take power from two PDUs
  - Can provide power for up to four BCNs if the two PSUs in each module take power from the same PDU
DC power distribution unit (BDPDU-A)
• HW dimensions: 90 mm (2U) x 485 mm x 230 mm
• Used in 19-inch cabinet installations
• Takes the input power from the site power supply
• Eight circuit breakers installed in the front panel
• One PDU provides eight outputs
  - Can provide power for up to eight BCNs if the two PSUs in each module take power from two PDUs
  - Can provide power for up to four BCNs if the two PSUs in each module take power from the same PDU
• A 30 A circuit breaker on the negative wire at the input protects the PDU from over-current
AC power supply unit, variant B (BAFE-B)
• 1200-watt redundant AC power supply unit
• Located on the rear of the BCN module
• Hot swappable; has an IEC 320 C20 type input which operates on 230 VAC
• Two outputs to the BCN module
  - Main output with 12 V for all BCN electronics including HW management
  - Standby output with 3.3 V for BCN HW management
DC power supply unit, variant B (BDFE-B)
• 1200-watt redundant DC power supply unit
• Located on the rear of the BCN module
• Hot swappable; takes -48/-60 VDC input
• Two outputs to the BCN module
  - Main output with 12 V for all BCN electronics including HW management
  - Standby output with 3.3 V for BCN HW management
Main fan (BMFU-A) • For cooling the BCN • Contains two dual-fans (BMFU-A)
• Located on the rear of the BCN module
• Fan speed is controlled by the hardware management system • Regulate the temperature within the BCN
• Dimensions (H x W x D) - 142 mm x 140 mm x 75 mm
Fan for the AMCs (BAFU-A) • For cooling the AMCs that are installed in BCN
• Located on the rear of the BCN module
• Fan speed is controlled by the hardware management system • Regulate the temperature within the BCN
• Dimensions (H x W x D) - 95 mm x 75 mm x 105 mm
Air filter (BAFI-A)
• Located at the front of the BCN module in the cooling air inlet • Prevent dust from accumulating inside the equipment • Meets the NEBS GR 63 CORE and GR 78 CORE requirements
Overview of cabling in BCN
• Consists of two types of cabling: internal cables and external cables
• Internal cables
  - Cables inside the network element or the cabinet
  - Example: cables between BCN modules or between a PDU and a BCN module
  - Internal cables between BCN modules come with attached pluggable transceivers
• External cables
  - Cables leaving the network element and the cabinet, such as cables to external networks
  - External cables to external networks need pluggable transceivers (SFP and SFP+) to connect to the 1GE interfaces of BCN modules
Internal BCN cabling
External BCN cabling
• LAN/Ethernet cables for connection to external networks
• External synchronization cables
• External alarm cable
• Power cables between site AC/DC power supply and BCN module
  - In standalone installations (without PDU and cabinet)
  - An EU plug model AC power cord between the site AC power supply and the BCN module is part of the mcRNC equipment delivery
• Power cables between site AC/DC power supply and PDU
  - When cabinet and PDU are in use
SFP and SFP+ transceivers
BCN installation kit for 19-inch cabinet (BIK19-A)
• Installation kit for installing a BCN module in a 19-inch cabinet
• Used when the cabinet is 600 mm deep and the distance between the front and rear poles is in the range of 448-462 mm
• The installation kit includes:
  1. 2 x sliding rails
  2. 1 x cable tray
  3. 2 x ear plates for 19-inch rack
  4. 2 x handles
mcRNC installation
Installation kit for IR206 cabinet (old): brackets
Installation kit for CAB216 cabinet (new): sliding rails
One mcRNC module weighs less than 25 kg without power supply modules. In principle only one person is required to install the modules; in practice it might require two.
mcRNC installation
• PDU - Power Distribution Unit
• 2 to 8 mcRNC modules
• Cabinet - any 19" rack cabinet which fulfils the mcRNC cabinet requirements
mcRNC Software Architecture
mcRNC SW Architecture
mcRNC SW Architecture, cont.

All control plane and O&M software runs on Linux in the mcRNC, compared to DMX in the IPA2800 RNC. In the mcRNC the user plane software runs without an actual operating system, on top of a hardware abstraction layer called the simple executive. A set of services provided by the user plane middleware creates a pseudo-OS interface for the user plane applications to ensure that the programming of user plane applications is kept simple. The Linux distribution is provided by WindRiver and is delivered as part of the FlexiPlatform in mcRNC. In the mcRNC all SW runs on the MIPS64-based Cavium Octeon, replacing the dedicated processing architectures used in the past: x86, TI DSP, PowerQuicc and APP network processors. The Octeon processor is big-endian, which differs from x86 hardware and has some minor impact on the current control plane SW.
mcRNC SW Architecture
mcRNC SW Architecture, cont.

Control plane
The mcRNC has a completely new and different platform compared to the IPA2800 RNC. To minimize the impact on the already available control plane SW, an IPA Light layer is implemented between the Flexi Platform and the control plane SW. This has the benefit that no, or almost no, changes are needed to the current control plane SW: the IPA Light layer provides the API needed by the control plane SW and itself uses the Flexi Platform API, in that way hiding the platform changes from the control plane SW. To the outside world (e.g. the control plane), the SW architecture of the user plane is largely similar to that of the IPA2800 RNC; internally the SW architecture is quite different. To the outside world the biggest difference in the user plane application is that it runs in the same processor as its control plane counterpart.

User plane
The simple executive does not share memory or cores with the control plane that is running on Linux, so even though the RNC application and the user plane application run on the same processor, they still need to interact as if they were located in different processors, i.e. by using messages. Some Libgen functionality is also implemented in SE to make it possible for SW running in SE to communicate with the control plane. The user plane of mcRNC consists of four significant layers: the Octeon hardware, the Cavium Simple Executive for Octeon, the middleware for the user plane applications, and the user plane applications themselves.
Flexi Platform Architecture and Services
Flexi Platform Architecture and Services, cont.

FlexiPlatform is the strategic choice for Linux middleware within NSN for radio access gateway kinds of products. The FlexiPlatform architecture is not part of the mcRNC specifications, but a basic overview is given here. FlexiPlatform consists of several parts that can be individually selected, except for the Base Platform, which is part of all FlexiPlatform configurations.
mcRNC Architecture and Functional Units
Terminology
• Functional unit
  - A unit of execution and deployment that relates to a node in the cluster.
  - Belongs to one of the Control, User, Transport or Management planes.
  - Equivalent to a "computer" in the traditional sense.
  - In a Linux based node, the functional unit has a one-to-one mapping to the concept of a Recovery Unit. In an SE based node, the functional unit has a one-to-one mapping to the SE based node itself.
• Processing Unit
  - A unit of deployment that spans one multi-core processor containing one or more functional units.
  - The functional units contained may belong to any of the planes but are grouped together to ease processing and communication.
• Interface card / Transport card
  - An add-in card containing one or more processing units (one in mcRNC2.0) used to process network interface related functions and transport layer services.
• Service card
  - An add-in card containing one or more processing units (one in mcRNC2.0) that are used for radio layer services.
• BCN module
  - 1 Box Controller Node hardware containing 8 add-in cards. Also referred to as "the box".
A minimal data-model sketch of how these concepts nest follows this list.
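To make the terminology concrete, here is a minimal sketch (Python, with illustrative names only, not NSN software) of how the concepts above nest: a BCN module holds add-in cards, each add-in card hosts a processing unit, and a processing unit groups functional units that each belong to one of the four planes:

```python
# Minimal data model for the terminology above (illustrative, not NSN software).
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Plane(Enum):
    CONTROL = "control"
    USER = "user"
    TRANSPORT = "transport"
    MANAGEMENT = "management"

@dataclass
class FunctionalUnit:          # e.g. USCP, USUP, SITP, OMU
    name: str
    plane: Plane

@dataclass
class ProcessingUnit:          # spans one multi-core processor, e.g. USPU, EIPU
    name: str
    functional_units: List[FunctionalUnit]

@dataclass
class AddInCard:               # service card or interface/transport card
    slot: int
    processing_unit: ProcessingUnit

@dataclass
class BCNModule:               # "the box": up to 8 add-in cards
    name: str
    cards: List[AddInCard] = field(default_factory=list)

# Example: one USPU card with co-located control and user plane functional units
uspu = ProcessingUnit("USPU", [FunctionalUnit("USCP", Plane.CONTROL),
                               FunctionalUnit("USUP", Plane.USER)])
box = BCNModule("BCN-1", [AddInCard(slot=1, processing_unit=uspu)])
print(box)
```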
Four main mcRNC Functional Units
• CFPU - Centralized Functions Processing Unit
• CSPU - Cell-Specific Processing Unit
• USPU - UE-Specific Processing Unit
• EIPU - External Interface Processing Unit
Conceptually, mcRNC functionality is comprised of four planes: Control Plane, User Plane, Transport Plane and Management Plane. Thanks to the uniform type of processing used in mcRNC hardware, a large degree of freedom is available in the design of the RNC functional architecture. The mcRNC architecture consists of the following high level functions:
• network interface functions
• switching functions
• control plane processing
• user plane processing
• carrier connectivity functions
• O&M functions
The functions are distributed in the entities of mcRNC hardware and software, and the logical functions can be freely allocated inside mcRNC physical units. To simplify the mcRNC architecture, the number of different types of physical units as well as the number of functional units is highly minimized. Four main functional units are utilized in the mcRNC functional architecture design: CFPU, CSPU, USPU and EIPU.
mcRNC Processing Units
• CFPU: Operation and Management Unit (OMU) and Centralized Functions for Control Plane (CFCP)
• CSPU: Cell-specific control and user plane processing
• USPU: All services for UE-specific control and user plane processing
• EIPU: Hosts the networking and transport stacks needed for processing signalling and user plane data
• Ethernet switches (no redundancy)
The distributed processing architecture of the mcRNC is implemented by a multiprocessor system, where the data processing capacity is divided among several processors. Based on the application need, several general purpose processing units with an appropriate redundancy principle can be assigned to different tasks. In general, processing capacity can even be increased later on by distributing the functionality of the network element to multiple modules, and by upgrading processors with more powerful variants. As the mcRNC has only one type of processing hardware, it allows a large degree of freedom in the design of the functional software architecture. The Centralized Functions Processing Unit (CFPU) consists of the Operation and Management Unit (OMU) and Centralized Functions for Control Plane (CFCP). The OMU performs the basic system maintenance functions such as hardware configuration, the alarm system, configuration of signaling transport and centralized recovery functions. It also contains cellular network related functions such as radio network configuration management, radio network recovery and the RNW database.

The CFPU is the only processing unit that uses the 2N redundancy type. All the functions that require 2N redundancy are located in the CFPU, as well as all the location services related functions requiring this kind of redundancy type or centralized processing. For example, accounting of simultaneous on-going location related procedures in the whole network element is located in the CFPU. The communication between the OMU in the CFPU and OMS/NetAct happens through a dedicated Ethernet interface. The Cell-Specific Processing Unit (CSPU) implements all cell-specific control and user plane processing. All control and user plane resources for a single BTS are allocated from the same CSPU unit. Therefore CSPU units are completely independent of each other and different CSPUs might not have mutual communication at all. Allocation of BTSs under the control of specific CSPUs is controlled by the OMU. The same functionality in the OMU also allows graceful reallocation of BTSs one by one from one CSPU to the control of different CSPUs. This feature provides a quite seamless shutdown and replacement of one mcRNC hardware unit. The CSPU unit uses the N+M (M greater than or equal to 1) redundancy type.

The UE-Specific Processing Unit (USPU) implements all services for UE-specific control and user plane processing. Further, all dedicated control and user plane resources for a single UE are allocated from the same USPU unit. Therefore USPU units are completely independent of each other and different USPUs might not have mutual communication at all. This makes the implementation of SN+ redundancy features, like moving UE specific processing from one processor to another, simpler. The External Interface Processing Unit (EIPU) hosts the networking and transport stacks needed for processing both signaling and user plane data. The mcRNC provides Ethernet switching functionality both for the internal communication between the various processing units (USPUs, CSPUs and CFPUs) and for flexibly connecting the external network interfaces to the processing units. The internal communication and external network switching parts are kept totally separated.
Functional Architecture of mcRNC
• Each processing unit physically corresponds to an add-in card • The add-in cards are identical from the hardware point of view but can be differentiated by loading different software to different add-in cards
The mcRNC functional architecture consists of four types of processing units: USPU, CSPU, CFPU and EIPU. Each processing unit physically corresponds to an add-in card in the hardware architecture. The add-in cards are identical from the hardware point of view but can be differentiated by loading different software to different add-in cards, in this way implementing the processing units shown in the figure. Only CFPU and EIPU processing units are involved in IP-layer and transport-layer protocol processing. The CFPU processing unit is in charge of handling Operations and Maintenance (O&M) functions and thus provides a Small Form-factor Pluggable (SFP) port for connecting towards the data communications network via the site switches. EIPU processing units provide several SFP ports towards the network. There are two EIPU units in each hardware module. For redundancy reasons the connectivity towards the site switches should be arranged as shown in the figure.
mcRNC Redundancy Schemes
The RNC applies a number of protection schemes at various levels to support its redundancy. The redundancy schemes are:

Duplication (2N)
The duplication redundancy scheme, abbreviated "2N", uses a dedicated spare unit designated for one active unit only. The spare unit is in hot standby state, and all data in the spare unit is always synchronized with the active unit. The spare unit is taken into use immediately if the active unit fails.

Replacement (N+M)
The replacement redundancy scheme, named "N+M", uses M spare units and allocates them to N active units. The spare units are kept in cold standby state. The synchronization of a spare unit is performed during the switchover procedure between the spare unit and an active unit. A higher level fault management system monitors the health of the N active units, and selects one of the M spare units to replace an active unit if it fails.

Load sharing (SN+)
Load sharing, called SN+, employs a resource pool concept. A group of units forms a resource pool. The number of used units in the pool is defined so that a certain amount of extra capacity is left in the pool. Faulty units are disabled in the resource pool. The whole group of units can still perform its designated functions if a few units in the pool are disabled because of faults. A higher level module performs the load distribution and also maintains the health status of the hardware units. If one of the load sharing modules fails, the higher level module starts distributing the load among the rest of the units. There is a graceful degradation of performance with hardware failure.

A compact sketch contrasting the three schemes follows.
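A compact way to contrast the three schemes is to model what happens when an active unit fails. The sketch below is illustrative only (function names and example unit names are assumptions), not mcRNC recovery software:

```python
# Illustrative comparison of the three redundancy schemes described above.

def fail_2n(active, hot_standby):
    """Duplication (2N): the synchronized hot-standby unit takes over immediately."""
    return hot_standby

def fail_n_plus_m(failed, spares):
    """Replacement (N+M): pick a cold-standby spare; it is synchronized during switchover."""
    if not spares:
        raise RuntimeError("no spare units left")
    return spares.pop(0)

def fail_sn_plus(failed, pool):
    """Load sharing (SN+): disable the faulty unit and redistribute load over the rest."""
    remaining = [u for u in pool if u != failed]
    load_share = 1.0 / len(remaining)
    return remaining, load_share

# Example usage with hypothetical unit names
print(fail_2n("CFPU-0 (active)", "CFPU-1 (hot standby)"))
print(fail_n_plus_m("CSPU-3", spares=["CSPU-spare-0"]))
print(fail_sn_plus("USPU-2", pool=["USPU-0", "USPU-1", "USPU-2", "USPU-3"]))
```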
mcRNC Functional units
Control Plane and User Plane
• CSCP - Cell Specific functions and services in the Control Plane
• USCP - UE Specific functions and services in the Control Plane
• CFCP - Centralized Functions and services in the Control Plane
• CSUP - Cell Specific functions and services in the User Plane
• USUP - UE Specific functions and services in the User Plane. This includes the dedicated and shared channel services since they are relevant for a UE.

Transport Plane
• SITP - Signaling Transport Plane
• EITP - External Interface functions in the Transport Plane

Management Plane
• OMU - Operation and Maintenance Unit for the Management Plane
USPU: UE Specific Processing Unit
• Contains USCP and USUP
• Co-located user and control planes for UE specific services
• Redundancy: SN+ (load shared)
This processing unit implements all UE-specific control and user plane processing. Further, all dedicated control and user plane resources for a single UE are allocated from the same USPU unit, as long as the resource management policies permit. Overload handling and shared channel processing optimization require some communication between the USPUs, but this is minimal and is mostly limited to control message exchange only. Otherwise, the USPU units are mostly independent of one another. This design narrows the scope of UE-related software bugs and protects cell processing from them. Additionally, it makes the implementation of SN+ redundancy features, like moving UE specific processing from one processor to another, simpler.

USUP
• Handles DCH, HS-DSCH and E-DCH channels
• Hosts RTP, RTCP
USCP
• Handles connection oriented protocols
• Localized user plane resource manager
SCTP is optional; IP is used only by Flexi PF, not by RNC applications.
CFPU: Centralized Functions Processing Unit
• Contains OMU and CFCP
• USSR terminates IP for the management plane
• Hosts critical services
• Redundancy: 2N
The Centralized Functions Processing Unit (CFPU) consists of OMU and CFCP. The Operation and Management Unit (OMU) performs the basic system maintenance functions such as hardware configuration, the alarm system, configuration of signaling transport and centralized recovery functions. It also contains cellular network related functions such as radio network configuration management, radio network recovery and the RNW database. All the functions that require the 2N type of redundancy are located in the CFPU, as it is the only 2N (hot standby) redundant processing unit. In addition to existing functionality from earlier releases, all the location services related functions requiring 2N redundancy or centralized processing, like accounting of simultaneous on-going location related procedures in the whole network element, are located in the CFCP part of the CFPU. The USSR (User Specific SE for RNC O&M) in the CFPU terminates the external Ethernet interface needed for management plane operations. Management connections (ssh) and the connection to the OMS go through this interface. It runs in the Simple Executive (SE) domain.

OMU
• Basic system maintenance functions
• CM, FM, PM, HW and SW management
• Hosts the RNW database
• Plan management
CFCP
• LCS services, Iu-PC, SABP
• Centralized information maintenance
• Connectionless protocols including paging
CSPU: Cell Specific Processing Unit
• Contains CSCP and CSUP
• Co-located user and control planes for cell specific services
• Redundancy: N+M
This processing unit implements all cell-specific control and user plane processing. Further, all control and user plane resources for a single BTS are allocated from the same CSPU unit. Therefore CSPU units are completely independent of one another, and different CSPUs might not have mutual communication at all. The communication between CSCPs in different CSPUs is limited to the exchange of information on their own state rather than delegating processing of Radio Layer functionality. The unit uses N+M (M >= 1) redundancy. Allocation of BTSs under the control of specific CSPUs is controlled by the OMU. The same functionality in the OMU also allows graceful reallocation of BTSs one by one from one CSPU to the control of different CSPUs. Although each cell in turn is brought down for a moment during the operation, the feature provides a quite seamless shutdown and replacement of one mcRNC hardware unit (a small sketch of this reallocation loop follows).

CSUP
• Handles common channels and BTSs
• Resources for a BTS are allocated from the same unit
CSCP
• Handles NBAP, RRC-c and RRC-s
• Admission control, load control and packet scheduler
IP is used only by Flexi PF, not by RNC applications; SCTP is optional.
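The graceful reallocation described above can be pictured as a simple loop driven by the OMU: BTSs are moved one by one so that only one BTS (and each of its cells in turn) is briefly out of service. The sketch below is an illustration under that assumption, with made-up names, not the actual OMU implementation:

```python
# Illustrative sketch of graceful BTS reallocation before shutting down a CSPU.
def reallocate_btss(source_cspu, target_cspus, bts_list):
    """Move BTSs one by one from source_cspu to the least-loaded target CSPU."""
    load = {cspu: 0 for cspu in target_cspus}
    for bts in bts_list:
        target = min(load, key=load.get)          # pick the least-loaded target CSPU
        print(f"moving {bts} from {source_cspu} to {target} (cells briefly down)")
        load[target] += 1
    print(f"{source_cspu} is now empty and can be taken out of service")

reallocate_btss("CSPU-1", ["CSPU-2", "CSPU-3"], ["BTS-101", "BTS-102", "BTS-103"])
```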
EIPU: External Interface Processing Unit
• Transport Network Layer unit
• Handles incoming packets
• Contains SITP and EITP
The External Interface Processing Unit (EIPU) hosts the networking and transport stacks needed for processing both signaling and user plane data. It also handles the load balancing and distribution to other units. It consists of two functional units - the Signalling Transport Plane (SITP) and External Interface Transport Plane (EITP).
mcRNC Example Configuration

The physical units are identical; the functional units are SW definable with the following principles (a sketch of this allocation follows the figure):
• Cell specific services processing (CSPU): the number depends on the coverage/connectivity (N+M)
• External interface processing (EIPU): two per module for transport processing (1+1)
• Centralized functions processing (CFPU): mandatory for centralized functions, one card in each of modules 1 and 2 (2N)
• User specific services processing (USPU): the rest of the processors (violet colour in the figure) are shared between user specific UP and CP by SW (SN+)
Example card layout (figure, two-module configuration): each of the two modules hosts an HDU (AMC hard disk) and eight add-in cards - CFPU, CSPU, USPU, EIPU, CSPU, USPU, USPU, EIPU.
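Following the allocation principles listed above (two EIPU cards per module, one CFPU card in each of the first two modules, a CSPU count driven by coverage/connectivity, and the remaining cards used as USPU), here is a rough sketch of how a configuration could be laid out. It is purely illustrative; the real allocation is performed by the product software:

```python
# Illustrative allocation of the 8 add-in cards per BCN module, following the
# principles above: 2x EIPU per module, 1x CFPU in modules 1 and 2 (2N pair),
# a configurable number of CSPU cards, and the rest as USPU.
CARDS_PER_MODULE = 8

def layout(num_modules, cspu_per_module=2):
    config = {}
    for m in range(1, num_modules + 1):
        cards = ["EIPU", "EIPU"]
        if m <= 2:
            cards.append("CFPU")
        cards += ["CSPU"] * cspu_per_module
        cards += ["USPU"] * (CARDS_PER_MODULE - len(cards))
        config[f"module-{m}"] = cards
    return config

for module, cards in layout(num_modules=2).items():
    print(module, cards)
```

With two modules and two CSPU cards per module, this reproduces the card mix shown in the example figure (one CFPU, two CSPU, three USPU and two EIPU cards per module).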
mcRNC Example Configuration, cont.

Two bays for standard AMC cards are provided in each module. Typically the controller module's hot swappable hard disk is installed in an AMC slot. In the future it is possible to use other kinds of AMC devices, expanding the controller module functionality. The CFPU and hard disk (HDU) are present only in the first two modules. It is possible to connect hard disks from different controller modules using the SAS connectors dedicated for this purpose. This solution enables processors in one controller module to access a hard disk located in another controller module.
mcRNC Data Flows
Scenario: Selecting a CSPU for a BTS
• A BTS object is added to the RNW DB
• The BTS handler chooses the next available CSCP by round robin
  - The eligible list is maintained based on existing load
  - A unit in overload mode can ask to be made ineligible
• The CSCP uses its own CSUP in the same processor for user plane resources
  - All resources needed for a BTS are provided from the same processing unit
• The Transport Resource Manager selects an EIPU
  - Configures it with the address and port information for the newly added BTS and the address of the selected CSCP
  - The distribution table in the EIPU is updated

Scenario: Selecting a USPU for a call
• A new RRC Connection Request comes to the CSPU
• The USCP to handle the call is chosen by round robin with USPU load information to ensure sufficient resources for both CP and UP
  - The co-located USUP handles the user plane resources
A simplified sketch of this round-robin selection logic follows.
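The selection logic in both scenarios is essentially a round-robin pick over the eligible units, where overloaded units can ask to be left out. The sketch below is a simplified illustration with invented class and unit names, not the actual RNC resource manager:

```python
# Simplified illustration of the selection logic described above: a round-robin
# pick over eligible units, where overloaded units have asked to be left out.
from itertools import cycle

class UnitSelector:
    def __init__(self, units):
        self.units = units                 # e.g. ["CSPU-0", "CSPU-1", "CSPU-2"]
        self.overloaded = set()            # units that asked to be made ineligible
        self._rr = cycle(units)

    def mark_overloaded(self, unit, overloaded=True):
        (self.overloaded.add if overloaded else self.overloaded.discard)(unit)

    def next_eligible(self):
        for _ in range(len(self.units)):
            unit = next(self._rr)
            if unit not in self.overloaded:
                return unit
        raise RuntimeError("no eligible unit available")

# BTS added to the RNW DB: the BTS handler picks a CSCP (and its co-located CSUP)
cspu_selector = UnitSelector(["CSPU-0", "CSPU-1", "CSPU-2"])
cspu_selector.mark_overloaded("CSPU-1")
serving_cspu = cspu_selector.next_eligible()
print("BTS handled by", serving_cspu)      # the EIPU distribution table would then be updated

# New RRC Connection Request: a USCP (and its co-located USUP) is chosen the same way
uspu_selector = UnitSelector(["USPU-0", "USPU-1", "USPU-2", "USPU-3"])
print("call handled by", uspu_selector.next_eligible())
```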
IuCS User Plane data flow
The figure shows the path of AMR and CS data traffic through the system.

Downlink data processing
When a packet arrives, the EITP in the EIPU terminates the network and transport layer protocols: IP, IPsec (if configured) and UDP. The application layer TNL protocols RTP and RTCP (if used) are terminated in the USUP. These protocols take care of real time transport issues and the control of call QoS. IuUP is used to decouple the service related user data characteristics from the underlying transport protocols and is used in the support mode. It is also terminated in the USUP since it belongs to the Radio Network Layer, serves to adapt the transport layers, and needs to interact closely with the user plane. After the processing and adaptation needed for the air interface, the data frames are sent to the EITP of the EIPU that serves the BTS, where the transport and network layer functions are located. The centralized scheduling of data is enforced to ensure that the transport functions can evolve independently and are localized to the transport plane unit only. If the UE is in SHO mode, the data is copied to multiple links by the FP layer.

Uplink data processing
When a packet arrives, the EITP in the EIPU terminates the network and transport layer protocols: IP, IPsec (if configured) and UDP. The frames are forwarded to the respective USPU unit using the internal transport, and MDC is performed in the USUP. The data is forwarded to the Iu interface after the required RNL processing, through the EITP. The per-unit processing stages are summarized in the sketch below.
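The downlink chain above can be summarized as an ordered list of processing stages and the unit that performs each of them. The sketch is only a reading aid for the text above, with illustrative naming:

```python
# Reading aid for the IuCS downlink description above: which unit handles which
# protocol layer as a packet travels from the Iu-CS interface towards the BTS.
IUCS_DOWNLINK = [
    ("EIPU/EITP", "terminate IP, IPsec (if configured) and UDP"),
    ("USPU/USUP", "terminate RTP/RTCP (if used) and IuUP, adapt data for the air interface"),
    ("USPU/USUP", "frame protocol (FP); copy data to multiple links if the UE is in SHO"),
    ("EIPU/EITP", "transport and network layer functions towards the BTS serving the UE"),
]

for unit, action in IUCS_DOWNLINK:
    print(f"{unit:10s} -> {action}")
```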
IuPS NRT DCH and HS-DSCH data flow
The figure shows the path of HSPA traffic through the RNC.

Downlink
The data processing is similar to PS over DCH and the protocols used are identical. The only difference is that the SHO mode of the UE is not applicable to HSDPA traffic and the data is sent through one carrier only.

Uplink
The uplink processing is similar to the PS over DCH scenario for both E-DCH and DCH uplink channels. The MDC is performed in the USUP.
IuPS NRT traffic over CCH data flow
The figure shows the path for PS data sent over common channels.

Downlink
The data path for the transfer over FACH follows the same principles as discussed for PS data. The only difference is in the MAC scheduling: the MAC-c scheduler and the associated FP are involved after the MAC-d processing is completed.

Uplink
The data path for the transfer over RACH involves the MAC-c in the CSUP and then the MAC-d in the USUP. The other parts are similar to the PS data transfer over DCH.
CS Data flow comparison with IPA2800 RNC

(Figure: the IPA2800 RNC units involved - NIU (NPS1(P) and NPGE(P)), ICSU, MXU, DMCU, SFU, RSMU, SWU, TBU, EHU, OMU, HDD/WDU and PDU - with an ATM Iub on one side, an IP Iu-CS on the other, and the OMS either standalone or integrated.)

The picture shows CS user data flow involving ATM based Iub and IP based Iu-CS. DCH is used and AAL2 switching of
Control Plane comparison (figure)
• DCH: the IPA2800 RNC units involved are NPS1, SFU, MXU, DMCU (DSP) and ICSU; the corresponding mcRNC path is Ethernet switch - EIPU - internal switch - USPU.
• CCH: the IPA2800 RNC units are the same, ending in the ICSU; the mcRNC path is Ethernet switch - EIPU - internal switch - CSPU.
Data Plane comparison (figure)
• AMR: the IPA2800 RNC units involved are NPS1, SFU, MXU and DMCU (DSP) on both the ingress and egress sides; the mcRNC path is Ethernet switch - EIPU - internal switch - USPU - internal switch - EIPU - Ethernet switch.
• NRT (DCH): the same units and paths apply as for AMR.
Data Plane comparison (figure), cont.
• NRT (CCH): the IPA2800 RNC path involves NPS1, SFU and MXU with two DMCU (DSP) stages; the mcRNC path is Ethernet switch - EIPU - internal switch - CSPU - internal switch - USPU - internal switch - EIPU - Ethernet switch.
mcRNC Basic Site Solutions and Backplane Connections
Basic mcRNC site solution
BCN-A example for SFP port numbering.
Reference configurations are based on the following network elements:
• Multicontroller RNC 2.0 (capacity step 1) - feature RAN2440: Fast IP Rerouting is required
• OMS (RNC OMS 1.0 6.10)
• NetAct (OSS5.4 CD Set1)
• Symmetricom TP5000 IEEE1588 master clock
  - Single IOC module
  - 2x 100/1000Base-T SFPs
  - Suitable synchronization source (e.g. GPS receiver)
mcRNC Capacity Step 1: Step S1-A1 - 2-box configuration
Multicontroller RNC capacity step 1 is the basic configuration and it consists of two Multicontroller RNC modules. In addition to the BCN modules, the site solution includes two Cisco 7600 series routers. Each controller module is connected to one router with two protected, link-aggregated link pairs. The DCN connection also goes through these site routers. In this document the Multicontroller RNC network element is everything inside the yellow box. The Multicontroller RNC Capacity Step 1 configuration consists of the following items:

A two-module Multicontroller RNC network element with DC power includes the following items:
• 2 * Multicontroller RNC basic module
• 4 * DC power module
• 2 * AMC HDD module
• 2 * SFP+ Direct Attach cable
• 2 * BAMF-A

A two-module Multicontroller RNC network element with AC power includes the following items:
• 2 * Multicontroller RNC basic module
• 4 * AC power module
• 2 * AMC HDD module
• 2 * SFP+ Direct Attach cable
• 2 * BAMF-A
mcRNC Capacity Step 5: Step S5-A1 - 6-box configuration
Multicontroller RNC capacity step 5 consists of the basic NE configuration plus 4 additional type mc02 modules. The Multicontroller RNC Capacity Step 5 configuration consists of the following items:

A six-module Multicontroller RNC network element with DC power includes the following items:
• 6 * Multicontroller RNC basic module
• 12 * DC power module
• 2 * AMC HDD module
• 15 * SFP+ Direct Attach cable
• 10 * BAMF-A

A six-module Multicontroller RNC network element with AC power includes the following items:
• 6 * Multicontroller RNC basic module
• 12 * AC power module
• 2 * AMC HDD module
• 15 * SFP+ Direct Attach cable
• 10 * BAMF-A
mcRNC Capacity Step 1: Step S1-B2 - 2-box configuration
The Multicontroller RNC Capacity Step 1 configuration consists of the following items:

A two-module Multicontroller RNC network element with DC power includes the following items:
• 2 * Multicontroller RNC basic module
• 4 * DC power module
• 2 * AMC HDD module
• 2 * SFP+ Direct Attach cable
• 2 * BAMF-A

A two-module Multicontroller RNC network element with AC power includes the following items:
• 2 * Multicontroller RNC basic module
• 4 * AC power module
• 2 * AMC HDD module
• 2 * SFP+ Direct Attach cable
• 2 * BAMF-A
mcRNC Capacity Step 3: Step S3-B2 - 4-box configuration