NSI Ireland

Transmission Network Design & Architecture Guidelines
Version 1.3 Draft

Reference: Transmission network design & architecture guidelines
Version: 1.3 Draft
Date: 10 June 2013
Author(s): David Powders
Filed As: Draft Version (1.3)

Status: Approved By: Signature: Date:
Document History

Version   | Date       | Comment
1.0 Draft | 27.01.2013 | First draft
1.1 Draft | 25.02.2013 | Incorporating changes requested from parent operators: resilience, routing, performance monitoring
1.2 Draft | 14.05.2013 | Updated BT TT routing, section 2.3
1.3 Draft | 10.06.2013 | Section 2.3 – E-Lines added to TT design; section 2.6 – updated dimensioning rules; section 2.6.4 – updated policing allocation per class; section 3.x added (site design)

Reference documents

1. (2012.12.27) OPTIMA BLUEPRINT V1.0 DRAFT FINAL.doc
2. Total Transmission IP design - DLD V2 2 (2)[1].pdf
Contents

Document History .......................................................... 2
Reference Documents ....................................................... 2
1.0 Introduction .......................................................... 5
  1.1 Background .......................................................... 5
  1.2 Scope of document ................................................... 5
  1.3 Document structure .................................................. 5
2.0 Proposed Network Architecture ......................................... 6
  2.1 Transmission network ................................................ 7
  2.1 Data Centre solution ................................................ 8
    2.1.1 Physical interconnection ........................................ 8
  2.2 Self build backhaul network ......................................... 9
    2.3.1 Self build fibre diversity ..................................... 12
  2.3 Managed backhaul ................................................... 13
    2.3.1 TT Network contract ............................................ 13
    2.3.3 Backhaul network selection ..................................... 16
  2.4 Backhaul routing ................................................... 16
    2.4.1 Legacy mobile services ......................................... 16
    2.4.2 Enterprise services ............................................ 17
    2.4.3 IP services .................................................... 17
      2.4.3.1 L3VPN structure ............................................ 17
      2.4.3.2 IP service resilience ...................................... 19
  2.5 Access Microwave network ........................................... 20
    2.5.1 Baseband switching ............................................. 21
    2.5.2 Microwave DCN .................................................. 22
    2.5.3 Backhaul interconnections ...................................... 22
  2.6 Network topology & traffic engineering ............................. 23
    2.6.1 Access Microwave topology & dimensioning ....................... 24
    2.6.2 Access MW resilience rules ..................................... 27
    2.6.3 Backhaul & Core transmission network dimensioning rules ........ 28
    2.6.4 Traffic engineering ............................................ 29
  2.7 Network synchronisation ............................................ 35
    2.7.1 Self built transmission network ................................ 36
    2.7.2 Ethernet managed services ...................................... 37
    2.7.3 DWDM network ................................................... 38
    2.7.4 Mobile network clock recovery .................................. 39
      2.7.4.1 Legacy RAN nodes ........................................... 39
      2.7.4.2 Ericsson SRAN – 2G ......................................... 39
      2.7.4.3 Ericsson SRAN – 3G & LTE ................................... 39
      2.7.4.4 NSN – 3G ................................................... 40
  2.8 Data Communications Network (DCN) .................................. 40
    2.8.1 NSN 3G RAN Control Plane routing ............................... 41
    2.8.2 NSN 3G RAN O&M routing ......................................... 41
  2.9 Transmission network performance monitoring ........................ 43
3.0 Site Configuration ................................................... 45
  3.1 Core sites ......................................................... 45
  3.2 Backhaul sites ..................................................... 51
    3.2.1 BT TT locations ................................................ 56
  3.3 Access locations ................................................... 57
    3.3.1 Access sites (Portacabin installation) ......................... 57
    3.3.2 Access site (Outdoor cabinet installation) ..................... 60
Figures

Figure 1 – Proposed NSI transmission solution ............................. 7
Figure 2 – Example data centre northbound physical interconnect .......... 8
Figure 3 – Dublin Dark fibre IP/MPLS network ............................. 10
Figure 4 – National North East area IP/MPLS microwave network ............ 11
Figure 5a – BT Total Transmission network ................................ 14
Figure 5b – NSI logical and physical transmission across the BT network .. 15
Figure 7 – Access Microwave topology ..................................... 21
Figure 8 – Example VSI grouping configuration ............................ 23
Figure 10 – IP/MPLS traffic engineering .................................. 30
Figure 11 – Enterprise traffic engineering ............................... 31
Figure 12 – Downlink traffic control mechanism ........................... 32
Figure 13 – Normal link operation ........................................ 34
Figure 14 – Self built synchronisation distribution ...................... 37
Figure 15 – 1588v2 distribution over Ethernet Managed service ............ 38
Tables

Table 1: Self build fibre diversity ...................................... 12
Table 2: TT Access fibre diversity ....................................... 13
Table 3: List of L3VPN's required ........................................ 18
Table 4: Radio configuration v air interface bandwidth ................... 25
Table 5: Feeder link reference ........................................... 26
Table 6: CIR per technology reference .................................... 26
Table 5: Sample Quality of Service mapping ............................... 30
Table 6: City Area (Max link capacity = 400Mb\s) ......................... 33
Table 7: Non City Area (Max link capacity = 200Mb\s) ..................... 33
Table 7: Synchronisation source and distribution summary ................. 36
Table 8: DCN network configuration per vendor ............................ 41
Table 9: NSI transmission network KPI's and reporting structure .......... 44
Table 10: Core site build guidelines ..................................... 51
Table 11: Backhaul site build guidelines ................................. 56
Table 12: Access site categories ......................................... 57
Table 11: Access site consolidation – No 3PP services in place ........... 63
Table 12: Outdoor cabinet consolidation – existing 3PP CPE on site ....... 64
1.0 Introduction

1.1 Background
The aim of this document is to detail the design and architecture principles to be applied across the Netshare Ireland (NSI) transmission network. NSI, as detailed in the transition document, is tasked with collapsing the existing transmission networks inherited from Vodafone Ireland and H3G Ireland onto a single network carrying all of each operator's enterprise and mobile services. As detailed in the transition document, it is NSI's responsibility to ensure that the network is future proof, scalable and cost effective, with the capability to meet the short term requirements of network consolidation and the long term requirements of service expansion.
1.2 Scope of document

This document details the proposed solutions for the access and backhaul transmission networks and the steps required to migrate from the current separate network configuration to one consolidated network. While the required migration procedures are detailed within this document, the timescales required to complete these works are out of scope.
1.3 Document structure

The document is structured as follows:

Section 2 describes the desired end-to-end solution for the consolidated network and the criteria used to arrive at each design decision.

Section 3 covers the site design and build rules.
2.0 Proposed Network architecture

As described in section 1.1, NSI is required to deploy and manage a transmission network which is future proof, scalable and cost effective. As services, particularly mobile, move to an all-IP flat structure, it is important to ensure that the transmission network evolves to meet this demand. Traditionally, transmission networks and the services that ran across them were linked in the sense that the service connections followed the physical media interconnections between the network nodes. For all-IP networks, where any-to-any logical connections are required, it is essential that the transmission network decouples the physical layer from the service layer. For NSI Ireland the breakdown between the physical and service layers can be described as:

Physical media layer
1. Tellabs 8600 & 8800 multiservice routers
2. Ethernet Microwave (Ceragon / Siae)
3. Dark Fibre (ESB / Eircom / e|net)
4. Vodafone DWDM (Huawei)
5. SDH (Alcatel-Lucent / Ericsson)
6. POS / Ethernet (Tellabs)
7. Managed Ethernet services (e|net, UPC, ESBT, Eircom)
8. BT Total Transmission network (TT)

Service layer
o IP/MPLS (Tellabs / BT TT)
o L2 VPN (Tellabs / BT TT)
o E-Line (Ceragon / Siae)
o TDM access (Ceragon / Siae / Ericsson MiniLink)
By decoupling the physical media layer from the service layer, NSI has the flexibility to modify one layer without impacting the other. Routing changes throughout the network are therefore independent of the physical layer once it is established. In the same way, changes in the physical layer, such as new nodes or bandwidth changes, are independent of the service routing. This in turn ensures that transmission network changes requiring 3rd party involvement are restricted primarily to the physical layer and, once that layer is established, such changes should be minimal. While seamless MPLS from access point through to the core network is possible, for demarcation purposes the NSI transmission network will terminate at the core switch (BSC / RNC / SGw / MME / Enterprise gateway).
2.1 Transmission network

Figure 1 details the proposed solution for the NSI transmission network.

[Figure 1 Proposed NSI transmission solution – diagram not reproduced. Recoverable annotations: the core data centres (e.g. DN680 – VF Clonshaugh) house dual Tellabs 8860 routers with nx10G LACP interconnects towards the other data centres; the BT TT network is configured for L2 point-to-point circuits to each of the CDC locations, with dual nodes at the data centres available to load balance traffic from the distributed BPOP locations; the Netshare IP/MPLS network carries L3VPN's configured for each of the service types from each of the operators; access clusters of Ceragon and Siae microwave links aggregate onto Tellabs 86xx routers, with each VLAN forming a broadcast domain, MAC learning enabled throughout the cluster and no E-Lines in use; H3G service VLANs are UP VID 3170 (172.17.x.x), CP VID 3180 (172.18.x.x), O&M VID 3190 (172.19.x.x), TOP VID 3200 (172.20.x.x) and Ceragon O&M VID 3210 (172.21.x.x).]

To explain the proposed transmission solution in detail, the network will be broken into the following areas:
Data centre Northbound interfaces
Self build backhaul
Managed backhaul
Backhaul routing
Access Microwave network
Network QoS & link dimensioning
DCN
Network synchronisation
2.1 Data Centre solution

2.1.1 Physical interconnection

VFIE and H3G operate their respective networks based on a consolidated core. All core switching (BSC's, RNC's, EPC's, master synchronisation, DCN, security) for both H3G and VFIE is located in Dublin across 4 data centres. They are:

1. CDC1 – DN680 – Vodafone, Clonshaugh (VFIE)
2. CDC2 – DN706 – BT, Citywest (VFIE)
3. CDC3 – DN422 – Data Electronics, Clondalkin (VFIE)
4. CDC4 – DNxxx – Citadel, Citywest (H3G)

Figure 2 below details the possible northbound connections at each data centre.
Figure 2 Example data centre northbound physical interconnect

NSI will deploy 2 x Tellabs 8800 multiservice routers (MSR's) at each of the data centres. Two routers are required to ensure routing resilience for the customer traffic. The 8800 hardware will interface directly at 10Gb\s, 1Gb\s and STM-1 with the core switches, DCN and synchronisation networks for both operators. Each of the data centres will be interconnected using n x 10Gb\s rings. RSVP LSP's are not supported on interfaces in a Link Aggregation Group (LAG) in the current 8800 release, so multiple 10Gb\s rings can be used to transport traffic from both operators. In the first deployment 1 x 10Gb\s ring will be deployed, which can be upgraded as required. Consideration was given to a meshed MPLS core; however, the n x 10Gb\s ring was deemed to be technically sufficient and more cost effective. This design may be revisited in the future based on capacity, resilience and expansion requirements. Interfacing to the out of band DCN (mobile and transmission networks) and synchronisation networks will be realised through 1Gb\s interfaces. All interfaces to legacy TDM and ATM systems are achieved through the deployment of STM-1c and STM-1 ATM interfaces. Physical and layer 3 monitoring is active on all trunk interfaces, so in the event of a link failure all traffic is routed to the diverse path and the required status messaging and resilience advertisements are propagated throughout the network. These will be explained in detail in each of the sections dealing with service provisioning.
2.2 Self build backhaul network

Self build refers to network hardware and transmission links that are within the full control of NSI in terms of physical provisioning. The self build backhaul network interconnects the aggregation sites and the core data centre locations via a mix of Ethernet, Packet over SDH (POS) and SDH point to point trunks. The service layer will be IP/MPLS based on the Tellabs 8600 and 8800 MSR hardware. Figures 3 and 4 are examples of the proposed network structure.
[Figure 3 diagram ("Dublin Dark Fibre v1.0") not reproduced. Recoverable annotations: the Dublin dark fibre topology comprises a core Dublin ring at STM16 (future 10G / nx10G / 40G) running ISIS L2-only, or L1-2 between routers in the same location; PoC2 connections over GE (future 10G) as ISIS L1-2 intra-area links or L2-only inter-area links; PoC3 connections over GE (future subrate or line-rate 10G) as ISIS L1-only links; synchronisation priorities 1 to 3; ISIS areas 49.0031, 49.0032 and 49.0033; router loopbacks in 172.25.x.y and /30 trunk addressing from the 10.82.0.0 and 10.82.10.0 ranges.]

Figure 3 - Dublin Dark fibre IP/MPLS network

In the network, dark fibre from Eircom, ESB and e|net will be used as the physical media interconnect. Interconnections, based on aggregation requirements, will be at speeds of 1Gb\s, 2.5Gb\s (legacy) or 10Gb\s. A hierarchical ISIS structure of rings will be used to simplify the MPLS design. The Level 2 areas will be connected directly to the core data centre sites, with Level 1 access rings used to interconnect traffic from the access areas. The L2 access areas will have physically diverse fibre connections to 2 of the data centres. Physically diverse LSP's are routed from the L2 aggregation routers to each of the data centres, facilitating diverse connectivity to each of the core switching elements. This provides protection against a single trunk or node failure. The access rings will have diverse fibre connectivity to a L1/2 router which will perform the ABR function. Within each access ring, diverse LSP's will be configured to the ABR or ABR's, providing access route resilience against a single fibre break and/or node failure. RSVP LSP's with no bandwidth reservations will be used to route the LSP's across the backhaul network. All LSP's will use a common Tunnel Affinity Template. This provides the flexibility to re-parent traffic to alternative trunks without a traffic intervention should that be required.
It is proposed to use a combination of strict and loose hop routing across the network. The working path should always be associated with the strict hop, with the protection assigned to either a strict or loose hop. For those LSP's routed over the microwave POS trunks, strict hops will be used to ensure efficient bandwidth management. For those routed across dark fibre or managed Ethernet, loose hops will be used. In a mesh network, where multiple physical failures and multiple paths are possible, this approach offers a greater level of resilience.

[Figure 4 diagram not reproduced. Recoverable annotations: the North East area microwave topology interconnects collector routers such as KECAP200, MHWD1200, MHSKE200, MHFKS200, MH009200, LH001200, LH011200, LH038200, LHDDK200, WHLIN200, CNSGA200 and CNMCR200 towards DNBW1200, DN680200 and DN706201, with router loopbacks in 172.25.128.x / 172.25.0.x and /30 trunk addressing in the 10.82.128.0 and 10.82.129.0 ranges.]
Figure 4 - National North East area IP/MPLS microwave network

Figure 4 details a sample of the self built backhaul network routed over N+0 SDH microwave links and rings. In this situation LSP's are routed over multiple hops to the data centres and all routers will be added to the Level 2 area. In order to ensure that traffic is correctly balanced across the SDH trunks, RSVP LSP's will be routed statically, giving NSI a greater level of control over the bandwidth utilisation. LSP's from each collector will be associated with a particular STM-1 and routed to the destination accordingly. Traffic aggregating at each collector is then associated with a particular LSP.

NOTE: The transition document states that the National SDH microwave network should be replaced by NSI with the BT TT network (see section 2.1.2) or a national DF network. However, as this will take time and consolidated sites are required nationally in the short term, the network described in Figure 4 will be utilised over the short to medium term.
2.3.1 Self build fibre diversity

The table below details the physical diversity requirements for fibre based on traffic aggregation in the transmission network. Note that in some cases, where the capital cost to provide media diversity over fibre is prohibitive, Microwave Ethernet will be considered as a medium term alternative. While the microwave link will for the most part be of a lower capacity than the primary fibre route, the degradation of service during a fibre outage may be acceptable for short periods in order to maximise fibre penetration.

Aggregation level | Diversity            | Comments
<5                | Single fibre pair    | No diversity
5 ≤ x ≤ 9         | Flat ring            | Two fibre pairs sharing the same duct
>9                | Fibre duct diversity | 5m fibre separation to the aggregation router

Table 1: Self build fibre diversity

Note that the above table details the desired physical separation. In some cases the physical separation may not be physically possible and a decision on the aggregation level will be made based on other factors such as location, security, landlord, antenna support structure and cost.
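The Table 1 rules lend themselves to a simple planning check. The following is an illustrative sketch only, and assumes the aggregation level is expressed as the number of sites aggregated behind the fibre connection.

def fibre_diversity_requirement(sites_aggregated):
    """Return the Table 1 physical diversity requirement for a self build
    fibre connection, based on the number of sites aggregated behind it."""
    if sites_aggregated < 5:
        return "Single fibre pair (no diversity)"
    if sites_aggregated <= 9:
        return "Flat ring (two fibre pairs sharing the same duct)"
    return "Fibre duct diversity (5m fibre separation to the aggregation router)"

print(fibre_diversity_requirement(3))   # Single fibre pair (no diversity)
print(fibre_diversity_requirement(12))  # Fibre duct diversity (...)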
2.3 Managed backhaul

Managed backhaul refers to point to point or network connections facilitated by an OLO over which NSI will transport traffic. In this case the OLO will provision the physical transmission interconnections. Presently VFIE use Eircom and e|net as managed service vendors; VFIE have individual leased lines from each of these vendors providing point to point fixed bandwidth connections. H3G to date have used BT as their backhaul transmission vendor, where all traffic from the access network is routed across the national BT Total Transmission (TT) network.
2.3.1 TT Network contract

The BT TT contract allows H3G to utilise up to 70Gb\s of bandwidth across a possible 200 "collector" or aggregation locations. Presently BT has configured multiple L3VPN's across the TT to route traffic between the collector locations and the data centre site at Citadel (Citywest). BT deployed 2 x SR12 (ALU) routers at Citadel to terminate all of the traffic from the possible 200 locations. H3G can interconnect from their access network at a BT GPOP onto a collocated SR12, or at an APOP. At an APOP, BT deploy an Alcatel-Lucent (ALU) 7210 node and extend the TT to this point. The physical resilience from the GPOP to the APOP depends on the traffic to be aggregated at the site. See Table 2.
Collector type | Sites aggregated | Physical resilience | Comments
Small          | ≤5               | None                |
Medium         | >5               | Flat ring           |
Large          | >5               | Diverse fibre duct  |

Table 2: TT Access fibre diversity
Figure 5a details the configuration of the BT TT solution.
Figure 5a – BT Total Transmission network

Because BT route traffic to and from the collector points over L3VPN's, they must be involved in the provisioning process for every RBS across the network. As described in section 2.0, it is proposed to separate the physical interconnection of sites from the service provisioning for NSI. To achieve this across the TT, NSI must use the TT to replicate physical point to point connections across the backhaul network. It is proposed to change the BT managed service from a layer 3 network to a layer 2 network and replicate the approach taken in the self built network. The end result is that the provision of services across the NSI backhaul network is consistent regardless of the underlying physical infrastructure (self built or managed). In order to replicate the self built architecture and utilise the BT TT contract it will be necessary to extend the TT network to a second data centre. It is proposed to extend the BT TT to the VFIE data centre in Clonshaugh. At Citadel and Clonshaugh the TT will interconnect with the 8800 network on n x 10Gb\s connections. While it is not necessary to deploy 2 x SR-12 TT routers at the data centres, due to the path resilience employed, it will be useful in terms of load balancing and future bandwidth requirements. As with the self build design, resilience will be achieved through physical path diversity to diverse data centre locations from each of the BT GPoP's. Figure 5b illustrates the physical and logical connectivity across the BT TT.
[Figure 5b diagram not reproduced. Recoverable annotations: E-Lines run from the BT collectors (Alcatel 7210) to the BT IP GPoP's (HPD 1, HPD 2, Ballymount) at 1Gbit/s; at the GPoP's, ADVA XG210 units hand off at 1G/10G onto 10Gb\s NSI MPLS trunks towards the Clonshaugh and Citadel data centres; NSI MPLS ABR's (L1/2) carry primary and secondary LSP's across the BT Total Transmission core; Symmetricom TP500 units provide synchronisation (SyncE priority 1 and 1588v2 priority 1) over 1Gb\s interfaces.]
Figure 5b – NSI logical and physical transmission across the BT network

VLAN trunks over E-Lines are configured from the collector to the GPOP, over which LSP's are configured to the ABR's using LDP. LDP will facilitate automatic label exchange within the MPLS network and remove the requirement for manual configuration in the access area. In the BT TT network, VLAN trunks over E-Lines are configured from each ABR to one of the parent data centres. RSVP-TE LSP's can be configured across these trunks to any of the data centre facilities in a resilient manner. Dual ABR's are used to ensure hardware resilience for the access areas, where up to 20 collector nodes could be connected to a BT GPOP in this manner.
2.3.3 Backhaul network selection

In some cases NSI will have the option to use either self build dark fibre or managed services to backhaul services from a particular aggregation point. In this case a number of factors must be considered when selecting the network type. They are:

Factor                           | Self build    | Managed      | Comment
Long term bandwidth requirements | High          | Low / medium | For large bandwidth sites dark fibre may offer the more attractive cost per bit
Operational cost impact          | High / Medium | Low          | To reduce the impact on operational expenditure, dark fibre CapEx deals may be more attractive
Surrounding network              | Dark fibre    | Managed      | The transmission network selection should take account of the surrounding backhaul type, to ensure that the interconnecting clusters are optimally routed through the hierarchical structure
2.4 Backhaul routing

Backhaul routing can be split into legacy (TDM/ATM) services, enterprise services and IP services.
2.4.1 Legacy mobile services

Legacy mobile services relate to 2G BTS and 3G RBS nodes with TDM and ATM interfaces. For these services NSI will configure pseudowires (PWE's) across the MPLS network. ATM services will be carried in ATM PWE's, with N:1 encapsulation used for the signalling VC's to reduce the number required. User plane VC's can be mapped into a single PWE. TDM services will be transported using SAToP PWE's. At the core locations, MSP 1+1 protected STM-1 interfaces will be deployed between the 8800 MSR's and the core switches (BSC / RNC). Note: the multichassis MSP feature is not available on the Tellabs 8800 MSR's; therefore the MSP 1+1 protecting ports will be on separate cards. At the access locations, MSP protection for ingress TDM traffic will be configured in the same way on the 8600 nodes. PWE's for legacy services will be routed between the core and collector locations over physically diverse LSP's.
2.4.2 Enterprise services

Similar to legacy services, enterprise services will be routed between the core and collector locations over diverse or non-diverse LSP's based on the customer's SLA. For the most part enterprise services are provided as Ethernet services; in this case Ethernet PWE's will be configured to carry them. A Class of Service (CoS) will be applied to the Ethernet PWE based on the customer's SLA. At the core locations the service will be handed to the customer network over an Ethernet connection with VLAN separation for the individual customers. In the event that multiple customers share the same physical interface, SVLAN separation per customer can be implemented. This will be finalised based on a statement of requirements from the parent operator. TDM services for enterprise customers will be treated the same as the legacy TDM services described in 2.4.1, with STM-1 interfaces used to interface with the core switches.
2.4.3 IP services

2.4.3.1 L3VPN structure

For IP services, L3VPN's will be configured across the MPLS network. All routing information will be propagated throughout each L3VPN using BGP.
The IP/MPLS network will be configured in a hierarchical fashion with route reflectors used to advertise routing within each area. Route Reflectors (RR's) will be implemented in the core area with all Level 2 routers peering to those RR's. The ABR's between the Level 1 and 2 areas will act as the route reflectors for the connected Level 1 areas. This will reduce the size and complexity of the routing tables across the network. For each service a L3VPN will be configured. Because H3G and VFIE use different vendors and have different requirements in the core, the number of L3VPN's required differs slightly. Table 3 details the L3VPN's to be configured across the NSI network.

Parent | L3VPN           | Description                               | Comment
VFIE   | 2G UP           | User plane                                | Separate L3VPN's are configured for each BSC
VFIE   | SIU O&M         | Baseband aggregation switch               |
VFIE   | RNC UP          | 3G user plane                             | Separate L3VPN's are configured for each RNC
VFIE   | SRAN O&M        | SRAN O&M                                  |
VFIE   | Synchronisation | 1588v2 network                            |
VFIE   | Siae O&M        | Ethernet microwave O&M                    |
VFIE   | MiniLink O&M    | O&M for the MiniLink PDH network (SAU-IP) |
H3G    | 3G UP           | User plane                                | A single L3VPN for all RNC's
H3G    | 3G CP           | Control plane                             | A single L3VPN for all RNC's
H3G    | 3G O&M (RNC)    | Operation and maintenance                 | A single L3VPN for all RNC's
H3G    | 3G O&M RBS      | Operation and maintenance                 | A single L3VPN for all RBS
H3G    | TOP             | 1588v2 network synchronisation            |
H3G    | Ceragon O&M     | Ethernet microwave O&M                    |
VFIE   | LTE             | Tbc                                       | Tbc
H3G    | LTE             | Tbc                                       | Tbc

Table 3: List of L3VPN's required
As services are added to the network they will be added as endpoints to the respective L3VPN for that service and parent core node. This is achieved by adding the endpoint interface and subnet to the VPN. Any adjacent network routing required to connect to a network will also be redistributed into the VPN. VFIE use /30 subnets to address the mobile services across the network. This results in a large number of endpoints within each L3VPN; for that reason the networks will be split based on the parent core switch, resulting in an L3VPN for each of the services routed to each of the RNC's/BSC's. For the H3G network, /26 networks are typically used at each of the endpoints. This summarisation significantly reduces the number of endpoints required within each VPN and consequently the number of VPN's. Sections 3 and 4 detail the impacts the proposed design has on each of the operator's existing solutions and the steps, if any, required to migrate to the proposed solution.
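As a rough illustration of the difference the two addressing approaches make to the number of L3VPN endpoints (a sketch only; the aggregate prefix below is an example, not an actual NSI allocation):

import ipaddress

# Example aggregate block, used purely for illustration.
aggregate = ipaddress.ip_network("10.196.0.0/20")

# VFIE-style addressing: one /30 per service endpoint (2 usable hosts each).
vf_endpoints = list(aggregate.subnets(new_prefix=30))

# H3G-style addressing: one /26 per endpoint (62 usable hosts each).
h3_endpoints = list(aggregate.subnets(new_prefix=26))

print(len(vf_endpoints))  # 1024 /30 endpoints would fit in the /20
print(len(h3_endpoints))  # 64 /26 endpoints fit in the same /20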
2.4.3.2 IP service resilience

Transport resilience

Within the backhaul network, IP services will be carried resiliently between the core and collector locations over diversely routed LSP's. It is proposed to use a combination of strict and loose hop routing across the network. The working path should always be associated with the strict hop, with the protection assigned to the loose hop. By configuring the protection on a loose hop, the IGP is allowed to route the LSP between the source and destination. In the event of a failure all traffic will be switched to the protecting LSP, which has been routed between the source and destination via the IGP. In a mesh network, where multiple physical failures and multiple paths are possible, this approach offers a greater level of resilience. Note, as described in section 2.2, in the case where both the main and protecting paths are routed over microwave STM-1 trunks, strict hop routing will be employed for both paths to ensure optimum utilisation of the available capacity.
Router resilience

Within the Level 2 area of the network, dual routers are deployed to ensure resilience at locations aggregating large volumes of traffic. In this case LSP's are routed resiliently from the collector nodes to both routers. In the event of a router failure, traffic will route over the operating router until such time as the second router is operational, after which the routing will return to the initial configuration.
Core switch resilience - VRRP

For all connections to the mobile core, Virtual Router Redundancy Protocol (VRRP) should be used. While the VRRP implementation will differ slightly based on the mobile core vendor and function, the objective is to ensure that the transmission network to the core has full interface and router redundancy. 10Gb\s cross links (with LAG if required) between the 8800 nodes at each data centre location will be implemented to support the router redundancy. For the 8800 nodes, during restart it is possible that the router will advertise the interface addresses to the core switch (BSC/RNC/SGw/MME) before the router forwarding function is re-established. This may result in the temporary "black holing" of traffic. To avoid this scenario a separate connection is required between the routers, with a default route added to each for all traffic. It is proposed that a 10Gb\s link should be used for this also.
2.5 Access Microwave network

The target access microwave network will be based on an Ethernet microwave solution utilising ACM to maximise the available bandwidth. In the existing networks H3G use Ceragon IPx microwave products while VFIE use the Siae Alc+2 and Alc+2e products. While it is envisaged that NSI will tender for one supplier, it is not planned to replace one of the existing networks. The access network solution must be designed so as to ensure both vendors' products, and the services transported across them, interoperate without issue. Figure 7 details a possible configuration of the access network topology utilising both vendors' products.
[Figure 7 diagram not reproduced. Recoverable annotations: an aggregation node (Tellabs 86xx) terminates multiple GigE interfaces, with ELP protection, from chains and rings of Ceragon (Cgn) and Siae microwave nodes forming the access cluster.]
Figure 7 – Access Microwave topology

2.5.1 Baseband switching

For the access network all traffic will be routed at layer 2 utilising VLAN switching at each point. VLAN's will be statically configured at each site on each of the indoor units. For VFIE, unique VLAN's are used to switch traffic from each of the RBS nodes. For H3G, common VLAN's are used for each of the service types switched across the network. They are:

UP VID = 3170
CP VID = 3180
O&M VID = 3190
TOP VID = 3200
Ceragon O&M = 3210
Note: Future developments may result in the deployment of all-outdoor MW radio products in the traditional MW bands and in the E-Band. In this case a cell site router may be deployed at feeder locations to perform the baseband switching function using IP/MPLS routing functions. Should this solution be employed in the future, an additional design scenario will be described and added to this document.
2.5.2 Microwave DCN

All Microwave DCN will be carried in band (this is already the case for the Ceragon network elements). As sites are consolidated and migrated to the consolidated network, it will be necessary to migrate the Siae DCN solution to an in band solution. It is proposed to assign VLAN ID 3000 to the Siae network for DCN.
2.5.3 Backhaul Interconnections

The access network will interface with the backhaul network over multiple GE interfaces. The interfaces can be protected or not depending on the capacity requirements. While LAG is possible on the GE interfaces, the preference will be to use ELP on the access router with interconnected IDU interfaces in an active / active mode. In a situation where more than 1Gb\s is required over the radio link, LAG can be used. The current limitation on the access interfaces is that the interfaces in a LAG on the Tellabs 8600 must be on the same interface module; support for LAG members across modules is a planned feature for release FP4.1 and is planned for deployment in the NSI network in Q2 2014. VSI interfaces will be used to associate common network VLAN's arriving on separate physical interfaces to a common virtual interface. This ensures that the approach used to assign a single subnet per traffic type per cluster can be continued where required. A separate VSI interface will be configured for each service type and added as the endpoint to the required IPVPN. Any static routes required to connect to and from the DCN network will use the VSI interface address. Figure 8 details the operation of the VSI interface.
[Figure 8 diagram not reproduced. Recoverable annotations: Ceragon and Siae radio chains deliver VID 3170 (H3G UP), VID 3210 (Ceragon DCN) and VID 3000 (Siae DCN) over separate GE interfaces into a Tellabs 86xx router, where VSI interfaces group the common VLAN's arriving on the separate physical ports onto common virtual interfaces.]
Figure 8 – Example VSI grouping configuration
2.6 Network topology & traffic engineering

The NSI transition document details the targets for network topology, traffic engineering and bandwidth allocation on a per site basis for each of the mobile networks. In summary they are:

- No more than 1 microwave hop to fibre (facilitated by providing fibre solutions to 190 towns).
- No contention for shared transmission resources (NSI are required to monitor utilisation and ensure upgrade prior to congestion on the transmission network).
- Traffic engineering (CoS, DSCP, PHB) will be assigned equally to each service type from each operator. At a minimum the following will be applied:
  o Voice (GBR)
  o Video/interactive (VBR-RT)
  o Enterprise (VBR-NRT)
  o Data (BE)
- Bandwidth allocation per site:
  o Dublin & other cities (400Mb\s\site)
  o Towns (5 – 10K) (300Mb\s\site)
  o Rural (200Mb\s\site)

This chapter will explain in detail the required Access, Backhaul and Core transmission network dimensioning guidelines and traffic engineering rules to achieve the targets set out in the transition document.
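As a purely illustrative sketch of how these four classes might be represented in planning tooling, the mapping below uses common DiffServ PHB/DSCP conventions; it is not the mapping defined in the Sample Quality of Service table later in this document.

# Illustrative class map only; NSI's actual CoS/DSCP/PHB assignments are
# defined in the Quality of Service mapping table in section 2.6.4.
TRAFFIC_CLASSES = {
    "voice":             {"scheduling": "GBR",     "phb": "EF",   "dscp": 46},
    "video_interactive": {"scheduling": "VBR-RT",  "phb": "AF41", "dscp": 34},
    "enterprise":        {"scheduling": "VBR-NRT", "phb": "AF21", "dscp": 18},
    "data":              {"scheduling": "BE",      "phb": "BE",   "dscp": 0},
}

def classify(service_type):
    """Return the queueing class for a service type, defaulting to best effort."""
    return TRAFFIC_CLASSES.get(service_type, TRAFFIC_CLASSES["data"])

print(classify("voice"))  # {'scheduling': 'GBR', 'phb': 'EF', 'dscp': 46}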
2.6.1 Access Microwave topology & dimensioning

The national access microwave network will be broken into clusters of microwave links connected, over one or multiple hops, to a fibre access point. The fibre access point can be part of the self built or managed backhaul networks but must have the following characteristics:
Long term lease or wholly owned by NSI or one of the parent operators
24 x 7 access for field maintenance
Excellent Line of Sight properties
Facility to house a significant number of microwave antennas
Space to house all the required transmission nodes and DC rectifier systems
No Health and safety restrictions
Before creating a cluster plan, each site in the MW network must be classified under the following criteria;
Equipment support capabilities
Line of sight capabilities – proximity to existing fibre solution
Existing frequency designations
Site development opportunities
Landlord agreements (Number and type of equipment/services permitted under the existing agreements)
Term of agreement
Creating a database as above will allow the MW network planning team to create cluster solutions where a number of sites are associated with a designated head of cluster. As per the transition document, the target topology is one hop to a fibre access point. However this will not always be possible due to one or a combination of the following factors:

Line of Sight
Channel restrictions
Proximity of fibre solutions

Once the topology of the cluster is defined it is necessary to define the capacity of each link within the cluster. For tail links this is straightforward; the link must meet the capacity requirements of the transition document:

Dublin & other cities (400Mb\s\site)
Towns (5 – 10K) (300Mb\s\site)
Rural (200Mb\s\site)
For feeder links, statistical gain must be factored in while still meeting the capacity requirements for each of the individual sites. Table 4 gives examples of existing MW radio configurations and the average air interface speeds available.

Channel bandwidth | Configuration  | Max air interface speed @ 256QAM
14MHz             | Single channel | 85Mb\s
28MHz             | Single channel | 170Mb\s
28MHz             | 2 channel LAG  | 340Mb\s
28MHz             | 3 channel LAG  | 500Mb\s
28MHz             | 4 channel LAG  | 680Mb\s
56MHz             | Single channel | 340Mb\s
56MHz             | 2 channel LAG  | 680Mb\s
56MHz             | 3 channel LAG  | 1.02Gb\s
56MHz             | 4 channel LAG  | 1.34Gb\s
E-Band            | 1GHz channel   | 1Gb\s

Table 4: Radio configuration v air interface bandwidth
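The following is a minimal sketch of how the Table 4 planning figures could be used to pick the smallest radio configuration meeting a required link capacity; the selection logic is illustrative only.

# (channel bandwidth, configuration, max air interface speed in Mb/s at 256QAM)
RADIO_CONFIGS = [
    ("14MHz", "Single channel", 85),
    ("28MHz", "Single channel", 170),
    ("28MHz", "2 channel LAG", 340),
    ("28MHz", "3 channel LAG", 500),
    ("28MHz", "4 channel LAG", 680),
    ("56MHz", "Single channel", 340),
    ("56MHz", "2 channel LAG", 680),
    ("56MHz", "3 channel LAG", 1020),
    ("56MHz", "4 channel LAG", 1340),
    ("E-Band", "1GHz channel", 1000),
]

def smallest_config(required_mbps):
    """Return the first Table 4 configuration whose planning capacity meets
    the required link capacity, or None if nothing in the table is enough."""
    for bandwidth, config, capacity in sorted(RADIO_CONFIGS, key=lambda c: c[2]):
        if capacity >= required_mbps:
            return bandwidth, config, capacity
    return None

print(smallest_config(400))  # ('28MHz', '3 channel LAG', 500)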
Table 5 provides a guide for feeder link configurations based on the number of physical sites aggregated across that link.

Physical sites aggregated | City                     | Urban                    | Rural        | Comments
2                         | P1: E-band; P2: 2x 56MHz | P1: 1x 56MHz             | P1: 1x 56MHz | 3:1 stat gain
3                         | P1: E-band; P2: 2x 56MHz | P1: 1x 56MHz             | P1: 1x 56MHz | 3:1 stat gain
4                         | P1: E-band; P2: 2x 56MHz | P1: 2x 56MHz             | P1: 1x 56MHz | 3:1 stat gain
5                         | P1: E-band; P2: 2x 56MHz | P1: 2x 56MHz             | P1: 1x 56MHz | 3:1 stat gain
6                         | P1: E-band; P2: 3x 56MHz | P1: 2x 56MHz             | P1: 2x 56MHz | 3:1 stat gain
7                         | P1: E-band; P2: 3x 56MHz | P1: E-band; P2: 3x 56MHz | P1: 2x 56MHz | 3:1 stat gain
8                         | P1: E-band; P2: 4x 56MHz | P1: E-band; P2: 3x 56MHz | P1: 2x 56MHz | 3:1 stat gain

Table 5: Feeder link reference
Note that no more than 8 physical sites should be aggregated on any one feeder link. For MW links utilising adaptive code modulation (ACM) it is important that the link is dimensioned so that, at the reference modulation (i.e. the modulation scheme for which ComReg have allocated the max EIRP), it meets the sum of the CIR's from each operator across that link. The total CIR per link is derived from the RAN technologies deployed at the aggregated sites and the CIR per RAN technology.
Service | RAN technology | CIR (Mb\s)
Voice   | 2G             | 1
Voice   | 3G             | 1
Voice   | LTE            | 1
Data    | GPRS           | 1.5
Data    | R99            | 2
Data    | HSxPA          | 15
Data    | LTE            | 20

Table 6: CIR per technology reference
Should restrictions apply in terms of hardware, licensing or topology such that links cannot be dimensioned as per Table 5, the following formula should be used to determine the minimum link bandwidth:

Min feeder link capacity = MAX (∑VFIE CIR + ∑H3G CIR, Max tail link capacity)

where:
∑CIR = total CIR, from each operator, across all sites aggregated on the link
Max tail link capacity = the largest tail link capacity of all sites aggregated across the feeder link

The formula is designed to facilitate the required capacity for each site based on location while at the same time ensuring, where multiple sites are aggregated, that the minimum CIR is available to each site.
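To make the rule concrete, the short sketch below applies it to a hypothetical cluster. The per-technology CIR values are those of Table 6; the site list, category names and helper functions are illustrative assumptions rather than part of the guideline.

```python
# Illustrative sketch: minimum feeder link capacity from per-site CIR sums and
# the tail link targets described above. Site categories/names are hypothetical.

CIR_PER_TECH_MBPS = {  # per operator, per RAN technology (Table 6)
    "2G_voice": 1, "3G_voice": 1, "LTE_voice": 1,
    "GPRS": 1.5, "R99": 2, "HSxPA": 15, "LTE": 20,
}
TAIL_TARGET_MBPS = {"city": 400, "town": 300, "rural": 200}

def site_cir(technologies):
    """Sum the CIR of every RAN technology deployed at a site (one operator)."""
    return sum(CIR_PER_TECH_MBPS[t] for t in technologies)

def min_feeder_capacity(sites):
    """sites: list of dicts with 'category', 'vfie_techs' and 'h3g_techs' keys."""
    total_cir = sum(site_cir(s["vfie_techs"]) + site_cir(s["h3g_techs"]) for s in sites)
    max_tail = max(TAIL_TARGET_MBPS[s["category"]] for s in sites)
    return max(total_cir, max_tail)

# Example: three rural sites, both operators running 3G voice, HSxPA and LTE.
sites = [{"category": "rural",
          "vfie_techs": ["3G_voice", "HSxPA", "LTE"],
          "h3g_techs": ["3G_voice", "HSxPA", "LTE"]} for _ in range(3)]
print(min_feeder_capacity(sites), "Mb/s")  # -> 216 Mb/s (sum of CIRs exceeds the 200 Mb/s tail target)
```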
2.6.2 Access MW resilience rules

The resilience rules for the access MW network are based on the number of cell sites and enterprise services aggregated across the link. 1+1 HSB will be used to protect the physical path.

Collector site | Sites aggregated | Physical resilience | Comments
Small          | ≤5               | None                |
Medium / Large | >5               | 1+1 HSB             |
Note that while LAG can be considered a protection mechanism, allowing the link to operate at a lower bandwidth in the event of a radio failure, NSI will protect the radios in a LAG group using 1+1 HSB to ensure the highest hardware availability for a physical link. NSI will consider LAG for capacity only and 1+1 HSB for protection. The target microwave topology, as described in the transition document, is "1 microwave hop to fibre", which will result in minimal use of 1+1 HSB configurations. However, in the event that this topology is not possible, NSI will implement protection as described above.
2.6.3 Backhaul & core transmission network dimensioning rules

Forecasting data utilisation across mobile networks is unpredictable because the services are relatively new and the technologies are still evolving. The dimensioning rules for the core and backhaul networks will therefore be based, in the first instance, on projected statistical gain. To ensure that the backhaul and core networks are dimensioned correctly for the initial network consolidation, the following criteria will be used:

Network          | Statistical gain                | Action
Backhaul network | Less than 6                     | OK
                 | Greater than 6 and less than 8  | Under review
                 | 8 or greater                    | Upgrade
Core dark fibre  | Less than 8                     | OK
                 | Greater than 8 and less than 10 | Under review
                 | 10 or greater                   | Upgrade

The statistical gain will be based on the average throughputs per technology aggregated and is calculated as follows:

Stat gain = (Total existing service capacity + Forecasted service capacity) / Backhaul capacity

A short illustrative calculation is given after the forecast list below. For the backhaul and core networks the current utilisation will be monitored on a monthly basis, with the statistical gain forecast on an annual basis. This will give rise to programmed capacity upgrades across the backhaul (managed and self build) and core networks. The time to upgrade trunks across these networks is typically between 6 and 24 months depending on the upgrade involved. To facilitate this process the parent companies must provide 12, 24 and 36 month rolling forecasts at least twice yearly. These forecasts must detail, at a minimum:
- Volume deployment per service type per geographic area
- Average throughput per service type
- Max allowable latency per service type
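As a worked illustration of the dimensioning thresholds above, the sketch below computes the statistical gain for a trunk and returns the corresponding action. The thresholds are taken from the table in this section; the traffic figures are invented for the example.

```python
# Illustrative only: applies the statistical-gain thresholds above to a trunk.
def stat_gain(existing_mbps, forecast_mbps, backhaul_mbps):
    return (existing_mbps + forecast_mbps) / backhaul_mbps

def dimensioning_action(gain, network="backhaul"):
    # Thresholds from the table above ("core" = core dark fibre).
    ok_limit, review_limit = (6, 8) if network == "backhaul" else (8, 10)
    if gain < ok_limit:
        return "ok"
    if gain < review_limit:
        return "under review"
    return "upgrade"

gain = stat_gain(existing_mbps=4200, forecast_mbps=2500, backhaul_mbps=1000)
print(round(gain, 1), dimensioning_action(gain))  # -> 6.7 under review
```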
NSI will constantly monitor utilisation versus forecast and feed back to the parent companies. This will ensure that the capacity forecasting processes are optimised over time.
2.6.4 Traffic engineering

As described in section 2.6.2, while all efforts will be made to ensure congestion and contention are minimised across the transmission network, in some cases they will be unavoidable. NSI must ensure, in such circumstances, that both operators have equal access to the available bandwidth. To ensure that this is the case, the following traffic engineering functions must be employed across the transmission and RAN networks:

- QoS mapping
- Shaping
- Policing
- Queue management

Quality of service is used to assign priority to certain services above others. Critical service signalling and GBR services will be assigned the highest priorities, with VBR services assigned lower priority based on the service and/or the technology. There are large variations in the bandwidth requirements for LTE, HSPA, R99 and GPRS. For this reason, if all services were assigned equal priority then, during periods of congestion, the low bandwidth services would be disproportionately impacted, to such an extent that they may become unusable. For that reason, the low bandwidth data services will be assigned a higher priority than those presenting very high bandwidths.
QoS, along with the queue management function, should be designed to ensure that, during periods of congestion, equivalent services from the two operators have equal access to the available bandwidth. Table 5 details the proposed QoS mapping for all mobile RAN services.

Traffic type                                   | DSCP           | L2-pbit | MPLS queue
Signalling, synchronisation, routing protocols | 24,40,48,49,56 | 7       | CS7 (Strict)
Speech                                         | 46             | 6       | EF (Strict)
VBR streaming, GPRS data                       | 32,34,36,38    | 4       | AF4 (WRED)
R99 data                                       | 24,26,28,30    | 3       | AF3 (WRED)
HS data                                        | 18,20,22       | 2       | AF2 (WRED)
Premium Internet access                        | 10             | 1       | AF1 (WRED)
LTE data, Gaming                               | 0,8            | 0       | BE

Table 5: Quality of Service mapping
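The classification step can be illustrated with a minimal lookup based on Table 5. This is a sketch only, not vendor QoS configuration; note that DSCP 24 appears against both the CS7 and AF3 rows of the table, so the sketch keeps it in CS7.

```python
# Minimal sketch (a simple lookup, not vendor QoS configuration):
# maps an incoming packet's DSCP value to the MPLS queue defined in Table 5.
DSCP_TO_QUEUE = {
    **{d: "CS7" for d in (24, 40, 48, 49, 56)},   # signalling / sync / routing
    46: "EF",                                      # speech
    **{d: "AF4" for d in (32, 34, 36, 38)},        # VBR streaming, GPRS data
    **{d: "AF3" for d in (26, 28, 30)},            # R99 data (24 kept in CS7 above)
    **{d: "AF2" for d in (18, 20, 22)},            # HS data
    10: "AF1",                                     # premium Internet access
    0: "BE", 8: "BE",                              # LTE data, gaming
}

def classify(dscp: int) -> str:
    """Return the per-hop-behaviour queue for a DSCP value (best effort if unknown)."""
    return DSCP_TO_QUEUE.get(dscp, "BE")

print(classify(46), classify(34), classify(7))  # -> EF AF4 BE
```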
Traffic engineering across the IP/MPLS network
Figure 10 – IP/MPLS traffic engineering

Figure 10 describes the flow of traffic through the IP/MPLS network. On ingress from the core and access networks, traffic is classified according to the DSCP value and mapped to the required Per Hop Behaviour (PHB) service class. From there it is passed to the egress interface where it is queued and scheduled based on a strict plus weighted fair queue (WFQ) mechanism. GBR services are passed to the strict queue and VBR services are passed to a weighted fair queue where access to the egress interface is controlled based on the service class priority. In times of no congestion all traffic is
passed without delay. In a congested environment, GBR services are passed directly to the egress interface and the VBR services are queued, with access to the egress interface controlled by the weighted fair algorithm. Weighted Random Early Discard (WRED) is used to ensure efficient queue management: packets from data flows are discarded at a pre-determined rate as the queue fills. By doing this the 3G flow control and TCP/IP flow control should slow down, resulting in reduced retransmissions and more efficient use of the available bandwidth.

For enterprise services, policing on ingress will be implemented to ensure the enterprise customer is within the SLA. In such circumstances a CIR and PIR can be allocated to the customer services, with a CBS and PBS also assigned. In this case the two rate three colour marking (trTCM) mechanism will be used to control the flow of enterprise traffic through the network. As illustrated in Figure 11, the CBS allows short bursts above the CIR to remain marked Green, the PBS allows short bursts above the PIR to avoid being discarded, and Yellow marked traffic is the first to be discarded in the event of network congestion.
Figure 11 – Enterprise traffic engineering

Traffic within contract (within the CIR and CBS) will be marked Green; traffic greater than the CIR but within the PIR, including the PBS, will be marked Yellow; all other traffic will be marked Red and discarded. In congestion scenarios the WRED queue management function will discard the Yellow marked packets first.
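The marking behaviour can be illustrated with a minimal colour-blind trTCM sketch in the style of RFC 2698: two token buckets, one at the PIR/PBS and one at the CIR/CBS. The rates and burst sizes in the example are illustrative values, not contracted figures.

```python
# Sketch of a colour-blind two rate three colour marker (RFC 2698 style), as used
# above for enterprise policing. Rates/bursts are illustrative, not contracted values.
import time

class TrTCM:
    def __init__(self, cir_bps, cbs_bytes, pir_bps, pbs_bytes):
        self.cir, self.cbs = cir_bps / 8.0, cbs_bytes   # token rates in bytes/s
        self.pir, self.pbs = pir_bps / 8.0, pbs_bytes
        self.tc, self.tp = cbs_bytes, pbs_bytes          # buckets start full
        self.last = time.monotonic()

    def mark(self, packet_bytes):
        now = time.monotonic()
        elapsed, self.last = now - self.last, now
        # Replenish both buckets, capped at their burst sizes.
        self.tc = min(self.cbs, self.tc + elapsed * self.cir)
        self.tp = min(self.pbs, self.tp + elapsed * self.pir)
        if self.tp < packet_bytes:          # exceeds PIR + PBS
            return "red"                    # discarded
        self.tp -= packet_bytes
        if self.tc < packet_bytes:          # exceeds CIR + CBS but within PIR
            return "yellow"                 # first to be dropped by WRED
        self.tc -= packet_bytes
        return "green"                      # within contract

meter = TrTCM(cir_bps=20_000_000, cbs_bytes=128_000, pir_bps=100_000_000, pbs_bytes=512_000)
print(meter.mark(1500))  # -> green while the burst allowance lasts
```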
Traffic engineering across the Layer 2 Microwave network
Across the microwave network a combination of Shaping, CoS based policing, trTCM and WRED queue management should be used to ensure congestion control and fairness in terms of bandwidth contention.
For downlink traffic, the physical interface from the IP/MPLS network must be shaped to the maximum bandwidth of the radio interface. This is to ensure that egress buffer overflow is not experienced, in particular for large bursts of LTE traffic. For LTE traffic, shaping per VLAN should also be implemented to ensure that tail links, which may be connected to feeder links and be of lower capacity, do not experience buffer overflow. Note: VLAN shaping for LTE must be taken into account when defining the Layer 2 VLAN structure and Layer 3 addressing towards the H3G LTE network.
Figure 12 – Downlink traffic control mechanism

For uplink traffic, shaping should be applied on both the H3G and VFIE RAN nodes. This is to ensure that both operators present the same bandwidth to the transmission network for sharing. Data traffic should be policed on ingress to the access microwave network on a per service level. This ensures that, during periods of congestion, out of policy traffic from each operator is discarded first.
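The shaping principle (both the egress shaping towards the radio and the per-VLAN shaping for LTE) can be sketched as a simple token-bucket shaper. This illustrates the behaviour only; the actual shaping is performed in the router and IDU hardware.

```python
# Sketch of the shaping principle described above: a token-bucket shaper that
# releases queued frames no faster than the radio (or per-VLAN) rate.
from collections import deque

class Shaper:
    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0          # bytes per second
        self.bucket_size = bucket_bytes
        self.tokens = bucket_bytes
        self.queue = deque()

    def enqueue(self, frame_bytes):
        self.queue.append(frame_bytes)

    def dequeue(self, elapsed_s):
        """Return the frames that may be transmitted after `elapsed_s` seconds."""
        self.tokens = min(self.bucket_size, self.tokens + elapsed_s * self.rate)
        sent = []
        while self.queue and self.queue[0] <= self.tokens:
            frame = self.queue.popleft()
            self.tokens -= frame
            sent.append(frame)
        return sent

# Shape a GE hand-off down to a 340 Mb/s radio link (single 56MHz channel, per Table 4).
shaper = Shaper(rate_bps=340_000_000, bucket_bytes=64_000)
for _ in range(100):
    shaper.enqueue(1500)
print(len(shaper.dequeue(elapsed_s=0.001)), "frames released in 1 ms")
```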
As detailed in previous sections, the target bandwidth for RBS sites is 400 Mb/s in the city areas, 300 Mb/s in towns and 200 Mb/s for all others. Tables 6 and 7 detail the proposed policing settings for the two areas.

Data traffic | CIR (per operator) | PIR (per operator) | Comments
GBR services | NA                 | NA                 | No policing - Green
GPRS data    | 1 Mb/s             | Not set            | PIR will not be greater than max link capacity. Out of policy = Yellow
R99 data     | 2 Mb/s             | Not set            | PIR will not be greater than max link capacity. Out of policy = Yellow
HSDPA        | 15 Mb/s            | Not set            | PIR will not be greater than max link capacity. Out of policy = Yellow
LTE          | 20 Mb/s            | 400 Mb/s           | Contracted SLA to operator. Out of policy = Red

Table 6: City area (max link capacity = 400 Mb/s)
Traffic      | CIR (per operator) | PIR (per operator) | Comments
GBR services | NA                 | NA                 | No policing - Green
GPRS data    | 1 Mb/s             | Not set            | PIR will not be greater than max link capacity. Out of policy = Yellow
R99 data     | 2 Mb/s             | Not set            | PIR will not be greater than max link capacity. Out of policy = Yellow
HSDPA        | 15 Mb/s            | Not set            | PIR will not be greater than max link capacity. Out of policy = Yellow
LTE          | 20 Mb/s            | 200 Mb/s           | Contracted SLA to operator. Out of policy = Red

Table 7: Non-city area (max link capacity = 200 Mb/s)
All packets within the CIR and the CBS will be marked Green. For 3G and HS services the PIR is not set beyond the available link capacity, so out of policy packets will be marked Yellow. For LTE traffic, out of policy traffic will be marked Red and discarded. In some cases the sum of both operators' PIR will be greater than the available link capacity, even at maximum modulation. In this case it will be possible for both operators to peak to the maximum available capacity, but not at the same time.
Figure 13 – Normal link operation

Figure 13 details the operation of both policing and queue management on a microwave link. For operator 1, when the traffic presented exceeds the PIR it is marked Red and discarded. Where the sum of both operators' traffic does not exceed the interface PIR but does exceed the available link capacity, the WRED mechanism in the outbound queue will start discarding Yellow marked packets at a predetermined rate based on the queue size. In this instance, the 3G flow control and TCP/IP (LTE) flow control mechanisms will slow down the
data sessions, minimising the number of retransmissions and optimising the use of the available bandwidth. This approach ensures that both operators' GBR traffic is always transmitted, while also ensuring that, in a congested scenario, both operators have fair access to the available bandwidth for each service provided. Note that for the incumbent vendors of Ethernet microwave radio systems, the majority of the deployed links will not support the required hierarchical QoS features. During the consolidation of both networks it will be necessary to swap out that hardware for hardware supporting those functions. A tender process will be run to select one vendor to fulfil these requirements.
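The colour-aware WRED behaviour described above can be sketched as follows; the thresholds and drop probabilities are illustrative, not recommended settings.

```python
# Sketch of the colour-aware WRED behaviour described above: as the queue fills,
# Yellow packets are dropped with increasing probability before Green packets.
import random

def wred_drop(queue_fill, colour,
              profiles={"yellow": (0.30, 0.70, 0.9),   # (min_th, max_th, max_drop_prob)
                        "green":  (0.70, 0.95, 0.1)}):
    """queue_fill is the average queue occupancy (0.0 - 1.0). Returns True to drop."""
    min_th, max_th, max_p = profiles[colour]
    if queue_fill <= min_th:
        return False                                  # below threshold: never drop
    if queue_fill >= max_th:
        return True                                   # above threshold: tail drop
    drop_p = max_p * (queue_fill - min_th) / (max_th - min_th)
    return random.random() < drop_p

print(wred_drop(0.5, "yellow"), wred_drop(0.5, "green"))  # yellow dropped sometimes, green kept
```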
2.7 Network synchronisation

NSI are responsible for managing the quality and distribution of the synchronisation reference clock throughout the mobile network. Table 7 summarises the clock distribution methods that will be implemented for the transmission and mobile networks.

Network / node | Clock distribution source | Comments
PRC | SSU with Rubidium holdover (Symmetricom SSU2000) | Each SSU is configured with redundant source and supply modules. Redundant SSUs are distributed across the data centre locations
Self built backhaul (Ethernet) | Synchronous Ethernet | Synchronous Ethernet with SSM
Self built backhaul (SDH) | SDH trunks | SSM enabled
Self built backhaul (DWDM) | 1588v2 (IPVPN configured for 1588v2 distribution) | TP500 slaves used to recover clock and reference the southbound self built network
Ethernet managed service | 1588v2 (IPVPN configured for 1588v2 distribution) | TP500 slaves used to recover clock and reference the southbound self built network
Self built access microwave (Ethernet) | Synchronous Ethernet & radio interface | Synchronous Ethernet with SSM
Self built access microwave (PDH) | E1 connections and radio interface | For legacy RBS nodes
Ericsson DUW (3G network) | NTP phase synchronisation from NTP server in RNC | Parent RNC is referenced to the PRC and distributes clock via NTP carried over the Iub link
Ericsson SIU-02 | Synchronous Ethernet |
Ericsson DUG (2G) | Legacy E1 | Interfaces connected to the SIU-02
Ericsson DUL (LTE) | NTP phase synchronisation from resilient NTP servers at data centre locations | NTP servers for LTE will be slaves of the SSU2000 nodes
Mixed mode remote radio units (U900 & GSM 900) | DUG synchronised from DUW directly | DUW is synchronised over the NTP network
Mixed mode remote radio units (LTE1800 & GSM 1800) | DUG synchronised from DUL directly | DUL is synchronised over the NTP network from standalone NTP servers
NSN 3G network | 1588v2 slaves (IP VPN 1588v2 packet distribution) | SSU2000 nodes act as servers for the NSN 1588v2 network

Table 7: Synchronisation source and distribution summary
The following sections provide additional details for each of the synchronisation solutions and their applications.
2.7.1 Self built transmission network

Over the self built transmission network, synchronisation will be distributed at layer 1. For the legacy TDM networks, synchronisation will be distributed within the SDH frame with SSM enabled to transmit quality levels and reduce the risk of timing loops in the case of ring topologies. The PDH microwave networks will distribute synchronisation over the E1 and radio interfaces. For the IP/MPLS network, Synchronous Ethernet (SyncE) is the preferred method of synchronisation distribution. Like SDH, SyncE supports SSM and this will be enabled to transmit the clock quality level and reduce the risk of timing loops in the case of ring topologies. The Ethernet microwave indoor units (IDUs) will receive their timing reference using Synchronous Ethernet, with southbound IDUs synchronised over the radio interface.
Figure 14 – Self built synchronisation distribution

It should be noted that in the access microwave network, TDM interfaces are supported for the transport of legacy RAN technologies. In this case, SyncE should be selected as the preferred timing reference for the IDUs, with TDM interfaces retimed where required. TDM (SDH/PDH) interfaces to the backhaul can be used as a valid timing reference for the access microwave but will be selected with a lower priority than that assigned to Synchronous Ethernet. This is to ensure that, as the network migrates to Ethernet-only transmission, no changes are required to the synchronisation configuration of the access network. For both SDH and SyncE, the number of SDH Equipment Clocks (SEC) and Ethernet Equipment Clocks (EEC) between the SSU and the end user should not exceed 20, as per the relevant recommendations governing synchronisation distribution (G.8261 & G.8262).
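A simple planning check for this limit is sketched below; the representation of a synchronisation chain as an ordered list of node types is an assumption made purely for the example.

```python
# Simple planning check for the clock-chain limit mentioned above: counts the
# SEC/EEC equipment clocks between the SSU and the end node and flags chains
# that exceed 20 (G.8261 / G.8262 network limit referenced above).
MAX_EQUIPMENT_CLOCKS = 20

def chain_ok(chain):
    """chain: ordered node types from SSU to end user, e.g. ['SSU', 'EEC', ..., 'RBS']."""
    clocks = sum(1 for node in chain if node in ("SEC", "EEC"))
    return clocks <= MAX_EQUIPMENT_CLOCKS, clocks

ok, count = chain_ok(["SSU"] + ["EEC"] * 12 + ["SEC"] * 3 + ["RBS"])
print(ok, count)  # -> True 15
```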
2.7.2 Ethernet Managed services
For Ethernet managed services it is assumed that the synchronisation source within the 3rd party's network is not a trusted source. NSI will configure an L3VPN to distribute a 1588v2 timing reference from the PRC to the provider edge and recover the reference at that point. From there, synchronisation will be distributed as described for the self built network. 1588v2 synchronisation is independent of the underlying physical network and will ensure that the clock recovered at the provider edge is referenced to the network PRC.
Figure 15 – 1588v2 distribution over Ethernet managed service

2.7.3 DWDM network

For SDH wavelengths the distribution of SDH synchronisation remains valid and so no change is required. However, for Ethernet trunks, while the DWDM nodes do support SyncE, the current installed base does not. In this case 1588v2 will be implemented across the initial deployment and the scenario described in section 2.7.2 will be used, with 1588v2 slaves recovering the reference from the PRC. Note that for future deployment of Ethernet trunks across the DWDM backbone, SyncE will be considered and, where implemented, no 1588v2 clock recovery will be required.
2.7.4 Mobile network clock recovery

As described in Table 7, a number of clock recovery methods are required, dependent on the RAN technology and the RAN vendor. This section describes the clock recovery for each RAN vendor and RAN technology. Note that in all cases, while the mechanism used to recover the timing reference may differ, it must be possible to trace all timing references to the master PRC for the network. This is essential for the correct interoperation of all RAN technologies.
2.7.4.1 Legacy RAN nodes

TDM will be used to synchronise the legacy RAN technologies, namely the legacy 2G systems and the 3G RAN connected via ATM. The legacy RAN technologies will use the E1 connections as their timing reference.
2.7.4.2 Ericsson SRAN – 2G

Ericsson use the SIU-02 as the aggregation device for the baseband connections from their SRAN nodes (DUG, DUW and DUL). The SIU-02 converts the PDH signals from the 2G node (DUG) to a format suitable for transmission over Ethernet to the BSC. The SIU-02 supports synchronisation over Synchronous Ethernet. In this configuration the SIU-02 will be connected to the transmission network via its WAN interface, either directly to a co-sited MPLS router or via Ethernet microwave over a GigE trunk. This connection will be used as the timing reference for the node.
2.7.4.3 Ericsson SRAN – 3G & LTE

The Ericsson 3G (DUW) and LTE (DUL) nodes are synchronised using an NTP network. NTP is similar to 1588v2, with the SRAN core nodes for 3G (RNC) and LTE (SGw) using the Iub and S1 interfaces respectively to transmit the synchronisation phase information required for an accurate timing reference to the PRC. As the timing signals are carried within the respective user planes, separate VPNs for timing distribution are not required. Note: Ericsson will support 1588v2 in future releases of the DUW and DUL software. Once this is the case, a decision should be taken as to the benefit of replacing the existing NTP solution with 1588v2.
2.7.4.4 NSN – 3G

The NSN 3G network nodes can act as 1588v2 slaves and recover the clock from a 1588v2 master. NSI will configure a 1588v2 L3VPN dedicated to the NSN 3G network. The network will be configured as described in section 2.7.2, with the NSN node B recovering the 1588v2 timing reference from the 1588v2 master clock.
2.8 Data Communications Network (DCN)

DCN refers to the distribution of O&M communications between the various management systems and their respective managed elements and networks. All network elements, whether RAN or transmission technologies, require connection to a network or element management platform for performance and configuration management. This section describes, by vendor, the transmission network configuration required to support such communications. Table 8 details the DCN for each vendor's network.

Vendor | Technology | Transmission network solution | Comments
Tellabs | IP/MPLS | In band management | CM & PM are carried in band and connected to the corporate DCN at the data centre locations
Siae | Ethernet microwave | In band management | MPLS network gateway; L3VPN for the Siae microwave network; access clusters are addressed in sub-networks based on the cluster size; interconnect to the corporate DCN at the data centre locations
Ceragon | Ethernet microwave | In band management | MPLS network gateway; L3VPN for the Ceragon microwave network; access clusters are typically addressed in /26 subnetworks; interconnect to the corporate DCN at the data centre locations
Ericsson | Mobile RAN (SRAN) | Out of band solution | MPLS network gateway; L3VPN for each RAN technology (2G, 3G & LTE), split over multiple VPNs based on network size; each network element has a /30 allocation; interconnect to the corporate DCN at the data centre locations
NSN 3G | Mobile RAN | Out of band solution | MPLS network gateway; L3VPN for each RAN technology (2G, 3G & LTE), split over multiple VPNs based on network size; access clusters are typically addressed in /26 subnetworks; static routes required from the access gateway to the RBS management address for the CP address; interconnect to the corporate DCN at the data centre locations; static routes required to the OMU & DCN networks for each RNC

Table 8: DCN network configuration per vendor
For the most part the DCN will be configured either in band, with direct connectivity to the OSS via the DCN at the data centre locations, or, where this is not possible, L3VPNs should be configured to connect the remote elements to their respective management systems via the IP/MPLS network. At the data centre locations, routing information will be shared between the corporate DCN networks and the transmission network VPNs through OSPF. The exception to this is the NSN 3G RAN, where the CP and O&M networks require static routes via the RBS parent RNC to the respective ICSU and O&M network.
2.8.1 NSN 3G RAN control plane routing

Control plane traffic is terminated in the ICSU function in the RNC. Because the RNC does not support dynamic routing protocols, it is necessary to configure static routes from the CP VPN endpoints on the IP/MPLS routers to the ICSU network via the CP interface on the RNC. The static routes are redistributed throughout the CP VPN using BGP.
2.8.2 NSN 3G RAN O&M routing

Each NSN RBS in the network requires an O&M IP address and a backend /29 network allocation. The O&M address is part of the /26 network allocated to a particular access cluster. The backend /29 network requires two connections to the NSN core: one to the DCN and one to the logical OMU interface on the RNC. The hierarchy within the 3G RAN O&M function is such that the DCN communicates with the RBS via the OMU and O&M interfaces on the parent RNC.
Northbound traffic will be routed to the parent RNC via the gateway router at the collector site. At the access clusters, static routes are configured to the /29 networks with the O&M IP address of the RBS as the next hop. At the core sites, VRF filters are applied on the 8800 nodes to ensure correct routing of incoming packets to the correct RNC. Each RBS is allocated a /29 subnet from an overall /20 allocated to each RNC; the VRF filter inspects the source address and routes to the correct RNC. Static routes are required on the endpoints to the OMU and DCN networks via the parent RNC's O&M interface.
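The addressing arithmetic can be illustrated with the standard Python ipaddress module; the prefix used is an invented example, not the actual addressing plan.

```python
# Illustration of the O&M addressing scheme described above: carving per-RBS /29
# backend subnets out of the /20 allocated to an RNC. The prefix is an invented
# example, not the addressing plan itself.
import ipaddress

rnc_block = ipaddress.ip_network("10.20.16.0/20")      # hypothetical per-RNC allocation
rbs_subnets = list(rnc_block.subnets(new_prefix=29))   # one /29 per RBS

print(len(rbs_subnets))      # -> 512 RBS backend subnets available per RNC
print(rbs_subnets[0])        # -> 10.20.16.0/29 (first RBS allocation)
```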
2.9 Transmission network performance monitoring
As detailed in the transition document, NSI are responsible for ensuring the transmission network meets the target performance KPIs described therein and for providing periodic reporting and backup data to prove adherence to those KPIs. Table 9 describes the KPIs which must be measured.

KPI area | Description | Target | Reporting period | Comment
Access MW network | MW link availability: % time available per link | 99.99x% | Weekly / rolling 365 day period | Based on the link licence conditions
Access MW network | MW link ACM operation: % of time operating at each modulation level per link | 99.99x% | Weekly / rolling 365 day period | Based on the link licence conditions
Access MW network | MW network availability: % time available across network | 99.96% | Weekly / rolling 365 day period |
Access MW network | MW network ACM operation: % time available across network | 99.96% | Weekly / rolling 365 day period |
Access MW network | MW link performance: % BBE per link | Tbc | Weekly / rolling 365 day period | Requires integration to post processing function
Access MW network | MW link packet network performance: % packet loss across each link | Tbc | Weekly / rolling 365 day period | Requires export and post processing of RMON counters per link
Access MW network | MW link packet network performance: delay variation across each link | Tbc | Weekly / rolling 365 day period | Not available in release 1 hardware; integration to post processing tool necessary
IP/MPLS network | Latency: one way packet delay from collector switch to core MPLS routers | <15 ms | Weekly / rolling 365 day period |
IP/MPLS network | Jitter: one way packet delay variance from collector switch to core MPLS router | <3 ms | Weekly / rolling 365 day period |
IP/MPLS network | Packet loss: % packet loss per MPLS trunk | <0.2% | Weekly / rolling 365 day period |
IP/MPLS network | Throughput: per collector site, daily average and busy hour | NA | Weekly / rolling 365 day period |
IP/MPLS network | Availability: availability of each collector site | 99.99x% | Weekly / rolling 365 day period | Based on Small, Medium or Large design
End to end transmission | Throughput per service: per service throughput | NA | Weekly / rolling 365 day period | Collected from RAN / enterprise client
End to end transmission | Packet loss per service: % packet loss per service | % Tbc | Weekly / rolling 365 day period | Collected from RAN / enterprise client
End to end transmission | Per service availability: % time available per service | % Tbc | Weekly / rolling 365 day period | Collected from RAN / enterprise client

Table 9: NSI transmission network KPIs and reporting structure
In order to ensure efficient collection, post processing and reporting against each of the KPIs described above, and those required in the future, NSI are required to export the performance and configuration management data of the transmission network elements to a post processing tool. This will require the evaluation of the tools available today and possible replacements. This section will be updated to reflect the selected system and its operation once it has been selected and designed. Until such time as a post processing tool is available, all KPIs will be measured using the tools available on the respective vendor management platforms.
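Pending selection of the post processing tool, the rolling availability figures in Table 9 can be derived from raw outage records along the lines of the sketch below; the outage record format is an assumption for illustration.

```python
# Sketch of the rolling-availability calculation implied by Table 9. The outage
# record format (a list of outage durations in seconds) is an assumption; real
# data would come from the vendor management platforms or the post processing tool.
from datetime import timedelta

def availability_percent(outages_seconds, window=timedelta(days=365)):
    """Percentage of the rolling window for which the link/site was available."""
    downtime = sum(outages_seconds)
    return 100.0 * (1 - downtime / window.total_seconds())

# Example: three outages totalling 95 minutes over the last 365 days.
print(round(availability_percent([1800, 3600, 300]), 4))  # -> 99.9819
```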
3.0 Site configuration

This section outlines the guidelines to be followed when planning and deploying consolidated sites throughout the network. Site types can be subdivided into three broad categories:

1. Core
2. Backhaul
3. Access

Within each of these categories there can be variations in site design, based primarily on the site provider, equipment shelter, deployed hardware and required aggregation. It should be noted that while the following subsections detail the guidelines that should be followed when designing each site, in certain circumstances bespoke solutions may be required. For such solutions, the NSI transmission team should be consulted prior to finalising the design.
3.1 Core sites

Core sites refer to those locations where the transmission network is directly connected to the mobile core and/or enterprise core networks. The main features that characterise these locations are:

- The transmission network has direct connectivity, within the same site, to a mobile core node (BSC, RNC, EPC)
- The transmission network has direct physical access to the core enterprise network
The following guidelines (Table 10) detail the minimum requirements which must be satisfied when designing such sites.

Network resilience
- External optical cabling: For diverse fibre routes a minimum of 5 m physical separation is required from the external network through to the ODF presentation in the NSI equipment room.
- Internal diverse optical baseband cable management: Intra-ODF and ODF-to-equipment-rack cabling will not at any point share the same section of the fibre management system (FMS). Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP, RPS, VRRP) will terminate on diverse ODF and will at no point share sections of the FMS.
- Internal diverse electrical baseband cable management: Intra-DDF and DDF-to-equipment-rack cabling will not at any point share the same section of the cable management infrastructure. Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP, RPS, VRRP) will terminate on diverse DDF and will at no point share sections of the cable management infrastructure. (Note: these guidelines apply to both 120 Ohm and 75 Ohm systems. For clarity, 120 Ohm distribution frames may also be referred to as patch panels.)
- Power: All equipment within the core site will have diverse A & B DC power. The A & B supplies must be traced to separate DC rectifier systems within the core site. The DC rectifiers within the core site will be powered from a UPS AC supply which is backed up by generator power for a minimum of 24 hours.
- Power cabling: Cables for A and B power (AC and DC) will at no stage share sections of the cable management infrastructure.
- Rack layout: Core equipment (DWDM, IP/MPLS, ATM, SDH) operating in a resilient or load sharing capacity should not be collocated within the same rack.

Network dimensioning
- Power: DC rectifiers should be dimensioned with consideration for a minimum of 2 spare rectifier units within each cabinet. Once this limit is reached, additional rectifiers should be deployed to meet any additional requirements.
- Power: 3 phase AC supplies should be used in all cases.
- Power: AC power for each rectifier unit should be dimensioned with a minimum overhead of 20% to facilitate emergency expansions and inefficiencies within the rectifier units.
- Power cable labelling: All power cables must be labelled indicating the remote end equipment and location. All MCBs must be labelled indicating the remote equipment ID.

Internal cabling
- Optical cabling (standard): Single mode fibre should be used in all cases.
- Optical cabling (equipment interconnect): All equipment interconnects must be made via ODF. No direct cabling from equipment to equipment should be implemented at any stage.
- Optical cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. ODF and position) and the final destination (equipment and port ID).
- Structured cabling (standard): CAT6 should be used in all cases at a minimum.
- Structured cabling (equipment interconnect): All equipment interconnects must be made via patch panel. No direct cabling from equipment to equipment should be implemented at any stage.
- Structured cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. patch panel and position) and the final destination (equipment and port ID).
- 75 Ohm cabling (standard): RA7000 should be used in all cases at a minimum.
- 75 Ohm cabling (equipment interconnect): All equipment interconnects must be made via DDF. No direct cabling from equipment to equipment should be implemented at any stage.
- 75 Ohm cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. DDF and position) and the final destination (equipment and port ID).

MW Radio
- Rack installation: Dedicated racks to house the MW Radio IDUs will be installed. A DC headrail should be installed in the transmission cabinet with facility for a minimum of 5 x A and 5 x B MCBs. 6A MCBs should be fitted as standard. The A and B sides will be connected to the respective A & B sides of the DC rectifier unit.
- Baseband cabling (75 Ohm Type 43 to 75 Ohm Type 43): To facilitate cabling between MW IDU equipment within the same rack, a DDF will be installed within the MW equipment rack.
- Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack, a 24 port BALUN should be installed within the same equipment rack.
- Baseband cabling (120 Ohm to 120 Ohm RJ45 for TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- Baseband cabling (120 Ohm to 120 Ohm RJ45 for Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack, a 24 port SC optical patch panel should be installed within the same equipment rack.
- IF cable: All IF cables from the antenna support structure will terminate on N-type bulkhead connectors and panel to the rear of the MW transmission rack. IF fly leads from the IDU will terminate on the required N-type bulkhead connecting to the system ODU.
- IDU labelling: Near end ID; Far end ID; Local IP address and subnet; Remote IP address and subnet; Commissioned Tx power.
- IF labelling: Commissioned RSL; Tx frequency (MHz); all IF cable labels should be prefixed with NSI; Far end ID on fly lead; Far end ID at bulkhead connector; Far end ID inside of Roxtec; Far end ID outside of Roxtec.
- ODU and antenna labelling: Far end ID @ ODU; Far end site name & ID; Tx frequency (MHz); Polarisation; Commissioned Tx power; Commissioned RSL.

Table 10: Core site build guidelines

3.2 Backhaul sites
Backhaul sites refer to those locations where the transmission network aggregates large amounts of customer traffic onto high speed transmission links. For TDM traffic this refers to N+0 (where N>1) SDH backhaul, and for the MPLS network it refers to the Level 2 routing area. In all of these cases the equipment must be housed in a building or Portacabin. Table 11 details the minimum requirements which must be satisfied when designing such sites.

Network resilience
- External optical cabling: For diverse fibre routes a minimum of 5 m physical separation is required from the external network through to the ODF presentation in the NSI equipment room.
- Internal diverse optical baseband cable management: Intra-ODF and ODF-to-equipment-rack cabling will not at any point share the same section of the fibre management system (FMS). Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP) will terminate on diverse ODF and will at no point share sections of the FMS.
- Internal diverse electrical baseband cable management: Intra-DDF and DDF-to-equipment-rack cabling will not at any point share the same section of the cable management infrastructure. Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP) will terminate on diverse DDF and will at no point share sections of the cable management infrastructure. (Note: these guidelines apply to both 120 Ohm and 75 Ohm systems. For clarity, 120 Ohm distribution frames may also be referred to as patch panels.)
- Power: All equipment within the backhaul site will have diverse A & B DC power.
- Power cabling: Cables for A and B power will at no stage share sections of the cable management infrastructure.
- Rack layout: Core equipment (DWDM, IP/MPLS, ATM, SDH) operating in a resilient or load sharing capacity should not be collocated within the same rack.

Network dimensioning
- Power: DC rectifiers should be dimensioned with consideration for a minimum of 2 spare rectifier units within each cabinet. Once this limit is reached, additional rectifiers should be deployed to meet any additional requirements.
- Power: 3 phase AC supplies should be used in all cases.
- Power: AC power for each rectifier unit should be dimensioned with a minimum overhead of 20% to facilitate emergency expansions and inefficiencies within the rectifier units.
- Power: Sufficient battery backup should be in place to power all Tx equipment on site for a minimum of 8 hours.
- Power: For remote locations, diesel generators should be in place to facilitate full Tx site operation for a minimum of 24 hours.
- Power cable labelling: All power cables must be labelled indicating the remote end equipment and location. All MCBs must be labelled indicating the remote equipment ID.

Internal cabling (MPLS / SDH)
- Optical cabling (standard): Single mode fibre should be used in all cases.
- Optical cabling (equipment interconnect): All equipment interconnects must be made via ODF. No direct cabling from equipment to equipment should be implemented at any stage.
- Optical cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. ODF and position) and the final destination (equipment and port ID).
- Structured cabling (standard): CAT6 should be used in all cases at a minimum.
- Structured cabling (equipment interconnect): All equipment interconnects must be made via patch panel. No direct cabling from equipment to equipment should be implemented at any stage.
- Structured cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. patch panel and position) and the final destination (equipment and port ID).
- 75 Ohm cabling (standard): RA7000 should be used in all cases at a minimum.
- 75 Ohm cabling (equipment interconnect): All equipment interconnects must be made via DDF. No direct cabling from equipment to equipment should be implemented at any stage.
- 75 Ohm cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. DDF and position) and the final destination (equipment and port ID).

MW Radio installation
- Rack installation: Dedicated racks to house the MW Radio IDUs will be installed.
- Transmission rack (power distribution): A DC headrail should be installed in the transmission cabinet with facility for a minimum of 5 x A and 5 x B MCBs. 6A MCBs should be fitted as standard. The A and B sides will be connected to the respective A & B sides of the DC rectifier unit.
- Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack, a 24 port BALUN should be installed within the same equipment rack.
- Baseband cabling (120 Ohm to 120 Ohm RJ45 for TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- Baseband cabling (120 Ohm to 120 Ohm RJ45 for Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack, a 24 port SC optical patch panel should be installed within the same equipment rack.
- IF cable: All IF cables from the antenna support structure will terminate on N-type bulkhead connectors and panel to the rear of the MW transmission rack. IF fly leads from the IDU will terminate on the required N-type bulkhead connecting to the system ODU.
- IDU labelling: Near end ID; Far end ID; Local IP address and subnet; Remote IP address and subnet; Commissioned Tx power.
- IF labelling: Commissioned RSL; Tx frequency (MHz); all IF cable labels should be prefixed with NSI; Far end ID on fly lead; Far end ID at bulkhead connector; Far end ID inside of Roxtec; Far end ID outside of Roxtec.
- ODU and antenna labelling: Far end ID @ ODU; Far end site name & ID; Tx frequency (MHz); Polarisation; Commissioned Tx power; Commissioned RSL.

Table 11: Backhaul site build guidelines
3.2.1 BT TT locations

One specific type of backhaul site is that co-located with the BT TT network. In this case certain restrictions apply in terms of space and the presentation of managed circuits, which must adhere to BT co-location rules. Specifically:

- NSI transmission equipment will be housed within a single rack
- BT will present all circuits on a single ODF patch panel within the NSI equipment rack
- Inter-shelf cabling can be run directly between the NSI equipment within the same equipment rack
3.3 Access locations

Access locations refer to all sites not covered under sections 3.1 and 3.2 above. This classification covers the vast majority of sites in the network and can be subdivided into the categories described in Table 12.

Access site category | Characteristics | Comments
Tail site (Portacabin & outdoor cabinet options) | Single unprotected transmission link | Tx solution for a single site
Feeder site (fibre) (Portacabin & outdoor cabinet options) | Aggregation site with fibre backhaul to MPLS ABR | Aggregation of multiple tail and/or feeder links
Feeder site (MW) (Portacabin & outdoor cabinet options) | Feeder site with MW transmission link to backhaul site | Aggregation of multiple tail and feeder links

Table 12: Access site categories

Within this section each site category is described in terms of equipment installation, power and baseband interconnection.
3.3.1 Access sites (Portacabin installation)

Power
- Transmission rack: 19" racks should be installed as standard. A DC head rail should be installed in the transmission rack with facility for a minimum of 10 x A and 10 x B MCBs. 6A MCBs should be fitted as standard. The A and B sides will be connected to the respective A & B sides of the DC rectifier unit.
- Transmission equipment: 2 x 63A connections should be fitted as standard from the rectifier A & B supply to the respective A & B connections on the DC headrail. The transmission equipment A & B power will be connected to the respective A & B sides of the DC head rail within the Tx rack.
- Battery configuration: Battery backup for the Tx equipment should be configured for a minimum of 4 hours.
- Labelling: All power cables will be labelled with the remote termination ID. All MCBs will be labelled with the remote equipment ID.

Indoor equipment
- Hardware installation: All indoor transmission equipment should be housed within a 19" rack.

3PP
- Optical presentation: 3PP services will be presented on a 19" SC patch panel within the Tx rack.
- CPE hardware: All 3PP CPE will be housed within the Tx rack.

MW Radio installation
- IDU installation: All MW Radio IDU hardware is to be installed in a 19" Tx rack.
- Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack, a 24 port BALUN should be installed within the same equipment rack.
- Baseband cabling (120 Ohm to 120 Ohm RJ45 for TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- Baseband cabling (120 Ohm to 120 Ohm RJ45 for Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack, a 24 port SC optical patch panel should be installed within the same equipment rack.
- IF cable: All IF cables from the antenna support structure will terminate on N-type bulkhead connectors and panel to the rear of the MW transmission rack. IF fly leads from the IDU will terminate on the required N-type bulkhead connecting to the system ODU.
- IDU labelling: Near end ID; Far end ID; Local IP address and subnet; Remote IP address and subnet; Commissioned Tx power.
- IF labelling: Commissioned RSL; Tx frequency (MHz); all IF cable labels should be prefixed with NSI; Far end ID on fly lead; Far end ID at bulkhead connector; Far end ID inside of Roxtec; Far end ID outside of Roxtec.
- ODU and antenna labelling: Far end ID @ ODU; Far end site name & ID; Tx frequency (MHz); Polarisation; Commissioned Tx power; Commissioned RSL.
3.3.2 Access site (Outdoor cabinet installation)

Table 11 details the rules to follow when consolidating onto a single site where no 3PP services are in place. Table 12 details the additional guidelines that must be considered where network consolidation is proposed on a site with existing 3PP services.
Power
- Transmission rack: A 2 m site support unit should be installed on all outdoor cabinet sites as standard to facilitate Tx consolidation. A DC head rail should be installed in the site support unit with facility for a minimum of 10 x A and 10 x B MCBs. 6A MCBs should be fitted as standard. The A and B sides will be connected to the respective A & B sides of the DC rectifier unit.
- Transmission equipment: 2 x 63A connections should be fitted as standard from the rectifier A & B supply to the respective A & B connections on the DC headrail. The transmission equipment A & B power will be connected to the respective A & B sides of the DC head rail within the Tx rack.
- Battery configuration: Battery backup for the Tx equipment should be configured for a minimum of 4 hours.
- Labelling: All power cables will be labelled with the remote termination ID. All MCBs will be labelled with the remote equipment ID.

Indoor equipment
- Hardware installation: All new hardware will be installed in the site support unit.

3PP
- Optical presentation: All new 3PP services will be presented on a 1U ODF within the site support unit.
- CPE hardware: All new 3PP CPE will be housed within the site support unit.

MW Radio installation
- IDU installation: All MW Radio IDU hardware is to be installed in a 19" Tx rack.
- Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack, a 24 port BALUN should be installed within the same equipment rack.
- Baseband cabling (120 Ohm to 120 Ohm RJ45 for TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- Baseband cabling (120 Ohm to 120 Ohm RJ45 for Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack, a 24 port SC optical patch panel should be installed within the same equipment rack.
- IF cable: All IF cables from the antenna support structure will terminate on N-type bulkhead connectors and panel to the rear of the MW transmission rack. IF fly leads from the IDU will terminate on the required N-type bulkhead connecting to the system ODU.
- IDU labelling: Near end ID; Far end ID; Local IP address and subnet; Remote IP address and subnet; Commissioned Tx power.
- IF labelling: Commissioned RSL; Tx frequency (MHz); all IF cable labels should be prefixed with NSI; Far end ID on fly lead; Far end ID at bulkhead connector; Far end ID inside of Roxtec; Far end ID outside of Roxtec.
- ODU and antenna labelling: Far end ID @ ODU; Far end site name & ID; Tx frequency (MHz); Polarisation; Commissioned Tx power; Commissioned RSL.

Table 11: Access site consolidation – No 3PP services in place
IP/MPLS
- Equipment installation: IP/MPLS equipment should be installed within the same outdoor cabinet as the existing 3PP CPE.
- Equipment installation: Where space restricts the possibility of installing the IP/MPLS equipment within the same cabinet, the IP/MPLS equipment should be housed in the site support unit.
- Intra-cabinet cabling rules: Where the site support unit and the existing 3PP CPE are in separate outdoor cabinets but on the same plinth, all cabling should be run directly via the cable management systems in place between the outdoor cabinets. Where the outdoor cabinets do not share the same plinth, structured cabling is required between the outdoor cabinets; the following rules apply for each service (optical, Ethernet & TDM):
  - 12 pair SM fibre suitable for outdoor installation should be run and presented on a 1U splice/presentation tray within each cabinet
  - 12 pair CAT6 suitable for outdoor installation should be run and presented on a 1U patch panel within each cabinet
  - 16 core coax suitable for outdoor installation should be run and presented on a 2U DDF within each cabinet

Table 12: Outdoor cabinet consolidation – existing 3PP CPE on site