Nimble Storage Introduction
Michael Teoh
[email protected]
22nd August 2017
Agenda
Nimble Storage Overview
• Architecture (CASL)
• AF Series (All Flash Array)
• CS Series (Adaptive Flash Array)
• SF Series (Secondary Flash Array)
Nimble Storage InfoSight
Summary
Challenges of using a legacy storage array
"How do I support a 30TB database capacity that requires 30,000 IOPS?"
• Enterprise SATA disk (7,200 RPM, ~80 IOPS/spindle): need 375 disks for performance
• Enterprise SAS disk (15,000 RPM, ~200 IOPS/spindle): need 150 disks for performance
• SSD flash array (20K IOPS/array, ~7TB capacity): need several arrays for capacity
...and LOTS of $$$ (the disk counts are checked in the sketch below)
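The disk counts above are straightforward per-spindle arithmetic; a quick check, using only figures taken from the slide:

```python
import math

required_iops = 30_000
required_tb = 30

# Per-device figures from the slide above
sata_iops_per_spindle = 80     # 7,200 RPM SATA
sas_iops_per_spindle = 200     # 15,000 RPM SAS
ssd_array_iops, ssd_array_tb = 20_000, 7

print(math.ceil(required_iops / sata_iops_per_spindle))  # 375 SATA disks for performance
print(math.ceil(required_iops / sas_iops_per_spindle))   # 150 SAS disks for performance
print(math.ceil(required_tb / ssd_array_tb))             # 5 SSD arrays for capacity
```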
Nimble Storage product family overview
(positioned by performance in IOPS against capacity in effective TBs)
• All Flash Arrays (AF series): primary storage; high-performance primary workloads
• Adaptive Flash Arrays (CS series): primary storage; other primary workloads
• Secondary Flash Arrays (SF series): secondary storage; Veeam, DR, dev/test, other secondary apps
Cache Accelerated Sequential Layout (CASL) & NimbleOS
Native on all Nimble platforms!
CASL: Shared Foundation for AFA and Adaptive Arrays
[Diagram, shown in three build-up steps: variable-size write blocks (4K, 8K) are coalesced and laid out sequentially. On the Adaptive Flash Array, writes land as a sequential layout on disk and reads are served from a sequential-layout read cache on flash; on the All Flash Array, both writes and reads use a sequential layout on flash.]
Nimble Storage File System: always write full stripes
• Good AND consistent write performance
• Very efficient snapshots
• Fast inline compression
• Efficient flash utilization and long flash life
• Ground-up design
• Enables variable block sizes
• Uses a sweeping process to reclaim space so full-stripe writes are always possible (see the sketch below)
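A minimal, hypothetical sketch of the full-stripe idea: variable-size blocks are compressed inline and coalesced into a stripe buffer, and media is written only in whole stripes. This illustrates the general technique; it is not Nimble's code, and the StripeWriter class and STRIPE_SIZE value are invented for illustration:

```python
import zlib

STRIPE_SIZE = 1 << 20  # assume a 1 MiB stripe for illustration

class StripeWriter:
    """Coalesces compressed, variable-size blocks into full stripes."""

    def __init__(self, device):
        self.device = device       # anything with a .write(bytes) method
        self.buffer = bytearray()  # stands in for NVRAM staging

    def write_block(self, data: bytes):
        # inline compression before the block enters the stripe
        self.buffer += zlib.compress(data)
        # media is only ever written one full stripe at a time,
        # so write performance does not depend on incoming block size
        while len(self.buffer) >= STRIPE_SIZE:
            self.device.write(bytes(self.buffer[:STRIPE_SIZE]))
            del self.buffer[:STRIPE_SIZE]
```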
All Flash Array AF-Series
All Flash Array Family: AF-Series
• AF9000: 23TB – 553TB raw, up to 2PB effective, 4U – 12U, 300,000 IOPS
• AF7000: 11TB – 323TB raw, up to 1.2PB effective, 4U – 12U, 230,000 IOPS
• AF5000: 11TB – 184TB raw, up to 680TB effective, 4U – 8U, 120,000 IOPS
• AF3000: 6TB – 92TB raw, up to 335TB effective, 4U – 8U, 50,000 IOPS
• AF1000: 6TB – 46TB raw, up to 165TB effective, 4U – 8U, 35,000 IOPS
IOPS based on a 70% read / 30% write workload
What's in an AF-Series Array?
Front: SSD drives
Back: dual power supplies (AC and DC available); dual controllers (CPU, network)
Controller head shelf plus optional expansion shelves
AF-Series Chassis Front View / Dual-Flash Carrier (DFC)
[Diagram: 24 DFC slots (Slot 1 through Slot 24), each carrier holding Bank A and Bank B with their own ejector latches and a DFC latch; front-panel LEDs for Power On, Power Fault, Heartbeat, Over Temperature, and NIC1/2.]
AF-Series Dual-Flash Carrier (DFC)
• Hot-swappable base carrier hosting two banks
• Hot-swappable SSD carrier provides tool-less drive installation
• LEDs on each SSD carrier and on the base carrier: Presence & Activity, SSD Carrier Release, Base Carrier Release, Drive Fault, and Slot Fault
Integrated Spare: Data, Parity, and Spare Layout (AFA Triple+ Parity)
[Diagram: each stripe across the 24 SSD slots (0-23) holds data blocks (D) plus triple-parity (P, Q, R) and integrated-spare (S) blocks whose positions rotate from stripe to stripe; the layout protects both data and metadata.]
• Tolerates simultaneous failure of any 3 SSDs
• Built-in virtual spare allows a 4th failure
• Intra-drive parity fixes sector loss in a single read
• Quick RAID rebuild
Unconstrained by Memory
• Requires 10 – 30X less memory than the competition
• Reduced controller cost
• More flash capacity per controller
• Fewer controllers needed
Cost and Performance Optimized for 3D-NAND
Designed for cost-optimized 3D-NAND:
• Advanced flash endurance management: seven-year SSD lifespan
• Large-scale coalescing: increased performance
• Integrated hot-sparing: 20% more usable capacity
Comprehensive Data Reduction
• Variable block deduplication
• Variable block compression
• Zero pattern elimination
Plus more from thin provisioning & zero-copy clones
5X or more data reduction
Deduplication Differentiators
[Diagram: volumes grouped by application, e.g. SQL (Vol.4, Vol.5) and VDI (Vol.1, Vol.2, Vol.3), sharing a DRAM dedupe index.]
Application-aware, variable block dedupe:
• Exploits locality (clustering of duplicates) in real-world apps for higher speed and less memory
• Auto-configures block size, dedupe on/off, and extensibility
• Accelerates dedupe by optimizing search and index updates within like apps
• Provides savings insight by app; allows for selectivity if desired
• AFA has global dedupe enabled by default, with high performance
• For hands-on admins wanting to optimize CPU/memory further, dedupe can be switched off for app categories with little savings
• Requires 10-30x less memory for comparable flash capacity
• Lowers controller cost; allows more capacity per controller (lower $/GB)
(A sketch of variable-block dedupe follows below.)
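For intuition only, here is a minimal sketch of variable-block deduplication using content-defined chunking: chunk boundaries depend on the data itself rather than fixed offsets, so an insert near the start of a stream does not shift every later block's fingerprint. This illustrates the general technique, not Nimble's implementation; the `store` helper is a hypothetical stand-in for media I/O:

```python
import hashlib

def chunks(data: bytes, mask=0x3FF):
    """Yield variable-size chunks; cut wherever a rolling sum hits the
    mask (~1 KiB average chunk size for this mask)."""
    start, rolling = 0, 0
    for i, byte in enumerate(data):
        rolling = ((rolling << 1) + byte) & 0xFFFFFFFF
        if rolling & mask == mask:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

index = {}  # fingerprint -> stored-block handle

def store(chunk: bytes):
    return chunk  # stand-in: a real system returns a media address

def dedupe_write(data: bytes) -> int:
    """Write data, skipping chunks already indexed; return new bytes stored."""
    new_bytes = 0
    for chunk in chunks(data):
        fp = hashlib.sha256(chunk).digest()
        if fp not in index:
            index[fp] = store(chunk)
            new_bytes += len(chunk)
    return new_bytes
```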
Adaptive Flash Array CS-Series
Adaptive Flash Array: CSx000
• CS7000: 21TB – 882TB raw, up to 1.4PB effective, 4U – 32U, up to 230,000 IOPS
• CS5000: 21TB – 882TB raw, up to 1.4PB effective, 4U – 32U, up to 120,000 IOPS
• CS3000: 21TB – 882TB raw, up to 1.4PB effective, 4U – 32U, up to 50,000 IOPS
• CS1000 / CS1000H: 11TB – 882TB raw, up to 1.4PB effective, 4U – 32U, up to 35,000 IOPS
IOPS based on a 70% read / 30% write workload
CS-Series Triple+ Parity
• Utilizes the same intra-drive parity as the AF-Series
• Left-synchronous rotation: the first two parities (P & Q) rotate with the data, while the third parity (R) is non-rotational (sketched programmatically below)
• Supports the loss of three disks

Stripe 0:  D1  D2  D3  P   Q   R
Stripe 1:  D2  D3  P   Q   D1  R
Stripe 2:  D3  P   Q   D1  D2  R
Stripe 3:  P   Q   D1  D2  D3  R
Stripe 4:  Q   D1  D2  D3  P   R
Stripe 5:  D1  D2  D3  P   Q   R
The system will shut down if three disks fail before any one of the failed disks has been rebuilt.
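The rotation in the table above is regular enough to express in a few lines; a hypothetical sketch that regenerates the same layout:

```python
# Left-synchronous rotation: D1-D3, P and Q shift left one column per
# stripe across the five rotating positions; R is pinned to the last column.
def stripe_layout(stripe: int):
    rotating = ("D1", "D2", "D3", "P", "Q")
    shift = stripe % len(rotating)
    return list(rotating[shift:] + rotating[:shift]) + ["R"]

for s in range(6):
    print(f"Stripe {s}: {'  '.join(stripe_layout(s))}")
```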
Adaptive Flash Arrays: Flash Performance for Less
Auto Flash: all-flash performance with just 5 – 10% flash capacity*
Dynamically configurable SLAs:
• All Flash: all-flash performance, all the time
• Auto Flash: all-flash performance, >95% of the time
• Minimal Flash: lowest cost of capacity
[Chart: amount of flash required (0% to 10%) for each SLA]
*Source: Nimble installed base analysis
One Third the TCO of Legacy Hybrid Flash Arrays
• Unconstrained by disk
• Ground-up flash design
• Comprehensive data reduction
• Integrated data protection efficiency
Absolute Resiliency
• Nonstop availability: measured at less than 23 seconds of downtime per year
• Triple+ Parity RAID: tolerates three simultaneous drive failures, plus intra-drive protection
• Integrated data protection: back up more frequently and recover faster with application-consistent SmartSnap and SmartReplicate
• SmartSecure encryption: application-granular encryption and secure data shredding
What's in a CSx000 Array? Similar components to an AF-Series.
Front: SSD drives and HDD drives
Back: dual power supplies (AC and DC available); dual controllers (CPU, network)
Controller head shelf plus optional expansion shelves
Drive Layout: CS1000, CS3000, CS5000, CS7000
• HDDs: 18 + 3 RAID
• 4U, 24-slot chassis
• 24x 3.5" slots carry 21x HDDs + 3x DFCs
• New Nimble-branded HDD carriers
• DFCs: minimum of 3 SSDs in Bank A; Bank B available for cache upgrades
[Front view: slot groups 21-24, 17-20, 13-16, 9-12, 5-8, 1-4; 21x disk carriers]
[Annotated front view: the same slot groups (1-4 through 21-24), with cache provided by 3x Dual-Flash Carriers (Bank A and Bank B, each with its own latch, plus the DFC latch); front-panel LEDs for Power On, Power Fault, Heartbeat, Over Temperature, and NIC1/2.]
Drive Layout: CS1000H, a half-populated CS1000
• First 11 HDDs; referred to as CS1000H
• Cache: 2x Dual-Flash Carriers
• Once upgraded, a fully populated CS1000H is referred to as CS1000FP
• The WebUI only displays CS1000. To identify a CS1000H or CS1000FP, check the controller shelf capacity or navigate to Manage >> Array >> [Select array] and view the visual representation.
• A CS1000H can only be upgraded to a CS3000 when scaling up, after the 2nd half of the controller chassis has been populated.
Secondary Flash Array SF-Series
Secondary Storage Is Ready for Change
"By 2020, 30% of organizations will leverage backup for more than just operational recovery, up from less than 10% at the beginning of 2016."
Source: Gartner Magic Quadrant for Data Center Backup and Recovery Software, June 2016
Nimble Secondary Flash Arrays
Put your backup and DR data to work!
• Integrated with leading data availability software: Veeam & Commvault integration; InfoSight and Veeam ONE
• Flash-enabled storage, dedupe & capacity optimized: instant restores and recovery, flash-based performance, inline dedupe, advanced flash endurance management
• Radical simplicity: unified fabric, scale-out, InfoSight predictive analytics
• Integrated data protection: Secondary Flash Array as replication target; application-consistent backups; protect more frequently; recover rapidly from online backups
• Backup, DR and archival: DR down to 5-minute RPO; retain backups for months cost-effectively; cost-optimized test/dev with cloning
Nimble Storage Secondary Flash Array Portfolio

Platform                    SF100            SF300
Connectivity                iSCSI and FC     iSCSI and FC
Max Write Throughput        400 MB/s         800 MB/s
Flash Capacity              1.3TB – 8.7TB    2.6TB – 15.7TB
Raw Capacity                21TB – 126TB     42TB – 252TB
Usable Capacity             16TB – 100TB     30TB – 200TB
Effective Capacity (8:1)    800 TB           1.6 PB
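The effective-capacity row is simply the maximum usable capacity multiplied by the assumed 8:1 data-reduction ratio; a trivial check (the helper name is invented for illustration):

```python
def effective_tb(usable_tb: float, reduction_ratio: float = 8.0) -> float:
    # effective capacity = usable capacity x assumed data-reduction ratio
    return usable_tb * reduction_ratio

print(effective_tb(100))  # SF100: 100 TB usable -> 800 TB effective
print(effective_tb(200))  # SF300: 200 TB usable -> 1600 TB (1.6 PB) effective
```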
Drive Layout: Secondary Flash Arrays and SF Expansion Shelves
• 4U chassis (base array, expansion shelf)
• 24x 3.5" slots carry 21x HDDs and 3x DFCs
• HDDs: 18+3 Triple+ Parity RAID
• Minimum of 3 SSDs (Bank A in 3 DFCs); Bank B available for cache upgrades
[Diagram: cache from 3x Dual-Flash Carriers; DFC latch, Bank A latch, Bank B latch]
Veeam v9.5 Integration with Nimble: Highlights
• Backup from Storage Snapshots: minimize impact on production VMs; RTPO™ <15 minutes; simplified scheduling
• Veeam Explorers™ for Storage Snapshots: verified recoverability; granular recovery; instant visibility; agentless, application-aware consistency
• On-Demand Sandbox™ for Storage Snapshots: dev/test, training and troubleshooting; low-risk deployments
Data Management Services
Enterprise-level data protection, efficiency, and security. Options, flexibility, and effortless management with all-inclusive packaging.
• SmartSnap: instant, zero-copy backups; efficient (thin, compressed & deduplicated); near-instantaneous restores; WAN optimized with data integrity checks; no license required
• SmartCopies: instant zero-data copies; efficient (thin, compressed & deduplicated); used for dev/test or backups; no license required
• SmartReplicate: efficient (thin, block diffs + data reduction); WAN optimized; secure (AES 256-bit encryption); no license required
• SmartSecure: flexible; secure WAN replication; FIPS 140-2 certified; no hardware change; no license required
[Diagram: a production AF1000 keeps local recovery points (09:00am, 10:00am) and thin clones, and replicates to an AF1000 and an SF100 at the DR site.]
• Encrypt volumes, applications, tenants or entire arrays
• Encryption preserved with replication
Data Protection & Copy Data Management: fast, cost-effective hybrid storage
• Primary: frequent snapshots (9:00, 9:15, 9:30, 9:45); no backup window; rapid local recovery
• Snapshots + replication: replicate once to secondary for cost-effective, simple DR
• Secondary (disaster recovery): longer retention at the target (9:00, 10:00, 11:00, 12:00, Day 2, Day 3, ... Day 32)
• Space-efficient clones: instant zero-copy clones (e.g., for dev and test instances)
Quality of Service in Nimble Storage
• Simple to manage: unlike most QoS designs, it is very simple to manage. Automated noisy-neighbor avoidance requires zero configuration, with no painstaking setting of priorities for every volume. Perfect when the "right" performance level is not known.
• Flexible limits: independent limits for IOPS and MB/s allow controlling day-time operations and backup without requiring scheduling (because 1,000 IOPS is a meaningless limit for a backup workload). Limits can be set per tenant (folder) and/or per application (volume). See the sketch below.
• No licensing: as with all NimbleOS features, there are no licenses to worry about.
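To make the dual-limit idea concrete, here is a minimal, hypothetical token-bucket sketch enforcing independent IOPS and MB/s caps on a volume. It illustrates the general technique only; it is not NimbleOS code, and the class and parameter names are invented:

```python
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # burst allowance
        self.tokens = capacity
        self.last = time.monotonic()

    def try_consume(self, n: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

class VolumeQoS:
    """Admit an I/O only when both the IOPS and the MB/s bucket agree."""

    def __init__(self, iops_limit: int, mbps_limit: int):
        self.iops = TokenBucket(iops_limit, iops_limit)
        bps = mbps_limit * 1024 * 1024
        self.bandwidth = TokenBucket(bps, bps)

    def admit(self, io_bytes: int) -> bool:
        # Simplification: a production limiter would reserve from both
        # buckets atomically rather than consuming them one after another.
        return self.iops.try_consume(1) and self.bandwidth.try_consume(io_bytes)

# A large-block backup stream hits the MB/s cap long before the IOPS cap,
# which is why an IOPS limit alone is meaningless for a backup workload.
backup_volume = VolumeQoS(iops_limit=10_000, mbps_limit=200)
```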
Quality of Service: Automated Noisy-Neighbor Avoidance
[Charts: with no QoS, a "bully" workload (e.g., backup) dominates while the "victim" small-block random IO workload is starved, producing a latency spike; with noisy-neighbor avoidance, the bully workload is auto-regulated, the victim workload stays healthy, and latency stays consistent.]
Automated Noisy-Neighbor Avoidance
• Sometimes specific workloads are "bullies", consuming an unfair share of resources (e.g., backup, batch jobs); this can degrade performance for neighboring workloads
• Ensures fairness between very disparate and changing workloads (e.g., large block vs. small block)
• Schedules CPU and disk resources to avoid starvation (see the scheduling sketch below)
• Ensures neighboring workloads experience good performance
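One classic way to achieve this kind of fairness is deficit round-robin scheduling. The sketch below is purely illustrative (not NimbleOS internals) and shows a large-block bully queue and a small-block victim queue both making progress instead of the bully starving the victim:

```python
from collections import deque

def schedule(queues: dict, quantum_bytes: int = 262144, rounds: int = 3):
    """Deficit round-robin: each queue earns a byte quantum per round and
    may dispatch I/Os only up to its accumulated credit."""
    deficit = {name: 0 for name in queues}
    order = []
    for _ in range(rounds):
        for name, q in queues.items():
            deficit[name] += quantum_bytes
            while q and q[0] <= deficit[name]:
                deficit[name] -= q[0]
                order.append((name, q.popleft()))
    return order

ios = {
    "backup": deque([262144] * 8),  # bully: 256 KiB sequential I/Os
    "oltp":   deque([4096] * 64),   # victim: 4 KiB random I/Os
}
# The small-block queue drains in the very first round; it is never
# starved behind the bully, which is throttled to one I/O per round.
print(schedule(ios))
```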
Closing the App-Data Gap with Predictive Analytics
[Diagram: "The App-Data Gap" sits between Apps and Data.]
Technology slow-downs impact IT as well:
• Forklift upgrades
• Vendor finger-pointing
• Weeks to provision
• Huge CAPEX investments
How do I solve the problem? Just go to the cloud?
Over Half of Problems Happen Outside Storage
Top problems contributing to the App-Data Gap:
1. Storage related: 46%
2. Configuration issues: 28%
3. Interoperability issues: 11%
4. Non-storage best practices impacting performance: 8%
5. Host, compute, VM: 7%
Source: InfoSight analysis across more than 7,500 customers
What options are available today?
• Flash: addresses storage performance, but flash alone is not enough
• (Hyper) convergence: simplifies deployment and management, but carries a black-box penalty
• Cloud: increases agility, but the app-data gap still remains
Closing the app-data gap
• Reliably fast
• Radically simple
• Cloud-ready
InfoSight Predictive Analytics + Multicloud Flash Fabric
Predictive Analytics Close the App-Data Gap
Cloud-based predictive analytics:
• Millions of sensor readings collected every second across the installed base
• Cross-stack telemetry
• Global learning: >10,000 customers; millions of virtual objects under continuous monitoring
Non-Stop Availability with InfoSight Predictive Analytics
• Prevent issues and avoid downtime: issues detected before you do
• Cross-stack rapid root-cause analysis
• Predict future needs and simplify planning
• Inoculate the install base once an issue is found
• Multi-tenant SaaS portal with measured uptime and predictive analytics
• <1 minute hold time to speak to a level 3 engineer
Getting started is as simple as logging in!
Predict Future Needs and Simplify Planning
• Accurately forecasts future capacity, performance and bandwidth needs (see the toy forecast below)
• Prescriptive guidance ensures optimal long-term performance
• Predicts performance hotspots and tells you how to avoid them
• Eliminates planning guesswork
• Correlation figures out what is causing the problem
Leverage predictive analytics to identify future needs and potential hot-spots specific to your environment, with prescriptive guidance to ensure optimal long-term performance.
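As a toy illustration of trend-based capacity forecasting (InfoSight's actual models are proprietary and far richer), fit a linear trend to daily usage samples and project when the array runs out of usable capacity:

```python
def days_until_full(daily_used_tb: list, capacity_tb: float):
    """Least-squares linear trend over daily usage samples; returns the
    projected number of days until capacity is exhausted, or None."""
    n = len(daily_used_tb)
    mean_x = (n - 1) / 2
    mean_y = sum(daily_used_tb) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in enumerate(daily_used_tb))
    slope /= sum((x - mean_x) ** 2 for x in range(n))
    if slope <= 0:
        return None  # flat or shrinking usage: never fills at this trend
    return (capacity_tb - daily_used_tb[-1]) / slope

# e.g., growing ~1 TB/day against 100 TB usable -> roughly 56 days left
print(days_until_full([40, 41, 41.5, 43, 44], capacity_tb=100))
```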
InfoSight Simplifies Complex Planning
Key insights:
• What is our current growth rate?
• Which applications are consuming storage?
• When will we likely need more capacity?
• How do we report back to the business?
Visibility Beyond Storage
InfoSight VMVision gives visibility up to the VM layer:
• Determine VM latency factors: storage, host or network
• Take corrective action on noisy-neighbor VMs
• Reclaim space from underutilized VMs
InfoSight VMVision pinpoints VM-related issues
SQL Performance Policy RPO = 15m, Retain 1 week
VM Infrastructure
Replicate every HR
Hypervisor
QoS Limit = 5,000 IOPS Encryption Enabled
vCenter
Rogue VM Assets
Support
Wellness
Capacity
Volumes
Performance DataProtection
InfoSight
Dashboard
Unified Flash Fabric: a single architecture simplifies the use of flash for all applications, both on-premises and in the cloud
• Primary Flash: All Flash Arrays, Adaptive Flash Arrays
• Secondary Flash: Secondary Flash Arrays
• Multicloud Storage: Nimble Cloud Volumes*
*NCV is currently available in the US only
Scale-to-Fit: Flexible and Non-Disruptive Scalability
Scale performance or capacity within an array, and scale out with up to 4 arrays managed as one, all non-disruptively:
• All Flash cluster: 35,000 up to 350,000 IOPS within an array (4RU); >1.2M IOPS with 4-array scale-out (IOPS x4)
• Adaptive Flash cluster: 35,000 up to 270,000 IOPS within an array (4RU); 7TB up to 2PB within an array, >9PB with 4-array scale-out (capacity x4)
• Mix-n-match All Flash and Adaptive Flash arrays in one cluster, managed as one, non-disruptively
Summary of Nimble Storage
• Reliably fast: flash-accelerated storage everywhere; Primary Flash with All Flash and Adaptive Flash Arrays; Secondary Flash Arrays
• Radically simple: InfoSight Predictive Analytics; provisioning, availability and data reduction; data management services; all-inclusive licensing
• Cloud ready: Nimble Cloud Volumes