6/26/2014
Big Data: Prelude I‐Jen Chiang
What’s Big Data? No single definition; from Wikipedia:
• Big data is the term for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.
• The challenges include capture, curation, storage, search, sharing, transfer, analysis, and visualization.
• The trend to larger data sets is due to the additional information derivable from analysis of a single large set of related data, as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to "spot business trends, determine quality of research, prevent diseases, link legal citations, combat crime, and determine real-time roadway traffic conditions."
How much data?
• Google processes 20 PB a day (2008)
• Wayback Machine has 3 PB + 100 TB/month (3/2009)
• Facebook has 2.5 PB of user data + 15 TB/day (4/2009)
• eBay has 6.5 PB of user data + 50 TB/day (5/2009)
• CERN's Large Hadron Collider (LHC) generates 15 PB a year
640K ought to be enough for anybody.
A Single View to the Customer
Banking/finance, social media, our known history, gaming, entertainment, and purchase data all link back to one Customer record.
Collect → Accumulate → Store

Life Cycle of Big Data
Cloud computing, the Internet of Things, and mobile computing feed Big Data, which is served by Query, MapReduce, and Distributed Storage.
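The life cycle above names MapReduce as a core processing stage. As a rough illustration of the programming model (a minimal in-memory sketch, not Hadoop; all names here are invented for illustration), word counting looks like:

```javascript
// Minimal in-memory sketch of the MapReduce model: map each record to
// (key, value) pairs, group the pairs by key, then reduce each group.
function mapReduce(records, mapFn, reduceFn) {
  var groups = {};
  records.forEach(function (record) {
    mapFn(record).forEach(function (pair) {
      var key = pair[0], value = pair[1];
      (groups[key] = groups[key] || []).push(value);
    });
  });
  var result = {};
  Object.keys(groups).forEach(function (key) {
    result[key] = reduceFn(key, groups[key]);
  });
  return result;
}

// Word count: map emits (word, 1); reduce sums the ones.
var counts = mapReduce(
  ['big data', 'big ideas'],
  function (line) {
    return line.split(' ').map(function (w) { return [w, 1]; });
  },
  function (word, ones) {
    return ones.reduce(function (a, b) { return a + b; }, 0);
  }
);
console.log(counts); // { big: 2, data: 1, ideas: 1 }
```

In a real distributed system the grouping step is a network shuffle across machines; here it is just an in-memory object.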
Data, data, and more data
• According to IBM, 90% of the data in the world today was created in the past two years (2011~2013). (IBM quote, microscope.co.uk)
• According to International Data Corporation, the total amount of global data is expected to grow to 2.7 zettabytes during 2012. (IDC 2012 prediction, IDC website)
• Data is growing exponentially (43% growth rate) and is estimated to reach 7.9 zettabytes by 2015. (CenturyLink 2015 prediction, ReadWriteWeb website)
Tendency of Data
• Transactional: structured and tabular, object/relational; fixed schema; 10s-100s GB; up to 100,000s of transactions per hour; optimized for transaction processing
• Business intelligence: structured, multidimensional; specialized DBs for BI; TB to low PBs; batch loads; optimized for reporting and analysis
• Big data: poorly structured, lightly structured, or unstructured; simple structure and extreme data rate; hierarchical and/or key-value oriented; sparsely attributed; PB and up; stream capture and batch; optimized for distributed, cloud-based processing
Data Growth
Volume (Scale)
• Data volume – 44x increase from 2009 to 2020 – from 0.8 zettabytes to 35 ZB
• Data volume is increasing exponentially – exponential increase in collected/generated data
How many TBs of data every day?
• 12+ TB of tweet data every day
• 25+ TB of log data every day
• 30 billion RFID tags today (1.3B in 2005)
• 4.6 billion camera phones worldwide
• 100s of millions of GPS-enabled devices sold annually
• 76 million smart meters in 2009; 200M by 2014
• 2+ billion people on the Web by end of 2011
Real‐time/Fast Data
Mobile devices (tracking all objects all the time)
Social media and networks (all of us are generating data)
Scientific instruments (collecting all sorts of data)
Sensor technology and networks (measuring all kinds of data)
• Progress and innovation are no longer hindered by the ability to collect data
• But by the ability to manage, analyze, summarize, visualize, and discover knowledge from the collected data in a timely manner and in a scalable fashion
The Model Has Changed…
• The model of generating/consuming data has changed
• Old model: a few companies generate data, all others consume it
• New model: all of us generate data, and all of us consume data
What's driving Big Data
Traditional:
– Ad-hoc querying and reporting
– Data mining techniques
– Structured data, typical sources
– Small to mid-size datasets
Emerging:
– Optimizations and predictive analytics
– Complex statistical analysis
– All types of data, and many sources
– Very large datasets
– More of a real-time orientation
What is Big Data?
• Datasets which are too large, grow too rapidly, or are too varied to handle using traditional techniques
• Characteristics:
– Volume – 100s of TBs, petabytes, and beyond
– Velocity – e.g., machine-generated data, medical …
– Variety – unstructured data, many formats, varying semantics
• Not every data problem is a "Big Data" problem!!
Internet of Things
Characteristics of Big Data: Volume, Velocity, Variety
IBM Definition
A New Era of Computing
• Volume – 12 terabytes of Tweets created daily
• Velocity – 5 million trade events per second
• Variety – 100s of video feeds from surveillance cameras
• Veracity – only 1 in 3 decision makers trust their information
http://watalon.com/?p=722
"We have for the first time an economy based on a key resource [Information] that is not only renewable, but self-generating. Running out of it is not a problem, but drowning in it is." – John Naisbitt
Big Data Explained
Achieve breakthrough outcomes: know everything about your customers, run zero-latency operations, innovate new products at speed and scale, gain instant awareness of fraud and risk, exploit instrumented assets
By analyzing any big data type: transactional/application data, machine data, social media data, content
Big Data Stack
Value of Big Data • Unlocking significant value by making information transparent and usable at much higher frequency.
• Using data collection and analysis to conduct controlled experiments to make better management decisions.
• Sophisticated analytics that substantially improve decision‐making.
• Improved development of the next generation of products and services. 25
Why does big data matter? • Big data is not just about storing large datasets • Rather, it is about leveraging datasets – Mining datasets to find new meaning – Combining datasets that have never been combined before
– Making more informed decisions – Offering new products and services
• Data is a vital asset, and analytics are the key to unlocking its potential “We don’t have better algorithms than anyone else, we just have more data.” ‐ Peter Norvig, Director of Research, Google, spoken in 2010
Big Data: Processing I‐Jen Chiang
Knowledge Pyramid (Data/Text Mining area)
Semantic level, top to bottom:
• Wisdom (knowledge + experience) – What made it that unsuccessful?
• Knowledge (information + rules) – What was the lowest selling product?
• Information (data + context) – How many units were sold of each product line?
• Data – signals, resources occupied
Value Chain Emerges (increasing value, bottom to top)
• Prescriptive Analytics – automatically prescribe and take action
• Predictive Analytics – sets of potential future scenarios
• Analytics – identification of patterns and relationships
• Reporting – an evaluation of what happened in the past
• Processing – data prepared for analysis
• Big Data – containers and feeds of heterogeneous data; indexed, organized and optimized data access to structured and unstructured data
Michael Porter, Competitive Advantage: Creating and Sustaining Superior Performance
Big Data Processing
Sources: transactional data, operational & partner data, social data, machine-to-machine data, cloud services, … event streams …
Interconnect: high-speed, low-latency InfiniBand/Ethernet
Storage tiers: working local flash storage layer (as an extension of DRAM); distributed shared flash storage layer; low-cost distributed archive & backup disk storage layer
Consumers: operational systems, business analytics, indexing & metadata, big data analytics, government systems
Data structures: databases, indexes, metadata, cubes, …; active data management, shared databases, active indexes, shared metadata; archive data & metadata, archive/backup data management
The Traditional Approach Query‐driven (lazy, on‐demand) Clients
Integration System
Metadata
...
... Source
Source
Source
Disadvantages of Query-Driven Approach
• Delay in query processing
– Slow or unavailable information sources
– Complex filtering and integration
• Inefficient and potentially expensive for frequent queries
• Competes with local processing at sources
• Hasn't caught on in industry
The Warehousing Approach: information integrated in advance, stored in the warehouse for direct querying and analysis
Clients
Data Warehouse
Integration System
Metadata
.. . Extractor/ Monitor
Extractor/ Monitor
Extractor/ Monitor
... Source
Source
Source
Advantages of Warehousing Approach
• High query performance
• Doesn't interfere with local processing at sources
– Complex queries at warehouse
– OLTP at information sources
• Information copied at warehouse
– Can modify, annotate, summarize, restructure, etc.
– Can store historical information
– Security, no auditing
• Has caught on in industry
Business Intelligence
Information Sources → Data Warehouse Server (Tier 1) → OLAP Servers (Tier 2) → Clients (Tier 3)
• Semistructured sources and operational DBs feed the data warehouse (extract, transform, load, refresh, etc.)
• OLAP servers: e.g., MOLAP, ROLAP
• Client tools: query/reporting, data mining
• Data marts sit alongside the warehouse
Not Either‐Or Decision • Query‐driven approach still better for – Rapidly changing information – Rapidly changing information sources – Truly vast amounts of data from large numbers of sources
– Clients with unpredictable needs
What is a Data Warehouse? A Practitioner's Viewpoint
"A data warehouse is simply a single, complete, and consistent store of data obtained from a variety of sources and made available to end users in a way they can understand and use it in a business context." – Barry Devlin, IBM Consultant
An Alternative Viewpoint
"A DW is a
– subject-oriented,
– integrated,
– time-varying,
– non-volatile
collection of data that is used primarily in organizational decision making." – W.H. Inmon, Building the Data Warehouse, 1992
A Data Warehouse is... • Stored collection of diverse data – Single repository of information
• Subject‐oriented – Organized by subject, not by application – Used for analysis, data mining, etc.
• Optimized differently from transaction-oriented databases
• User interface aimed at executives
Characteristics of a Data Mart
KROENKE and AUER ‐ DATABASE CONCEPTS (6th Edition) Copyright © 2013 Pearson Education, Inc. Publishing as Prentice Hall
Gaining market intelligence from news feeds
Sreekumar Sukumaran and Ashish Sureka
Integrated BI Systems
• Intermediate data → ETL → complete data warehouse
• Text tagger & annotator
• Structured data: DBMS, file system, XML, RDBMS
• Unstructured data: EA, legacy, CMS, scanned documents, email
Sreekumar Sukumaran and Ashish Sureka
Data Warehouse Components
SOURCE:
Ralph Kimball
Data Warehouse Components – Detailed
SOURCE:
Ralph Kimball
Linux Adoption
Distributing processing between gateways and cloud
Big Data Processing Techniques
• Distributed data stream processing technology for on-the-fly real-time analytics of data/events generated at extremely high rates.
• Technologies for reliable distributed data stores, high-speed data structure transforms to create analytics DBs, and quick data placement management.
• Scalable data extraction technology to speed up rich querying functionality, such as multi-dimensional queries, over stored data.
• Scalable distributed parallel processing of huge amounts of stored data for advanced analysis such as machine learning.
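As a toy single-node illustration of the first bullet (analytics computed on the fly, so no events need to be retained; the `StreamStats` helper is invented here for illustration):

```javascript
// Running per-key count and mean over an event stream, updated
// incrementally as each event arrives (a toy single-node sketch of the
// distributed stream-processing idea described above).
function StreamStats() {
  this.stats = {};
}
StreamStats.prototype.push = function (key, value) {
  var s = this.stats[key] || (this.stats[key] = { count: 0, mean: 0 });
  s.count += 1;
  s.mean += (value - s.mean) / s.count; // incremental mean update
};

var meter = new StreamStats();
[['sensorA', 10], ['sensorA', 20], ['sensorB', 5]].forEach(function (e) {
  meter.push(e[0], e[1]);
});
console.log(meter.stats.sensorA); // { count: 2, mean: 15 }
```

A real stream processor shards keys across machines and adds fault tolerance; the incremental-update idea is the same.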
Batch and Real‐time Platform
http://www.nec.com/en/global/rd/research/cl/bdpt.html
http://info.aiim.org/digital-landfill/newaiimo/2012/03/15/big-data-and-big-content-just-hype-or-a-real-opportunity
Value Chain of Big Data
Fritz Venter (LEFT) and Andrew Stein, Images & videos: really big data, 2012
Big Data Moving Forward
Source: Dion Hinchcliffe
Evolution of Data

Evolutionary Step | Business Question | Enabling Technologies | Product Providers | Characteristics
Data Collection (1960s) | "What was my total revenue in the last five years?" | Computers, tapes, disks | IBM, CDC | Retrospective, static data delivery
Data Access (1980s) | "What were unit sales in New England last March?" | Relational databases (RDBMS), Structured Query Language (SQL), ODBC | Oracle, Sybase, Informix, IBM, Microsoft | Retrospective, dynamic data delivery at record level
Data Warehousing & Decision Support (1990s) | "What were unit sales in New England last March? Drill down to Boston." | On-line analytic processing (OLAP), multidimensional databases, data warehouses | Pilot, Comshare, Arbor, Cognos, Microstrategy | Retrospective, dynamic data delivery at multiple levels
Data Mining (Emerging Today) | "What's likely to happen to Boston unit sales next month? Why?" | Advanced algorithms, multiprocessor computers, massive databases | Pilot, Lockheed, IBM, SGI, numerous startups (nascent industry) | Prospective, proactive information delivery
Big Data Processing: Real-Time
• Collect and store – in-memory data grid
• Speed up processing through co-location of business logic with data – reduce network hops
• Integrate with the big data store to meet volume and cost demands
Real-Time Big Data Processing
• Big data streams → data streaming system → real-time analytic system (with metadata)
• Big data → ETL (Hadoop) → data warehouses / systems of record
• Archive & backup systems (objects, geographically distributed)
KDD Process
Data → Selection → Target Data → Preprocessing → Preprocessed Data → Transformation → Transformed Data → Data Mining → Patterns → Interpretation/Evaluation → Knowledge
(The selection step typically draws on a data warehouse.)
Process for Generating Evidence-based Guidelines Computer Interpretable
N. Stolba and A. M. Tjoa, The Relevance of Data Warehousing and Data Mining in the Field of Evidence‐based Medicine to Support Healthcare Decision Making. PROCEEDINGS OF WORLD ACADEMY OF SCIENCE, ENGINEERING AND TECHNOLOGY VOLUME 11 FEBRUARY 2006 ISSN 1307‐6884
What is Cloud Computing?
• Cloud computing is a style of computing in which dynamically scalable and virtualized resources are provided as a service over the Internet.
• Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them.
Definitions
• A style of computing where massively scalable IT-related capabilities are provided "as a service" using Internet technologies to multiple external customers – Gartner
• Two common definitions:
– One focusing on remote access to services and computing resources provided over the Internet "cloud"
• Ex: CRM and payroll services, as well as vendors that offer access to storage and processing power over the Web (such as Amazon's EC2 service)
– The other focusing on the use of technologies such as virtualization and automation that enable the creation and delivery of service-based computing capabilities
• The latter is an extension of traditional data center approaches and can be applied to entirely internal enterprise systems with no use of external off-premises capabilities provided by a third party
The NIST Cloud Definition Framework Hybrid Clouds
Deployment Models Service Models
Community Cloud
Private Cloud
Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
On Demand Self ‐Service
Essential Characteristics
Common Characteristics
Broad Network Access
Rapid Elasticity
Resource Pooling
Measured Service
Massive Scale
Resilient Computing
Homogeneity
Geographic Distribution
Virtualization
Service Orientation
Low Cost Software
Advanced Security
Based upon original chart created by Alex Dowbor - http://ornot.wordpress.com
Cycle of SOA, Cloud Computing, Web 2.0
D. Delen, H. Demirkan / Decision Support Systems (2013)
A Lifecycle of Big Data
• Collection/Identification
• Repository/Registry
• Integration
• Analytics/Prediction
• Visualization
Cycle: Data → Insight → Decision → Action
Supporting roles and concerns: data curation, data scientist, data engineer, workflow, data quality
A Reference Model for Big Data
• Service Layer: analysis & prediction – interface
• Service Support Layer: big data management, workflow management, data quality management, data visualization – interface
• Platform Layer: data curation, data integration, data semantic intellectualization, security – interface
• Data Layer: data identification (data mining & metadata extraction), data collection, data registry, data repository
Data feeds: library catalogs, locally held documents, public repositories, commercial data sources, agency data sources, and the public INTERNET (via spiders).
Each feed is indexed by a search engine; filtered content flows into a meta-search tool (TAXONOMY) exposed through a Web portal.
System Design Preview Node.js Based I‐Jen Chiang
Scenario Sensor Network
Distributed Processing
http://phys.org/news/2012‐03‐technology‐efficiently‐desired‐big‐streams.html
Node.js vs Cloud Computing
• Node.js master process: Web server & WebSockets interface plus a dispatcher; handles incoming WebSocket requests and static content requests; fully asynchronous
• Child process pool: each Node.js child process runs an application module (a Node.js application with a communication channel back to the master)
• Static content is served directly; WebSocket requests are dispatched to child processes
Event Loop
Event Loop Example
Learning JavaScript
• Layer 1: Single object
• Layer 2: Prototype chain (via __proto__)
• Layer 3: Constructor (prototype property, instances)
• Layer 4: Constructor inheritance (SuperConstr → SubConstr)
Create a single object
• Objects: atomic building blocks of JavaScript OOP
– Objects: maps from strings to values
– Properties: entries in the map
– Methods: properties whose values are functions
• this refers to the receiver of the method call

// Object literal
var jane = {
    // Property
    name: 'Jane',
    // Method
    describe: function () {
        return 'Person named ' + this.name;
    }
};

Advantage: create objects directly, introduce abstractions later
var jane = {
    name: 'Jane',
    describe: function () {
        return 'Person named ' + this.name;
    }
};

# jane.name
'Jane'
# jane.describe
[Function]
# jane.describe()
'Person named Jane'
# jane.name = 'John'
# jane.describe()
'Person named John'
# jane.unknownProperty
undefined
Objects versus maps • Similar: – Very dynamic: freely delete and add properties
• Different: – Inheritance (via prototype chains) – Fast access to properties (via constructors)
Sharing properties: the problem

var jane = {
    name: 'Jane',
    describe: function () {
        return 'Person named ' + this.name;
    }
};
var tarzan = {
    name: 'Tarzan',
    describe: function () {
        return 'Person named ' + this.name;
    }
};
Sharing properties: the solution
PersonProto holds describe: function () {…}; jane and tarzan each have __proto__ pointing to PersonProto and their own name property ('Jane', 'Tarzan').
• Both prototype chains work like single objects.
Sharing properties: the code

var PersonProto = {
    describe: function () {
        return 'Person named ' + this.name;
    }
};
var jane = {
    __proto__: PersonProto,
    name: 'Jane',
};
var tarzan = {
    __proto__: PersonProto,
    name: 'Tarzan',
};
Getting and setting the prototype
• ECMAScript 6: __proto__
• ECMAScript 5:
– Object.create()
– Object.getPrototypeOf()
Getting and setting the prototype: Object.create(proto)

var PersonProto = {
    describe: function () {
        return 'Person named ' + this.name;
    }
};
var jane = Object.create(PersonProto);
jane.name = 'Jane';

# Object.getPrototypeOf(jane) === PersonProto
true
Sharing methods

// Instance-specific properties
function Person(name) {
    this.name = name;
}
// Shared properties
Person.prototype.describe = function () {
    return 'Person named ' + this.name;
};
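Using the constructor pattern from this slide, a quick check that the method lives on the prototype and is shared by all instances:

```javascript
// Constructor with an instance-specific property and a shared method.
function Person(name) {
  this.name = name;
}
Person.prototype.describe = function () {
  return 'Person named ' + this.name;
};

var jane = new Person('Jane');
var tarzan = new Person('Tarzan');
console.log(jane.describe());   // 'Person named Jane'
console.log(tarzan.describe()); // 'Person named Tarzan'
// The method is shared via the prototype, not copied per instance:
console.log(jane.describe === tarzan.describe); // true
```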
Instances created by the constructor

function Person(name) {
    this.name = name;
}

Person.prototype holds describe: function () {…}. The instances jane and tarzan each have __proto__ pointing to Person.prototype and their own name property ('Jane', 'Tarzan').
instanceof • Is value an instance of Constr?
• How does instanceof work? Check: Is Constr.prototype in the prototype chain of value?

// Equivalent
value instanceof Constr
Constr.prototype.isPrototypeOf(value)
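The equivalence above, checked with the Person constructor from the previous slide:

```javascript
function Person(name) { this.name = name; }

var jane = new Person('Jane');
// instanceof walks the prototype chain of the value...
console.log(jane instanceof Person); // true
// ...which is exactly what isPrototypeOf checks:
console.log(Person.prototype.isPrototypeOf(jane)); // true
// A plain object is not a Person instance:
console.log(({}) instanceof Person); // false
```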
Goal: derive Employee from Person

function Person(name) {
    this.name = name;
}
Person.prototype.sayHelloTo = function (otherName) {
    console.log(this.name + ' says hello to ' + otherName);
};
Person.prototype.describe = function () {
    return 'Person named ' + this.name;
};

• Additional instance property: title
• describe() returns 'Person named <name> (<title>)'
Things we need to do
• Employee must
– Inherit Person's instance properties
– Create the instance property title
– Inherit Person's prototype properties
– Override the method describe (and call the overridden method)
Employee: the code

function Employee(name, title) {
    Person.call(this, name);   // (1)
    this.title = title;        // (2)
}
Employee.prototype = Object.create(Person.prototype);  // (3)
Employee.prototype.describe = function () {            // (4)
    return Person.prototype.describe.call(this)        // (5)
        + ' (' + this.title + ')';
};

(1) Inherit instance properties
(2) Create the instance property title
(3) Inherit prototype properties
(4) Override method Person.prototype.describe
(5) Call overridden method (a super-call)
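Putting the Person/Employee pieces from these slides together and exercising the inheritance chain:

```javascript
// Base constructor from the earlier slide.
function Person(name) { this.name = name; }
Person.prototype.describe = function () {
  return 'Person named ' + this.name;
};

// Subclass: inherit instance properties, add title, override describe.
function Employee(name, title) {
  Person.call(this, name);   // inherit instance properties
  this.title = title;
}
Employee.prototype = Object.create(Person.prototype); // inherit prototype
Employee.prototype.describe = function () {
  // super-call into the overridden method, then append the title
  return Person.prototype.describe.call(this) + ' (' + this.title + ')';
};

var jane = new Employee('Jane', 'CTO');
console.log(jane.describe());        // 'Person named Jane (CTO)'
console.log(jane instanceof Person); // true
```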
Instances created by the constructor
• Person.prototype (__proto__ → Object.prototype) holds sayHelloTo: function () {…} and describe: function () {…}
• Employee.prototype (__proto__ → Person.prototype) holds the overriding describe, which calls Person.prototype.describe
• The instance jane has __proto__ → Employee.prototype and own properties name: 'Jane', title: 'CTO'
Built-in constructor hierarchy
• Object.prototype (__proto__ → null) holds toString, …
• Array.prototype (__proto__ → Object.prototype) holds toString, sort, …
• An array ['foo', 'bar'] has __proto__ → Array.prototype and properties 0: 'foo', 1: 'bar', length: 2
Hello World

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World.');
}).listen(1337, "127.0.0.1");
console.log('Server running at http://127.0.0.1:1337/');
Express – Web Application Framework

var express = require('express'),
    app = express.createServer();
app.get('/', function(req, res) {
    res.send('Hello World.');
});
app.listen(1337);
Express – Create http services

app.get('/', function(req, res){
    res.send('hello world');
});
app.get('/test', function(req, res){
    res.send('test render');
});
app.get('/user/', function(req, res){
    res.send('user page');
});
Router Identifiers

// Will match /abcd
app.get('/abcd', function(req, res) {
    res.send('abcd');
});
// Will match /abcd and /acd
app.get('/ab?cd', function(req, res) {
    res.send('ab?cd');
});
// Will match /abxyzcd
app.get('/ab*cd', function(req, res) {
    res.send('ab*cd');
});
// Will match /abe and /abcde
app.get('/ab(cd)?e', function(req, res) {
    res.send('ab(cd)?e');
});
// Will match /abbcd
app.get('/ab+cd', function(req, res) {
    res.send('ab+cd');
});
Express – Get Parameters

// ... Create http server
app.get('/user/:id', function(req, res){
    res.send('user: ' + req.params.id);
});
app.get('/:number', function(req, res){
    res.send('number: ' + req.params.number);
});
Connect – Middleware

var connect = require("connect");
var http = require("http");
var app = connect();
app.use(function(request, response) {
    response.writeHead(200, { "Content-Type": "text/plain" });
    response.end("Hello world!\n");
});
http.createServer(app).listen(1337);
Connect: request, response, next

var connect = require("connect");
var http = require("http");
var app = connect();
// log
app.use(function(request, response, next) {
    console.log("In comes a " + request.method + " to " + request.url);
    next();
});
// return "hello world"
app.use(function(request, response, next) {
    response.writeHead(200, { "Content-Type": "text/plain" });
    response.end("Hello World!\n");
});
http.createServer(app).listen(1337);
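The request/response/next pattern used above can be imitated without Connect. This is a hypothetical miniature dispatcher (illustration only, not the Connect API) showing how a middleware stack passes control along:

```javascript
// A tiny middleware dispatcher imitating Connect's use()/next() pattern.
function MiniApp() {
  this.stack = [];
}
MiniApp.prototype.use = function (fn) {
  this.stack.push(fn);
};
MiniApp.prototype.handle = function (request, response) {
  var stack = this.stack, i = 0;
  (function next() {
    var fn = stack[i++];
    if (fn) fn(request, response, next); // each middleware may call next()
  })();
};

var app = new MiniApp();
var log = [];
app.use(function (req, res, next) { log.push('saw ' + req.url); next(); });
app.use(function (req, res) { res.body = 'Hello World!'; });

var res = {};
app.handle({ url: '/' }, res);
console.log(log);      // [ 'saw /' ]
console.log(res.body); // 'Hello World!'
```

A middleware that does not call next() stops the chain, which is exactly how the homepage/about/404 handlers on the next slide work.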
Connect: logger

var connect = require("connect");
var http = require("http");
var app = connect();
app.use(connect.logger());
app.use(function(request, response) {
    response.writeHead(200, { "Content-Type": "text/plain" });
    response.end("Hello world!\n");
});
http.createServer(app).listen(1337);
Connect Logging

var connect = require("connect");
var http = require("http");
var app = connect();
app.use(connect.logger());
// Homepage
app.use(function(request, response, next) {
    if (request.url == "/") {
        response.writeHead(200, { "Content-Type": "text/plain" });
        response.end("Welcome to the homepage!\n");
        // The middleware stops here.
    } else {
        next();
    }
});
// About page
app.use(function(request, response, next) {
    if (request.url == "/about") {
        response.writeHead(200, { "Content-Type": "text/plain" });
        response.end("Welcome to the about page!\n");
        // The middleware stops here.
    } else {
        next();
    }
});
// 404'd!
app.use(function(request, response) {
    response.writeHead(404, { "Content-Type": "text/plain" });
    response.end("404 error!\n");
});
http.createServer(app).listen(1337);
Big Data: Analysis I‐Jen Chiang
Big data issues
The CRISP‐DM reference model
Harper, Gavin; Stephen D. Pickett (August 2006)
The Complete Big Data Value Chain
Collection → Ingestion → Discovery & Cleansing → Integration → Analysis → Delivery
• Collection – structured, unstructured and semi-structured data from multiple sources
• Ingestion – loading vast amounts of data onto a single data store
• Discovery & Cleansing – understanding format and content; clean up and formatting
• Integration – linking, entity extraction, entity resolution, indexing and data fusion
• Analysis
• Delivery – querying, visualization, real-time delivery on enterprise-class availability
Phases and Tasks (CRISP-DM)
• Business Understanding: Determine Business Objectives (Background, Business Objectives, Business Success Criteria); Situation Assessment (Inventory of Resources; Requirements, Assumptions, and Constraints; Risks and Contingencies; Terminology; Costs and Benefits); Determine Data Mining Goals (Data Mining Goals, Data Mining Success Criteria); Produce Project Plan (Project Plan, Initial Assessment of Tools and Techniques)
• Data Understanding: Collect Initial Data (Initial Data Collection Report); Describe Data (Data Description Report); Explore Data (Data Exploration Report); Verify Data Quality (Data Quality Report)
• Data Preparation: Data Set (Data Set Description); Select Data (Rationale for Inclusion/Exclusion); Clean Data (Data Cleaning Report); Construct Data (Derived Attributes, Generated Records); Integrate Data (Merged Data); Format Data (Reformatted Data)
• Modeling: Select Modeling Technique (Modeling Technique, Modeling Assumptions); Generate Test Design (Test Design); Build Model (Parameter Settings, Models, Model Description); Assess Model (Model Assessment, Revised Parameter Settings)
• Evaluation: Evaluate Results (Assessment of Data Mining Results w.r.t. Business Success Criteria, Approved Models); Review Process (Review of Process); Determine Next Steps (List of Possible Actions, Decision)
• Deployment: Plan Deployment (Deployment Plan); Plan Monitoring and Maintenance (Monitoring and Maintenance Plan); Produce Final Report (Final Report, Final Presentation); Review Project (Experience Documentation)

Data Mining Context
Dimension: Application Domain | Problem Type | Technical Aspect | Tool and Technique
Examples: Churn Prediction, … | Classification, Prediction, Dependency Analysis, Segmentation, Concept Description, Summarization | Outliers, Missing Values, … | MineSet, …
What's in Big Data?
How do you extract value from big data?
You surely can’t glance over every record;
And it may not even have records…
What if you wanted to learn from it?
Understand trends
Classify into categories
Detect similarities
Predict the future based on the past… (No, not like Nostradamus!)
Machine learning is quickly establishing itself as an emerging discipline.
Thousands of features
Billions of records
The largest machine that you can get, may not be large enough…
Get the picture?
Data Accumulation
Service‐Oriented DSS
D. Delen, H. Demirkan / Decision Support Systems (2013)
Ten common big data problems
• Modeling true risk
• Customer churn analysis
• Recommendation engine
• Ad targeting • PoS transaction analysis
• Analyzing network data • Threat analysis • Trade surveillance • Search quality • Data “sandbox”
Business Applications
• Modeling risk and failure prediction
• Analyzing customer churn
• Web recommendations (a la Amazon)
• Web ad targeting
• Point of sale transaction analysis
• Threat analysis
• Compliance and search effectiveness
Dynamics of Data Ecosystems
http://www3.weforum.org/docs/WEF_TC_MFS_BigDataBigImpact_Briefing_2012.pdf
The big data opportunity
Industries are embracing big data
Business Analytics
D. Delen, H. Demirkan / Decision Support Systems (2013)
Multidimensional Concept Analysis
A formal-concept lattice over documents (D1…D6) and attribute variables (a, b, c, d, e): each document is marked True for the attributes it has, and concepts such as ((D1, D2), (a, b)), ((D1, D3, D6), (d)) and ((D1), (a, b, d)) are ordered between Top and Bottom.
• Analysis of accesses to state: elements (documents) + properties (accesses to attribute variables)
• Analysis of behavior: elements (documents) + properties (invocations to other documents)
Three Approaches
Mining Schemes
• A fully distributed and extensible set of machine learning techniques for big data
• State-of-the-art algorithms in each of the machine learning domains, including supervised and unsupervised learning: correlation, classifiers, clustering, statistics
• Document manipulation: N-gram extraction, histogram computation, natural language processing
• Distributed and parallel underlying linear algebra library
Statistical Approach
Random Sample and Statistics
• Population: the set or universe of all entities under study.
• However, looking at the entire population may not be feasible, or may be too expensive.
• Instead, we draw a random sample from the population, and compute appropriate statistics from the sample that give estimates of the corresponding population parameters of interest.
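A quick sketch of this idea: use the sample mean as a point estimate of the population mean (the `samplePopulation` and `mean` helpers are invented here for illustration):

```javascript
// Draw a simple random sample (with replacement) from a population.
function samplePopulation(population, n) {
  var sample = [];
  for (var i = 0; i < n; i++) {
    sample.push(population[Math.floor(Math.random() * population.length)]);
  }
  return sample;
}
// Sample mean: a point estimate of the population mean.
function mean(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}

var population = [];
for (var v = 1; v <= 1000; v++) population.push(v); // population mean: 500.5

var sample = samplePopulation(population, 100);
console.log(mean(sample)); // a point estimate, typically near 500.5
```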
Statistic
• Let Si denote the random variable corresponding to the i-th sample point. A statistic θ̂ is a function of the sample: θ̂ : (S1, S2, ∙∙∙, Sn) → R.
• If we use the value of a statistic to estimate a population parameter, this value is called a point estimate of the parameter, and the statistic is called an estimator of the parameter.
Empirical Cumulative Distribution Function

F̂(x) = (1/n) · Σ_{i=1}^{n} I(xi ≤ x)

where I is the indicator function, with I(xi ≤ x) = 1 if xi ≤ x and 0 otherwise.

Inverse Cumulative Distribution Function

F̂⁻¹(q) = min{ x : F̂(x) ≥ q }   for q ∈ [0, 1]
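The two definitions above translate directly into code (a sketch; helper names are invented for illustration):

```javascript
// Empirical CDF: fraction of sample points <= x.
function ecdf(sample, x) {
  var count = sample.filter(function (xi) { return xi <= x; }).length;
  return count / sample.length;
}
// Inverse ECDF (quantile): smallest sample value whose ECDF reaches q.
function ecdfInverse(sample, q) {
  var sorted = sample.slice().sort(function (a, b) { return a - b; });
  for (var i = 0; i < sorted.length; i++) {
    if ((i + 1) / sorted.length >= q) return sorted[i];
  }
  return sorted[sorted.length - 1];
}

var data = [3, 1, 4, 1, 5];
console.log(ecdf(data, 3));          // 0.6 (3 of 5 values are <= 3)
console.log(ecdfInverse(data, 0.5)); // 3 (the sample median)
```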
Example
Measures of Central Tendency (Mean)

Population mean: μ = E[X]

Sample mean (unbiased, not robust): μ̂ = (1/n) Σ_{i=1}^{n} xi
Measures of Central Tendency (Median)

Population median: the value m such that P(X ≤ m) ≥ 1/2 and P(X ≥ m) ≥ 1/2, or equivalently F(m) = 0.5

Sample median: m̂ = F̂⁻¹(0.5)
Example
Measures of Dispersion (Range)

Range: r = max(X) − min(X)
Sample range: r̂ = max_i {xi} − min_i {xi}
Not robust, sensitive to extreme values
Measures of Dispersion (Inter-Quartile Range)

Inter-Quartile Range: IQR = F⁻¹(0.75) − F⁻¹(0.25)
Sample IQR: ÎQR = F̂⁻¹(0.75) − F̂⁻¹(0.25)
More robust than the range
Measures of Dispersion (Variance and Standard Deviation)

Variance: σ² = var(X) = E[(X − μ)²]
Standard deviation: σ = √var(X)

Sample variance: σ̂² = (1/n) Σ_{i=1}^{n} (xi − μ̂)²
Sample standard deviation: σ̂ = √σ̂²
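The sample statistics from the last few slides, computed directly on a small data set (a sketch following the formulas above; note the variance uses the 1/n form shown on the slide, not the 1/(n−1) unbiased form):

```javascript
function mean(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}
function median(xs) {
  var s = xs.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}
function range(xs) {
  return Math.max.apply(null, xs) - Math.min.apply(null, xs);
}
function variance(xs) { // biased (1/n) version, as on the slide
  var m = mean(xs);
  return mean(xs.map(function (x) { return (x - m) * (x - m); }));
}

var data = [2, 4, 4, 4, 5, 5, 7, 9];
console.log(mean(data));                // 5
console.log(median(data));              // 4.5
console.log(range(data));               // 7
console.log(variance(data));            // 4
console.log(Math.sqrt(variance(data))); // 2 (standard deviation)
```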
Univariate Normal Distribution

f(x | μ, σ²) = (1/√(2πσ²)) · exp( −(x − μ)² / (2σ²) )

Multivariate Normal Distribution

f(x | μ, Σ) = (1 / ((2π)^(d/2) |Σ|^(1/2))) · exp( −(x − μ)ᵀ Σ⁻¹ (x − μ) / 2 )
OLAP and Data Mining
Warehouse Architecture
Client
Client
Query & Analysis
Metadata
Warehouse
Integration
Source
Source
Source
Star Schemas
The star schema is a common organization for data at a warehouse. It consists of:
1. Fact table: a very large accumulation of facts such as sales. Often "insert-only."
2. Dimension tables: smaller, generally static information about the entities involved in the facts.
Terms
• Fact table
• Dimension tables
• Measures

sale(orderId, date, custId, prodId, storeId, qty, amt)
product(prodId, name, price)
customer(custId, name, address, city)
store(storeId, city)
Star (example)

sale:
orderId | date   | custId | prodId | storeId | qty | amt
o100    | 1/7/97 | 53     | p1     | c1      | 1   | 12
o102    | 2/7/97 | 53     | p2     | c1      | 2   | 11
o105    | 3/8/97 | 111    | p1     | c3      | 5   | 50

product:
prodId | name | price
p1     | bolt | 10
p2     | nut  | 5

customer:
custId | name  | address   | city
53     | joe   | 10 main   | sfo
81     | fred  | 12 main   | sfo
111    | sally | 80 willow | la

store:
storeId | city
c1      | nyc
c3      | la
Cube
Fact table view: sale(prodId, storeId, amt): (p1, c1, 12), (p2, c1, 11), (p1, c3, 50), (p2, c2, 8)
Multi‐dimensional cube (dimensions = 2):
      c1   c2   c3
p1    12        50
p2    11    8
3‐D Cube Fact table view: sale(prodId, storeId, date, amt): (p1, c1, 1, 12), (p2, c1, 1, 11), (p1, c3, 1, 50), (p2, c2, 1, 8), (p1, c1, 2, 44), (p1, c2, 2, 4)
Multi‐dimensional cube (dimensions = 3):
day 1:      c1   c2   c3        day 2:      c1   c2   c3
  p1        12        50          p1        44    4
  p2        11    8               p2
ROLAP vs. MOLAP • ROLAP: Relational On‐Line Analytical Processing
• MOLAP: Multi‐Dimensional On‐Line Analytical Processing
Aggregates • Add up amounts for day 1 • SELECT sum(amt) FROM sale WHERE date = 1
sale(prodId, storeId, date, amt): (p1, c1, 1, 12), (p2, c1, 1, 11), (p1, c3, 1, 50), (p2, c2, 1, 8), (p1, c1, 2, 44), (p1, c2, 2, 4)
81
Aggregates • Add up amounts by day • SELECT date, sum(amt) FROM sale GROUP BY date
sale(prodId, storeId, date, amt): (p1, c1, 1, 12), (p2, c1, 1, 11), (p1, c3, 1, 50), (p2, c2, 1, 8), (p1, c1, 2, 44), (p1, c2, 2, 4)
ans: (date 1, sum 81), (date 2, sum 48)
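The GROUP BY aggregate on the sale fact table can be replicated in a few lines of Python, which also makes the per-day totals easy to verify:

```python
from collections import defaultdict

# the sale fact table from the slide: (prodId, storeId, date, amt)
sales = [("p1", "c1", 1, 12), ("p2", "c1", 1, 11), ("p1", "c3", 1, 50),
         ("p2", "c2", 1, 8), ("p1", "c1", 2, 44), ("p1", "c2", 2, 4)]

totals = defaultdict(int)
for prod, store, date, amt in sales:
    totals[date] += amt        # SELECT date, sum(amt) ... GROUP BY date

print(dict(totals))            # {1: 81, 2: 48}
```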
Another Example • Add up amounts by day, product • SELECT date, prodId, sum(amt) FROM sale GROUP BY date, prodId
sale(prodId, storeId, date, amt): (p1, c1, 1, 12), (p2, c1, 1, 11), (p1, c3, 1, 50), (p2, c2, 1, 8), (p1, c1, 2, 44), (p1, c2, 2, 4)
sale: (p1, date 1, 62), (p2, date 1, 19), (p1, date 2, 48)
rollup ↑ / drill‐down ↓
Aggregates • Operators: sum, count, max, min, median, average
• “Having” clause • Using dimension hierarchy – average by region (within store)
What is Data Mining? • Discovery of useful, possibly unexpected, patterns in data • Non‐trivial extraction of implicit, previously unknown and potentially useful information from data • Exploration and analysis, by automatic or semi‐automatic means, of large quantities of data in order to discover meaningful patterns
Data Mining Tasks • Classification [Predictive] • Clustering [Descriptive] • Association Rule Discovery [Descriptive] • Sequential Pattern Discovery [Descriptive] • Regression [Predictive] • Deviation Detection [Predictive] • Collaborative Filtering [Predictive]
Classification: Definition • Given a collection of records (training set) – Each record contains a set of attributes; one of the attributes is the class.
• Find a model for the class attribute as a function of the values of the other attributes. • Goal: previously unseen records should be assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the model. Usually the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
Decision Trees Example: • Conducted survey to see what customers were interested in a new model car • Want to select customers for advertising campaign
training set:
custId  car     age  city  newCar
c1      taurus  27   sf    yes
c3      van     40   sf    yes
c4      taurus  22   sf    yes
c5      merc    50   la    no
c6      taurus  25   la    no
Clustering (figure: customers plotted by income vs. age form natural clusters)
K‐Means Clustering
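A minimal sketch of k-means on hypothetical (age, income)-style points: assign each point to its nearest centroid, recompute centroids as cluster means, and repeat. Real uses would rely on a library implementation; this just makes the loop concrete.

```python
import random

def kmeans(points, k, iters=10, seed=0):
    """Plain k-means sketch: nearest-centroid assignment, then mean update."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the centroid with the smallest squared distance
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep old centroid if a cluster emptied out
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids

# two well-separated groups in (age, income) space
pts = [(25, 30), (27, 32), (26, 31), (60, 90), (62, 95), (61, 92)]
low, high = sorted(kmeans(pts, 2))
```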
Association Rule Mining
sales records (market-basket data):
tran1  cust33  p2, p5, p8
tran2  cust45  p5, p8, p11
tran3  cust12  p1, p9
tran4  cust40  p5, p8, p11
tran5  cust12  p2, p9
tran6  cust12  p9
• Trend: Products p5, p8 often bought together • Trend: Customer 12 likes product p9
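The "p5, p8 often bought together" trend can be found by counting item pairs across the six baskets above:

```python
from itertools import combinations
from collections import Counter

# the six transactions from the slide
baskets = [{"p2", "p5", "p8"}, {"p5", "p8", "p11"}, {"p1", "p9"},
           {"p5", "p8", "p11"}, {"p2", "p9"}, {"p9"}]

pair_counts = Counter()
for b in baskets:
    for pair in combinations(sorted(b), 2):   # every item pair in the basket
        pair_counts[pair] += 1

print(pair_counts.most_common(1))   # [(('p5', 'p8'), 3)] — p5 and p8 co-occur most
```

Counting all pairs like this is quadratic per basket; for large catalogs one would prune with a support threshold first (the Apriori idea), but the principle is the same.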
Association Rule Discovery • Marketing and Sales Promotion: – Let the rule discovered be {Bagels, … } ‐‐> {Potato Chips} – Potato Chips as consequent => Can be used to
determine what should be done to boost its sales. – Bagels in the antecedent => can be used to see which products would be affected if the store discontinues selling bagels. – Bagels in antecedent and Potato chips in consequent => Can be used to see what products should be sold with Bagels to promote sale of Potato chips!
• Supermarket shelf management. • Inventory Management
Collaborative Filtering • Goal: predict what movies/books/… a person may be interested in, on the basis of
– Past preferences of the person – Other people with similar past preferences – The preferences of such people for a new movie/book/…
• One approach based on repeated clustering – Cluster people on the basis of preferences for movies – Then cluster movies on the basis of being liked by the same clusters of
people – Again cluster people based on their preferences for (the newly created clusters of) movies – Repeat above till equilibrium
• Above problem is an instance of collaborative filtering, where users
collaborate in the task of filtering information to find information of interest
Other Types of Mining • Text mining: application of data mining to textual documents – cluster Web pages to find related pages – cluster pages a user has visited to organize their visit history
– classify Web pages automatically into a Web directory
• Graph mining – Deal with graph data
Data Streams • What are Data Streams? – Continuous streams – Huge, Fast, and Changing • Why Data Streams? – The arriving speed of streams and the huge amount of data are beyond our capability to store them.
– “Real‐time” processing • Window Models – Landmark Window (entire data stream) – Sliding Window – Damped Window • Mining Data Streams
A Simple Problem • Finding frequent items – Given a sequence x1…xN where xi ∈ {1, …, m}, and a real number θ between zero and one. – Looking for the xi whose frequency exceeds θN – Naïve Algorithm (m counters) • The number of frequent items ≤ 1/θ • Problem: N >> m >> 1/θ
Why ≤ 1/θ: if P items each occur more than θN times, then P×(θN) ≤ N, so P ≤ 1/θ
KRP algorithm ─ Karp, et al. (TODS ’03)
(figure: a worked run with θ = 0.35, so at most ⌈1/θ⌉ = 3 candidate counters are kept; each round of decrements removes ⌈1/θ⌉ occurrences at once, and N/(1/θ) ≤ Nθ bounds how often any frequent item can be decremented)
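A sketch of the counter-based frequent-items algorithm in the Karp et al. style: keep at most ⌈1/θ⌉ − 1 counters; when a new item arrives and all counter slots are taken, decrement every counter (conceptually discarding one occurrence of ⌈1/θ⌉ distinct items at once). Every item with frequency above θN is guaranteed to survive as a candidate; a second pass would be needed to confirm exact counts.

```python
import math

def frequent_item_candidates(stream, theta):
    """One-pass candidate set: every item with frequency > theta*N is included."""
    k = math.ceil(1 / theta)
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # decrement all counters; drop any that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return set(counters)

stream = [1, 1, 1, 2, 3, 1, 2, 1]          # item 1 has frequency 5/8 > 0.35
cands = frequent_item_candidates(stream, 0.35)
```

Note the space bound: only ⌈1/θ⌉ − 1 counters regardless of m or N, which is exactly what makes this viable when N >> m >> 1/θ.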
Streaming Sample Problem • Scan the dataset once • Sample K records – Each one has equal probability of being sampled – Over N total records, each is kept with probability K/N
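The one-pass K/N sampling property is exactly what reservoir sampling provides: keep the first K records, then replace a random slot with record i+1 with probability K/(i+1).

```python
import random

def reservoir_sample(stream, k, seed=None):
    """One-pass sample of k records; every record survives with probability k/N."""
    rnd = random.Random(seed)
    reservoir = []
    for i, x in enumerate(stream):
        if i < k:
            reservoir.append(x)          # fill the reservoir first
        else:
            j = rnd.randint(0, i)        # uniform slot in [0, i]
            if j < k:                    # happens with probability k/(i+1)
                reservoir[j] = x
    return reservoir

sample = reservoir_sample(range(10_000), 5, seed=1)
```

No N is needed up front, which is the point for streams whose length is unknown.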
Introduction to NoSQL I‐Jen Chiang
Big Users
Big Data
Data Storage
Features of the Data Warehouse • A Data Warehouse is a subject‐oriented, integrated, nonvolatile, time‐variant collection of data in support of management’s decisions – W.H. Inmon
Data Warehouse Architecture Monitoring & Administration
OLAP Servers
Metadata Repository
Reconciled data External Sources
Extract Transform Load Refresh
Analysis
Serve Query/Reporting
Operational Dbs
Data Mining
DATA SOURCES
TOOLS
DATA MARTS
Business Intelligence Loop Business Strategist
OLAP
Data Mining
Reports
Decision Support
Data Storage Data Warehouse Extraction, Transformation, & Cleansing
CRM
Accounting
Finance
HR
Traditional vs. Big Data
Traditional Data Warehouse: complete record from transactional system; all data centralized
Big Data Environment: data from many sources inside and outside of the organization, including the traditional DW; data often physically distributed; need to iterate solution to test/improve models; large‐memory analytics also part of iteration; every iteration usually requires complete reload of information
NoSQL Database • Wide Column Store / Column Families – Hadoop /
HBase, Cassandra, Cloudata, Cloudera, Amazon SimpleDB • Document Store – MongoDB, CouchDB, Citrusleaf • Key Value / Tuple Store – Azure Table Storage, MEMBASE, GenieDB, Tokyo Cabinet / Tyrant, MemcacheDB • Eventually Consistent Key Value Store – Amazon Dynamo, Voldemort • Graph Databases – Neo4J, Infinite Graph, Bigdata • XML Databases – Mark Logic Server, EMC Documentum, eXist
Why NoSQL • Too Much Data: the database became too large to fit into a single database table on a single machine • Data Volume was growing – FAST • Data wasn’t all consistent with a specific, well‐defined schema • Time was critical
CAP Theorem • Three properties of a system: consistency, availability and partition tolerance • You can have at most two of these three properties for any shared-data system • To scale out, you have to partition. That leaves either consistency or availability to choose from
– In almost all cases, you would choose availability over consistency
Theory of noSQL: CAP
• Many nodes • Nodes contain replicas of partitions of data
• Consistency – all replicas contain the same version of data
• Availability – system remains operational on failing nodes
• Partition tolerance – multiple entry points – system remains operational on system split
CAP Theorem: satisfying all three at the same time is impossible
ACID - BASE
BASE: • Basically Available (CP) • Soft‐state • Eventually consistent (AP)
ACID: • Atomicity • Consistency • Isolation • Durability
Pritchett, D.: BASE: An Acid Alternative (queue.acm.org/detail.cfm?id=1394128)
NoSQL
Key / Value
Column
Graph
Document
Key‐Value Store Pros: very fast; very scalable; simple model; able to distribute horizontally
Cons: structures (objects) can’t be easily modeled as key value pairs
Column Stores
Row oriented (stored row by row):
Id  username  email          Department
1   John      john@foo.com   Sales
2   Mary      mary@foo.com   Marketing
3   Yoda      yoda@foo.com   IT
Column oriented (the same table stored column by column):
Id: 1, 2, 3
Username: John, Mary, Yoda
email: john@foo.com, mary@foo.com, yoda@foo.com
Department: Sales, Marketing, IT
Graph Database
An introduction to MongoDB I‐Jen Chiang
Why NoSQL • Too Much Data: the database became too large to fit into a single database table on a single machine
• Data Volume was growing – FAST • Data wasn’t all consistent with a specific, well‐defined schema • Time was critical
Document Stores • The store is a container for documents – Fields may or may not have type definitions • e.g. XSDs for XML stores, vs. schema-less JSON stores
• Can create "secondary indexes" • These provide the ability to query on any document field(s) • Operations: • Insert and delete documents • Update fields within documents
MongoDB MongoDB is a scalable, high‐performance, open source NoSQL database.
• Document‐oriented storage • Full Index Support • Replication & High Availability • Auto‐Sharding • Querying • Fast In‐Place Updates • Map/Reduce • GridFS
Schema‐Less Pros: ‐ flexible, schema‐less data model ‐ data can be stored as key/value pairs ‐ eventual consistency ‐ many are distributed ‐ still provide excellent performance and scalability
Cons: ‐ typically no ACID transactions or joins
Common Advantages • Cheap, easy to implement (open source) • Data are replicated to multiple nodes (therefore identical and fault‐tolerant) and can be partitioned – Down nodes easily replaced – No single point of failure
• Easy to distribute • Don't require a schema • Can scale up and down • Relax the data consistency requirement (CAP)
What is NoSQL giving up? • joins • group by • order by • ACID transactions • SQL as a sometimes frustrating but still powerful query language • easy integration with other applications that support SQL
Cassandra • Originally developed at Facebook • Uses the Dynamo Eventual Consistency model • Written in Java • Open‐sourced and exists within the Apache family • Uses Apache Thrift as its API
Cassandra • Tunable consistency. • Decentralized. • Writes are faster than reads. • No Single point of failure. • Incremental scalability. • Uses consistent hashing (logical partitioning) when clustered. • Hinted handoffs. • Peer to peer routing(ring). • Thrift API. • Multi data center support.
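The "consistent hashing (logical partitioning)" bullet above can be sketched as a minimal hash ring (no virtual nodes, a simplification of what Cassandra actually does): each key goes to the first node clockwise from its hash, so adding or removing a node only remaps one arc of the ring.

```python
import bisect
import hashlib

class ConsistentRing:
    """Minimal consistent-hash ring: key -> first node clockwise from hash(key)."""
    def __init__(self, nodes):
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        h = self._hash(key)
        i = bisect.bisect(self.ring, (h, ""))   # first node position after h
        return self.ring[i % len(self.ring)][1]  # wrap around the ring

ring = ConsistentRing(["nodeA", "nodeB", "nodeC"])
owner = ring.node_for("user:42")   # deterministic placement of the key
```

Contrast with plain `hash(key) % n_nodes`, where changing the node count remaps almost every key.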
Couchdb • Availability and Partition Tolerance. • Views are used to query. Map/Reduce. • MVCC – Multiple Concurrent Versions. No locks.
– A little overhead with this approach due to garbage collection. – Conflict resolution. • Very simple, REST based. • Schema Free. • Shared nothing, seamless peer‐based bi‐directional replication. • Auto Compaction (manual with Mongodb). • Uses B‐Trees. • Documents and indexes are kept in memory and flushed to disc periodically. • Documents have states; in case of a failure, recovery can continue from the state documents were left in. • No built‐in auto‐sharding, though there are open source projects. • You can’t define your indexes.
Mongodb • Data types: bool, int, double, string, object(bson), oid, array, null, date. • Databases and collections are created automatically. • Lots of Language Drivers. • Capped collections are fixed size collections, buffers, very fast, FIFO, good for logs. No indexes. • Object ids are generated by the client, 12 bytes of packed data: 4 byte time, 3 byte machine, 2 byte pid, 3 byte counter. • Possible to refer to other documents in different collections, but more efficient to embed documents. • Replication is very easy to set up. You can read from slaves.
Document store RDBMS → MongoDB mapping: Database → Database; Table, View → Collection; Row → Document (JSON, BSON); Column → Field; Index → Index; Join → Embedded Document; Foreign Key → Reference; Partition → Shard
> db.user.findOne({age:39})
{ "_id" : ObjectId("5114e0bd42…"),
  "first" : "John",
  "last" : "Doe",
  "age" : 39,
  "interests" : [ "Reading", "Mountain Biking" ],
  "favorites": { "color": "Blue", "sport": "Soccer" } }
Mongodb • Connection pooling is done for you. • Supports aggregation. – Map Reduce with JavaScript. • You have indexes, B‐Trees. Ids are always indexed. • Updates are atomic. Low contention locks. • Querying mongo done with a document: – Lazy, returns a cursor. – Reduceable to SQL: select, insert, update, limit, sort etc. • There is more: upsert (either inserts or updates)
– Several operators: • $ne, $and, $or, $lt, $gt, $inc and so on.
• Repository Pattern makes development very easy.
Terminology RDBMS → MongoDB:
Table, View → Collection
Row → Document (JSON, BSON)
Column → Field
Index → Index
Join → Embedded Document
Foreign Key → Reference
Partition → Shard
Features • Document‐Oriented storage • Agile • Replication & High Availability
• Auto‐Sharding • Querying • Fast In‐Place Updates • Map/Reduce
Mongodb ‐ Sharding
replica set: C1 mongod, C2 mongod, C3 mongod
Config servers: keep the mapping; Mongos: routing servers; Mongod: master‐slave replicas
Mongodb Data Analysis
CRUD
Create: db.collection.insert( <document> ); db.collection.save( <document> ); db.collection.update( <query>, <update>, { upsert: true } )
Read: db.collection.find( <query>, <projection> ); db.collection.findOne( <query>, <projection> )
Update: db.collection.update( <query>, <update>, <options> )
Delete: db.collection.remove( <query>, <justOne> )
SQL to Mongodb
SQL Statement → Mongo Statement
CREATE TABLE mycoll (a Number, b Number) → db.createCollection("mycoll")
SELECT a, b FROM users → db.users.find({},{a:1,b:1})
SELECT * FROM users WHERE name LIKE "%Joe%" → db.users.find({name:/Joe/})
SELECT * FROM users WHERE a = 1 AND b = 'q' → db.users.find({a:1,b:'q'})
SELECT COUNT(*) FROM users → db.users.count()
CREATE INDEX myindexname ON users(name) → db.users.ensureIndex({name:1})
UPDATE users SET a = 1 WHERE b = 'q' → db.users.update({b:'q'},{$set:{a:1}},false,true)
DELETE FROM users WHERE z = "abc" → db.users.remove({z:'abc'})
BSON • JSON has a powerful, but limited set of datatypes – strings, numbers, booleans, arrays, objects, null
• BSON is a binary representation of JSON – Adds extra datatypes with Date, Int types, Id, … – Optimized for performance and navigational abilities – And compression
• MongoDB sends and stores data in BSON – bsonspec.org
Mongo Document
Collection
Query
Modification
Select
Query Stage
CRUD example Create, Read, Update, Delete > db.user.insert({ first: "John", last : "Doe", icd: [ 250, 151 ], age: 39 })
> db.user.find ( { "_id" : ObjectId("51…"), "first" : "John", "last" : "Doe", "age" : 39 })
> db.user.update( {"_id" : ObjectId("51…")}, { $set: { age: 40, salary: 7000} } )
> db.user.remove({ "first": /^J/ })
Import Excel into Mongodb
• mongoimport --db dbname --collection collname --type csv --headerline --file /directory/file.csv • Ex. mongoimport ‐d mydb ‐c things ‐‐type csv ‐‐file locations.csv ‐‐headerline
Export Mongodb to Excel • mongoexport --host localhost --port 27017 --username abc --password 12345 --collection collName --csv --fields id,sex,birthday,icd --out all_patients.csv --db my_db --query "{\"_id\": {\"\$oid\": \"5058ca07b7628c0999000006\"}}"
Blog Post Document • > p = { author: "Chris", date: new ISODate(), text: "About MongoDB...", tags: ["tech", "databases"]}
• > db.posts.save(p)
Querying • db.posts.find() { _id : ObjectId("4c4ba5c0672c685e5e8aabf3"), author : "Chris", date : ISODate("2012‐02‐ 02T11:52:27.442Z"), text : "About MongoDB...", tags : [ "tech", "databases" ] }
Notes: _id is unique, but can be anything you'd like
Insertion
db.unicorns.insert({name: 'Horny', dob: new Date(1992,2,13,7,47), loves: ['carrot','papaya'], weight: 600, gender: 'm', vampires: 63});
db.unicorns.insert({name: 'Aurora', dob: new Date(1991, 0, 24, 13, 0), loves: ['carrot', 'grape'], weight: 450, gender: 'f', vampires: 43});
db.unicorns.insert({name: 'Unicrom', dob: new Date(1973, 1, 9, 22, 10), loves: ['energon', 'redbull'], weight: 984, gender: 'm', vampires: 182});
db.unicorns.insert({name: 'Roooooodles', dob: new Date(1979, 7, 18, 18, 44), loves: ['apple'], weight: 575, gender: 'm', vampires: 99});
db.unicorns.insert({name: 'Solnara', dob: new Date(1985, 6, 4, 2, 1), loves: ['apple', 'carrot', 'chocolate'], weight: 550, gender: 'f', vampires: 80});
db.unicorns.insert({name: 'Ayna', dob: new Date(1998, 2, 7, 8, 30), loves: ['strawberry', 'lemon'], weight: 733, gender: 'f', vampires: 40});
db.unicorns.insert({name: 'Kenny', dob: new Date(1997, 6, 1, 10, 42), loves: ['grape', 'lemon'], weight: 690, gender: 'm', vampires: 39});
db.unicorns.insert({name: 'Raleigh', dob: new Date(2005, 4, 3, 0, 57), loves: ['apple', 'sugar'], weight: 421, gender: 'm', vampires: 2});
db.unicorns.insert({name: 'Leia', dob: new Date(2001, 9, 8, 14, 53), loves: ['apple', 'watermelon'], weight: 601, gender: 'f', vampires: 33});
db.unicorns.insert({name: 'Pilot', dob: new Date(1997, 2, 1, 5, 3), loves: ['apple', 'watermelon'], weight: 650, gender: 'm', vampires: 54});
db.unicorns.insert({name: 'Nimue', dob: new Date(1999, 11, 20, 16, 15), loves: ['grape', 'carrot'], weight: 540, gender: 'f'});
db.unicorns.insert({name: 'Dunx', dob: new Date(1976, 6, 18, 18, 18), loves: ['grape', 'watermelon'], weight: 704, gender: 'm', vampires: 165});
Master Selector: {field: value}
db.unicorns.find({gender: 'm', weight: {$gt: …}})
• or (not quite the same thing, but for demonstration purposes) db.unicorns.find({gender: {$ne: 'f'}, weight: {$gte: …}})
$lt, $lte, $gt, $gte and $ne are used for less than, less than or equal, greater than, greater than or equal and not equal operations
$exists and $or • db.unicorns.find({vampires: {$exists: false}}) • db.unicorns.find({gender: 'f', $or: [{loves: 'apple'}, {loves: 'orange'}, {weight: …}]})
Indexing • // 1 means ascending, ‐1 means descending > db.unicorns.ensureIndex({name: 1}) > db.unicorns.findOne({name: 'Kenny'}) {name: 'Kenny', dob: new Date(1997, 6, 1, 10, 42), loves: ['grape', 'lemon'], weight: 690, gender: 'm', vampires: 39}
Indexing on Multiple Fields •
// 1 means ascending, ‐1 means descending db.posts.ensureIndex({author: 1, ts: ‐1})
• Query: db.posts.find({author: 'Chris'}).sort({ts: ‐1})
• Return: [ { _id : ObjectId("4c4ba5c0672c685e5e8aabf3"), author: "Chris", ...}, { _id : ObjectId("4f61d325c496820ceba84124"), author: "Chris", ...} ]
GIS location1 = { name: "10gen East Coast", city: "New York", zip: "10011", tags: ["business", "mongodb"], latlong: [40.0, 72.0] }
db.locations.ensureIndex({latlong: "2d"}) db.locations.find({latlong: {$near: [40,70]}})
GIS ‐ Place location1 = { name: "10gen HQ", address: "17 West 18th Street 8th Floor", city: "New York", zip: "10011", latlong: [40.0, 72.0], tags: ["business", "cool place"], tips: [ {user: "nosh", time: 6/26/2010, tip: "stop by for office hours on Wednesdays from 4-6pm"}, {.....} ] }
Querying your Places Creating your indexes: db.locations.ensureIndex({tags: 1}) db.locations.ensureIndex({name: 1}) db.locations.ensureIndex({latlong: "2d"})
Finding places: db.locations.find({latlong: {$near: [40,70]}}) With regular expressions: db.locations.find({name: /^typeaheadstring/}) By tag: db.locations.find({tags: "business"})
Inserting and updating locations Initial data load: db.locations.insert(place)
Using update to add tips: db.locations.update({name: "10gen HQ"}, {$push: {tips: {user: "nosh", time: …, tip: "stop by for office hours on Wednesdays from 4-6"}}})
Requirements • Locations – Need to store locations (Offices, Restaurants etc.) • Want to be able to store name, address and tags • Maybe User Generated Content, i.e. tips / small notes?
– Want to be able to find other locations nearby
• Check‐ins – User should be able to check in to a location – Want to be able to generate statistics
Users user1 = { name: "nosh", email: "nosh@10gen.com", . . . checkins: [{ location: "10gen HQ", ts: 9/20/2010 10:12:00, …}, … ] }
Simple Stats db.users.find({'checkins.location': '10gen HQ'})
db.checkins.find({'checkins.location': '10gen HQ'}).sort({ts:-1}).limit(10)
db.checkins.find({'checkins.location': '10gen HQ', ts: {$gt: midnight}}).count()
Alternative user1 = { name: "nosh", email: "nosh@10gen.com", . . . checkins: [4b97e62bf1d8c7152c9ccb74, 5a20e62bf1d8c736ab] }
checkins[] = ObjectId reference to locations collection
User Check in
Check‐in = 2 ops:
read location to obtain location id; Update ($push) location id to user object. Queries: find all locations where a user checked in:
checkin_array = db.users.find({..}, {checkins: true}).checkins
db.location.find({_id: {$in: checkin_array}})
Query Operators Conditional Operators ‐ $all, $exists, $mod, $ne, $in, $nin, $nor, $or, $size, $type ‐ $lt, $lte, $gt, $gte
• find posts with any tags db.posts.find({tags: {$exists: true}})
• find posts matching a regular expression db.posts.find({author: /…/})
• count posts by author db.posts.find({author: 'Chris'}).count()
Examine the query plan Query: db.posts.find({"author": 'Chris'}).explain() Result: { "cursor" : "BtreeCursor author_1", "nscanned" : 1, "nscannedObjects" : 1, "n" : 1, "millis" : 0, "indexBounds" : { "author" : [ [ "Chris", "Chris" ] ] } }
Atomic Operators $set, $unset, $inc, $push, $pushAll, $pull, $pullAll, $bit
• Create a comment
new_comment = { author: "Fred", date: new Date(), text: "Best Post Ever!"}
• Add to post
db.posts.update({ _id: "..." }, { "$push": {comments: new_comment}, "$inc": {comments_count: 1} });
113
6/26/2014
Nested Documents { _id : ObjectId("4c4ba5c0672c685e5e8aabf3"), author : "Chris", date : "Thu Feb 02 2012 11:50:01", text : "About MongoDB...", tags : [ "tech", "databases" ], comments : [{ author : "Fred", date : "Fri Feb 03 2012 13:23:11", text : "Best Post Ever!" }], comment_count : 1 }
Second Indexing // Index nested documents > db.posts.ensureIndex({"comments.author": 1}) > db.posts.find({"comments.author": "Fred"})
• // Index on tags (multi-key index) > db.posts.ensureIndex({tags: 1}) > db.posts.find( { tags: "tech" } )
GEO • Geo‐spatial queries • Require a geo index • Find points near a given point • Find points within a polygon/sphere // geospatial index > db.posts.ensureIndex({"author.location": "2d"}) > db.posts.find( { "author.location" : { $near : [22, 42] } } )
Memory Mapped Files • A memory‐mapped file is a segment of virtual memory which has been assigned a direct byte‐for‐byte correlation with some portion of a file or file‐like resource.1
• mmap()
1: http://en.wikipedia.org/wiki/Memory-mapped_file
Replica Sets • Redundancy and Failover • Zero downtime for upgrades and maintenance
Host1:10000 Host2:10001 Host3:10002 replica1
• Master-slave replication – Strong Consistency – Delayed Consistency
Client
• Geospatial features
Sharding • Partition your data • Scale write throughput • Increase capacity • Auto‐balancing
shard 1: Host1:10000; shard 2: Host2:10010
configdb: Host3:20000 Host4:30000
Client
Sharding ‐ horizontal scaling
Unsharded Deployment
Primary
•Configure as a replica set for automated failover
•Async replication between nodes
Secondary
High Throughput • Sharding reduces the number of operations each shard handles. Each shard processes fewer operations as the cluster grows. As a result, a cluster can increase capacity and throughput horizontally. For example, to insert data, the application only needs to access the shard responsible for that record. • Sharding reduces the amount of data that each server needs to store. Each shard stores less data as the cluster grows. For example, if a database has a 1 terabyte data set, and there are 4 shards, then each shard might hold only 256GB of data. If there are 40 shards, then each shard might hold only 25GB of data.
Sharded Deployment MongoS
config
Primary
Secondary
•Autosharding distributes data among two or more replica sets •Mongo Config Server(s) handles distribution & balancing •Transparent to applications
Sharded Cluster
Main components • Shard – Holds a subset of the data – Each Shard can be a single mongod or a replica set
• Config Server (meta data storage) – Stores cluster chunk ranges and locations – Can be only 1 or 3 (production must have 3) – Not a replica set
• Mongos – Acts as a router / balancer – No local data (persists to config database) – Can be 1 or many
Relational Database Clustering
(figure: a hash index over a data file of pet records — parrot Elmer, cat Mittens, cat Natasha, parakeet Tweety, dog Buck, dog Lassie, goat Gertrude — keyed by the hash function below)
a=0, b=1, ... z=25; h(k) = (1st letter of k)(mod 7)
k → h(k): cat → 2, dog → 3, goat → 6, parrot → 1
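The hash-index idea on the slide is easy to make concrete. This sketch uses the slide's toy function, h(k) = (alphabet position of the first letter) mod 7, to bucket record keys:

```python
# toy hash index from the slide: h(k) = (position of k's first letter) mod 7
def h(key):
    return (ord(key[0]) - ord('a')) % 7   # a=0, b=1, ..., z=25

buckets = {}
for name in ["parrot", "cat", "dog", "parakeet", "goat"]:
    buckets.setdefault(h(name), []).append(name)   # same hash -> same bucket

print(h("cat"), h("dog"), h("goat"), h("parrot"))  # 2 3 6 1
```

Note the collision: "parrot" and "parakeet" both land in bucket 1 (p = 15, 15 mod 7 = 1), so a lookup still scans within the bucket — the usual trade-off of hash indexing.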
Deploy a sharded cluster • Start the Config Server Database Instances – mkdir /data/configdb – mongod ‐‐configsvr ‐‐dbpath <path> ‐‐port <port> • mongod ‐‐configsvr ‐‐dbpath /data/configdb ‐‐port 27019
• Start the mongos Instances (27017) – mongos ‐‐configdb <config server hostnames> • mongos ‐‐configdb cfg0.example.net:27019,cfg1.example.net:27019,cfg2.example.net:27019
Deploy a sharded cluster (contd) • Add Shards to the Cluster – mongo ‐‐host ‐‐port • mongo ‐‐host mongos0.example.net ‐‐port 27017
– sh.addShard() • sh.addShard( "rs1/mongodb0.example.net:27017" )
Enable Sharding for a Database • mongo ‐‐host <hostname of machine running mongos> ‐‐port <port>
• sh.enableSharding("<database>") ~ db.runCommand( { enableSharding: <database> } )
Before you can shard a collection, you must enable sharding for the collection’s database.
Chunk Partitioning
Chunk is a section of the entire range
Chunk splitting
• A chunk is split once it exceeds the maximum size • There is no split point if all documents have the same shard key • Chunk split is a logical operation (no data is moved)
Chunk is a section of the entire range
Range based sharding
Hash based Sharding
Linear Hashing: Example
(figure: a linear‐hash file with N = 4 primary buckets holding keys such as 8, 13, 17, 4, 5, 6, 7, 9, 11; keys 15 and 3 are inserted, the bucket at the split pointer is split, and its keys are redistributed between the old bucket and a new bucket using h1(x) = x mod 2N)
124
6/26/2014
Linear Hashing: Search h0(x) = x mod N (for the un-split buckets); h1(x) = x mod 2*N (for the split ones)
(figure: the bucket layout after the split, with already-split buckets addressed by h1)
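The two-level search rule can be written directly: hash with h0, and if the bucket falls before the split pointer (meaning it has already split), rehash with h1. A small sketch, with hypothetical N and split-pointer values:

```python
def linear_hash_search(key, N, split_ptr):
    """Bucket lookup in a linear-hash file: h0 = x mod N, but buckets before
    the split pointer have already split and use h1 = x mod 2N instead."""
    b = key % N                 # h0
    if b < split_ptr:           # this bucket has already been split
        b = key % (2 * N)       # h1 may send the key to the image bucket
    return b

# N = 4 primary buckets, buckets 0 and 1 already split (split_ptr = 2)
print(linear_hash_search(8, 4, 2))    # 0: 8 mod 4 = 0 < 2, then 8 mod 8 = 0
print(linear_hash_search(13, 4, 2))   # 5: 13 mod 4 = 1 < 2, then 13 mod 8 = 5
print(linear_hash_search(7, 4, 2))    # 3: 7 mod 4 = 3, an un-split bucket
```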
Enable Sharding for a Collection • Determine what you will use for the shard key. Your selection of the shard key affects the efficiency of sharding.
• If the collection already contains data, you must create an index on the shard key using ensureIndex(). If the collection is empty then MongoDB will create the index as part of the sh.shardCollection() step.
• Enable sharding for a collection by issuing the sh.shardCollection() method in the mongo shell. The method uses: – sh.shardCollection("<database>.<collection>", shard‐key‐pattern) sh.shardCollection("records.people", { "zipcode": 1, "name": 1 } ) sh.shardCollection("people.addresses", { "state": 1, "_id": 1 } ) sh.shardCollection("assets.chairs", { "type": 1, "_id": 1 } ) sh.shardCollection("events.alerts", { "_id": "hashed" } )
Mixed shard1
...
Host1:10000 Host2:10001
shardn Host4:10010
Host3:10002 replica1 Host5:20000 Host6:30000
Client
Host7:30000
Map/Reduce db.collection.mapReduce( <map>, <reduce>, { out: <collection>, query: <document>, sort: <document>, limit: <number>, finalize: <function>, scope: <document>, jsMode: <boolean>, verbose: <boolean> } ) var mapFunction1 = function() { emit(this.cust_id, this.price); }; var reduceFunction1 = function(keyCustId, valuesPrices) { return Array.sum(valuesPrices); };
Map Reduce • The caller provides map and reduce functions: // Emit each tag > map = "this['tags'].forEach( function(item) {emit(item, 1);} );" // Calculate totals > reduce = "function(key, values) { var total = 0; var valuesSize = values.length; for (var i=0; i < valuesSize; i++) { total += parseInt(values[i], 10); } return total; };"
Map Reduce // run the map reduce >
db.posts.mapReduce(map, reduce, {"out": { inline : 1}});
Answer { "results" : [ {"_id" : "databases", "value" : 1}, {"_id" : "tech", "value" : 1 } ], "timeMillis" : 1, "counts" : { "input" : 1, "emit" : 2, "reduce" : 0, "output" : 2 }, "ok" : 1, }
References • http://docs.mongodb.org/manual/core/inter-process-authentication • http://api.mongodb.org/python/2.6.2/examples/authentication.html • https://securosis.com/assets/library/reports/SecuringBigData_FINAL.pdf • http://docs.mongodb.org/manual/reference/user-privileges • http://www.slideshare.net/DefconRussia/firstov-attacking-mongo-db
Case: CRM I‐Jen Chiang
Relationship Marketing • Relationship Marketing is a Process – communicating with your customers
– listening to their responses
• Companies take actions – marketing campaigns – new products – new channels – new packaging
Relationship Marketing ‐‐ continued • Customers and prospects respond – most common response is no response
• This results in a cycle – data is generated – opportunities to learn from the data and improve the process emerge
An Illustration • A few years ago, UPS went on strike • Customers shifted their shipping volume to FedEx • After the strike, its volume fell • FedEx identified those customers whose FedEx volumes had increased and then decreased
• These customers were using UPS again
• FedEx made special offers to these customers to get all of their business
The Corporate Memory • Several years ago, Land’s End could not recognize regular Christmas shoppers – some people generally don’t shop from catalogs – but spend hundreds of dollars every Christmas – if you only store 6 months of history, you will miss them
• Victoria’s Secret builds customer loyalty with a no‐hassle returns policy – some “loyal customers” return several expensive outfits each month – they are really “loyal renters”
CRM Requires Learning and More
• Form a learning relationship with your customers – Notice their needs • On‐line Transaction Processing Systems
– Remember their preferences • Decision Support Data Warehouse
– Learn how to serve them better • Data Mining
– Act to make customers more profitable
Customer Relationship Management (CRM)
Traditional Marketing vs. CRM:
Goal: increase market share by mass marketing → Goal: build a long‐term, one‐to‐one relationship with customers; understanding their needs, preferences, expectations
Product oriented view → Customer oriented view
Mass marketing / mass production → Mass customization, one‐to‐one marketing
Standardization of customer needs → Customer‐supplier relationship
Transactional relationship → Relational approach
What is CRM? “The approach of identifying, establishing, maintaining, and enhancing lasting relationships with customers.”
“The formation of bonds between a company and its customers.”
Strategies in CRM for Mass Customization • Acquisition • Loyalty • Cross‐selling / Up‐selling • Win back or Save
Business Processes Organize Around the Customer Lifecycle
Winback Former Customer High Value Prospect
New Customer
Voluntary Churn
Established Customer Potential
Low Value
Forced Churn
Strategic Customer Relationship (figure: the entire universe of customers, segmented by behavioral and demographic clustering — satisfy these customers before they defect; spend less marketing $ on segments that churn at higher rates; retain these loyal customers; pursue cross‐sell and up‐sell opportunities; get lost customers back; market a distinct portfolio in sequence to each distinct demographic group; use customer basket analysis and product associations to market the missing products)
Marketing Strategy (figure: clustering purchasing behaviors splits customers into high‐, mid‐ and low‐valued segments — keep high‐valued customers with recognition; cross‐sell and up‐sell to mid‐valued customers; get the best benefit from low‐valued customers at low cost; win back customers before attrition; demography clustering and marketing‐benefit registration feed product design and refinement; product classifications and differential segmentation help find customers and find products)
The Profit/Loss Matrix
Someone who scores in the top 30% is predicted to respond. Those predicted to respond cost $1; those who actually respond yield a gain of $45, while those who don’t respond yield nothing.

              ACTUAL
Predicted     YES     NO
  YES         $44     -$1
  NO          $0      $0

Those not predicted to respond cost $0 and yield no gain.
Marketing Plan • Correctly identify the customer requirements • Define promotion for customers • Adjust the flight schedule
Recommendation in EC: a multiple and effective merchandising platform with cross‐product (different types) recommendation. Components: online retailers, an optional anonymizing filter, a marketplace, “My Library” (all media types) and “My Preferences”, aggregated recommendations, recommendation‐based purchases, and a User Profile ↔ Recommendations mapping.
Information/data integration: a new faculty member wants to find houses with 2 bedrooms priced under 200K, using sources on the Web which provide house listings, e.g. realestate.com, homeseekers.com, homes.com.
Architecture of a Data Integration System
The user simply poses the query (“Find houses with 2 bedrooms priced under 200K”) against the mediated schema; the system maps it onto the source schemas (source schema 1: realestate.com, source schema 2: homeseekers.com, source schema 3: homes.com).
Semantic Matches between Schemas
The schema‐matching problem is to find semantic mappings between the elements of two schemas.
Mediated schema: price, agent‐name, address
Source schema: listed‐price (320K, 240K), contact‐name (Jane Brown, Mike Smith), city (Seattle, Miami), state (WA, FL)
1‐1 matches: price ↔ listed‐price; agent‐name ↔ contact‐name
Complex match: address ↔ (city, state)
Big Data Platforms and Paradigms Sourangshu Bhattacharya (CSE)
Outline
Big Data
What is Big Data?
Challenges with Big Data Processing.
Hadoop – HDFS
Map Reduce
PIG
Analytics
Basic Statistics
Text Analytics
SQL Queries
What is Big Data?
• 6 billion web queries per day: ~6 TB per day, ~2.5 PB per year
• 10 billion display ads per day: ~15 TB per day, ~5.5 PB per year
• 30 billion text ads per day: ~30 TB per day, ~11 PB per year
• Credit card transactions per day: ~150 GB per day, ~5.5 TB per year
• 100 billion emails per day: ~1 PB per day, ~360 PB per year
What is Big Data?
• CERN Large Hadron Collider: ~10 PB/year at start, ~1000 PB in ~10 years; 2500 physicists collaborating
• Large Synoptic Survey Telescope (NSF, DOE, and private donors): ~5–10 PB/year at start in 2012, ~100 PB by 2025
• Pan‐STARRS (Haleakala, Hawaii), US Air Force: now 800 TB/year, soon 4 PB/year
Courtesy: King et al., IEEE Big Data 2013.
Big Data Challenges
• 3 Vs: Volume, Variety, Velocity
• Scalability, cost effectiveness, flexibility, fault tolerance
• Fault tolerance: with many machines, failures become the norm –

  Computers:                      1     10    100
  Chance of failure in an hour:   0.01  0.09  0.63

• Data locality: computation goes to the data
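The failure figures above follow from a simple independence model: if one machine fails in a given hour with probability p, then at least one of n machines fails with probability 1 − (1 − p)ⁿ. A minimal sketch (p = 0.01 is the slide's illustrative rate; the table's 0.09 is 0.0956 rounded down):

```python
# Probability that at least one of n machines fails within an hour,
# assuming each fails independently with probability p.
def chance_of_failure(n, p=0.01):
    return 1 - (1 - p) ** n

for n in (1, 10, 100):
    print(n, chance_of_failure(n))
```

With 100 machines the chance of a failure in any given hour is already about 63%, which is why fault tolerance has to be designed in rather than treated as an exception.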
What is Hadoop?
A scalable, fault‐tolerant distributed system for data storage and processing. Related systems:
• Hadoop Map Reduce – batch processing
• Spark – distributed in‐memory processing
• Storm – distributed online processing
• GraphLab – distributed processing on graphs
Core Hadoop:
• Hadoop Distributed File System (HDFS)
• Hadoop YARN: job scheduling and cluster resource management
• Hadoop Map Reduce: framework for distributed data processing
An open‐source system with large community support: https://hadoop.apache.org/
Hadoop Architecture
Courtesy: http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/YARN.html
HDFS
Assumptions:
• Streaming data access
• Write once, read many times
• High throughput, not low latency
• Large datasets
Characteristics:
• Performs best with a modest number of large files
• Optimized for streaming reads
• Layered on top of the native file system
HDFS
Data is organized into files and directories.
Block placement is known at the time of read; computation is moved to the same node.
Replication is used for: speed, fault tolerance, and self‐healing.
What is Map Reduce?
• A method for distributing a task across multiple servers, proposed by Dean and Ghemawat (2004).
• Consists of two developer‐created phases: Map and Reduce.
• In between Map and Reduce is the Shuffle and Sort phase.
• Example question: what was the max/min temperature for the last century?
Map Phase
• The user writes the mapper method.
• Input is a record, e.g. a row of an RDBMS table, a line of a text file, etc.
• Output is a set of records of the form (key, value).
• Both key and value can be anything, e.g. text, number, etc.
Shuffle/Sort phase
• The shuffle phase ensures that all mapper output records with the same key value go to the same reducer.
• Sort ensures that, among the records received at each reducer, records with the same key arrive together.
Reduce phase
• The reducer is a user‐defined function which processes mapper output records for some of the keys output by the mappers.
• Input is of the form (key, list of values); all records having the same key arrive together.
• Output is a set of records of the form (key, value); here the key is usually not important.
Parallel picture
Example – Word Count: count the total number of occurrences of each word, using a Map phase and a Reduce phase.
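The word‐count pipeline can be simulated in a few lines of plain Python (an in‐memory sketch of the three phases, not the Hadoop API): the mapper emits (word, 1) pairs, shuffle/sort groups them by key, and the reducer sums each group.

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Emit a (word, 1) pair for every word in the input line.
    return [(word, 1) for word in line.split()]

def shuffle_sort(pairs):
    # Group mapper output by key, as the shuffle/sort phase does.
    pairs = sorted(pairs, key=itemgetter(0))
    return [(k, [v for _, v in grp]) for k, grp in groupby(pairs, key=itemgetter(0))]

def reducer(key, values):
    # Sum the counts for one word.
    return (key, sum(values))

lines = ["big data big impact", "big analytics"]
mapped = [p for line in lines for p in mapper(line)]
counts = dict(reducer(k, vs) for k, vs in shuffle_sort(mapped))
print(counts)  # {'analytics': 1, 'big': 3, 'data': 1, 'impact': 1}
```

In real Hadoop the mapper and reducer run on different machines and the shuffle happens over the network; only the two user‐written functions look like the ones here.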
Hadoop Map Reduce provides:
• Fault tolerance
• Methods for interfacing with HDFS for colocation of computation and storage of output
• Status and monitoring tools
• An API in Java
• Support for other languages through Hadoop Streaming
Word Count
Word Count: Mapper
Word Count: Reducer
Word Count: Main
Hadoop Streaming
Allows the user to specify mappers and reducers as executables.
Mapper launches the mapper executable and pipes the input data to the stdin of the executable.
Mapper executable processes the data and writes the mapper output to stdout, which is collected by the mapper.
Mapper key and value are separated by tab.
Reducer operates in the same way.
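A Python equivalent of the streaming word‐count scripts (the script names below are hypothetical), written as functions over iterables of lines so the tab‐separated key/value protocol is easy to see:

```python
def wc_map(lines):
    # Mapper: one "word\t1" line per word (Hadoop splits key and value on the tab).
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def wc_reduce(sorted_lines):
    # Reducer: input arrives sorted by key, so counts for a word are contiguous.
    current, total = None, 0
    for line in sorted_lines:
        word, count = line.split("\t")
        if word != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = word, 0
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"

mapped = sorted(wc_map(["to be or not to be"]))  # sort stands in for shuffle/sort
print(list(wc_reduce(mapped)))  # ['be\t2', 'not\t1', 'or\t1', 'to\t2']
```

As real streaming executables the same logic would read `sys.stdin` and `print` the lines; here the `sorted(...)` call plays the role that Hadoop's shuffle/sort phase plays between the two processes.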
Word count in perl • Main Command: •
hadoop j ar / opt / cl ouder a/ par cel s/ CDH/ l i b/ hadoop- 0. 20mapr educe/ cont r i b/ st r eami ng/ hadoop- st r eami ng- 2. 0. 0- mr 1cdh4. 6. 0. j ar - i nput / t mp/ smal l _svm_dat a - out put / user/ sour angshu/ out put - mapper "/ usr / bi n/ per l wcmap. pl “ - r e duc er " / us r / bi n/ per l wc r educ e. pl “ - f i l e wcmap. pl - f i l e wcreduce. pl
• Mapper: wcmap.pl
Word count in perl • Reducer:
Pig
Pig is a system for fast development of big data processing algorithms.
Pig has two components:
• the Pig Latin language
• an interpreter for Pig Latin
A program expressed in Pig Latin is translated into a set of map reduce jobs. Users interact with the Pig Latin language.
Pig Latin
A Pig Latin statement is an operator that takes a relation as input and produces another relation as output. Exception: LOAD and STORE statements, which read from and write to files.
A general Pig program looks like:
– A LOAD statement reads data from the file system.
– A series of “transformation” statements process the data.
– A STORE statement writes output to the file system; or, a DUMP statement displays output to the screen.
Pig Latin Datatypes
A relation is a bag (more specifically, an outer bag); a bag is a collection of tuples.
A tuple is an ordered set of fields, e.g. (19,2).
A map is a set of key‐value pairs, e.g. [open#apache].
A field is a piece of data; a field can be of a scalar type (e.g. int, long, float, double), an array type (chararray, bytearray), or a complex type (bag, tuple, map).
Example:
A = LOAD 'student' USING PigStorage() AS (name:chararray, age:int, gpa:float);
DUMP A;
Output:
(John,18,4.0F)
(Mary,19,3.8F)
(Bill,20,3.9F)
(Joe,18,3.8F)
Pig Latin
FOREACH .. GENERATE .. : applies an operation to each element of a bag.
Example:
A = LOAD 'data' AS (f1,f2,f3);
B = FOREACH A GENERATE f1 + 5;
C = FOREACH A GENERATE f1 + f2;
Pig Latin
Filtering:
X = FILTER A BY (f1 == 8) OR (NOT (f2+f3 > f1));
Grouping:
X = GROUP A BY f1;
DUMP X;
(1,{(1,2,3)})
(4,{(4,2,1),(4,3,3)})
(8,{(8,3,4)})
Grouping generates an inner bag.
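The inner bag produced by GROUP can be pictured as a mapping from each group key to the list of whole tuples that share it. A rough Python analogue of `X = GROUP A BY f1` (the relation A is read off the DUMP output above):

```python
from collections import defaultdict

A = [(1, 2, 3), (4, 2, 1), (4, 3, 3), (8, 3, 4)]

# GROUP A BY f1: each key maps to the inner bag of matching tuples.
X = defaultdict(list)
for t in A:
    X[t[0]].append(t)

print(sorted(X.items()))
# [(1, [(1, 2, 3)]), (4, [(4, 2, 1), (4, 3, 3)]), (8, [(8, 3, 4)])]
```

This mirrors Pig's output shape: the group key, then a bag of the original tuples, which is exactly what FLATTEN later unrolls.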
Pig Latin
Cogrouping:
Pig Latin Operators on Bags
AVG: Computes the average of the numeric values in a single‐column bag.
Pig Latin Operators
COUNT: Computes the number of elements in a bag.
MAX: Computes the maximum.
MIN: Computes the minimum.
SUM: Computes the sum of the numeric values.
SIZE: Computes the number of elements based on any Pig data type.
Pig Latin
Flatten converts inner bags to multiple tuples:
X = FOREACH C GENERATE group, FLATTEN(A);
DUMP X;
(1,1,2,3)
(4,4,2,1)
(4,4,3,3)
(8,8,3,4)
(8,8,4,3)
Example: Projection
X = FOREACH A GENERATE a1, a2;
DUMP X;
(1,2)
(4,2)
(8,3)
(7,2)
(8,4)
Business Analytics for Decision Making Ram Babu Roy
Decision Making
• Definition – selecting the best solution from two or more alternatives
• Levels of managerial decision making and their information characteristics:
– Strategic Management (executives and directors): unstructured decisions; ad hoc, unscheduled, summarized, infrequent, forward‐looking, external, wide‐scope information
– Tactical Management (business unit managers and self‐directed teams): semistructured decisions
– Operational Management (operating managers and self‐directed teams): structured decisions; prespecified, scheduled, detailed, frequent, historical, internal, narrow‐focus information
Source: Euiho (David) Suh, POSTECH Strategic Management of Information and Technology Laboratory, (http://posmit.postech.ac.kr)
Analysis, Analytics, and BI
• Analysis: the process of careful study of something to learn about its components, what they do, and why
• Analytics: the application of scientific methods and techniques for analysis
• Business Intelligence and Analytics1 – techniques, technologies, systems, practices, and methodologies to help an enterprise better understand its business and market and make timely business decisions
1Source: Business Intelligence and Analytics: From big data to big impact, H. Chen et al., MIS Quarterly, Dec 2012
BI&A Overview
Source: Business Intelligence and Analytics: From big data to big impact, H.Chen et.al MIS Quarterly, Dec 2012
Big Data and Traditional Analytics
Source: Big data at work, Davenport, HBS
Which type of analytics is appropriate? Once you gather data, you build a model of how these data work.
• Descriptive – condense big data into smaller, more useful nuggets of information
• Inquisitive – why is something happening; validate/reject hypotheses
• Predictive – forecast what might happen in the future; uses statistical modeling, data mining, and machine learning techniques, e.g. whether sentiment is positive or negative
• Prescriptive – recommend one or more courses of action and show the likely outcome of each decision; requires a predictive model with two additional components: actionable data and a feedback system
Source: http://www.informationweek.com/big‐data/big‐data‐analytics/big‐data‐ analytics‐descriptive‐vs‐predictive‐vs‐prescriptive/d/d‐id/1113279
Evolution of DSS into Business Intelligence • Change in the Use of DSS – Specialist → Managers → Whomever, Whenever, Wherever • Emergence of “Business Intelligence” OLAP
Data Warehousing
Business Intelligence Data Mining
Intelligent Systems
Source: Euiho (David) Suh, POSTECH Strategic Management of Information and Technology Laboratory, (http://posmit.postech.ac.kr)
Business Intelligence (BI) • Definition – An umbrella term that combines architectures, tools, databases, analytical tools, applications, and methodologies
• Objective – provide managers with the ability to conduct analysis
Business Intelligence pipeline: Data → Information → Knowledge → Decisions → ACTION
Source: Euiho (David) Suh, POSTECH Strategic Management of Information and Technology Laboratory, (http://posmit.postech.ac.kr)
Foundational Technologies in Analytics
Source: Business Intelligence and Analytics: From big data to big impact, H.Chen et.al MIS Quarterly, Dec 2012
Taxonomy for Data Mining Tasks
Classification Techniques • Decision tree analysis • Neural networks • Support vector machines • Case‐based reasoning • Bayesian classifiers (naïve, Belief Network) • Rough sets
Bayesian Classification: Why? • A statistical classifier: performs probabilistic prediction, i.e., predicts class membership probabilities
• Foundation: based on Bayes’ Theorem.
• Performance: a simple Bayesian classifier, the naïve Bayesian classifier, has comparable performance with decision tree and selected neural network classifiers
• Incremental: Each training example can incrementally increase/decrease the probability that a hypothesis is correct — prior knowledge can be combined with observed data
• Standard: Even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured
Bayes’ Theorem: Basics
• Let X be a data sample (“evidence”); the class label is unknown
• Let H be a hypothesis that X belongs to class C
• Classification is to determine P(H|X) (posteriori probability), the probability that the hypothesis holds given the observed data sample X
• P(H) (prior probability): the initial probability – e.g., X will buy a computer, regardless of age, income, …
• P(X) (evidence): the probability that the sample data is observed
• P(X|H) (likelihood): the probability of observing the sample X, given that the hypothesis holds – e.g., given that X will buy a computer, the prob. that X is 31..40 with medium income

Bayes’ Theorem
• Given training data X, the posteriori probability of a hypothesis H, P(H|X), follows Bayes’ theorem:
P(H|X) = P(X|H) P(H) / P(X)
• Informally, this can be written as: posteriori = likelihood × prior / evidence
• Predicts X belongs to Ci iff the probability P(Ci|X) is the highest among all the P(Ck|X) for all the k classes
• Practical difficulty: require initial knowledge of many probabilities, significant computational cost
Towards Naïve Bayes Classifier
• Let D be a training set of tuples and their associated class labels; each tuple is represented by an n‐D attribute vector X = (x1, x2, …, xn)
• Suppose there are m classes C1, C2, …, Cm.
• Classification is to derive the maximum posteriori, i.e., the maximal P(Ci|X)
• This can be derived from Bayes’ theorem:
P(Ci|X) = P(X|Ci) P(Ci) / P(X)
• Since P(X) is constant for all classes, only P(X|Ci) P(Ci) needs to be maximized
Derivation of Naïve Bayes Classifier
• A simplified assumption: attributes are conditionally independent (i.e., no dependence relation between attributes):
P(X|Ci) = ∏_{k=1}^{n} P(xk|Ci) = P(x1|Ci) × P(x2|Ci) × … × P(xn|Ci)
• This greatly reduces the computation cost: only the class distribution needs to be counted
• If Ak is categorical, P(xk|Ci) is the number of tuples in Ci having value xk for Ak, divided by |Ci,D| (the number of tuples of Ci in D)
Naïve Bayes Classifier: Training Dataset
Classes: C1: buys_computer = ‘yes’; C2: buys_computer = ‘no’
Data to be classified: X = (age <=30, income = medium, student = yes, credit_rating = fair)

age    income  student credit_rating buys_computer
<=30   high    no      fair          no
<=30   high    no      excellent     no
31…40  high    no      fair          yes
>40    medium  no      fair          yes
>40    low     yes     fair          yes
>40    low     yes     excellent     no
31…40  low     yes     excellent     yes
<=30   medium  no      fair          no
<=30   low     yes     fair          yes
>40    medium  yes     fair          yes
<=30   medium  yes     excellent     yes
31…40  medium  no      excellent     yes
31…40  high    yes     fair          yes
>40    medium  no      excellent     no
Naïve Bayes Classifier: An Example
• Priors P(Ci): P(buys_computer = “yes”) = 9/14 = 0.643; P(buys_computer = “no”) = 5/14 = 0.357
• Compute P(X|Ci) for each class:
P(age = “<=30” | yes) = 2/9 = 0.222;  P(age = “<=30” | no) = 3/5 = 0.6
P(income = “medium” | yes) = 4/9 = 0.444;  P(income = “medium” | no) = 2/5 = 0.4
P(student = “yes” | yes) = 6/9 = 0.667;  P(student = “yes” | no) = 1/5 = 0.2
P(credit_rating = “fair” | yes) = 6/9 = 0.667;  P(credit_rating = “fair” | no) = 2/5 = 0.4
• X = (age <=30, income = medium, student = yes, credit_rating = fair)
P(X | yes) = 0.222 × 0.444 × 0.667 × 0.667 = 0.044;  P(X | no) = 0.6 × 0.4 × 0.2 × 0.4 = 0.019
P(X | yes) × P(yes) = 0.028;  P(X | no) × P(no) = 0.007
Therefore, X belongs to class “buys_computer = yes”.
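The arithmetic in this example can be checked mechanically. A small sketch that recomputes the priors and conditional probabilities from the 14‐tuple training set and scores the query tuple (plain Python, no libraries):

```python
data = [  # (age, income, student, credit_rating, buys_computer)
    ("<=30", "high", "no", "fair", "no"),    ("<=30", "high", "no", "excellent", "no"),
    ("31..40", "high", "no", "fair", "yes"), (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"),    (">40", "low", "yes", "excellent", "no"),
    ("31..40", "low", "yes", "excellent", "yes"), ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"),   (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"), ("31..40", "medium", "no", "excellent", "yes"),
    ("31..40", "high", "yes", "fair", "yes"), (">40", "medium", "no", "excellent", "no"),
]

def score(x, cls):
    # P(Ci) * prod_k P(x_k | Ci), under the naive independence assumption.
    rows = [r for r in data if r[-1] == cls]
    p = len(rows) / len(data)                    # prior P(Ci)
    for i, value in enumerate(x):
        p *= sum(r[i] == value for r in rows) / len(rows)
    return p

x = ("<=30", "medium", "yes", "fair")
print(round(score(x, "yes"), 3), round(score(x, "no"), 3))  # 0.028 0.007
```

The two scores match the slide's hand computation, and since 0.028 > 0.007 the tuple is classified as "buys_computer = yes".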
Avoiding the Zero‐Probability Problem
• Naïve Bayesian prediction requires each conditional probability to be non‐zero; otherwise the predicted probability P(X|Ci) = ∏_{k=1}^{n} P(xk|Ci) will be zero
• Ex.: suppose a dataset with 1000 tuples has income = low (0), income = medium (990), and income = high (10)
• Use the Laplacian correction (or Laplacian estimator) – add 1 to each case:
Prob(income = low) = 1/1003
Prob(income = medium) = 991/1003
Prob(income = high) = 11/1003
• The “corrected” probability estimates are close to their “uncorrected” counterparts
Naïve Bayes Classifier: Comments • Advantages – Easy to implement – Good results obtained in most of the cases • Disadvantages – Assumption: class conditional independence, therefore loss of accuracy – Practically, dependencies exist among variables • E.g., hospitals: patients: Profile: age, family history, etc. Symptoms: fever, cough etc., Disease: lung cancer, diabetes, etc. • Dependencies among these cannot be modeled by Naïve Bayes Classifier • How to deal with these dependencies? Bayesian Belief Networks
Support Vector Machines (SVM) • A relatively new classification method for both linear and nonlinear data
• It uses a nonlinear mapping to transform the original training data into a higher dimension
• With the new dimension, it searches for the linear optimal separating hyperplane (i.e., “decision boundary”)
• With an appropriate nonlinear mapping to a sufficiently high dimension, data from two classes can always be separated by a hyperplane
• SVM finds this hyperplane using support vectors (“essential” training tuples) and margins (defined by the support vectors)
Support Vector Machines • Aim : To find the best classification function to the training data – a separating hyperplane f(x) that passes through the middle of the two classes – best such function is found by maximizing the margin between the two classes – xn belongs to the positive class if f(xn) > 0
• One of the most robust and accurate methods • Requires only a dozen examples for training • Insensitive to the number of dimensions
SVM—When Data Is Linearly Separable
Let the data D be (X1, y1), …, (X|D|, y|D|), where Xi is a training tuple with associated class label yi.
There are infinitely many lines (hyperplanes) separating the two classes, but we want to find the best one (the one that minimizes classification error on unseen data).
SVM searches for the hyperplane with the largest margin, i.e., the maximum marginal hyperplane (MMH).
SVM—Linearly Separable
A separating hyperplane can be written as
W · X + b = 0
where W = {w1, w2, …, wn} is a weight vector and b a scalar (bias).
For 2‐D it can be written as
w0 + w1 x1 + w2 x2 = 0
The hyperplanes defining the sides of the margin:
H1: w0 + w1 x1 + w2 x2 ≥ 1 for yi = +1, and
H2: w0 + w1 x1 + w2 x2 ≤ –1 for yi = –1
Any training tuples that fall on hyperplanes H1 or H2 (i.e., the sides defining the margin) are support vectors.
This becomes a constrained (convex) quadratic optimization problem: a quadratic objective function with linear constraints, solved by Quadratic Programming (QP) with Lagrangian multipliers.
SVM—Linearly Inseparable
• Transform the original input data into a higher‐dimensional space.
• Search for a linear separating hyperplane in the new space.
SVM: Different Kernel functions
Instead of computing the dot product on the transformed data, it is mathematically equivalent to apply a kernel function K(Xi, Xj) to the original data, i.e., K(Xi, Xj) = Φ(Xi) · Φ(Xj)
Typical Kernel Functions
SVM can also be used for classifying multiple (> 2) classes and for regression analysis (with additional parameters)
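The kernel identity K(Xi, Xj) = Φ(Xi) · Φ(Xj) can be verified numerically for one concrete case: the quadratic kernel K(x, z) = (x · z)² in 2‐D, whose explicit feature map is Φ(x) = (x1², √2·x1·x2, x2²). This kernel/map pair is a standard textbook example, not taken from the slide:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def phi(x):
    # Explicit feature map for the quadratic kernel in 2-D.
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

x, z = (1.0, 2.0), (3.0, 4.0)
k_direct = dot(x, z) ** 2           # kernel applied to the original data
k_mapped = dot(phi(x), phi(z))      # dot product in the transformed space
print(k_direct, k_mapped)           # equal up to floating-point rounding
```

The point of the kernel trick is that `k_direct` needs only the original 2‐D vectors, so the (possibly very high‐dimensional) transformed space is never materialized.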
Parallel Implementation of SVM
• Start with the current w and b, and in parallel do several iterations based on each training example
• Average the values from each of the examples to create a new w and b
• If we distribute w and b to each mapper, then the Map tasks can do as many iterations as we wish in one round
• We need to use the Reduce tasks only to average the results
• One iteration of MapReduce is needed for each round
Model Selection: ROC Curves
• ROC (Receiver Operating Characteristic) curves: for visual comparison of classification models; originated from signal detection theory
• Shows the trade‐off between the true positive rate (vertical axis) and the false positive rate (horizontal axis); the plot also shows a diagonal line
• The area under the ROC curve is a measure of the accuracy of the model; a model with perfect accuracy has an area of 1.0
• Rank the test tuples in decreasing order: the one most likely to belong to the positive class appears at the top of the list
• The closer the curve is to the diagonal line (i.e., the closer the area is to 0.5), the less accurate the model
Issues Affecting Model Selection
• Accuracy – classifier accuracy in predicting class labels
• Speed – time to construct the model (training time); time to use the model (classification/prediction time)
• Robustness: handling noise and missing values
• Scalability: efficiency in disk‐resident databases
• Interpretability – understanding and insight provided by the model
• Other measures, e.g., goodness of rules, such as decision tree size or compactness of classification rules
What is Cluster Analysis? • Cluster: A collection of data objects – similar (or related) to one another within the same group – dissimilar (or unrelated) to the objects in other groups • Cluster analysis (or clustering, data segmentation, …) – Finding similarities between data according to the characteristics found in the data and grouping similar data objects into clusters
• Unsupervised learning: no predefined classes (i.e., learning by observations vs. learning by examples: supervised)
• Typical applications – As a stand‐alone tool to get insight into data distribution – As a preprocessing step for other algorithms
Cluster Analysis: Applications
• Identify natural groupings of customers
• Identify rules for assigning new cases to classes for targeting/diagnostic purposes
• Provide characterization, definition, labelling of populations
• Decrease the size and complexity of problems for other data mining methods
• Identify outliers in a specific domain (e.g., rare‐event detection)
Cluster Analysis: Applications
• Biology: taxonomy of living things: kingdom, phylum, class, order, family, genus and species
• Land use: identification of areas of similar land use in an earth observation database
• Marketing: help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
• City‐planning: identifying groups of houses according to their house type, value, and geographical location
• Earthquake studies: observed earthquake epicenters should be clustered along continent faults
• Climate: understanding earth climate; finding patterns of atmospheric and ocean behavior
• Economic science: market research
Requirements and Challenges
• Scalability – clustering all the data instead of only samples
• Ability to deal with different types of attributes – numerical, binary, categorical, ordinal, linked, and mixtures of these
• Constraint‐based clustering – the user may give inputs on constraints; use domain knowledge to determine input parameters
• Interpretability and usability
• Others – discovery of clusters with arbitrary shape; ability to deal with noisy data; incremental clustering and insensitivity to input order; high dimensionality
Association Rule Mining
• Finds interesting relationships (affinities) between variables (items or events)
• Also known as market basket analysis
• Representative applications of association rule mining include:
– In business: cross‐marketing, cross‐selling, store design, catalog design, e‐commerce site design, optimization of online advertising, product pricing, and sales promotion configuration
– In medicine: relationships between symptoms and illnesses; diagnosis and patient characteristics and treatments (to be used in medical DSS); and genes and their functions (to be used in genomics projects)…
Association Rule Mining
• Are all association rules interesting and useful?
A generic rule: X → Y [S%, C%]
X, Y: products and/or services; X: left‐hand side (LHS); Y: right‐hand side (RHS)
S (support): how often X and Y go together
C (confidence): how often Y goes together with X
Example: {Laptop Computer, Antivirus Software} → {Extended Service Plan} [30%, 70%]
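Support and confidence can be computed directly from a transaction list. A toy sketch (the five baskets are invented for illustration; the item names echo the example above):

```python
baskets = [
    {"laptop", "antivirus", "service_plan"},
    {"laptop", "antivirus", "service_plan", "mouse"},
    {"laptop", "antivirus"},
    {"laptop", "mouse"},
    {"antivirus"},
]

def support(itemset):
    # Fraction of baskets that contain every item in the itemset.
    return sum(itemset <= b for b in baskets) / len(baskets)

lhs = {"laptop", "antivirus"}
rule_items = {"laptop", "antivirus", "service_plan"}
s = support(rule_items)       # S: how often X and Y occur together
c = s / support(lhs)          # C: how often Y occurs given X
print(s, round(c, 2))         # 0.4 0.67
```

Here the rule {laptop, antivirus} → {service_plan} has support 40% (2 of 5 baskets) and confidence 67% (2 of the 3 baskets containing the LHS).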
Privacy Preserving Data Mining • There is a growing concern among people and organization in protecting their privacy
• Many business and government organizations have strong requirements for privacy preserving data mining
• The primary task in data mining: development of models about aggregated data. – meeting privacy requirements – providing valid data mining results Source: Privacy Preserving Association Rule Mining in Vertically Partitioned Data, Jaideep Vaidya, Chris Clifton, SIGKDD ’02 Edmonton, Alberta, Canada
What is Privacy?
• The ability of an individual or group to reveal themselves selectively, or to remain unnoticed or unidentified
• Privacy laws prohibit unsanctioned invasion of privacy by governments, corporations, or individuals
• Privacy may be voluntarily sacrificed, normally in exchange for perceived benefits, and very often with specific dangers and losses
• What is privacy‐preserving data mining?
– The study of achieving some data mining goals without sacrificing the privacy of individuals
Challenges
• Privacy considerations seem to conflict with data mining: how do we mine data when we can’t even look at it?
• Can we have data mining and privacy together? • Can we develop accurate models without access to precise information in individual data records?
• Leakage of information is inevitable – how to minimize the leakage of information PPDM offers a compromise !
Example of Privacy
• Alice and Bob are both teaching the same class, and each of them suspects that one specific student is cheating. Neither is completely sure about the identity of the cheater.
• For the students’ privacy:
– if they both have the same suspect, then they should learn his or her name
– but if they have different suspects, then they should learn nothing beyond that fact.
• They therefore have inputs x and y, and wish to compute f(x, y) = (x == y)
• If f(x, y) = 0 then each party does learn some information, namely that the other party’s suspect is different from his/hers, but this is inevitable
Total count: “How many pneumonia deaths under age 65?”
[Figure: nine hospitals compute the total (61) by passing a running sum around a ring. The initiator starts with a random value r0; each site adds its own count (r1 = r0 + 12, r2 = r1 + 1, r3 = r2 + 8, r4 = r3 + 0, r5 = r4 + 11, r6 = r5 + 7, r7 = r6 + 14, r8 = r7 + 3, r9 = r8 + 0, plus a final +5); the initiator then subtracts r0 to detect the total, 61.]
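The ring protocol in the figure is easy to simulate: the initiating site adds a random offset r0 so that no site ever sees a bare partial sum, and subtracts it at the end. A sketch using per-site counts read off the figure (12, 1, 8, 0, 11, 7, 14, 3, 0, 5, which sum to 61):

```python
import random

def secure_total(counts, seed=None):
    # Initiator picks a random offset known only to itself.
    rng = random.Random(seed)
    r0 = rng.randrange(10 ** 6)
    running = r0
    for c in counts:      # each site adds its count to the running value;
        running += c      # it only ever sees an offset partial sum
    return running - r0   # initiator removes the offset to recover the total

counts = [12, 1, 8, 0, 11, 7, 14, 3, 0, 5]  # one pneumonia-death count per site
print(secure_total(counts))  # 61
```

Each intermediate value is uniformly offset by r0, so a single honest-but-curious site learns nothing about the others' counts (though colluding neighbors could subtract their views, which is why the later slides treat collusion as a separate challenge).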
Distributed Computing Scenario
• Two or more parties owning confidential databases wish to run a data mining algorithm on the union of their databases without revealing any unnecessary information
• Although the parties realize that combining their data has some mutual benefit, none of them is willing to reveal its database to any other party
• Partial leakage of information is inevitable
Association rule mining in vertically partitioned data • Privacy concerns can prevent a central database approach for data mining
• The transactions may be distributed across sources • Collaborate to mine globally valid data without revealing individual transaction data
• Prevent disclosure of individual relationships – Join key revealed – Universe of attribute values revealed
Real‐life Example
• Ford Explorers with Firestone tires from a specific factory had tread separation problems in certain situations, resulting in 800 injuries.
• Since the tires did not have problems on other vehicles, and
other tires on Ford Explorers did not pose a problem, neither side felt responsible
• Delay in identifying the real problem led to a public relations
nightmare and the eventual replacement of 14.1 million tires
• Both manufacturers had their own data ‐ early generation of association rules based on all of the data may have enabled Ford and Firestone to resolve the safety problem before it became a public relations nightmare.
Problem Definition • To mine association rules across two databases, where the columns in the table are at different sites, splitting each row.
• One database is designated the primary, and is the initiator of the protocol. The other database is the responder.
• There is a join key present in both databases. The remaining attributes are present in one database or the other, but not both.
• The goal is to find association rules involving attributes other than the join key observing the privacy constraints
Problem Definition
• Let I = {i1, i2, …, im} be a set of literals, called items. Let D be a set of transactions, where each transaction T is a set of items such that T ⊆ I.
• An association rule is an implication of the form X → Y, where X ⊂ I, Y ⊂ I, and X ∩ Y = ∅.
• The rule X → Y holds in the transaction set D with confidence c if c% of the transactions in D that contain X also contain Y. The rule X → Y has support s in D if s% of the transactions in D contain X ∪ Y.
• Each attribute is represented as a 0 or 1; transactions are strings of 0s and 1s.
• To find out if a particular itemset is frequent, we count the number of records where the values for all the attributes in the itemset are 1.
Mathematical problem formulation
• Let the total number of attributes be l + m, where A has l attributes A1 through Al, and B has the remaining m attributes B1 through Bm. Transactions/records are sequences of l + m 1s or 0s.
• Let k be the support threshold required, and n the total number of transactions/records.
• Let X and Y represent columns in the database, i.e., xi = 1 if row i has value 1 for attribute X. The scalar (or dot) product of the two cardinality‐n vectors X and Y is X · Y = Σ_{i=1}^{n} xi yi.
• Determining if the two‐itemset {X, Y} is frequent thus reduces to testing whether X · Y ≥ k.
Example
• Find out if itemset {A1, B1} is frequent (i.e., if support of {A1, B1} ≥ k)

  A:  Key  A1        B:  Key  B1
      k1   1             k1   0
      k2   0             k2   1
      k3   0             k3   0
      k4   1             k4   1
      k5   1             k5   1

• Support of an itemset is defined as the number of transactions in which all attributes of the itemset are present:
Support = Σ_{i=1}^{n} Ai · Bi
Basic idea
Support = Σ_{i=1}^{n} Ai · Bi
• This is the scalar (dot) product of two vectors.
• To find out if an arbitrary (shared) itemset is frequent, create a vector on each side consisting of the component‐wise multiplication of all attribute vectors on that side (contained in the itemset).
• E.g., for itemset {A1, A3, A5, B2, B3}:
– A forms the vector X = ∏ A1 A3 A5
– B forms the vector Y = ∏ B2 B3
– Securely compute the dot product of X and Y
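On the {A1, B1} example above, the support count is exactly the dot product of the two 0/1 columns. A minimal sketch:

```python
A1 = [1, 0, 0, 1, 1]  # attribute A1 for keys k1..k5 (held by site A)
B1 = [0, 1, 0, 1, 1]  # attribute B1 for keys k1..k5 (held by site B)

def support_count(x, y):
    # Number of rows where every attribute of the itemset is 1;
    # for 0/1 columns this is just the scalar (dot) product.
    return sum(a * b for a, b in zip(x, y))

k = 2  # hypothetical support threshold
print(support_count(A1, B1), support_count(A1, B1) >= k)  # 2 True
```

The privacy-preserving protocol computes this same dot product without either site revealing its raw column to the other; the plain version here only shows why the dot product is the right quantity.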
Conclusion • Privacy preserving association rule mining algorithm using an efficient protocol for computing scalar product while preserving privacy of the individual values
• Communication cost is comparable to that required to build a centralized data warehouse
• Although secure solutions exist, achieving efficient secure solutions for privacy‐preserving distributed data mining is still an open problem
• Handling multi‐party case and avoiding collusion is challenging. Non‐categorical attributes and quantitative association rule mining are significantly more complex problems
References
J. Vaidya and C. Clifton, “Privacy Preserving Association Rule Mining in Vertically Partitioned Data,” ACM SIGKDD 2002, Edmonton, Alberta, Canada.
R. Agrawal and R. Srikant, “Privacy‐Preserving Data Mining,” ACM SIGMOD Conference on Management of Data, 2000.
B. Pinkas, “Cryptographic Techniques for Privacy‐Preserving Data Mining,” SIGKDD Explorations, Vol. 4, Issue 2.
A. Green, “Knowledge Valuation: Building Blocks to a Knowledge Valuation System (KVS),” VINE: The Journal of Information and Knowledge Management Systems, Vol. 36, No. 2, 2006, pp. 146–154.
Business Applications
Ram Babu Roy
"It is not the strongest of the species that survives, nor the most intelligent. It is the one that is the most adaptable to change." – Charles Darwin (attributed)
From Big Data to Big Impact
Source: Business Intelligence and Analytics: From big data to big impact, H.Chen et.al MIS Quarterly, Dec 2012
Big Impacts by Big Data
• Radical transparency with data widely available – change the way we compete
• Impact of real‐time customization on business
• Augmenting management and strategy – better risk management
• Information‐driven business model innovations
– Leveraging valuable exhaust data from business transactions
– Data aggregator as an entrepreneurial opportunity
Source: Are you ready for the era of 'big data'?, Brad Brown et al., McKinsey Quarterly, Oct 2011
Source: http://www.forbes.com/sites/danmunro/2013/04/28/big‐problem‐with‐little‐data/
Magic Quadrant for Advanced Analytics Platforms (figure; quadrants: Leaders, Challengers, Visionaries, Niche Players)
Source: Gartner (February 2014)
Sources of Competitive Advantage
• Big Data – a new type of corporate asset
• Effective use of data at scale
• Data‐driven decision making
• Radical customization – gaining market share
• Constant experimentation
– Adjust prices in real‐time
– Bundling, synthesizing, and making information available across the organization
• Novel Business Models
Growing Business Interest in Big Data
• Hundreds of articles published in technology industry journals and the general business press (Forbes, Fortune, Bloomberg BusinessWeek, The Wall Street Journal, The Economist, etc.)
• 3 V's – use of disparate data sets, including social media
• Requirements
• The advancement of the fields of machine learning and visualization
Role of Data Scientist
• We often do not know what question to ask – requires domain expertise to identify the important problems to solve in a given area
• Which aspect of big data makes more sense
• How to apply it to the business
• How one can achieve success by implementing a big data project
• New challenges
– Lack of structure
– What technology one must use to manage it
– Challenging to convert it into insights, innovation and business value
– But new opportunities
Variation of Potential Value across Sectors
Source: Are you ready for the era of ‘big data’?, Brad Brown et.al, McKinsey Quarterly, Oct 2011
Sources of Big Data • Various business units – Govt./Private • Partners • Customers • Internet of Things • Social Media • Transaction data • Web pages
Major industries that benefit
• Financial services
• Banking and Telecommunication
• Healthcare
• Manufacturing
• Real Estate
• Travel
• Media and Entertainment
• Retailing
Why is Big Data so Important?
• Potential to radically transform businesses and industries
– Re‐inventing business processes
– Operational efficiencies
– Customer service innovations
• You can't manage what you don't measure
– More knowledge about the business
– Improved decision making and performance – driven by data rather than intuition
– More‐effective interventions
• But need to change the decision making culture Source: Big Data: The Management Revolution, Andrew McAfee, Erik Brynjolfsson, HBR, Oct 2012
How are managers using Big Data?
• To create new businesses
– Airlines had a gap of at least 5 to 10 minutes between the ETA (estimated by pilots) and the ATA (actual time of arrival)
– Improved ETA by combining weather data, flight schedules, information about other planes in the sky, and other factors
• To drive more sales – To tailor promotions and other offerings to customers – To personalize the offers to take advantage of local conditions
Integrating Big Data into Business
• How to utilise unstructured data within your organisation
• The latest technical changes related to using Hadoop in the enterprise
• Why big data solutions can enhance your ROI and deliver value
• The future of big data and the Internet of Things • How cloud computing is changing the enterprise’s use of data
Applications of Big Data analytics
• Disaster management
• Healthcare
• Spatio‐temporal data analytics
• Time‐series based long‐term analysis (e.g. climate change)
• Short‐term real‐time analysis (e.g. traffic management, disaster management)
• Benchmarking across the world
• Preserving heritage – creation of e‐repositories
Big Data in Manufacturing
• Manufacturing generates about a third of all data today
– Digitally‐oriented businesses see higher returns
– Detecting product and design flaws with Big Data
• Forecasting, cost and price modeling, supply chain connectivity
• Warranty data analytics, text mining for product development
• Visualization and dashboards, fault detection, failure prediction, in‐process verification tools
• Machine system/sensor health, process monitoring and correction, product data mining
• Big Data generation and management, and the Internet of Things
Big Data in Manufacturing
• Integrating data from multiple systems
• Collaboration among functional units
• Information from external suppliers and customers to co-create products
• Collaboration during design phase to reduce costs • Implement changes in product to improve quality and prevent future problems
• Identify anomalies in production systems – Schedule preemptive repairs before failures – Dispatch service representatives for repair
Retailing and Logistics • Optimize inventory levels at different locations • Improve the store layout and sales promotions • Optimize logistics by predicting seasonal effects
• Minimize losses due to limited shelf life
Financial Applications
• Banking and Other Financial Services
– Automate the loan application process
– Detecting fraudulent transactions
– Optimizing cash reserves with forecasting
• Brokerage and Securities Trading – Predict changes on certain bond prices – Forecast the direction of stock fluctuations – Assess the effect of events on market movements – Identify and prevent fraudulent activities in trading
• Insurance – Forecast claim costs for better business planning – Determine optimal rate plans – Optimize marketing to specific customers – Identify and prevent fraudulent claim activities
Understanding Customer Online Behaviour
• Drawing insight from online customers
• Sentiment analysis – to gauge responses to new marketing campaigns and adjust strategies
• Customer Relationship Management
– Maximize return on marketing campaigns
– Improve customer retention (churn analysis)
– Maximize customer value (cross‐, up‐selling) – new revenue streams
– Identify and treat most valued customers
Case Study: Quantcast
• How do advertisers reach their target audience?
• Consumers spend more time in personalized media environments
• It's harder to reach a large number of relevant consumers
• Advertisers need to use the web to choose their audiences more selectively
• Decision – which ad to show to whom
Source: Too Big to Ignore: The Business Case for Big Data, Phil Simon, Wiley, 2013
Quantcast: A Small Big Data Company • Web measurement and targeting company – Founded in 2006 – focus on online audience • Models marketers’ prospects and finds lookalike audiences across the world • Analyses more than 300 billion observations of media consumption per month • Detailed demographic, geographic and lifestyle information • Created a massive data processing infrastructure – Quantcast File System (QFS) • Incorporates data generated from mobile devices
Quantcast: A Small Big Data Company
• Big data allows organizations to drill down to reach specific audiences
• Different businesses have different data requirements, challenges and goals
• Quantcast provides integration between its • An organization doesn’t need to be big to benefit from Big Data
Promise of Big Data in Healthcare • Predictive and prescriptive analytics • Public health • Disease management • Drug discovery • Personalized medicine • Continuously scan and intervene in the healthcare practices
What is healthcare? • Broader than just practicing medicine • Role of physician, pharmaceutical companies, hospitals, diagnostics services
• Co‐creation of health value
• Role of patients and their family
• Objective: to deliver high‐quality and cost‐effective care to patients
Uniqueness of Healthcare
• Every patient is unique and needs personalized care
• All the medical professionals are unique
• Can we match the core competency and unique style of medical professionals to specific patients?
• Can such principles be applied to improve the overall efficiency?
• Need for engaging medical professionals in the tasks they are best at doing
Healthcare Business Innovation
• Needs exchange of knowledge between business and medicine
• Entrepreneurship in healthcare • Availability of financial support to engage in value creation
• Development and analysis of healthcare business models
Healthcare Analytics
• Data collected through mobile devices, health workers, individuals, and other data sources
• Crucial for understanding population health trends or stopping outbreaks
• Individual electronic health records – not only improve continuity of care for the individual, but also create massive datasets
– Trends in service line utilization
– Improvement opportunities in the revenue cycle
• Treatments and outcomes can be compared in an efficient and cost effective manner.
Sources of Information • Government officials • Industry representatives • Information technology experts • Healthcare professionals
Enabler
• Increase of processing power and storage capacities
– Reduced cost of storage and processing of big data
• Availability of data
– Digitization of data
– Increase in adoption of digital gadgets by users
– Popularity of social media
– Mobile and internet population and penetration
• Awareness about benefits of having knowledge
• Requirement of data‐driven insights by various stakeholders
Explorys: The human Case for Big Data • January 2011, US health care spending $3 • Behavioural, operational and clinical wastes • Long‐term economic implications • Opportunity to improve healthcare delivery and reduce expenses – Integrating clinical, financial and operational data – Volume, velocity and variety of health care data
• Big data can have a big impact
Better Healthcare using Big Data
• Real‐time exploration of performance and outcomes
• Vast user base
– Major integrated delivery networks in the US
• Users can view key performance metrics across providers, groups, care venues, and locations
• Identify ways to improve outcomes and reduce unnecessary costs
Better Healthcare using Big Data
• Why do people go to the emergency room rather than to care available closer to home?
• Analytics to decide whether the care is available in the neighbourhoods
• Service providers can reach out to patients to guide them through the treatment processes
• Patients can receive the right care at the right venue at the right time
• Privacy concerns
Mobilising Data to Deal with an Epidemic
• Case of Haiti's devastating 2010 earthquake
• The use of mobile data patterns could help understand the movement of refugees and the consequent health risks
• Accurately analysed the destination of over 600,000 people displaced from Port‐au‐Prince
• Made this information available to government and humanitarian organisations dealing with the crisis
Cholera outbreak in Haiti
• Mobile data to track the movement of people from affected zones
• Aid organisations used this data to prepare for new outbreaks
• Demonstrates how mobile data analysis could improve responses.
Cost‐effective Technology
• DataGrid platform on Cloudera's enterprise‐ready Hadoop solution
• Platform needs to support
– The complex healthcare data – Evolving transformation of healthcare delivery system
• Need for a solution that can evolve with time
• Flexibility, scalability and speed necessary to answer complex questions on the fly
• Explorys could focus on the core competencies of its business
• Imperative to learn, adapt and be creative
Benefits of Information Technology • Improved access to patient data ‐ can help clinicians as they diagnose and treat patients • Patients might have more control over their health • Monitoring of public health patterns and trends • Enhanced ability to conduct clinical trials of new diagnostic methods and treatments • Creation of new high‐technology markets and jobs • Support a range of healthcare ‐related economic reforms
Potential Benefits
• Provide a solid scientific and economic basis for investment recommendations
• Establish foundation for the healthcare policy decisions
• Improved patient outcomes
• Cost‐savings
• Faster development of treatments and medical breakthroughs
Action Plan
• Government should facilitate the nationwide adoption of a universal exchange language for healthcare information
• Creation of a digital infrastructure for locating patient records while strictly ensuring patient privacy
• Facilitate a transition from traditional electronic health records to the use of healthcare data tagged with privacy and security specifications
NASA: Open Innovation
• Effects of crowdsourcing on the economy – collaboration and innovation – Wikipedia
• What is the relationship between Big Data, collaboration and open innovation?
• Democratize big data to benefit from the wisdom of crowds
• Groups can often use information to make better decisions
• TopCoder – brings together diverse community of software developers, data scientists, and statisticians – Risk‐prevention contest • Predict the riskiest locations for major emergencies and crimes
NASA's Real‐world Challenges
• Recorded more than 100 TB of space images and more
• Encourage exploration and analysis of Planetary Data System (PDS) databases
• Image processing solutions to categorize data from missions to the moon – types and numbers of craters
• Software to handle complex operations of satellites and data analysis
– Vehicle recognition
– Crater detection
• Real‐time information to contestants and collecting their comments
Lessons Learnt
• To effectively harness big data, organizations need not offer big rewards
• State the problem in a way that allows a diverse set of potential solvers to apply their knowledge
• Create a compelling Big Data and open innovation challenge
• Organization's size doesn't matter to reap rewards of Big Data
Crowd Management during a Cricket Match
• Directional flow density during a cricket match
• Availability of public transport for commuters from different destinations
• Mobile data can be used to predict the direction of movement of spectators post‐match
A study by Uttam K Sarkar et al., IIM Calcutta
Potential Applications • Managing resources to pursue most promising
customers and markets • Technology platform for managing the risk and value of network of different stakeholders • Meeting regulatory compliance • Development of patient referral system to reduce patient churn • Life science industries such as diagnostics, pharmaceuticals, and medical devices • Development of mobile apps for referral • Genetic data mining for prediction of disease and effective care
Simple Techniques for Big Data Transition
• Which problem to tackle? – Need for domain expertise
• Habit of asking key questions
– What do the data say?
– Where did the data come from?
– What kinds of analyses were conducted?
– How confident are we in the results?
• Computers can only give answers
Need for Change Management
• Leadership – set clear goals, define success, spot great opportunities, and understand how a market evolves
• Talent management – organizing big data, visualization tools and techniques, design of experiments
Challenges in Getting Returns
• Collecting, processing, analyzing and using Big Data in their businesses
• Handling the volume, variety and velocity of the data
• Finding data scientists – by 2018, the demand for analytics and Big Data people in the U.S. alone will exceed supply by as much as 190,000
• Sharing data across organizational silos • Driving data‐driven business decision rather than that based on intuition
Confronting Complications • Shortage of adequately skilled work force • Shortage of managers and analysts with sharp understanding of big data applications
• Tension between privacy and convenience
– Consumers capture a larger part of the economic surplus
• Data/IP security concerns – cloud computing and open access to information
What Big Data Can't Answer
• How do we know what's important in a complex world?
• E.g. to predict and explain market crashes, food prices, the Arab Spring, ethnic violence, and other complex biological and social systems
• Identifying the important information (and ignoring the rest) is the key to solving the world's increasingly complex challenges
Yaneer Bar‐Yam and Maya Bialik, Beyond Big Data: Identifying important information for real world challenges (December 17, 2013), arXiv, in press
Complexity Profile
• The amount of information that is required to describe a system, as a function of the scale of description
• The most important information about a system for informing action on that system is the behavior at the largest scale
References
• Holdren, John P., Eric Lander, and Harold Varmus. "Potential of Health Information Technology to Improve Healthcare for Americans: The Path Forward." President's Council of Advisors on Science and Technology, Executive Office of the President, Dec. 2010.
• The Emerging Big Returns on Big Data, A TCS 2013 Global Trend Study, 2013.
• www.ikanow.com
Mining Social‐Network Graphs Prof. Ram Babu Roy
Types of Networks (Kolaczyk, 2009)
• Technological – man made, consciously created
• Social – interactions among social entities
• Biological – interaction among biological elements
• Informational – elements of information
What is a Social Network?
• Collection of entities that participate in the network
• There is at least one relationship between entities of the network
• There is an assumption of nonrandomness or locality
• Examples
– "Friends" networks – Facebook, Twitter, Google+
– Telephone networks
– Email networks
– Collaboration networks
Ref: Mining of Massive Datasets, Jure Leskovec, Anand Rajaraman, and Jeffrey D. Ullman, 2014
Social Networks as Graphs
• Not every graph is a suitable representation of a social network
• Locality – the property of social networks that says nodes and edges of the graph tend to cluster in communities
• Important questions
– Identifying communities with unusually strong connections
– Communities usually overlap – you may belong to several communities
Example of a Small Social Network
• Nine edges out of the 7C2 = 21 pairs of nodes
• Suppose X, Y, and Z are nodes with edges between X and Y and also between X and Z. What would the probability of an edge (Y, Z) be?
• If the graph were large, that probability would be very close to the fraction of the pairs of nodes that have edges between them, i.e., 9/21 = .429
• If the graph is small, given the edges (X, Y) and (X, Z), only seven edges remain among the other 19 pairs. Thus, the probability of an edge (Y, Z) is 7/19 = .368
• By actually counting the pairs of nodes, the fraction of times the third edge exists is 9/16 = .563
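The locality argument above can be checked computationally. The 7-node, 9-edge graph below is a hypothetical example (not the slide's exact figure), so its numbers differ from 9/16, but it shows the same effect: the fraction of "wedges" X-Y, X-Z that are closed by an edge (Y, Z) exceeds the overall edge density 9/21:

```python
from itertools import combinations

# Hypothetical 7-node social network with 9 edges (two visible communities).
edges = {frozenset(e) for e in
         [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"),
          ("D", "E"), ("D", "F"), ("E", "F"), ("G", "E"), ("G", "F")]}
nodes = {v for e in edges for v in e}

def wedge_closure(nodes, edges):
    """Count wedges Y-X-Z and how many are closed by an edge (Y, Z)."""
    closed = total = 0
    for x in nodes:
        nbrs = [v for v in nodes if frozenset((x, v)) in edges]
        for y, z in combinations(nbrs, 2):
            total += 1
            closed += frozenset((y, z)) in edges
    return closed, total

closed, total = wedge_closure(nodes, edges)
# In a random graph the closure fraction would be near the edge density 9/21;
# a noticeably higher fraction indicates locality (community structure).
print(closed / total, 9 / 21)
```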
Social Network Analysis
• An actor exists in a fabric of relations (Wasserman & Faust, 1994)
• Centrality measures (Freeman, 1979; Friedkin, 1991)
– Closeness – ability to reach other actors with ease
– Betweenness – relative importance of a node in linking two nodes
– Eigenvector (Bonacich, 1987) – entire pattern of connections
• Network centralization (Freeman, 1979):

C_A = Σ_{x∈U} [C_A(x*) − C_A(x)] / max Σ_{x∈U} [C_A(x*) − C_A(x)]

where C_A(x*) is the largest centrality value observed and the denominator is the maximum possible value of that sum over all networks of the same size
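A minimal sketch of Freeman's centralization index for degree centrality, assuming the standard result that the maximum possible sum for an undirected n-node graph is (n − 1)(n − 2), attained by a star. The helper name and toy graphs are illustrative:

```python
def degree_centralization(adj):
    """Freeman degree centralization; adj: dict node -> set of neighbours."""
    n = len(adj)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    c_max = max(deg.values())
    numerator = sum(c_max - d for d in deg.values())
    # (n - 1) * (n - 2) is the maximum possible numerator (a star graph).
    return numerator / ((n - 1) * (n - 2))

# A star network is maximally centralized (index 1.0) ...
star = {"hub": {"a", "b", "c", "d"},
        "a": {"hub"}, "b": {"hub"}, "c": {"hub"}, "d": {"hub"}}
# ... while a cycle, where every node has equal degree, scores 0.0.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(degree_centralization(star), degree_centralization(cycle))
```

The same skeleton works for closeness or betweenness centralization by swapping the centrality function and the normalizing denominator.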
Link Analysis
• Link Analysis Algorithms
– PageRank
– HITS (Hypertext‐Induced Topic Selection)
• A page is more important if it has more links
– In‐coming links? Out‐going links?
• Think of in‐links as votes
• Are all in‐links equal?
– Links from important pages count more
• Recursive question!
PageRank Scores
Source: Jure Leskovec, Stanford C246: Mining Massive Datasets
Simple Recursive Formulation
• Each link's vote is proportional to the importance of its source page
• If page j with importance r_j has n out‐links, each link gets r_j / n votes
• Page j's own importance is the sum of the votes on its in‐links
• r_j = r_i/3 + r_k/4
Source: Jure Leskovec, Stanford C246: Mining Massive Datasets
The Flow Model
• A "vote" from an important page is worth more
• A page is important if it is pointed to by other important pages
• Define a "rank" r_j for page j: r_j = Σ_{i→j} r_i / d_i, where d_i is the out‐degree of page i
Source: Jure Leskovec, Stanford C246: Mining Massive Datasets
Matrix Formulation
• Stochastic adjacency matrix M
• Let page i have d_i out‐links
• If i → j, then M_ji = 1/d_i, else M_ji = 0
• M is a column stochastic matrix – columns sum to 1
• Rank vector r: vector with an entry per page; r_i is the importance score of page i
• Σ_i r_i = 1
• The flow equations can be written r = M ⋅ r
• The rank vector r is an eigenvector of the stochastic web matrix M
• We can now solve for r using the method called power iteration
Source: Jure Leskovec, Stanford C246: Mining Massive Datasets
Power Iteration Method
• Given a web graph with N nodes, where the nodes are pages and the edges are hyperlinks
• Initialize: r(0) = [1/N, …, 1/N]^T
• Iterate: r(t+1) = M ∙ r(t)
• Stop when |r(t+1) – r(t)|_1 < ε
• |x|_1 = Σ_{1≤i≤N} |x_i| is the L1 norm
• Can use any other vector norm, e.g., Euclidean
Source: Jure Leskovec, Stanford C246: Mining Massive Datasets
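The power iteration can be sketched in plain Python. This is basic PageRank with no teleportation/damping, and it assumes every page has at least one out-link (no dead ends); the `pagerank` helper and 3-page graph are illustrative:

```python
def pagerank(out_links, eps=1e-8):
    """out_links: dict node -> list of nodes it links to.
    Assumes every node has at least one out-link (no dead ends)."""
    nodes = list(out_links)
    n = len(nodes)
    r = {v: 1.0 / n for v in nodes}          # r(0) = [1/N, ..., 1/N]
    while True:
        r_new = {v: 0.0 for v in nodes}
        for i, outs in out_links.items():
            share = r[i] / len(outs)         # column i of M has 1/d_i entries
            for j in outs:
                r_new[j] += share            # r(t+1) = M . r(t)
        if sum(abs(r_new[v] - r[v]) for v in nodes) < eps:  # L1 stopping rule
            return r_new
        r = r_new

# Small 3-page web: a links to b and c, b links to a and c, c links to a.
graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(graph)
print(ranks)  # page a collects the most rank
```

Real implementations add a damping factor (typically 0.85) and handle dead ends and spider traps; the loop structure stays the same.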
How to Solve?
Source: Jure Leskovec, Stanford C246: Mining Massive Datasets
HITS (Hypertext‐Induced Topic Selection) • Is a measure of importance of pages or documents • Similar to PageRank
– Proposed at around same time as PageRank (‘98)
• Goal: Say we want to find good newspapers – Don’t just find newspapers. Find “experts” – people – who link in a coordinated way to good newspapers
• Idea: Links as votes • Page is more important if it has more links • In‐coming links? Out‐going links?
Hubs and Authorities
• Each page has 2 scores:
• Quality as an expert (hub):
– Total sum of votes of authorities pointed to
– Hubs are pages that link to authorities, e.g., a list of newspapers
• Quality as a content (authority):
– Total sum of votes coming from experts
– Authorities are pages containing useful information, e.g., newspaper home pages
Hubs and Authorities
Source: Jure Leskovec, Stanford C246: Mining Massive Datasets
Example
•Under reasonable assumptions about the adjacency matrix A, HITS converges to vectors h* and a*: •h* is the principal eigenvector of matrix A AT •a* is the principal eigenvector of matrix AT A Source: Jure Leskovec, Stanford C246: Mining Massive Datasets
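A minimal sketch of the mutually recursive hub/authority updates that converge to those eigenvectors. The `hits` helper and the tiny adjacency matrix are illustrative, and normalizing by the maximum score is one common convention:

```python
def hits(adj, iters=100):
    """adj[i][j] = 1 if page i links to page j (square 0/1 matrix)."""
    n = len(adj)
    h = [1.0] * n                                 # hub scores
    a = [1.0] * n                                 # authority scores
    for _ in range(iters):
        # a = A^T h : a page is a good authority if good hubs point to it
        a = [sum(adj[i][j] * h[i] for i in range(n)) for j in range(n)]
        # h = A a : a page is a good hub if it points to good authorities
        h = [sum(adj[i][j] * a[j] for j in range(n)) for i in range(n)]
        a_max, h_max = max(a), max(h)             # rescale so the top score is 1
        a = [x / a_max for x in a]
        h = [x / h_max for x in h]
    return h, a

# Page 0 links to pages 1 and 2: page 0 is a pure hub, pages 1 and 2
# are the authorities it points to.
A = [[0, 1, 1],
     [0, 0, 0],
     [0, 0, 0]]
hubs, auths = hits(A)
print(hubs, auths)
```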
Financial Market as CAS
• Complex Systems – interconnectedness, hierarchy of subsystems, decentralized decisions, non‐linearity, self‐organisation, evolution, uncertainty, self‐organized criticality
• Complex Adaptive Systems – adaptation
• Many socio‐economic systems behave as CAS (Mauboussin 2002; Markose 2005)
– Complex – networks of interconnections
– Adaptive – optimizing agents, capable of learning
– Complexity and homogeneity – both robust and fragile
• Structural vulnerabilities build up over time (Haldane, 2009)
Network‐based Modeling
• Modeling complex systems – SD, ABM, VSM, Econophysics
• Scale‐free networks (Barabási, 1999)
• Visual representation of the interdependence (Newman 2008) – knowledge discovery
• Dynamics of networks may reveal underlying mechanisms
• Recent works using network approach (Boginski et al. 2006; Tse, C.K., 2010)
Market graph
• Logarithm of return of instrument i over the one‐day period from (t − 1) to t: Ri(t) = ln[Pi(t)/Pi(t − 1)]
• Correlation coefficient between instruments i and j:
Cij = (⟨Ri Rj⟩ − ⟨Ri⟩⟨Rj⟩) / √[(⟨Ri²⟩ − ⟨Ri⟩²)(⟨Rj²⟩ − ⟨Rj⟩²)]
• An edge connecting stocks i and j is added to the graph if Cij ≥ threshold
– prices of these two stocks behave similarly over time
– degree of similarity is defined by the chosen value of the threshold
• Studying the pattern of connections in the market graph would provide helpful information about the internal structure of the stock market
Source: Mining market data: A network approach, Vladimir Boginski, Sergiy Butenko, Panos M. Pardalos, Computers & Operations Research 33 (2006) 3171–3184
State‐of‐the‐art
• Regional behaviour of stock markets, with relatively small sample sizes – Korean (Jung et al. 2006; Kim et al. 2007), Indian (Pan & Sinha 2007), and Brazilian (Tabak et al. 2009)
• Evolution of interdependence and characterizing the dynamics (Garas, 2007; Huang et al. 2009)
• Not much work on identifying dominant stock indices (Eryigit, 2009)
• Little work on the impact of recent financial crisis during 2008 • Limited business application of SNA (Bonchi et al., 2011)
Research Gap • Characterising the global market dynamics is a complex problem
• Lack of system‐level analysis of global stock market • The network based methodologies inadequately explored • Adaptation of SNA methodologies to other domains
Research Objective • Understanding underlying network structure of markets • Methods to capture interdependence structure of complex systems
• Methods to characterize evolutionary behavior • Methods for change detection ‐ the impact of events on the topology
Research Questions
• Is there any regional influence on the evolutionary behavior?
• How does the crisis phase differ from the normal phase?
• How to capture the macroscopic interdependence structure among stock markets and economic sectors?
• What is the response of the network to extreme events?
Application: Change Detection in the Stock Markets
Source: A Social Network Approach to Change Detection in the Interdependence Structure of Global Stock Markets by Ram Babu Roy, Uttam Kumar Sarkar Social Network Analysis and Mining, Springer, Vol. 3, Number 3, (2013)
Methodology
• Secondary data on major stock markets from across the globe obtained from Bloomberg
• Characterization and pattern mining to investigate the structural and statistical properties and behavior
• Statistical control chart to detect anomaly in evolution
• Graph theoretic methods and algorithms, network visualization tools (Pajek, Matlab and MS Excel)
• Non‐parametric methods for analysis and change detection
Data Description
• The daily closing prices for 85 stock indices from 36 countries from across the world, from January 2006 to December 2010, obtained from Bloomberg
• In addition to these stock indices from various countries, 8 other indices, namely SX5E, SX5P, SXXE, SXXP, E100, E300, SPEURO, SPEU from the European region, were included to investigate whether the regional indices have any influence on the network structure
• Compared the stock market network before and after the collapse of Lehman Brothers in the USA
• Restricted our samples to only those indices existing for a longer period that have data available on Bloomberg (say from 1990), giving us 93 such indices
Computations
• Logarithmic return Ri(t) of instrument i over the one‐day period from (t − 1) to t: Ri(t) = ln[Pi(t)/Pi(t − 1)]
• Correlation coefficient between returns of instruments i and j:
Cij = (⟨Ri Rj⟩ − ⟨Ri⟩⟨Rj⟩) / √[(⟨Ri²⟩ − ⟨Ri⟩²)(⟨Rj²⟩ − ⟨Rj⟩²)]
• An edge connecting stock indices i and j is added to the graph if Cij ≥ threshold
– returns of these two stock indices behave similarly over time
– degree of similarity is defined by the chosen value of the threshold
• MST is used for obtaining a simplified connected network, with edge distance
dij = √[2(1 − Cij)], 0 ≤ dij ≤ 2
Illustration of MST creation
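The returns → correlations → distances → MST pipeline can be sketched as follows. The toy price series and helper names are illustrative stand-ins for the Bloomberg data used in the study, and Prim's algorithm is one standard way to build the MST:

```python
import math

def log_returns(prices):
    """R(t) = ln[P(t) / P(t-1)] for each consecutive pair of prices."""
    return [math.log(prices[t] / prices[t - 1]) for t in range(1, len(prices))]

def corr(x, y):
    """Pearson correlation coefficient of two return series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / math.sqrt(vx * vy)

def mst(dist):
    """Prim's algorithm on a full distance matrix; returns the tree edges."""
    n = len(dist)
    in_tree, tree_edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        in_tree.add(j)
        tree_edges.append((i, j))
    return tree_edges

prices = [[100, 101, 103, 102, 105],   # index 0
          [50, 51, 52, 51, 53],        # index 1 (moves with index 0)
          [200, 198, 199, 202, 200]]   # index 2
R = [log_returns(p) for p in prices]
# d_ij = sqrt(2 (1 - C_ij)): highly correlated indices end up close together.
d = [[math.sqrt(2 * (1 - corr(R[i], R[j]))) for j in range(3)] for i in range(3)]
print(mst(d))  # 2 edges connecting the 3 indices
```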
Empirical Findings

Period No   Start Date   End Date
1           1/11/2006    4/23/2008
6           5/31/2006    9/10/2008
7           6/28/2006    10/8/2008
36          9/17/2008    12/29/2010
37          1/4/2006     12/29/2010

Pre‐LB network (figure)
Post‐LB network (figure)
• Indices cluster with the regional hubs
• Relatively more decentralized network
• European stock indices emerge more central
Application: Identifying Dominant Economic Sectors and Stock Markets
Source: Identifying dominant economic sectors and stock markets: A social network approach, Gold Coast, Australia, Springer, LNCS 7867, pp. 59‐70, (2013)
Data Description

Stock Index   Number of Stocks     Stock Index   Number of Stocks
AS52          136                  NMX           192
CNX500        303                  NZSE50FG      26
FSSTI         19                   SBF250        140
HDAX          38                   SET           285
HSI           25                   SHCOMP        382
IBOV          27                   SPTSX         147
KRX100        65                   SPX           384
MEXBOL        24                   TWSE          313
NKY           192
Total         829                  Total         1869

GICS Economic Sector         Number of Stocks
Consumer Discretionary       486
Consumer Staples             229
Energy                       128
Financials                   408
Health Care                  146
Industrials                  512
Information Technology       234
Materials                    425
Telecommunication Services   36
Utilities                    94
Grand Total                  2698

Data for 13 years from January 1998 to January 2011
GICS – Global Industry Classification Standard
Identification of Dominant Economic Sectors and Stock Markets
• The normalized intra‐sectoral edge density (in percent) is the ratio of the number of edges between the stocks of a particular sector and the maximum number of possible edges between the stocks of that sector (i.e. n − 1, where n is the number of stocks of that sector)
• The normalized inter‐sectoral edge density is the ratio of the number of edges between the stocks of two different sectors and the maximum number of possible edges between the stocks of those two sectors (i.e. min(n1, n2), where n1 and n2 are the number of stocks belonging to the two sectors)
• A similar procedure has been followed to identify dominant stock markets after computing the normalized inter‐index and intra‐index edge densities
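The density computations above can be sketched as follows, assuming an MST-style edge list and a sector label per stock (the helper name, stocks, and sectors are all illustrative):

```python
def edge_densities(edges, sector):
    """edges: list of (stock, stock) pairs; sector: dict stock -> sector name.
    Returns normalized intra-/inter-sectoral edge densities in percent."""
    counts = {}                          # sector pair -> number of edges
    for u, v in edges:
        key = tuple(sorted((sector[u], sector[v])))
        counts[key] = counts.get(key, 0) + 1
    size = {}                            # sector -> number of stocks
    for s in sector.values():
        size[s] = size.get(s, 0) + 1
    dens = {}
    for (s1, s2), c in counts.items():
        # intra-sectoral max: n - 1 edges; inter-sectoral max: min(n1, n2)
        max_edges = size[s1] - 1 if s1 == s2 else min(size[s1], size[s2])
        dens[(s1, s2)] = 100.0 * c / max_edges
    return dens

sector = {"BankA": "Financials", "BankB": "Financials", "BankC": "Financials",
          "OilA": "Energy", "OilB": "Energy"}
edges = [("BankA", "BankB"), ("BankB", "BankC"), ("BankC", "OilA"), ("OilA", "OilB")]
print(edge_densities(edges, sector))
```

A sector whose intra-sectoral density stays high across periods is tightly clustered; high inter-sectoral densities flag sector pairs that co-move.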
(Figure legend: node colors distinguish the stock indices – e.g. AS52 pink, SPTSX black, CNX500 green, NMX green, HDAX magenta, SBF250 brown, HSI purple, SET brown, IBOV yellow, FSSTI cyan, KRX100 magenta, TWSE orange, MEXBOL blue, SHCOMP red, NKY black; node shapes distinguish continents – Australia: diamond, Zealandia: diamond, Asia: triangle, Europe: box, North America: circle, South America: ellipse.)
GICS Sector                  Color     Number of Stocks
Materials                    Brown     425
Financials                   Magenta   408
Industrials                  Cyan      512
Health Care                  Orange    146
Information Technology       Blue      234
Consumer Staples             Black     229
Energy                       Red       128
Consumer Discretionary       Yellow    486
Telecommunication Services   Green     36
Utilities                    Purple    94
Interdependence structure of economic sectors (weighted edges)

Rank   Sector                       Eigenvector Centrality
1      Financials                   0.4785
2      Industrials                  0.4042
3      Materials                    0.3795
4      Consumer Discretionary       0.3449
5      Information Technology       0.2962
6      Consumer Staples             0.2702
7      Telecommunication Services   0.2694
8      Utilities                    0.2002
9      Energy                       0.1957
10     Health Care                  0.1817
Network of Stock Indices (weighted edges)

Rank   Index      Eigenvector Centrality
1      SBF250     0.6381
2      HDAX       0.5938
3      NMX        0.3930
4      SPX        0.1523
5      AS52       0.1517
6      NZSE50FG   0.1248
7      SPTSX      0.1112
8      HS Index   0.0710
9      SET        0.0465
10     MEXBOL     0.0423
11     CNX500     0.0353
12     NKY        0.0289
13     FSSTI      0.0208
14     SHCOMP     0.0048
15     IBOV       0.0043
16     TWSE       0.0004
17     KRX100     0
Inter‐sectoral interdependence linking cross‐country stock markets

Rank   Economic Sector              EV Centrality
1      Industrials                  0.4738
2      Financials                   0.4575
3      Materials                    0.3105
4      Utilities                    0.2921
5      Health Care                  0.2886
6      Consumer Staples             0.2747
7      Consumer Discretionary       0.2685
8      Information Technology       0.2306
9      Telecommunication Services   0.2250
10     Energy                       0.2232
Findings and Conclusions
• Presence of distinct regional and sectoral clusters
• Regional influence dominates economic sectors
• Stock indices from Europe and the US emerge as dominant
• Financial, Industrial, Materials, and Consumer Discretionary sectors dominate
• Social position of stocks – portfolio management
• System level understanding of the structure and behavior
Scope for Future Research • Potential application for classification of stocks in portfolio management
• Detecting epicenter of turbulence in near real‐time ‐ development of EWS
• Modeling shock propagation through the network
• Whether the self‐organizing network provides in‐built resilience
References
• Bessler, D.A., Yang, J., The structure of interdependence in international stock markets. Journal of International Money and Finance 22, 261–287, 2003.
• Eryiğit, M., Eryiğit, R., Network structure of cross‐correlations among the world market indices. Physica A: Statistical Mechanics and its Applications 388(17), 3551–3562, 2009.
• Adams, J., Faust, K., et al., Capturing context: Integrating spatial and social network analyses. Social Networks 34(1), 1–5, 2012.
• Tse, C.K., Liu, J., Lau, F.C.M., A network perspective of the stock market. Journal of Empirical Finance, doi:10.1016/j.jempfin.2010.04.008, 2010.
• Boginski, V., Butenko, S., Pardalos, P.M., Mining market data: A network approach. Computers & Operations Research 33, 3171–3184, 2006.
• Wasserman, S., Faust, K., Social Network Analysis: Methods and Applications. Cambridge University Press, 461–502, 1994.
• Freeman, L.C., Centrality in social networks: Conceptual clarification. Social Networks 1, 215–239, 1979.
• Bonacich, P., Power and centrality: A family of measures. The American Journal of Sociology 92, 1987.
• Roy, R.B., Sarkar, U.K., A social network approach to examine the role of influential stocks in shaping interdependence structure in global stock markets. International Conference on Advances in Social Network Analysis and Mining (ASONAM), Kaohsiung, Taiwan, doi:10.1109/ASONAM.2011.87, 2011.
• Roy, R.B., Sarkar, U.K., A social network approach to change detection in the interdependence structure of global stock markets. Social Network Analysis and Mining, doi:10.1007/s13278‐012‐0063‐y, 2012.
Classification using Neural Networks Uttam K Sarkar Indian Institute of Management Calcutta
Session Plan • The need for neural networks – Signature recognition problem • Whether the signature in the cheque is ‘same’ as the signature the bank has against the account in its database
• Concept of a Neural Network • Demonstration of how it works using Excel • Potential business applications • Issues in using neural networks
Artificial Neural Network (ANN) An ANN is a computational paradigm inspired by the structure of biological neural networks and their way of encoding and solving problems
Biological inspirations • Some numbers… – The human brain contains about 10 billion nerve cells
(neurons) – Each neuron is connected to the others through about 10000 synapses
• Properties of the brain – It can learn, reorganize itself from experience – It adapts to the environment – It is robust and fault tolerant
Biological neural networks • Human brain contains several billion nerve‐cells • A neuron receives electrical signals through its dendrites • The accumulated effect of several signals received simultaneously is linearly additive • Output is a non‐linear (all‐or‐none) type of output signal • Connectivity (number of neurons connected to a neuron) varies from 1 to 10^5; for the cerebral cortex it is about 10^3.
Biological Neuron
[Figure: a biological neuron – dendrites, cell body with nucleus, axon, and synapse]
• Dendrites sense input • Axons transmit output • Information flows from dendrites to axons via cell body • Axon connects to other dendrites via synapses – Interactions of neurons – synapses can be excitatory or inhibitory – synapses vary in strength
• How can the above biological characteristics be modeled in an artificial system?
Artificial implementation using a computer? • Input – Accepting external input is simple and commonplace
• Axons transmit output – Output mechanisms too are well known
• Information flows from dendrites to axons via cell body – Information flow is doable
• Axon connects to other dendrites via synapses – Interactions of neurons – how? What kind of graph or network? – Synapses can be excitatory or inhibitory (1/0? Continuous?) – Synapses vary in strength (weighted average?)
Typical Excitement or Activation Function at a Neuron (Sigmoid or Logistic curve)

y = f(x) = 1 / (1 + e^(−x))   (logistic)
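The logistic activation above is a one‐liner in code; its key properties (0.5 at the origin, saturation towards 0 and 1) are easy to verify:

```python
import math

def sigmoid(x):
    """Logistic activation: y = 1 / (1 + e^(-x)), mapping any real x into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# The curve passes through 0.5 at x = 0 and saturates at the extremes.
values = [round(sigmoid(x), 4) for x in (-5, 0, 5)]
```

The smooth, differentiable saturation is what lets gradient‐based learning tune the weights feeding into each neuron.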
Interconnections? Feed Forward Neural Network • Information is fed at the input • Computations done at the hidden layers • Deviations of computed results from desired goals retune computations • The network thus ‘learns’ • Computation is terminated once the learning is assumed acceptable or resources earmarked for computation get exhausted
[Figure: feed‐forward network – input layer (x1, x2, …, xn), 1st and 2nd hidden layers, output layer]
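A forward pass through such a layered network is only a few lines: each layer computes weighted sums of its inputs and squashes them through the sigmoid. The 2‑input, 2‑hidden, 1‑output network and all its weights below are made up for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums passed through the sigmoid."""
    return [sigmoid(sum(w * v for w, v in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, network):
    """network: a list of (weights, biases) pairs, one per layer."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# Toy network with illustrative weights: hidden layer of 2, output layer of 1
net = [
    ([[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2]),   # hidden layer
    ([[1.2, -0.7]], [0.05]),                    # output layer
]
y = forward([1.0, 0.5], net)
```

Learning then consists of nudging those weights so that the deviation between computed output and desired output shrinks.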
Supervised Learning • Inputs and outputs are both known. • The network tunes its weights to transform the inputs to the outputs without trying to discover the mapping in an explicit form
• One may provide examples and teach the network without knowing exactly how!
Characteristics of ANN • Supervised networks are good approximators • Bounded functions can be approximated by an ANN to any precision
• Does self‐learning by adapting weights to environmental needs
• Can work with incomplete data • The information is distributed across the network. If one part gets damaged the overall performance may not degrade drastically
What do we need to use NN? • Determination of pertinent inputs of the neural network • Finding the optimum number of hidden nodes • Estimating the parameters (learning) • Evaluating the performance of the network • Iterating over the precedent points if performance is unsatisfactory
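As a sketch of these steps, a single sigmoid neuron can be trained by gradient descent on a toy supervised task (logical OR) and then evaluated; the dataset, learning rate, and epoch count below are all illustrative choices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy supervised task: learn logical OR with one sigmoid neuron.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w, b, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(2000):                       # estimate the parameters (learning)
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = target - out                  # deviation from the desired goal
        grad = err * out * (1 - out)        # sigmoid derivative term
        w = [wi + lr * grad * xi for wi, xi in zip(w, x)]
        b += lr * grad

# Evaluate: each example should land on the right side of 0.5
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
```

Real use would add held‐out evaluation data and a search over the number of hidden nodes, exactly the checklist above.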
Applications of ANN • Dares to address difficult problems where cause‐ effect relationship of input‐output is very hard to quantify – Stock market predictions – Face recognition – Time series prediction – Process control – Optical character recognition – Optimization
Concluding remarks on Neural Networks • Neural networks are utilized as statistical tools – Adjust non‐linear functions to fulfill a task – Need multiple and representative examples, but fewer than in other methods • Neural networks enable modeling of complex phenomena • NN are good classifiers BUT – Good representations of data have to be formulated – Training vectors must be statistically representative of the problem domain • Effective use of NN needs a good comprehension of the problem and a good grip on the underlying mathematics
Business Data Mining Promises and Reality
Uttam K Sarkar Indian Institute of Management Calcutta
Background • “War is too important to be left to the Generals” – Georges Benjamin Clemenceau – Decision making now requires navigating beyond transactional data
• Need for exploring the ocean of data – How to filter? – How about outliers? – How to summarize? – How to analyze? – How to apply?
Challenges (Opportunities?) • Real world does not have a consistent behaviour – What model is to be extracted by analytics then? “Future, by definition, is uncertain”
• Captured data are error prone and involve uncertainty – “You are dead by the time you know the ultimate truth”
• Often there is no concrete clarity on goal as you optimize – What are the strengths and weakness of underlying assumptions?
• Data are voluminous, have high variety, and high velocity – How to capture, store, transmit, share, sample, analyze? – “Sufficiently tortured and abused, statistics would confess to almost anything”
Analytics Promises, Myths, and Reality • Too many terms – Business intelligence, Data mining, Data warehousing, Predictive analytics, Prescriptive analytics, Big data, ….
• Too many questions – Where do they emerge from? Why are they getting so popular? Is analytics the panacea? Is it yet another overhyped buzzword?
Analytics in action • Instead of struggling for academic definitions, let us look into what these are intended to achieve and look at some examples from the business world where these are being used
Netflix • Disruptive innovation riding on the analytics dimension – 1997: Reed Hastings, Computer Science Master’s from Stanford, pays a $40 late fee for a VHS cassette of the movie Apollo 13
– The recommender system – The transportation logistics fine tuning – The silent battle of the nondescript entrant Netflix against incumbent billion dollar Blockbuster
– Blockbuster initially ignored Netflix – Recognition (and virtual surrender) by Blockbuster – it was too late!
Harrah’s • Knowing customers better using analytics – The magic of CRM
Wal‐Mart • Supply chain and inventory management, and the success of Radio Frequency Identification (RFID) based analytics – Reduced stockouts!
Examples Galore …
• HP – Which employee is likely to quit?
• Target – Which customers are expectant mothers?
• Telenor – Which customers can be persuaded to stay back?
• Latest US presidential election – Which voter will be positively persuaded by a political campaign such as a call, door knock, flyer, or mail?
• IBM (Watson) – Automated Jeopardy! Analytics software challenged human opponents!
Analytics defined? • Analytics is (Ref: Davenport) – Extensive use of data, statistical, quantitative, and computer based analysis, explanatory and predictive models, and a fact‐based rather than gut‐feeling‐based approach to arrive at business decisions and to drive actions
• Analytics can help arrive at intelligent decisions • What is intelligence in this context? • How can a machine help take intelligent decisions?
Changing nature of competition in business world: How to get competitive advantage?
• Operational business processes aren’t much different from anybody else’s
– Thanks to R&D on “best practices”
…Competitive advantage
• Operational business processes aren’t much different from anybody else’s – Thanks to R&D on “best practices” – Thanks to comparable technology accessible to all
• Unique geographical advantage doesn’t matter much – Thanks to improved transportation/communication logistics
• Protective regulations are largely gone – Thanks to globalization
• Proprietary technology gets rapidly copied – Thanks to technological innovations
• What is left is to improve efficiency and effectiveness by taking the smartest business decisions possible out of data which may as well be available to competitors
Stock Market Analogy • It’s too well known that the stock market would cease to exist if one could find a predictive theory of price movements! – The market exists because no such ideal model is possible. Players try disparate models – some gain while others lose
• Data alone is meaningless without interpretation – The smarter guy discovers more meaningful patterns for his business by using analytics!
Emergence of analytics: the contributors • Old wine in new bottle? – Yes and No
• Pillars of business analytics – Data capture and storage in bulk – Statistical principles revisited – Developments in machine learning principles – Availability of easy to use software tools – Managerial involvement with a change mindset
Panacea? • Too many ready to use tools and techniques are available! – “A fool with a tool is still a fool”
Road to Analytics Uttam K Sarkar Indian Institute of Management Calcutta
Attributes of organizations thriving on analytics
• Identification of a distinctive strategic capability for analytics to make a difference – Highly organization specific – May be customer loyalty for Harrah’s, revenue management for Marriott, supply chain performance for Wal‐Mart, customer’s movie preference for Netflix, …
• Enterprise‐level approach to analytics – Gary Loveman of Harrah’s broke the fiefdom of marketing and customer service islands into a cross‐department chorus on shared data and thoughts
• Senior management capability and commitment – Reed Hastings of Netflix • Computer Science graduate from Stanford – Gary Loveman of Harrah’s • PhD in Economics from MIT – Bezos of Amazon.com • Computer Science graduate from Princeton
Stages of analytics preparedness of an organization
• Analytically impaired – Groping for data to improve operations
• Localized analytics – Using limited analytics to improve a functional activity
• Analytics Experimenter – Exploring analytics to improve a distinctive capability
• Analytics Mindset
• Competing on analytics – Staying ahead on strength of analytics
Investment in analytics without due diligence may not yield results
• United Airlines had been the world’s largest airline in terms of fleet size and number of destinations, operating over 1200 aircraft with service to 400‐odd destinations
• United Airlines had invested heavily in analytics • The company filed for Chapter 11 bankruptcy protection in 2002
• Despite business downturn, most other airlines were not as adversely affected
• What went wrong with its analytics?
United Airlines analytics … postmortem
• UA pioneered yield management – Other smaller airlines were doing cost cutting and offering seats at lower fares
• UA was developing complex route planning optimization analytics with multiple plane types – Competitors like Southwest used only one kind of plane and had a far simpler and cheaper system of running
• UA pioneered a loyalty programme based on analytics – Their customer service was so pathetic that frequent flyers hardly had any loyalty to it
• UA spent a fortune developing analytics – Other airlines could buy the Sabre system for pretty similar analysis at a far cheaper price
Questions to ask when evaluating an analytics initiative
• How does this initiative improve enterprise‐wide capabilities? • What complementary changes need to be made to take advantage of the capabilities? – More IT? More training? Redesign jobs? Hire new skills?
• Do we have access to right data? – Are data timely, accurate, complete, consistent?
• Do we have access to right technology? – Is the technology workable, scalable, reliable, and cost effective?
Missteps to avoid when getting into an analytics initiative • Focusing excessively on one dimension of the capability (say, only on technology)
• Attempting to do everything at once – Any complex system is best handled incrementally
• Investing too much or too little on analytics without matching impact on and demand of business
• Choosing the wrong problem – Wrong formulation, wrong assumptions, wrong data, …
• Making the wrong interpretation – Tool + data + a few mouse clicks = model – Model + input = output – Who assesses whether the output is garbage?