HA300 - SAP HANA Implementation and Modeling Collection 97 SPS4
Material number: 50110712 Version: 97
DISCLAIMER This presentation and SAP's strategy and possible future developments are subject to change and may be changed by SAP at any time for any reason without notice. This document is provided without a warranty of any kind, either express or implied, including but not limited to, the implied warranties of merchantability, fitness for a particular purpose, or non-infringement. SAP assumes no responsibility for errors or omissions in this document, except if such damages were caused by SAP intentionally or grossly negligent. © SAP 2012
© 2012 SAP AG. All rights reserved.
Copyright © 2012 SAP AG. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice. Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors. Microsoft, Windows, Excel, Outlook, PowerPoint, Silverlight, and Visual Studio are registered trademarks of Microsoft Corporation. IBM, DB2, DB2 Universal Database, System i, System i5, System p, System p5, System x, System z, System z10, z10, z/VM, z/OS, OS/390, zEnterprise, PowerVM, Power Architecture, Power Systems, POWER7, POWER6+, POWER6, POWER, PowerHA, pureScale, PowerPC, BladeCenter, System Storage, Storwize, XIV, GPFS, HACMP, RETAIN, DB2 Connect, RACF, Redbooks, OS/2, AIX, Intelligent Miner, WebSphere, Tivoli, Informix, and Smarter Planet are trademarks or registered trademarks of IBM Corporation. Linux is the registered trademark of Linus Torvalds in the United States and other countries. Adobe, the Adobe logo, Acrobat, PostScript, and Reader are trademarks or registered trademarks of Adobe Systems Incorporated in the United States and other countries. Oracle and Java are registered trademarks of Oracle and its affiliates. UNIX, X/Open, OSF/1, and Motif are registered trademarks of the Open Group.
Google App Engine, Google Apps, Google Checkout, Google Data API, Google Maps, Google Mobile Ads, Google Mobile Updater, Google Mobile, Google Store, Google Sync, Google Updater, Google Voice, Google Mail, Gmail, YouTube, Dalvik and Android are trademarks or registered trademarks of Google Inc. INTERMEC is a registered trademark of Intermec Technologies Corporation. Wi-Fi is a registered trademark of Wi-Fi Alliance. Bluetooth is a registered trademark of Bluetooth SIG Inc. Motorola is a registered trademark of Motorola Trademark Holdings LLC. Computop is a registered trademark of Computop Wirtschaftsinformatik GmbH. SAP, R/3, SAP NetWeaver, Duet, PartnerEdge, ByDesign, SAP BusinessObjects Explorer, StreamWork, SAP HANA, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and other countries. Business Objects and the Business Objects logo, BusinessObjects, Crystal Reports, Crystal Decisions, Web Intelligence, Xcelsius, and other Business Objects products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of Business Objects Software Ltd. Business Objects is an SAP company.
Citrix, ICA, Program Neighborhood, MetaFrame, WinFrame, VideoFrame, and MultiWin are trademarks or registered trademarks of Citrix Systems Inc.
Sybase and Adaptive Server, iAnywhere, Sybase 365, SQL Anywhere, and other Sybase products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of Sybase Inc. Sybase is an SAP company.
HTML, XML, XHTML, and W3C are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology.
Crossgate, m@gic EDDY, B2B 360°, and B2B 360° Services are registered trademarks of Crossgate AG in Germany and other countries. Crossgate is an SAP company.
Apple, App Store, iBooks, iPad, iPhone, iPhoto, iPod, iTunes, Multi-Touch, Objective-C, Retina, Safari, Siri, and Xcode are trademarks or registered trademarks of Apple Inc.
All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. National product specifications may vary.
IOS is a registered trademark of Cisco Systems Inc. RIM, BlackBerry, BBM, BlackBerry Curve, BlackBerry Bold, BlackBerry Pearl, BlackBerry Torch, BlackBerry Storm, BlackBerry Storm2, BlackBerry PlayBook, and BlackBerry App World are trademarks or registered trademarks of Research in Motion Limited.
The information in this document is proprietary to SAP. No part of this document may be reproduced, copied, or transmitted in any form or for any purpose without the express prior written permission of SAP AG.
Agenda: SAP HANA Implementation and Modeling
Unit 1: Approaching SAP HANA Modeling
Unit 2: Connecting Tables
Unit 3: Advanced Modeling
Unit 4: Full Text Search
Unit 5: Processing Information Models
Unit 6: Managing Modeling Content
Unit 7: Security and Authorizations
Unit 8: Data Provisioning using SLT
Unit 9: Data Provisioning using SAP Data Services
Unit 10: Data Provisioning using Flat File Upload
Unit 11: Data Provisioning using Direct Extractor Connection
Unit 1: Approaching SAP HANA Modeling, Lesson 1: Best Practice Guidelines
Objectives: Approaching SAP HANA Modeling
At the end of this lesson, you will be able to:
- Take persistency considerations into account
- Explain the different engine types in the SAP HANA architecture
- Choose the best views for the Information Model
- Discuss some general recommendations
Overview: Approaching SAP HANA Modeling
This module covers the following topics:
- Persistency Considerations
- SAP HANA Engine Overview
- Choosing Views for the Information Model
Persistency Considerations I: Approaching SAP HANA Modeling
Client queries can reach the persistence model in several ways:
1. Directly on a column (or row) table via SQL, e.g. SELECT a, b, c FROM persistence_model
2. Through column views generated from information models (Attribute Views, the Analytic View data foundation, Calculation Views)
3. In a scripted Calculation View via SQL statements, e.g. out = SELECT a, b, c FROM table;
4. In a scripted Calculation View via CE built-in functions, e.g. out = CE_PROJECTION(:data, ["A", "B", "C"]);
Before you start creating tables in the HANA database, take a little time to think about your scenarios. Different scenarios have different requirements and therefore different persistency models. Here is a small checklist you should take into consideration:

Write- or read-intensive scenarios
- Criteria for the storage type:
  - For write-intensive scenarios, the recommendation is to use the row store.
  - For read-intensive scenarios, the recommendation is to use the column store.

Real-time data access
- Depending on the scenario requirements, various data replication tools are available. The tools differ from each other in:
  - support of table types and data types
  - capability of data type transformation

Authorization
- HANA data content authorization (privileges) applies to specific models only (e.g. Analytic View, Calculation View, Attribute View).

Application / client
- Different clients consume the HANA DB differently and might require client-specific models. Some examples:
  - Explorer does not support filters in queries at design time.
  - Currently only AAO and Excel plan to support hierarchies.
  - WebI always retrieves and caches the maximum result set.
  - Only Explorer, Excel, and AAO currently consume additional metadata, e.g. for multi-language support.

Functionality
- Some complex algorithms can only be expressed in the L language.
- HANA currency conversion is supported only via Analytic / Calculation Views.
- UNION is available only in Calculation Views.

Performance
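The storage-type decision from this checklist is made when a table is created. A minimal sketch (table and column names are illustrative, not taken from the course system):

```sql
-- Read-intensive analytics: column store
CREATE COLUMN TABLE SALES_ITEM (
    ORDER_ID   INTEGER,
    PRODUCT_ID INTEGER,
    AMOUNT     DECIMAL(15,2)
);

-- Write-intensive scenario: row store
CREATE ROW TABLE SESSION_LOG (
    SESSION_ID INTEGER,
    MESSAGE    NVARCHAR(200)
);
```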
Persistency Considerations II: Approaching SAP HANA Modeling

1: Column Table
- Usage: Good for a quick start with HANA. Should be used for simple applications and showcases.
- Pros: No additional modeling required. Easy to consume for most clients.
- Cons: No support for analytic privileges, multi-language, or client handling. Complex calculations and logic are shifted to the client side. In general, low performance.

2: Analytical View
- Usage: Most recommended for analytical purposes, where read operations on mass data are required.
- Pros: Very high performance on SELECT. Supported by modeling. Well optimized.
- Cons: Limitations with regard to functions.

3: Calculation View (SQL)
- Usage: Good for quickly building scenarios with complex calculations. The model is usually simple and contains only a few fields.
- Pros: Building calculation views via SQL syntax is easy.
- Cons: Client queries can be less optimized and could be significantly slower compared to other models.

4: Calculation View (CE Functions)
- Usage: Recommended for analytical purposes using complex calculations that cannot be expressed in an analytic view.
- Pros: Client queries can be well optimized and parallelized. Usually better performance than SQL.
- Cons: The syntax differs from the well-known SQL language.
SAP HANA Engine Overview I: Approaching SAP HANA Modeling
The SQL Optimizer dispatches queries to the Calculation Engine, the OLAP Engine, the Join Engine, and the Row Store Engine, which operate on the column store and the row store.
The SAP HANA architecture provides different types of engines:
- Join Engine: used to perform all types of joins.
- OLAP Engine: used for calculation and aggregation based on a star schema or similar.
- Calculation Engine: used on top of the OLAP Engine and/or Join Engine for complex calculations that cannot be done by the Join Engine or OLAP Engine.
The SQL Optimizer decides the best way to call the engines, depending on the involved models and queries.
SAP HANA Engine Overview II: Approaching SAP HANA Modeling
- Calculation Views are processed by the Calculation Engine.
- Analytic Views are processed by the OLAP Engine.
- Attribute Views are processed by the Join Engine.
When building new Information Models, we need to keep in mind which engine is utilized. The graphic shows which engine is utilized based on the view that has been designed. It is important to mention that any Analytic View or Attribute View containing a calculated attribute will be processed as a Calculation View. This must be taken into consideration during modeling, because it can have a big impact on the performance of the data model. In general, creating calculated attributes in an Analytic View or Attribute View must be avoided.
General Modeling Principles
- Avoid transferring large result sets between the HANA DB and the client application.
- In Calculation Views, do calculations after aggregation and avoid complex expressions (IF, CASE, ...).
- Reduce data transfer between views.
- In Analytical Views, aggregate data records (e.g. using GROUP BY, reducing columns).
- Join on key columns or indexed columns, and avoid calculations before aggregation on line-item level.
- In the column store, filter the data amount as early as possible in the lower layers (constraints, WHERE clauses, analytic privileges, ...).
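The "do calculations after aggregation" principle can be illustrated in plain SQL (hypothetical table and tax factor):

```sql
-- Preferred: aggregate first, then calculate once per result row
SELECT CUSTOMER_ID, SUM(QUANTITY) * 1.19 AS GROSS_QUANTITY
FROM   SALES_ITEM
GROUP  BY CUSTOMER_ID;

-- Avoid: the expression is evaluated once per line item before aggregation
SELECT CUSTOMER_ID, SUM(QUANTITY * 1.19) AS GROSS_QUANTITY
FROM   SALES_ITEM
GROUP  BY CUSTOMER_ID;
```

Both statements return the same numbers here; the first variant simply evaluates the multiplication far fewer times, which matters on mass data.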
Choosing Views for the Information Model: Approaching SAP HANA Modeling
To analyze data in the HANA DB, follow this decision tree:
1. Do you need a star schema or aggregation? If yes, use an Analytic View.
2. If not: do you need only joins and calculated expressions? If yes, use an Attribute View.
3. Otherwise, try a Graphical Calculation View; if it is sufficient, use it.
4. If that is not enough, use a CalcScenario or a scripted Calculation View with CE functions.
5. If that is still not enough, use a scripted Calculation View or procedures.
When deciding which views will be most efficient for your Information Model, we recommend following the decision tree as a guide to identify the views you need to create based on the analysis requirements. For the best performance and easier maintenance, it is important to stay on the top three levels of the decision tree whenever possible.
Summary: Approaching SAP HANA Modeling
You should now be able to:
- Take persistency considerations into account
- Explain the different engine types in the SAP HANA architecture
- Choose the best views for the Information Model
- Discuss some general recommendations
Objectives: Connecting Tables
At the end of this lesson, you will be able to:
- Explain the differences between Inner Join, Left Outer Join, Right Outer Join, Full Outer Join, Text Join, and Referential Join when connecting tables
- Explain how to use a Standard Union and a Union with constant values
Business Example: Connecting Tables
We want to connect the Sales Order table to the Customer table, linked to the State table.
Order 8 does not have a customer master record. Customer TOM does not have any orders. State TX does not have a description. No customer resides in Alabama.
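As a sketch, the sample data described above could be set up as follows (names and values are inferred from the slides and may differ from the actual course tables):

```sql
CREATE COLUMN TABLE CUSTOMER (C_ID INTEGER, CNAME NVARCHAR(20), AGE INTEGER, STATE NVARCHAR(2));
CREATE COLUMN TABLE STATE (STATE NVARCHAR(2), SNAME NVARCHAR(20));
CREATE COLUMN TABLE SALES_ORDER (O_ID INTEGER, C_ID INTEGER, AMOUNT DECIMAL(15,2));

INSERT INTO CUSTOMER VALUES (1, 'WERNER', 10, 'MI');
INSERT INTO CUSTOMER VALUES (2, 'MARK',   11, 'MI');
INSERT INTO CUSTOMER VALUES (3, 'TOM',    12, 'TX');  -- TOM has no orders
INSERT INTO CUSTOMER VALUES (4, 'BOB',    13, 'TX');
INSERT INTO STATE VALUES ('MI', 'MICHIGAN');
INSERT INTO STATE VALUES ('AL', 'ALABAMA');           -- no customer in Alabama; TX has no description
INSERT INTO SALES_ORDER VALUES (8, 99, 100.00);       -- order 8: no customer master record
```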
Overview: Connecting Tables
This module covers how to connect tables using:
- Inner Join
- Left Outer Join
- Right Outer Join
- Full Outer Join
- Text Join
- Referential Join
- Union
Join Types: Definitions and Referential Integrity

INNER
- Use when you need to report on: facts with matching dimensions only.
- Be aware that: facts without any dimension will be excluded; dimensions without any fact will be excluded; the join is always performed.

LEFT OUTER
- Use when you need to report on: all posted facts, whether there is a matching dimension or not.
- Be aware that: dimensions without any fact will be excluded; best for performance, because the join is omissible.

RIGHT OUTER
- Use when you need to report on: all dimensions, whether there are matching facts or not.
- Be aware that: facts without any dimension will be excluded; the join is always performed.

REFERENTIAL
- Use when you need to report on: facts for the requested dimensions AND referential integrity is ensured.
- Be aware that: it is the default join type; it acts as an INNER join for Attribute Views; the join on attribute views is "optional", so which facts are returned depends on which attributes are queried.

TEXT
- Use when you need to report on: an SAP dimension table joined to a text table for translation purposes.
- Be aware that: it is only available for Attribute Views with SAP ERP tables (SPRAS field) or equivalent design; it acts as an INNER join.
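Expressed in plain SQL against illustrative CUSTOMER and STATE tables in the spirit of the business example, the first two join types behave as follows:

```sql
-- INNER: TX customers drop out, because TX has no row in STATE
SELECT C.CNAME, S.SNAME
FROM   CUSTOMER C
INNER  JOIN STATE S ON C.STATE = S.STATE;

-- LEFT OUTER: every customer survives; SNAME is NULL for TX customers
SELECT C.CNAME, S.SNAME
FROM   CUSTOMER C
LEFT   OUTER JOIN STATE S ON C.STATE = S.STATE;
```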
Referential Joins: semantically, this is an inner join that assumes referential integrity is given, which means that the left table always has a corresponding entry in the right table. It could be used, for example, in a data foundation for header-item relations, where it can be assumed that for each item a header exists.

It can be seen as an optimized or faster inner join, where the right table is not checked if no field from the right table is requested. The join is only executed when fields from both tables are requested. Therefore, if a field is selected from the right table, it acts like an inner join; if no field from the right table is selected, it acts like a left outer join. From a performance perspective, a Left Outer Join is almost as fast as a Referential Join, while an Inner Join is usually slower because the join is always executed.

Referential joins should be used with caution, since they assume that referential integrity is ensured. The only valid scenario for a Referential Join is that (a) it is 100% guaranteed that for each row in one table there is at least one join partner in the other table, (b) that holds true in both directions, and (c) at all times. If that is not the case, referential joins can produce incorrect calculations: if a delivery header is created but the items are not processed until a later stage, then any calculation that uses referential joins will be incorrect.
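There is no REFERENTIAL keyword in plain SQL; conceptually, the optimizer chooses between two statement shapes depending on the requested fields. A sketch with hypothetical header/item tables:

```sql
-- Fields from both tables requested: executed like an inner join
SELECT H.ORDER_DATE, I.AMOUNT
FROM   ORDER_ITEM I
INNER  JOIN ORDER_HEADER H ON H.ORDER_ID = I.ORDER_ID;

-- Only left-table fields requested: the join is omitted entirely,
-- which is safe only if every item is guaranteed to have a header
SELECT I.AMOUNT
FROM   ORDER_ITEM I;
```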
Inner Join – Attribute View: Connecting Tables
An Inner Join returns rows only when there is at least one match on both sides of the join. Inner is the join type that is used even if none is explicitly chosen.
In the Attribute View, customers 3 and 4 are not returned because there is no corresponding entry (TX) in the state table.
Inner Join – Analytical View: Connecting Tables
Be aware that Inner Joins lose facts with fragmented dimensions: orders 4 and 77 are lost because there are no corresponding customer and state records.
Inner Join and Design Time Filters: Connecting Tables
- A design-time filter (AGE < 13) is applied to the left/central table.
- A design-time filter (STATE = MI) is applied to the right table.
- Both design-time filters are applied first, before the join is executed.
Left Outer Join – Attribute View: Connecting Tables
A Left Outer Join returns all rows from the left table, even if there are no matches in the right table. This join is popular in Analytical Views, whereby the Attribute View is joined to the fact table.
In the Attribute View, there are no matches for TX in the right table.
Left Outer Join – Analytical View: Connecting Tables
Customer TOM is not returned because there is no corresponding sales item record in the sales order table.
Left Outer Join and Design Time Filters: Connecting Tables
- A design-time filter (AGE < 13) is applied to the left/central table.
- A design-time filter (STATE = MI) is applied to the right table.
- The filters are applied to both tables, and afterwards the join is executed. Due to the left outer join, TOM is included in the result set even though he resides in TX.
Right Outer Join – Attribute View: Connecting Tables
A Right Outer Join returns all rows from the right table, even if there are no matches in the left table.
Alabama is included in the result set, although there is no match in the left table.
The right outer join is rarely used. Be aware that the number of records in the result set is determined by the number of records in the right table. This means that if there is a 1:n relationship, the number of records can be greater than the number of records in the left table.
Right Outer Join – Analytical View: Connecting Tables
A Right Outer Join can result in NULL measures for dimension entries without facts.
Full Outer Join: Connecting Tables
A Full Outer Join is neither left nor right: it is both. It includes all rows from both of the tables or result sets participating in the join. When no matching rows exist for rows on the left or right side of the join, you see NULL values.

C_ID  CNAME   AGE  STATE  |  STATE  SNAME
?     ?       ?    ?      |  AL     ALABAMA
1     WERNER  10   MI     |  MI     MICHIGAN
2     MARK    11   MI     |  MI     MICHIGAN
3     TOM     12   TX     |  ?      ?
4     BOB     13   TX     |  ?      ?
Be aware that Full Outer Joins make up facts for unused dimension entries (ALABAMA)
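In plain SQL, this result corresponds to the following query (table names are illustrative):

```sql
SELECT C.C_ID, C.CNAME, C.AGE, S.STATE, S.SNAME
FROM   CUSTOMER C
FULL   OUTER JOIN STATE S ON C.STATE = S.STATE;
-- TX customers appear with NULL state columns;
-- ALABAMA appears with NULL customer columns
```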
Text Join: Connecting Tables
- Text Joins are used to join a text table to a master data table.
- A Text Join acts as a Left Outer Join and can be used with SAP tables where the language column (SPRAS) is present.
- For each attribute it is possible to define a description mapping that is specific to the end user's language.
Join Types: Text Join for Multilingual Reporting
- A Text Join is used when a translation for a dimension is available.
- It is designed for SAP ERP tables (typically with the SPRAS field).
- The user's language is used as a filter at runtime to find the right translation for that attribute.
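In plain SQL, a Text Join corresponds to a join that filters the text table on its language column. A sketch in the style of the SAP ERP material tables (MARA and its text table MAKT with the SPRAS language field); the literal language key stands in for the user's logon language, which the modeler supplies at runtime:

```sql
SELECT M.MATNR, T.MAKTX
FROM   MARA M
LEFT   OUTER JOIN MAKT T
       ON  T.MATNR = M.MATNR
       AND T.SPRAS = 'E';   -- at runtime: the end user's language
```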
Referential Join: Connecting Tables
- Relies on referential integrity: each entry in the left table MUST have a corresponding entry in the right table.
- Optimized for performance: the join is only performed if at least one field from the right table is requested.
- Like an Inner Join when the join is executed: when fields from both tables are requested, an inner join is performed.
- Only available in the OLAP engine: when testing Attribute Views outside the context of an Analytical View, the Join Engine performs an Inner Join instead.
Referential Join – Attribute View: Connecting Tables
Note: the referential join is a feature available only in the OLAP engine. When testing Attribute Views outside the context of an Analytical View, the Join Engine performs an Inner Join. As a result, TOM and BOB are not returned.
Referential Join – Analytical View: Connecting Tables
- Customer 77, TOM, and BOB are not returned, since C_ID is a joined key field, resulting in an Inner Join. BOB has no corresponding Texas description, and TOM has no corresponding facts.
- TOM is not returned because there are no corresponding facts in the sales table.
- The amount includes all facts, including customer 77's and BOB's orders, even though master records do not exist. The reason is that when only non-key fields are selected from the left table, all joins to other tables are omitted.
Referential Join – Using MDX: Connecting Tables
- Behaves like an Inner Join.
- No referential integrity of the data in both tables; no join is processed.
Referential joins should be used with caution, since they assume that referential integrity is ensured. The only valid scenario for a Referential Join is that (a) it is 100% guaranteed that for each row in one table there is at least one join partner in the other table, (b) that holds true in both directions, and (c) at all times. If that is not the case, referential joins can produce incorrect calculations: if a delivery header is created but the items are not processed until a later stage, then any calculation that uses referential joins will be incorrect. A Referential Join cannot be used if a filter is set on a field in the right table.
Calculation View – Join vs. Union: Connecting Tables
Caution: do not JOIN Analytical Views; this could lead to performance problems. Instead, use a Union with constant values when working with multiple fact tables.
Unions: Connecting Tables
- Unions are used to combine the result sets of two or more SELECT statements.
- The Union operation is popular for combining plan and actual values in CO-PA.
- Note that Unions are not supported in modeled artifacts (Attribute Views or Analytical Views) and can only be realized in Calculation Views.
- Refrain from joining Analytical Views; rather, use Unions with constant values.
- Unions with constant values are supported within Graphical Calculation Views, and the UNION operator can accept 1..N input sources.
- In a script-based Calculation View, the comparable CE_UNION_ALL function can only accept two input sources at a time.
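In plain SQL, a Union with constant values for the plan/actual case looks like this (table and column names are illustrative):

```sql
SELECT 'ACTUAL' AS CATEGORY, CUSTOMER_ID, AMOUNT
FROM   ACTUAL_SALES
UNION ALL
SELECT 'PLAN' AS CATEGORY, CUSTOMER_ID, AMOUNT
FROM   PLAN_SALES;
```

In a script-based Calculation View, the same combination would be built pairwise, e.g. out = CE_UNION_ALL(:lt_actual, :lt_plan);, since CE_UNION_ALL accepts only two inputs at a time.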
Standard Union Connecting Tables
Union with Constant Values Connecting Tables
Summary: Connecting Tables
You should now be able to:
- Explain the differences between Inner Join, Left Outer Join, Right Outer Join, Full Outer Join, Text Join, and Referential Join when connecting tables
- Explain how to use a Standard Union and a Union with constant values
Unit 3: Advanced Modeling
- Creating Attribute Views
- Using Hierarchies
- Creating Restricted & Calculated Measures
- Using Filter Operations
- Using Variables
- Creating Calculation Views
- SAP HANA SQL - Introduction
- SQLScript and Procedures
- Using Currency Conversion
Objectives: Creating Attribute Views
At the end of this lesson, you will be able to:
- Explain how to create derived attribute views
- Explain how to create shared attribute views
- Explain how to create calculated attributes
- Explain how to create time characteristics based attribute views
- Explain how to create stand-alone text tables
- Explain how to use base table aliases
- Explain how to include hidden attributes in an attribute view
Overview: Creating Attribute Views
This module covers the following topics:
- Derived Attribute Views
- Shared Attribute Views
- Calculated Attributes
- Time Characteristics Based Attribute Views
- Stand-Alone Text Tables
- Using Base Table Aliases
- Hidden Attributes
Concept: Attribute Views
- Calculation Views are processed by the Calculation Engine.
- Analytic Views are processed by the OLAP Engine.
- Attribute Views are processed by the Join Engine.
Attribute views are used to give master data tables context. This context is provided by text tables, which give meaning to the master data. For example, if our fact table or analytic view only contains some numeric ID for each car dealer, then we can link in information about each dealer using an attribute view. We could then display the dealers' names and addresses instead of their IDs, thus providing the context for the master data table. Attribute views are used to select a subset of columns and rows from a data table. As it is of little use to sum up attributes from master data tables, there is no need to define measures or aggregates for Attribute Views. You can also use attribute views to join master data tables to each other, e.g. joining "Plant" to "Material".
Derived Attribute Views Creating Attribute Views
In some business cases, it is required to use the same attribute view more than once (for example, two logical joins of different join types defined on the same attribute view). In such cases, you can derive an attribute view from a base view; the derived view acts as a reference to the base attribute view.
- The derived attribute view is opened in read-only mode. The only editable field is its description.
- The derived attribute view acts as a reference to its base attribute view.
Shared Attribute Views Creating Attribute Views
Previously, we saw that attribute views are used to give master data tables context. In accordance with that statement, Attribute Views are reusable objects, so one Attribute View can be shared between several Analytical Views. For example, a Product Attribute View can be used in the Purchase Order Analytical View and at the same time in the Sales Order Analytical View.
Calculated Attributes Creating Attribute Views
Calculated attributes are similar in behavior to calculated measures. There are business needs where users would like to derive attributes from the available attributes and measures.
- The existing calculated-measures behavior is extended to support calculated attributes, where users are allowed to use non-measure attributes as part of the calculation.
- The calculation can be an arithmetic or character manipulation.
- Ideally, it is better to classify all calculated attributes in one place. Once created, they behave like any other attribute in the whole information modeling paradigm.
- Optionally, this could be extended to Attribute Views and Calculation Views.
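The effect of a calculated attribute corresponds to a derived column in SQL. A sketch of the character-manipulation case (hypothetical CUSTOMER table and expression):

```sql
-- Derive a display label from two existing attributes
SELECT C_ID,
       CNAME || ' (' || STATE || ')' AS CUSTOMER_LABEL
FROM   CUSTOMER;
```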
Time Characteristics Attribute View Creating Attribute Views
In business, different calendars are used for different purposes, with fiscal periods that need not align with the calendar and calendar periods. The analytics will support information needs in accordance with the calendars that are defined for reporting using fiscal calendars.
- The calendar attribute view is of a new calendar (time) type that is very similar to time attribute views.
- Users are able to generate the different calendars and periods defined in the ERP system using the ERP fiscal calendar tables.
Creating Stand Alone Text Tables Creating Attribute Views
In some cases, there is no combination of value table and text table that can be joined with a text join and language field mapping in an Attribute View. But we still need to model such tables in a way that supports dynamic language handling for the texts. Those tables shall be included in an Attribute View with a dynamic filter on the language field ($$language$$). In the Analytical View they are used as normal. Do not use the Text Join in the Analytical View: it is not supported and will be removed from the drop-down list box soon.
Using Multiple Base Tables Using Aliases: Creating Attribute Views
You will be prompted for an alias name when adding multiple instances of the same base table.
By adding the table again, just as when adding it the first time, a prompt box will ask you to provide an alias for the new instance of the base table.
Hidden Attributes Creating Attribute Views There may be instances where you want to include Attributes in your Analytical View (such as for granularity purposes) but for reporting you want these to be hidden. This can be achieved by setting the “Hidden” property of the attribute to “True”. In the context of an Analytical View this attribute will then not be visible.
Summary Creating Attribute Views
You should now be able to: Explain how to create derived attribute views, Explain how to create shared attribute views, Explain how to create calculated attributes, Explain how to create time characteristics based attribute views, Explain how to create stand alone text tables, Explain how to use base table aliases, Explain how to include hidden attributes in an attribute view
Unit 3: Advanced Modeling Creating Attribute Views Using Hierarchies Creating Restricted & Calculated Measures Using Filter Operations Using Variables Creating Calculation Views SAP HANA SQL - Introduction SQLScript and Procedures Using Currency Conversion
Objectives Using Hierarchies
At the end of this Lesson you will be able to: Explain how to implement leveled hierarchies, Explain how to leverage parent / child hierarchies, Explain how to create attribute based hierarchies.
Overview Using Hierarchies
This module covers the following topics: Implement Leveled Hierarchies Leverage Parent Child Hierarchies Create Attribute Based Hierarchies
Concept Using Hierarchies For example, consider the TIME attribute view with YEAR, QUARTER, and MONTH attributes. You can use these YEAR, QUARTER, and MONTH attributes to define a hierarchy for the TIME attribute view. Note that hierarchies in HANA can only be used by reporting tools using MDX connectivity.
[Figure: TIME hierarchy — Year at the top level, Quarter 1–4 below, and the months Jan–Dec at the lowest level]
The definition of hierarchy objects is similar to the definition of conventional relational database views: in a CREATE COLUMN VIEW HanaDB-SQL statement, a source query is defined whose result is the basis of the hierarchy tree graph edges. A primary design goal of the hierarchy definition syntax has been flexibility and extensibility, in order to support various source data formats and transformations in the future. At present, the so-called recursive and leveled source transformation types are supported. The recursive source transformation (also known as parent-child or pred/succ hierarchy) expects hierarchy graph edges as input, defined as pairs of predecessor and successor node IDs. Obviously this source type is best suited to take a single flat table as input whose rows correspond to hierarchy nodes and contain a node ID and a parent node ID. Since any valid SQL query is accepted as source, this source transformation is flexible enough to support completely different source data as well. For example, the recursive source transformation is able to interpret a geographical hierarchy that is based on multiple independent tables representing hierarchy levels connected by foreign key relations, provided that the source data is suitably pre-transformed via SQL.
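Such a pre-transformation can be sketched as a UNION that flattens level tables into pred/succ edge pairs. The tables and columns below (countries, regions) are hypothetical and only illustrate the shape of the source query:

```sql
-- Flatten two hypothetical level tables into predecessor/successor
-- edges consumable by the recursive source transformation
SELECT NULL AS pred, country_id AS succ FROM countries
UNION ALL
SELECT country_id AS pred, region_id AS succ FROM regions;
```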
Using Hierarchies Using Hierarchies Let's have a look at the following hierarchy, a minimal example:
Hierarchies in HANA can only be used by reporting tools using MDX connectivity.
In order to represent this hierarchy as a HanaDB database object, we need a source object providing the hierarchy's node and edge data:

CREATE COLUMN TABLE h_mini_src ( pred CHAR(2), succ CHAR(2) PRIMARY KEY );
INSERT INTO h_mini_src VALUES ( null, 'A1' );
INSERT INTO h_mini_src VALUES ( 'A1', 'B1' );
INSERT INTO h_mini_src VALUES ( 'A1', 'B2' );
INSERT INTO h_mini_src VALUES ( 'B1', 'C1' );
INSERT INTO h_mini_src VALUES ( 'B1', 'C2' );
INSERT INTO h_mini_src VALUES ( 'B2', 'C3' );
INSERT INTO h_mini_src VALUES ( 'B2', 'C4' );
INSERT INTO h_mini_src VALUES ( 'C3', 'D1' );
INSERT INTO h_mini_src VALUES ( 'C3', 'D2' );
INSERT INTO h_mini_src VALUES ( 'C4', 'D3' );

Now we define a hierarchy view over the source data:

CREATE COLUMN VIEW h_mini TYPE HIERARCHY WITH PARAMETERS (
  'hierarchyDefinitionType' = 'select',
  'hierarchyDefinition' = '{ "sourceType":"recursive", "nodeType":"string", "runtimeObjectType":"blob", "sourceQuery":"SELECT pred, succ FROM h_mini_src" }'
);

Congratulations, your first hierarchy view is ready to answer your queries! Display all nodes and their attributes:

SELECT * FROM h_mini;

Display all nodes subordinate to node B2:

SELECT result_node FROM h_mini WITH PARAMETERS ( 'expression' = 'subtree("B2",1,99)' );
Implement Leveled Hierarchies Using Hierarchies Leveled hierarchies are rigid in nature: the root and the child nodes can be accessed only in the defined order (for example, organizational structures). This page describes step by step how to create a SAP HANA database hierarchy view by means of the SAP HANA Modeler.
Create a table providing some hierarchy source data:
• CREATE COLUMN TABLE t_hierarchy_source ( id INT PRIMARY KEY, level1 VARCHAR(32), level2 VARCHAR(32) );
• INSERT INTO t_hierarchy_source VALUES ( 1, 'a1', 'b1');
• INSERT INTO t_hierarchy_source VALUES ( 2, 'a1', 'b2');
• INSERT INTO t_hierarchy_source VALUES ( 3, 'a2', 'b3');
• INSERT INTO t_hierarchy_source VALUES ( 4, 'a2', 'b4');
Implement Leveled Hierarchies Using Hierarchies If necessary, create a package in the Modeler. Create an attribute view in the Modeler :
Implement Leveled Hierarchies Using Hierarchies Select the source table as basis for the view :
Implement Leveled Hierarchies Using Hierarchies Select the attributes that should be part of the source view :
Implement Leveled Hierarchies Using Hierarchies Right click on the Hierarchies folder, create a New Level Hierarchy:
Implement Leveled Hierarchies Using Hierarchies For a leveled hierarchy, add the attributes to the hierarchy in the correct level order from top to bottom, with the lowest granularity at the lowest level of the hierarchy:
Leverage Parent Child Hierarchies Using Hierarchies Parent-child hierarchies can be illustrated by, for example, an employee master (employee and manager). The hierarchy can be explored based on a selected parent, and there are cases where the child can itself be a parent. If you want to create a parent-child hierarchy, make sure the child/successor attribute's property "Principal Key" is set to True:
Leverage Parent Child Hierarchies Using Hierarchies Right click on the Hierarchies tab, and select a New Parent Child Hierarchy:
Leverage Parent Child Hierarchies Using Hierarchies For a parent-child hierarchy there is no need to define the child attribute, as you already did so when you specified the Principal Key of the table. What you do need to specify is the parent attribute of the child:
Hierarchy Active Property Using Hierarchies By default, MDX only shows key attributes. If “Hierarchy Active” = “false” for a non-key attribute, the attribute will not show up in the MDX reporting tool.
MDX per default only shows key fields. • This is governed by an output field property of the attribute view. • If “Hierarchy Active” = “false” for a non-key field, the field does not show up in Excel.
Hierarchy Active Property Using Hierarchies By enabling “Hierarchy Active” for non-key Attributes, you can make sure the Attribute can be used for reporting, even though it is not a key field.
Enabling display of non-key fields via MDX: • Set the output-field property “Hierarchy Active” to “true”, and all fields show up in the field list for an Excel Pivot Table.
Summary Using Hierarchies
You should now be able to: Explain how to implement leveled hierarchies, Explain how to leverage parent / child hierarchies, Explain how to create attribute based hierarchies.
Unit 3: Advanced Modeling Creating Attribute Views Using Hierarchies Creating Restricted & Calculated Measures Using Filter Operations Using Variables Creating Calculation Views SAP HANA SQL - Introduction SQLScript and Procedures Using Currency Conversion
Objectives Creating Restricted & Calculated Measures
At the end of this Lesson you will be able to: Understand the benefits of Restricted Measures Use Restricted Measures Understand when to use Calculated Measures Create Calculated Measures
Overview Creating Restricted & Calculated Measures
This module covers the following topics: Concepts for Restricted & Calculated Measures Using Restricted Measures Creating Calculated Measures
The benefits of Restricted Measures Creating Restricted & Calculated Measures What is a Restricted Measure?
As the name implies, it is a measure that does not give the complete picture: it is restricted to a subset of the original measure. The benefit of a Restricted Measure is that it expands the modeling options in a view, giving the modeler the possibility of creating objects that can easily be reported on or reused.
[Table: monthly Amount by Country (DE, US) for months 2010-01 through 2010-12.
Restricted by Country: DE 209.941,00 | US 205.838,00 | Difference 4.103,00.
Restricted by Months: DE Q1 53.837,00, Q2 54.705,00 | US Q1 45.506,00, Q2 41.612,00]
The restricted measure is one of the three types of measures available:
• Measures (the standard type)
• Calculated Measures (dealt with later in this module)
• Restricted Measures
The Restricted Measure is restricted based on one or more defined Attributes. These Attributes can be anything in the base table or view that the modeler wants to define in order to help reporting or further modeling.
Using Restricted Measures Creating Restricted & Calculated Measures Picture the example in the right table. You have a transactional table with cost data items, with each cost type split on a different line. If you want to find out the shipping cost you could create an Analytic View with Cost Type as an Attribute, and Amount as a Measure. You could then restrict your report by reporting on Cost Type, setting the Attribute filtered on Cost Type = “Shipping Cost”.
When describing how you could restrict your report by reporting on Cost Type = ”Shipping Cost”, this means that you could for example create a data provider in Business Objects with a query restriction where Cost Type is set to only display ”Shipping Cost”.
Using Restricted Measures Creating Restricted & Calculated Measures In order to utilize a Restricted Measure, you can instead create one which already limits the results to Shipping Costs only within the measure itself.
Continuing with the example of using Business Objects for reporting, when having access to this restricted measure, you could report on both the total cost amount and just the shipping costs in the same data provider or query.
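In plain SQL terms, a restricted measure behaves like a conditional aggregation reported alongside the unrestricted total. A sketch with hypothetical table and column names (not the statement the modeler generates):

```sql
-- Total cost and shipping-cost-only measure in one result set
SELECT month,
       SUM(amount) AS total_cost,
       SUM(CASE WHEN cost_type = 'Shipping Cost' THEN amount ELSE 0 END)
           AS shipping_cost
FROM costs
GROUP BY month;
```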
Using Restricted Measures Creating Restricted & Calculated Measures The Attribute you filter the Restricted Measure on does not have to be limited to one single Attribute. You can restrict by multiple attributes, depending on your reporting requirements. There are also multiple operators to choose from.
Using Restricted Measures Creating Restricted & Calculated Measures
Further, the Restricted Measure does not have to be a straight sum of the Measure that it is based on. The Aggregation Types that you have available are:
SUM
MIN
MAX
When to use Calculated Measures Creating Restricted & Calculated Measures
In a data model, the Measures available in the base data will sometimes not give your users sufficient information for reporting if you just provide the base Measures in your views.
SAP HANA has a type of Measure called Calculated Measures, where the modeler is able to include calculations already within the view in order to help reporting or further modeling, for example:

V = (4/3) π r³
A Calculated Measure does not have to be a complicated formula like the one shown in the example illustration; it can also be a simple calculation.
When to use Calculated Measures Creating Restricted & Calculated Measures
When you include calculations in your views using Calculated Measures, you take advantage of the speed of SAP HANA by letting the database engine perform the calculations, instead of doing them in your end-client reporting tool. Having calculations ready-defined in views can also simplify reporting by unifying calculations: they are calculated in the same way for all users, instead of users or developers creating their own versions.
[Diagram: Client Application on top of the Calculation Engine (do calculation after aggregation), OLAP Engine, and Join Engine, which sit on the Column Store and Row Store. Avoid calculation before aggregation on line-item level.]
Creating Calculated Measures Creating Restricted & Calculated Measures
A Calculated Measure is defined in the view and when you create one you can use the calculations, mathematical functions etc. available in the editor.
In the example in this slide the amount including VAT is calculated using the base measure multiplied by 1.2
Creating Calculated Measures Creating Restricted & Calculated Measures
For certain measures it is not possible to perform the calculations once the measures are already aggregated. An aggregate of, for example, Price does not mean anything.
In the sum line highlighted in red, the units have been aggregated as well as the price. Multiplying these two aggregates will not give a meaningful result.
Creating Calculated Measures Creating Restricted & Calculated Measures For these types of Measures you can predefine the Calculated Measure to calculate each individual item before aggregating. This is done by selecting the option “Calculate Before Aggregation”.
Ticking the box ”Calculate Before Aggregation” solves the problem highlighted in the previous slide.
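The difference can be sketched in plain SQL (table and column names hypothetical):

```sql
-- Calculate before aggregation: multiply on line-item level, then sum
SELECT SUM(price * units) AS sales FROM order_items;

-- Calculate after aggregation: summing price and units first and then
-- multiplying the two aggregates yields a meaningless figure
SELECT SUM(price) * SUM(units) AS wrong_sales FROM order_items;
```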
Creating Calculated Measures Creating Restricted & Calculated Measures
This way you can be sure that you end up with a correct sum, as the calculations are performed on the correct granular level.
Here you can see that the correct total sales measure (98 750) differs greatly from the incorrect one in the earlier example (1 135 225).
Summary Creating Restricted & Calculated Measures
You should now be able to: Understand the benefits of Restricted Measures Use Restricted Measures Understand when to use Calculated Measures Create Calculated Measures
Unit 3: Advanced Modeling Creating Attribute Views Using Hierarchies Creating Restricted & Calculated Measures Using Filter Operations Using Variables Creating Calculation Views SAP HANA SQL - Introduction SQLScript and Procedures Using Currency Conversion
Objectives Using Filter Operations
At the end of this Lesson you will be able to: Explain how to compare constraint filter and WHERE clause, Explain how to create client dependent views, Explain how to model domain fix values.
Overview Using Filter Operations
This module covers the following topics: Compare constraint filter and WHERE clause, Create client dependent views, Model domain fix values.
Filter Operations Using Filter Operations
Reduce data transfer between the engines by using filter operations such as:
• Using a constraint or WHERE clause
• Creating client dependent views
• Using model domain fix values
[Diagram: Client Application on top of the Calculation Engine, OLAP Engine, and Join Engine, which sit on the Column Store and Row Store]
A primary goal during modeling is to minimize data transfers. This holds true both internally, between the SAP HANA database views, and also between SAP HANA and the end-user application. For example, the end user will never need to see 1 million rows of data; they would never be able to understand so much information or consume it in a meaningful way. This means that data, wherever possible, should be aggregated and filtered to a manageable size before it leaves the data layer. When deciding upon the records that should be reported on, a best-practice approach is to think at a “set level”, not a “record level”. A set of data can be aggregated by a region, a date, or some other group in order to minimize the amount of data passed between views.
©SAP AG
HA300
3-49
Compare Constraint Filter & WHERE Clause Using Filter Operations
Constraint filter:
• Is defined at design time on a table,
• The filter applies to the table before the query starts to execute,
• Normally faster than a WHERE clause, as the result set is reduced before proceeding with the query execution plan, e.g. constraints are applied before a table join is executed.
vs.
WHERE clause:
• Is defined at runtime in the SQL query,
• The filter applies to the result set of a query.
Both operations, the WHERE clause and the constraint filter, are used to reduce the result set of data. But from a semantic point of view, these two operations are not the same at all: • WHERE clause: – Is defined at runtime in the SQL query, – The filter applies to the result set of a query. • Constraint filter: – Is defined at design time on a table, – The filter applies to the table before the query starts to execute, – Normally faster than a WHERE clause, as the result set is reduced before proceeding with the query execution plan, e.g. constraints are applied before a table join is executed.
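Conceptually, the difference can be sketched in SQL: a constraint filter behaves as if the table were reduced before the join, whereas the WHERE clause filters the joined result (table and column names hypothetical; this is an illustration of the execution order, not modeler-generated SQL):

```sql
-- Constraint-filter style: the fact table is reduced before the join
SELECT c.region, SUM(s.amount)
FROM (SELECT * FROM sales WHERE year = 2012) s
JOIN customers c ON c.customer_id = s.customer_id
GROUP BY c.region;

-- WHERE-clause style: join first, then filter the result set
SELECT c.region, SUM(s.amount)
FROM sales s
JOIN customers c ON c.customer_id = s.customer_id
WHERE s.year = 2012
GROUP BY c.region;
```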
Create Client Dependent Views Using Filter Operations Define « Default Client » as « dynamic » in the properties of your views (Attribute Views and Analytic Views). Define the value of the « Session Client » in the definition of the user ID. The variable constraint is substituted at runtime by the client that is set for the current user running a query on the model.
Almost all tables in the ECC ER model are client dependent. Only a small number of tables do not have the client as the first primary key field; those are mainly used for client independent configuration (e.g. printers). The data of an Attribute and/or Analytic View is automatically filtered to one client (the client maintained in the user properties) at runtime, if the variable $$client$$ (case sensitive) is set in the properties (Default Client) of the model definition. Do not set a filter on field level in the modeler. (Newer versions of the modeler already have an entry called "dynamic" in the drop-down list box.) For dynamic selection of table data in Calculation Views, you can use new built-in functions: • SESSION_CONTEXT('CLIENT') - delivers the client based on the current user profile, • SESSION_CONTEXT('LOCALE') - delivers the language in POSIX format (set by the 'locale' parameter of the JDBC/ODBC/ODBO connection, e.g. en_en, de_de, de_at), • SESSION_CONTEXT('LOCALE_SAP') - delivers the language in SAP internal format (like domain SPRAS). Example: SELECT field1, field2 FROM my_table WHERE mandt = SESSION_CONTEXT('CLIENT') AND spras = SESSION_CONTEXT('LOCALE_SAP')
Model Fix Values Using Filter Operations
Domain fix values are used for several fields. They are basically simple value lists with language dependent texts. The actual values are stored in database tables (DD07L, DD07T). During the initialization of the replication, a Python script is executed that creates two tables for each used domain. Please consider that currently only those domain values are taken that are active (field AS4LOCAL = 'A' in DD07T). The naming convention for these tables is DOM_<domain> for the domain values and DOM_<domain>T for the language dependent texts (Example: DOM_FKREL and DOM_FKRELT, DOM_BFART and DOM_BFARTT). They are located in the same schema as the replicated tables. You can build Attribute Views for those tables to be used in analytic or calculation views. Please include both related tables and create a text join with language field DDLANGUAGE. Include field DOMVALUE_L as key field in the output structure and add field DDTEXT (description mapping for DOMVALUE_L to field DDTEXT of the joined text table does not work with all clients). The recommendation is to change the attribute name and description to the name of the domain.
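The described text join can be sketched in SQL using the DOM_FKREL / DOM_FKRELT example tables and the fields named above, with the language restricted to the session language (a sketch, not the statement the modeler generates):

```sql
-- Join domain values to their language dependent texts
SELECT v.domvalue_l, t.ddtext
FROM dom_fkrel v
LEFT JOIN dom_fkrelt t
  ON t.domvalue_l = v.domvalue_l
 AND t.ddlanguage = SESSION_CONTEXT('LOCALE_SAP');
```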
Model Fix Values Using Filter Operations
Domain fix values are created for ECC and for CRM separately. This is currently done by a set of SQL statements that need to be executed via SQL editor.
Summary Using Filter Operations
You should now be able to: Explain how to compare constraint filter and WHERE clause, Explain how to create client dependent views, Explain how to model domain fix values.
Unit 3: Advanced Modeling Creating Attribute Views Using Hierarchies Creating Restricted & Calculated Measures Using Filter Operations Using Variables Creating Calculation Views SAP HANA SQL - Introduction SQLScript and Procedures Using Currency Conversion
Objectives Creating Variables
At the end of this Lesson you will be able to: Explain the difference between Variables and Input Parameters Create Variables Create Input Parameters
Overview Creating Variables
This module covers the following topics: Understanding the difference between Variables and Input Parameters How to create Variables and apply them as Filters How to make use of different types of Input Parameters
Variables and Input Parameters Concepts Creating Variables In an Attribute or Calculation view you can create two types of objects to get data from reporting users:
Variables
These are bound to attributes and are used for filtering. As such, they can only contain the values available in the Attribute they relate to.
Input Parameters
Can contain any value the reporting user wants to enter. Therefore, a data type for the Input Parameter must be specified.
Variables and Input Parameters in SAP HANA can be described as a type of input box. They are a way of asking the reporting user for input on how to provide, present, or restrict the data that will be displayed. For course participants familiar with Business Objects terminology, the Variable is referred to as a Prompt in BO.
Variables and Input Parameters Concepts Creating Variables
• You use variables to filter data at runtime. You assign values to these variables by entering the value manually, or by selecting it from the drop-down list.
• Using variables means that you do not need to decide the restriction on the value of attributes at design time.
• You can apply variables in the analytic and calculation views.
• If a calculation view is created using an analytic view with variables, those variables are also available in the calculation view but cannot be edited.
[Diagram: Client Application on top of the Calculation Engine, OLAP Engine, and Join Engine, which sit on the Column Store and Row Store]
Variables Types Creating Variables The following types of Variables are supported:
• Single Value: Use this to apply a filter to a single value.
• Interval: Use this where you want the user to specify a set start and end to a selected interval.
• Range: Use this when you want the end user to be able to use operators such as “Greater Than” or “Less Than”.
Range Variable Example:
Creating Variables for Filtering Creating Variables A variable restricts the results in the view for the selected Attribute. You select the Attribute in the view that you want to filter on, and you also define the: Selection Type: whether selections should be based on intervals, ranges, or single values. Multiple Entries: whether multiple values of the selection type should be allowed. You can also define whether the Variable is mandatory or whether it should have a default value.
Note: For mandatory variables (used in calculated attributes and calculated measures), it is mandatory to provide the value at runtime. For non-mandatory variables, if the value is not specified at runtime, all data is shown without a filter when the data is previewed. You can see whether a variable is mandatory or not from the properties of the variable in the properties pane. In calculated attributes and calculated measures, only single value variables can be used.
Creating Variables for Filtering Creating Variables
Just creating a Variable will however not enable the users to filter the result set. After a variable has been created it also needs to be assigned and applied to an Attribute as a filter.
Creating Variables for Filtering Creating Variables By right clicking on the appropriate Attribute and applying a filter, you can then select a Variable as an operator for the filter, and then specify the created Variable. The correct Data Type will then automatically be assigned to the filter.
Please note there might be some problems with the SAP HANA Studio user interface when you select the attribute that the Variable is referring to. Due to this problem it might not be entirely clear which attribute you have selected.
Creating Variables for Filtering Creating Variables
When displaying the data of a view with a Variable or Input Parameter included, the Value Help Dialog can help you or the reporting user find the selections you are looking for.
In the example in this slide, note that more than one selection has been chosen. This is possible when the ”Multiple Entries” checkbox has been ticked. ”From” and ”To” are displayed in the ”Variable Values” dialog even though you may not have defined a range.
Input Parameters Concept Creating Variables
You might not want a variable just to restrict the data of a view. You might want to take input from the user and process it, returning dynamic data based on the user selection. Input Parameters make this possible.
Input Parameter Types Creating Variables The following types of Input Parameters are supported:
• Currency: Use this during currency conversion where the end user should specify a source or target currency.
• Date: Use this to retrieve a date from the end user using a calendar type input box.
• Static List: Use this when the end user should have a set list of values to choose from.
• Attribute Value: When an Input Parameter has this type, it serves the same purpose as a normal Variable.
• (none): If none of the above applies, you do not have to specify an Input Parameter type; the type can be left blank.
Creating Input Parameters Creating Variables If we want the end user to decide whether Gross or Net amount should be shown in a View, what we first need to do is to create an Input Parameter to be used in a calculation. The Input Parameter can be of any suitable type, for example a StaticList type. In this example to the left the user can choose either ”Gross” or ”Net” as the selection.
A variable to be used within a formula does not have to be a Static List type; it could also be, for example, a value to be used in multiplication or other calculations.
Creating Input Parameters Creating Variables We also need somewhere to call the Input Parameter from. In for example a Calculated Measure, we can reference the result of the user selected Input Parameter. This is done by calling it within double dollar signs, see example:
if('$$INP_GROSS_OR_NET$$'='Gross',"GROSS_AMOUNT","NET_AMOUNT")
This example Calculated Measure will display the GROSS_AMOUNT measure if the user selects ”Gross”. Any other selection will result in NET_AMOUNT.
Creating Input Parameters Creating Variables The Input Parameter type ”Date” can be useful when you for example want an input date from the reporting user in order to create further calculations on. Whatever input is selected in the variable can be used as a basis for extended calculations.
Note that Date is set as both the [Variable] Type and the Data Type.
Creating Input Parameters Creating Variables
When using the type ”Date” you are making it easier for the end user to select a date by utilizing a calendar dialog for selecting the appropriate date.
In the example above the user is asked for a Single Value. Dates can also be selected as ranges.
Summary Creating Variables
You should now be able to: Explain the difference between Variables and Input Parameters Create Variables Create Input Parameters
Unit 3: Advanced Modeling Creating Attribute Views Using Hierarchies Creating Restricted & Calculated Measures Using Filter Operations Using Variables Creating Calculation Views SAP HANA SQL - Introduction SQLScript and Procedures Using Currency Conversion
Objectives Creating Calculation Views
At the end of this Lesson you will be able to: Explain how to create calculated attributes in a calculation view Explain how to create “simple” calculation views (non aggregate) Explain how to use the aggregation node Explain how to define unmapped columns in a union node
Overview Creating Calculation Views
This module covers the following topics: Calculated Attributes in Calculation Views “Simple” Calculation Views Aggregation Node Unmapped Columns in a Union Node
Concept Calculation Views Creating Calculation Views
Calculation Views
Calculation Engine
Analytic Views
OLAP Engine
Join Engine
Attribute Views
Calculation views are defined as either graphical views or scripted views, depending on how they are created. They can be used in the same way as analytic views; however, in contrast to analytic views, it is possible to join several fact tables in a calculation view. It is also possible to include more advanced calculations in a calculation view. A calculation view is executed by the Calculation Engine. Note, however, that it may not be as fast as an analytic view.
Calculated Attributes in Calculation Views Creating Calculation Views Apart from holding standard Attributes, a Calculation View can also contain Calculated Attributes. This means that, in addition to the columns generated by the Calculation View, you can create new Calculated Attributes specific to the Calculation View.
Simple Calculation Views Creating Calculation Views A standard Calculation View will group the measures by dimension, thereby producing an aggregated result. If this behavior is not wanted, it is possible to create a Simple Calculation View by setting the "Multidimensional Reporting" property of the view to "disabled". This can be useful when it is necessary to create a list-based view, in essence a complex Attribute View. For this functionality to be used, however, the Calculation View can only contain attributes, no measures.
Simple Calculation Views Creating Calculation Views It is necessary to understand the impact of using a Simple Calculation View when dealing with values.
Source Data:
Country   Value
Germany   3
Germany   17
Italy     5
Italy     5

Calculation View (aggregated):
Country   Value
Germany   20
Italy     10

A Simple Calculation View is not meant to aggregate measures, so a careful approach has to be taken when including values.

Simple Calculation View (distinct rows, no aggregation):
Country   Value
Germany   3
Germany   17
Italy     5
It behaves like an Attribute View in that it also:
Provides a list of the distinct values without aggregation
Does not allow the usage of measures
Aggregation Node Creating Calculation Views In order to have further control of how the aggregation is done, it is beneficial to use the aggregation node in Graphical Calculation Views. When using the aggregation node you can specify which columns should be aggregated and also the aggregation type (sum, min or max). You can also add Calculated Columns to the node. These calculations will be performed after aggregation.
Unmapped Columns in a Union Node Creating Calculation Views A union stacks several result sets on top of each other. This can be done both in Graphical Calculation Views and SQLScript Calculation Views. In SQLScript this is done using CE functions; in Graphical Calculation Views it is done using the Union Node.
In order for the columns from the different sources to go into the correct target, a mapping will need to be provided. This can be done via a drag and drop interface.
Unmapped Columns in a Union Node Creating Calculation Views There can be instances when a union needs to be performed where the sources have a different number of columns. You can then set a Constant Value for sources that do not provide a given target column. The Constant Value is set by right-clicking the target column and selecting "Manage Mappings".
Unit 3: Advanced Modeling Creating Attribute Views Using Hierarchies Creating Restricted & Calculated Measures Using Filter Operations Using Variables Creating Calculation Views SAP HANA SQL - Introduction SQLScript and Procedures Using Currency Conversion
Objectives SAP HANA SQL
Explain the language elements used in SAP HANA SQL statements.
Overview SAP HANA SQL
Overview SQL Language Elements Identifiers SQL Data Types Predicates and Operators Functions and Expressions SQL Statements - Examples
SQL - Definition SAP HANA SQL
Structured Query Language Standardized language for communication with a relational database. Used to retrieve, store or manipulate information in the database.
Class   Description                  Example
DDL     Data Definition Language     CREATE, ALTER, DROP TABLE
DML     Data Manipulation Language   SELECT, DELETE, INSERT, UPDATE
DCL     Data Control Language        GRANT, REVOKE
SQL stands for Structured Query Language. It is a standardized language for communicating with a relational database. It is used to retrieve, store or manipulate information in the database. SQL statements can perform the following tasks: • Schema definition and manipulation • Data manipulation • System management • Session management • Transaction management
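The three statement classes can be illustrated with a short sketch; the table demo_customers and the user some_user are invented for this example:

```sql
-- DDL: define and change schema objects
CREATE COLUMN TABLE demo_customers (id INTEGER PRIMARY KEY, name NVARCHAR(60));
ALTER TABLE demo_customers ADD (city NVARCHAR(40));

-- DML: work with the data inside the table
INSERT INTO demo_customers VALUES (1, 'ACME', 'Berlin');
UPDATE demo_customers SET city = 'Hamburg' WHERE id = 1;
SELECT * FROM demo_customers;
DELETE FROM demo_customers WHERE id = 1;

-- DCL: control access to the table
GRANT SELECT ON demo_customers TO some_user;
REVOKE SELECT ON demo_customers FROM some_user;

-- DDL again: remove the object
DROP TABLE demo_customers;
```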
SQL language elements SAP HANA SQL
Identifiers: Used to represent names used in SQL statements
Data types: Specify the characteristics of a data value
Expressions: Clauses that can be evaluated to return values
Operators: Used for calculation, value comparison, or to assign values
Predicates: Specified by combining one or more expressions or logical operators; return one of the logical or truth values TRUE, FALSE, or UNKNOWN
Functions: Used in expressions to return information from the database
Comment and Code page SAP HANA SQL
Comments: double hyphens (--) for single-line comments, or /* ... */ for multi-line comments.
Codepage: The SAP HANA database supports Unicode to allow use of all languages in the Unicode Standard and the 7-bit ASCII code page without restriction.
Comments are delimited in SQL statements as follows:
• Double hyphens "--". Everything after the double hyphen until the end of the line is considered by the SQL parser to be a comment.
• "/*" and "*/". This style of commenting is used to place comments on multiple lines.
The SAP HANA database supports Unicode to allow use of all languages in the Unicode Standard and the 7-bit ASCII code page without restriction.
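Both comment styles in one short sketch (DUMMY is the standard one-row system table):

```sql
SELECT 1 FROM dummy;  -- everything after the double hyphen is a comment
/* this style of comment
   can span several lines */
SELECT 2 FROM dummy;
```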
Identifiers SAP HANA SQL
Identifiers are used to represent names used in SQL statements, including table name, view name, synonym name, column name, index name, function name, procedure name, user name, role name, and so on.
There are two kinds of identifiers: undelimited identifiers and delimited identifiers.
• Undelimited table and column names must start with a letter and cannot contain any symbols other than digits or an underscore "_". NOTE: Undelimited identifiers are implicitly treated as upper case. Quoting identifiers respects capitalization and allows white space and other characters that are normally not allowed in SQL identifiers.
• Delimited identifiers are enclosed in the delimiter, double quotes; the identifier can then contain any character, including special characters. For example, "AB$%CD" is a valid identifier name.
Single Quotation Mark: Single quotation marks are used to delimit string literals; a single quotation mark itself can be represented using two single quotation marks.
Double Quotation Mark: Double quotation marks are used to delimit identifiers; a double quotation mark itself can be represented using two double quotation marks.
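A sketch of these quoting rules (the table names are invented for illustration):

```sql
-- Undelimited identifiers are implicitly treated as upper case:
CREATE COLUMN TABLE mytab (val INTEGER);   -- stored as MYTAB / VAL
SELECT val FROM MyTab;                     -- works: both resolve to MYTAB.VAL

-- Delimited identifiers preserve case and allow special characters:
CREATE COLUMN TABLE "My Tab" ("AB$%CD" INTEGER);
SELECT "AB$%CD" FROM "My Tab";             -- must be quoted exactly as defined

-- Single quotes delimit string literals; '' escapes a quote inside:
SELECT 'It''s a literal' FROM dummy;
```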
SQL Data types SAP HANA SQL

Classification           Data Type
Datetime types           DATE, TIME, SECONDDATE, TIMESTAMP
Numeric types            TINYINT, SMALLINT, INTEGER, BIGINT, SMALLDECIMAL, DECIMAL, REAL, DOUBLE, FLOAT
Character string types   VARCHAR, NVARCHAR, ALPHANUM, SHORTTEXT
Binary types             VARBINARY
Large Object types       BLOB, CLOB, NCLOB, TEXT
Data type specifies the characteristics of a data value. A special value of NULL is included in every data type to indicate the absence of a value.
Predicates SAP HANA SQL

Comparison Predicates: <expression> { = | != | <> | > | < | >= | <= } [ANY | SOME | ALL] { <expression_list> | <subquery> }
Range Predicate: <expression> [NOT] BETWEEN <expression> AND <expression>
In Predicate: <expression> [NOT] IN { <expression_list> | <subquery> }
Exists Predicate: [NOT] EXISTS ( <subquery> )
LIKE Predicate: <expression> [NOT] LIKE <expression> [ESCAPE <expression>]
NULL Predicate: <expression> IS [NOT] NULL
Predicates A predicate is specified by combining one or more expressions or logical operators and returns one of the following logical or truth values: TRUE, FALSE, or UNKNOWN. LIKE Predicate The LIKE predicate is used for string comparisons. A value, expression1, is tested for a pattern, expression2. Wildcard characters ( % ) and ( _ ) may be used in the comparison string expression2. LIKE returns true if the pattern specified by expression2 is found. The percentage sign (%) matches zero or more characters and underscore (_) matches exactly one character. To match a percent sign or underscore in the LIKE predicate, an escape character must be used. Using the optional argument, ESCAPE expression3, you can specify the escape character that will be used so that the underscore (_) or percentage sign (%) can be matched.
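A sketch of the LIKE predicate; the table products and its columns are hypothetical:

```sql
SELECT * FROM products WHERE name  LIKE 'SAP%';   -- starts with SAP (% = zero or more chars)
SELECT * FROM products WHERE code  LIKE 'A_1';    -- A, then exactly one char, then 1
-- to match a literal % or _, declare an escape character:
SELECT * FROM products WHERE label LIKE '100\%' ESCAPE '\';
```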
Operators SAP HANA SQL

Unary operators (operator operand): unary plus (+), unary negation (-), logical negation (NOT)
Binary operators (operand1 operator operand2): multiplicative (*, /), additive (+, -), comparison operators (=, !=, <, >, <=, >=), logical operators (AND, OR)

Arithmetic Operators: -<expression> for negation; <expression> operator <expression> for addition, subtraction, multiplication, and division
String Operator: <expression> || <expression> (string concatenation)
Comparison Operators: = equal to, > greater than, < less than, >= greater than or equal to, <= less than or equal to, != or <> not equal to
Logical Operators: AND, OR, NOT. Search conditions can be combined using the AND or OR operators; you can also negate them using the NOT operator.
Set Operators: UNION, UNION ALL, INTERSECT, EXCEPT. Set operators perform operations on the results of two or more queries.
Operators You can perform operations in expressions by using operators. Operators can be used for calculation, value comparison or to assign values.
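A sketch combining the operator groups; it assumes a BOOKS table with TITLE, PRICE, and CRCY columns (the same columns used later in the CE function examples) and a hypothetical MAGAZINES table:

```sql
SELECT -price                        AS negated,   -- unary negation
       price * 1.19                  AS gross,     -- multiplicative operator
       title || ' (' || crcy || ')'  AS labeled    -- string concatenation
FROM books
WHERE price >= 10 AND NOT crcy != 'EUR';           -- comparison and logical operators

-- set operators combine the results of two queries:
SELECT title FROM books
UNION
SELECT title FROM magazines;
```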
Functions SAP HANA SQL

Classification                   Examples
Data type conversion functions   CAST, TO_ALPHANUM, TO_BIGINT, ...
Datetime functions               ADD_DAYS, ADD_MONTHS, ADD_YEARS, DAYS_BETWEEN, DAYNAME, CURRENT_DATE, ...
Number functions                 ABS, ACOS, ASIN, ATAN, COS, ...
String functions                 CONCAT, LEFT, LENGTH, TRIM, ...
Miscellaneous functions          IFNULL, CURRENT_SCHEMA, ...
Functions Functions are used to return information from the database. They are allowed anywhere an expression is allowed. Functions use the same syntax conventions used by SQL statements. Data type conversion functions Data type conversion functions are used to convert arguments from one data type to another, or to test whether they can be converted. Number Functions Number functions take numeric values or strings with numeric characters as inputs and returns numeric values. When strings with numeric characters are given as inputs, implicit conversion from string to number is performed automatically before computing the result values.
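One function from each class in a single sketch (all functions shown appear in the table above; DUMMY is the one-row system table):

```sql
SELECT TO_BIGINT('42')                  AS converted,  -- data type conversion
       ADD_DAYS(CURRENT_DATE, 30)       AS due_date,   -- datetime function
       ABS(-5)                          AS absolute,   -- number function
       CONCAT(TRIM('  SAP '), ' HANA')  AS combined,   -- string functions
       IFNULL(NULL, 'fallback')         AS coalesced   -- miscellaneous
FROM dummy;
```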
Expressions SAP HANA SQL

Case Expressions: IF ... THEN ... ELSE logic without using procedures in SQL statements.
Function Expressions: SQL built-in functions used as an expression.
Aggregate Expressions: Use an aggregate function to calculate a single value from the values of multiple rows in a column.
Subqueries in expressions: A SELECT statement enclosed in parentheses.
Expressions An expression is a clause that can be evaluated to return values. Case Expressions A case expression allows the user to use IF ... THEN ... ELSE logic without using procedures in SQL statements. Function Expressions SQL built-in functions can be used as an expression. Aggregate Expressions An aggregate expression uses an aggregate function to calculate a single value from the values of multiple rows in a column. Subqueries in expressions A subquery is a SELECT statement enclosed in parentheses. The SELECT statement can contain one and only one select list item. When used as an expression, a scalar subquery is allowed to return only zero or one value.
Expressions: Examples SAP HANA SQL
Case expression You can use IF ... THEN ... ELSE logic without using procedures in SQL statements.
<case_expression> ::= CASE <expression> WHEN <expression> THEN <expression> [, ...] [ELSE <expression>] { END | END CASE }

Aggregate Expressions
<aggregate_expression> ::= COUNT(*) | <agg_name> ( [ALL | DISTINCT] <expression> )
<agg_name> ::= COUNT | MIN | MAX | SUM | AVG | STDDEV | VAR
Case Expression: If the expression following the CASE statement is equal to the expression following the WHEN statement, the expression following the THEN statement is returned. Otherwise the expression following the ELSE statement is returned, if it exists.

Aggregate Expression:
COUNT: Counts the number of rows returned by a query. COUNT(*) returns the number of rows, regardless of the value of those rows and including duplicate values. COUNT(<expression>) returns the number of non-NULL values for that expression returned by the query.
MIN: Returns the minimum value of the expression.
MAX: Returns the maximum value of the expression.
SUM: Returns the sum of the expression.
AVG: Returns the arithmetic mean of the expression.
STDDEV: Returns the standard deviation of the given expression, as the square root of the VARIANCE function.
VAR: Returns the variance of the expression, as the square of the standard deviation.
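A sketch that combines a case expression with aggregate expressions, assuming a hypothetical BOOKS table with PRICE and CRCY columns:

```sql
SELECT crcy,
       COUNT(*)    AS row_count,
       SUM(price)  AS total,
       AVG(price)  AS mean_price,
       CASE crcy WHEN 'EUR' THEN 'Euro'
                 WHEN 'USD' THEN 'US Dollar'
                 ELSE 'Other'
       END         AS crcy_text
FROM books
GROUP BY crcy;
```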
SQL statement: Create Table SAP HANA SQL

CREATE [<table_type>] TABLE <table_name> <table_contents_source>;

table_type ::= COLUMN | ROW | HISTORY COLUMN | GLOBAL TEMPORARY | LOCAL TEMPORARY
table_contents_source ::= ( <table_element>, ... ) | [ ( column_name, ... ) ] { <like_table_clause> | <as_table_subquery> } [ WITH [NO] DATA ]
table_element ::= <column_definition> [<column_constraint>] | <table_constraint>
like_table_clause ::= LIKE <like_table_name>
as_table_subquery ::= AS ( <subquery> )
column_definition ::= column_name data_type [DEFAULT default_value] [GENERATED ALWAYS AS <expression>]
column_constraint ::= NULL | NOT NULL | UNIQUE [BTREE | CPBTREE] | PRIMARY KEY [BTREE | CPBTREE]
table_constraint ::= { UNIQUE [BTREE | CPBTREE] | PRIMARY KEY [BTREE | CPBTREE] } ( column_name, ... )
Create Table - Table Types SAP HANA SQL
COLUMN: COLUMN-based storage should be used if the majority of access is through a large number of tuples, but with only a few selected attributes.
ROW: ROW-based storage is preferable if the majority of access involves selecting a few records, with all attributes selected.
HISTORY COLUMN: Creates a table with a particular transaction session type called HISTORY. Tables with session type HISTORY support time travel; the execution of queries against historic states of the database is possible.
GLOBAL TEMPORARY
Table definition is globally available while data is visible only to the current session. The table is truncated at the end of the session.
LOCAL TEMPORARY
The table definition and data is visible only to the current session. The table is truncated at the end of the session.
Create Table - Example SAP HANA SQL
[Screenshot: a CREATE TABLE statement annotated with its parts - Table Type, Table Elements, Column Definition, Column Constraint, and Data Type]
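A minimal sketch of such a statement (the table and column names are invented for illustration):

```sql
CREATE COLUMN TABLE demo_books (       -- table type: COLUMN
    id    INTEGER PRIMARY KEY,         -- column definition with column constraint
    title NVARCHAR(100) NOT NULL,      -- data type with NOT NULL constraint
    price DECIMAL(10,2) DEFAULT 0.00   -- default value
);
```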
SQL statement: Insert SAP HANA SQL
INSERT INTO <table_name> [ ( column_name, ... ) ] { VALUES ( <expression>, ... ) | <subquery> };
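Both forms of the statement in a sketch (the tables demo_books and old_books are hypothetical):

```sql
-- VALUES form:
INSERT INTO demo_books (id, title, price)
VALUES (1, 'In-Memory Data Management', 49.95);

-- subquery form:
INSERT INTO demo_books
SELECT id, title, price FROM old_books WHERE price > 0;
```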
SQL statement: Select SAP HANA SQL
SELECT [TOP <number>] [ALL | DISTINCT] <select_list>
<from_clause> [<where_clause>] [<group_by_clause>] [<having_clause>]
[<order_by_clause>] [<limit_clause>] [<for_update_clause> | <time_travel_clause>];
<for_update_clause> ::= FOR UPDATE [OF <column_name>, ...] The FOR UPDATE keyword locks the selected rows so that other users cannot lock or update the rows until the end of this transaction. <time_travel_clause> ::= AS OF [COMMIT ID | TIMESTAMP] { <commit_id> | <timestamp> } Can be used for statement-level time travel, to go back to the snapshot specified by commit_id or timestamp.
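A sketch that uses several of the optional clauses, assuming a hypothetical BOOKS table with PRICE and CRCY columns:

```sql
SELECT TOP 10 crcy, SUM(price) AS total
FROM books
WHERE price > 0
GROUP BY crcy
HAVING SUM(price) > 100
ORDER BY total DESC;
```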
Summary SAP HANA SQL
You should now be able to explain the language elements used in SAP HANA SQL statements.
Unit 3: Advanced Modeling Creating Attribute Views Using Hierarchies Creating Restricted & Calculated Measures Using Filter Operations Using Variables Creating Calculation Views SAP HANA SQL - Introduction SQLScript and Procedures Using Currency Conversion
Objectives SQLScript and Procedures
Explain SQLScript and SQLScript extensions
Explain SQLScript implementation logic
Create and call a Procedure
Explain the calculation engine and calculation model
Explain the functionality of calculation engine operators
Overview SQLScript and Procedure
Introduction to SQLScript Overview of SQLScript Extensions SQLScript implementation logic Procedures Introduction to calculation engine and calculation model Introduction to Calculation engine operators
SQLScript: Concept SQLScript and Procedures
SQLScript is a collection of extensions to Structured Query Language (SQL). Data Extension
Allows the definition of table types without corresponding tables.
Functional Extension
Allows definitions of (side-effect free) functions which can be used to express and encapsulate complex data flows
Procedural Extension
Provides imperative constructs executed in the context of the database process.
SQLScript allows developers to push data-intensive logic into the database.
SQLScript encourages developers to implement algorithms using a set-oriented paradigm instead of a one-tuple-at-a-time paradigm.
SQLScript allows usage of imperative as well as declarative statements.
SQLScript: Implementation Logic SQLScript and Procedures
[Diagram: application logic moves from the client into the database on two levels - Orchestration Logic, implemented with the imperative extension, and Declarative Logic, implemented with the functional extension]
There are two levels on which application logic can be brought closer to the database: Orchestration Logic and Declarative Logic.

Orchestration Logic: Orchestration logic is used for implementing data flow and control flow logic by using DDL-, DML- and SQL-Query statements as well as imperative language constructs like loops and conditionals. The orchestration logic can also execute declarative logic that is defined in the functional extension by calling the corresponding procedures. The imperative extension refers to the superset of this functional core with statements that store and modify both local and global state using assignments. From an application perspective we encourage the use of the functional core for highly optimizable declarative logic, while the focus of the imperative extension (which also features declarative statements like INSERT, UPDATE, and DELETE) shall be orchestration logic.

Declarative Logic: Declarative logic is used for efficient execution of data-intensive computations. This logic is internally represented as data flows which can be executed in parallel. As a consequence, operations in a dataflow graph have to be free of side effects. This means they must not change any global state either in the database or in the application. The first condition is ensured by only allowing changes on the dataset that is passed as input to the operator. The second condition is achieved by only allowing a limited subset of language features to express the logic of the operator.
SQLScript: Data Type Extensions SQLScript and Procedures
Scalar Data Types: The SQLScript type system is based on the SQL-92 type system and supports the following primitive data types:
TINYINT, SMALLINT, INTEGER, BIGINT
DECIMAL(p, s), REAL, FLOAT, DOUBLE
VARCHAR, NVARCHAR, CLOB, NCLOB
VARBINARY, BLOB
DATE, TIME, TIMESTAMP
SQLScript: Data Type Extensions SQLScript and Procedures Table Type: SQLScript's data type extension also allows the definition of table types. These table types are used to define parameters for a procedure. A table type is created using the CREATE TYPE statement and deleted using the DROP TYPE statement.
Syntax: CREATE TYPE [schema.]name AS TABLE (name1 type1 [, name2 type2,...]) DROP TYPE [schema.]name [CASCADE]
SQLScript’s datatype extension also allows the definition of table types. These table types are used to define parameters for a procedure that represent tabular results. The syntax for defining table types follows the SQL syntax for defining new types – also see the SQL reference for details. In order to create a table type in a different schema than the current default schema, the schema has to be provided as a prefix (e.g. myschema.mytable for a table type). The table type is specified using a list of attribute names and primitive data types. For each table type, attributes must have unique names. In contrast to definitions of database tables, table types do not have an instance that is created when the table type is created (i.e., no Data Manipulation Language (DML) statements (INSERT, UPDATE, DELETE) are supported on table types). A table type can be dropped using the DROP TYPE statement. CREATE TABLE TYPE can also be used but it is deprecated now.
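A minimal sketch following the syntax above (the type name tt_books and its columns are invented):

```sql
CREATE TYPE tt_books AS TABLE (
    title NVARCHAR(100),
    price DECIMAL(10,2)
);
-- tt_books can now be used as a parameter type in procedures
DROP TYPE tt_books;
```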
SQLScript: Functional Extensions SQLScript and Procedures
The functional extension allows describing complex dataflow logic using side-effect free procedures.
Procedures can have multiple input parameters and output parameters (which can be of scalar or table types).
Procedures describe a sequence of data transformations on data passed as input and on database tables.
Data transformations can be implemented as queries that follow the SAP HANA database SQL syntax or by calling other procedures.
Read-only procedures can only call other read-only procedures.
The use of SQLScript functional-style procedures has some advantages compared to the use of pure SQL. • The calculation and transformations described in procedures can be parameterized and reused inside other procedures. • The user is able to use and express knowledge about relationships in the data; related computations can share common sub expressions, and related results can be returned using multiple output parameters. • It is easy to define common sub expressions. The query optimizer will decide if a materialization strategy (which avoids re-computation of expressions) or other optimizing rewrites are best to apply. In any case, it eases the task to detect common sub-expressions and improves the readability of the SQLScript code. • If scalar variables are required or imperative language features are required, then procedural extensions are also required.
SQLScript: Procedure SQLScript and Procedures
A procedure is a reusable processing block implemented using SQLScript. The CREATE statement is used to define a procedure and the DROP statement to delete it.
A procedure can be created as read-only (without side effects) or read-write.
Procedures can be implemented using SQLScript, L, or the R language.
The body of a procedure consists of a sequence of statements separated by semicolons. An intermediate variable inside a procedure is not required to be defined before it is bound by an assignment. Cyclic dependencies that result from intermediate result assignments or from calling other functions are not allowed. A variable name is prefixed with ':' when used as input to another statement. A procedure can be created using the SQL editor or using the creation wizard.
SQLScript: Procedure Creation using SQL editor SQLScript and Procedures
Syntax:
CREATE PROCEDURE {schema.}name {({IN|OUT|INOUT} param_name data_type {,...})}
{LANGUAGE <lang>} {SQL SECURITY <mode>}
{READS SQL DATA {WITH RESULT VIEW <view_name>}} AS
BEGIN ... END

READS SQL DATA defines a procedure as read-only.
The implementation LANGUAGE can be specified; the default is SQLScript.
WITH RESULT VIEW is used to create a column view for an output parameter of type table, which can be used in a SQL query.
Language: The implementation language is by default SQLSCRIPT. It is good practice to define the language in all procedure definitions. Other implementation languages are supported but not covered here.

Security Mode: With security mode "definer", which is the default, execution of the procedure is performed with the privileges of the definer of the procedure. The alternative is mode "invoker": in this case, privileges are checked at runtime with the privileges of the caller of the function. Please note that analytic privileges are checked regardless of the security mode.

READS SQL DATA: This is used to define a procedure as read-only; it marks a procedure as being free of side-effects. Neither DDL nor DML statements are allowed in its body, and only other read-only procedures can be called by the procedure. The advantage of this definition is that certain optimizations are only available for read-only procedures.

WITH RESULT VIEW: A RESULT VIEW can be specified, as was previously available for table functions. The name of the result view is no longer bound to a static name scheme but can be any valid SQL identifier. For backward compatibility reasons, the old static name scheme is only supported for procedures generated with the deprecated CREATE FUNCTION syntax.
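A sketch of a read-only procedure following the syntax above; the procedure name, the table type tt_books (assumed to have TITLE and PRICE columns), and the BOOKS table are assumptions for illustration:

```sql
CREATE PROCEDURE get_cheap_books (
    IN  max_price DECIMAL(10,2),
    OUT result    tt_books )
LANGUAGE SQLSCRIPT
SQL SECURITY DEFINER
READS SQL DATA AS
BEGIN
    -- a table variable is bound to the result of the query;
    -- the input parameter is referenced with a ':' prefix
    result = SELECT title, price FROM books
             WHERE price <= :max_price;
END;
```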
SQLScript: Procedure Creation using Wizard SQLScript and Procedure
Start Procedure creation wizard from context menu of package.
Provide creation parameter.
Set “Access Mode” for read-only or read-write classification.
The security mode can be selected by setting the value of the attribute "Run With".
Default schema is used for object name qualification.
SQLScript: Procedure Creation using Wizard SQLScript and Procedure
Define output and input parameter of procedure.
Write application logic in Script view using SQLScript.
SQLScript: Calling a Procedure SQLScript and Procedure CALL - Procedure Called From Client A procedure (or table function) can be called by a client on the outer-most level, using any of the supported client interfaces. Syntax CALL [schema.]name (param1 [, ...])
For table output parameters it is possible to either pass a table or ‘?’. ‘?’ can be used to represent an empty parameter binding.
When executed by a client, CALL behaves consistently with SQL standard semantics; for example, Java clients can call a procedure using a JDBC CallableStatement. CALL returns an iterator over result sets: each output variable of the procedure is represented as a result set. SQL statements that are not assigned to any table variable in the procedure body are added as result sets at the end of the result set iterator. The types of these result structures are determined at compilation time but do not occur in the signature of the procedure. Scalar output variables are scalar values that can be retrieved from the callable statement directly.
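Both call variants in a sketch, assuming a hypothetical procedure get_cheap_books with one scalar IN parameter and one table OUT parameter:

```sql
-- from a client: '?' leaves the table output parameter unbound,
-- so its content is returned as a result set
CALL get_cheap_books (20.00, ?);

-- internal call from another procedure: the IN parameter is bound by a
-- literal or a :variable, and a new table variable receives the output
CALL get_cheap_books (:lv_limit, ot_result);
```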
SQLScript: Calling a Procedure SQLScript and Procedure CALL...WITH OVERVIEW From Client This CALL statement returns one result set that holds the information of which table contains the result of a particular table’s output variable. This is used to populate an existing table by passing it as parameter. When passing ‘?’ to the output parameters, temporary tables holding the result sets will be generated.
Syntax CALL [schema.]name (param1 [, ...]) WITH OVERVIEW
This CALL statement returns one result set that holds the information of which table contains the result of a particular table’s output variable. When passing existing tables to the output parameters CALL … WITH OVERVIEW will insert the result set tuples of the procedure into the given tables.
SQLScript: Calling a Procedure SQLScript and Procedure CALL - Internal Procedure Call For internal calls, i.e. calls of one procedure by another procedure, IN variables are bound by literals or variable references, new OUT variables are bound to the result of the call. Syntax CALL [schema.]name (:in_param1, out_param [, ...])
Calculation Engine SQLScript and Procedures
The calculation engine is the execution engine for SQLScript. SQLScript statements are parsed into a calculation model as much as possible. The calculation engine instantiates a calculation model at the time of query execution.
[Diagram: SQLScript is translated by the SQLScript compiler into a calculation model, a data flow graph. Inside the calc engine, a rule-based model optimizer and a model executor process the model using calc engine operators, intermediate results, and the script execution runtime; the resulting logical execution plan is handed to the database optimizer.]
Calculation Model SQLScript and Procedures
[Diagram: an example calculation model as a data flow graph - Input1 and Input2 feed a Join node; filtered R-OP nodes on top of it produce results that are combined by a Union node.]
Calculation Models can be seen as directed acyclic data flow graphs, where the modeler can define data sources as inputs and different operations (join, aggregation, projection, .. ) on top of them for data manipulations. The restriction to acyclic graph means that loops and recursion are not allowed in calculation models. Each node has a set of inputs and outputs and an operation that transforms the inputs into the outputs. In addition to their primary operation, each node can also have a filter condition for filtering the result set. Calculation models support a variety of node types: • Nodes for set operations such as projection, aggregation, join, union, minus, intersection. • SQL nodes that execute a SQL statement which is an attribute of the node. • For statistical calculations nodes of type R-operation can be used. The calculation performed by such a node is described using the R language for statistical computing. • For complex operations that cannot be described with a graph of data transformations, scripting nodes can be used. The function of such a node is described by a procedural script in Python, “L” language or JavaScript.
Calculation Model Example SQLScript and Procedures
[Diagram: an example calculation model built from nested queries (Query 1 through Query 5) over the data sources EPM_PROCUREMENT1 and M_TIME_DIMENSION.]
SQLScript: Calculation Engine Plan Operators SQLScript and Procedure
Calculation engine plan operators encapsulate data-transformation functionality.
They are an alternative to using SQL statements, as their logic is directly implemented in the calculation engine, i.e. the execution environment of SQLScript.
Operators are categorized as Data Source Access operators and Relational operators.
SQLScript: Data Source Access Operators SQLScript and Procedure
CE_COLUMN_TABLE ("table_name"{, ["attrib_name", ...]})
Example:
ot_books1 = CE_COLUMN_TABLE("BOOKS");
ot_books2 = CE_COLUMN_TABLE("BOOKS", ["TITLE","PRICE","CRCY"]);
This example works only on a column table and does not invoke the SQL processor. It is semantically equivalent to the following:
ot_books3 = SELECT * FROM books;
ot_books4 = SELECT title, price, crcy FROM books;
Data Source Access
The data source access functions bind the column table or column view of a data source to a table variable for reference by other built-in functions or statements in a SQLScript procedure. Attributes cannot be renamed using data source access operators.
CE_COLUMN_TABLE
Description: The CE_COLUMN_TABLE operator provides access to an existing column table. It takes the name of the table and returns its content bound to a variable. Optionally, a list of attribute names can be provided to restrict the output to the given attributes. Note that many of the calculation engine operators provide a projection list for restricting the attributes returned in the output. In the case of relational operators, the attributes may be renamed in the projection list. The functions that provide data source access perform no renaming of attributes, just a simple projection.
SQLScript: Data Source Access Operators SQLScript and Procedure
CE_JOIN_VIEW("join_view_name"{, ["attrib_name", ...]})
Example:
out = CE_JOIN_VIEW("PRODUCT_SALES", ["PRODUCT_KEY", "PRODUCT_TEXT", "SALES"]);
This retrieves the attributes PRODUCT_KEY, PRODUCT_TEXT, and SALES from the join view PRODUCT_SALES. It is equivalent to the following SQL:
out = SELECT product_key, product_text, sales FROM product_sales;
CE_JOIN_VIEW: The CE_JOIN_VIEW operator returns results for an existing join view (also denoted as Attribute View). It takes the name of the join view and an optional list of attributes as parameters.
SQLScript: Data Source Access Operators SQLScript and Procedure
CE_OLAP_VIEW("OLAP_view_name"{, ["DIM", "KEY_FIG", ...]})
Example:
out = CE_OLAP_VIEW("OLAP_view", ["DIM1", "KF"]);
This is equivalent to the following SQL:
out = SELECT dim1, SUM(kf) FROM OLAP_view GROUP BY dim1;
CE_OLAP_VIEW: The CE_OLAP_VIEW operator returns results for an existing OLAP view (also denoted as Analytical View). It takes the name of the OLAP view and an optional list of key figures and dimensions as parameters. The OLAP cube described by the OLAP view is grouped by the given dimensions, and the key figures are aggregated using the default aggregation of the OLAP view.
SQLScript: Data Source Access Operators SQLScript and Procedure
CE_CALC_VIEW ("CALC_VIEW_NAME"{, ["attrib_name", ...]})
Example:
out = CE_CALC_VIEW("_SYS_SS_CE_TESTCECTABLE_RET", ["CID", "CNAME"]);
This is semantically equivalent to the following SQL:
out = SELECT cid, cname FROM "_SYS_SS_CE_TESTCECTABLE_RET";
CE_CALC_VIEW: The CE_CALC_VIEW operator returns results for an existing calculation view. It takes the name of the calculation view and, optionally, a projection list of attribute names to restrict the output to the given attributes.
SQLScript: Relational Operators SQLScript and Procedure
CE_JOIN (:var1_table, :var2_table, [join_attr, ...]{, [attrib_name, ...]})
Example:
ot_pubs_books1 = CE_JOIN (:lt_pubs, :it_books, ["PUBLISHER"]);
ot_pubs_books2 = CE_JOIN (:lt_pubs, :it_books, ["PUBLISHER"], ["TITLE","NAME","PUBLISHER","YEAR"]);
This example is semantically equivalent to the following SQL, but does not invoke the SQL processor:
ot_pubs_books3 = SELECT P.publisher AS publisher, name, street, post_code, city, country, isbn, title, edition, year, price, crcy FROM :lt_pubs AS P, :it_books AS B WHERE P.publisher = B.publisher;
ot_pubs_books4 = SELECT title, name, P.publisher AS publisher, year FROM :lt_pubs AS P, :it_books AS B WHERE P.publisher = B.publisher;
Relational operators provide functionality that is executed directly in the calculation engine. This allows you to exploit the specific semantics of the calculation engine and to tune the code of a procedure if needed.
CE_JOIN: Syntactically, the plan operator CE_JOIN takes four parameters as input:
1. A table variable representing the left argument to be joined.
2. A table variable representing the right argument to be joined.
3. A list of join attributes. Since CE_JOIN requires equal attribute names, one attribute name per pair of join attributes is sufficient. The list must have at least one element.
4. Optionally, a projection list specifying the attributes that should be in the resulting table. If this list is present, it must at least contain the join attributes.
The CE_JOIN operator calculates a natural (inner) join of the given pair of tables on a list of join attributes. For each pair of join attributes, only one attribute will be in the result. Optionally, a projection list of attribute names can be given to restrict the output to the given attributes. If a projection list is provided, it must include the join attributes. Finally, the plan operator requires each pair of join attributes to have identical attribute names. In the case of join attributes having different names, one of them must be renamed prior to the join.
CE_LEFT_OUTER_JOIN: Calculates the left outer join. Apart from the function name, the syntax is the same as for CE_JOIN.
CE_RIGHT_OUTER_JOIN: Calculates the right outer join. Apart from the function name, the syntax is the same as for CE_JOIN.
NOTE: CE_FULL_OUTER_JOIN is not supported.
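As a hedged illustration of the outer-join variants described above — reusing the :lt_pubs and :it_books table variables and the attribute names from the CE_JOIN example:

```sql
-- Sketch: left outer join on PUBLISHER. Publishers without matching books
-- are kept in the result, with the book attributes set to NULL.
ot_pubs_books = CE_LEFT_OUTER_JOIN(:lt_pubs, :it_books,
                                   ["PUBLISHER"],
                                   ["TITLE", "NAME", "PUBLISHER", "YEAR"]);
```

CE_RIGHT_OUTER_JOIN is called the same way; as noted above, CE_FULL_OUTER_JOIN is not available.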
SQLScript: Relational Operators SQLScript and Procedure
CE_PROJECTION (:var_table, [param_name [AS new_param_name], ...]{, [Filter]})
Example:
ot_books1 = CE_PROJECTION (:it_books, ["TITLE", "PRICE", "CRCY" AS "CURRENCY"], '"PRICE" > 50');
This is semantically equivalent to the following SQL:
ot_books2 = SELECT title, price, crcy AS currency FROM :it_books WHERE price > 50;
Restricts the columns in the schema of table variable var_table to those mentioned in the projection list. Optionally renames columns, computes expressions, or applies a filter.
The calculation engine plan operator CE_PROJECTION takes three parameters as input:
• A variable of type table which is subject to the projection. Like CE_JOIN, CE_PROJECTION cannot handle tables directly as input.
• A list of attributes which should be in the resulting table. The list must have at least one element. The attributes can be renamed using the SQL keyword AS, and expressions can be evaluated using the CE_CALC function.
• An optional filter in which Boolean expressions are allowed, as defined for the CE_CALC operator below.
In this operator, the projection in the second parameter is applied first, including column renaming and computation of expressions. The filter is applied as the last step.
SQLScript: Relational Operators SQLScript and Procedure
CE_CALC ('<expression>', <result_type>)
Example:
with_tax = CE_PROJECTION(:product,
    ["CID", "CNAME", "OID", "SALES",
     CE_CALC('"SALES" * :vat_rate', decimal(10,2)) AS "SALES_VAT"],
    '"CNAME" = :cname');
Notice that all columns used in the CE_CALC have to be included in the projection list.
CE_CALC is used inside other operators discussed in this section. It evaluates an expression, the result of which is usually then bound to a new column. An important use case is evaluating expressions in CE_PROJECTION. The CE_CALC function takes two arguments:
1. The expression, enclosed in single quotes.
2. The result type of the expression, as a SQL type.
Another frequent use case of CE_CALC is computing row numbers:
CREATE PROCEDURE ceGetRowNum(IN it_books books, OUT ranked_books ot_ranked_books)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  ordered_books = SELECT title, price, crcy FROM :it_books ORDER BY price DESC;
  ranked_books = CE_PROJECTION(:ordered_books,
      ["TITLE", "PRICE", CE_CALC('rownum()', integer) AS "RANK", "CRCY" AS "CURRENCY"]);
END;
Note that the projection reads from :ordered_books rather than :it_books, so the row numbers follow the descending price order.
SQLScript: Relational Operators SQLScript and Procedure
CE_AGGREGATION (:var_table, [aggregate ("column") {AS "renamed_col"}]{, ["column", ...]});
Example:
ot_books1 = CE_AGGREGATION (:it_books, [COUNT ("PUBLISHER") AS "CNT"], ["YEAR"]);
This is semantically equivalent to the following SQL:
ot_books2 = SELECT COUNT (publisher) AS cnt, year FROM :it_books GROUP BY year;
Groups the input and computes aggregates for each group. Syntactically, the aggregation operator takes three input parameters:
1. A variable of type table containing the data that should be aggregated. CE_AGGREGATION cannot handle tables directly as input.
2. A list of aggregates. For instance, [SUM ("A"), MAX("B")] specifies that in the result, column "A" is aggregated using the SQL aggregate SUM, and for column "B" the maximum value is returned.
3. An optional list of group-by attributes. For instance, ["C"] specifies that the output should be grouped by column C, that is, the resulting schema has a column named C in which every attribute value from the input table appears exactly once. If this list is absent, the entire input table is treated as a single group and the aggregate functions are applied to all tuples.
Note that CE_AGGREGATION implicitly defines a projection: all columns that are not in the list of aggregates or in the group-by list are not part of the result.
Supported aggregation functions are:
• count("column")
• sum("column")
• min("column")
• max("column")
• use sum("column") / count("column") to compute the average
Note that count(*) can be achieved by doing an aggregation on any integer column; if no group-by attributes are provided, this counts all non-null values.
SQLScript: Relational Operators SQLScript and Procedure
CE_UNION_ALL (:var_table1, :var_table2)
Example:
ot_all_books1 = CE_UNION_ALL (:lt_books, :it_audiobooks);
This is semantically equivalent to the following SQL:
ot_all_books2 = SELECT * FROM :lt_books UNION ALL SELECT * FROM :it_audiobooks;
The CE_UNION_ALL function is semantically equivalent to the SQL UNION ALL statement. It computes the union of two tables, which need to have identical schemas. The CE_UNION_ALL function preserves duplicates, that is, the result is a table that contains all the rows from both input tables.
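To show how the operators above can be chained inside one procedure, here is a sketch. The tables BOOKS and AUDIOBOOKS, their columns, and the output table type tt_summary are assumptions for illustration only:

```sql
-- Sketch: read two column tables, filter each, union them, then aggregate.
CREATE PROCEDURE get_price_summary(OUT ot_result tt_summary)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  lt_books = CE_COLUMN_TABLE("BOOKS", ["TITLE", "PRICE", "YEAR"]);
  lt_audio = CE_COLUMN_TABLE("AUDIOBOOKS", ["TITLE", "PRICE", "YEAR"]);
  -- CE_PROJECTION applies the projection list first, then the filter
  lt_recent_books = CE_PROJECTION(:lt_books, ["TITLE", "PRICE", "YEAR"], '"YEAR" > 2010');
  lt_recent_audio = CE_PROJECTION(:lt_audio, ["TITLE", "PRICE", "YEAR"], '"YEAR" > 2010');
  -- Both inputs have identical schemas, as CE_UNION_ALL requires
  lt_all = CE_UNION_ALL(:lt_recent_books, :lt_recent_audio);
  -- Implicit projection: only YEAR and the SUM aggregate survive
  ot_result = CE_AGGREGATION(:lt_all, [SUM("PRICE") AS "TOTAL_PRICE"], ["YEAR"]);
END;
```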
SQLScript: Control statement SQLScript and Procedure
IF <bool-expr1> THEN
  {then-stmts1}
{ELSEIF <bool-expr2> THEN
  {then-stmts2}}
{ELSE
  {else-stmts3}}
END IF
Example:
SELECT count(*) INTO found FROM books WHERE isbn = :v_isbn;
IF :found IS NULL THEN
  CALL ins_msg_proc ('result of count(*) cannot be NULL');
ELSE
  CALL ins_msg_proc ('result of count(*) not NULL - as expected');
END IF;
The IF statement consists of a Boolean expression – bool-expr1. If this expression evaluates to true, the statements – then-stmts1 – in the mandatory THEN block are executed. The IF statement ends with END IF. The remaining parts are optional.
If the Boolean expression – bool-expr1 – does not evaluate to true, the ELSE branch is evaluated. This branch starts with ELSE, and the statements – else-stmts3 – are executed without further checks. After an ELSE branch, no further ELSE or ELSEIF branches are allowed.
Alternatively, when ELSEIF is used instead of ELSE, another Boolean expression – bool-expr2 – is evaluated. If it evaluates to true, the statements – then-stmts2 – are executed. In this fashion an arbitrary number of ELSEIF clauses can be added. This statement can be used to simulate the switch-case statement known from many programming languages.
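A sketch of the ELSEIF chaining described above, in the style of a switch-case. The scalar variable :v_rating and the procedure ins_msg_proc (borrowed from the earlier example) are assumptions:

```sql
IF :v_rating = 1 THEN
    CALL ins_msg_proc('rating: poor');
ELSEIF :v_rating = 2 THEN
    CALL ins_msg_proc('rating: average');
ELSEIF :v_rating = 3 THEN
    CALL ins_msg_proc('rating: good');
ELSE
    CALL ins_msg_proc('rating: not classified');
END IF;
```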
SQLScript: Control statement SQLScript and Procedure
WHILE <bool-expr> DO
  {stmts}
END WHILE

FOR <loop-var> IN {REVERSE} <start> .. <end> DO
  {stmts}
END FOR
While Loop
The while loop executes the statements – stmts – in the body of the loop as long as the Boolean expression – bool-expr – at the beginning of the loop evaluates to true.
For Loop
The for loop iterates over a range of numeric values – denoted by start and end in the syntax – and binds the current value to a variable (loop-var) in ascending order. Iteration starts with the value start and is incremented by one until loop-var is larger than end. Hence, if start is larger than end, the loop body is not evaluated. For each enumerated value of the loop variable, the statements in the body of the loop are evaluated. The optional keyword REVERSE specifies that the range is iterated in descending order.
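A sketch of both loop forms inside a procedure body. The counter variable and the call to ins_msg_proc (from the earlier example) are assumptions for illustration:

```sql
DECLARE v_count INT := 3;

-- WHILE: runs as long as the condition holds
WHILE :v_count > 0 DO
    CALL ins_msg_proc('countdown ' || :v_count);
    v_count := :v_count - 1;
END WHILE;

-- FOR: iterates 1, 2, 3; with REVERSE it would iterate 3, 2, 1
FOR v_i IN 1..3 DO
    CALL ins_msg_proc('iteration ' || :v_i);
END FOR;
```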
SQLScript: Dynamic SQL SQLScript and Procedure EXEC '<sql-statement>'
Dynamic SQL is used to construct SQL statements at runtime in a procedure.
Dynamic SQL allows you to use variables where they might not be supported in SQLScript. It provides more flexibility in creating SQL statements.
Optimization of dynamic SQL statements is limited.
It is not possible to bind the result of a dynamic SQL statement to a SQLScript variable.
Dynamic SQL is prone to SQL injection.
EXEC '<sql-statement>' is used to construct dynamic SQL.
Executes the SQL statement passed in a string argument. This statement allows for constructing an SQL statement at the execution time of a procedure. Thus, on the one hand, dynamic SQL allows you to use variables where they might not be supported in SQLScript, and offers more flexibility in creating SQL statements. On the other hand, dynamic SQL comes with additional costs at runtime:
• Opportunities for optimizations are limited.
• The statement is potentially recompiled every time it is executed.
• You cannot use SQLScript variables in the SQL statement itself (only when constructing the SQL statement string).
• You cannot bind the result of a dynamic SQL statement to a SQLScript variable.
• You must be very careful to avoid SQL injection bugs that might harm the integrity or security of the database.
Note: It is recommended to avoid dynamic SQL because it might have a negative impact on security or performance.
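A minimal sketch of EXEC, assuming a hypothetical LOG_TABLE. Because the schema name is assembled at runtime from the variable :v_schema, a static statement cannot express it; if :v_schema could ever come from user input, this is exactly the SQL injection risk warned about above:

```sql
-- Sketch: build and execute a DELETE against a schema chosen at runtime.
-- Note: the result of EXEC cannot be bound to a SQLScript variable.
EXEC 'DELETE FROM "' || :v_schema || '"."LOG_TABLE" WHERE "LOG_LEVEL" = ''DEBUG''';
```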
Summary Approaching SAP HANA Modeling
Explain SQLScript and SQLScript extensions
Describe SQLScript implementation logic
Create and call procedures
Explain the calculation engine and calculation model
Explain the functionality of calculation engine operators
Unit 3: Advanced Modeling
Creating Attribute Views
Using Hierarchies
Creating Restricted & Calculated Measures
Using Filter Operations
Using Variables
Creating Calculation Views
SAP HANA SQL - Introduction
SQLScript and Procedures
Using Currency Conversion
Objectives Using Currency Conversion
Understand Currency Conversion in SAP HANA
Apply Currency Conversion in Analytic Views
Leverage Fixed Currencies
Leverage Source Currency from Attributes
Create Target Currency Variables
Use Currency Conversion in Calculation Views
Overview Using Currency Conversion
Currency Conversion in SAP HANA
Currency Conversion in Analytic Views
Using Fixed Currencies
Using Source Currency from Attributes
Creating Target Currency Variables
Currency Conversion in Calculation Views
Currency Conversion Using Currency Conversion
Most frontend tools do not allow defining or switching the reporting currency in the UI, and the master data might not contain such information, so we have to convert the possibly many monetary document currencies into just a few. SAP HANA provides the functions needed to achieve currency conversion during data modeling.
In source data, each monetary value is most often expressed in a single currency – typically the document currency. For reporting purposes, it can be necessary to convert the currencies. The purposes can include, but are not limited to:
• Corporate Reporting – when using a global reporting currency in corporate reporting, all values should be displayed in the global reporting currency.
• Regional Reporting – for example, the US region might want to see European figures in USD.
• P&L Reporting – you might want to see the effects of currency gains/losses.
• Accounting – as a requirement in multi-currency general ledger transactions.
Currency Conversion Using Currency Conversion
As currency exchange rates fluctuate constantly in the global markets, it is necessary not only to define the source and target currencies when converting, but also to define the point in time at which the currency conversion should take place. Examples could be:
Billing Date
Posting Date
Financial Year End
Today’s Date
Some currency exchange rates fluctuate more than others. When the source or target currency is a volatile currency, such as the Japanese Yen, it can make a very big difference to your measures if the conversion is not done on the correct date.
Currency Conversion in SAP HANA Using Currency Conversion
The preferred way to define currency conversion for measures is to model it in an Analytic View.
The Analytic View provides the easiest way to convert currencies, and the conversion modeling can be done using the graphical interface.
Currency Conversion in SAP HANA Using Currency Conversion
Sometimes, however, due to constraints in the master data or the complexity of the reporting requirements, it might be necessary to model the conversion within a Calculation View.
This diagram depicts a calculation view. The graph is there for illustration; the actual currency conversion is not done graphically in a Calculation View.
Currency Conversion in SAP HANA Using Currency Conversion
The following standard SAP tables need to be included in the SYSTEM schema or another specified schema, as they are used for currency conversion. What we need are some of the TCUR* tables. They must be replicated so that the conversions work correctly.
Table Name – Description
TCURR – Exchange Rates
TCURV – Exchange rate types for currency translation
TCURF – Conversion Factors
TCURN – Quotations
TCURX – Currency Decimals
If you do not have these tables available in the SYSTEM schema or another defined schema, you will need to replicate them there. If you cannot replicate these tables (let's say you do not have replication set up), you will need to find other means to load them into this schema if you want to enable the currency conversion functionality.
Currency Conversion in Analytic Views Using Currency Conversion
Process Flow - Currency Conversion in Analytic Views
1. Create a Measure – define it as Measure Type "Amount with Currency".
2. Enable for Conversion – this option makes it possible to convert the currency.
3. Select Target Currency – this is the currency we want to convert to.
4. Define Source Currency – this is the currency we want to convert from; the currency in which our measure is stored.
5. Select Date of Conversion and Exchange Type – the Date of Conversion determines which day's exchange rate is used; the Exchange Type is the type of exchange rate we want to use.
The diagram depicts the process flow for currency conversion within an Analytic View. Further slides will go into detail about each step.
Currency Conversion in Analytic Views Using Currency Conversion
A measure in an Analytic View can be defined with the Measure Type "Amount with Currency". This is the Measure Type required for this method.
By default, a measure is defined as Type "Simple". "Amount with Currency" is a specific measure type in its own right.
Currency Conversion in Analytic Views Using Currency Conversion Enabling the Measure for conversion releases further options in the Measure definition in order to complete the conversion as required.
This Measure Type "Amount with Currency" can be used even when conversion is not actually used. When it is not enabled for conversion, all that happens is that you see the measure displayed with the selected currency.
Currency Conversion in Analytic Views Using Currency Conversion Selecting the Currency brings up the Currency Dialog where the Target Currency can be defined. A Fixed Currency will convert the Source Currency into a single currency.
We also have the option of creating the Target Currency not based on a Fixed Currency, but rather based on an Attribute or Variable.
Note that whatever currency is selected in the top "Currency:" drop-down box will also be selected as the "Target Currency". The reason for this is that when the measure is not enabled for conversion, it does not have a Target Currency as such, as it is not converted. It should be noted that you cannot change the Target Currency in any way other than changing what is selected in the top "Currency:" drop-down box.
Currency Conversion in Analytic Views Using Currency Conversion Clicking on the Source Currency, once again brings up the Currency Dialog, this time to set the Source Currency.
If we know the base Currency we can set it as a Fixed Type. If it varies we can set it as an Attribute type and it will change based on an Attribute.
When a measure is enabled for conversion a source currency is required. This is so that SAP HANA will know which currency to convert from.
Currency Conversion in Analytic Views Using Currency Conversion The Exchange Type is the type of Exchange Rate we want to use, as supplied from our SAP TCUR* base tables. The Date Mapping defines the date when we want the currency conversion to occur, based on either a Fixed date, an Attribute or a Variable.
There are many different Exchange Rates available, and in order to choose the right one, the user needs to know the contents of the TCUR* tables and their implications. To find out this information, the customer might need to speak to the person responsible for the Basis installation, or the person in charge of the currency exchange rates in the SAP installation at their site.
Leverage Fixed Currencies Using Currency Conversion
When the source currency, the target currency, or both are known, it is beneficial to set the currency of either or both to "Fixed". Using this method, all lines for the measure in the Analytic View are converted using the same currency.
Sales Order – Amount GBP – Amount USD
001 – £2 500 – $3 250
002 – £10 000 – $13 000
003 – £3 000 – $3 900
004 – £1 650 – $2 145
When both source and target are set to a fixed currency, this is the simplest type of currency conversion.
Leverage Fixed Currencies Using Currency Conversion
To achieve a conversion using fixed currencies, we set the Currency Type as "Fixed" and define the appropriate source and target currencies. This way we achieve a one-to-one conversion.
It is worth noting that "Fixed" in this instance does not mean fixed in terms of a fixed date for exchange rate data; it only means that it is one single fixed currency rather than multiple currencies.
Leverage Source Currency from Attribute Using Currency Conversion There can sometimes be instances of master data where the base currencies vary across the table.
Sales Order – Currency – Amount
001 – EUR – € 2 500
002 – JPY – ¥100 000
003 – CHF – CHF 3 000
004 – SEK – 165 000 kr
On one line you might find, for example, a sales order with a value in Euro; on another line you might find a different value expressed in Japanese Yen, and so on. Summing up these different currencies would give us a useless value.
Summing up the values above would give us the value 270 500, but it would not mean anything to anyone. The currency conversion will need to be done on a line-by-line basis, not on the total amount.
Leverage Source Currency from Attribute Using Currency Conversion
With multiple currencies in the master data, we are not able to set the source currency to a single fixed type. Instead, in order to convert the currency, we need to define the source currency based on an attribute in the master data.
Source (with Source Currency Attribute):
Sales Order – Currency – Amount
001 – EUR – € 2 500
002 – JPY – ¥100 000
003 – CHF – CHF 3 000
004 – SEK – 165 000 kr
Converted (fixed target currency):
Sales Order – Amount USD
001 – $3 250
002 – $13 000
003 – $3 900
004 – $2 145
In the example on this slide, the source currency varies depending on the Currency Attribute. But note that the Target currency remains fixed, i.e. each line has USD as the currency.
Leverage Source Currency from Attribute Using Currency Conversion When using an Attribute Type to define the Source Currency we set the attribute that contains the currency code to be used for the conversion to the target currency. This way SAP HANA knows which currency exchange rate to use for each individual line.
This slide shows where in the dialogues to set the Source Currency Type to use the attribute.
Use Currency Conversion in Analytic Views Using Currency Conversion
SAP HANA includes some further functionality to ease currency conversion.
Enable for decimal shifts – this option is used when you want to shift the decimal separator to the appropriate place according to the currency exchange rate data available in the master data tables.
Upon Conversion Failure – this selection gives you the opportunity to define how SAP HANA should deal with conversion failures. You can opt to either: Fail / Set to NULL / Ignore.
This slide only touches briefly on the functionality of "Enable for decimal shifts" and "Upon Conversion Failure". It is good to know that the functionality exists, but we do not go into further detail about these functions, nor are they used in the exercises.
Use Currency Conversion in Analytic Views Using Currency Conversion
To summarize currency conversion in Analytic Views, you can set:
Decimal Shift – Yes/No
Target Currency – Fixed or Attribute Based
Source Currency – Fixed or Attribute Based
Exchange Type – when you have multiple Exchange Rates, you specify the one to use
Exchange Date – the date when conversion should be performed
Upon Conversion Failure – Fail / Set to NULL / Ignore
This slide summarizes all the options you have when defining currency conversions in an Analytic View. The only extra option available is to use a Variable for setting the Target (or Source) Currency. It is explained in more detail in the next module.
Use Currency Conversion in Analytic Views Using Currency Conversion
There can however be circumstances where the Target Currency options “Fixed” or “Attribute”-based are not sufficient.
You might instead want the option to let the reporting user choose the currency that the measure should be displayed in. This will be dealt with in the following topic, “Create Target Currency Variable”.
The variable can also be described as a "prompt": it asks the user which currency to use.
Create Target Currency Variable Using Currency Conversion To enable a prompt where the reporting user can specify the target currency of the measure, we first need to create a new Variable. The Variable should be defined as:
Type: Currency
Data Type: VARCHAR, Length: 5
The reason for this exact definition – VARCHAR of length 5 – is that this is how the currency is defined in the TCUR* tables. We need to use the same definition for our variable.
Create Target Currency Variable Using Currency Conversion After the variable has been defined, you can then set the Target Currency to be expressed not as a Fixed or Attribute Type, but rather as a Variable. The variable created can then be selected as the Currency Type.
The variable used in the example in the previous slide is called CCY_VARIABLE, this is the same variable that is selected in this slide.
Create Target Currency Variable Using Currency Conversion When you later view the data in the Analytic View, you get the option of choosing the measure target currency based on the variable that you have created.
As a user you will first need to click on the add button above (left) to add the response to the variable. Clicking on find (right) gives you a complete list of currencies available.
Use Currency conversion in Calculation Views Using Currency Conversion Calculation View It is also possible to perform currency conversions in a Calculation View. The method used to convert currencies differs from how it is done in Analytic Views. Rather than defining the conversion rules graphically as done in the Analytic Views, the definitions will need to be written by the modeler.
This graph depicts a calculation view. The graph is there for illustration; the actual currency conversion is not done graphically in a Calculation View.
Use Currency conversion in Calculation Views Using Currency Conversion The function we want to use to perform currency conversion in Calculation Views is called CE_CONVERSION. When calling CE_CONVERSION you specify the input table (in most cases an Analytic view), and then define how you want the currency conversion done.
This picture shows an example of the usage of the CE_CONVERSION function; it is an actual screenshot from HANA Studio. Further slides will go into more detail about what the actual arguments do.
Use Currency Conversion in Calculation Views: Using Currency Conversion. The argument definitions necessary when using CE_CONVERSION are similar to what needs to be defined when performing currency conversions graphically in an Analytic View.

Key | Values | Default | Meaning
'error_handling' | 'fail on error', 'set to null', 'keep unconverted' | 'fail on error' | reaction if a rate could not be determined for a row
'client' | any | none | the client number used for currency conversion
'family' | 'currency', 'unit' | none | the family of the conversion to be used
'method' | 'ERP' | none | the conversion method to be used
'conversion_type' | any | none | the type of exchange rate used for currency conversion
'source_unit' | any | none | the default source unit for any kind of conversion
'target_unit' | any | none | the default target unit for any kind of conversion
'reference_date' | any | none | the default reference date for any kind of conversion
'schema' | any | current schema | the default schema in which the conversion tables are looked up
'output' | combinations of 'input', 'unconverted', 'converted', 'passed_through', 'output_unit', 'source_unit', 'target_unit', 'reference_date' | 'converted, passed_through, output_unit' | which attributes are included in the output
Please note that the above is not a complete list; further optional arguments are available if required. See the SAP HANA reference manual.
It can be useful to go back to the previous slide to show what the different argument calls do, based on the information in the table. The table lists the argument calls in the same order as the screenshot on the previous slide. Method 'ERP' makes use of the conversion method from SAP ERP; at the moment this is the only option available for this argument.
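To make the argument keys above concrete, a CE_CONVERSION call might look like the following sketch. This is illustrative only: the variable :sales, the measure column AMOUNT, and the chosen values are hypothetical, and the exact argument list should be checked against the SAP HANA reference manual.

```sql
-- Hypothetical sketch: convert the AMOUNT measure of :sales to EUR
-- using the ERP conversion method and exchange rate type 'M'.
conv = CE_CONVERSION(:sales,
         [ 'family'          = 'currency',
           'method'          = 'ERP',
           'client'          = '000',
           'conversion_type' = 'M',
           'source_unit'     = 'USD',
           'target_unit'     = 'EUR',
           'reference_date'  = '2012-01-01',
           'error_handling'  = 'set to null' ],
         [ "AMOUNT" ]);
```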
Use Currency Conversion in Calculation Views: Using Currency Conversion. A simple Calculation View with the sole purpose of converting a measure into a different currency can be written in three steps: Define the input table
Perform the Conversion using the CE_CONVERSION function
Project the results
Please note the above is pseudo-code and does not run unmodified.
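The three steps above can be sketched as a script body along the following lines. This is a hedged illustration, not a definitive implementation: the package, view, column, and variable names are hypothetical, and the argument list is abbreviated.

```sql
-- Step 1: define the input table (here, reading from an Analytic View)
sales = CE_OLAP_VIEW("_SYS_BIC"."mypackage/AN_SALES",
                     [ "CURRENCY", "AMOUNT" ]);

-- Step 2: perform the conversion using the CE_CONVERSION function
conv = CE_CONVERSION(:sales,
         [ 'family' = 'currency', 'method' = 'ERP',
           'client' = '000', 'conversion_type' = 'M',
           'target_unit' = 'EUR', 'reference_date' = '2012-01-01' ],
         [ "AMOUNT" ]);

-- Step 3: project the results
var_out = CE_PROJECTION(:conv, [ "CURRENCY", "AMOUNT" ]);
```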
This slide gives an idea of context, i.e. where a CE_CONVERSION function can be used in a Calculation View. This is the most basic usage; in reality, the CE_CONVERSION function can be used anywhere in a more complex Calculation View.
Objectives Using Currency Conversion
Understand Currency Conversion in SAP HANA Apply Currency Conversion in Analytic Views Leverage Fixed Currencies Leverage Source Currency from Attributes Create Target Currency Variables Use Currency conversion in Calculation Views
Agenda SAP HANA Implementation and Modeling Unit 1: Approaching SAP HANA Modeling Unit 2: Connecting Tables Unit 3: Advanced Modeling Unit 4: Full Text Search Unit 5: Processing Information Models Unit 6: Managing Modeling Content Unit 7: Security and Authorizations Unit 8: Data Provisioning using SLT Unit 9: Data Provisioning using SAP Data Services Unit 10: Data Provisioning using Flat File Upload Unit 11: Data Provisioning using Direct Extractor Connection
Unit 4: Fulltext Search
Fulltext Search Overview
Data Types and Fulltext Indexes
Using Fulltext Search
Objectives Fulltext Search Overview
At the end of this Lesson you will be able to: Understand the Fulltext Search capabilities of SAP HANA Understand the benefits of Fulltext Search Understand how the text searching processes are invoked Understand when to use Fuzzy Search
Overview Fulltext Search Overview
This module covers the following topics: Fulltext Search capabilities of SAP HANA Fulltext Search benefits Text Searching Engines Fuzzy Search Introduction
What is Fulltext Search: Fulltext Search Overview. The Fulltext Search capabilities of HANA significantly speed up searching within large amounts of text data. Fuzzy Search functionality enables finding strings that match a pattern approximately (rather than exactly), both finding approximate substring matches inside a given string and finding dictionary strings that match the pattern approximately. Text Analysis scripts provide additional possibilities for analyzing strings or large text columns.
What is Fuzzy Search: Fulltext Search Overview. Fuzzy Search is a fast and fault-tolerant search feature of SAP HANA. The term "fault-tolerant search" means that a database query returns records even if the search term (the user input) contains additional or missing characters or other types of spelling error. Typical applications include:
Fault-tolerant search in text columns (for example, html or pdf): Search for documents on 'Driethanolamyn' and find all documents that contain the term 'Triethanolamine'.
Fault-tolerant search in structured database content: Search for a product called 'coffe krisp biscuit' and find 'Toffee Crisp Biscuits'.
Fault-tolerant check for duplicate records: Before creating a new customer record in a CRM system, search for similar customer records and verify that there are no duplicates already stored in the system. When, for example, creating a new record 'SAB Aktiengesellschaft & Co KG Deutschl.' in 'Wahldorf', the system shall bring up 'SAP Deutschland AG & Co. KG' in 'Walldorf' as a possible duplicate.
SAP HANA Fulltext Search Aids: Fulltext Search Overview. HANA provides the following to aid Fulltext Search:
Fulltext and Fuzzy Search
Studio Modeler and SQLScript enhancements
Python-based Text Analysis script sets
Search User Interface sample application
Fulltext Search Benefits: Fulltext Search Overview. The benefits include:
Exploit unstructured content without additional costs
Less data duplication and movement – leverage one infrastructure for analytical and search workloads
Easy-to-use modeling tools – use HANA Studio to create search models
Build search applications quickly – UI building blocks provided
Fulltext Search Processes: Fulltext Search Overview. Fulltext Search functions are invoked by a dedicated process.
[Architecture diagram: UIs and interface services access SAP HANA, where the Search Engine and the Text Processor work on the In-Memory Column Store alongside the Calc Engine and the Metadata Repository; data is loaded from ETL/replication sources.]
Fulltext Search Processes: Fulltext Search Overview. To enable Fulltext Search, Search Models are created in HANA Studio by the modeler.
[Diagram: the Modeler in HANA Studio creates a Search Model whose metadata is used by the Information Access Services, which serve search and suggestions to UI components; the Search Engine provides fuzzy search, ranking, and snippets, and the Text Processor provides linguistic processing, both operating on tables in the HANA Column Store.]
Fulltext Search Processes Fulltext Search Overview During data access through Information Access Services, Fuzzy Search queries are routed to the Search Engine, whilst Linguistic Processing is handled by the Text Processor.
Fulltext Search UI Toolkit: Fulltext Search Overview. The Fulltext Search UI toolkit is provided with HANA SPS4 and provides user interface building blocks for developing search-based applications on SAP HANA. The toolkit is based on HTML5 and JavaScript.
Text Analysis: Fulltext Search Overview. The Text Analysis enablement of SAP HANA allows you to make use of these unique capabilities in the domain of unstructured data as well. Text analysis is a set of Python-based scripts that can be installed to extract entities such as persons, products, and places from documents, and thus enrich the set of structured information in SAP HANA. These additional attributes enable improved analytics and search. Text analysis provides a vast number of possible entity types and analysis rules for many industries in 20 languages, with a rich standard set of dictionaries and rules for identifying and extracting entities from any business text. The standard set covers common entities such as organizations, persons, countries, dates, and measures, and also contains specialized extraction content, for example for Marketing ("voice of the customer") or the Public Sector.
Objectives Fulltext Search Overview
You should now be able to: Understand the Fulltext Search capabilities of SAP HANA Understand the benefits of Fulltext Search Understand how the text searching processes are invoked Understand when to use Fuzzy Search
Unit 4: Fulltext Search Fulltext Search Overview Data Types and Fulltext Indexes Using Fulltext Search
Objectives Data Types and Fulltext Indexes
At the end of this Lesson you will be able to: Understand the Fulltext Search usage for different data types Understand how to create columns that enable Fulltext Search Understand how to enable HANA Studio for Fulltext Modeling
Overview Data Types and Fulltext Indexes
This module covers the following topics:
Fulltext Search usage for different data types
Enabling columns for Fulltext Search
Enabling HANA Studio for Fulltext modeling
Supported Data Types Data Types and Fulltext Indexes Fuzzy search works out-of-the-box on the following column store data types:
TEXT
SHORTTEXT
VARCHAR
NVARCHAR
DATE
All data types with a full-text index
Full Text Indexes Data Types and Fulltext Indexes
It is possible to speed up the fuzzy search by creating additional data structures, which are used for faster calculation of the fuzzy score. These data structures exist in memory only, so no additional disk space is required. You should enable the fast fuzzy search structures for all database columns that have a high load of fuzzy searches, and for all database columns that are used in performance-critical queries, to get the best response times possible. The additional data structures increase the total memory footprint of the loaded table.
Full Text Indexes Data Types and Fulltext Indexes
For the data types TEXT and SHORTTEXT, the index is created during table creation; for the data types VARCHAR, NVARCHAR, and CLOB, an index has to be created manually after table creation. Note that full text indexes cannot be created for columns of the DATE data type.
Full Text Indexes Data Types and Fulltext Indexes
The syntax used to enable Full Text Indexes on TEXT or SHORTTEXT is the following:
CREATE COLUMN TABLE mytable (
  id INTEGER PRIMARY KEY,
  col1 SHORTTEXT(100) FUZZY SEARCH INDEX ON
);

It is currently not possible to change these settings at a later point in time by using the ALTER TABLE command.
Full Text Indexes Data Types and Fulltext Indexes
The syntax used to enable Full Text Indexes on VARCHAR, NVARCHAR or CLOB is the following:

CREATE COLUMN TABLE mytable ( col1 NVARCHAR(2000) );
CREATE FULLTEXT INDEX myindex ON mytable(col1) FUZZY SEARCH INDEX ON;

This can be changed at a later point in time by using the ALTER FULLTEXT INDEX command.
Fuzzy Search SQL Syntax Data Types and Fulltext Indexes In order to use the Studio for Search modeling, you first enable Search Options in the preferences of the Studio under: Modeler -> Search Options
Fuzzy Search SQL Syntax: Data Types and Fulltext Indexes. You will then have access to a new tab called "Information Access" in the properties of the attributes you create.
Objectives Data Types and Fulltext Indexes
You should now be able to: Understand the Fulltext Search usage for different data types Understand how to create columns that enable Fulltext Search Understand how to enable HANA Studio for Fulltext Modeling
Unit 4: Fulltext Search Fulltext Search Overview Data Types and Fulltext Indexes Using Fulltext Search
Objectives Using Fulltext Search
At the end of this Lesson you will be able to: Know how to use Fulltext Search Know how to use Fuzzy Search Understand Fuzzy Search relevance scoring Know how to use Freestyle Search
Overview Using Fulltext Search
This module covers the following topics: Fulltext Search Fuzzy Search Fuzzy Search Relevance Scoring Freestyle Search
Fulltext Search SQL Syntax Using Fulltext Search
You call Fulltext Search by using the CONTAINS() function in the WHERE-clause of a SELECT statement. Without the Fuzzy option the search will only return results that contain the exact phrase searched for.
SELECT SCORE() AS score, * FROM documents WHERE CONTAINS(doc_content, 'Driethanolamyn') ORDER BY score DESC;
Fulltext Search SQL Syntax Using Fulltext Search
A fuzzy search is an alternative to a non-fault-tolerant SQL statement like the example below, which would not return any results when there are spelling errors.
SELECT ... FROM documents WHERE doc_content LIKE '% Driethanolamyn %' ...
Fuzzy Search Relevance Score: Using Fulltext Search. The fuzzy search algorithm calculates a fuzzy score for each string comparison. The higher the score, the more similar the strings are. A score of 1.0 means the strings are identical; a score of 0.0 means the strings have nothing in common. You can request the score in the SELECT statement by using the SCORE() function, and sort the results of a query by score in descending order to get the best records first (the best record is the one most similar to the user input). When a fuzzy search over multiple columns is used in a SELECT statement, the returned score is the average of the scores of all columns used.
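As an illustration of multi-column scoring, several columns can be named in one CONTAINS() call. This is a sketch only; the table and column names below are hypothetical:

```sql
-- Hypothetical sketch: fuzzy-search two name columns at once;
-- the returned score is averaged over both columns.
SELECT SCORE() AS score, *
  FROM customers
 WHERE CONTAINS((firstname, lastname), 'Maier', FUZZY(0.7))
 ORDER BY score DESC;
```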
String Type Fuzzy Search: Using Fulltext Search.
[Example: 'SAP' ~ 'SAP Deutschland AG & Co. KG', compared as full strings.]
String types support a basic fuzzy string search. The values of a column are compared with the user input, using the fault-tolerant fuzzy string comparison. When working with string types, the fuzzy string comparison always compares the full strings. If searching with 'SAP', for example, a record like 'SAP Deutschland AG & Co. KG' gets a very low score, because only a very small part of the string is equal (3 of 27 characters match).
Text Type Fuzzy Search: Using Fulltext Search.
[Example: 'SAP' ~ 'SAPPHIRE NOW Orlando', compared term by term.]
Text types support a more sophisticated kind of fuzzy search. Texts are tokenized (split into terms) and the fuzzy comparison is done term by term. For example, when searching with 'SAP', a record like 'SAP Deutschland AG & Co. KG' gets a high score, because the term 'SAP' exists in both texts. A record like 'SAPPHIRE NOW Orlando' gets a lower score, because 'SAP' is only a part of the longer term 'SAPPHIRE' (3 of 8 characters match).
Fuzzy Search SQL Syntax Using Fulltext Search You can call the fuzzy search by using the CONTAINS() function with the FUZZY() option in the WHERE-clause of a SELECT statement.
SELECT SCORE() AS score, * FROM documents WHERE CONTAINS(doc_content, 'Driethanolamyn', FUZZY(0.6)) ORDER BY score DESC;

Optionally, the fuzziness threshold can be set manually in the FUZZY() call. If it is set to 0.6, for example, no matches with a score lower than 0.6 are returned. The default is 0.8.
Freestyle Search SQL Syntax Using Fulltext Search
If you want HANA to search for occurrences of your search word or phrase in multiple columns, you can perform a "Freestyle Search". This type of search goes through all columns in the table that have "Freestyle Search" enabled.

SELECT SCORE() AS score, * FROM documents WHERE CONTAINS(*, 'Driethanolamyn', FUZZY) ORDER BY score DESC;

Replacing the column name with a star (*) searches all freestyle-enabled columns.
Objectives Using Fulltext Search
You should now be able to: Know how to use Fulltext Search Know how to use Fuzzy Search Understand Fuzzy Search relevance scoring Know how to use Freestyle Search
Objectives Unit 5 – Processing Information Objects
At the end of this Lesson you will be able to: Explain how to validate models Explain how to compare versions of information objects Explain how to check model references Explain how to generate auto documentation
Model Validation: Processing Information Objects.
[Flow: set preferences for validation rules; create the data model (Attribute Views, Analytical Views, Calculation Views); validate the data model; activate the data model.]
Set Preferences for Validation Rules: Processing Information Objects. In the SAP HANA Studio, go to "Window" -> "Preferences", and then select "Modeler" -> "Validation Rules".
Set Preferences for Validation Rules Processing Information Objects At any moment, you can restore default settings.
You can select precisely which rules are applied during the validation of the Information Objects
Execute Validation Rules Processing Information Objects To validate Information Objects, you can use the button “Validate” or right click directly on the object. You can also select several Information Objects.
Execute Validation Rules: Processing Information Objects. In the Job Log, on the "Current" tab, you can see the details of a job by double-clicking it. On the "History" tab, all job logs appear for the period set in "Preferences".
Object Versions: Processing Information Objects.
[Lifecycle diagram: Creating a view produces an inactive version. Validation, save, and activation make the view active. Modifying and saving an active view produces a new inactive version; validation, save, and activation make it active again.]
The system can display the active view and the inactive view in two separate screens if they exist at the same time. For example, when you create an attribute view for the first time, no active version exists until you activate it. When you modify an existing active view, you create a new inactive version; until you activate it again, the previous active version remains available.
Object Versions – Comparing Versions: Processing Information Objects. The active and the inactive version of a view can be displayed side by side; in this example, the active view has been modified by adding fields.
Note that when both the inactive and the active view exist in the system, the active view is in read-only mode. Consequently, the system does not allow two different inactive views.
Object Versions – View Version History Processing Information Objects
Only active versions are displayed. The name of the user who activated the view, the activation date, and the time elapsed since the last activation are available in the version history.
References – Checking Model References: Processing Information Objects. In the Modeler, it is possible to check where different information objects are used in the schema. This function is very helpful for studying the impact of changes to the data model. Select an object, right-click it, and choose the "Where Used" function.
References – Checking Model References: Processing Information Objects. The "Where-Used List" shows the number of usages of the object, and displays the type, name, and package of each object that currently uses the selected object.
Auto Documentation: Processing Information Objects. It is possible to automatically generate documentation about the data model in HANA. These documents can provide a list of all objects contained in a package, or details on previously selected objects. You can generate Auto Documentation with a right-click on an information object, or directly with the button below.
Auto Documentation – Select Document Type: Processing Information Objects. Two document types are available:
Model Details (displays the details of each information object)
Model List (displays a list of each component of the package)
Auto Documentation – Add Objects to Target List: Processing Information Objects. All information objects in the Content are available. Select one or several objects and use the "Add" button; use the "Remove" button to cancel a selection. You can add objects from different packages to the same generated document.
Auto Documentation – Select Export Type & Save to Location
Processing Information Objects. Unfortunately, it is not yet possible to change the export file type; only the .pdf type is supported at the moment. Then choose a target location to save the generated documents.
Summary Processing Information Objects
You should now be able to: Explain how to validate models Explain how to compare versions of information objects Explain how to check model references Explain how to generate auto documentation
Objectives Managing Modeling Content
At the end of this Lesson you will be able to: Explain how to manage schemas Explain how to import and export data models Explain Translating metadata texts
Schemas – Creating Schemas: Managing Modeling Content. You create schemas to group tables. For an import, you need to create the schema into which all the tables are imported. Schemas are created with an SQL statement.
SQL syntax: CREATE SCHEMA schema_name [OWNED BY name]
Parameters: OWNED BY specifies the name of the schema owner.
Description: the CREATE SCHEMA statement creates a schema in the current database.
Example: CREATE SCHEMA my_schema OWNED BY system;
Schemas – Managing Schema Mapping: Managing Modeling Content. You use this procedure to map logical schemas to physical schemas when transferring information objects from a source system to a target system, in the case of SAP-shipped content. You can define several schema mappings at the same time.
Authoring schema = schema name in the source system
Physical schema = schema name in the target system
Schemas – Managing Schemas Mapping Managing Modeling Content Export TABLE1 from SYSTEM S1 with SCHEMA_S1
Import TABLE1 into SYSTEM S2 and modify SCHEMA_S1 into SCHEMA_S2
SYSTEM S2
SYSTEM S1 SCHEMA_S1.TABLE1
SCHEMA_S1.TABLE1
SCHEMA_S2.TABLE1
© 2012 SAP AG. All rights reserved.
5
For example: consider a source system with information objects referring to physical schema S1, and a target system with physical schema S2. When you export content, the objects referring to S1 are copied to the target, but the copied objects cannot be opened there, because they still refer to S1, which is not present in the target. To solve this, the administrator defines a mapping between the authoring (logical) schema and the physical schema in both the source and the target, where the authoring schema remains the same. If an authoring schema A1 points to S1 in the source and to S2 in the target, then A1 is referenced in both systems instead of S1 and S2.
Delivery Units, Packages, and Models in Perspective
[Diagram: a delivery unit groups one or more packages, each containing models and objects, for transport to another HANA system.]
Import & Export: Managing Modeling Content. You can import models from your local system or from a server. Procedure: in system S1, create a delivery unit and export the models (client or server); then import the models (client or server) into system S2.
HANA Content Transport Capabilities
In the context of data marts: two-step integration into CTS+ ("loose coupling"): manual preparation (server-side export), followed by automated transport and deployment into the target system via CTS+, orchestrated through SAP Solution Manager and export directories.
Potential for ABAP-based new applications (HPAs): TLOGO-based transport, encapsulating SAP HANA content in ABAP objects (this allows transporting SAP HANA content together with application code through standard CTS mechanisms).
In the context of SAP NW 7.3 BW, powered by SAP HANA: leverage the existing transport functionality.
Import & Export – Create Delivery Unit: Managing Modeling Content. You use a delivery unit to create a group of transportable objects for content delivery, and to export information models from the source system to the target server. From the Quick Launch tab page, choose "Delivery Units..." and follow the steps given below.
Import & Export – Create Delivery Unit Managing Modeling Content
From the Delivery Units dialog box, choose create.
You need to associate packages with delivery units. This is required when you export models.
Import & Export – Create Delivery Unit Managing Modeling Content Enter the delivery unit name. Enter the responsible user. In the Version field, enter the delivery unit version. Enter the support package version of the delivery unit. Enter the patch version of the delivery unit.
To edit the information of an existing delivery unit, select the value. To view the list of folders pertaining to a delivery unit, select it.
Import & Export – Export Model to Server: Managing Modeling Content. You use this procedure to export models. Prerequisite: you have created a delivery unit.
Exporting objects using Delivery Units (earlier known as Server Export): Function to export all packages that make up a delivery unit and the relevant objects contained therein, to the client or to the SAP HANA server filesystem. Exporting objects using Developer Mode (earlier known as Client Export): Function to export individual objects to a directory on your client computer. This mode of export should only be used in exceptional cases, since this does not cover all aspects of an object, for example, translatable texts are not copied. Importing objects using Delivery Unit (earlier known as Server Import): Function to import objects (grouped into a delivery unit) from the server or client location available in the form of .tgz file. Importing objects using Developer Mode (earlier known as Client Import): Function to import objects from a client location to your SAP HANA modeling environment.
Import & Export – Export Model to Client or Server Managing Modeling Content Select the delivery unit. Then choose whether to export on the server or the client and click on “Next” button.
A careful approach is needed with the "Filter By Time" option to avoid serious object consistency problems. A general best practice is to schedule periodic full exports, with a few exports using Filter By Time in between. When using Filter By Time, it is recommended to use a "From" date that corresponds to the date of the last full export. To ensure that no object changes are missed, and therefore that a consistent export is performed, it is best to use a date and time slightly before the date and time of the last full export.
Import & Export – Import Model from Server Managing Modeling Content You use this procedure to import models from the server.
Import & Export – Import Models from Server Managing Modeling Content
Select the system. Select the file repository on the server where the models have been exported. Then select the models you want to import. Set the parameters "Overwrite inactive versions" and "Activate objects", and then click the "Finish" button.
To overwrite inactive objects in the target system, select the option Overwrite inactive versions. To activate objects in the target system, select the option Activate Objects. The status of import can be viewed in Job log.
Import & Export – SAP Support Mode Managing Modeling Content In order to ease communication when working together with SAP to gain support for Information Models, the export tool provides a method to export Information Objects to the server in a mode named “SAP Support Mode”. Only active objects can be exported in this mode. These will be exported to the server and the file(s) can then be sent to SAP support for troubleshooting purposes.
The status of export can be viewed in Job log.
Copying Information Objects Managing Modeling Content
Select the system and click on “Next” button. Then define the folder location and select the package or models you want to export. Then select a target repository and click on “Finish” button.
Copying Content Delivered by SAP: Managing Modeling Content. You use this procedure to copy standard content shipped by SAP or an SAP partner to your local repository (see note 1608552 for implementing RDS content). Prerequisite: to copy the contents of objects, the administrator needs to create a mapping in the _SYS_BI.M_CONTENT_MAPPING table. Procedure: 1. From the Quick Launch tab page, choose Mass Copy. 2. Select the required object(s). 3. Choose Add. 4. Choose Next. 5. Choose Finish. The status of the content copy can be viewed in the job log.
Note: If you want to overwrite a previously copied object, select the Copy checkbox.
Note: To copy the content, you must have the following privileges:
• REPO.READ on the source package
• REPO.MAINTAIN_NATIVE_PACKAGES on the root package
• REPO.EDIT_NATIVE_OBJECTS on the root package
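The prerequisite privileges above could be granted along the following lines. This is only a sketch: the role name CONTENT_ADMIN and the source package name sap.rds are invented for illustration, and ".REPO_PACKAGE_ROOT" is assumed here to denote the repository root package.

```sql
-- Sketch: privileges required for a content copy (role and package names are hypothetical)
GRANT REPO.READ ON "sap.rds" TO CONTENT_ADMIN;                                 -- read the source package
GRANT REPO.MAINTAIN_NATIVE_PACKAGES ON ".REPO_PACKAGE_ROOT" TO CONTENT_ADMIN; -- maintain packages under the root
GRANT REPO.EDIT_NATIVE_OBJECTS ON ".REPO_PACKAGE_ROOT" TO CONTENT_ADMIN;      -- edit objects under the root
```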
Note: See the following SAP note to implement RDS content https://service.sap.com/sap/support/notes/1608552 ?
Translating Metadata Texts
The Repository includes features for translating metadata texts. In order to prepare for translation, some metadata that is used by the translation system must first be maintained in the package. This metadata maintenance is available from the Edit Package Details dialog. Customers and partners can also maintain the optional Hints and Text Status values for the package, which are essentially notes for use by someone later performing the actual language translation. Select the required terminology domain to determine the correct terminology of the text that needs translation.
1. Enter a Text Collection to associate a package with a collection in order to specify the language into which the package objects are to be translated.
2. To provide a suggestion regarding the translation of the package, enter text in Hint.
3. Enter a text status.
4. Choose OK.
5. To specify the
Repository Translation Tool (RTT)
The Repository Translation Tool (RTT) is a Java-based command line tool that exports language files in a standard format for customer or partner usage. RTT implements this process in four steps:
Export: exports the texts in the original language (written by the developer) from the SAP HANA Repository text tables to the file system.
Upload: uploads the texts from the file system to the SAP translation system. After this step, the translators can translate the texts from the original language into the required target languages.
Download: downloads the translated texts from the SAP translation system to the file system.
Import: imports the translated texts from the file system to the SAP HANA Repository text tables.
RTT usage examples
Export the texts from the packages matching "pack*" from the database and upload them into the translation system, using the default configuration file ("rtt.properties"):
rtt -e -p pack*
Download the translated texts for the packages matching "pack*" from the translation system and import them into the database, using the default configuration file ("rtt.properties"):
rtt -i -p pack*
Export the texts from the database into the directory "exports":
rtt --export -p pack* -x exports
Upload the texts in the directory "exports" to the translation system:
rtt --upload -p pack* -x exports
Download the translated texts into the directory "imports":
rtt --download -p pack* -x imports
Import the translated texts from the directory "imports":
rtt --import -p pack* -x imports
Summary Managing Modeling Content
You should now be able to:
Explain how to manage schemas
Explain how to import and export data models
Explain how to translate metadata texts
Agenda SAP HANA Implementation and Modeling
Unit 1: Approaching SAP HANA Modeling
Unit 2: Connecting Tables
Unit 3: Advanced Modeling
Unit 4: Full Text Search
Unit 5: Processing Information Models
Unit 6: Managing Modeling Content
Unit 7: Security and Authorizations
Unit 8: Data Provisioning using SLT
Unit 9: Data Provisioning using SAP Data Services
Unit 10: Data Provisioning using Flat File Upload
Unit 11: Data Provisioning using Direct Extractor Connection
Unit 7: Security and Authorizations User Management & Security Types of Privileges Template Roles Administrative
Objectives User Management & Security
At the end of this lesson you will be able to:
Explain how to handle user management and user provisioning
Explain the authentication methods
Explain the user and role concept in SAP HANA
Explain how to maintain users' roles
Explain how to maintain SAP HANA privileges
Overview User Management & Security
This module covers the following topics:
User and Role Concept, User and Role Creation, Manage User or Role, Grant and Revoke User or Role, Assign Privilege to User or Role.
User Management and Security in SAP HANA User Management & Security
Create Users: assign initial passwords, important user parameters
Manage Users: lock users, reset passwords, check user privileges
Assign Security: control access to objects, row-level security, restrict allowed actions, integration with BI
Why is a security concept in SAP HANA required?
Trivial answers:
• Database administration should be restricted to skilled (and empowered) persons
• Access to ERP tables must be restricted
• Editing of SAP HANA data models should only be possible for "owners" of the model
Not so trivial: user administration plays a big role in SAP HANA
• Several front-end tools offer direct access into SAP HANA
• Object access, as well as access to the content of the data model, must be controlled within SAP HANA
• Named users are needed in SAP HANA for information consumers
Exceptions: no user management for information consumers is required if
• Access to data does not need to be controlled
• All data access occurs via the BI semantic layer and security is implemented in BusinessObjects Enterprise
Relationships Between Entities User Management & Security
The relevant entities relate to each other in the following way:
A principal is either a role or a user.
A known user can log on to the database. A user can be the owner of database objects.
A role is a collection of privileges and can be granted to either a user or another role (nesting).
A privilege is used to impose restrictions on operations carried out on certain objects.
[Figure: privileges are granted (n:n) to principals; a user owns (1:n) objects.]
User management is configured using SAP HANA studio.
Privileges can be assigned to users directly or indirectly using roles. Privileges are required to model access control. Roles can be used to structure the access control scheme and model reusable business roles. It is recommended to manage authorization for users by using roles. Roles can be nested so that role hierarchies can be implemented. This makes them very flexible, allowing very fine- and coarse-grained authorization management for individual users. All the privileges granted directly or indirectly to a user are combined. This means whenever a user tries to access an object, the system performs an authorization check using the user, the user's roles, and directly allocated privileges. It is not possible to explicitly deny privileges. This means that the system does not need to check all the user’s roles. As soon as all requested privileges have been found, the system aborts the check and grants access. Several predefined roles exist in the database. Some of them are templates that need to be customized; others can be used as they are. User management is configured using SAP HANA studio.
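The nesting described above can be sketched in SQL. All role and user names below are hypothetical and serve only to illustrate how a role hierarchy is built:

```sql
-- Sketch: a small role hierarchy (names are hypothetical)
CREATE ROLE MODELING_BASE;            -- coarse-grained base role
CREATE ROLE SALES_MODELER;            -- fine-grained business role
GRANT MODELING_BASE TO SALES_MODELER; -- nest the base role inside the business role
GRANT SALES_MODELER TO KWEBER;        -- the user inherits privileges from both roles
```

When user KWEBER accesses an object, the system combines the privileges granted to MODELING_BASE, SALES_MODELER, and any privileges granted to the user directly.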
User Provisioning - Get Users into the System User Management & Security
Roles allow grouping privileges. Create roles for specific tasks, e.g.:
Create data models (on a given subset of the data)
Activate data models
Consume models
All types of privileges can be granted to a role:
Individual privileges
Roles (→ create a hierarchy of roles)
Roles / privileges can be assigned to users. User and role management are closely related, which is reflected in almost identical editors.
[Figure: example role hierarchy — a user holds the role "edit + activate", which nests the role "edit model" (package privileges create/edit models, SQL privilege SELECT) and the role "activate model" (package privilege activate, SQL privilege to write the runtime object).]
User Provisioning - User and Role Concept User Management & Security
Creating named users in SAP HANA:
Actual database users, created via SAP HANA Studio or using standard SQL statements.
Authentication methods:
User / password: set up and manage passwords using SAP HANA Studio or SQL.
Kerberos authentication: certificate-based; requires a named user in the SAP HANA DB.
Authentication concept User Management & Security
SAP HANA database provides the following options for authentication: Direct logon to the SAP HANA database with user name and password Kerberos (third-party authentication provider) for SSO Environment
For some administrative operations, such as database recovery, the credentials of the SAP operating system user (<sid>adm) are also required.
Integration into Single Sign-On Environments
SAP HANA supports the Kerberos protocol for single sign-on. It has been tested with the Windows Active Directory Domain Kerberos implementation and the MIT Kerberos network authentication protocol. The ODBC and JDBC database clients support Kerberos. To implement this, you need to install the MIT Kerberos client software on the host of the SAP HANA database. Once installation is complete, configure Kerberos as follows (for more information, see the Kerberos product documentation):
1. Create a user that serves as the service user for the SAP HANA database.
2. For this user, create one service principal name (SPN) for each host of the system. The SPN needs to have the following syntax: hdb/<host>@<Kerberos realm name>, where <host> is the fully qualified domain name of the host.
3. Export each of the SPNs created into a separate file.
4. Import the SPNs in each of the files to the respective host.
The users stored in the Microsoft Active Directory or the MIT Kerberos Key Distribution Center can be mapped to database users in the SAP HANA database. For this purpose, specify the user principal name (UPN) as the external ID when creating the database user. To map the Kerberos-enabled provisioning mechanism to the SAP HANA database, do the following:
1. Launch SAP HANA studio.
2. In the navigator, select Catalog → Authorization.
3. From the context menu, select New → User.
4. In the User Name field, enter the user name.
5. In the Authentication section, choose External and enter the external ID specified in the Microsoft Active Directory or the MIT Kerberos Key Distribution Center for the user.
Alternatively, you can create the user with the following SQL command:
CREATE USER <user name> IDENTIFIED EXTERNALLY AS '<external ID>'
For more information about SAP HANA administration, see the SAP HANA Knowledge Center in the SAP Help Portal at http://help.sap.com/hana, under System Administration.
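A filled-in instance of the CREATE USER statement above might look as follows; the user name and the Kerberos UPN are invented for illustration:

```sql
-- Sketch: map a Kerberos UPN to a new database user (names are hypothetical)
CREATE USER KWEBER IDENTIFIED EXTERNALLY AS 'kweber@EXAMPLE.ORG';
```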
Additional Authentication Method, SAML User Management & Security
Security Assertion Markup Language (SAML):
- For users not directly connected to SAP HANA.
- Used for authentication only (not authorization).
Middleware / application server scenario: the application server needs to connect to the SAP HANA database on behalf of a user.
1. A SAML assertion is requested from the client.
2. The SAML assertion is issued by the identity provider after the client was successfully authenticated there.
3. The SAML assertion is then sent to the SAP HANA database. Access is granted based on the established trust to the identity provider.
[Figure: browser, application server (HTTP), identity provider, and SAP HANA; the connection to SAP HANA is made with the SAML assertion.]
SAML, Security Assertion Markup Language, is the XML-based standard for communicating identity information between organizations. The primary function of SAML is to provide Internet Single Sign-On (SSO) for organizations. SAML is used to securely connect Internet applications that exist both inside and outside the organization's firewall. SAML is a standard protocol for authentication. Generally speaking, Internet SSO is a secure connection that communicates identity and trust from one organization to another. For users, Internet SSO eliminates additional logins to external resources. For system administrators, it improves security and reduces costs.
• Requires a trusted third party (identity provider) that can issue SAML assertions for clients (e.g. a browser).
Single sign-on in middleware/application server scenarios (use cases):
• Whenever the application server needs to connect to the SAP HANA database on behalf of a user, it requests a SAML assertion from the client.
• The SAML assertion is issued by the identity provider after the client was successfully authenticated there, and is then sent to the SAP HANA database.
Restrictions:
• The SAP HANA database can only act as a SAML service provider.
• Assertions can be used for authentication only (no support for further properties).
• SAML cannot be used for authorization.
SAML In SAP HANA Studio User Management & Security SAML may be selected as a user authentication method when creating a user in the SAP HANA studio.
The main purpose of SAML for SAP HANA is to support scenarios where clients are not directly connected to the SAP HANA database, but to a middle-tier application server (the XS engine, for example). This middle-tier application server runs an HTTP server. Whenever the application server needs to connect to the database on behalf of the user, it requests a SAML assertion from the client. The assertion is issued by an identity provider after the client was successfully authenticated. The assertion is then forwarded to the SAP HANA database, which grants access based on the previously established trust to the identity provider. The SAP HANA database supports login of users using the Security Assertion Markup Language (SAML). SAML may be selected as a user authentication method when creating users in the SAP HANA studio.
Managing Users and Roles User Management & Security
Create Users
Define and Create Roles
Grant Role to User
Assign Privileges to Roles
Creating Role using SAP HANA Studio User Management & Security
Graphical UI for creating / managing roles in SAP HANA Studio → navigator tree path:
(<system>) → Catalog → Authorizations
Right-click the "Roles" folder and select "New" → "Role" from the context menu.
Using SQL syntax, run the following statement:
CREATE ROLE <role name>;
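A filled-in version of the CREATE ROLE statement above, with a hypothetical role name for a sales modeling project:

```sql
-- Sketch: create a role for modelers of a sales project (name is hypothetical)
CREATE ROLE SALES_MODELER;
```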
Assign Privileges to Roles User Management & Security
On the appropriate privilege tab:
1. Click on the green icon.
2. In the search box, start typing: for system / object privileges, the object name; for direct privilege assignment, the privilege name.
3. Select the desired object or privilege.
4. Click OK.
Assign Privileges to Roles and save User Management & Security
Save using the "save" button or the "deploy" button (green arrow).
Errors during save are typically caused by a missing privilege for editing the user (USER ADMIN), or by a missing grant option: for object/privilege combinations, on the object; for direct privilege assignment, on the privilege.
Create Users or Roles User Management & Security
Graphical UI for creating / managing users in SAP HANA Studio → navigator tree path: (<system>) → Catalog → Authorizations. Right-click the "Users" folder and select "New" → "User" from the context menu.
Choose the authentication method: define the initial password (user/password), or define the external user ID (e.g. Kerberos, to set up SSO). Then save the user.
Other user settings: define the default client; this is used as an implicit filter value when reading from SAP HANA data models.
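In SQL, creating a user with an initial password might look like this; the user name and password are placeholders, not values from this course:

```sql
-- Sketch: create a user with an initial password (values are hypothetical)
CREATE USER KWEBER PASSWORD Initial123;
```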
Grant Roles to User User Management & Security
Using Studio:
Switch to the "Granted Roles" tab in the User Editor.
Open the search dialog.
Start typing the role name.
Add the role.
Allow/disallow granting the role (note: the system privilege "ROLE ADMIN" supersedes this GRANT OPTION).
Using SQL syntax, run the following statement:
GRANT <role name> TO <user name>;
Grant Role to User User Management & Security
Save using the "save" button or the "deploy" button (green arrow).
Errors during save are typically caused by a missing privilege for editing the user, e.g. the system privilege ROLE ADMIN is missing, or (without ROLE ADMIN) the GRANT OPTION for the role is missing.
Revoke Roles from User User Management & Security
Using Studio:
Switch to the "Granted Roles" tab in the User Editor.
Select the role from the list of granted roles.
Click the icon.
Save the user (this also revokes a GRANT OPTION).
Using SQL syntax, run the following statement:
REVOKE <role name> FROM <user name>;
Note on cascaded dropping of privileges: if the user had granted the role to other users, revoking the role (and the grant option) also revokes the role from these grantees.
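A filled-in revoke, continuing the hypothetical names used above:

```sql
-- Sketch: revoke a role from a user (names are hypothetical);
-- if KWEBER had granted the role onward, those grants are revoked in cascade
REVOKE SALES_MODELER FROM KWEBER;
```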
Summary User Management & Security
You should now be able to:
Explain how to handle user management and user provisioning
Explain the authentication methods
Explain the user and role concept in SAP HANA
Explain how to maintain users' roles
Explain how to maintain SAP HANA privileges
Unit 7: Security and Authorizations User Management & Security Types of Privileges Template Roles Administrative
Objectives Types of privileges
At the end of this lesson you will be able to:
Explain the authorization concept
Explain what an SQL privilege is
Explain what a system privilege is
Explain what a package privilege is
Explain what an analytic privilege is
Explain what a template role is
Overview Types of privileges
This module covers the following topics: Authorization Concept, SQL Privilege, SYSTEM Privilege, Package Privilege, Analytic Privilege.
Authorization concept Types of privileges
When accessing the SAP HANA database using a client interface (such as ODBC, JDBC, or MDX), any access to data must be backed by corresponding privileges. Different schemes are implemented:
SQL privilege: restricts the use of SQL statement types (for example, SELECT, UPDATE, and CALL) on database objects.
System privilege: used for administrative tasks; system privileges are assigned to users and roles.
Package privilege: restricts the access to and the use of packages in the repository.
Analytic privilege: used to provide row-level authorization for views.
System Privileges : Used for administrative tasks. System Privileges are assigned to users and roles. SQL Privileges : Used to restrict access to and modification of database objects, such as tables. Depending on the object type (for example, table, view), actions (for example, CREATE ANY, ALTER, DROP) can be authorized per object. SQL Privileges are assigned to users and roles. For SQL Privileges in the SAP HANA database, the SQL standard behavior is applied. Analytic Privileges : Used to restrict the access for read operations to certain data in Analytic, Attribute, and Calculation Views by filtering the attribute values. Only applied at the processing time of the user query. Analytic Privileges need to be defined and activated before they can be granted to users and roles. Package Privileges : Used to restrict the access to and the use of packages in the repository of the SAP HANA database. Packages contain design-time versions of various objects, such as Analytic, Attribute, and Calculation Views, as well as Analytic Privileges, and functions. To be able to work with packages, the respective Package Privileges must be granted.
SQL Privilege Types of privileges In the SAP HANA database, a number of privileges are available to control the authorization of SQL commands. Following the principle of least privilege, users should only be given the smallest set of privileges required for their role. Two groups of SQL Privileges are available: System Privileges These are system-wide privileges that control some general system activities mainly for administrative purposes, such as creating schema, creating and changing users and roles.
Object Privileges These privileges are bound to an object, for example, to a database table, and enable object-specific control activities, such as SELECT, UPDATE, or DELETE to be performed.
More details on object privilege activities:
CREATE ANY: allows the creation of all kinds of objects, in particular tables, views, sequences, synonyms, SQLScript functions, or database procedures in a schema. This privilege can only be granted on a schema.
ALL PRIVILEGES: a collection of all DDL and data manipulation language (DML) privileges that, on the one hand, the grantor currently has and is allowed to grant and, on the other hand, can be granted on this particular object. This collection is dynamically evaluated for the given grantor and object. ALL PRIVILEGES is not applicable to a schema, but only to a table, view, or table type.
DROP and ALTER: DDL privileges that authorize the DROP and ALTER SQL commands. While the DROP privilege is valid for all kinds of objects, the ALTER privilege is not valid for sequences and synonyms, as their definitions cannot be changed after creation.
SELECT, INSERT, UPDATE, and DELETE: DML privileges that authorize the respective SQL commands. While SELECT is valid for all kinds of objects except functions and procedures, INSERT, UPDATE, and DELETE are only valid for schemas, tables, table types, and updatable views.
INDEX: a special DDL privilege that authorizes the creation, alteration, or dropping of indexes for an object using the CREATE INDEX, ALTER INDEX, and DROP INDEX commands. This privilege can only be applied to a schema, table, and table type.
EXECUTE: a special DML privilege that authorizes the execution of an SQLScript function or a database procedure using the CALLS or CALL command, respectively.
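As an illustration of the DDL and DML privileges listed above, a few grants might look as follows; the schema, table, procedure, and grantee names are all invented:

```sql
-- Sketch: object privileges (all names are hypothetical)
GRANT CREATE ANY ON SCHEMA SALES TO SALES_MODELER;     -- DDL: create objects in the schema
GRANT SELECT, UPDATE ON SALES.ORDERS TO SALES_ANALYST; -- DML on a single table
GRANT EXECUTE ON SALES.CALC_REVENUE TO SALES_ANALYST;  -- call a procedure
```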
System Privilege Types of privileges
Six categories of system privileges are available in the SAP HANA database:
Analytics
Auditing
Catalog and Schema Management
Users and Roles
System Management
Data Import and Export
Users and Roles
USER ADMIN: authorizes the creation and changing of users using the CREATE USER, ALTER USER, and DROP USER SQL commands.
ROLE ADMIN: authorizes the creation and deletion of roles using the CREATE ROLE and DROP ROLE SQL commands. It also authorizes the granting and revocation of roles using the GRANT and REVOKE SQL commands.
Catalog and Schema Management
CREATE SCHEMA: authorizes the creation of database schemas using the CREATE SCHEMA SQL command.
DATA ADMIN: authorizes unfiltered read-only access to the full content of all system and monitoring views, as well as the execution of all data definition language (DDL) commands — and only DDL commands — in the SAP HANA database. Normally, the content of those views is filtered based on the privileges of the accessing user.
CATALOG READ: authorizes unfiltered read-only access to the full content of all system and monitoring views. Normally, the content of those views is filtered based on the privileges of the accessing user.
System Management
These privileges authorize the various system activities that can be performed using the ALTER SYSTEM SQL commands. Because of the high level of impact on the system, these privileges are not designed for a normal database user. Caution must be taken when granting these privileges (for example, only grant them to a support user or role). The following provides a short overview of the relevant privileges:
BACKUP ADMIN: authorizes the ALTER SYSTEM BACKUP command to define and initiate a backup process or to perform a recovery process.
SAVEPOINT ADMIN: authorizes the execution of a checkpoint process.
INIFILE ADMIN: authorizes different methods of changing system settings.
LOG ADMIN: authorizes the ALTER SYSTEM LOGGING [ON|OFF] commands to enable or disable the log flush mechanism.
MONITOR ADMIN: authorizes the monitoring of all activities associated with the various ALTER SYSTEM MONITOR commands as well as the ALTER SYSTEM SET MONITOR LEVEL command.
OPTIMIZER ADMIN: authorizes the ALTER SYSTEM CLEAR QUERY PLAN CACHE and ALTER SYSTEM UPDATE STATISTICS commands, which influence the behavior of the query optimizer.
SERVICE ADMIN: authorizes the ALTER SYSTEM [START|KILL|RECONFIGURE] commands, used for administering the system services of the database.
SESSION ADMIN: authorizes the ALTER SYSTEM ALTER SESSION commands to stop or disconnect a user session.
TENANT ADMIN: authorizes the tenant operations performed by the ALTER SYSTEM [RESUME|SUSPEND] TENANT commands.
TRACE ADMIN: authorizes the ALTER SYSTEM [CLEAR|REMOVE] TRACES commands for operations on database trace files.
VERSION ADMIN: authorizes the ALTER SYSTEM RECLAIM VERSION SPACE command of the multi-version concurrency control (MVCC) mechanism.
Data Import and Export
The following system privileges are available for the authorization of data import and export in the database:
IMPORT: authorizes the import activity in the database using the IMPORT or LOAD TABLE SQL commands. Note that, besides this privilege, the user needs the INSERT privilege on the target tables to be imported.
EXPORT: authorizes the export activity in the database via the EXPORT or LOAD TABLE SQL commands. Note that, besides this privilege, the user needs the SELECT privilege on the source tables to be exported.
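Granting system privileges follows the same GRANT pattern as other privileges; the grantee names below are invented:

```sql
-- Sketch: grant administrative system privileges (names are hypothetical)
GRANT USER ADMIN TO SEC_ADMIN;   -- create/alter/drop users
GRANT BACKUP ADMIN TO OPS_ADMIN; -- define and initiate backups
GRANT IMPORT TO DATA_LOADER;     -- import data (INSERT on target tables is still needed)
```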
Package Privilege – Assign to users/roles Types of privileges
In the SAP HANA studio, you can manage the package privileges on the Package Privileges tab
The SAP HANA database repository is structured hierarchically with packages assigned to other packages as subpackages. If you grant privileges to a user for a package, the user is automatically also authorized for all corresponding subpackages.
Package Privilege – Create package & subpackage Types of privileges
The SAP HANA database repository is structured hierarchically, with packages assigned to other packages as subpackages. If you grant privileges to a user for a package, the user is automatically also authorized for all corresponding subpackages.
Create a package (right-click under "Content").
Create a subpackage (right-click under a package).
Package Privilege – Native & Imported Types of privileges
In the SAP HANA database repository a distinction is made between native and imported packages. Native packages are packages that were created in the current system and should therefore be edited in the current system. Imported packages from another system should not be edited, except by newly imported updates; an imported package should only be manually edited in exceptional cases. Hence, different privileges are required to manage native and imported packages.
[Figure: packages exported from SAP HANA 1 and imported into SAP HANA 2 — in the source system the package can be edited; in the target system the imported package cannot be edited, only changes to the package can be imported.]
Developers should be granted the following privileges for native packages:
REPO.READ: authorizes read access to packages and design-time objects, including both native and imported objects.
REPO.EDIT_NATIVE_OBJECTS: authorizes all kinds of inactive changes to design-time objects in native packages.
REPO.ACTIVATE_NATIVE_OBJECTS: authorizes the user to activate or reactivate design-time objects in native packages.
REPO.MAINTAIN_NATIVE_PACKAGES: authorizes the user to update or delete native packages, or create subpackages of native packages.
Developers should only be granted the following privileges for imported packages in exceptional cases:
REPO.EDIT_IMPORTED_OBJECTS: authorizes all kinds of inactive changes to design-time objects in imported packages.
REPO.ACTIVATE_IMPORTED_OBJECTS: authorizes the user to activate or reactivate design-time objects in imported packages.
REPO.MAINTAIN_IMPORTED_PACKAGES: authorizes the user to update or delete imported packages, or create subpackages of imported packages.
In the SAP HANA studio, you can manage the repository system privileges together with the other system privileges on the System Privileges tab:
REPO.EXPORT: authorizes the user to export, for example, delivery units.
REPO.IMPORT: authorizes the user to import transport archives.
REPO.MAINTAIN_DELIVERY_UNITS: authorizes the user to maintain delivery units (DU; the DU vendor must equal the system vendor).
REPO.WORK_IN_FOREIGN_WORKSPACE: authorizes the user to work in a foreign inactive workspace.
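Granting these repository privileges on a package could look like the following sketch; the package and role names are invented, and the ON "<package>" form is assumed to follow the repository grant syntax:

```sql
-- Sketch: repository privileges for developers on a native package
-- (package and role names are hypothetical)
GRANT REPO.READ ON "sales.models" TO SALES_MODELER;
GRANT REPO.EDIT_NATIVE_OBJECTS ON "sales.models" TO SALES_MODELER;
GRANT REPO.ACTIVATE_NATIVE_OBJECTS ON "sales.models" TO SALES_MODELER;
```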
Analytic Privilege - The concept Types of privileges
Analytic privileges are used to control access to SAP HANA data models. Without an analytic privilege, no data can be retrieved from:
Attribute Views
Analytic Views
Calculation Views
Implement row-level security with analytic privileges: restrict access to a given data container to selected attribute values:
Field from an Attribute View
Field from an Attribute View used in an Analytic View
Private dimension of an Analytic View
Attribute field in a Calculation View
Combinations of the above
Single value, range, IN-list
Analytic Privileges are used in the SAP HANA database to provide fine-grained control of what data particular users can see for analytic use. They provide the ability for row-level authorization, based on the values in one or more columns. All Attribute Views, Analytic Views, and Calculation Views that have been designed in the modeler and activated from the modeler of the SAP HANA studio are automatically supported by the Analytic Privilege mechanism. If you are already familiar with the authorization model of SAP NetWeaver Business Warehouse (SAP NetWeaver BW), you will see many similarities between the two models. The overall idea behind Analytic Privileges is the reuse of Analytic Views by different users. However, the different users may not be allowed to see the same data. For example, different regional sales managers, who are only allowed to see sales data for their regions, could reuse the same Analytic View. They would get the Analytic Privilege to see only data for their region, and their queries on the same view would return the corresponding data. This is a major difference to the SAP NetWeaver BW model. While the concept itself is very similar, SAP NetWeaver BW would forward an error message if you executed a query that would return values you are not authorized to see. With the SAP HANA database, the query would be executed and, corresponding to your authorization, only the values you are entitled to see would be returned.
©SAP AG
HA300
7-36
An Analytic Privilege consists of several restrictions. Three of these restrictions are always present and have the following special meanings: • One restriction (cube restriction) determines for which column views (Attribute, Analytic, or Calculation Views) the privilege is used. This may involve a single view, a list of views or, by means of a wildcard, all applicable views. • One restriction (activity restriction) determines the affected activity, for example, READ. For column views, READ is the relevant activity. • One restriction (validity restriction) determines at what times the privilege is valid. In addition to these three restrictions, any number of dimension restrictions can be used. These are applied to the actual attributes of a view. Each dimension restriction is relevant for one dimension attribute, which can contain multiple value filters. Each value filter is a tuple of an operator and its operands, which is used to represent the logical filter condition. For example, a value filter (EQUAL 2006) can be defined for a dimension attribute YEAR in a dimension restriction to filter accessible data using the condition YEAR=2006 for potential users. Only dimension attributes, and no measures or key figures, can be employed in dimension restrictions.
Analytic Privilege - Start creation wizard Types of privileges
Analytic Privileges are repository objects:
 - Create and manage them via the SAP HANA studio
 - Create them in any package; it does not need to be the same package as the views
Call the creation wizard: right-click the folder “Analytic Privileges” in a package, enter a name and description, and click Next.
In general, the user has access to an individual, independent view (Attribute, Analytic, or Calculation View) if the following prerequisites are met: • The user was granted the SELECT privilege on the view or the containing schema. • The user was granted an Analytic Privilege that is applicable to the view. An Analytic Privilege is applicable to a view if it contains the view in the Cube restriction and contains at least one filter on one attribute of this view. No SELECT privilege on the underlying base tables or views of this view is required.
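Sketched in SQL, the two prerequisites could be granted as follows. The user name, schema, and privilege name are hypothetical; granting SELECT on the whole _SYS_BIC schema is one common way to cover all activated views.

```sql
-- Prerequisite 1: SELECT privilege on the activated column view
-- (here granted on the schema that holds all activated views)
GRANT SELECT ON SCHEMA "_SYS_BIC" TO USER01;

-- Prerequisite 2: an analytic privilege that is applicable to the view
GRANT STRUCTURED PRIVILEGE "AP_SALES_EMEA" TO USER01;
```

Note that, as the text states, no SELECT privilege on the underlying base tables is needed; the two grants above are sufficient for the activated view.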
Analytic Privilege – Select Information Models Types of privileges
Select the applicable Information Models. Views have two functions in the privilege:
 - views you want to grant access to
 - views from which you want to select fields for restrictions
You can add further views to the privilege later
Analytic Privilege-Capable Views The Analytic Privilege mechanism is automatically enforced for all three kinds of views that can be defined using the information modeler, namely Attribute, Analytic, and Calculation Views: Attribute Views • These views are built on joins of existing column tables and views. Attribute Views cannot be nested in other Attribute Views. Analytic Views • These views are multidimensional cubes with a fact table joined with multiple dimension tables. The information modeler allows Analytic Views to be associated with Attribute Views to reuse the specified join paths. However, it is not possible to use existing Attribute or Analytic Views as base views (join candidates) and use these as the basis for defining new Analytic Views. Calculation Views • These views are defined using SQLScript. A Calculation View is a column view defined on the output of an SQLScript function. In this function, any existing views, including Attribute, Analytic, and Calculation Views, can be used, for example, in a SELECT statement. This introduces interdependencies between the views.
Analytic Privilege - Editor Overview Types of privileges
Restrictions apply to all views in list of “Reference Models” Choose “Add” in “Reference Models” section
Pick any appropriate view From any package
Do not use the “Applicable to All Content Models” option. Reasons:
 - it can have surprising side-effects
 - you give away control over model access
Analytic Privilege - Select field for attribute restriction Types of privileges
You may implement value restrictions for all selected fields.
If no value restriction is implemented → no restriction (wildcard).
Otherwise, the user will only be allowed to see the listed values.
The UI offers single-value or range conditions. Several conditions can be added per field (combined via “OR”).
When relevant Analytic Privileges are found for the current user and the query directed to the particular view, the evaluation process ensures that, according to the value filters specified in the Dimension restrictions, the appropriate view data is presented to the user. In particular: • Within one Dimension restriction, all value filters on the corresponding dimension attribute are combined with logical OR. • Within one Analytic Privilege, all Dimension restrictions are combined with logical AND. • Multiple Analytic Privileges are combined with logical OR. For example, if there is only one Analytic Privilege found with two Dimension restrictions, YEAR=2008 and COUNTRY=US, the user is only allowed to see data fulfilling the condition YEAR=2008 AND COUNTRY=US. However, if these two conditions were put in two different Analytic Privileges found for this user and this view, the user is allowed to see more data, namely the OR combination of the filters of the individual Analytic Privileges: YEAR=2008 OR COUNTRY=US.
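The combination rules above correspond to the following effective filters. The view and attribute names are the ones from the example; the SELECT statements are purely illustrative of what the privilege evaluation effectively applies, assuming a hypothetical activated view path.

```sql
-- One analytic privilege containing both dimension restrictions:
-- the restrictions are combined with AND
SELECT * FROM "_SYS_BIC"."sales.models/AN_SALES"
 WHERE "YEAR" = 2008 AND "COUNTRY" = 'US';

-- The same two restrictions split across two analytic privileges
-- granted to the same user: the privileges are combined with OR,
-- so the user sees strictly more data
SELECT * FROM "_SYS_BIC"."sales.models/AN_SALES"
 WHERE "YEAR" = 2008 OR "COUNTRY" = 'US';
```

This is why splitting restrictions across several privileges widens, rather than narrows, what a user can see.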
Analytic Privilege Types of privileges
Like views, an Analytic Privilege must be activated to create the run-time object.
Only the run-time object is grantable to users / roles.
Name of the run-time object: “<package>/<privilege name>”
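A sketch of granting the activated run-time object with SQL: the package path, privilege name, user, and role names below are hypothetical; the _SYS_REPO procedure is the standard way to grant activated repository privileges.

```sql
-- Grant an activated analytic privilege to a user via the repository procedure
CALL "_SYS_REPO"."GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE"
  ('"sales.models/AP_SALES_EMEA"', 'USER01');

-- Alternatively, grant it to a role instead of an individual user
CALL "_SYS_REPO"."GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE"
  ('"sales.models/AP_SALES_EMEA"', 'REGIONAL_SALES_ROLE');
```

Granting via a role is usually preferable, since the role can be maintained centrally and assigned to many users.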
Analytic Privilege Check Types of privileges
The Analytic Privilege check evaluates Analytic Privileges:
Granted to the User
With Cube restriction covering the view
With currently valid Validity restrictions
With Activity restrictions (READ)
With Dimension restrictions covering attributes of the view
If no Analytic Privilege for the user can be found, user queries are rejected with a “…not authorized” error message.
Trace file in SAP HANA Studio Administration – Diagnosis Files
To view the trace file: • From the context menu of your system (for example, HDB(STUDENT##)), select Administration and press the Diagnosis Files tab. Find the file indexserver_<host>.*.*.trc (the host and port parts of the name depend on your installation). • Press the Show End of File button. • Find your USER## in the trace log by pressing CTRL+F and entering USER##. The trace log displays the Analytic Privilege check errors.
Summary Types of privileges
You should now be able to explain:
 - the authorization concept
 - SQL privileges
 - system privileges
 - package privileges
 - Analytic Privileges
 - what a template role is
Unit 7: Security and Authorizations User Management & Security Types of Privileges Template Roles Administrative
Objectives Template Roles
At the end of this lesson you will be able to explain:
 - the purpose of the pre-delivered roles
 - which role is required for the Information Composer
Overview Template Roles
This module covers the following topics:
 - pre-delivered roles
 - the role for the Information Composer
Pre-delivered Roles Template Roles
SAP HANA comes with several pre-defined / standard roles:
Roles that should (must) be used unchanged:
 - PUBLIC – minimal privileges for a user to work with the database at all; implicitly granted whenever a user is created
Role templates:
 - CONTENT_ADMIN – the only role in the system with vital privileges, e.g. SQL privileges on schemas _SYS_BIC and _SYS_BI – with GRANT OPTION
 - MODELING – a very richly privileged role that enables creation and activation of Information Models and Analytic Privileges
 - MONITORING – role with full read-only access to all metadata, monitoring, and statistics
Regard these roles as “templates” → a name change is coming soon.
Do not use these roles – build your own roles instead.
MODELING : Contains all privileges required for using the information modeler in the SAP HANA studio. • Contains the database authorization for a modeler to create all kinds of views and Analytic Privileges. • Allows access to all data in activated views without any filter (_SYS_BI_CP_ALL Analytic Privilege). However, this is restricted by missing SQL Privileges on those activated objects. • Note: Use caution when using the _SYS_BI_CP_ALL Analytic Privilege. • Use this predefined role as a template. MONITORING : Contains privileges for full read-only access to all meta data, the current system status in system and monitoring views, and the data of the statistics server. PUBLIC : Contains privileges for filtered read-only access to the system views. Only objects for which the users have access rights are visible. By default, this role is assigned to each user. CONTENT_ADMIN : Contains the same privileges as the MODELING role, but with the extension that users allocated this role are allowed to grant these privileges to other users. In addition, it contains repository privileges for working with imported objects. Use this role as a template for what content administrators might need as privileges.
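Assigning a template role to a user is a plain role grant. The user name USER01 is a hypothetical example; granting roles requires the appropriate admin privilege (ROLE ADMIN).

```sql
-- Assign the MODELING template role to a user
GRANT MODELING TO USER01;

-- Verify the assignment via the system views
SELECT * FROM "SYS"."GRANTED_ROLES" WHERE GRANTEE = 'USER01';
```

In line with the note above, a better practice is to copy the needed privileges into a custom role and grant that role instead of MODELING itself.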
Information Composer Role Template Roles The SAP HANA Information Composer is a Web application that allows you to upload and manipulate data on the SAP HANA database. The SAP HANA Information Composer uses the SAP NetWeaver Core Engine for Partners 1.0 (LJS 1.0), which interacts with the SAP HANA database. The roles required to access the SAP HANA Information Composer client: IC_MODELER role: This role allows users to upload new content into the SAP HANA database and to create physical tables and calculation views. IC_PUBLIC role: This role allows users to see the shared physical tables and calculation views.
As long as the SAP HANA Information Composer is in use, the SAP_IC user must not be deleted. Otherwise, the IC_MODELER and IC_PUBLIC roles will also be deleted.
Summary Template Roles
You should now be able to explain:
 - the purpose of the pre-delivered roles
 - which role is required for the Information Composer
Unit 7: Security and Authorizations User Management & Security Types of Privileges Template Roles Administrative
Objectives Administrative
At the end of this lesson you will be able to explain:
 - how to deactivate a user
 - how to reactivate a user
 - how to reset a locked user
 - how to manage user passwords
Overview Administrative
This module covers the following topics:
 - deactivate / reactivate a user
 - manage connection attempts
 - set an initial password for a user
 - force a user to change the password
Deactivate / Reactivate a user Administrative Deactivation of Users The administrator can deactivate a user account with the following SQL command: ALTER USER <username> DEACTIVATE USER NOW; After the user account is deactivated, the user cannot log on to the SAP HANA database until the administrator resets the user’s password. Reactivation of Users The administrator can reactivate a user account. A user account can be locked for the following reasons:
The user’s password has expired.
The user has made too many invalid logon attempts.
If the user’s password has expired, the user has to change the password to a new value. If the user has made too many invalid logon attempts, the administrator can use an SQL command to unlock the user account.
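Putting the commands above together for a hypothetical user USER01 (a sketch of the standard HANA SQL statements):

```sql
-- Deactivate the user account; the user can no longer log on
ALTER USER USER01 DEACTIVATE USER NOW;

-- Reactivate the account
ALTER USER USER01 ACTIVATE USER NOW;

-- If the account was locked by too many invalid logon attempts,
-- reset the failed-attempt counter to unlock it
ALTER USER USER01 RESET CONNECT ATTEMPTS;
```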
Deactivate / Reactivate a user in SAP HANA Studio Administrative Prerequisite: System Privilege USER ADMIN
Deactivate
Reactivate
Users can be explicitly deactivated in the SAP HANA studio, for example, if an employee temporarily leaves the company or if a security violation is detected. The system privilege USER ADMIN is required to deactivate / reactivate users in the SAP HANA studio. In the SAP HANA studio: Catalog -> Authorization -> Users. From the context menu of the user record, select Open. A new password must be entered, and confirmed, when the user is reactivated.
Manage connection attempts Administrative The number of invalid logon attempts allowed is set to 6 by default, which means that after too many wrong attempts the user is locked. Administrators can reset the number of invalid logon attempts with the following SQL command: ALTER USER <username> RESET CONNECT ATTEMPTS; With the first successful logon after an invalid logon attempt, an entry is made in the INVALID_CONNECT_ATTEMPTS view showing:
The number of invalid logon attempts since the last successful logon
The time of the last successful logon
Administrators and users can delete the information about invalid logon attempts with the following SQL command: ALTER USER <username> DROP CONNECT ATTEMPTS;
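The commands above can be combined with a look at the monitoring view. USER01 is a hypothetical user; the column name USER_NAME is as documented in the system views reference.

```sql
-- Inspect the recorded invalid logon attempts for a user
SELECT * FROM "SYS"."INVALID_CONNECT_ATTEMPTS"
 WHERE USER_NAME = 'USER01';

-- Unlock the user by resetting the failed-attempt counter
ALTER USER USER01 RESET CONNECT ATTEMPTS;

-- Remove the recorded invalid-attempt information
ALTER USER USER01 DROP CONNECT ATTEMPTS;
```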
Manage user password Administrative A password policy setting defines whether users have to change their initial passwords at first logon. Logging on with the initial password is still possible, but only the command ALTER USER <username> PASSWORD <new_password>; can be executed. All other statements give the error message "user is forced to change password". Administrators can force a user to change the password at any time with the following SQL command: ALTER USER <username> FORCE PASSWORD CHANGE;
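As a sketch, with hypothetical user and password values:

```sql
-- Set a new initial password for the user
ALTER USER USER01 PASSWORD "Initial1";

-- Force the user to choose a new password at the next logon
ALTER USER USER01 FORCE PASSWORD CHANGE;
```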
System Tables and Monitoring Views Administrative
System tables and monitoring views let you query information about the system using SQL commands. The results appear as tables in the SYS schema. Some of the tables and views support user management, for example:
Tables:
 - P_USERS – all users
 - P_USER_KERBEROS – Kerberos users
 - P_USER_SAML – SAML users
 - P_PASSWORD – password change time
Views:
 - INVALID_CONNECT_ATTEMPTS – number of invalid connection attempts for a user
 - GRANTED_PRIVILEGES – privileges granted to users and roles
 - GRANTED_ROLES – roles granted to users or other roles
 - SAML_PROVIDERS – SAML providers
For more information: • SAP HANA System Tables and Monitoring Views Reference http://help.sap.com/hana_appliance/
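The views listed above can be queried like any other table. For example, for a hypothetical user USER01:

```sql
-- Which roles does the user have?
SELECT ROLE_NAME FROM "SYS"."GRANTED_ROLES"
 WHERE GRANTEE = 'USER01';

-- Which privileges were granted to the user or its roles?
SELECT PRIVILEGE, OBJECT_NAME FROM "SYS"."GRANTED_PRIVILEGES"
 WHERE GRANTEE = 'USER01';
```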
Summary Administrative
You should now be able to explain:
 - how to deactivate a user
 - how to reactivate a user
 - how to reset a locked user
 - how to manage user passwords
Agenda SAP HANA Implementation and Modeling Unit 1: Approaching SAP HANA Modeling Unit 2: Connecting Tables Unit 3: Advanced Modeling Unit 4: Full Text Search Unit 5: Processing Information Models Unit 6: Managing Modeling Content Unit 7: Security and Authorizations Unit 8: Data Provisioning using SLT Unit 9: Data Provisioning using SAP Data Services Unit 10: Data Provisioning using Flat File Upload Unit 11: Data Provisioning using Direct Extractor Connection
Unit 8: Data Provisioning using SLT Positioning and Key Concepts Overview on Configuration Aspects Data Replication at a Glance Administration and Monitoring at a Glance SLT based transformation concepts and Advanced Replication settings Extension of (target) table structure and Partitioning Transformation of Data Filtering and Selective Data Replication Specific Considerations Appendix
Objectives Positioning and Key Concepts
At the end of this Lesson you will be able to: Explain the positioning of SAP LT Replication Server Describe the key concepts and features List the prerequisites and how to set up the SAP LT Replication Server Name the benefits of the trigger-based replication approach
Overview Positioning and Key Concepts
This module covers the following topics: Product name, positioning and key benefits Commercial aspects and software shipment Overview on key concepts, features and user interfaces Overview on key installation and configuration steps
Product Name
SAP Landscape Transformation Replication Server for SAP HANA
SAP LT Replication Server for SAP HANA Leverages Proven SLO Technologies
SLO* technologies have been used for more than 10 years in hundreds of projects per year
Key offerings foster SAP‘s Application Lifecycle Management concept
SAP LT Replication Server - as a new use case - leverages several SLO technologies
Application Lifecycle Management
*) System Landscape Optimization
Positioning and Key Benefits of SAP LT Replication Server for SAP HANA
Key Benefits of the Trigger-Based Approach: Allows real-time (and scheduled) data replication from SAP and non-SAP sources, replicating only relevant data into HANA. Ability to migrate data into HANA format while replicating data in real time. “Unlimited” release coverage (from SAP R/3 4.6C onwards) when sourcing data from SAP ERP (and other ABAP-based SAP applications). Leverages proven SLO technology (Near Zero Downtime, TDMS, SAP LT). Simple and fast set-up of the LT replication server (initial installation and configuration in less than 1 day), fully integrated with the HANA modeler UI.
SAP LT Replication Server is the ideal solution for all HANA customers who need real-time or scheduled data replication sourcing from SAP and non-SAP sources.
Commercial Aspects and Software Shipment
Commercial Aspects SAP LT Replication Server for SAP HANA will be part of SAP HANA software license model
Software Shipment SAP LT Replication Server for SAP HANA will be part of SAP HANA software shipment and fully integrated into the SAP HANA modeler UI
Overview - Trigger-Based Approach Positioning and Key Concepts SAP LT Replication Server does not have to be a separate SAP system and can run on any SAP system with SAP NetWeaver 7.02 ABAP stack (Kernel 7.20EXT)
HANA Studio
Application Table
Trigger Based Delta Recording
Replication Configuration
RFC Connection
Replication Engine
DB Connection Application Table
SAP source system
SAP LT Replication Server
Connection(s) between source system and SAP HANA system are defined as “Configuration” on the SAP LT Replication Server
SAP HANA system Data load and replication are triggered via SAP HANA Studio
Architecture and Key Building Blocks Positioning and Key Concepts
Read module
RFC Connection
Structure mapping & Transformation
Write module Application table
Logging table
DB Connection Application table
DB trigger
SAP source system
Efficient initialization of data replication based on the DB trigger and delta logging concept (as with the Near Zero Downtime approach)
SAP LT Replication Server
Flexible and reliable replication process, incl. data migration (as used for TDMS and SAP LT)
SAP HANA system
Fast data replication via DB connect LT replication functionality is fully integrated with HANA Modeler UI
Architecture for Non-SAP Source Replication Positioning and Key Concepts
DB Connection
Read module Structure mapping & Transformation
Application table
Logging table
Write module DB trigger
Non SAP source system
DB Connection Application table
SAP LT Replication Server
SAP HANA system
In a first step, SAP LT Replication Server transfers all metadata table definitions from the non-SAP source system to the HANA system. From the HANA studio perspective, non-SAP source replication works as for SAP sources. When a table replication is started, SAP LT Replication Server creates logging tables within the source system. As a difference, the read modules are created in the SAP LT Replication Server. The connection to the non-SAP source system is established as a database connection.
Multi System Support 1/2 Positioning and Key Concepts
System A
Schema 1
System B
Schema 2
Source systems
SAP HANA system
System A
Schema 1
System B
Schema 1
Source systems
SAP HANA system
Source systems are connected to separate HANA schema on the same HANA System
Source systems are connected to separate HANA systems. Schema name can be equal or different
Multi-System Support 2/2 Positioning and Key Concepts N:1 Replication System A Schema 1
System B Source systems
SAP HANA system
Source systems are connected to same HANA system and also the same schema
1:N Replication
Schema 1
System A SAP Source system
Schema 2
SAP HANA systems
SAP source system is connected to separate HANA systems or to the same system with different schema name.
If one source system is connected to several target schemas (currently up to 1:4 supported), the relevant target schema can be selected in the data provisioning UI.
Set-up of LT Replication Server Positioning and Key Concepts Installation aspects Source system(s): use respective DMIS add-on LT replication server: use add-on DMIS_2010_1_700 with SP5-7; other system requirements (NW 7.02; SAP Kernel 7.20EXT) apply Apply SPS04 for SAP HANA 1.0
Configuration steps for SAP LT Replication Server Define a schema for each source system Define connection to source system Define DB connection into SAP HANA Define replication frequency (real-time; frequency for scheduled replication) Define maximum number of background jobs for data replication
Set-up of data replication in SAP HANA Select relevant source system Start (initial load only and / or continuous replication)
Key Configuration Steps Positioning and Key Concepts
Call SAP LT Replication Server Configuration (Transaction: LTR)
Define configuration data
Starting the Data Replication Positioning and Key Concepts Choose data provisioning to launch SAP HANA Modeler UI
1. Select source system and target schema as defined in SAP LT Replication Server; related system information and schema will be displayed 2. Use button Load and / or Replicate to set up the data replication 3. Use button Stop Replication to finish replication 4. Use button Suspend to pause replication 5. Use button Resume to continue replication
DB Supportability Matrix (HANA 1.0 SPS04): Loading Data via SAP LT Replication Server for SAP HANA Database
Technical availability (SAP sources / non-SAP sources*):
 - MSFT SQL Server Enterprise Edition: OK / OK
 - Oracle Enterprise Edition: OK / OK
 - IBM DB2 LUW/UDB (DB6): OK / OK
 - IBM DB/2 zSeries: OK / OK
 - IBM DB2 iSeries (former AS/400): OK / planned
 - IBM Informix: OK / planned
 - SAP MaxDB: OK / OK
 - Sybase ASE: OK (with DB version 15.7.0.11) / OK (with DB version 15.7.0.11)
For non-SAP source systems, the customer database license needs to cover a permanent database connection with 3rd party products like LT replication server. (*) Since a DB connection from LT replication server to a non-SAP system is required, the OS/DB restrictions of NetWeaver 7.02 apply (see at http://service.sap.com/pam)
SAP LT Replication Server - Technical Enabler for Multiple Data Provisioning Use Cases Table-based replication integrated into HANA Studio
Real-time for SAP and NON-SAP sources
Replication engine for new SAP HANA Application Accelerators
Replication engine for existing RDS Solutions and ERP Accelerators
+ serving HANA in the Cloud
Summary Positioning and Key Concepts
You should now be able to: Explain the positioning of SAP LT Replication Server Describe the key concepts and features List the prerequisites and how to set up the SAP LT Replication Server Name the benefits of the trigger-based replication approach
Unit 8: Data Provisioning using SLT Positioning and Key Concepts Overview on Configuration Aspects Data Replication at a Glance Administration and Monitoring at a Glance SLT based transformation concepts and Advanced Replication settings Extension of (target) table structure and Partitioning Transformation of Data Filtering and Selective Data Replication Specific Considerations Appendix
Objectives Overview on configuration aspects
At the end of this Lesson you will be able to: Describe the set-up of a configuration on the SAP LT Replication Server Explain the impact of the configuration set-up on the data replication
Concept: Define Configuration / Schema Overview on configuration aspects
Schema 1 Replication Configuration
Trigger-based delta recording
SAP source system
RFC Connection
Replication engine
DB Connection
SAP LT Replication Server
SAP HANA system
A new configuration can be created in the LT Configuration and Monitoring Dashboard. In that step, the connection between the source and the HANA system is established and the target schema is created (if it does not exist already). Replication control tables are also created and table lists are replicated from the source system. In addition, the required roles and GRANT / REVOKE procedures are generated.
Creating a New Configuration for SAP Sources Overview on configuration aspects General Data Define the replication target Schema Name in the HANA system (if schema does not exist, it will be created automatically) Define the Number of Replay Jobs used for data load and replication
Connection to the source system SAP Source System: Use previously defined RFC destination to source system Non SAP Source System: Select the source database system and set the required fields (see also next slides)
Connection to HANA system Define the User Name and Password which can be used to connect to the HANA system (see also next slide) Define the Host Name and Instance Number of the target HANA system
Allow Multiple Usage to allow usage of source system in different configurations (1:N replication)
Read from Single Client Flag for client-specific load and replication. Data is read only from the client that is specified in the RFC connection
Table space assignment Optional: define a table space for the logging tables. If no table space is defined, the logging tables are created in the same table space as the original tables. A dedicated table space is recommended for easier monitoring of the sizes of the logging tables
Replication Mode Replication can be executed in real-time mode or in scheduled mode.
Creating a New Configuration for Non-SAP Sources Overview on configuration aspects
To replicate from non-SAP source system select Legacy and the affected database system.
Depending on the database system, additional required information needs to be specified (e.g. for DB2, specify the DB connection and the table space name). Start with transaction LTR.
Results of Creating a New Configuration Overview on configuration aspects When the popup to create a new configuration is closed by pressing the OK button, the following actions are performed automatically: Configuration settings are saved on the LT Replication Server. A new user and schema are created on the HANA system with the defined target schema name (not performed if an existing schema is reused). Replication control tables (RS_* tables) are created in the target schema. User roles for the target schema are created: <schema>_DATA_PROV -> role to manage data provisioning; <schema>_POWER_USER -> contains all SQL privileges of the target schema; <schema>_USER_ADMIN -> role to execute authority procedures (see below). Procedures to grant (RS_GRANT_ACCESS) or revoke (RS_REVOKE_ACCESS) access are created in the target schema. Replication of tables DD02L (stores the table list), DD02T (stores the table short descriptions) and DD08L (R/3 DD: relationship definitions) is started automatically. Once those tables are replicated, the HANA studio knows which tables are available in the source system. Schema SYS_REPL and table RS_REPLICATION_COMPONENTS are created (if they don’t exist already based on a previous configuration), and the replication is registered in table RS_REPLICATION_COMPONENTS.
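Assigning the generated roles to end users is a standard role grant. The schema name ERP100 and the user names below are hypothetical examples following the naming pattern described above.

```sql
-- Allow USER01 to manage data provisioning for the replication schema ERP100
GRANT "ERP100_DATA_PROV" TO USER01;

-- Give a reporting user full SQL access to the replicated tables
GRANT "ERP100_POWER_USER" TO REPORTING_USER;
```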
Most relevant replication control tables are: RS_ORDER Table which temporarily stores the data provisioning requests (e.g. replication) to be processed by the LT Replication. RS_STATUS Stores the data provisioning status (current and historical) for each table processed by the LT Replication.
Summary Overview on configuration aspects
You should now be able to: Describe the set-up of a configuration on the SAP LT Replication Server Explain the impact of the configuration set-up on the data replication
Unit 8: Data Provisioning using SLT Positioning and Key Concepts Overview on Configuration Aspects Data Replication at a Glance Administration and Monitoring at a Glance SLT based transformation concepts and Advanced Replication settings Extension of (target) table structure and Partitioning Transformation of Data Filtering and Selective Data Replication Specific Considerations Appendix
Objectives Data Replication at a Glance
At the end of this Lesson you will be able to:
Explain the different options for data replication and the related implications
Overview Data provisioning at a glance
This module covers the following topics:
Load and replicate data
Suspend and resume data replication of certain tables
Stop and Restart the master job of a configuration in SLT
Launch Data Provisioning UI Data provisioning at a glance All further data provisioning steps are executed from the HANA Studio. Therefore, switch to the HANA Studio, choose the perspective Information Modeler and start the quick launch. Select your system and start the Data Provisioning via the link in section DATA.
If you cannot enter this screen or do not see any data, verify that a data provisioning role has been assigned to your user.
Start Load / Replication Data provisioning at a glance Press the respective button to:
- Load the current data of a table from the source system
- Replicate a table, which includes the load of the current data and the replication of all subsequent changes in the source system (replication = load + delta replication)
Choose a table from the list or enter a search string to search for a specific table. Use the Add button to select the table. Once all relevant tables are selected, the load / replication is triggered when the popup is closed via the Finish button.
Start Load/Replication - Executed Activities Data provisioning at a glance
(Architecture diagram: the HANA studio triggers the process; the SAP LT Replication Server reads the application table in the SAP source system via an RFC connection (read module), performs structure mapping & transformation, and writes via a DB connection into the application table in the SAP HANA system (write module); a logging table and a DB trigger exist on the source system.)
Data provisioning can be managed via the HANA studio modeler view. If a table is started for load or replication, the LT runtime objects (reader, mapping and transformation, and writer modules) are generated in the respective systems. If tables are selected for replication, the delta recording (logging table and DB trigger) is also activated on the source system.
Stop / Suspend Replication Data provisioning at a glance Press the respective button to:
- Stop the replication and also stop the delta recording for that table (deletes the DB trigger!)
- Suspend the data replication but keep the delta recording active
- Resume a previously suspended data replication
The table selection popup looks similar to the one for load and replication. In case of Stop Replication or Suspend, only tables that are already in replication mode can be selected; in case of Resume, only tables in suspend mode can be selected.
Be aware that if a replication is stopped and started again, the corresponding table is dropped and the initial load must be repeated, because delta recording was deactivated for a certain time and changes might not have been recorded. So if you only want to pause the delta replication, use Suspend and Resume: delta recording is not deactivated, and the replication can be continued without the need for a new initial data load.
Stop/Suspend Replication - Executed Activities Data provisioning at a glance
(Diagrams: the executed activities per option.)
- Stop replication: data replication for the application table is stopped, and the DB trigger and logging table are deleted in the SAP source system.
- Suspend replication: data replication is stopped, but the DB trigger and logging table remain active in the SAP source system.
- Resume replication: data replication for the application table is continued, using the still active DB trigger and logging table.
Configuration and Monitoring Dashboard Master Job settings - Stop and Restart
1. Choose the configuration and change to edit mode.
2. The master job can now easily be stopped and restarted for the relevant configuration on the SLT Replication Server. The whole replication for the configuration is stopped.
The triggers in the source system are still active and working, so the logging tables are filled continuously, independent of stopping the master job on the SLT Replication Server! The stop option is therefore only useful to pause the replication of a configuration temporarily.
Summary Data provisioning at a glance
You should now be able to:
Explain the different options for data replication and the related implications
Unit 8: Data Provisioning using SLT Positioning and Key Concepts Overview on Configuration Aspects Data Replication at a Glance Administration and Monitoring at a Glance SLT based transformation concepts and Advanced Replication settings Extension of (target) table structure and Partitioning Transformation of Data Filtering and Selective Data Replication Specific Considerations Appendix
Objectives Administration and Monitoring at a Glance
At the end of this Lesson you will be able to:
Explain the different options for monitoring the replication process
Status Monitoring in HANA Studio Data provisioning at a glance The load / replication status can be monitored in the Data Load Management screen within the data provisioning tool. In this section, the current status of all relevant tables of the selected source system / target schema is displayed.
Configuration and Monitoring Dashboard Enhanced Statistics
Additional statistical data can be displayed in the “Statistics” tab page. You can view the number of records that have been replicated, and the relevant operation (insert, update, delete).
SAP Replication Manager - Mobile Application 1/2 Benefits and Requirements
- Monitor: monitor the data replication process and system parameters.
- Execution: trigger the execution of important data replication functions.
- Higher flexibility: the application can be run anytime and anywhere from a mobile device connected to the internet.
- Analytical view: provides an analytical perspective of real-time data replication in terms of latency.
Infrastructure requirements:
- SUP 2.1
- Gateway (NW 7.02) (minimal gateway)
- Backend: IW_BEP 200 (SP 2.0), GW add-on, DMIS (DMIS_2010)
- The SLT system should be a NW 7.00 EHP2 with SAP Kernel 7.20 EXT
SAP Replication Manager - Mobile Application 2/2 Screenshots
(Screenshots of the mobile application: Analytical View, Execute, Monitor, Higher Flexibility.)
Summary Administration and Monitoring at a Glance
You should now be able to:
Explain the different options for monitoring the replication process
Unit 8: Data Provisioning using SLT Positioning and Key Concepts Overview on Configuration Aspects Data Replication at a Glance Administration and Monitoring at a Glance SLT based transformation concepts and Advanced Replication settings
Extension of (target) table structure and Partitioning Transformation of Data Filtering and Selective Data Replication Specific Considerations Appendix
Objectives SLT based transformation concepts
At the end of this Lesson you will be able to:
Explain the basics of the SLT based transformation concepts
Leverage the data processing and transformation process
Explain how to change advanced replication settings for certain configurations of the SLT Replication Server
Overview SLT based transformation concepts
This module covers the following topics:
Details on the basic concept of SLT based transformation
Trigger conditions in detail
Concept SLT based transformation concepts The main purpose of SAP HANA is reporting on particular ERP content. Generally, this is done with a one-to-one replication of the ERP tables into the new database. Depending on customer-specific requirements, it is sometimes necessary to filter, change, or extend the original data during the load process into HANA. Key use cases:
- Conversion of data → change data within data replication/loading
- Filtering → reduce the number of records to be replicated
- Structural changes of the target table in HANA → add, remove, and/or change the type of fields in the target table
- Partitioning of the target table in HANA → partitioning within data replication/loading
Concept SLT based transformation concepts Currently, transformation and filtering capabilities are provided on a project basis only (SAP-led projects only).
Concept - Data Processing SLT based transformation concepts
The replication process runs as a cycle:
1. Start of replication cycle
2. Get next portion from source
3. Loop through portion
4. Map fields of a record to receiver structure
5. Add record to sender portion
Data portions flow from the source system (RFC connection) through the SAP LT Replication Server to the SAP HANA system (DB connection).
The data for transformation is split into portions (default: load 10,000 lines, replication 5,000 lines). The portions are processed successively, mapped, and transferred to the sender. The functionality can be extended (for example, for data conversion) at several points of the process using ABAP includes → transformation rules. The implementation of rules, structure changes of tables, partitioning of tables, and so on, is specified within the Advanced Replication Settings (transaction IUUC_REPL_CONTENT).
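The processing cycle above can be sketched in ABAP-like form. This is an illustrative sketch only, not the actual SLT runtime code: the names lt_source, ls_target, get_next_portion, and send_portion are hypothetical.

```abap
* Illustrative sketch of the SLT processing cycle (hypothetical names).
DO.
  " Step 2: get next portion from the source
  " (default portion sizes: 10,000 lines for load, 5,000 for replication)
  lt_source = get_next_portion( ).
  IF lt_source IS INITIAL.
    EXIT.
  ENDIF.
  " Step 3: loop through the portion
  LOOP AT lt_source INTO ls_source.
    " Step 4: map the fields of a record to the receiver structure
    MOVE-CORRESPONDING ls_source TO ls_target.
    " Step 5: add the record to the sender portion
    APPEND ls_target TO lt_target.
  ENDLOOP.
  " Transfer the mapped portion towards SAP HANA
  send_portion( lt_target ).
  CLEAR lt_target.
ENDDO.
```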
Specify Advanced Replication Settings Combined Transaction for all Table Settings (1) General features of the Advanced Replication Settings:
The UI is designed to maintain the customizing tables of the SLT Replication Server. This customizing is used to set up:
- structure changes of tables
- partitioning of tables in SAP HANA
- changing and/or filtering of data
- performance optimization using parallel read within data replication
Process: Select configuration → Select table(s) → Define table settings → Save
UI to Specify Advanced Replication Settings Combined Transaction for all Table Settings (2) Transaction IUUC_REPL_CONTENT:
1. Select the configuration
2. Select the table
3. Define the table settings
Settings / transformation content can be transported to other SLT systems via upload / download.
Specify Advanced Replication Settings Define Table Settings – Process Flow
Add table → Enter table name → Save → Specify the advanced replication setting for the table → Choose the appropriate tab folder
Specify Advanced Replication Settings Define Table Settings
1. Enter the table name.
2. Choose the appropriate tab page:
- IUUC_REPL_TABSTG → define table deviation and partitioning
- IUUC_ASS_RUL_MAP → transformation rules to change and/or filter data
- IUUC_SPC_PROCOPT → trigger-specific adaptation
- IUUC_PERF_OPTION → performance optimization using parallel read
Settings can be imported from or exported to a file (only SLT table settings from the new UI).
Advanced Replication Settings Export/Import Settings – Process Flow
Save settings from the dev system → Import settings into the target SLT server → Choose the import option → Set parameters if the option 'Load Selected Settings' was chosen
UI to Specify Advanced Replication Content Export/Import Settings Screenshots
1. Save the settings from the dev system.
2. Import the settings into the target SLT server; choose the import option.
3. It is possible to compare and load settings selectively for each table.
For the export / import options, only the settings of the customizing tables are uploaded and downloaded → no objects (for example, ABAP includes of rules) are exported or imported. If you import settings, the mass transfer ID of the relevant SLT configuration must be entered.
Summary SLT based transformation concepts
You should now be able to:
Explain the basics of the SLT based transformation concepts
Leverage the data processing and transformation process
Explain how to change advanced replication settings for certain configurations of the SLT Replication Server
Unit 8: Data Provisioning using SLT Positioning and Key Concepts Overview on Configuration Aspects Data Replication at a Glance Administration and Monitoring at a Glance SLT based transformation concepts and Advanced Replication settings Extension of (target) table structure and Partitioning Transformation of Data Filtering and Selective Data Replication Specific Considerations Appendix
Objectives Change of (target) table structure and Partitioning
At the end of this Lesson you will be able to:
Explain the concept of extending a table
Specify your own extended structures
Explain how to implement partitioning
Business Example Change of (target) table structure Change of Table Structures within Transformation
(Diagram: data flows from the source system via an RFC connection to the SAP LT Replication Server and via a DB connection to the SAP HANA system.)
Scenarios:
- Add table fields, such as a source system ID → recommended for N:1 replication into the same table
- Remove certain table fields in HANA
- Change the type of table fields (a cast of values has to be defined within transformation rules if needed)
Options of implementation:
1. By using a template table: define template tables of the target structure in the sender system or on the SLT server, and change the table settings in the replication settings.
2. By defining a table deviation in the Advanced Replication Settings.
Specify Structure Changes Setup changes in Advanced Replication Settings
- Name of the table in SAP HANA (optional)
- Structural change by using a template table: enter the name of the table whose structure is to be used as the template.
- Structural change by defining a table deviation: click on Edit Table Structure.
- Flag 'X': the table of the target type was created in the LT Replication Server; blank: the table of the target type was created in the sender system.
Specify Structure Changes Define Table Deviation A different target structure can be defined without having to create the structure in the DDIC. The table deviation can be defined in table IUUC_REPL_TAB_DV. In this case, only the deviation must be defined (ignore, change, or add certain fields); all other fields are derived from the source system. The table deviation can also be defined in the replication content UI. There is no extra option to set the key flag of a new table field: if the new field should become a key field, set the position of the new field so that the field in the following line is also a key field.
Specify Structure Changes Example - Extend table with Key Field SOURCE_SYS_ID
1. Edit the table structure.
2. Define the additional field on HANA.
3. Start load/replication in HANA.
Specify Table Settings Enhanced Table Settings - Partition There is an additional field (PARTITION_CMD) in table IUUC_REPL_TABSTG where a partitioning command can be defined for certain tables. The partitioning command is appended to the CREATE statement of the table on the HANA system.
The partitioning command has to be entered in the same way as it would be defined in the SQL editor of the HANA studio, for example: PARTITION BY HASH (a, b) PARTITIONS 4. SLT adds the partitioning command when generating the SQL command to create the table, for example: CREATE COLUMN TABLE mytab (a INT, b INT, c INT, PRIMARY KEY (a,b)) PARTITION BY HASH (a, b) PARTITIONS 4
HANA WIKI - https://wiki.wdf.sap.corp/wiki/display/ngdb/Partitioning
Note that instead of the partitioning command, any other SQL addition (for example, for localization) is possible here as well.
Specify Table Settings Further settings
- NO_DROP → data in HANA will not be deleted before the replication
- ROW_STORE → the target table in HANA is created as a row table instead of a column table
- RD_PORTION_SIZE → number of records per portion; empty means the default value
Summary Change of (target) table structure and Partitioning
You should now be able to: Explain the concept of extending a table Specify your own extended structures Explain how to implement partitioning
Unit 8: Data Provisioning using SLT Positioning and Key Concepts Overview on Configuration Aspects Data Replication at a Glance Administration and Monitoring at a Glance SLT based transformation concepts and Advanced Replication settings Extension of (target) table structure and Partitioning Transformation of Data Filtering and Selective Data Replication Specific Considerations Appendix
Objectives SLT based transformation concepts
At the end of this Lesson you will be able to:
Explain the different types of rules to change data within replication
Specify the parameters in the advanced replication settings
Business Example Transformation of Data Change of Data within Replication by using Transformation Rules
(Diagram: data flows from the source system via an RFC connection to the LT Replication Server, where the transformation of data takes place, and via a DB connection to the SAP HANA system.)
Scenarios:
- Make certain fields anonymous → HR reporting
- Fill initial fields, for example fields created by a change of the table structure
- Convert units or currencies and recalculate amounts and values
Options of implementation:
1. Applying parameter-based rules
2. Applying event-based rules
SLT based Transformation Concepts Types of Transformation Rules Generally, there are two types of transformation rules that can be used within the transformation. Both have to be created as ABAP includes, which can be embedded into the replication.
Types of transformation rules:
- Parameter-based rules: less flexible than event-based rules; easy to create by using parameters.
- Event-based rules: more flexible than parameter-based rules; knowledge of the data processing within the SAP LT Replication Server is needed to select the right event for the specific business scenario; have access to all fields of a record.
Specify Data Transformation Setup changes in Advanced Replication Settings
Data transformation by using a parameter rule:
- No event is entered (no event = parameter rule)
- Parameters of the rule, which will be used in the ABAP include
- Name of the ABAP include containing the rule
Data transformation by using an event-based rule:
- Event of the rule
- The parameter fields are not in use for event-based rules
- Name of the ABAP include containing the rule
SLT based Transformation Concepts Details - Parameter Rules Parameter rules are performed after the record is mapped to the receiver structure and before it is added to the sender portion.
Import parameters are used within the ABAP include to apply values that were determined in the Advanced Replication Settings. If no parameter has to be used within the rule, IMP_PARAM_1 has to be set to 'DUMMY'! Values of fields of the sender record are addressed in the include using the following names:
- IMPORT PARAMETER 1: i_<fieldname>_1 (for example, i_mandt_1)
- IMPORT PARAMETER 2: i_<fieldname>_2
- IMPORT PARAMETER 3: i_<fieldname>_3
If you have defined a literal, the technical name is:
- IMPORT PARAMETER 1: i_p1
- IMPORT PARAMETER 2: i_p2
- IMPORT PARAMETER 3: i_p3
The export parameter specifies the field name of the receiver structure that has to be filled within the parameter rule. It is addressed in the include using the name e_<fieldname> (for example, e_mandt).
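Putting these naming conventions together, a minimal parameter-rule include could look like the following sketch. The receiver field SOURCE_SYS_ID is taken from the example on the next slide; that a single assignment suffices is an assumption for illustration.

```abap
* Hypothetical parameter-rule include: fill the receiver field
* SOURCE_SYS_ID with the literal defined as import parameter 1
* in the Advanced Replication Settings.
* i_p1 carries the literal value; e_source_sys_id addresses the
* receiver field SOURCE_SYS_ID per the e_<fieldname> convention.
e_source_sys_id = i_p1.
```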
SLT based Transformation Concepts Example - Fill SOURCE_SYS_ID within Parameter Rule
1. Create the ABAP include.
2. Specify the advanced table settings for the table.
3. Start load/replication in HANA.
8-72
SLT based Transformation Concepts Details – Event-Based Rules Event based Rules and Parameter Rules
can be performed at several events of the transformation process
the event has to be determined in Advanced Replication Settings
Fields can be directly addressed in ABAP-Includes using field symbols (event BOL, BOR, EOR and EOL only)
Fields of Sender Structure - (for example, -mandt)
Fields of Receiver Structure - (for example, -mandt) Typical use-case for an event-based rule if you have to apply a mapping to figure out the value of the field to be changed with the rule, a select statement could be executed for each record wihtin a parameter rule. This could decrease the overall performance of the replication dramatically. By creating an event based rule at the beginning of the replication process (BOP or BOT) an internal table containing the mapping information could be initialized once. In a second event-based rule where the records will be processed, the mapping information can be taken from internal table (in memory) instead of a select to database. So the overall performance can be optimzed by applying the eventbased rules. © 2012 SAP AG. All rights reserved.
SLT based Transformation Concepts Events of Event-Based Rules
Processing flow: Start load/replication → BOP → DO → get next portion from source system → BOT → LOOP AT source → BOL → BOR → MOVE-CORRESPONDING source TO target (individual field mapping) → EOR → EOL → ENDLOOP → write to target system → EOT → ENDDO → EOP
- BOP (Begin of Processing): processed only once, before the data transfer really starts. Can be used to initialize certain global fields that might be used in subsequent events (for example, to fill internal mapping tables).
- EOP (End of Processing): processed only once, after the data transfer is completed.
- BOT (Begin of Block): access to all data records of a portion read from the sender system.
- EOT (End of Block): access to all data records immediately before they are passed to the receiver system.
- BOL (Begin of Loop): like BOT if only one table is included in the conversion object; in case of objects with multiple tables, it can be applied to each specific table.
- EOL (End of Loop): like EOT if only one table is included in the conversion object; in case of objects with multiple tables, it can be applied to each specific table.
- BOR (Begin of Record): processed before the field mapping of the individual fields is started.
- EOR (End of Record): processed after the field mapping of the individual fields of a certain data record has finished.
SLT based Transformation Concepts Example - Fill SOURCE_SYS_ID within Event-Based Rule
1. Create the ABAP include.
2. Specify the advanced table settings for the table.
SLT based Transformation Concepts Transformation Rules - Insert Line of Coding
Instead of an ABAP include containing the transformation rule, it is possible to enter one line of coding (max. 72 characters) directly.
If a line of coding is defined, no ABAP include is performed.
This option is suitable for implementing simple parameter rules that fill table fields, such as the source system ID.
Summary Transformation of Data
You should now be able to:
Explain the different types of rules to change data within replication
Specify the parameter in advanced replication settings
Unit 8: Data Provisioning using SLT Positioning and Key Concepts Overview on Configuration Aspects Data Replication at a Glance Administration and Monitoring at a Glance SLT based transformation concepts and Advanced Replication settings Extension of (target) table structure and Partitioning Transformation of Data Filtering and Selective Data Replication Specific Considerations Appendix
Objectives Selective data replication
At the end of this Lesson you will be able to:
Explain the benefit of filtering
Explain the concept of filtering within SLT
Specify your own filtering rule
Define trigger adjustments
Business Example Selective data replication / filtering Reduce the Number of Records to Be Replicated by a Filter
(Diagram: data flows from the source system via an RFC connection to the SAP LT Replication Server and via a DB connection to the SAP HANA system.)
Scenarios:
- Replicate certain data only → for instance, only data of specific years should be used in HANA
- Replicate only the active data of tables, as in the DDIC tables DD02L, DD03L, and so on
Implementation: either by trigger adjustment in the source system, or by rules (event-based or parameter-based).
Concept - Filtering Selective data replication
Conditional filter (in SLT) by parameter-based or event-based rules:
- To skip a record from load and replication, the macro SKIP_RECORD can be used in the code of the include.
- In fact, all data (including data to be filtered out) is transferred from the source system to the SAP LT Replication Server, but only the relevant data is forwarded to HANA → the SLT system reads all data from the source and writes only the relevant data into HANA.
- No database-specific knowledge needed.
- Valid for both the initial load and replication.
Selective delta replication ("trigger adjustments"):
- Done by adjusting the trigger directly on the database of the source system → implemented using the tab folder IUUC_SPC_PROCOPT in the Advanced Replication Settings.
- For experts only; database-specific; implemented in the source system.
- Decreases the total amount of data to be extracted from the source system; performance advantages because of the reduction of triggered and transferred data.
- Only valid for the replication of data; it does not work for the initial load.
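A conditional filter of this kind could be sketched as an event-based include, for example at event BOR. The field symbol <fs_source> and the fiscal-year condition are assumptions for illustration; skip_record is the macro named above.

```abap
* Hypothetical BOR include: keep only records of fiscal year 1998
* and drop all other records from load and replication.
* <fs_source> stands for the field symbol on the sender record;
* skip_record is the SLT macro that excludes the current record.
IF <fs_source>-mjahr <> '1998'.
  skip_record.
ENDIF.
```

Because this filter runs in SLT, it applies to both the initial load and the replication, unlike a trigger adjustment.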
Example: Filtering by Company Code Selective data replication
Realized with a parameter rule.
Result: only records where MJAHR (fiscal year) = '1998' were transferred to HANA.
Define the Trigger Condition SLT based transformation concepts The trigger condition has to be defined in the folder IUUC_SPC_PROCOPT of the Advanced Replication Settings. Depending on the database of the source system, different syntaxes apply.
Fields:
- DBSYS: enter the database type of the source system.
- LINE_NO: you can specify multiple lines if the condition is too complex to fit in one line.
- LINE: enter the trigger condition here. Only when a data change fulfills the condition is it recorded into the logging table for the SLT replication.
Syntax per database type (sample condition: AS4LOCAL = 'N'):
- ADABAS D: field1 = 'value0' AND field2 IN ( 'value1', 'value2' ) / sample: AS4LOCAL = 'N'
- DB2: ___."field1" = 'value0' AND ___."field2" IN ( 'value1', 'value2' ) / sample: ___."AS4LOCAL" = 'N'
- DB6: ___.field1 = 'value0' AND ___.field2 IN ( 'value1', 'value2' ) / sample: ___.AS4LOCAL = 'N'
- MSSQL: field1 = 'value0' AND field2 IN ( 'value1', 'value2' ) / sample: AS4LOCAL = 'N'
- ORACLE: :___.field1 = 'value0' AND :___.field2 IN ( 'value1', 'value2' ) / sample: :___.AS4LOCAL = 'N'
Define the Trigger Condition SLT based transformation concepts In this example, we want to customize the trigger directly in the source system so that only changes to data with AS4LOCAL = 'N' are recorded.
Please note: trigger filtering only works for the replication phase. If the table must also be filtered during the initial load phase, an additional event-based (or parameter-based) filter is necessary!
Summary Selective data replication / filtering
You should now be able to:
Explain the benefit of filtering
Explain the concept of filtering within SLT
Specify your own filtering rule
Define trigger adjustments
Unit 8: Data Provisioning using SLT Positioning and Key Concepts Overview on Configuration Aspects Data Replication at a Glance Administration and Monitoring at a Glance SLT based transformation concepts and Advanced Replication settings Extension of (target) table structure and Partitioning Transformation of Data Filtering and Selective Data Replication Specific Considerations Appendix
Specific considerations Transformation Rules for Cluster Tables Apply SAP Note 1662438 first. After implementing the note, follow these steps (table BSEG as an example):
1. For the initial load, create one entry in IUUC_ASS_RUL_MAP for BSEG and an include program for the BSEG table.
2. For replication, create an additional entry in IUUC_ASS_RUL_MAP for RFBLG and an include program for the RFBLG table.
Specific considerations Performance Optimization
Default setting for the initial load: Reading Type 3. The default is 3 jobs per table for parallel read. If more parallel jobs are defined for parallel read than are available for loading, the tables are processed successively. The sequence number (Seq. No.) can be used to determine the order of table processing.
Specific considerations Accelerated Load Procedures – Reading types
Reading Type 1 – Access Plan Calculation
- Advantages: fast data load if an index exists; parallel data load possible
- Disadvantages: an additional index may be required; requires a key field that is sufficiently selective; calculation required before the load
Reading Type 3 – DB_SETGET (default)
- Advantages: no separate index required
- Disadvantages: additional consumption of the database buffer
Reading Types 4 & 5 – Index Cluster
- Advantages: very fast data load after the data is extracted to table DMC_INDXCL; multi-threading possible with DMIS_2010 SP07; minimal usage of the DB buffer
- Disadvantages: additional table space temporarily required in the source system
Unit 8: Data Provisioning using SLT Positioning and Key Concepts Overview on Configuration Aspects Data Replication at a Glance Administration and Monitoring at a Glance SLT based transformation concepts and Advanced Replication settings Extension of (target) table structure and Partitioning Transformation of Data Filtering and Selective Data Replication Specific Considerations Appendix
Appendix – Load from SAP Archive – Integration of SLT with SAP Solution Manager
Load from SAP Archive 1/2 Architecture and Key Building Blocks The respective ILM API must be available in the source system. It can be installed by means of SAP Note 1652039 (releases 46C – 731).
(Diagram: the ADK archive in the SAP source system is accessed via the ADK Archive Access API; the SAP LT Replication Server (read module, structure mapping & transformation, write module) reads via the RFC connection and writes via the DB connection into the application table in the SAP HANA system.)
Archived data can be selected by the date of the archiving session.
Load from SAP Archive 2/2 Define Load Object Use report IUUC_CREATE_ARCHIVE_OBJECT on the SLT Replication Server to set up and start the loading process for archive objects:
- Select the replication configuration
- Select the archive object
- Define the selection criteria
- Select the relevant tables within the current archive object
Integration of SLT with SAP Solution Manager - Monitoring Capabilities
SAP Solution Manager is set to be the single source of truth for monitoring and incident management of the entire HANA stack.
Ensures proactive information about replication status through alerting and notification capabilities
Overview - Monitoring with SAP Solution Manager 7.1 SP5 Replication notifications and alerts are now visible in SAP Solution Manager 7.1 SP5.
SLT monitoring summarizes the following information per configuration:
Connectivity to source and target system
Status of replication latency over the last 24 hours
Status of master and load jobs
Trigger status
Set-Up of System Monitoring for SLT
SLT system monitoring is defined on the ABAP technical system level.
If the DMIS add-on is detected, the SAP template "SAP SLT ABAP Addon" is assigned by default.
SLT System Monitoring - Connectivity Status
Available Monitoring Capabilities for SLT
Per schema, the connectivity status from SLT to the source and to the target system is monitored.
SLT System Monitoring – Status of Latency Times
Performance Monitoring for SLT
Per schema, the worst rating of the average latency over the past 24 hours is reported.
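The "worst rating" aggregation can be pictured with a small sketch. This is illustrative only: the thresholds and rating names below are assumptions, not the actual values used by SAP Solution Manager.

```python
# Illustrative sketch: map each average latency sample from the past 24 hours
# to a rating, then report the worst rating per schema.
# The warn/critical thresholds are assumptions, not SAP's actual values.

RATINGS = {"green": 0, "yellow": 1, "red": 2}

def rate(latency_seconds, warn=60, critical=300):
    # Rate a single average latency value.
    if latency_seconds >= critical:
        return "red"
    return "yellow" if latency_seconds >= warn else "green"

def worst_rating(samples):
    # samples: average latency per interval over the last 24 hours
    return max((rate(s) for s in samples), key=RATINGS.get)

print(worst_rating([5, 12, 140, 30]))  # 'yellow'
```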
SLT System Monitoring – Status of Master and Load Jobs Exception Monitoring for SLT Per schema the job status for master and load jobs is monitored
SLT System Monitoring – Trigger Status Exception Monitoring for SLT Per schema the trigger status is monitored (worst case for all table triggers)
Agenda SAP HANA Implementation and Modeling
Unit 1: Approaching SAP HANA Modeling
Unit 2: Connecting Tables
Unit 3: Advanced Modeling
Unit 4: Full Text Search
Unit 5: Processing Information Models
Unit 6: Managing Modeling Content
Unit 7: Security and Authorizations
Unit 8: Data Provisioning using SLT
Unit 9: Data Provisioning using SAP Data Services
Unit 10: Data Provisioning using Flat File Upload
Unit 11: Data Provisioning using Direct Extractor Connection
Unit 9: Data Acquisition using SAP Data Services Introduction to SAP Data Services Loading data into SAP HANA
Objectives Data Acquisition using SAP Data Services
At the end of this lesson you will be able to:
Explain Data Services capabilities with SAP HANA
Load data from an SAP ECC source table into SAP HANA using an ABAP dataflow
Solution: One-Stop Solution for Information Management
DATA SERVICES ETL
Data Quality
Data Profiling
Text Analytics
Metadata Management
Data Sources: Structured and Unstructured
A One Stop Solution for Information Management can simplify the enterprise architecture for managing information and allow IT to more efficiently deliver business results.
SAP Data Services – Hadoop Connector
Connect unstructured & structured data for greater insight
[Diagram: multi-structured data sources (log files, enterprise portals, media and other files) are collected into Hadoop; structured data sources (data warehouses, SAP BW, Sybase IQ) and Hadoop are integrated with SAP HANA (in-memory); results are consumed via mobile, dashboards/reports, and on-demand services]
Map-Reduce is a parallel processing paradigm where code is sent to the data for in-place processing.
1. Collect & Store: Files are stored in their native format, incurring no transformation costs. Built-in fault tolerance. A commodity hardware and software solution makes Hadoop scale cost-effectively.
2. Analyze & Process: Prepare the data to enable the class of problems to be solved. Problems like searching, counting, and pattern detection lend themselves well to the Map-Reduce paradigm.
3. Integrate & Consume: Read from / load into Hadoop. Familiar, easy-to-use Data Services UI. Enterprise support and integration into the enterprise infrastructure.
Interoperability of SAP HANA with open source Apache™ Hadoop™: SAP Data Services software will provide capabilities to use Hadoop as a data store and be interoperable with the "big data" Hadoop framework. Hadoop is an open source technology modeled after the Google File System (GFS) and the seminal Map-Reduce paper published by Google in 2004. Essentially, Google showed that for a highly dynamic internet search operation, the best approach is to do a full scan on the data in parallel. In order to reduce data movement during the operation, the search-and-locate algorithm is sent to the place where the data is stored. The results are centrally aggregated and the response is fed back to the querying agent. All of this is done in a robust, reliable way, since the focus is to use cheap commodity hardware to keep storage costs at a bare minimum. This software architecture, which ensures reliability, robustness, and parallel processing over commodity, less reliable hardware, has revolutionized how Big Data challenges can be approached. More and more organizations are becoming fast adopters of this technology because of the promise it brings to managing Big Data.
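The Map-Reduce paradigm described above can be illustrated with a minimal word-count sketch. Python is used here purely for illustration; real Hadoop jobs typically run distributed across many nodes.

```python
from collections import defaultdict
from itertools import chain

# Toy Map-Reduce word count: map_fn is applied to each input split ("code is
# sent to the data"), then intermediate pairs are grouped by key and reduced.

def map_fn(line):
    # Map phase: emit (word, 1) for every word in the split.
    return [(word.lower(), 1) for word in line.split()]

def reduce_fn(key, values):
    # Reduce phase: aggregate all counts emitted for one key.
    return key, sum(values)

def map_reduce(splits):
    # Shuffle: group intermediate pairs by key before reducing.
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map(map_fn, splits)):
        groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

counts = map_reduce(["big data big insight", "big data"])
print(counts)  # {'big': 3, 'data': 2, 'insight': 1}
```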
SAP Data Services – SAP HANA
Extract, transform, and load data quickly
[Diagram: SAP Data Services (Designer and Management Console, metadata repository server) loads data from any source, including SAP BW via Open Hub, into the In-Memory Computing Engine of SAP HANA, where it is modeled with the Modeler]
SAP Data Services enables you to gain deeper insight with a single, trusted view by accessing and integrating structured and unstructured data from data sources across your enterprise. With the data integrator you can:
Extract, transform, and load data into a data warehouse to create a complete view
Access data from Hadoop to combine unstructured and structured data for new insight
Unlock meaning from unstructured documents with native text data processing
The data integrator capabilities can be used with many of our data stores, including SAP HANA, SAP NetWeaver BW, SAP Rapid Marts, and Sybase IQ.
SAP Data Services 4.0 SP2 with SAP HANA Database SAP Data Services 4.0 SP2 is the minimum release to work with HANA 1.0 SP4
Performance improvements for loading HANA: an option for the commit size in the HANA target allows for better tuning, and template tables offer the option to choose a column or row table type.
Support for bulk updates in SAP HANA (through an intermediate staging table): Direct UPDATE and DELETE statements via ODBC are slow. Instead, all data can be loaded into a temporary table in HANA, together with an operation code indicating whether each row is an INSERT, UPDATE, or DELETE. Once all data is in the temporary table, a SQL statement is issued to HANA to apply the INSERTs/UPDATEs/DELETEs to the actual target. Overall this gives a big performance gain, because Data Services only generates (bulk) INSERTs and the actual UPDATEs/DELETEs are all executed in memory in the HANA engine.
Improved ABAP integration to ERP: ODP – Operational Data Provider framework, a new SAP-delivered API implemented on the ERP side.
Full extractor support through ODP
Full extractor support through the ODP data replication API: Data Services can use this API to get initial and delta loads, and the data can be streamed to Data Services. Main points:
Only "released" extractors are shown to Data Services. The Business Suite team releases standard extractors as they are certified for ODP; customers can release generic extractors using transaction RODPS_OS_EXPOSE.
Delta support through the delta queues (the same mechanism as used by BW today).
Data is streamed from SAP to Data Services.
Overall, a DS-specific subset of the overall ODP functionality is released with ECC 6.0 EhP6. Standard extractors need to be "released" by the Business Suite team.
The Operational Data Provider (ODP) data replication API is installed on the SAP NetWeaver Platform. ODP provides the following benefits in Data Services: • ability to browse all available extractors, • extract data in both initial and changed-data capture (delta) mode, • stream data from the SAP application to the data flow without using staging files.
Unit 9: Data Acquisition using SAP Data Services Introduction to SAP Data Services Loading data into SAP HANA
Process Flow: SAP HANA and SAP Data Services 4
Create a connection to a SAP source system
Create a connection to SAP HANA
Import metadata from SAP BW Extractor to Data Services Repository
Design a Data Services job to populate SAP HANA
Execute a Data Services job to populate SAP HANA
Preview uploaded data
Standard vs ABAP Dataflows
A standard dataflow can be used if you have the following requirements:
reading a single table
small number of columns (the data load buffer is restricted to 512 bytes per row)
An ABAP dataflow can be used if you have the following requirements:
reading multiple ECC tables
pushing down join operations to the SAP application
better performance
It is recommended to use ABAP dataflows when loading data from SAP Applications.
What is an ABAP Dataflow?
[Diagram: in the SAP application, a generated ABAP program reads the SAP database and writes a transport file; the Data Services Job Server sends the program and communicates via RFC/BAPI calls, reads the transport file in the enclosing data flow, and loads the target; IDocs are exchanged through the Access Server]
Data Services provides several methods for moving data into and out of SAP applications: Streaming data from SAP using RFC: Reads data from SAP applications using regular data flows and supports tables (for small data sets only) and extractors. ABAP programming language: Reads data from SAP applications using ABAP data flows and supports tables, hierarchies, extractors, and functions with scalar arguments. RFC/BAPI: Loads data to SAP applications and allows direct access to application tables outside of the data flows through RFC function calls. Because a function is handled as a nested column within a query, it can return SAP tables, structures, or scalar parameters, which can be loaded into any target. Data Services can process any RFC-enabled function call including all available parameters plus user-supplied functions. These functions are often used to write certain information into an SAP system or read data that has to be calculated by the function. IDoc interface: Reads from and loads data to SAP applications. Data Services can send, receive, and create SAP IDocs including extended and custom IDocs and supports reduced input sets, parallel processing, real-time, and batch processing.
What is an ABAP Dataflow?
[Example: an R/3 (ABAP) data flow reads the source SNWD_SO_I (ECC) and passes it through a Query transform to a Data Transport (DataTransport976); the enclosing data flow reads that transport through another Query into the target]
An ABAP data flow generates an ABAP program according to the fields you selected from the SAP tables. This program is transferred to the SAP system and executed there; it collects the data from the tables and writes it to a file in the SAP work directory, using the name maintained in the Data Transport object of the ABAP data flow. In other words, SAP does not allow Data Services (BODS) to read the SAP tables directly; instead it provides the data in the form of a file. The configured data transfer method then moves the file from the SAP work directory to the Data Services Job Server's local directory, where the normal data flow reads it as a source and loads it into the target table.
Using Data Services Template Tables
If the structure of the target table is similar or identical to the table in the source system, it is not necessary to import the metadata prior to executing the Data Services job. Data Services provides a Template Tables functionality that executes a SQL statement in the target database prior to the data load, which generates the metadata.
1. Inside the HANA datastore, select Template Tables and drop it into the data flow
2. Enter the table name and owner name (owner name = schema name)
3. Map the source structure to the template table
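Conceptually, the template-table mechanism boils down to deriving a CREATE TABLE statement from the mapped source structure. The sketch below is a hypothetical illustration: the table name, column types, and the exact DDL that Data Services generates are assumptions, not the actual generated statement.

```python
# Hypothetical sketch of what the template-table mechanism does conceptually:
# derive a CREATE TABLE statement from the mapped source structure so that
# the target metadata need not be imported beforehand.

def template_table_ddl(schema, table, columns, store="COLUMN"):
    # columns: list of (name, sql_type) pairs taken from the source structure
    cols = ", ".join(f'"{name}" {sql_type}' for name, sql_type in columns)
    return f'CREATE {store} TABLE "{schema}"."{table}" ({cols})'

# Example: a target for the ECC table SNWD_SO_I (column names/types illustrative)
ddl = template_table_ddl(
    "SLT_SCHEMA", "SNWD_SO_I",
    [("CLIENT", "NVARCHAR(3)"), ("NODE_KEY", "VARBINARY(16)"), ("GROSS_AMOUNT", "DECIMAL(15,2)")],
)
print(ddl)
```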
Steps to load data into SAP HANA using an ABAP Dataflow Step 1: Create ECC and SAP HANA Datastores in the repository.
Steps to load data into SAP HANA using an ABAP Dataflow Step 2: Import the ECC table or extractor metadata into the repository.
Steps to load data into SAP HANA using an ABAP Dataflow Step 3: Create a Batch Job and add an ABAP dataflow
Option descriptions:
Datastore: Specifies the datastore corresponding to the SAP application tables or files this ABAP data flow accesses.
Generated ABAP file name: Specifies the file name containing the ABAP program code that Data Services generates from this ABAP data flow. This file is written to the directory specified in the datastore definition.
ABAP program name: Specifies the name Data Services uses for the ABAP program that runs in SAP. It must begin with Y or Z and be 8 characters or fewer.
Job name: Specifies the name used for the job that runs in SAP. It defaults to the name of the data flow.
ABAP row limit: Limits the number of rows that will be read by this ABAP data flow. Use this option for testing purposes to limit the time that the parent batch job will run.
Join rank: Determines the order in which Data Services reads this data set when joining it with another source. When joining sources, Data Services reads sources with higher ranks before reading sources with lower ranks.
Parallel process threads: Specifies the number of threads that read or load data. This option allows you to parallelize reading and loading. For example, if you have four CPUs on your Job Server computer, enter the number 4 in this option.
Steps to load data into SAP HANA using an ABAP Dataflow Step 4: Add the ECC source, Query transform and Data Transport in the workspace of the ABAP dataflow.
Double click on ABAP data flow to drill down
Steps to load data into SAP HANA using an ABAP Dataflow Step 5: Do the mappings in the Query transforms
Steps to load data into SAP HANA using an ABAP Dataflow Step 6: Execute the job and monitor
Steps to load data into SAP HANA using an ABAP Dataflow Step 7: Preview data in SAP HANA
Summary Data Acquisition using SAP Data Services
You should now be able to:
Explain Data Services capabilities with SAP HANA
Load data from an SAP ECC source table into SAP HANA using an ABAP dataflow
Unit 10: Uploading Data from Flat Files Introduction to Flat File Upload Loading flat file data into SAP HANA
Objectives Data Acquisition using Flat File Data Load
At the end of this Lesson you will be able to: Understand the capabilities and positioning of the Flat file data load functionality Load data from Flat Files into the SAP HANA Database
New feature in SPS4: Uploading data from flat files
With SPS4, it's possible to upload data from flat files available in the client file system to the SAP HANA database.
If the required table does not yet exist in the SAP HANA database, a table structure based on the flat file must be created. The application suggests the column names and data types for the new table, and it's possible to edit them. The new table always has a 1:1 mapping between the file and table columns.
When loading new data into the table, it is appended to the existing data. The application does not allow you to overwrite any column or change the data type of existing data.
The supported file types are .csv, .xls, and .xlsx.
Especially suited for proofs of concept or projects where only a one-time data load is required.
+ Quick and easy data load
- No delta logic available; no transformation capabilities → 1:1 mapping only
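How such a column-name and data-type suggestion might work can be sketched as follows. The inference rules and type names below are illustrative assumptions, not the actual logic of the SAP HANA upload wizard.

```python
import csv
import io

# Hedged sketch of how a loader might suggest column names and data types from
# a flat file, similar in spirit to the SPS4 upload wizard. The inference
# rules (digits -> INTEGER, floats -> DECIMAL, else NVARCHAR) are assumptions.

def suggest_types(csv_text, delimiter=","):
    rows = list(csv.reader(io.StringIO(csv_text), delimiter=delimiter))
    header, data = rows[0], rows[1:]

    def infer(values):
        if all(v.lstrip("-").isdigit() for v in values):
            return "INTEGER"
        try:
            [float(v) for v in values]
            return "DECIMAL"
        except ValueError:
            return "NVARCHAR(255)"

    # zip(*data) transposes the rows into columns for per-column inference
    return {name: infer(col) for name, col in zip(header, zip(*data))}

print(suggest_types("ID,PRICE,NAME\n1,9.99,Pump\n2,19.50,Valve"))
# {'ID': 'INTEGER', 'PRICE': 'DECIMAL', 'NAME': 'NVARCHAR(255)'}
```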
Unit 10: Uploading Data from Flat Files Introduction to Flat File Upload Loading flat file data into SAP HANA
Process Flow: Uploading data from flat files
Select Import Source
Select File for Upload
Select Target System
Select Target Table
Manage Table Definition and Data Mapping
Check Target Table
Select Import Source
In the File menu, choose Import
Expand the SAP HANA Content directory
Select Data From Local File and choose Next
Select Target System
In the Target System section, select the target system where the data should be imported. Choose Next.
Select File for Upload
In the Flat File Upload screen, browse for the file that should be uploaded into the SAP HANA database.
If a .xls or .xlsx file has been selected, choose the corresponding worksheet.
If a .csv file has been selected, select a delimiter.
If a header row exists in the flat file, select Header row exists and enter the row number.
If only a specific row range is relevant for the import, uncheck Import all data and enter the start / end line.
Note: A delimiter is used to determine the columns and assign the correct data to them. In a .csv file, the accepted delimiters are: , ;
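The import options above (delimiter, header row, row range) can be illustrated with a small sketch; the function and its parameters are hypothetical, not the wizard's actual API.

```python
import csv
import io

# Sketch of the import options described above: delimiter, header row, and an
# optional start/end line restriction. File content and parameter names are
# illustrative only.

def read_flat_file(text, delimiter=";", header_row=1, start=None, end=None):
    rows = list(csv.reader(io.StringIO(text), delimiter=delimiter))
    header = rows[header_row - 1]       # "Header row exists" + row number
    data = rows[header_row:]
    if start is not None:               # "Import all data" unchecked: row range
        data = data[start - 1:end]
    return [dict(zip(header, r)) for r in data]

text = "ID;NAME\n1;Pump\n2;Valve\n3;Hose"
print(read_flat_file(text, start=2, end=3))
# [{'ID': '2', 'NAME': 'Valve'}, {'ID': '3', 'NAME': 'Hose'}]
```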
Select Target Table
For the Target Table, two options are available: New: When selecting New, a new table with the name entered will be generated within the schema chosen. Existing: When selecting Existing, data will be appended to an existing table. Choose Next
Manage Table Definition and Data Mapping
In the Manage Table Definition and Data Mapping screen it’s possible to map the source and the target columns
The application proposes a mapping structure automatically based on the naming
Additionally it’s required to select a Key
Note: Only 1:1 column mapping is supported. Additionally, it’s possible to edit the table definition by changing the store type, data types, renaming, adding or deleting columns
Manage Table Definition and Data Mapping
It’s possible to preview the data based on the flat file chosen
Select Finish to finalize the creation process
Check Target Table
The table should be available within the schema defined during the process step Select Target Table. Double-click the table to see the table definition.
Check Target Table
Right-click the table and select Data Preview.
Summary Data Acquisition using Flat File Data Load
You should now be able to:
Understand when to use the flat file data load functionality
Load data from flat files into the SAP HANA database
Unit 11: SAP HANA Direct Extractor Connection
Overview
Rationale SAP HANA Direct Extractor Connection (DXC)
SAP HANA Direct Extractor Connection Details
SAP Business Content DataSource Extractors
SAP HANA Direct Extractor Connection Setup & Configuration
Comparison with other SAP HANA Data Acquisition Techniques
Appendix: DXC Sidecar Variation
Objectives Direct Extractor Connection
At the end of this Lesson you will be able to:
Explain an additional data acquisition technique for working with data from SAP Business Suite systems that has been added to the existing techniques for HANA data acquisition.
Overview Direct Extractor Connection
This module covers the following topics: Overview
Overview SAP HANA Direct Extractor Connection An additional data acquisition technique for working with data from SAP Business Suite systems has been added to the existing techniques for HANA data acquisition: SLT Replication Data Services …and now SAP HANA Direct Extractor Connection (DXC)
SAP HANA Direct Extractor Connect (DXC) is a means for providing out-of-the-box foundational data models to SAP HANA, which are based on SAP Business Suite entities. DXC is also a data acquisition method. The rationale for DXC is essentially simple, low TCO data acquisition for SAP HANA leveraging existing delivered data models. Customer projects may face significant complexity in modeling entities in SAP Business Suite systems. In many cases, data from different areas in SAP Business Suite systems requires application logic to appropriately represent business documents. SAP Business Content DataSource Extractors have been available for many years as a basis for data modeling and data acquisition for SAP Business Warehouse; now with DXC, these SAP Business Content DataSource Extractors are available to deliver data directly to SAP HANA. DXC is a batch-driven data acquisition technique; it should be considered as a form of extraction, transformation and load although its transformation capabilities are limited to user exit for extraction. 
Overview of the DXC Rationale:
Leverage pre-existing foundational data models of SAP Business Suite entities for use in SAP HANA data mart scenarios: significantly reduces the complexity of data modeling tasks in SAP HANA and speeds up timelines for SAP HANA implementation projects.
Provide semantically rich data from SAP Business Suite to SAP HANA: ensures that data appropriately represents the state of business documents from ERP; application logic that gives the data the appropriate contextual meaning is already built into many extractors.
Simplicity / low TCO: re-uses the existing proprietary extraction, transformation, and load mechanism built into SAP Business Suite systems over a simple http(s) connection to SAP HANA; no additional server or application is needed in the system landscape.
Change data capture (delta handling): efficient data acquisition, since only new or changed data is brought into SAP HANA; DXC provides a mechanism to properly handle data from all delta processing types.
Overview SAP HANA DXC Concept: Illustration
[Diagram: in SAP ERP, the embedded BW's generic data transfer from the DataSource (flat structure, extractor, PSA) is redirected away from InfoCubes, DataStore Objects, and InfoObjects and transferred over an HTTP connection to SAP HANA. There the data is loaded into the activation queue of an In-Memory DSO; a separate activation step produces the active version, whose table is used for building data models in HANA. Loads run as scheduled batch jobs.]
An SAP Business Suite system is based on SAP NetWeaver. As of SAP NetWeaver version 7.0, SAP Business Warehouse (BW) is part of SAP NetWeaver itself, which means a BW system exists inside SAP Business Suite systems such as ERP (ECC 6.0 or higher). This BW system is referred to as an “embedded BW system”. Typically, this embedded BW system inside SAP Business Suite systems is actually not utilized, since most customers who run BW have it installed on a separate server, and they rely on that one. With the default DXC configuration, we utilize the scheduling and monitoring features of this embedded BW system, but do not utilize its other aspects such as storing data, data warehousing, or reporting / BI. DXC extraction processing essentially bypasses the normal dataflow, and instead sends data to SAP HANA. The following illustration depicts the default configuration of DXC. An In-Memory DataStore Object (IMDSO) is generated in SAP HANA, which directly corresponds to the structure of the DataSource you are working with. This IMDSO consists of several tables and an activation mechanism. The active data table of the IMDSO can be utilized as a basis for building data models in SAP HANA (attribute views, analytical views, and calculation views). Data is transferred from the source SAP Business Suite system using an HTTP connection. Generally, the extraction and load process is virtually the same as when extracting and loading SAP Business Warehouse – you rely on InfoPackage scheduling, the data load monitor, process chains, etc – which are all well known from operating SAP Business Warehouse. DXC does not require BW on SAP HANA. Also with DXC, data is not loaded into the embedded BW system. Instead, data is redirected into SAP HANA.
Summary Direct Extractor Connection
You should now be able to:
Explain an additional data acquisition technique for working with data from SAP Business Suite systems that has been added to the existing techniques for HANA data acquisition.
Unit 11: SAP HANA Direct Extractor Connection Overview Rationale SAP HANA Direct Extractor Connection (DXC) SAP HANA Direct Extractor Connection Details SAP Business Content DataSource Extractors SAP HANA Direct Extractor Connection Setup & Configuration Comparison with other SAP HANA Data Acquisition Techniques Appendix: DXC Sidecar Variation
Objectives Direct Extractor Connection
At the end of this lesson you will be able to: Explain the rationale for SAP HANA Direct Extractor Connection (DXC)
Overview Direct Extractor Connection
This module covers the following topics: Rationale SAP HANA Direct Extractor Connection (DXC)
Pre-Existing Foundational Data Models of SAP Entities for use in SAP HANA
Challenges:
Data is stored in many different tables, with high complexity, in many modules of SAP Business Suite systems
The LT "real-time" approach → uses base tables in the SAP Business Suite as the basis for data modeling of SAP Business Suite entities
Project solution: model SAP entities from scratch
In some cases → big challenges because of the high complexity in the SAP Business Suite system
DXC Benefits:
Leverage SAP-delivered Business Content DataSources → existing foundational data models for key entities in SAP Business Suite systems
Significantly reduces the complexity of data modeling tasks in SAP HANA
Speeds up timelines for customers' implementation projects
Provides Semantically Rich Data from SAP Business Suite to SAP HANA
Challenges:
In many modules of SAP Business Suite systems → application logic is needed to have semantically rich data (data appropriately reflecting the state of business documents)
The LT "real-time" approach → uses base tables in the SAP Business Suite as the basis for data modeling → semantically rich data is not provided "out of the box"
Project solution: implement business logic from scratch to properly represent SAP Business Suite data
It can be extremely challenging to determine the proper application logic to implement on a project basis in SAP HANA (depending on the use case)
DXC Benefits:
DXC → uses SAP DataSource extractors → provides semantically rich data "out of the box"
Ensures the data appropriately represents the state of business documents from ERP
Application logic to "make sense of the data" is already built into many extractors
Avoids the potentially difficult work of "reinventing the wheel" on a project basis in HANA, i.e. reimplementing application logic in HANA that is already provided in DataSource extractors
Simplicity / Low TCO
Challenges:
Some use cases require straightforward use of SAP ERP data → a simple interface is desired
System landscape impact of other data acquisition techniques:
LT "real-time" → separate NetWeaver instance in the system landscape
Data Services → ETL tool, separate BOE instance in the system landscape
Data Services → requires an SP from March 2011 on the ERP system to use SAP DataSource extractors (SAP Notes 1522554 and 1558737)
DXC Benefits:
DXC provides a very simple, straightforward interface and re-uses existing extractors in SAP ERP
HTTP connection directly from SAP Business Suite to HANA
TCO advantages (due to simplicity and minimal system landscape impact)
DXC is available simply by applying a note (in most cases)
Re-use of widely available skill sets: BW extraction and load is well known in the industry
Activation Mechanism for Handling Delta Processing

Challenges
• Many SAP Business Content DataSources offer delta processing, also known as change data capture: extraction only sends data created, changed, or deleted since the last extraction run → efficiency.
• Some DataSource extractor types deliver data with special properties → it should not simply be loaded directly into a table in HANA, as this would cause incorrect results in reports.
• The delta processing types AIM, AIE, AIED, AIMD, ADD, ADDD, and CUBE require data to be loaded into a DataStore Object (DSO) → activation processing is important for data correctness.
• Standard DSOs in BW include an activation mechanism, which handles the special properties of this data appropriately (e.g. after image only, overwrite, deletion flag).
• Without DXC, HANA standalone cannot properly handle data from the aforementioned types.

DXC Benefits
• DXC provides a special In-Memory DataStore Object (IMDSO) for use in HANA standalone.
• The IMDSO includes the same activation mechanism features as BW DSOs.
• The IMDSO properly handles the special requirements of extractor types such as AIM and AIMD, which need DSO activation → correct data in reports.
• The IMDSO ensures proper sequencing, overwrite, deletion of data, etc.
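The overwrite-and-delete behavior of DSO-style activation can be sketched as follows. This is an illustrative model only, not SAP code: the record layout, the `key` field, and the `deleted` flag are hypothetical stand-ins for the after-image and deletion-flag semantics described above.

```python
# Sketch (assumption: simplified records): how a DSO-style activation
# mechanism merges queued after-image records into the active table.

def activate(active_table, activation_queue):
    """Merge queued records into the active table in extraction order.

    active_table: dict mapping record key -> record
    activation_queue: list of records, each a dict with a 'key' and an
    optional 'deleted' marker (hypothetical deletion-flag field).
    """
    for record in activation_queue:
        key = record["key"]
        if record.get("deleted"):
            # Deletion flag: remove the record from the active data
            active_table.pop(key, None)
        else:
            # After-image: the newest version overwrites the old one
            active_table[key] = record
    activation_queue.clear()
    return active_table

# A sales-order item is created, changed, then deleted; a second item stays:
queue = [
    {"key": "0001-10", "qty": 5},
    {"key": "0001-10", "qty": 7},                   # change overwrites qty 5
    {"key": "0002-10", "qty": 3},
    {"key": "0001-10", "qty": 7, "deleted": True},  # deletion removes the item
]
active = activate({}, queue)
# 'active' now holds only item 0002-10
```

Loading the same data directly into a plain table would instead keep all four rows, which is exactly the "incorrect results in reports" problem the IMDSO activation avoids.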
Summary Direct Extractor Connection

You should now be able to:
• Explain the rationale for the SAP HANA Direct Extractor Connection (DXC)
Unit 11: SAP HANA Direct Extractor Connection Overview Rationale SAP HANA Direct Extractor Connection (DXC) SAP HANA Direct Extractor Connection Details SAP Business Content DataSource Extractors SAP HANA Direct Extractor Connection Setup & Configuration Comparison with other SAP HANA Data Acquisition Techniques Appendix: DXC Sidecar Variation
©SAP AG
HA300
11-16
Objectives Direct Extractor Connection
At the end of this lesson, you will be able to:
• Explain the SAP HANA Direct Extractor Connection in detail
Overview Direct Extractor Connection
This module covers the following topics: SAP HANA Direct Extractor Connection Details
SAP HANA Direct Extractor Connection Details

• In typical Business Suite systems, the embedded BW is not utilized; customers typically have separate BW systems.
• DXC uses the embedded BW system to enable extraction and monitoring only. The data flow is redirected → it is sent to HANA.
• Note: Modeling in the embedded BW is not part of the DXC solution.
• Note: An architectural variation is available that uses a "sidecar" BW instead of the embedded one. See the appendix for details.
SAP HANA Direct Extractor Connection Details

• The extraction from the SAP Business Suite system is controlled from the Data Warehousing Workbench inside the embedded BW.
• When data is extracted from the SAP Business Suite system, it is not loaded into the PSA of the embedded BW; instead it is redirected and sent to HANA:
− It is loaded into the in-memory DSO's activation queue.
− It is then activated into the active table of the in-memory DSO.
• However, in the data load monitor of the embedded BW, the data load into the activation queue of the DSO in HANA appears as if data were loading into the PSA of the embedded BW.
SAP HANA Direct Extractor Connection Details

• Delta processing (aka "change data capture") works the same for DXC as it would if BW were the receiving system: if the DataSource is delta-enabled, then delta-enabled data is available with the SAP HANA Direct Extractor Connection.
• Internally in HANA, DXC uses the ICM (Internet Communication Manager), which receives XML packages over the HTTP(S) connection.
• A mechanism written on the XS Engine (a special component of HANA):
− Receives data packets from the ICM and converts the format.
− Inserts the records into the activation queue of the in-memory DSO.
− Activation processing → records go into the active table in the proper sequence.
• Both the ICM and XS Engine components must be installed in SAP HANA to utilize DXC.
SAP HANA Direct Extractor Connection Details

Limitations for DXC
• Business Suite system based on NetWeaver 7.0 or higher (e.g. ECC) with at least the following SP level:
− Release 700: SAPKW70021 (SP stack 19, from Nov 2008)
− Release 701: SAPKW70104
− Release 702: SAPKW70201
− Release 730: SAPKW73001
• The DataSource must have a key field defined; a procedure exists to define a key if one is not already defined.
• Certain DataSources may have specific limitations:
− Inventory types, e.g. 2LIS_03_BF – the data requires special features only available in BW.
− Certain Utilities DataSources – can work with one and only one receiving system.
− Some DataSources are not delta-enabled – not a specific issue for DXC or HANA, but something to take into account.
SAP HANA Direct Extractor Connection – SAP Business Suite DataSource Extractors

[Figure: extraction data flow for the sales order item DataSource 2LIS_11_VAITM in SAP ERP. Application tables (MCVBAK, MCVBAP) feed the extract structure MC11VA0ITM via the communication structure, application logic, and update methods; init/full loads read from the setup table, delta loads from the delta queue (ARFCSDATA, ARFCSSTATE, TRFCQOUT). In SAP HANA, the data flow is redirected from the embedded BW into the In-Memory DataStore Object: activation queue table /BIC/A2LIS_11_VAITM40, activation processing, active table /BIC/A2LIS_11_VAITM00. The HANA data models on top are virtual.]
Summary Direct Extractor Connection

You should now be able to:
• Explain the SAP HANA Direct Extractor Connection in detail
Unit 11: SAP HANA Direct Extractor Connection Overview Rationale SAP HANA Direct Extractor Connection (DXC) SAP HANA Direct Extractor Connection Details SAP Business Content DataSource Extractors SAP HANA Direct Extractor Connection Setup & Configuration Comparison with other SAP HANA Data Acquisition Techniques Appendix: DXC Sidecar Variation
Objectives Direct Extractor Connection
At the end of this Lesson you will be able to: Explain the SAP Business Content DataSource Extractors
Overview Direct Extractor Connection
This module covers the following topics: SAP Business Content DataSource Extractors
SAP Business Content DataSource Extractors – Overview

Proprietary extraction technology with application-based change data capture (aka delta capabilities) in the SAP Business Suite:
• Extractors are application-based and take data from the context of the application itself.
• Extraction is event/process driven and (in some cases) is accomplished by publishing new or changed data in ERP-based delta queues for receiving systems (e.g. BW or HANA).
• Extract structures can easily be enhanced using append structures.
• Transformations can be implemented at the time of extraction using Business Add-Ins (BAdIs).
• Extract structures are based on entities in the Business Suite.
• Asynchronous, mass-data-capable extraction.
SAP Business Content DataSource Extractors

Thousands of SAP Business Content DataSources exist (business documents = transactional data; master data = attributes, texts, hierarchies):

            Transactional data   Master data   Texts   Hierarchies   Total
ERP & R/3                  900           700    2300           100    4000
CRM                        160           280     700            40    1180
SRM                         30            30      60            10     130
Others                     360           210     330            30     930
GRC                         20            30     120            10     180
Total                     1470          1250    3510           190    6420
Summary Direct Extractor Connection

You should now be able to:
• Explain the SAP Business Content DataSource Extractors
Unit 11: SAP HANA Direct Extractor Connection Overview Rationale SAP HANA Direct Extractor Connection (DXC) SAP HANA Direct Extractor Connection Details SAP Business Content DataSource Extractors SAP HANA Direct Extractor Connection Setup & Configuration Comparison with other SAP HANA Data Acquisition Techniques Appendix: DXC Sidecar Variation
Objectives Direct Extractor Connection
At the end of this Lesson you will be able to: Explain SAP HANA Direct Extractor Connection Setup & Configuration
Overview Direct Extractor Connection
This module covers the following topics: SAP HANA Direct Extractor Connection Setup & Configuration
Setup & Configuration – Relevant Notes

All relevant information related to setup & configuration is provided by:
• Note 1665602 – Setup & Config: SAP HANA Direct Extractor Connection (DXC)
• Note 1583403 – Direct extractor connection to SAP HANA
You must read the following SAP Notes before you start the installation. These SAP Notes contain the most recent information on the installation, as well as corrections to the installation documentation. Make sure that you have the up-to-date version of each SAP Note, which you can find on SAP Service Marketplace at http://service.sap.com/notes.
SAP Notes relevant for DXC:
• 1583403 (Direct extractor connection to SAP HANA) – main note for the setup steps required in the source SAP Business Suite system.
• 1670518 (SAP HANA Direct Extractor Connection: Monitoring) – provides information on how to monitor DXC, in particular the activation processing for In-Memory DataStore Objects (IMDSOs).
• 1688750 (DataSource: Reading a property in the source system) – apply this note to the source SAP Business Suite system only if you have the "sidecar" scenario described in the section Appendix – DXC System Landscape Variants: The "Sidecar" Approach.
• 1701750 (DataSource: Secondary Index on the PSA) – if your DataSource is missing a key, apply this note to any BW systems connected to the SAP Business Suite system you are using with DXC.
• 1677278 (DataSource: Changing the Key Definition (A version)) – provides a report where you can define a semantic key for any DataSources that are missing keys. DataSources without keys will cause an error when you try to generate the In-Memory DataStore Object in SAP HANA. Before applying this note to your SAP Business Suite system, first apply SAP Note 1701750 to any BW systems connected to the SAP Business Suite system you are using with DXC.
• 1710236 (SAP HANA DXC: DataSource Restrictions) – lists specific DataSources not supported by DXC.
• 1714852 (Troubleshooting SAP HANA DXC issues) – guidance for troubleshooting DXC issues.
Step 1: Enabling XSEngine and ICM Service
Ensure that the necessary technology components in SAP HANA are running, that is:
• the XSEngine
• the ICM service

If these services are not running, proceed as follows to enable them:
1. Open SAP HANA studio.
2. Right-click your SAP HANA instance and select Administration from the context menu to open the Administration perspective of your SAP HANA instance.
3. Select the Configuration tab. Expand the daemon.ini section and then the sapwebdisp section. For the instances parameter, a green light showing the number 1 in the Host column should be displayed.
4. If the value 1 is not displayed, double-click the instances parameter and change the configuration value from 0 to 1.
5. Click Save to save the new value.
6. Continuing in the daemon.ini section, expand the xsengine section. For the instances parameter, a green light showing the number 1 in the Host column should be displayed.
7. If the value 1 is not displayed, double-click the instances parameter and change the configuration value from 0 to 1.
8. Click Save to save the new value.

Check whether the XSEngine is running by accessing the following URL: http://<host>:80<instance_number>

The port in this URL may differ on your system, depending on the instance number of your installation.

If the XSEngine and ICM are operational, you should see the following information on your screen: SAP XSEngine is up and running.
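The URL check above can be scripted. The following is a small sketch, not SAP tooling: the host name is a placeholder, and the port rule (80 followed by the two-digit instance number, or 43<instance> for HTTPS) follows the convention described in the text.

```python
# Sketch: build the XSEngine check URL and interpret the response body.

def xsengine_url(host, instance="00", https=False):
    # HANA HTTP ports follow the pattern 80<instance> (HTTPS: 43<instance>)
    scheme = "https" if https else "http"
    port = ("43" if https else "80") + instance
    return f"{scheme}://{host}:{port}"

def xsengine_is_up(response_body):
    # The check page reports: "SAP XSEngine is up and running."
    return "XSEngine is up and running" in response_body

print(xsengine_url("hanahost", "00"))  # http://hanahost:8000
```

In practice you would fetch the URL (e.g. with `urllib.request.urlopen`) and pass the decoded body to `xsengine_is_up`.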
Step 2: Delivery Unit Import
Prerequisites
Prior to importing the delivery unit, prepare the delivery unit archive for import. Acquire the delivery unit archive from the SAP Service Marketplace at http://service.sap.com/swdc and save it to a location on your local computer.

Importing the delivery unit
1. Open SAP HANA studio and change to the SAP HANA modeler perspective.
2. On the Quick launch tab, in the Content section, choose Import.
3. In the Import dialog, expand the HANA Content node, choose Delivery Unit, and click Next.
4. Select Client and navigate to the location on your local computer where you have stored the delivery unit archive.
5. Select the delivery unit archive and click OK. The status in the Object import simulation should display green lights.
6. Keep the existing defaults and click Finish. In the lower right part of the screen you should see a Job Log tab with a progress indicator under Current. Once the delivery unit import is completed, the message Completed Successfully is displayed.
Step 3: Application Server Configuration
The use of the DXC application has to be enabled in the XSEngine component of SAP HANA. To do so, an entry has to be added to the application server configuration:
1. Open SAP HANA studio and select the instance of the SAP HANA database.
2. Open the Administration perspective and select the Configuration tab.
3. Expand the xsengine.ini section and then the application_container section.
4. Right-click the application_list parameter and select Change from the context menu.
5. In the New Value field, enter libxsdxc. If another value is already there, put a comma after the existing value and add the value libxsdxc, for example searchservice, libxsdxc.
6. Click Save.

To make the new configuration effective, the XSEngine has to be restarted:
1. In the Administration perspective of SAP HANA studio, choose the Landscape tab.
2. Right-click the xsengine service and select Stop. The status light will change to yellow and then to red.
3. Right-click the xsengine service again and select Start missing services.

Check whether DXC is operational. The DXC application should now be accessible using the following URL: http://<host>:80<instance_number>/dxc/dxc.xscfunc
If the check is successful, you will be prompted to save a file with the name dxc.xscfunc to your computer. The contents of this file are not important; the test is successful if calling this URL produces this file.
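The comma-append rule from step 5 can be sketched as a small helper. This is illustrative only; the parameter value `libxsdxc` comes from the text, the helper itself is hypothetical.

```python
# Sketch: append libxsdxc to an existing application_list value,
# comma-separated, without duplicating it if it is already present.

def add_dxc_to_application_list(current_value):
    entries = [e.strip() for e in current_value.split(",") if e.strip()]
    if "libxsdxc" not in entries:
        entries.append("libxsdxc")
    return ", ".join(entries)

print(add_dxc_to_application_list(""))               # libxsdxc
print(add_dxc_to_application_list("searchservice"))  # searchservice, libxsdxc
```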
Step 4: Creating a DXC User in SAP HANA
Create a user who has the privileges to execute the DXC extraction and load. Add the roles PUBLIC and MONITORING
Create a user who has the privileges to execute the DXC extraction and load. To create this user – in the following sections referred to as the DXC user – do the following:
1. Open SAP HANA studio and select your SAP HANA system.
2. In the navigation tree, select Catalog → Authorization → Users.
3. Right-click Users and select New User.
4. In the User Name field, enter an appropriate name, for example DXCUSER.
5. Select internal authentication, enter a password, and confirm the password. On the Granted Roles tab, add the roles PUBLIC and MONITORING. Click the green Deploy icon.
Step 5: Creating a DXC Schema in SAP HANA
Create a schema for use with the SAP HANA Direct Extractor Connection which is owned by the DXC user. You should create a distinct schema for each specific SAP Business Suite system that you will connect to this SAP HANA system with DXC.
1. In SAP HANA studio, execute the following SQL statement:
create schema <schema_name> owned by <user_name>
Example: create schema R3TSCHEMA owned by DXCUSER
2. Click Deploy or press F8 to create the schema.
Step 6: Create an HTTP Connection to the SAP HANA System
In the SAP Business Suite system, create an HTTP destination of type G using transaction SM59.
1. On the Configuration of RFC Connections screen, select the node HTTP Connections to External Server and click the Create icon.
2. Provide a name for the HTTP destination, for example DXC_HANA_CONNECTION.
3. On the Technical Settings tab, enter the target host name and the port number of your remote SAP HANA system in the Target Host and Service No. fields. The port number is 80<instance_number> of your SAP HANA database. In the Path Prefix field, enter /dxc/dxc.xscfunc.
4. On the Logon & Security tab, choose Basic Authentication and provide the user name and password of your DXC user.
5. Click Save.
Step 7: Configure DXC HTTP Interface Destination and maintain DataSource parameters
Configure the DXC HTTP Interface Destination
Create an entry in the table RSADMIN, which is the primary BW configuration table. In this table, the HTTP destination to the SAP HANA system created earlier is designated as the connection used for DXC.
1. Use transaction SA38 to execute the program SAP_RSADMIN_MAINTAIN.
2. In the OBJECT field, enter the value PSA_TO_HDB_DESTINATION.
3. In the VALUE field, enter the name of the HTTP destination you created, for example DXC_HANA_CONNECTION. Make sure that you use upper- and lowercase letters in the name of the HTTP destination correctly.
4. Click Insert, and then click Execute (F8) to create the table entry.

Choose the System-Wide Setting for DataSources
In this configuration step, you specify the extent of the use of DXC in the source SAP Business Suite system. Keep in mind that subsequent references to SAP BW typically refer to the embedded BW system which lives inside the SAP Business Suite system, unless you are using the "sidecar" approach discussed in the section Appendix – DXC System Landscape Variants: The "Sidecar" Approach.
The choice you make for this next configuration setting determines whether the normal BW functionality is available in the system you are using with DXC.
• If you are using the embedded BW, depending on your choice, the embedded BW could potentially become completely disabled.
• If you have a remote BW connected to this SAP Business Suite system, it would not be affected, no matter what choice you make in this section.
Nonetheless, if you are using the embedded BW for some purpose other than DXC (or you might in the future), the choice in this section is very important. Take time to discuss the implications of the choice you make here, and make the choice only after proper consideration.
1. Use transaction SA38 to execute the program SAP_RSADMIN_MAINTAIN.
2. Create an additional entry with the object PSA_TO_HDB. In the VALUE field, enter either the value GLOBAL, SYSTEM, or DATASOURCE, depending on the option which is best in your scenario:
• GLOBAL: All DataSources are available for use with DXC. When you choose this value, it is no longer possible to execute any BW processes or reports in the source system. Keep in mind that this refers to the BW system used with DXC (which is typically the embedded BW inside an SAP Business Suite system, which is very often not used). If you have a separate SAP BW system connected to this SAP Business Suite system, this setting has no impact (except in the "sidecar" scenario, where it does; see the section Appendix – DXC System Landscape Variants: The "Sidecar" Approach).
• SYSTEM: Only the specified clients are used with DXC. The remaining clients are available for DataSources to be extracted, transformed, and loaded into the PSA of the SAP BW system (typically this is the embedded BW).
• DATASOURCE: Only the specified DataSources are used with DXC. Any DataSources not specified can be extracted, transformed, and loaded into the PSA of the SAP BW system. If this SAP BW system (embedded BW or sidecar BW) is used for other purposes besides DXC, then choose this option. However, keep in mind that any DataSources you choose to be used by DXC cannot be used in this (embedded or sidecar) SAP BW system.
3. Once you have decided on the appropriate configuration setting, enter the text for that choice (for example DATASOURCE) and click Insert. Click Execute (F8) to create the table entry.

Create and Populate a Table to Specify the DataSources Used by DXC
The steps described in this section are only required if you choose the value DATASOURCE for the PSA_TO_HDB entry object.
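The routing implied by the PSA_TO_HDB setting can be modeled as follows. The three mode names come from the text; the function itself is a hypothetical sketch (SYSTEM mode routes by client, which this simplified per-DataSource view omits), not SAP code.

```python
# Sketch: does extraction of a given DataSource get redirected to HANA,
# depending on the PSA_TO_HDB value in RSADMIN?

def routed_to_dxc(mode, datasource, dxc_datasources=frozenset()):
    """True if extraction of this DataSource is redirected to SAP HANA.

    GLOBAL     -> everything goes to DXC (BW processing disabled)
    DATASOURCE -> only DataSources listed in the customer table go to DXC
    SYSTEM     -> routing depends on the client, which this sketch omits
    """
    if mode == "GLOBAL":
        return True
    if mode == "DATASOURCE":
        return datasource in dxc_datasources
    return False  # SYSTEM (or unconfigured): not decidable per DataSource alone

dxc_list = {"2LIS_11_VAITM"}
print(routed_to_dxc("DATASOURCE", "2LIS_11_VAITM", dxc_list))   # True
print(routed_to_dxc("DATASOURCE", "0MATERIAL_ATTR", dxc_list))  # False
```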
If you decide to use the DATASOURCE setting, then in order to be able to use specific DataSources with DXC, you have to create a customer-specific database table to list the DataSources to use with DXC.
1. Use transaction SE11 to create a new table. In the Database Table field, enter a custom table name, for example ZDXCDATASOURCES. In the Short Description field, provide a description for the table, for example DataSources for DXC.
2. Click Create.
3. On the Delivery and Maintenance tab, in the Delivery Class field, select C, and choose Display/Maintenance Allowed from the Data Browser/Table View Maint. list.
4. Change to the Fields tab.
5. Fill out the first table row as follows:
− In the Field column, enter the value DATASOURCE.
− Select the checkboxes in the Key and Initial Values columns.
− In the Data element column, enter the value ROOSOURCER.
6. Press ENTER to apply the changes and to change to the next table row.
7. Fill out the next table row as follows:
− In the Field column, enter the value LOGSYS. Select the checkbox in the Key column. Do not select the checkbox in the Initial Values column.
− In the Data element column, enter the value RSSLOGSYS.
8. Press ENTER to apply the changes.
9. Click Save. A dialog box appears where you can enter an appropriate customer package (starting with "Z") for the object directory entry. If you are not sure which package to use, consult with someone responsible for transporting objects in your system landscape.
10. In the next dialog box, assign this to a change request for eventual transport.
11. Click Technical Settings.
12. On the next screen, enter APPL2 in the Data Class field and choose 0 in the Size category field.
13. Click Save and then click the green circle with an arrow to return to the previous screen.
14. Click Activate to activate the table. Activation warnings can generally be ignored in this case.
Create table entries for the specific DataSources that you will use with DXC. You can add entries to this table later if you decide to work with additional DataSources.
1. Use transaction SE16, enter the table name you created in the previous section, for example ZDXCDATASOURCES, and click Create.
2. Enter the specific DataSource name and logical system name for the relevant client. There is no value help dropdown here, so you will need to know the exact technical name of the DataSource and logical system name, and enter them with proper spelling.
3. Repeat this action for all DataSources (and all associated relevant clients) that you want to use with DXC.
You have successfully created and populated a table to designate the specific DataSources / clients that you will use with DXC.

Indicate the Table Used to Specify the DataSources for Use with DXC
The steps described in this section are only required if you choose the value DATASOURCE for the PSA_TO_HDB entry object.
1. Use transaction SA38 to execute the program SAP_RSADMIN_MAINTAIN.
2. In the OBJECT field, enter the value PSA_TO_HDB_DATASOURCETABLE. In the VALUE field, enter the name of the table you created in the previous section to hold the specific DataSources you want to enable for DXC, for example ZDXCDATASOURCES.

Designate the Schema in SAP HANA to Store IMDSOs
Within the SAP HANA database, an In-Memory DataStore Object (IMDSO) is generated for each DataSource. An IMDSO is a set of tables with an activation mechanism. To make sure that the IMDSOs are generated in the appropriate schema in the SAP HANA database, assign the DXC schema to be used. Use the schema that you created in SAP HANA. Perform the following actions:
1. Use transaction SA38 to execute the program SAP_RSADMIN_MAINTAIN.
2. Create the entry object PSA_TO_HDB_SCHEMA. In the VALUE field, enter the name of the SAP HANA database schema to use, for example R3TSCHEMA.
3. Click Insert, and then click Execute (F8).
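The RSADMIN entries configured across this step can be summarized as a small checklist. The object names come from the text; the destination, table, and schema values are the examples used above, and the validator function is a hypothetical sketch.

```python
# Sketch: the RSADMIN objects DXC needs, and a check for missing ones.

rsadmin_entries = {
    "PSA_TO_HDB_DESTINATION": "DXC_HANA_CONNECTION",  # HTTP destination (SM59)
    "PSA_TO_HDB": "DATASOURCE",                 # GLOBAL | SYSTEM | DATASOURCE
    "PSA_TO_HDB_DATASOURCETABLE": "ZDXCDATASOURCES",  # only in DATASOURCE mode
    "PSA_TO_HDB_SCHEMA": "R3TSCHEMA",           # target schema for IMDSOs
}

def missing_dxc_entries(entries):
    """Return the RSADMIN objects still missing for the chosen mode."""
    required = ["PSA_TO_HDB_DESTINATION", "PSA_TO_HDB", "PSA_TO_HDB_SCHEMA"]
    if entries.get("PSA_TO_HDB") == "DATASOURCE":
        # DATASOURCE mode additionally needs the customer table entry
        required.append("PSA_TO_HDB_DATASOURCETABLE")
    return [obj for obj in required if not entries.get(obj)]

print(missing_dxc_entries(rsadmin_entries))  # []
```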
Step 8: Configuration Steps Specific to SAP Business Warehouse – Replicate DataSources

Result in HANA: an IMDSO is generated.
The next steps involve working in the BW Data Warehousing Workbench (transaction RSA1). Consider the client to use as the BW client carefully. Be aware that once you have decided which client in your system is the BW client, transaction RSA1 and other BW-related functions cannot be used in any other client. It is very difficult to change to another client later, as once you have executed transaction RSA1, many configurations are performed automatically.
In order to be able to transfer data, you have to create source systems for any clients in the SAP Business Suite system that should be able to extract data and load it to the SAP HANA database. For further information about creating source systems for this purpose, refer to the SAP NetWeaver Business Warehouse documentation at http://help.sap.com/nw_platform (Application Help → SAP Library → SAP NetWeaver Business Warehouse).
1. Right-click the source system you are working with and select Replicate DataSources. If prompted about the type of DataSource, choose DataSource for all; do not choose the old 3.x type DataSource. All DataSources that have been installed using transaction RSA5 are transferred to the Data Warehousing Workbench.
2. Right-click the DataSource(s) you want to work with and select Change.
3. In the Change DataSources dialog, click the Activate icon. This creates an In-Memory DataStore Object (IMDSO) in the SAP HANA database that corresponds to the DataSource structure.
You can see the In-Memory DataStore Object in the Modeling perspective of SAP HANA studio:
1. Log on to SAP HANA studio with the DXC user.
2. In the Modeling perspective, locate and expand the schema that you created for this source system, for example R3TSCHEMA.
3. Expand the Tables folder and you should see the tables that make up the IMDSO. The table names include the DataSource name, with the following naming convention:
• /BIC/A<DataSource>00 – the active data table. This table ends up storing all of the data that gets loaded into this IMDSO from DXC. It is the one to use in SAP HANA data modeling: a base (columnar) database table that can be used in attribute views, analytic views, calculation views, etc.
• /BIC/A<DataSource>40 – the activation queue table. When a DXC extraction/load job executes, it loads the entire series of data packages for this job into this activation queue; in a separate step, they are then activated into the active data table. This activation mechanism preserves the proper sequence of records and ensures that all delta processing types (change data capture) are handled appropriately.
• /BIC/A<DataSource>70, /BIC/A<DataSource>80, /BIC/A<DataSource>AO, etc. – technical tables used to control the activation process.
If you expect a significant data volume to accumulate in a particular IMDSO, for performance reasons it makes sense to partition the active data table of that IMDSO. For information on partitioning the active data table of an IMDSO, refer to SAP Note 1714933.
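The naming convention above can be expressed as a small helper, useful for example when scripting checks against the HANA catalog. The suffixes (00, 40, 70, 80, AO) come from the text; the function itself is an illustrative sketch.

```python
# Sketch: derive the IMDSO table names for a given DataSource,
# following the /BIC/A<DataSource><suffix> convention described above.

def imdso_tables(datasource):
    base = f"/BIC/A{datasource}"
    return {
        "active_data": base + "00",       # use this table in HANA data models
        "activation_queue": base + "40",  # loaded first, then activated
        "technical": [base + "70", base + "80", base + "AO"],  # control tables
    }

tables = imdso_tables("2LIS_11_VAITM")
print(tables["active_data"])  # /BIC/A2LIS_11_VAITM00
```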
Step 9: Create InfoPackages
In order to be able to load the DataSources into the SAP HANA database, you have to create InfoPackages for the DataSources.
In some cases, delta processing (change data capture) is feasible. In this case, create the following InfoPackages:
• One InfoPackage for the delta initialization
• One InfoPackage for the regular delta data loads
Otherwise, if delta processing is not available, create an InfoPackage for a full load, or several InfoPackages for full loads using selection criteria.
After an InfoPackage has been created, schedule it to load data into your IMDSO in the SAP HANA database:
1. In the Data Warehousing Workbench, right-click the DataSource you are working with and select Create InfoPackage.
2. In the Create InfoPackage dialog, select the appropriate options and click Save.
3. On the Schedule tab, select the appropriate time for the job to execute, and click Start to schedule the extraction job.
Step 10: Scheduling and Monitoring
Monitor Data Load in the Source SAP Business Suite System Verifying Data Transfer in the SAP HANA Database Create a Process Chain for Regular Data Transfer
After the data load is started, you can monitor the status of the InfoPackages.
1. In the Data Warehousing Workbench, select the InfoPackage you want to monitor and click the Monitor icon. The Monitor - Administrator Workbench dialog appears.
2. Click the Status tab to get detailed information about the data load. The data is loaded into the activation queue table of the corresponding IMDSO in the SAP HANA database, although a status message is displayed stating that the request was successfully loaded to PSA.
3. Click the Details tab to get detailed information about the records that have been transferred. You can navigate through the structure of the processed data packets:
1. Look for Processing (Data Packets), and choose a data package.
2. Expand the node Update PSA. You should see a message like this: "Data package 1 saved to remote SAP HANA DB". You can also expand the node Subsequent Processing; errors in the activation process are displayed here.
After the data for a given extraction job has been successfully loaded into the activation queue of the IMDSO, the data is immediately activated into the active data table of the IMDSO. The order in which data is loaded is important for data consistency; therefore, all subsequent data activation for an IMDSO is blocked if a (failed) request is still present in the activation queue table of the IMDSO. If you encounter such an activation failure, refer to SAP note 1665553.
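A quick way to verify on the SAP HANA side that a load and activation completed is to count rows in the two IMDSO tables. This is a sketch using the example schema and DataSource names from this unit; adjust them for your system:

```sql
-- After a successful activation the queue should be empty
-- and the records should appear in the active data table.
SELECT COUNT(*) AS QUEUED
  FROM "R3TSCHEMA"."/BIC/A0VER_SCARR_ATR40";  -- activation queue

SELECT COUNT(*) AS ACTIVATED
  FROM "R3TSCHEMA"."/BIC/A0VER_SCARR_ATR00";  -- active data table
```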
Step 11: Monitoring the Activation Process of IMDSO
Most of the monitoring tasks are performed using the BW monitoring features in the embedded BW of the SAP Business Suite system (or in the attached BW in the "sidecar" scenario described in the appendix DXC System Landscape Variants: The "Sidecar" Approach). InfoPackage monitoring and process chain monitoring cover nearly all of the processing steps involved in extracting data from the source system and loading it into the activation queue of the SAP HANA IMDSO. The only step that is not handled by this type of monitoring is the activation processing of the IMDSO. Since this action takes place solely inside SAP HANA, its monitoring is decoupled from the other processes, which are driven from the BW S-API. SAP HANA provides a monitoring view that holds the status of IMDSO activation. This view resides in the schema SYS; its technical name is M_EXTRACTORS. You can view the status information as follows:
1. Log on to SAP HANA studio with the DXC user.
2. Expand the Catalog node, and then expand the SYS node.
3. Locate the M_EXTRACTORS view.
The view is displayed with its description first. To locate the view, search for the following description:
• Direct extractor connection (DXC) status information (M_EXTRACTORS)
It might be easier to move the pane divider to the right and look for the technical name M_EXTRACTORS.
4. Right-click the M_EXTRACTORS view, and select Open Content.
5. Look for the DataSource you want to monitor. The activation status table of the IMDSO is displayed in the Table Name column with the naming convention /BIC/A<DataSource>AO, for example /BIC/A0VER_SCARR_ATRAO.
6. Check the value in the Status column; successful activations have the value OK.
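Instead of browsing the catalog, the same status information can presumably be queried directly with SQL. The column names TABLE_NAME and STATUS are assumptions inferred from the Table Name and Status columns mentioned above:

```sql
-- Check DXC activation status for all IMDSOs
-- (column names assumed from the UI labels).
SELECT TABLE_NAME, STATUS
  FROM "SYS"."M_EXTRACTORS"
 WHERE TABLE_NAME LIKE '/BIC/A%AO';
```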
Step 12: Set Up Email Alerting for the Activation Process
The SAP HANA statistics server features alerting for various aspects of operating the system; it includes an automatic email alerting mechanism to inform designated administrators of issues arising within SAP HANA. The statistics server includes a check in which the records of the M_EXTRACTORS view are evaluated at a regular interval (once every 15 minutes) to determine whether any activations have failed. In the event of a failed activation, administrators can receive an email informing them about the failure, so that appropriate corrective action can be taken. To set up email alerting for the failure of an IMDSO activation, proceed as follows:
1. In SAP HANA studio, change to the Administration perspective, and choose the Alerts tab.
2. Click the Configure Check Settings icon.
3. In the Configure Check Settings dialog, enter the appropriate values in the following fields:
• Sender Email Address
• SMTP Server
• SMTP Port
Typically, an email address is created in the company email system specifically for sending these alerts. Note that these are general settings for all types of statistics server alerts in SAP HANA. Normally you can skip the section Recipients Email Address for All Checks (unless you want to get emails for any type of error condition in SAP HANA).
4. Click Configure Recipients for Specific Checks.
5. Select the checkbox for Check In-Memory DataStore Object Activation.
6. Click Add Recipients.
7. In the subsequent dialog, type in the email address of any administrators or specialists who should get an email if an In-Memory DataStore Object activation fails.
8. Click OK, and then click OK in the Configure Check Settings dialog.
Summary Direct Extractor Connection
You should now be able to: Explain SAP HANA Direct Extractor Connection Setup & Configuration
Unit 11: SAP HANA Direct Extractor Connection Overview Rationale SAP HANA Direct Extractor Connection (DXC) SAP HANA Direct Extractor Connection Details SAP Business Content DataSource Extractors SAP HANA Direct Extractor Connection Setup & Configuration Comparison with other SAP HANA Data Acquisition Techniques Appendix: DXC Sidecar Variation
Objectives Direct Extractor Connection
At the end of this Lesson you will be able to: Explain the Comparison with other SAP HANA Data Acquisition Techniques
Overview Direct Extractor Connection
This module covers the following topics: Comparison with other SAP HANA Data Acquisition Techniques
Comparison with other SAP HANA Data Acquisition Techniques Contrast with Data Services

Direct Extractor Connection
• ETL type: Simple and straightforward ETL approach; no "premium" features
• SAP DataSources: Available for all SAP Business Content DataSources (Extractors) and Generic DataSources with a defined key; a key can be defined if missing
• Support Package requirement: A Support Package (released March 2008) is required in the source SAP Business Suite system; DXC is implemented by applying a special SAP note
• Delta handling (change data capture): Yes, for all SAP Business Content DataSources and all delta processing types; uses an In-Memory DSO with activation processing
• Software: Uses existing components in SAP HANA (XS Engine, ICM); a configuration file is imported into SAP HANA
• Transformations: Very limited; a BAdI (ABAP) in the extraction exit is available. When extensive transformations are required, it is recommended to use Data Services

SAP Data Services
• ETL type: Sophisticated ETL tool with extensive valuable features (data quality, metadata management, transformations, etc.)
• SAP DataSources: Available for SAP Business Content DataSources (Extractors), limited to the subset of DataSources released to the Operational Data Provider
• Support Package requirement: A Support Package (released March 2011) must be applied to the source SAP Business Suite system; see SAP note 1522554
• Delta handling (change data capture): Yes, except for SAP Business Content DataSources with delta processing types AIM, AIE, AIED, AIMD, ADD, ADDD, CUBE
• Software requirement: BusinessObjects Enterprise and Data Services are required
• Transformations: Extensive transformation capabilities available in the Data Services ETL tool
Comparison with other SAP HANA Data Acquisition Techniques Contrast with SLT

Direct Extractor Connection
• Type: Batch-driven ETL. Data comes from SAP delivered Business Content DataSource extractors
• Real-Time: No. Once every 15 minutes approx. is the theoretical maximum (depends on the DataSource)
• Delivered Foundational Data Models for SAP Entities: Yes
• Semantically Rich Data: Yes, via SAP delivered Business Content DataSource extractors
• System Landscape: Nothing added; uses existing components in SAP HANA
• Project Acceleration for Data Marts in HANA: Significant benefit
• Transformations: Limited; BAdI in extraction exit available

SAP Landscape Transformation
• Type: Trigger-based table replication. Data comes from base tables of SAP Business Suite systems
• Real-Time: Yes. Expected lag time typically less than 3 seconds
• Delivered Foundational Data Models for SAP Entities: Generally no; some RDS content available
• Semantically Rich Data: No. Semantics must be implemented on a project basis in SAP HANA
• System Landscape: SAP NetWeaver 7.01 instance (SLT) required
• Project Acceleration Building Data Marts in HANA: Available with some RDS packages
• Transformations: Some available as of SP3
Comparison with other SAP HANA Data Acquisition Techniques Supported Capability Matrix – Part 1 – Data from Tables
* See SAP Note 1513496 for official release limitations
Further Information
In the following you will find a few clarifying comments about further considerations.
Type of Data
It makes sense to consider the type of data the DataSource provides in the context of your use case. The SAP HANA Direct Extractor Connection (DXC) is available for all SAP Business Content DataSources. There are a small number of DataSources related to inventory data, however, that pose challenges when working with an SAP HANA appliance, since the SAP HANA appliance does not have a concept like BW's "non-cumulative key figures". There are special features in BW designed for working with inventory data that are not available natively in SAP HANA. For example, a DataSource like 2LIS_03_BF (Material Movements) is not well suited for use with DXC, since it provides data that essentially requires the special inventory features that BW provides. In such use cases (inventory data), we recommend working with BW itself instead of the SAP HANA appliance. Of course, BW on HANA offers its own set of benefits. For a list of DataSources not supported with DXC, refer to SAP note 1710236.
Also, some SAP Business Content DataSources do not provide delta handling (change data capture). This is not particularly problematic for DXC, but you should be aware that the In-Memory DataStore Object (IMDSO) generated by DXC does not include a change log; therefore, the IMDSO itself cannot generate delta datasets for use by another data mart in the SAP HANA appliance (unlike in BW, no layering concept exists in the SAP HANA appliance). While DXC certainly works with DataSources that do not have a delta mechanism, in some cases this may mean long-running extraction jobs transferring large datasets to the SAP HANA appliance. Identical records will not be duplicated, and new records will be added; but be aware that if you delete transactional records in the Business Suite system (in cases where the DataSource does not offer delta handling), the deletion will not be propagated through to the IMDSO in the SAP HANA appliance.
DataSources Without Keys
One requirement for DXC is that DataSources or extract structures have a unique semantic key defined. This is important for DXC's features in SAP HANA, since the IMDSO creates database tables for which a primary key is necessary. If you try to activate a DataSource for DXC that does not have a key, you will get an error message. If you run into this issue, refer to SAP notes 1677278 and 1701750. It is important to apply SAP note 1701750 to any BW system connected to the source SAP Business Suite system you are working with before applying SAP note 1677278.
Conclusion
We encourage you to explore DXC's value for a significant number of use cases. Its simplicity, as well as its provision of foundational models from SAP Business Suite systems that offer semantically rich data in a straightforward manner, offers unique advantages for your SAP HANA project.
Comparison with other SAP HANA Data Acquisition Techniques Supported Capability Matrix – Part 2 - Extractors
* See SAP Note 1513496 for official release limitations
Summary Direct Extractor Connection
You should now be able to: Explain the Comparison with other SAP HANA Data Acquisition Techniques
Unit 11: SAP HANA Direct Extractor Connection Overview Rationale SAP HANA Direct Extractor Connection (DXC) SAP HANA Direct Extractor Connection Details SAP Business Content DataSource Extractors SAP HANA Direct Extractor Connection Setup & Configuration Comparison with other SAP HANA Data Acquisition Techniques Appendix: DXC Sidecar Variation
Objectives Direct Extractor Connection
At the end of this Lesson you will be able to: Explain the DXC Sidecar Variation
Overview Direct Extractor Connection
This module covers the following topics: DXC Sidecar Variation
Appendix: DXC Sidecar Variation
SAP HANA Direct Extractor Connection: Sidecar Rationale
Customers with older Business Suite systems (lower than those based on SAP NetWeaver 7.0, e.g. ERP 4.7 or lower), or customers who do not want to use the embedded BW, now have an alternative: the Sidecar variation. Customers with a connected BW system can use that BW as the "bridge" between ERP and HANA (via DXC) – the same mechanism as with the embedded BW, but external to the source ERP system.
DXC utilizing a sidecar BW 7.x or ORANGE (BW on HANA):
• Data flow is redirected from within the BW 7.x sidecar or ORANGE (BW on HANA) – data is not loaded into BW, it gets redirected to HANA
• No concurrent consumption (a DataSource can be used either in the connected BW or with DXC – it cannot be used for both!)
• Apply SAP note 1583403 in BW
• Apply SAP note in ERP (minor enhancement, minimum risk)
The default configuration relies on the "embedded BW" that exists inside SAP NetWeaver 7.0 or higher (e.g. ECC 6.0). However, in some cases customers may be interested in implementing DXC with an SAP Business Suite system that is older and therefore not based on SAP NetWeaver 7.0 or higher (e.g. 4.6C). Another use case might be that the "embedded BW" is already in use; as a consequence, the customer might be reluctant to use it for this purpose. There may also be a general preference to avoid using the embedded BW system of an SAP Business Suite system, even though it is primarily used for scheduling and monitoring of extraction jobs in this scenario. To enable DXC when such conditions exist, DXC can be implemented with a "sidecar" approach. This means that, instead of using the "embedded BW" inside the SAP Business Suite system, a separate connected BW system is used as an intermediary for scheduling and managing the extraction jobs in the connected SAP Business Suite system, which sends the extracted data directly to SAP HANA. Extracted data is not loaded into the connected SAP BW system; instead the data flow is redirected to the SAP HANA system.
Appendix: DXC Sidecar Variation
SAP HANA DXC Concept: Illustration – Embedded BW
[Figure: SAP ERP with embedded BW on the left, SAP HANA on the right. The DataSource (flat structure) with its extractor delivers ERP data via the generic data transfer; instead of being loaded into the embedded BW's PSA, InfoCubes, DataStore Objects, and InfoObjects, the data flow is redirected and transferred over an HTTP connection into the In-Memory DSO in SAP HANA (activation queue → activation processing → active version, with status tracking). The load into the HANA activation queue and the separate activation step are driven by scheduled batch jobs. Data models in SAP HANA are built using the active data table of the In-Memory DataStore Object.]
Appendix: DXC Sidecar Variation
SAP HANA DXC Concept: Illustration – Standalone BW
[Figure: SAP ERP on the left, a standalone BW 7.x system in the middle, SAP HANA on the right. The DataSource (flat structure) with its extractor delivers ERP data via the generic data transfer to the sidecar BW; instead of being loaded into the BW's PSA, InfoCubes, DataStore Objects, and InfoObjects, the data flow is redirected to HANA and transferred over an HTTP connection into the In-Memory DSO (activation queue → activation processing → active version, with status tracking). The load into the HANA activation queue and the separate activation step are driven by scheduled batch jobs.]
Another similar variation is available to customers running “SAP NetWeaver BW Powered by SAP HANA” (aka BW on HANA). This illustration depicts another variation of the “sidecar” approach.
Summary Direct Extractor Connection
You should now be able to:
• Give an overview of, and explain the rationale for, SAP HANA Direct Extractor Connection (DXC)
• Explain the SAP Business Content DataSource Extractors
• Explain SAP HANA Direct Extractor Connection Setup & Configuration
• Explain the comparison with other SAP HANA data acquisition techniques
HA300 Exercise Manual Collection 97 SPS4 V2
Contents
1 System Preparation ... 3
  1.1 System Preparation SAP HANA Studio ... 3
  1.2 System Preparation Resources Perspective ... 8
  1.3 Preparation: Display Data from Models ... 9
2 Exercises ... 11
  Exercise 1: Creating Attribute Views ... 11
  Exercise 2: Using Hierarchies ... 19
  Exercise 3: Creating Restricted & Calculated Measures ... 24
  Exercise 4: Using Filter Operations ... 29
  Exercise 5: Using Variables ... 31
  Exercise 6: Creating Calculation Views ... 36
  Exercise 7: Working with SQL statements ... 38
  Exercise 8: Creating a simple Calculation View of type SQL Script ... 41
  Exercise 9: Creating a Calculation View of type SQL Script using CE_FUNCTIONs ... 43
  Exercise 10: Creating Procedures ... 46
  Exercise 11: Creating a Calculation View of type SQL Script using procedures ... 49
  Exercise 12: Currency Conversions in Analytical Views ... 52
  Exercise 13: Using the Fuzzy Search functionality ... 56
  Exercise 14: Processing Information Objects ... 59
  Exercise 15: Managing Modeling Content ... 60
  Exercise 16: Working with SAP HANA users and roles ... 63
  Exercise 17: Creating Roles and Analytic Privileges ... 65
  Exercise 18: Start, Suspend, Resume the replication ... 68
  Exercise 19: Data Acquisition using SAP Data Services ... 70
  Exercise 20: Creating an Extractor Connection (DXC) ... 76
1 System Preparation
1.1 System Preparation SAP HANA Studio
1. To log on to the WTS landscape, go to:
2. Start Menu > Common-Training.
3. Remote Desktop into the server:
   • Create a remote desktop connection to another desktop (connection information will be given by the instructor).
   • Start the Remote Desktop Connection as shown in the screenshot on the right side.
   • Use the path: Start Menu -> Remote Desktop Connection.
4. In the next screen you have to choose one of the available remote desktop servers. Note: type the name of the server specified by your instructor.
   • Click > Connect.
5. In the Remote Desktop Connection dialog box enter user name and password given by the instructor.
6. Open the Modeler.
   Hint: Hit 'x' to close the overview screen.
7. From the Studio preferences, change the network setting to Direct: go to Window > Preferences > General > Network Connections.
8. Register a new system: right-click > Add System... within the Navigator view.
   • Enter the Hostname, Instance Number and Description as given by the instructor.
   • Click > Next.
9. Enter your credentials as given by your instructor.
10. Change your initial password:
   • Minimum 8 characters
   • Must contain capital and small letters
   • Must contain numbers
   • Example: Abcd1234
   Hint: If prompted for a security fallback for your password, click No.
   Note: If your HANA system is not properly connected to the SAP HANA Studio (status is red), choose Refresh from the context menu. If necessary, close and reopen the SAP HANA Studio.
11. Create a new package. Select the Content folder > Right Click > New Package.
12. Enter studentXX for the package name and description. Enter your user ID studentXX for “Person Responsible”. You do not need to assign a Delivery Unit at the moment. This will be covered later in the exercises.
13. As a result you will see the newly created package. Please note that the folder structure is initially empty until you start to create new objects.
1.2 System Preparation Resources Perspective
On the top right hand side of the SAP HANA Studio click on the icon and open the RESOURCES perspective.
Go to START > My Documents > Content transport on kpstransfer
Log in using the following credentials: Username: training Pwd: hanareadonly Then navigate to the following directory and copy the path
In the Resources perspective, right click in Project Explorer and choose IMPORT from the context menu.
On the next screen, paste the path into Select root directory and click on the white space to choose the project.
You should now have all the course files.
1.3 Preparation: Display Data from Models
Execute a prepared SQL command to grant SELECT on your schema to _SYS_REPO:
1. Go to the RESOURCES perspective, right click, and open the file 001_Preparation.sql.
2. In SAP HANA Studio, on the top right hand side, click the icon to choose a connection. Click the Catalog of the connection and click OK.
3. Change the generic entry STUDENT## to your user ID (where ## represents your two digit group number, e.g. STUDENT02).
4. Execute, using the execute icon or by pressing F8.
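The prepared script presumably contains a statement along the following lines; STUDENT00 is a placeholder for your own schema. _SYS_REPO needs SELECT on the schema so that activated models can read its tables:

```sql
-- Sketch of the grant in 001_Preparation.sql (schema name is a placeholder).
GRANT SELECT ON SCHEMA "STUDENT00" TO _SYS_REPO WITH GRANT OPTION;
```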
2 Exercises
Exercise 1: Creating Attribute Views
Unit 3: Advanced Modeling
At the conclusion of this exercise, you will be able to:
• Create and use derived attribute views
• Use shared attribute views
• Create and use calculated attributes in attribute views
You are at a customer site where EPM data is available. Information about the business partners is spread across several tables. The goal is to regroup the business partner information and to analyze purchase orders by product and partner. You have therefore been asked to build attribute views and analytic views for HANA. This section takes you through the process of creating the attribute views and analytic views to be used for reporting.
1-1 Close all open views prior to creating a new attribute view for Business Partners.
1-1-1 Create an attribute view AT_HA300_UNIT3_BP_TRAINXX (XX being your student number), with description "Business Partners". Use your package STUDENTXX. (If it does not exist, please create a new one.) Attribute Type: Standard. Press Next.
1-1-2 Add the following tables, from the EPM_MODEL schema, to define the view and click Finish when done:
EPM_MODEL.SNWD_BP
EPM_MODEL.SNWD_BP_PH
EPM_MODEL.SNWD_CONTACT
EPM_MODEL.SNWD_BP_EM
1-1-3 Join the tables with a referential join and cardinality 1:1. SNWD_BP is considered the left table, the others right tables. Join the field CLIENT of the left table to the CLIENT field of each right table. Join the field NODE_KEY of the left table to the field PARTNER_GUID of each right table.
1-1-4 Define the following attributes from table SNWD_BP:
BP_ID Key Attribute
CLIENT Attribute
NODE_KEY Attribute
1-1-5 Define the following fields from table SNWD_CONTACT as attributes: FIRST_NAME, LAST_NAME, LANGUAGE.
Define the following fields from table SNWD_BP_PH as attributes: PHONE_NUMBER, PHONE_EXTENSION.
Define the following field from table SNWD_BP_EM as an attribute: EMAIL_ADDRESS.
1-1-6 Activate.
1-1-7 View the logs. Dependent objects that are already active do not get reactivated. From the context menu of AT_HA300_UNIT3_BP_TRAINXX, view the results by selecting Data Preview. Close.
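The referential joins defined in steps 1-1-3 to 1-1-5 correspond roughly to the following SQL. This is a sketch of the join semantics for checking your understanding, not the SQL the activation actually generates; the column list is abbreviated:

```sql
SELECT bp.BP_ID,
       c.FIRST_NAME, c.LAST_NAME,
       ph.PHONE_NUMBER,
       em.EMAIL_ADDRESS
  FROM "EPM_MODEL"."SNWD_BP"      bp
  JOIN "EPM_MODEL"."SNWD_BP_PH"   ph
    ON ph.CLIENT = bp.CLIENT AND ph.PARTNER_GUID = bp.NODE_KEY
  JOIN "EPM_MODEL"."SNWD_CONTACT" c
    ON c.CLIENT  = bp.CLIENT AND c.PARTNER_GUID  = bp.NODE_KEY
  JOIN "EPM_MODEL"."SNWD_BP_EM"   em
    ON em.CLIENT = bp.CLIENT AND em.PARTNER_GUID = bp.NODE_KEY;
```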
1-2 Create an attribute view for Products.
1-2-1 Create an attribute view AT_HA300_UNIT3_PD_TRAINXX (XX being your allocated student number), with description "Products". Attribute type: Standard.
1-2-2 Add table EPM_MODEL.SNWD_PD. Finish.
1-2-3 Define the following fields:
PRODUCT_ID Key Attribute
CLIENT Attribute
NODE_KEY Attribute
TYPE_CODE Attribute
CATEGORY Attribute
SUPPLIER_GUID Attribute
1-2-4 Save and Activate.
1-2-5 From the context menu of AT_HA300_UNIT3_PD_TRAINXX, view the results by selecting Data Preview. Close.
1-3 Create an Analytic View for Purchase Orders.
1-3-1 Create an analytic view AN_HA300_UNIT3_PO_TRAINXX (XX being your allocated student number), with description "Purchase Orders". Schema for Currency Conversion: TCUR. Press Next.
1-3-2 Add the following tables:
EPM_MODEL.SNWD_PO
EPM_MODEL.SNWD_PO_I
Press Next.
1-3-3 Add the following attribute views from your package, studentXX:
AT_HA300_UNIT3_BP_TRAINXX and AT_HA300_UNIT3_PD_TRAINXX
Click Finish. A popup for the Aliases Proposal will appear: choose OK.
CLIENT and NODE_KEY are fields that exist in both attribute views, so the system sees them as duplicate entities. Make further adjustments in the Output section of this analytic view. To manually adjust the proposal, add a unique alias name to CLIENT and NODE_KEY in one of the attribute views:
Under the Attribute Views folder of the output pane, select AT_HA300_UNIT3_PD_TRAINXX.
From the Properties tab for CLIENT, change the value for Alias Name and Description to CLIENT_PD_ALIAS.
From the Properties tab for NODE_KEY, change the value for Alias Name and Description to NODE_KEY_PD_ALIAS.
Save your changes. 1-3-4
In the Data Foundation, join the two tables. Table SNWD_PO is the left table. Join the field CLIENT from SNWD_PO to the CLIENT field in SNWD_PO_I. Join the field NODE_KEY (SNWD_PO) with the field PARENT_KEY (SNWD_PO_I).
For the join properties, select Referential join and a cardinality of 1:n for both joins. 1-3-5
From the table SNWD_PO, define the following fields as Attributes: CLIENT, PO_ID, PARTNER_GUID. In Output, under the Private Attributes folder, rename CLIENT (SNWD_PO.CLIENT) to Client_BP, for Business Partner client. From the table SNWD_PO_I, define the following fields: CLIENT Attribute, PO_ITEM_POS Attribute, PRODUCT_GUID Attribute, GROSS_AMOUNT Measure, NET_AMOUNT Measure. In Output, under the Private Attributes folder, rename CLIENT (SNWD_PO_I.CLIENT) to Client_PD, for Product client.
1-3-6
Save your Analytic View.
1-3-7
In the Logical View, join the data foundation to the Product view on the following fields, using a referential join and cardinality n:1: CLIENT_PD to CLIENT, PRODUCT_GUID to NODE_KEY ("Product" view). Then join the data foundation to the Business Partner view on the following fields, using a referential join and cardinality n:1: CLIENT_BP to CLIENT, PARTNER_GUID to NODE_KEY ("Partner" view).
1-3-8
Save and Activate. View the Log. From the context menu of AN_HA300_UNIT3_PO_TRAINxx, select Data Preview to review the contents. Close.
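The two data-foundation joins you defined in step 1-3-4 can be pictured as plain SQL. This is only a conceptual sketch — the Analytic View engine prunes referential joins when no fields from one side are requested, so the generated plan is not literally this statement:

```sql
-- Conceptual equivalent of the two 1:n referential joins between
-- the purchase order header (SNWD_PO) and item (SNWD_PO_I) tables.
SELECT p.PO_ID,
       i.PO_ITEM_POS,
       i.GROSS_AMOUNT,
       i.NET_AMOUNT
FROM   SNWD_PO   AS p          -- left table (header)
JOIN   SNWD_PO_I AS i          -- right table (items)
       ON  p.CLIENT   = i.CLIENT
       AND p.NODE_KEY = i.PARENT_KEY;
```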
1-4 Create a derived attribute view from the "Product" view. 1-4-1
Try to add the Attribute View, AT_HA300_UNIT3_PD_TRAINXX, to the analytic view AN_HA300_UNIT3_PO_TRAINXX again, by dragging the Attribute View from the Navigator to the Logical View of the Analytic View. What is the result? (Not possible.) Note: To add another Attribute View to an existing Analytic View, just select it in the Navigator and drag it into the Logical View of the Analytic View. If you try to drag an Attribute View into an Analytic View that already contains it, it will not work. If your model requires two identical Attribute Views in the same Analytic View, you have to create a Derived Attribute View with aliases. 1-4-2
Create a new Attribute View, AT_HA300_UNIT3_PD_DERIVED, derived from AT_HA300_UNIT3_PD_TRAINXX. Note: To create a Derived Attribute View, select the Derived radio button in the New Attribute View screen. Can you modify the derived attribute view? (No. Derived Attribute Views are Read-Only.)
1-4-3
Save. Activate the Derived Attribute View, AT_HA300_UNIT3_PD_DERIVED.
1-4-4
Add the new Derived Attribute View to the Analytic View, AN_HA300_UNIT3_PO_TRAINXX, by dragging it from the Navigator to the Logical View in the Analytic View. Join it to the Data Foundation by mapping the following fields: Client_PD to CLIENT, PRODUCT_GUID to NODE_KEY. (The Data Foundation is on the left.) Create aliases by adding "DERIVED" to the Alias Name and Alias Description for each field of the derived Attribute View. For example, for CLIENT:
When completed, all fields in the Derived Attribute have aliases:
Can you modify the join between the data foundation and the derived attribute view? (Yes.) 1-4-5
Save. Activate.
1-4-6
View the results. Close.
1-5 Create a new Analytic View for Sales Orders and reuse the shared Product Attribute view. 1-5-1
Create an Analytic View, AN_HA300_UNIT3_SO_TRAINXX, with description "Sales Order". Schema for Currency Conversion: TCUR. Next.
1-5-2
Add the table EPM_MODEL.SNWD_SO_I. Finish.
1-5-3
Add the following fields: CLIENT Attribute, SO_ITEM_POS Attribute, CURRENCY_CODE Attribute, PRODUCT_GUID Attribute, GROSS_AMOUNT Measure, NET_AMOUNT Measure.
1-5-4 In the Logical View, drag the Product Attribute View, AT_HA300_UNIT3_PD_TRAINXX, into the Analytic View. Join the Data Foundation (on the left) to the Product Attribute View with the following fields: CLIENT to CLIENT (Product Attribute View), PRODUCT_GUID to NODE_KEY (Product Attribute View). 1-5-5
Save. Validate. Activate.
1-5-6
View the results. Close.
1-6 (Optional) Modify the "Partner" view and create a calculated attribute that merges the phone extension and the phone number. 1-6-1
Open AT_HA300_UNIT3_BP_TRAINxx.
1-6-2
Create a new calculated attribute and name it PHONE_NUMBER_EXTENSION with Data type VARCHAR and Length 20.
1-6-3
Define the following formula: if PHONE_EXTENSION is null, take PHONE_NUMBER; otherwise, merge PHONE_EXTENSION with PHONE_NUMBER.
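One way to write this in the calculated attribute Expression Editor is sketched below. Treat it as an assumption about the expression syntax of your SAP HANA revision: the if/isnull functions and the '+' string concatenation come from the column engine expression language, and the '-' separator is a free choice, not part of the exercise:

```
if(isnull("PHONE_EXTENSION"),
   "PHONE_NUMBER",
   "PHONE_NUMBER" + '-' + "PHONE_EXTENSION")
```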
1-6-4
Validate. Save. Activate.
1-6-5
View the result.
1-6-6
Now open the Attribute View again. Hide the fields PHONE_NUMBER and PHONE_EXTENSION, by setting the Hidden property to True for these 2 fields.
1-6-7
Save and Activate. View the results. Close.
Exercise 2: Using Hierarchies Unit 3 : Advanced Modeling At the end of this exercise, you will be able to: • Create and Implement Leveled Hierarchies. • Create and Implement Parent Child Hierarchies.
You are at a customer site where EPM data is available. Information about the Business Partners is spread across several tables. The goal is to consolidate the Business Partner information and to analyze Purchase Orders by Product and Partner. You have therefore been asked to build both a leveled hierarchy for products and a parent-child hierarchy for product types.
1-1 Implement a leveled hierarchy. 1-1-1
Create a new attribute view, AT_HA300_UNIT3_PD_LEV_HIE_TRAINXX, by copying the first Product Attribute view, AT_HA300_UNIT3_PD_TRAINXX.
1-1-2
Remove the following fields from the output: CLIENT, NODE_KEY, SUPPLIER_GUID.
1-1-3
Right-click the “Hierarchies” folder and select “New Level Hierarchy”. Name: HIE_LEV_PRODUCT_TRAINXX. Hierarchy Type: Level Hierarchy. Define the following in the hierarchy definition: Private Attribute TYPE_CODE as level 1 (REGULAR level type), Private Attribute CATEGORY as level 2 (REGULAR level type), Private Attribute PRODUCT_ID as level 3 (REGULAR level type). Click OK.
1-1-4
Save. Activate and view the result. Close.
1-2 Implement a parent-child hierarchy.
1-2-1
Create a new attribute view, AT_HA300_UNIT3_PD_PAC_HIE_TRAINXX, by copying the Product Attribute view, AT_HA300_UNIT3_PD_TRAINXX.
1-2-2
Remove the following fields: CLIENT, NODE_KEY, SUPPLIER_GUID.
1-2-3
Make sure the Principal Key property of PRODUCT_ID is set to True.
1-2-4
Right-click the “Hierarchies” folder and select “New Parent Child Hierarchy”. Name: HIE_PAC_PRODUCT_TYPE_CODE_TRAINxx. Hierarchy Type: Parent Child Hierarchy. Define CATEGORY as the parent attribute of PRODUCT_ID by selecting Private Attribute CATEGORY from the Parent Attribute drop-down list.
1-2-7
Save. Activate. Preview the results.
1-2-8
Go to the Analysis tab of the preview.
1-2-9
Add CATEGORY and PRODUCT_ID to Label Axis.
1-2-10 Select Table as the Output Type.
1-2-11 Close the preview 1-3 Display Hierarchy in MS Excel (Optional) 1-3-1
Open the Attribute View AT_HA300_UNIT3_PD_LEV_HIE_TRAINXX
1-3-2
Add CLIENT and NODE_KEY as attributes. Save and Activate the Attribute View.
1-3-3
Create a new Analytic View, AN_HA300_UNIT3_HIER_TRAINXX, by copying the Analytic View, AN_HA300_UNIT3_SO_TRAINXX, created in the previous exercise.
1-3-4
From the Logical view tab, delete the existing Attribute view, AT_HA300_UNIT3_PD_DERIVED.
1-3-5
Drag and drop the Attribute View, AT_HA300_UNIT3_PD_LEV_HIE_TRAINXX
1-3-6
Join the following fields with a referential join and default cardinality, from the data foundation to the attribute view: CLIENT to CLIENT, PRODUCT_GUID to NODE_KEY.
1-3-7
Save and Activate the Analytic View
1-3-8
Start MS Excel by going to Start > Programs > Microsoft Office > Microsoft Excel 2007.
1-3-9
Then click on Insert > From Other Sources > From Data Connection Wizard.
1-3-10 On the next screen choose Other/Advanced, then select SAP HANA MDX Driver and add the HANA server and credentials. Use your own STUDENTXX user and your SAP HANA password.
1-3-11 On the next screen, change the package to your own studentxx package and choose the AN_HA300_UNIT3_HIER_TRAINXX Analytic View. Accept all the defaults and display the level hierarchy that you created.
Exercise 3: Creating Restricted & Calculated Measures Unit 3:
Advanced Modeling
At the conclusion of this exercise, you will be able to: • Create Restricted Measures • Create Calculated Measures
You are a consultant at a customer that sells electronics. They are doing well with their computer equipment, but they have been facing a lot of competition with their media products and monitors. You have been asked to build individual measures to display total sales for their media products and their screen sales. Due to reporting constraints, the users want these objects as separate measures. They also want to see how the media and screen products stack up against total sales, so they want measures that give the percentage of these two product types relative to total sales. This section takes you through the process of creating the Restricted and Calculated Measures to fulfill these requirements.
1-1 Close all open views prior to creating a new analytical view. 1-1-1
Create an analytical view AN_HA300_UNIT3_TRAINxx_02 (xx being your student number allocated), with the description “Media and Screen Sales Analysis”.
1-1-2
Choose schema TCUR as the schema for conversion.
1-1-3
Click Next
1-1-4
Add the following tables to define the view: TRAINING.SALES_DATA, TRAINING.PRODUCT
1-1-5
Click Finish
1-2 Join the PRODUCT table to the SALES_DATA table. 1-2-1
Open the newly created view.
1-2-2
In the Data Foundation, join the following: PRODUCT_ID in the PRODUCT table to PRODUCT_ID in the SALES_DATA table.
1-2-3
Make sure that the Cardinality is set to 1..n (one TRAINING.PRODUCT.PRODUCT_ID to many TRAINING.SALES_DATA.PRODUCT_ID)
1-2-4
Set the Join Type to Referential.
1-3 Create the required Attributes in the view. 1-3-1
Add the Currency, SALES_DATA.CURRENCY as an Attribute, by right-clicking on CURRENCY in the SALES_DATA table in the Data Foundation, then selecting “Add as Attribute”.
1-3-2
Add Product Text PRODUCT.PRODUCT_TEXT as an Attribute, by right-clicking on PRODUCT_TEXT in the PRODUCT table in the Data Foundation, then selecting “Add as Attribute”.
1-3-3
In the properties for the PRODUCT_TEXT Attribute, set the “Hidden” property to “True”.
1-4 Create the required Measure in the view. 1-4-1
Add the Amount, SALES_DATA.AMOUNT, as a Measure, by right-clicking on AMOUNT in the SALES_DATA table in the Data Foundation, then selecting “Add as Measure”.
1-4-2
Name the Measure “TOTAL_SALES”.
1-5 Add Restricted Measures for Screen Sales. 1-5-1
Right click on the Restricted Measures folder in the Analytic View output.
1-5-2
Click New.
1-5-3
Name the Restricted Measure “SCREEN_SALES”
1-5-4
Enter Description: “Total Sales for Monitors and Flat Screens”
1-5-5
Choose the measure: “TOTAL_SALES” as a base.
1-5-6
Add a restriction
1-5-7
Select Parameter: “PRODUCT_TEXT”.
1-5-8
Select Operator: “Equal”.
1-5-9
Select Value: “Flatscreen”. You can click on “Find” in the “Value Help Dialog” to find the value so you do not need to manually type it.
1-5-10 Click OK. 1-6 Add Restricted Measures for Media Sales. 1-6-1
Right click on the Restricted Measures folder in the Analytic View output.
1-6-2
Click New.
1-6-3
Name the Restricted Measure “MEDIA_SALES”
1-6-4
Enter Description: “Total Sales for Hard Disks and SD Disks”
1-6-5
Choose the measure: “TOTAL_SALES” as a base.
1-6-6
Add a restriction
1-6-7
Select Parameter: “PRODUCT_TEXT”.
1-6-8
Select Operator: “Equal”.
1-6-9
Select Value: “Harddisk”. You can click on “Find” in the “Value Help Dialog” to find the value so you do not need to manually type it.
1-6-10 Add another restriction.
1-6-11 Select Parameter: “PRODUCT_TEXT”.
1-6-12 Select Operator: “Equal”.
1-6-13 Select Value: “SD Disk”.
1-6-14 Click OK.
1-7 View the results. 1-7-1
Save the Analytic View.
1-7-2
Activate the view.
1-7-3
Right-click on the view and select “Data Preview”.
1-7-4
You will now see your three measures: one simple Measure (TOTAL_SALES) and two Restricted Measures (SCREEN_SALES and MEDIA_SALES).
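To sanity-check the preview, note that restricted measures behave like conditional aggregation. The plain-SQL sketch below is only an illustration — the table and column names are taken from this exercise, and the Analytic View is not executed this way internally:

```sql
SELECT SUM(s.AMOUNT) AS TOTAL_SALES,
       -- SCREEN_SALES: restricted to flat screens
       SUM(CASE WHEN p.PRODUCT_TEXT = 'Flatscreen'
                THEN s.AMOUNT ELSE 0 END) AS SCREEN_SALES,
       -- MEDIA_SALES: restricted to hard disks and SD disks
       SUM(CASE WHEN p.PRODUCT_TEXT IN ('Harddisk', 'SD Disk')
                THEN s.AMOUNT ELSE 0 END) AS MEDIA_SALES
FROM   TRAINING.SALES_DATA AS s
JOIN   TRAINING.PRODUCT    AS p
       ON p.PRODUCT_ID = s.PRODUCT_ID;
```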
2-1 Add a Calculation Measure for the Screen Sales Percentage. 2-1-1
Open the Analytic View AN_HA300_UNIT3_TRAINxx_02 (xx being your student number allocated)
2-1-2
Right click on “Calculated Measures” in the output of the view.
2-1-3
Select “New”.
2-1-4
Give the Measure the name “SCREEN_SALES_SHARE”
2-1-5
Set the Description as “The percentage share for Screen Sales out of the Total Sales”.
2-1-6
Set the Data Type as DECIMAL, Length: 2, Scale: 2.
2-1-7
Left-click in the Expression Editor to make the Expression Editor the active window.
2-1-8
Open the “Restricted Measure” folder in the “Elements” part of the screen.
2-1-9
The expression should be: "SCREEN_SALES" / "TOTAL_SALES"
2-1-10 Click “OK” to close the Measure. 2-2 Add a Calculation Measure for the Media Sales Percentage. 2-2-1
Right click on “Calculated Measures” in the output of the view.
2-2-2
Select “New”.
2-2-3
Give the Measure the name “MEDIA_SALES_SHARE”
2-2-4
Set the Description as “The percentage share for Media Sales out of the Total Sales”.
2-2-5
Set the Data Type as DECIMAL, Length: 2, Scale: 2.
2-2-6
Left-click in the Expression Editor to make the Expression Editor the active window.
2-2-7
Open the “Restricted Measure” folder in the “Elements” part of the screen.
2-2-8
The expression should be: "MEDIA_SALES" / "TOTAL_SALES"
2-2-9
Click “OK” to close the Measure.
2-3 View the results. 2-3-1
Save the Analytical View.
2-3-2
Activate the view.
2-3-3
Right-click on the view and select “Data Preview”.
2-3-4
In addition to the previously created Measures, you will now also see the Calculated Measures SCREEN_SALES_SHARE and MEDIA_SALES_SHARE.
Exercise 4: Using Filter Operations Unit 3: Advanced Modeling At the conclusion of this exercise, you will be able to: • Create a Client Dependent Analytic View. • Filter on Session Client.
You are a consultant at a customer that sells electronics. You have been asked to build an Analytic View that filters on Client number.
1-1 Create a CLIENT-dependent view. 1-1-1
Open your Analytic View, AN_HA300_UNIT3_PO_TRAINXX. Rename the Measure GROSS_AMOUNT to GROSS_AMOUNT_USD.
1-1-2
In the Data Foundation, Properties tab (click anywhere in the white space), set the Default Client to 800. Save. Activate.
1-1-3
Change the Measure Type from Simple to Amount with Currency.
1-1-4
Set the Currency to Fixed, USD by clicking on the “…” button and selecting Type: “Fixed”, Currency: “USD”.
1-1-5
View the result of your analytic view AN_HA300_UNIT3_PO_TRAINXX.
1-1-6
Return to Data Foundation, Properties tab of your Analytic View and set the Default Client to 200.
1-1-7
Save and Activate. View the results of your Analytic View. What do you see? (No data.)
1-1-8
Return to Data Foundation, Properties tab and set the Default Client back to 800. Save and Activate. View the result of your analytic view again.
1-1-9
Right-click SNWD_PO_I.CURRENCY_CODE and choose Apply filter. Choose Operator Equal and Value USD.
1-1-10 Save and Activate. View the results of your Analytic View.
1-1-11 Right-click SNWD_PO.PO_ID and choose Apply filter. Choose Operator Equal and Value 0300000000. Note: Use the Value Help Dialog and the Find button to display the available keys.
1-1-12 Save and Activate. View the results of your Analytic View. Why is there no data displayed?
1-1-13 Right-click SNWD_PO.PO_ID and choose Edit filter. Change the Value to 0300000001.
1-1-14 Save and Activate. View the results of your Analytic View. Note: There is one document with several items for this selection.
Exercise 5: Using Variables Unit 3:
Advanced Modeling
At the conclusion of this exercise, you will be able to: • Create Attribute Value Variables • Create Static Value Lists • Create Date Variables • Create Variables with formulas You are a consultant at a customer that sells electronics. You have been asked to build an Analytic View to analyze sales data for certain customers. Two marketing e-mails have been sent out to each customer. Your users want to find out how many days have passed between their orders and these dates. This section takes you through the process of creating the variables and input parameters needed to fulfill these requirements.
1-1 Close all open views prior to creating a new analytical view. 1-1-1
Create an analytical view AN_HA300_UNIT3_TRAINxx_01 (xx being your student number allocated), with the description “Marketing e-mail Analysis”.
1-1-2
Choose schema TCUR as the schema for Currency Conversion.
1-1-3
Click Next
1-1-4
Add the following tables to define the view: TRAINING.SALES_DATA TRAINING.CUSTOMER
1-1-5
Click Finish
1-2 Join the CUSTOMER table to the SALES_DATA table. 1-2-1
Open the newly created view.
1-2-2
In the Data Foundation, join the following fields: CUSTOMER_ID in the CUSTOMER table to CUSTOMER_ID in the SALES_DATA table.
1-2-3
Make sure that the Cardinality is set to 1..n (one TRAINING.CUSTOMER.CUSTOMER_ID to many TRAINING.SALES_DATA.CUSTOMER_ID)
1-2-4
Set the Join Type to “Referential”.
1-3 Create the required Attributes in the view. 1-3-1
Add the Customer (CUSTOMER.CUSTOMER_TEXT) as an Attribute, by right-clicking on CUSTOMER_TEXT in the CUSTOMER table in the Data Foundation, then selecting “Add as Attribute”.
1-3-2
Name the Attribute “CUSTOMER”.
1-3-3
Add Currency (SALES_DATA.CURRENCY) as an Attribute, by right-clicking on CURRENCY in the SALES_DATA table in the Data Foundation, then selecting “Add as Attribute”.
1-4 Create the required Measure in the view. 1-4-1
Add the Amount (SALES_DATA.AMOUNT) as a Measure, by right-clicking on AMOUNT in the SALES_DATA table in the Data Foundation, then selecting “Add as Measure”.
1-4-2
Name the Measure “SALES_AMOUNT”.
1-5 Add a variable to select customer. 1-5-1
Right click on the folder “Variables” in the Output of the Analytic View.
1-5-2
Select New.
1-5-3
Name the variable VAR_CUSTOMER
1-5-4
Set the description to “Customer Selector”.
1-5-5
Set the Attribute to “CUSTOMER”.
1-5-6
Set Selection Type to: “SingleValue” and tick “Is Mandatory”.
1-5-7
Right click on the Private Attribute “CUSTOMER” and apply a filter. Set the filter using Operator: Variable, Variable: VAR_CUSTOMER
1-6 View the results. 1-6-1
Save the Analytical View.
1-6-2
Activate the view.
1-6-3
Right-click on the view and select “Data Preview”.
1-6-4
For your Variable Value for VAR_CUSTOMER, click in the “From” column, and then the “…” button.
1-6-5
Click on Find.
1-6-6
Select a Customer, for example “Becker Berlin”.
1-6-8
Click OK and run the view.
1-6-9
You will now see sales data for this customer only.
Your users actually want to see data for all customers; the view should not be restricted to only one customer. You will now delete the Customer Selector, and instead add a measure to determine the number of days between the marketing e-mail and the sales orders. 2-1 Remove the Variable VAR_CUSTOMER and the filter on CUSTOMER. 2-1-1
Open the view AN_HA300_UNIT3_TRAINxx_01 (xx being your student number allocated).
2-1-2
Remove the filter on CUSTOMER, by right clicking on the Private Attribute “CUSTOMER” and selecting “Remove Filter”.
2-1-3
Remove the variable VAR_CUSTOMER
2-2 Add Sale Date as an Attribute. 2-2-1
Add the Sale Date (SALES_DATA.SQL_DATE) as an Attribute, by right-clicking on SQL_DATE in the SALES_DATA table in the Data Foundation, then selecting “Add as Attribute”.
2-2-2
Name the Attribute SALE_DATE
2-3 Add an Input Parameter to select the e-mail date. 2-3-1
Right click on the folder “Input Parameters” in the Output of the Analytic View.
2-3-2
Select New.
2-3-3
Name the variable VAR_EMAIL_DATE
2-3-4
Set the description to “The date the email was sent out”.
2-3-5
Set Type to “StaticList” and tick that it “Is Mandatory”.
2-3-6
Set the Data Type to “DATE”.
2-3-7
Add a value to the List of Values, Name: 2012-01-25, Description: “First e-mail”.
2-3-8
Add a value to the List of Values, Name: 2012-02-03, Description: “Second e-mail”.
2-3-9
Click on OK.
2-4 Add a Calculated Attribute to determine days between the e-mail and the Sale Dates. 2-4-1
Right click on the folder “Calculated Attributes” in the Output of the Analytic View.
2-4-2
Select New.
2-4-3
Name the Attribute DAYS_ELAPSED
2-4-4
Enter the Description: “Days between Sale Date and e-mail”.
2-4-5
Set the Data Type to: “INTEGER”.
2-4-6
In the Expression Editor enter: daysbetween("SALE_DATE", date('$$VAR_EMAIL_DATE$$'))
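For reference, the same difference can be checked directly in SQL. The sketch below hard-codes the first e-mail date, because the $$VAR_EMAIL_DATE$$ placeholder syntax only exists inside the modeler, and it assumes the DAYS_BETWEEN SQL function is available in your revision:

```sql
-- Check: days elapsed from the e-mail date to each sale date.
SELECT SQL_DATE,
       DAYS_BETWEEN(TO_DATE('2012-01-25', 'YYYY-MM-DD'), SQL_DATE)
         AS DAYS_ELAPSED
FROM   TRAINING.SALES_DATA;
```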
2-4-7
Validate the expression.
2-4-8
Click on “Add”.
2-5 View the results. 2-5-1
Save the Analytical View.
2-5-2
Activate the view.
2-5-3
Right-click on the view and select “Data Preview”.
2-5-4
For your Variable Value for VAR_EMAIL_DATE, click in the “From” column, and then the “…” button.
2-5-5
Click on Find.
2-5-6
Select one of the e-mail dates, for example 2012-01-25, or manually type this value.
2-5-7
Click OK and run the view.
In order to make the view even more dynamic, you want to give the users the possibility of selecting other dates than the ones from the static list. 3-1 Change the Input Parameter from a Static List to a dynamic Date Selector. 3-1-1
Right click on the “VAR_EMAIL_DATE” Input Parameter and select “Edit”.
3-1-2
Change the Type from “Static List” to “Date”.
3-1-3
Click on “Ok”.
3-2 View the results. 3-2-1
Save the Analytical View.
3-2-2
Activate the view.
3-2-3
Right-click on the view and select “Data Preview”.
3-2-4
For your Variable Value for VAR_EMAIL_DATE, click in the “From” column, and then the “…” button.
3-2-6
Select a date in the calendar, for example 2013-01-01
3-2-7
Click OK and run the view.
Exercise 6: Creating Calculation Views Unit 3: Advanced Modeling At the conclusion of this exercise, you will be able to: • Create Calculated Attributes in Calculation Views • Create a “Simple” Calculation View • Add an Aggregation Node to a Calculation View • Define Unmapped Columns in a Union Node You are a consultant at a company where sales with an order value of over 3 MEUR are recognized by the bonus model. You want to create a calculation view which lists the total order value over 3 MEUR as well as orders between 1 MEUR and 3 MEUR. You want to divide these orders into two categories (over 3 MEUR, and between 1 and 3 MEUR) and also include the latest order date for each category.
1-1 Close all open views prior to creating a new calculation view. 1-1-1
Create a calculation view CV_HA300_UNIT3_TRAINXX_01 (xx being your student number allocated), with the description “Bonus Sales Order Analysis”.
1-1-2
Choose a new, graphical view type.
1-1-3
Choose schema TCUR as the schema for conversion.
1-1-3
Click Next
1-1-4
Add the following table to the view: EPM_MODEL.SNWD_SO
1-1-5
Click Finish
1-1-6
Add an aggregation node and connect it to the SNWD_SO table, direction SNWD_SO into Aggregation Node.
1-1-7
In the aggregation node, define GROSS_AMOUNT as an aggregated column and apply a filter to it (Greater Than Equal 3000000)
1-1-8
Add another aggregated column, CREATED_AT. In the Properties for this Aggregated Column, set the Aggregation Type to “Max”.
1-1-9
Drag another instance of the EPM_MODEL.SNWD_SO table into your Calculation View and place it next to the original SNWD_SO table.
1-1-10 Add another aggregation node above the new SNWD_SO table and connect the two objects to each other, direction SNWD_SO into Aggregation Node.
1-1-11 In the new aggregation node, add CREATED_AT to the output.
1-1-12 Add GROSS_AMOUNT as an Aggregated Column, and apply a filter to it (Between 1000000 and 2999999).
1-1-13 Add a Calculated Column to the new Aggregation Node. Name the column OVER_3MEUR_FLAG and set the Data Type to VARCHAR, Length: 1.
1-1-14 In the Expression Editor of the new column, enter the following: ‘N’ (include the single quotes), then click ADD.
1-1-15 Add a Union Node between your Aggregation Nodes and the Output Node. Connect the two Aggregation Nodes into the Union Node.
1-1-16 In the details of the Union Node, click the “Auto Map by Name” button.
1-1-17 Right-click on the OVER_3MEUR_FLAG Target Mapping. Click on Manage Mapping.
1-1-18 Add a Constant Value to this column from the Aggregation Node that is missing a flag entry. Set the value to: Y.
1-1-19 Connect the Union Node to the Output Node.
1-1-20 In the Output Node, add all columns from the Union Node as Attributes.
1-1-21 Click on the background of the Graphical Calculation View in order to edit the properties of the Calculation View. Make sure that the Calculation View has “Multidimensional Reporting” set to “disabled”.
1-1-22 Save, activate and preview the view. Note down answers to the following questions: 2-1-1
How can the calculation view activate and show data even though there are no measures?
2-1-2
Why do you see only one line with OVER_3MEUR_FLAG = Y even though there are several sales orders over 3 MEUR?
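A rough plain-SQL reading of the scenario helps answer the second question. This is a sketch only: the graphical view does not literally generate this statement, and where the GROSS_AMOUNT filter is applied depends on how you defined it in the aggregation nodes:

```sql
-- Each branch has no GROUP BY, so it aggregates to a single row;
-- that is why the preview shows only one line per flag value.
SELECT 'Y' AS OVER_3MEUR_FLAG,
       MAX(CREATED_AT)   AS CREATED_AT,
       SUM(GROSS_AMOUNT) AS GROSS_AMOUNT
FROM   SNWD_SO
WHERE  GROSS_AMOUNT >= 3000000
UNION ALL
SELECT 'N' AS OVER_3MEUR_FLAG,
       MAX(CREATED_AT)   AS CREATED_AT,
       SUM(GROSS_AMOUNT) AS GROSS_AMOUNT
FROM   SNWD_SO
WHERE  GROSS_AMOUNT BETWEEN 1000000 AND 2999999;
```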
Exercise 7: Working with SQL statements Unit 3: Advanced Modeling At the conclusion of this exercise, you will be able to: • Create tables using SQL statements • Insert Values into tables using SQL Statements
For testing purposes you need to create some sample tables including data.
1 Create new tables using SQL statements 1-1 Open the SQL file from the RESOURCES perspective. 1-2-1 In SAP HANA Studio, on the top right-hand side, click on the icon to choose a connection. Click on the Catalog of the connection and click OK.
1-2-2 In the RESOURCES perspective, right-click and open the file 002_CREATE_TABLE.sql
1-3 Ensure you are working in your own schema! Replace STUDENT## with your group number and remove the first comment line.
1-4 Before you execute:
1-4-1 The statement will create two tables. Fill in the table below with the exact table name that will be displayed in the Navigator, and the table type:
Table Identifier | Table Type
1-5 Delete the first comment line and execute the statement (F8). What errors show up and why? _____________________________________________________________
1-6 Refresh (F5) the table node in your schema STUDENT## and check the identifiers for the newly created tables. Why is the second table created with lowercase letters? _____________________________________________________________
1-7 Right-click the table identifier and choose Open Definition to check the table type.
2 Create a table with template and data
2-1 Remove the block comments in the SQL for STEP 2 and execute again. Do not change the comment -- with data for the first execution.
2-2 Right-click table CurrDecimals and choose Open Content.
2-3 Remove the comment -- with data and execute again. Note: Just remove the dashes.
2-4 Refresh the content display to view the data transferred from the template.
3 Drop tables “STUDENT##”.“publisher” and “STUDENT##”.“CurrDecimals”. Note: You do not need these tables for the following steps.
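The lowercase-identifier question in step 1-6 comes down to SQL identifier folding: unquoted identifiers are folded to uppercase, while double-quoted identifiers keep their exact case. A minimal illustration (hypothetical column definitions, not the contents of 002_CREATE_TABLE.sql):

```sql
CREATE COLUMN TABLE PUBLISHER   (PUB_ID INTEGER);  -- stored as PUBLISHER
CREATE COLUMN TABLE "publisher" (PUB_ID INTEGER);  -- stored as publisher
-- The quoted table must always be addressed as "publisher";
-- an unquoted reference (FROM publisher) resolves to PUBLISHER.
```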
4 Insert values using SQL statements 4-1 Open the prepared SQL file from the RESOURCES perspective 4-1-1
Open file 003_INSERT_VALUES.sql
4-3 Ensure you are working in your own schema! Replace STUDENT## with your group number and remove the first comment line.
4-4 Execute (F8).
4-5 Display the content of tables PUBLISHER and BOOKS. You will need these tables and data in the following exercises.
Exercise 8: Creating a simple Calculation View of type SQL Script Unit 3: Advanced Modeling
At the conclusion of this exercise, you will be able to: • Create a simple Calculation View of type SQL Script
1 Prepare Select statement 1-1 Open the SQL file from the RESOURCES perspective. 1-2-1 In SAP HANA Studio, on the top right-hand side, click on the icon to choose a connection. Click on the Catalog of the connection and click OK.
1-2-2 In the RESOURCES perspective, right-click and open the file 004_SELECT_STATEMENT.sql
1-2 Ensure you are working in your own schema! Replace STUDENT## with your group number and remove the first comment line.
1-3 Execute (F8). Note: Please do not close the SQL Editor tab.
2 Create a new Calculation View with Type SQL Script
2-1 Right-click the node Calculation View in package student## and choose New
2-2 Name: CV##_SIMPLE_JOIN View Type: SQL Script Default Schema: STUDENT##
2-3 In the Scenario panel, click on Script. In the Output of Script_View, click on the Define Output Parameters icon (blue “+” sign).
2-4 Define the output with the following values:
2-5 Copy the Select statement from the SQL editor into the script and assign it to var_out.
2-6 Define Attributes and Measures for the Output:
2-7 Validate. Save and Activate.
2-8 From the context menu of the Calculation View, choose Data Preview.
Exercise 9: Creating a Calculation View of type SQL Script using CE_FUNCTIONs Unit 3: Advanced Modeling
At the conclusion of this exercise, you will be able to: • Create a Calculation View of type SQL Script using CE_FUNCTIONS
You want to replace standard SQL statements with CE functions. Your task is to create a SQL script replacing the following SQL statement: SELECT pub_ID AS publisher, name, isbn, title, price, crcy AS currency FROM publishers AS P, books AS B WHERE P.pub_id = B.publisher;
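One possible shape for the CE-function script is sketched below. This is not the official solution: the variable names are invented, and the exact column lists, alias casing, and table spelling must match your template and the catalog:

```
it_pubs    = CE_COLUMN_TABLE("PUBLISHERS");
it_books   = CE_COLUMN_TABLE("BOOKS");
-- rename PUB_ID so both inputs expose the join column PUBLISHER
it_pubs_p  = CE_PROJECTION(:it_pubs,
               ["PUB_ID" AS "PUBLISHER", "NAME"]);
it_books_p = CE_PROJECTION(:it_books,
               ["PUBLISHER", "ISBN", "TITLE", "PRICE",
                "CRCY" AS "CURRENCY"]);
var_out    = CE_JOIN(:it_pubs_p, :it_books_p, ["PUBLISHER"],
               ["PUBLISHER", "NAME", "ISBN", "TITLE",
                "PRICE", "CURRENCY"]);
```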
1 Create a new Calculation View with Type SQL Script based on template training.HA300.PUBS_BOOKS_TEMPLATE 1-1 Right-click node Calculation View in package student## and choose New 1-2 Name: CV##_PUBS_BOOKS_CE Copy from: training.HA300.PUBS_BOOKS_TEMPLATE Note: In Calculation View Folder. Click Finish. 2 Adjust the Default Schema to STUDENT## (where ## is your group number) 2-1 Click into the scenario area to display the properties of the calculation view 2-2 Change the DB Schema Name to STUDENT##.
Note: If you do not see STUDENT## on the drop down list for DB Schema Name, save your work. Then close and open your Calculation View again.
2-3 Save the changes first (CTRL + S) 3 Edit the SQL script 3-1 Change to the SQL script view and follow the instructions in the SQL script. 3-1-1
/***** STEP 1 *****/
/* You need a reference to tables "PUBLISHERS" and "BOOKS". Use CE_COLUMN_TABLE to assign the tables to the variables. Note: Check the schema for the exact spelling (uppercase/lowercase letters). */
3-1-2
/***** STEP 2 *****/
/* You need a pair of column names for the CE_JOIN function. Use CE_PROJECTION to build an alias for column "PUB_ID" AS "Publisher". In addition, build an alias for column "CRCY" AS "CURRENCY", which is used in the output parameters. Replace < ... > with your input. */
3-1-3
/***** STEP 3 *****/
/* Use CE_JOIN to build the join using "PUBLISHER" as the join field. Assign the result to the output parameter var_out. Take care of the sequence used in the output parameters. Replace < ... > with your input. */
4 Check the Output view 4-1 Change to the Output view and check that all fields are used as attributes or measures.
5 Validate and Execute the Calculation View 6 Preview data after successful activation
Exercise 10: Creating Procedures Unit 3: Advanced Modeling
At the conclusion of this exercise, you will be able to: • Create a procedure
You need to add tax calculation to the base price. To enable flexible input of different tax rates, you decide to create a procedure.
1 Create a new procedure 1-1 Right-click on package student## and choose New Procedure 1-2 Name: P##_BOOKS_VAT Default Schema: STUDENT## Run with: Definers Rights Access mode: ReadOnly Finish. 2 Define New Scalar Input Parameter TAX 2-1 In the Input Pane right-click Input Parameters and choose New Scalar Parameter
2-2 Use the following values:
3 Create Output Parameter out_with_tax 3-1 In the Output Pane, right-click Output Parameters and choose New 3-2 Use the following values:
4 Edit the Procedure Script
4-1 Use the following script:
/********* Begin Procedure Script ************/
BEGIN
  it_books = CE_COLUMN_TABLE ("BOOKS");
  out_with_tax = CE_PROJECTION(:it_books,
    ["ISBN", "TITLE", "PUBLISHER", "PRICE",
     CE_CALC('"PRICE" * :tax', decimal(5,2)) AS "PRICE_VAT",
     "CRCY" AS "CURRENCY"]);
END;
/********* End Procedure Script ************/
5 Save, Validate and Activate
6 Test the procedure
6-1 In the Navigation pane, navigate to node “_SYS_BIC” => Procedures
6-2 Open procedure student##/P##_BOOKS_VAT to review the “Parameter” and “Create Statement” tabs
6-3 Open a new SQL Editor tab
6-4 Use the following SQL statement for testing the procedure: call "_SYS_BIC"."studentXX/PXX_BOOKS_VAT" (1.19, ?);
6-5 Execute (F8) and review the result tabs. Try different tax rates for the calculation.
Exercise 11: Creating a Calculation View of type SQL Script using procedures Unit 3: Advanced Modeling
At the conclusion of this exercise, you will be able to: • Create a Calculation View of type SQL Script using your own procedures.
1 Create a new Calculation View with Type SQL Script based on template training.PUBS_BOOKS_TEMPLATE 1-1 Right-click node Calculation View in package student## and choose New 1-2 Name: CV##_PUBS_BOOKS_TAX Copy from: training.HA300.PUBS_BOOKS_PROC_TEMPLATE
2 Adjust the schema to STUDENT## (where ## is your group number) 2-1 Click into the scenario area to display the properties of the calculation view 2-2 Change the DB Schema Name to STUDENT##.
2-3 Save the changes first (CTRL + S)
3 Edit the SQL script
3-1 Replace the CE_PROJECTION call for :it_books with the procedure call for tax calculation. 3-1-1 Comment out or delete the following line: it_books = CE_PROJECTION (:t_books,["ISBN", "PUBLISHER","TITLE","PRICE","CRCY" AS "CURRENCY"]); 3-1-2 Add the following line: CALL "_SYS_BIC"."student##/P##_BOOKS_VAT" (1.19, it_books); Note: The second parameter of the call represents the variable to which the result table is bound. Use it_books, which is used in the following lines.
4 Add PRICE_VAT, which is returned in the result of the procedure, for output 4-1 Open the Output definition and add PRICE_VAT at the end of the list.
4-2 Add “PRICE_VAT” at the end of the CE_JOIN expression according to the sequence of the output parameters: CE_JOIN (
:it_pubs, :it_books, ["PUBLISHER"], ["PUBLISHER", "NAME", "ISBN", "TITLE", "PRICE", "CURRENCY", "PRICE_VAT"]);
4-3 Change to the Output View and add “PRICE_VAT” as an additional measure. 5 Validate and Execute the Calculation View 6 Preview data after successful activation
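Pieced together, the edited script body of the calculation view might look like the following sketch. The template's own derivation of t_books and t_pubs is not shown in this exercise, so those first lines are assumptions:

```sql
-- Hedged sketch of the final script body after steps 3 and 4.
-- The derivation of t_books / t_pubs comes from the template and is assumed here.
t_books = CE_COLUMN_TABLE ("BOOKS");
t_pubs  = CE_COLUMN_TABLE ("PUBS");
it_pubs = CE_PROJECTION (:t_pubs, ["PUBLISHER", "NAME"]);
-- replaced by the procedure call (step 3-1):
-- it_books = CE_PROJECTION (:t_books, ["ISBN", "PUBLISHER", "TITLE", "PRICE", "CRCY" AS "CURRENCY"]);
CALL "_SYS_BIC"."student##/P##_BOOKS_VAT" (1.19, it_books);
var_out = CE_JOIN (:it_pubs, :it_books, ["PUBLISHER"],
    ["PUBLISHER", "NAME", "ISBN", "TITLE", "PRICE", "CURRENCY", "PRICE_VAT"]);
```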
Exercise 12: Currency Conversions in Analytical Views Unit 3: Advanced Modeling At the conclusion of this exercise, you will be able to: • Apply Currency Conversion in Analytical Views • Leverage Fixed Currencies • Leverage Source Currency from Attributes
You are at a customer site where EPM data is available. USD is the general corporate reporting currency. You have been asked to build Analytical Views for HANA for the purpose of displaying Sales and Purchase Order data. The Sales Order data is listed in EUR and the Purchase Order data is listed in multiple currencies, but you want to report on the data only using USD. This section takes you through the process to create the Analytical Views to be used for reporting.
1-1 Close all open views prior to creating a new analytical view.
1-1-1 Create an analytical view AN_HA300_UNIT3_TRAINxx_03 (xx being your allocated student number), with the description “Sales Orders Fixed Currencies”.
1-1-2 Choose schema TCUR as the schema for Currency Conversion.
1-1-3 Click Next.
1-1-4 Add the following table to define the view: EPM_MODEL.SNWD_SO
1-1-5 Click Finish.
1-2 Create the required Attributes in the view.
1-2-1 Open the newly created view.
1-2-2 Add the following fields: SO_ID (Sales Order Number) as an Attribute. CURRENCY_CODE as an Attribute.
1-3 Add a Gross Amount Measure with the base currency.
1-3-1 Add GROSS_AMOUNT as a Measure.
1-3-2 Rename the Measure to GROSS_AMOUNT_EUR.
1-3-3 Change the Measure Type to Amount with Currency.
1-3-4 Set the Currency to Fixed Type EUR.
1-3-5 Do not “Enable for Conversion” or for “Decimal Shifts”.
1-3-6 Click OK.
1-4 Add a Gross Amount Measure in USD currency.
1-4-1 Add GROSS_AMOUNT as a new Measure.
1-4-2 Rename the Measure to GROSS_AMOUNT_USD.
1-4-3 Change the Measure Type to Amount with Currency.
1-4-4 Set the Currency to Fixed Type USD.
1-4-5 Enable the Measure for conversion.
1-4-6 Set the Source Currency to Fixed Type EUR.
1-4-7 Set the Exchange Type to 1003 “Historical Rate”.
1-4-8 Set the Date to Fixed, 20111201.
1-4-9 Set Upon Conversion Failure: Fail.
1-4-10 Click OK.
1-5 View the results.
1-5-1 Save the Analytical View.
1-5-2 Activate the view.
1-5-3 Right-click on the view and select “Data Preview”.
1-5-4 You will now see your two measures, the base GROSS_AMOUNT_EUR as well as the converted GROSS_AMOUNT_USD.
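The same fixed-rate conversion can be approximated directly in SQL with HANA's built-in CONVERT_CURRENCY function (available in newer revisions). This is a sketch only; the named parameters and the CLIENT value are assumptions and may differ by revision:

```sql
-- Hedged sketch: fixed EUR -> USD conversion against the TCUR* tables,
-- mirroring the GROSS_AMOUNT_USD measure (exchange type 1003, fixed date).
SELECT SO_ID,
       CURRENCY_CODE,
       GROSS_AMOUNT AS GROSS_AMOUNT_EUR,
       CONVERT_CURRENCY(
           AMOUNT          => GROSS_AMOUNT,
           SOURCE_UNIT     => 'EUR',
           TARGET_UNIT     => 'USD',
           CONVERSION_TYPE => '1003',
           REFERENCE_DATE  => '2011-12-01',
           SCHEMA          => 'TCUR',
           CLIENT          => '000',          -- assumption
           ERROR_HANDLING  => 'fail on error' -- "Upon Conversion Failure: Fail"
       ) AS GROSS_AMOUNT_USD
FROM "EPM_MODEL"."SNWD_SO";
```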
2-1 Close all open views prior to creating a new analytical view.
2-1-1 Create an analytical view AN_HA300_UNIT3_TRAINxx_04 (xx being your allocated student number), with the description “Purchase Orders Dynamic Currencies”.
2-1-2 Choose schema TCUR as the schema for conversion.
2-1-3 Click Next.
2-1-4 Add the following table to define the view: EPM_MODEL.SNWD_PO
2-1-5 Click Finish.
2-2 Create the required Attributes in the view.
2-2-1 Add the following fields: PO_ID as an Attribute. CURRENCY_CODE as an Attribute.
2-3 Add a Gross Amount Measure with the base currency.
2-3-1 Add GROSS_AMOUNT as a Measure.
2-3-2 Rename the Measure to GROSS_AMOUNT_BASE.
2-3-3 Change the Measure Type to “Amount with Currency”.
2-3-4 Set the Currency to Currency Type Attribute.
2-3-5 Set the Currency Attribute to CURRENCY_CODE.
2-3-6 Do not Enable the Measure for conversion.
2-3-7 Click OK.
2-4 Add a Gross Amount Measure in USD currency.
2-4-1 Add GROSS_AMOUNT as a new Measure.
2-4-2 Rename the Measure to GROSS_AMOUNT_USD.
2-4-3 Change the Measure Type to “Amount with Currency”.
2-4-4 Set the Currency to Fixed Type USD.
2-4-5 Enable the Measure for conversion.
2-4-6 Set the Source Currency to Currency Type Attribute.
2-4-7 Set the Source Currency Attribute to CURRENCY_CODE.
2-4-8 Set the Exchange Type to 1003 “Historical Rate”.
2-4-9 Set the Date to Fixed, 20111201.
2-4-10 Set Upon Conversion Failure: Fail.
2-4-11 Click OK.
2-5 Add a Filter to CURRENCY_CODE.
2-5-1 Right-click on CURRENCY_CODE in the Data Foundation in the table EPM_MODEL.SNWD_PO.
2-5-2 Set the Filter Operator to List of Values.
2-5-3 Enter the following currency codes as the Value: EUR,USD,GBP Note: Syntax is important here. Do not use blanks.
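Conceptually, this List of Values filter behaves like a plain SQL restriction on the fact table (a sketch for orientation, not part of the exercise):

```sql
-- Hedged sketch: the modeled filter corresponds to this WHERE clause.
SELECT PO_ID, CURRENCY_CODE, GROSS_AMOUNT
FROM "EPM_MODEL"."SNWD_PO"
WHERE CURRENCY_CODE IN ('EUR', 'USD', 'GBP');
```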
2-6 View the results.
2-6-1 Save the Analytical View.
2-6-2 Activate the view.
2-6-3 Right-click on the view and select “Data Preview”.
2-6-4 You will now see your two measures, the base GROSS_AMOUNT_BASE as well as the converted GROSS_AMOUNT_USD.
Exercise 13: Using the Fuzzy Search functionality Unit 4: Fuzzy Search At the conclusion of this exercise, you will be able to: • Create a table with Fulltext Indexes • Select from a table using: • Fulltext Search • Fuzzy Search • Freestyle Search
You need to search customer data to find specific pattern matches. The data you have at hand is not well maintained and contains spelling mistakes and other errors.
1 Create a table with Fulltext Indexed columns, using SQL statements 1-1 Open the SQL file from the RESOURCES perspective. In the RESOURCES perspective right click and open the file 006_FULLTEXT_SEARCH.sql 1-2 Ensure you are working in your own schema! Perform a Search and Replace (CTRL-F), replacing ALL instances of student## with your group number, i.e. if your number is 05, replace student## with student05. 1-3 Delete the comment line as described in the script: /* DELETE THIS COMMENT LINE
1-4 On the top right-hand side click on the icon to choose a connection. Click on the Catalog of the connection and click on OK.
1-5 Execute the statement (F8). The table you created is now called studentxx.studentxx_search and contains two columns:
• CUSTOMER
• CITY
It will already contain some data.
2 Perform Fuzzy Search with different parameters and options
2-1 Open the SQL file from the RESOURCES perspective. In the RESOURCES perspective right-click and open the file 007_FUZZY_SEARCH_SELECTS.sql
2-2 Ensure you are working in your own schema! Perform a Search and Replace (CTRL-F), replacing ALL instances of student## with your group number, i.e. if your number is 05, replace student## with student05.
2-3 On the top right-hand side click on the icon to choose a connection. Click on the Catalog of the connection and click on OK.
2-4 Execute the statement (F8).
2-5 Using SELECT score() AS score, * FROM student##_search WHERE CONTAINS(customer, 'mark', FUZZY(0.7, 'textSearch=compare'))
How many customers do you find? …………………………………………………………………………… 2-6 /* DELETE THIS COMMENT LINE STEP 2 and execute again (F8). Why is the score on the second result tab different to the first select? ……………………………………………………………………………
2-7 /* DELETE THIS COMMENT LINE STEP 3 and execute again (F8). How many customers do you find on the third result tab? …………………………………………………………………………… 2-8 /* DELETE THIS COMMENT LINE STEP 4 and execute the two selects on column city (F8). Compare the scores on the 4th and 5th result pane …………………………………………………………………………… 2-9 /* DELETE THIS COMMENT LINE STEP 5 and execute a combined select on column customer and city (F8). Check the result on the last result pane.
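The later steps of the script follow the same CONTAINS pattern. Two hedged examples for orientation; the search terms here are illustrative, not the ones actually used in 007_FUZZY_SEARCH_SELECTS.sql:

```sql
-- Fuzzy search on a single column with a different threshold:
SELECT SCORE() AS score, *
FROM student##_search
WHERE CONTAINS(city, 'hamburg', FUZZY(0.8))
ORDER BY score DESC;

-- Freestyle search across several columns at once:
SELECT SCORE() AS score, *
FROM student##_search
WHERE CONTAINS((customer, city), 'mark', FUZZY(0.7))
ORDER BY score DESC;
```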
Exercise 14: Processing Information Objects Unit 5: Processing Information Models
At the conclusion of this exercise, you will be able to: • Set your own preferences in validation rules.
You are an SAP HANA developer and would like to set specific preferences in validation rules.
1-1 Close all open views prior to modifying preferences in validation rules.
1-1-1 Click on Manage Preferences in the Quick Launch window. Then check the validation rules.
1-1-2 In the "Attribute" part, unselect "Key Attribute Validity Check".
1-1-3 Click on "OK".
1-1-4 Create a new attribute view AT_HA300_UNIT5_PROD_PREF_TRAINxx by copying the existing view AT_HA300_UNIT3_PD_TRAINxx.
1-1-5 Then convert the key attribute to an attribute.
1-1-6 Validate. What is the Logs result?
1-1-7 Activate. What is the Logs result?
1-1-8 Return to the preferences of the validation rules and set them back to the default values. Then click on "OK" and validate and activate again.
Exercise 15: Managing Modeling Content Unit 6: Managing Modeling Content At the conclusion of this exercise, you will be able to: • Create schemas, • Manage schemas mapping, • Create a delivery unit, • Export models to the server, • Import models from the server.
You are an SAP HANA administrator in charge of transporting objects and content. You have to create a mapping between the development and production environments, manage schema mapping, and export/import a model.
1-1 Close all open views prior to creating a new schema.
1-1-1 Select the system HDB(STUDENTxx).
1-1-2 Open a new SQL editor.
1-1-3 Write the following command: CREATE SCHEMA STUDENTxx_DEV OWNED BY STUDENTxx;
1-1-4 Check that the new schema has been created; refresh if necessary.
1-1-5 Create a new table in both schemas STUDENTxx and STUDENTxx_DEV (note that VARCHAR requires an explicit length in SAP HANA; 10 is an arbitrary choice): CREATE COLUMN TABLE STUDENTxx_DEV.STUDENTxx (KEY VARCHAR(10) PRIMARY KEY, FIELD VARCHAR(10)); CREATE COLUMN TABLE STUDENTxx.STUDENTxx (KEY VARCHAR(10) PRIMARY KEY, FIELD VARCHAR(10));
1-1-6 Insert a line of data into both tables: INSERT INTO STUDENTxx_DEV.STUDENTxx VALUES ('A', 5); INSERT INTO STUDENTxx.STUDENTxx VALUES ('A', 5);
1-1-7 View the result in each new table.
1-2 Create a schema mapping.
1-2-1 In the Quick Launch, open the Schema Mapping.
1-2-2 Add a schema mapping: insert "STUDENTxx_DEV" into the Authoring Schema section and "STUDENTxx" into the Physical Schema section.
1-2-3 Click on "OK".
2-1 Create a delivery unit, to be able to export to the server.
2-1-1 In the Quick Launch, open the "Delivery Units".
2-1-2 Create a new delivery unit and name it STUDENTxx.
2-1-3 Then assign a package to the new delivery unit by adding the package STUDENTxx.
2-1-4 Click on "Close".
2-2 Export a model to the server with a delivery unit.
2-2-1 Create a new attribute view AT_HA300_UNIT6_MAPPING_TRAINxx from the table STUDENTxx in the schema STUDENTxx_DEV.
2-2-2 Define the field KEY as a key attribute and the field FIELD as an attribute.
2-2-3 Activate and close the new attribute view you have just created.
2-2-4 In the Quick Launch, open the "Export".
2-2-5 Select Modeler → SAP HANA Content → Delivery Unit, then click "Next".
2-2-6 On the next screen select the delivery unit STUDENTxx and click on Finish.
2-2-7 Look at the log.
2-3 Import a model from the server with a schema mapping.
2-3-1 Delete the attribute view AT_HA300_UNIT6_MAPPING_TRAINxx in your package STUDENTxx.
2-3-2 Delete the table STUDENTxx in the schema STUDENTxx_DEV.
2-3-3 In the Quick Launch, open the "Import".
2-3-4 Then from the SAP HANA Content select Delivery Unit.
2-3-5 Select the last file you have exported (ending with STUDENTxx.tgz) and click on "Finish".
2-3-6 Look at the log.
2-3-7 In the package STUDENTxx, open the attribute view AT_HA300_UNIT6_MAPPING_TRAINxx.
2-3-8 Look at the schema name of the table STUDENTxx.
Exercise 16: Working with SAP HANA users and roles Unit 7: Security and Authorizations
At the conclusion of this exercise, you will be able to: • Review SAP HANA users and roles.
You are an SAP HANA developer and need to review SAP HANA users and roles.
1-1 Review USER##.
1-1-1 Under the STUDENT## System, HDB (STUDENTxx), navigate to Catalog → Authorization → Users.
1-1-2 Expand the User folder. Find your user, USER##.
1-1-3 From the context menu for your user, select Open.
1-1-4 Select the Granted Roles tab. Note that USER## has 2 roles: PUBLIC (default role) and RESTRICTED_ROLE.
1-2 Create a new System for USER##.
1-2-1 Add an additional System. In the Navigator, right-click in the open space and select Add System. Enter the Host name, Instance number and Description as directed by the instructor. Host: .wdf.sap.corp Instance: Press Next.
1-2-2 Enter the user, USER## (## being your allocated student number) and the password, Training1. Click on Finish.
1-2-3 Enter the new password, Training2. Click on Finish.
1-2-4 Under the new System for USER##, navigate to your Student## Package. Content → student## → Analytic Views. From the context menu of the Analytic View AN_HA300_UNIT4_PO_TRAIN##, choose Data Preview. What is the result? The result does not display. There is a “…user is not authorized” message at the top of the screen. For more information, return to the Navigator. From the context menu of your STUDENT## System, HDB(STUDENT##), select Administration. Press the Diagnosis Files tab. Find and open the file indexserver._.*.*.trc. Press the Show End of File button. Find your USER## in the trace log by pressing CTRL+F. Enter USER##. The trace log indicates an error on the Analytic Privilege check:
[2814][200344] 2012-05-22 21:43:51.507043 e AnalyticPriv TRexApiSearch.cpp(21554) : TRexApiSearch::analyticalPrivilegesCheck(): User USER17 is not authorized on _SYS_BIC:student17/AN_HA300_UNIT4_PO_TRAIN17/olap
USER## requires an Analytical Privilege to be able to review the results of the Analytical view, AN_HA300_Unit4_PO_TRAIN##.
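The missing privilege can also be confirmed from SQL by querying the EFFECTIVE_PRIVILEGES system view (a sketch; run it as a user with catalog read authorization):

```sql
-- Hedged sketch: list what USER## actually holds; an analytic privilege
-- covering the view should appear here once one is assigned.
SELECT PRIVILEGE, OBJECT_TYPE, OBJECT_NAME
FROM "SYS"."EFFECTIVE_PRIVILEGES"
WHERE USER_NAME = 'USER##';
```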
Exercise 17: Creating Roles and Analytic Privileges Unit 7: Security and Authorizations
At the conclusion of this exercise, you will be able to: • Create a Role and assign Analytic Privileges
You are an SAP HANA developer and, according to the business rules, need to apply display restrictions to the data.
1-1 Create a new role, ROLE##, with an Analytic Privilege.
1-1-1 Under the System HDB(STUDENTxx), navigate to the Roles folder: Catalog → Authorizations → Roles. From the context menu of the Roles folder, select New Role. Enter the name ROLE##. Select the Analytic Privileges tab. Add an Analytic Privilege (press the green plus sign). In the Select Analytic Privilege screen enter _SYS_BI_CP_ALL. With this Analytic Privilege, users pass all Analytic Privilege authorizations globally. Press OK. Save your new role (CTRL+S).
1-2 Assign your new ROLE## to your USER##.
1-2-1 Navigate to the STUDENT## System (HDB (STUDENT##)): Catalog → Authorizations → Users.
1-2-2 From the context menu of your USER##, select Open. Select the Granted Roles tab. From the context menu choose Add Role (or press the green plus sign). In the Select Role screen, enter ROLE##. Press OK. Save your user record (CTRL+S).
1-2-3 Under the System for USER##, HDB (USER##), navigate to your Student## Package. Content → student## → Analytic Views. From the context menu of the Analytic View AN_HA300_UNIT4_PO_TRAIN##, choose Data Preview. What is the result? The results should display for all data now that your user has the Analytic Privilege _SYS_BI_CP_ALL. Now you want to restrict access for this Analytic View so that this user will only be able to see the Category Notebooks.
1-3 Create an Analytic Privilege in order to restrict access to the data.
1-3-1 Navigate to the STUDENT## System (HDB (STUDENT##)). Select your package and from the context menu, select New → Analytic Privilege.
1-3-2 Enter the following name: AP_RESTRICT_ON_CATEGORY_NB_##. Click on Next.
1-3-3 Select the Analytic View AN_HA300_UNIT4_PO_TRAIN## under your package (student##). Click on Add.
1-3-4 Click on Finish.
1-3-5 Under Associated Attributes Restriction, click on Add… .
1-3-6 On the Select Object screen, select Category under the Attribute View folder, AT_HA300_UNIT4_PD_TRAIN##.
1-3-7 Click on OK.
1-3-8 Under Assign Restrictions, click on Add… .
1-3-9 Select the Operator Equal.
1-3-10 Click on the ellipsis to open the Value Help dialog. Press Find and select Notebooks. OK.
1-3-11 Save and activate the Analytic Privilege.
1-4 Assign the Analytic Privilege to USER##.
1-4-1 Under the STUDENT## System, HDB (STUDENT##), navigate to Catalog → Authorization → Roles.
1-4-2 Select ROLE##. Open.
1-4-3 Click on the Analytic Privileges tab. Delete the Analytic Privilege _SYS_BI_CP_ALL by selecting it and then pressing the red (X) icon.
1-4-4 Click on the green (+) icon to add a new Analytic Privilege. In the search bar, find AP_RESTRICT_ON_CATEGORY_NB_##.
1-4-5 Click on OK.
1-4-6 Save and activate.
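The same assignment can also be scripted: analytic privileges on activated content are granted through a _SYS_REPO procedure. A sketch (the exact quoting/name format of the activated privilege may vary by revision):

```sql
-- Hedged sketch: grant the activated analytic privilege to the role via SQL.
-- Note the nested double quotes around the privilege name inside the string.
CALL "_SYS_REPO"."GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE"
     ('"AP_RESTRICT_ON_CATEGORY_NB_##"', 'ROLE##');
```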
1-5 Review the impact of assigning the Analytic Privilege to USER##.
1-5-1 Under the System for USER##, navigate to your Student## Package. Content → student## → Analytic Views. From the context menu of the Analytic View AN_HA300_UNIT4_PO_TRAIN##, choose Data Preview. What is the result? You should only see the results of the Analytic View with Purchase Order data for the Category Notebooks.
Exercise 18: Start, Suspend, Resume the Replication Unit 8: Data Provisioning using SLT
At the conclusion of this exercise, you will be able to: • Start, stop, and resume the replication of a table, including data transformation.
Within this exercise you replicate a table, applying a parameter rule and inserting/deleting data in the sender system. The data will be replicated to HANA in real time.
3-1 Use real-time replication of your table ZSNWD_EMPLOYEEXX including data transformation
3-1-1 Start the replication of your table ZSNWD_EMPLOYEEXX.
a) Start the In-Memory Studio: Start → Programs → SAP in-memory computing → studio
b) Choose the Modeler perspective in Studio via the Window menu: Window → Open Perspective → Others → Modeler
c) Click on Data Provisioning in the Quick Launch tab.
d) Choose source system ZMP from the Select Source System drop-down box.
e) Click on the Replicate button in the Data Provisioning Editor tab.
f) Enter zsnwd in Tables for selection to filter the table list.
g) Choose your table ZSNWD_EMPLOYEEXX and click on the Add button.
h) Click on Finish to start the load process.
3-1-2 Insert or delete records of the table in the sender system by using report ZSNWD_EMPLOYEES. (Please enter your group number in the selection screen for parameter Group to ensure that only your table will be modified.)
a) Log into the sender system ZMP and start report ZSNWD_EMPLOYEES via transaction SE38. (Do not change the coding!)
b) Choose radio button insert record or delete record depending on your planned action.
c) Enter your group number to determine that the data in your own sender table will be changed.
d) Enter values of fields like employee_id, last_name etc.
e) Execute (F8).
f) Check via transaction code SE16 whether the record was inserted/deleted from ZSNWD_EMPLOYEEXX.
3-1-3 Preview the data of your table in HANA. To preview, press the refresh icon if the data preview tab is already/still open. Otherwise choose the option Open Data Preview from the context menu (via right-click on the table name) of your table. The table can be found in the Navigator tab on the left-hand side: HDB (System) → Catalog → EPM_MODEL → Tables → EMP: Employee Data (ZSNWD_EMPLOYEEXX)
3-1-4 Suspend replication, change data in the sender, and resume the replication. Check the data in HANA after each of these steps by using the data preview of your table. To suspend, resume etc. the replication, repeat steps 3-1-1 to 3-1-3, choosing the appropriate action in step 3-1-1 e).
Exercise 19: Data Acquisition using SAP Data Services Unit 9: Data Provisioning using SAP Data Services
At the conclusion of this exercise, you will be able to: • Load data from an ECC extractor into SAP HANA using an ABAP Dataflow with SAP Data Services.
You’re a customer that has already implemented one or more of the SAP Business Suite systems (e.g. ERP, CRM), and you’ve now decided to implement SAP HANA in order to quickly analyze the data from those systems. In order to do this, you need to determine how to get data from your SAP Business Suite systems into SAP HANA.
1-1 Log in to SAP Data Services Designer. 1-1-1 Start SAP Data Services Designer by using the following path: Start Menu -> Programs -> SAP BusinessObjects Data Services 4.0 SP2 Patch 4 -> Data Services Designer. Please be patient a moment and don’t close the black system popup.
1-1-2 Enter your assigned SAP BusinessObjects Username and Password credentials, System: wdflbmt2287.wdf.sap.corp:6400 Username: train-xx (xx is your student number)
Password: same as username
1-1-3 Once logged in, make sure you choose your own repository, for example, DSREPOXX (XX is your student number).
1-2 Create an ECC Extractor Datastore.
1-2-1 At the bottom left-hand side, you will see your local object library. From the tabs at the bottom select the Datastores icon.
1-2-2 From the context menu select New to create a Datastore.
1-2-3 Enter the following details to create an ECC Datastore and click OK:
Datastore name: ECC
Datastore type: SAP Applications
Application server: zmptdc00.wdf.sap.corp
Username: development
Password: support
Client: 800
System: 00
Data transfer method: RFC
RFC destination: DI_SOURCE
Working directory on SAP Server: \\zmptdc00.wdf.sap.corp\
1-3 Import the metadata from the ECC extractor into the SAP Data Services Repository
1-3-1 Right-click on the new ECC Datastore and from the context menu choose Import by Name. Then select Extractors from the type and import the extractor called SFLIGHT. In the pop-up, add the Name of consumer and Name of Project as shown below:
This generic extractor has already been released for ODP access. Once you have completed this step you should be able to see the SFLIGHT extractor under ECC → Extractor.
1-4 Create a HANA Datastore.
1-4-1 From the context menu select New to create a Datastore.
1-4-2 Enter the following details to create a HANA Datastore and click OK:
Datastore name: HANA_DB
Datastore type: Database
Database type: SAP HANA
Database version: HANA 1.x
Datasource: HANA_HDB (from drop down)
Username: STUDENTXX
Password: Your HANA password
1-5 Create a Batch Job and load data into SAP HANA
1-5-1 At the bottom right-hand side, in the local object library, click on the tab with the Projects icon to create a new Project.
1-5-2 Right-click and from the context menu select New and create a Project with the name HANAXX_PROJ, where XX is your student number.
1-5-3 In the Project Area located on the left side, right-click on your new project, click on New Batch Job, and create a job called HANAXX_JOB, where XX is your student number.
1-5-4 On the right-hand side, you will now see a toolbar. Click on the Dataflow icon and click again in the workspace to create a Dataflow called HANAXX_DF, where XX is your student number.
1-5-5 Double-click on your new HANAXX_DF, and add an ABAP Dataflow by clicking on the ABAP Dataflow icon on the right toolbar.
1-5-6 Now add the following details:
Datastore: ECC
Generated ABAP file name: ZHANAXX.aba
ABAP program name: ZHANAXX
Job name: ZHANAXX_JOB
Leave the rest of the settings as default. OK.
1-5-7 Double-click on the new ABAP Dataflow and drag and drop the SFLIGHT Extractor from the ECC Datastores tab.
1-5-8 Add the Query transform from the right toolbar.
1-5-9 Add the Data Transport from the right toolbar.
1-5-10 Connect all of them together as shown below, and do the following:
1-5-11 Double-click on the query transform and drag all fields from the SCHEMA IN to the SCHEMA OUT for the mappings. Navigate back to the ABAP Data Flow.
1-5-12 Double-click on the Data Transport and add the following name in the filename section: HANAXX.dat (XX is your student number).
1-5-13 At the top of the Data Services Designer, click on the back button, which will take you back outside of the ABAP dataflow and show the Dataflow.
1-5-14 Now add the Query Transform from the right toolbar and drag and drop a template table from the template tables folder under the HANAXX Datastore. Call the template table SFLIGHT_XX, where XX is your student number.
1-5-15 Double-click on the query transform and drag all fields from the SCHEMA IN to the SCHEMA OUT for the mappings.
1-5-16 You should now have the following:
1-5-17 You have now set up the Batch Job to load ECC data into SAP HANA using an extractor.
1-5-18 Now right-click on the Job and Execute. It will prompt you to save; then accept all the defaults on the Execution Properties window.
1-5-19 To monitor the job, click on the monitor icon at the top.
1-5-20 Once the job is completed, go to the SAP HANA Studio and refresh the table structure. You should now see a new table called SFLIGHT_XX (where XX is your student number) under your STUDENT Schema. Check the data by right-clicking on this table and selecting Data Preview. Well done!
Exercise 20: Creating a Direct Extractor Connection (DXC) Unit 11: Direct Extractor Connection At the conclusion of this exercise, you will be able to: • Create and use a Direct Extractor Connection (DXC)
You are at a customer site and you need some ECC data directly in HANA. You need this data quickly and without much effort.
1 Create a DXC DataSource and load ECC data directly into HANA.
1-1 Check the parameters in table RSADMIN (check only)
1-1-1 Log on to the SAP ERP system (ZMP / client 800). Credentials will be provided by the instructor. Use transaction SE16 and open table RSADMIN. Select all entries which start with “PSA*” and check them.
PSA_TO_HDB → “Global” (all activated DataSources will create an IMDSO)
PSA_TO_HDB_DESTINATION → “DXC_HANA_CONNECTION” (based on the HTTP connection prepared in SM59)
PSA_TO_HDB_SCHEMA → “DXC” (schema for automatically generated IMDSO tables based on new DataSources)
1-2 Create your own DataSource based on the existing DataSource HA_SFLIGHT.
1-2-1 Type transaction /nSBIW to open the generic DataSource maintenance in the embedded BW system.
1-2-2 Expand node “Generic DataSources” and click on “Maintain Generic DataSources”.
1-2-3 Create a DataSource (HA_SFLIGHT_##) for transaction data.
Application Component → DXC
Description → HA_SFLIGHT
View/Table → SFLIGHT
Save the DataSource.
1-2-4 In the popup to create a new object entry choose “Local” ($TMP) and Save.
1-2-5 In the DataSource: Customer Version Edit screen, save again without any changes to the defaults.
1-3 Replicate your DataSource based on application component DXC.
1-3-1 Start transaction RSA1.
1-3-2 Open Modeling → DataSources and choose the DataSource tree for the “XI_00_800” system.
1-3-3 Right-click on node DXC and choose Replicate Metadata. All DataSources for application component DXC will be replicated. PLEASE use application component DXC ONLY!!
1-3-4 For the new DataSource choose type DataSource (RSDS) and confirm.
1-3-5 Right-click your DataSource HA_SFLIGHT_## and choose Change. Activate your DataSource HA_SFLIGHT_##. (A new IMDSO will be created in HANA.)
1-4 Check the new IMDSO object directly in the HANA modeler.
1-4-1 Log on to the HANA modeler. Credentials will be provided by the instructor.
1-4-2 Search for and open schema “DXC”. Open the table folder and identify the tables of your new IMDSO.
1-4-3 Choose “Open Data Preview” to verify that the tables are empty.
1-5 Create an InfoPackage for loading data
1-5-1 Open the context menu for your DataSource in the ERP system and click on “Create InfoPackage”. Accept all default values and start the loading process by clicking on “Start” in the last tab.
1-5-2 Verify the data in BW (load monitor) and on the HANA system (table preview on the active table).
1-6 Delete your DataSource in transaction RSA1 and have a look at the corresponding HANA tables. They will no longer exist.