Introduction to SAP S/4HANA
All copyrights belong to Aditya.Challa
What Is SAP S/4HANA?
What Is SAP ERP on SAP HANA?
1. SAP ERP is software written to run on any database, and it was originally written for disk-based databases.
2. Table structures, program logic, and the overall data model were all constructed around the limitations of disk-based systems.
3. With the release of SAP HANA, SAP first needed to ensure that the existing SAP ERP code base could run on this new in-memory platform.
4. SAP uses a proprietary SQLDBC library (or database shared library [DBSL]) for connectivity to the database. After this was written, Support Package Stacks (SP Stacks) were released for Enhancement Package 6 (EHP 6) and Enhancement Package 7 (EHP 7), which allowed SAP ERP to run directly on SAP HANA.
5. SAP uses Open SQL to abstract the database from developers and users, which is probably why SAP Business Suite on SAP HANA was released by SAP so quickly after SAP HANA's release.
6. After the code base was solidified on SAP HANA, SAP focused on delivering enhancements. With the release of SP Stacks for EHP 6, SAP ERP Financials (FI) was the first to receive updates, with new transactions ending in "H" (Transaction FBL1H, Transaction FBL3H, Transaction FAGLL03H, etc.).
7. With EHP 7, many new enhancements were made for SAP ERP in general, but a handful became SAP HANA only.
8. SAP has said additional enhancement packages will be delivered for SAP ERP, and they will include enhancements applicable to any database.
9. SAP will support two code bases for SAP ERP: one for any database and one optimized for SAP HANA.
10. How sustainable that future will be depends on adoption, and adoption depends on how easily people can use the innovation being offered.
Software and Hardware
Software
What software will make up your new landscape? When running SAP ERP on any other database, you typically only have to maintain patches to the database. With SAP HANA, you'll have a few more components to maintain. SAP HANA becomes a transparent box instead of a black box. Many enhancements are offered by SAP Business Suite on SAP HANA itself, but some of the true innovation when running SAP Business Suite on SAP HANA will come from components that exist on the outside.
Free Text Search and Real-Time Replication
SAP HANA provides a text search capability that can be extended to search help and more in SAP Business Suite. SAP HANA 1610 with SP 01 includes numerous enhancements that are part of the platform.
When using Central Finance or other solutions, you have to maintain the components supporting real-time data replication.
SAP Fiori Smart Business Apps and SAP HANA Live
SAP HANA Live: SAP-delivered, predefined OLAP content installed outside of SAP Business Suite but inside SAP HANA. It creates a virtual data model.
SAP Fiori Smart Business Apps: SAPUI5 applications that access both transactional tables in SAP Business Suite and analytical content in SAP HANA Live.
In terms of which software license is required to run SAP Business Suite on SAP HANA, you'll need the runtime license (as of August 2015). The SAP Business Suite on SAP HANA runtime license will prevent you from accessing all of the features without an additional license.
SAP HANA Base
SAP HANA Platform
SAP HANA Enterprise
Each version provides a different level of functionality, with SAP HANA Enterprise being the all-inclusive version. If your organization hasn't licensed the SAP Business Suite runtime SAP HANA license, none of the SAP HANA media (and, more specifically, the SAP NetWeaver kernel for SAP HANA) will be available for you to download.
Hardware
SAP HANA hardware can be implemented using one of two methods:
SAP-certified SAP HANA appliance partners
There are more than 17 certified hardware partners with 900+ server configurations. This method includes both hardware and operating system (OS) support, with SAP HANA preinstalled and preconfigured.
Tailored data center integration (TDI)
Customer-provided hardware where the SAP HANA installation is performed by a certified SAP HANA administrator. This method enables you to leverage existing IT investment if the hardware meets the SAP HANA requirements.
In August 2015, SAP announced support for SAP HANA on IBM Power Systems in addition to the Intel Haswell-based systems being offered. From an OS perspective, SUSE Linux was certified first for use with SAP HANA, with Red Hat Enterprise Linux (RHEL) certified later. Implementing SAP HANA 1610 SP 01 on RHEL required one major OS kernel update to provide system stability.
Customers are still able to leverage Microsoft Windows Server as an ABAP application server with SAP HANA as the database platform. In fact, you can continue to use any SAP-supported OS platform as your application server platform.
When you review the different hardware configurations, keep in mind that as of Q4 2016, SAP Business Suite on SAP HANA was only supported in the scale-up scenario. Appliances certified for SAP Business Warehouse (SAP BW) on SAP HANA scale-up aren't the same as those certified for SAP Business Suite on SAP HANA. SAP Business Suite on SAP HANA has a different memory-to-CPU ratio, which aligns to the heavier OLTP environment.
SAP believes 99% of its customers can run SAP Business Suite on one SAP HANA node because current hardware can scale to 12TB of memory with 188 CPU cores on one appliance. This means you could theoretically migrate a 40TB row-store database to a column-store database in SAP HANA.
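As a quick back-of-the-envelope check of the sizing claim above, here is a hedged Python sketch. The compression ratio and the rule of reserving half the node's memory for working space are illustrative assumptions, not SAP sizing rules:

```python
# Rough sizing sketch (assumed numbers for illustration; actual compression
# depends on data cardinality and the number of repeated values).
def fits_in_node(source_row_store_tb, node_memory_tb, compression_ratio):
    """Return True if the compressed column store fits in one node."""
    compressed_tb = source_row_store_tb / compression_ratio
    # Assumption: only half the node memory holds table data, the rest is
    # reserved for working space during query processing.
    return compressed_tb <= node_memory_tb / 2

# A 40TB row store with an assumed ~7x compression would need ~5.7TB,
# fitting within half of a 12TB appliance.
print(fits_in_node(40, 12, 7))
```

With a lower assumed ratio (for example 4x), the same 40TB source would not fit, which is why actual compression measurements matter before committing to a scale-up node.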
What Is SAP S/4HANA?
SAP S/4HANA is the next generation of SAP ERP, optimized for SAP HANA. SAP ERP 6.0 EHP 7 has a common code base that supports any database, including SAP HANA. With SAP S/4HANA, SAP has taken the first step to simplify the SAP ERP data model and optimize the code base for SAP HANA as an in-memory database. SAP has worked to remove the constraints of the traditional row-store, disk-based database and to consolidate tables, reducing the amount of stored derived data. SAP HANA is able to process calculations at runtime so quickly that you no longer need specific tables in SAP ERP to store calculated values.
The first simplification of the data model was delivered with SAP S/4HANA Finance; the second, SAP S/4HANA Logistics, will include optimizations for Sales, Materials Management (MM), Production Planning (PP), Procurement, Asset Management, and more. SAP S/4HANA Finance has been released for use with core SAP ERP in addition to 25 industry solutions. All future solutions built in SAP S/4HANA will require SAP S/4HANA Finance first. This means that you can't migrate to SAP S/4HANA Logistics without first completing the migration to SAP S/4HANA Finance.
SAP S/4HANA is more than an installation; it's a group of migration activities for those customers currently running SAP Business Suite. For new customers, SAP S/4HANA is the initial install, so there is no data migration within SAP Business Suite; this is called a greenfield implementation. For those of you who already run SAP Business Suite, you'll need to prepare your system for the SAP S/4HANA migration, which starts with the activation of SAP S/4HANA Finance. SAP S/4HANA Finance is a data migration post-installation of specific ABAP components. You can include SAP S/4HANA Finance as part of the SAP Business Suite on SAP HANA migration project (it's technically feasible), and it would enable your organization to use the latest innovations and business process optimizations.
With an as-is migration to SAP Business Suite on SAP HANA, you apply support packages and change the underlying database, a migration approach that can be completed in 12 weeks from project kickoff to go-live in production with SAP Business Suite on SAP HANA. If you choose to include SAP S/4HANA Finance, then you have to coordinate with the financial calendar, because the migration has a significant impact on the functionality within SAP ERP Financials (FI).
SAP S/4HANA
SAP HANA Introduction
SAP S/4 HANA
SAP HANA and the Relational Database Management System
In-Memory Database versus Caching
SAP ERP on SAP HANA
Architecture Options
SAP HANA and the Relational Database Management System
SAP HANA and the Relational Database Management System
SAP HANA is a relational database, but it has been described as the reinvention of the database.
It’s true that SAP HANA as a database is both SQL-92 and ACID (atomicity, consistency, isolation, and durability) compliant. This means that you can run the most common and latest SQL commands within SQLScript.
ACID compliance means that SAP HANA performs like any modern database system: it processes transactions that follow a set of rules, maintains the consistency of transactions, isolates transactions from each other, and ensures that any committed transaction won't be lost; these are the four guiding principles of database theory.
Where SAP HANA breaks from the Relational Database Management System (RDBMS) mold is that it stores all transactions in memory first, in columnar format, and compressed. Many in-memory databases available on the market today require you to store data within a disk-based database first and then replicate it to a memory cache. With SAP HANA, end-user transactions are read and written from memory first and then synchronized to disk and committed to logs. SAP HANA is open and supports Open Database Connectivity (ODBC), Java Database Connectivity (JDBC), multidimensional expressions (MDX), OLE DB for OLAP (ODBO), ADO.NET, ABAP, and Extensible Markup Language (XML; REST [Representational State Transfer] and JSON [JavaScript Object Notation]), providing external access to both the database and its application platform.
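The row-store versus column-store distinction above can be illustrated with a small sketch (plain Python, not SAP code): the same three records laid out record by record versus field by field. Grouping all values of one field together is what enables per-column compression and fast scans.

```python
# Illustrative only: the same table as a row store and as a column store.
rows = [
    {"id": 1, "region": "EMEA", "amount": 100},
    {"id": 2, "region": "EMEA", "amount": 250},
    {"id": 3, "region": "APJ",  "amount": 250},
]

# Row store: one contiguous record per transaction (OLTP-friendly).
row_store = [tuple(r.values()) for r in rows]

# Column store: one contiguous array per field (scan/compression-friendly).
column_store = {key: [r[key] for r in rows] for key in rows[0]}

print(column_store["region"])  # ['EMEA', 'EMEA', 'APJ']
```

Scanning one field touches only that array in the column layout, while the row layout must step over every record in full.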
SAP HANA and the Relational Database Management System
SAP HANA is an enterprise platform because it offers a database and an application platform that provides advanced text analysis, a spatial reference (geospatial), virtual OLAP, a web application server, and an advanced data replication engine. It’s an entire application stack in one system.
You could deploy a web application running SAP Fiori on the SAP HANA platform leveraging all of the components and only need to maintain one system. With other software vendors, this might require two to four different systems.
SAP HANA also offers the disaster recovery and system replication features you would expect from an enterprise platform. You can replicate within the data center or across multiple geographically dispersed data centers; it's all supported and configured with a GUI in SAP HANA Studio.
Additionally, SAP HANA is available as both a cloud and an on-premise platform. Again, it's a good time to be an SAP customer because you have choices. You can choose to run SAP HANA in a hybrid scenario or use services from SAP and its partners, which offer SAP HANA as a platform as a service (PaaS) in addition to the traditional infrastructure as a service (IaaS).
In-Memory Database versus Caching
In-Memory Database versus Caching
An in-memory database is different from a database in memory. Even with IBM's first database in 1968, customers had the ability to run a database that could pin tables to memory. A commit operation would exist at the disk level, and a process would copy the data to main memory. This is often referred to as a disk-block cache. The industry term in-memory database management system (IMDBMS) was coined to describe databases that store the entire database structure in memory, with all operations (such as select, insert, and update) occurring in memory without the need for an input/output (I/O) instruction to disk.
SAP BW Accelerator (BWA) is a great example of a database in memory. We could load specific data from SAP BW into the main memory of another server, which would be used for OLAP operations (read only). It could be an index of a table or a group of indexes, but the main point here is that databases that cached data in memory were not used for the OLTP workload.
SAP HANA is an IMDBMS that uses a column store and multiple compression algorithms to store the entire database in memory. It works closely with the Linux OS (Red Hat or SUSE Linux) to manage memory and correctly allocate it for the most efficient storage of data. People always ask how much compression they will get by moving to SAP HANA, and the answer depends on the nature of the data, the number of repeated values, and the indexes. One great feature is that SAP HANA manages the tables for you, and it determines the compression algorithms (dictionary, run-length encoding [RLE], sparse, and others) to be used per column. Multiple columns within a table in memory could be using different compression types. And not all columns are loaded into memory: sometimes referred to as "lazy loading," SAP HANA loads a column into memory on first use. Columns that aren't used aren't loaded, thereby avoiding memory waste.
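The dictionary and run-length encoding schemes mentioned above can be sketched in a few lines. This is purely illustrative (SAP HANA's internal implementations are proprietary), but it shows why columns with many repeated values compress so well:

```python
# Minimal sketch of two column-compression schemes (illustrative only).
def dictionary_encode(column):
    """Replace each value with an integer index into a sorted dictionary."""
    dictionary = sorted(set(column))
    index = {v: i for i, v in enumerate(dictionary)}
    return dictionary, [index[v] for v in column]

def run_length_encode(ids):
    """Collapse runs of repeated ids into [id, run_length] pairs."""
    runs = []
    for v in ids:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

dictionary, ids = dictionary_encode(["EMEA", "EMEA", "EMEA", "APJ", "APJ"])
print(dictionary, ids)          # ['APJ', 'EMEA'] [1, 1, 1, 0, 0]
print(run_length_encode(ids))   # [[1, 3], [0, 2]]
```

Five string values shrink to a two-entry dictionary plus two run pairs; the fewer the distinct values, the bigger the win, which matches the text's point that compression depends on the number of repeated values.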
To the operating system, SAP HANA is a collection of processes that request memory from the OS based on demand. When virtual memory needs to be used, it is mapped to physical (resident) memory.
In-Memory Database versus Caching
The OS uses an algorithm to swap old processes out of resident memory. However, in a correctly sized SAP HANA system, this swapping should never occur. The SAP HANA memory manager requests and reserves memory from the OS. By default, SAP HANA allocates 90% of the first 64GB of physical memory and then 97% of the remaining memory. This allocation limit can be changed in scenarios where you have multiple SAP HANA instances on the same appliance. As SAP HANA has matured, SAP has provided its customers with the tools needed to manage memory in SAP HANA. You have commands to load all tables into memory to get the "worst case" measurement, and you can view snapshots of memory usage over a period of time. SAP HANA is a true IMDBMS that can be managed much like a traditional RDBMS. Instead of monitoring disk space for tables, your database administrators will be monitoring memory and table growth to ensure your database systems can operate without error. (You still have to monitor disk space in SAP HANA, because all transactions are also written to the data and log files.)
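The default allocation rule quoted above (90% of the first 64GB, 97% of the rest) can be expressed as a tiny function. This is a sketch of the stated rule only, not of SAP HANA's actual memory manager:

```python
# Sketch of the default global allocation limit rule described in the text:
# 90% of the first 64GB of physical memory, plus 97% of the remainder.
def default_allocation_limit_gb(physical_gb):
    first = min(physical_gb, 64)      # portion charged at 90%
    rest = max(physical_gb - 64, 0)   # portion charged at 97%
    return 0.9 * first + 0.97 * rest

print(default_allocation_limit_gb(64))   # 57.6
print(default_allocation_limit_gb(512))  # 492.16
```

So on a 512GB host, SAP HANA would by default reserve roughly 492GB, leaving the remainder for the OS and other processes; on shared appliances you would lower this limit per instance.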
Architecture Options
SAP HANA Architecture Options
SAP HANA supports point-in-time recovery with the ability to configure high availability (HA) and disaster recovery (DR) to meet any customer's requirements. From local disaster recovery to site-to-site replication, SAP has delivered out-of-the-box functionality to support most scenarios. There is no additional license to buy to leverage these advanced features.
SAP HANA as a database has two options for scale as it relates to database growth: it can be deployed in the scale-up or the scale-out scenario. With scale-up, we deploy as many resources as possible to a single host to support the current database and future database growth. With the scale-out scenario, we add additional SAP HANA hosts alongside the primary host to add capacity for the database to grow. In the scale-out scenario, data is distributed among the hosts supporting the database; thus we have to plan for and manage this data distribution. In the scale-up scenario, all data resides within one host, which is easier to manage. SAP Business Suite on SAP HANA is only supported in the scale-up scenario.
Currently, you might have a single stack with an ABAP application server sharing the same host as the database. Or you might have a separate, clustered primary application server (PAS) with standalone dialog servers and a clustered database.
SAP HANA Architecture Options
Small- to medium-enterprise customers can run SAP Business Suite on SAP HANA in a cost-effective manner and still achieve great recovery point objectives (RPO) and recovery time objectives (RTO). The RPO can be up to the minute, which is fantastic. SAP HANA can handle automatic host failover at the database layer. The ABAP application server failover is manual and requires DNS changes, but it can be executed in under an hour, or even under 30 minutes if well scripted. This assumes you're using SAP Solution Manager to monitor system availability so that manual processes can be started in the event of a failure.
In terms of the system replication types, which can be used in a multitier strategy, you should be aware of the following modes:
Asynchronous: The primary system commits the transaction after sending the log without waiting for a response from the secondary system. This mode is most useful when the secondary site is more than 100 km away from the primary site or when reducing latency is critical.
Synchronous in-memory: The primary system commits the transaction after it receives a reply that the log was received by the secondary system, but before it was persisted. The delay in the primary system is shorter because it only includes the network transmission time.
Synchronous: The primary system waits to commit the transaction until it receives a reply that the log is persisted in the secondary system. This mode guarantees immediate consistency between both systems at the cost of delaying the transaction by the time for data transmission and persistence in the secondary system.
Synchronous full sync: Another option determines what to do if replication fails due to a network or system issue: either fail the transaction or commit it.
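The replication modes above differ mainly in how long the primary waits before committing. A hedged Python model of that trade-off (the function name and latency figures are hypothetical, and real systems have many more factors):

```python
# Illustrative model (not SAP code) of the extra commit delay the primary
# incurs under each system replication mode described above.
def commit_latency_ms(mode, network_ms, persist_ms):
    """Extra delay before the primary commits a transaction."""
    if mode == "async":
        return 0                        # send the log, don't wait
    if mode == "sync_in_memory":
        return network_ms               # wait until the secondary received the log
    if mode == "sync":
        return network_ms + persist_ms  # wait until the secondary persisted it
    raise ValueError(f"unknown mode: {mode}")

for mode in ("async", "sync_in_memory", "sync"):
    print(mode, commit_latency_ms(mode, network_ms=5, persist_ms=2))
```

This makes the trade-off explicit: asynchronous minimizes latency at the risk of losing the last transactions, while synchronous buys immediate consistency at the full network-plus-persist cost.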
Project Scope in S/4HANA
Project scope in S/4HANA is divided into two parts:
Establishing Project Scope
Landscape Planning
Project scope involves the following steps:
Add-On Compatibility
Enhancement Packages
Upgrade Dependency Analyser
Upgrade and Migration Dependencies
Support Packages
Non-Unicode versus Unicode
Landscape planning consists of the following steps:
System strategy
SAP Solution Manager Tools for Establishing Scope
Connected Systems Questions
Training plan
Resource planning
Project team
Establishing Project Scope
Both a clearly defined project scope and project charter are key to any IT project. The project scope will dictate resources, timelines, effort, and, consequently, cost. As you plan and execute a migration to SAP Business Suite on SAP HANA, you should ask the following questions: Do we upgrade first and then migrate? Do we migrate as-is if we already have the minimum supported software components? How will this impact our third-party software components and interfaces with SAP Business Suite?
Establishing Project Scope
As a customer, you have several options available when migrating to SAP Business Suite on SAP HANA. You can choose to include a Support Package Stack (SP Stack) in the migration, or you can include an entire enhancement package when migrating to SAP HANA.
Landscape planning is also a key component of a successful migration project. You'll need to know what your client strategy will be, as well as how many systems will be required to support both production support changes and any development required during the project. There are several approaches to the landscape, along with the pros and cons of each approach.
Running SAP Business Suite systems in a silo isn't a real-world scenario. We know that your organization's SAP Business Suite system will have multiple interfacing scenarios, third-party add-ons installed, potentially industry solutions activated, and numerous customizations.
Establishing Project Scope
Most of the tools available from SAP help you understand dependencies between SAP components, industry solutions, and certified third-party add-ons, as well as the migration risk for non-certified applications connecting to SAP ERP. The migration will affect the application layer within SAP ERP, so you'll gain a better understanding of whether homegrown applications will still interface after the migration.
SAP Solution Manager and its Build SAP Like a Factory toolsets will play a key role in the migration. Initially, you'll use SAP Solution Manager to reverse-document your production SAP ERP. SAP Solution Manager supports testing and impact analysis through the Solution Documentation tool, which you use to document the transactions and programs in use within your production systems.
For the various phases of the project, this covers the types of resources required and how to calculate the number of each resource required.
Upgrade and Migration Dependencies
Add-ons
Enhancement packages
Unicode
Support packages
Common queries that come up during migration:
01. How much change is too much change?
02. How can you best leverage the testing dollars you plan on spending during this project?
03. What can you do if you're an SAP R/3 customer today?
04. What's next?
Enhancement Packages
When coupled with the correct usage of the Business Process Change Analyzer (BPCA) in SAP Solution Manager, a customer can also measure the impact of enabling specific enhancements.
Enhancement packages split the installation of new functionality from the implementation of enhancements. You can now install enhancements without impacting existing business processes and then selectively activate enhancements when they are ready.
Enhancement packages can be included with normal support package installations performed by a customer (ideally on an annual basis) and not directly impact the level of testing required.
Enhancements are delivered as part of maintenance agreements with SAP, and it’s up to the customer to install and then activate.
Finding Your Target SAP ERP Version
When planning your SAP Business Suite on SAP HANA migration, specific SAP ERP release levels are supported for the migration. The source release for a migration to SAP Business Suite on SAP HANA can be any version of SAP ECC 5.0 or SAP ERP 6.0. The target release depends on the release strategy within your organisation. If we follow SAP's recommendation for the migration, the target release will be EHP 7 for SAP ERP 6.0 SPS 01 at a minimum. If you're already running SAP ERP 6.0 EHP 6, then you can migrate as-is to SAP Business Suite on SAP HANA using the Software Provisioning Manager (SWPM).
The Software Update Manager (SUM) and the Software Provisioning Manager (SWPM) are the two applications provided by SAP that you can use to migrate to SAP Business Suite on SAP HANA; both are delivered as part of the software logistics toolset (SL Toolset) from SAP. If you're currently running SAP R/3, then you first need to upgrade to SAP ERP 6.0 EHP 7 and then use the SWPM tool to migrate.
Finding Your Target SAP ERP Version
The second key point from the table comes from SAP Note 1768031. If your organisation's release strategy is N-1, this might dictate that you must apply EHP 6 instead of EHP 7 (N-1 refers to the IT strategy of installing software one revision lower than the current release, with the idea that the newest release is not the most stable). The N-1 philosophy is better applied to SP Stacks, not enhancement packages: enhancement packages don't impact your system unless activated via the Switch Framework (Transaction SFW5). SP Stacks are collections of fixes provided by SAP that will directly impact the transactions and programs in use within your environment. Adopting an N-1 strategy for SP Stacks is allowed during the migration to SAP Business Suite on SAP HANA. This is an option when you execute the Maintenance Optimiser to download the required upgrade software components. Based on the current SAP recommendations and published roadmaps, we recommend setting your target release to SAP ERP 6.0 EHP 7 if you're not already on that release. If you're currently running EHP 7, you can migrate as-is.
Support Packages
After planning the target enhancement package level you'll apply when migrating to SAP HANA, you also need to determine the SP Stack level. An SP Stack is a collection of support packages that SAP has tested as a release to the customer base. A support package is a collection of fixes or SAP Notes for a specific version of an SAP product. In practice, it's best that you always run a consistent SP Stack delivered by SAP.
To determine which SP Stack you should implement during your SAP Business Suite on SAP HANA migration project, you need to take into account SAP's maintenance strategy. SAP regularly updates its SP Stack schedule within the Service Marketplace. For example, if you know you're going to start a migration project in Q4 2017, then the SP Stack schedule helps you understand the current SP Stack level available and the planned release date for the next available SP Stack. In general, SAP recommends installing the latest available SP Stack.
Support Packages
When planning your target release for the SAP Business Suite on SAP HANA migration, it's also useful to understand the maintenance schedule and supported versions for the software components included in your project scope. The product availability matrix (PAM) provides detailed information regarding the specific components that will support your SAP Business Suite on SAP HANA system. You can find details there on the supported OS versions, SAP HANA support package levels, supported browsers, and supported Java versions.
Support Packages
Using the PAM, you can determine the target version for each component of your landscape. You must also find the right balance of in-house expertise and long-term supportability. Note that the Support Until dates are drastically different between SUSE Linux Enterprise and Red Hat Enterprise Linux.
Add-On Compatibility
ABAP add-ons installed in your SAP Business Suite systems are provided either by SAP or by third parties. SAP-delivered ABAP add-ons are defined as supported or unsupported in SAP Notes and the PAM previously described. For these ABAP add-ons, it's easiest to determine whether support will be provided by SAP during your migration to SAP Business Suite on SAP HANA. You can determine the ABAP add-ons installed in your system; after you've done so, check whether each add-on is supported when migrating to SAP Business Suite on SAP HANA. You must review add-ons for both SAP NetWeaver AS ABAP 7.4 on SAP HANA and the SAP ERP 6.0 EHP 7 version for SAP HANA 1.0.
Third-party ABAP add-ons are more difficult to verify because they require the developer of the add-on to certify the solution with SAP Business Suite on SAP HANA. SAP Note 1855666 is continually updated with information regarding third-party add-ons that have been qualified. The most common ABAP add-ons from Vistex, OpenText, and Nakisa have all been certified for use with SAP Business Suite on SAP HANA.
Unicode versus Non-Unicode
SAP HANA only supports Unicode, so any SAP Business Suite system currently running in non-Unicode must be converted. The Database Migration Option (DMO) in SUM supports a Unicode conversion during the SAP HANA migration and upgrade, but only if your non-Unicode system is currently configured with a single code page. If your system is Multi-Display/Multi-Processing (MDMP), then your Unicode conversion must take place before the migration to SAP Business Suite on SAP HANA. The SUM tool doesn't support a Unicode conversion and migration to SAP HANA in one step for MDMP systems. SAP has provided several utilities for customers to use to prepare their systems for a Unicode migration. These preparation tasks can be completed weeks or months in advance of the SAP HANA migration.
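The single-code-page case can be illustrated in Python (assuming Latin-1 as the legacy code page purely for illustration): the conversion is deterministic because every byte has exactly one interpretation, which is what MDMP systems, mixing code pages per row, lack.

```python
# Illustrative sketch of a single-code-page Unicode conversion.
# Legacy storage: bytes in one known code page (assumed Latin-1 here).
legacy_bytes = "Müller".encode("latin-1")

# Step 1: decode with the single known code page -> Unicode text.
unicode_text = legacy_bytes.decode("latin-1")

# Step 2: re-encode to the Unicode target encoding (UTF-8).
utf8_bytes = unicode_text.encode("utf-8")

print(len(legacy_bytes), len(utf8_bytes))  # 6 7  (ü becomes two bytes)
```

Note the length change: non-ASCII characters can grow from one byte to several, one reason Unicode conversions affect sizing and must be planned rather than treated as a byte-for-byte copy.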
Unicode versus Non-Unicode
Conversion from non-Unicode to Unicode is the responsibility of the Basis team.
Landscape Planning
As you continue to define your project scope, you must define the project landscape to be used during your SAP Business Suite on SAP HANA migrations. The project landscape plan will include a list of systems, system clients if applicable, and software components to be used during your project. When you embark on your SAP Business Suite on SAP HANA migration, you're going to need an isolated landscape (albeit temporarily) where you can test your migration procedures, perform application regression tests, and potentially perform cutover testing. Because your SAP landscapes are so interconnected, for each SAP system you need to define a plan for the number of systems, where they will reside, how long they will exist, and how you'll manage changes during your project.
Connected Systems Questions
If you answer yes to any of the following questions, it's safe to say you'll need a corresponding test system for any connected system as part of the SAP Business Suite on SAP HANA migration project:
Does the connected system not support connections to more than one SAP ERP system at the same time?
Do you have a 1:1 relationship between the connected system and the systems in your SAP ERP landscape (e.g., DEV SAP ERP connected to DEV SAP CRM)?
Are you unable to implement a manual interfacing method if the connected system is unavailable or a defect arises at go-live?
These additional systems will only be required for the test cycles you execute during your project. For an upgrade and migration project, ideally you should limit your total testing time to two to four weeks (there are exceptions for regulated industries, where test cycles might be longer). Keep in mind that you're changing the underlying database but not activating new functionality during the migration. Depending on your approach, you might also be applying SP Stacks, which, as mentioned previously, are collections of notes. So your test cycles will typically be focused on regression testing, not user acceptance or unit testing.
System Strategy
When referring to the system strategy as part of the overall landscape strategy, we're referring to two components: how many SAP ERP systems you will need during your project to support existing SAP ERP operations and the project itself, and how many SAP ERP connected systems (e.g., SAP CRM and SAP PI) you will need during your project.
There are two main system strategies we see in the SAP community when customers execute an SAP Business Suite on SAP HANA migration. The first strategy is probably the most common, and it's referred to as the five-system landscape. In this scenario, you have your normal development, quality, and production systems along with two project-related systems supporting development and testing. The second strategy (and coincidentally the least expensive to implement) involves the use of only one additional system in the landscape. It's been called the five-in-one landscape, but for simplicity, let's refer to it as the four-system landscape. Explore each scenario to determine which landscape might best fit your organisation.
SAP Solution Manager After choosing target application versions, verifying add-ons, working through Unicode conversions, and defining a landscape strategy, you next need to define what you are upgrading and migrating to SAP Business Suite on SAP HANA. What are the transactions and programs that will potentially be impacted by the migration? What business processes will you need to test, and how can you determine what your users use in production today? How can you estimate the effort for development and testing during your project based on the transactions your users utilise? These are all important questions you must answer to help you determine the scope and ultimately the effort required to complete the SAP Business Suite on SAP HANA migration.
Project Team Resource Planning When talking about resource planning, we’ll discuss the roles on a project that you’ll need to account for. A role may be supported by one or many people, and a single person can support multiple roles.
Project Team Your project team is the heart and soul of your SAP Business Suite on SAP HANA project. The success of the project will be the collective sum of individual efforts during the life of the project. It's important that everyone understands the roles and responsibilities on the project as well as the communication paths and escalation points. Your project team might consist of internal and external team members. You should consider SAP Active Global Support (AGS) part of your extended team, as it will be supporting any technical issues that arise during your project.
Test Coordinator
SAP NetWeaver Administrator (Basis)
Test Manager
SAP (ABAP) Developer
Project Manager
SAP Security Administrator
Project Team Roles
Project Manager
It goes without saying that the project manager role is essential to a successful project. As you begin searching internally for a project manager, it's more than likely that this person won't have experience managing an SAP ERP migration to SAP HANA. However, it's more important that you find an individual who is good at managing technical projects and even better at communicating with technical people.
Test Manager For some organisations that have hundreds or more test scripts to build and execute as part of a test cycle, a dedicated resource in this position is needed. The test manager is ultimately responsible for the test cycle planning, execution, defect tracking, test cycle reporting, and exit criteria for the test phase of the project. It's important that this person can lead and define the test cycles as part of the project. The test manager defines test script standards for the build of test scripts and their execution by test cycle. The test manager also defines the test plan, which details what resources are required and when, and facilitates the logistics of testing. This role provides regular status reports during the build of test scripts as well as the execution of the test script cycles. The test manager is also responsible for defect management during the project. Where will defects be stored? How will resolvers be able to report their resolutions? And how will testers know when to retest? These are all questions the test manager needs to plan for and coordinate.
Test Coordinator
This role is responsible for executing the strategy defined by the test manager. The test coordinator is more tactical in that this person serves as the liaison to business analysts or business users who are responsible for the test script build and execution. During the preparation phase before the test cycle begins, the test coordinator ensures that all scripts conform to a set of standards and even assists with the test script build when required. Ideally, this person is a hands-on administrator of the test planning tool.
SAP NetWeaver Administrator (Basis) This role is the technical focus of the project from beginning to end. This person (or group of persons) is ultimately responsible for the execution of the migration tools for the SAP Business Suite on SAP HANA migration. This role is also responsible for the SAP sizing to be completed as part of the planning phase, in addition to providing input on the landscape and client strategy. If you're working with SAP's tailored data centre integration (TDI), then this role must be certified by SAP to perform SAP HANA installations. If you're planning to leverage weekends for both non-production and production downtime, then you'll need multiple persons to act in this role.
SAP ABAP Developer The scope of effort for this role won't be defined until after the execution of the SEA or the BPCA. After either of these two tools is executed, you should have a good estimate of the effort required to remediate base SAP modifications and how many custom objects will be impacted by the potential installation of an enhancement package or SP Stack. The SAP developer role is responsible for defect remediation and custom application code adjustment during the project. The developer role's work stream will more than likely be driven by the SAP Solution Manager output and the defects created during testing. The SAP developer is also involved in the test scope identification for interface testing. Ideally, this role should be filled by someone who is familiar with your landscape and its interfaces, and has an overall understanding of why base SAP modifications were made, if they exist.
SAP Security Administrator
The SAP security administrator plays a few small roles on the project. During the planning phases, this person needs to understand the systems and clients to be utilised so he can prepare the SAP systems for administrators, developers, and testers. The migration to SAP Business Suite on SAP HANA isn't activating any new functionality or authorisation objects for the functional users. However, you're changing the underlying database and need to determine who will administrate users at the database level.
SAP Application Tester
This role has two primary responsibilities on the project: build test scripts and test data sets during the preparation for the test cycles, and execute the test scripts and record defects during the test cycle.
SAP Training Plan
As part of your SAP Business Suite on SAP HANA migration project, you'll have multiple training opportunities to support your project team and those responsible for maintaining your SAP Business Suite on SAP HANA landscape long term. Your training plan should identify the training needs of your project team members for use during the project and the training required to support SAP Business Suite on SAP HANA.
S/4 HANA
The purpose of this session is to provide a detailed overview of the key ideas and concepts behind SAP Simple Finance.
S/4 HANA Power of Innovation There are three fundamental building blocks in the upgrade from SAP R/3 to S/4 HANA:
In-memory data storage
Removal of Redundancy
Columnar Data Organization
S/4 HANA - In-Memory Database The overall vision of SAP HANA is to serve as the single unified platform for mixed enterprise workloads, namely transactional and analytical data processing. At the core of SAP HANA is a massively parallel database management system (DBMS) that runs fully in main memory, in contrast to traditional DBMSs, which are designed to optimize performance on hardware with constrained main memory.
The SAP HANA database is designed from the ground up around the idea that memory is available in abundance to keep all business data and that input/output (I/O) access to the hard disk is not a constraint. Whereas traditional database systems put the most effort into optimizing hard disk I/O, SAP HANA focuses on optimizing memory access between the CPU cache and the main memory. It incorporates a full DBMS with a standard SQL interface, transactional isolation and recovery, and high availability. The DBMS is capable of handling row, column, and graph stores of data. It facilitates massive parallelization of database access threads through a dedicated engine and comes with integrated services, such as search and application extension. It is compatible with standard interfaces such as SQL, MDX, and others.
S/4 HANA - In-Memory Database The DBMS aspects of SAP HANA are divided into four categories:
Optimize for write access
Keep data in memory
Optimize in-memory data access
Support massive parallel data processing
S/4 HANA - In-Memory Database SAP HANA is designed to process massive quantities of data in main memory and provide immediate results for both transactional and analytical processing, known as OLTP and OLAP.
Keep data in memory Although nonvolatile storage is still needed to ensure that write operations are durable, read operations can anticipate that all relevant data resides permanently in main memory and thus can be executed without disk I/O.
Optimize in-memory data access With all data readily available in main memory, data movement between the CPU cache and main memory becomes the new performance bottleneck. SAP HANA addresses this by using a columnar store and effective data compression techniques to reduce the overall size of data in memory and achieve high hit ratios in the different caching layers of the CPU.
Support massive parallel data processing Modern computer systems have a continuously increasing number of processing cores. To natively take advantage of massively parallel multicore processors, SAP HANA compiles SQL processing instructions into an optimized model that allows parallel execution and scales well with the number of cores.
S/4 HANA - In-Memory Database Optimize for write access The many advantages of the columnar store for fast read performance have their price. Write operations, particularly inserts and updates to a columnar store, are more complicated and less efficient. To overcome this drawback, SAP HANA introduces the concept of the delta store. Incoming updates are accumulated in a write-optimized delta partition, which is merged into the main partition at appropriate times.
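The delta-store idea described above can be sketched in a few lines. This is an illustrative model only, not SAP HANA's actual implementation: writes land in a small write-optimized delta, reads consult both partitions, and a merge periodically folds the delta into the read-optimized main partition.

```python
# Illustrative sketch of the delta-store concept (not SAP HANA code).
class ColumnWithDelta:
    def __init__(self):
        self.main = []    # read-optimized partition (sorted/compressed in a real system)
        self.delta = []   # write-optimized, append-only delta partition

    def insert(self, value):
        # Inserts never touch the compressed main partition.
        self.delta.append(value)

    def scan(self, predicate):
        # Reads must consider both partitions.
        return [v for v in self.main + self.delta if predicate(v)]

    def merge(self):
        # "Delta merge": rebuild the main partition including the delta rows.
        self.main = sorted(self.main + self.delta)
        self.delta = []

col = ColumnWithDelta()
for v in [5, 3, 8]:
    col.insert(v)
col.merge()                          # main becomes [3, 5, 8]
col.insert(1)                        # lands in the delta only
print(col.scan(lambda v: v < 6))     # → [3, 5, 1]
```

The key design point is that the expensive reorganization of the compressed main partition is deferred to merge time instead of being paid on every insert.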
S/4 HANA - In-Memory Database Keeping Data in Memory
A single enterprise-class server can hold terabytes of data in main memory. Main memory is the fastest storage medium that can hold a significant amount of data, making it a viable approach to permanently house the complete business data of an enterprise.
S/4 HANA - In-Memory Database Let's look at the storage hierarchy of a computer system.
S/4 HANA - In-Memory Database Consider the conceptual view of the storage hierarchy in typical computer systems. In terms of performance, two factors come into play when accessing data from a storage layer. The first is the raw speed of the storage medium itself.
The other is latency, which is the time delay experienced by the system to load data from a storage medium until it is available in a CPU register.
In the end, every operation takes place inside the CPU, and in turn the data must be in a register of the CPU to be processed.
Hard disks are at the very bottom of the storage hierarchy. Because they are cheap, it is affordable to have a very large amount of storage at this level, but the tradeoff is performance. Not only is it the slowest medium, but (because there are typically four layers between the hard disk and the CPU register) it also has the highest latency. Main memory is the first level of storage next to the CPU caches, and it is directly accessible. Compared with accessing data on hard disks, data in main memory can typically be accessed more than 100,000 times faster. Compared with conventional DBMSs, which employ disks as the primary data store and use main memory as a buffer for data processing, keeping data in memory can improve database performance just by the advantage in access time.
S/4 HANA - In-Memory Database
Ensure Durability
Keeping data in memory raises several questions. First, what happens if there is a power outage?
Main memory is volatile storage. It loses its content when it is out of power. In this context, we refer to a set of properties known as atomicity, consistency, isolation, and durability (ACID) in database technology, which ensures that database transactions are processed reliably and are not susceptible to external disruptions. The persistence layer of SAP HANA ensures that changes are durable and that the database can be restored to the most recent committed state after a restart.
S/4 HANA - In-Memory Database
Ensure Durability
To achieve this goal in an efficient way, the persistence layer uses a combination of write-ahead logs, shadow paging, and data save points.
Nonvolatile storage is used for logs and save points. Upon restart after a power failure, the database can be restored much like a disk-based database. First, the database pages are restored from the most recent save points, and then the database logs are rolled forward to redo any changes that were not captured in the save points, eventually bringing the database to the same consistent state as before the power failure.
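The restart sequence above (restore the last savepoint, then roll the redo log forward) can be sketched as follows. This is a simplified illustration with a hypothetical `recover` function and a key-value state model, not SAP HANA's actual persistence layer:

```python
# Simplified sketch of savepoint + redo-log recovery (illustrative only).
def recover(savepoint, redo_log):
    """savepoint: dict snapshot of committed state at savepoint time.
    redo_log: list of (key, value) writes committed after the savepoint."""
    db = dict(savepoint)            # 1. restore database pages from the savepoint
    for key, value in redo_log:     # 2. roll the log forward, redoing later changes
        db[key] = value
    return db                       # same consistent state as before the failure

state = recover({"a": 1, "b": 2}, [("b", 3), ("c", 4)])
print(state)  # → {'a': 1, 'b': 3, 'c': 4}
```

Note how writes captured in the savepoint are never replayed, while the log supplies exactly the changes committed after the savepoint was taken.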
With all relevant data in memory, disk access is no longer a limiting factor for performance. With the increasing number of cores, CPUs can process more and more data per time interval.
The new performance bottleneck in an in- memory database arises when the CPU waits for data to be loaded from memory into the CPU cache.
S/4 HANA - In-Memory Database
Optimizing In-Memory Data Access
The resource “DBMSs on a Modern Processor” shows that traditional database systems could not address this challenge well. Even when all data is buffered in main memory in traditional DBMSs, the CPU spends half of its execution time in stalls—that is, waiting for data to be loaded from main memory into the CPU cache.
To achieve desired performance, the key aspect is to work on a minimal set of data and minimize the amount of data that needs to be transferred between the main memory and processor.
Columnar Data Organization There are basically two options:
Row Based Tables
Column-Based Tables
Columnar Data Organization
Conceptually, relational databases represent data in two-dimensional structures called tables. A table is a set of data elements organized in terms of vertical columns or attributes (which are identified by their name) and horizontal rows or records. The main memory, however, is a single-dimensional space, providing memory addresses that start at zero and increase serially to the highest available location. To store the data in memory, the database storage layer must decide how to map the two-dimensional table structures to the linear memory address space.
Columnar Data Organization A row-based layout stores a table as a sequence of rows in which the data elements forming a row are stored in contiguous memory locations. In the row-based layout, all attributes of a tuple (an ordered set of values) are stored consecutively and sequentially in memory.
In a columnar layout, the values of individual columns are stored together. A column-oriented layout stores a table as a sequence of columns in which data elements of individual columns are stored together.
Both storage models have their advantages. These can be traced back to the different memory access patterns required when performing data operations on row- oriented and column-oriented data layouts.
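To make the two layouts concrete, here is a small illustrative sketch (not SAP code) of how the same two-dimensional table maps to a linear address space under each layout:

```python
# A tiny table of (name, age) tuples; the rows of the logical table.
table = [("Mary", 31), ("John", 27), ("Jane", 45)]

# Row-based layout: all attributes of one tuple are contiguous.
row_layout = [cell for row in table for cell in row]

# Columnar layout: all values of one column are contiguous.
col_layout = [row[c] for c in range(2) for row in table]

print(row_layout)  # → ['Mary', 31, 'John', 27, 'Jane', 45]
print(col_layout)  # → ['Mary', 'John', 'Jane', 31, 27, 45]
```

A full-record fetch touches one contiguous run in the row layout, while a scan over a single column touches one contiguous run in the columnar layout; that difference in memory access patterns is exactly what the text above describes.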
Columnar Data Organization
Row-Based Tables
The table has a small number of rows (e.g., configuration tables).
Neither aggregations nor fast searching are required.
The application typically needs to access the complete record.
The columns contain mainly distinct values, so the data compression rate would be low.
The application needs to process only one record at a time (many selects and/or updates of single records).
Columnar Data Organization
Column-Based Tables
Most of the columns contain few distinct values (compared to the number of rows).
The table has many records, and mostly columnar operations are required.
The table has many columns.
The table is searched based on values of a few columns.
Calculations are typically executed on single or few columns only.
Columnar Data Organization
SAP HANA supports both row-store and column-store tables. High performance is achieved when column-store tables are used in memory. Studies show that workloads on enterprise databases are mostly read-oriented and dominated by set processing. In most cases, operations work on many rows but only on a notably smaller subset of all columns. These factors speak in favor of using a columnar layout in an enterprise scenario. The columnar table layout enables effective projection by accessing only the relevant columns, thus reducing the total number of memory accesses. Operations on single columns, such as searching or aggregations, can be implemented as loops over an array stored in contiguous memory locations. Such an operation has high spatial locality and efficiently utilizes the CPU caches.
Columnar Data Organization
With row-oriented storage, the same operation would be slower, because data of the same column is distributed across memory, causing CPU cache misses and thus stalling the CPU execution.
Columnar data storage allows highly efficient compression. Especially if the column is sorted, there will be ranges of the same values in contiguous memory, so compression methods such as run-length encoding or cluster encoding can be used more effectively. Even though hardware technology develops very rapidly and the size of available main memory constantly grows, the use of efficient compression techniques plays a key role in in-memory computing to do two things: first, to keep as much data in main memory as possible, and second to minimize the amount of data that must be read from memory to process queries and transfer data between nonvolatile storage mediums and main memory
Columnar Data Organization Data Encoding and Compression
Data compression is a set of techniques used to decrease the number of bits used for data representation in memory. It reduces the overall memory requirement (and therefore the cost) for keeping business data entirely in main memory. Also, read operations can be performed more efficiently on the compressed data, because data movement between the main memory and the CPU is minimized.
Columnar Data Organization Dictionary Encoding Dictionary encoding is the basis of many data compression techniques.
The main effect of dictionary encoding is that longer values, such as text strings, are represented with shorter codes (normally integer values).
Dictionary encoding works column-wise. It constructs a dictionary for each column, which translates every distinct value of the column to a distinct integer code. The code can be stored in the dictionary explicitly or implicitly (e.g., by the position of the value entry in the dictionary).
In our example, the dictionary for the column name lists all the distinct values, and the position of a value in the dictionary represents the code for that value; for example, position 24 represents Mary. Then, in the actual column store (the attribute vector), all instances of Mary in the column name are replaced with the number 24.
Dictionary encoding replaces longer text values with shorter numbers. This can lead to effective reduction in data size in memory when a column contains many identical values (e.g., multiple Johns in the same list). The more often identical values appear in a column, the greater this benefit of dictionary encoding.
In SAP business applications, many columns (e.g., country code, status code, or foreign keys) contain only a few distinct values compared to the number of rows. This allows for a very effective compression of column data using dictionary encoding.
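A minimal sketch of dictionary encoding as described above. The function names are illustrative, not an SAP API; the dictionary holds each distinct value once, and the attribute vector stores only integer positions:

```python
# Illustrative dictionary encoding for a single column (not SAP HANA code).
def dictionary_encode(column):
    dictionary = sorted(set(column))                  # distinct values, sorted
    code = {value: i for i, value in enumerate(dictionary)}
    attribute_vector = [code[v] for v in column]      # integers replace the values
    return dictionary, attribute_vector

def decode(dictionary, attribute_vector):
    # The position in the dictionary implicitly stores the code.
    return [dictionary[i] for i in attribute_vector]

names = ["John", "Mary", "John", "Jane", "John"]
dictionary, vector = dictionary_encode(names)
print(dictionary)  # → ['Jane', 'John', 'Mary']
print(vector)      # → [1, 2, 1, 0, 1]
```

Repeated values like "John" cost only one dictionary entry plus small integer codes, which is why columns with few distinct values compress so well.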
Columnar Data Organization Dictionary Encoding
Columnar Data Organization Data Compression
On top of dictionary encoding, SAP HANA applies several lightweight compression techniques, such as prefix encoding, run-length encoding, cluster encoding, indirect encoding, and delta encoding. These techniques provide a good tradeoff between higher data compression rates and the additional CPU cycles needed for encoding and decoding. The basic idea of dictionary encoding is also applicable in row-based storage. However, in row-based storage successive memory locations contain data of different columns, so many compression methods, such as run-length encoding, cannot be applied in a row store. A higher compression factor of up to 10 can be achieved in a columnar store as compared to traditional row-oriented database systems.
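Of the lightweight techniques named above, run-length encoding is the easiest to sketch. This is illustrative code, not SAP HANA's implementation; it pays off when a (sorted) column has long runs of equal values:

```python
# Illustrative run-length encoding for a column (not SAP HANA code).
def run_length_encode(column):
    runs = []
    for value in column:
        if runs and runs[-1][0] == value:
            # Extend the current run instead of storing the value again.
            runs[-1] = (value, runs[-1][1] + 1)
        else:
            runs.append((value, 1))
    return runs

def run_length_decode(runs):
    return [value for value, count in runs for _ in range(count)]

status = ["OPEN"] * 4 + ["CLOSED"] * 2
runs = run_length_encode(status)
print(runs)  # → [('OPEN', 4), ('CLOSED', 2)]
```

Six stored values shrink to two (value, count) pairs; on a sorted column with millions of rows and few distinct values, the reduction is far more dramatic.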
Columnar Data Organization Operation on Compressed Data
By working with dictionaries to represent text as integers, dictionary encoding not only reduces the memory footprint of data but also increases speed for operations on the encoded data. A performance gain of a factor of between 100 and 1,000 has been reported when comparing operations on integer-encoded compressed values with operations on uncompressed data. There are several reasons for this result: Compressed data can be loaded faster into the CPU cache. Because the new limiting factor is data transport between memory and the CPU cache, the performance gain exceeds the additional computing time needed for decompression. For example, equality checks in join conditions are evaluated directly on integer codes, which is typically much faster than comparing string values.
Columnar Data Organization Operation on Compressed Data
Compression can speed up operations such as scans and aggregations if the operator is aware of the compression. Given a good compression rate, computing the sum of the values in a column will be much faster if the column is run-length encoded, because many additions of the same value can be replaced by a single multiplication.
Certain operations, such as COUNT or NOT NULL, can be performed directly on the encoded data without having to retrieve the actual values from the dictionary.
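The compression-aware SUM described above can be sketched as follows (an illustrative example, not SAP HANA code): with a run-length encoded column, each run of identical values is handled by a single multiplication instead of count-many additions.

```python
# Illustrative compression-aware aggregation over run-length encoded data.
def sum_rle(runs):
    # One multiplication per run replaces 'count' additions of 'value'.
    return sum(value * count for value, count in runs)

runs = [(10, 1000), (20, 500)]  # 1,500 column values stored as just 2 runs
print(sum_rle(runs))            # → 20000
```

Only two arithmetic steps are needed here instead of 1,500 additions, which is the essence of operating directly on the compressed representation.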
Read operations can further benefit from dictionary encoding if the dictionary itself is sorted. A sorted dictionary can act like an index: retrieving a value from a sorted dictionary can be done using binary search. Compared to a full scan through the dictionary, this has a much-reduced complexity of O(log n). Of course, the cost of this enhancement is having to maintain the sort order of the dictionary when new values are added.
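The binary-search lookup is exactly what Python's standard `bisect` module provides; a minimal sketch:

```python
# Sketch: a sorted dictionary allows O(log n) value lookup via binary search,
# versus an O(n) scan through an unsorted dictionary.

import bisect

def lookup_code(sorted_dictionary, value):
    """Return the integer code of `value`, or None if it is not present."""
    i = bisect.bisect_left(sorted_dictionary, value)
    if i < len(sorted_dictionary) and sorted_dictionary[i] == value:
        return i
    return None

dictionary = ["Berlin", "London", "Paris", "Tokyo"]
assert lookup_code(dictionary, "Paris") == 2
assert lookup_code(dictionary, "Rome") is None
```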
Columnar Data Organization Parallel Execution
Another key aspect of SAP HANA is its capability to achieve optimal parallelism in processing data in memory. This is particularly important to take advantage of modern multicore computing systems. It is also key for multinode scale-out implementations and for ensuring scalability in such distributed systems. SAP HANA is engineered for parallel data processing that scales with the number of cores, even across multiple computing nodes in a distributed setup. For DBMSs specifically, optimization for multicore platforms is a key design consideration.
Columnar Data Organization Parallel Execution in Columnar Store
Column-based storage simplifies parallel execution using multiple processor cores. In a column store, data is already vertically partitioned. That means that parallelization can be achieved on different levels. First, operations on different columns can easily be processed in parallel: if multiple columns need to be searched or aggregated, each of these operations can be assigned to a different processor core. In addition, operations on one column can be executed in parallel by dividing the column into multiple sections that are processed by different processor cores.
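The second level of parallelism described above can be sketched with a thread pool standing in for the processor cores (a simplified illustration, not HANA's scheduler):

```python
# Sketch: one column split into sections that are scanned by separate workers.
# A thread pool stands in for the processor cores.

from concurrent.futures import ThreadPoolExecutor

def scan_section(section, predicate):
    """Scan one section; return local row positions that match the predicate."""
    return [i for i, v in enumerate(section) if predicate(v)]

def parallel_scan(column, predicate, sections=4):
    """Divide the column into sections and scan them on separate workers."""
    size = (len(column) + sections - 1) // sections
    chunks = [column[i:i + size] for i in range(0, len(column), size)]
    with ThreadPoolExecutor(max_workers=sections) as pool:
        futures = [pool.submit(scan_section, chunk, predicate) for chunk in chunks]
        result = []
        for n, f in enumerate(futures):
            # Re-base each section's local positions to global row numbers.
            result.extend(n * size + pos for pos in f.result())
    return result

column = list(range(100))
assert parallel_scan(column, lambda v: v % 25 == 0) == [0, 25, 50, 75]
```

The first level (different columns on different cores) is the same pattern with one task submitted per column instead of per section.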
Columnar Data Organization Parallel Aggregation
Aggregation functions like COUNT, SUM, AVERAGE, MAXIMUM, and MINIMUM are perfect examples for parallel execution. SAP HANA performs aggregation operations by spawning several threads that act in parallel, each of which operates on a separate segment of data in memory. Each thread executes aggregation functions in a loop-wise fashion as follows: 1. Fetch a small partition of the input data into the shared memory. 2. Calculate the aggregated value on that partition. 3. Repeat steps 1 and 2 until the complete input data is processed. Each thread has a private buffer in which the local aggregated results are stored. When the aggregation threads are finished, the aggregated values in all private buffers are merged to produce the overall result.
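The loop above can be sketched directly: each worker aggregates its own segment into a private buffer, and the private results are merged at the end (an illustrative sketch, not HANA's aggregation engine):

```python
# Sketch of the three-step loop: each worker aggregates small partitions of
# its segment into a private buffer; private buffers are merged at the end.

from concurrent.futures import ThreadPoolExecutor

def aggregate_segment(segment, batch=100):
    """One worker: fetch small partitions of its segment and aggregate locally."""
    private_sum = 0                       # private buffer of this worker
    for i in range(0, len(segment), batch):
        partition = segment[i:i + batch]  # step 1: fetch a small partition
        private_sum += sum(partition)     # step 2: aggregate that partition
    return private_sum                    # step 3: loop ends when data is consumed

def parallel_sum(data, workers=4):
    size = (len(data) + workers - 1) // workers
    segments = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(aggregate_segment, segments)
    return sum(partials)                  # merge all private buffers

data = list(range(1, 1001))
assert parallel_sum(data) == 500500
```

Because each thread writes only to its own buffer, no locking is needed until the cheap final merge.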
Columnar Data Organization Delta Store and Merge
This brings us to the final key SAP HANA concept: using the delta store to address potential performance issues in the column store during data insert and merge operations.
Let's first examine the issues faced during an insert operation before moving on to how the delta store and merge help resolve them.
Columnar Data Organization Delta Store and Merge
Insert Let’s first look at what happens when inserting a new record into a table that is organized in a column-based format. Adding a new record in a column store means adding a new entry to every column of the table.
Because every column of the table consists of a dictionary and an attribute vector, adding a new entry to a column comprises the following steps:
1. Look up the new entry in the dictionary, and add a new value if the entry is not found. 2. Add the respective integer code to the attribute vector of the column.
Processing is particularly complicated if the dictionary is sorted. Adding a new entry may require sorting the dictionary again. In addition, the attribute vector must be updated to reflect the new order of the dictionary, after which a recalculation of compression may be required. This can have a significant impact on insert performance. SAP HANA resolves this problem through the delta store.
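The cost can be made concrete with a small sketch: inserting a value into the middle of a sorted dictionary shifts the codes of all larger values, so the entire attribute vector must be rewritten (an illustration, not HANA's internals):

```python
# Sketch of why inserting into a column with a sorted dictionary is costly:
# a new dictionary entry in the middle shifts every larger code, forcing a
# rewrite of the whole attribute vector.

import bisect

def insert_sorted(dictionary, attribute_vector, value):
    pos = bisect.bisect_left(dictionary, value)
    if pos == len(dictionary) or dictionary[pos] != value:
        dictionary.insert(pos, value)                 # extend the sorted dictionary
        # Every existing code >= pos is now off by one: rewrite the vector.
        attribute_vector[:] = [c + 1 if c >= pos else c for c in attribute_vector]
    attribute_vector.append(pos)                      # append the new row's code

dictionary = ["apple", "cherry"]
vector = [0, 1, 0]                    # rows: apple, cherry, apple
insert_sorted(dictionary, vector, "banana")
assert dictionary == ["apple", "banana", "cherry"]
assert vector == [0, 2, 0, 1]         # old "cherry" code shifted from 1 to 2
```

One inserted row touched every existing row of the column; this is the overhead the delta store avoids.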
Columnar Data Organization Delta Store and Merge
Delta Store
Inserting directly into the compressed column can be slow. To resolve this, SAP HANA introduces the concept of the delta store, wherein the storage of a column-store table comprises a main store and a delta store. All write operations (insert, update, and delete) happen only in the delta store, which (as its name suggests) is a differential buffer. The main store is not involved in any write operation.
Like the main store, the delta store is column oriented, meaning that every column of the table has a delta store and uses dictionary encoding. To optimize write performance, the dictionary of the delta store is not sorted: a new entry is simply appended to the end of the dictionary. In addition, no compression technique is applied other than dictionary encoding. The delta store exists only in memory and is not included in savepoints. Only logs are written out to nonvolatile storage to ensure durability of changes made to the delta store. The main store, in contrast, is highly compressed and optimized in terms of memory consumption and read performance.
Although write operations only affect the delta part, read operations must be performed on both the main and delta stores, because the current state of the table is the conjunction of the two. During query execution, a query is logically translated into a query of the main store and a query of the delta store. The results returned from both parts are then combined to build the overall result.
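The main/delta split described above can be sketched as follows; the class is a toy illustration of the contract (writes touch only the delta, reads consult both), not HANA's storage engine:

```python
# Sketch of the main/delta split: writes append to an unsorted delta
# dictionary, reads combine the main and delta stores.

class ColumnStore:
    def __init__(self, main_dictionary, main_vector):
        self.main_dict, self.main_vec = main_dictionary, main_vector  # sorted, compressed
        self.delta_dict, self.delta_vec = [], []                      # unsorted, append-only

    def insert(self, value):
        """Writes touch only the delta store; the main store stays untouched."""
        if value not in self.delta_dict:
            self.delta_dict.append(value)           # simply appended, never re-sorted
        self.delta_vec.append(self.delta_dict.index(value))

    def scan_equals(self, value):
        """Reads must consult both stores and combine the results."""
        rows = [i for i, c in enumerate(self.main_vec)
                if self.main_dict[c] == value]
        offset = len(self.main_vec)
        rows += [offset + i for i, c in enumerate(self.delta_vec)
                 if self.delta_dict[c] == value]
        return rows

col = ColumnStore(["DE", "US"], [0, 1, 0])
col.insert("FR")
col.insert("DE")
assert col.scan_equals("DE") == [0, 2, 4]   # rows 0, 2 from main; row 4 from delta
```

Inserting "FR" here cost one append, where the sorted main dictionary would have required shifting codes and rewriting the attribute vector.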
Columnar Data Organization Delta Store and Merge
Delta Merge
The delta store grows as more changes are made to a column table. A large delta store has a negative impact on overall system efficiency and read performance.
First, the delta store is not compressed, meaning that it consumes more memory. Second, the dictionary in the delta store is not sorted, which makes it easier to add a new entry but costlier to look one up.
To optimize query execution performance of the system and to ensure optimum compression, the size of the delta store needs to be kept as small as possible. To achieve this, SAP HANA uses an online reorganization process that periodically transfers data from the delta store to the main store. This process is called delta merge.
The delta merge process can be costly, because it involves resorting the dictionary and recalculating the compression. SAP HANA uses a dual-buffer mechanism to keep the table available for reads and writes while the merge is running.
Columnar Data Organization Delta Store and Merge
There are three stages of a merge process:
Before the merge operation: All write operations go to storage Delta1, and read operations read from storages Main1 and Delta1.
During the merge operation: While the merge operation is running, all changes go into the second delta storage, Delta2. Read operations read from the original main storage, Main1, and from both delta storages, Delta1 and Delta2. Uncommitted changes in Delta1 are copied to Delta2. The content of Main1 and the committed entries in Delta1 are merged into the new main storage, Main2.
After the merge operation: The Main1 and Delta1 storages are deleted after the merge process is finished. With this double-buffer concept, the table only needs to be locked for a short time: at the beginning, when open transactions are moved to Delta2, and at the end of the process, when the storages are “switched.” As a result, the merge process can be highly parallelized without blocking other read and write operations on the same data.
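The three stages can be sketched with plain lists of rows standing in for the stores; this is a toy model of the double-buffer handover, not the real merge:

```python
# Sketch of the dual-buffer merge: Main1/Delta1 are the active stores; during
# the merge, new writes go to Delta2 while Main1 plus the committed part of
# Delta1 are merged into Main2. Afterward Main2/Delta2 are switched in.

def delta_merge(main1, delta1_committed, delta1_uncommitted):
    # During the merge: open (uncommitted) changes are copied to Delta2 ...
    delta2 = list(delta1_uncommitted)
    # ... and Main1 plus the committed Delta1 entries form the new Main2.
    main2 = main1 + delta1_committed
    # After the merge: Main1 and Delta1 are dropped; Main2/Delta2 take over.
    return main2, delta2

main1 = ["doc1", "doc2"]
delta1_committed = ["doc3"]
delta1_uncommitted = ["doc4"]            # open transaction at merge time
main2, delta2 = delta_merge(main1, delta1_committed, delta1_uncommitted)
assert main2 == ["doc1", "doc2", "doc3"]
assert delta2 == ["doc4"]
# Reads before the merge: Main1 + Delta1; during: Main1 + Delta1 + Delta2;
# after: Main2 + Delta2.
```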
Columnar Data Organization Delta Store and Merge
SAP HANA introduces a new way of thinking about how to construct business applications, such as financial applications, in an enterprise. The columnar in-memory database is now the backbone of enterprise systems, handling both analytical and transactional processing in one system. Techniques such as parallelization, dictionary encoding, data compression, insert only, and the delta store tell a compelling story that lets applications such as SAP Simple Finance become a reality.
Data Redundancy Removal of Redundancy
Redundantly kept data—that is, data derived from other data available elsewhere in the database—is one of the big challenges of software systems. Example: storing the sum of two invoice amounts separately when the same value can easily be derived by calculating it on the fly from the two invoices. In the past, such data redundancy was only introduced to increase performance, because traditional databases could not keep up with user expectations given billions of data entries. This came at the cost of significant effort to keep the redundant data consistent, increased database storage, and more complex systems. Now that SAP HANA improves performance radically, the need for redundancy vanishes. Based on a single source of truth, derived data can be calculated on the fly instead of being physically stored in the database. Hence, in the spirit of simplification, getting rid of redundancy is a key paradigm of SAP Simple Finance.
Data Redundancy Benefits of a Redundancy-Free System
Data redundancy occurs if data is repeated in more than one location—for example, in two separate database tables—or if it can be computed from other data already kept in the database. In general, all data that you can derive from other data sources of a system is redundant.
Redundant data is distinguished by the fact that you need to maintain its consistency. If you modify the data at one location, then you need to apply corresponding modifications at all the other locations in the database to keep the relationship intact and avoid anomalies. Redundant data is high-maintenance data, and it won’t take care of itself.
Software engineers spotted the fundamental problems of data redundancy early on. In 1970, E. F. Codd introduced the fundamental relational model that underlies many of today's database systems.1
To reduce redundancy, normalization techniques such as normal forms are an integral part of the relational model.2 We can distinguish four different kinds of redundancy within a database system, depending on the relationship of redundant data to the original data:
Data Redundancy Benefits of a Redundancy-Free System
Kinds of redundancy: 1. Materialized view 2. Materialized result set 3. Materialized aggregate 4. Duplicated data due to overlap
Benefits of a redundancy-free system: increased throughput, simplicity, a smaller database footprint, consistency maintained by design, and flexibility in real time.
Data Redundancy Simplifying the Core Data Model
As outlined above, enterprise applications in the past needed to store data redundantly to meet performance expectations of their users in view of limited database performance.
Applications that remain bound to traditional disk-based databases still experience these limitations.
SAP Simple Finance is based on SAP HANA, so it makes use of the dramatically improved performance of an in-memory database. At the same time, its data model is a nondisruptive evolution of the SAP ERP Financials data model, removing any redundancy that was historically necessary for performance reasons.
When you look into the data model of SAP ERP Financials, several instances of data redundancy quickly become apparent. The fundamental separation of different components (such as financial accounting, controlling, profitability, and others) into separate table structures is a case of duplicate data.
Data Redundancy Simplifying the Core Data Model
The data model of each of these components in turn contained data redundancy in terms of materialized views and materialized aggregates. To explain the changes on this foundational level (essentially the first steps toward a redundancy-free system, completed by the Universal Journal), we now take a closer look at the data model of Financial Accounting's G/L. The explanations apply similarly to the other components; they, too, have been simplified in the same spirit.
Data Redundancy Simplifying the Core Data Model
Example
We reference the old tables from Financial Accounting for reasons of comparison; with the Universal Journal, a next step merges the data structures of hitherto separate components.
The fundamental and essential data tuples of every financial accounting system—besides master data—are the accounting documents and their line items. The system records each transaction as an accounting document with at least two line items (one each for the debit and credit entries). With millions or billions of accounting documents (headers, primarily stored in table BKPF) and their line items (table BSEG) and slow disk-based performance,
SAP ERP Financials needed materialized views and materialized aggregates to provide sufficiently fast access to line items with specific properties or to aggregate values. For this reason, the core data model of Financial Accounting contained, among others, six materialized views (three for open line items, separated by accounts receivable, accounts payable, and G/L accounts, and three for cleared line items separated in the same manner) and three materialized aggregates for the corresponding totals.
Data Redundancy Simplifying the Core Data Model
Example: In an SAP ERP Financials system, the materialized view of all open accounts receivable line items (table BSID) contains a copy of each line item (with a subset of attributes) from the original table of accounting document line items that fulfills the following condition: the line item is open (that is, has not been cleared) and is part of the accounts receivable subledger. When a transaction clears the item, it must also delete the corresponding tuple from the materialized view of open accounts receivable items and add it to the corresponding materialized view of cleared accounts receivable items.
In SAP Simple Finance, these materializations have been removed to take the first step toward eliminating redundancy. Instead, the corresponding tables have been replaced with compatibility views.
From the accounting documents and line items as the single source of truth, any derived data can be calculated on the fly, most times with higher performance than in a traditional disk-based system. This includes, but is of course not limited to, the views and aggregates that previously existed in materialized versions.
Data Redundancy Simplifying the Core Data Model
The result is a redundancy-free data model of Financial Accounting (before the Universal Journal) that is functionally equivalent to the previous data model.
In the spirit of simplification, it consists only of the essential tables for accounting documents and for accounting document line items that record the business transactions.
In addition, the compatibility views transparently provide access to the same information that was redundantly stored in materialized views and materialized aggregates before.
These redundant tables are in turn obsolete, since SAP HANA calculates the same information on the fly.
Data Redundancy Simplifying the Core Data Model
The compatibility views bear the same name as their historical predecessors to ensure that the changes are nondisruptive and do not require SAP customers to modify their custom programs. Any program—be it part of the SAP standard or a customer modification—that in the past accessed the materialized view of open accounts receivable line items is now seamlessly routed to the corresponding compatibility view.
The compatibility view calculates the result for each query on demand—without compromising performance. In this case, the view selects the open items belonging to the accounts receivable subledger directly from the original table of accounting document line items. Any additional selection conditions—for example, a restriction to a specific customer—are immediately passed through to the query optimizer and integrated into the query execution plan.
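The compatibility-view pattern can be sketched with SQLite standing in for SAP HANA. The table and column names below are simplified illustrations, not the real BKPF/BSEG/BSID structures; the point is only the contract: a view with the old table's name, computed on the fly from the single line-item table:

```python
# Sketch: a "compatibility view" in SQLite. The formerly materialized
# open-items table is replaced by a view over the single line-item table,
# so existing queries against the old name keep working unchanged.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE line_items (
                   doc_id INTEGER, account TEXT, subledger TEXT,
                   cleared INTEGER, amount REAL)""")
con.executemany("INSERT INTO line_items VALUES (?, ?, ?, ?, ?)", [
    (1, "C100", "AR", 0, 500.0),    # open accounts receivable item
    (1, "REV",  "GL", 0, -500.0),   # offsetting G/L entry
    (2, "C200", "AR", 1, 120.0),    # cleared, so absent from the open-items view
])
# The view computes on demand what was previously stored redundantly.
con.execute("""CREATE VIEW open_ar_items AS
               SELECT doc_id, account, amount FROM line_items
               WHERE subledger = 'AR' AND cleared = 0""")
rows = con.execute("SELECT doc_id, account, amount FROM open_ar_items").fetchall()
assert rows == [(1, "C100", 500.0)]
```

An extra predicate such as `WHERE account = 'C100'` on the view is pushed down into the query plan by the engine, mirroring how additional selection conditions pass through to the optimizer.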
Data Redundancy Simplifying the Core Data Model
The same approach applies to other components as well. For example, materialized aggregates on top of controlling documents are no longer necessary either, opening up the possibility of combining different accounting components in the Universal Journal. Cash management is another area with similar changes that remove data redundancy.
The SAP Simple Finance data model is now entirely based on line items, without any prebuilt materialized aggregates or other data redundancy. Not only is the data model simpler, but the program architecture is also simpler: the system “simply” records all business transactions as they happen.
Everything else is calculated on the fly by algorithms on top of the data. Without any negative effect on your existing investments in SAP systems, you immediately benefit from switching to SAP Simple Finance.
Data Redundancy Immediate Benefits of the New Data Model
Removing materialized views and materialized aggregates from the financial accounting data model has an immediate positive impact on the transactional throughput of the system. In the case of SAP Simple Finance, posting an accounting document requires neither inserting redundant duplicates into materialized views nor updating redundant aggregate values.
The corresponding effort and database operations to maintain consistency are no longer necessary.
Therefore, the number of tuples inserted or modified during database operations was cut by more than half according to experimental measurements. These measurements are based on real-world data of a large SAP customer: five hundred accounting documents were posted, each with six line items. Instead of the 26,000 tuples affected by UPDATE, INSERT, and DELETE operations in a traditional SAP ERP Financials system already running on SAP HANA,
SAP Simple Finance only needed to insert 11,000 database tuples into the tables of the financials component for the entire test—a savings of more than a factor of two.
Fewer tuples translate directly into less end-to-end transaction time spent posting a document. Instead of over 200 ms per document in SAP ERP Financials on SAP HANA, posting in SAP Simple Finance needed only about 100 ms from end to end, down by a factor of two.
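A back-of-the-envelope check of the quoted figures, using only the numbers given above:

```python
# Illustrative check of the measurement figures quoted in the text.
documents = 500
line_items = documents * 6                 # each document has six line items
tuples_before = 26_000                     # tuples touched with materialized views
tuples_after = 11_000                      # insert-only tuples in SAP Simple Finance

assert line_items == 3_000                 # 3,000 line items posted in the test
assert round(tuples_before / tuples_after, 2) == 2.36   # more than a factor of two
```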