ADM545 Database Migration to Sybase ASE (for SAP systems) SAP NetWeaver
Trademarks Microsoft, Windows, Excel, Outlook, PowerPoint, Silverlight, and Visual Studio are registered trademarks of Microsoft Corporation. IBM, DB2, DB2 Universal Database, System i, System i5, System p, System p5, System x, System z, System z10, z10, z/VM, z/OS, OS/390, zEnterprise, PowerVM, Power Architecture, Power Systems, POWER7, POWER6+, POWER6, POWER, PowerHA, pureScale, PowerPC, BladeCenter, System Storage, Storwize, XIV, GPFS, HACMP, RETAIN, DB2 Connect, RACF, Redbooks, OS/2, AIX, Intelligent Miner, WebSphere, Tivoli, Informix, and Smarter Planet are trademarks or registered trademarks of IBM Corporation. Linux is the registered trademark of Linus Torvalds in the United States and other countries. Adobe, the Adobe logo, Acrobat, PostScript, and Reader are trademarks or registered trademarks of Adobe Systems Incorporated in the United States and other countries. Oracle and Java are registered trademarks of Oracle and its affiliates. UNIX, X/Open, OSF/1, and Motif are registered trademarks of the Open Group. Citrix, ICA, Program Neighborhood, MetaFrame, WinFrame, VideoFrame, and MultiWin are trademarks or registered trademarks of Citrix Systems Inc. HTML, XML, XHTML, and W3C are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology. Apple, App Store, iBooks, iPad, iPhone, iPhoto, iPod, iTunes, Multi-Touch, Objective-C, Retina, Safari, Siri, and Xcode are trademarks or registered trademarks of Apple Inc. IOS is a registered trademark of Cisco Systems Inc. RIM, BlackBerry, BBM, BlackBerry Curve, BlackBerry Bold, BlackBerry Pearl, BlackBerry Torch, BlackBerry Storm, BlackBerry Storm2, BlackBerry PlayBook, and BlackBerry App World are trademarks or registered trademarks of Research in Motion Limited. Google App Engine, Google Apps, Google Checkout, Google Data API, Google Maps, Google Mobile Ads, Google Mobile Updater, Google Mobile, Google Store, Google Sync, Google Updater, Google Voice, Google Mail, Gmail, YouTube, Dalvik and Android are trademarks or registered trademarks of Google Inc. INTERMEC is a registered trademark of Intermec Technologies Corporation. Wi-Fi is a registered trademark of Wi-Fi Alliance. Bluetooth is a registered trademark of Bluetooth SIG Inc. Motorola is a registered trademark of Motorola Trademark Holdings LLC. Computop is a registered trademark of Computop Wirtschaftsinformatik GmbH.
SAP, R/3, SAP NetWeaver, Duet, PartnerEdge, ByDesign, SAP BusinessObjects Explorer, StreamWork, SAP HANA, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and other countries. Business Objects and the Business Objects logo, BusinessObjects, Crystal Reports, Crystal Decisions, Web Intelligence, Xcelsius, and other Business Objects products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of Business Objects Software Ltd. Business Objects is an SAP company. Sybase and Adaptive Server, iAnywhere, Sybase 365, SQL Anywhere, and other Sybase products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of Sybase Inc. Sybase is an SAP company. Crossgate, m@gic EDDY, B2B 360°, and B2B 360° Services are registered trademarks of Crossgate AG in Germany and other countries. Crossgate is an SAP company. All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. National product specifications may vary.
Disclaimer These materials are subject to change without notice. These materials are provided by SAP AG and its affiliated companies (“SAP Group”) for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.
About This Handbook This handbook is intended to complement the instructor-led presentation of this course, and serve as a source of reference. It is not suitable for self-study.
Typographic Conventions
American English is the standard used in this handbook. The following typographic conventions are also used.

Type Style: Description
Example text: Words or characters that appear on the screen. These include field names, screen titles, pushbuttons as well as menu names, paths, and options. Also used for cross-references to other documentation, both internal and external.
Example text: Emphasized words or phrases in body text, titles of graphics, and tables.
EXAMPLE TEXT: Names of elements in the system. These include report names, program names, transaction codes, table names, and individual key words of a programming language, when surrounded by body text, for example SELECT and INCLUDE.
Example text: Screen output. This includes file and directory names and their paths, messages, names of variables and parameters, and passages of the source text of a program.
Example text: Exact user entry. These are words and characters that you enter in the system exactly as they appear in the documentation.
<Example text>: Variable user entry. Pointed brackets indicate that you replace these words and characters with appropriate entries.
Contents
Course Overview (Course Goals, Course Objectives)
Unit 1: Introduction
Unit 2: The Migration Project
Unit 3: System Copy Methods
Unit 4: SAP Migration Tools
Unit 5: R3SETUP/SAPINST
Unit 6: Technical Background Knowledge (Data Classes (TABARTs), Miscellaneous Background Information)
Performing a JAVA System Migration
Unit 10: Troubleshooting
Unit 11: Special Projects
This course offers detailed procedural and technical knowledge on homogeneous and heterogeneous system copies, which are performed by using R3LOAD/JLOAD on SAP NetWeaver systems, with a focus on OS/DB migrations. The training content is mostly release independent and is based on information up to SAP NetWeaver 7.30. Previous releases, like R/3 4.x, R/3 Enterprise 4.7 (Web AS 6.20), ERP 2004 / NetWeaver '04 (Web AS 6.40), and ECC 6.0 / NetWeaver '04S / NetWeaver 7.0x, are covered as well. Course attendance is a prerequisite for the OS/DB Migration certification test.
Target Audience
This course is intended for the following audiences:
• SAP Technology Consultants

Course Prerequisites
Required Knowledge
• Basic understanding of the SAP System setup

Recommended Knowledge
• Basic knowledge of system administration of at least one operating system and one database system
• Basic knowledge of SAP Basis administration
• Experience in SAP system installation

Course Goals
This course will prepare you to:
• Provide organizational consulting for, and practically implement, the migration of an operating system and/or database for SAP Systems which are based on ABAP and/or JAVA

Course Objectives
After completing this course, you will be able to:
• Understand the SAP OS/DB Migration Strategy
• Understand the SAP OS/DB Migration Check
• Implement OS/DB migrations using SAP migration tools
Unit 1: Introduction
Unit Overview
This unit clarifies what a homogeneous or heterogeneous system copy is, which tools are available, what the GoingLive OS/DB Migration Check Service is, and where to get information about the migration procedure.
Unit Objectives
After completing this unit, you will be able to:
• Distinguish between an SAP homogeneous system copy and an SAP OS/DB Migration
• Estimate the problems involved with a system copy or migration
• Understand the functions of the SAP OS/DB Migration Check
Unit Contents
Lesson: Introduction
Exercise 1: Introduction
Lesson: Introduction
Lesson Overview
Lesson Objectives
After completing this lesson, you will be able to:
• Distinguish between an SAP homogeneous system copy and an SAP OS/DB Migration
• Estimate the problems involved with a system copy or migration
• Understand the functions of the SAP OS/DB Migration Check
Business Example
You want to understand which system copy and migration tools are provided by SAP and what the difference is between a homogeneous and a heterogeneous system copy. Furthermore, you are interested in the scope of the OS/DB Migration Check service.
Figure 1: Definition of Terms
Please note: Improved functionality was often introduced with new SAP Kernel versions. If the new SAP Kernel was backward compatible with older SAP releases, the new functionality was available for the older releases as well. Example: an SAP Web AS 6.20 system running on SAP Kernel 6.40 can make use of R3LOAD 6.40 features. Throughout the SAP documentation and SAP Notes, the terms NetWeaver '04S and NetWeaver 7.00 are used interchangeably; they mean the same thing.
The initial SAP service offering for OS/DB migrations was originally called "SAP OS/DB Migration Service", but was renamed to "SAP OS/DB Migration Check Service". Today, the term "SAP OS/DB Migration Service" is used for SAP fixed-price projects, in which SAP consultants migrate customer systems to a different database and/or operating system, mostly working remotely.
Figure 2: Copying a SAP System
A client transport is not a true SAP System copy or migration. The copy function cannot transport all of the system settings and data to the target system, nor is it intended to do so. This applies particularly to production systems. Of course, client transports have no meaning for JAVA-based SAP Systems. For further reference, see SAP Note 96866 "DB copy by client transport not supported".

Databases can be duplicated by restoring a backup. In most cases, this is the fastest and easiest way to perform a homogeneous system copy. Some databases even allow a database backup to be restored on a different operating system platform (OS migration).

Note: 3rd party database tools and methods suitable for switching the operating system (OS migration) or even the database (DB migration) are not supported by SAP, unless explicitly mentioned in SAP documents or SAP Notes. Nevertheless, the usage of unsupported tools or methods is not forbidden in general (the tool and method support must be provided by the 3rd party organization in such a case). SAP cannot be held responsible for erroneous results. After the system copy, the migrated SAP system is still under maintenance, but efforts to fix problems caused by the unsupported tool or method can and will be charged to the customer!

SAP System copy tools can be used for system copies or migrations on any SAP supported operating system and database combination as of R/3 Release 3.0D. Since NetWeaver '04 (6.40), JAVA-based systems can also be copied or migrated to any SAP supported operating system and database combination with the SAP System copy tools.
The SAP System copy tools are used for homogeneous and heterogeneous system copies. SAP System copy tools used for heterogeneous system copies are called SAP Migration Tools. In the remainder of this document, the term SAP Migration Tools will be used.
Figure 4: SAP System Copy Tools / Migration Tools (2)
BW functionality is part of the ABAP Web AS 6.40 standard. Since then, every SAP System can contain non-standard objects! Special pre- and post-migration activities are required for them. The DDL statements generated by SMIGR_CREATE_DDL are used to tell R3LOAD how to create non-standard objects in the target database. The RS_BW_POST_MIGRATION program adapts the non-standard objects to the requirements of the target system.
The reports SMIGR_CREATE_DDL and RS_BW_POST_MIGRATION are required since BW 3.0 for all systems based on BW functionality (e.g. SCM/APO). They are also mandatory for NetWeaver '04 (Web AS 6.40) and later. JLOAD is available since NetWeaver '04 (Web AS 6.40). Earlier versions of the JAVA Web AS (e.g. Web AS 6.20) did not store data in a database. JSIZECHECK is available since NetWeaver '04S / 7.00. JLOAD and JSIZECHECK are JAVA programs which are called by SAPINST.
Figure 5: Support Tools for ABAP System Copies (1)
The PACKAGE SPLITTER is available in a JAVA and in a Perl implementation. R3SETUP uses the Perl PACKAGE SPLITTER. SAPINST provides the Perl and JAVA PACKAGE SPLITTER, or the JAVA version only (release dependent). Two TABLE SPLITTERs exist: one is database independent and is called R3TA; the other is a PL/SQL script implementation and is available for Oracle only. Table splitting is supported since R3LOAD 6.40 in combination with MIGMON. MIGCHECK is implemented in JAVA.
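The idea behind package splitting can be illustrated with a small sketch. The following Python snippet is an illustration only, not the algorithm of the actual PACKAGE SPLITTER; the table names, the sizes, and the simple greedy strategy are assumptions.

    # Illustration of the package splitting idea: distribute tables into several
    # packages of roughly equal size, so that parallel R3LOAD jobs finish at about
    # the same time. Not the real PACKAGE SPLITTER algorithm; names/sizes are made up.
    def split_into_packages(table_sizes_gb, package_count):
        packages = [{"tables": [], "size_gb": 0.0} for _ in range(package_count)]
        # greedy: always put the next-largest table into the currently smallest package
        for name, size in sorted(table_sizes_gb.items(), key=lambda kv: kv[1], reverse=True):
            smallest = min(packages, key=lambda p: p["size_gb"])
            smallest["tables"].append(name)
            smallest["size_gb"] += size
        return packages

    example = {"TAB_A": 34.0, "TAB_B": 20.0, "TAB_C": 18.0, "TAB_D": 5.0, "TAB_E": 2.0}
    for number, package in enumerate(split_into_packages(example, 3), start=1):
        print(f"package {number}: {package['tables']} ~ {package['size_gb']:.1f} GB")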
Figure 6: Support Tools for ABAP System Copies (2)
MIGMON and MIGTIME are implemented in JAVA. The JAVA-based tools are release independent and can be utilized on any SAP platform which supports the required JAVA version. The Distribution Monitor can be used if the CPU load caused by R3LOAD should be distributed over several application servers. This can improve the database server performance significantly, and it is often used in Unicode conversion scenarios. Normally, the Distribution Monitor only makes sense if more than one application server is planned to be used. It was developed to support system copies based on Web AS 6.x and later.
Figure 7: Support Tools for JAVA System Copies
JPKGCTL (also called JSPLITTER) was developed to reduce the export/import runtime for large JAVA systems. A single JLOAD process exporting the whole database (as implemented in previous SAPINST versions) was often too slow once the database exceeded a certain size, so it became necessary to provide package and table splitting for JLOAD, as for R3LOAD.
JMIGMON and JMIGTIME offer functionality similar to MIGMON and MIGTIME.
Figure 8: Possible Negative Consequences of a System Copy
The goal of this training is to prevent problems, such as those mentioned above, by providing in-depth knowledge about each SAP System copy step and the tools which are involved. Following the SAP guidelines ensures a smooth migration project.
Figure 9: Definition: SAP Homogeneous System Copy
For the target system, the same operating system can also mean an SAP certified successor like Windows 2003 / Windows 2008. Depending on the method used for executing the homogeneous system copy, it might be necessary to upgrade the database or the operating system of the source system first. On older SAP System releases, even an SAP release upgrade might be necessary. This can happen if the target platform requires a database or operating system version that was not backward released for the SAP System version that is to be copied, etc.
New hardware on the target system might be supported by the latest operating system and database version only. With or without assistance from a consultant, customers can execute a homogeneous system copy by themselves. If you plan to use a new hardware type or make major expansions to the hardware (such as changing the disk configuration), we recommend involving the hardware partner as well.
Figure 10: Reasons for Homogeneous System Copies
The term MCOD is used for SAP installations where [M]ultiple [C]omponents are stored in [O]ne [D]atabase. If a system was installed with an SAP reserved SAPSID, a homogeneous system copy can be used to change the SAPSID. To see if a change is required, check with SAP. All the reasons mentioned above are also applicable to heterogeneous system copies.
Figure 11: Definition: SAP Heterogeneous System Copy
An OS/DB migration is a complex process. Consultants are strongly advised to do all they can to minimize the risk with regard to the availability and performance of a production SAP System. Depending on the method used for executing the heterogeneous system copy, it might be necessary to upgrade the database or the operating system of the source system first. On older SAP System releases, even an SAP release upgrade might be necessary. This can happen if the target platform requires a database or operating system version that was not backward released for the SAP System version that is to be migrated. New hardware on the target system might be supported by the latest operating system and database version only.

The decisive factors for performance in an SAP System are the parameter settings in the database, the operating system, and the SAP System itself (which depend on the operating system and the database system). During an OS/DB migration, the old settings cannot simply be carried over unchanged. Determining the new parameter values requires an iterative process, during which the availability of the migrated system is restricted.
Figure 12: Common Heterogeneous System Copy Reasons
The points mentioned above are the primary reasons for changing an operating system or database; in addition, the reasons for homogeneous system copies also apply. Conversely, these reasons partially apply to homogeneous system copies as well.
Figure 13: Frequently used SAP Terms
The above table shows which term is used for which kind of SAP System copy. For example, when changing the operating system, this is called an OS migration and is a heterogeneous system copy. Generally, the term heterogeneous system copy implies that it is some kind of OS and/or DB migration. The term "SAP System copy" is used in a less specific way.
Figure 14: Homogeneous or Heterogeneous System Copy?
The table above is only valid when using R3LOAD or JLOAD. Homogeneous system copies using backup/restore will require the same database version on source and target system, or the database must be upgraded after the system copy. Note: If the hardware architecture changes in a system copy, but the operating system type stays the same, it is a homogeneous system copy. In other words, if the operating system is called the same on source and target, it is a homogeneous system copy. This does not automatically imply the possibility of a backup/restore to copy the database (e.g. a system copy from Solaris SPARC to Solaris Intel). It only points out that SAP treats it as a homogeneous system copy and no "SAP OS/DB Migration Check" is required. SAP assumes the operating system behavior will be the same regardless of the underlying platform. Please check the database documentation for details on available system copy procedures. Further examples are: HP-UX PA-RISC to HP-UX IA64, LINUX X86 to LINUX POWER, etc.
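As a compact paraphrase of the rule above, the decision could be sketched as follows; this is a simplification that only compares the OS and database names and ignores the hardware architecture.

    # Simplified paraphrase of the rule above: only the OS and database *names* are
    # compared; the hardware architecture is ignored. Illustration only.
    def copy_type(source_os, target_os, source_db, target_db):
        if source_os == target_os and source_db == target_db:
            return "homogeneous system copy"
        return "heterogeneous system copy (OS and/or DB migration)"

    print(copy_type("Solaris", "Solaris", "Oracle", "Oracle"))  # homogeneous, even SPARC -> x86
    print(copy_type("AIX", "Windows", "Oracle", "Oracle"))      # heterogeneous (OS migration)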
The cost for the SAP OS/DB Migration Check is specific to the customer location and may differ from country to country. The SAP OS/DB Migration Check will be delivered as a remote service. In the “Remote Project Audit”, SAP checks the OS/DB migration project planning. The required tools for homogeneous or heterogeneous system copies (installation software) are provided by SAP to customers free of charge. The software can be downloaded from the SAP Service Marketplace.
Complete information about OS/DB migrations is available in the SAP Service Marketplace and the SAP Developer Network. FAQs = Frequently Asked Questions The manuals for homogeneous and heterogeneous system copies can be downloaded from the SAP Service Marketplace. SAP Notes are available on homogeneous and heterogeneous system copies. Check the homogeneous / heterogeneous system copy manuals for the respective SAP Note numbers.
Exercise 1: Introduction
Exercise Objectives
After completing this exercise, you will be able to:
• Differentiate between homogeneous and heterogeneous system copies and know the procedural consequences for a migration project

Business Example
In customer projects, you must know whether a system move or a database change is a homogeneous or heterogeneous system copy, and in which case it is necessary to order an SAP OS/DB Migration Check Service.
Task 1: A customer plans to invest in new, more powerful hardware for an ABAP-based SAP production system (no JAVA Web AS installed). As the operating system and database versions are not up-to-date, the customer also wants to change to the latest software versions in a single step while doing the system move.
Current system configuration: Oracle 10.2, AIX 6.1
Planned system configuration: Oracle 11.2, AIX 7.1
1. Is the planned system move a homogeneous system copy, a DB migration, or an OS migration? Describe your solution!
2. If the SAP System copy tool R3LOAD is used, will it be necessary to perform an operating system or database upgrade after the move? Describe your solution!
Task 2: An SAP implementation project must change the database system before going into production, because of strategic customer decisions. The customer system configuration was set up as a standard three-system landscape (development, quality assurance, production). Each system is configured as ABAP Web AS with JAVA Add-In.
1. Is it necessary to order an SAP OS/DB Migration Check for the planned database change?
2. According to the SAP System copy rules, who must perform the system copies?
Solution 1: Introduction
Task 1: A customer plans to invest in new, more powerful hardware for an ABAP-based SAP production system (no JAVA Web AS installed). As the operating system and database versions are not up-to-date, the customer also wants to change to the latest software versions in a single step while doing the system move.
Current system configuration: Oracle 10.2, AIX 6.1
Planned system configuration: Oracle 11.2, AIX 7.1
1. Is the planned system move a homogeneous system copy, a DB migration, or an OS migration? Describe your solution!
a) The system move will be a homogeneous system copy. Neither the database type nor the operating system type will be changed. During a system copy, an upgrade to a new database or operating system software version is not a problem, as long as the operating system and database combinations are supported by the respective SAP System release and SAP kernel version.
2. If the SAP System copy tool R3LOAD is used, will it be necessary to perform an operating system or database upgrade after the move? Describe your solution!
a) Provided that the installation software is able to install on the target operating system version and also supports the installation of the target database release directly, no additional OS/DB software upgrade will be necessary after the R3LOAD import. In case the new target database release is not supported by the installation software, a database upgrade will have to be done after the system copy.
Task 2: An SAP implementation project must change the database system before going into production, because of strategic customer decisions. The customer system configuration was set up as a standard three-system landscape (development, quality assurance, production). Each system is configured as ABAP Web AS with JAVA Add-In.
1. Is it necessary to order an SAP OS/DB Migration Check for the planned database change?
a) The system landscape contains pre-production systems only. In this case, no OS/DB Migration Check service is necessary, as it is intended for productive systems only.
2. According to the SAP System copy rules, who must perform the system copies?
a) The change of a database involves a heterogeneous system copy, which must be done by someone who is certified for OS/DB migrations. The fact that the systems are not yet productive does not matter.
Lesson Summary
You should now be able to:
• Distinguish between an SAP homogeneous system copy and an SAP OS/DB Migration
• Estimate the problems involved with a system copy or migration
• Understand the functions of the SAP OS/DB Migration Check

Unit Summary
You should now be able to:
• Distinguish between an SAP homogeneous system copy and an SAP OS/DB Migration
• Estimate the problems involved with a system copy or migration
• Understand the functions of the SAP OS/DB Migration Check
Unit 2: The Migration Project
Unit Objectives
After completing this unit, you will be able to:
• Describe the scope of services performed by the SAP OS/DB Migration Check
• Estimate the effort involved in a migration
• Plan a migration project
Unit Contents
Lesson: The Migration Project
Exercise 2: The Migration Project
Lesson: The Migration Project
Lesson Overview
Contents
• Project Schedule of an OS/DB Migration
• Drawing Up a Project Schedule for the SAP OS/DB Migration Check

Lesson Objectives
After completing this lesson, you will be able to:
• Describe the scope of services performed by the SAP OS/DB Migration Check
• Estimate the effort involved in a migration
• Plan a migration project
Business Example
You want to set up an OS/DB migration project. You need to know which steps are required and what a reasonable timeline to finish the tasks could be.
Figure 18: Project Schedule of an OS/DB Migration (1)
Migration requests can be directed to the local SAP Support Organization or to the local customer SAP contact (e.g. the Customer Interaction Center).
An introductory phase applies to new SAP products only. If mentioned in a system copy SAP Note, customers must register for the introductory phase before starting the OS/DB migration. In such a case, it was decided that this particular product can only be migrated under SAP's control (providing direct support from SAP development in case of problems). Usually the introductory phase is limited to a few months only. Customer projects with required SAP involvement can be, for example, "Pilot Projects" or a "Minimized Downtime Service" (MDS) for very large databases. The standard OS/DB migration procedure also applies to heterogeneous system copies of ABAP Systems in "Introductory Phase Projects" or "Pilot Projects". The project type specific activities can be seen as something over and above the standard migration procedure.
Figure 19: Project Schedule of an OS/DB Migration (2)
Prepare for the “SAP OS/DB Migration Check Analysis Session” as soon as possible. It runs on the productive SAP System (the source system) and must be performed before the final migration. Migration test-runs are iterative processes that are used to find the optimal configuration for the target system. In some cases, one test-run suffices, but several repeated runs are required in other cases. The same project procedure applies to both the operating system migration and the database migration. Test and final migrations are mandatory for productive SAP Systems only. Most other SAP Systems like development, test or quality assurance are less critical. If the first test-run for those systems shows positive results, an additional
migration-run (final migration) is not necessary. Nevertheless, the schedule defined in the “SAP OS/DB Migration Check Project Audit questionnaire” must reflect test-runs and final migrations for all SAP Systems of the customer landscape. The “SAP OS/DB Migration Check Analysis Session” will be performed on the production migration source system and the “SAP OS/DB Migration Check Verification Session” will run on the migrated production system after the final migration.
Figure 20: Time Schedule for Productive SAP Systems
You should begin planning a migration early. If you procure new hardware, there may be long delivery times. The time necessary for serious testing varies from system to system; allow at least two weeks! SAP recommends waiting 6 weeks before performing an SAP release upgrade on a migrated productive system. First get the system stable and then do the upgrade! SAP will schedule the "SAP OS/DB Migration Check Analysis Session" only if the "Remote Project Audit Session" was completed successfully.
The above requirements refer to the technical implementation of the migration. Application-specific tests require knowledge of the applications. ABAP Dictionary knowledge is required for system copies based on R3LOAD: understand the consequences of objects missing from the database and/or the SAP ABAP Dictionary. One method to verify that all tables in the R3LOAD structure files can be exported without problems is to compare the table names from the structure files against those in the database catalog (a minimal sketch of such a check follows the SAP Notes below). The easier way is a test export. Useful SAP Notes are:
• 9385 What to do with QCM tables (conversion tables)
• 33814 Warnings of inconsistencies between database & R/3 DDIC (DB02)
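A minimal sketch of such a comparison is shown below. It assumes that table names appear in the *.STR files on lines starting with "tab:" and that the list of catalog table names has already been extracted into a plain text file; both the file pattern and the catalog file name are assumptions for illustration only.

    # Minimal sketch: compare table names from R3LOAD *.STR files with a list of
    # table names extracted from the database catalog. The 'tab:' line format and
    # the plain-text catalog dump are simplifying assumptions.
    import glob

    def tables_from_str_files(pattern="*.STR"):
        tables = set()
        for path in glob.glob(pattern):
            with open(path, errors="replace") as str_file:
                for line in str_file:
                    if line.startswith("tab:"):
                        tables.add(line.split(":", 1)[1].strip())
        return tables

    def tables_from_catalog(path="db_catalog_tables.txt"):
        with open(path) as catalog_file:
            return {line.strip() for line in catalog_file if line.strip()}

    str_tables = tables_from_str_files()
    db_tables = tables_from_catalog()
    print("In *.STR but not in the database (export errors expected):", sorted(str_tables - db_tables))
    print("In the database but not in *.STR (will not be exported):", sorted(db_tables - str_tables))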
Figure 22: Contractual Arrangements
Database or operating system specific areas in the SAP Service Marketplace may not be visible to the customer unless the contractual agreement regarding the new configuration is finalized with SAP. The “SAP OS/DB Migration Check” is mandatory for each productive system, but not for development, quality assurance, or test systems.
A productive system can be a stand-alone ABAP system, but it can also be an ABAP Web AS with a JAVA Add-In, or an ABAP Web AS with a JAVA Web AS, each using its own database. The services check the parameters for ABAP and JAVA-based systems. A heterogeneous system copy of a stand-alone JAVA system means that no ABAP system is copied in the migration project.
Figure 23: Hardware Procurement
For safety reasons, an OS/DB migration of productive SAP Systems must always be performed into a separate system. This way, should serious problems occur, you can always switch back to the old system. Retaining the old system also simplifies error analysis. When you change the database, consider the new disk layout. Each database has its own specific hardware requirements. From a performance point of view, it might not be sufficient to provide a duplicate of the current system.
Each productive system must be migrated twice (test and final migration)! Development, test, and quality assurance systems are less critical and can often be migrated in a single step. In many cases, the migration of a quality assurance system is not necessary, because it can be copied from the migrated production system.
Figure 25: SAP OS/DB Migration Check Project Audit
The “SAP OS/DB Migration Check Project Audit Questionnaire” will automatically be sent from SAP to the customer, as soon as the “SAP OS/DB Migration Check” was requested. The migration project time schedule should be created in consultation with the migration partner.
For safety reasons, SAP cannot approve any migration of a production SAP System in which the source system is deleted after the data export in order to set up the target database. Make sure to include the dates for test and final migration steps of every SAP System, not only for productive systems. The migration project schedule must reflect correct estimates of the complexity of the conversion, its time schedule, and planned effort. SAP checks for the following:
• Is the migration partner technology consultant SAP-certified for migrations?
• Does the migration project schedule meet the migration requirements?
• Technical feasibility: Are hardware, operating system, SAP System, and database versions compatible with the migration tools, and is this combination supported for the target system?
The migration of an SAP System is a complex undertaking that can result in unexpected problems. For this reason, it is essential that SAP has remote access to the migrated system. Remote access is also a prerequisite for the “SAP OS/DB Migration Check”.
Figure 26: SAP Migration Tools
The migration tools must fit the SAP release and kernel in use. Only for SAP installations that are running old database or operating system versions which are no longer supported by the current installation software (4.6D and below) may it be necessary to order the Migration CD set. Most questions regarding tool versions are answered in the SAP System copy notes and manuals. Also check the "Product Availability Matrix" (PAM) in the SAP Service Marketplace. Please open a call at the SAP Service Marketplace if in doubt about which tools to use in certain software combinations.
In some cases it is advisable to upgrade the operating system, database, or SAP release first, before performing the migration. In rare cases it can even be necessary to use intermediate systems.
Figure 27: SAP OS/DB Migration Check Analysis
The “SAP OS/DB Migration Check Analysis Session” is focused on the special aspects involved in the platform or database change. It is performed on the production SAP System with regard to the target migration system environment. The results of the “SAP OS/DB Migration Check” are recorded in detail and provided to the customer through the SAP Service Marketplace. They also include recommendations for the migration target system. ABAP and JAVA-based SAP Systems components will be checked.
Figure 28: Required Source System Information (1)
It must be carefully checked that all software components can be migrated – in particular JAVA-based components! The exact version information of each software component is necessary to be able to download/order and use the right installation software. It could be the case that a certain Support Package Stack must be installed before an OS/DB migration can take place (e.g. certain target database features can be utilized only if the Support Packages are current). Updating Support Packages can be a serious problem in some customer environments, because of modifications, system interdependencies, or fixed update schedules. The current system landscape must be known to have the big picture. There may be OS/DB related dependencies between certain systems which must be analyzed first. The number of productive systems indicates the number of test and final migrations. Which systems should be migrated in which order? What is the customer time schedule (deadlines)? When minimizing the downtime, the amount of necessary tuning effort increases and much more time must be spent on it. In case of a hosting environment, will the consultant have access to the source system (and which limitations will apply)?
Figure 29: Required Source System Information (2)
The number of CPUs and information about the I/O subsystem can help in determining the best number of export processes. The sizes of the source databases indicate how long the migration will take. Besides the database size itself, the size of the largest tables will influence the export significantly. For the first test migration, 10% - 15% of the source database size should be available as free space in the export file system. If large tables are stored in separate locations (e.g. tablespaces), should this also be retained in the target database? On some databases this can increase performance or ease database administration. Is it an MDMP or a UNICODE system? In case of AS/400 R/3 4.6C and below: is it an EBCDIC or ASCII based system?
Case 1: A table exists in the database but not in the ABAP Dictionary - the table will not be exported.
Case 2: A table exists in the ABAP Dictionary but not in the database - export errors are to be expected.
How should external files (spool files, archives, logs, transport system files, interfaces, etc.) be handled? Which files must be copied to the target system? The migration support tools like MIGMON and the PACKAGE SPLITTER used by SAPINST will need JAVA. The old Perl-based PACKAGE SPLITTER of R3SETUP needs Perl version 5. Because of strict software policies, customers might not allow the installation of additional software on productive systems. If source and target system are not in the same location - which media will be available to transport the dump files?
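Building on the rules of thumb above, a small planning sketch could look as follows. The 10% - 15% factor is taken from the text; the 10 GB threshold for split candidates and the table names/sizes are assumptions for illustration only.

    # Rough planning aid: export dump space estimate (10%-15% of used DB size, as
    # stated above) and a list of large tables that are candidates for table
    # splitting / dedicated R3LOAD processes (the threshold is an assumption).
    def export_space_estimate_gb(db_size_gb):
        return round(db_size_gb * 0.10, 1), round(db_size_gb * 0.15, 1)

    def split_candidates(table_sizes_gb, threshold_gb=10.0):
        large = [(name, size) for name, size in table_sizes_gb.items() if size >= threshold_gb]
        return sorted(large, key=lambda item: item[1], reverse=True)

    low, high = export_space_estimate_gb(500)  # e.g. 500 GB used space
    print(f"plan for {low} - {high} GB of free export file system space")
    print("split candidates:", split_candidates({"TAB_A": 34, "TAB_B": 20, "TAB_C": 18, "TAB_D": 4}))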
Figure 30: Required Target System Information
Figure 31: Migration Test Run
Generating the target database:
• Make a generous sizing of the target database, or set it to an auto-extensible mode (if possible); this will prevent load errors caused by insufficient space. An analysis of disk usage cannot be performed until after the data has been loaded.
Items to check and adapt on the migrated system include:
• RFC connections
• External interfaces
• Transport environment
• Backup
• Printer
• Archiving
• etc.
Figure 32: Final Migration
A cut-over plan should be created, including an activity checklist and a time schedule. Include plenty of reserve time. The migration of a production system is often performed under intense time pressure. Checklists will help you to keep track of what is to be done, and when to do it. Not all the tests and checks which were done during previous test runs necessarily have to be done again in the final migration. In most cases it makes sense to have one cut-over plan for the technical migration, and a separate one for application related tasks.
The “SAP OS/DB Migration Check Verification Session” should be scheduled 4 weeks after the final migration of the productive SAP System. This is because several weeks are required to collect enough data for a performance analysis. The “old” production system should still be available. ABAP and JAVA-based SAP Systems will be checked.
Exercise 2: The Migration Project
Exercise Objectives
After completing this exercise, you will be able to:
• Create a migration project plan and a time schedule that is compliant with SAP requirements
Business Example To plan a system copy project, you must know about the proper timing and the required test phases. The database size will influence the expected downtime. You should know about the tasks of each OS/DB Migration Check service component and how many services are required in a specific customer project.
Task 1: The SAP heterogeneous system copy procedure for productive systems requires a test phase between test and final migration, and also recommends not performing an upgrade to the next SAP System release until at least 6 weeks after the final migration.
1. What is the minimum duration recommended for the test phase?
2. What should be done in the test phase, and who should perform it?
3. What is the reason for the recommended time between the final migration and the next upgrade?
Task 2: A customer SAP System landscape is made up of several systems. All systems have to be migrated to a different database. System set 1 (ERP): Development, Quality Assurance, Production. System set 2 (BW): Development, Production.
1. How many SAP OS/DB Migration Checks must be ordered?
2. How many system copies are involved? (More than one answer can be right.)
Task 3: The following facts are known from inspecting the source system of a migration (ABAP Web AS with JAVA Add-In). Indicate for each item what the impact on the R3LOAD/JLOAD migration will be.
1. The total size of the database is 500 GB (used space).
Solution 2: The Migration Project
Task 1: The SAP heterogeneous system copy procedure for productive systems requires a test phase between test and final migration, and also recommends not performing an upgrade to the next SAP System release until at least 6 weeks after the final migration.
1. What is the minimum duration recommended for the test phase?
a) Two weeks is the minimum amount of time to be considered between the test and final migration of a productive system.
2. What should be done in the test phase, and who should perform it?
a) The test phase should be used to check the migrated system with regard to the most important customer tasks and business processes. End users who know their daily business very well should do the major part of the testing. Two weeks might be sufficient even in complex environments.
3. What is the reason for the recommended time between the final migration and the next upgrade?
a) Every time a system has been copied to a different operating system and/or database, it takes some time to become familiar with it and to establish a smooth-running production environment. If an upgrade immediately follows the migration, the direct cause of any problems may be hard to identify. First get the system stable and then do the upgrade!
Task 2: A customer SAP System landscape is made up of several systems. All systems have to be migrated to a different database. System set 1 (ERP): Development, Quality Assurance, Production. System set 2 (BW): Development, Production.
1. How many SAP OS/DB Migration Checks must be ordered?
a) System sets 1 and 2 each contain a productive system. Because of this, two separate SAP OS/DB Migration Checks must be ordered.
2. How many system copies are involved? (More than one answer can be right.)
a) System set 1: 1 x Development, 1 x Quality Assurance, 2 x Production.
   Alternative: 1 x Development, 2 x Production, plus a homogeneous system copy from Production to Quality Assurance.
   System set 2: 1 x Development, 2 x Production.
Task 3: The following facts are known from inspecting the source system of a migration (ABAP Web AS with JAVA Add-In). Indicate for each item what the impact on the R3LOAD/JLOAD migration will be.
1. The total size of the database is 500 GB (used space).
a) From a database size of 500 GB it can be expected that the R3LOAD/JLOAD export will need about 10% - 15% (50 GB - 75 GB) of local disk storage.
2. The sizes of the largest ABAP tables are 34 GB, 20 GB, 18 GB.
a) The largest ABAP tables will significantly influence the amount of time necessary to export or import the database. A dedicated R3LOAD process for each large table will improve the export and import time.
3. The sum of all table and index sizes of the JAVA schema does not exceed 2 GB.
a) Because the JAVA tables will need only a little time to export, they will not be critical for the overall export time.
4. Transaction DB02 shows two tables belonging to the ABAP schema user that exist only on the database, but not in the ABAP Dictionary.
a) R3LDCTL only reads the ABAP Dictionary. Tables that exist on the database, but not in the ABAP Dictionary, are ignored. As a consequence, they are not inserted into any *.STR file. The same happens to tables that belong to the JAVA schema but are not defined in the JAVA Dictionary. They will not be exported.
Lesson Summary
You should now be able to:
• Describe the scope of services performed by the SAP OS/DB Migration Check
• Estimate the effort involved in a migration
• Plan a migration project

Unit Summary
You should now be able to:
• Describe the scope of services performed by the SAP OS/DB Migration Check
• Estimate the effort involved in a migration
• Plan a migration project
Unit 3: System Copy Methods
Unit Overview
This unit gives an overview of the available SAP system copy methods. Of most importance is information about SAP products which cannot be migrated in the standard way, and about R3LOAD restrictions that exist if the PREPARE phase of an upgrade was run or the Incremental Table Conversion (ICNV) was not finished.
Unit Objectives
After completing this unit, you will be able to:
• Evaluate the database-specific and -unspecific options for performing SAP homogeneous or heterogeneous system copies (OS/DB Migrations)
Unit Contents
Lesson: System Copy Methods
Exercise 3: System Copy Methods
Lesson: System Copy Methods
Lesson Overview
Contents
• Database-specific and -unspecific methods for SAP homogeneous or heterogeneous system copies (OS/DB Migrations)

Lesson Objectives
After completing this lesson, you will be able to:
• Evaluate the database-specific and -unspecific options for performing SAP homogeneous or heterogeneous system copies (OS/DB Migrations)
Business Example
In a customer project, it must be determined what the best method is to move a system from one platform to another. The right approach depends on the database involved and the type of operating system used.
Figure 34: Comment
Any Hotline or Remote Consulting effort that results from the use of a copy or migration procedure that has not been officially approved by SAP will be billed.
DB2 for LUW = DB2 for Linux, UNIX, and Windows
The above table shows that all SAP supported database systems can be copied to each other by using R3LOAD.
Note:
1. The database-specific methods might be faster than R3LOAD (if released by SAP).
2. The database-specific methods might be faster for an OS migration than R3LOAD (if released by SAP).
On earlier SAP releases, the PREPARE phase imports and implements ABAP Dictionary changes which cannot be unloaded consistently by R3LOAD. A complete reset of all PREPARE changes is not possible. Restarting the PREPARE phase on the migrated system will not help. If this applies to your SAP release, it is mentioned in the system copy guide and/or in a corresponding SAP Note. The Incremental Table Conversion implements database-specific methods which cannot be unloaded consistently by R3LOAD (danger of data loss). Before using R3LOAD, finish all table conversions! Transaction ICNV should not show any entries.
Figure 37: R3LOAD Restrictions (2)
For BW 3.0 and 3.1 R3LOAD system copies, the appropriate Support Package level must be applied and a certain patch level for R3LOAD and R3SZCHK is required (according to SAP Note 777024). Related SAP Notes:
• 771209 "NetWeaver 04: System copy (supplementary note)"
• 777024 "BW 3.0 and BW 3.1 System copy (supplementary note)"
• 888210 "NetWeaver 7.**: System Copy (supplementary note)"
Figure 38: Database Specific System Copy Methods (ABAP)
Certain databases can even be migrated to other operating systems by a simple restore. However, heterogeneous system copies by database-specific methods must be approved by SAP. If in doubt, contact SAP before executing this kind of OS migration. The SAP OS/DB Migration Check is required in any case!
Notes on database-specific methods for ABAP-based systems (make sure that the method is also valid for JAVA Add-In installations):
1. DB2: Copy - database copy on the same host; Dump - database copy to another host.
2. DB4: SAVLIB/RSTLIB method, see SAP Note 585277.
3. DB6: Database director (redirected restore) or brdb6 tools.
4. DB6: Cross-platform restore since DB2 UDB version 8 (for AIX, HP-UX, Solaris), see SAP Note 628156.
5. HDB: Check http://help.sap.com/hana_appliance for the respective guide.
6. INF: Informix Level 0 Backup, see SAP Notes 89698, 173970.
7. ADA: Cross-platform restore if source and target OS are of the same endian type, see SAP Note 962019.
8. MSS: Detach/attach database files, see SAP Notes 151603, 339912.
9. ORA: The SAPINST Backup/Restore method is released for all products, see SAP Notes 659509, 147243.
10. ORA: Transportable Tablespace / Database, see SAP Notes 1035051, 1003028, 1367451.
11. SYB: Backup/Restore, see SAP Note 1591387.
Operating system endian types: see SAP Note 552464.
Figure 39: Database Specific System Copy Methods (JAVA)
SAPINST runs an internal function called “Migration Tool Kit” (“Migration Controller”) to adjust the SAP JAVA target system for the new instance name, instance number, host name, etc.
Exercise 3: System Copy Methods
Exercise Objectives
After completing this exercise, you will be able to:
• Know in which cases to prefer homogeneous system copies with R3LOAD/JLOAD over database-specific methods
• Understand how to handle OS migrations with database tools

Business Example
For an SAP system move, the available options and their specific prerequisites should be known. R3LOAD is quite flexible, but needs more time for the export/import compared to a backup/restore scenario. Nevertheless, there can be good reasons to use R3LOAD anyway.
Task 1: The homogeneous copy of an ABAP system performed with database specific means is in most cases much faster than using the R3LOAD method.
1. What could be some of the reasons for using the R3LOAD method?
2. Which specific checks should be done before using R3LOAD to export the source system?
Task 2: Some databases allow OS migrations of SAP systems using database specific means.
1. Is it necessary in this case to order an SAP OS/DB Migration Check for productive systems?
2. Is a test and final migration required for productive systems?
3. Must one be certified in order to perform an OS/DB migration?
Solution 3: System Copy Methods
Task 1: The homogeneous copy of an ABAP system performed with database-specific means is in most cases much faster than using the R3LOAD method.
1. What could be some of the reasons for using the R3LOAD method?
a) The source and target systems use the same operating system and database type, but different versions.
b) The target disk layout is completely different from the source system, and the database-specific copy method does not allow adapting to new disk layouts.
c) If the database storage unit names include the SAP SID, the installation of the target database according to the R3LOAD method will allow you to choose new names.
d) Data archiving is done in the source database, and the system copy to the target system should also be used to reduce the amount of required disk space.
e) Systems should be moved into or out of an MCOD database.
2. Which specific checks should be done before using R3LOAD to export the source system?
a) Make sure the PREPARE for the next SAP upgrade was not started (if this restriction applies to your SAP System release) and verify that the Incremental Table Conversion (ICNV) has been completed.
Task 2: Some databases allow OS migrations of SAP systems using database-specific means.
1. Is it necessary in this case to order an SAP OS/DB Migration Check for productive systems?
a) It does not matter which method is used to perform a heterogeneous system copy of a productive SAP ABAP System. The SAP OS/DB Migration Check is required in any case.
2. Is a test and final migration required for productive systems?
a) A test and a final migration are required when performing an SAP heterogeneous system copy.
Lesson Summary You should now be able to: • Evaluate the database-specific and -unspecific options for performing SAP homogeneous or heterogeneous system copies (OS/DB Migrations)
Unit Summary You should now be able to: • Evaluate the database-specific and -unspecific options for performing SAP homogeneous or heterogeneous system copies (OS/DB Migrations)
Unit 4: SAP Migration Tools
Unit Overview
This unit describes the SAP migration tools in detail. It also describes the tasks of R3SETUP/SAPINST and in which phases they call the migration tools. The R3LOAD and JLOAD export directory structure will be discussed.
Unit Objectives
After completing this unit, you will be able to:
• Recognize the tools that are required to perform a SAP OS/DB migration and describe their functions
Unit Contents
Lesson: SAP Migration Tools
Exercise 4: SAP Migration Tools
Lesson: SAP Migration Tools
Lesson Overview
Contents
• Functional description of the SAP OS/DB migration tools
• Technical procedure for an OS/DB migration using the SAP migration tools

Lesson Objectives
After completing this lesson, you will be able to:
• Recognize the tools that are required to perform a SAP OS/DB migration and describe their functions
Business Example
You want to know which SAP tools are executed during an export/import based system copy, and what the specific differences between the ABAP and the JAVA system copy are.
Figure 40: Installation Programs R3SETUP and SAPINST
R3SETUP can run in character mode where no graphic environment is available.
SAPINST requires JAVA and a graphic environment which it supports (Microsoft Windows, or X-Windows).
Figure 41: ABAP DDIC Export and DB Object Size Calculation
R3LDCTL reads the ABAP Dictionary to extract the database-independent table and index structures, and writes them into *.STR files. Every version of R3LDCTL contains release-specific, built-in knowledge about the table and index structures of specific SAP internal tables, which cannot be retrieved from the ABAP Dictionary. R3LDCTL creates the DDL.TPL files for every SAP supported database. Since 6.40, additional DDL_LRG.TPL files are generated to support system copies of large databases more easily. As of version 4.5A, the size computation of tables and indexes was removed from R3LDCTL (R/3 Load Control) and implemented in a separate program called R3SZCHK (R/3 Size Check), which also creates the *.EXT files. R3LDCTL is still used for *.EXT file generation on 3.1I and 4.0B. R3LDCTL/R3SZCHK can only run as a single process (no parallelization is possible). The table DDLOADD is used to store the results of the table/index size calculation. R3SZCHK generates the target database size file DBSIZE.XML for SAPINST. The size calculation is limited to a maximum of 1.78 GB for each database object (table or index).
The standard R3LOAD implementation contains an EBCDIC/ASCII conversion of LATIN-1 character sets only. Other translation tables are available upon request. Note that 4.6C is the last R/3 version which runs on EBCDIC. Those 4.6C SAP Systems running on AS/400 (iSeries) must be converted to ASCII before an upgrade to a higher release is possible. Character set conversions to Unicode have been implemented since R3LOAD 6.10. The conversion is done at export time, as additional information which is necessary for it is only available in the source system. Before the data export/import, R3LOAD performs a syntax check on the *.STR files. This prevents unintended overlaps between field names in tables and R3LOAD key words, as well as other inconsistencies. If an R3LOAD process terminates with an error, a restart function allows the data export/import to be continued after the last successfully recorded action. Special care must be taken on restarts after OS crashes, power failures, and out-of-space situations on the export disk (see the troubleshooting section). As of Release R/3 4.5A, R3LOAD writes information about the source system into the dump file. R3LOAD checks these entries when starting the import. If source and target OS or DB are different, R3LOAD will need a valid migration key to perform the import. The parallel export/import of single tables using multiple R3LOAD processes is supported since R3LOAD 6.40.
For SAP migration tool version dependencies, see the relevant SAP Notes. For special considerations on migration tools for Release 3.x, see the relevant SAP Notes for 3.1I. From time to time, SAP provides updated installation software to support the installation of older SAP releases directly on new operating system or database versions. These updates might have new installation programs, but will still use the R3LDCTL, R3SZCHK, R3LOAD, and kernel versions matching the SAP System release in question.
Figure 44: DDL Statements for Non-Standard DDIC Objects
The report SMIGR_CREATE_DDL generates DDL statements for non-standard database objects and writes them into *.SQL files. The *.SQL files are used by R3LOAD to create the non-standard DB objects in the target database, bypassing the information in the *.STR files. Non-standard objects use DB-specific features/storage parameters which are not part of the ABAP Dictionary (mainly BW objects). Since NetWeaver '04, BW functionality is an integral part of the standard, and customers or SAP can decide to implement BW objects on any system. The report must run to make sure that no non-standard DB objects get the wrong storage parameters on the target system.
The report RS_BW_POST_MIGRATION performs necessary adaptations because of DB-specific objects in the target system (mainly BW objects). Required adaptations can be the regeneration of database-specific coding, maintenance of aggregate indexes, ABAP Dictionary adaptations, and many others. The program should always be run, regardless of whether a *.SQL file was used or not. The reports above are not applicable to BW 2.x versions!
Figure 45: ABAP Web AS – Source System Tasks ≤ NW 04
Depending on the database, running update statistics may be required before the size calculation. R3SETUP/SAPINST calls R3LDCTL and R3SZCHK to generate various control files for R3LOAD and to perform the size calculation for tables and indexes. On R/3 releases before 4.5A, R3LDCTL also performs the size calculation for tables and indexes. Once the size of each table and index has been calculated, R3SETUP/R3SZCHK computes the required database size: R3SETUP generates a DBSIZE.TPL, R3SZCHK creates a DBSIZE.XML for SAPINST. Optionally, MIGMON can be used to reduce the unload and load time significantly. Since SAPINST for NetWeaver '04, a special exit step is implemented to call MIGMON. Earlier versions of SAP systems can benefit from MIGMON as well; appropriate break-points must be implemented.
R3SETUP/SAPINST/MIGMON generates R3LOAD command files for every *.STR file. SAPINST/MIGMON calls R3LOAD to generate task files for every *.STR file. The splitting of *.STR files improves unload/load times. For table splitting, the use of MIGMON is mandatory (6.40 and later)!
Figure 46: ABAP Web AS – Target System Tasks ≤ NW 04
Depending on the database type, the database is installed with or without support through R3SETUP or SAPINST. Optionally, MIGMON can be used to reduce the unload and load time significantly. A special exit step was implemented to call MIGMON in SAPINST for NetWeaver '04. Earlier versions of SAP systems can benefit from MIGMON as well; appropriate break-points must be implemented in the R3SETUP/SAPINST installation flow. After the data load, it is necessary to run update statistics to achieve the best possible performance. Ensuring ABAP DDIC (Dictionary) consistency means that the program "dipgntab" is started to update the SAP System "active NAMETAB" from the database dictionary (the table field order). The last step in each migration process is to create database-specific objects by calling SAP programs via RFC. To be successful, the password of user DDIC in client 000 must be known.
The report RS_BW_POST_MIGRATION is called as one of the post-migration activities required to bring the system to a proper state. It is required since ABAP Web AS 6.40 (NetWeaver '04) and for all SAP Systems using BW functionality based on Web AS 6.20.
Figure 47: ABAP Web AS – Source System Tasks ≥ NW 7.0
In newer SAPINST versions there is an option to skip the update statistics step. Since NetWeaver 7.0 (NetWeaver '04S), some SAPINST functionality has been removed and MIGMON is called instead. The figure above shows that the whole R3LOAD handling is done by MIGMON. SAPINST implements the MIGMON-related parameter dialogs and generates the MIGMON property file. After the export is completed, MIGMON gives control back to SAPINST. Even though MIGMON is configured automatically by SAPINST, it can still be configured and called manually for special purposes.
Figure 48: ABAP Web AS – Target System Tasks ≥ NW 7.0
SAPINST uses MIGMON for the import as well. The export and the import can run at the same time, as long as the target system has already been prepared. Even if MIGMON is configured automatically by SAPINST, it can still be configured and called manually for special purposes.
Figure 49: ABAP Web AS – Export Directories and Files
R3SETUP and SAPINST automatically create the directory structure shown above on the named dump file system. During the export, the files are copied to the specified directory structures. Since NetWeaver 7.0, the dump directory contains an ABAP and/or a JAVA subdirectory so that both exports can be stored in one location, separated by name. The *.STR, *.TOC, and dump files are stored in the DATA directory. All *.EXT files are stored in the corresponding database subdirectory. Under UNIX, the directory names are case sensitive. The .SQL and SQLFiles.LST (since 7.02) files only exist if the report SMIGR_CREATE_DDL created them and they were copied to the database subdirectory (automatically by SAPINST, or manually according to the system copy instructions). In most SAPINST implementations, the *.EXT files are only copied for Oracle to the DB subdirectory. Example for the target database Oracle: *.STR, *.TOC, and dump files are stored in /DATA; *.EXT files and the target database size file DBSIZE.* are stored in /DB/ORA; the DDLORA.TPL file is stored in /DB. At import time, R3SETUP and SAPINST read the content of the file LABEL.ASC to verify the dump directory location. The *.WHR files only exist if the optional table splitting was used.
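A simplified sketch of such an export directory for an Oracle target (package name SAPAPPL1 and the exact nesting are examples only and vary by release and database):

    <EXPORT_DIR>/
        LABEL.ASC
        DATA/    SAPAPPL1.STR  SAPAPPL1.TOC  SAPAPPL1.001  ...
        DB/      DDLORA.TPL  (*.SQL and SQLFiles.LST, if created by SMIGR_CREATE_DDL)
        DB/ORA/  SAPAPPL1.EXT  DBSIZE.XML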
Figure 50: JAVA Data Export/Import
As of NetWeaver '04, JAVA data is stored in a database, but there are still JAVA applications storing persistent data in the file system. JLOAD deals with database data only. File system data is covered by SAPINST functionality.
JLOAD is not designed to be a stand-alone tool. For migrating a JAVA-based SAP system, SAPINST needs to perform additional steps which are specific to the version and the installed software components. Unlike R3LOAD, which exports table data only, JLOAD can export both the dictionary definitions and the table data into dump files. JLOAD writes its data in a format that is independent of database and platform. This format can be read and processed on all platforms supported by SAP. If JLOAD terminates with an error, a restart function allows the data export/import to be continued after the last successfully recorded action. Before NetWeaver 7.02, one single JLOAD process performed the whole export or import. Starting with 7.02, multiple JLOAD processes can run simultaneously. As of SAPINST for NetWeaver 7.02, package and table splitting is available for JLOAD.
Figure 51: JLOAD Job File Creation using JPKGCTL
In previous versions, JLOAD not only exported the table data, it also generated its own export/import job files. Starting with NetWeaver 7.02, JPKGCTL is used for this task. Because of the need for faster exports and imports, package and table splitting was implemented. As a consequence, it was necessary to separate the meta data export from the table data export, to allow a separate table creation for split tables. All JLOAD processes are now started by JMIGMON. The JLOAD package size information is stored in "sizes.xml".
The size calculation is not limited to a certain object size (unlike R3SZCHK). Files containing "Initial Extents" (like the *.EXT files for R3LOAD) are not required for JLOAD. In the case of a database change during a heterogeneous system copy, the conversion weights for data and indexes are calculated using master data/index sizes. The export sizes are converted to import sizes using the conversion coefficients, and 20-30% additional space is added for safety reasons. If the computed size is less than certain default values (for example, 1 GB for Oracle), the default sizes are used in the output file.
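A simple worked example with invented numbers: assuming a conversion coefficient of 1.2 for a particular target database (the real coefficients are tool-internal), an export size of 10 GB would be converted to 10 GB x 1.2 = 12 GB, plus roughly 25% safety margin = 15 GB. A very small export of 0.5 GB would give 0.5 GB x 1.2 x 1.25 = 0.75 GB, which is below the 1 GB Oracle default, so 1 GB would be written to the output file instead.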
Figure 53: Flow Diagram JAVA Add-In / JAVA System Copy
Figure 54: JAVA Web AS – Source System Tasks NW 04 / 04S
Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order. JSIZECHECK is called to create the DBSIZE.XML files for all target databases where this file is needed. The log files for JSIZECHECK can be found in the installation directory. For applications storing their persistent data in the file system, SAPINST collects the files into SAPCAR archives. The Software Deployment Manager (SDM) is called to put its file system components (including the SDM repository) into the SDMKIT.JAR file. JLOAD is called to export the JAVA meta data and table data. In NW 04, SAPINST must be called twice: once for the ABAP export and a second time for the JAVA part. Since NW 04S, SAPINST provides a selection for JAVA Add-In which exports the ABAP and the JAVA part in one single step.
Figure 55: JAVA Web AS – Target System Tasks NW 04 / 04S
Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order. The database software installation is only required in cases where a JAVA Web AS is installed with its own database, as opposed to a JAVA Add-In installation into an existing ABAP database. JLOAD is called to load the database. SDM file system software components are re-installed (re-deployed). Application-specific data is restored from the SAPCAR archives. Various post-migration tasks must be done to bring the system to a proper state. Since NW 04S, SAPINST provides a selection for JAVA Add-In which imports the ABAP and the JAVA part in one single step.
Figure 56: JAVA Web AS – Source System Tasks – JPKGCTL
Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order. JSIZECHECK is called to create the DBSIZE.XML files for all target databases where this file is needed. The log files for JSIZECHECK can be found in the installation directory. JPKGCTL distributes the JAVA tables to package files (job files) and can optionally split tables. JMIGMON calls JLOAD to export the JAVA table data. For applications storing their persistent data in the file system, SAPINST collects the files into SAPCAR archives (not required anymore since 7.10). The Software Deployment Manager (SDM) is called to put its file system components (including the SDM repository) into the SDMKIT.JAR file (not required anymore since 7.10). JPKGCTL/JMIGMON is only active if the environment variable JAVA_MIGMON_ENABLED=true was set before starting SAPINST 7.02; see the example below. If the environment variable was not set, the export looks like in NW 04S. Later versions of SAPINST will use JPKGCTL/JMIGMON by default.
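For example, on a UNIX shell (syntax assumed here; on Windows the variable is set accordingly before calling the SAPINST executable):

    JAVA_MIGMON_ENABLED=true
    export JAVA_MIGMON_ENABLED
    ./sapinst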
Figure 57: JAVA Web AS – Target System Tasks – JPKGCTL
Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order. SDM file system software components are re-installed (re-deployed) (not required anymore since 7.10). JLOAD is called to load the database. Application-specific data is restored from the SAPCAR archives (not required anymore since 7.10). Various post-migration tasks must be done to bring the system to a proper state. JPKGCTL/JMIGMON is only active if the environment variable JAVA_MIGMON_ENABLED=true was set before starting SAPINST 7.02. If the environment variable was not set, the import looks like in NW 04S. Later versions of SAPINST will use JMIGMON by default.
Figure 58: JAVA Web AS – Export Directories and Files
The JLOAD.LOG, *_.LOG, and *.STAT.XML files are created in the SAPINST installation directory. The *_.STA files are in the SAPINST installation directory or in /usr/sap///j2ee/sltools. The "SOURCE.PROPERTIES" file contains information that is used to create the central instance on the target system. Directories: Applications (APPS), DB, JLOAD Dump (JDMP), Software Deployment Manager (SDM). The DB subdirectories contain the target database size files created by JSIZECHECK (since SAPINST for NetWeaver 7.0). The APPS directory holds archives from applications storing their persistent data in the file system. The subdirectories and files are only created if the application is installed and known by SAPINST; otherwise, application-specific instructions must be followed to copy the required files to the target system (see the respective SAP Notes). Examples of such applications are ADS (Adobe Document Services), PORTAL (SAP Portal), and KM (Content Management and Collaboration). The APPS and SDM directories may disappear in future releases, as no JAVA-relevant persistent data is stored in the file system anymore.
Since NetWeaver 7.10, the Software Deployment Manager (SDM), which used a file-system-based repository, is not used anymore. The repository is now stored in the database and can be exported with JLOAD. JAVA applications were changed so that they no longer store persistent data in the file system. As a result, SAPINST does not need to collect application files for system copies anymore. As NetWeaver 7.10 (released for certain SAP products only) was available before SAPINST 7.02, JLOAD package and table splitting is not available for this version. Releases using SAPINST functionality based on 7.02 and higher may provide these features later on. Check the system copy guides and SAP Notes for updates.
Note: The ABAP/JAVA Dual-Stack Split is intended to be used in a homogeneous system copy scenario, not for heterogeneous migrations. The name Software Logistics Toolset stands for a product-independent delivery channel which delivers up-to-date software logistics tools (http://service.sap.com/sltoolset).
As of SAP NetWeaver 7.0 including Enhancement Package 3 and SAP Business Suite 7i2011, which is based on SAP NetWeaver 7.0 including Enhancement Package 3, the installation of SAP dual-stack systems is no longer supported. Furthermore, as of SAP Business Suite 7i2011, it will no longer be possible to upgrade an SAP dual-stack system to a higher release. Related SAP Notes:
• 1655335 Use Cases for Splitting Dual-Stack Systems
• 1685432 Dual-Stack Split 2.0 SP1 for Systems Based on SAP NetWeaver
• 1563579 Central Release Note for Software Logistics Toolset 1.0
Definition of a Dual-Stack System: SAP system that contains installations of both Application Server ABAP (AS ABAP) and Application Server Java (AS Java). A dual-stack system has the following characteristics:
• Common SID for all application servers and the database
• Common startup framework
• Common database (with different schemas for ABAP and Java)
The following options are available for splitting a dual-stack system based on SAP NetWeaver into one ABAP stack and one Java stack, each with its own system ID (the dual-stack system is reduced to an ABAP system and the Java system is reinstalled):
• Move JAVA database: Export JAVA stack and import into a separate database. Remove original JAVA stack.
• Keep JAVA database: Export JAVA stack and import into the same database, but as MCOD installation. Remove original JAVA stack.
• Remove JAVA stack: Similar to "Keep JAVA database", but without installation and import into a new system.
Exercise 4: SAP Migration Tools Exercise Objectives After completing this exercise, you will be able to: • Better understand what the tasks and purposes of the SAP System copy tools are.
Business Example You need to know the tasks of R3LDCTL, R3SZCHK, R3LOAD, and JLOAD.
Task 1: R3LDCTL reads the ABAP dictionary and writes database independent table and index structures into *.STR files.
1. As the *.STR file only contains database independent structures, how is R3LOAD able to assemble a create table SQL statement for the target database?
2. Not all the tables within *.STR files can be found with transaction SE11 (table maintenance) in the SAP System. A look at the database dictionary confirms that these tables do exist. What is the reason?
Task 2: The program R3SZCHK computes the size of each table, primary key, and index.
1. The target database of a system copy does not require INITIAL EXTENTs when creating a table. What else can be the purpose of the size computation?
Task 3: Every R3LOAD process needs a command file to start a data export or import.
1. Which programs generate the command files?
2. How do the programs know how many command files to create if no table splitting is involved?
Task 4: JLOAD is used to export the JAVA data, which is stored in the database.
1. How is JAVA Web AS related file system data handled in NetWeaver 7.00?
Solution 4: SAP Migration Tools
Task 1: R3LDCTL reads the ABAP dictionary and writes database independent table and index structures into *.STR files.
1. As the *.STR file only contains database independent structures, how is R3LOAD able to assemble a create table SQL statement for the target database?
a) R3LDCTL creates DDL.TPL template files, which contain all the necessary information to assemble a create table SQL statement for the target database. Information from the *.STR and *.EXT files is used to fill the table- or index-specific part of the statement.
2. Not all the tables within *.STR files can be found with transaction SE11 (table maintenance) in the SAP System. A look at the database dictionary confirms that these tables do exist. What is the reason?
a) Tables that make up the ABAP dictionary itself, or that are used by internal kernel functions, cannot be viewed with the standard dictionary transactions. R3LDCTL contains built-in knowledge about these tables and can write their structures directly into the *.STR files.
Task 2: The program R3SZCHK computes the size of each table, primary key, and index.
1. The target database of a system copy does not require INITIAL EXTENTs when creating a table. What else can be the purpose of the size computation?
a) The sizes of tables and indexes are used to compute the amount of disk space that will be required to create the target database. The Package Splitters also rely on size information from the *.EXT files.
Task 3: Every R3LOAD process needs a command file to start a data export or import.
1. Which programs generate the command files?
a) The programs R3SETUP, SAPINST, or MIGMON create the command files.
2. How do the programs know how many command files to create if no table splitting is involved?
a) Command files are created for every *.STR file that can be found.
Task 4: JLOAD is used to export the JAVA data, which is stored in the database.
1. How is JAVA Web AS related file system data handled in NetWeaver 7.00?
a) The installed software components must be recognized by SAPINST 7.00 or by the tools which are called from it. Most of the file system data is collected in SAPCAR files, and the SDM data is stored inside the SDMKIT.JAR file. In addition, the SAP System copy notes might give instructions on how to copy some files manually.
Unit 5 R3SETUP/SAPINST Unit Overview This unit describes the SAP installation programs R3SETUP and SAPINST. The control files are explained. Emphasis is on how to implement user-defined break-points to stop R3SETUP/SAPINST before or after certain installation steps.
Unit Objectives After completing this unit, you will be able to:
• Understand how R3SETUP and SAPINST control the export and import processes of homogeneous or heterogeneous system copies and how to influence their behavior.
• Recognize the structure of the R3SETUP *.R3S control files, and be able to adjust their contents if necessary.
Lesson: R3SETUP/SAPINST Lesson Overview Contents The role of R3SETUP and SAPINST in the homogeneous or heterogeneous system copy process
Lesson Objectives After completing this lesson, you will be able to:
• Understand how R3SETUP and SAPINST control the export and import processes of homogeneous or heterogeneous system copies and how to influence their behavior.
• Recognize the structure of the R3SETUP *.R3S control files, and be able to adjust their contents if necessary.
Business Example The export or import phase of an R3LOAD-based system copy should be improved. For that purpose, the installation tool R3SETUP/SAPINST must be stopped in certain phases. You need to know how to prepare the tools for that.
Figure 61: R3SETUP: *.R3S Files
The command file DBEXPORT.R3S controls the database export of a homogeneous or heterogeneous system copy. CENTRAL.R3S calls other *.R3S files as selected. DBRELOAD.R3S is only used for re-loading an already finished installation (that is, after the test migration); it is available for Oracle only.
Older *.R3S files are: CENTRDB.R3S for a combined installation of central instance and database, and CEDBMIG, used for a combined installation of central instance and database for homogeneous or heterogeneous system copies.
Figure 62: R3SETUP: *.R3S File Structure
The command file consists of several sections. The beginning of a section is always indicated by the section name in square brackets. Each section contains a set of keys and corresponding parameter values. The [EXE] section represents an installation roadmap with all of the steps listed in sequence. The steps are executed as listed (the step with the lowest number first). Some parameters are not written to the R3SETUP command file until runtime. Parameters which are preset by editing the *.R3S file are not overwritten with default values. After a section has been successfully executed, it receives the status OK. R3SETUP stops on error if a section cannot be executed; the section then receives the status ERROR. R3SETUP always reads the [EXE] section first to get the execution order, and then examines the status of each section. The first section with an ERROR status or without any status is executed next. Removing the OK status from a section forces R3SETUP to execute this section again.
Between the execution of two command sections in a *.R3S file, you may need to stop and make manual changes to the R3LOAD control files, modify database settings, or even call MIGMON. As shown in the graphic, R3SETUP can be forced to stop by implementing user-defined break-points. The R/3 installation kits for Windows operating systems provide R3SEDIT.EXE for modifying *.R3S files in an easy way. SAP Note 118059 "Storage parameter for system copy with R3load" describes how to implement user break-points. SAP Note 784118 "System Copy Java Tools" explains how to find the MIGMON software on the SAP Service Marketplace. The MIGMON*.SAR archive contains a PDF document which shows how to use MIGMON with R3SETUP.
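A purely schematic *.R3S fragment (the step and section names below are invented for illustration and do not match a real control file):

    [EXE]
    ...
    200=SOMESTEP_IND_ORA
    210=NEXTSTEP_IND_ORA
    ...

    [SOMESTEP_IND_ORA]
    STATUS=OK

    [NEXTSTEP_IND_ORA]
    STATUS=ERROR

In this situation, R3SETUP would resume with section NEXTSTEP_IND_ORA, because it is the first section in [EXE] order without the status OK. A user-defined break-point is, in essence, an additional step inserted into this sequence so that R3SETUP stops at the desired place; the exact procedure is described in SAP Note 118059.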
The content of the LABEL.ASC file in the export directory will be compared against the expected string inside DBMIG.R3S to make sure that the import is read from the right location. The same mechanism is used by SAPINST.
Figure 65: SAPINST: *.XML Files
SAPINST records the installation progress in the “keydb.xml” file. SAPINST can continue the installation from a failed step, without having to repeat previous steps. The package.xml file contains the name of installation media (CDs) and the expected LABEL.ASC content.
The current version of SAPINST can be checked by executing "SAPINST -v". As long as the SAPINST version used does not provide a documented way to implement user break-points, the program must be forced to stop by intended error situations. Starting with 7.0 SR2, SAPINST offers the possibility to manipulate step execution via a graphical user interface, the so-called "Step Browser". The Step Browser shows the components and steps that make up an installation. You can manipulate the state of single steps, groups of steps, and even whole components and their sub-components. By invoking the context menu for a step and choosing "Insert Dialog Exit Step above Selection" or "Insert Dialog Exit Step below Selection", you can stop an installation before or after a certain step. To activate the Step Browser, call SAPINST with the command line parameter "SAPINST_SET_STEPSTATE=true". Be aware that the "Step Browser" functionality is not officially supported, so it is used at your own risk!
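For example, to start an installation with the Step Browser activated (UNIX syntax assumed; the executable name and path depend on the installation master media):

    ./sapinst SAPINST_SET_STEPSTATE=true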
The values calculated for ABAP database storage units are estimates, like the values for the initial extents, and primarily serve as guidelines for sizing the target database. You will probably have to increase or decrease individual values during or after the first test migration. ABAP tables and indexes that are larger than 1.78 GB are normalized to an initial extent of 1.78 GB. The target database size calculation is based on estimations; adjust the database size manually if required. The JAVA DBSIZE calculation does not have table or index size limitations, but the result is based on estimations as well.
Exercise 5: R3SETUP/SAPINST Exercise Objectives After completing this exercise, you will be able to: • Know how R3SETUP and SAPINST can be influenced and adapted to special needs.
Business Example You want to force a specific behavior of R3SETUP/SAPINST.
Task 1: The installation program R3SETUP is started with a command line containing the name of a "*.R3S" file to read (for example, "R3SETUP -f DBMIG.R3S"). The purpose of "*.R3S" files is not only to define installation steps; they are also used to store parameters and status information. R3SETUP sets the status of completed steps to OK and stops on error if a step cannot be executed successfully. An erroneous step gets the status "ERROR". Every time R3SETUP is started, the "*.R3S" file is copied first, to have a backup of the original content. Next, R3SETUP begins the execution at the first step that has the status "ERROR", or no status at all. For repeated test migrations or for the final migration of a production system, it would be helpful to have a DBMIG.R3S file that rebuilds the database without reinstalling the database software again. For this purpose, we need a "DBMIG.R3S" file which starts with the generation of an empty database.
1. What can be done to create such a "*.R3S" file? Different methods are possible.
2. What happens to R3SETUP parameters that were preset by hand?
Task 2: SAPINST stores all its installation information in “*.XML” files. As the file structure is neither easy to read nor documented, modifying the files can be risky, as it might cause unexpected side effects.
1. What can be done to force SAPINST to stop before a certain installation step?
2. In the case where we need to repeat a system copy import, it would be useful to have a SAPINST that starts at a certain step. How could this be achieved without modifying the files?
Solution 5: R3SETUP/SAPINST
Task 1: The installation program R3SETUP is started with a command line containing the name of a "*.R3S" file to read (for example, "R3SETUP -f DBMIG.R3S"). The purpose of "*.R3S" files is not only to define installation steps; they are also used to store parameters and status information. R3SETUP sets the status of completed steps to OK and stops on error if a step cannot be executed successfully. An erroneous step gets the status "ERROR". Every time R3SETUP is started, the "*.R3S" file is copied first, to have a backup of the original content. Next, R3SETUP begins the execution at the first step that has the status "ERROR", or no status at all. For repeated test migrations or for the final migration of a production system, it would be helpful to have a DBMIG.R3S file that rebuilds the database without reinstalling the database software again. For this purpose, we need a "DBMIG.R3S" file which starts with the generation of an empty database.
1. What can be done to create such a "*.R3S" file? Different methods are possible.
a) Insert a break-point in the "*.R3S" file at the place where R3SETUP should stop. Copy the "*.R3S" file using a new name.
b) Remove the "STATUS=OK" lines from completed "*.R3S" files. Begin editing at the section where R3SETUP should start later on. Caution: The step order is defined in the [EXE] section. If you reuse an already executed "*.R3S" file, be sure to remove the STATUS=OK lines from all sections following in [EXE] order. Do not skip steps. Use this method only if you want to repeat the installation exactly as it was done before!
2. What happens to R3SETUP parameters that were preset by hand?
a) R3SETUP does not overwrite preset parameters with default values. A description of each installation step and related parameters can be found in the installation directory (sub-directory "doc"), or on the installation CD.
Task 2: SAPINST stores all its installation information in "*.XML" files. As the file structure is neither easy to read nor documented, modifying the files can be risky, as it might cause unexpected side effects.
1. What can be done to force SAPINST to stop before a certain installation step?
a) Since SAPINST for NetWeaver 7.0 SR2, the Step Browser can be used to insert an exit dialog before or after an installation step. Earlier SAPINST versions can only be stopped by forcing intended errors.
2. In the case where we need to repeat a system copy import, it would be useful to have a SAPINST that starts at a certain step. How could this be achieved without modifying the files?
a) Stop SAPINST before the step where you would like to start later on. Copy the entire installation directory as it is. Restore the saved installation directory to its original location to redo the installation.
Lesson Summary You should now be able to: • Understand how R3SETUP and SAPINST control the export and import processes of homogeneous or heterogeneous system copies and how to influence their behavior. • Recognize the structure of the R3SETUP *.R3S control files, and be able to adjust their contents if necessary.
Unit Summary You should now be able to: • Understand how R3SETUP and SAPINST control the export and import processes of homogeneous or heterogeneous system copies and how to influence their behavior. • Recognize the structure of the R3SETUP *.R3S control files, and be able to adjust their contents if necessary.
Unit 6 Technical Background Knowledge Unit Overview This unit explains where all the information that is stored in the various R3LOAD and JLOAD control files comes from. This is the key to understanding why things are as they are. It also gives a better understanding of the ABAP dictionary.
• ABAP table types and storage parameters.
• ABAP data types and data access through the DBSL interface.
• JAVA data types and data access through the JDBC interface.
Unit Objectives After completing this unit, you will be able to:
• Explain how Data Classes are used to map tables to database storage units
• Understand how Data Classes are handled by R3LDCTL and R3LOAD
• Create customer Data Classes
• Explain the purpose of table DBDIFF
• Understand how the R3LOAD/JLOAD data access is working
• Distinguish between the R3SZCHK behavior if the target database type is the same or different than the source database type
Unit Contents
Lesson: Data Classes (TABARTs)
Lesson: Miscellaneous Background Information
Exercise 6: Technical Background Knowledge
Lesson: Data Classes (TABARTs) Lesson Overview Purpose of Data Classes (TABARTs) in the ABAP DDIC and R3LOAD control files
Lesson Objectives After completing this lesson, you will be able to:
• Explain how Data Classes are used to map tables to database storage units
• Understand how Data Classes are handled by R3LDCTL and R3LOAD
• Create customer Data Classes
Business Example In the target database of a migration, some very large tables should be stored in customer-defined database storage units. For that purpose, you need to know how the ABAP data dictionary and R3LOAD deal with Data Classes/TABARTs.
Figure 68: Definition
By this definition, examples of database storage units are given in the figure above.
The table types are maintained in the ABAP Dictionary, regardless of the database used.
Figure 70: TABART – Table Types (2)
Tables in clusters or pools also contain TABART entries in their technical configuration. These entries do not become active unless the tables are converted to transparent tables.
Since NetWeaver '04, the above TABARTs can be found in any SAP System based on Web AS 6.40 and later. Even if no BW InfoCube was created, some tables belonging to the TABARTs shown above do exist.
Figure 73: Tables DDART, DARTT, TS
The TS* tables (for example, TSORA) contain the list of all SAP-defined storage units in a database. Table DDART contains all the TABARTs that are known in the SAP System. Table DARTT contains short TABART descriptions in various languages. Note: table TSDB2 may not exist in NetWeaver systems.
Figure 74: Assignment: TABART – Database Storage Unit
R3LDCTL reads the TA* and IA* tables (for example, TAORA and IAORA) and writes the assignments between TABARTs and database storage units into the DDL.TPL files. The TA* and IA* tables only exist for databases with the appropriate architecture.
Figure 75: Technical Configuration – Table DD09L
DD09L: ABAP Dictionary, technical configuration of tables (TABART and TABKAT). R3LDCTL extracts the corresponding TABART and size category (TABKAT) for each table from table DD09L of the ABAP Dictionary. This information is written to the *.STR files.
TG*/IG*: Assignment of size category (TABKAT = table category) to database storage parameters. The TG* table gives R3LDCTL the information (for example, for Oracle) about the size of "Default Initial Extent", "Next Extent", "Min Extent", and "Max Extent". This information is saved in the DDL.TPL files. The assignment of a table to a specific table category is used to determine the "Next Extent Size" in *.STR. The "Initial Extent Size" actually used is calculated and saved in *.EXT. Note: tables TGDB2 and IGDB2 may not exist in NetWeaver systems.
If tables have been moved to customer-defined database storage units (that is, tablespaces) in the source database, these tables are only re-loaded into the correct storage units during a migration when the tables DARTT, DDART, TS*, IA*, TA*, and DD09L have been maintained correctly. The technical configuration of all tables (stored in DD09L) must include the correct TABART. After the tables have been unloaded, the generated DDL*.TPL files should contain the customer-specific TABARTs and database storage unit names. Note: change the content of DD09L by calling transaction SE11 (technical settings maintenance). This is a modification and will be shown in SPDD later on. If you use database tools (for example, sqlplus) to update DD09L, the change is lost after an upgrade if the corresponding large table is an SAP-delivered one. A fast check can be performed by calling R3LDCTL without parameters: R3LDCTL then generates *.STR and DDL.TPL files in the current directory. Duration: a few minutes (see the example after the SAP Notes below). See SAP Notes:
• 046272 Implement new data class in technical settings
• 163449 DB2/390: Implement new data class (TABART)
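A minimal sketch of such a quick check, assuming a UNIX shell and an arbitrary empty work directory (the exact binary name, path, and environment depend on the platform and kernel release):

    mkdir /tmp/strcheck && cd /tmp/strcheck
    R3ldctl                      # no parameters, run as the SAP administration user
    ls -l *.STR DDL*.TPL         # inspect the generated files

If the customer TABART and the corresponding storage unit names appear in the generated files, the dictionary maintenance was successful.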
Figure 78: Creating New TABARTs (2)
For information on how to create a new TABART, see SAP Note 46272. A customer TABART name must start with "Z" or "Y", followed by four additional characters. If SAPDBA or BRSPACE was used to create additional tablespaces, TABART names like U####, USR##, and USER# can be seen as well. To prevent SAP upgrades from overwriting these definitions, the class for customer-created TABARTs must be "USR".
In the example above, the new tablespace will be used as the data and index storage location for table COEP. It is recommended to name new database storage units after the TABART to identify their purpose, but this is not strictly necessary. See SAP Notes:
• 046272 Implement new data class in technical settings
• 490365 Tablespace naming conventions
Figure 79: Moving Tables and Indexes Between SAP Releases
During a homogeneous or heterogeneous R3LOAD system copy, tables may be moved unintentionally from one database storage unit to another. The reason for this could be that:
• Some tables were assigned to TABARTs of other database storage units, instead of to the TABART where they are currently stored. R3LOAD always creates tables and indexes in the locations obtained from the ABAP Dictionary.
• Older SAP System releases were installed with slightly different table locations than subsequent releases.
• ABAP Dictionary parameters were not properly maintained after the customer re-distributed the tables to new database storage units.
If it is essential to have single tables stored in specific database storage units, check the *.STR files before starting an import.
Table movement can significantly change the size of source and target database storage units. If the Oracle reduced tablespace set is used for the target database, all considerations about table and index locations are obsolete.
Lesson Summary You should now be able to: • Explain how Data Classes are used to map tables to database storage units • Understand how Data Classes are handled by R3LDCTL and R3LOAD • Create customer Data Classes
Lesson: Miscellaneous Background Information Lesson Overview Miscellaneous background information about table DBDIFF, R3LOAD/JLOAD data access, and R3SZCHK size computation.
Lesson Objectives After completing this lesson, you will be able to:
• Explain the purpose of table DBDIFF
• Understand how the R3LOAD/JLOAD data access is working
• Distinguish between the R3SZCHK behavior if the target database type is the same or different than the source database type
Business Example You wonder why there are more tables in the *.STR files than are visible in the ABAP dictionary transaction SE11, and why some objects are even defined differently. You also want to know how the ABAP data types are translated into database-specific data types.
Figure 80: Exception Table DBDIFF
R3LDCTL reserves special treatment for tables, views, and indexes contained in the exception table DBDIFF, since the ABAP Dictionary either does not contain information about these tables, or the data definitions intentionally vary from those in the database. Generally, this involves database-specific objects and the tables of the ABAP Dictionary itself.
See SAP Notes:
• 033814 DB02 reports inconsistencies between database & Dictionary
• 193201 Views PEUxxxxx and TEUxxxxx unknown in DDIC
Figure 81: Database Modeling of the ABAP Data Types
The ABAP data types are mapped by the SAP database interface (DBSL) onto suitable data types for the database used. Refer to the ABAP Dictionary manual for further information. The *.STR files contain the ABAP data types, not the data types of the database. Different databases provide different data types and limitations for storing binary or compressed data in long raw fields. If necessary, the DBSL interface stores the same amount of data in a different number of rows, depending on the database type. R3LOAD uses the interface to read/write data from/to the database.
Figure 83: Consistency Check: ABAP DDIC – DB and Runtime
Transaction SE11 can be used to check the consistency of individual tables or views. In this process, the system checks whether the table or view definitions in the ABAP Dictionary (DDIC) agree with the runtime object or database object. The data in the database is accessed via the runtime object of the active NAMETAB. Changes to the ABAP Dictionary are not written (and therefore are not effective) until they are activated in the NAMETAB. The ABAP Dictionary should be consistent in a standard SAP System. If you suspect that any tables are inconsistent, you can check them individually using transaction SE11. Sometimes tables exist in the active NAMETAB but not in the database. In this case, R3LOAD stops the export on error. Fix the NAMETAB problem with appropriate methods or mark the table entry as a comment in the *.STR file.
In the case of a database change, the sizing information from the source database cannot be used to size the target database, since the data types and storage methods differ. In the case of a homogeneous system copy, the size values can be taken from the source database. Tables that have a large number of extents can be given a sufficiently large initial extent in the target database. To determine the correct size values, the database statistics (update statistics and so on) must be current.
Figure 85: JAVA Data Dictionary
The JAVA Web AS table and index definitions are stored as XML documents in the dictionary table. Exclude lists tell JLOAD and JPKGCTL (JSPLITTER) which objects must not be exported and which objects need special treatment during the export (for example, removal of trailing blanks). A catalog reader (JAVA Dictionary browser) will be available with 7.10. Note: The JAVA Dictionary table will only be filled with the XML documents describing the tables and indexes if JLOAD is used! Do not mix a JLOAD import with other methods (for example, database-specific import tools).
The SAP JDBC interface implements specific extensions to the JDBC standard, which are used by SAP Open SQL (i.e. the SAP JAVA DDIC, SAP transaction logic, SAP OPEN SQL compatibility). JLOAD uses SAP Open SQL to access database data.
Exercise 6: Technical Background Knowledge Exercise Objectives After completing this exercise, you will be able to: • Utilize the concept of TABARTs to create additional *.STR files. • Understand the function of the JAVA database interface
Business Example You need to know how to handle customer specific Data Classes/TABARTs and you are interested in information about how the ABAP and JAVA data types are converted to database specific data types.
Task 1: The OS migration of a large Oracle database was utilized to move the heavily used customer table ZTR1 to a separate tablespace. For that purpose, the necessary tasks were done in the ABAP dictionary: TABART ZZTR1 was created and the tablespace name PSAPSR3ZZTR1 was defined.
1. Which changes were done to the ABAP dictionary of the source system? Which tables were involved? Note the table entries.
Task 2: A customer database was exported using R3LOAD. A look into the export directory shows that no additional *.STR files exist for tables which were stored in separate Oracle tablespaces. The ABAP dictionary tables which are used to define additional TABARTs, which contain the list of tablespaces, and which contain the mapping between TABART and tablespace, were properly maintained.
1. What is the reason that no additional *.STR files were created besides the standard ones?
2. What can be done in advance to check the proper creation of an *.STR file before starting a time-consuming export? Which steps are necessary?
Task 3: The *.STR files contain database independent data type definitions as used in the ABAP dictionary.
1. How is R3LOAD able to convert database independent data types into database-specific data types?
Solution 6: Technical Background Knowledge
Task 1: The OS migration of a large Oracle database was utilized to move the heavily used customer table ZTR1 to a separate tablespace. For that purpose, the necessary tasks were done in the ABAP dictionary: TABART ZZTR1 was created and the tablespace name PSAPSR3ZZTR1 was defined.
1. Which changes were done to the ABAP dictionary of the source system? Which tables were involved? Note the table entries.
a) Define the new TABART ZZTR1 in tables DDART and DARTT.
b) Add the new tablespace name to TSORA.
c) Map TABART ZZTR1 to tablespace PSAPSR3ZZTR1 in tables TAORA and IAORA.
d) Change the TABART entry for table ZTR1 to ZZTR1 in table DD09L.
Note: Table and index data can also be stored in the same tablespace.
Task 2: A customer database was exported using R3LOAD. A look into the export directory shows that no additional *.STR files exist for tables which were stored in separate Oracle tablespaces. The ABAP dictionary tables which are used to define additional TABARTs, which contain the list of tablespaces, and which contain the mapping between TABART and tablespace, were properly maintained.
1. What is the reason that no additional *.STR files were created besides the standard ones?
a) The technical settings (table DD09L) of the involved objects were not changed. The existence of customer TABARTs does not cause the creation of additional *.STR files if no tables have been mapped to them.
2. What can be done in advance to check the proper creation of an *.STR file before starting a time-consuming export? Which steps are necessary?
a) R3LDCTL can be executed stand-alone as the adm user. If no command line parameters are provided, R3LDCTL creates *.STR and DDL.TPL files in the current directory. This takes a few minutes. The created files can then be checked for proper content.
Task 3: The *.STR files contain database independent data type definitions as used in the ABAP dictionary.
1. How is R3LOAD able to convert database independent data types into database-specific data types?
a) R3LOAD does not need specific knowledge about the data types of the target database, because it calls the database interface (DBSL), which knows how to handle them.
Task 4: Every database vendor provides a JDBC interface for easy database access.
1. Why is SAP using its own JDBC interface?
a) Standard JDBC interfaces do not provide features required by SAP applications. Important JDBC extensions are the usage of the SAP JAVA Dictionary and the implementation of the SAP transaction mechanism.
Lesson Summary You should now be able to: • Explain the purpose of table DBDIFF • Understand how the R3LOAD/JLOAD data access is working • Distinguish between the R3SZCHK behavior if the target database type is the same or different than the source database type
Unit Summary You should now be able to: • Explain how Data Classes are used to map tables to database storage units • Understand how Data Classes are handled by R3LDCTL and R3LOAD • Create customer Data Classes • Explain the purpose of table DBDIFF • Understand how the R3LOAD/JLOAD data access is working • Distinguish between the R3SZCHK behavior if the target database type is the same or different than the source database type
Unit 7 R3LOAD & JLOAD Files Unit Overview This unit gives an overview about all the R3LOAD and JLOAD control and data files.
Unit Objectives After completing this unit, you will be able to:
• Understand the purpose, contents, and structure of the R3LOAD control and data files
• Understand the purpose, contents, and structure of the JLOAD control and data files
Lesson: R3LOAD Files Lesson Overview Purpose, contents and structure of the R3LOAD control and data files
Lesson Objectives After completing this lesson, you will be able to:
• Understand the purpose, contents, and structure of the R3LOAD control and data files
Business Example During an R3LOAD system copy, some problems occurred. For troubleshooting, you need to know the purpose of the various control files created.
Figure 87: Overview: R3LOAD Control and Data Files
R3LOAD writes *.XML files during Unicode conversions. They contain the primary key of each row which cannot be properly translated to Unicode. The content is used by transaction SUMG to fix the problems in the target system. These files are not discussed in this course.
The "DDL.TPL" files contain the database-specific description of the create table/index statements. R3LOAD uses these descriptions to generate the tables and indexes. Depending on the database used, the primary key or secondary indexes are generated either before or after the data is loaded. Normally, the R3LOAD-based data export is done sorted by primary key. This default behavior can be switched on and off in the DDL.TPL file. A negative list can be used to exclude tables, views, or indexes from the load process. Typical examples are the tables LICHECK and MLICHECK. The assignment of TABART and data/index storage is made here for databases that support the distribution of data among database storage units.
“Next Extent Size” classes are defined separately for tables and indexes, provided this is supported by the target database. Database specific drop, delete and truncate data SQL statements can be defined for better performance in R3LOAD restart situations.
Figure 90: DDL.TPL: Naming Conventions
The "DDL.TPL" files are generated by R3LDCTL. Since R3LDCTL 6.40, "DDL_LRG.TPL" files are created to support unsorted exports (where it makes sense). For Oracle, parallel index creation was added. You may also see a DDLMYS.TPL file, but it is not used.
Do not change the sections marked "do not change" unless explicitly asked to do so in an SAP Note or by SAP support. Function / section names:
• Create primary index order, sorted / unsorted export: prikey
• Create secondary index order: seckey
• Create table: cretab
• Create primary key: crepkey
• Create secondary index: creind
• Do not create and load table: negtab
• Do not create index: negind
• Do not create view: negvie
• Do not compress table: negcpr
• Storage location: loc
• Storage parameters: sto
Figure 92: DDL.TPL: Structure – Create Table
The DDL.TPL files are templates used by R3LOAD to generate the database-specific SQL statements for creating tables, primary keys, and secondary indexes. Variables are indicated by "&" and are filled with values from the *.STR and *.EXT files, and from the storage sections of the DDL.TPL file itself. Secondary indexes can be unique or non-unique; primary keys are always unique.
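As an illustration only (this is not copied from a real template; variable names, repeat-group markers, and storage clauses differ by database and release), an Oracle-style create table section could look roughly like this:

    cretab: CREATE TABLE &tab_name&
            ( /{ &fld_name& &fld_desc& /-, /} )
            TABLESPACE &location&
            STORAGE (INITIAL &init& NEXT &next& MINEXTENTS &minext&
                     MAXEXTENTS &maxext& PCTINCREASE &pctinc&)

At load time, R3LOAD would replace the "&...&" variables with the table name and field list from the *.STR file, the initial extent from the *.EXT file, and the location and storage values from the loc and sto sections of the same DDL.TPL file.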
Figure 94: DDL.TPL: Structure − Negative List
The negative list can be used to prevent tables, indexes, and views from being created and loaded. The entries are separated by blanks and can be placed on a single line.
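A minimal, illustrative negative list entry, using the example tables mentioned earlier in this lesson, might look like this:

    negtab: LICHECK MLICHECK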
The default initial extent is only used when no *.EXT file exists or when it does not contain the table. New TABARTs for additional storage units (for example, tablespaces for Oracle) can be added to the DDL.TPL file by changing the table and index storage parameters. If you do this, change the *.STR files and the corresponding create database templates for R3SETUP (DBSIZE.TPL) or SAPINST (DBSIZE.XML) as well. It is easier to change the ABAP Dictionary before the export than to change the R3LOAD control files. If R3LOAD cannot find a specific table or index entry in the *.EXT file, the missing entry is ignored and default values are used.
The default initial extent is only used when no "*.EXT" file exists, or when it does not contain the index. The same index storage parameters are used for primary and secondary indexes.
Do not change the sections marked "do not change" unless explicitly asked to do so in an SAP Note or by SAP support. Function / section names:
• Create primary key order, sorted / unsorted export: prikey
• Create secondary index order: seckey
• Create table: cretab, drop table: drptab
• Create primary key: crepky, drop primary key: drppky
• Create secondary index: creind, drop secondary index: drpind
• Create view: crevie, drop view: drpvie
• Truncate data: trcdat
• Delete data: deldat
• Do not create table: negtab
• Do not load data: negdat
• Do not create index: negind
• Do not create view: negvie
• Do not compress table: negcpr
• Storage location: loc
• Storage parameters: sto
Above are the templates for dropping objects and deleting/truncating table data. The "&where&" condition is used when restarting the import of split tables. All other sections are similar to 4.6D and below. Some functions apply to specific database types or database releases only.
The *.EXT files are created for all database types, because the extent values are used to compute the size of the target database (DBSIZE.TPL/DBSIZE.XML) and for package splitting.
Figure 102: .EXT: Initial Extent (2)
The size of the "initial extent" is based on assumptions about the expected space requirements of a table. Factors such as the number and average length of the data records, compression, and the data types used play an important role. In the case of Oracle dictionary-managed tablespaces, the values for the "initial extent" can be increased or decreased as required. Observe database-specific limitations for maximum "initial extent" sizes. R3SZCHK limits the maximum initial extent to a value of 1.78 GB. This was implemented to prevent data load errors for very large tables caused by not having enough consecutive space in a single storage unit; otherwise, small tables or indexes could easily block the storage unit. Today's database releases handle the storage more flexibly, making this mechanism obsolete. Even though the maximum size of a table is limited to 1.78 GB (more precisely 1700 MB), this information is accurate enough for package splitting.
If R3LOAD cannot find a specific table or index entry in the *.EXT file, the missing entry is ignored and default values are used. Typical warnings in R3SZCHK.log when the size limit is reached:
• WARNING: REPOLOAD in SLEXC: initial extent reduced to 1782579200
• WARNING: /BLUESKY/FECOND in APPL0: initial extent reduced to 1782579200
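The warning value corresponds exactly to the 1700 MB limit mentioned above: 1700 x 1024 x 1024 bytes = 1,782,579,200 bytes, which is approximately 1.78 GB.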
Figure 103: R3LOAD: .STR
Figure 104: .STR: Description (1)
The term “package” is used as a synonym for R3LOAD structure files (*.STR). The data of tables in SAP0000.STR will never be exported. ABAP report loads must be regenerated on the target system.
The ABAP Nametab tables DDNTF / DDNTT (and, since 6.x, DDNTF_CONV_UC / DDNTT_CONV_UC for Unicode conversions) require a certain import order. The JAVA-based Package Splitter makes sure that the Nametab tables are always put into the same file (SAPNTAB.STR). If R3LOAD cannot find a specific table or index entry in the *.EXT file, the missing entry is ignored and default values are used.
Figure 105: .STR: Description (2)
The buffer flag is used for OS/390 migrations (as of Release 4.5A). It indicates how to buffer tables in an OS/390 DB2 database. Table type (conversion type with code page change):
• C = Cluster table
• D = Dynpro (screen) table
• N = Nametab (active ABAP Dictionary)
• P = Pooled table
• Q = Unicode conversion related purpose
• R = Report table
• T = Transparent table
• X = Unicode conversion related purpose
R3LOAD activity:
• all = Create table/index and load data
• data = Load data only (table must be created manually)
• struct = Create table/index, but do not load any data
For tables which are marked with "struct", R3LOAD will not create a data export or import row inside the task file. This prevents the export or import of unwanted table data. Comments are indicated by a "#" character in the first column.
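A schematic fragment of a *.STR entry (the record keys and field order are simplified here and are not a literal copy of the real file format):

    # this line is a comment
    tab: SOMETABLE
     att: APPL1 ... T all ...
     fld: MANDT ...

In this sketch, APPL1 would be the TABART, "T" the table type (transparent), and "all" the R3LOAD activity; changing "all" to "struct" would create the table without exporting or importing its data.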
The total of the field lengths is the offset of the next data record to read in the data dump file.
Figure 107: .STR: Object Structure (2)
The "dbs:" list specifies the databases for which the object should be created. A leading "!" means the opposite. In the above example, the index MLST~1 will be created on all databases except ADA and MSS. The index MLST~1AD will be created on ADA and MSS only. The "dbs:" list was implemented starting with R3LOAD 6.40.
Views are not generated in the target system until all of the tables and data have been imported. The corresponding “SAPVIEW.EXT” file does not contain any entries or does not even exist, since views do not require any storage space other than for their definition in the DB Data Dictionary.
The content of the .TOC file is used by R3LOAD version 4.6D and below, to restart an interrupted export. As of R3LOAD 6.10, the .TSK file is used for the restart.
The above restart description is only valid for R3LOAD less than or equal to 4.6D! A restart without option "-r" forces R3LOAD to begin the export at the very first table of the *.STR file in question. The existing export *.LOG file is automatically renamed to *.SAV and the existing *.TOC file is reused, but not cleared. It is recommended to delete the related *.LOG, *.TOC, and dump files before repeating a complete export of a single *.STR file or of the whole database. If R3LOAD export processes are interrupted by a system crash or a power failure, the *.TOC file may list more exported tables than the dump file really contains (because the operating system was not able to write all dump file buffers to disk). In this case, a restart can be dangerous, as it starts after the last *.TOC entry, which might not be valid. This can lead to missing data or duplicate keys later on. See the troubleshooting chapter for details on how to prevent this situation. R3SETUP adds the "-r" command line option automatically when restarting R3LOAD.
Figure 113: .TOC: Internal Structure ≥ 6.10
Since R3LOAD 6.10, the *.TSK file is used to restart a terminated data export! The *.TOC file is read to find the last write position only.
In the case of split tables, the WHERE condition used during the export is written into the respective *.TOC file. Before starting the import, R3LOAD compares the WHERE condition in the *.TOC file against the WHERE condition in the *.TSK file. R3LOAD assumes a problem if they do not match and stops with an error. If there is an error during the data load and R3LOAD must be restarted, the WHERE condition is used for selective deletion of the already imported data. Unicode code pages: 4102 Big Endian, 4103 Little Endian. Non-Unicode code pages: 1100, MDMP (for exports of MDMP systems).
Depending on the source database used for the export, a data compression ratio of between 1:4 and 1:10 or more can be achieved. The compression is performed at block level, so the file cannot be decompressed as a whole. Some versions of R3SETUP/SAPINST ask for the maximum dump file size (other versions use different defaults - check the *.CMD file for the value used). Each additional dump file (for the same *.STR file) is assigned a new number (such as SAPAPPL1.001 or SAPAPPL1.002). The files of a package are all generated in the same directory (if not specified differently in the *.CMD file - 6.10 and higher only!). Make sure that the available disk space is sufficient. A checksum calculation at block level is implemented as of R3LOAD 4.5A to ensure data integrity. R3LOAD versions 4.5A and above compare the source system information obtained from the dump file against the actual system information. If R3LOAD detects a difference in OS or DB, a migration key is necessary to perform the import (see the GSI section in the export log file).
R3LOAD reads a certain amount of database data into an internal buffer and compresses it. The number of written blocks (a group) depends on the compression result and the block size used. This figure is also written into the dump, to tell R3LOAD how many blocks to read later on. Since 4.5A, a header block is used to identify heterogeneous system copies and to verify the migration key. Also implemented with 4.5A: every group of compressed data blocks has its own checksum. Before a checksum can be verified, all blocks of a group must be read by R3LOAD. If a dump file has been corrupted during a file transfer, typical R3LOAD read errors are: RFF (cannot read from file), RFB (cannot read from buffer), or “cannot allocate buffer of size ...”. For more details, see unit “Troubleshooting”.
Up to R3LOAD 4.6D, the restart point for an interrupted import is read from the .log file. The restart performs a delete data (DELETE FROM) or a drop table/index. A restart without option “-r” will force R3LOAD to begin at the very first table of the *.STR. The existing import *.LOG file will be automatically renamed to *.SAV. The import process will terminate on error, as the database objects already exist. R3SETUP adds the “-r” option automatically when restarting R3LOAD.
Figure 123: .LOG: Import Log ≥ 6.10 and < 6.40
Since R3LOAD 6.10, only the *.TSK file is used to restart an interrupted import! The restart point for the data load is the first entry in the *.TSK file of status error (err) or execute (xeq).
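A hypothetical import *.TSK snapshot (table names invented for illustration); the restart point is the first line that is not in status “ok”:
T MSEG C ok
P MSEG~0 C ok
D MSEG I err
T MKPF C xeq
P MKPF~0 C xeq
D MKPF I xeq
Here R3LOAD would first repeat the data load of MSEG (deleting the partially imported rows because of the “err” status) and then continue with MKPF.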
As of R3LOAD 6.40, separate time stamps for create table, load data, and create index are implemented. This allows a much better load analysis than in previous releases.
Command files are automatically generated by SAP installation programs R3SETUP, SAPINST, and MIGMON.
Figure 127: .CMD: Internal Structure ≤ 4.6D
The “.CMD” files contain the names and paths of the files from which R3LOAD retrieves its instructions. The name of the “.CMD” file must be supplied on the R3LOAD command line. R3LOAD dump files can be redirected to different file systems by adapting the “dat:” entry. The default maximum size (fs) of a dump file is often 1000M (1000 MB); the size can also be specified in other units.
Meaning of the section names:
• icf: Independent control file
• dcf: Database dependent control file
• dat: Data dump file location
• dir: Directory file (table of contents)
• ext: Extent file (not required at export time)
The DDL.TPL file is often read from the installation directory. In this case, R3SETUP/SAPINST has copied it there from the export directory. This is done so that storage locations and similar settings can be adapted.
Figure 128: .CMD: Internal Structure ≥ 6.10
Meaning of the section names:
• tsk: Task file
• icf: Independent control file
• dcf: Database dependent control file
• dat: Data dump file location (up to 16 different locations)
• dir: Directory file (table of contents)
• ext: Extent file (not required at export time)
In the above example, the first dump file SAPPOOL.001 will be written to /migration/DATA, the second dump file SAPPOOL.002 to /mig1/DATA, and so on. The fourth, fifth, and all further dump files will be stored in the last defined dump location. If more than one PACKAGE is mentioned in a *.CMD file, a single R3LOAD will execute them in sequential order. This might be useful in certain cases.
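A rough sketch of such a *.CMD file with two dump locations; all paths except /migration/DATA and /mig1/DATA are invented, and the exact notation for listing several locations is the one shown in the figure above:
tsk: "/install/SAPPOOL.TSK"
icf: "/export/DATA/SAPPOOL.STR"
dcf: "/install/DDLORA.TPL"
dat: "/migration/DATA" bs=1k fs=1000M
dat: "/mig1/DATA" bs=1k fs=1000M
dir: "/export/DATA/SAPPOOL.TOC"
ext: "/export/DB/ORA/SAPPOOL.EXT"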
The values are estimates and serve primarily to display the load progress. The generation of statistic files is switched off by default. Use R3LOAD option -s to make use of the statistic feature.
Since R3LOAD uses task files, the restart points are no longer read from *.LOG or *.TOC files. Complex restart situations requiring manual user intervention are minimized or easier to handle. Objects or data can easily be omitted from the import process by simply changing the status of the corresponding .TSK row. See SAP Note 455195 “R3LOAD: Purpose of TSK Files” for further reference.
The slide above shows the initial .TSK file content, after it was created by R3LOAD. Please check unit 8 “Advanced Migration Techniques” for the table split case.
Figure 134: .TSK: Internal Structure for Import
The above .TSK file shows the content after R3LOAD has stopped on error. The corresponding .LOG file contains the error description/reason. Please check unit 8 “Advanced Migration Techniques” for the table split case.
The “.TSK” files are used to define which objects have to be created and which data has to be exported/imported by R3LOAD. They are also used to find the right restart position after a termination.
Status values:
• xeq = Task not yet processed.
• ok = Task successfully processed.
• err = Failure occurred while processing the task. The next run will drop the object or delete/truncate data before re-doing the task.
• ign = Ignore task, do nothing.
The status “ign” can be used to omit a task action and to document this at the same time. Setting a task manually to “ok” has the same result, but it is not visible for later checks. There is also an action “D” which can be used to delete objects with R3LOAD, but it is used in exceptional cases only.
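A hypothetical *.TSK fragment illustrating the status values (object names invented; the text after each line describes the effect and is not part of the file):
T ZMYTAB C ok      table already created successfully
P ZMYTAB~0 C ok    primary key already created successfully
D ZMYTAB I err     data load failed; the next run deletes/truncates the data and repeats the load
D ZOLDTAB I ign    data load deliberately skipped and documented as skipped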
R3LOAD creates the “.TSK” files from existing “.STR” files. Example: create a *.TSK file for the export:
R3LOAD -ctf E SAPAPPL0.STR DDLORA.TPL SAPAPPL0.TSK ORA -l SAPAPPL0.log
After starting the database export or import, R3LOAD renames .TSK to .TSK.BCK and inserts line-by-line from .TSK.BCK into a new .TSK as soon as a task (create, export, import, ignore) has finished successfully (status: ok) or unsuccessfully (status: err). R3LOAD automatically deletes .TSK.BCK after each run. In the case of a restart, R3LOAD searches the .TSK file for uncompleted tasks of status “err” or “xeq” and executes them. In the case of table splitting, the content of the WHERE file (*.WHR) is added to the task file.
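For the import direction, the corresponding call looks similar (a sketch with example file names):
R3LOAD -ctf I SAPAPPL0.STR DDLMSS.TPL SAPAPPL0.TSK MSS -l SAPAPPL0.log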
In rare cases, it may be necessary to rebuild an already used .TSK file after a hard termination, caused by operating system crashes, power failures, etc. This must be done by merging the file .TSK.BCK with .TSK. Note: If more than one R3LOAD is executing the same task file by accident, one of the processes will find an existing .TSK.BCK file and then stop on error. This should prevent running parallel R3LOAD processes against the same database objects. The “-merge_bck” option can only be used in combination with “-e” or “-i”. The export or import will start immediately after the merge is finished! The merge option “-merge_only” merges the .TSK.BCK into the .TSK files, but does not start an export or import. See unit 10 “Troubleshooting” for possible dangers when using the merge option.
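A sketch of such a merge restart for an import (package name is an example; as stated above, “-merge_bck” only works together with “-e” or “-i”):
R3load -i SAPAPPL1.cmd -l SAPAPPL1.log -merge_bck
With “-merge_only” instead of “-merge_bck”, the task files are merged but no import is started.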
R3LOAD stops on error if a .TSK.BCK file is found, as it is not clear how to proceed. For example, a power failure interrupted the import processes and R3LOAD will not be able to cleanup the .TSK.BCK and .TSK files. The current content of both files are shown above.
Figure 139: .TSK: Merge Option (3)
After R3LOAD has been restarted with option “-merge_bck”, the content of .TSK.BCK will be compared against .TSK, and the missing lines will be copied to .TSK. In this stage, it is not known whether some objects not listed in .TSK already exist in the database. R3LOAD solves this problem by changing the status of each “xeq” line to “err”, to force a “DROP” or “DELETE” statement before repeating an import task.
After the task file merge is completed, R3LOAD will attempt to drop each object before creating it. Errors caused by drop statements are ignored.
Figure 141: .TSK: R3LOAD Restart Behavior
No special R3LOAD restart option is necessary! Rare cases are hard terminations caused by power failures and operating system crashes. Export write order: dump data, .TOC, .TSK
Figure 143: Why Do BW Objects Need Special Handling?
In the case of BW non-standard database objects, the ABAP Dictionary contains table and index definitions that are not sufficient to describe all object properties. The missing information is held in the BW meta data (e.g. partition information, bitmap indexes, ...). R3LDCTL reads the ABAP Dictionary only. Additional information from the BW meta data cannot be inserted into *.STR files. The *.STR file content is enough to export and import BW data via R3LOAD, but is insufficient to create the BW objects in the target system. To overcome the existing limitations of R3LDCTL and R3LOAD, the report SMIGR_CREATE_DDL was developed, which writes database-specific DDL statements into *.SQL files. R3LOAD was extended to switch between the normal way of creating tables and indexes, and the direct execution of DDL statements from a *.SQL file. So it is possible to create non-standard database objects and to load data into them using R3LOAD.
Figure 144: .SQL: File Generation
The report SMIGR_CREATE_DDL is mandatory for all systems using non-standard database objects (mainly BW objects). Since NetWeaver 7.02, SMIGR_CREATE_DDL inserts the list of created .SQL files into the file SQLFiles.LST.
Example 1: R3LOAD creates table /BI0/B0000103000, using the supplied CREATE TABLE statement. Depending on the DDL.TPL, content data will be loaded before or after the creation of primary key /BI0/B0000103000~0. Example 2: R3LOAD creates table /BI0/B0000106000 and primary key /BI0/B0000106000~0 in a single step. Afterwards, data will be loaded. As the /BI0/B0000106000~0 SQL section is empty, R3LOAD will not try to create /BI0/B0000106000~0 again. This configuration is used to make sure that table and index are always created together, independently of the DDL.TPL entries. Example 3: R3LOAD creates table /BI0/B0000108000 and will load data into it. As the index /BI0/B0000108000~0 has no SQL section, no further action is required. The table will not have a primary key. Empty SQL sections are used to prevent R3LOAD from creating objects according to the *.STR file content.
The example above combines a create table and a create unique index statement, forcing R3load to load data after the index creation (which can be useful for some table types). The variable &APPL0& will be replaced by the TABART according to DDL.TPL content.
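As a rough sketch of how such a generated *.SQL entry might look (object name taken from Example 2 above; columns, storage clauses, and the exact file syntax are illustrative, since the real files are written by SMIGR_CREATE_DDL and differ by database platform and release):
tab: /BI0/B0000106000
sql: CREATE TABLE "/BI0/B0000106000" ( ... ) TABLESPACE PSAPSR3;
     CREATE UNIQUE INDEX "/BI0/B0000106000~0" ON "/BI0/B0000106000" ( ... ) TABLESPACE PSAPSR3;
ind: /BI0/B0000106000~0
sql:
Because the index section exists but is empty, R3LOAD does not try to create the primary key a second time from the *.STR definition.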
Figure 148: R3LOAD: Execution of External SQL Statements
Since R3LOAD 7.20, the file SQLFiles.LST is examined and the existence of the mentioned .SQL files is verified. The SQLFiles.LST is searched for in the current directory and then in the export DB directory. SAPINST takes care that the .SQL files and the SQLFiles.LST are put into the right place. Before R3LOAD assembles the first SQL statement, it searches for a .SQL file which matches the TABART of the first object. The .SQL file is searched for in the current directory first and then in the DB/ directory. All object names in the .SQL file are added to an internal list (index). R3LOAD then scans the internal list for a matching object name prior to assembling a create object statement. If a match is found, the SQL statement from the .SQL file is used instead of building a statement according to the DDL.TPL content. R3LOAD can only read one .SQL file per *.STR file! The usage of a .SQL file may not be mentioned in the import log file before R3LOAD 7.20.
The SQLFiles.LST is read by R3LOAD to retrieve the *.SQL file names. R3LOAD will abort if a .SQL file mentioned in the list cannot be found. This was implemented as an additional safety mechanism. Independently of the SQLFiles.LST content, R3LOAD searches for the .SQL files based on the TABART in the respective .STR file. Even if a .SQL file is not listed in the SQLFiles.LST, it will be used if it is found by R3LOAD.
Figure 150: Common R3LOAD Command Line Options (1)
Increasing the commit count can improve database performance if a database monitor shows that the slowdown of the database is caused by the number of commits rather than by the loading of the data. Changing the value can also decrease performance, so load tests are recommended. The default commit count is approximately 1 commit per 10,000 rows. The “-k” or “-K” option is not valid for R3LOAD below 4.5A. For additional R3LOAD options, see “R3LOAD -h”. The option “-continue_on_error” is dangerous for the export! On MDMP systems, R3LOAD 6.x automatically uses a dummy code page called “MDMP”, which indicates “do no conversion”. The MDMP code page entry can be seen in the *.TOC file. For the conversion of MDMP systems to Unicode, see unit 11 “Special Projects”.
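For illustration, an import call with an increased commit count and a migration key might look like this (assuming the commit count is set with “-c” and the migration key with “-k”, as listed in the figure; the package name and values are made up, and load tests should confirm the chosen commit count):
R3load -i SAPSSEXC.cmd -l SAPSSEXC.log -c 50000 -k <migration key>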
Figure 151: Common R3LOAD Command Line Options (2)
The statistic data file is useful to watch the load progress of large data dump files. Option “-o” can be combined with option “-ctf” (create task file), “-e” (export), and “-i” (import). In combination with “-ctf”, the corresponding tasks are not inserted into the *.TSK file. The “-o” option is used, for example, for the import of split tables.
For example:
R3LOAD -ctf I → resulting task file content:
T TAB01 C xeq
P TAB01 C xeq
D TAB01 I xeq
R3LOAD -o D -ctf I → resulting task file content:
T TAB01 C xeq
P TAB01 C xeq
Since R3LOAD 6.40, the “-v” command line option shows the program compile time, to make it easier to identify patch levels. Database specific load options can be listed by “-h”. The options are used to speed up the R3LOAD import bypassing database mechanisms, which are not required for a system copy load. If in doubt about which options are recommended, check to see what R3SETUP or SAPINST is using.
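As an example of such a database-specific option (the option name is taken from the SAP Note titles below), an import could be started like this; whether it is appropriate should be taken from the relevant SAP Note or from the options SAPINST generates:
R3load -i SAPAPPL1.cmd -l SAPAPPL1.log -loadprocedure fast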
Figure 152: DB Specific R3LOAD Option: Load Procedure Fast
Related SAP Notes:
• 0905614 DB6: R3load -loadprocedure fast COMPRESS
• 1058427 DB6: R3load options for compact installation
• 1464560 FAQ R3load in MaxDB
• 1014782 MaxDB: FAQ System Copy
• 1054852 Recommendations for migrations to MS SQL Server
• 1045847 ORACLE DIRECT PATH LOAD SUPPORT IN R3LOAD
• 1046103 ORACLE DIRECT PATH LOAD SUPPORT IN R3LOAD 7.00 AND LATER
• 1591424 SYB: 7.02 Heterogeneous system copy with target Sybase ASE
• 1672367 SYB: 7.30 Heterogeneous system copy with target Sybase ASE
Lesson: JLOAD Files
Lesson Overview
This lesson explains the purpose, content, and structure of the JLOAD control and data files.
Lesson Objectives
After completing this lesson, you will be able to:
• Understand the purpose, contents, and structure of the JLOAD control and data files
Business Example Problems occurred during a JLOAD system copy. For troubleshooting, you need to know the purpose of all the various control files created.
Figure 154: Overview: JLOAD Control and Data Files
SAPINST 7.02 and its improvements for JLOAD (JPKGCTL, JMIGMON) were implemented for the first time with NetWeaver 7.02. Other versions like NetWeaver 7.10 have a higher version number, but were released earlier. This leads to a situation where a lower NetWeaver version (7.02) provides more (advanced) JLOAD functionality than a higher NetWeaver version (e.g. 7.10). In general: if no JPKGCTL was used or is available, JLOAD behaves as in NW 7.00; if JPKGCTL was run, the behavior is similar to the NW 7.02 examples.
Job files are used to specify JLOAD actions. SAPINST in NetWeaver '04 SR1 and NetWeaver 04S starts a single JLOAD process, which exports the whole JAVA schema (meta data and table data). The default data dump file name is “EXPDUMP”. JLOAD can create the EXPORT.XML and IMPORT.XML files by itself. The job file can also contain a maximum data dump file size; without such a parameter, the default size is set to 2 GB. Future versions may contain additional object types like database views (which are not used yet).
Figure 157: Export Job Files created by JPKGCTL 7.02
Starting with SAPINST 7.02, JPKGCTL can be used to create the JLOAD job files. The meta data describing a table or index (EXPORT_METADATA.XML / EXPORT_POSTPROCESS.XML) is separated from the data export (EXPORT_.XML). This allows multiple JLOAD export and import processes. For table splitting it is necessary to create the table first, then load data, and create indices afterwards (post-processing).
Figure 158: Export Job Files - JPKGCTL 7.30
In 7.30, there is one meta data export, several package exports, and for each package its own post-process export job file.
Job files are used to specify JLOAD actions. In NetWeaver 04 SR1 and NetWeaver 04S, SAPINST is starting a single JLOAD process, which imports the entire JAVA schema.
Figure 160: Import Job Files created by JPKGCTL 7.02
For table splitting it is necessary to create the table first, then load data, and create indexes afterwards (post-processing).
The above *.STA file contains the export status. As soon as an item is exported, a new line will be added to the *.STA file. The content of the *.STA file is used to identify where to proceed, in case of a restart. The status can either be “OK” for successful, or “ERR” for failed. In NetWeaver 04 SR1, the “EXPORT.STA” file can be found under: /usr/sap///j2ee/sltools Check the SAPINST log file for the location in other versions.
Figure 164: Export Status Files 7.02
The meta data export is separated from the table data export.
The above *.STA file contains the import status. As soon as an item is imported, a new line will be added to the *.STA file. The content of the *.STA file is used to identify where to proceed, in case of a restart. The status can either be “OK” for successful, or “ERR” for failed.
Figure 166: Import Status Files 7.02
First the meta data is applied (create table, primary key), then the data import takes place (insert), and finally the secondary indexes are generated (post-processing).
The existence of a matching export *.STA file identifies a restart situation, otherwise the export starts from scratch. NetWeaver 7.02 JLOAD writes log files with the following naming conventions: EXPORT_METADATA.XML.LOG, EXPORT_.XML.LOG, and EXPORT_POSTPROCESS.XML.LOG. It separates the meta data export from table data export.
The existence of a matching import *.STA file identifies a restart situation, otherwise the import starts from the first data dump file entry. NetWeaver 7.02 JLOAD writes log files with the following naming conventions: IMPORT_METADATA.XML.LOG, IMPORT_.XML.LOG, and IMPORT_POSTPROCESS.XML.LOG. It separates the meta data import from table data import and post-processing.
If not otherwise specified in the export job file, a dump file can grow up to 2 GB before an additional file will be automatically created (i.e. .001, .002, ...). Because the length of each data block can be found in the respective header, JLOAD can easily search for a certain location inside the data dump file.
Figure 174: Data Dump File Structures for separated Meta Data
If JPKGCTL was used, meta data and table data are put into separate dump files.
After the package splitting was completed, JPKGCTL writes the “sizes.xml” file containing the expected package sizes. This helps JMIGMON to identify large packages which should be exported first.
url = URL for the database to connect to
driver = JDBC database driver
auth = database logon
If no job file is specified, the complete database will be exported by default. In addition, suitable “EXPORT.XML” and “IMPORT.XML” files will be generated. The default log file name will be “JLOAD.LOG”, unless a job file is specified; in this case, the log file will get the same name as the job file, with *.XML replaced by *.LOG.
Exercise 7: R3LOAD & JLOAD Files (Part I) Exercise Objectives After completing this exercise, you will be able to: • Modify R3LOAD files to serve special demands. Often, various methods are available to achieve the same result.
Business Example You want to modify R3LOAD control files to adapt the standard behavior to specific system copy needs.
Task 1: In a DB migration of a large database to Oracle, it was decided to move the heavily-used customer table ZTR1 (TABART APPL1) to a separate table space. No changes were done to the ABAP dictionary in advance. The export was executed the normal way. 1.
What changes should be done to the R3LOAD files for creating table ZTR1 and its indexes in tablespace PSAPSR3ZZTR1 and to load data into it?
Fragment of SAPAPPL1.STR:
tab: ZTR1
att: APPL1 4 ?? T all
ZTR1~0 APPL1 4
fld: MANDT CLNT 3 0 0 not_null 1
fld: MBLNR CHAR 10 0 0 not_null 2
fld: TSTMP FLTP 8 0 16 not_null 0
ind: ZTR1~PSP
att: ZTR1 APPL1 4 not_unique
fld: MANDT
fld: TSTMP
2.
After the import is finished, which dictionary maintenance tasks should be done?
Task 2: An Informix export of a heterogeneous system copy with R3LOAD 6.x is short on disk space. None of the available file systems is large enough to store the expected amount of dump data. All TABARTs will fit into the “sapreorg” file system, except TABART CLUST, which has a size of 600 MB.
File system A: /tools/exp_1 (~ 400 MB free)
File system B: /oracle/C11/sapreorg/exp (~ 4500 MB free)
File system C: /usr/sap/trans/exp_2 (~ 350 MB free)
1.
Which SAPCLUST.cmd file content would allow an export without any manual intervention?
tsk: "/oracle/C11/sapreorg/install/SAPCLUST.TSK"
icf: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.STR"
dcf: "/oracle/C11/sapreorg/install/DDLINF.TPL"
dat: "/oracle/C11/sapreorg/exp/DATA/" bs=1k fs=1000M
dir: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.TOC"
2.
Which other solutions are possible with more or less manual intervention?
Task 3: While doing an export, R3LOAD stops on error, because an expected table does not exist. This seems to be an inconsistency between the ABAP Dictionary and the database dictionary. As most of the tables are already exported, it does not make sense to restart the SAP instance to fix the problem and to repeat the export afterwards. 1.
How can R3LOAD 4.x be forced to skip the export of the table?
2.
How can R3LOAD 6.x be forced to skip the export of the table?
Task 4: During a heterogeneous system copy, because of a mistake while cleaning up some tables in an Oracle database, the content of table ATAB was accidentally deleted. The SAP System was not started yet, but the load of all tables is already finished. 1.
R3LOAD 4.x: What can be done to load the content of table ATAB without re-creating the table or an index? At least two solutions are possible. Which files must be created, and what should the R3LOAD command line look like? Table ATAB belongs to TABART POOL. SAPPOOL.cmd:
Note: Check for R3LOAD command line options at the end of Unit 7! 2.
R3LOAD 6.x: Which R3LOAD 6.x features and command line options can be used to load table ATAB again?
Task 5: In an Oracle OS migration the database must be installed with dictionary-managed tablespaces because of certain reasons. After the test import, some large tables and indexes show a huge amount of extents.
1.
The customer adapted the Next Extent values in the source database on a regular basis. What are the reasons for so many extents in the target database?
2.
What can be done to reduce the number of extents in the next test run?
Solution 7: R3LOAD & JLOAD Files (Part I) Task 1: In a DB migration of a large database to Oracle, it was decided to move the heavily-used customer table ZTR1 (TABART APPL1) to a separate table space. No changes were done to the ABAP dictionary in advance. The export was executed the normal way. 1.
What changes should be done to the R3LOAD files for creating table ZTR1 and its indexes in tablespace PSAPSR3ZZTR1 and to load data into it?
Fragment of SAPAPPL1.STR:
tab: ZTR1
att: APPL1 4 ?? T all
a)
To create additional tablespaces on the target database, the files DBSIZE.TPL or DBSIZE.XML must be adapted. A new TABART / tablespace assignment must be added in the DDLORA.TPL file:
# table storage parameters
ZZTR1 PSAPSR3ZZTR1
# index storage parameters
ZZTR1 PSAPSR3ZZTR1
… and the original TABART in the SAPAPPL1.STR file has to be changed from:
tab: ZTR1
att: APPL1 4 ?? T all
ZTR1~0 APPL1 4
fld: MANDT CLNT 3 0 0 not_null 1
fld: MBLNR CHAR 10 0 0 not_null 2
fld: TSTMP FLTP 8 0 16 not_null 0
ind: ZTR1~PSP
att: ZTR1 APPL1 4 not_unique
fld: MANDT
fld: TSTMP
to:
tab: ZTR1
att: ZZTR1 4 ?? T all
ZTR1~0 ZZTR1 4
fld: MANDT CLNT 3 0 0 not_null 1
fld: MBLNR CHAR 10 0 0 not_null 2
fld: TSTMP FLTP 8 0 16 not_null 0
ind: ZTR1~PSP
att: ZTR1 ZZTR1 4 not_unique
fld: MANDT
fld: TSTMP
2.
After the import is finished, which dictionary maintenance tasks should be done? a)
After the import is finished, the ABAP dictionary should be maintained for table ZTR1 (update tables DDART, DARTT, TSORA, TAORA, IAORA and DD09L).
Task 2: An Informix export of a heterogeneous system copy with R3LOAD 6.x is short on disk space. None of the available file systems is large enough to store the expected amount of dump data. All TABARTs will fit into the “sapreorg” file system, except TABART CLUST, which has a size of 600 MB.
File system A: /tools/exp_1 (~ 400 MB free)
File system B: /oracle/C11/sapreorg/exp (~ 4500 MB free)
File system C: /usr/sap/trans/exp_2 (~ 350 MB free)
1.
Which SAPCLUST.cmd file content would allow an export without any manual intervention?
tsk: "/oracle/C11/sapreorg/install/SAPCLUST.TSK"
icf: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.STR"
dcf: "/oracle/C11/sapreorg/install/DDLINF.TPL"
dat: "/oracle/C11/sapreorg/exp/DATA/" bs=1k fs=1000M
dir: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.TOC"
Which other solutions are possible with more or less manual intervention? a)
Move the dump files of small packages out of the export directory, as soon as they are completed. It can also be helpful to reduce the dump file size, to move completed dump files of large packages sooner.
Task 3: While doing an export, R3LOAD stops on error, because an expected table does not exist. This seems to be an inconsistency between the ABAP Dictionary and the database dictionary. As most of the tables are already exported, it does not make sense to restart the SAP instance to fix the problem and to repeat the export afterwards. 1.
How can R3LOAD 4.x be forced to skip the export of the table?
a) R3LOAD 4.x: In the *.STR file, the definitions of the non-existing table (and its indexes) can be marked as comments by placing a “#” at the beginning of each line. Deleting the entries would also work, but afterwards the change would not be visible to others who might be searching for errors. Restart R3LOAD.
2.
How can R3LOAD 6.x be forced to skip the export of the table?
a) R3LOAD 6.x: Change the status of the table entry inside the export *.TSK file to “ign” (ignore). This will help fix the export problem, but for the import, you will still have to change the *.STR file (see R3LOAD 4.x). Restart R3LOAD.
Task 4: During a heterogeneous system copy, because of a mistake while cleaning up some tables in an Oracle database, the content of table ATAB was accidentally deleted. The SAP System was not started yet, but the load of all tables is already finished. 1.
R3LOAD 4.x: What can be done to load the content of table ATAB without re-creating the table or an index? At least two solutions are possible. Which files must be created, and what should the R3LOAD command line look like? Table ATAB belongs to TABART POOL.
SAPPOOL.cmd:
icf: /exp/DATA/SAPPOOL.STR
dcf: /install/DDLDBS.TPL
dat: /exp/DATA/ bs=1k fs=1000M
dir: /exp/DATA/SAPPOOL.TOC
ext: /exp/DB/DBS/SAPPOOL.EXT
Note: Check for R3LOAD command line options at the end of Unit 7! a)
a) Copy SAPPOOL.STR to ATAB.STR
b) Remove everything from ATAB.STR that doesn’t belong to table ATAB.
c) Inside ATAB.STR, change the action field from “all” to “data”.
d) Copy SAPPOOL.cmd to ATAB.cmd
e) Change the content of ATAB.cmd from:
icf: /exp/DATA/SAPPOOL.STR
dcf: /install/DDL.TPL
dat: /exp/DATA/ bs=1k fs=1000M
dir: /exp/DATA/SAPPOOL.TOC
ext: /exp/DB//SAPPOOL.EXT
Task 5: In an Oracle OS migration the database must be installed with dictionary-managed tablespaces because of certain reasons. After the test import, some large tables and indexes show a huge amount of extents. 1.
The customer adapted the Next Extent values in the source database on a regular basis. What are the reasons for so many extents in the target database?
a) The next extent values used by R3LOAD are obtained from the size categories of the ABAP Dictionary. These size categories are part of the technical settings of tables and are not updated by any external database administration tool. If R3SZCHK computes the initial extent of tables smaller than needed, the number of next extents increases because the size category values are often too small.
2.
What can be done to reduce the number of extents in the next test run?
a) The initial or next extent values of the involved tables should be increased by modifying the *.EXT or *.STR file.
Exercise 8: R3LOAD & JLOAD Files (Part II, Hands-On Exercise) Exercise Objectives After completing this exercise, you will be able to: • Use R3LOAD standalone. • Manually create *.TSK and *.CMD files.
Business Example You want to execute R3LOAD standalone, to fix problems or to make use of specific settings not possible in the standard setup, i.e. with SAPINST.
Task 1: This is a hands-on exercise for which you must logon to the source system of the example migration.
Hostname: ________________
Group-ID: ________________
Telnet user: ________________
Password: ________________
Hostname: ________________
Instance #: ________________
SAP user: ________________
Password: ________________
Client #: ________________
If there is a unique group number on your workstation monitor, please use this number as your Group-ID. Depending on the training setup, use the Windows Remote Desktop Connection or Telnet to logon to system DEV (logon method, user, password, and hostname as supplied by the trainer). Note: You will logon as an administrator! Please do not make any changes to the system, except for those explained in this exercise. 1.
Perform the following preparation steps:
Change to the drive and the directory as supplied by the trainer.
Copy the whole directory “TEMPLATE” to your work directory. Use name “work” (i.e. “xcopy TEMPLATE work00”).
Execute “env.bat” in your work directory. It will set required environment variables. Repeat this step after each logon!
Change to your work directory (i.e. work00).
Use the editor “notepad” for the Windows Remote Desktop Connection or “xvi” for Telnet to perform the following modifications:
In ZZADDRESS.STR change the table and primary key name ZZADDRESS to ZZADDRESS (i.e. ZZADDRESS00) ZZADDRESS~0 to ZZADDRESS~0 (i.e. ZZADDRESS00~0)
In ZZADDRESS.EXT change the table and primary key name ZZADDRESS to ZZADDRESS ZZADDRESS~0 to ZZADDRESS~0
In ZZADDRESS.TOC change the table name ZADDRESS to ZZADDRESS Hint: Edit Notes - “xvi” survival guide The editor “xvi” is a “vi” implementation for Windows systems, which works very well on telnet sessions. Insert mode: press “i”, end insert mode: press “Escape” Delete character under cursor: press “x” Delete character while in insert mode: press “Backspace” Save file: enter “:wq” (write and quit), if it doesn’t work, press “Escape” and try it again Do not use cursor keys while in insert mode; press “Escape” first
Task 2: 1.
Which fields belong to the primary key of table ZZADDRESS
Task 3: 1.
Logon to SAP System DEV and verify the ABAP Dictionary against the DB Dictionary in transaction DB02. Save the shown output to a file.
Task 4: 1.
Use R3LOAD to create the import task file ZZADDRESS.TSK.
Task 5: 1.
Use an editor to create an R3LOAD command file, which can be used to import table ZZADDRESS. Note: R3load for Windows recognizes “\” and “/” as path separators.
Task 6: 1.
Import table ZZADDRESS with R3LOAD and check the content of table ZZADDRESS by using the MSS command line utility “osql”. The command line is case sensitive! osql -E -Q "SELECT * FROM dev.ZZADDRESS"
Task 7: Repeat the verification of the ABAP Dictionary against the DB Dictionary (do a refresh!). 1.
Does the output look different than before? New entries?
Task 8: Try to load table ZZADDRESS again by changing the ZZADDRESS.TSK file: D ZZADDRESS I ok → D ZZADDRESS I xeq
1.
What happened? What is the content of ZZADDRESS.TSK and ZZADDRESS.log?
2.
Try the import again, what happens now?
Task 9: 1.
Create a new sub-directory in your work directory. Name it “export”. Create a task and a command file to export ZZADDRESS. Export table ZZADDRESS and compare the number of exported rows against the number of table rows in the database. osql -E -Q “select count(*) from dev.ZZADDRESSgroup-id”
Note: Make sure not to overwrite your existing ZZADDRESS.TOC and ZZADDRESS.001 file!
Solution 8: R3LOAD & JLOAD Files (Part II, Hands-On Exercise)
Task 1: This is a hands-on exercise for which you must logon to the source system of the example migration.
Hostname: ________________
Group-ID: ________________
Telnet user: ________________
Password: ________________
Hostname: ________________
Instance #: ________________
SAP user: ________________
Password: ________________
Client #: ________________
If there is a unique group number on your workstation monitor, please use this number as your Group-ID. Depending on the training setup, use the Windows Remote Desktop Connection or Telnet to logon to system DEV (logon method, user, password, and hostname as supplied by the trainer). Note: You will logon as an administrator! Please do not make any changes to the system, except for those explained in this exercise. 1.
Perform the following preparation steps:
Change to the drive and the directory as supplied by the trainer.
Copy the whole directory “TEMPLATE” to your work directory. Use name “work” (i.e. “xcopy TEMPLATE work00”).
Execute “env.bat” in your work directory. It will set required environment variables. Repeat this step after each logon!
Change to your work directory (i.e. work00).
Use the editor “notepad” for the Windows Remote Desktop Connection or “xvi” for Telnet to perform the following modifications:
In ZZADDRESS.STR change the table and primary key name ZZADDRESS to ZZADDRESS (i.e. ZZADDRESS00) ZZADDRESS~0 to ZZADDRESS~0 (i.e. ZZADDRESS00~0)
In ZZADDRESS.EXT change the table and primary key name
In ZZADDRESS.TOC change the table name ZADDRESS to ZZADDRESS Hint: Edit Notes - “xvi” survival guide The editor “xvi” is a “vi” implementation for Windows systems, which works very well on telnet sessions. Insert mode: press “i”, end insert mode: press “Escape” Delete character under cursor: press “x” Delete character while in insert mode: press “Backspace” Save file: enter “:wq” (write and quit), if it doesn’t work, press “Escape” and try it again Do not use cursor keys while in insert mode; press “Escape” first a)
Use an editor to create an R3LOAD command file, which can be used to import table ZZADDRESS. Note: R3load for Windows recognizes “\” and “/” as path separators.
a)
tsk: ZZADDRESS.TSK
icf: ZZADDRESS.STR
dcf: DDLMSS.TPL
dat: .\
Import table ZZADDRESS with R3LOAD and check the content of table ZZADDRESS by using the MSS command line utility “osql”. The command line is case sensitive! osql -E -Q "SELECT * FROM dev.ZZADDRESS" a)
ZZADDRESS.log:
(IMP) INFO: import of ZZADDRESS00 completed (20 rows)
ZZADDRESS.TSK:
T ZZADDRESS00 C ok
P ZZADDRESS00~0 C ok
D ZZADDRESS00 I ok
osql -E -Q "SELECT * FROM dev.ZZADDRESS00"
Wattenberg   Muenchen
...
Werle        Offenbach
(20 rows affected)
Note: As the dump was created on a little endian Unicode system (see ZZADDRESS.TOC), the import must be performed with dbcodepage “4103”. For more information on “osql”, see document “MSS_osql.txt” in your work directory. Ignore R3LOAD messages starting with “sapparam”: sapparam: sapargv( argc, argv) has not been called. sapparam(1c): No Profile used. sapparam: SAPSYSTEMNAME neither in Profile nor in Commandline
Task 7: Repeat the verification of the ABAP Dictionary against the DB Dictionary (do a refresh!). 1.
Does the output look different than before? New entries? a)
Transaction DB02 will show your imported table ZZADDRESS00. If not, refresh the display (tables of your student neighbors might be visible as well).
Task 8: Try to load table ZZADDRESS again by changing the ZZADDRESS.TSK file: D ZZADDRESS I ok → D ZZADDRESS I xeq
1.
What happened? What is the content of ZZADDRESS.TSK and ZZADDRESS.log? a)
Because of the primary key on field “NAME”, it is impossible to insert two identical names. R3LOAD returns an error (rc=26 error). The ZZADDRESS.TSK file contains: D ZZADDRESS00 I err
The import works the second time, as the status “err” in ZZADDRESS.TSK forces R3LOAD to delete the table content before starting the import.
ZZADDRESS.log:
(IMP) INFO: import of ZZADDRESS00 completed (20 rows)
Task 9: 1.
Create a new sub-directory in your work directory. Name it “export”. Create a task and a command file to export ZZADDRESS. Export table ZZADDRESS and compare the number of exported rows against the number of table rows in the database. osql -E -Q “select count(*) from dev.ZZADDRESSgroup-id”
Create the task file ZZADDRESS.TSK containing the following line (you can use R3LOAD or an editor):
R3load -ctf E ..\ZZADDRESS.STR ..\DDLMSS.TPL ZZADDRESS.TSK MSS -l ZZADDRESS.log
ZZADDRESS.TSK:
D ZZADDRESS00 E xeq
Start the export:
R3load -datacodepage 4103 -e ZZADDRESS.CMD -l ZZADDRESS.log
There should be 20 rows in the database, and the same number should be mentioned in the *.TOC file.
Unit Summary You should now be able to: • Understand the purpose, contents, and structure of the R3LOAD control and data files • Understand the purpose, contents, and structure of the JLOAD control, and data files
Unit 8 Advanced Migration Techniques
Unit Overview
Contents:
• Time critical steps in an R3LOAD / JLOAD based system copy
• Methods/strategies to save time during system copy
Unit Objectives
After completing this unit, you will be able to:
• Identify the time consuming steps during export / import
• Minimize the downtime by applying appropriate measures
• Understand the MIGMON functions and operation variants
• Configure MIGMON
• Understand the time analyzer features
• Analyze the generated output
• Understand the table splitting concept
• Distinguish between the generic R3TA and the Oracle specific table splitter
• Describe how MIGMON handles table splitting during export/import
• Understand the Distribution Monitor functionality
• Understand the JMIGMON functions and operation variants
• Configure JMIGMON
• Understand the JLOAD package and table splitting concept
• Configure the rule file
Unit Contents
Lesson: Time Consuming Steps during Export / Import
Lesson: MIGMON - Migration Monitor for R3LOAD
Lesson: MIGTIME & JMIGTIME - Time Analyzer
Lesson: Table Splitting for R3LOAD
Lesson: DISTMON - Distribution Monitor for R3LOAD
Lesson: JMIGMON - Migration Monitor for JLOAD
Lesson: Time Consuming Steps during Export / Import Lesson Overview How to identify, minimize, or avoid time consuming steps during the export/import phases
Lesson Objectives
After completing this lesson, you will be able to:
• Identify the time consuming steps during export / import
• Minimize the downtime by applying appropriate measures
Business Example You need to know the long running OS/DB Migration steps to estimate the time schedule in a cut-over plan.
Figure 179: General Remarks
Please take into account that the number of rows in ABAP cluster tables will differ between the source and target system in case of a Unicode conversion, because of their compressed content. For comparable results, use an SQL statement like this: SELECT COUNT(*) FROM CDCLS WHERE PAGENO = '0'.
Figure 180: Technical View: Time Consuming Export Steps (1)
Whether the above tasks are available depends on the SAPINST version and database. Different databases have different space requirements for storing the data. The programs R3LDCTL/R3SZCHK compute the INITIAL EXTENT of all tables and indexes for the target database. The sum of all of these provides the estimated size of the target database. Depending on the database, table splitting can be a time consuming process, which should run before the export. In most cases there is not enough time for it during the export/import downtime. The computed WHERE conditions are defined in such a way that data added or deleted afterwards does not matter; the conditions will fetch all data in the table. If possible, large data updates should be avoided after creating the WHERE conditions (or the conditions should be computed again). If the Oracle PL/SQL table splitter is used, special considerations apply to ROWID splitting; more information can be found in SAP Note 1043380. SAPINST Export Preparation: You want to build the target system up to the point where the database load starts, before the export of the source system has finished. Export and import processes should run in parallel during the system copy process. SAPINST Table Splitting Preparation: Optional step for preparing the table splitting before starting the export of an SAP System based on ABAP. If some of the tables are very large, the downtime can be decreased by splitting the large tables into several smaller packages, which can then be processed in parallel.
Figure 181: Technical View: Time Consuming Export Steps (2)
The most important way to tune export performance is to optimize the use of parallel export processes. Transportable storage devices can be DVDs, external USB disks, laptops, or tapes.
When R3LOAD stores the exported data into dump files, it uses a very efficient compression algorithm. You do not need to compress these files again (you may even find that the resulting file is larger than before). To save time when copying very large amounts of dump data to the target media/system, it can be useful to set the dump file size to a small value, like 300 MB. As soon as a dump file is completed, the copy can be started. Note: the MIGRATION MONITOR waits until all dump files of a package have been completed.
Figure 182: Technical View: Time Consuming Import Steps
If a parallel export / import using R3LOAD is planned, the database must be ready for the import when the export starts. Normally the first database update statistics run is started directly after the database import. If short on time, the update statistics can be postponed to a later point in time, where it can run in parallel with other activities.
Figure 183: Saving Time on Import – After Load Errors
R3SETUP or SAPINST starts one R3LOAD process for each package. If R3LOAD processes terminate with an error condition, R3SETUP/SAPINST will stop after all R3LOAD processes are finished. The execution of R3SETUP/SAPINST must be repeated until all R3LOAD processes are successful. If you know that the cause of an R3LOAD error termination is fixed, you can save time by starting R3LOAD alongside R3SETUP or SAPINST. Your own R3LOAD process must be started with the same parameter set as was used by R3SETUP or SAPINST before. The parameters can be obtained from the corresponding “.LOG” file. Log on as the operating system user who owns the “.log” files in the installation directory (for example: adm). Change into the install directory. Only start R3LOAD processes for the *.STR files that have already been processed by the current run of R3SETUP or SAPINST. Never restart R3SETUP while your own R3LOAD processes are running. This would cause your R3LOAD process and the R3LOAD process started by R3SETUP to compete for the same *.STR file. In the case of SAPINST, the second R3LOAD process will be stopped automatically, as a backup task file already exists. R3SETUP/SAPINST must be restarted after all data is loaded, to execute the remaining steps of the installation/migration.
Figure 184: R3LOAD Parameters from Import *.LOG File
Do not forget to add the restart parameter “-r” to the command line of R3LOAD (4.6D and below only). If you are starting R3LOAD manually, make sure that your current working directory is the installation directory!
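A sketch of such a manual call (package name and code page are examples; take the actual values from the corresponding import *.LOG file, and append “-r” for R3LOAD 4.6D and below):
R3load -i SAPAPPL1.cmd -dbcodepage 4103 -l SAPAPPL1.log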
Figure 185: Export / Import Time Diagram
In customer databases, most of the transaction data is stored in tables belonging to only a few TABARTs. This causes long running R3LOAD export and import processes for the TABARTs. To save time and optimize the parallelism of R3LOAD processes, you can: Split package files (*.STR) into several smaller files, or separate large tables into additional package files.
Figure 186: Optimizing the Export / Import Process
Export and import times are reduced by splitting package files (*.STR) and creating additional package files for large tables. Always try to export/import large tables first. This ensures the maximum parallelism of R3LOAD processes. Very large tables should be exported / imported with multiple R3LOAD processes (table splitting). Optimizing the database parameters speeds up the export or import process and can prevent time-consuming errors because of bottlenecks.
Reduce CPU load on the database server by running R3LOAD on a different system. In a fast and stable network environment the usage of the R3LOAD socket method can save time.
Figure 187: JAVA-Based Package Splitter
The JAVA Package Splitter can also be used for releases earlier than Web AS 6.40. The term package is used as a synonym for *.STR files. SAPINST 6.40 for NetWeaver '04 can call the JAVA or the PERL Package Splitter (depending on the selected option). Starting with NetWeaver 04S, only the JAVA Splitter is used. The Splitter analyzes the content of *.EXT files to find the best splitting points. Fine tuning can be done to *.STR files after a test migration. The splitting of *.STR files is even possible without *.EXT files, if tables are named in a provided input file. Package file names for the split *.STR files are generated automatically. The documentation is provided as a PDF file together with the splitting tool. In the case where no JAVA JRE 1.4.x or higher is installed, the *.STR files can be transported to another system and split there.
Figure 188: PERL-Based Package Splitter
The Perl script SPLITSTR.PL may also be used for earlier Release 4.x migrations. Do not use the Perl splitter for Unicode conversions! The DBEXPORT.R3S command file for R3SETUP releases since 4.6 and SAPINST calls SPLITSTR.PL if the option has been selected. SPLITSTR.PL analyzes the content of *.EXT files to find the best split points. Fine tuning can be done to *.STR files after a test migration. Package file names for split *.STR files are generated automatically. The Perl script is self-explanatory. Calling SPLITSTR.PL without parameters or using the “-help” option causes a help text to appear. Do not use SPLITSTR.PL on already split files, as it can lead to problems. Always split from the original files, thus preserving them. The SPLITSTR.PL script is not intended to be used on 3.x *.STR files. The results are erroneous! A Perl version is available for every operating system. The installed version of Perl can be checked with “perl –v”. In the case where no Perl is installed, the *.STR files can be transported to another system, and split there.
Socket connections are available as of R3LOAD 6.40. (Do not try to use an earlier version, even if R3LOAD provides the socket option.) R3LOAD writes directly to the opened socket and does not need any dump or table of contents (*.TOC) file. Error situations are handled as with conventional exports and imports. The exporting and importing processes use their respective task files for restart. Network interruptions terminate the export and import process immediately. Make sure that the export or import process does not fail because of database resource bottlenecks. R3LOAD restarts can make the import more time consuming than expected. The R3LOAD import process has to be started first, because the export process must connect to an existing socket, otherwise the process fails. The Migration Monitor supports socket connections in an easy-to-configure way.
The same files are used, as in standard R3LOAD scenarios, but no dump or *.TOC files are created. The R3LOAD control files must be accessible as usual, on the source and target system.
Figure 191: .CMD – Socket Connection ≥ 6.40
R3LOAD must be started with the “-socket” command line option. The importing process must be invoked before the export process can be started. The socket port can be any free number between 1024 and 65535 on the import host.
tsk: Task file
icf: Independent control file
dcf: Database dependent control file
dat: Socket port number and name or IP address of the import host
ext: Extent file (not required at export time)
The “dir” section is not required because no .TOC file will be created.
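A sketch of how the two sides could be started (package name is an example; the importing side must already be running when the exporting side connects):
Import host: R3load -socket -i SAPAPPL1.cmd -l SAPAPPL1.log
Export host: R3load -socket -e SAPAPPL1.cmd -l SAPAPPL1.log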
Figure 192: .LOG – R3LOAD Socket Logs ≥ 6.40
The importing process listens on the specified port and waits for the exporting process to connect.
Figure 193: Migration / Table Checker (MIGCHECK) – Features
The JAVA-based “Migration Checker” was developed to check that there is a log file for each package file (option: -checkPackages). This is an indicator that R3LOAD did run for them. The second check is to verify that each action in the task file is completed successfully (option: -checkObjects). Unsuccessful tasks are listed in an output file. The two features are used by SAPINST for NetWeaver 04S and later, to check the import completeness. Database and table dependent exceptions are handled automatically. The “Table Checker” feature is used to check the number of table rows. It can be used to make sure that tables contain the right number of rows after the import. As this is a long running task, it can only be started manually.
SAPINST allows for custom export/import order definitions and even the change of individual parameters for each single package.
ORDER: defines the sequence in which the packages are to be loaded. The load starts with the lowest values first. Negative values are also allowed.
PKGID: Identifier _.
PKGNAME: Name of the package (*.STR file).
PKGFILESIZE: Size of the data dump file.
PKGDIR: Path where the DATA and DB sub-directories for the package reside.
PKGDDLFILE: The name of the DDL.TPL file.
PKGCMDFILE: The name of the command file for this package. (Will be generated automatically, but if you want to use your own, you may enter its name.)
PKGLOADOPTIONS: Additional DB specific R3LOAD options that will be applied when this package is imported. As NetWeaver 04S uses MIGMON to start R3LOAD, the advanced features of the Migration Monitor are used, instead of the mechanism above.
Figure 195: Unsorted Export
Before starting an unsorted export, please read SAP Note 954268 “Optimization of export: Unsorted unloading”! By default, the system unloads the data sorted. This is controlled by the following entry in the DDL.TPL file: prikey: .... ORDER_BY_PKEY. Sorting takes time and needs a large temporary storage; if it can be omitted, the export will be faster. Take care about the consequences in the target system (performance impact). If you use MaxDB as the target database, you must export all of the tables sorted. If you use MaxDB as the source database, you can unload sorted data only. Do not override this option when you export MaxDB. If you use MSSQL as the target database, you should export all of the tables sorted, so that you can avoid performance problems during the import. If you have to unload the tables unsorted and you use MSSQL as the target database, refer to Note 1054852. Certain table types are not allowed to be exported in an unsorted way. SAP Note 954268 explains the release- and code-page-dependent considerations. R3LDCTL generates DDL_LRG.TPL files to simplify unsorted exports since NetWeaver 04.
Figure 196: Changing R3LOAD Table Load Sequence in *.STR
Do not re-order tables in *.STR files after the export. If more than one dump file exists for a single *.STR file and the table order in the *.STR file was changed after the export, R3LOAD will not be able to read table data from, for example, file *.002 and data for the next table from file *.001.
Figure 197: Initial Extent Larger than Consecutive DB Storage
The situation above can be a problem with Oracle dictionary-managed tablespaces, but it should not apply to locally managed tablespaces.
Customer databases can contain tables and indexes that require a larger “initial extent” than the maximum possible in a single data container. In such cases, reduce the “initial extent” in the *.EXT file and adapt the “next extent” size class in the relevant *.STR file. The new “initial extent” size should be slightly less than the maximum available space in the data container. This gives the database some space for internal administration data.
Lesson Summary You should now be able to: • Identify the time consuming steps during export / import • Minimize the downtime by applying appropriate measures
Lesson: MIGMON - Migration Monitor for R3LOAD Lesson Overview Purpose of the MIGMON Migration Monitor for R3LOAD
Lesson Objectives
After completing this lesson, you will be able to:
• Understand the MIGMON functions and operation variants
• Configure MIGMON
Business Example You need to know the appropriate MIGMON configuration scenario for specific customer SAP System landscapes.
Figure 198: Migration Monitor (MIGMON) – Features (1)
SAP Note: 784118 “System Copy JAVA Tools”. The note also describes how to download the software from SAP Marketplace. The export server mode applies where R3SETUP/SAPINST will be replaced for the export. Even if MIGMON is not used for the import, the advanced control features of the export processes can help to save time.
Already existing *.TSK or *.CMD files will not be overwritten, but used.
Figure 199: Migration Monitor (MIGMON) – Features (2)
The export client mode applies where R3SETUP/SAPINST performs the database export and MIGMON is used for the import. The client MIGMON is used to transfer the files to the target host and to signal the importing MIGMON that a package is ready to load. Even if MIGMON was not used to perform the export, the import can still benefit from the advanced MIGMON R3LOAD control features.
The number of export and import processes can be different. In the case of socket usage, the number of export and import processes is the same. The export job number is ignored, because the Export Monitor requests the job number from Import Monitor during startup. Groups of packages can be assigned to different DDL*.TPL files.
FTP: File transfer via FTP between source and target system
Network: Export directory is shared between source and target system
Socket: R3LOAD will use sockets (requires R3LOAD 6.40 or higher). It can be combined with FTP to copy R3LOAD control files to the target system.
Stand-alone: MIGMON runs stand-alone, i.e. the export will be provided on a transportable media only (possibly no fast network connection to the source system is available).
FTP parameters contain the logon password. To hide the FTP password in the command line (visible using the “ps –ef” command on UNIX, or various Windows tools), the export_monitor_secure.sh/bat files should be used. The usage of FTP might be a security risk, but it is a reliable method of data transfer.
Figure 201: Migration Monitor – Net Configuration Variant
The Migration Monitor Net Configuration Variant is useful in environments, where file systems can be shared. For consistency reasons, exports should always be done to local file systems! In the example above, the export directory and the network exchange directory are shared from the exporting to the importing system. As soon as a package is successfully exported, the corresponding signal file (*.SGN) will be created in the network exchange directory. Now the importing Migration Monitor starts an R3LOAD process to load the dump from the shared export directory.
The file “export_statistics.properties” is generated from the exporting Migration Monitor before it exits and is used to inform the importing Monitor about the total number of packages and how many of them are erroneous. If all export packages are ok, the importing Migration Monitor stops looking for new packages in the exchange directory. After the successful load of all packages, it starts the load of the SAPVIEW.STR.
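A hedged sketch of an export_monitor_cmd.properties for the net variant (the property names are those commonly found in the MIGMON templates and should be verified against the documentation shipped with the tool; all paths and values are placeholders):
exportDirs=/export/ABAP/DATA
installDir=/export/ABAP/DB
ddlFile=/export/ABAP/DB/DDLORA.TPL
netExchangeDir=/net/exchange
jobNum=4
monitorTimeout=30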
The Migration Monitor FTP Configuration Variant is useful in environments where file systems cannot be shared, but an FTP file transfer is possible. In the above example, the export and import directories are located on different hosts. The FTP exchange directory is on the target system. As soon as a package is successfully exported, the corresponding files will be transferred to the importing system. After success, the signal file (*.SGN) will be created in the FTP exchange directory. Then the importing Migration Monitor starts an R3LOAD process to load the dump from the import directory. The “export_statistics.properties” file is used in the same way as in Net mode. Pay attention to the FTP time-out settings. FTP servers may have certain default settings, which limit the amount of data that can be copied in a single session. In the case of unclear FTP transfer problems it is very important to check the FTP server logs and settings, because the returned error information will sometimes not provide a sufficient description of the FTP problem.
The Migration Monitor socket method is, in theory, the fastest way to export and import data, provided that the network is stable and that the exporting and importing databases always have enough resources to serve the R3LOAD processes. A network share, a manual file copy, or the Migration Monitor FTP file transfer (option -ftpCopy) can be used to copy the R3LOAD control files to the target system. The importing Migration Monitor must be started first. The exporting Migration Monitor connects to the importing Monitor using the provided socket port. The socket port numbers are incremented one by one for each R3LOAD process started. The communication between the export and import Monitor ensures that the right port numbers are written into the corresponding *.CMD files. No port number is used twice. Unusable port numbers are skipped (they may be in use by others). If a firewall is between the source and target system, make sure that a whole port range (base port + number of R3LOAD packages + safety) is opened for the duration of the migration.
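The following schematic sketch illustrates the socket variant. The property names (socket, host, port) are assumptions used for illustration only; verify them against the properties templates and the system copy guide before use.

# import_monitor_cmd.properties - schematic socket variant (started first)
importDirs=/imp/ABAP
socket=yes                      # assumed switch enabling socket mode
port=5000                       # base port; one port per R3LOAD process, counted upwards
jobNum=8

# export_monitor_cmd.properties - schematic socket variant
exportDirs=/exp/ABAP
socket=yes                      # assumed switch enabling socket mode
host=target_host                # host where the importing Migration Monitor listens
port=5000                       # must match the base port of the importing Monitor
# the export jobNum is ignored; it is requested from the Import Monitor at startup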
The Migration Monitor Stand-Alone Configuration Variant is useful in environments, where source and target systems do not have a network connection, or the existing connection is too slow for a file transfer. In the above example, the export and import directories are located on different hosts in different locations. The Migration Monitor is used to start R3LOAD processes only. The file transfer between the source and target system will be done using transportable media.
The export/import state or the file transfer state of a package can be changed from minus (“-”) to zero (“0”) to restart R3LOAD or a file transfer. Sockets only: MIGMON for NetWeaver '04 cannot restart the R3LOAD process by changing the state alone (future versions will support this). In the case of a file transfer restart, all dump files of a package are copied again.
Example: import_state.properties
SAPAPPL1=0          Not started yet
COEP=?              Running
SWW_CONT-1=+        Finished (part 1 of split table)
SWW_CONT-2=+        Finished (part 2 of split table)
SWW_CONT-3=-        Error (part 3 of split table)
SWW_CONT-4=0        Not started yet (part 4 of split table)
SWW_CONT-5=0        Not started yet (part 5 of split table)
SWW_CONT-post=0     Not started yet (secondary index creation, post-processing)
SWW_CONT-pre=+      Finished (table and primary key creation, pre-processing)
The MIGMON server mode for pre-NetWeaver 04 SR1 versions can only be used if SAPINST has been forced to stop, i.e. by provoking an intended error situation.
The MIGMON server mode for NetWeaver 04 SR1 can only be used if SAPINST has been forced to stop, i.e. by provoking an intended error situation. SAPINST for NetWeaver 04S requires a manual start of MIGMON when the socket mode is used.
Figure 208: Summary: R3LOAD Unload/Load Order by Tool
The MIGRATION MONITOR unload/load process order can be defined in the respective properties file. In addition, a file can be provided that contains a list of packages used to define the unload/load order. If the file does not contain all existing packages, the remaining packages are unloaded in alphabetical order and loaded by size – starting with the largest package. (Nothing will be lost).
SAPINST allows you to select different orders for unloading or loading the database. The feature of customizing the execution order of each *.STR file gives a good control over the unload or load process. SAPINST NetWeaver 04S uses MIGMON to start R3LOAD processes. The MIGMON R3LOAD start features are integrated into SAPINST dialogs.
Figure 209: MIGMON Export / Import Order
In the above example, the largest tables should be exported first. For that purpose, the tables were split from their standard *.STR files into package files containing one table only. The package names were inserted into “export_order.txt”. The Migration Monitor exports the packages in exactly the order defined in “export_order.txt” and afterwards exports the remaining packages in alphabetical order. On the target system, the packages are imported as specified in “import_order.txt”. If no package mentioned in “import_order.txt” is available for import (still exporting), the package with the next largest dump file is used instead. Two different export and import order files often make sense, e.g. if some tables have a lot of indexes but are small compared to the largest tables. In this case the overall run-time of a smaller table can be much longer than for the larger table, because of the index creation time. In the above example the tables GLPCA and MSEG are big, but not the biggest. For the import it was decided to give them top priority because they have a lot of indexes, so their index creation times will exceed even the import time of the largest table SOFFCONT1.
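A sketch of the two order files described above could look as follows (one package name per line; packages not listed are handled by the default rules, i.e. alphabetical export order and size-based import order):

export_order.txt:
SOFFCONT1
GLPCA
MSEG

import_order.txt:
GLPCA
MSEG
SOFFCONT1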
The Migration Monitor can be used to export or import selected packages with specific DDL.TPL files. The above export example shows how to export three packages unsorted (DDLORA_LRG.TPL) and the majority of all tables the standard way (DDLORA.TPL). The import example utilizes a special Oracle feature to parallelize the index creation. For that purpose two different DDL.TPL files were generated, to import two packages with index creation parallel degree 2 (DDLORA_par_2.TPL) and another two packages with index creation parallel degree 4 (DDLORA_par_4.TPL). The remaining packages are imported as usual (DDLORA.TPL).
Lesson: MIGTIME & JMIGTIME - Time Analyzer
Lesson Overview
Export / import time analysis based on R3LOAD/JLOAD files
Lesson Objectives After completing this lesson, you will be able to:
• Understand the time analyzer features
• Analyze the generated output
Business Example You need to analyze the export/import behavior in an OS/DB Migration to minimize the downtime for the final migration of a productive system.
Figure 211: Time Analyzer (MIGTIME / JMIGTIME) – Features
SAP Note: 784118 “System Copy JAVA Tools”. The note also describes how to download the software from the SAP Marketplace. Over time, the content of R3LOAD *.LOG and *.TOC files has been improved by adding more and more information. The Time Analyzer can handle all existing formats. R3LOAD 6.40 writes separate time stamps for data load and index creation (earlier versions did not!). MIGTIME obtains the export import time information from *.TOC and *.LOG files. JMIGTIME retrieves the time information from the JLOAD .STAT.XML files.
Figure 212: Time Analyzer – Output Based on Export Files (1)
The list output shows the start/end date and the export duration of each package, and additionally provides run-time information for the longest-running tables, as seen above.
Figure 213: Time Analyzer – Output Based on Export Files (2)
The HTML output gives a quick overview on the package run-time distribution.
Figure 214: Time Analyzer – Output Based on Import Files (1)
The list output shows the start/end date and the import duration of each package. If the used R3LOAD version (e.g. 6.40) provides time stamps for each table import and primary key/index creation, the output list can distinguish between data load and index creation time. The list of long-running tables can be generated for pre-6.40 R3load releases too, but it does not contain data and index columns, only a time column. The log files of old R3load releases contain time information only for the end of the data load; therefore the table times are not 100% correct: table time = table load time + index/pkey creation time of the previous table (if the index/pkey is created after the data load). From R3LOAD 6.40 on, the table time is determined correctly, because the create table/index times are present in the log files.
Lesson: Table Splitting for R3LOAD
Lesson Overview
Explanation of the table splitting procedure for R3LOAD
Lesson Objectives After completing this lesson, you will be able to:
• Understand the table splitting concept
• Distinguish between the generic R3TA and the Oracle specific table splitter
• Describe how MIGMON handles table splitting during export/import
Business Example You need to know how R3LOAD table splitting works and how to troubleshoot problems.
Figure 217: R3TA Table Splitter
R3TA analyzes a given table and returns a set of WHERE conditions that each select approximately the same number of rows. One R3LOAD process can be started for each WHERE condition. The parallel export not only reduces the export time, it also allows an earlier start of the import. Because of the complex handling of split tables, the usage of MIGMON is mandatory.
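Purely as a schematic illustration (the column name and the values below are invented, not actual R3TA output), splitting a table into four parts yields a set of conditions of this kind, each selecting roughly the same number of rows and each processed by its own R3LOAD export:

WHERE ("BELNR" <= '0000250000')
WHERE ("BELNR" > '0000250000') AND ("BELNR" <= '0000500000')
WHERE ("BELNR" > '0000500000') AND ("BELNR" <= '0000750000')
WHERE ("BELNR" > '0000750000')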
The single WHERE condition file generated by R3TA must be split into files in the “<table>-n.WHR” format (WHERE SPLITTER). If the parallel import into a single table is not possible on a particular database type, a sequential import of the split tables can be forced by defining MIGMON load groups. Please check the respective system copy manual and related notes for current limitations. Even if the parallel import into a single table is not supported on your database, the overall time saving from the parallel export itself is still significant. SAP Note 952514 “Using the table splitting feature”.
Figure 218: Oracle PL/SQL Table Splitter
The PL/SQL table splitter analyzes a given table and returns a set of WHERE conditions that each select approximately the same number of rows. One R3LOAD process can be started for each WHERE condition. Normally the PL/SQL script is faster than R3TA, as it uses Oracle-specific features. The resulting *.WHR files can be used without further splitting (no WHERE SPLITTER required). SAP Note 1043380 “Efficient Table Splitting for Oracle Databases” (the current PL/SQL table splitter script is attached to the note).
ROWID table splitting MUST be performed during downtime of the SAP system. No table changes are allowed for ROWID-split tables after the ranges have been calculated and until the export has been completed. Any table change before the export requires a recalculation of the ROWID ranges. ROWID-split tables MUST be imported with the “-loadprocedure fast” option of R3load. ROWID table splitting works only for transparent and non-partitioned tables. ROWID table splitting CANNOT be used if the target database is a non-Oracle database.
Figure 219: Table Splitting in SAPINST ≥ NW04
Table splitting is a task which is done before the export. The “split_input.txt” file must specify the tables to split and into how many pieces. Note the different input formats for R3TA and the Oracle PL/SQL table splitter; check the corresponding system copy guide. The “R3ta_hints.txt” file contains predefined split fields for the most common large tables. More tables and fields can be added with an editor. The file has to be located in the directory in which R3ta is started. If “R3ta_hints.txt” is found and the table to split is listed in it, the predefined field is used; otherwise R3TA analyzes each field of the primary key to find the best matching one. “R3ta_hints.txt” is part of the R3TA archive, which can be downloaded from the SAP Marketplace if it is not already on the installation media.
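For R3TA, a “split_input.txt” sketch could look like the lines below, where each entry names a table and the requested number of splits. The “table%splits” notation is the one commonly shown in the system copy guides, but the exact syntax (and the different format expected by the Oracle PL/SQL splitter) should always be verified against the guide for the SAPINST release in use.

CKIS%10
CDCLS%5
GLPCA%8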
CAUTION: When doing a system copy with a change of the code page (non-Unicode to Unicode; 4102 to 4103; 4103 to 4102), make sure not to use a WHERE condition that includes the PAGENO column for cluster tables (e.g. CDCLS, RFBLG). The resulting “*.WHR” files are written into the subdirectory DATA of the specified export directory. The table splitting will take effect only if the specified export directory is the same as the one used later for the R3LOAD export. The “whr.txt” file contains the names of the split tables. It can be used as an input file for the package splitter to make sure that each split table has its own *.STR file. It depends on the SAPINST release whether a database type can be selected or not. SAPINST 7.02 can make use of the Oracle PL/SQL splitter if the database type Oracle was selected; radio buttons allow you to choose between the R3TA and the PL/SQL table splitter.
Figure 220: Example of an R3TA Based Table Splitting
The above example shows the R3TA WHERE file creation for an Oracle database. CKIS.STR is provided on the command line to tell R3TA which fields belong to the primary key. R3TA generates a CKIS.WHR file containing the computed R3LOAD WHERE conditions, a set of files to create a temporary index, and a further set of files to drop the temporary index. It must be decided on an individual basis whether it makes sense to create the additional index or not.
Figure 221: R3TA Example: Create Temporary Index (Optional)
Depending on the database type, database optimizer behavior, table type, table field, or table size, a temporary index can improve the R3LOAD data selection considerably. To find out whether a temporary index makes sense, an SQL EXPLAIN statement can help to check the database optimizer cost for the data to be selected. Indexes should be checked on a copy of the productive system, for example. The corresponding system copy guide describes how to create or delete R3TA-related indexes.
If the temporary index does not improve the R3LOAD export, it can be dropped using the predefined files or with SQL commands directly.
Figure 223: R3TA Example: WHERE Condition File CKIS.WHR
R3TA writes all WHERE conditions for a table into one single file. It must be split into pieces to allow a parallel export with MIGMON. If exactly the requested number of splits cannot be achieved, more or fewer WHERE conditions may be created. In the example above, 10 splits were requested but R3TA created 11.
Each WHERE condition must be put into a separate file, otherwise the MIGMON mechanism to support table splitting would not work as intended. The WHERE splitter is part of the JAVA package splitter archive. In the case of SAPINST, it is called automatically. If R3TA was called directly, the WHERE splitter must be called manually. A description of the WHERE splitter usage is available in the splitter archive.
Figure 225: Example of an Oracle PL/SQL Based Table Splitting
The above example shows the PL/SQL script based WHERE file creation for an Oracle database. A split strategy can be chosen between field and ROWID splitting. ROWID splitting can be used if the target database is Oracle (“-loadprocedure fast” must be used for the import). Unlike R3TA, the PL/SQL splitter creates *.WHR files directly usable by MIGMON.
As soon as MIGMON finds “*.WHR” files, it generates the necessary “*.TSK” and “*.CMD” files automatically. The “*.TSK” files are created with the special option “-where” to put the WHERE condition into them. Make sure to have a separate “*.STR” file for each split table.
Figure 227: Example: MIGMON Export Processing (2)
For each “*.TSK” file a corresponding “*.CMD” file will be created.
R3LOAD inserts the used WHERE condition into the *.TOC file, so it is easy to find out which part of a table is stored in which dump file. Furthermore, this information is used as a safety mechanism to make sure the import runs with the same WHERE conditions as the export did (otherwise it could lead to a potential data loss in import restart situations). In the case of a mismatch, R3LOAD stops with an error.
Figure 229: Example: Directory Content after Export
To simplify the graphic above, no deep directory structures are shown (like the ones SAPINST creates) and the files under “/DB” are not explicitly mentioned. R3LOAD is assumed to run in “/inst” and the export directory is named “/exp”. The “/inst/split” directory is used to run R3TA some days or hours before the database export. The R3TA WHERE file was split and the results were copied into “/exp/DATA”. In the case of the Oracle PL/SQL splitter, the WHERE files can be put directly into “/exp/DATA”. The export log file information of R3LOAD 7.20: "(DB) INFO: Read hintfile: D:\EXPORT\ABAP\DATA\CKIS-1.WHR" means that the respective “*.WHR” file is scanned for an optional database hint to be utilized during the data export (currently implemented for Oracle only, directing the optimizer to choose a certain execution plan).
Figure 230: Example: MIGMON Import Processing (1)
MIGMON automatically makes sure that the “*.TSK” and “*.CMD” files for table creation are generated before the data import. After the table has been created successfully, the data load processes are started. This preparation phase is marked in the MIGMON “import_state.properties” file as “<package>-pre=+”. For databases that need the primary key before the import, it is created together with the table.
After the table has been created successfully, multiple “*.TSK” files are generated, one for each WHERE condition. The “*.TSK” files are created with the special option “-where” to put the WHERE condition into them.
Figure 232: Example: MIGMON Import Processing (3)
For each “*.TSK” file, the corresponding “*.CMD” file is generated.
Before starting the import, R3LOAD compares the WHERE condition between the “*.TOC” and “*.TSK” files. R3LOAD stops on error in case of a mismatch.
Figure 233: Example: MIGMON Import Processing (4)
After the start, R3LOAD compares the WHERE conditions in the “*.TOC” and “*.TSK” files and terminates with an error in case of a mismatch. A successful import is only possible if the WHERE condition used for the export is identical to the one used during the import; otherwise a possible restart would delete more or less data from the table, which can result in a data loss. In the case of an Oracle “-loadprocedure fast”, R3LOAD does not commit data until the import has finished successfully.
After all parallel import processes for the split table have finished, the remaining tasks can be started: creating the primary key and the secondary indexes. This post-import phase is marked in the MIGMON “import_state.properties” file as “<package>-post=+”. For databases that create the primary index before the import, the only remaining task is the secondary index generation.
Figure 235: Example: Force Sequential Import of CKIS Splits
If the target database does not allow importing into the same table with multiple R3LOAD processes (because of performance or locking issues), MIGMON can be instructed to use a single R3LOAD process for a specified list of packages. In the above example, the file “import_order.txt” is read by MIGMON to set the import order. All packages belonging to group [CKIS], that is CKIS-1 to CKIS-11, will be imported using one single R3LOAD process (jobNum = 1). This does not guarantee that CKIS-1 is imported before CKIS-2, but it makes sure that no two R3LOAD processes import into CKIS at the same time. A group can have any name, but it makes sense to name it after the table it covers. Besides the number of R3LOAD processes (jobNum=), the R3LOAD arguments for task file generation (taskArgs=) and import (loadArgs=) can be defined individually for each group, as sketched below. The total number of running R3LOAD processes is the sum of the number of processes specified in “import_monitor_cmd.properties” and the number of processes defined in “import_order.txt”.
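Based on this description, the group section for CKIS in “import_order.txt” could look roughly like the sketch below. The layout is schematic; verify the exact group-file syntax against the Migration Monitor documentation of the release in use.

[CKIS]
jobNum=1
# taskArgs= and loadArgs= could be added here for group-specific R3LOAD options
CKIS-1
CKIS-2
...
CKIS-11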
Figure 236: Example: Directory Content after Import
For Oracle:
• CKIS__DPI.TSK: create the table, but do not create the primary key or indexes and do not load data
• CKIS__TPI.TSK: load data, but do not create the table, primary key, or indexes
• CKIS__DT.TSK: create the primary key and indexes, but do not create the table or load data
Lesson Summary You should now be able to: • Understand the table splitting concept • Distinguish between the generic R3TA and the Oracle specific table splitter • Describe how MIGMON handles table splitting during export/import
Lesson: DISTMON - Distribution Monitor for R3LOAD
Lesson Overview
Purpose and configuration of the Distribution Monitor for R3LOAD
Lesson Objectives After completing this lesson, you will be able to:
• Understand the Distribution Monitor functionality
Business Example In a test run of a Unicode Conversion project, it was identified that the CPU load on the database server was the bottleneck of the R3LOAD export. Running R3LOAD on a separate server would solve the problem. If more than one R3LOAD server is planned, it makes sense to utilize the Distribution Monitor.
Figure 237: DISTMON – Distribution Monitor
To distribute the R3LOAD CPU load to different systems, various types of application servers can be used, e.g. a mix of two 4-CPU systems and one 8-CPU system, or even systems running on different operating systems. As long as the operating systems and DB client libraries are supported by the respective SAP release, a wide range of system combinations is possible. Nevertheless, from an administrative point of view it is easier to have a homogeneous operating system landscape; file system sharing can be complex otherwise.
DISTMON makes use of R3LOAD features that are not available in releases below 6.40. DISTMON can only handle the ABAP data export; JAVA stacks must be exported using JLOAD.
Figure 239: DISTMON Server Layout
The communication directory is used to share configuration and status information among the servers. It is physically mounted on one of the involved systems and shared with the other application servers. Control files (*.STR, DDL*.TPL, export_monitor_cmd.properties and import_monitor_cmd.properties) are generated here and distributed during the preparation phase.
For safety reasons, the export of each application server is written to locally mounted disks and not to NFS-mounted file systems.
Figure 240: DISTMON Distribution Process
Each MIGMON will be started locally on the respective application server. That means, each application server can run a MIGMON for export and a second one for the import. The start is initiated by DISTMON. Each MIGMON runs independently and does not know about other MIGMONs in the case of parallel export/import on the same server. The status monitor allows the monitoring of the application servers from a single user interface. Status information is read from the shared communication directory.
Lesson: JMIGMON - Migration Monitor for JLOAD
Lesson Overview
Purpose of the Migration Monitor for JLOAD
Lesson Objectives After completing this lesson, you will be able to:
• Understand the JMIGMON functions and operation variants
• Configure JMIGMON
Business Example You need to know the appropriate JMIGMON configuration scenario for a specific customer SAP system landscape.
Figure 241: JAVA Migration Monitor (JMIGMON) – Features
The very first implementation came with SAPINST 7.02. The JLOAD package files must be created with JPKGCTL before starting the export or import. The parallel export/import makes use of “*.SGN” files as in the MIGMON implementation. JPKGCTL creates a “sizes.xml” file containing the package sizes to support an ordered export with the largest packages first. Failed JLOAD processes can be restarted by changing the content of the export or import JMIGMON state file.
The JMIGMON network configuration is useful in environments where file systems can be shared between the source and target system. For consistency reasons, exports should always be done to local file systems/directories! In the example above, the export directory and the network exchange directory are shared from the exporting to the importing system. As soon as a package is successfully exported, the corresponding signal file (*.SGN) is created in the network exchange directory. The importing JMIGMON then starts a JLOAD process to load the dump from the shared export directory.
The JMIGMON “Stand-Alone Configuration” is useful in environments, where source and target systems do not have a network connection, or the existing connection is too slow for a file transfer. In the above example, the export and import directories are located on separate hosts in different locations. The JMIGMON is used to start JLOAD processes only. The file transfer between the source and target system will be done using transportable media.
Figure 244: JMIGMON- Control and Output Files
The “jmigmon.console.log” should be inspected in the case of export or import errors; more detailed information can be found in the respective job log. The JMIGMON state files are used to record which packages are already exported, currently in use, or terminated with an error. Changing a package state from minus (“-”) to zero (“0”) forces JMIGMON to restart the job. Example (export_jmigmon_states): EXPORT_METADATA.XML=+
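For example (the package name is invented for illustration), a failed export package can be reset in the state file so that JMIGMON restarts the corresponding JLOAD job:

before:  EXPORT_0007.XML=-     (terminated with an error)
after:   EXPORT_0007.XML=0     (JMIGMON restarts the job)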
Lesson: Table Splitting for JLOAD
Lesson Overview
Explanation of the table splitting procedure for JLOAD
Lesson Objectives After completing this lesson, you will be able to:
• Understand the JLOAD package and table splitting concept
• Configure the rule file
Business Example You need to know how JLOAD table splitting works and how to troubleshoot problems.
Figure 245: JPKGCTL – Package and Table Splitting
The “split” parameter defines the size limit for JLOAD packages. JPKGCTL adds tables to a package until the size limit is reached. The number of packages is related to the size limit: a small limit results in a large number of package files, whereas a large limit creates only a few packages. If a table is equal to or larger than the given size, the package file will contain this single table only.
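As a simple illustration of the size limit (table names and sizes invented), with split = 1000 MB:

TAB_A (2,500 MB)                  -> package of its own, because it exceeds the limit
TAB_B (600 MB) + TAB_C (350 MB)   -> one shared package (~950 MB)
TAB_D, TAB_E, ...                 -> next package, filled up to the limit, and so on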
The “splitrulesfile” is only required if table splitting is planned. It can contain entries in three different formats. If only the number of splits is specified, all fields of the primary key are checked for highest selectivity. In the case where a single field is explicitly given, only this field is used for splitting. If multiple fields are provided, the most selective field is used.
Figure 246: JPKGCTL (JSPLITTER) – Workflow
The “jsplitter_cmd.properties” file is generated by SAPINST according to the user input. JPKGCTL connects to the database, reads the database object definitions, and calculates the sizes of the items to be exported. The tables are distributed to the JLOAD job files (packages); the distribution criterion is the package size as provided in the “jsplitter_cmd.properties” file. After all packages are created, the “sizes.xml” file containing the expected export size of each package is written. JMIGMON uses its content to start the export in package size order.
Table splitting is an optional task. It makes sense for large tables which significantly influence the export time. JPKGCTL is able to find a useful split column automatically, but it will only check the fields of the primary key. If a different field should be used, it must be explicitly mentioned in a split rule file. If the requested number of splits cannot be achieved, the number of splits is automatically reduced. If even this does not result in useful WHERE conditions, JPKGCTL gives up and no table splitting takes place.
Exercise 9: Advanced Migration Techniques
Exercise Objectives After completing this exercise, you will be able to:
• Prevent situations where the content of split *.STR files is not satisfactory
• Handle situations where no Perl or JAVA is available or cannot be installed, but the Package Splitter features should still be utilized
Business Example
Task 1: A customer database of an ABAP SAP System has 10 very large tables that are between 2 and 20 GB in size and some other large tables ranging from 500 – 2.000 MB. After the JAVA- or Perl-based Package Splitter was executed with the option “-top 10” (move the 10 largest tables to separate *.STR files), 10 additional *.STR files exist, but they contain other tables than expected. 1.
What can be the reason of this behavior? Hint: What file is read to get the table size? What happens to large tables?
Task 2: In preparation of an R3LOAD heterogeneous system copy, the customer was asked to install Perl 5 or a JAVA JDK on his Windows production system, but he declined because of restrictive software installation policies. 1.
Nevertheless, what can be done to improve the export time?
Task 3: The Migration Monitor has a client and a server export mode. 1.
Solution 9: Advanced Migration Techniques
Task 1: A customer database of an ABAP SAP System has 10 very large tables that are between 2 and 20 GB in size and some other large tables ranging from 500 – 2.000 MB. After the JAVA- or Perl-based Package Splitter was executed with the option “-top 10” (move the 10 largest tables to separate *.STR files), 10 additional *.STR files exist, but they contain other tables than expected. 1.
What can be the reason of this behavior? Hint: What file is read to get the table size? What happens to large tables? a)
R3SZCHK limits the computed table sizes to a maximum of 1.78 GB. Because of this, the package splitter catches the first 10 largest tables found in the *.EXT files. A 20 GB table will have the same *.EXT entry as a 2000 MB table.
Task 2: In preparation of an R3LOAD heterogeneous system copy, the customer was asked to install Perl 5 or a JAVA JDK on his Windows production system, but he declined because of restrictive software installation policies. 1.
Nevertheless, what can be done to improve the export time? a)
The *.STR files can be split manually using an editor, or can be transferred to another system where Perl or JAVA is available to perform the split. In order to do this, the export will need to have been stopped after R3SZCHK has started. Caution: If the split is done in advance, be sure that no new changes have been made to the ABAP dictionary since the initial creation of the *.STR files! Otherwise you risk inconsistencies.
Unit Summary You should now be able to: • Identify the time consuming steps during export / import • Minimize the downtime by applying appropriate measures • Understand the MIGMON functions and operation variants • Configure MIGMON • Understand the time analyzer features • Analyze the generated output • Understand the table splitting concept • Distinguish between the generic R3TA and the Oracle specific table splitter • Describe how MIGMON handles table splitting during export/import • Understand the Distribution Monitor functionality • Understand the JMIGMON functions and operation variants • Configure JMIGMON • Understand the JLOAD package and table splitting concept • Configure the rule file
Unit 9 Performing the Migration
Unit Overview Contents
•
Scheduling a standard SAP OS/DB Migration using the SAP Migration Tools.
Unit Objectives After completing this unit, you will be able to:
• Explain the steps required to migrate an ABAP-based system
• Explain the steps required to migrate a JAVA-based system
Unit Contents
Lesson: Performing an ABAP System Migration
Lesson: Performing a JAVA System Migration
Exercise 10: Performing the Migration
Many migration steps can be performed in parallel in the source and target systems. After step 3 (generate templates for DB sizes) has been performed in the source system, you should be prepared to start step 8 (create database in the target system). Once step 6 (file transfer) is complete, steps 7–8 should already have been performed in the target system. In the case where MIGMON is used for a concurrent export/import, steps 4, 5, 6, 9, and 10 run in parallel.
Just before you start the migration, check all the migration-related SAP Notes for updates.
Figure 251: Technical Migration Preparation (2)
To reduce the time required to unload and load the database, minimize the amount of data in the migration source system. Before the migration make sure to de-schedule all jobs in the source system. This avoids jobs failing directly after the first start of the migrated SAP System. The reports BTCTRNS1 (set jobs into suspend mode) and BTCTRNS2 (reactivate jobs) can be helpful. Check the corresponding SAP Notes and SAP System upgrade guides for further reference.
If the target system has a new SAP SID, release all the corrections and repairs before starting the export. If the database contains tables that are not in the ABAP Dictionary, check whether some of these tables also have to be migrated.
Figure 252: Technical Migration Preparation (3)
The execution of report “SMIGR_CREATE_DDL” is mandatory for all SAP systems using non-standard database objects (BI/BW, SCM/APO). For NetWeaver 04 and later, the execution of “SMIGR_CREATE_DDL” is a must! Make sure not to make any changes to the non-standard objects after “SMIGR_CREATE_DDL” has been called! If no database specific objects exist, then no .SQL files will be generated. As long as the report terminates with status “successfully”, everything is ok. The “Installation Directory” can be any file system location. Copy .SQL files to the SAPINST export install directory or directly into the “/DB/” directory. Follow the guidelines in the homogeneous/heterogeneous system copy manual. Depending on the target database additional options might be available, which can be selected in the field “Database Version”. “Optional Parameters” allows the creation of a single .SQL file for a certain TABART, or for a specific table only. The resulting .SQL file will always have the name of the TABART. If the selected TABART or table is not a BW object, no .SQL file will be created.
771209 “NetWeaver 04: System copy (supplementary note)” 888210 “NetWeaver 7.00/7.10: System Copy (supplementary note)”
Figure 253: Generate *.EXT and *.STR Files
R3SETUP/SAPINST calls R3LDCTL and R3SZCHK. The runtime of R3SZCHK depends on the version, the size of the database and the database type. DBSIZE.TPL is created by R3SETUP, from the information computed by R3SZCHK and stored in table DDLOADD.
The generated *.STR and *.EXT files will be split into smaller units to improve the unload/load times. R3SETUP calls the Perl script to split *.STR files. Depending on the version, SAPINST uses the JAVA- or the Perl-based Package Splitter. On large databases table splitting will reduce the export / import run-time significantly.
R3SETUP/SAPINST/MIGMON generates command files. If WHERE files exist, the WHERE conditions will be inserted into the *.TSK files.
Figure 256: Export Database with R3LOAD
R3SETUP/SAPINST/MIGMON start a number of R3LOAD processes. A separate R3LOAD process is started for each command file. The R3LOAD processes write the dump files to disk. As soon as an R3LOAD process terminates (whether successfully or due to an error), R3SETUP/SAPINST/MIGMON start a new R3LOAD process for the next command file. Do not use NFS file systems as an export target for the dump files! Dump files can be damaged unnoticed and cause data corruption!
EBCDIC R3LOAD control files created on AS/400 systems must be transferred in ASCII mode if the target system is to run on an ASCII-based platform. In cases where dump files must be copied to transportable media, make sure that the files are copied correctly. It is better to spend additional time verifying the copied files against the original files than to spend several hours or even days transporting them to the target system, only to discover that some files were corrupted by the copy procedure used. Appropriate checksum tools are available for every operating system.
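A minimal sketch of such a verification on UNIX/Linux, assuming the export was written to /exp/ABAP and copied to /imp/ABAP on the target host (paths are examples only; comparable checksum tools exist on Windows):

# on the source system: record checksums of all exported files
cd /exp/ABAP
find . -type f -exec md5sum {} \; > /tmp/export_checksums.md5
# copy the checksum file to the target system together with the dump files

# on the target system, after the copy: verify against the recorded checksums
cd /imp/ABAP
md5sum -c /tmp/export_checksums.md5 | grep -v ': OK$'    # lists only mismatches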
The file “LABEL.ASC” is generated during the export of the source database. R3SETUP/SAPINST uses its content to determine whether the load data is read from the correct directory. Since 7.02, the SQLFiles.LST is generated by SMIGR_CREATE_DDL together with the *.SQL files. The *.CMD and *.TSK files are generated separately for export and import. Therefore, do not copy them!
Figure 259: Get Migration Key (1)
The migration key must be requested by the customer, because the customer has to accept the displayed migration key license agreement. Check the migration key as soon as possible! All entries are case sensitive. Before opening a problem call, see SAP Note 338372.
Since 4.6D, the migration key is identical for different SAP Systems of the same installation number. The migration key must match the R3LOAD version. If asked for the SAP R/3 Release, enter the release version of the R3LOAD used. If in doubt, check the log files. Some systems use several different hostnames (e.g. in a cluster environment). The node name shown by “uname -a” or “hostname” should be the “DB Server Hostname”. Starting with 4.5A, always generate the migration key from the node name listed in the “(GSI) INFO” section of the R3LOAD export log (source system) and MIGKEY.log (target system). In some installations the System ID can even be in lower-case letters, because it is obtained from the first three characters of “(GSI) INFO: dbname”! The R3LOAD log files of 3.1I and 4.0B do not contain information about the source system, as versions 4.5A and above do. R3SETUP and SAPINST test the migration key by calling R3LOAD -K (upper-case K). The file “MIGKEY.log” contains the check results. The migration key in NetWeaver 7.00 systems with the “old” SAP license installed (upgraded system) is different than for the “new” SAP license. Check the corresponding system copy note for details. See SAP Note 338372 “Migration key does not work” for further reference.
The size values for the target database that are calculated from the source database serve only as starting points. Generally, some values will be too large and others will be too small. Therefore, be generous in your database sizing during the first migration test run. The experience gained through the test migration is better than any advanced estimate you could calculate, and you can always adjust the values in subsequent tests.
SAPINST/MIGMON call R3LOAD to create the task files. R3SETUP/SAPINST/MIGMON generate the command files. If WHERE files exist, the WHERE conditions will be inserted into the *.TSK files.
Figure 264: Import Data with R3LOAD
R3SETUP/SAPINST/MIGMON starts the import R3LOAD processes.
The general follow-up activities are described in the homogeneous and heterogeneous system copy guides and their respective SAP Notes. Before the copied system is started the first time, the consistency between the ABAP and database dictionary will be checked and updated. R3SETUP/SAPINST will start the program “dipgntab” for that purpose. All updates of the active NAMETAB are logged in the file “dipgntab.log”. The summary at the end of this file should not report any error!
In many cases, the change of a database system will also include a change in the backup mechanism. Make sure to get familiar with the changed/new backup/restore procedures.
After the migration, the SAP System statistics and backup information for the source system can be deleted from the target database. For a list of the tables, see the system copy guide.
Report RADDBDIF creates database-specific objects (tables, views, indexes). RADDBDIF is usually called by R3SETUP/SAPINST via RFC (user DDIC, client 000) after the data has been loaded. After approval from the customer side, the SAP jobs that were set to suspend mode via report BTCTRNS1 can be rescheduled with BTCTRNS2.
The non-standard database objects (mainly BW objects) which were identified on the source system and are recreated and imported into the target system need some adjustments. The report RS_BW_POST_MIGRATION does this. For further reference, check SAP Note 777024 “BW 3.0 and BW 3.1 System copy” and/or read the corresponding chapter “Final Activities” in the homogeneous and heterogeneous system copy guide 6.40 or higher. The report should be run regardless of whether a *.SQL file was used or not. The data source system connection can be checked in transaction RSA1. The RFC parameters can be changed in transaction SM59.
The report variants SAP&POSTMGRDB and SAP&POSTMGR are pre-defined for system copies changing/not changing the database system. Run the report in the background, because the execution can take a while. The variant steps are:
• Invalidate Generated Programs: generated programs can be database specific; to make sure that every program is re-generated according to the new database, the already generated programs are invalidated
• Adapt DBDIFF to New DB: depending on the database type, more or fewer indexes are required; table DBDIFF is adapted accordingly, so that no missing BW objects are shown in transaction DB02 afterwards
• Adapt Aggregate Indexes: runs CHECK_INDEX_STATE
• Adapt Basis Cube Indexes: runs CHECK_INDEX_STATE
• Generate New PSA Version: runs RS_TRANSTRU_ACTIVATE_ALL
• Delete Temporary Tables: runs SAP_DROP_TMPTABLES
• Repair Fact View: runs SAP_FACTVIEWS_RECREATE
• L_DBMISC: database-specific tasks (if defined for the current database)
• Restriction to One Cube: restricts CHECK_INDEX_STATE to a single cube only
The tables in the SAP0000.STR file contain the generated ABAPs (ABAP loads) of the SAP System. These loads are no longer valid after a hardware migration. For this reason, R3SETUP/SAPINST does not load these tables. Each ABAP load is generated automatically the next time a program is called. The system will be slow unless all commonly used programs are generated. Use transaction SGEN (starting with Release 4.6B) to regenerate all ABAPs. On versions before 4.6B, run transaction SAMT or report RDDGENLD. The report RDDGENLD requires the file REPLIST in the SAP instance work directory. To create the file REPLIST in the source system, call report RDDLDTC2.
Take care when setting up the test environment. To prevent unwanted data communication to external systems, isolate the system. External systems do not distinguish between migration tests and production access. To develop a cut over plan, an already existing checklist from a previous upgrade/migration can be a valuable source of ideas. To identify any differences between the original and the migrated system, involve end users as soon as possible.
Figure 275: Collect Application Data from File System
If SAPINST does not recognize the installed application and its related files, no archives will be created. Make sure to use the right version of the installation CD, as mentioned in the appropriate SAP Notes regarding homogeneous and heterogeneous system copies. Applications that are not recognized by SAPINST may require operating system specific commands to copy the respective directories and files to the target system. If this is the case, the corresponding SAP Notes give instructions on how to deal with it. Copying other applications might require the installation of a certain support stack and a matching SAPINST.
Figure 276: Collect SDM Data
In SAP releases below 7.10, the SDM repository itself is installed in the file system and will be redeployed into the target system from the SDMKIT.JAR file.
JPKGCTL has been optionally used since SAPINST 7.02. Packaged job files containing multiple tables are named “EXPORT_.XML” and job files for a single table only are named “EXPORT_