DATASTAGE FAQs & TUTORIALS - TOPIC INDEX

1. DATASTAGE QUESTIONS
2. DATASTAGE FAQ from GEEK INTERVIEW QUESTIONS
3. DATASTAGE FAQ
4. TOP 10 FEATURES IN DATASTAGE HAWK
5. DATASTAGE NOTES
6. DATASTAGE TUTORIAL
   About DataStage
   Client Components
   DataStage Designer
   DataStage Director
   DataStage Manager
   DataStage Administrator
   DataStage Manager Roles
   Server Components
   DataStage Features
   Types of jobs
   DataStage NLS
   JOB
   Built-In Stages - Server Jobs
   Aggregator
   Hashed File
   UniVerse
   UniData
   ODBC
   Sequential File
   Folder Stage
   Transformer
   Container
   IPC Stage
   Link Collector Stage
   Link Partitioner Stage
   Server Job Properties
   Containers
   Local containers
   Shared containers
   Job Sequences
7. LEARN FEATURES OF DATASTAGE
8. INFORMATICA vs DATASTAGE
9. BEFORE YOU DESIGN YOUR APPLICATION
10. DATASTAGE 7.5x1 GUI FEATURES
11. DATASTAGE & DWH INTERVIEW QUESTIONS
12. DATASTAGE ROUTINES
13. SET_JOB_PARAMETERS_ROUTINE
DATASTAGE QUESTIONS

1. What is the flow of loading data into fact & dimensional tables?
A) Fact table - Table with a collection of foreign keys corresponding to the primary keys in the dimension tables. Consists of fields with numeric values.
Dimension table - Table with a unique primary key.
Load - Data should first be loaded into the dimension tables. Based on the primary key values in the dimension tables, the data is then loaded into the fact table.

2. What is the default cache size? How do you change the cache size if needed?
A) Default cache size is 256 MB. We can increase it by going into DataStage Administrator, selecting the Tunables tab and specifying the cache size there.

3. What are types of Hashed File?
A) Hashed File is classified broadly into 2 types.
a) Static - Sub divided into 17 types based on Primary Key Pattern.
b) Dynamic - Sub divided into 2 types: i) Generic ii) Specific.
Dynamic files do not perform as well as a well-designed static file, but do perform better than a badly designed one. When creating a dynamic file you can specify the following (although all of these have default values). By default a Hashed File is "Dynamic - Type Random 30 D".

4. What does a Config File in parallel extender consist of?
A) Config file consists of the following.
a) Number of Processes or Nodes.
b) Actual Disk Storage Location.

5. What is Modulus and Splitting in Dynamic Hashed File?
A) In a Hashed File, the size of the file keeps changing randomly. If the size of the file increases it is called "Modulus". If the size of the file decreases it is called "Splitting".

6. What are Stage Variables, Derivations and Constants?
A) Stage Variable - An intermediate processing variable that retains its value during a read and does not pass the value into the target column.
Derivation - Expression that specifies the value to be passed on to the target column.
Constraint - Conditions that are either true or false that specify the flow of data along a link.

7. Types of views in Datastage Director?
There are 3 types of views in Datastage Director:
a) Job View - Dates of jobs compiled.
b) Log View - Status of job last run.
c) Status View - Warning messages, event messages, program-generated messages.

8. Types of Parallel Processing?
A) Parallel Processing is broadly classified into 2 types.
a) SMP - Symmetrical Multi Processing.
b) MPP - Massive Parallel Processing.

9. Orchestrate Vs Datastage Parallel Extender?
A) Orchestrate itself is an ETL tool with extensive parallel processing capabilities, running on the UNIX platform. Datastage used Orchestrate with Datastage XE (beta version of 6.0) to incorporate the parallel processing capabilities. Ascential then purchased Orchestrate and integrated it with Datastage XE, releasing a new version, Datastage 6.0, i.e. Parallel Extender.

10. Importance of Surrogate Key in Data warehousing?
A) A Surrogate Key is a primary key for a dimension table. Its main importance is that it is independent of the underlying database, i.e. a surrogate key is not affected by changes going on in the database.

11. How to run a Shell Script within the scope of a Datastage job?
A) By using the "ExecSH" routine in the Before/After job properties.
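As a minimal sketch of the ExecSH answer above (the script path is hypothetical), the job properties would look like this; on most versions anything the script writes to standard output appears in the job log, and a non-zero exit status fails the before-job routine:

Before-job subroutine: ExecSH
Input value:           /home/dsadm/scripts/prepare_files.sh 2>&1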
12. How to handle Date conversions in Datastage? Convert a mm/dd/yyyy format to yyyy-dd-mm?
A) We use
a) "Iconv" function - Internal Conversion.
b) "Oconv" function - External Conversion.
Function to convert mm/dd/yyyy format to yyyy-dd-mm is:
Oconv(Iconv(Fieldname, "D/MDY[2,2,4]"), "D-YDM[4,2,2]")

13. How do you execute a Datastage job from the command line prompt?
A) Using the "dsjob" command as follows:
dsjob -run -jobstatus projectname jobname

14. Functionality of Link Partitioner and Link Collector?
Link Partitioner: It splits data into various partitions or data flows using various partition methods.
Link Collector: It collects the data coming from partitions, merges it into a single data flow and loads it to the target.

15. Types of Dimensional Modeling?
A) Dimensional modeling is sub divided into 2 types.
a) Star Schema - Simple & much faster. Denormalized form.
b) Snowflake Schema - Complex with more granularity. More normalized form.

16. Differentiate Primary Key and Partition Key?
Primary Key is a combination of unique and not null. It can be a collection of key values, called a composite primary key. Partition Key is just a part of the Primary Key. There are several methods of partitioning like Hash, DB2, Random etc. While using Hash partitioning we specify the Partition Key.

17. Differentiate Database data and Data warehouse data?
A) Data in a Database is
a) Detailed or Transactional.
b) Both Readable and Writable.
c) Current.

18. Containers: usage and types?
A container is a collection of stages used for the purpose of reusability. There are 2 types of containers.
a) Local Container: Job specific.
b) Shared Container: Used in any job within a project.

19. Compare and contrast ODBC and Plug-In stages?
ODBC:
a) Poor performance.
b) Can be used for a variety of databases.
c) Can handle stored procedures.
Plug-In:
a) Good performance.
b) Database specific (only one database).
c) Cannot handle stored procedures.

20. Dimension Modelling types along with their significance?
Data Modelling is broadly classified into 2 types.
a) E-R Diagrams (Entity-Relationships).
b) Dimensional Modelling.

Q 21 What are Ascential Datastage products and connectivity?
Ans: Ascential products:
Ascential DataStage
Ascential DataStage EE (3)
Ascential DataStage EE MVS
Ascential DataStage TX
Ascential QualityStage
Ascential MetaStage
Ascential RTI (2)
Ascential ProfileStage
Ascential AuditStage
Ascential Commerce Manager
Industry Solutions
Connectivity:
Files
RDBMS
Real-time
PACKs
EDI
Other

Q 22 Explain Data Stage architecture?
Data Stage contains two components:
Client Component
Server Component
Client Components:
Data Stage Administrator
Data Stage Manager
Data Stage Designer
Data Stage Director
Server Components:
Data Stage Engine
Meta Data Repository
Package Installer

Data Stage Administrator: Used to create the project. Contains a set of properties.
We can set the buffer size (by default 128 MB) and increase it.
We can set the Environment Variables.
In Tunables we have in-process and inter-process:
In-process - data is read in sequentially.
Inter-process - it reads the data as it comes. It just interfaces to metadata.

Data Stage Manager:
We can view and edit the Meta Data Repository.
We can import table definitions.
We can export the Data Stage components in .xml or .dsx format.
We can create routines and transforms.
We can compile multiple jobs.

Data Stage Designer:
We can create, compile and run jobs.
We can declare stage variables in a Transformer, and call routines, transforms, macros and functions.
We can write constraints.

Data Stage Director:
We can run, schedule, monitor and release jobs. (Scheduling can be done daily, weekly, monthly or quarterly.)

Q 23 What is the Meta Data Repository?
Meta Data is data about the data. It also contains
Query statistics
ETL statistics
Business subject area
Source Information
Target Information
Source to Target mapping Information.

Q 24 What is the Data Stage Engine?
It is a JAVA engine running in the background.

Q 25 What is Dimensional Modeling?
Dimensional Modeling is a logical design technique that seeks to present the data in a standard framework that is intuitive and allows for high-performance access.

Q 26 What is a Star Schema?
Star Schema is a de-normalized multi-dimensional model. It contains centralized fact tables surrounded by dimension tables.
Dimension Table: It contains a primary key and descriptions related to the fact table.
Fact Table: It contains foreign keys to the dimension tables, measures and aggregates.

Q 27 What is a surrogate key?
It is a 4-byte integer which replaces the transaction / business / OLTP key in the dimension table. We can store up to 2 billion records.

Q 28 Why do we need a surrogate key?
It is used for integrating the data and serves better than the business key as a primary key: for index maintenance, joins, table size, key updates, disconnected inserts and partitioning.

Q 29 What is a Snowflake schema?
It is a partially normalized dimensional model in which at least one dimension is represented by two or more hierarchically related tables.

Q 30 Explain the types of Fact Tables?
Factless Fact: It contains only foreign keys to the dimension tables.
Additive Fact: Measures can be added across any dimensions.
Semi-Additive: Measures can be added across some dimensions. E.g., percentage, discount.
Non-Additive: Measures cannot be added across any dimensions. E.g., average.
Conformed Fact: Two fact tables are conformed when their measures use the same equation and are measured across dimensions with the same set of measures.

Q 31 Explain the types of Dimension Tables?
Conformed Dimension: If a dimension table is connected to more than one fact table, the granularity defined in the dimension table is common across the fact tables.
Junk Dimension: A dimension table which contains only flags.
Monster Dimension: A dimension that changes rapidly is known as a Monster Dimension.
De-generative Dimension: It is a line-item-oriented fact table design.

Q 32 What are stage variables?
Stage variables are declaratives in the Transformer stage used to store values. Stage variables are active at run time (because memory is allocated at run time).

Q 33 What is a sequencer?
It sets the sequence of execution of server jobs.
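To make Q 32 concrete, here is a common stage-variable sketch for detecting a key change on sorted input in a Transformer (the column and variable names are hypothetical). Stage variables are evaluated top to bottom on every row, so svIsNewKey compares the incoming key against the value saved from the previous row before svPrevKey is overwritten:

Stage variable   Initial value   Derivation
svIsNewKey                       If in.CUST_ID <> svPrevKey Then 1 Else 0
svPrevKey        ""              in.CUST_ID

An output-link constraint of svIsNewKey then passes only the first row of each key group.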
Q 34 What are Active and Passive stages?
Active Stage: Active stages model the flow of data and provide mechanisms for combining data streams, aggregating data and converting data from one data type to another. E.g., Transformer, Aggregator, Sort, Row Merger etc.
Passive Stage: A Passive stage handles access to databases for the extraction or writing of data. E.g., IPC stage, file types, UniVerse, UniData, DRS stage etc.

Q 35 What is ODS?
An Operational Data Store is a staging area where data can be rolled back.

Q 36 What are Macros?
They are built from Data Stage functions and do not require arguments.
A number of macros are provided in the JOBCONTROL.H file to facilitate getting information about the current job, and the links and stages belonging to the current job. These can be used in expressions (for example in Transformer stages), job control routines, filenames and table names, and before/after subroutines.
These macros provide the functionality of using the DSGetProjectInfo, DSGetJobInfo, DSGetStageInfo, and DSGetLinkInfo functions with the DSJ.ME token as the JobHandle and can be used in all active stages and before/after subroutines. The macros provide the functionality for all the possible InfoType arguments for the DSGet…Info functions. See the Function call help topics for more details.
The available macros are:
DSHostName
DSProjectName
DSJobStatus
DSJobName
DSJobController
DSJobStartDate
DSJobStartTime
DSJobStartTimestamp
DSJobWaveNo
DSJobInvocations
DSJobInvocationId
DSStageName
DSStageLastErr
DSStageType
DSStageInRowNum
DSStageVarList
DSLinkRowCount
DSLinkLastErr
DSLinkName
Examples:
To obtain the name of the current job:
MyName = DSJobName
To obtain the full current stage name:
MyName = DSJobName : "." : DSStageName

Q 37 What is KeyMgtGetNextValue?
It is a built-in transform that generates sequential numbers. Its input type is a literal string and its output type is string.

Q 38 What are stages?
Stages are either passive or active. Passive stages handle access to databases for extracting or writing data. Active stages model the flow of data and provide mechanisms for combining data streams, aggregating data, and converting data from one data type to another.

Q 39 What index is created on a Data Warehouse?
A bitmap index is created in a Data Warehouse.

Q 40 What is a container?
A container is a group of stages and links. Containers enable you to simplify and modularize your server job designs by replacing complex areas of the diagram with a single container stage. You can also use shared containers as a way of incorporating server job functionality into parallel jobs. DataStage provides two types of container:
• Local containers. These are created within a job and are only accessible by that job. A local container is edited in a tabbed page of the job's Diagram window.
• Shared containers. These are created separately and are stored in the Repository in the same way that jobs are. There are two types of shared container.

Q 41 What is a function? (Job Control - Examples of Transform Functions)
Functions take arguments and return a value.
BASIC functions: A function performs mathematical or string manipulations on the arguments supplied to it, and returns a value. Some functions have 0 arguments; most have 1 or more. Arguments are always in parentheses, separated by commas, as shown in this general syntax:
FunctionName (argument, argument)
DataStage BASIC functions: These functions can be used in a job control routine, which is defined as part of a job's properties and allows other jobs to be run and controlled from the first job. Some of the functions can also be used for getting status information on the current job; these are useful in active stage expressions and before- and after-stage subroutines.

To do this ...                                                          Use this function ...
Specify the job you want to control                                     DSAttachJob
Set parameters for the job you want to control                          DSSetParam
Set limits for the job you want to control                              DSSetJobLimit
Request that a job is run                                               DSRunJob
Wait for a called job to finish                                         DSWaitForJob
Get the meta data details for the specified link                        DSGetLinkMetaData
Get information about the current project                               DSGetProjectInfo
Get buffer size and timeout value for an IPC or Web Service stage       DSGetIPCStageProps
Get information about the controlled job or current job                 DSGetJobInfo
Get information about the meta bag properties associated with the named job   DSGetJobMetaBag
Get information about a stage in the controlled job or current job      DSGetStageInfo
Get the names of the links attached to the specified stage              DSGetStageLinks
Get a list of stages of a particular type in a job                      DSGetStagesOfType
Get information about the types of stage in a job                       DSGetStageTypes
Get information about a link in a controlled job or current job         DSGetLinkInfo
Get information about a controlled job's parameters                     DSGetParamInfo
Get the log event from the job log                                      DSGetLogEntry
Get a number of log events on the specified subject from the job log    DSGetLogSummary
Get the newest log event, of a specified type, from the job log         DSGetNewestLogId
Log an event to the job log of a different job                          DSLogEvent
Stop a controlled job                                                   DSStopJob
Return a job handle previously obtained from DSAttachJob                DSDetachJob
Log a fatal error message in a job's log file and abort the job         DSLogFatal
Log an information message in a job's log file                          DSLogInfo
Put an info message in the job log of a job controlling the current job   DSLogToController
Log a warning message in a job's log file                               DSLogWarn
Generate a string describing the complete status of a valid attached job   DSMakeJobReport
Insert arguments into the message template                              DSMakeMsg
Ensure a job is in the correct state to be run or validated             DSPrepareJob
Interface to system send mail facility                                  DSSendMail
Log a warning message to a job log file                                 DSTransformError
Convert a job control status or error code into an explanatory text message   DSTranslateCode
Suspend a job until a named file either exists or does not exist        DSWaitForFile
Check if a BASIC routine is cataloged, either in VOC as a callable item, or in the catalog space   DSCheckRoutine
Execute a DOS or Data Stage Engine command from a before/after subroutine   DSExecute
Set a status message for a job to return as a termination message when it finishes   DSSetUserStatus
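A minimal job control sketch in DataStage BASIC tying several of these functions together (the job name "LoadCustomers" and its parameter are hypothetical):

   * Attach the job we want to control
   hJob = DSAttachJob("LoadCustomers", DSJ.ERRFATAL)
   * Set a job parameter before running
   ErrCode = DSSetParam(hJob, "TargetDB", "DWH_PROD")
   * Request the run and wait for it to finish
   ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
   ErrCode = DSWaitForJob(hJob)
   * Check how the job finished and log the result
   Status = DSGetJobInfo(hJob, DSJ.JOBSTATUS)
   If Status <> DSJS.RUNOK Then
      Call DSLogWarn("LoadCustomers did not finish with status OK", "JobControl")
   End
   ErrCode = DSDetachJob(hJob)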
Q 42 What are Routines?
Routines are stored in the Routines branch of the Data Stage Repository, where you can create, view or edit them. The following programming components are classified as routines: Transform functions, Before/After subroutines, Custom UniVerse functions, ActiveX (OLE) functions, Web Service routines.
Q 43 What is a Data Stage Transform?
Q 44 What are MetaBrokers?
Q 45 What is usage analysis?
Q 46 What is a job sequencer?
Q 47 What are the different activities in a job sequencer?
Q 48 What are triggers in Data Stage? (conditional, unconditional, otherwise)
Q 49 Have you generated job reports?
Q 50 What is a plug-in?
Q 51 Have you created any custom transform? Explain. (Oconv)
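For Q 51, a minimal sketch of a custom transform (created under the Transforms branch in the Manager; the name and format are illustrative, not from the source): a transform is simply a named expression over its arguments, so a date-reformatting transform could be defined as:

Transform name: MyDateToDashYDM   (hypothetical)
Argument:       InDate            (a date string in mm/dd/yyyy format)
Definition:     Oconv(Iconv(InDate, "D/MDY[2,2,4]"), "D-YDM[4,2,2]")

In a Transformer derivation it is then called like any built-in transform: MyDateToDashYDM(in.ORDER_DATE).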
DATASTAGE FAQ from GEEK INTERVIEW QUESTIONS

Question: Dimension Modeling types along with their significance
Answer: Data Modelling is broadly classified into 2 types.
A) E-R Diagrams (Entity-Relationships).
B) Dimensional Modelling.

Question: Dimensional modelling is again sub divided into 2 types.
Answer:
A) Star Schema - Simple & much faster. Denormalized form.
B) Snowflake Schema - Complex with more granularity. More normalized form.

Question: Importance of Surrogate Key in Data warehousing?
Answer: A Surrogate Key is a primary key for a dimension table. Its main importance is that it is independent of the underlying database, i.e. a surrogate key is not affected by the changes going on in the database.

Question: Differentiate Database data and Data warehouse data?
Answer: Data in a Database is
A) Detailed or Transactional.
B) Both Readable and Writable.
C) Current.

Question: What is the flow of loading data into fact & dimensional tables?
Answer:
Fact table - Table with a collection of foreign keys corresponding to the primary keys in the dimension tables. Consists of fields with numeric values.
Dimension table - Table with a unique primary key.
Load - Data should first be loaded into the dimension tables. Based on the primary key values in the dimension tables, the data is then loaded into the fact table.

Question: Orchestrate Vs Datastage Parallel Extender?
Answer: Orchestrate itself is an ETL tool with extensive parallel processing capabilities, running on the UNIX platform. Datastage used Orchestrate with Datastage XE (beta version of 6.0) to incorporate the parallel processing capabilities. Ascential then purchased Orchestrate and integrated it with Datastage XE, releasing a new version, Datastage 6.0, i.e. Parallel Extender.

Question: Differentiate Primary Key and Partition Key?
Answer: Primary Key is a combination of unique and not null. It can be a collection of key values, called a composite primary key. Partition Key is just a part of the Primary Key. There are several methods of partitioning like Hash, DB2, Random etc. While using Hash partitioning we specify the Partition Key.

Question: What are Stage Variables, Derivations and Constants?
Answer:
Stage Variable - An intermediate processing variable that retains its value during a read and does not pass the value into the target column.
Constraint - Conditions that are either true or false that specify the flow of data along a link.
Derivation - Expression that specifies the value to be passed on to the target column.

Question: What is the default cache size? How do you change the cache size if needed?
Answer: Default cache size is 256 MB. We can increase it by going into Datastage Administrator, selecting the Tunables tab and specifying the cache size there.

Question: What is the Hash file stage and what is it used for?
Answer: Used for look-ups. It is like a reference table. It is also used in place of ODBC or OCI tables for better performance.

Question: What are types of Hashed File?
Answer: Hashed File is classified broadly into 2 types.
A) Static - Sub divided into 17 types based on Primary Key Pattern.
B) Dynamic - Sub divided into 2 types: i) Generic ii) Specific.
The default Hashed File is "Dynamic - Type Random 30 D".
Question: What are Static Hash files and Dynamic Hash files?
Answer: As the names suggest. In general we use Type-30 dynamic hash files. The data file has a default size of 2 GB and the overflow file is used if the data exceeds the 2 GB size.

Question: What is the usage of Containers? What are its types?
Answer: A container is a collection of stages used for the purpose of reusability. There are 2 types of containers.
A) Local Container: Job specific.
B) Shared Container: Used in any job within a project.

Question: Compare and contrast ODBC and Plug-In stages?
Answer:
ODBC:                                      PLUG-IN:
Poor performance                           Good performance
Can be used for a variety of databases     Database specific (only one database)
Can handle stored procedures               Cannot handle stored procedures
Question: How do you execute a datastage job from the command line prompt?
Answer: Using the "dsjob" command as follows:
dsjob -run -jobstatus projectname jobname

Question: What are the command line functions that import and export the DS jobs?
Answer:
dsimport.exe - imports the DataStage components.
dsexport.exe - exports the DataStage components.

Question: How to run a Shell Script within the scope of a Data stage job?
Answer: By using the "ExecSH" routine in the Before/After job properties.

Question: What are OConv () and IConv () functions and where are they used?
Answer:
IConv() - Converts a string to an internal storage format.
OConv() - Converts an expression to an output format.

Question: How to handle Date conversions in Datastage? Convert mm/dd/yyyy format to yyyy-dd-mm?
Answer: We use
a) "Iconv" function - Internal Conversion.
b) "Oconv" function - External Conversion.
Function to convert mm/dd/yyyy format to yyyy-dd-mm is:
Oconv(Iconv(Fieldname, "D/MDY[2,2,4]"), "D-YDM[4,2,2]")
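A quick worked example of the conversion above (the literal date is arbitrary). Iconv first turns the external string into the engine's internal day number, and Oconv then formats that number into the requested layout:

InternalDate = Iconv("12/31/2005", "D/MDY[2,2,4]")   ;* internal day number
OutDate = Oconv(InternalDate, "D-YDM[4,2,2]")        ;* yields "2005-31-12" (yyyy-dd-mm)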
Question: Types of Parallel Processing?
Answer: Parallel Processing is broadly classified into 2 types.
a) SMP - Symmetrical Multi Processing.
b) MPP - Massive Parallel Processing.

Question: What does a Config File in parallel extender consist of?
Answer: Config file consists of the following.
a) Number of Processes or Nodes.
b) Actual Disk Storage Location.

Question: Functionality of Link Partitioner and Link Collector?
Answer:
Link Partitioner: It splits data into various partitions or data flows using various partition methods.
Link Collector: It collects the data coming from partitions, merges it into a single data flow and loads it to the target.

Question: What is Modulus and Splitting in Dynamic Hashed File?
Answer: In a Hashed File, the size of the file keeps changing randomly. If the size of the file increases it is called "Modulus". If the size of the file decreases it is called "Splitting".

Question: Types of views in Datastage Director?
Answer: There are 3 types of views in Datastage Director:
a) Job View - Dates of jobs compiled.
b) Log View - Status of job last run.
c) Status View - Warning messages, event messages, program-generated messages.

Question: Did you parameterize the job or hard-code the values in the jobs?
Answer: Always parameterize the job. Either the values come from Job Properties or from a 'Parameter Manager' - a third-party tool. There is no way you should hard-code parameters in your jobs. The often-parameterized variables in a job are: DB DSN name, username, password, and the dates against which the data is to be looked up.

Question: Have you ever been involved in updating DS versions like DS 5.X? If so, tell us some of the steps you took in doing so?
Answer: Yes. The following are some of the steps:
Definitely take a backup of the whole project(s) by exporting the project as a .dsx file.
See that you are using the same parent folder for the new version, so that your old jobs using hard-coded file paths continue to work.
After installing the new version, import the old project(s) and compile them all again. You can use the 'Compile All' tool for this.
Make sure that all your DB DSNs are created with the same names as the old ones. This step is for moving DS from one machine to another.
In case you are just upgrading your DB from Oracle 8i to Oracle 9i, there is a tool on the DS CD that can do this for you.
Do not stop the 6.0 server before the upgrade; the version 7.0 install process collects project information during the upgrade. There is NO rework (recompilation of existing jobs/routines) needed after the upgrade.

Question: How did you handle reject data?
Answer: Typically a reject link is defined and the rejected data is loaded back into the data warehouse. A reject link has to be defined on every output link from which you wish to collect rejected data. Rejected data is typically bad data like duplicates of primary keys or null rows where data is expected.

Question: What other performance tunings have you done in your last project to increase the performance of slowly running jobs?
Answer:
Staged the data coming from ODBC/OCI/DB2UDB stages or any database on the server using Hash/Sequential files for optimum performance, and also for data recovery in case a job aborts.
Tuned the OCI stage 'Array Size' and 'Rows per Transaction' numerical values for faster inserts, updates and selects.
Tuned the 'Project Tunables' in Administrator for better performance.
Used sorted data for the Aggregator.
Sorted the data as much as possible in the DB and reduced the use of DS Sort for better job performance.
Removed unused data from the source as early as possible in the job.
Worked with the DB admin to create appropriate indexes on tables for better performance of DS queries.
Converted some of the complex joins/business logic in DS to stored procedures for faster execution of the jobs.
If an input file has an excessive number of rows and can be split up, use standard logic to run jobs in parallel.
Before writing a routine or a transform, make sure that the required functionality is not already in one of the standard routines supplied in the sdk or ds utilities categories.
Constraints are generally CPU intensive and take a significant amount of time to process. This may be the case if the constraint calls routines or external macros, but if it is inline code then the overhead will be minimal.
Try to have the constraints in the 'Selection' criteria of the jobs itself. This will eliminate unnecessary records even getting in before joins are made.
Tuning should occur on a job-by-job basis.
Use the power of the DBMS.
Try not to use a Sort stage when you can use an ORDER BY clause in the database. Using a constraint to filter a record set is much slower than performing a SELECT … WHERE….
Make every attempt to use the bulk loader for your particular database. Bulk loaders are generally faster than using ODBC or OLE.

Question: Tell me one situation from your last project where you faced a problem and how did you solve it?
Answer:
1. The jobs in which data is read directly from OCI stages were running extremely slowly. I had to stage the data before sending it to the transformer to make the jobs run faster.
2. The job aborts in the middle of loading some 500,000 rows. You have an option: either clean/delete the loaded data and then run the fixed job, or run the job again from the row at which the job aborted. To make sure the load was proper we opted for the former.

Question: Tell me the environment in your last projects
Answer: Give the OS of the server and the OS of the client of your most recent project.

Question: How did you connect with DB2 in your last project?
Answer: Most of the time the data was sent to us in the form of flat files. The data is dumped and sent to us. In some cases where we needed to connect to DB2 for look-ups, we used ODBC drivers to connect to DB2 (or) DB2-UDB depending on the situation and availability. Certainly DB2-UDB is better in terms of performance, as you know native drivers are always better than ODBC drivers. 'iSeries Access ODBC Driver 9.00.02.02' ODBC drivers were used to connect to AS400/DB2.

Question: What are Routines and where/how are they written, and have you written any routines before?
Answer: Routines are stored in the Routines branch of the DataStage Repository, where you can create, view or edit them. The following are the different types of routines:
1. Transform Functions
2. Before-After Job subroutines
3. Job Control Routines

Question: How did you handle an 'Aborted' sequencer?
Answer: In almost all cases we have to delete the data inserted by it from the DB manually, fix the job and then run the job again.

Question: What are Sequencers?
Answer: Sequencers are job control programs that execute other jobs with preset job parameters.

Question: Read the String functions in DS
Answer: Functions like [] -> substring function and ':' -> concatenation operator.
Syntax:
string [ [ start, ] length ]
string [ delimiter, instance, repeats ]
(A short example follows this section.)

Question: What will you do in a situation where somebody wants to send you a file and use that file as an input or reference and then run the job?
Answer:
• Under Windows: Use the 'WaitForFileActivity' under the Sequencers and then run the job. Maybe you can schedule the sequencer around the time the file is expected to arrive.
• Under UNIX: Poll for the file. Once the file has arrived, start the job or sequencer depending on the file.

Question: What is the utility you use to schedule the jobs on a UNIX server other than using Ascential Director?
Answer: Use the crontab utility along with the dsexecute() function, with proper parameters passed.

Question: Did you work in a UNIX environment?
Answer: Yes. One of the most important requirements.

Question: How would you call an external Java function which is not supported by DataStage?
Answer: Starting from DS 6.0 we have the ability to call external Java functions using a Java package from Ascential. In this case we can even use the command line to invoke the Java function, write the return values from the Java program (if any) to a file, and use that file as a source in a DataStage job.

Question: How will you determine the sequence of jobs to load into the data warehouse?
Answer: First we execute the jobs that load the data into the dimension tables, then the fact tables, then the aggregator tables (if any).

Question: The above might raise another question: why do we have to load the dimension tables first, then the fact tables?
Answer: As we load the dimension tables, the (primary) keys are generated, and these keys are foreign keys in the fact tables.
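Returning to the string functions question above, a minimal sketch in DataStage BASIC (the values are arbitrary):

FullName = "John" : " " : "Smith"       ;* concatenation -> "John Smith"
Initial = FullName[1,1]                 ;* substring from position 1, length 1 -> "J"
Year = "12/31/2005"["/", 3, 1]          ;* delimiter form: 3rd "/"-delimited field -> "2005"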
Question: Does the selection of 'Clear the table and Insert rows' in the ODBC stage send a TRUNCATE statement to the DB or does it do some kind of delete logic?
Answer: There is no TRUNCATE on ODBC stages. 'Clear the table' issues a DELETE FROM statement. On an OCI stage such as Oracle, you do have both Clear and Truncate options. They are radically different in permissions (Truncate requires you to have ALTER TABLE permissions, whereas Delete doesn't).

Question: How do you rename all of the jobs to support your new file-naming conventions?
Answer: Create an Excel spreadsheet with new and old names. Export the whole project as a dsx. Write a Perl program which can do a simple rename of the strings, looking up the Excel file. Then import the new dsx file, probably into a new project for testing. Recompile all jobs. Be cautious: the names of the jobs will also have been changed in your job control jobs or Sequencer jobs, so you have to make the necessary changes to these Sequencers.

Question: When should we use ODS?
Answer: DWHs are typically read only and batch updated on a schedule; ODSs are maintained in more real time and trickle fed constantly.
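For the UNIX scheduling question a few answers back, here is a hedged sketch of a crontab entry that launches a job through the dsjob command shown elsewhere in this document (the paths, project and job names are hypothetical, and dsjob usually needs the DSEngine environment sourced first):

# Run the nightly load at 01:30 every day (edit with crontab -e)
30 1 * * * . /opt/Ascential/DataStage/DSEngine/dsenv && /opt/Ascential/DataStage/DSEngine/bin/dsjob -run -jobstatus MyProject NightlyLoad >> /tmp/nightlyload.log 2>&1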
Question: What other ETL tools have you worked with?
Answer: Informatica, and also DataJunction if it is present in your resume.

Question: How good are you with your PL/SQL?
Answer: On a scale of 1-10, say 8.5-9.

Question: What versions of DS have you worked with?
Answer: DS 7.5, DS 7.0.2, DS 6.0, DS 5.2

Question: What's the difference between Datastage developers and designers?
Answer: A Datastage developer is the one who codes the jobs. A Datastage designer is the one who designs the jobs; he deals with the blueprints and designs the jobs with the stages that are required in developing the code.

Question: What are the requirements for your ETL tool?
Answer: Do you have large sequential files (1 million rows, for example) that need to be compared every day versus yesterday? If so, then ask how each vendor would do that. Think about what process they are going to do. Are they requiring you to load yesterday's file into a table and do lookups?
If so, RUN!! Are they doing a match/merge routine that knows how to process this in sequential files? Then maybe they are the right one. It all depends on what you need the ETL to do. If you are small enough in your data sets, then either would probably be OK.

Question: What are the main differences between Ascential DataStage and Informatica PowerCenter?
Answer:
Chuck Kelley's Answer: You are right; they have pretty much similar functionality. However, what are the requirements for your ETL tool? Do you have large sequential files (1 million rows, for example) that need to be compared every day versus yesterday? If so, then ask how each vendor would do that. Think about what process they are going to do. Are they requiring you to load yesterday's file into a table and do lookups? If so, RUN!! Are they doing a match/merge routine that knows how to process this in sequential files? Then maybe they are the right one. It all depends on what you need the ETL to do. If you are small enough in your data sets, then either would probably be OK.

Les Barbusinski's Answer: Without getting into specifics, here are some differences you may want to explore with each vendor:
• Does the tool use a relational or a proprietary database to store its Meta data and scripts? If proprietary, why?
• What add-ons are available for extracting data from industry-standard ERP, Accounting, and CRM packages?
• Can the tool's Meta data be integrated with third-party data modeling and/or business intelligence tools? If so, how and with which ones?
• How well does each tool handle complex transformations, and how much external scripting is required?
• What kinds of languages are supported for ETL script extensions?

Almost any ETL tool will look like any other on the surface. The trick is to find out which one will work best in your environment. The best way I've found to make this determination is to ascertain how successful each vendor's clients have been using their product. Especially clients who closely resemble your shop in terms of size, industry, in-house skill sets, platforms, source systems, data volumes and transformation complexity. Ask both vendors for a list of their customers with characteristics similar to your own that have used their ETL product for at least a year. Then interview each client (preferably several people at each site) with an eye toward identifying unexpected problems, benefits, or quirkiness with the tool that have been encountered by that customer. Ultimately, ask each customer - if they had it all to do over again - whether or not they'd choose the same tool and why? You might be surprised at some of the answers.

Joyce Bischoff's Answer: You should do a careful research job when selecting products. You should first document your requirements, identify all possible products and evaluate each product against the detailed requirements. There are numerous ETL products on the market and it seems that you are looking at only two of them. If you are unfamiliar with the many products available, you may refer to www.tdan.com, the Data Administration Newsletter, for product lists.
If you ask the vendors, they will certainly be able to tell you which of their product's features are stronger than the other product's. Ask both vendors and compare the answers, which may or may not be totally accurate. After you are very familiar with the products, call their references and be sure to talk with technical people who are actually using the product. You will not want the vendor to have a representative present when you speak with someone at the reference site. It is also not a good idea to depend upon a high-level manager at the reference site for a reliable opinion of the product. Managers may paint a very rosy picture of any selected product so that they do not look like they selected an inferior product.

Question: How many places can you call Routines?
Answer: You can call routines in four places:
1. Transform of a routine:
   a. Date Transformation
   b. Upstring Transformation
2. Transform of the Before & After Subroutines
3. XML transformation
4. Web base transformation

Question: What is the Batch Program and how can you generate it?
Answer: A batch program is the program generated at run time and maintained by Datastage itself, but you can easily change it on the basis of your requirement (Extraction, Transformation, Loading). Batch programs are generated depending on your job nature, either a simple job or a sequencer job; you can see this program in the job control option.

Question: Suppose 4 jobs are controlled by a sequencer (job 1, job 2, job 3, job 4). Job 1 has 10,000 rows, but after the run only 5,000 rows have been loaded into the target table; the rest are not loaded and the job aborts. How can you sort out the problem?
Answer: Suppose the job sequencer synchronizes or controls the 4 jobs but job 1 has a problem. In this situation you should go to the Director and check what type of problem is showing: a data type problem, a warning message, job fail or job aborted. If the job fails it means a data type problem or a missing column action. So you should go to the Run window -> Click -> Tracing -> Performance, or in your target table -> General -> Action -> select one of the two options there:
(i) On Fail - Commit, Continue
(ii) On Skip - Commit, Continue.
First check how much data has already been loaded, then select the On Skip option and continue; for the remaining portion of data that was not loaded, select On Fail and continue. Run the job again and you should definitely get a success message.

Question: What happens if RCP is disabled?
Answer: In such a case OSH has to perform the import and export every time the job runs, and the processing time of the job is also increased.

Question: What is the difference between the Filter stage and the Switch stage?
Answer: There are two main differences, and probably some minor ones as well. The two main differences are as follows.
1) The Filter stage can send one input row to more than one output link. The Switch stage cannot - the C switch construct has an implicit break in every case.
2) The Switch stage is limited to 128 output links; the Filter stage can have a theoretically unlimited number of output links. (Note: this is not a challenge!)

Question: How can I achieve constraint-based loading using DataStage 7.5? My target tables have interdependencies, i.e. primary key / foreign key constraints. I want my primary key tables to be loaded first and then my foreign key tables, and also the primary key tables should be committed before the foreign key tables are executed. How can I go about it?
Ans:
1) Create a Job Sequencer to load your tables in sequential mode. In the sequencer, call all the primary key table load jobs first, followed by the foreign key table load jobs; when triggering the foreign key table load jobs, trigger them only when the primary key load jobs run successfully (i.e. OK trigger).
2) To improve the performance of the job, you can disable all the constraints on the tables and load them. Once loading is done, check the integrity of the data; raise whatever does not meet it as exceptional data and cleanse it.
This is only a suggestion; normally, when loading with constraints enabled, performance drops drastically.
3) If you use star schema modeling, when you create the physical DB from the model you can delete all constraints, and referential integrity is then maintained in the ETL process by looking up all your dimension keys while loading the fact tables. Once all dimension keys are assigned to a fact, the dimensions and the fact can be loaded together. At the same time RI is maintained at the ETL process level.

Question: How do you merge two files in DS?
Ans: Either use the Copy command as a before-job subroutine if the metadata of the 2 files is the same, or create a job to concatenate the 2 files into one if the metadata is different.

Question: How do you eliminate duplicate rows?
Ans: Data Stage provides us with a Remove Duplicates stage in Enterprise Edition. Using that stage we can eliminate the duplicates based on a key column.

Question: How do you pass a filename as a parameter for a job?
Ans: During job development we can create a parameter 'FILE_NAME', and the value can be passed while running the job.

Question: Is there a mechanism available to export/import individual DataStage ETL jobs from the UNIX command line?
Ans: Try dscmdexport and dscmdimport. They won't handle the "individual job" requirement - you can only export full projects from the command line. You can find the export and import executables on the client machine, usually someplace like C:\Program Files\Ascential\DataStage.
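A hedged sketch of the export/import commands just mentioned, run from the Windows client's DataStage directory (the host, credentials, project and file names are placeholders, and the exact switches should be checked against your client version's documentation):

dscmdexport /H=dsserver /U=dsadm /P=secret MyProject C:\backup\MyProject.dsx
dscmdimport /H=dsserver /U=dsadm /P=secret NewProject C:\backup\MyProject.dsx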
Question: Difference between JOIN stage and MERGE stage?
Answer:
JOIN: Performs join operations on two or more data sets input to the stage and then outputs the resulting dataset.
MERGE: Combines a sorted master data set with one or more sorted update data sets. The columns from the records in the master and update data sets are merged so that the output record contains all the columns from the master record plus any additional columns from each update record that are required.
A master record and an update record are merged only if both of them have the same values for the merge key column(s) that we specify. Merge key columns are one or more columns that exist in both the master and update records.

Question: Advantages of the DataStage?
Answer:
Business advantages:
• Helps for better business decisions;
• It is able to integrate data coming from all parts of the company;
• It helps to understand the new and already existing clients;
• We can collect data of different clients with it, and compare them;
• It makes the research of new business possibilities possible;
• We can analyze trends of the data read by it.
Technological advantages:
• It handles all company data and adapts to the needs;
• It offers the possibility for the organization of a complex business intelligence;
• Flexible and scalable;
• It accelerates the running of the project;
• Easily implementable.
DATASTAGE FAQ 1. What is the architecture of data stage? Basically architecture of DS is client/server architecture. Client components & server components Client components are 4 types they are 1. Data stage designer 2. Data stage administrator 3. Data stage director 4. Data stage manager Data stage designer is user for to design the jobs Data stage manager is used for to import & export the project to view & edit the contents of the repository. Data stage administrator is used for creating the project, deleting the project & setting the environment variables. Data stage director is use for to run the jobs, validate the jobs, scheduling the jobs. Server components
☻Page 26 of 243☻
DS server: runs executable server jobs, under the control of the DS director, that extract, transform, and load data into a DWH. DS Package installer : A user interface used to install packaged DS jobs and plug-in; Repository or project : a central store that contains all the information required to build DWH or data mart. 2. Wh What at r the stage stagess u wor worke ked d on? on? 3. I have some jobs every month automatically delete the log details what r the steps u have to take for that
We have to set the option o ption autopurge in DS Adminstrator. 4. I want to run the multiple jobs in the single job. How can u handle .
In job properties set the option ALLOW MULTIPLE INSTANCES. 5. What is version controlling in DS?
5. What is version controlling in DS?
In DS, version control is used to back up the project or jobs. This option is available from DS version 7.1 onwards. Version control tools are of 2 types:
1. VSS – Visual SourceSafe
2. CVS – Concurrent Versions System
VSS is designed by Microsoft, but its disadvantage is that only one user can access a file at a time; other users must wait until the first user completes the operation. With CVS, many users can access files concurrently. Compared to VSS, the cost of CVS is high.
6. What is the difference between clear log file and clear status file?
Clear log --- we can clear the log details by using the DS Director. Under the Job menu the Clear Log option is available. By using this option we can clear the log details of a particular job.
Clear status file --- lets the user remove the status of the records associated with all stages of selected jobs (in DS Director).
7. I developed 1 job with 50 stages; at run time one stage is missing. How can I identify which stage is missing?
By using the usage analysis tool, which is available in DS Manager, we can find out which items are used in the job.
8. My job takes 30 minutes to run and I want it to run in less than 30 minutes. What steps do we have to take?
By using the performance tuning aspects available in DS, we can reduce the run time:
In DS Administrator        : in-process and inter-process buffering
In between passive stages  : inter-process (IPC) stage
OCI stage                  : array size and transaction size
We can also use the Link Partitioner & Link Collector stages in between passive stages.
9. How do you do row transposition in DS?
The Pivot stage is used for transposition. Pivot is an active stage that maps sets of columns in an input table to a single column in an output table.
10. If a job is locked by some user, how can you unlock that particular job in DS?
We can unlock the job by using the Clean Up Resources option, which is available in DS Director. Otherwise we can find the PID (process id) and kill the process on the UNIX server.
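A minimal sketch of the UNIX route (the process name is the usual one for client sessions, but verify the PID and its owner before killing anything):

    # On the DataStage server: dsapi_slave is the typical client-session process.
    ps -ef | grep dsapi_slave
    kill <pid>        # escalate to kill -9 only if the process will not exit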
11. What is a container? How many types of containers are available? Is it possible to use a container as a lookup?
A container is a group of stages and links. Containers enable you to simplify and modularize your server job designs by replacing complex areas of the diagram with a single container stage. DataStage provides two types of container:
• Local containers. These are created within a job and are only accessible by that job.
• Shared containers. These are created separately and are stored in the Repository in the same way that jobs are. Shared containers can be used by any job in the project.
Yes, we can use a container as a lookup.
12. How do you deconstruct a shared container?
To deconstruct a shared container, first you have to convert the shared container to a local container, and then deconstruct the container.
13. I am getting an input value like X = Iconv("31 DEC 1967", "D"). What is the value of X?
The X value is zero. The Iconv function converts a string to an internal storage format. It takes 31 DEC 1967 as day zero and counts days from that date.
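A small DataStage BASIC sketch of the same conversion (variable names are arbitrary):

    X = Iconv("31 DEC 1967", "D")   ;* X = 0 -- internal date epoch
    Y = Iconv("01 JAN 1968", "D")   ;* Y = 1 -- one day after the epoch
    S = Oconv(0, "D")               ;* back to external format: "31 DEC 1967"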
14. What are unit testing, integration testing and system testing?
Unit testing: For DS, a unit test will check for data type mismatches, the size of a particular data type, and column mismatches.
Integration testing: According to the dependencies, we integrate all jobs into one sequence, called a control sequence.
System testing: System testing is nothing but the performance tuning aspects in DS.
15. What are the command line functions that import and export the DS jobs?
dsimport.exe ---- to import DataStage components
dsexport.exe ---- to export DataStage components
16. How many hashing algorithms are available for static hash files and dynamic hash files?
Sixteen hashing algorithms for static hash files. Two hashing algorithms for dynamic hash files (GENERAL or SEQ.NUM).
17. What happens when you have a job that links two passive stages together?
Obviously there is some process going on. Under the covers DS inserts a cut-down transformer stage between the passive stages, which just passes data straight from one stage to the other.
18. What is the use of the Nested Condition activity?
Nested Condition: allows you to further branch the execution of a sequence depending on a condition.
19. I have three jobs A, B, and C, which are dependent on each other. I want to run jobs A & C daily and job B only on Sunday. How can I do it?
First, schedule jobs A & C Monday to Saturday in one sequence. Then put all three jobs, in dependency order, into another sequence and schedule that sequence to run only on Sunday.
TOP 10 FEATURES IN DATASTAGE HAWK
The IILive2005 conference marked the first public presentations of the functionality in the WebSphere Information Integration Hawk release. Though it's still a few months away, I am sharing my top ten things I am looking forward to in DataStage Hawk:
1) The metadata server. To borrow a simile from that judge on American Idol: "Using MetaStage is kind of like bathing in the ocean on a cold morning. You know it's good for you but that doesn't stop it from freezing the crown jewels." MetaStage is good for ETL projects but none of the projects I've been on has actually used it. Too much effort required to install the software, set up the metabrokers, migrate the metadata, learn how the product works and write reports. Hawk brings the common repository and improved metadata reporting, and we can get the positive effects of bathing in sea water without the shrinkage that comes with it.
2) QualityStage overhaul. Data quality reporting can be another forgotten aspect of data integration projects. Like MetaStage, the QualityStage server and client had an additional install, training and implementation overhead, so many DataStage projects did not use it. I am looking forward to more integration projects using standardisation, matching and survivorship to improve quality once these features are more accessible and easier to use.
3) Frictionless Connectivity and Connection Objects. I've called DB2 every rude name under the sun. Not because it's a bad database but because setting up remote access takes me anywhere from five minutes to five weeks depending on how obscure the error message is and how hard it is to find the obscure setup step that was missed during installation. Anything that makes connecting to a database easier gets a big tick from me.
4) Parallel job range lookup. I am looking forward to this one because it will stop people asking for it on forums. It looks good, it's been merged into the existing lookup form and seems easy to use. Will be interested to see the performance.
5) Slowly Changing Dimension stage. This is one of those things that Informatica were able to trumpet at product comparisons: that they have more out-of-the-box DW support. There are a few enhancements to make updates to dimension tables easier: there is the improved surrogate key generator, there is the slowly changing dimension stage, and updates passed to in-memory lookups. That's it for me with DBMS-generated keys, I'm only doing the keys in the ETL job from now on! DataStage server jobs have the hash file lookup where you can read and write to it at the same time; parallel jobs will have the updateable lookup.
6) Collaboration: better developer collaboration. Everyone hates opening a job and being told it is locked. "Bloody whatshisname has gone to lunch, locked the job and now his password-protected screen saver is up! Unplug his PC!" Under Hawk you can open a read-only copy of a locked job, plus you get told who has locked the job so you know whom to curse.
7) Session disconnection. Accompanied by the metallic cry of "exterminate! exterminate!" an administrator can disconnect sessions and unlock jobs.
8) Improved SQL Builder. I know a lot of people cross the street when they see the SQL Builder coming. Getting the SQL Builder to build complex SQL is a bit like teaching a monkey how to play chess. What I do like about the current SQL Builder is that it synchronises your SQL select list with your ETL column list to avoid column mismatches. I am hoping the next version is more flexible and can build complex SQL.
9) Improved job startup times. Small parallel jobs will run faster. I call it the death of a thousand cuts: your very large parallel job takes too long to run because a thousand smaller jobs are starting and stopping at the same time and cutting into CPU and memory. Hawk makes these cuts less painful.
10) Common logging. Log views that work across jobs, log searches, log date constraints, wildcard message filters, saved queries. It's all good. You no longer need to send out a search party to find an error message.
That's my top ten. I am also hoping the software comes in a box shaped like a hawk and makes a hawk scream when you open it. A bit like those annoying greeting cards. Is there any functionality you think Hawk is missing that you really want to see?
DATASTAGE NOTES
DataStage Tips:
1. The Aggregator stage does not support more than one source; if you try this you will get the error, "The destination stage cannot support any more stream input links".
2. You can give any number of input links to a Transformer stage, but a Sequential File stage can only be connected as the one primary (stream) link; the other links must be reference links. If you try to connect a Sequential File stage as a reference link you will get the error "The destination stage cannot support any more stream input links", because a reference link represents a lookup table: a sequential file cannot be used as a lookup table, but a hashed file can.
Sequential file stage:
• The Sequential File stage is provided by DataStage to access data in a sequential (text) file.
• The access mechanism of a sequential file is sequential order.
• We cannot use a sequential file as a lookup.
• The problem with a sequential file is that we cannot directly filter rows, and queries are not supported.
Update actions in the Sequential File stage:
• Overwrite existing file (radio button).
• Append to existing file (radio button).
• Backup existing file (check box).
Hashed file stage:
• The Hashed File stage is used to store data in a hash file.
• A hash file is similar to a text file but the data is organized using a hashing algorithm.
• Basically a hashed file is used for lookup purposes.
• The retrieval of data from a hashed file is faster because it uses a hashing algorithm.
Update actions in the Hashed File stage (all are check boxes):
• Clear file before writing.
• Backup existing file.
DATABASE Stages:
ODBC Stage — "Stage" page:
You can use an ODBC stage to extract, write, or aggregate data. Each ODBC stage can have any number of inputs or outputs. Input links specify the data you are writing. Output links specify the data you are extracting and any aggregations required. You can specify the data on an output link using an SQL statement constructed by DataStage, a generated query, a stored procedure, or a user-defined SQL query.
GetSQLInfo is used to get the quote character and schema delimiters of your data source. Optionally specify the quote character used by the data source. By default, this is set to " (double quote). You can also click the Get SQLInfo button to connect to the data source and retrieve the quote character it uses. An entry of 000 (three zeroes) specifies that no quote character should be used. Optionally specify the schema delimiter used by the data source. By default this is set to . (period) but you can specify a different schema delimiter, or multiple schema delimiters. So, for example, if identifiers have the form Node:Schema.Owner;TableName you would enter :.; into this field. You can also click the Get SQLInfo button to connect to the data source and retrieve the schema delimiter it uses.
NLS tab: You can define a character set map for an ODBC stage using the NLS tab of the ODBC stage.
The ODBC stage can handle the following SQL Server data types:
• GUID
• Timestamp
• SmallDateTime
ODBC Stage — "Input" page:
Update action. Specifies how the data is written. Choose the option you want from the drop-down list box:
• Clear the table, then insert rows. Deletes the contents of the table and adds the new rows.
• Insert rows without clearing. Inserts the new rows in the table.
• Insert new or update existing rows. New rows are added or, if the insert fails, the existing rows are updated.
• Replace existing rows completely. Deletes the existing rows, then adds the new rows to the table.
• Update existing rows only. Updates the existing data rows. If a row with the supplied key does not exist in the table then the table is not updated and a warning is logged.
• Update existing or insert new rows. The existing data rows are updated or, if this fails, new rows are added.
• Call stored procedure. Writes the data using a stored procedure. When you select this option, the Procedure name field appears.
• User-defined SQL. Writes the data using a user-defined SQL statement. When you select this option, the View SQL tab is replaced by the Enter SQL tab.
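As a minimal sketch of the User-defined SQL option (the table and column names are hypothetical), the stage substitutes the input-link columns, in order, into the ? parameter markers:

    UPDATE customers
    SET    name = ?, phone = ?     -- first two link columns, in link order
    WHERE  cust_id = ?             -- key column placed last in the column list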
Create table in target database. Select this check box if you want to automatically create a table in the target database at run time. A table is created based on the defined column set for this stage. If you select this option, an additional tab, Edit DDL, appears. This shows the SQL CREATE statement to be used for table generation.
Transaction Handling. This page allows you to specify the transaction handling features of the stage as it writes to the ODBC data source. You can choose whether to use transaction grouping or not, specify an isolation level, the number of rows written before each commit, and the number of rows written in each operation.
Isolation levels: Read Uncommitted, Read Committed, Repeatable Read, Serializable, Versioning, and Auto-Commit.
Rows per transaction field: this is the number of rows written before the data is committed to the data table. The default value is 0, that is, all the rows are written before being committed to the data table. For example, with a value of 100 and 1,000 input rows, the stage would commit 10 times.
Parameter array size field: this is the number of rows written at a time. The default is 1, that is, each row is written in a separate operation.
ODBC Stage — "Output" page:
==
PROCESSING Stages:
TRANSFORMER Stage:
Transformer stages do not extract data or write data to a target database. They are used to handle extracted data, perform any conversions required, and pass data to another Transformer stage or a stage that writes data to a target data table. Transformer stages can have any number of inputs and outputs. The link from the main data input source is designated the primary input link. There can only be one primary input link, but there can be any number of reference inputs.
Input Links
The main data source is joined to the Transformer stage via the primary link, but the stage can also have any number of reference input links. A reference link represents a table lookup. These are used to provide information that might affect the way the data is changed, but do not supply the actual data to be changed.
Reference input columns can be designated as key fields. You can specify key expressions that are used to evaluate the key fields. The most common use for the key expression is to specify an equi-join, which is a link between a primary link column and a reference link column. For example, if your primary input data contains names and addresses, and a reference input contains names and phone numbers, the reference link name column is marked as a key field and the key expression refers to the primary link's name column. During processing, the name in the primary input is looked up in the reference input. If the names match, the reference data is consolidated with the primary data. If the names do not match, i.e., there is no record in the reference input whose key matches the expression given, all the columns specified for the reference input are set to the null value. Where a reference link originates from a UniVerse or ODBC stage, you can look up multiple rows from the reference table. The rows are specified by a foreign key, as opposed to a primary key used for a single-row lookup.
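A small sketch of the names-and-phones example above (link and column names are assumptions):

    * Key expression on the reference link's NAME key column (equi-join):
    DSLink_Primary.NAME

    * Output derivation guarding against a failed lookup (reference row is null):
    If IsNull(DSLink_Ref.PHONE) Then "UNKNOWN" Else DSLink_Ref.PHONE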
Output Links
You can have any number of output links from your Transformer stage. You may want to pass some data straight through the Transformer stage unaltered, but it's likely that you'll want to transform data from some input columns before outputting it from the Transformer stage. You can specify such an operation by entering a BASIC expression or by selecting a transform to apply to the data. DataStage has many built-in transforms, or you can define your own custom transforms that are stored in the Repository and can be reused as required. The source of an output link column is defined in that column's Derivation cell within the Transformer Editor. You can use the Expression Editor to enter expressions or transforms
in this cell. You can also simply drag an input column to an output column's Derivation cell, to pass the data straight through the Transformer stage. In addition to specifying derivation details for individual output columns, you can also specify constraints that operate on entire output links. A constraint is a BASIC expression that specifies criteria that data must meet before it can be passed to the output link. You can also specify a reject link, which is an output link that carries all the data not output on other links, that is, rows that have not met the criteria. Each output link is processed in turn. If the constraint expression evaluates to TRUE for an input row, the data row is output on that link. Conversely, if a constraint expression evaluates to FALSE for an input row, the data row is not output on that link. Constraint expressions on different links are independent. If you have more than one output link, an input row may result in a data row being output from some, none, or all of the output links. For example, if you consider the data that comes from a paint shop, it could include information about any number of different colors. If you want to separate the colors into different files, you would set up different constraints. You could output the information about green and blue paint on LinkA, red and yellow paint on LinkB, and black paint on LinkC. When an input row contains information about yellow paint, the LinkA constraint expression evaluates to FALSE and the row is not output on LinkA. However, the input data does satisfy the constraint criterion for LinkB and the row is output on LinkB. If the input data contains information about white paint, this does not satisfy any constraint and the data row is not output on Links A, B or C, but will be output on the reject link. The reject link is used to route data to a table or file that is a "catch-all" for rows that are not output on any other link. The table or file containing these rejects is represented by another stage in the job design.
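The paint shop example written as constraint expressions (the input link name DSLink3 and the Colour column are assumptions):

    * LinkA constraint -- green and blue paint:
    DSLink3.Colour = "Green" Or DSLink3.Colour = "Blue"
    * LinkB constraint -- red and yellow paint:
    DSLink3.Colour = "Red" Or DSLink3.Colour = "Yellow"
    * LinkC constraint -- black paint:
    DSLink3.Colour = "Black"
    * Rows matching none of the above (e.g. white) flow to the reject link.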
Before-Stage and After-Stage Routines
Because the Transformer stage is an active stage type, you can specify routines to be executed before or after the stage has processed the data. For example, you might use a before-stage routine to prepare the data before processing starts. You might use an after-stage routine to send an electronic message when the stage has finished.
Specifying the Primary Input Link
The first link to a Transformer stage is always designated as the primary input link. However, you can choose an alternative link to be the primary link if necessary. To do this:
1. Select the current primary input link in the Diagram window.
2. Choose Convert to Reference from the Diagram window shortcut menu.
3. Select the reference link that you want to be the new primary input link.
4. Choose Convert to Stream from the Diagram window shortcut menu.
==
AGGREGATOR Stage:
Aggregator stages classify data rows from a single input link into groups and compute totals or other aggregate functions for each group. The summed totals for each group are output from the stage via an output link. If you want to aggregate the input data in a number of different ways, you can have several output links, each specifying a different set of properties to define how the input data is grouped and summarized.
==
FOLDER Stage:
Folder stages are used to read or write data as files in a directory located on the DataStage server. The Folder stage can read multiple files from a single directory and can deliver the files to the job as rows on an output link. The Folder stage can also write rows of data as files to a directory. The rows arrive at the stage on an input link.
Note: The behavior of the Folder stage when reading folders that contain other folders is undefined.
In an NLS environment, the user running the job must have write permission on the folder so that the NLS map information can be set up correctly. Folder Stage Input Data
The properties are as follows:
• Preserve CRLF. When Preserve CRLF is set to Yes, field marks are not converted to newlines on write. It is set to Yes by default.
The Columns tab defines the data arriving on the link to be written in files to the directory. The first column on the Columns tab must be defined as a key, and gives the name of the file. The remaining columns are written to the named file, each column separated by a newline. Data to be written to a directory would normally be delivered in a single column.
Folder Stage Output Data
The properties are as follows:
• Sort order. Choose from Ascending, Descending, or None. This specifies the order in which the files are read from the directory.
• Wildcard. This allows for simple wildcarding of the names of the files found in the directory. Any occurrence of * (asterisk) or … (three periods) is treated as an instruction to match any or no characters.
• Preserve CRLF. When Preserve CRLF is set to Yes, newlines are not converted to field marks on read. It is set to Yes by default.
• Fully qualified. Set this to Yes to have the full path name of each file written in the key column instead of just the file name.
The Columns tab defines a maximum of two columns. The first column must be marked as the Key and receives the file name. The second column, if present, receives the contents of the file.
==
IPC Stage: An inter-process (IPC) stage is a passive stage which provides a communication channel between DataStage processes running simultaneously in the same job. It allows you to design jobs that run on SMP systems with great performance benefits. To understand the benefits of using IPC stages, you need to know a bit about how DataStage jobs actually run as processes, see “DataStage Jobs and Processes”. The output link connecting IPC stage to the stage reading data can be opened as soon as the input link connected to the stage writing data has been opened. You can use Inter-process stages to join passive stages together. For example you could use them to speed up data transfer between two data sources:
In this example the job will run as two processes, one handling the communication from sequential file stage to IPC stage, and one handling communication from IPC stage to ODBC stage. As soon as the Sequential File stage has opened its output link, the IPC stage
can start passing data to the ODBC stage. If the job is running on a multi-processor system, the two processes can run simultaneously on separate processors, so the transfer will be much faster.
Defining IPC Stage Properties
The Properties tab allows you to specify two properties for the IPC stage:
• Buffer Size. Defaults to 128 KB. The IPC stage uses two blocks of memory; one block can be written to while the other is read from. This property defines the size of each block, so that by default 256 KB is allocated in total.
• Timeout. Defaults to 10 seconds. This gives a time limit for how long the stage will wait for a process to connect to it before timing out. This normally will not need changing, but may be important where you are prototyping multi-processor jobs on single-processor platforms and there are likely to be delays.
==
LINK PARTITIONER Stage:
The Link Partitioner stage is an active stage which takes one input and allows you to distribute partitioned rows to up to 64 output links. The stage expects the output links to use the same meta data as the input link. Partitioning your data enables you to take advantage of a multi-processor system and have the data processed in parallel. It can be used in conjunction with the Link Collector stage to partition data, process it in parallel, then collect it together again before writing it to a single target. To really understand the benefits you need to know a bit about how DataStage jobs are run as processes, see "DataStage Jobs and Processes".
In order for this job to compile and run as intended on a multi-processor system you must have inter-process buffering turned on, either at project level using the DataStage Administrator, or at job level from the Job Properties dialog box.
Before-Stage and After-Stage Subroutines
The General tab on the Stage page contains optional fields that allow you to define routines to use which are executed before or after the stage has processed the data.
• Before-stage subroutine and Input Value. Contain the name (and value) of a subroutine that is executed before the stage starts to process any data. For example, you can specify a routine that prepares the data before processing starts.
• After-stage subroutine and Input Value. Contain the name (and value) of a subroutine that is executed after the stage has processed the data. For example, you can specify a routine that sends an electronic message when the stage has finished.
Choose a routine from the drop-down list box. This list box contains all the routines defined as a Before/After Subroutine under the Routines branch in the Repository. Enter an appropriate value for the routine's input argument in the Input Value field. If you choose a routine that is defined in the Repository, but which was edited and not compiled, a warning message reminds you to compile the routine when you close the Link Partitioner Stage dialog box. A return code of 0 from the routine indicates success; any other code indicates failure and causes a fatal error when the job is run. If you installed or imported a job, the Before-stage subroutine or After-stage subroutine field may reference a routine that does not exist on your system. In this case, a warning message appears when you close the Link Partitioner Stage dialog box. You must install or import the "missing" routine or choose an alternative one to use.
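A minimal Before/After subroutine skeleton in DataStage BASIC (the routine name and message are assumptions; the two-argument signature is the standard one for this routine type):

    Subroutine NotifyFinished(InputArg, ErrorCode)
    * InputArg receives the Input Value entered in the stage dialog.
       Call DSLogInfo("Stage processing finished: " : InputArg, "NotifyFinished")
       ErrorCode = 0   ;* 0 = success; any other value causes a fatal error
    Return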
Defining Link Partitioner Stage Properties
The Properties tab allows you to specify two properties for the Link Partitioner stage:
• Partitioning Algorithm. Use this property to specify the method the stage uses to partition data. Choose from:
Round-Robin. This is the default method. Using the round-robin method the stage will write each incoming row to one of its output links in turn.
Random. Using this method the stage will use a random number generator to distribute incoming rows evenly across all output links.
Hash. Using this method the stage applies a hash function to one or more input column values to determine which output link the row is passed to.
Modulus. Using this method the stage applies a modulus function to an integer input column value to determine which output link the row is passed to (as sketched below).
• Partitioning Key. This property is only significant where you have chosen a partitioning algorithm of Hash or Modulus. For the Hash algorithm, specify one or more column names separated by commas. These keys are concatenated and a hash function applied to determine the destination output link. For the Modulus algorithm, specify a single column name which identifies an integer numeric column. The value of this column determines the destination output link.
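An illustrative sketch of the modulus method in DataStage BASIC terms (not the stage's actual internals; the column and variable names are assumptions):

    NumLinks = 4
    LinkNo   = Mod(CUST_ID, NumLinks) + 1   ;* output links numbered 1..NumLinks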
Defining Link Partitioner Stage Input Data
The Link Partitioner stage can have one input link. This is where the data to be partitioned arrives. The Inputs page has two tabs: General and Columns.
• General. The General tab allows you to specify an optional description of the stage.
• Columns. The Columns tab contains the column definitions for the data on the input link. This is normally populated by the meta data of the stage connecting on the input side. You can also Load a column definition from the Repository, or type one in yourself (and Save it to the Repository if required). Note that the meta data on the input link must be identical to the meta data on the output links.
Defining Link Partitioner Stage Output Data
The Link Partitioner stage can have up to 64 output links. Partitioned data flows along these links. The Output Name drop-down list on the Outputs page allows you to select which of the 64 links you are looking at. The Outputs page has two tabs: General and Columns.
• General. The General tab allows you to specify an optional description of the stage.
• Columns. The Columns tab contains the column definitions for the data on the output link. You can Load a column definition from the Repository, or type one in yourself (and Save it to the Repository if required). Note that the meta data on the output link must be identical to the meta data on the input link, so the meta data is identical for all the output links.
==
LINK COLLECTOR Stage:
The Link Collector stage is an active stage which takes up to 64 inputs and allows you to collect data from these links and route it along a single output link. The stage expects the output link to use the same meta data as the input links. The Link Collector stage can be used in conjunction with a Link Partitioner stage to enable you to take advantage of a multi-processor system and have data processed in parallel. The Link Partitioner stage partitions data, it is processed in parallel, then the Link Collector stage collects it together again before writing it to a single target. To really understand the benefits you need to know a bit about how DataStage jobs are run as processes, see "DataStage Jobs and Processes".
In order for this job to compile and run as intended on a multi-processor system you must have inter-process buffering turned on, either at project level using the DataStage Administrator, or at job level from the Job Properties dialog box. The Properties tab allows you to specify two properties for the Link Collector stage:
• Collection Algorithm. Use this property to specify the method the stage uses to collect data. Choose from:
Round-Robin. This is the default method. Using the round-robin method the stage will read a row from each input link in turn.
Sort/Merge. Using the sort/merge method the stage reads multiple sorted inputs and writes one sorted output.
• Sort Key. This property is only significant where you have chosen a collecting algorithm of Sort/Merge. It defines how each of the partitioned data sets is known to be sorted and how the merged output will be sorted. The key has the following format:
Columnname [ sortorder ] [, Columnname [ sortorder ]]...

Columnname specifies one (or more) columns to sort on. sortorder defines the sort order as follows:
Ascending order: a, asc, ascending, A, ASC, ASCENDING
Descending order: d, dsc, descending, D, DSC, DESCENDING
In an NLS environment, the collate convention of the locale may affect the sort order. The default collate convention is set in the DataStage Administrator, but can be set for individual jobs in the Job Properties dialog box.
For example: FIRSTNAME d, SURNAME D
Specifies that rows are sorted according to the FIRSTNAME and SURNAME columns, both in descending order. The Link Collector stage can have up to 64 input links. This is where the data to be collected arrives. The Input Name drop-down list on the Inputs page allows you to select which of the 64 links you are looking at. The Link Collector stage can have a single output link.
DATASTAGE TUTORIAL
1. About DataStage
2. Client Components
3. DataStage Designer
4. DataStage Director
5. DataStage Manager
6. DataStage Administrator
7. DataStage Manager Roles
8. Server Components
9. DataStage Features
10. Types of Jobs
11. DataStage NLS
12. JOB
13. Aggregator
14. Hashed File
15. UniVerse
16. UniData
17. ODBC
18. Sequential File
19. Folder Stage
20. Transformer
21. Container
22. IPC Stage
23. Link Collector Stage
24. Link Partitioner Stage
25. Server Job Properties
26. Containers
27. Local Containers
28. Shared Containers
29. Job Sequences
About DataStage
DataStage is a tool set for designing, developing, and running applications that populate one or more tables in a data warehouse or data mart. It consists of client and server components.
Client Components
DataStage Designer. A design interface used to create DataStage applications (known as jobs). Each job specifies the data sources, the transforms required, and the destination of the data. Jobs are compiled to create executables that are scheduled by the Director and run by the Server.
DataStage Director. A user interface used to validate, schedule, run, and monitor DataStage jobs.
DataStage Manager. A user interface used to view and edit the contents of the Repository.
DataStage Administrator. A user interface used to configure DataStage.
DataStage Manager Roles
• Import table or stored procedure definitions
• Create table or stored procedure definitions, data elements, custom transforms, server job routines, mainframe routines, machine profiles, and plug-ins
There are also more specialized tasks that can only be performed from the DataStage Manager. These include:
• Performing usage analysis queries.
• Reporting on Repository contents.
• Importing, exporting and packaging DataStage jobs.
Server Components
There are three server components which are installed on a server:
• Repository. A central store that contains all the information required to build a data mart or data warehouse.
• DataStage Server. Runs executable jobs that extract, transform, and load data into a data warehouse.
• DataStage Package Installer. A user interface used to install packaged DataStage jobs and plug-ins.
DataStage Features
• Extracts data from any number or types of database.
• Handles all the meta data definitions required to define your data warehouse.
• Aggregates data. You can modify SQL SELECT statements used to extract data.
• Transforms data. DataStage has a set of predefined transforms and functions you can use to convert your data.
• Loads the data warehouse.
Types of jobs
There are three basic types of DataStage job:
• Server jobs. These are compiled and run on the DataStage server. A server job will connect to databases on other machines as necessary, extract data, process it, then write the data to the target data warehouse.
• Parallel jobs. These are available only if you have Enterprise Edition installed. Parallel jobs are compiled and run on a DataStage UNIX server, and can be run in parallel on SMP, MPP, and cluster systems.
• Mainframe jobs. These are available only if you have Enterprise MVS Edition installed. A mainframe job is compiled and run on the mainframe. Data extracted by such jobs is then loaded into the data warehouse.
There are two other entities that are similar to jobs in the way they appear in the DataStage Designer, and are handled by it. These are:
• Shared containers. These are reusable job elements. They typically comprise a number of stages and links. Copies of shared containers can be used in any number of server jobs and edited as required.
• Job sequences. A job sequence allows you to specify a sequence of DataStage jobs to be executed, and actions to take depending on results.
DataStage NLS
• Process data in a wide range of languages
• Accept data in any character set into most DataStage fields
• Use local formats for dates, times, and money (server jobs)
• Sort data according to local rules
JOB A job consists of stages linked together which describe the flow of data from a data source to a final data warehouse.
Built-In Stages – Server Jobs
Aggregator. Aggregator stages are active stages that classify data rows from a single input link into groups and compute totals or other aggregate functions for each group. The summed totals for each group are output from the stage via an output link.
Hashed File. Extracts data from or loads data into databases that contain hashed files. Also acts as an intermediate stage for quick lookups. Hashed File stages represent a hashed file, i.e., a file that uses a hashing algorithm for distributing records in one or more groups on disk. You can use a Hashed File stage to extract or
write data, or to act as an intermediate file in a job. The primary role of a Hashed File stage is as a reference table based on a single key field. Each Hashed File stage can have any number of inputs or outputs. Input links specify the data you are writing. Output links specify the data you are extracting.
UniVerse.
• Extracts data from or loads data into UniVerse databases.
UniData.
• Extracts data from or loads data into UniData databases.
ODBC. Extracts data from or loads data into databases that support the industry standard Open Database Connectivity API. This stage is also used as an intermediate stage for aggregating data. ODBC stages are used to represent a database that supports the industry standard Open Database Connectivity API. You can use an ODBC stage to extract, write, or aggregate data. Each ODBC stage can have any number of inputs or outputs. Input links specify the data you are writing. Output links specify the data you are extracting and any aggregations required.
Sequential File. Extracts data from or loads data into "flat files" in the Windows NT file system. Sequential File stages are used to extract data from, or write data to, a text file in the server file system. The text file can be created or exist on any drive that is either local or mapped to the server. Each Sequential File stage can have any number of inputs or outputs.
Folder Stage. Folder stages are used to read or write data as files in a directory located on the DataStage server. The Folder stage can read multiple files from a single directory and can deliver the files to the job as rows on an output link. By default, the file content is delivered with newlines converted to char(254) field marks. The Folder stage can also write rows of data as files to a directory. The rows arrive at the stage on an input link.
Transformer. Receives incoming data, transforms it in a variety of ways, and outputs it to another stage in the job. Transformer stages do not extract data or write data to a target database. They are used to handle extracted data, perform any conversions required, and pass data to another Transformer stage or a stage that writes data to a target data table. Transformer stages in server jobs can have any number of inputs and outputs. The link from the main data input source is designated the primary input link. There can only be one primary input link, but there can be any number of reference inputs.
Container. Represents a group of stages and links. The group is replaced by a single Container stage in the Diagram window.
IPC Stage. An inter-process (IPC) stage is a passive stage which provides a communication channel between DataStage processes running simultaneously in the same job. It allows you to design jobs that run on SMP systems with great performance benefits. To understand the benefits of using IPC stages, you need to know a bit about how DataStage jobs actually run as processes; see Chapter 2 of the Server Job Developer's Guide for information. The output link connecting an IPC stage to the stage reading data can be opened as soon as the input link connected to the stage writing data has been opened. You can use inter-process stages to join passive stages together. For example you could use them to speed up data transfer between two data sources.
Link Collector Stage. Takes up to 64 inputs and allows you to collect data from these links and route it along a single output link. The Link Collector stage is an active stage which expects the output link to use the same meta data as the input links.
Link Partitioner Stage. Takes one input and allows you to distribute partitioned rows to up to 64 output links. The Link Partitioner stage is an active stage which expects the output links to use the same meta data as the input link.
Container Input and Container Output. Represent the interface that links a container stage to the rest of the job design.
Server Job Properties
The Job Properties dialog box appears. The dialog box differs depending on whether it is a server job, a mainframe job, or a job sequence. A server job has up to six pages: General, Parameters, Job control, NLS, Performance, and Dependencies. Note that the NLS page is not available if you open the dialog box from the Manager, even if you have NLS installed.
Containers
A container is a group of stages and links. Containers enable you to simplify and modularize your server job designs by replacing complex areas of the diagram with a single container stage. You can also use shared containers as a way of incorporating server job functionality into parallel jobs. DataStage provides two types of container:
• Local containers. These are created within a job and are only accessible by that job. A local container is edited in a tabbed page of the job's Diagram window.
• Shared containers. These are created separately and are stored in the Repository in the same way that jobs are. There are two types of shared container: server shared containers and parallel shared containers.
Job Sequences
DataStage provides a graphical Job Sequencer which allows you to specify a sequence of server or parallel jobs to run. The sequence can also contain control information; for example, you can specify different courses of action to take depending on whether a job in the sequence succeeds or fails. Once you have defined a job sequence, it can be scheduled and run using the DataStage Director. It appears in the DataStage Repository and in the DataStage Director client as a job.
LEARN FEATURES OF DATASTAGE:
DataStage has the following features to aid the design and processing required to build a data warehouse:
• Uses graphical design tools. With simple point-and-click techniques you can draw a scheme to represent your processing requirements.
• Extracts data from any number or type of database.
• Handles all the metadata definitions required to define your data warehouse. You can view and modify the table definitions at any point during the design of your application.
• Aggregates data. You can modify SQL SELECT statements used to extract data.
• Transforms data. DataStage has a set of predefined transforms and functions you can use to convert your data. You can easily extend the functionality by defining your own transforms to use.
• Loads the data warehouse.
COMPONENTS OF DATASTAGE:
DataStage consists of a number of client and server components. DataStage has four client components:
1. DataStage Designer. A design interface used to create DataStage applications (known as jobs). Each job specifies the data sources, the transforms required, and the destination of the data. Jobs are compiled to create executables that are scheduled by the Director and run by the Server (mainframe jobs are transferred and run on the mainframe).
2. DataStage Director. A user interface used to validate, schedule, run, and monitor DataStage server jobs and parallel jobs.
3. DataStage Manager. A user interface used to view and edit the contents of the Repository.
4. DataStage Administrator. A user interface used to perform administration tasks such as setting up DataStage users, creating and moving projects, and setting up purging criteria.
SERVER COMPONENTS:
There are three server components:
1. Repository. A central store that contains all the information required to build a data mart or data warehouse.
2. DataStage Server. Runs executable jobs that extract, transform, and load data into a data warehouse.
3. DataStage Package Installer. A user interface used to install packaged DataStage jobs and plug-ins.
DATASTAGE PROJECTS:
You always enter DataStage through a DataStage project. When you start a DataStage client you are prompted to attach to a project. Each project contains:
• DataStage jobs.
• Built-in components. These are predefined components used in a job.
• User-defined components. These are customized components created using the DataStage Manager. Each user-defined component performs a specific task in a job.
DATASTAGE JOBS:
There are three basic types of DataStage job:
1. Server jobs. These are compiled and run on the DataStage server. A server job will connect to databases on other machines as necessary, extract data, process it, then write the data to the target data warehouse.
2. Parallel jobs. These are compiled and run on the DataStage server in a similar way to server jobs, but support parallel processing on SMP, MPP, and cluster systems.
3. Mainframe jobs. These are available only if you have Enterprise MVS Edition installed. A mainframe job is compiled and run on the mainframe. Data extracted by such jobs is then loaded into the data warehouse.
SPECIAL ENTITIES:
• Shared containers. These are reusable job elements. They typically comprise a number of stages and links. Copies of shared containers can be used in any number of server jobs or parallel jobs and edited as required.
• Job sequences. A job sequence allows you to specify a sequence of DataStage jobs to be executed, and actions to take depending on results.
TYPES OF STAGES:
• Built-in stages. Supplied with DataStage and used for extracting, aggregating, transforming, or writing data. All types of job have these stages.
• Plug-in stages. Additional stages that can be installed in DataStage to perform specialized tasks that the built-in stages do not support. Server jobs and parallel jobs can make use of these.
• Job Sequence stages. Special built-in stages which allow you to define sequences of activities to run. Only job sequences have these.
DATASTAGE NLS:
DataStage has built-in National Language Support (NLS). With NLS installed, DataStage can do the following:
• Process data in a wide range of languages
• Accept data in any character set into most DataStage fields
• Use local formats for dates, times, and money (server jobs)
• Sort data according to local rules
To load a data mart or data warehouse, you must do the following:
• Set up your project
• Create a job
• Develop the job
• Edit the stages in the job
• Compile the job
• Run the job
SETTING UP YOUR PROJECT:
Before you create any DataStage jobs, you must set up your project by entering information about your data. This includes the name and location of the tables or files holding your data and a definition of the columns they contain. Information is stored in table definitions in the Repository.
STARTING THE DATASTAGE DESIGNER: To start the DataStage Designer, choose Start →
Programs → Ascential DataStage → DataStage Designer. The Attach to Project dialog box appears:
TO CONNECT TO A PROJECT:
1. Enter the name of your host in the Host system field. This is the name of the system where the DataStage Server components are installed.
2. Enter your user name in the User name field. This is your user name on the server system.
3. Enter your password in the Password field.
4. Choose the project to connect to from the Project drop-down list box.
5. Click OK. The DataStage Designer window appears with the New dialog box open, ready for you to create a new job:
CREATING A JOB: Jobs are created using the DataStage Designer. For this example, you need to create a server job, so double-click the New Server Job icon.
Choose File → Save to save the job. The Create new job dialog box appears:
DEFINING TABLE DEFINITIONS:
For most data sources, the quickest and simplest way to specify a table definition is to import it directly from your data source or data warehouse.
IMPORTING TABLE DEFINITIONS:
1. In the Repository window of the DataStage Designer, select the Table Definitions branch, and choose Import → Table Definitions… from the shortcut menu. The Import Metadata (ODBC Tables) dialog box appears:
2. Choose the data source name from the DSN drop-down list box.
3. Click OK. The updated Import Metadata (ODBC Tables) dialog box displays all the files for the chosen data source name:
4. Select project.EXAMPLE1 from the Tables list box, where project is the name of your DataStage project.
5. Click OK. The column information from EXAMPLE1 is imported into DataStage.
6. A table definition is created and is stored under the Table Definitions → ODBC → DSNNAME branch in the Repository. The updated DataStage Designer window displays the new table definition entry in the Repository window.
DEVELOPING A JOB:
Jobs are designed and developed using the Designer. The job design is developed in the Diagram window (the one with grid lines). Each data source, the data warehouse, and each processing step is represented by a stage in the job design. The stages are linked together to show the flow of data.
For example, we can develop a job with the following three stages:
• A UniVerse stage to represent EXAMPLE1 (the data source).
• A Transformer stage to convert the data in the DATE column from a YYYY-MM-DD date in internal date format to a string giving just year and month (YYYY-MM).
• A Sequential File stage to represent the file created at run time (the data warehouse in this example).
Adding Stages:
Stages are added using the tool palette. This palette contains icons that represent the components you can add to a job. The palette has different groups to organize the tools available.
To add a stage:
1. Click the stage button on the tool palette that represents the stage type you want to add.
2. Click in the Diagram window where you want the stage to be positioned. The stage appears in the Diagram window as a square.
You can also drag items from the palette to the Diagram window. We recommend that you position your stages as follows:
• Data sources on the left
• Data warehouse on the right
• Transformer stage in the center
When you add stages, they are automatically assigned default names. These names are based on the type of stage and the number of the item in the Diagram window. You can use the default names in the example. Once all the stages are in place, you can link them together to show the flow of data.
Linking Stages
You need to add two links:
• One between the UniVerse and Transformer stages
• One between the Transformer and Sequential File stages
Links are always made in the direction the data will flow, that is, usually left to right. When you add links, they are assigned default names. You can use the default names in the example.
To add a link:
1. Right-click the first stage, hold the mouse button down, and drag the link to the Transformer stage. Release the mouse button.
2. Right-click the Transformer stage and drag the link to the Sequential File stage.
The following screen shows how the Diagram window looks when you have added the stages and links:
Editing the Stages
Your job design currently displays the stages and the links between them. You must edit each stage in the job to specify the data to use and what to do with it. Stages are edited in the job design by double-clicking each stage in turn. Each stage type has its own editor.
Editing the UniVerse Stage
The data source (EXAMPLE1) is represented by a UniVerse stage. You must specify the data you want to extract from this file by editing the stage. Double-click the stage to edit it. The UniVerse Stage dialog box appears:
This dialog box has two pages:
• Stage. Displayed by default. This page contains the name of the stage you are editing. The General tab specifies where the file is found and the connection type.
• Outputs. Contains information describing the data flowing from the stage. You edit this page to describe the data you want to extract from the file. In this example, the output from this stage goes to the Transformer stage.
To edit the UniVerse stage:
1. Check that you are displaying the General tab on the Stage page. Choose localuv from the Data source name drop-down list. localuv is where EXAMPLE1 is copied to during installation. The remaining parameters on the General and Details tabs are used to enter logon details and describe where to find the file. Because EXAMPLE1 is installed in localuv, you do not have to complete these fields, which are disabled.
2. Click the Outputs tab. The Outputs page appears:
The Outputs page contains the name of the link the data flows along and the following four tabs:
• General. Contains the name of the table to use and an optional description of the link.
• Columns. Contains information about the columns in the table.
• Selection. Used to enter an optional SQL SELECT clause (an advanced procedure).
• View SQL. Displays the SQL SELECT statement used to extract the data.
3. Choose dstage.EXAMPLE1 from the Available tables drop-down list.
4. Click Add to add dstage.EXAMPLE1 to the Table names field.
5. Click the Columns tab. The Columns tab appears at the front of the dialog box. You must specify the columns contained in the file you want to use. Because the column definitions are stored in a table definition in the Repository, you can load them directly.
6. Click Load…. The Table Definitions window appears with the UniVerse localuv branch highlighted.
7. Select dstage.EXAMPLE1. The Select Columns dialog box appears, allowing you to select which column definitions you want to load.
8. In this case you want to load all available column definitions, so just click OK. The column definitions specified in the table definition are copied to the stage. The Columns tab contains definitions for the four columns in EXAMPLE1:
9. You can use the Data Browser to view the actual data that is to be output from the UniVerse stage. Click the View Data… button to open the Data Browser window.
11. Choose File → Save to save your job design so far.
Editing the Transformer Stage
The Transformer stage performs any data conversion required before the data is output to another stage in the job design. In this example, the Transformer stage is used to convert the data in the DATE column from a YYYY-MM-DD date in internal date format to a string giving just the year and month (YYYY-MM).
There are two links in the stage:
• The input from the data source (EXAMPLE1)
• The output to the Sequential File stage
To enable the use of one of the built-in DataStage transforms, you will assign data elements to the DATE columns input and output from the Transformer stage. A DataStage data element defines more precisely the kind of data that can appear in a given column. In this example, you assign the Date data element to the input column, to specify that the date is input to the transform in internal format, and the MONTH.TAG data element to the output column, to specify that the transform produces a string of the format YYYY-MM.
Double-click the Transformer stage to edit it. The Transformer Editor appears:
1. Working in the upper-left pane of the Transformer Editor, select the input columns that you want to derive output columns from. Click the CODE, DATE, and QTY columns while holding down the Ctrl key.
2. Click the left mouse button again and, keeping it held down, drag the selected columns to the output link in the upper-right pane. Drop the columns over the Column Name field by releasing the mouse button. The columns appear in the top pane and the associated metadata appears in the lower-right pane:
3. In the Data element field for the DSLink3.DATE column, select Date from the drop-down list.
4. In the SQL type field for the DSLink4 DATE column, select Char from the drop-down list.
5. In the Length field for the DSLink4 DATE column, enter 7.
6. In the Data element field for the DSLink4 DATE column, select MONTH.TAG from the drop-down list. Next you will specify the transform to apply to the input DATE column to produce the output DATE column. You do this in the upper-right pane of the Transformer Editor.
7. Double-click the Derivation field for the DSLink4 DATE column. The Expression Editor box appears. At the moment, the box contains the text DSLink3.DATE, which indicates that the output is directly derived from the input DATE column. Select the text DSLink3 and delete it by pressing the Delete key.
10. Select the MONTH.TAG transform. It appears in the Expression Editor box with the argument field [%Arg1%] highlighted.
11. Right-click to open the Suggest Operand menu again. This time, select Input Column. A list of available input columns appears:
12. Select DSLink3.DATE. This then becomes the argument for the transform.
13. Click OK to save the changes and exit the Transformer Editor. Once more the small icon appears on the output link from the Transformer stage to indicate that the link now has column definitions associated with it.
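At this point the derivation for the output DATE column should read as a single transform call. Assuming the default link names DSLink3 and DSLink4 used throughout this example, the completed expression in the Derivation cell is:

    MONTH.TAG(DSLink3.DATE)

which takes the internal-format date arriving on the input link and returns a string of the form YYYY-MM.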
Editing the Sequential File Stage
The data warehouse is represented by a Sequential File stage. The data to be written to the data warehouse is already specified in the Transformer stage. However, you must enter the name of a file to which the data is written when the job runs. If the file does not exist, it is created. Double-click the stage to edit it. The Sequential File Stage dialog box appears:
This dialog box has two pages:
• Stage. Displayed by default. This page contains the name of the stage you are editing and two tabs. The General tab specifies the line termination type, and the NLS tab specifies a character set map to use with the stage (this appears if you have NLS installed).
• Inputs. Describes the data flowing into the stage. This page only appears when you have an input to a Sequential File stage. You do not need to edit the column definitions on this page, because they were all specified in the Transformer stage.
To edit the Sequential File stage:
1. Click the Inputs tab. The Inputs page appears. This page contains:
• The name of the link. This is automatically set to the link name used in the job design.
• General tab. Contains the pathname of the file, an optional description of the link, and update action choices. You can use the default settings for this example, but you may want to enter a file name (by default the file is named after the input link).
• Format tab. Determines how the data is written to the file. In this example, the data is written using the default settings, that is, as a comma-delimited file.
• Columns tab. Contains the column definitions for the data you want to extract. This tab contains the column definitions specified in the Transformer stage's output link.
2. Enter the pathname of the text file you want to create in the File name field, for example, seqfile.txt. By default the file is placed in the server project directory (for example, c:\Ascential\DataStage\Projects\datastage) and is named after the input link, but you can enter, or browse for, a different directory.
3. Click OK to close the Sequential File Stage dialog box.
4. Choose File → Save to save the job design.
The job design is now complete and ready to be compiled.
Compiling a Job
When you finish your design you must compile it to create an executable job. Jobs are compiled using the Designer. To compile the job, do one of the following:
• Choose File → Compile.
• Click the Compile button on the toolbar.
The Compile Job window appears:
Running a Job
Executable jobs are scheduled by the DataStage Director and run by the DataStage Server. You can start the Director from the Designer by choosing Tools → Run Director. When the Director is started, the DataStage Director window appears with the status of all the jobs in your project:
Highlight your job in the Job name column. To run the job, choose Job → Run Now or click the Job Run button on the toolbar. The Job Run Options dialog box appears and allows you to specify any parameter values and any job run limits. In this case, just click Run. The status changes to Running. When the job is complete, the status changes to Finished. Choose File → Exit to close the DataStage Director window.
Developing a Job
The DataStage Designer is used to create and develop DataStage jobs. A DataStage job populates one or more tables in the target database. There is no limit to the number of jobs you can create in a DataStage project. A job design contains:
• Stages to represent the processing steps required
• Links between the stages to represent the flow of data
There are three different types of job in DataStage:
• Server jobs. These are available if you have installed Server. They run on the DataStage Server, connecting to other data sources as necessary.
• Mainframe jobs. These are available only if you have installed Enterprise MVS Edition. Mainframe jobs are uploaded to a mainframe, where they are compiled and run.
• Parallel jobs. These are available only if you have installed the Enterprise Edition. These run on DataStage servers that are SMP, MPP, or cluster systems.
There are two other entities that are similar to jobs in the way they appear in the DataStage Designer, and are handled by it. These are:
• Shared containers. These are reusable job elements. They typically comprise a number of stages and links. Copies of shared containers can be used in any number of server jobs and parallel jobs and edited as required.
• Job Sequences. A job sequence allows you to specify a sequence of DataStage server or parallel jobs to be executed, and actions to take depending on results; a hand-written equivalent is sketched below.
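A job sequence is essentially generated job-control code, and similar control logic can be written by hand in DataStage BASIC using the job-control API. The following is a minimal sketch only; the job name LoadWarehouse is hypothetical, and error handling is reduced to a single status check:

    * Attach to the job; abort this routine if the attach fails
    hJob = DSAttachJob("LoadWarehouse", DSJ.ERRFATAL)

    * Start the job with its default parameter values
    ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)

    * Block until the job finishes, then read its final status
    ErrCode = DSWaitForJob(hJob)
    Status = DSGetJobInfo(hJob, DSJ.JOBSTATUS)

    If Status = DSJS.RUNFAILED Then
       * Hypothetical follow-up action; a sequence would fire a failure trigger here
       Call DSLogWarn("LoadWarehouse did not finish cleanly", "JobControl")
    End

    * Release the job handle
    ErrCode = DSDetachJob(hJob)

The graphical sequencer builds equivalent logic for you, with triggers deciding what to run on success or failure.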
STAGES:
A job consists of stages linked together which describe the flow of data from a data source to a data target (for example, a final data warehouse). A stage usually has at least one data input and/or one data output. However, some stages can accept more than one data input, and output to more than one stage. The different types of job have different stage types. The stages that are available in the DataStage Designer depend on the type of job that is currently open in the Designer.
Server Job Stages
DataStage offers several built-in stage types for use in server jobs. These are used to represent data sources, data targets, or conversion stages, and they are either passive or active. A passive stage handles access to databases for the extraction or writing of data. Active stages model the flow of data and provide mechanisms for combining data streams, aggregating data, and converting data from one data type to another. As well as using the built-in stage types, you can also use plug-in stages for specific operations that the built-in stages do not support. The Palette organizes stage types into different groups, according to function:
• Database
• File
• PlugIn
• Processing
• Real Time
Stages and links can be grouped in a shared container. Instances of the shared container can then be reused in different server jobs. You can also define a local container within a job; this groups stages and links into a single unit, but can only be used within the job in which it is defined. Each stage type has a set of predefined and editable properties. These properties are viewed or edited using stage editors.
At this point in your job development you need to decide which stage types to use in your job design. The following built-in stage types are available for server jobs:
Mainframe Job Stages
DataStage offers several built-in stage types for use in mainframe jobs. These are used to represent data sources, data targets, or conversion stages. The Palette organizes stage types into different groups, according to function:
• Database
• File
• Processing
Each stage type has a set of predefined and editable properties. Some stages can be used as data sources and some as data targets; some can be used as both. Processing stages read data from a source, process it, and write it to a data target. These properties are viewed or edited using stage editors. A stage editor exists for each stage type. At this point in your job development you need to decide which stage types to use in your job design.
Parallel Jobs
Processing Stages
SERVER JOBS:
When you design a job you see it in terms of stages and links. When it is compiled, the DataStage engine sees it in terms of processes that are subsequently run on the server. How does the DataStage engine define a process? It is here that the distinction between active and passive stages becomes important. Active stages, such as the Transformer and Aggregator, perform processing tasks, while passive stages, such as the Sequential File stage and ODBC stage, read or write data sources and provide services to the active stages. At its simplest, active stages become processes. But the situation becomes more complicated where you connect active stages together and passive stages together.
Single Processor and Multi-Processor Systems
The default behavior when compiling DataStage jobs is to run all adjacent active stages in a single process. This makes good sense when you are running the job on a single processor system. When you are running on a multi-processor system it is better to run each active stage in a separate process so the processes can be distributed among available processors and run in parallel. The enhancements to server jobs at Release 6 of DataStage make it possible for you to stipulate at design time that jobs should be compiled in this way. There are two ways of doing this:
• Explicitly – by inserting IPC stages between connected active stages.
• Implicitly – by turning on inter-process row buffering, either project wide (using the DataStage Administrator) or for individual jobs (in the Job Properties dialog box).
The IPC facility can also be used to produce multiple processes where passive stages are directly connected. This means that an operation reading from one data source and writing to another could be divided into a reading process and a writing process able to take advantage of multiprocessor systems.
Partitioning and Collecting
With the introduction of the enhanced multi-processor support at Release 6, there are opportunities to further enhance the performance of server jobs by partitioning data. The Link Partitioner stage allows you to partition the data you are reading so it can be processed by individual processes running on multiple processors. The Link Collector stage allows you to collect partitioned data together again for writing to a single data target. The following diagram illustrates how you might use the Link Partitioner and Link Collector stages within a job. Both stages are active, and you should turn on inter-process row buffering at project or job level in order to implement process boundaries.
Aggregator Stages
Aggregator stages classify data rows from a single input link into groups and compute totals or other aggregate functions for each group. The summed totals for each group are output from the stage via an output link.
Using an Aggregator Stage
If you want to aggregate the input data in a number of different ways, you can have several output links, each specifying a different set of properties to define how the input data is grouped and summarized. When you edit an Aggregator stage, the Aggregator Stage dialog box appears:
This dialog box has three pages:
• Stage. Displays the name of the stage you are editing. This page has a General tab which contains an optional description of the stage and the names of before- and after-stage routines.
• Inputs. Specifies the column definitions for the data input link.
• Outputs. Specifies the column definitions for the data output link.
Defining Aggregator Input Data
Data to be aggregated is passed from a previous stage in the job design and into the Aggregator stage via a single input link. The properties of this link and the column definitions of the data are defined on the Inputs page in the Aggregator Stage dialog box.
Note: The Aggregator stage does not preserve the order of input rows, even when the incoming data is already sorted.
The Inputs page has the following field and two tabs:
• Input name. The name of the input link to the Aggregator stage.
• General. Displayed by default. Contains an optional description of the link.
• Columns. Contains a grid displaying the column definitions for the data being written to the stage, and an optional sort order:
Column name: The name of the column.
Sort Order: Specifies the sort order. This field is blank by default, that is, there is no sort order. Choose Ascending for ascending order, Descending for descending order, or Ignore if you do not want the order to be checked.
Key: Indicates whether the column is part of the primary key.
SQL type: The SQL data type.
Length: The data precision. This is the length for CHAR data and the maximum length for VARCHAR data.
Scale: The data scale factor.
Nullable: Specifies whether the column can contain null values.
Display: The maximum number of characters required to display the column data.
Data element: The type of data in the column.
Description: A text description of the column.
Defining Aggregator Output Data
When you output data from an Aggregator stage, the properties of output links and the column definitions of the data are defined on the Outputs page in the Aggregator Stage dialog box.
The Outputs page has the following field and two tabs:
• Output name. The name of the output link. Choose the link to edit from the Output name drop-down list box. This list box displays all the output links from the stage.
• General. Displayed by default. Contains an optional description of the link.
• Columns. Contains a grid displaying the column definitions for the data being output from the stage. The grid has the following columns:
Column name. The name of the column.
Group. Specifies whether to group by the data in the column.
Derivation. Contains an expression specifying how the data is aggregated. This is a complex cell, requiring more than one piece of information. Double-clicking the cell opens the Derivation dialog box.
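As a brief illustration (hypothetical, reusing the CODE and QTY column names from the tutorial job, not a prescribed recipe): to total quantities per product code, you would tick Group for the CODE column and give the QTY column a derivation along the lines of:

    Sum(DSLink3.QTY)

where DSLink3 is the name of the input link.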
Transformer Stages
Transformer stages do not extract data or write data to a target database. They are used to handle extracted data, perform any conversions required, and pass data to another Transformer stage or a stage that writes data to a target data table.
Using a Transformer Stage
Transformer stages can have any number of inputs and outputs. The link from the main data input source is designated the primary input link. There can only be one primary input link, but there can be any number of reference inputs. When you edit a Transformer stage, the Transformer Editor appears. An example Transformer stage is shown below. In this example, metadata has been defined for the input and the output links.
Link Area
The top area displays links to and from the Transformer stage, showing their columns and the relationships between them. The link area is where all column definitions, key expressions, and stage variables are defined. The link area is divided into two panes; you can drag the splitter bar between them to resize the panes relative to one another. There is also a horizontal scroll bar, allowing you to scroll the view left or right. The left pane shows input links, the right pane shows output links. The input link shown at the top of the left pane is always the primary link; any subsequent links are reference links. For all types of link, key fields are shown in bold. Reference link key fields that have no expression defined are shown in red (or the color defined in Tools → Options), as are output columns that have no derivation defined.
Within the Transformer Editor, a single link may be selected at any one time. When selected, the link's title bar is highlighted, and arrowheads indicate any selected columns.
Metadata Area
The bottom area shows the column metadata for input and output links. Again this area is divided into two panes: the left showing input link metadata and the right showing output link metadata. The metadata for each link is shown in a grid contained within a tabbed page. Click the tab to bring the required link to the front; that link is also selected in the link area. If you select a link in the link area, its metadata tab is brought to the front automatically. You can edit the grids to change the column metadata on any of the links. You can also add and delete metadata.
Input Links
The main data source is joined to the Transformer stage via the primary link, but the stage can also have any number of reference input links. A reference link represents a table lookup. These are used to provide information that might affect the way the data is changed, but do not supply the actual data to be changed. Reference input columns can be designated as key fields. You can specify key expressions that are used to evaluate the key fields. The most common use for the key expression is to specify an equi-join, which is a link between a primary link column and a reference link column. For example, if your primary input data contains names and addresses, and a reference input contains names and phone numbers, the reference link name column is marked as a key field and the key expression refers to the primary link's name column. During processing, the name in the primary input is looked up in the reference input. If the names match, the reference data is consolidated with the primary data. If the names do not match, i.e., there is no record in the reference input whose key matches the expression given, all the columns specified for the reference input are set to the null value.
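As a concrete sketch of that equi-join (the link and column names here are invented for illustration), the key expression entered against the reference link's NAME key column would simply name the primary link column:

    DSLink1.NAME

During processing, each primary row's NAME value is then used to find the matching reference row.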
Output Links
You can have any number of output links from your Transformer stage. You may want to pass some data straight through the Transformer stage unaltered, but it's likely that you'll want to transform data from some input columns before outputting it from the Transformer stage. You can specify such an operation by entering a BASIC expression or by selecting a transform to apply to the data. DataStage has many built-in transforms, or you can define your own custom transforms that are stored in the Repository and can be reused as required. The source of an output link column is defined in that column's Derivation cell within the Transformer Editor. You can use the Expression Editor to enter expressions or transforms in this cell. You can also simply drag an input column to an output column's Derivation cell, to pass the data straight through the Transformer stage.
In addition to specifying derivation details for individual output columns, you can also specify constraints that operate on entire output links. A constraint is a BASIC expression that specifies criteria that data must meet before it can be passed to the output link. You can also specify a reject link, which is an output link that carries all the data not output on other links, that is, rows that have not met the criteria. Each output link is processed in turn. If the constraint expression evaluates to TRUE for an input row, the data row is output on that link. Conversely, if a constraint expression evaluates to FALSE for an input row, the data row is not output on that link. Constraint expressions on different links are independent. If you have more than one output link, an input row may result in a data row being output from some, none, or all of the output links.
For example, consider data that comes from a paint shop; it could include information about any number of different colors. If you want to separate the colors into different files, you would set up different constraints. You could output the information about green and blue paint on Link A, red and yellow paint on Link B, and black paint on Link C. When an input row contains information about yellow paint, the Link A constraint expression evaluates to FALSE and the row is not output on Link A. However, the input data does satisfy the constraint criterion for Link B and the row is output on Link B. If the input data contains information about white paint, this does not satisfy any constraint and the data row is not output on Links A, B, or C, but will be output on the reject link. The reject link is used to route data to a table or file that is a "catch-all" for rows that are not output on any other link. The table or file containing these rejects is represented by another stage in the job design.
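A minimal sketch of such constraint expressions, assuming an input link called DSLink3 with a COLOR column (names invented for illustration):

    Link A constraint:  DSLink3.COLOR = "Green" Or DSLink3.COLOR = "Blue"
    Link B constraint:  DSLink3.COLOR = "Red" Or DSLink3.COLOR = "Yellow"
    Link C constraint:  DSLink3.COLOR = "Black"

A row whose COLOR is "White" fails all three expressions, so it travels down the reject link only.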
Inter-Process Stages
An inter-process (IPC) stage is a passive stage which provides a communication channel between DataStage processes running simultaneously in the same job. It allows you to design jobs that run on SMP systems with great performance benefits. To understand the benefits of using IPC stages, you need to know a bit about how DataStage jobs actually run as processes.
In this example the job will run as two processes: one handling the communication from the Sequential File stage to the IPC stage, and one handling communication from the IPC stage to the ODBC stage. As soon as the Sequential File stage has opened its output link, the IPC stage can start passing data to the ODBC stage. If the job is running on a multi-processor system, the two processes can run simultaneously so the transfer will be much faster. You can also use the IPC stage to explicitly specify that connected active stages should run as separate processes. This is advantageous for performance on multi-processor systems. You can also specify this behavior implicitly by turning inter-process row buffering on, either for the whole project via the DataStage Administrator, or individually for a job in its Job Properties dialog box.
Using the IPC Stage
When you edit an IPC stage, the InterProcess Stage dialog box appears.
This dialog box has three pages:
• Stage. The Stage page has two tabs, General and Properties. The General tab allows you to specify an optional description of the stage. The Properties tab allows you to specify stage properties.
• Inputs. The IPC stage can only have one input link. The Inputs page displays information about that link.
• Outputs. The IPC stage can only have one output link. The Outputs page displays information about that link.
Defining IPC Stage Properties
The Properties tab allows you to specify two properties for the IPC stage:
• Buffer Size. Defaults to 128 KB. The IPC stage uses two blocks of memory; one block can be written to while the other is read from. This property defines the size of each block, so that by default 256 KB is allocated in total.
• Timeout. Defaults to 10 seconds. This gives a time limit for how long the stage will wait for a process to connect to it before timing out. This normally will not need changing, but may be important where you are prototyping multi-processor jobs on single processor platforms and there are likely to be delays.
Defining IPC Stage Input Data
The IPC stage can have one input link. This is where the process that is writing connects. The Inputs page has two tabs: General and Columns.
• General. The General tab allows you to specify an optional description of the stage.
• Columns. The Columns tab contains the column definitions for the data on the input link. This is normally populated by the metadata of the stage connecting on the input side. You can also Load a column definition from the Repository, or type one in yourself (and Save it to the Repository if required). Note that the metadata on the input link must be identical to the metadata on the output link.
Defining IPC Stage Output Data
The IPC stage can have one output link. This is where the process that is reading connects. The Outputs page has two tabs: General and Columns.
• General. The General tab allows you to specify an optional description of the stage.
• Columns. The Columns tab contains the column definitions for the data on the output link. This is normally populated by the metadata of the stage connecting on the input side. You can also Load a column definition from the Repository, or type one in yourself (and Save it to the Repository if required). Note that the metadata on the output link must be identical to the metadata on the input link.
Link Partitioner Stage:
The Link Partitioner stage is an active stage which takes one input and allows you to distribute partitioned rows to up to 64 output links. The stage expects the output links to use the same metadata as the input link. Partitioning your data enables you to take advantage of a multi-processor system and have the data processed in parallel. It can be used in conjunction with the Link Collector stage to partition data, process it in parallel, and then collect it together again before writing it to a single target.
In order for this job to compile and run as intended on a multi-processor system you must have inter-process buffering turned on, either at project level using the DataStage Administrator, or at job level from the Job Properties dialog box.
Defining Link Partitioner Stage Properties
The Properties tab allows you to specify two properties for the Link Partitioner stage:
• Partitioning Algorithm. Use this property to specify the method the stage uses to partition data. Choose from:
– Round-Robin. This is the default method. Using the round-robin method the stage will write each incoming row to one of its output links in turn.
– Random. Using this method the stage will use a random number generator to distribute incoming rows evenly across all output links.
– Hash. Using this method the stage applies a hash function to one or more input column values to determine which output link the row is passed to.
– Modulus. Using this method the stage applies a modulus function to an integer input column value to determine which output link the row is passed to.
• Partitioning Key. This property is only significant where you have chosen a partitioning algorithm of Hash or Modulus. For the Hash algorithm, specify one or more column names separated by commas. These keys are concatenated and a hash function applied to determine the destination output link. For the Modulus algorithm, specify a single column name which identifies an integer numeric column. The value of this column determines the destination output link.
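A quick worked example of the Modulus method (the column name PARTNO and the link count are hypothetical): with four output links and PARTNO as the partitioning key, the routing decision is equivalent to the BASIC expression:

    LinkNumber = Mod(DSLink3.PARTNO, 4)

so a row with PARTNO 10 goes to link 2 (10 mod 4 = 2), and rows that share a key value always land on the same output link.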
Link Collector Stages
The Link Collector stage is an active stage which takes up to 64 inputs and allows you to collect data from these links and route it along a single output link. The stage expects the output link to use the same metadata as the input links. The Link Collector stage can be used in conjunction with a Link Partitioner stage to enable you to take advantage of a multi-processor system and have data processed in parallel. The Link Partitioner stage partitions data, it is processed in parallel, then the Link Collector stage collects it together again before writing it to a single target. The following diagram illustrates how the Link Collector stage can be used in a job in this way.
In order for this job to compile and run as intended on a multi-processor system you must have inter-process buffering turned on, either at project level using the DataStage Administrator, or at job level from the Job Properties dialog box.
Defining Link Collector Stage Properties
The Properties tab allows you to specify two properties for the Link Collector stage:
• Collection Algorithm. Use this property to specify the method the stage uses to collect data. Choose from:
– Round-Robin. This is the default method. Using the round-robin method the stage will read a row from each input link in turn.
– Sort/Merge. Using the sort/merge method the stage reads multiple sorted inputs and writes one sorted output.
• Sort Key. This property is only significant where you have chosen a collecting algorithm of Sort/Merge. It defines how each of the partitioned data sets is known to be sorted and how the merged output will be sorted. The key has the following format:
Column name [sort order] [, Column name [sort order]]...
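For example (column names invented, and assuming the Ascending/Descending keywords offered in the sort order field), a sort key such as:

    CustKey Ascending, OrderDate Descending

declares that every input link is sorted by CustKey ascending, then OrderDate descending, and that the merged output should preserve that order.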
INFORMATICA vs DATASTAGE:
1) System Requirement
1.1 Platform Support
1.1.1 Informatica: Win NT/Unix
1.1.2 DataStage: Win NT/Unix/more platforms
2) Deployment Facility
2.1 Ability to handle initial deployment, major releases, minor releases and patches with equal ease
2.1.1 Informatica: Yes
2.1.2 DataStage: No
My experience has been that INFA is definitely easier to implement initially and upgrade. Ascential has done a good job in recent releases to improve, but IMHO INFA still does this better.
3) Transformations
3.1 Number of available transformation functions
3.1.1 Informatica: 58
3.1.2 DataStage: 28
DS has many more canned transformation functions than 28. I'm not sure what leads you to this number, but I'd recheck it if I were you.
3.2 Support for looping the source row (For/While loop)
3.2.1 Informatica: Supports comparing the immediate previous record
3.2.2 DataStage: Does not support
3.3 Slowly Changing Dimension
3.3.1 Informatica: Supports full history, recent values, current & previous values
3.3.2 DataStage: Supports only through custom scripts. Does not have a wizard to do this.
DS has a component called ProfileStage that handles this type of comparison. You'll want to use it judiciously in your production processing because it does take extra resources to use it, but I have found it to be very useful.
3.4 Time Dimension generation
3.4.1 Informatica: Does not support
3.4.2 DataStage: Does not support
3.5 Rejected Records
3.5.1 Informatica: Can be captured
3.5.2 DataStage: Cannot be captured in a separate file
DS absolutely has the ability to capture rejected records in a separate file. That's a pretty basic capability and I don't know of any ETL tool that can't do it...
3.6 Debugging Facility
3.6.1 Informatica: Not supported
3.6.2 DataStage: Supports basic debugging facilities for testing
4) Application Integration Functionality
4.1 Support for real-time data exchange
4.1.1 Informatica: Not available
4.1.2 DataStage: Not available
The 7.x version of DS has a component to handle real-time data exchange. I have not personally used it yet, but you should look into it. I think it is called RTE.
4.2 Support for CORBA/XML
4.2.1 Informatica: Does not support
4.2.2 DataStage: Does not support
5) Metadata
5.1 Ability to view & navigate metadata on the web
5.1.1 Informatica: Does not support
5.1.2 DataStage: Job sessions can be monitored using Informatica Classes
This is completely not true. DS has a very strong metadata component (MetaStage) that works not only with DS, but also has plug-ins to work with modeling tools (like ERwin) and BI tools (like Cognos). This is one of their strong suits (again, IMHO).
5.2 Ability to customize views of metadata for different users (DBA vs. business user)
5.2.1 Informatica: Supports
5.2.2 DataStage: Not available
Also not true - MetaStage allows publishing of metadata in HTML format for different types of users. It is completely customizable.
5.3 Metadata repository can be stored in an RDBMS
5.3.1 Informatica: Yes
5.3.2 DataStage: No. But the proprietary metadata can be moved to an RDBMS using the DOC Tool.
6) Support And Maintenance
6.1 Command line operation
6.1.1 Informatica: pmcmd - server interface for command line
6.1.2 DataStage: Not available
6.2 Ability to maintain versions of mappings
6.2.1 Informatica: Yes
6.2.2 DataStage: No
Not true - this has been a weak spot for DS in past releases, but the 7.x version of DS has a good versioning tool.
7) Job Controlling & Scheduling
7.1 Alerts like sending mails
7.1.1 Informatica: Supported
7.1.2 DataStage: Does not support directly (no option), but it is possible to call custom programs after the job gets executed.
Milind - I've got to ask - where are you getting your information from??? I have done ETL tool comparisons for several clients over the past 7 or so years. They are both good tools with different strengths so it really depends on what your organizations needs / priorities are as to which one is "better". I have spent much more time in the past couple of years on DS than INFA so I don't feel I can speak to the changes INFA has made lately, but I know you have incorrect info about DS. I am currently working with a client on DS v7.1. I've made a few comments below for the more glaring inaccuracies or topics where I have up-to-date experience. I suggest you re-research and perhaps do a proof-of-concept with each vendor. FYI - I don't know if you have looked at the Parallel Extender component of DS 7.x, but it is a terrific capability if you have challenges with meeting availability requirements. It is one of the most impressive changes Ascential has made lately (IMHO).
Gartner has vendor reports on Ascential and Informatica. They also have a magic quadrant that lists both DataStage and Informatica as the clear market leaders. I don't think you can go wrong with either product; it comes down to whether you can access experts in these products for your project and what options you have for training. I think if you go into a major project with either product and you don't have an expert on your team it can go badly wrong.
Further mistakes in your comparison, mainly from a DataStage based angle as my experience is with that product:
☻Page 98 of 243☻
-
-
-
-
-
-
-
-
Both Both Data DataSt Stag agee and and Info Inform rmat atic icaa supp suppor ortt XML. XML. DataSt DataStage age comes comes with with XML input, input, transf transform ormati ation on and output stages. Both produ oducts have ave an unli nlimited num number ber of transform transformation ation functions since you can easily easily write write your own using the command interface. Both products have options for integrating with ERP systems such as SAP, PeopleSoft and Seibel but these come come at a signific significant ant extra extra cost. You may need need to evaluate these. SAP is a reseller reseller of DataStage DataStage for SAP BW, BW, Peop People leSo Soft ft bund bundle less Data DataSt Stag agee in its its EPM EPM products. DataStage has some very good debugging facilities including the ability to step through a job link by link or row by row and watch data values as a job executes. Also server side tracing. DataSt DataStage age 7.x releas releases es have have intell intelligen igentt assist assistant antss (wizards) for creating the template jobs for each type of slow slowly ly chan changi ging ng dime dimens nsio ion n tabl tablee load loads. s. Th Thee DataStage Best Practices course also provides training in DW loadin ding with SCD and surrogate key techniques. Ascential and Informatica both have robust metadata manage managemen mentt product products. s. Ascent Ascential ial MetaSt MetaStage age comes comes bundled free with DataStage Enterprise and manages meta metada data ta via via a hub hub and and spok spokee arch archit itec ectu ture re.. It can can import metadata from a wide range of databases and modelling tools and has a high degree of interaction with DataStage for operational metadata. Informatica SuperGlue was released last year and is rated more highly highly by Gartner in the metadata field. It integrates integrates clos closel ely y with with Powe PowerC rCen ente terr prod produc ucts ts.. Th They ey both both suppor supportt multip multiple le views views (busin (business ess and techni technical cal)) of metadata plus the functions you would expect such as impact analysis, semantics and data lineage. DataStage DataStage can send emails. emails. The sequence sequence job has an email stage stage that is easy to configure. configure. DataStage DataStage 7.5 also also has has new new mobi mobile le devi device ce supp suppor ortt so you you can can admi admini nist ster er your your DataS DataSta tage ge jobs jobs via via a palm palm pilo pilot. t. There are also 3rd party web based tools that let you run and review jobs over a browser. I found it easy easy to send send sms admin messages messages from a DataSt DataStage age Unix Unix server. DataStage DataStage has a command line interface. interface. The dsjob command command can be used by any scheduling tool or from the command line to run jobs and check the results and logs of jobs.
- Both products integrate well with Trillium for data quality; DataStage also integrates with QualityStage, which is the preferred method of address cleansing and fuzzy matching.
How Should We Implement A Slowly Changing Dimension?

Currently, our data warehouse has only Type 1 Slowly Changing Dimensions (SCD). That is to say, we overwrite the dimension record with every update. The problem with that is when data changes, it changes for all history; while this is valid for data entry corrections, it may not be valid for all data. An acceptable example could be Customer Date of Birth: if the date of birth was changed, chances are the reason was that the data was incorrect. However, if the Customer address were changed, this may and probably does mean the customer moved. If we simply overwrite the address, then all sales for that customer will belong to the new address. Suppose the customer moved from Florida to Ohio. If we were trying to track sales patterns by region, all of the customer's purchases that were made in Florida would now appear to have been made in Ohio.

Type 1 Slowly Changing Dimension

Customer Dimension
ID    CustKey  Name        DOB         City   State
1001  BS001    Bob Smith   6/8/1961    Tampa  FL
1002  LJ004    Lisa Jones  10/15/1954  Miami  FL
Customer Dimension After Edits

ID    CustKey  Name        DOB         City    State
1001  BS001    Bob Smith   6/8/1961    Dayton  OH
1002  LJ004    Lisa Jones  10/15/1954  Miami   FL
In the example above, the DOB change doesn't affect any dimensional reporting facts. However, the City/State change would have an effect: all sales for Bob Smith would now appear to come from Dayton, Ohio rather than from Tampa, Florida. The solution we have chosen for solving this problem is to implement a Type 2 slowly changing dimension. A Type 2 SCD records a separate row each time a value is changed in the dimension. In our case, we are declaring that we will only create a new dimension record when certain columns are changed. In the example above, we would not record a new record for the DOB change, but we would for the address change.

Type 2 Slowly Changing Dimension

Customer Dimension
ID    CustKey  Name        DOB         City   St  Curr  Effective Date
1001  BS001    Bob Smith   6/8/1961    Tampa  FL  Y     5/1/2004
1002  LJ004    Lisa Jones  10/15/1954  Miami  FL  Y     5/2/2004

Customer Dimension After Edits

ID    CustKey  Name        DOB         City    St  Curr  Effective Date
1001  BS001    Bob Smith   6/8/1961    Tampa   FL  N     5/1/2004
1002  LJ004    Lisa Jones  10/15/1954  Miami   FL  Y     5/2/2004
1003  BS001    Bob Smith   6/8/1961    Dayton  OH  Y     5/27/2004
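A minimal sketch of the "new row only when tracked columns change" rule as a Transformer derivation. The link and column names (In.City, Lkp.City, etc.) are hypothetical; in practice you would compare the incoming row against a lookup of the current dimension row:

* hypothetical derivation for a NewVersionFlag stage variable:
* "Y" when a tracked attribute differs from the current dimension row,
* so the job knows to expire the old row and insert row 1003 above
If In.City <> Lkp.City Or In.State <> Lkp.State Then "Y" Else "N"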
As you can see, there are two dimension records for Bob Smith now. They both have the same CustKey values, but they have different ID values. All future fact table rows will use the new ID to link to the Customer dimension. This is accomplished by the use of the Current Flag: the ETL process looks only at the current flag when recording new orders. However, in the case of an update to an order, the Effective Date must be used to determine which customer record the update applies to. The primary issue with Type 2 SCD is that the volume of data grows quickly as more changes are tracked. This can impact performance in a star schema. The principle behind the star schema design is that while fact tables have few columns, they have many rows, and they only have to perform single-level joins to resolve their dimensions. The assumption is that the dimensions have lots of columns but relatively few rows. This allows for very fast joining of data.

Conforming Dimensions
For the purposes of this discussion, conforming dimensions only need a brief definition. Conforming dimensions are a feature of star schemas that allow facts to share dimensional data. A conforming dimension occurs when two dimensions share the same keys; often they have different attributes. The goal is to ensure that any fact table can link to the conforming dimension and consume its data so long as the dimension is relevant.

Conforming Dimension

Customer Dimension
ID    CustKey  Name        DOB         City   State
1001  BS001    Bob Smith   6/8/1961    Tampa  FL
1002  LJ004    Lisa Jones  10/15/1954  Miami  FL

Billing Dimension

ID    Bill2Ky  Name        Account Type  Credit Limit  CustKey
1001  9211     Bob Smith   Credit        $10,000       BS001
1002  23421    Lisa Jones  Cash          $100          LJ004

In the example above, we could use the ID from the Customer dimension in a fact, and in the future a link to the Billing dimension could be established without having to reload the data. We are considering a slight modification to the standard Type 2 SCD. The idea is to maintain two dimensions, one as a Type 1 and one as a Type 2. The problem with this is we lose the ability to use conforming dimensions.

Type 2 and Type 1 Slowly Changing Dimension

Customer Dimension Type 1
ID    CustKey  Name        DOB         City    St  Curr  Effective Date
1001  BS001    Bob Smith   6/8/1961    Dayton  OH  Y     5/1/2004
1002  LJ004    Lisa Jones  10/15/1957  Miami   FL  Y     5/2/2004

Customer Dimension Type 2

ID    CustKey  Name        DOB         City    St  Curr  Effective Date
1001  BS001    Bob Smith   6/8/1961    Tampa   FL  N     5/1/2004
1002  LJ004    Lisa Jones  10/15/1957  Miami   FL  Y     5/2/2004
1003  BS001    Bob Smith   6/8/1961    Dayton  OH  Y     5/27/2004
As you can see, the current ID for Bob Smith in the Type 1 SCD is 1001, while it is 1003 in the Type 2 SCD. This is not conforming. Our solution is to create a composite key for the Type 2 SCD.

Type 2 and Type 1 Slowly Changing Dimension

Customer Dimension Type 1
ID    CustKey  Name        DOB         City    St
1001  BS001    Bob Smith   6/8/1961    Dayton  OH
1002  LJ004    Lisa Jones  10/15/1957  Miami   FL

Customer Dimension Type 2

ID    SubKey  CustKey  Name        DOB         City    St  Curr  Eff Date
1001  001     BS001    Bob Smith   6/8/1961    Tampa   FL  N     5/1/2004
1002  001     LJ004    Lisa Jones  10/15/1957  Miami   FL  Y     5/2/2004
1001  002     BS001    Bob Smith   6/8/1961    Dayton  OH  Y     5/27/2004
In the example above, the Type 1 and the Type 2 dimensions conform on the ID level. If a fact needs the historical data, it will consume both the ID and the SubKey.
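A minimal sketch of how such a composite key might be built in a DataStage BASIC derivation. The column names DimID and VersionNo are hypothetical stand-ins for the ID and SubKey above:

* hypothetical derivation producing a composite key such as "1001-002";
* Fmt right-justifies VersionNo and pads it to three digits with zeros
CompositeKey = DimID : "-" : Fmt(VersionNo, "3'0'R")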
BEFORE YOU DESIGN YOUR APPLICATION

You must assess your data. DataStage jobs can be quite complex, so it is advisable to consider the following before starting a job:

• The number and type of data sources. You will need a stage for each data source you want to access. For each different type of data source you will need a different type of stage.
• The location of the data. Is your data on a networked disk or a tape? You may find that if your data is on a tape, you will need to arrange for a custom stage to extract the data.
• Whether you will need to extract data from a mainframe source. If this is the case, you will need Enterprise MVS Edition installed and you will use mainframe jobs that actually run on the mainframe.
• The content of the data. What columns are in your data? Can you import the table definitions, or will you need to define them manually? Are definitions of the data items consistent between data sources?
• The data warehouse. What do you want to store in the data warehouse and how do you want to store it?
To assign a null value to a variable, use this syntax:

variable = @NULL

To assign a character string containing only the character used to represent the null value to a variable, use this syntax:

variable = @NULL.STR
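A minimal sketch of the difference between the two, assuming a routine context where Ans is the return value:

MyVar = @NULL          ;* MyVar holds the null value itself
MyStr = @NULL.STR      ;* MyStr holds the character that represents null
If IsNull(MyVar) Then Ans = "null"   ;* true for MyVar
If IsNull(MyStr) Then Ans = "never"  ;* false: MyStr is an ordinary string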
Errors that occur as the files are loaded into Oracle are recorded in the sqlldr log file. Rejected rows are written to the bad file. The main reason for rejected rows is an integrity constraint in the target table; for example, null values in NOT NULL columns, non-unique values in UNIQUE columns, and so on. The bad file is in the same format as the input data file.

String operators for:
• Concatenating strings with Cat or :
• Extracting substrings with [ ]
"Hello. " : "My name is " : X : ". What's yours?"

...evaluates to (assuming X contains "Tarzan"):

"Hello. My name is Tarzan. What's yours?"

Field Function: Returns delimited substrings in a string.

MyString = "London+0171+NW2+AZ"
SubString = Field(MyString, "+", 2, 2)
* returns "0171+NW2"

Substring assignment with [ ]:

A = "12345"
A[3] = 1212
* A is now "121212": the last three characters "345" are replaced by "1212"

Fieldstore Function: Replaces delimited substrings in a string.

MyString = "1#2#3#4#5"
String = Fieldstore(MyString, "#", 2, 2, "A#B")
* above results in: "1#A#B#4#5"
Operator               Relation                   Example
Eq or =                Equality                   X = Y
Ne or # or >< or <>    Inequality                 X # Y, X <> Y
Lt or <                Less than                  X < Y
Gt or >                Greater than               X > Y
Le or <= or =< or #>   Less than or equal to      X <= Y
Ge or >= or => or #<   Greater than or equal to   X >= Y
You cannot use relational operators to test for a null value; use the IsNull function instead, which tests if a variable contains a null value.

MyVar = @Null  ;* sets variable to null value
If IsNull(MyVar * 10) Then
   * Will be true since any arithmetic involving a null value
   * results in a null value.
End

If Operator: Assigns a value that meets the specified conditions.

* Return A or B depending on value in Column1:
If Column1 > 100 Then "A" Else "B"
Function MyTransform(Arg1, Arg2, Arg3)
* Then and Else clauses occupying a single line each:
If Arg1 Matches "A..." Then Reply = 1 Else Reply = 2
* Multi-line clauses:
If Len(Arg1) > 10 Then
   Reply += 1
   Reply = Arg2 * Reply
End Else
   Reply += 2
   Reply = (Arg2 - 1) * Reply
End
* Another style of multi-line clauses:
If Len(Arg1) > 20 Then
   Reply += 2
   Reply = Arg3 * Reply
End Else
   Reply += 4
   Reply = (Arg3 - 1) * Reply
End
Return(Reply)
Calls a subroutine. Not available in expressions.
Syntax: Call subroutine [ ( argument [ , argument ] ... ) ]

Subroutine MyRoutineA(InputArg, ErrorCode)
   ErrorCode = 0  ;* set local error code
   * When calling a user-written routine that is held in the
   * DataStage Repository, you must add a "DSU." prefix.
   * Be careful to supply another variable for the called
   * routine's 2nd argument so as to keep it separate from our own.
   Call DSU.MyRoutineB("First argument", ErrorCodeB)
   If ErrorCodeB <> 0 Then
      ...  ;* called routine failed - take action
   End
Return
Special DataStage BASIC Subroutines

DataStage provides some special BASIC subroutines for use in before/after subroutines or custom transforms. You can:

• Log events in the job's log file using DSLogInfo, DSLogWarn, DSLogFatal, and DSTransformError
• Execute DOS or DataStage Engine commands using DSExecute

All the subroutines are called using the Call statement.
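A minimal sketch combining two of these calls inside a hypothetical before-job subroutine; the command string and routine name are placeholders:

* run an OS command and log a warning if it fails
Call DSExecute("UNIX", "ls /tmp/landing", Output, SystemReturnCode)
If SystemReturnCode <> 0 Then
   Call DSLogWarn("Command failed with status " : SystemReturnCode, "MyBeforeSub")
End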
Logs an information message in a job's log file.
Syntax Call DSLogInfo (Message, CallingProgName)
Example Call DSLogInfo("Transforming: ":Arg1, "MyTransform")
Date( ) : Returns a date in its internal system format.
This example shows how to turn the current date in internal form into a string representing the next day: Tomorrow = Oconv(Date() + 1, "D4/YMD") ;* "1997/5/24"
Ereplace Function: Replaces one or more instances of a substring.

Syntax: Ereplace ( string, substring, replacement [ , number [ , begin ] ] )
MyString = "AABBCCBBDDBB" NewString = Ereplace(MyString, "BB", "") * The result is "AACCDD" = FMT("1234567", "14R2") X = FMT("1234567", "14R2$,")X = FMT("12345", "14*R2$,") X = FMT("1234567", "14L2") X = FMT("0012345", "14R") X = FMT("0012345", "14RZ") X = FMT("00000", "14RZ") X = FMT("12345", "14'0'R") X = FMT("ONE TWO THREE", "10T") X = FMT("ONE TWO THREE", ""1 10R") X = FMT("AUSTRALIANS", "5T") X = FMT("89", "R#####") X = FMT FMT(" ("61 6179 7932 3283 8323 23", ", "L## "L####-## #### #### ###" #")) X = FMT("123456789", ""L L#3-#3-#3") X = FMT("123456789", "R#5") X = FMT("67890", "R#10") X = FMT("123456789", "L#5") X = FMT("12345", "L#10") X = FMT("123456", "R##-##-##") X = FMT("555666898", "20*R2$,") X = FMT("DAVID", "10.L") X = FMT("24500", "10R2$Z") X = FMT("0.12345678E1", "9*Q")
X = "1234567.00" X = " $1,234,567.00" X = "1234567.00" X = "0012345" X = "12345" X="" X = "00000000012345" X = "ONE TWO ":T:"THREE " X = "ONE TWO TH TH":T:"REE " X = "AUSTR":T:"ALIAN":T:"S " X = " 89" X = "617 "617-9 -932 3283 8323 23"" X = "123-456-789" X = "56789" X = " 67890" X = "12345" X = "12345 " X = "12-34-56" X = "*****$555,666,898.00" X = "DAVID....." X = " $24500.00" X = "*1.2346E0"
☻Page 108 of 243☻
X = FMT("233779", "R")
X = "233779"
Date Conversions

The following examples show the effect of various D (Date) conversion codes with Iconv.

Conversion Expression                       Internal Value
X = Iconv("31 DEC 1967", "D")               X = 0
X = Iconv("27 MAY 97", "D2")                X = 10740
X = Iconv("05/27/97", "D2/")                X = 10740
X = Iconv("27/05/1997", "D/E")              X = 10740
X = Iconv("1997 5 27", "D YMD")             X = 10740
X = Iconv("27 MAY 97", "D DMY[,A3,2]")      X = 10740
X = Iconv("5/27/97", "D/MDY[Z,Z,2]")        X = 10740
X = Iconv("27 MAY 1997", "D DMY[,A,]")      X = 10740
X = Iconv("97 05 27", "DYMD[2,2,2]")        X = 10740
Date Conversions

The following examples show the effect of various D (Date) conversion codes with Oconv.

Conversion Expression                External Value
X = Oconv(0, "D")                    X = "31 DEC 1967"
X = Oconv(10740, "D2")               X = "27 MAY 97"
X = Oconv(10740, "D2/")              X = "05/27/97"
X = Oconv(10740, "D/E")              X = "27/05/1997"
X = Oconv(10740, "D-YJ")             X = "1997-147"
X = Oconv(10740, "D2*JY")            X = "147*97"
X = Oconv(10740, "D YMD")            X = "1997 5 27"
X = Oconv(10740, "D MY[A,2]")        X = "MAY 97"
X = Oconv(10740, "D DMY[,A3,2]")     X = "27 MAY 97"
X = Oconv(10740, "D/MDY[Z,Z,2]")     X = "5/27/97"
X = Oconv(10740, "D DMY[,A,]")       X = "27 MAY 1997"
X = Oconv(10740, "DYMD[2,2,2]")      X = "97 05 27"
X = Oconv(10740, "DQ")               X = "2"
X = Oconv(10740, "D MA")             X = "MAY"
X = Oconv(10740, "DW")               X = "2"
X = Oconv(10740, "DWA")              X = "TUESDAY"
OpenSeq ".\ControlFiles\File1" To PathFvar Locked FilePresent = @True End Then FilePresent = @True End Else FilePresent = @False End
Example

This example shows how a before/after routine must be declared as a subroutine at DataStage release 2. The DataStage Manager will automatically ensure this when you create a new before/after routine.

Subroutine MyRoutine(InputArg, ErrorCode)
   * Users can enter any string value they like when using
   * MyRoutine from within the Job Designer. It will appear
   * in the variable named InputArg.
   * The routine controls the progress of the job by setting
   * the value of ErrorCode, which is an Output argument.
   * Anything non-zero will stop the stage or job.
   ErrorCode = 0  ;* default reply
   * Do some processing...
   ...
Return

Trim examples:

MyStr = Trim("  String with whitespace   ")
* ...returns "String with whitespace"
MyStr = Trim("..Remove..redundant..dots....", ".")
* ...returns "Remove.redundant.dots"
MyStr = Trim("Remove..all..dots....", ".", "A")
* ...returns "Removealldots"
MyStr = Trim("Remove..trailing..dots....", ".", "T")
* ...returns "Remove..trailing..dots"
This list groups BASIC functionality under tasks to help you find the right statement or function to use:

• Compiler Directives
• Declaration
• Job Control/Job Status
• Program Control
• Sequential Files Processing
• String Verification and Formatting
• Substring Extraction and Formatting
• Data Conversion
• Data Formatting
• Locales
Function MyTransform(Arg1)
   Begin Case
      Case Arg1 = 1
         Reply = "A"
      Case Arg1 = 2
         Reply = "B"
      Case Arg1 > 2 And Arg1 < 11
         Reply = "C"
      Case @True  ;* all other values
         Call DSTransformError("Bad arg: " : Arg1, "MyTransform")
         Reply = ""
   End Case
Return(Reply)
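A quick usage sketch of the transform above; the results are shown as comments, assuming it is invoked like any other DataStage transform:

Out1 = MyTransform(2)    ;* "B"
Out2 = MyTransform(7)    ;* "C" (falls in the 2 < Arg1 < 11 case)
Out3 = MyTransform(99)   ;* "" after logging "Bad arg: 99"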
DATASTAGE 7.5x1 GUI FEATURES

New and expanded functionality to aid DataStage users in job design and debugging.

• New Stored Procedure Stage:

A new stored procedure stage allows users to easily use Oracle stored procedures written in PL/SQL via OCI. The Stored Procedure Stage supports input and output parameters, making it easier to get information back from a stored procedure. It can return a result set via output parameters and can return more than one row if the procedure uses cursors. The stage can also execute a stored function and return status information from the procedure.
• HTML Job Reporting from the Designer:

A detailed printable HTML-format job report can be generated for the currently open job or shared container. The report can be produced using the new menu option in Designer: File -> Generate Report. The final HTML report can be customized by applying different XSL style sheets to the generated XML file.
• Changes to File & Directory Browser Form:

The old-style File & Directory browser form has been replaced with one modeled on the standard Windows 2K browser. The new browser provides enhanced functionality for directory navigation (tree-oriented) and file selection; it has filtering as well as saving and restoring capabilities for the last viewed file list.
• Ability to globally set Annotation properties:

The Annotation properties dialog is presented in the Tools -> Options dialog and the settings are saved in the registry per user. The Annotation stage always defaults to the saved settings.
• Ability to unset Environment Variables when a Job Runs ($UNSET):

A special value $UNSET was introduced for cases where a user-defined environment variable needs to explicitly unset the Unix environment variable to indicate false.

A dialog that explains all the special environment variable values and what they are for is available by double-clicking at:
- Job properties dialog, Parameters tab, when editing the Default Value cell for a job parameter defined as an environment variable.
- Admin Client, Environment dialog, when editing a value cell.
Article-II:

• Transformer "Cancel" operation:

If the Cancel button or key is pressed from the main Transformer dialog and changes have been made, a confirmation message box is displayed to check that the user wants to quit without saving the changes. If no changes have been made, no confirmation message is displayed.
• Multi-Client Manager:

The previously unsupported "Client Switcher" tool has been enhanced and integrated into the DataStage Client. This tool allows users to install and switch between multiple different versions of the client. Switching between them also changes the desktop shortcuts and the Start Menu group to point to another installed DataStage client.
Enterprise Edition:

• Complex Flat File Stage:

A new Parallel Complex Flat File stage has been added to read or write files that contain complex structures (for example groups, arrays, redefines, occurs depending on, etc.). Arrays from a complex source can be passed as-is or optionally flattened or normalized.
• Parallel Job Runtime Message Handling:

When DataStage parallel jobs are run they can generate a large number of messages that are logged and may be viewed in the Director Client.
Message Handlers allow the user to customize the severity of individual messages and can be applied at project or job level. Messages can be suppressed from the log (Information and Warning messages only), promoted (from Information to Warning) or demoted (from Warning to Information). A message handler management tool (available from DS Manager and Director) provides options to edit, add or delete message handlers. A new Director option allows message handling to be enabled/disabled for the current job.
• Visual Cues in Designer – design-time job validation:

For parallel jobs (including parallel shared containers) and job sequences, errors that would occur during compilation are optionally presented on the canvas without requiring the user to explicitly compile the job. If there are potential problems with a stage that would cause a compilation error, a warning triangle icon (Visual Cue) is shown on top of the stage. When the user hovers the mouse over a stage with a Visual Cue, a tool tip is displayed. Visual Cues can be turned off via a toolbar button (a 'tick' image).
• Additional properties for Parallel Job Stages:

- File Name Column (optional) – adds a column to the stage output that contains the name of the file that the record is sourced from. Available on Sequential File and File Set Stages.
- Source Name Column (optional) – adds a column to the stage output that contains the name of the source that the record is sourced from. Available on External Source Stage.
- Row Number Column (optional) – adds a column to the stage output that contains the row number of the record. Available on Sequential File, File Set and External Source Stages.
- Read First Rows (optional) – constrains the stage to only read the specified number of rows from each file. Available on Sequential File Stage.
- First Line is Column Names (mandatory) – on reading, this tells the stage to ignore the first line since it contains column names; on writing, it causes the first line written to be the column names. Available on Sequential File Stage.

• View Data functionality on the Source & Target custom stages:

- View Data support was added to custom parallel stages for both sources and targets.
- "Show file" has been replaced with "View Data" for Parallel Job, Sequential File and File Set stages.
• New Parallel Job Advanced Developer's Guide:

A new Parallel Job Advanced Developer's Guide gives DataStage Enterprise Edition users information on efficient job design, stage usage, performance tuning, and more. It also documents all of the parallel environment variables available for use.
DATASTAGE & DWH INTERVIEW QUESTIONS

COMPANY: TCS (DataStage)

1. Tell about yourself?
2. Types of Stages? Examples?
3. What are active stages and passive stages?
4. Can you filter data in hashed file? (No)
5. Difference between sequential and hashed file?
6. How do you populate time dimension?
7. Can we use target hashed file as lookup? (Yes)
8. What is Merge Stage?
9. What is your role?
10. What is Job Sequencer?
11. What are stages in sequences?
12. How do you pass parameters?
13. What parameters you used in your project?
14. What are log tables?
15. What is job controlling?
16. Facts and dimension tables?
17. Confirmed dimensions?
18. Time dimension contains what data? (numeric data)
19. Difference between OLTP and OLAP?
20. Difference between star schema and snowflake schema?
21. What are hierarchies? Examples?
22. What are materialized views?
23. What is aggregation?
24. What is surrogate key? Is it used for both fact and dimension tables?
25. Why do you go for oracle sequence generator rather than datastage routine?
26. Flow of data in datastage?
27. Initial loading and incremental loading?
28. What is SCD? Types?
29. How do you develop SCD type2 in your project?
30. How do you load dimension data and fact data? Which is first?
31. Any idea about shell scripting and UNIX?
32. Difference between oracle function and procedure?
33. Difference between unique and primary key?
34. Difference between union and union all?
35. What is minus operator?
COMPANY: ACCENTURE (DataStage)

1. What is audit table?
2. If there is a large hash file and a smaller oracle table, and you are looking up from transformer in different jobs, which will be faster?
3. Tell me about SCD's?
4. How did you implement SCD in your project?
5. Do business people need to know the surrogate key?
6. What are derivations in transformer?
7. How do you use surrogate key in reporting?
8. Logs view in datastage, logs in Informatica – which is clearer?
9. Have you used audit table in your project?
10. What is keen? Have you used it in your project?
11. While developing your project, what are the considerations you take first, like performance or space?
12. What is job scheduler? Have you used it? How did you do it?
13. Have you used datastage parallel extender?
14. What is the Link Partitioner and link collector stage?
15. How does pivot stage work?
16. What is surrogate key? What is the importance of it? How did you implement it in your project?
17. Totally how many jobs did you develop and how many lookups did you use totally?
18. How do constraints in transformer work?
19. How will you declare a constraint in datastage?
20. How will you handle rejected data?
21. Where is the data stored in datastage?
22. Give me some performance tips in datastage?
23. Can we use sequential file as a lookup?
24. How does hash file stage lookup?
25. Why can't we use sequential file as a lookup?
26. What is data warehouse?
27. What is 'Star-Schema'?
28. What is 'Snowflake-Schema'?
29. What is difference between Star-Schema and Snowflake-Schema?
30. What is meant by surrogate key?
31. What is 'Conformed Dimension'?
32. What is Factless Fact Table?
33. When will we use connected and unconnected lookup?
34. Which cache supports connected and unconnected lookup?
35. What is the difference between SCD Type2 and SCD Type3?
36. Draw the ETL Architecture?
37. Draw the DWH Architecture?
38. What is materialized view?
39. What is procedure?
40. What is Function?
41. What is the difference between procedure and function?
42. What is trigger?
43. What are types of triggers?
COMPANY: SATYAM (DataStage)

1. Tell me about yourself?
2. What are the client components?
3. About administrator? With this, what do you do in your project?
4. What is your project and explain the process?
5. Informational dimensions?
6. Measures?
7. What is data mart size and data warehouse size?
8. Fact table? Dimension table?
9. Data Mart?
10. How do you clear source files?
11. Pivot Stage?
12. How do you find a link, if not found?
13. Difference between transformer and routine?
14. How do you secure your project?
15. How do you handle errors? Exception handlers?
16. How do you know how many rows were rejected?
17. How do you manage surrogate key in datastage?
18. What is lookup?
19. Aggregator Stage?
20. Universe Stage?
21. How do you merge two tables in datastage?
22. What is export and import?
23. What are integration testing, unit testing, performance testing?
24. UAT testing? (User Acceptance Testing)
25. Local, development, preproduction, production server?
COMPANY: SYNTEL, Mumbai (DataStage – Telephonic Interview)

Basic DWH:

1. Tell me about your current project?
2. What is your role or job profile in the project?
3. What is your Job profile?
4. What is dimension and fact?
5. What are types of dimensions?
6. What are confirmed dimensions?
7. What are generated dimensions?
8. What are slowly changing dimensions?
9. How many data marts in your project?
10. What is data mart name in your project?
11. What is the size of your data mart?
12. What is factless fact table? Give example.
13. How many fact tables are used in the project?
14. What is your fact table name in your project?
15. How many dimension tables used in the project?
16. What are the names of the dimension tables?
17. What is Schema? Types? Explain Star-Schema and Snowflake Schema with difference. Which schema did you use in your project? Why?
18. Why is star-schema called star-schema? Give example.
19. How frequently and from where do you get the data as source?
20. What is difference between data mart and data warehouse?
21. What is composite key?
22. What is surrogate key? When will you go for it?
23. What is dimensional modeling?
24. What are SCD and SGT? Difference between them? Example of SGT from your project.
25. How do you rate yourself in data warehouse?
26. What is the status of your current project?
DataStage:

27. How do you import your sources and targets? What are the types of sources and targets?
28. What do Active Stages and Passive Stages mean in datastage?
29. What is difference between Informatica and DataStage? Which do you think is best?
30. What are the stages you used in your project?
31. Whom do you report to?
32. What is orchestrate? Difference between orchestrate and datastage?
33. What is parallel extender? Have you worked on this?
34. What do you mean by parallel processing?
35. What is difference between Merge Stage and Join Stage?
36. What is difference between Copy Stage and Transformer Stage?
37. What is difference between ODBC Stage and OCI Stage?
38. What is difference between Lookup Stage and Join Stage?
39. What is difference between Change Capture Stage and Difference Stage?
40. What is difference between Hashed File and Sequential File?
41. What are different Joins used in Join Stage?
42. How do you decide when to go for join stage and lookup stage?
43. What is partition key? Which key is used in round robin partition?
44. How do you handle SCD in datastage?
45. What are Change Capture Stage and Change Apply Stages?
46. How many streams to the transformer can you give?
47. What is primary link and reference link?
48. What is routine? What are before and after subroutines? Are these run after/before job or stage?
49. Have you written any subroutines in your project?
50. What is Config File? Does each job have its own config file, or is one needed?
51. What is Node?
52. What is IPC Stage? How does it increase performance?
53. What is Sequential buffer?
54. What are Link Partitioner and Link Collector?
55. What are the performance tunings you have done in your project?
56. Did you do scheduling? How? Can you schedule a job at the end of every month? How?
57. What is job sequence? Have you run any jobs?
58. What is status view? Why do you clear this? If you clear the status view, what is done internally?
59. What is hashed file? What are the types of hashed file? Which do you use? What is default? What is the main advantage of hashed file? Difference between them (static and dynamic).
60. What are containers? Give example from your project.
61. Have you done any hardware configuration while running parallel jobs?
62. What are operators in parallel jobs?
63. What are parameters and parameter file?
64. Can you use variables? In which stages?
65. How do you convert columns to rows and rows to columns in datastage? (Using Pivot Stage)
66. What is Pivot Stage?
67. What is the execution flow of constraints, derivations and variables in transformer stage? What are these?
68. How do you eliminate duplicates in datastage? Can you use hash file for it?
69. If 1st and 8th record is duplicate, then which will be skipped? Can you configure it?
70. How do you import and export datastage jobs? What is the file extension? (See each component while importing and exporting.)
71. How do you rate yourself in DataStage?
72. Explain DataStage Architecture?
73. What is repository? What are the repository items?
74. What is difference between routine and transform?
75. I have 10 tables with four key column values; in this situation lookup is necessary, but which type of lookup is used, either ODBC or hashed file lookup? Why?
76. When do you write the routines?
77. In one project, how many shared containers are created?
78. How do you protect your project?
79. What is the complex situation you faced in DataStage?
80. How will you move a hashed file from one location to another location?
81. How will you create a static hashed file?
82. How many jobs have you done in your project? Explain one complex job.
COMPANY: KANBAY, Pune (DataStage – Personal Interview)

1. All about company details, project details, and client details; sample data of your source?
2. DataStage Architecture?
3. System variables – what are the system variables used in your project?
4. What are the different datastage functions used in your project?
5. Difference between star schema and snowflake schema?
6. What is confirmed, degenerated and junk dimension?
7. What are confirmed facts?
8. Different types of facts and their examples?
9. What are approaches in developing a data warehouse?
10. Different types of hashed files?
11. What are routines and transforms? How did you use them in your project?
12. Difference between Data Mart and Data Warehouse?
13. What is surrogate key? How do you generate it?
14. What are environment variables and global variables?
15. How do you improve the performance of the job?
16. What is SCD? How did you develop SCD type1 and SCD type2?
17. Why do you go for an oracle sequence to generate surrogate keys rather than datastage routines?
18. How do you generate surrogate key in datastage?
19. What is job sequence?
20. What are plug-ins?
21. How much data do you get every day?
22. What is the biggest table and its size in your schema or in your project?
23. What is the size of the data warehouse (by loading data)?
24. How do you improve the performance of the hashed file?
25. What is IPC Stage?
26. What are the different types of stages, and which are used in your project?
27. What are the operations you can do in IPC Stage and transformer stage?
28. What is merge stage? How do you merge two flat files?
29. I have two tables; one table contains 100 records and the other contains 1000 records. Which table is the master table? Why?
30. I have one job from one flat file. I have to load data to a database; 10 lakh records are there. After loading 9 lakh the job is aborted. How do you load the remaining records?
31. Which data does your project contain?
32. What is the source in your project?
COMPANY: IBM, Bangalore (DataStage – Telephonic Interview)

1. Tell me about your educational and professional background?
2. What is the team size? What is your role in that?
3. What is factless fact table? As it doesn't have facts, then what's the purpose of using it? Have you used it in your project?
4. How many jobs have you done in your project?
5. Have you handled different complex logic jobs in your project or not?
6. Out of all jobs you have done, which do you feel is the most complex? Explain it?
7. Do you work out the complex logic yourself, or does someone give you the specifications and you convert them to datastage?
8. What are the sources you used in your project?
9. What are the stages you used in your project?
10. What is difference between ODBC and ORACLE OCI stage?
11. As you told, if your sources are flat files and ORACLE OCI, then why do you need ODBC in your project rather than ORACLE OCI stage?
12. What is difference between sequential file and hashed file?
13. Can you use sequential file as source to hashed file? Have you done it? What error will it give?
14. Why does hashed file improve the performance?
15. How do you sort your data in jobs?
16. Have you used the sort stage in your job? (Sort stage is a parallel stage; be sure that you are using server jobs only, then he will ask Q.12)
17. Can aggregator and transformer stages be used for sorting data? How?
18. If I have two sources to aggregator stage and oracle as target, I can sort data in aggregator, but if I don't want to use aggregator to sort data, then how will you do it?
19. Why do we use surrogate key in data warehouse? How does it improve the performance? Where is it stored? How do you handle your surrogate key in your project? Where do we mostly use surrogate keys?
20. How many input links can you give to a transformer?
21. Can you give more than one source to a transformer? (If you say "No", he will ask what error it will give when you try to do this?)
22. Definition of Slowly Changing Dimensions? Types?
23. If a company is maintaining a type1 SCD and now decides to change its plan to maintain a type2 SCD, e.g. customer table, what are the changes to make in the customer table? (Whether you have to change the structure of the table, if it is under type3, right? Or no changes? How do you implement this?)
24. How many dimensions in your project? What are they?
25. What are the facts in your fact table?
26. Are all these facts specific (related) to all dimensions?
27. How do you get system date in oracle?
28. What is a dual table in oracle?
29. What is the use of UNION in oracle? If I write the query select * from EMP UNION select * from dept, will it execute well?
30. I have a query select * from EMP table group by dept; will this query execute? If not, what is the error?
MORE QUESTIONS ON DATASTAGE:

1. What are the difficulties faced in using DataStage?
2. What are the constraints in using DataStage?
3. How do you eliminate duplicate rows?
4. How do we do the automation of dsjobs?
5. What are XML files? How do you read data from XML files and which stage is to be used?
6. How do you catch bad rows from OCI stage?
7. Why do you use SQL LOADER or OCI STAGE?
8. How do you populate source files?
9. How do you pass filename as the parameter for a job?
10. How do you pass the parameter to the job sequence if the job is running at night?
11. What happens if the job fails at night?
12. What is SQL tuning? How do you do it?
13. What is project life cycle and how do you implement it?
14. How will you call external function or subroutine from datastage?
15. How do you track performance statistics and enhance them?
16. How do you do an oracle 4-way inner join if there are 4 oracle input files?
17. Explain your last project and your role in it?
18. What are the often-used stages or stages you worked with in your last project?
19. How many jobs have you created in your last project?
20. How do you merge two files in DS?
21. What is DS Manager used for – did u use it?
22. What is DS Director used for – did u use it?
23. What is DS Administrator used for – did u use it?
24. What is DS Designer used for – did u use it?
25. Explain the differences between Oracle8i/9i?
26. Do you know about INTEGRITY/QUALITY stage?
27. Do you know about METASTAGE?
28. Difference between Hashfile and Sequential File?
29. What are iconv and oconv functions?
30. How can we join one Oracle source and a Sequential file?
31. How can we implement Slowly Changing Dimensions in DataStage?
32. How can we implement Lookup in DataStage Server jobs?
33. What are all the third party tools used in DataStage?
34. What is the difference between routine and transform and function?
35. What are the Job parameters?
36. How can we improve the performance of DataStage jobs?
37. How can we create Containers?
38. What about System variables?
39. What is difference between operational data store (ODS) & data warehouse?
40. How do you fix the error "OCI has fetched truncated data" in DataStage?
41. How to create batches in Datastage from command prompt?
42. How do you eliminate duplicate rows?
43. Suppose there are a million records; did you use OCI? If not, then what stage do you prefer?
44. What is the order of execution done internally in the transformer, with the stage editor having input links on the left hand side and output links?
45. I want to process 3 files sequentially one by one; how can I do that, and while processing the files it should fetch the files automatically?

Datastage:
1. How to create a flat file job… (steps)
2. Is there any tool by Ascential to pull the metadata from various sources
3. What if the definition of a table changes... what impact will it have on ur job...
4. how to use debugger
5. how u schedule a DS job via unix script
6. Any third party tools for scheduling the jobs
7. how to use hash file... how to create Hash file...
8. Aggregator Transformations..
9. pre sql post sql.. How to use these... truncate table
10. what is the use of administrator
11. what was the most complex mapping u hv developed using datastage
12. how much exp u hv on DS
13. if a table definition has been changed in manager, will it automatically propagate into the Job
14. Can an output link from one active stage become an input link in another active stage
15. Can u use a sequential file as a reference file... difference between a sequential file and a hash file A) NO
16. What different options are there to see a table definition..
17. What all products of Ascential u r aware of
18. What is the advantage of using OCI stage as compared to ODBC stage
19. Normalizer Transformation..
20. what steps will you take to increase performance in Datastage for large volumes of data
21. what are bridge tables
22. Types of Indexes
23. Table Partitioning
24. Types of schemas, explain
25. How do you do requirements gathering in case of non-availability of the personnel, and thereafter the project plan
26. How do you take care of unknown values for the primary key for dimension?
27. Factless fact tables
28. Overview of Datastage projects
29. Link Partitioner / Collector

Data stage:
1. What is ETL Architecture?
2. Explain your project Architecture?
3. How many Data marts and how many facts and dimensions are available in your project?
4. What is the size of your Data mart?
5. How many types of loading techniques are available?
6. Before going to design jobs in Data stage, what are the preceding steps in Data stage?
7. What is the Architecture of Data stage?
8. What is the main difference between the different client components in Data stage?
9. What are the different stages you have worked on?
10. Can I call procedures in datastage? If so, how to call stored procedures in Data stage?
11. What is the difference between sequential file and hashfile? Can we use sequential file as a lookup? Can we put filter conditions on sequential file?
12. Differences between DRS Stage and ODBC? Which one is the best for performance?
13. What are the different performance tuning aspects in Data stage?
14. How do you remove the duplicates in a flat file?
15. What is the difference between Interprocess and in-process? Which one is the best?
16. What is CRC32? In which situation do you go for CRC32?
17. What is a pivot stage? Can u explain one scenario where it was used in your project?
18. What are row-splitter and row-merger? Can I use them separately? Is it possible to do it?
19. If one user locked the resource, how to release the particular Job?
20. What is version control in data stage?
21. What is the difference between clear log file and clear stage file?
22. How to schedule jobs without using Data stage?
23. What is the difference between static hash and dynamic hashfile?
24. How to do error handling in data stage?
25. What is the difference between Active stage and passive stage? What are the Active and passive stages?
26. How to set Environment variables in datastage?
27. What is a job control routine? How to set job parameters in Data stage?
28. How to release a job?
29. How to do Auto-purge in Data stage?
30. What is the difference between Datastage 7.1 and 7.5?
DATA WAREHOUSING QUESTIONS:

1. What are the different Dimensional modeling techniques available?
2. What is the difference between Star-schema and Snowflake-schema? When do we go for star and snowflake?
3. What are the types of dimensions and facts in DW?
4. What is the life cycle of a Data warehousing project?
5. What is a Data-model?
6. What is the difference between Top-down Approach and Bottom-up Approach?
7. What is a factless fact table?
8. What is a confirmed dimension?
9. What is a junk dimension?
10. What is cleansing?
11. Tell me about your current project?
12. What is your role or job profile in the project?
13. What is your Job profile?
14. What is dimension and fact?
15. What are types of dimensions?
16. What are confirmed dimensions?
17. What are generated dimensions?
18. What are slowly changing dimensions?
19. How many data marts in your project?
20. What is data mart name in your project?
21. What is the size of your data mart?
22. What is factless fact table? Give example.
23. How many fact tables are used in the project?
24. What is your fact table name in your project?
25. How many dimension tables used in the project?
26. What are the names of the dimension tables?
27. What is Schema? Types? Explain Star-Schema and Snowflake Schema with difference. Which schema did you use in your project? Why?
28. Why is star-schema called star-schema? Give example.
29. How frequently and from where do you get the data as source?
30. What is difference between data mart and data warehouse?
31. What is composite key?
32. What is surrogate key? When will you go for it?
33. What is dimensional modeling?
34. What are SCD and SGT? Difference between them? Example of SGT from your project.
35. How do you rate yourself in data warehouse?
36. What is the status of your current project?
37. What is data warehouse?
38. What is 'Star-Schema'?
39. What is 'Snowflake-Schema'?
40. What is difference between Star-Schema and Snowflake-Schema?
41. What is meant by surrogate key?
42. What is 'Conformed Dimension'?
43. What is Factless Fact Table?
44. When will we use connected and unconnected lookup?
45. Which cache supports connected and unconnected lookup?
46. What is the difference between SCD Type2 and SCD Type3?
47. Draw the ETL Architecture?
48. Draw the DWH Architecture?
DWH FAQ:

Conformed dimension:
• A dimension table that connects to more than one fact table. We present this same dimension table in both schemas, and we refer to the dimension table as a conformed dimension.

Conformed fact:
• Definitions of measurements (facts) that are highly consistent are called conformed facts.

Junk dimension:
• A convenient grouping of random flags and aggregates to get them out of a fact table and into a useful dimensional framework.

Degenerated dimension:
• Usually occurs in line-item-oriented fact table designs. Degenerate dimensions are normal, expected and useful.
• The degenerated dimension key should be the actual production order number and should sit in the fact table without a join to anything.

Time dimension:
• It contains a number of useful attributes for describing calendars and navigating.
• An exclusive time dimension is required because the SQL date semantics and functions cannot generate several important features and attributes required for analytical purposes.
• Attributes like week days, week ends, holidays and fiscal periods cannot be generated by SQL statements.

Factless fact table:
• A fact table which does not have any facts is called a factless fact table.
• They may consist of nothing but keys; these two kinds of fact tables do not have any facts at all.
• The first type of factless fact table records an 'event'. Many event-tracking tables in dimensional data warehouses turn out to be factless. Ex: A student tracking system that details each 'student attendance' event each day.
• The second type of factless fact table is coverage. The coverage tables are frequently needed when a primary fact table in a dimensional DWH is sparse. Ex: The sales fact table that records the sales of products in stores on particular days under each promotion condition.

Types of facts:
• Additive: facts involved in the calculations for deriving summarized data.
• Semi-additive: facts that are involved in the calculations only at a particular context of time.
• Non-additive: facts that cannot be involved in the calculations at any point of time.
DATASTAGE ROUTINES

BL:
BOT v2.3.0 – Returns BLANK if passed value is NOT NULL or BLANK, after trimming spaces.

DataIn = "" : Trim(Arg1)
If IsNull(DataIn) or DataIn = "" Then
   Ans = ""
End Else
   Ans = DataIn
End
CheckFileRecords:

Function CheckFileRecords(Arg1, Arg2)
   vParamFile = Arg1 : "/" : Arg2
   vCountVal = 0
   OpenSeq vParamFile To FileVar Else
      Call DSLogWarn("Cannot open " : vParamFile, "Cannot Open ParamFile")
   End
   Loop
      ReadSeq Dummy From FileVar Else Exit  ;* at end-of-file
      vCountVal = vCountVal + 1
   Repeat
   CloseSeq FileVar
   Ans = vCountVal
Return (vCountVal)

CheckFileSizes:
DIR = "/interface/dashboard/dashbd_dev_dk_int/Source/" FNAME = "GLEISND_OC_02_20040607_12455700.csv" *CMD = "ll -tr ":DIR:"|grep ":FNAME CMD = "cmp -s ":DIR:"|grep ":FNAME
Call DSExecute("UNIX", CMD, Output, SystemReturnCode) Ans = Output
CheckIdocsSent:

Checks if the Idoc delivery job actually sent any Idocs to SAP. This routine will attempt to read the DataStage Director log for the job name specified as an argument. If the job has a fatal error with "No link file", the routine will copy the IDOC link file(s) into the interface error folder. In case the fatal error above is not found, the routine aborts the job. A simple log of which runs produce an error link file is maintained in the module's log directory.
$INCLUDE DSINCLUDE JOBCONTROL.H
vRoutineName = "CheckIdocsSent"
Ans = "Ok"
If System(91) Then
   OsType     = 'NT'
   OsDelim    = '\'
   NonOsDelim = '/'
   Move       = 'move '
End Else
   OsType     = 'UNIX'
   OsDelim    = '/'
   NonOsDelim = '\'
   Move       = 'mv -f '
End
vJobHandle = DSAttachJob(JobName, DSJ.ERRFATAL)
vLastRunStart = DSGetJobInfo(vJobHandle, DSJ.JOBSTARTTIMESTAMP)
vLastRunEnd = DSGetJobInfo(vJobHandle, DSJ.JOBLASTTIMESTAMP)
* Get the delivery log for the last run
vLogSummary = DSGetLogSummary(vJobHandle, DSJ.LOGANY, vLastRunStart, vLastRunEnd, 500)
vLogSummary = Change(vLogSummary, @FM, '')
* Manipulate vLogSummary within the routine to return a status
PosOfStr = Index(Downcase(vLogSummary), "sent", 1)
vLogMsg = vLogSummary[PosOfStr, 20]
* Now work out Status
If PosOfStr = 0 Then
   Status = 'NOT SENT'
   vLogMsg = ''
End Else
   Status = 'SENT'
   vLogMsg = vLogSummary[PosOfStr, 20]
End
Ans = Status
vErr = DSDetachJob(vJobHandle)
Call DSLogInfo("Job " : JobName : " Detached", vRoutineName)
***** Make a log entry to keep track of how often the pack doesn't work ********
vMessageToWrite = Fmt(Module_Run_Parm, "12' 'L") : Fmt(Status, "10' 'L") : " - " : vLogMsg
vIdocLogFilePath = Interface_Root_Path_Parm : OsDelim : "logs" : OsDelim : "IdocSentLog.log"
******** Open the log file
OPENSEQ vIdocLogFilePath TO vIdocLogFile Then
   Call DSLogInfo("IdocSentLog Open", vRoutineName)
   ** Label to return to if file is created
FileCreated:
   *** Write the log entry
   vIsLastRecord = @Null
   Loop Until vIsLastRecord Do
      READSEQ vRecord From vIdocLogFile Then
         *Call DSLogInfo("Record Read - " : vRecord, vRoutineName)
      End Else
         *Call DSLogInfo("End of file reached ", vRoutineName)
         vIsLastRecord = @True
      End
   Repeat
   WRITESEQ vMessageToWrite To vIdocLogFile Then
      Call DSLogInfo("Log entry created : " : vMessageToWrite, vRoutineName)
   End Else
      Call DSLogFatal("Cannot write to " : vIdocLogFilePath, vRoutineName)
   End
End Else
   Call DSLogInfo("Could not open file " : vIdocLogFilePath, vRoutineName)
   Call DSLogInfo("Creating new file - " : vIdocLogFilePath, vRoutineName)
   CREATE vIdocLogFile ELSE Call DSLogFatal("Could not create file - " : vIdocLogFilePath, vRoutineName)
   WEOFSEQ vIdocLogFile
   WRITESEQ Fmt("Module Run", "12' 'L") : Fmt("Status", "10' 'L") : " " : "Message" To vIdocLogFile Else ABORT
   Call DSLogInfo("Log file created : " : vIdocLogFilePath, vRoutineName)
   GOTO FileCreated
End

**** Abort the delivery sequence and write error message to the log ************
If Status = 'NOT SENT' Then
   Call DSLogInfo("No Idocs were actually sent to SAP - Trying to clean up IDOC Link Files: ", vRoutineName)
   vIdocSrcLinkPath = Field(Interface_Root_Path_Parm, OsDelim, 1, 4) : OsDelim : "dsproject" : OsDelim : Field(Interface_Root_Path_Parm, OsDelim, 4, 1)
   vIdocTgtLinkPath = Interface_Root_Path_Parm : OsDelim : "error"
   OsCmd = Move : " " : vIdocSrcLinkPath : OsDelim : JobName : ".*.lnk " : vIdocTgtLinkPath : OsDelim
   Call DSExecute(OsType, OsCmd, OsOutput, OsStatus)
   If OsStatus <> 0 Then
      Call DSLogWarn("Error when trying to move link file(s)", vRoutineName)
      LogMessMoveFail = 'The move command (':OsCmd:') returned status ':OsStatus:':':@FM:OsOutput
      Call DSLogWarn(LogMessMoveFail, vRoutineName)
      Call DSLogFatal("Cleaning up of IDOC Link Files failed", vRoutineName)
   End Else
      LogMessMoveOK = "Link files were moved to " : vIdocTgtLinkPath
      Call DSLogInfo(LogMessMoveOK, vRoutineName)
      LogMessRetry = "Job " : JobName : " is ready to be relaunched."
      Call DSLogInfo(LogMessRetry, vRoutineName)
   End
End Else
   Call DSLogInfo("Delivery job log indicates run OK ", vRoutineName)
End
ClearMappingTable:
SUBROUTINE ClearMappingTable(Clear_Mapping_Table, ErrorCode)
ErrorCode = 0   ;* set this to non-zero to stop the stage/job
**If Clear_Mapping_Table_Parm = 'Y' Then
**   EXECUTE "CLEARFILE Vendor_Map_HF.GEN"
**End Else
**End
ComaDotRmv:
DataIn = "":(Arg1)
If IsNull(DataIn) or DataIn = "" Then
   Ans = ""
End Else
   DataIn = Ereplace(DataIn, ".", "")
   DataIn = Ereplace(DataIn, ",", "")
   Ans = DataIn
End
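Worked examples of the behaviour (values are illustrative):

* ComaDotRmv("1.234,56") -> "123456"   (dots and commas stripped)
* ComaDotRmv("")         -> ""
* ComaDotRmv(@NULL)      -> ""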
CopyFiles: Copy files from one directory to another
Function CopyFiles(SourceDir, SourceFileMask, TargetDir, TargetFileMask, Flags)
RoutineName = "CopyFiles"
If SourceDir = '' Then SourceDir = '.'
If TargetDir = '' Then TargetDir = '.'
If SourceFileMask = '' Or SourceDir = TargetDir Then Return(0)
! If SourceDir # '.' Then
!    OpenPath SourceDir To Fv Else
!       Call DSU.DSMkDir(MkStatus,SourceDir,'','777')
!    End
! End
! If TargetDir # '.' Then
!    OpenPath TargetDir To Fv Else
!       Call DSU.DSMkDir(MkStatus,TargetDir,'','777')
!    End
! End
If System(91) Then
   OsType     = 'NT'
   OsDelim    = '\'
   NonOsDelim = '/'
   Copy       = 'copy '
   Flag       = Flags
End Else
   OsType     = 'UNIX'
   OsDelim    = '/'
   NonOsDelim = '\'
   Copy       = 'cp -f '
End
If Flags <> "" Then Flag = NonOsDelim:Flags Else Flag = ""
SourceWorkFiles = Trims(Convert(',', @FM, SourceFileMask))
SourceFileList = Splice(Reuse(SourceDir), OsDelim, SourceWorkFiles)
Convert NonOsDelim To OsDelim In SourceFileList
TargetWorkFiles = Trims(Convert(',', @FM, TargetFileMask))
TargetFileList = Splice(Reuse(TargetDir), OsDelim, TargetWorkFiles)
Convert NonOsDelim To OsDelim In TargetFileList
OsCmd = Copy : ' ' : Flag : " " : SourceFileList : ' ' : TargetFileList
Call DSLogInfo('Copying ':SourceFileList:' to ':TargetFileList, RoutineName)
Call DSExecute(OsType, OsCmd, OsOutput, OsStatus)
If OsStatus Then
   Call DSLogWarn('The Copy command (':OsCmd:') returned status ':OsStatus:':':@FM:OsOutput, RoutineName)
End Else
   Call DSLogInfo('Files copied...', RoutineName)
End
Ans = OsStatus
CopyOfCompareRows:
Function CopyOfCompareRows(Column_Name, Column_Value)
$INCLUDE DSINCLUDE JOBCONTROL.H
vJobName = DSGetJobInfo(DSJ.ME, DSJ.JOBNAME)
vStageName = DSGetStageInfo(DSJ.ME, DSJ.ME, DSJ.STAGENAME)
vCommonName = CheckSum(vJobName) : CheckSum(vStageName) : CheckSum(Column_Name)
Common /vCommonName/ LastValue
vLastValue = LastValue
vNewValue = Column_Value
If vNewValue <> vLastValue Then Ans = 1 Else Ans = 0
LastValue = vNewValue
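Because the previous value is held in COMMON, the routine can flag group changes between consecutive rows in a Transformer, provided the data arrives sorted. A hedged sketch with hypothetical names:

* Stage variable in a Transformer reading data sorted on ACCOUNT_ID (hypothetical):
*   svNewGroup = CopyOfCompareRows("ACCOUNT_ID", InLink.ACCOUNT_ID)
* svNewGroup is 1 on the first row of each ACCOUNT_ID and 0 on repeated values.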
CopyOfZSTPKeyLookup
Check if the key passed exists in the file passed
Arg1: Hash file to look in
Arg2: Key to look for
Arg3: Number of file to use, "1" or "2"
* Routine to look to see if the key passed exists in the file passed
* If so, then the non-key field from the file is returned
* If not found, "***Not Found***" is returned
*
* The routine requires the UniVerse file named to have been created previously
*
$INCLUDE DSINCLUDE JOBCONTROL.H
EQUATE RoutineName TO 'ZSTPKeyLookup'
*
Call DSLogInfo("Routine started",RoutineName)
Common /ZSTPkeylookup/ Init1, SeqFile1, Init2, SeqFile2, RetVal, msgtext
Ans = 0
If NOT(Init1) And Arg3 = "1" Then
   * Not initialised. Therefore open file
   Init1 = 1
   Open Arg1 TO SeqFile1 Then
      Clearfile SeqFile1
   End Else
      Call DSLogInfo("Open failed 1", RoutineName)
      msgtext = "Cannot open ZSTP creation control file ":Arg1
      Call DSLogFatal(msgtext, RoutineName)
      Ans = -1
   End
End
If NOT(Init2) And Arg3 = "2" Then
   * Not initialised. Therefore open file
   Init2 = 1
   Open Arg1 TO SeqFile2 Then
      Clearfile SeqFile2
   End Else
      Call DSLogInfo("Open failed 2", RoutineName)
      msgtext = "Cannot open ZSTP creation control file ":Arg1
      Call DSLogFatal(msgtext, RoutineName)
      Ans = -1
   End
End
* Read the file to get the data for the key passed; if not found, return "***Not Found***"
If Arg3 = "1" Then
   Read RetVal From SeqFile1, Arg2 Else RetVal = "***Not Found***"
End Else
   Read RetVal From SeqFile2, Arg2 Else RetVal = "***Not Found***"
End
Ans = RetVal
Create12CharTS:
Function Create12CharTS(JobName)
$INCLUDE DSINCLUDE JOBCONTROL.H
vJobHandle = DSAttachJob(JobName, DSJ.ERRFATAL)
vJobStartTime = DSGetJobInfo(vJobHandle, DSJ.JOBSTARTTIMESTAMP)
vDate = Trim(vJobStartTime, "-", "A")
vDate = Trim(vDate, ":", "A")
vDate = Trim(vDate, " ", "A")
vDate = vDate[1,12]
Ans = vDate
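A worked example of the trimming (illustrative value):

* "2004-06-07 12:45:57"  -> "20040607124557" after the "-", ":" and space trims -> "200406071245" after [1,12]  (YYYYMMDDHHMM)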
CreateEmptyFile:
Function CreateEmptyFile(Arg1, Arg2)
* Create Empty File
vParamFile = Arg1 : "/" : Arg2
OpenSeq vParamFile To FileVar Else
   Call DSLogWarn("Cannot open ":vParamFile, "Cannot Open ParamFile")
End
WeofSeq FileVar
CloseSeq FileVar
Ans = "1"
Datetrans:
Function Datetrans(DateVal)
* Date may be in the form DD.MM.YY, e.g. 01.10.03
* Convert to YYYYMMDD SAP format
Ans = "20" : DateVal[7,2] : DateVal[4,2] : DateVal[1,2]
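A worked example of the substring arithmetic (illustrative value):

* DateVal = "01.10.03"
* Ans = "20" : "03" : "10" : "01" = "20031001"   (YYYYMMDD, SAP format)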
DeleteFiles:
Function DeleteFiles(SourceDir, FileMask, Flags)
RoutineName = "DeleteFiles"
If SourceDir = '' Then SourceDir = '.'
If FileMask = '' Or SourceDir = '' Then Return(0)
If System(91) Then
   OsType     = 'NT'
   OsDelim    = '\'
   NonOsDelim = '/'
   Delete     = 'del '
End Else
   OsType     = 'UNIX'
   OsDelim    = '/'
   NonOsDelim = '\'
   Delete     = 'rm ' : Flags : ' '
End
WorkFiles = Trims(Convert(',', @FM, FileMask))
FileList = Splice(Reuse(SourceDir), OsDelim, WorkFiles)
Convert NonOsDelim To OsDelim In FileList
OsCmd = Delete : ' ' : FileList
Call DSLogInfo('Deleting ':FileList, RoutineName)
Call DSExecute(OsType, OsCmd, OsOutput, OsStatus)
If OsStatus Then
   ResIdx = Index(OsOutput, "non-existent", 1)
   If ResIdx = 0 Then
      Call DSLogInfo('The Delete command (':OsCmd:') returned status ':OsStatus:':':@FM:OsOutput, RoutineName)
   End Else
      Call DSLogInfo('No files matched wild card - delete was not required...', RoutineName)
      OsStatus = 0
   End
End Else
   Call DSLogInfo('Files deleted...', RoutineName)
End
Ans = OsStatus
DisconnectNetworkDrive: Disconnect a network drive on a Windows server:
Function DisconnectNetworkDrive(Drive_Letter)
RoutineName = "DisconnectNetworkDrive"
If Drive_Letter = '' Then Return(0)
OsType = 'NT'
OsDelim = '\'
NonOsDelim = '/'
Copy = 'copy '
OsCmd = 'net use ' : Drive_Letter : ": /delete"
Call DSLogInfo('Disconnecting Network Drive: ' : OsCmd, RoutineName)
Call DSExecute(OsType, OsCmd, OsOutput, OsStatus)
If OsStatus Then
   Call DSLogWarn('The net use command (':OsCmd:') returned status ':OsStatus:':':@FM:OsOutput, RoutineName)
End Else
   Call DSLogInfo('Drive ' : Drive_Letter : ' disconnected', RoutineName)
End
Ans = OsStatus
DosCmd: Execute an operating system command:
Function DosCmd(Cmd)
RoutineName = "DosCmd"
If System(91) Then
   OsType     = 'NT'
   OsDelim    = '\'
   NonOsDelim = '/'
End Else
   OsType     = 'UNIX'
   OsDelim    = '/'
   NonOsDelim = '\'
End
OsCmd = Cmd
Call DSLogInfo("CMD = " : Cmd, RoutineName)
Call DSExecute(OsType, OsCmd, OsOutput, OsStatus)
If OsStatus Then
   Call DSLogWarn('The command (':OsCmd:') returned status ':OsStatus:':':@FM:OsOutput, RoutineName)
End Else
   Call DSLogInfo('The command (':OsCmd:') was successful ':OsStatus:':':@FM:OsOutput, RoutineName)
End
Ans = OsStatus : " - " : OsOutput
DSMoveFiles: Move files from one directory to another:
If SourceDir = '' Then SourceDir = '.'
If TargetDir = '' Then TargetDir = '.'
If FileMask = '' Or SourceDir = TargetDir Then Return(0)
If System(91) Then
   OsType     = 'NT'
   OsDelim    = '\'
   NonOsDelim = '/'
   Move       = 'move '
End Else
   OsType     = 'UNIX'
   OsDelim    = '/'
   NonOsDelim = '\'
   Move       = 'mv -f '
End
WorkFiles = Trims(Convert(',', @FM, FileMask))
FileList = Splice(Reuse(SourceDir), OsDelim, WorkFiles)
Convert NonOsDelim To OsDelim In FileList
OsCmd = Move : ' ' : FileList : ' ' : TargetDir
Call DSLogInfo('Moving ':FileList:' to ':TargetDir, 'DSMoveFiles')
Call DSExecute(OsType, OsCmd, OsOutput, OsStatus)
If OsStatus Then
   Call DSLogInfo('The move command (':OsCmd:') returned status ':OsStatus:':':@FM:OsOutput, 'DSMoveFiles')
End Else
   Call DSLogInfo('Files moved...', 'DSMoveFiles')
End
Ans = OsStatus
Routine Name: ErrorMgmtDummy:
Value: The value to be mapped
FieldName: The name of the source field that the value is contained in
Format: The name of the hash file containing the mapping data
Default: The default value to return if the value is not found
Msg: Any text you want to store against an error
SeverityInd: The error severity indicator: I-Information, W-Warning, E-Error, F-Fatal
ErrorLogInd: An indicator to indicate if errors should be logged (Note: this is not yet implemented)
HashFileLocation: A hash file could be either local to the module or generic. Enter "G" for Generic, "L" for Local

* FUNCTION Map(Value,FieldName,Format,Default,Msg,ErrorLogInd)
*
* Executes a lookup against a hashed file using a key
*
* Input Parameters : Arg1: Value = The value to be mapped or checked
* Arg2: FieldName = The name of the field that is either the target of the derivation or the source field that the value is contained in
* Arg3: Format = The name of the hash file containing the mapping data
* Arg4: Default = The default value to return if the value is not found
* Arg5: Msg = Any text you want stored against an error
* Arg6: SeverityInd = An indicator of the severity level
* Arg7: ErrorLogInd = An indicator to indicate if errors should be logged
* Arg8: HashfileLocation = An indicator of whether the hash file is Generic or Local
*
* Return Values: If the value is not found, the return value is -1, or the Default value if that is supplied
* If the format table is not found, the return value is -2
*
RoutineName = 'Map'
Common /HashLookup/ FileHandles(100), FilesOpened
Common /TicketCommon/ Ticket_Group, Ticket_Sequence, Set_Key, Mod_Root_Path, Generic_Root_Path, Chk_Hash_File_Name, Mod_Run_Num
DEFFUN LogToHashFile(ModRunNum,Ticket_Group,Ticket_Sequence,Set_Key,Table,FieldName,Key,Error,Text,SeverityInd) Calling 'DSU.LogToHashFile'

If (Ans = "-1" or Ans = "-2" or UpCase(Ans) = "BLOCKED") and ErrorLogInd = "Y" Then
   Ret_Code = LogToHashFile(Mod_Run_Num, Ticket_Group, Ticket_Sequence, Set_Key, Table, FieldName, Chk_Value, Ans, Msg, SeverityInd)
End
RETURN(Ans)

FileExists: Checks whether the named file exists
Function FileExists(FileName)
RoutineName = "FileExists"
FileFound = @TRUE
OPENSEQ FileName TO aFile ON ERROR STOP "Cannot open file (":FileName:")" THEN
   CLOSESEQ aFile
END ELSE
   FileFound = @FALSE   ;* file not found
END
Ans = FileFound
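A usage sketch from job control, assuming the routine is compiled as DSU.FileExists (the path is a placeholder):

DEFFUN FileExists(F) Calling 'DSU.FileExists'
If FileExists("/interface/demo/trigger.dat") Then   ;* hypothetical path
   Call DSLogInfo("Trigger file present - continuing", "JobControl")
End Else
   Call DSLogWarn("Trigger file missing", "JobControl")
End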
FileSize: Returns the size of a file
Function FileSize(FileName)
RoutineName = "FileSize"
FileSize = -99
OPENSEQ FileName TO aFile ON ERROR STOP "Cannot open file (":FileName:")" THEN
   Status FileInfo From aFile Else Stop
   FileSize = Field(FileInfo, @FM, 4)
*  FileSize = FileInfo
   CLOSESEQ aFile
END ELSE
   FileSize = -999
END
Ans = FileSize
FindExtension:
Function FindExtension(Arg1)
File_Name = Arg1
* Extract the extension part of the filename
LengthofFileName = Len(File_Name)
Extension = Index(File_Name, ".", 1)
If Extension <> 0 Then
   LengthofExtension = LengthofFileName - Extension + 1
   File_Extension = File_Name[Extension, LengthofExtension]
End Else
   File_Extension = ""
End
Ans = File_Extension
FindFileSuffix:
Function FindFileSuffix(Arg1)
File_Name = Arg1
* Get rid of the extension part of the filename
Extension = Index(File_Name, ".", 1)
If Extension <> 0 Then
   MyLenRead = Index(File_Name, ".", 1) - 1
   File_Name = File_Name[0, MyLenRead]
End Else
End
* Get the timestamp. Doesn't handle the case where there are suffix types and the timestamp only contains 5 digits without "_" in between
If Index(File_Name, "_", 6) = 0 Then
   MyLenRead = Index(File_Name, "_", 4) + 1
   MyTimestamp = File_Name[MyLenRead, Len(File_Name) - 1]
End Else
   MyTimestamp = Field(File_Name, "_", 5) : "_" : Field(File_Name, "_", 6)
End
TimestampEndPos = Index(File_Name, MyTimestamp, 1) + Len(MyTimestamp)
MySuffix = File_Name[TimestampEndPos + 1, Len(File_Name)]
Ans = MySuffix
FindTimeStamp:
Function FindTimeStamp(Arg1)
File_Name = Arg1
* Get rid of the extension part of the filename
Extension = Index(File_Name, ".", 1)
If Extension <> 0 Then
   MyLenRead = Index(File_Name, ".", 1) - 1
   File_Name = File_Name[0, MyLenRead]
End Else
End
* Get the timestamp. Doesn't handle the case where there are suffix types and the timestamp only contains 5 digits without "_" in between
If Index(File_Name, "_", 6) = 0 Then
   MyLenRead = Index(File_Name, "_", 4) + 1
   Timestamp = File_Name[MyLenRead, Len(File_Name) - 1]
End Else
   Timestamp = Field(File_Name, "_", 5) : "_" : Field(File_Name, "_", 6)
End
Ans = Timestamp

FormatCharge:
Function FormatCharge(Arg1)
vCharge = Trim(Arg1, "0", "L")
vCharge = vCharge / 100
vCharge = Fmt(vCharge, "R2")
Ans = vCharge

FormatGCharge:
Ans = 1
vLength = Len(Arg1)
vMinus = If Arg1[1,1] = '-' Then 1 Else 0
If Arg1 = '0.00' Then
   Ans = Arg1
End Else
   If vMinus = 1 Then
      vString = Arg1[2, vLength - 1]
      vString = '-' : Trim(vString, '0', 'L')
   End Else
      vString = Trim(Arg1, '0', 'L')
   End
   Ans = vString
End
FTPFile:
Script_Path: The path to where the Unix script file lives
File_Path: The path of the file to be sent
File_Name: The name of the file to be sent
IP_Address: The IP address of the target server
User_ID: The user ID to log on to the target server with
Password: The password for the user ID
Target_Path: The target path where the file is to be saved on the target server

* FUNCTION FTPFile(Script_Path,File_Path,File_Name,IP_Address,User_ID,Password,Target_Path)
*
*
RoutineName = 'FTPFile'
OsCmd = Script_Path : "/ftp_put.sh" : " " : File_Path : " " : File_Name : " " : IP_Address : " " : User_ID : " " : Password : " " : Target_Path : " " : Script_Path
Call DSLogInfo('Ftp ':File_Name:' to ':IP_Address:' ':Target_Path, RoutineName)
Call DSLogInfo('Ftp Script = ':Script_Path, RoutineName)
Call DSExecute("UNIX", OsCmd, OsOutput, OsStatus)
If OsStatus Then
   Call DSLogInfo('The FTP command (':OsCmd:') returned status ':OsStatus:':':@FM:OsOutput, RoutineName)
End Else
   Call DSLogInfo('Files FTPd...':' (':OsCmd:')', RoutineName)
End
Ans = OsStatus
RETURN(Ans)
FTPmget:
* FUNCTION FTPmget(Script_Path,Source_Path,File_Wild_Card,IP_Address,User_ID,Password,Target_Path)
*
*
RoutineName = 'FTPmget'
OsCmd = Script_Path : "/ftp_Mget.sh" : " " : Source_Path : " " : File_Wild_Card : " " : IP_Address : " " : User_ID : " " : Password : " " : Target_Path : " " : Script_Path
*OsCmd = Script_Path : "/test.sh"
Call DSLogInfo('Ftp ':File_Wild_Card:' From ':IP_Address:' ':Source_Path:' to ':Target_Path, RoutineName)
Call DSExecute("UNIX", OsCmd, OsOutput, OsStatus)
If OsStatus Then
   Call DSLogInfo('The FTP command (':OsCmd:') returned status ':OsStatus:':':@FM:OsOutput, RoutineName)
End Else
   Call DSLogInfo('Files FTPd...':' (':OsCmd:')', RoutineName)
End
Ans = OsStatus
RETURN(Ans)
Concatenate all input arguments to output using a TAB character:
Routine = "GBIConcatItem"
t = Char(009)
If ISNULL(IND) THEN Pattern = "" ELSE Pattern = IND[1,1]
If ISNULL(VKORG) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : VKORG[1,4]
If ISNULL(VTWEG) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : VTWEG[1,2]
If ISNULL(SPART) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : SPART[1,2]
If ISNULL(WERKS) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : WERKS[1,4]
If ISNULL(AUART) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : AUART[1,4]
If ISNULL(FKDAT) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : FKDAT[1,8]
If ISNULL(KUNAG) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KUNAG[1,10]
If ISNULL(KUNRE) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KUNRE[1,10]
If ISNULL(MATNR) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : MATNR[1,18]
If ISNULL(PSTYV) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : PSTYV[1,4]
If ISNULL(KWMENG) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KWMENG[1,15]
If ISNULL(XBLNR) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : XBLNR[1,16]
If ISNULL(VGPOS) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : VGPOS[1,6]
If ISNULL(FKARA) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : FKARA[1,4]
If ISNULL(ZOR_DT_PCODE) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : ZOR_DT_PCODE[1,8]
If ISNULL(ZAWB) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : ZAWB[1,16]
If ISNULL(LGORT) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : LGORT[1,4]
If ISNULL(VKAUS) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : VKAUS[1,3]
If ISNULL(VKBUR) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : VKBUR[1,4]
If ISNULL(VKGRP) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : VKGRP[1,3]
If ISNULL(ZLSCH) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : ZLSCH[1,1]
If ISNULL(ZTERM) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : ZTERM[1,4]
If ISNULL(KURSK) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KURSK[1,9]
If ISNULL(TAXM1) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : TAXM1[1,1]
If ISNULL(VRKME) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : VRKME[1,3]
If ISNULL(ARKTX) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : ARKTX[1,40]
If ISNULL(KTGRM) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KTGRM[1,2]
If ISNULL(ZZTAXCD) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : ZZTAXCD[1,2]
If ISNULL(LAND2) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : LAND2[1,3]
If ISNULL(NAME1) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : NAME1[1,35]
If ISNULL(PSTLZ) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : PSTLZ[1,10]
If ISNULL(ORT01) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : ORT01[1,35]
If ISNULL(KOSTL) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KOSTL[1,10]
If ISNULL(WAERS) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : WAERS[1,5]
If ISNULL(KUNRG) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KUNRG[1,10]
If ISNULL(KUNWE) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KUNWE[1,10]
Ans = Pattern
GBIConcatItem: Concatenate all input arguments to output using a TAB character:
Routine = "GBIConcatItem"
t = Char(009)
If ISNULL(IND) THEN Pattern = "" ELSE Pattern = IND[1,1]
If ISNULL(KNUMV) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KNUMV[1,16]
If ISNULL(KPOSN) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KPOSN[1,6]
If ISNULL(KSCHL) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KSCHL[1,4]
If ISNULL(KBETR) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KBETR[1,11]
If ISNULL(KWERT) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KWERT[1,13]
If ISNULL(WAERS) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : WAERS[1,5]
If ISNULL(KAWRT) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KAWRT[1,15]
If ISNULL(KHERK) THEN Pattern = Pattern : t ELSE Pattern = Pattern : t : KHERK[1,1]
Ans = Pattern
GCMFConvert: Receive GCMF string and change known strings to required values:
DataIn = "":Trim(Arg1)
If IsNull(DataIn) or DataIn = "" Then
   Ans = ""
End Else
   DataIn = Ereplace(DataIn, "$B$", "")
   DataIn = Ereplace(DataIn, "NULL", "")
   DataIn = Ereplace(DataIn, "&lt;", "<")
   DataIn = Ereplace(DataIn, "&gt;", ">")
   DataIn = Ereplace(DataIn, "&quot;", '"')
   DataIn = Ereplace(DataIn, "&apos;", "'")
   DataIn = Ereplace(DataIn, "&amp;", "&")
   DataIn = Ereplace(DataIn, "&#124;", "|")
   Ans = DataIn
End
GCMFFormating:
* FUNCTION GCMFFormating(Switch, All_Row)
*
* Replaces some special characters when creating the GCMF file
*
* Input Parameters : Arg1: Switch = Step to change.
* Arg2: All_Row = Row containing the GCMF record.
*
DataIn = Trim(All_Row)
If Switch = 1 Then
   If IsNull(DataIn) or DataIn = "" Then
      Ans = "$B$"
   End Else
      DataInFmt = Ereplace(DataIn, "&", "&amp;")
      DataInFmt = Ereplace(DataInFmt, "'", "&apos;")
      DataInFmt = Ereplace(DataInFmt, '"', "&quot;")
      Ans = DataInFmt
   End
End Else
   If Switch = 2 Then
      DataInFmt = Ereplace(DataIn, ">", "&gt;")
      DataInFmt = Ereplace(DataInFmt, "<", "&lt;")
      Ans = DataInFmt
   End Else
      * Final replace, after the merge of all GCMF segments
      DataInFmt = Ereplace(DataIn, "|", "&#124;")
      Ans = DataInFmt
   End
End
GeneralCounter:
COMMON /Counter/ OldParam, TotCount
NextId = Identifier
IF UNASSIGNED(OldParam) Then
   OldParam = NextId
   TotCount = 0
END
IF NextId = OldParam THEN
   TotCount += 1
END ELSE
   OldParam = NextId
   TotCount = 1
END
Ans = TotCount
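GeneralCounter restarts at 1 whenever the passed identifier changes and increments while it repeats, so on sorted input it numbers the rows within each group. A hedged derivation sketch with hypothetical names:

* Transformer derivation over data sorted on ORDER_ID (hypothetical):
*   OutLink.LINE_NO = GeneralCounter(InLink.ORDER_ID)
* Rows with the same ORDER_ID are numbered 1, 2, 3, ...; a new ORDER_ID restarts at 1.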
GetNextCustomerNumber: Sequence number generator.
Routine to get the next sequence number to use for a customer from a file, and save the used value in the file. The routine argument is the name associated with the super group that the customer is being created in. The routine uses a file to store the next available number. It reads the number, then increments and stores the value in common, writing the next value back to file each time.

* Routine to generate the next customer number. The argument is a string used to
* identify the super group for the customer.
*
* The routine uses a UniVerse file to store the next number to use. This
* value is stored in a record named after the supplied argument. The
* routine reads the number, then increments and stores the value
* in common storage, writing the next value back to file each time.
*
* Declare shared memory storage.
Common /CustSequences/ Initialized, NextVal, SeqFile
EQUATE RoutineName TO 'GetNextCustomerNumber'
If NOT(Initialized) Then
   * Not initialised. Attempt to open the file.
   Initialized = 1
   Open "IOC01_SUPER_GRP_CTL_HF" TO SeqFile Else
      Call DSLogFatal("Cannot open customer number allocation control file", RoutineName)
      Ans = -1
   End
End
* Read the named record from the file.
Readu NextVal From SeqFile, Arg1 Else
   Call DSLogFatal("Cannot find super group in customer number allocation control file", RoutineName)
   Ans = -1
End
Ans = NextVal
* Increment the sequence value, and write back to file.
NextVal = NextVal + 1
If Len(NextVal) < 10 Then NextVal = Substrings("0000000000", 1, 10 - Len(NextVal)) : NextVal
Writeu NextVal On SeqFile, Arg1 Else
   Call DSLogFatal("Update to customer number allocation control file failed", RoutineName)
   Ans = -1
End
GetNextErrorTableID: Sequence number generator in a concurrent environment.
Routine to generate a sequential number. The routine argument is the name associated with the sequence. The routine uses a file to store the next available number. It reads the number from the file on each invocation; a lock on the record prevents concurrent access.

* Routine to generate a sequential number. The argument is a string used to
* identify the sequence.
*
* NOTE: This routine uses locking to allow multiple processes to access the
* same sequence.
*
* The routine uses a UniVerse file to store the next number to use. This
* value is stored in a record named after the supplied argument. The
* routine always attempts to read the number from the file, so that the
* record for the sequence becomes locked. It increments and stores the
* value in common storage, writing the next value back to file each
* time. Writing back this value frees the lock.
*
* Declare shared memory storage.
Common /ErrorTableSequences/ Initialized, NextVal, SeqFile
EQUATE RoutineName TO 'GetNextErrorTableID'
If NOT(Initialized) Then
   * Not initialised. Attempt to open the file.
   Initialized = 1
   Open "ErrorTableSequences" TO SeqFile Else
      * Open failed. Create the sequence file.
      EXECUTE "CREATE.FILE ErrorTableSequences 2 1 1"
      Open "ErrorTableSequences" TO SeqFile Else Ans = -1
   End
End
* Read the named record from the file.
* This obtains the lock (waiting if necessary).
Readu NextVal From SeqFile, Table_Name Else NextVal = 1
Ans = NextVal
NextVal = NextVal + 1
* Increment the sequence value, and write back to file.
* This releases the lock.
Write NextVal On SeqFile, Table_Name Else Ans = -1
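The Readu/Write pair is what makes this safe under concurrency: Readu takes an update lock on the sequence record, and writing the incremented value back releases it, so two jobs can never be handed the same number. A minimal calling sketch, assuming the routine is compiled as DSU.GetNextErrorTableID (the sequence name is a placeholder):

DEFFUN GetNextErrorTableID(Table_Name) Calling 'DSU.GetNextErrorTableID'
vNextID = GetNextErrorTableID("ERR_CUSTOMER")   ;* hypothetical sequence name
If vNextID = -1 Then Call DSLogFatal("Sequence file unavailable", "JobControl")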
GetNextModSeqNo: Gets the next Mod Run Code from an initialised sequence
This routine gets the next Mod Run Number in a sequence that was initialised. The arguments are Mod_Code_Parm and Supplier_ID_Parm, which combined form the key for this instance of a sequence.

GetParameterArray:
* GetParameterArray(Arg1)
* Description: Get parameters
* Written by:
* Notes:
* Bag of Tricks Version 2.3.0 Release Date 2001-10-01
* Arg1 = Path and Name of Parameter File
*
* Result = ( <1> = Parameter names, <2> = Parameter values)
* -----------------------------------------------------------
DEFFUN FileFound(A) Calling 'DSU.FileFound'
cBlank = ''
cName = 1
cValue = 2
vParamFile = Arg1
aParam = cBlank
vParamCnt = 0
vCurRoutineName = 'Routine: GetParameterArray'
vFailed = @FALSE
Done = @FALSE
IF vParamFile AND FileFound(vParamFile) Then
   OPENSEQ vParamFile TO hParamFile Then
      Loop
         READSEQ vLineRaw FROM hParamFile ON ERROR
            Call DSLogWarn('Error from ':vParamFile:'; Status = ':STATUS(), vCurRoutineName)
            CLOSE hParamFile
            vFailed = @TRUE
            Done = @TRUE
         End Then
            vLine = TRIM(vLineRaw)
            vFirstChar = LEFT(vLine, 1)
            vRemark = LEFT(vLine, 4)
            IF NOT(vFirstChar = cBlank OR vFirstChar = '#' OR vFirstChar = '*' OR vFirstChar = '"' OR vFirstChar = "'" OR vFirstChar = ';' OR vFirstChar = ':' OR vFirstChar = '[' OR vRemark = 'REM ') THEN
               vParamCnt += 1   ;* Add to any parameter array passed as an argument
               aParam<1, vParamCnt> = TRIM(FIELD(vLine, '=', cName))
               aParam<2, vParamCnt> = FIELD(vLine, '=', cValue)
            END
         END ELSE
            Done = @TRUE
         END
      Until Done Do Repeat
      CLOSE hParamFile
   End Else
      Call DSLogWarn('Error from ':vParamFile:'; Status = ':STATUS(), vCurRoutineName)
      vFailed = @TRUE
   End
End Else
   vFailed = @TRUE
End
Call DSLogInfo("Values loaded from file: ":vParamFile:@AM:aParam, vCurRoutineName)
If vFailed Then
   Ans = "ERROR"
End Else
   Ans = aParam
End
LastDayofMonth: Returns the last day of the month
Deffun DSRMessage(A1, A2) Calling "*DataStage*DSR_MESSAGE"
Equate TransformName To "ConvertMonth"
* Check the format of the input value.
If IsNull(Arg1) or (Len(Arg1) < 6) Then
   Ans = ""
   GoTo ExitLastDayMonth
End
InYear = Substrings(Arg1, 1, 4)
InMonth = Substrings(Arg1, 5, 2)
If InMonth < 1 Or InMonth > 12 Then
   Ans = ""
   GoTo ExitLastDayMonth
End
* Work out the number of days in the month.
Begin Case
   Case InMonth = "1"
      OutDt = "31"
   Case InMonth = "2"
      * Simplified leap-year rule (century years are not handled)
      If Mod(InYear, 4) <> 0 Then OutDt = "28"
      If Mod(InYear, 4) = 0 Then OutDt = "29"
   Case InMonth = "3"
      OutDt = "31"
   Case InMonth = "4"
      OutDt = "30"
   Case InMonth = "5"
      OutDt = "31"
   Case InMonth = "6"
      OutDt = "30"
   Case InMonth = "7"
      OutDt = "31"
   Case InMonth = "8"
      OutDt = "31"
   Case InMonth = "9"
      OutDt = "30"
   Case InMonth = "10"
      OutDt = "31"
   Case InMonth = "11"
      OutDt = "30"
   Case InMonth = "12"
      OutDt = "31"
End Case
Ans = OutDt : "-" : InMonth : "-" : InYear
ExitLastDayMonth:
LogToErrorFile: Logs errors to an error hashed file
* FUNCTION LogToErrorFile(Table,Field_Name,Check_Value,Error_Number,Error_Text_1,Error_Text_2,Error_Text_3,Additional_Message)
*
* Writes error messages to a hash file
*
* Input Parameters : Arg1: Table = The name of the control table being checked
* Arg2: Field_Name = The name of the field that is in error
* Arg3: Check_Value = The value used to look up in the hash file to try and get a lookup match
* Arg4: Error_Number = The error number returned
* Arg5: Error_Text_1 = First error message argument. Used to build the default error message
* Arg6: Error_Text_2 = Second error message argument. Used to build the default error message
* Arg7: Error_Text_3 = Third error message argument. Used to build the default error message
* Arg8: Additional_Message = Any text that could be stored against an error
*
RoutineName = "LogToErrorFile"
Common /HashLookup/ FileHandles(100), FilesOpened
Common /TicketErrorCommon/ ModRunID, TicketFileID, TicketSequence, TicketSetKey, JobStageName, ModRootPath
$INCLUDE DSINCLUDE JOBCONTROL.H
Ans = "ERROR"
If System(91) Then
   OsType     = 'NT'
   OsDelim    = '\'
   NonOsDelim = '/'
   Move       = 'move '
End Else
   OsType     = 'UNIX'
   OsDelim    = '/'
   NonOsDelim = '\'
   Move       = 'mv -f '
End
JobName = DSGetJobInfo(DSJ.ME, DSJ.JOBNAME)
Path = ModRootPath : OsDelim : "error" : OsDelim
FileName = "ErrorLog_HF." : ModRunID
PathFile = Path : FileName
Call DSLogInfo(Path : "-- checking --" : PathFile, RoutineName)
vMessage = "INLOG Error Log = " : PathFile
*Call DSLogInfo(vMessage, RoutineName)
vMessage = "INLOG Error Log Data = " : ModRunID : "|" : TicketFileID : "|" : TicketSequence : "|" : TicketSetKey : "|" : Table : "|" : Field_Name : "|" : Check_Value : "|" : Error_Number : "|" : Additional_Message
*Call DSLogInfo(vMessage, RoutineName)
Key = JobName : JobStageName : ModRunID : TicketFileID : TicketSequence : TicketSetKey : Table : Field_Name
Err_Rec = ""
Err_Rec<1> = JobName
Err_Rec<2> = JobStageName
Err_Rec<3> = ModRunID
Err_Rec<4> = TicketFileID
Err_Rec<5> = TicketSequence
Err_Rec<6> = TicketSetKey
Err_Rec<7> = Table
Err_Rec<8> = Field_Name
Err_Rec<9> = Check_Value
Err_Rec<10> = Error_Number
Err_Rec<11> = Error_Text_1
Err_Rec<12> = Error_Text_2
Err_Rec<13> = Error_Text_3
Err_Rec<14> = Additional_Message
* Attempt to find the table name in our cache.
Locate FileName in FilesOpened Setting POS Then
   Write Err_Rec To FileHandles(POS), Key Then
      TAns = 0
   End Else
      TAns = -1
   End
End Else
   * Table is not in cache of opened tables, so open it.
   Openpath PathFile To FileHandles(POS) Then
      FilesOpened<-1> = FileName
      Write Err_Rec To FileHandles(POS), Key Then TAns = 0 Else TAns = -1
   End Else
      TAns = -2
   End
End
Ans = "ERROR"
Return(Ans)
LogToHashFile:
* FUNCTION LogToHashFile(ModRunNum,TGrp,TSeq,SetKey,Table,FieldNa,KeyValue,Error,Msg,SeverityInd)
*
* Writes error messages to a hash file
*
* Input Parameters : Arg1: ModRunNum = The unique number allocated to a run of a module
* Arg2: TGrp = The Ticket Group Number of the current row
* Arg3: TSeq = The Ticket Sequence Number of the current row
* Arg4: SetKey = A key to identify a set of rows, e.g. an invoice number for a set of invoice lines
* Arg5: Table = The name of the control table being checked
* Arg6: FieldNa = The name of the field that is in error
* Arg7: KeyValue = The value used to look up in the hash file to try and get a lookup match
* Arg8: Error = The error number returned
* Arg9: Msg = Any text that could be stored against an error
* Arg10: SeverityInd = An indicator to state the error severity level
RoutineName = "LogToHashFile"
Common /HashLookup/ FileHandles(100), FilesOpened
Common /TicketCommon/ Ticket_Group, Ticket_Sequence, Set_Key, Job_Stage_Name, Mod_Root_Path, Generic_Root_Path, Chk_Hash_File_Name, Mod_Run_Num
$INCLUDE DSINCLUDE JOBCONTROL.H
TAns = 0
If System(91) Then
   OsType     = 'NT'
   OsDelim    = '\'
   NonOsDelim = '/'
   Move       = 'move '
End Else
   OsType     = 'UNIX'
   OsDelim    = '/'
   NonOsDelim = '\'
   Move       = 'mv -f '
End
JobName = DSGetJobInfo(DSJ.ME, DSJ.JOBNAME)
* StageName = DSGetStageInfo(DSJ.ME, DSJ.ME, DSJ.STAGENAME)
Path = Mod_Root_Path : OsDelim : "error" : OsDelim
FileName = "ErrorLog_HF." : Mod_Run_Num
PathFile = Path : FileName
*Message = "INLOG Error Log = " : PathFile
*Call DSLogInfo(Message, RoutineName)
*Message = "INLOG Error Log Data = " : ModRunNum : "|" : TGrp : "|" : TSeq : "|" : SetKey : "|" : Table : "|" : FieldNa : "|" : KeyValue : "|" : Error : "|" : Msg
*Call DSLogInfo(Message, RoutineName)
Key = JobName : Job_Stage_Name : ModRunNum : TGrp : TSeq : SetKey : Table : FieldNa
Err_Rec = ""
Err_Rec<1> = JobName
Err_Rec<2> = Job_Stage_Name
Err_Rec<3> = ModRunNum
Err_Rec<4> = TGrp
Err_Rec<5> = TSeq
Err_Rec<6> = SetKey
Err_Rec<7> = Table
Err_Rec<8> = FieldNa
Err_Rec<9> = KeyValue
Err_Rec<10> = Error
Err_Rec<11> = Msg
Err_Rec<12> = SeverityInd
* Attempt to find the table name in our cache.
Locate FileName in FilesOpened Setting POS Then
   Write Err_Rec To FileHandles(POS), Key Then
      TAns = 0
   End Else
      TAns = -1
   End
End Else
   * Table is not in cache of opened tables, so open it.
   Openpath PathFile To FileHandles(POS) Then
      FilesOpened<-1> = FileName
      Write Err_Rec To FileHandles(POS), Key Then TAns = 0 Else TAns = -1
   End Else
      TAns = -2
   End
End
Ans = TAns
RETURN(Ans)
MandatoryFieldCheck: Check whether the field passed is mandatory
Routine to check whether the passed field is populated, and if not, whether it is mandatory. If the field contains "?", it is handled as if it were blank.
The routine uses a control table containing process name, field name, group name and exclusion flag to control whether a field is mandatory. The routine arguments are the field name, the field value, the group key, whether this is the first mandatory check for the record, and the process name when the first-check flag is "Y". A variable kept in memory (Mandlist) is used to record the mandatory check failures.
When the passed field name is "Getmand", no processing is performed except to return the Mandlist field.
* Routine to check whether the passed field is filled, and if not, whether it is mandatory.
*
* The routine uses a UniVerse file "MANDATORY_FIELD_HF" which contains the mandatory field controls
*
* Arg1 Field name to be checked (literal)
* Arg2 Field value
* Arg3 Group name
* Arg4 1st call for record
* Arg5 The process name on the first call (this is saved in storage for subsequent calls)
*
* Declare shared memory storage.
Common /Mandatory/ Initialized, SeqFile, DataIn, GroupIn, GroupV, Mandlist, ProcessIn, ProcessV
EQUATE RoutineName TO 'MandatoryFieldCheck'
* Call DSLogInfo("Routine started ":Arg1, RoutineName)
If NOT(Initialized) Then
   Initialized = 1
   * Call DSLogInfo("Initialisation Started", RoutineName)
   Open "MANDATORY_FIELD_HF" TO SeqFile Else
      Call DSLogFatal("Cannot open Mandatory field control file", RoutineName)
      Ans = -1
   End
   * Call DSLogInfo("Initialisation Complete", RoutineName)
End
If Arg4 = "Y" Then Mandlist = ""
ProcessIn = "":Trim(Arg5)
If IsNull(ProcessIn) or ProcessIn = "" Then ProcessV = " " Else ProcessV = ProcessIn
If Arg1 = "Getmand" Then
   Ans = Mandlist
End Else
   DataIn = "":Trim(Arg2)
   GroupIn = "":Trim(Arg3)
   If IsNull(GroupIn) or GroupIn = "" Then GroupV = " " Else GroupV = GroupIn
   If IsNull(DataIn) or DataIn = "" or DataIn = "?" Then
      * Field is blank - check for mandatory
      * Call DSLogInfo(Arg1:" blank - checking whether mandatory", RoutineName)
      mystring = ProcessV : Arg1 : GroupV : "X"
      Read Retval From SeqFile, mystring Then
         * Call DSLogInfo(Arg1:" Group specifically excluded", RoutineName)
         Ans = 0
      End Else
         mystring = ProcessV : Arg1 : GroupV
         Read Retval From SeqFile, mystring Then
            * Call DSLogInfo(Arg1:" Group specifically included", RoutineName)
            Ans = 1
         End Else
            mystring = ProcessV : Arg1 : "ALL"
            Read Retval From SeqFile, mystring Then
               * Call DSLogInfo(Arg1:" Global mandatory", RoutineName)
               Ans = 1
            End Else
               * Call DSLogInfo(Arg1:" blank not mandatory", RoutineName)
               Ans = 0
            End
         End
      End
   End Else
      Ans = 0
      * Call DSLogInfo(Arg1:" Not blank", RoutineName)
   End
   If Ans = 1 Then
      If Mandlist = "" Then Mandlist = Arg1 Else Mandlist = Mandlist : "," : Arg1
   End
End
Map: (Routine Name)
* FUNCTION Map(Value,FieldName,Format,Default,Msg,ErrorLogInd)
*
* Executes a lookup against a hashed file using a key
*
* Input Parameters : Arg1: Value = The value to be mapped
* Arg2: FieldName = The name of the field that is either the target of the derivation or the source field that the value is contained in
* Arg3: Format = The name of the hash file containing the mapping data
* Arg4: Default = The default value to return if the value is not found
* Arg5: Msg = Any text you want stored against an error
* Arg6: SeverityInd = An indicator of the severity level
* Arg7: ErrorLogInd = An indicator to indicate if errors should be logged
* Arg8: HashfileLocation = An indicator of whether the hash file is Generic or Local
*
* Return Values: If the value is not found, the return value is -1, or the Default value if that is supplied
* If the format table is not found, the return value is -2
*
RoutineName = 'Map'
Common /HashLookup/ FileHandles(100), FilesOpened
Common /TicketCommon/ Ticket_Group, Ticket_Sequence, Set_Key, Job_Stage_Name, Mod_Root_Path, Generic_Root_Path, Chk_Hash_File_Name, Mod_Run_Num
*Message = "Map Job Stage Name ==>" : Job_Stage_Name
*Call DSLogInfo(Message, RoutineName)
*Message = "Map Mod Root Path ==>" : Mod_Root_Path
*Call DSLogInfo(Message, RoutineName)
*Message = "Generic Root Path ==>" : Generic_Root_Path
*Call DSLogInfo(Message, RoutineName)
*Message = "Map Chk_Hash_File_Name ==>" : Chk_Hash_File_Name
*Call DSLogInfo(Message, RoutineName)
*Message = "Map Mod_Run_Num ==>" : Mod_Run_Num
*Call DSLogInfo(Message, RoutineName)
DEFFUN LogToHashFile(ModRunNum,Ticket_Group,Ticket_Sequence,Set_Key,Table,FieldName,Key,Error,Text,SeverityInd) Calling 'DSU.LogToHashFile'
If Len(Chk_Hash_File_Name) = 3 And HashFileLocation = "G" Then Format_Extn = Chk_Hash_File_Name Else Format_Extn = Mod_Run_Num[1,5]
If System(91) Then
   OsType     = 'NT'
   OsDelim    = '\'
   NonOsDelim = '/'
   Move       = 'move '
End Else
   OsType     = 'UNIX'
   OsDelim    = '/'
   NonOsDelim = '\'
   Move       = 'mv -f '
End
ColumnPosition = 0
PositionReturn = 0
Table = Format
If HashFileLocation = "G" Then
   PathFormat = Generic_Root_Path : OsDelim : "format" : OsDelim : Format : "_HF." : Format_Extn
End Else
   PathFormat = Mod_Root_Path : OsDelim : "format" : OsDelim : Format : "_HF." : Format_Extn
End
If IsNull(Value) Then Chk_Value = "" Else Chk_Value = Value
*Message = "Map PathFormat ==>" : PathFormat
*Call DSLogInfo(Message, RoutineName)
*Message = "Value ==>" : Value
*Call DSLogInfo(Message, RoutineName)
*Message = "Format ==>" : Format
*Call DSLogInfo(Message, RoutineName)
*Message = "Default ==>" : Default
*Call DSLogInfo(Message, RoutineName)
*Message = "ErrorLogInd ==>" : ErrorLogInd
*Call DSLogInfo(Message, RoutineName)
* Set the default answer for when a value is not found
Begin Case
   Case UpCase(Default) = "NODEF"
      Default_Ans = "-1"
   Case Default = "PASS"
      NumFields = Dcount(Chk_Value, "|")
      If NumFields > 1 Then
         Default_Ans = Field(Chk_Value, "|", 2)
      End Else
         Default_Ans = Chk_Value
      End
   Case @TRUE
      If UpCase(Field(Default, "|", 1)) <> "BL" Then Default_Ans = Default Else Default_Ans = -1
End Case
* Determine if we are returning one column or the entire row.
If Num(ColumnPosition) Then
   ColumnPosition = Int(ColumnPosition)
   If ColumnPosition > 0 and ColumnPosition < 99999 Then PositionReturn = 1
End
* Attempt to find the table name in our cache.
Locate Format in FilesOpened Setting POS Then
   Read Rec From FileHandles(POS), Chk_Value Then
      If PositionReturn Then Ans = Rec Else Ans = Rec
   End Else
      Ans = Default_Ans
   End
End Else
   * Table is not in cache of opened tables, so open it.
   Openpath PathFormat To FileHandles(POS) Then
      FilesOpened<-1> = Format
      Read Rec From FileHandles(POS), Chk_Value Else Rec = Default_Ans
      If PositionReturn And Rec <> Default_Ans Then Ans = Rec Else Ans = Rec
   End Else
      Ans = "-2"
   End
End
If UpCase(Field(Default, "|", 1)) = "BL" and Ans <> -2 Then
   If Chk_Value = "" Then Ans = Field(Default, "|", 2)
End
*Message = "Outside LOGGING" : Mod_Run_Num : "|" : Ticket_Group : "|" : Ticket_Sequence : "|" : Set_Key : "|" : Table : "|" : FieldName : "|" : Chk_Value : "|" : Msg
*Call DSLogInfo(Message, RoutineName)
*Message = "OUTSIDE PASS Trans Default_Ans ==>" : Default_Ans : " Ans ==> " : Ans
*Call DSLogInfo(Message, RoutineName)
LogPass = "N"
If (Default = "PASS" and Default_Ans <> Ans) Then LogPass = "Y"
If LogPass = "Y" Then
   *Message = "PASS Trans Default_Ans ==>" : Default_Ans : " Ans ==> " : Ans
   *Call DSLogInfo(Message, RoutineName)
End
If (Ans = "-1" or Ans = "-2" or UpCase(Ans) = "BLOCKED" or LogPass = "Y" or SeverityInd = "I") and ErrorLogInd = "Y" Then
   *Message = "Write to Log Ans==> " : Ans : " ErrorInd==> " : ErrorLogInd
   *Message = "LOGGING" : Mod_Run_Num : "|" : Ticket_Group : "|" : Ticket_Sequence : "|" : Table : "|" : FieldName : "|" : Chk_Value : "|" : Ans
   *Call DSLogInfo(Message, RoutineName)
   Ret_Code = LogToHashFile(Mod_Run_Num, Ticket_Group, Ticket_Sequence, Set_Key, Table, FieldName, Chk_Value, Ans, Msg, SeverityInd)
End
RETURN(Ans)
OutputJobStats: Outputs the job link statistics
$INCLUDE DSINCLUDE JOBCONTROL.H
hJob = DSAttachJob(JobName, DSJ.ERRFATAL)
Start_TS = DSGetJobInfo(hJob, DSJ.JOBSTARTTIMESTAMP)
End_TS = DSGetJobInfo(hJob, DSJ.JOBLASTTIMESTAMP)
Elapsed_Secs_Cnt = DSGetJobInfo(hJob, DSJ.JOBELAPSED)
Job_Term_Status = DSGetJobInfo(hJob, DSJ.JOBINTERIMSTATUS)
User_Status = DSGetJobInfo(hJob, DSJ.USERSTATUS)
ErrCode = DSDetachJob(hJob)
Ans = Start_TS : "|" : End_TS : "|" : Elapsed_Secs_Cnt : "|" : Job_Term_Status : "|" : User_Status
Pattern:
Routine = "Pattern"
Var_Len = Len(Value)
Pattern = Value
For i = 1 To Var_Len
   If Num(Value[i,1]) Then
      Pattern[i,1] = "n"
   End Else
      If Alpha(Value[i,1]) Then
         Pattern[i,1] = "a"
      End Else
         Pattern[i,1] = Value[i,1]
      End
   End
Next i
Ans = Pattern

PatternMatchCheck: Checks a passed field to see if it matches the pattern which is also passed:
The input field is checked to see if it conforms to the format that is passed as a second parameter. The result of the routine is true if the pattern matches the required format, and false if it does not. If the second parameter is empty, then true is returned.
Equate TransformName To "PatternMatchCheck"
Begin Case
   Case Arg2 = ""   ;* No pattern - so return true
      Ans = 1
   Case Arg3 = ""   ;* Only 1 pattern passed
      Ans = Arg1 Matches Arg2
   Case 1           ;* All other cases
      Ans = Arg1 Matches Arg2 : CHAR(253) : Arg3
End Case
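The two routines are designed to be used together: Pattern reduces a value to a shape string of "a"/"n" characters, and PatternMatchCheck tests a value against UniVerse match patterns. Worked examples (values are illustrative):

* Pattern("AB-1234")                        -> "aa-nnnn"
* PatternMatchCheck("AB-1234", "2A'-'4N")   -> 1   (two alpha, literal '-', four numeric)
* PatternMatchCheck("AB-1234", "")          -> 1   (no pattern supplied, so true)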
PrepareJob:
$INCLUDE DSINCLUDE JOBCONTROL.H
Job_Handle = DSAttachJob(Job_Name, DSJ.ERRWARN)
ErrCode1 = DSPrepareJob(Job_Handle)
ErrCode2 = DSDetachJob(Job_Handle)
Ans = ErrCode2
RangeCheck:
* FUNCTION RangeCheck(Value,MinValue,MaxValue,FieldName,Msg,SeverityInd,ErrorLogInd)
*
* Checks that a value falls within a supplied range
*
* Input Parameters : Arg1: Value = The value to be checked
* Arg2: MinValue = The minimum value allowed
* Arg3: MaxValue = The maximum value allowed
* Arg4: FieldName = The name of the source field being checked
* Arg5: Msg = Any text you want stored against an error
* Arg6: SeverityInd = An indicator of the severity level
* Arg7: ErrorLogInd = An indicator to indicate if errors should be logged
*
* Return Values: If the value is outside the range, the return value is -1; if an argument is not a number, -2; else the value supplied is returned
*
RoutineName = 'RangeChk'
Common /TicketCommon/ Ticket_Group, Ticket_Sequence, Set_Key, Mod_Root_Path, Generic_Root_Path, Chk_Hash_File_Name, Mod_Run_Num
DEFFUN LogToHashFile(ModRunNum,Ticket_Group,Ticket_Sequence,Set_Key,Table,FieldName,Key,Error,Text,SeverityInd) Calling 'DSU.LogToHashFile'
Table = "Min: " : MinValue : " to Max: " : MaxValue
Msg1 = ""
Msg2 = ""
Msg3 = ""
Msg4 = ""
Ans = ""
If Num(Value) = 0 Then
   Msg1 = "-Value is not a number"
   Ans = -2
End
If Num(MinValue) = 0 Then
   Msg2 = "-MinValue is not a number"
   Ans = -2
End
If Num(MaxValue) = 0 Then
   Msg3 = "-MaxValue is not a number"
   Ans = -2
End
If Ans <> -2 Then
   If Value < MinValue Or Value > MaxValue Then
      Msg4 = "-Value is outside the range"
      Ans = -1
   End
End
OutputMsg = Msg : Msg1 : Msg2 : Msg3 : Msg4
*Call DSLogInfo(OutputMsg, RoutineName)
If Ans <> -1 and Ans <> -2 Then Ans = Value
If (Ans = "-1" or Ans = "-2") and ErrorLogInd = "Y" Then
   Ret_Code = LogToHashFile(Mod_Run_Num, Ticket_Group, Ticket_Sequence, Set_Key, Table, FieldName, Value, Ans, OutputMsg, SeverityInd)
End
RETURN(Ans)
ReadParameter: Read parameter value from configuration file
* Function : ReadParameter - Read parameter value from configuration file
* Arg      : ParameterName (default=JOB_PARAMETER)
*            DefaultValue (default='')
*            ConfigFile (default=@PATH/config.ini)
* Return   : Parameter value from config file
Function ReadParameter(ParameterName, DefaultValue, ConfigFile)
If ParameterName = "" Then ParameterName = "JOB_PARAMETER"
If ConfigFile = "" Then ConfigFile = @PATH:"/config.ini"
ParameterValue = DefaultValue
OpenSeq ConfigFile To fCfg Else Call DSLogFatal("Error opening file ":ConfigFile, "ReadParameter")
Loop While ReadSeq Line From fCfg
   If Trim(Field(Line, '=', 1)) = ParameterName Then
      ParameterValue = Trim(Field(Line, '=', 2))
      Exit
   End
Repeat
CloseSeq fCfg
Ans = ParameterValue
RETURN(Ans)
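A usage sketch, assuming the routine is compiled as DSU.ReadParameter and a config.ini containing lines like SRC_DIR=/data/in (names are placeholders):

DEFFUN ReadParameter(P, D, F) Calling 'DSU.ReadParameter'
vSrcDir = ReadParameter("SRC_DIR", "/tmp", "")   ;* "" falls back to @PATH/config.ini
Call DSLogInfo("Source directory = " : vSrcDir, "JobControl")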
ReturnNumber:
String = Arg1
Slen = Len(String)
Scheck = 0
Rnum = ""
For Scheck = 1 To Slen
   Schar = Substrings(String, Scheck, 1)
   If Num(Schar) Then Rnum = Rnum : Schar
Next Scheck
Ans = Rnum
ReturnNumbers:
length = 0
length = Len(Arg1)
length1 = 1
Outer = length
postNum = ''
counter = 1
For Outer = length To 1 Step -1
   Arg2 = Arg1[Outer, 1]
   If Num(Arg2) Then
      length2 = counter - 1
      If length2 = 0 Then
         length2 = counter
         postNum = Right(Arg1, length2)
      End Else
         postNum = Right(Arg1, counter)
      End
   End
   counter = counter + 1
Next Outer
Ans = postNum
ReverseDate:
Function ReverseDate(DateVal)
* Date may be in the form DDMMYYYY (e.g. 01102003) or DMMYYYY (e.g. 1102003)
If Len(DateVal) = 7 Then
   NDateVal = "0" : DateVal
End Else
   NDateVal = DateVal
End
Ans = NDateVal[5,4] : NDateVal[3,2] : NDateVal[1,2]
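Worked examples of the padding and reordering (values are illustrative):

* ReverseDate("01102003") -> "20031001"   (DDMMYYYY -> YYYYMMDD)
* ReverseDate("1102003")  -> padded to "01102003" first, then "20031001"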
RunJob:
The routine runs a job. Job parameters may be supplied. The result is a dynamic array containing the job status and row count information for each link. The routine UtilityGetRunJobInfo can be used to interpret this result. As well as the job name and job parameters, the routine parameters allow the job warning limit and row count limit to be set.
Format of returned dynamic array:
Status<1> = Jobname=FinishStatus
Status<2> = Jobname
Status<3> = JobStartTimeStamp
Status<4> = JobStopTimeStamp
Status<5> = LinkNames (value mark @VM delimited)
Status<6> = RowCount (value mark @VM delimited)
Function RunJob(Arg1, Arg2, Arg3, Arg4)
* Demonstrate how to run a job within the GUI development environment. Arguments may
* be passed in. The result is a dynamic array with the resulting status and run
* statistics (row counts for every link on every stage in the job)
*
$INCLUDE DSINCLUDE JOBCONTROL.H
Equate RoutineName To 'RunJob'
Equate RunJobName  To Arg1
Equate Params      To Arg2
Equate RowLimit    To Arg3
Equate WarnLimit   To Arg4
Dim Param(100, 2)   ;* Limited to max of 100 parameters
Deffun DSRMessage(A1, A2, A3) Calling "*DataStage*DSR_MESSAGE"
Deffun DSRTimestamp Calling "DSR_TIMESTAMP"
JobHandle = ''
Info = ''
ParamCount = Dcount(Params, '|')
If RowLimit = '' Then RowLimit = 0
If WarnLimit = '' Then WarnLimit = 0
For ParamNum = 1 to ParamCount
   Param(ParamNum, 1) = Field(Field(Params, '|', ParamNum), '=', 1)
   Param(ParamNum, 2) = Field(Field(Params, '|', ParamNum), '=', 2)
Next ParamNum
JobStartTime = DSRTimestamp()
JobHandle = DSAttachJob(RunJobName, DSJ.ERRFATAL)
* Prepare the job
ErrorCode = DSPrepareJob(JobHandle)
Message = DSRMessage('DSTAGE_TRX_I_0014', 'Attaching job for processing - %1 - Status of Attachment = %2', RunJobName:@FM:JobHandle)
Call DSLogInfo(Message, RoutineName)
LimitErr = DSSetJobLimit(JobHandle, DSJ.LIMITROWS, RowLimit)
LimitErr = DSSetJobLimit(JobHandle, DSJ.LIMITWARN, WarnLimit)
* Need to check if error occurred.
ListOfParams = DSGetJobInfo(JobHandle, DSJ.PARAMLIST)
ListCount = Dcount(ListOfParams, ',')
For ParamNum = 1 To ParamCount
   Message = DSRMessage('DSTAGE_TRX_I_0015', 'Setting Job Param - %1 Setting to %2', Param(ParamNum,1):@FM:Param(ParamNum,2))
   Call DSLogInfo(Message, RoutineName)
   ErrCode = DSSetParam(JobHandle, Param(ParamNum,1), Param(ParamNum,2))
Next ParamNum
ErrCode = DSRunJob(JobHandle, DSJ.RUNNORMAL)
ErrCode = DSWaitForJob(JobHandle)
Status = DSGetJobInfo(JobHandle, DSJ.JOBSTATUS)
JobEndTime = DSRTimestamp()
If Status = DSJS.RUNFAILED Then
   Message = DSRMessage('DSTAGE_TRX_E_0020', 'Job Failed: %1', RunJobName)
   Call DSLogWarn(Message, RoutineName)
End
* Retrieve more information about this job run.
Message = DSRMessage('DSTAGE_TRX_I_0016', 'Getting job statistics', '')
Call DSLogInfo(Message, RoutineName)
StageList = DSGetJobInfo(JobHandle, DSJ.STAGELIST)
Message = DSRMessage('DSTAGE_TRX_I_0017', 'List of Stages=%1', StageList)
Call DSLogInfo(Message, RoutineName)
StageCount = Dcount(StageList, ',')   ;* Count number of stages
Info<1> = RunJobName
Info<2> = JobStartTime   ;* StartTime (Timestamp format)
Info<3> = JobEndTime     ;* Now/End (Timestamp format)
FOR Stage = 1 To StageCount
   * Get links on this stage.
   LinkNames = DSGetStageInfo(JobHandle, Field(StageList, ',', Stage), DSJ.LINKLIST)
   Message = DSRMessage('DSTAGE_TRX_I_0018', 'LinkNames for Stage.%1 = %2', Field(StageList,',',Stage):@FM:LinkNames)
   Call DSLogInfo(Message, RoutineName)
   LinkCount = Dcount(LinkNames, ',')
   For StageLink = 1 To LinkCount
      * Get row count for this link name
      RowCount = DSGetLinkInfo(JobHandle, Field(StageList,',',Stage), Field(LinkNames,',',StageLink), DSJ.LINKROWCOUNT)
      Message = DSRMessage('DSTAGE_TRX_I_0019', 'RowCount for %1.%2=%3', Field(StageList,',',Stage):@FM:Field(LinkNames,',',StageLink):@FM:RowCount)
      Call DSLogInfo(Message, RoutineName)
      Info<4,-1> = Field(StageList,',',Stage) : '.' : Field(LinkNames,',',StageLink)
      Info<5,-1> = RowCount
   Next StageLink
Next Stage
Message = DSRMessage('DSTAGE_TRX_I_0020', 'RunJob Status=%1', Info)
Call DSLogInfo(Message, RoutineName)
Ans = RunJobName : '=' : Status : @FM : Info
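A hedged sketch of invoking the routine from a control job and unpicking the returned dynamic array (the job name and parameters are hypothetical):

DEFFUN RunJob(JobName, Params, RowLimit, WarnLimit) Calling 'DSU.RunJob'
vResult = RunJob("LoadCustomers", "SRC_DIR=/data/in|RUN_DATE=20040607", 0, 50)
vStatus = Field(Field(vResult, @FM, 1), "=", 2)   ;* finish status from "Jobname=Status"
vLinks  = vResult<5>                              ;* link names, @VM delimited
vCounts = vResult<6>                              ;* row counts, @VM delimited
Call DSLogInfo("LoadCustomers finished with status " : vStatus, "JobControl")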
RunJobAndDetach:
The routine runs a job. Job parameters may be supplied. The job is detached from, so that others may be started immediately and the control job can finish. As well as the job name and job parameters, the routine parameters allow the job warning limit and row count limit to be set.

Function RunDetachJob(Arg1, Arg2, Arg3, Arg4)
* Run a job, and detach from it so that this job can end
*
$INCLUDE DSINCLUDE JOBCONTROL.H
Equate RoutineName To 'RunJobAndDetach'
Equate RunJobName  To Arg1
Equate Params      To Arg2
Equate RowLimit    To Arg3
Equate WarnLimit   To Arg4
Dim Param(100, 2)   ;* Limited to max of 100 parameters
Deffun DSRMessage(A1, A2, A3) Calling "*DataStage*DSR_MESSAGE"
Deffun DSRTimestamp Calling "DSR_TIMESTAMP"
JobHandle = ''
Info = ''
ParamCount = Dcount(Params, '|')
If RowLimit = '' Then RowLimit = 0
If WarnLimit = '' Then WarnLimit = 0
For ParamNum = 1 to ParamCount
   Param(ParamNum, 1) = Field(Field(Params, '|', ParamNum), '=', 1)
   Param(ParamNum, 2) = Field(Field(Params, '|', ParamNum), '=', 2)
Next ParamNum
* Attach to the job
JobHandle = DSAttachJob(RunJobName, DSJ.ERRWARN)
If JobHandle = 0 Then
   Call DSLogInfo("Job ":RunJobName:" not started - attach failed", RoutineName)
End Else
   * Prepare the job
   ErrorCode = DSPrepareJob(JobHandle)
   Message = DSRMessage('DSTAGE_TRX_I_0014', 'Attaching job for processing - %1 - Status of Attachment = %2', RunJobName:@FM:JobHandle)
   Call DSLogInfo(Message, RoutineName)
   LimitErr = DSSetJobLimit(JobHandle, DSJ.LIMITROWS, RowLimit)
   LimitErr = DSSetJobLimit(JobHandle, DSJ.LIMITWARN, WarnLimit)
   * Need to check if error occurred.
   ListOfParams = DSGetJobInfo(JobHandle, DSJ.PARAMLIST)
   ListCount = Dcount(ListOfParams, ',')
   For ParamNum = 1 To ParamCount
      Message = DSRMessage('DSTAGE_TRX_I_0015', 'Setting Job Param - %1 Setting to %2', Param(ParamNum,1):@FM:Param(ParamNum,2))
      Call DSLogInfo(Message, RoutineName)
      ErrCode = DSSetParam(JobHandle, Param(ParamNum,1), Param(ParamNum,2))
   Next ParamNum
   ErrCode = DSRunJob(JobHandle, DSJ.RUNNORMAL)
   ErrCode = DSDetachJob(JobHandle)
End
Ans = 0

RunShellCommandReturnStatus:
Function RunShellCommandReturnStatus(Command)
Call DSLogInfo('Running command: ':Command, 'RunShellCommandReturnStatus')
Call DSExecute('UNIX', Command, Ans, Ret)
Call DSLogInfo('Output from command: ':Ans, 'RunShellCommandReturnStatus')
Return(Ret)
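A usage sketch (the command string is a placeholder):

DEFFUN RunShellCommandReturnStatus(Cmd) Calling 'DSU.RunShellCommandReturnStatus'
vRet = RunShellCommandReturnStatus("ls /interface/demo")   ;* hypothetical command
If vRet <> 0 Then Call DSLogWarn("Command failed with status " : vRet, "JobControl")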
SegKey:
Segment_Num: An integer representing the order number of the segment in the IDoc
Segment_Parm: A segment parameter containing a string of Y's and N's, in order of Segment_Num, denoting whether the segment should be written to in this module
Key: The value to be mapped
ErrorLogInd: An indicator to indicate if errors should be logged (Note: this is not yet implemented)
Function SegKey(Segment_Num, Segment_Parm, Key, ErrorLogInd)

* FUNCTION SegKey(Segment_Num,Segment_Parm,Key,ErrorLogInd)
*
* Builds a segment key from an ordered set of key fields
*
* Input Parameters : Arg1: Segment_Num
* Arg2: Segment_Parm
* Arg3: Key = An ordered pipe-separated set of segment primary key fields
* Arg4: ErrorLogInd = An indicator to indicate if errors should be logged (Note: this is not yet implemented)
*
* Return Values: If any key part is missing, or the segment should not be written, the return value is "Invalid_Key"; otherwise the concatenated key is returned
*
☻Page 177 of 243☻
*
RoutineName = 'SegKey'
BlankFields = ""
CRLF = Char(13):Char(10)
Message = "IN Seg Key " : Segment_Num : "|" : Segment_Parm : "|" : Key : "|" : ErrorLogInd : "|"
* Call DSLogInfo(Message, RoutineName)

* Determine if this segment should be output
Write_Ind = Field(Segment_Parm,"|",Segment_Num)
If Write_Ind = "Y" Then
   * Count how many keys
   NumKeys = Dcount(Key,"|")
   * Make a list of any keys that are missing
   Blank_Key_Cnt = 0
   ReturnKey = ""
   For i = 1 To NumKeys
      Key_Part = Field(Key,"|",i)
      If Key_Part = "" Then
         Blank_Key_Cnt = Blank_Key_Cnt + 1
         BlankFields = i
      End
      ReturnKey = ReturnKey : Key_Part
   Next i
   If Blank_Key_Cnt > 0 And ErrorLogInd = "Y" Then
      Message = "Error in Segment Key: ": Segment_Num : " There are " : Blank_Key_Cnt : " Missing Key Parts " : "The Missing Key Parts are " : BlankFields
      * Call DSLogInfo(Message, RoutineName)
   End
   If Blank_Key_Cnt > 0 Then
      Ans = "Invalid_Key"
   End Else
      Ans = ReturnKey
   End
End Else
   Ans = "Invalid_Key"
End
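Some illustrative calls (all values invented) showing the possible outcomes:

* SegKey(2, "Y|Y|N", "0001|20040607|FR", "N")  ->  "000120040607FR"  (segment 2 flagged "Y", all key parts present)
* SegKey(3, "Y|Y|N", "0001|20040607|FR", "N")  ->  "Invalid_Key"     (segment 3 flagged "N")
* SegKey(1, "Y|Y|N", "0001||FR", "Y")          ->  "Invalid_Key"     (second key part missing; an error message is built for the log)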
SetDSParamsFromFile: A before-job subroutine to set job parameters from an external flat file.

The input argument should be of the form: ParamDir,ParamFile. If ParamDir is not supplied, the routine assumes the project directory. If ParamFile is not supplied, the routine assumes the job name (this could be dangerous). The routine will abort the job if anything doesn't go to plan.

Note: a lock is placed to stop the same job from running another instance of this routine. The second instance will have to wait for the routine to finish before being allowed to proceed. The lock is released however the routine terminates (normal, abort, ...).

The parameter file should contain non-blank lines of the form ParName = ParValue. White space is ignored. The routine may be invoked via the normal Before Job Subroutine setting, or from within the 'Job Properties -> Job Control' window by entering "Call DSU.SetParams('MyDir,MyFile', ErrorCode)".

For Andrew Webb's eyes only: the routine could be made to work off a hashed file, or environment variables, quite easily. It is not possible to create job parameters on-the-fly, because they are referenced within a job via an EQUATE of the form JobParam%%1 = STAGECOM.STATUS<7,1>, JobParam%%2 = STAGECOM.STATUS<7,2>, etc., and this is then compiled up... so forget it!
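For instance (directory, file and parameter names invented), calling the routine with the input argument '/data/project/params,myjob.parms' would read a file containing lines such as:

SourceDir  = /data/project/input
TargetDB   = DEV
DBPassword = secret

Each name must already exist as a parameter of the job; as the code below shows, any parameter whose name contains PASSWORD is set but its value is not echoed to the log.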
Subroutine SetDSParamsFromFile(InputArg, ErrorCode)
$INCLUDE DSINCLUDE DSD_STAGE.H
$INCLUDE DSINCLUDE JOBCONTROL.H
$INCLUDE DSINCLUDE DSD.H
$INCLUDE DSINCLUDE DSD_RTSTATUS.H
Equ SetParams To 'SetDSParamsFromFile'
ErrorCode = 0 ;* set this to non-zero to stop the stage/job
JobName = Field(STAGECOM.NAME,'.',1,2)
ParamList = STAGECOM.JOB.CONFIG
If ParamList = '' Then
   Call DSLogWarn('Parameters may not be externally derived if the job has no parameters defined.', SetParams)
   Return
End
Call DSLogInfo("SetDSParamsFromFile inputarg >" : InputArg : "<", SetParams)
ArgList = Trims(Convert(',',@FM,InputArg))
ParamDir = ArgList<1>
If ParamDir = '' Then
   ParamDir = '.'
End
ParamFile = ArgList<2>
If ParamFile = '' Then
   ParamFile = JobName
End
If System(91) Then
   Delim = '\'
End Else
   Delim = '/'
End
ParamPath = ParamDir:Delim:ParamFile
Call DSLogInfo('Setting Job Parameters from external source ':ParamPath, SetParams)
Call DSLogInfo(JobName:' ':ParamList, SetParams)
OpenSeq ParamPath To ParamFileVar On Error
   ErrorCode = 1
   Call DSLogFatal('File open error on ':ParamPath:'. Status = ':Status(), SetParams)
End Else
   Call DSLogWarn('File ':ParamPath:' not found - using default parameters.', SetParams)
   Return
End
StatusFileName = FileInfo(DSRTCOM.RTSTATUS.FVAR, 1)
Readvu LockItem From DSRTCOM.RTSTATUS.FVAR, JobName, 1 On Error
   Call DSLogFatal('File read error for ':JobName:' on ':StatusFileName:'. Status = ':Status(), SetParams)
   ErrorCode = 1
   Return
End Else
   Call DSLogFatal('Failed to read ':JobName:' record from ':StatusFileName, SetParams)
   ErrorCode = 2
   Return
End
StatusId = JobName:'.':STAGECOM.WAVE.NUM
Readv ParamValues From DSRTCOM.RTSTATUS.FVAR, StatusId, JOB.PARAM.VALUES On Error
   Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
   ErrorCode = 1
   Call DSLogFatal('File read error for ':StatusId:' on ':StatusFileName:'. Status = ':Status(), SetParams)
   Return
End Else
   Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
   ErrorCode = 2
   Call DSLogFatal('Failed to read ':StatusId:' record from ':StatusFileName, SetParams)
   Return
End
Loop
   ReadSeq ParamData From ParamFileVar On Error
      Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
      ErrorCode = 4
      Call DSLogFatal('File read error on ':ParamPath:'. Status = ':Status(), SetParams)
      Return
   End Else
      Exit
   End
   Convert '=' To @FM In ParamData
   ParamName = Trim(ParamData<1>)
   Del ParamData<1>
   ParamValue = Convert(@FM,'=',TrimB(ParamData))
   Locate(ParamName, ParamList, 1; ParamPos) Then
      If Index(UpCase(ParamName),'PASSWORD',1) = 0 Then
         Call DSLogInfo('Parameter "':ParamName:'" set to "':ParamValue:'"', SetParams)
      End Else
         Call DSLogInfo('Parameter "':ParamName:'" set but not displayed on log', SetParams)
      End
   End Else
      Call DSLogWarn('Parameter ':ParamName:' does not exist in Job ':JobName, SetParams)
      Continue
   End
   ParamValues<1,ParamPos> = ParamValue
Repeat
Writev ParamValues On DSRTCOM.RTSTATUS.FVAR, StatusId, JOB.PARAM.VALUES On Error
   Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
   ErrorCode = 5
   Call DSLogFatal('File write error for ':StatusId:' on ':StatusFileName:'. Status = ':Status(), SetParams)
   Return
End Else
   Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
   ErrorCode = 6
   Call DSLogFatal('Unable to write ':StatusId:' record on ':StatusFileName:'. Status = ':Status(), SetParams)
   Return
End
Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
STAGECOM.JOB.STATUS = ParamValues
setParamsForFileSplit: Using values from a control file, this routine runs a job multiple times, loading the specified number of rows on each run.
Function setParamsForFileSplit(ControlFileName, JobName)
**********************************************************************
* Nick Bond....                                                      *
* This routine retrieves values from a control file and passes them  *
* as parameters to a job, which is run once for each record in the   *
* control file.                                                      *
**********************************************************************
$INCLUDE DSINCLUDE JOBCONTROL.H
Equate Routine To 'setParamsForFileSplit'
Call DSLogInfo('Starting Routine ', Routine)
vFileName = ControlFileName
vJobName = JobName
vRecord = 1
******** Open Control File and retrieve split values.
Call DSLogInfo('Opening File: ':vFileName, Routine)
OPEN vFileName TO vFILE ELSE Call DSLogFatal("Can't open file: ":vFileName, Routine)
Call DSLogInfo('File is open: ':vFileName, Routine)
******** Start loop which gets parameters from control file and runs job.
Loop
   ** Check record exists for record id
   READ vStart FROM vFILE, vRecord Then
      Call DSLogInfo('Loop Started: ':vFileName, Routine)
      Call DSLogInfo('Control File ID: ':vRecord, Routine)
      READV vStart FROM vFILE, vRecord, 4 Then
         READV vStop FROM vFILE, vRecord, 5 Then
            Call DSLogInfo('Load Records: ':vStart: ' to ' :vStop, Routine)
         End
      End
      ** Set Job Parameters and Run Job.
      vNewFile = 'SingleInvoice':vRecord
      vJobHandle = DSAttachJob(vJobName, DSJ.ERRFATAL)
      ErrCode = DSSetParam(vJobHandle, 'StartID', vStart)
      ErrCode = DSSetParam(vJobHandle, 'StopID', vStop)
      ErrCode = DSSetParam(vJobHandle, 'newfile', vNewFile)
      ErrCode = DSRunJob(vJobHandle, DSJ.RUNNORMAL)
      ErrCode = DSWaitForJob(vJobHandle)
      vRecord = vRecord + 1
   End Else
      ** If record is empty leave loop
      GoTo Label1
   End
Repeat
******** End of Loop
Label1:
Call DSLogInfo('All records have been processed', Routine)
Ans = vStart : ', ' : vStop
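A sketch of how the control file drives the loop (record layout assumed from the READV calls above, which read attributes 4 and 5 of each record as the start and stop IDs):

* Record "1": attribute 4 = 1,     attribute 5 = 50000
*   -> job run with StartID=1,     StopID=50000,  newfile=SingleInvoice1
* Record "2": attribute 4 = 50001, attribute 5 = 100000
*   -> job run with StartID=50001, StopID=100000, newfile=SingleInvoice2
* No record "3" -> the READ fails and the loop exits via Label1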
SetUserStatus: Sets the job's user status to the supplied value.
Function SetUserStatus(Arg1)

Call DSSetUserStatus(Arg1)
Ans = Arg1
SMARTNumberConversion: Converts numbers in the format 1234,567 to the format 1234.57.
Function SMARTNumberConversion(Arg1)

INP = CONVERT(",", ".", Arg1)   ;* Commas to decimal point
WRK = ICONV(INP, "MD33")        ;* Convert to internal, to 3 decimal places
Ans = OCONV(WRK, "MD23")        ;* Convert to external, to 2 decimal places
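A worked example of the three steps (input value invented):

* Arg1 = "1234,567"
* CONVERT(",", ".", Arg1)    ->  "1234.567"   (comma becomes decimal point)
* ICONV("1234.567", "MD33")  ->  1234567      (internal value with 3 implied decimals)
* OCONV(1234567, "MD23")     ->  "1234.57"    (external value rounded to 2 decimals)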
TicketErrorCommon: Required to use the "LogToErrorFile" routine. This stores the variables used by that routine in shared memory.

* FUNCTION TicketErrorCommon(Mod_Run_ID, Ticket_File_ID, Ticket_Sequence, Ticket_Set_Key, Job_Stage_Name, Mod_Root_Path)
*
* Places the current Row Ticket in Common
* Input Parameters : Arg1: Mod_Run_ID = The unique number allocated to a run of a module
*                    Arg2: Ticket_File_ID = The file ID assigned to the source of the current row
*                    Arg3: Ticket_Sequence = The ticket sequence number of the current row
*                    Arg4: Ticket_Set_Key = Identifies a set of rows, e.g. an invoice number to a set of invoice lines
*                    Arg5: Job_Stage_Name = The name of the stage in the job you want recorded in the error log
*                    Arg6: Mod_Root_Path = Root of the module - used for the location of the error hash file
*
* Don't return Ans, but need to keep the compiler happy
Ans = ""
RoutineName = 'ErrorTicketCommon'
Common /TicketErrorCommon/ ModRunID, TicketFileID, TicketSequence, SetKey, JobStageName, ModRootPath
ModRunID = Mod_Run_ID
TicketFileID = Ticket_File_ID
TicketSequence = Ticket_Sequence
SetKey = Ticket_Set_Key
JobStageName = Job_Stage_Name
ModRootPath = Mod_Root_Path
RETURN(Ans)
TVARate:
Function TVARate(Mtt_Base, Mtt_TVA)

BaseFormated = "":(Mtt_Base)
TvaFormated = "":(Mtt_TVA)
If IsNull(BaseFormated) Or BaseFormated = "0" Or BaseFormated = "" Then
   Ans = 0
End Else
   TvaFormated = Ereplace(TvaFormated, ".", "")
   TvaFormated = Ereplace(TvaFormated, ",", "")
   BaseFormated = Ereplace(BaseFormated, ".", "")
   BaseFormated = Ereplace(BaseFormated, ",", "")
   Ans = Ereplace(TvaFormated/BaseFormated, ".", "")
End
TVATest:
Function TVATest(Mtt_TVA, Dlco)

Country = TRIM(Dlco):";"
TestCountry = Count("AT;BE;CY;CZ;DE;DK;EE;ES;FI;GB;GR;HU;IE;IT;LT;LU;LV;MT;NL;PL;PT;SE;SI;SK;", Country)
Begin Case
   Case Mtt_TVA <> 0
      Reply = "B3"
   Case Mtt_TVA = 0 And Dlco = "FR" And TestCountry = 0
      Reply = "A1"
   Case Mtt_TVA = 0 And Dlco <> "FR" And TestCountry = 1
      Reply = "E6"
   Case Mtt_TVA = 0 And Dlco <> "FR" And TestCountry = 0
      Reply = "E7"
   Case @True
      Reply = "Error"
End Case
Ans = Reply
UnTarFile:
Function UnTarFile(Arg1)

DIR = "/interface/dashboard/dashbd_dev_dk_int/Source/"
FNAME = "GLEISND_OC_02_20040607_12455700.csv"
*CMD = "ll -tr ":DIR:"|grep ":FNAME
*CMD = "cmp -s ":DIR:"|grep ":FNAME
CMD = "tar -xvvf ":DIR:FNAME
*---------------------------------
*---syntax= tar -xvvf myfile.tar
*---------------------------------
Call DSExecute("UNIX", CMD, Output, SystemReturnCode)
Ans = Output

UtilityMessageToControllerLog: Write an informational message to the log of the controlling job.
This routine takes a user-defined message and displays it in the job log of the controlling sequence as an informational message. The routine should be used sparingly in production jobs to avoid degrading performance. The return value of the function is always 1.
Function UtilityMessageToControllerLog(Arg1)
* Write an informational message to the log of the controlling job.
*
* This function is mainly intended for development purposes, but can be used
* within a production environment for tracing data values. The user should
* use this function cautiously, as it will cause a decrease in performance
* if called often.
*
$INCLUDE DSINCLUDE JOBCONTROL.H
Equate RoutineName To "UtilityMessageToControllerLog"
InputMsg = Arg1
If IsNull(InputMsg) Then
   InputMsg = " "
End
Call DSLogToController(InputMsg)
Ans = 1
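A minimal job control sketch (the message text is invented; the Deffun makes the catalogued routine callable as a function):

Deffun UtilityMessageToControllerLog(Msg) Calling 'DSU.UtilityMessageToControllerLog'
Dummy = UtilityMessageToControllerLog('Nightly batch started')   ;* message text invented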
UTLPropagateParms: Allows a job to inherit parameter values from job control. The routine lists the parameters of the child job and, for each one, finds the matching parameter in the parent job, gets its value, and sets the parameter in the child job.
Input argument: job handle (set by using DSAttachJob in job control)
Output: If a parameter is not found the routine returns 3, otherwise 0.
Function UTLPropagateParms(Handle)

#include DSINCLUDE JOBCONTROL.H
Equ Me To 'UTLJobRun'
Ans = 0
ParentJobName = DSGetJobInfo(DSJ.ME, DSJ.JOBNAME)
ChildParams = Convert(',', @FM, DSGetJobInfo(Handle, DSJ.PARAMLIST))
ParamCount = Dcount(ChildParams, @FM)
If ParamCount Then
   ParentParams = Convert(',', @FM, DSGetJobInfo(DSJ.ME, DSJ.PARAMLIST))
   Loop
      ThisParam = ChildParams<1>
      Del ChildParams<1>
      *** Find job parameter in parent job and set parameter in child job to value of parent.
      Locate(ThisParam, ParentParams; ParamPos) Then
         ThisValue = DSGetParamInfo(DSJ.ME, ThisParam, DSJ.PARAMVALUE)
         ParamStatus = DSSetParam(Handle, ThisParam, ThisValue)
         Call DSLogInfo("Setting: ":ThisParam:" To: ":ThisValue, "UTLPropagateParms")
      End Else
         *** If the parameter is not found in the parent job:
         ***  - write a warning to the log file.
         ***  - return code changed to 3.
         Call DSLogWarn("Parameter : ":ThisParam:" does not exist in ":ParentJobName, "UTLPropagateParms")
         Ans = 3
      End
   While ChildParams # '' Do Repeat
End
Return(Ans)
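A minimal job control sketch using the routine (the child job name is invented): the parent attaches the child, propagates its own parameter values into it, then runs it.

$INCLUDE DSINCLUDE JOBCONTROL.H
Deffun UTLPropagateParms(Handle) Calling 'DSU.UTLPropagateParms'
hChild = DSAttachJob('FR_Load_Job', DSJ.ERRFATAL)   ;* job name invented
If UTLPropagateParms(hChild) = 0 Then
   ErrCode = DSRunJob(hChild, DSJ.RUNNORMAL)
   ErrCode = DSWaitForJob(hChild)
End
ErrCode = DSDetachJob(hChild)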
UTLRunReceptionJob: Allows generic starting of reception jobs without creating a specific reception processing sequence.
This routine:
- Determines the job to launch (sequence or elementary job)
- Attaches the job
- Propagates parameters using routine UTLPropagateParms
- Runs the job and takes action upon the result (any warning leads to a return code of NOT OK)

Obligatory input parameters are:
- Country_Parm
- Fileset_Name_Type_Parm
- Abort_Msg_Parm
- Module_Run_Parm

Function UTLRunReceptionJob(Country_Parm, Fileset_Name_Type_Parm, Module_Run_Parm, Abort_Msg_Parm)
$INCLUDE DSINCLUDE DSJ_XFUNCS.H
$INCLUDE DSINCLUDE JOBCONTROL.H
EQU Time$$ Lit "Oconv(Time(), 'MTS:'):': '"
Ans = -3
vRecJobNameBase = Country_Parm : "_" : Fileset_Name_Type_Parm : "_Reception"
***************************************************************************************
*** Define job to launch: Sequence or Job (START)                                   ***
***************************************************************************************
L$DefineSeq$START:
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0057\%1 (JOB %2) started", "ReceptionJob":@FM:vRecJobNameBase))
** If Sequential Job exists - start Sequential Job.
vJobSuffix = "_Seq"
vRecJobName = vRecJobNameBase : vJobSuffix
GoTo L$AttachJob$START
L$DefineJob$START:
** If no Sequential Job - start Elementary Job
vJobSuffix = "_Job"
vRecJobName = vRecJobNameBase : vJobSuffix
GoTo L$AttachJob$START
L$ErrNoJob$START:
** If no job found - warn and end job
Msg = DSMakeMsg("No job found to attach: " : vRecJobNameBase : "_Seq or _Job", "")
MsgId = "@ReceptionJob"
GoTo L$ERROR
L$AttachJob$START:
Call DSLogInfo(DSMakeMsg("Checking presence of " : vRecJobName : " for " : Module_Run_Parm, ""), "")
jbRecepJob = vRecJobName
hRecepJob = DSAttachJob(jbRecepJob, DSJ.ERRNONE)
If (Not(hRecepJob)) Then
   AttachErrorMsg$ = DSGetLastErrorMsg()
   If AttachErrorMsg$ = "(DSOpenJob) Cannot find job " : vRecJobName Then
      If vJobSuffix = "_Seq" Then
         GoTo L$DefineJob$START
      End Else
         GoTo L$ErrNoJob$START
      End
   End
   Msg = DSMakeMsg("DSTAGE_JSG_M_0001\Error calling DSAttachJob(%1)%2", jbRecepJob:@FM:AttachErrorMsg$)
   MsgId = "@ReceptionJob"
   GoTo L$ERROR
End
If hRecepJob = 2 Then
   GoTo L$RecepJobPrepare$START
End
***************************************************************************************
*** Setup, Run and Wait for Reception Job (START)                                   ***
***************************************************************************************
L$RecepJobPrepare$START:
*** Activity "ReceptionJob": Setup, Run and Wait for job
hRecepJob = DSPrepareJob(hRecepJob)
If (Not(hRecepJob)) Then
   Msg = DSMakeMsg("DSTAGE_JSG_M_0012\Error calling DSPrepareJob(%1)%2", jbRecepJob:@FM:DSGetLastErrorMsg())
   MsgId = "@ReceptionJob"
   GoTo L$ERROR
End
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0057\%1 (JOB %2) started", "ReceptionJob":@FM:vRecJobName))
GoTo L$PropagateParms$START
L$PropagateParms$START:
*** Activity "PropagateParms": Propagating parameters from parent job to child job using a separate routine.
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0058\%1 (ROUTINE %2) started", "PropagateParms":@FM:"DSU.UTLPropagateParms"))
RtnOk = DSCheckRoutine("DSU.UTLPropagateParms")
If (Not(RtnOk)) Then
   Msg = DSMakeMsg("DSTAGE_JSG_M_0005\BASIC routine is not cataloged: %1", "DSU.UTLPropagateParms")
   MsgId = "@PropagateParms"
   GoTo L$ERROR
End
Call 'DSU.UTLPropagateParms'(rPropagateParms, hRecepJob)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0064\%1 finished, reply=%2", "PropagateParms":@FM:rPropagateParms))
IdAbortRact%%Result1%%1 = rPropagateParms
IdAbortRact%%Name%%3 = "DSU.UTLPropagateParms"
*** Checking result of routine. If <> 0 then abort processing.
If (rPropagateParms <> 0) Then GoTo L$ABORT
GoTo L$RecepJobRun$START
L$RecepJobRun$START:
ErrCode = DSRunJob(hRecepJob, DSJ.RUNNORMAL)
If (ErrCode <> DSJE.NOERROR) Then
   Msg = DSMakeMsg("DSTAGE_JSG_M_0003\Error calling DSRunJob(%1), code=%2[E]", jbRecepJob:@FM:ErrCode)
   MsgId = "@ReceptionJob"
   GoTo L$ERROR
End
ErrCode = DSWaitForJob(hRecepJob)
GoTo L$RecepJob$FINISHED
***************************************************************************************
*** Setup, Run and Wait for Reception Job (END)                                     ***
***************************************************************************************
***************************************************************************************
*** Verification of result of Reception Job (START)                                ***
***************************************************************************************
L$RecepJob$FINISHED:
jobRecepJobStatus = DSGetJobInfo(hRecepJob, DSJ.JOBSTATUS)
jobRecepJobUserstatus = DSGetJobInfo(hRecepJob, DSJ.USERSTATUS)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0063\%1 finished, status=%2[E]", "ReceptionJob":@FM:jobRecepJobStatus))
IdRecepJob%%Result2%%5 = jobRecepJobUserstatus
IdRecepJob%%Result1%%6 = jobRecepJobStatus
IdRecepJob%%Name%%7 = vRecJobName
Dummy = DSDetachJob(hRecepJob)
bRecepJobelse = @True
If (jobRecepJobStatus = DSJS.RUNOK) Then GoTo L$SeqSuccess$START; bRecepJobelse = @False
If bRecepJobelse Then GoTo L$SeqFail$START
***************************************************************************************
*** Definition of actions to take on failure or success (START)                    ***
***************************************************************************************
L$SeqFail$START:
*** Sequencer "Fail": wait until inputs ready
Call DSLogInfo(DSMakeMsg("Routine SEQUENCER Control End Sequence Reports a FAIL on Reception Job", ""), "@Fail")
GoTo L$ABORT
L$SeqSuccess$START:
*** Sequencer "Success": wait until inputs ready
Call DSLogInfo(DSMakeMsg("Routine SEQUENCER Control End Sequence Reports a SUCCESS on Reception Job", ""), "@Success")
GoTo L$FINISH
***************************************************************************************
L$ERROR:
Call DSLogWarn(DSMakeMsg("DSTAGE_JSG_M_0009\Controller problem: %1", Msg), MsgId)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0052\Exception raised: %1", MsgId:", ":Msg))
bAbandoning = @True
GoTo L$FINISH
L$ABORT:
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0056\Sequence failed", ""))
Call DSLogInfo(summary$, "@UTLRunReceptionJob")
Call DSLogWarn("Unrecoverable errors in routine UTLRunReceptionJob, see entries above", "@UTLRunReceptionJob")
Ans = -3
GoTo L$EXIT
**************************************************
L$FINISH:
If bAbandoning Then GoTo L$ABORT
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0054\Sequence finished OK", ""))
Call DSLogInfo(summary$, "@UTLRunReceptionJob")
Ans = 0
ValidateField: Checks the length and data type of a value; also checks that the value is a valid date if the type is Date. Any errors are logged to the error hash file.
Field_Value: The value from the field being validated
Field_Name: The name of the field being validated
Length: The maximum length of the field being validated
Data_Type: The data type expected - possible values (Numeric, Alpha, Date, Char)
Date_Format: If Data_Type is 'Date' then the format must be specified. The syntax for this is the same as for the Iconv function, i.e. "D/YMD[4,2,2]" for a date in the format 2004/12/23.

$INCLUDE DSINCLUDE JOBCONTROL.H
vRoutineName = 'ValidateField'
DEFFUN LogToErrorFile(Table, Field_Name, Check_Value, Error_Number, Text_1, Text_2, Text_3, Message) Calling "DSU.LogToErrorFile"
Common /HashLookup/ FileHandles(100), FilesOpened
Common /TicketErrorCommon/ ModRunID, TicketFileID, TicketSequence, TicketSetKey, JobStageName, ModRootPath
Ans = "START"
vData_Type = Downcase(Data_Type)
BEGIN CASE
******** Check the arguments
* Value being checked is null
CASE IsNull(Field_Value)
   Call DSTransformError("The value being checked is Null - Field_Name = " : Field_Name, vRoutineName)
* Argument for the data type is not valid
CASE vData_Type <> "char" AND vData_Type <> "alpha" AND vData_Type <> "numeric" AND vData_Type <> "date"
   Call DSTransformError("The value " : Data_Type : " is not a valid data type for routine: ", vRoutineName)
* Length is not a number
CASE Not(Num(Length))
   Call DSTransformError("The length supplied is not a number : Field Checked " : Field_Name, vRoutineName)
CASE vData_Type = "date" And (Date_Format = "" OR IsNull(Date_Format))
END CASE
*********
******** Check The Values
*** Check the data type of the supplied value ***
If vData_Type = 'numeric' Then
   If Num(Field_Value) Then
      vErr = 'OK'
   End Else
      vErr = LogToErrorFile("No Table", Field_Name, Field_Value, '10002', 'Text1', 'Text2', 'Text3', 'Value provided is not numeric')
      Ans = ' [Not Numeric]'
   End
End Else
   If vData_Type = 'alpha' Then
      If Alpha(Field_Value) Then
         vErr = 'OK'
      End Else
         vErr = LogToErrorFile("No Table", Field_Name, Field_Value, '10003', 'Text1', 'Text2', 'Text3', 'Value provided is not alpha')
         Ans = ' [Not Alpha]'
      End
   End Else
      If vData_Type = 'date' Then
         vErr = Iconv(Field_Value, Date_Format)
         vErr = Status()
         If vErr <> 0 Then
            vErr = LogToErrorFile("No Table", Field_Name, Field_Value, '10004', 'Text1', 'Text2', 'Text3', 'Value provided is not a valid date for mask ':Date_Format)
            Ans = ' [Invalid Date]'
         End
      End Else
      End
   End
End
*** Check the length of the supplied value ***
If Len(Field_Value) <= Length Then
   vErr = 'OK'
End Else
   vErr = LogToErrorFile("No Table", Field_Name, Field_Value, '10001', 'Text1', 'Text2', 'Text3', 'Value provided is not the correct length')
   Ans = Ans : ' [Length Error]'
End
Ans = Ans
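Illustrative calls (field names and values invented). Note that Ans starts as "START" and is overwritten by a type-error tag, with any length error then appended:

* ValidateField("2004/12/23", "INV_DATE", 10, "Date", "D/YMD[4,2,2]")  ->  "START"           (all checks pass)
* ValidateField("12ABC", "QTY", 5, "Numeric", "")                      ->  " [Not Numeric]"  (error also logged via LogToErrorFile)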
VatCheckSG:
Function VatCheckSG(Arg1)

String = Arg1
Slen = Len(String)
Scheck = 0
CharCheck = 0
For Scheck = 1 To Slen
   Schar = Substrings(String, Scheck, 1)
   If NUM(Schar) <> 1 Then
      CharCheck = CharCheck + 1
   End
Next
Ans = CharCheck

WriteParmFile:
Function WriteParmFile(Arg1, Arg2, Arg3, Arg4)
* Arg1: File Path
* Arg2: File Name
* Arg3: Parameter Name
* Arg4: Parameter Value
vParamFile = Arg1 : "/" : Arg2
vParamName = Arg3
vParamValue = Arg4
If Arg4 = -256 Then
   vParamValue = ""
End
OpenSeq vParamFile To FileVar Else
   Call DSLogWarn("Cannot open ":vParamFile, "Cannot Open ParamFile")
End
Loop
   ReadSeq Dummy From FileVar Else Exit ;* at end-of-file
Repeat
MyLine = vParamName : "=" : vParamValue
* Write New Error File
WriteSeqF MyLine To FileVar Else
   Call DSLogFatal("Cannot write to ": FileVar, "Cannot write to file")
End
WeofSeq FileVar
CloseSeq FileVar
Ans = MyLine
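For example (path, file and parameter names invented, and assuming the routine is catalogued as DSU.WriteParmFile), the call below reads to end-of-file and then appends the line TargetDB=DEV to /data/parms/myjob.parms:

Deffun WriteParmFile(A1, A2, A3, A4) Calling 'DSU.WriteParmFile'
Reply = WriteParmFile('/data/parms', 'myjob.parms', 'TargetDB', 'DEV')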
WriteSeg: Determines whether a segment should be written.
* FUNCTION WriteSeg(Segment_Num, Segment_Parm)
*
* Input Parameters : Arg1: Segment_Num
*                    Arg2: Segment_Parm
*
* Return Values: If the segment should be written, the return value is "Y"
*                If not, the return value is "N"
*
*
RoutineName = 'WriteSeg'
* Determine if this segment should be output
Write_Ind = Field(Segment_Parm,"|",Segment_Num)
If Write_Ind = "Y" Then
   Ans = "Y"
End Else
   Ans = "N"
End
SET_JOB_PARAMETERS_ROUTINE
InputArg ............ Argument
ErrorCode ........... Argument
Routine name: SetDSParamsFromFile

$INCLUDE DSINCLUDE DSD_STAGE.H
$INCLUDE DSINCLUDE JOBCONTROL.H
$INCLUDE DSINCLUDE DSD.H
$INCLUDE DSINCLUDE DSD_RTSTATUS.H
Equ SetParams To 'SetDSParamsFromFile'
ErrorCode = 0 ;* set this to non-zero to stop the stage/job
JobName = Field(STAGECOM.NAME,'.',1,2)
ParamList = STAGECOM.JOB.CONFIG
If ParamList = '' Then
   Call DSLogWarn('Parameters may not be externally derived if the job has no parameters defined.', SetParams)
   Return
End
Call DSLogInfo("SetDSParamsFromFile inputarg >" : InputArg : "<", SetParams)
ArgList = Trims(Convert(',',@FM,InputArg))
ParamDir = ArgList<1>
If ParamDir = '' Then
   ParamDir = '.'
End
ParamFile = ArgList<2>
If ParamFile = '' Then
   ParamFile = JobName
End
If System(91) Then
   Delim = '\'
End Else
   Delim = '/'
End
ParamPath = ParamDir:Delim:ParamFile
Call DSLogInfo('Setting Job Parameters from external source ':ParamPath, SetParams)
Call DSLogInfo(JobName:' ':ParamList, SetParams)
OpenSeq ParamPath To ParamFileVar On Error
   ErrorCode = 1
   Call DSLogFatal('File open error on ':ParamPath:'. Status = ':Status(), SetParams)
End Else
   Call DSLogWarn('File ':ParamPath:' not found - using default parameters.', SetParams)
   Return
End
StatusFileName = FileInfo(DSRTCOM.RTSTATUS.FVAR, 1)
Readvu LockItem From DSRTCOM.RTSTATUS.FVAR, JobName, 1 On Error
   Call DSLogFatal('File read error for ':JobName:' on ':StatusFileName:'. Status = ':Status(), SetParams)
   ErrorCode = 1
   Return
End Else
   Call DSLogFatal('Failed to read ':JobName:' record from ':StatusFileName, SetParams)
   ErrorCode = 2
   Return
End
StatusId = JobName:'.':STAGECOM.WAVE.NUM
Readv ParamValues From DSRTCOM.RTSTATUS.FVAR, StatusId, JOB.PARAM.VALUES On Error
   Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
   ErrorCode = 1
   Call DSLogFatal('File read error for ':StatusId:' on ':StatusFileName:'. Status = ':Status(), SetParams)
   Return
End Else
   Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
   ErrorCode = 2
   Call DSLogFatal('Failed to read ':StatusId:' record from ':StatusFileName, SetParams)
   Return
End
Loop
   ReadSeq ParamData From ParamFileVar On Error
      Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
      ErrorCode = 4
      Call DSLogFatal('File read error on ':ParamPath:'. Status = ':Status(), SetParams)
      Return
   End Else
      Exit
   End
   Convert '=' To @FM In ParamData
   ParamName = Trim(ParamData<1>)
   Del ParamData<1>
   ParamValue = Convert(@FM,'=',TrimB(ParamData))
   Locate(ParamName, ParamList, 1; ParamPos) Then
      If Index(UpCase(ParamName),'PASSWORD',1) = 0 Then
         Call DSLogInfo('Parameter "':ParamName:'" set to "':ParamValue:'"', SetParams)
      End Else
         Call DSLogInfo('Parameter "':ParamName:'" set but not displayed on log', SetParams)
      End
   End Else
      Call DSLogWarn('Parameter ':ParamName:' does not exist in Job ':JobName, SetParams)
      Continue
   End
   ParamValues<1,ParamPos> = ParamValue
Repeat
Writev ParamValues On DSRTCOM.RTSTATUS.FVAR, StatusId, JOB.PARAM.VALUES On Error
   Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
   ErrorCode = 5
   Call DSLogFatal('File write error for ':StatusId:' on ':StatusFileName:'. Status = ':Status(), SetParams)
   Return
End Else
   Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
   ErrorCode = 6
   Call DSLogFatal('Unable to write ':StatusId:' record on ':StatusFileName:'. Status = ':Status(), SetParams)
   Return
End
Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
STAGECOM.JOB.STATUS = ParamValues
Tokens were replaced below as follows:
* IdV0S0%%Result2%%1 <= FR_PARIS_End_to_End_Processing_SAct.$UserStatus * IdV0S0%%Result1%%2 <= FR_PARIS_End_to_End_Processing_SAct.$JobStatus * IdV0S0%%Name%%3 <= FR_PARIS_End_to_End_Processing_SAct.$JobName * IdV0S2%%Result1%%4 <= Set_Job_Parameters_Routine.$ReturnValue * IdV0S2%%Name%%6 <= Set_Job_Parameters_Routine. $RoutineName * IdV0S57%%Result2%%8 <= FR_PARIS_Control_Start_Processing_SAct. $UserStatus * IdV0S57%%Result1%%9 <= FR_PARIS_Control_Start_Processing_SAct.$JobStatus * IdV0S57%%Name%%10 <= FR_PARIS_Control_Start_Processing_SAct.$JobName * IdV0S61%%Result2%%11 <= FR_PARIS_Control_End_Processing_SAct.$UserStatus * IdV0S61%%Result1%%12 <= FR_PARIS_Control_End_Processing_SAct.$JobStatus * IdV0S61%%Name%%13 <= FR_PARIS_Control_End_Processing_SAct.$JobName * IdV0S72%%Result1%%14 <= Abort_RAct.$ReturnValue * IdV0S72%%Name%%16 <= Abort_RAct.$RoutineName * *** [Generated at 2005-07-07 09:41:15 - 7.1.0.8] $INCLUDE DSINCLUDE DSJ_XFUNCS.H EQU Time$$ Lit "Oconv(Time(), 'MTS:'):': '" **************************************** * Graphical Sequencer generated code for Job FR_PARIS_Control_Seq **************************************** seq$V0S10$count = 0 seq$V0S43$count = 0 seq$V0S44$count = 0 handle$list = "" id$list = "" abort$list = "" b$Abandoning = @False b$AllStarted = @False summary$restarting = @False *** Sequence start point summary$ = DSMakeMsg("DSTAGE_JSG_M_0048\Summary of sequence run", "") If summary$restarting Then summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0049\Sequence restarted after failure", "")) End Else summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0051\Sequence started", "")) End GoSub L$V0S2$START b$AllStarted = @True GoTo L$WAITFORJOB
************************************************* * L$V0S0$START: *** Activity "FR_PARIS_End_to_End_Processing_SAct": Initialize job summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0057\%1 (JOB %2) started", "FR_PARIS_End_to_End_Processing_SAct":@FM:"FR_PAR IS_End_to_End_Processing_Seq")) Call DSLogInfo(DSMakeMsg("SEQUENCE - START End_to_End_Processing_Seq", ""), "@FR_PARIS_End_to_End_Processing_SAct") jb$V0S0 = "FR_PARIS_End_to_End_Processing_Seq":'.': (Module_Run_Parm) h$V0S0 = DSAttachJob(jb$V0S0, DSJ.ERRNONE) If (Not(h$V0S0)) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0001\Error calling DSAttachJob(%1)%2", jb$V0S0:@FM:DSGetLastErrorMsg()) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End h$V0S0 = DSPrepareJob(h$V0S0) If (Not(h$V0S0)) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0012\Error calling DSPrepareJob(%1)%2", jb$V0S0:@FM:DSGetLastErrorMsg()) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End L$V0S0$PREPARED: p$V0S0$1 = (Project_Parm) err$code = DSSetParam(h$V0S0, "Project_Parm", p$V0S0$1) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Project_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$2 = (Module_Parm) err$code = DSSetParam(h$V0S0, "Module_Parm", p$V0S0$2) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Module_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$3 = (Run_Parm)
err$code = DSSetParam(h$V0S0, "Run_Parm", p$V0S0$3) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Run_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$4 = (Module_Run_Parm) err$code = DSSetParam(h$V0S0, "Module_Run_Parm", p$V0S0$4) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Module_Run_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$5 = (Data_Object_Parm) err$code = DSSetParam(h$V0S0, "Data_Object_Parm", p$V0S0$5) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Data_Object_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$6 = (Interface_Parm) err$code = DSSetParam(h$V0S0, "Interface_Parm", p$V0S0$6) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Interface_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$7 = (Interface_Root_Path_Parm) err$code = DSSetParam(h$V0S0, "Interface_Root_Path_Parm", p$V0S0$7) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Interface_Root_Path_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$8 = (Generic_Root_Path_Parm) err$code = DSSetParam(h$V0S0, "Generic_Root_Path_Parm", p$V0S0$8) If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Generic_Root_Path_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$9 = (Business_Process_Parm) err$code = DSSetParam(h$V0S0, "Business_Process_Parm", p$V0S0$9) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Business_Process_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$10 = (Company_Parm) err$code = DSSetParam(h$V0S0, "Company_Parm", p$V0S0$10) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Company_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$11 = (Country_Parm) err$code = DSSetParam(h$V0S0, "Country_Parm", p$V0S0$11) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Country_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$12 = (SAP_Client_Parm) err$code = DSSetParam(h$V0S0, "SAP_Client_Parm", p$V0S0$12) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "SAP_Client_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$13 = (Source_System_Parm) err$code = DSSetParam(h$V0S0, "Source_System_Parm", p$V0S0$13) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Source_System_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$14 = (Fileset_Type_Name_Parm) err$code = DSSetParam(h$V0S0, "Fileset_Name_Type_Parm", p$V0S0$14) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Fileset_Name_Type_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$15 = (Request_Type_Parm) err$code = DSSetParam(h$V0S0, "Request_Type_Parm", p$V0S0$15) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Request_Type_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$16 = (Timestamp_Parm) err$code = DSSetParam(h$V0S0, "Timestamp_Parm", p$V0S0$16) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Timestamp_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$17 = (Fileset_Parm) err$code = DSSetParam(h$V0S0, "Fileset_Parm", p$V0S0$17) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Fileset_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$18 = (File_Parm) err$code = DSSetParam(h$V0S0, "File_Parm", p$V0S0$18) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "File_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End
p$V0S0$19 = (File_Name_Parm) err$code = DSSetParam(h$V0S0, "File_Name_Parm", p$V0S0$19) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "File_Name_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$20 = (ISDB_Database_Parm) err$code = DSSetParam(h$V0S0, "ISDB_Database_Parm", p$V0S0$20) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ISDB_Database_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$21 = (ISDB_User_Parm) err$code = DSSetParam(h$V0S0, "ISDB_User_Parm", p$V0S0$21) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ISDB_User_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$22 = (ISDB_Password_Parm) err$code = DSSetParam(h$V0S0, "ISDB_Password_Parm", p$V0S0$22) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ISDB_Password_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$23 = (CSDB_Database_Parm) err$code = DSSetParam(h$V0S0, "CSDB_Database_Parm", p$V0S0$23) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "CSDB_Database_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$24 = (CSDB_User_Parm) err$code = DSSetParam(h$V0S0, "CSDB_User_Parm", p$V0S0$24) If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "CSDB_User_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$25 = (CSDB_Password_Parm) err$code = DSSetParam(h$V0S0, "CSDB_Password_Parm", p$V0S0$25) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "CSDB_Password_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$26 = (Retest_Parm) err$code = DSSetParam(h$V0S0, "Retest_Parm", p$V0S0$26) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Retest_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$27 = (Versions_Keep_Cnt_Parm) err$code = DSSetParam(h$V0S0, "Versions_Keep_Cnt_Parm", p$V0S0$27) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Versions_Keep_Cnt_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$28 = (Parm_File_Comma_Parm) err$code = DSSetParam(h$V0S0, "Parm_File_Comma_Parm", p$V0S0$28) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Parm_File_Comma_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$29 = (FTP_Server) err$code = DSSetParam(h$V0S0, "FTP_Server", p$V0S0$29) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Server":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$30 = (FTP_Target_Path) err$code = DSSetParam(h$V0S0, "FTP_Target_Path", p$V0S0$30) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Target_Path":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$31 = (FTP_Port) err$code = DSSetParam(h$V0S0, "FTP_Port", p$V0S0$31) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Port":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$32 = (FTP_User) err$code = DSSetParam(h$V0S0, "FTP_User", p$V0S0$32) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_User":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$33 = (FTP_Password) err$code = DSSetParam(h$V0S0, "FTP_Password", p$V0S0$33) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Password":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$34 = (Abort_Msg_Parm) err$code = DSSetParam(h$V0S0, "Abort_Msg_Parm", p$V0S0$34) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Abort_Msg_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End
p$V0S0$35 = (Load_ISDB_Rejects_Parm) err$code = DSSetParam(h$V0S0, "Load_ISDB_Rejects_Parm", p$V0S0$35) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Load_ISDB_Rejects_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$36 = (Load_ISDB_Source_Parm) err$code = DSSetParam(h$V0S0, "Load_ISDB_Source_Parm", p$V0S0$36) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Load_ISDB_Source_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$37 = (Source_Delimiter_Parm) err$code = DSSetParam(h$V0S0, "Source_Delimiter_Parm", p$V0S0$37) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Source_Delimiter_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$38 = (KTGRM_Header_Default_Parm) err$code = DSSetParam(h$V0S0, "KTGRM_Header_Default_Parm", p$V0S0$38) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KTGRM_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$39 = (VTWEG_Header_Default_Parm) err$code = DSSetParam(h$V0S0, "VTWEG_Header_Default_Parm", p$V0S0$39) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VTWEG_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$40 = (SPART_Header_Default_Parm) err$code = DSSetParam(h$V0S0, "SPART_Header_Default_Parm", p$V0S0$40) If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "SPART_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$41 = (WAERS_Local_Default_Parm) err$code = DSSetParam(h$V0S0, "WAERS_Local_Default_Parm", p$V0S0$41) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "WAERS_Local_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$42 = (WAERS_Foreign_Default_Parm) err$code = DSSetParam(h$V0S0, "WAERS_Foreign_Default_Parm", p$V0S0$42) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "WAERS_Foreign_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$43 = (KURSK_Default_Parm) err$code = DSSetParam(h$V0S0, "KURSK_Default_Parm", p$V0S0$43) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KURSK_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$44 = (AUART_Header_Default_Parm) err$code = DSSetParam(h$V0S0, "AUART_Header_Default_Parm", p$V0S0$44) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "AUART_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$45 = (ZLSCH_Header_Default_Parm) err$code = DSSetParam(h$V0S0, "ZLSCH_Header_Default_Parm", p$V0S0$45) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ZLSCH_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$46 = (ZTERM_Header_Default_Parm) err$code = DSSetParam(h$V0S0, "ZTERM_Header_Default_Parm", p$V0S0$46) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ZTERM_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$47 = (WERKS_Header_Default_Parm) err$code = DSSetParam(h$V0S0, "WERKS_Header_Default_Parm", p$V0S0$47) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "WERKS_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$48 = (VKORG_Header_Default_Parm) err$code = DSSetParam(h$V0S0, "VKORG_Header_Default_Parm", p$V0S0$48) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VKORG_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$49 = (TAXM1_Line_Default_Parm) err$code = DSSetParam(h$V0S0, "TAXM1_Line_Default_Parm", p$V0S0$49) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "TAXM1_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$50 = (VRKME_Line_Default_Parm) err$code = DSSetParam(h$V0S0, "VRKME_Line_Default_Parm", p$V0S0$50) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VRKME_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End
p$V0S0$51 = (VKGRP_Line_Default_Parm) err$code = DSSetParam(h$V0S0, "VKGRP_Line_Default_Parm", p$V0S0$51) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VKGRP_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$52 = (VKBUR_Line_Default_Parm) err$code = DSSetParam(h$V0S0, "VKBUR_Line_Default_Parm", p$V0S0$52) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VKBUR_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$53 = (LGORT_Line_Default_Parm) err$code = DSSetParam(h$V0S0, "LGORT_Line_Default_Parm", p$V0S0$53) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "LGORT_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$54 = (KOSTL_Line_Default_Parm) err$code = DSSetParam(h$V0S0, "KOSTL_Line_Default_Parm", p$V0S0$54) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KOSTL_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$55 = (ZZTAXCD_Default_Parm) err$code = DSSetParam(h$V0S0, "ZZTAXCD_Default_Parm", p$V0S0$55) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ZZTAXCD_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$56 = (KBETR_Revenue_Default_Parm) err$code = DSSetParam(h$V0S0, "KBETR_Revenue_Default_Parm", p$V0S0$56) If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KBETR_Revenue_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$57 = (KSCHL_Revenue_Default_Parm) err$code = DSSetParam(h$V0S0, "KSCHL_Revenue_Default_Parm", p$V0S0$57) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KSCHL_Revenue_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$58 = (KSCHL_Surcharge_Default_Parm) err$code = DSSetParam(h$V0S0, "KSCHL_Surcharge_Default_Parm", p$V0S0$58) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KSCHL_Surcharge_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$59 = (Reception_Job_Name_Parm) err$code = DSSetParam(h$V0S0, "Reception_Job_Name_Parm", p$V0S0$59) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Reception_Job_Name_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$60 = (Invocation_Parm) err$code = DSSetParam(h$V0S0, "Invocation_Parm", p$V0S0$60) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Invocation_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$61 = (GUNZIP_Path_Parm) err$code = DSSetParam(h$V0S0, "GUNZIP_Path_Parm", p$V0S0$61) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "GUNZIP_Path_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$62 = (Run_Error_Mgmt_Parm) err$code = DSSetParam(h$V0S0, "Run_Error_Mgmt_Parm", p$V0S0$62) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Run_Error_Mgmt_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$63 = "Y" err$code = DSSetParam(h$V0S0, "Run_Move_and_Tidy_Parm", p$V0S0$63) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Run_Move_and_Tidy_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$64 = (Update_Status_In_DB_Parm) err$code = DSSetParam(h$V0S0, "Update_Status_In_DB_Parm", p$V0S0$64) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Update_Status_In_DB_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End p$V0S0$65 = (Run_Reconciliation_Parm) err$code = DSSetParam(h$V0S0, "Run_Reconciliation_Parm", p$V0S0$65) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Run_Reconciliation_Parm":@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End err$code = DSRunJob(h$V0S0, DSJ.RUNNORMAL) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0003\Error calling DSRunJob(%1), code=%2[E]", jb$V0S0:@FM:err$code) msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR End handle$list<-1> = h$V0S0 id$list<-1> = "V0S0"
Return ************************************************* * L$V0S0$FINISHED: job$V0S0$status = DSGetJobInfo(h$V0S0, DSJ.JOBSTATUS) job$V0S0$userstatus = DSGetJobInfo(h$V0S0, DSJ.USERSTATUS) summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0063\%1 finished, status= %2[E]", "FR_PARIS_End_to_End_Processing_SAct":@FM:job$V0S 0$status)) IdV0S0%%Result2%%1 = job$V0S0$userstatus IdV0S0%%Result1%%2 = job$V0S0$status IdV0S0%%Name%%3 = "FR_PARIS_End_to_End_Processing_Seq" rpt$V0S0 = DSMakeJobReport(h$V0S0, 1, "CRLF") dummy$ = DSDetachJob(h$V0S0) If b$Abandoning Then GoTo L$WAITFORJOB b$V0S0else = @True If (job$V0S0$status = DSJS.RUNOK) Then GoSub L$V0S61$START; b$V0S0else = @False If (job$V0S0$status = DSJS.RUNOK) Then GoSub L$V0S43$START; b$V0S0else = @False If b$V0S0else Then GoSub L$V0S44$START GoTo L$WAITFORJOB ************************************************* * L$V0S10$START: *** Sequencer "Run_Jobs_Seq": wait until inputs ready seq$V0S10$count += 1 If seq$V0S10$count < 1 Then Return GoSub L$V0S57$START Return ************************************************* * L$V0S2$START: *** Activity "Set_Job_Parameters_Routine": Call routine summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0058\%1 (ROUTINE %2) started", "Set_Job_Parameters_Routine":@FM:"DSU.SetDSParams FromFile")) Call DSLogInfo(DSMakeMsg("ROUTINE Set_Job_Parameters_Routine", ""), "@Set_Job_Parameters_Routine") rtn$ok = DSCheckRoutine("DSU.SetDSParamsFromFile") If (Not(rtn$ok)) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0005\BASIC routine is not cataloged: %1", "DSU.SetDSParamsFromFile") msg$id = "@Set_Job_Parameters_Routine"; GoTo L$ERROR End p$V0S2$1 = (Parm_File_Comma_Parm)
Call 'DSU.SetDSParamsFromFile'(p$V0S2$1, r$V0S2) summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0064\%1 finished, reply= %2", "Set_Job_Parameters_Routine":@FM:r$V0S2)) IdV0S2%%Result1%%4 = r$V0S2 IdV0S2%%Name%%6 = "DSU.SetDSParamsFromFile" If (r$V0S2 = 0) Then GoSub L$V0S10$START Return ************************************************* * L$V0S43$START: *** Sequencer "Success": wait until inputs ready seq$V0S43$count += 1 If seq$V0S43$count < 3 Then Return Call DSLogInfo(DSMakeMsg("SEQUENCER - Control Sequence Reports a SUCCESS on all Stages", ""), "@Success") Return ************************************************* * L$V0S44$START: *** Sequencer "Fail": wait until inputs ready If seq$V0S44$count > 0 Then Return seq$V0S44$count += 1 Call DSLogInfo(DSMakeMsg("SEQUENCER - Control Sequence Reports at least one Stage FAILED", ""), "@Fail") GoSub L$V0S72$START Return ************************************************* * L$V0S57$START: *** Activity "FR_PARIS_Control_Start_Processing_SAct": Initialize job summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0057\%1 (JOB %2) started", "FR_PARIS_Control_Start_Processing_SAct":@FM:"FR_ PARIS_Control_Start_Processing_Seq")) Call DSLogInfo(DSMakeMsg("SEQUENCE - START Control_Start_Processes_Seq", ""), "@FR_PARIS_Control_Start_Processing_SAct") jb$V0S57 = "FR_PARIS_Control_Start_Processing_Seq":'.': (Module_Run_Parm) h$V0S57 = DSAttachJob(jb$V0S57, DSJ.ERRNONE) If (Not(h$V0S57)) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0001\Error calling DSAttachJob(%1)%2", jb$V0S57:@FM:DSGetLastErrorMsg()) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End h$V0S57 = DSPrepareJob(h$V0S57) If (Not(h$V0S57)) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0012\Error calling DSPrepareJob(%1)%2", jb$V0S57:@FM:DSGetLastErrorMsg()) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End L$V0S57$PREPARED: p$V0S57$1 = (Project_Parm) err$code = DSSetParam(h$V0S57, "Project_Parm", p$V0S57$1) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Project_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$2 = (Module_Parm) err$code = DSSetParam(h$V0S57, "Module_Parm", p$V0S57$2) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Module_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$3 = (Run_Parm) err$code = DSSetParam(h$V0S57, "Run_Parm", p$V0S57$3) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Run_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$4 = (Module_Run_Parm) err$code = DSSetParam(h$V0S57, "Module_Run_Parm", p$V0S57$4) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Module_Run_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$5 = (Data_Object_Parm) err$code = DSSetParam(h$V0S57, "Data_Object_Parm", p$V0S57$5) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Data_Object_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$6 = (Interface_Parm) err$code = DSSetParam(h$V0S57, "Interface_Parm", p$V0S57$6) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Interface_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$7 = (Interface_Root_Path_Parm) err$code = DSSetParam(h$V0S57, "Interface_Root_Path_Parm", p$V0S57$7) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Interface_Root_Path_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$8 = (Generic_Root_Path_Parm) err$code = DSSetParam(h$V0S57, "Generic_Root_Path_Parm", p$V0S57$8) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Generic_Root_Path_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$9 = (Business_Process_Parm) err$code = DSSetParam(h$V0S57, "Business_Process_Parm", p$V0S57$9) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Business_Process_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$10 = (Company_Parm) err$code = DSSetParam(h$V0S57, "Company_Parm", p$V0S57$10) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Company_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End
p$V0S57$11 = (Country_Parm)
err$code = DSSetParam(h$V0S57, "Country_Parm", p$V0S57$11)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Country_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$12 = (SAP_Client_Parm)
err$code = DSSetParam(h$V0S57, "SAP_Client_Parm", p$V0S57$12)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "SAP_Client_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$13 = (Source_System_Parm)
err$code = DSSetParam(h$V0S57, "Source_System_Parm", p$V0S57$13)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Source_System_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$14 = (Fileset_Type_Name_Parm)
err$code = DSSetParam(h$V0S57, "Fileset_Name_Type_Parm", p$V0S57$14)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Fileset_Name_Type_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$15 = (Request_Type_Parm)
err$code = DSSetParam(h$V0S57, "Request_Type_Parm", p$V0S57$15)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Request_Type_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$16 = (Timestamp_Parm)
err$code = DSSetParam(h$V0S57, "Timestamp_Parm", p$V0S57$16)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Timestamp_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$17 = (Fileset_Parm) err$code = DSSetParam(h$V0S57, "Fileset_Parm", p$V0S57$17) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Fileset_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$18 = (File_Parm) err$code = DSSetParam(h$V0S57, "File_Parm", p$V0S57$18) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "File_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$19 = (File_Name_Parm) err$code = DSSetParam(h$V0S57, "File_Name_Parm", p$V0S57$19) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "File_Name_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$20 = (ISDB_Database_Parm) err$code = DSSetParam(h$V0S57, "ISDB_Database_Parm", p$V0S57$20) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ISDB_Database_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$21 = (ISDB_User_Parm) err$code = DSSetParam(h$V0S57, "ISDB_User_Parm", p$V0S57$21) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ISDB_User_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$22 = (ISDB_Password_Parm) err$code = DSSetParam(h$V0S57, "ISDB_Password_Parm", p$V0S57$22) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ISDB_Password_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$23 = (CSDB_Database_Parm) err$code = DSSetParam(h$V0S57, "CSDB_Database_Parm", p$V0S57$23) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "CSDB_Database_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$24 = (CSDB_User_Parm) err$code = DSSetParam(h$V0S57, "CSDB_User_Parm", p$V0S57$24) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "CSDB_User_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$25 = (CSDB_Password_Parm) err$code = DSSetParam(h$V0S57, "CSDB_Password_Parm", p$V0S57$25) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "CSDB_Password_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$26 = (Retest_Parm) err$code = DSSetParam(h$V0S57, "Retest_Parm", p$V0S57$26) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Retest_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End
p$V0S57$27 = (Versions_Keep_Cnt_Parm)
err$code = DSSetParam(h$V0S57, "Versions_Keep_Cnt_Parm", p$V0S57$27)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Versions_Keep_Cnt_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$28 = (Parm_File_Comma_Parm)
err$code = DSSetParam(h$V0S57, "Parm_File_Comma_Parm", p$V0S57$28)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Parm_File_Comma_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$29 = (FTP_Server)
err$code = DSSetParam(h$V0S57, "FTP_Server", p$V0S57$29)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Server":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$30 = (FTP_Target_Path)
err$code = DSSetParam(h$V0S57, "FTP_Target_Path", p$V0S57$30)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Target_Path":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$31 = (FTP_Port)
err$code = DSSetParam(h$V0S57, "FTP_Port", p$V0S57$31)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Port":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$32 = (FTP_User)
err$code = DSSetParam(h$V0S57, "FTP_User", p$V0S57$32)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_User":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$33 = (FTP_Password) err$code = DSSetParam(h$V0S57, "FTP_Password", p$V0S57$33) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Password":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$34 = (Abort_Msg_Parm) err$code = DSSetParam(h$V0S57, "Abort_Msg_Parm", p$V0S57$34) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Abort_Msg_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$35 = (Load_ISDB_Rejects_Parm) err$code = DSSetParam(h$V0S57, "Load_ISDB_Rejects_Parm", p$V0S57$35) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Load_ISDB_Rejects_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$36 = (Load_ISDB_Source_Parm) err$code = DSSetParam(h$V0S57, "Load_ISDB_Source_Parm", p$V0S57$36) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Load_ISDB_Source_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$37 = (Source_Delimiter_Parm) err$code = DSSetParam(h$V0S57, "Source_Delimiter_Parm", p$V0S57$37) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Source_Delimiter_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$38 = (KTGRM_Header_Default_Parm) err$code = DSSetParam(h$V0S57, "KTGRM_Header_Default_Parm", p$V0S57$38) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KTGRM_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$39 = (VTWEG_Header_Default_Parm) err$code = DSSetParam(h$V0S57, "VTWEG_Header_Default_Parm", p$V0S57$39) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VTWEG_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$40 = (SPART_Header_Default_Parm) err$code = DSSetParam(h$V0S57, "SPART_Header_Default_Parm", p$V0S57$40) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "SPART_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$41 = (WAERS_Local_Default_Parm) err$code = DSSetParam(h$V0S57, "WAERS_Local_Default_Parm", p$V0S57$41) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "WAERS_Local_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$42 = (WAERS_Foreign_Default_Parm) err$code = DSSetParam(h$V0S57, "WAERS_Foreign_Default_Parm", p$V0S57$42) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "WAERS_Foreign_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End
p$V0S57$43 = (KURSK_Default_Parm)
err$code = DSSetParam(h$V0S57, "KURSK_Default_Parm", p$V0S57$43)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KURSK_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$44 = (AUART_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "AUART_Header_Default_Parm", p$V0S57$44)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "AUART_Header_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$45 = (ZLSCH_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "ZLSCH_Header_Default_Parm", p$V0S57$45)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ZLSCH_Header_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$46 = (ZTERM_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "ZTERM_Header_Default_Parm", p$V0S57$46)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ZTERM_Header_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$47 = (WERKS_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "WERKS_Header_Default_Parm", p$V0S57$47)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "WERKS_Header_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$48 = (VKORG_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "VKORG_Header_Default_Parm", p$V0S57$48)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VKORG_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$49 = (TAXM1_Line_Default_Parm) err$code = DSSetParam(h$V0S57, "TAXM1_Line_Default_Parm", p$V0S57$49) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "TAXM1_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$50 = (VRKME_Line_Default_Parm) err$code = DSSetParam(h$V0S57, "VRKME_Line_Default_Parm", p$V0S57$50) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VRKME_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$51 = (VKGRP_Line_Default_Parm) err$code = DSSetParam(h$V0S57, "VKGRP_Line_Default_Parm", p$V0S57$51) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VKGRP_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$52 = (VKBUR_Line_Default_Parm) err$code = DSSetParam(h$V0S57, "VKBUR_Line_Default_Parm", p$V0S57$52) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VKBUR_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$53 = (LGORT_Line_Default_Parm) err$code = DSSetParam(h$V0S57, "LGORT_Line_Default_Parm", p$V0S57$53) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "LGORT_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$54 = (KOSTL_Line_Default_Parm) err$code = DSSetParam(h$V0S57, "KOSTL_Line_Default_Parm", p$V0S57$54) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KOSTL_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$55 = (ZZTAXCD_Default_Parm) err$code = DSSetParam(h$V0S57, "ZZTAXCD_Default_Parm", p$V0S57$55) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ZZTAXCD_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$56 = (KBETR_Revenue_Default_Parm) err$code = DSSetParam(h$V0S57, "KBETR_Revenue_Default_Parm", p$V0S57$56) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KBETR_Revenue_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$57 = (KSCHL_Revenue_Default_Parm) err$code = DSSetParam(h$V0S57, "KSCHL_Revenue_Default_Parm", p$V0S57$57) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KSCHL_Revenue_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End p$V0S57$58 = (KSCHL_Surcharge_Default_Parm) err$code = DSSetParam(h$V0S57, "KSCHL_Surcharge_Default_Parm", p$V0S57$58) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KSCHL_Surcharge_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR End
p$V0S57$59 = (Invocation_Parm)
err$code = DSSetParam(h$V0S57, "Invocation_Parm", p$V0S57$59)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Invocation_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
err$code = DSRunJob(h$V0S57, DSJ.RUNNORMAL)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0003\Error calling DSRunJob(%1), code=%2[E]", jb$V0S57:@FM:err$code)
   msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
handle$list<-1> = h$V0S57
id$list<-1> = "V0S57"
Return
*************************************************
L$V0S57$FINISHED:
job$V0S57$status = DSGetJobInfo(h$V0S57, DSJ.JOBSTATUS)
job$V0S57$userstatus = DSGetJobInfo(h$V0S57, DSJ.USERSTATUS)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0063\%1 finished, status=%2[E]", "FR_PARIS_Control_Start_Processing_SAct":@FM:job$V0S57$status))
IdV0S57%%Result2%%8 = job$V0S57$userstatus
IdV0S57%%Result1%%9 = job$V0S57$status
IdV0S57%%Name%%10 = "FR_PARIS_Control_Start_Processing_Seq"
rpt$V0S57 = DSMakeJobReport(h$V0S57, 1, "CRLF")
dummy$ = DSDetachJob(h$V0S57)
If b$Abandoning Then GoTo L$WAITFORJOB
b$V0S57else = @True
If (job$V0S57$status = DSJS.RUNOK) Then GoSub L$V0S43$START; b$V0S57else = @False
If (job$V0S57$status = DSJS.RUNOK) Then GoSub L$V0S0$START; b$V0S57else = @False
If b$V0S57else Then GoSub L$V0S44$START
GoTo L$WAITFORJOB
*************************************************
L$V0S61$START:
*** Activity "FR_PARIS_Control_End_Processing_SAct": Initialize job
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0057\%1 (JOB %2) started", "FR_PARIS_Control_End_Processing_SAct":@FM:"FR_PARIS_Control_End_Processing_Seq"))
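* The end-of-processing child sequence is attached, parameterised and run in
* exactly the same pattern as the start-of-processing sequence above.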
"FR_PARIS_Control_End_Processing_SAct":@FM:"FR_PA RIS_Control_End_Processing_Seq")) Call DSLogInfo(DSMakeMsg("SEQUENCE START Control_End_Processing_Seq", ""), "@FR_PARIS_Control_End_Processing_SAct") jb$V0S61 = "FR_PARIS_Control_End_Processing_Seq":'.': (Module_Run_Parm) h$V0S61 = DSAttachJob(jb$V0S61, DSJ.ERRNONE) If (Not(h$V0S61)) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0001\Error calling DSAttachJob(%1)%2", jb$V0S61:@FM:DSGetLastErrorMsg()) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End h$V0S61 = DSPrepareJob(h$V0S61) If (Not(h$V0S61)) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0012\Error calling DSPrepareJob(%1)%2", jb$V0S61:@FM:DSGetLastErrorMsg()) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End L$V0S61$PREPARED: p$V0S61$1 = (Project_Parm) err$code = DSSetParam(h$V0S61, "Project_Parm", p$V0S61$1) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Project_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$2 = (Module_Parm) err$code = DSSetParam(h$V0S61, "Module_Parm", p$V0S61$2) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Module_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$3 = (Run_Parm) err$code = DSSetParam(h$V0S61, "Run_Parm", p$V0S61$3) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Run_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$4 = (Module_Run_Parm)
err$code = DSSetParam(h$V0S61, "Module_Run_Parm", p$V0S61$4)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Module_Run_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$5 = (Data_Object_Parm)
err$code = DSSetParam(h$V0S61, "Data_Object_Parm", p$V0S61$5)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Data_Object_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$6 = (Interface_Parm)
err$code = DSSetParam(h$V0S61, "Interface_Parm", p$V0S61$6)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Interface_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$7 = (Interface_Root_Path_Parm)
err$code = DSSetParam(h$V0S61, "Interface_Root_Path_Parm", p$V0S61$7)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Interface_Root_Path_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$8 = (Generic_Root_Path_Parm)
err$code = DSSetParam(h$V0S61, "Generic_Root_Path_Parm", p$V0S61$8)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Generic_Root_Path_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$9 = (Business_Process_Parm)
err$code = DSSetParam(h$V0S61, "Business_Process_Parm", p$V0S61$9)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Business_Process_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$10 = (Company_Parm)
err$code = DSSetParam(h$V0S61, "Company_Parm", p$V0S61$10)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Company_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$11 = (Country_Parm)
err$code = DSSetParam(h$V0S61, "Country_Parm", p$V0S61$11)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Country_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$12 = (SAP_Client_Parm)
err$code = DSSetParam(h$V0S61, "SAP_Client_Parm", p$V0S61$12)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "SAP_Client_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$13 = (Source_System_Parm)
err$code = DSSetParam(h$V0S61, "Source_System_Parm", p$V0S61$13)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Source_System_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$14 = (Fileset_Type_Name_Parm)
err$code = DSSetParam(h$V0S61, "Fileset_Name_Type_Parm", p$V0S61$14)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Fileset_Name_Type_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$15 = (Request_Type_Parm) err$code = DSSetParam(h$V0S61, "Request_Type_Parm", p$V0S61$15) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Request_Type_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$16 = (Timestamp_Parm) err$code = DSSetParam(h$V0S61, "Timestamp_Parm", p$V0S61$16) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Timestamp_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$17 = (Fileset_Parm) err$code = DSSetParam(h$V0S61, "Fileset_Parm", p$V0S61$17) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Fileset_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$18 = (File_Parm) err$code = DSSetParam(h$V0S61, "File_Parm", p$V0S61$18) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "File_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$19 = (File_Name_Parm) err$code = DSSetParam(h$V0S61, "File_Name_Parm", p$V0S61$19) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "File_Name_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End
p$V0S61$20 = (ISDB_Database_Parm)
err$code = DSSetParam(h$V0S61, "ISDB_Database_Parm", p$V0S61$20)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ISDB_Database_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$21 = (ISDB_User_Parm)
err$code = DSSetParam(h$V0S61, "ISDB_User_Parm", p$V0S61$21)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ISDB_User_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$22 = (ISDB_Password_Parm)
err$code = DSSetParam(h$V0S61, "ISDB_Password_Parm", p$V0S61$22)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ISDB_Password_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$23 = (CSDB_Database_Parm)
err$code = DSSetParam(h$V0S61, "CSDB_Database_Parm", p$V0S61$23)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "CSDB_Database_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$24 = (CSDB_User_Parm)
err$code = DSSetParam(h$V0S61, "CSDB_User_Parm", p$V0S61$24)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "CSDB_User_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$25 = (CSDB_Password_Parm)
err$code = DSSetParam(h$V0S61, "CSDB_Password_Parm", p$V0S61$25)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "CSDB_Password_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$26 = (Retest_Parm) err$code = DSSetParam(h$V0S61, "Retest_Parm", p$V0S61$26) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Retest_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$27 = (Versions_Keep_Cnt_Parm) err$code = DSSetParam(h$V0S61, "Versions_Keep_Cnt_Parm", p$V0S61$27) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Versions_Keep_Cnt_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$28 = (Parm_File_Comma_Parm) err$code = DSSetParam(h$V0S61, "Parm_File_Comma_Parm", p$V0S61$28) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Parm_File_Comma_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$29 = (FTP_Server) err$code = DSSetParam(h$V0S61, "FTP_Server", p$V0S61$29) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Server":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$30 = (FTP_Target_Path) err$code = DSSetParam(h$V0S61, "FTP_Target_Path", p$V0S61$30) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Target_Path":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$31 = (FTP_Port) err$code = DSSetParam(h$V0S61, "FTP_Port", p$V0S61$31) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Port":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$32 = (FTP_User) err$code = DSSetParam(h$V0S61, "FTP_User", p$V0S61$32) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_User":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$33 = (FTP_Password) err$code = DSSetParam(h$V0S61, "FTP_Password", p$V0S61$33) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "FTP_Password":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$34 = (Abort_Msg_Parm) err$code = DSSetParam(h$V0S61, "Abort_Msg_Parm", p$V0S61$34) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Abort_Msg_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$35 = (Load_ISDB_Rejects_Parm) err$code = DSSetParam(h$V0S61, "Load_ISDB_Rejects_Parm", p$V0S61$35) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Load_ISDB_Rejects_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End
p$V0S61$36 = (Load_ISDB_Source_Parm)
err$code = DSSetParam(h$V0S61, "Load_ISDB_Source_Parm", p$V0S61$36)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Load_ISDB_Source_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$37 = (Source_Delimiter_Parm)
err$code = DSSetParam(h$V0S61, "Source_Delimiter_Parm", p$V0S61$37)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Source_Delimiter_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$38 = (KTGRM_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "KTGRM_Header_Default_Parm", p$V0S61$38)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KTGRM_Header_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$39 = (VTWEG_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "VTWEG_Header_Default_Parm", p$V0S61$39)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VTWEG_Header_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$40 = (SPART_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "SPART_Header_Default_Parm", p$V0S61$40)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "SPART_Header_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$41 = (WAERS_Local_Default_Parm)
err$code = DSSetParam(h$V0S61, "WAERS_Local_Default_Parm", p$V0S61$41)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "WAERS_Local_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$42 = (WAERS_Foreign_Default_Parm) err$code = DSSetParam(h$V0S61, "WAERS_Foreign_Default_Parm", p$V0S61$42) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "WAERS_Foreign_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$43 = (KURSK_Default_Parm) err$code = DSSetParam(h$V0S61, "KURSK_Default_Parm", p$V0S61$43) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KURSK_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$44 = (AUART_Header_Default_Parm) err$code = DSSetParam(h$V0S61, "AUART_Header_Default_Parm", p$V0S61$44) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "AUART_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$45 = (ZLSCH_Header_Default_Parm) err$code = DSSetParam(h$V0S61, "ZLSCH_Header_Default_Parm", p$V0S61$45) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ZLSCH_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$46 = (ZTERM_Header_Default_Parm) err$code = DSSetParam(h$V0S61, "ZTERM_Header_Default_Parm", p$V0S61$46) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ZTERM_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$47 = (WERKS_Header_Default_Parm) err$code = DSSetParam(h$V0S61, "WERKS_Header_Default_Parm", p$V0S61$47) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "WERKS_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$48 = (VKORG_Header_Default_Parm) err$code = DSSetParam(h$V0S61, "VKORG_Header_Default_Parm", p$V0S61$48) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VKORG_Header_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$49 = (TAXM1_Line_Default_Parm) err$code = DSSetParam(h$V0S61, "TAXM1_Line_Default_Parm", p$V0S61$49) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "TAXM1_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$50 = (VRKME_Line_Default_Parm) err$code = DSSetParam(h$V0S61, "VRKME_Line_Default_Parm", p$V0S61$50) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VRKME_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$51 = (VKGRP_Line_Default_Parm) err$code = DSSetParam(h$V0S61, "VKGRP_Line_Default_Parm", p$V0S61$51) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VKGRP_Line_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End
p$V0S61$52 = (VKBUR_Line_Default_Parm)
err$code = DSSetParam(h$V0S61, "VKBUR_Line_Default_Parm", p$V0S61$52)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "VKBUR_Line_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$53 = (LGORT_Line_Default_Parm)
err$code = DSSetParam(h$V0S61, "LGORT_Line_Default_Parm", p$V0S61$53)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "LGORT_Line_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$54 = (KOSTL_Line_Default_Parm)
err$code = DSSetParam(h$V0S61, "KOSTL_Line_Default_Parm", p$V0S61$54)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KOSTL_Line_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$55 = (ZZTAXCD_Default_Parm)
err$code = DSSetParam(h$V0S61, "ZZTAXCD_Default_Parm", p$V0S61$55)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "ZZTAXCD_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$56 = (KBETR_Revenue_Default_Parm)
err$code = DSSetParam(h$V0S61, "KBETR_Revenue_Default_Parm", p$V0S61$56)
If (err$code <> DSJE.NOERROR) Then
   msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KBETR_Revenue_Default_Parm":@FM:err$code)
   msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$57 = (KSCHL_Revenue_Default_Parm)
err$code = DSSetParam(h$V0S61, "KSCHL_Revenue_Default_Parm", p$V0S61$57)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KSCHL_Revenue_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$58 = (KSCHL_Surcharge_Default_Parm) err$code = DSSetParam(h$V0S61, "KSCHL_Surcharge_Default_Parm", p$V0S61$58) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "KSCHL_Surcharge_Default_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$59 = (Invocation_Parm) err$code = DSSetParam(h$V0S61, "Invocation_Parm", p$V0S61$59) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Invocation_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$60 = (Run_Reconciliation_Parm) err$code = DSSetParam(h$V0S61, "Run_Reconciliation_Parm", p$V0S61$60) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Run_Reconciliation_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$61 = (Run_Error_Mgmt_Parm) err$code = DSSetParam(h$V0S61, "Run_Error_Mgmt_Parm", p$V0S61$61) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Run_Error_Mgmt_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$62 = (Update_Status_In_DB_Parm) err$code = DSSetParam(h$V0S61, "Update_Status_In_DB_Parm", p$V0S61$62) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Update_Status_In_DB_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End p$V0S61$63 = (Load_ISDB_Cmn_Fmt_Parm) err$code = DSSetParam(h$V0S61, "Load_ISDB_Cmn_Fmt_Parm", p$V0S61$63) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", "Load_ISDB_Cmn_Fmt_Parm":@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End err$code = DSRunJob(h$V0S61, DSJ.RUNNORMAL) If (err$code <> DSJE.NOERROR) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0003\Error calling DSRunJob(%1), code=%2[E]", jb$V0S61:@FM:err$code) msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR End handle$list<-1> = h$V0S61 id$list<-1> = "V0S61" Return ************************************************* * L$V0S61$FINISHED: job$V0S61$status = DSGetJobInfo(h$V0S61, DSJ.JOBSTATUS) job$V0S61$userstatus = DSGetJobInfo(h$V0S61, DSJ.USERSTATUS) summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0063\%1 finished, status= %2[E]", "FR_PARIS_Control_End_Processing_SAct":@FM:job$V0 S61$status)) IdV0S61%%Result2%%11 = job$V0S61$userstatus IdV0S61%%Result1%%12 = job$V0S61$status IdV0S61%%Name%%13 = "FR_PARIS_Control_End_Processing_Seq" rpt$V0S61 = DSMakeJobReport(h$V0S61, 1, "CRLF") dummy$ = DSDetachJob(h$V0S61) If b$Abandoning Then GoTo L$WAITFORJOB b$V0S61else = @True If (job$V0S61$status = DSJS.RUNOK) Then GoSub L$V0S43$START; b$V0S61else = @False If b$V0S61else Then GoSub L$V0S44$START GoTo L$WAITFORJOB ************************************************* * L$V0S72$START: *** Activity "Abort_RAct": Call routine summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0058\%1 (ROUTINE %2)
started", "Abort_RAct":@FM:"DSX.UTILITYABORTTOLOG")) rtn$ok = DSCheckRoutine("DSX.UTILITYABORTTOLOG") If (Not(rtn$ok)) Then msg$ = DSMakeMsg("DSTAGE_JSG_M_0005\BASIC routine is not cataloged: %1", "DSX.UTILITYABORTTOLOG") msg$id = "@Abort_RAct"; GoTo L$ERROR End p$V0S72$1 = (Abort_Msg_Parm) Call 'DSX.UTILITYABORTTOLOG'(r$V0S72, p$V0S72$1) summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0064\%1 finished, reply= %2", "Abort_RAct":@FM:r$V0S72)) IdV0S72%%Result1%%14 = r$V0S72 IdV0S72%%Name%%16 = "DSX.UTILITYABORTTOLOG" Return ************************************************* * L$WAITFORJOB: If handle$list = "" Then GoTo L$FINISH handle$ = DSWaitForJob(handle$list) If handle$ = 0 Then handle$ = handle$list<1> Locate handle$ In handle$list Setting index$ Then id$ = id$list b$Abandoning = abort$list Del id$list; Del handle$list; Del abort$list Begin Case Case id$ = "V0S0" GoTo L$V0S0$FINISHED Case id$ = "V0S57" GoTo L$V0S57$FINISHED Case id$ = "V0S61" GoTo L$V0S61$FINISHED End Case End * Error if fall though handle$list = "" msg$ = DSMakeMsg("DSTAGE_JSG_M_0008\Error calling DSWaitForJob(), code=%1[E]", handle$) msg$id = "@Coordinator"; GoTo L$ERROR ************************************************* * L$ERROR: Call DSLogWarn(DSMakeMsg("DSTAGE_JSG_M_0009\Controller problem: %1", msg$), msg$id) summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0052\Exception raised: %1", msg$id:", ":msg$)) abort$list = Ifs(handle$list, Str(1:@FM, DCount(handle$list, @FM)), "") b$Abandoning = @True GoTo L$WAITFORJOB L$ABORT:
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0056\Sequence failed", ""))
Call DSLogInfo(summary$, "@Coordinator")
Call DSLogFatal(DSMakeMsg("DSTAGE_JSG_M_0013\Sequence job will abort due to previous unrecoverable errors", ""), "@Coordinator")
*************************************************
L$FINISH:
If b$Abandoning Then GoTo L$ABORT
If Not(b$AllStarted) Then Return
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0054\Sequence finished OK", ""))
Call DSLogInfo(summary$, "@Coordinator")
L$EXIT:
Return To L$EXIT