Channel: SCN : Document List - Data Services and Data Quality

Hierarchy visualization of objects


Have you ever been in a situation where you had to list out the child workflows and dataflows of a job? Maybe for documentation, or maybe for checking object usage. The real challenge is when you have to represent it as an organizational chart.

 

Here is an example:


For a better view, click on the image; in the preview window, right-click and save it to your desktop.

data1.png

How do we do it?

 

Of course, you can drill down into every object in Designer, navigate through it, and draw the chart in Microsoft Visio, MS Word, etc. But how about generating the hierarchical chart from repository metadata rather than drawing it by hand?

 


It's just three steps away

 

1. Start by populating the AL_PARENT_CHILD table, either from Designer or from the command line.

 

          a. From designer:

Untitled2.png

 

 

          b. From the command line using al_engine.exe:

          command: "%LINK_DIR%\bin\al_engine.exe" -NMicrosoft_SQL_Server -StestSQLhost -Udb_user -Pdb_pass -Qtestdb -ep

Untitled2.png

 

2. Log in to the repository database and execute the query below:

 

WITH CTE AS (
      SELECT [PARENT_OBJ]
            ,[PARENT_OBJ_TYPE]
            ,[DESCEN_OBJ]
            ,[DESCEN_OBJ_TYPE]
      FROM [AL_PARENT_CHILD] PC
      WHERE 'JOB_CORD_BW_TD_OHS_POPULATE_SDL' = PARENT_OBJ
      UNION ALL
      SELECT PC.[PARENT_OBJ]
            ,PC.[PARENT_OBJ_TYPE]
            ,PC.[DESCEN_OBJ]
            ,PC.[DESCEN_OBJ_TYPE]
      FROM CTE INNER JOIN [AL_PARENT_CHILD] PC ON CTE.DESCEN_OBJ = PC.PARENT_OBJ
)
SELECT 2 AS ID, [PARENT_OBJ] + '->' + [DESCEN_OBJ] CODE FROM CTE
WHERE PARENT_OBJ_TYPE IN ('Job','WorkFlow','DataFlow')
  AND [DESCEN_OBJ_TYPE] IN ('Job','WorkFlow','DataFlow')
UNION
SELECT 1 AS ID, 'digraph a { node [shape=rectangle]'
UNION
SELECT 3 AS ID, '}'

 

 

Untitled1.png

 

From the output, copy only the contents of the second column, without the header.
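If you prefer to script this step instead of copying from the query tool, a sqlcmd call along these lines should work on MS SQL Server. This is only a sketch: server, database, credentials and file names are placeholders, and hierarchy.sql is assumed to contain the query above wrapped as SELECT CODE FROM ( ... ) T ORDER BY ID, so that only the DOT text is written out, in the right order.

          command: sqlcmd -S testSQLhost -d testdb -U db_user -P db_pass -h -1 -W -i hierarchy.sql -o job_hierarchy.dot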

 

My data looks like this:

digraph a { node [shape=rectangle]
JOB_CORD_BW_TD_OHS_POPULATE_SDL->WF_Job_Workflow_SSP_Container_CORD_BW_TD_OHS__SDL
WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL->DF_BW_Z_BODS_01_STG_TO_BW_Z_BODS_01_SDA
WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL->DF_S_BW_Z_BODS_01_to_SDA_BW_Z_BODS_01_SDL_1
WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL->DF_S_BW_Z_BODS_01_to_SDA_BW_Z_BODS_01_SDL_2
WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL->DF_BW_Z_BODS_02_STG_TO_BW_Z_BODS_02_SDA
WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL->DF_S_BW_Z_BODS_02_to_SDA_BW_Z_BODS_02_SDL_01
WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL->DF_S_BW_Z_BODS_02_to_SDA_BW_Z_BODS_02_SDL_02
WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_01_SDL->WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL
WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_02_SDL->WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL
WF_Job_Workflow_SSP_Container_CORD_BW_TD_OHS__SDL->WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL
WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL->WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_01_SDL
WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL->WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_02_SDL
WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL->WF_Master_Workflow_Staging_CORD_BW_TD_OHS_SDL
WF_Master_Workflow_Staging_CORD_BW_TD_OHS_SDL->WF_Staging_Workflow_Container_CORD_BW_TD_OHS_SDL
WF_Staging_Workflow_Container_CORD_BW_TD_OHS_SDL->WF_Staging_Z_BODS_01_to_S_BW_Z_BODS_01_SDL
WF_Staging_Workflow_Container_CORD_BW_TD_OHS_SDL->WF_Staging_Z_BODS_02_to_S_BW_Z_BODS_02_SDL
WF_Staging_Z_BODS_01_to_S_BW_Z_BODS_01_SDL->DF_OH_Src_Z_BODS_01_To_Stg_S_BW_Z_BODS_01_Map_SDL
WF_Staging_Z_BODS_02_to_S_BW_Z_BODS_02_SDL->DF_OH_Src_Z_BODS_02_To_Stg_S_BW_Z_BODS_02_Map_SDL
}

 

 

3. Open the Webgraphviz web page:

  1. Clear the existing contents of the text box.
  2. Paste the code you copied.
  3. Click the Generate Graph button and scroll down to see the generated graph.

 


That's all; the org chart of your job is ready!

 


Note:

  1. The SQL query
    1. The given query works only on MS SQL Server.
    2. Modify the WHERE clause in the query to match your job name.
    3. You can also use "IN" instead of "=" to list multiple job names (see the example below).
    4. The query is restricted to jobs, workflows and dataflows; you can modify the conditions to include other object types too.
  2. We are generating only the parent-child hierarchy, not the execution flow; i.e. two child nodes at the same level may not actually execute in parallel.
  3. Since this is not the execution flow, conditional workflows will not appear in the chart.
  4. Webgraphviz is an online alternative to the Graphviz tool, which supports command-line usage when installed locally (see the command below).
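Two small variations, shown here only as illustrative sketches (job names and file names are placeholders):

To chart several jobs in one go, widen the anchor condition of the CTE:

      WHERE PARENT_OBJ IN ('JOB_CORD_BW_TD_OHS_POPULATE_SDL', 'JOB_ANOTHER_JOB')

And if Graphviz is installed locally, save the copied text to a .dot file and render it from the command line instead of using the website:

      command: dot -Tpng job_hierarchy.dot -o job_hierarchy.png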

 

 

Appreciate your comments/feedback. Cheers


Step by step data loading from BODS to BW target



Configurations at BW system:

 

1) Log on to the SAP BW system.
2) Enter T-code 'SALE' to create a new logical system:

1.jpg

3) To create a logical system, choose Define Logical System.

  • Enter a name for the logical system that you want to create.
  • Enter a description of the logical system.
  • Save your entries.

4) Go to transaction RSA1 to create the RFC connection.

5) Select Source Systems in the Modeling navigation pane on the left.

6) Navigate to BO DataServices, right-click, and select Create.

2.jpg

7) Enter Logical System Name and Source System Name as shown above and hit Continue.

3.jpg

8) Data Services will start an RFC server program and indicate to SAP BI that it is ready to receive RFC calls. To identify itself as the RFC server representing this SAP BI source system, a keyword is exchanged; in the screenshot above it is "BODREP". This is the Registered Server Program under which the Data Services RFC server will register itself at SAP. Therefore, provide the same Program ID that you want to use for the RFC server on the Data Services side. All other settings for the source system can remain at their defaults.
To complete the definition of the source system, save it.

4.jpg

NOTE: We have to use the same Program ID when creating the RFC connection in the Data Services Management Console (BODS).

BO Data Services - Configure an RFC connection

 

  1. Log on to the SAP Data Services Management Console.

5.jpg

2     Expand the new "SAP Connections" node and open the "RFC Server Interface" item. In the Configuration tab, add a new RFC server so that it can register itself inside the SAP system with the given Program ID.

6.jpg 

3     Start the RFC server from the 'RFC Server Interface Status' tab:

7.jpg

4     Go to BW and check the connection:

8.jpg

  It will show a message like the one below:

9.jpg

 

Creating BW source:

  1. Double-click on the BODS connection:

10.jpg

   2    Right-click on the header and create a new application component (here it is ZZ_EMPDS):

11.jpg

12.jpg

   3    Right-click on the application component and create a new DataSource:

13.jpg

   4    Fill in the information for the DataSource as shown below:

         General Info. Tab

14.jpg

   Extraction Tab

15.jpg

   Fields Tab: Here we’ll define the structure of the BW target and save it.

16.jpg

   5    Now BW will automatically create a new InfoPackage, as shown below:

17.jpg

 

 

Creating BODS Datastore and Job:

1     Right-click and create a new datastore for the BW target and fill in the required BW system details as shown below:

18.jpg

2     Right-click on Transfer Structure and import the DataSource (here it is the transaction DataSource 'ZBODSTGT'):

19.jpg

20.jpg

3     Right-click on File Format and create a new file format as shown below:

21.jpg

4     Create a BODS job where the source is a flat file and the target is the BW DataSource (transfer structure), as shown below:

22.jpg

The query mappings are as shown below:

23.jpg 

 

Execute the job:

A BODS job that loads data into BW can be executed from either system: BODS Designer or the BW system.

Before executing the job, we have to make the following configurations in the BW InfoPackage:

 

Go to BW, double-click on the InfoPackage for the respective DataSource (ZBODSTGT), fill in the "3rd party selection" details as below, and save it.

Repository    : BODS repository name

JobServer     : BODS running jobserver name

JobName     : BODS job name

24.jpg

 

 

Job execution from the BW system: Right-click and execute the InfoPackage. It will trigger the BODS job, which loads the data into the BW DataSource.

25.jpg

 

OR

Double-click on the InfoPackage, go to the 'Schedule' tab and click Start:

26.jpg

Job execution from BODS Designer: Go to BODS Designer, right-click on the BODS job and execute it.

27.jpg

28.jpg

 

 

Verifying data in BW:

  1. Go to the DataSource, right-click and choose Manage:

29.jpg

When we execute the job, it generates a request for the data extracted from BODS:

30.jpg

2     Select the latest request and click on PSA Maintenance, which will show the data in the target DataSource.

 

After the data is loaded into the BW DataSource, it can be mapped and loaded to any BW target, such as a DSO or a cube, using a process chain in the "Schedule" tab of the InfoPackage:

Go to the Schedule options and enter the process chain name in 'After Event'.


The EIM Bulletin


Purpose

 

The Enterprise Information Management Bulletin (this page) is a timely and regularly-updated information source providing links to hot issues, new documentation, and upcoming events of interest to users and administrators of SAP Data Quality Management (DQM), SAP Data Services (DS), and SAP Information Steward (IS).

 

To subscribe to The EIM Bulletin, click the "Follow" link you see to the right.

 

Hot Issues

 

Best Practices for upgrading older Data Integrator or Data Services repositories to Data Services 4.2

  • Finally upgrading your old version of Data Integrator or Data Services? Please refer to the guide above.

 

 

Latest Release Notes

(updated 2015-09-10)

  • 2129507 - Release Notes for DQM SDK 4.2 Support Pack 4 Patch 1 (14.2.4.772)
  • 2192027 - Release Notes for SAP Data Services 4.2 Support Pack 4 Patch 3 (14.2.4.873)
  • 2195658 - Release Notes for SAP Data Services 4.2 Support Pack 5 Patch 1 (14.2.5.894)
  • 2192015 - Release Notes for SAP Information Steward 4.2 Support Pack 4 Patch 3 (14.2.4.836)
  • 2195665 - Release Notes for SAP Information Steward 4.2 Support Pack 5 Patch 1 (14.2.5.851)

 

New Product Support Features
(Coming Soon)

 

 

Selected New KB Articles and SAP Notes

(updated 2015-08-28)

  • 2209609 - New built-functions are missing after repository upgrade - Data Services 4.2 SP5
  • 2191581 - Error: no crontab for userid
  • 2194903 - Getting error "CMS cluster membership is incorrect" from Data Services Designer
  • 2206478 - How to change location of pCache - Information Steward 4.x
  • 2198166 - Designer use of views as target only possible now from datastore library
  • 2194994 - Unable to start Data Services job server - SAP Data Services 4.x

 

Your Product Ideas Realised!

(new section 2015-06-25)

 

Enhancements for EIM products suggested via the SAP Idea Place, where you can vote for your favorite enhancement requests, or enter new ones.

 

Events

(To be determined)


New Resources

(To be determined)


Didn't find what you were looking for? Please see:


Note: To stay up-to-date on EIM Product information, subscribe to this living document by clicking "Follow", which you can see in the upper right-hand corner of this page.

Connect Data Services 4.2 SP4 With Mainframe ADABAS


Good afternoon,

I would like to develop an ETL process using SAP Data Services 4.2 SP4 connecting to ADABAS on the mainframe.

The product documentation mentions:

2.5.2.1 Mainframe interface

The software provides the Attunity Connector datastore, which accesses mainframe data sources through Attunity Connect.

The data sources that Attunity Connect accesses include the following. For a complete list of sources, refer to the Attunity documentation.

● Adabas

2.5.2.1.1 Prerequisites for an Attunity datastore

Attunity Connector accesses mainframe data using software that you must manually install on the mainframe server and on the client (Job Server) computer. The software connects to Attunity Connector using its ODBC interface.

Attunity.jpg

Do I have to perform an installation on the mainframe server? Is it possible to access ADABAS through this connector alone? Or is there no need to buy and install Attunity Server?

The Attunity site mentions:

The Business Objects Data Integrator product includes direct support for the Attunity Server ODBC interface.

 

Could someone send me the procedure required to set up this connection to ADABAS on the mainframe?

 

Thank you

 

Hugs

 

This document was generated from the following discussion: Connect Data Services 4.2 SP4 With Mainframe ADABAS

SAP BODS for Data Migration


AIM:- The aim of this document is to illustrate the data migration steps that use BODS as the ETL tool to extract data from legacy systems and to profile, transform, validate and cleanse the data before loading it into SAP systems.

 

WHY BODS:- In the years since SAP acquired Business Objects, there has been growing demand for Business Objects Data Services (aka BODS, or simply SAP DS) as a data migration tool.

The ease of development and maintenance of the code, with easy-to-understand graphical flows built from workflows and dataflows, helps customers switch from SAP LSMW (Legacy System Migration Workbench) to BODS with minimal training and effort.

 

HOW TO START:- Since the target of the migration is SAP, it is imperative to start from standard job templates that cover the SAP modules such as FI, CO, MM, SD, PP, PM, etc.

SAP provides Best Practices for Data Migration (BPFDM) content under the All-in-One (AiO) umbrella, which encapsulates the standard data migration input files and jobs that can be installed and configured in a BODS repository.

 

WHAT ARE THESE BPFDM:- The following packages can be downloaded from the SAP Service Marketplace.

 

Migration_CRM

Migration_DQ

Migration_ERP

Migration_HCM

Migration_Retail

 

Each folder contains the set of standard jobs, lookup files, standard input files and datastore connections necessary to deploy in the BODS environment.

 

HOW TO DO:-

   You first need to have a local repository created, preferably a clean one to start with.

1)      Log in to the BODS Designer.

2)      Right-click on a blank space in the Local Object Library area and choose to import the file from the folder you downloaded from the marketplace.

    You will get a confirmation message for the import. Click OK.

 

 

3)      After the import is completed, you will see a set of jobs.

 

jobs.jpg

 

 

These are the standard AiO jobs for migration to SAP ECC, covering finance, logistics execution and other ERP modules.

Each job is identical in structure; only the business content varies, loading the SAP data specific to the respective module.

The jobs were developed following industry best-practice approaches and extensive interaction with customers, involving technical and functional/business users with strong experience in each business module.

 

 

Job details and customization points:-

1)      The job begins with a Script object that initializes the global and local variables. The global variables are the same across all jobs, while the local variables can differ for each business module.

 

2)      The subsequent conditional flows generate data in staging tables for each segment used in the specific IDoc, and finally end with generating/creating the IDoc.

You can choose to include or exclude certain workflows by setting the variable associated with each workflow at run time.

Each workflow under a conditional object is the vital point where you need to concentrate on the mappings, lookups, validation and enrichment, and ensure that they are as per the requirements.

This is where customization plays a pivotal role; it will need several rounds of review and testing before you actually load data into the SAP system.

3)      The last step is the Generate IDoc section, where the legacy fields are mapped to the relevant segments of the IDoc specific to the SAP business module.

 

This section is sometimes skipped by customers who instead use SAP load programs such as LSMW, BDC or custom ABAP programs.

 

 

Job flow screenshot:-

 

job flow.jpg

 

1)      SCR Initialize:- This script initializes all relevant global and local variables that are used to pass values into the subsequent output field mappings.

 

During customization for a particular project/client landscape, you can include more variables in the script objects and initialize them there, in order to maintain flexibility and encourage re-usability.

 

initialize.jpg
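As an illustration only (the variable names here are hypothetical, not the ones delivered with the Best Practices jobs), an initialization script of this kind is just a short block of Data Services script assignments, for example:

# SCR Initialize - illustrative sketch; the delivered jobs declare their own variables
$G_LOAD_DATE   = sysdate();          # run date used later in derived columns
$G_SOURCE_SYS  = 'LEGACY01';         # code of the legacy source system
$G_DEFAULT_VAL = 'NA';               # default value returned by lookups when nothing matches
print('Initialization done for source system: ' || $G_SOURCE_SYS);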

 

2)      In order to map the legacy fields to IDoc segments, you first need to create staging tables with the same structure.

It is important to understand the target IDoc structure carefully first.

 

  1. e.g. the Customer Master IDoc (DEBMAS) has one segment called E1KNA1M (master customer basic data) and several sub-segments such as E1KNA1H (header), E1KNVVM (sales data), E1KNB1M (company code data), E1KNBKM (bank details), etc.

 

There are several fields beneath each parent and child segment; a simple sketch of this nesting is shown below.
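Sketched from the segments named above (the exact nesting in your DEBMAS release may differ slightly):

E1KNA1M  (master customer basic data)
    E1KNA1H  (header)
    E1KNVVM  (sales data)
    E1KNB1M  (company code data)
    E1KNBKM  (bank details)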

 

So it is important to work segment by segment: pick each required segment and go to its conditional flow, such as CustomerMasterBasicData_E1KNA1M_Required.

Navigate through this flow first, as shown below:

 

 

first flow.jpg

 

This is the most important workspace, where you need to do the customization as per the requirements.

The mapping specification provided by the business has to be applied here carefully, with close functional/business engagement.

The object "Replace with sourceFile or Table" has to be replaced with the input/legacy files provided by the business.

You can use either an Excel file or a text file as the source, as per the project guidelines.

Accordingly, you need to create an Excel or text file format in BODS with the required fields and then use it as input in the dataflow workspace below.

In the Query transform, you need to map all required fields from your input file, create additional derived columns if required, and load them to the staging/template tables.

 

 

map flow.jpg

 

After the data is loaded to the map staging table, it is time to validate the records and load them to the valid staging table. The data that fails the validation criteria is loaded to the invalid staging table, which shows the rejected records along with a reason field.

Accordingly, the failed records can be analyzed with the business/functional counterparts and re-processed after making the necessary configuration changes in the SAP system or correcting the mappings at the BODS level.

Some fields in the IDoc segments are mandatory and must not arrive as NULL from the source. Hence you can look at 'Validate Mandatory Columns' and add more rules if required.

The 'Validate Lookups' step is the most frequently visited place, where you analyze the lookup tables used for validation.

The lookup tables usually have to be analyzed and modified after several engagements with the business/functional counterparts.

They always need to be kept accurate and agreed upon by both sides.

 

 

validate.jpg

 

After loading the valid staging table, it is time to further enrich the data so that it meets SAP standards, using lookup tables such as the MGMT_ALLOCATION table and the other LKP tables.

The lookup_ext function is used for this purpose, since it lets you specify conditions on multiple fields.

You can add more enhancements here as per the business requirements.

After all enrichments, the data is loaded to the enriched staging table.

 

 

enrich.jpg
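As a rough illustration of such an enrichment mapping (datastore, table and column names are placeholders, and the exact argument layout generated by the Designer's lookup wizard can differ), a lookup_ext call has this general shape:

lookup_ext([DS_STAGING.DBO.LKP_MGMT_ALLOCATION, 'PRE_LOAD_CACHE', 'MAX'],
           [TARGET_VALUE],
           ['NA'],
           [SOURCE_FIELD, '=', Query.SOURCE_FIELD,
            COMPANY_CODE, '=', Query.COMPANY_CODE])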

 

After loading the enriched staging table, it is time to revisit the IDoc structure for the segment into which the final enriched staging table will be loaded.

 

 

Similarly, you need to repeat this step for each conditional workflow.

The last step in each conditional segment workflow is Generate IDoc, where you map each enriched table to the corresponding segment of the IDoc.

 

 

idoc.jpg

 

The above screenshot is fairly self-explanatory about what you need to do.

One frequent action is 'Make Current' on a segment, after which you map the input fields to the output fields to ensure the mappings are correct.

 

 

nesting.jpg

 

The screenshots below show the nested structure that you build in BODS to imitate the IDoc.

 

 

imitate1.jpg

imitate2.jpg

 

 

A major part of the value of BODS lies in profiling the source data (max, min, NULL counts, etc.) and performing the mappings in an easy-to-use GUI, switching back and forth on the same screen. Being able to switch validations on and off, available from BODS 4.x onwards, is a useful feature.

The Data Validation dashboard in the Data Services Management Console helps you analyze the validation statistics.

 

 

mgmt console.jpg

 

This document only covers the basic preparation for the data migration/conversion step (the 'C' in RICEF) of an SAP implementation, specific to BODS.

I will add more documents like this in the future.

Add an attachment to BODS Job Notification email using VB Script


Hi All,

 

In some ETL projects there is a need to send the processed flat file (CSV, XLS, TXT) to the customer by e-mail. As far as I know, there is no option in the SMTP_TO() function to add an attachment to the e-mail notification.

 

We can achieve this functionality through a VB Script using the exec() function in a script.

 

The steps below describe how to achieve this:

 

 

Steps: Add an attachment to BODS Job  Notification email using VB Script

 

Detailed Description:

In order to ease the end user's effort, it is often required that the processed flat file (CSV, XLS, etc.) is sent to the user for validation. In BODS we cannot attach the report using the SMTP_TO() function.

 

Below we will see an example of such activity.

 

Current Scenario:

 

There is no built-in functionality in BODS to attach a file and send it to a user. The same can be implemented in BODS by calling a VB script through the exec() function.

 

Solution: After the completion of the job, place a script that calls a VB Script (.vbs) file to send the e-mail notification. The .vbs file must be saved in the shared "Processed" location folder.

Declare the below global variable in the job:

$G_PROCESSED_LOCATION ='\\\XYZ_Location\ Processed';

 

The email.vbs file has the following information;

strSMTPFrom = "User@abc.com"
strSMTPTo = "User@abc.com"
strSMTPRelay = "smtp.abc.com"
strTextBody = "JOB_NAME completed successfully in UAT. Attached is the file load status."
strSubject = "JOB_NAME completed in UAT"
strAttachment = "\\XYZ_Location\Processed\MyFile.xls"

Set oMessage = CreateObject("CDO.Message")
oMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
oMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = strSMTPRelay
oMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
oMessage.Configuration.Fields.Update

oMessage.Subject = strSubject
oMessage.From = strSMTPFrom
oMessage.To = strSMTPTo
oMessage.TextBody = strTextBody
oMessage.AddAttachment strAttachment
oMessage.Send

 

Here is the Data Services script call that runs the VB script and sends the e-mail:

exec('cscript','[$G_PROCESSED_LOCATION]\email.vbs', 8);

 

 

Regards

Arun Sasi


TUTORIAL: How to duplicate a Job batch in SAP Data Services Designer


Introduction

This tutorial will guide you through the batch job and component duplication process. We will also explain how duplication actually behaves.

 

 

 

Foreword

 

First of all, we must understand something before duplicating objects. For Data Services Designer (DSD), there are two kinds of objects with respect to duplication: reusable objects and non-reusable objects (or single-use objects).

  • Non-reusable objects are duplicated when we copy a job and its components. This category includes scripts, conditions, loops, global variables…
  • Reusable objects are not duplicated. This category includes workflows and dataflows. Instead of being copied, these reusable objects are only referenced.

 

For more information, see the SAP Data Services Designer documentation Page 54

 

 

 

The duplication logic illustrated


2015-08-17_14h06_16.png



To show how DSD treats each kind of object, here is an example:

I made a copy of my batch job « Job_Batch_A » in my repository and named it Job_Batch_B.

Here is what happens:

 

 

 

Schema.png

Diagram of duplication of a Job Batch

 

 

Job_Batch_B itself is duplicated, but the inherited reusable objects are not copied: what gets created are references to the original objects, not duplicated objects.

 

To be more specific, if I make a change in my Job_Batch_B, on Workflow_A or on one of its dataflows, that change will also be reflected in Job_Batch_A.

Only the reusable objects inherited by the duplicated object are references to the original objects! The copied object itself (here, a batch job) really has been duplicated.

 

So be careful when you make a copy of your jobs, workflows and dataflows.


To further explain:

 

Job_Batch A ≠ Job_Batch B

Job_Batch_A Reusable objects = Job_Batch_B Reusable objects.

 

The logic may seem strange and complex, but it is simpler than it looks. To better illustrate this:

- If I remove Workflow_A from my Job_Batch_B, it will not also be removed from Job_Batch_A (and vice versa).

- But if I remove a dataflow from Workflow_A inside my Job_Batch_B, it will also be removed from Job_Batch_A, because both jobs point to the same object, Workflow_A (and vice versa).

 

 

The same logic applies to added components:

- If I add a workflow to my Job_Batch_B, it will not be added to my Job_Batch_A (and vice versa).

- If I add a dataflow to Workflow_A from within my Job_Batch_B, it will also be added to Job_Batch_A (and vice versa).

 

 

I used a batch job with an inherited workflow and dataflow as the example, but this copy/reference logic is the same one level down, between workflows and their dataflows.

 

 

Job_Batch_example.png

Hierarchy of reusable objects


The main thing to remember about this logic:

Duplicating a reusable object creates a new object, but the inherited objects are not duplicated: they are references to the original objects!

 

 

This inheritance logic for reusable objects stops at the dataflows, because the objects contained in a dataflow are non-reusable objects (objects that cannot be reused and are therefore copied). So if I make a copy of a dataflow, the changes I make in the duplicated dataflow will not be reflected elsewhere, because I am at the lowest level of reusable objects. This answers the question "If I make a change in a copied dataflow, will it affect other objects?". Please note, however, that not all objects contained in our batch jobs and workflows are reusable objects (e.g. scripts).

 

 

That is the logic of DSD duplication. It can seem surprising at first glance because it does not follow the traditional "copy/paste" logic we know, but with hindsight it brings a lot of benefits, including the reuse of elements and the propagation of changes.

Now that we understand the logic, we can answer the following question:

 

"How to duplicate a job and make it independent?"

 

Let's return to the job illustrated above and explain the procedure to follow.

 

Schema 3.png

 

I want to duplicate Job_Batch_A into an independent copy, Job_Batch_B, so that I can do whatever I want in Job_Batch_B without impacting Job_Batch_A, and vice versa.

 

 

 

Step 1

 

To duplicate a job, we start by duplicating Job_Batch_A: select the job to copy in the local object library, right-click it and choose "Replicate". Name this new job "Job_Batch_B".

 

2015-08-17_14h06_16.png

 

Then import the duplicated batch job into the project area of your choice.

 

 

 

Step 2 :

Delete all of Job_Batch_B's reusable objects (workflows) so that it becomes completely independent. In this case, just delete Workflow_A. Non-reusable objects are copied, so there is no need to remove them.

Note: If some reusable objects need to keep their dependence on the original object, and you do not mind, you can keep them, but beware of the consequences!

 

 

 

Step 3 :

Once Job_Batch_B is created, create a new workflow; we will name it Workflow_B:

 

2015-08-17_14h11_46.png

 

 

Once Workflow_B is created, we have reached the lowest level of reusable objects and we can place our copies of the dataflows.

 

 

 

Step 4 :

We will duplicate the dataflows that we have in Workflow_A. Again, we do this from the "Local Object Library".

 

2015-08-17_14h10_44.png

 

 

Rename them.

 

2015-08-17_14h12_57.png

 

 

 

Step 5 :

Place the duplicated dataflows into the workflow we created. To place them, click and drag them from the "Local Object Library" into the workflow.

We now have two identical but independent batch jobs.

 

2015-08-17_14h13_06.png

 

 

 

 

References :

https://scn.sap.com/thread/3762580

https://help.sap.com/businessobject/product_guides/boexir32SP2/en/xi321_ds_designer_en.pdf

Add attachment to your mail in BODS- A step by step process


Hello Techies,

 

I had a requirement to send an e-mail with an attachment in BODS.

I read through some of the articles on SCN, but none of them provided a detailed way of achieving it.


I found the article (Add an attachment to BODS Job Notification email using VB Script) somewhat interesting, but it was not working for me.

 

So I did a little research and came up with a solution that can be implemented in BODS.

 

Solution:-

We'll do it using a VB script and then call that script from our job.

 

Step 1: Use the code below to make a VB script file.

Open Notepad, write the code below (making the necessary changes to the highlighted text) and then save it as email.vbs.


Option Explicit
Dim MyEmail

Set MyEmail = CreateObject("CDO.Message")

MyEmail.Subject = "Subject Line"
MyEmail.From = "no-reply@yourcompany.com"
MyEmail.To = "helpdesk@yourcompany.com"
MyEmail.TextBody = "This is the message body."
MyEmail.AddAttachment "attachment file path"

(Note: Attachment file path - this has to be a shared directory or location accessible by the DS job server. A common mistake is to include an equals '=' sign after AddAttachment, which results in an error.)

MyEmail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2

'SMTP Server
MyEmail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "smtp relay server name"

'SMTP Port
MyEmail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25

MyEmail.Configuration.Fields.Update
MyEmail.Send

Set MyEmail = Nothing

 

 

Step 2: In the job, place a script object (for example Script_Email) that calls the email.vbs file:

exec('cscript','filepath\email.vbs', 8);

Here filepath is the directory where email.vbs is located.
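If you prefer not to hard-code the attachment path in the .vbs file, the path can instead be passed to the script as a command-line argument from the exec() call. The sketch below is a hypothetical variant of email.vbs (the addresses, server name, file name and paths are placeholders you would replace); it reads the attachment path with WScript.Arguments, so the same script can be reused by different jobs.

' email_attach.vbs - hypothetical variant of email.vbs that takes the attachment
' path as a command-line argument instead of hard-coding it.
' Example call from the Data Services script (paths are placeholders):
'   exec('cscript', 'filepath\email_attach.vbs \\server\share\MyFile.xls', 8);
Option Explicit
Dim MyEmail, strAttachment

' Expect exactly one argument: the full path of the file to attach
If WScript.Arguments.Count < 1 Then
    WScript.Echo "Usage: cscript email_attach.vbs <attachment file path>"
    WScript.Quit 1
End If
strAttachment = WScript.Arguments(0)

Set MyEmail = CreateObject("CDO.Message")
MyEmail.Subject = "Subject Line"
MyEmail.From = "no-reply@yourcompany.com"
MyEmail.To = "helpdesk@yourcompany.com"
MyEmail.TextBody = "This is the message body."
MyEmail.AddAttachment strAttachment

' Send through an SMTP relay server on port 25 (adjust to your environment)
MyEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
MyEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "smtp relay server name"
MyEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
MyEmail.Configuration.Fields.Update

MyEmail.Send
Set MyEmail = Nothing

Note that a path containing spaces would need to be quoted on the command line, so keep the shared folder path simple if possible.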


Step by step data loading from BODS to BW target



Configurations at BW system:

 

1) Log on to the SAP BW system.
2) Enter transaction code 'SALE' to create a new logical system:

1.jpg

3) To create a logical system, choose Define Logical System.

  • Enter a name for the logical system that you want to create.
  • Enter a description of the logical system.
  • Save your entries.

4) Go to transaction RSA1 to create the RFC connection.

5) Select Source Systems in the Modeling navigation pane on the left.

6) Navigate to BO DataServices, right-click and select Create.

2.jpg

7) Enter Logical System Name and Source System Name as shown above and hit Continue.

3.jpg

8) Data Services will start an RFC server program and indicate to SAP BI that it is ready to receive RFC calls. To identify itself as the RFC server representing this SAP BI source system, a keyword is exchanged; in the screenshot above it is "BODREP". This is the Registered Server Program ID under which the Data Services RFC server will register itself with SAP. Therefore, provide the same Program ID that you want to use for the RFC server on the Data Services side. All other settings for the source system can remain at their defaults.
To complete the definition of the source system, save it.

4.jpg

NOTE: The same Program ID has to be used when creating the RFC connection in the Data Services Management Console.

BO Data Services - Configure an RFC connection

 

  1. Log on to the SAP Data Services Management Console.

5.jpg

2. Expand the new "SAP Connections" node and open the "RFC Server Interface" item. In the Configuration tab, add a new RFC server so that it can register itself inside the SAP system with the given Program ID.

6.jpg 

3. Start the RFC server from the "RFC Server Interface Status" tab:

7.jpg

4. Go back to BW and check the connection:

8.jpg

  It will show a message like the one below:

9.jpg

 

Creating BW source:

  1. Double-click the BODS connection:

10.jpg

   2. Right-click the header and create a new application component (here it is ZZ_EMPDS):

11.jpg

12.jpg

   3. Right-click the application component and create a new DataSource:

13.jpg

   4. Fill in the information for the DataSource as shown below:

         General Info. Tab

14.jpg

   Extraction Tab

15.jpg

   Fields Tab: Here we’ll define the structure of the BW target and save it.

16.jpg

   5. BW will now automatically create a new InfoPackage, as shown below:

17.jpg

 

 

Creating BODS Datastore and Job:

1. Right-click and create a new datastore for the BW target, filling in the required BW system details as shown below:

18.jpg

2. Right-click Transfer Structure and import the DataSource (here it is the transaction DataSource 'ZBODSTGT'):

19.jpg

20.jpg

3. Right-click File Format and create a new file format as shown below:

21.jpg

4. Create a BODS job where the source is the flat file and the target is the BW DataSource (transfer structure), as shown below:

22.jpg

The query mappings are as shown below:

23.jpg 

 

Execute the job:

The BODS job that loads data into BW can be executed from either system: the BODS Designer or the BW system.

Before executing the job, the following configuration has to be done in the BW InfoPackage:

 

Go to BW, double-click the InfoPackage for the respective DataSource (ZBODSTGT), fill in the "3rd party selection" details as below, and save it.

Repository    : BODS repository name

JobServer     : BODS running jobserver name

JobName     : BODS job name

24.jpg

 

 

Job execution from the BW system: right-click the InfoPackage and execute it. This triggers the BODS job, which loads the data into the BW DataSource.

25.jpg

 

OR

Double-click the InfoPackage, go to the 'Schedule' tab and click Start:

26.jpg

Job execution from the BODS Designer: go to the BODS Designer, right-click the BODS job and execute it.

27.jpg

28.jpg

 

 

Verifying data in BW:

  1. Go to the DataSource, right-click and choose Manage:

29.jpg

When the job is executed, it generates a request for data extraction from BODS:

30.jpg

2. Select the latest request and click PSA Maintenance, which shows the data in the target DataSource.

 

After the data is loaded into the BW DataSource, it can be mapped and loaded to any BW target, such as a DSO or a cube, using a process chain configured in the "Schedule" tab of the InfoPackage:

Go to the Schedule options and enter the process chain name under 'After Event'.


Add an attachment to BODS Job Notification email using VB Script


Hi All,

 

In some ETL projects there is a need to send the processed flat file (CSV, XLS, TXT) to the customer by e-mail. As far as I know, there is no option in the smtp_to() function to add an attachment to the notification email.

 

We can achieve this by calling a VB script from a script object using the exec() function.

 

The steps below describe how to achieve this:

 

 

Steps: Add an attachment to BODS Job Notification email using VB Script

 

Detailed Description:

In order to ease the end user's effort, it is often required that the processed flat file (CSV, XLS, etc.) is sent to the user for validation. In BODS we cannot attach the report using the smtp_to() function.

 

Below we will see an example of such activity.

 

Current Scenario:

 

There is no built-in functionality in BODS to attach a file and send it to a user. It can be implemented in BODS by calling a VB script through the exec() function.

 

Solution: after the job completes, place a script that calls a VB script (.vbs) file to send the email notification. The .vbs file must be saved in the shared 'Processed' folder.

Declare the global variable below in the job:

$G_PROCESSED_LOCATION = '\\XYZ_Location\Processed';

 

The email.vbs file contains the following:

strSMTPFrom = "User@abc.com"
strSMTPTo = "User@abc.com"
strSMTPRelay = "smtp.abc.com"
strTextBody = "JOB_NAME completed successfully in UAT. Attached is the file load status."
strSubject = "JOB_NAME completed in UAT"
strAttachment = "\\XYZ_Location\Processed\MyFile.xls"

Set oMessage = CreateObject("CDO.Message")

oMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
oMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = strSMTPRelay
oMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
oMessage.Configuration.Fields.Update

oMessage.Subject = strSubject
oMessage.From = strSMTPFrom
oMessage.To = strSMTPTo
oMessage.TextBody = strTextBody
oMessage.AddAttachment strAttachment
oMessage.Send

 

Here is the Data Services script that calls it:

exec('cscript','[$G_PROCESSED_LOCATION]\email.vbs', 8);
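As a small extension of the solution above (not part of the original script), the .vbs file can return a non-zero exit code when the send fails, so the calling job can detect the problem. The sketch below wraps the same logic in basic VBScript error handling; the addresses, server name and file path are the illustrative values already used in this article, and the value returned by the exec() call can then be inspected in the Data Services script (see the exec() documentation for the exact meaning of the flag argument).

' email.vbs with basic error handling (illustrative sketch)
Option Explicit
On Error Resume Next

Dim oMessage
Set oMessage = CreateObject("CDO.Message")

oMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
oMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "smtp.abc.com"
oMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
oMessage.Configuration.Fields.Update

oMessage.Subject = "JOB_NAME completed in UAT"
oMessage.From = "User@abc.com"
oMessage.To = "User@abc.com"
oMessage.TextBody = "JOB_NAME completed successfully in UAT. Attached is the file load status."
oMessage.AddAttachment "\\XYZ_Location\Processed\MyFile.xls"
oMessage.Send

If Err.Number <> 0 Then
    ' Report the failure and exit with a non-zero return code
    WScript.Echo "Send failed: " & Err.Description
    WScript.Quit 1
End If
WScript.Quit 0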

 

 

Regards

Arun Sasi


The EIM Bulletin


Purpose

 

The Enterprise Information Management Bulletin (this page) is a timely and regularly-updated information source providing links to hot issues, new documentation, and upcoming events of interest to users and administrators of SAP Data Quality Management (DQM), SAP Data Services (DS), and SAP Information Steward (IS).

 

To subscribe to The EIM Bulletin, click the "Follow" link you see to the right.

 

Hot Topics

 

Best Practices for upgrading older Data Integrator or Data Services repositories to Data Services 4.2

  • Finally upgrading your old version of Data Integrator or Data Services? Please reference the guide above.

 

 

Latest Release Notes

(updated 2015-09-10)

  • 2129507 - Release Notes for DQM SDK 4.2 Support Pack 4 Patch 1 (14.2.4.772)
  • 2192027 - Release Notes for SAP Data Services 4.2 Support Pack 4 Patch 3 (14.2.4.873)
  • 2195658 - Release Notes for SAP Data Services 4.2 Support Pack 5 Patch 1 (14.2.5.894)
  • 2192015 - Release Notes for SAP Information Steward 4.2 Support Pack 4 Patch 3 (14.2.4.836)
  • 2195665 - Release Notes for SAP Information Steward 4.2 Support Pack 5 Patch 1 (14.2.5.851)

 

New Product Support Features
(Coming Soon)

 

 

Selected New KB Articles and SAP Notes

(updated 2015-08-28)

  • 2209609 - New built-functions are missing after repository upgrade - Data Services 4.2 SP5
  • 2191581 - Error: no crontab for userid
  • 2194903 - Getting error "CMS cluster membership is incorrect" from Data Services Designer
  • 2206478 - How to change location of pCache - Information Steward 4.x
  • 2198166 - Designer use of views as target only possible now from datastore library
  • 2194994 - Unable to start Data Services job server - SAP Data Services 4.x

 

Your Product Ideas Realised!

(new section 2015-06-25)

 

Enhancements for EIM products suggested via the SAP Idea Place, where you can vote for your favorite enhancement requests, or enter new ones.

 

Events

(2015-09-24)


What are we planning here in EIM? Please check out the following opportunities:

 

 

New Resources

(To be determined)


Didn't find what you were looking for? Please see:


Note: To stay up-to-date on EIM Product information, subscribe to this living document by clicking "Follow", which you can see in the upper right-hand corner of this page.

Hierarchy visualization of objects


Ever been in a situation where you had to list out the child workflows and dataflows of a job? Maybe for documentation, or maybe for checking object usage. The real challenge is when you have to represent it as an organizational chart.

 

Here is an example:


For a better view, click on the image; in the preview window, right-click and save it to your desktop.

data1.png

How do we do it?

 

Of course you can drill down into every object in the Designer and draw the chart in Microsoft Visio, MS Word, etc. But how about generating the hierarchy chart from repository metadata rather than drawing it by hand?

 


It's just three steps away.

 

1. Start by populating the AL_PARENT_CHILD table, either from the Designer or from the command line.

 

          a. From the Designer:

Untitled2.png

 

 

          b. From command line using al_engine.exe

          command: "%LINK_DIR%\bin\al_engine.exe" -NMicrosoft_SQL_Server -StestSQLhost -Udb_user -Pdb_pass -Qtestdb -ep

Untitled2.png

 

2. Log in to the repository database and execute the query below:

 

WITH CTE AS (
      SELECT [PARENT_OBJ]
            ,[PARENT_OBJ_TYPE]
            ,[DESCEN_OBJ]
            ,[DESCEN_OBJ_TYPE]
      FROM [AL_PARENT_CHILD] PC
      WHERE 'JOB_CORD_BW_TD_OHS_POPULATE_SDL' = PARENT_OBJ
      UNION ALL
      SELECT PC.[PARENT_OBJ]
            ,PC.[PARENT_OBJ_TYPE]
            ,PC.[DESCEN_OBJ]
            ,PC.[DESCEN_OBJ_TYPE]
      FROM CTE INNER JOIN [AL_PARENT_CHILD] PC ON CTE.DESCEN_OBJ = PC.PARENT_OBJ
)
SELECT 2 AS ID, [PARENT_OBJ] + '->' + [DESCEN_OBJ] AS CODE FROM CTE
WHERE PARENT_OBJ_TYPE IN ('Job','WorkFlow','DataFlow')
  AND [DESCEN_OBJ_TYPE] IN ('Job','WorkFlow','DataFlow')
UNION
SELECT 1 AS ID, 'digraph a { node [shape=rectangle]'
UNION
SELECT 3 AS ID, '}'

 

 

Untitled1.png

 

From the output, copy only the contents of the second column, without the header.

 

My data looks like this:

digraph a { node [shape=rectangle]
JOB_CORD_BW_TD_OHS_POPULATE_SDL->WF_Job_Workflow_SSP_Container_CORD_BW_TD_OHS__SDL
WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL->DF_BW_Z_BODS_01_STG_TO_BW_Z_BODS_01_SDA
WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL->DF_S_BW_Z_BODS_01_to_SDA_BW_Z_BODS_01_SDL_1
WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL->DF_S_BW_Z_BODS_01_to_SDA_BW_Z_BODS_01_SDL_2
WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL->DF_BW_Z_BODS_02_STG_TO_BW_Z_BODS_02_SDA
WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL->DF_S_BW_Z_BODS_02_to_SDA_BW_Z_BODS_02_SDL_01
WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL->DF_S_BW_Z_BODS_02_to_SDA_BW_Z_BODS_02_SDL_02
WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_01_SDL->WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL
WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_02_SDL->WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL
WF_Job_Workflow_SSP_Container_CORD_BW_TD_OHS__SDL->WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL
WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL->WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_01_SDL
WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL->WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_02_SDL
WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL->WF_Master_Workflow_Staging_CORD_BW_TD_OHS_SDL
WF_Master_Workflow_Staging_CORD_BW_TD_OHS_SDL->WF_Staging_Workflow_Container_CORD_BW_TD_OHS_SDL
WF_Staging_Workflow_Container_CORD_BW_TD_OHS_SDL->WF_Staging_Z_BODS_01_to_S_BW_Z_BODS_01_SDL
WF_Staging_Workflow_Container_CORD_BW_TD_OHS_SDL->WF_Staging_Z_BODS_02_to_S_BW_Z_BODS_02_SDL
WF_Staging_Z_BODS_01_to_S_BW_Z_BODS_01_SDL->DF_OH_Src_Z_BODS_01_To_Stg_S_BW_Z_BODS_01_Map_SDL
WF_Staging_Z_BODS_02_to_S_BW_Z_BODS_02_SDL->DF_OH_Src_Z_BODS_02_To_Stg_S_BW_Z_BODS_02_Map_SDL
}

 

 

3. Open the Webgraphviz web page.

  1. Clear the existing contents of the text box.
  2. Paste the code you copied.
  3. Click the generate graph button and scroll down to see the generated graph.

 


That's all, the org chart of your job is ready!

 


Note:

  1. The SQL query
    1. The given query works only on MS SQL Server.
    2. Modify the WHERE clause in the query to match your job (e.g. WHERE 'YOUR_JOB_NAME' = PARENT_OBJ).
    3. You can also use "IN" instead of "=" to list multiple job names (e.g. WHERE PARENT_OBJ IN ('YOUR_JOB_1','YOUR_JOB_2')).
    4. The query is restricted to jobs, workflows and dataflows; you can modify the conditions to include other object types too.
  2. We are generating only the parent-child hierarchy, not the execution flow, i.e. two child nodes at the same level may not execute in parallel.
  3. Since it is not the execution flow, conditional workflows will not appear in the chart.
  4. Webgraphviz is an online alternative to the Graphviz tool, which supports command-line usage when installed locally.

 

 

Appreciate your comments/feedback. Cheers

TUTORIAL: How to duplicate a Job batch in SAP Data Services Designer


Introduction

This tutorial will guide you through duplicating a batch job and its components. We will also explain how the duplication mechanism works.

 

 

 

Foreword

 

First of all, there is something we must know before duplicating an object. In Data Services Designer (DSD), there are two kinds of objects where duplication is concerned: reusable objects and non-reusable objects (or single-use objects).

  • Non-reusable objects are duplicated when we copy a job and its components. This category includes scripts, conditions, loops, global variables…
  • Reusable objects are not duplicated. This category includes workflows and dataflows. Reusable objects end up being referenced, not copied.

 

For more information, see the SAP Data Services Designer documentation, page 54.

 

 

 

The duplication logic illustrated


2015-08-17_14h06_16.png



To show how DSD treats these kinds of objects, here is an example:

I made a copy of my batch job "Job_Batch_A" from my repository and named the copy "Job_Batch_B".

Here is what happens:

 

 

 

Schema.png

Diagram of duplication of a Job Batch

 

 

Job_Batch_B itself is duplicated, but the inherited reusable objects are not copied: what gets created are references to the original objects, not duplicates.

 

To be more specific, if I make a change in Job_Batch_B to Workflow_A or to one of its dataflows, the change is also reflected in Job_Batch_A.

Only the reusable objects inherited by the duplicated object are references to the originals! The copied object itself (here, the batch job) really has been duplicated.

 

So be careful when you make copies of jobs, workflows and dataflows.


To further explain:

 

Job_Batch_A ≠ Job_Batch_B

Job_Batch_A Reusable objects = Job_Batch_B Reusable objects.

 

The logic may seem strange and complex, but it is simpler than it looks. To better illustrate this:

- If I remove Workflow_A from Job_Batch_B, it is not also removed from Job_Batch_A (and vice versa).

- But if I remove a dataflow from Workflow_A while working in Job_Batch_B, it is also removed from Job_Batch_A, because the two jobs point to the same Workflow_A object (and vice versa).

 

 

The same logic applies to added components:

- If I add a workflow to Job_Batch_B, it is not added to Job_Batch_A (and vice versa).

- If I add a dataflow to Workflow_A while working in Job_Batch_B, it is also added to Job_Batch_A (and vice versa).

 

 

I used a batch job with an inherited workflow and dataflows as the example, but this copy/reference logic for inherited objects is exactly the same one level down, between a workflow and its dataflows.

 

 

Job_Batch_example.png

Hierarchy of reusable objects


The main thing to remember about this logic:

Duplicating a reusable object creates a new object, but the inherited objects are not duplicated: they are references to the original objects!

 

 

This inheritance logic for reusable objects stops at the dataflows, because the objects contained in a dataflow are non-reusable (they cannot be reused and are therefore copied). So if I make a copy of a dataflow, the changes I make in the duplicated dataflow are not reflected anywhere else, because I am at the lowest level of reusable objects. So now I can answer the question "If I make a change in a copied dataflow, will it affect other objects?". Please note, however, that not all objects contained in our batch jobs and workflows are reusable objects (e.g. scripts).

 

 

That is the logic behind duplication in DSD. It can seem surprising at first glance because it does not follow the traditional "copy / paste" behaviour we are used to, but with hindsight it brings a lot of benefit, notably the reuse of elements and the ability to propagate a change everywhere at once.

With that logic understood, we can answer the following question:

 

"How to duplicate a job and make it independent ?"

 

Let's return to the job illustrated above and walk through the procedure.

 

Schema 3.png

 

I want to duplicate Job_Batch_A into an independent copy, Job_Batch_B, so that I can do whatever I want in Job_Batch_B without impacting Job_Batch_A, and vice versa.

 

 

 

Step 1

 

To duplicate the job, start with Job_Batch_A: select the job to copy in the local object library, right-click it and choose "Replicate". Name this new job "Job_Batch_B".

 

2015-08-17_14h06_16.png

 

Then add the duplicated batch job to the project area of your choice.

 

 

 

Step 2:

Delete all the reusable objects (workflows) inside Job_Batch_B so that it becomes completely independent. In this case that simply means deleting Workflow_A. Non-reusable objects are genuinely copied, so there is no need to remove them.

Note: if you actually want some reusable objects to keep their dependence on the original job, and you do not mind, you can keep them, but beware of the consequences!

 

 

 

Step 3:

With Job_Batch_B created, create the new workflow inside it; we will name it Workflow_B:

 

2015-08-17_14h11_46.png

 

 

Once Workflow_B is created we have reached the lowest level of reusable objects, so we can place our copies of the dataflows.

 

 

 

Step 4:

Now duplicate the dataflows contained in Workflow_A. As before, this is done from the local object library.

 

2015-08-17_14h10_44.png

 

 

Rename them.

 

2015-08-17_14h12_57.png

 

 

 

Step 5:

Place the duplicated dataflows in the newly created workflow by clicking and dragging them from the local object library into the workflow.

We now have two identical but completely separate batch jobs.

 

2015-08-17_14h13_06.png

 

 

 

 

References:

https://scn.sap.com/thread/3762580

https://help.sap.com/businessobject/product_guides/boexir32SP2/en/xi321_ds_designer_en.pdf
