Channel: SCN : Document List - Data Services and Data Quality

Slowly Changing Dimension - Type 1


Slowly Changing Dimension (SCD) Type 1 is used when you do not need to maintain historical data. The two cases below explain the implementation of SCD Type 1 using the ‘Auto Correct Load’ option available on the target table.

Case 1: New record from source

The new records from source will be passed/written to the target table as ‘Insert’ records.

 

Case 2: Existing record with update in non-key columns

The updated values from the source will overwrite the existing values in the target table. These records will be passed as ‘Update’ records.

 

How to implement?

  1. Add your source table to the data flow. If any transformations are needed, use a Query transform or any other transform you like.
  2. Map the output schema of the last transform to the target table.
  3. Open the target table and navigate to the ‘Options’ tab.
  4. Under the ‘Advanced’ section, in ‘Update Control’, set the ‘Auto Correct Load’ option to ‘Yes’. This sets the ‘Allow merge or upsert’ option to ‘Yes’ by default.
  5. Save your mapping and run the job.

 

Screenshots of mapping:

 

Data in the table before implementation of SCD - Type 1:

Before.JPG

 

Mapping of the Dataflow:

dfdesign.JPG

Query Transform Mapping:

querymapping.JPG

Target Table Options:

targetoptions.JPG

Data in the table after implementation of SCD - Type 1:

after.JPG

 

Note:

Company Name changed from Tgw to Microsystems Inc for Customer ID 12345

Phone Number changed from 1-837-853-9045 to 1-837-853-9055 for Customer ID 12345

New entry for CustomerId 12370

 

How it Works?

  The data flow generates a MERGE statement when the ‘Auto Correct Load’ option is set to Yes. If you manually set the second option, ‘Allow merge or upsert’, to No, the data flow generates Transact-SQL code instead.


Thanks

Santhosh


SQL Transformation


How to use SQL Transform in SAP BODS?

 

SQL Transformation:


This transform is used to run a SQL operation that retrieves data from one or multiple tables.

When to use: Sometimes it is not possible, or too complex, to implement certain business logic with the standard ETL transforms; in such cases the SQL Transform lets you push the logic into a SQL statement.

It is very easy to build; all we need is the SQL SELECT statement.

Create a new PROJECT and a JOB as shown in the screen below.

1.JPG


Bring the SQL Transform into Data flow.

Paste the below simple SQL code inside the transform.


select e.empno, e.ename, e.sal, d.dname
from emp e, dept d
where d.loc in ('NEW YORK', 'DALLAS')
  and e.deptno = d.deptno
  and e.empno in (select e.empno
                  from emp e
                  where e.job in ('MANAGER', 'ANALYST')
                    and e.comm is null)
order by d.loc asc;


Select the Data Store Name and Database type


Keep the other options as shown in screen

2.JPG

Click on Update schema

 

Change the data type of the column names.

 

 

In this case, EMPNO shows the data type as decimal, but we can change it to int.

 

Use the Query transform and Template table as the destination.

3.JPG

 

SAVE the work.

 

Run the JOB.

 

Target table's data after the JOB run.

 

4.JPG

Date Generation Transformation for TIME Dimension table in SAP BODS


Date Generation Transformation:

 

This transform helps us generate a series of dates, incremented according to our requirement.

 

When to use: To create time dimension fields and tables.


Create a new PROJECT and a JOB as shown in below screen.


Bring the Date_Generation Transform into Data flow.


Set the options as shown below.

1.JPG

 

Start Date : 2000.01.01

 

End Date : 2100.12.31

 

Increment: Daily

 

 

 

Bring the Query_Transform and map the columns as shown.

2.Date_Functions.jpg

 

Map the columns as shown below:

 

DATE: Date_Generation.DI_GENERATED_DATE

WEEK: week_in_year(Date_Generation.DI_GENERATED_DATE)

MONTH: month(Date_Generation.DI_GENERATED_DATE)

QUARTER: quarter(Date_Generation.DI_GENERATED_DATE)

YEAR: year(Date_Generation.DI_GENERATED_DATE)


 

3.JPG

How to use the is_valid_date function?


is_valid_date is a commonly used function in SAP BODS.

 

It will validate the date field with the given format as shown below:

 

is_valid_date("DATE", 'YYYYMMDD') <>0

 

DATE -> the date field object being validated

 

Create a data flow as mentioned below:

 

Source table or file > Query_Transform > Validation_Transform > Target Table

 

valid_date1.jpg

 

 

Validation_Transform:


Use this function in this transform - is_valid_date("DATE", 'YYYYMMDD') <>0
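If the same check is needed inside a Query transform instead, for example to convert the validated string into a real date, a minimal sketch of a mapping could look like this (same DATE field as above; the target column is assumed to be of type date):

ifthenelse(is_valid_date("DATE", 'YYYYMMDD') <> 0, to_date("DATE", 'YYYYMMDD'), NULL)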

 

Untitled.png

 

 

Thank You.

 

Best Regards,

 

Arjun

How to create the destination file name dynamically with time stamp?


We can create destination file names (CSV, TXT, XML) with dynamic date stamps in SAP BODS.

 

Create a job with Script > Data flow

 

Capture1.JPG

 

 

Script Code:

# Current system date
$G_CurrentDate = sysdate();
print($G_CurrentDate);

# Build the file name with a date suffix taken from the first 10 characters of sysdate()
$G_Filename = 'Dynamic_File_Name_' || substr(sysdate(), 0, 10) || '.csv';
print($G_Filename);

 

Declare the variable $G_CurrentDate as date and $G_Filename as varchar.
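If you prefer an explicit date format instead of relying on substr() over the default date-to-string conversion, a variant of the file name line could be used (same variable name; the format string is an assumption):

$G_Filename = 'Dynamic_File_Name_' || to_char(sysdate(), 'YYYYMMDD') || '.csv';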

 

Place the variable $G_Filename in the File name field of the File Format Editor.

 

Create the Global Variables:

Capture2.JPG

 

 

Create the data flow with the following:

 

 

Capture5.JPG

 

Create a file format using the File Format Editor.

 

Choose the directories for destination file path.

 

Capture4.JPG

First and Last Dates of the Current and Previous Months


FirstDay_PrevMonth: add_months(last_date(sysdate()), -2) + 1

LastDay_PrevMonth: last_date( add_months(sysdate(),-1))

1.png

First day of the current month: add_months(last_date(sysdate()), -1) + 1

Last Day of the current month : last_date(sysdate())

2.jpg
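A minimal script sketch that puts the four expressions above into global variables (the variable names are assumptions, not part of the original tip):

$G_FirstDay_PrevMonth = add_months(last_date(sysdate()), -2) + 1;
$G_LastDay_PrevMonth  = last_date(add_months(sysdate(), -1));
$G_FirstDay_CurrMonth = add_months(last_date(sysdate()), -1) + 1;
$G_LastDay_CurrMonth  = last_date(sysdate());
print(to_char($G_FirstDay_PrevMonth, 'YYYY.MM.DD') || ' to ' || to_char($G_LastDay_PrevMonth, 'YYYY.MM.DD'));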

 



Case Transform


The Case transform is used to route a row to different outputs depending on conditions. It specifies multiple paths in a single transform (different rows are processed in different ways).

The Case transform simplifies branch logic in data flows by consolidating case or decision making logic in one transform. Paths are defined in an expression table.

 

 

Here I have taken an example EMPLOYEE table.

 

 

Employee locations span 3 countries: IN, SG, and US.

 

 

Case1.png

 

Requirement:

 

 

To load the LOCATION data into 4 different tables.

 

 

If LOCATION = IN then load the data into IN_EMP

If LOCATION = SG then load the data into SG_EMP

If LOCATION = US then load the data into US_EMP

If LOCATION is other than IN, SG, and US then load the data into OTHER_EMP

 

 

Bring the Case Transform after the Query Transform as shown below:

 

Case2.png

 

Inside the Case Transform use the code as shown in below screen.
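The exact expressions are in the screenshot, but as a rough sketch the Case editor entries could look like this (the label names and the input schema name 'Query' are assumptions):

CASE_IN :  Query.LOCATION = 'IN'
CASE_SG :  Query.LOCATION = 'SG'
CASE_US :  Query.LOCATION = 'US'

with the default output enabled so that all remaining rows are routed to OTHER_EMP.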

 

Case3.png

 

Run the Job to load the data as per our Case transform logic and check.

 

Case4.png

BODS Recovery and Delta Logic 'Template Job' Creation Guidelines


1      INTRODUCTION

 

 

 

1.1   Purpose

 

The purpose of this document is to define the BODS Job Recovery template for Business Objects Data Services (BODS) projects. This Recovery template will enable the development team to implement module-wise recovery logic in case of any failure of the job.

 

The Recovery template will enable the BODS job server to run the BODS job from the point where an error was detected in the previous run. As the recovery logic is implemented using BODS job design techniques, this standard job can be run multiple times. In every run, the steps that failed or were not run in the previous run will be initiated. This default behaviour can also be overridden (by setting a global variable) in case the developer wants to run all the steps of the job again, ignoring the previous run status information.

 

In addition, this standard template will enable the design and development team to follow specific guidelines for BODS projects.

The job template is designed in a way to:

 

o    Log the job run parameters in a database for auditing purposes.

o    Log the status and point of failure for failed jobs, so that a re-run can resume from the step of failure, thus avoiding redundant reprocessing of already processed data.

o    Allow Initial and Delta runs of the jobs wherever required.

 

 

2     DESIGN COMPONENTS

 

 

1.png

 

2.1   Try - Catch Blocks:

 

The Try-Catch blocks capture any exceptions raised within their scope.

 

Try-Catch Blocks are implemented at two levels

 

i)              Job level - All the data processing components are placed within the scope of a main Try-Catch block.

ii)             Workflow level - Each workflow has its own Try-Catch block. This ensures that an error in one workflow is logged at that point itself and does not stop the execution of other workflows placed in parallel with it.

2.2   Data Processing Section

 

      At a high level, the data processing area can be split into 3 steps. The Recovery Workflows have to be replicated as many times as the job requires. These steps are:

-         Initialize

-         Recovery Workflows

-         Status Update Workflow

 

2.2.1     Initialize

 

The Initialize step performs the following tasks:

o    Initializing all the variables required in the Job.

o    Determining whether the Job failed in the previous run or not.

o    Fetching the Next Job run id.  

o    Logging the Job details like Job name, Job start time, Status etc. into Job_Control table.

 

2.2.2     Recovery Workflows

 

In this part the recovery logic is implemented. Based on the job requirements (the number of dataflows/workflows required), the recovery workflows can be replicated and named according to their functionality. The recovery workflows can be connected in parallel or in serial, depending on the job functionality.

 

 

 

Each Recovery workflow has the following components:

 

1)     Try Block

2)     Initializing Script

3)     Conditional

4)     Catch Block

 

2.2.3     Status Update Workflow

 

In this part we check whether any of the workflows failed during the job run. If any workflow failed, the job status is updated to Failed.

 

 

 

3     DESIGN PROCESS

 

 

3.1   Initialize Step

 

This section is completely contained in an Initialize workflow and contains an Initialize script that sets up most of the global variables required in the job.

 

Before the start of any SAP DS project, a JOB_CONTROL table needs to be created.

This table contains the job run status information for every run of the job. For every new run of the job, a new entry is inserted with the job details.

 

 

 

 

JOB_CONTROL table fields (No. / Field Name / Field Description / Field Type / Field Length):

1. JOB_RUNID (INTEGER): Unique ID to store the instance of the job run
2. JOB_NAME (VARCHAR(250)): Name of the Job
3. JOB_START_TIME (DATETIME): Job Start Date & Time
4. JOB_END_TIME (DATETIME): Job End Date & Time
5. JOB_STATUS (VARCHAR(2)): Process status of a batch: R – Running, C – Completed, F – Failed

 

 

 

The Global variables that are set in this step are

 

(Parameter Name / Type / Description / Value Assigned)

$G_JOB_NAME (VARCHAR(250)): The name of the job currently executing. Value assigned: job_name()

$G_RUNID (INT): Unique id for the current job run, used to populate JOB_RUNID of the JOB_CONTROL table. In case the previous instance of the job failed, this variable will contain the previous run's JOB_RUNID. Value assigned: via custom function

$G_JOB_STATUS (VARCHAR(1)): The status of the previous job run. Value assigned: via custom function

$G_RECOVERY_FLAG (VARCHAR(1)): Indicates whether the current run is a recovery run, i.e. whether the previous instance of the job failed. Value assigned: via custom function

$G_ERROR_MSG (VARCHAR(1000)): The error message in case the job fails.

$G_LOAD_TYPE (VARCHAR(10)): Determines whether the job is a Delta load or a Full Refresh. The default value is 'DELTA' and can be overridden during the job run. Value: 'DELTA' or 'FULL REFRESH'

$G_PARALLEL_LOAD (VARCHAR(1)): Determines whether the various units in the job run in Independent or Dependent mode. If $G_PARALLEL_LOAD = 'Y', a failure in any workflow will not stop the job execution at that point, i.e. workflows that come after the failed workflow will still be initiated. The default value is 'Y' and can be overridden at job run. Value: 'Y' or 'N'

$G_RECORDS_COUNT (INT): The count of records processed into the target in the current run.

$G_CDCLOW (VARCHAR(30)): The min value used to fetch records from the source during a delta load.

$G_CDCHIGH (VARCHAR(30)): The max value used to fetch records from the source during a delta load.

$G_DEFAULT_CDCLOW (VARCHAR(30)): Specifies the value of $G_CDCLOW during the first run. In a 'DELTA' load, in the first instance of the job run $G_CDCLOW equals $G_DEFAULT_CDCLOW; from the next run onwards $G_CDCLOW equals the $G_CDCHIGH value of the previous successful run. For a 'FULL REFRESH' load, $G_CDCLOW equals $G_DEFAULT_CDCLOW for all runs.

 

Note: All these variable values can be overridden during job execution or in the workflow-level scripts.

 

 

 

Custom Functions Used:

 

Custom_Function: Fetches a new run-id for a new instance of the job run. In case the previous instance of the job failed, this function fetches the previous run-id. It also logs an entry with the run-id into the JOB_CONTROL table.
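To make this concrete, here is a minimal sketch of what such an Initialize script could contain. The function names FN_GET_JOB_RUNID and FN_GET_PREV_JOB_STATUS are placeholders for the custom function(s) described above, not the template's actual names:

# Initialize script (sketch only)
$G_JOB_NAME      = job_name();
$G_RUNID         = FN_GET_JOB_RUNID($G_JOB_NAME);        # assumed custom function: new run id, or the failed run's id
$G_JOB_STATUS    = FN_GET_PREV_JOB_STATUS($G_JOB_NAME);  # assumed custom function: status of the previous run
$G_RECOVERY_FLAG = ifthenelse($G_JOB_STATUS = 'F', 'Y', 'N');
$G_LOAD_TYPE     = nvl($G_LOAD_TYPE, 'DELTA');           # keep the value passed at execution, default to DELTA
$G_PARALLEL_LOAD = nvl($G_PARALLEL_LOAD, 'Y');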

 

 

 

  3.2   Recovery Workflows

 

In these workflows the recovery logic is designed. Depending on the job's complexity and the number of target tables loaded, these workflows need to be replicated as many times as required. The replicated recovery workflows can then be connected either in parallel or in serial.

 

The salient features of these workflows are as follows:

 

o    Log each workflow running status in a database.

 

o    The workflow name is captured automatically during the job run. Even if there is an addition /deletion/ renaming of workflows, the modified details will be automatically captured in the control tables during the next job run

 

 

o    Enables recovery from the point of failure. For every run of the job, the workflow(s) that failed or were not executed in the previous run will be executed from the point of failure under the same conditions as the previous job run. The workflow(s) that were executed successfully in the previous run will not be initiated again (thus avoiding redundant reprocessing of already processed data) until all the workflows of the job have been successfully executed.

 

o    Allow Initial and Delta runs of the workflows wherever required.

 

 

o    The various workflows can run in Independent mode or in Dependent mode. In Independent mode, a failure of a workflow will not stop the job execution at that point, i.e. workflows that come after the failed workflow will still be initiated. In Dependent mode, a failure in a workflow will stop the job execution at that very point.

 

 

 

WF_CONTROL table fields (No. / Field Name / Field Description / Field Type / Field Length):

1. JOB_RUNID (INTEGER): Unique ID to store the instance of the job run
2. JOB_NAME (VARCHAR(250)): Name of the Job
3. WF_NAME (VARCHAR(250)): Name of the Workflow
4. WF_START_TIME (DATETIME): Workflow Start Date & Time
5. WF_END_TIME (DATETIME): Workflow End Date & Time
6. WF_STATUS (VARCHAR(2)): Process status of a batch: R – Running, C – Completed, F – Failed
7. WF_LOADMINDATE (VARCHAR(30)): CDC low value, i.e. the min. date extracted from the source
8. WF_LOADMAXDATE (VARCHAR(30)): CDC high value, i.e. the max. date extracted from the source
9. WF_RECORDS_COUNT (INT): Total records loaded in the target

 

 

 

Custom Functions Used:

 

custom function 2: Logs an entry into the WF_CONTROL table and determines whether the workflow needs to run in the current job run or not.

 

The recovery workflow consists of the below steps:

      i)              Try- Catch Blocks

ii)             Workflow Initialize Script

iii)            Conditional Flow

2.png

 

 

Try- Catch Blocks - These Try-Catch blocks are at workflow level. Any exception in the workflow will be handled in the catch block.

 

 

Workflow Initialize Script - In the script SC_START_WF, all the local variables used in the workflow are initialized. The script calls custom function 2, which logs the entry into the WF_CONTROL table. The flag $LV_RECOV_FLAG is assigned the value Y or N.

The logic for this

 

$LV_RECOV_FLAG -> Y when:

A)    The job failed in the previous instance AND this workflow had failed
B)    The job failed in the previous instance AND this workflow was not executed
C)    The job was successfully executed in the previous run

$LV_RECOV_FLAG -> N when:

A)    The job failed in the previous instance but this workflow had run successfully

 

 

Conditional Flow - In this step the value of the variable $LV_RECOV_FLAG is checked.

$LV_RECOV_FLAG = 'Y': the dataflow/workflow inside the Conditional will be initiated.

$LV_RECOV_FLAG = 'N': the dataflow/workflow inside the Conditional will not be initiated.
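As a rough sketch only (the lookup via sql(), the datastore name DS_STAGE, and the workflow-name variable are assumptions; in the template this work is done by custom function 2), SC_START_WF and the Conditional behave roughly like this:

# SC_START_WF (sketch): decide whether this workflow must run in the current job run
$LV_WF_NAME    = 'WF_LOAD_SALES';   # assumed; the template captures the workflow name automatically
$LV_PREV_STAT  = sql('DS_STAGE', 'SELECT WF_STATUS FROM WF_CONTROL WHERE JOB_RUNID = [$G_RUNID] AND WF_NAME = {$LV_WF_NAME}');
$LV_RECOV_FLAG = ifthenelse($G_RECOVERY_FLAG = 'Y' AND $LV_PREV_STAT = 'C', 'N', 'Y');

# Conditional - If condition:  $LV_RECOV_FLAG = 'Y'
#   Then: the dataflow/workflow inside the Conditional is executed
#   Else: empty (the dataflow/workflow is skipped)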

 

 

 

41.PNG

Custom Functions Used:

 

custom function 3: Logs the success/failure status of the workflow in the WF_CONTROL table.

 

 

3.3   Status Update Workflow

 

A custom function is called to check whether any of the workflows failed during the job run.

 

 

Custom Functions Used:

 

custom function 4: Logs the success/failure status of the job in the JOB_CONTROL table.


SAP Data Services Blueprints


A blueprint is a sample SAP Data Services job that is configured with best practices to solve a specific scenario. Each blueprint is an end-to-end job that includes sample data and may be run in the customer environment with only a few modifications. Some jobs include batch data flows and some include real-time data flows; some jobs include party data and some include product data. Data quality jobs include structured data, and text data processing jobs include unstructured data.

 

Data Quality Management and Text Data Processing 4.2 blueprints and other tools

  • These blueprints are compatible with 4.2 and all 4.2 Support Packs, unless otherwise noted.
  • To configure and set up SAP Data Services blueprints and other objects, download the appropriate User's Guide in the Documentation column below.
  • To view a list of new blueprints and other features in this release, download the What's New document.
  • To view a detailed list of the available blueprints, other objects, and their contents, download the Content Objects Summary - SAP Data Services 4.2 SP2.
  • To contribute your own blueprints, visit the How to Contribute page.

 

Data Quality Management 4.2 regional blueprints:

Blueprint (version 4.2)

Description

Documentation

Data Quality  Management Blueprints 4.2 Brazil

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in Brazil.

DQM_Regional.pdf

Data Quality Management Blueprints 4.2 – China

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in China.

Data Quality Management Blueprints 4.2 France

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in France.

Data Quality Management Blueprints 4.2 Germany

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in Germany.

Data Quality Blueprints 4.2 – Global

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data when the data consists of multiple countries.

Data Quality Management Blueprints 4.2 India

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in India.

Data Quality Management Blueprints 4.2 Japan

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in Japan.

Data Quality Management Blueprints 4.2 Mexico

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in Mexico.

Data Quality Management Blueprints 4.2 – South Korea

Contains a sample Global Address Cleanse transform configuration that contains best practice settings for cleansing address data in South Korea. This transform configuration can only be used with version 4.2 SP2 or later.

Data Quality Management Blueprints 4.2 USA

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in the United States.

 

Data Quality Management Match 4.2 blueprints:

Blueprint

(version 4.2)

Description

Documentation

Data Quality Management Blueprints 4.2 Match

Contains miscellaneous jobs configured to illustrate best practice settings for specific Data Quality Management matching use cases.

DQM_Match.pdf

 

Data Quality Management product 4.2 blueprints:

Blueprint

(version 4.2)

Description

Documentation

Data Quality Management Blueprints 4.2 – Product

Contains sample jobs configured to illustrate how to cleanse product data using a custom cleansing package.

DQM_Product.pdf

 

Text Data Processing language 4.2 blueprints:

Blueprint (version 4.2)

Description

Documentation

Text Data Processing Blueprints 4.2 – English

Contains sample jobs configured to illustrate best practice settings for common Text Data Processing use cases involving unstructured text in the English language.

TDP_Lang.pdf

Text Data Processing Blueprints 4.2 – German

Contains sample jobs configured to illustrate best practice settings for common Text Data Processing use cases involving unstructured text in the German language.

Text Data Processing Blueprints 4.2 – Japanese

Contains sample jobs configured to illustrate best practice settings for common Text Data Processing use cases involving unstructured text in the Japanese language.

Text Data Processing Blueprints 4.2 – Simplified Chinese

Contains sample jobs configured to illustrate best practice settings for common Text Data Processing use cases involving multi-byte content in the Simplified Chinese language. It also showcases enriched support for core extraction pre-defined entity types and sentiment analysis.

 

Text Data Processing Data Quality Management 4.2 blueprints:

Blueprint

(version 4.2)

Description

Documentation

Text Data Processing Blueprints 4.2 – Data Quality Management

Contains sample jobs configured to illustrate the use of Text Data Processing in conjunction with Data Quality Management. It also helps you visualize the extracted concepts and sentiments using an SAP BusinessObjects BI 4.0 Universe and SAP BusinessObjects Web Intelligence reports.

TDP_DQM.pdf

 

Text Data Processing miscellaneous 4.2 blueprints:

Blueprint

(version 4.2)

Description

Documentation

Text Data Processing Blueprints 4.2 – Miscellaneous

Contains sample jobs configured to illustrate the use of Text Data Processing for language identification and how input text is parsed to support custom rule creation.

TDP_Misc.pdf

 

Other 4.2 tools:

Tool

(version 4.2)

Description

Documentation

Data Quality Management Custom Functions 4.2

Contains custom functions that perform additional manipulation of data that is common with the cleansing and matching of party data.

Custom_Functions.pdf

Text Data Processing 4.2 – Entity Extraction Dictionary File Generator

An Excel spreadsheet with a macro that generates a dictionary source file based on the content in the spreadsheet and compiles the source file into a ready-to-use dictionary file for the Text Data Processing Entity Extraction transform.

TDP_Dictionary.pdf

 

Text Data Processing 4.1 SP1 blueprints and other tools

  • To configure and set up SAP BusinessObjects Data Services blueprints and other objects, download the appropriate User's Guide in the Documentation column below.
  • To view a list of new blueprints and other features in this release, download the What's New document.
  • To view a detailed list of the available blueprints, other objects, and their contents, download the Content Objects Summary – version 4.1.1.
  • To contribute your own blueprints, visit the How to Contribute page.

 

Text Data Processing language 4.1 SP1 blueprints:

Blueprint (version 4.1 SP1)

Description

Documentation

Text Data Processing Blueprints 4.1.1 – English

Contains sample jobs configured to illustrate best practice settings for common Text Data Processing use cases involving unstructured text in the English language.

TDP_Lang.pdf

Text Data Processing Blueprints 4.1.1 – German

Contains sample jobs configured to illustrate best practice settings for common Text Data Processing use cases involving unstructured text in the German language.

Text Data Processing Blueprints 4.1.1 – Japanese

Contains sample jobs configured to illustrate best practice settings for common Text Data Processing use cases involving unstructured text in the Japanese language.

 

Text Data Processing Data Quality Management 4.1 SP1 blueprints:

Blueprint

(version 4.1 SP1)

Description

Documentation

Text Data Processing Blueprints 4.1.1 – Data Quality Management

Contains sample jobs configured to illustrate the use of Text Data Processing in conjunction with Data Quality Management. It also helps you visualize the extracted concepts and sentiments using an SAP BusinessObjects BI 4.0 Universe and SAP BusinessObjects Web Intelligence reports.

TDP_DQ.pdf

 

Other 4.1 SP1 tools:

Tool

(version 4.1 SP1)

Description

Documentation

Text Data Processing 4.1.1 – Entity Extraction Dictionary File Generator

An Excel spreadsheet with a macro that generates a dictionary source file based on the content in the spreadsheet and compiles the source file into a ready-to-use dictionary file for the Text Data Processing Entity Extraction transform.

TDP_Dictionary.pdf

 

Data Quality Management 4.1 blueprints and other tools

  • These blueprints are compatible with 4.1 and all 4.1 Support Packages
  • To configure and set up SAP BusinessObjects Data Services blueprints and other objects, download the appropriate User's Guide in the Documentation column below.
  • To view a list of new blueprints and other features in this release, download the What's New document.
  • To view a detailed list of the available blueprints, other objects, and their contents, download the Content Objects Summary – version 4.1.
  • To contribute your own blueprints, visit the How to Contribute page.

 

Data Quality Management blueprints:

Blueprint (version 4.1)

Description

Documentation

Data Quality  Management Blueprints 4.1 Brazil

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in Brazil.

DQM_Regional.pdf

Data Quality Management Blueprints 4.1 – China

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in China.

Data Quality Management Blueprints 4.1 France

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in France.

Data Quality Management Blueprints 4.1 Germany

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in Germany.

Data Quality Blueprints 4.1 – Global

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data when the data consists of multiple countries.

Data Quality Management Blueprints 4.1 India

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in India.

Data Quality Management Blueprints 4.1 Japan

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in Japan.

Data Quality Management Blueprints 4.1 Mexico

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in Mexico.

Data Quality Management Blueprints 4.1 USA

Contains sample jobs configured to illustrate best practice settings for common Data Quality Management use cases involving party data in the United States.

Data Quality Management Blueprints 4.1 Match

Contains miscellaneous jobs configured to illustrate best practice settings for specific Data Quality Management matching use cases.

DQM_Match.pdf

 

 

Other 4.1 tools:

Tool

(version 4.1)

Description

Documentation

Data Quality Management Custom Functions 4.1

Contains custom functions that perform additional manipulation of data that is not part of the functionality of Data Quality Management transforms, but are common functions that assist with the cleansing and matching of party data.

Custom_Functions.pdf

 

Data Quality and Text Data Processing 4.0 blueprints and other tools

  • To configure and set up SAP BusinessObjects Data Services blueprints and other objects, download the appropriate User's Guide in the Documentation column.
  • To view a detailed list of the available blueprints, other objects, and their contents, download the Content Objects Summary – version 4.0.
  • To contribute your own blueprints, visit the How to Contribute page.

 

Data Quality blueprints:

Blueprint (version 4.0)
Description
Documentation
Contains sample jobs configured to illustrate best practice settings for common Data Quality use cases involving party data in Brazil.
Contains sample jobs configured to illustrate best practice settings for common Data Quality use cases involving party data in France.
Contains sample jobs configured to illustrate best practice settings for common Data Quality use cases involving party data in Germany.
Contains sample jobs configured to illustrate best practice settings for common Data Quality use cases involving party data when the data consists of multiple countries.
Contains sample jobs configured to illustrate best practice settings for common Data Quality use cases involving party data in India.
Contains sample jobs configured to illustrate best practice settings for common Data Quality use cases involving party data in Japan.
Contains sample jobs configured to illustrate best practice settings for common Data Quality use cases involving party data in Mexico.
Contains sample jobs configured to illustrate best practice settings for common Data Quality use cases involving party data in the United States.
Contains sample jobs configured to illustrate best practice settings for common Data Quality use cases involving party data in the United States, with regulatory address certification enabled.
Contains miscellaneous jobs configured to illustrate best practice settings for specific Data Quality matching use cases.

 

Text Data Processing language 4.0 blueprints:

Blueprint (version 4.0)
Description
Documentation
Contains sample jobs configured to illustrate best practice settings for common Text Data Processing use cases involving unstructured text in the English language.
Contains sample jobs configured to illustrate best practice settings for common Text Data Processing use cases involving unstructured text in the German language.

 

Text Data Processing Data Quality 4.0 blueprints:

Blueprint
(version 4.0)
Description
Documentation
Contains sample jobs configured to illustrate the use of Text Data Processing in conjunction with Data Quality. It also helps you visualize the extracted concepts and sentiments using an SAP BusinessObjects BI 4.0 Universe and SAP BusinessObjects Web Intelligence reports.

 

Other 4.0 tools:

Tool
(version 4.0)
Description
Documentation
Contains custom functions that perform additional manipulation of data that is not part of the functionality of Data Quality transforms, but are common functions that assist with the cleansing and matching of party data.
An Excel spreadsheet with a macro that generates a dictionary source file based on the content in the spreadsheet and compiles the source file into a ready-to-use dictionary file for the Text Data Processing Entity Extraction transform.

 

 

Loading Data from BW Open Hub Table to SQL using BODS


In this job we need to import an Open Hub table from the BW system into BODS. Make sure that the RFC connection from BW to BODS is established properly and is running successfully. Before executing the BODS job, all the objects in BW should be activated. The steps for creating and importing an Open Hub table from BW to BODS are:

Step 1: Log on to SAP GUI and type T-Code RSA1.

Step 2: Click on Open Hub Destination under Modelling tab which is in the left pane. Right click on the Info Area created and select Create open Hub Destination.

1.jpg

Step 3: Provide the name for the OHD and its description. Also provide the object type, which could be an InfoCube, DSO, MultiProvider, InfoObject, etc., and its corresponding name. Enable the create transformation and DTP option; it will create the transformation and DTP when we activate the OHD table. Click OK.

1.jpg.png

Step 4: Select third party tool in destination type and provide RFC destination which is running successfully from BW to BODS.

1.jpg.png

Step 5: Activate the OHD table.

Step 6: Click on OHD created and double click on DTP and activate it.

1.jpg.png

Step 7: Go to T-code RSPC. Right click on Process chains header and create display component.

1.jpg.png

Step 8: Provide a name for application component and its long description.

1.jpg.png

Step 9: Now right click on application component and select create process chain.

1.jpg.png

Step 10: Provide a name for process chain and its long description. Click ok.

1.jpg.png

Step 11: Click on the highlighted option to create process variant.

1.jpg.png

Step 12: Provide a name for process variant and its long description. Click ok.

1.jpg.png

Step 13: Select start using meta chain or API option and save it and go back.

1.jpg.png

Step 14: Click ok.

1.jpg.png

Step 15: Provide the DTP technical name which was created during the activation of OHD. Click Ok.

1.jpg.png

Step 16: Provide connection from start to DTP and activate it.

1.jpg.png

Step 17: Logon to BODS. Go to datastores tab in local object library and right click on free space and select new.

1.jpg.png

Step 18: Select the datastore type as SAP BW Source. Provide the details of the application server, user name and password of BW. Click on the advanced option and provide the client number and system number. Select the data transfer method as RFC and provide the RFC destination name. Provide the routing string of the BW server. Click on Apply and then select OK.

1.jpg.png

 

Step 19: Now expand the datastore and right click on open hub tables and select import by name.

1.jpg.png

Step 20: Enter the OHD table name which we have created at BW side and select import.

1.jpg.png

Step 21: Create a job in project area after selecting a project. Drag and drop workflow inside the job. Double click on work flow and drag and drop data flow. Double click on data flow and drag and drop OHD table which we have imported as source. Create a template table as a target. Connect source and target using query transform. Map the columns from input schema to output schema inside the query transform.

1.jpg.png

Step 22: Double click on OHD table. Click on execute process chain before reading and provide the corresponding process chain name which we have created.

1.jpg.png

 

Step 23: Log on to the Management Console and select Administrator. In Administrator, expand SAP Connections, select RFC Server Interface, select the corresponding RFC server interface, and click on Start.

1.jpg.png

Step 24: At BW side enter T-code as SM59 and select the corresponding RFC in TCP/IP connections.

1.jpg.png

Step 25: Click on connection test. If connection test is successful then execute the job in BODS.

1.jpg.png

1.jpg.png

Step 26: After the job is executed we can see that BODS job triggers the process chain.

1.jpg.png

1.jpg.png

SAP Announces Changes for SAP Rapid Mart Support


SAP is committed to providing its customers with world-class solutions that help address data quality and information management challenges. In order to make these types of ongoing investments, at times we need to manage investments in certain technologies as well. As a result, SAP is planning no further significant functional enhancements to the SAP Rapid Mart products. The goal of this communication is to provide a notification to all customers of this action, provide you with details on what products are affected, make available to you various resources for more detailed information, and point you in the right direction for replacement solutions that you can upgrade to.

 

The following is a list of SAP Rapid Mart products impacted by this action:

 

  • BA&T SAP Accounts Receivable Rapid Mart
  • BA&T SAP Accounts Payable Rapid Mart
  • BA&T SAP Cost Center Rapid Mart
  • BA&T SAP General Ledger Rapid Mart
  • BA&T SAP Sales Rapid Mart
  • BA&T SAP Purchasing Rapid Mart
  • BA&T SAP Inventory Rapid Mart
  • BA&T SAP Project Systems Rapid Mart
  • BA&T SAP Prod. Plan. Systems Rapid Mart
  • BA&T SAP Plant Maint. Sys. Rapid Mart
  • BA&T SAP HR Rapid Mart
  • BA&T SAP Fixed Assets Rapid Mart
  • BA&T SAP Rapid Marts, Edge edition

 

Please note that all of these Rapid Mart solutions have reached the end of their mainstream maintenance.  It is important to clearly state that SAP will continue to honor all current maintenance agreements.  However, future support will be limited to technical upgrades and fixes to bugs in the product.  SAP customers should contact their Account Executive or the Maintenance and Renewals team with questions on the expiration of their current contracts.

 

Product Upgrade and Future Support Options

 

Datalytics Technologies ("Datalytics") has announced that it will sell, maintain, support and enhance its own Rapid Mart solutions based on SAP Rapid Mart technology. Please refer to Datalytics' press release (http://www.news-sap.com/partner-news/) on this important news. Customers may contact Datalytics Technologies directly for more information. SAP makes no representation or warranty with respect to any services or products provided by Datalytics.

 

Resources and Contacts for additional information

 

If you have any questions about your Rapid Mart products and options moving forward, please contact your SAP Account Executive. If you do not know who your AE is, please contact the customer interaction center (CIC). The CIC is available 24 hours a day, 7 days a week, 365 days a year, and it provides a central point of contact for assistance. Call the CIC at 800 677 7271 or visit them online at https://websmp207.sap-ag.de/bosap-support. Additionally, you can reach out directly to the SAP reseller or VAR partner you have worked with in the past.

 

Again, we want to emphasize that SAP is highly committed to providing leading enterprise information management solutions that help you to meet your rapidly growing business requirements. We thank you very much for your continued support.

Add an attachment to BODS Job Notification email using VB Script


Hi All,

 

In some ETL projects there is a need to send the processed flat file (CSV, XLS, TXT) to the customer through e-mail. As far as I know there is no functionality in the smtp_to() function to add an attachment to the email notification.

 

We can achieve this functionality through a VB Script using the exec() function in a script.

 

The steps below describe how to achieve this functionality:

 

 

Steps: Add an attachment to BODS Job  Notification email using VB Script

 

Detailed Description:

In order to ease the end user's effort, it is often required that the processed flat file (CSV, XLS, etc.) is sent to the user for validation. In BODS we cannot attach the report using the smtp_to() function.

 

Below we will see an example of such activity.

 

Current Scenario:

 

There is no functionality in BODS to attach a file and send it to the user. The same can be implemented in BODS by calling a VB Script through the exec() function.

 

Solution: After the completion of the job, place a script that calls a VB Script (.vbs) file to send the email notification. The .vbs file must be saved in the Processed location shared folder.

Declare the below Global Variable in the job.

$G_PROCESSED_LOCATION = '\\XYZ_Location\Processed';

 

The email.vbs file has the following information;

strSMTPFrom = "User@abc.com"

strSMTPTo = "User@abc.com"

strSMTPRelay = "smtp.abc.com"

strTextBody = "JOB_NAME completed successfully in UAT. Attached is the file load status."

strSubject = "JOB_NAME completed in UAT"

strAttachment = "\\XYZ_Location\Processed\MyFile.xls"

 

Here is the script to send the email

exec('cscript','[$G_PROCESSED_LOCATION]\email.vbs', 8);

 

 

Regards

Arun Sasi


How to use Current_Configuration() function to send job status for different environments


Detailed Description:

This document describes the steps for sending the Job status for different environments. This functionality was used in my previous project where the requirement was to send job notifications in DEV, UAT and PROD.

Data Services Version: 4.1 SP2


Pre-requisite: To achieve this functionality you need to create three different datastore configurations called DEV, UAT and PROD, pointing to their respective environments.


**Please note that the code page is set to US-ASCII.** If this setting is changed, the job won't work properly.

 

Solution: The scripts mentioned below need to be used.

1) Upon failure, the job sends out an email. The email address is set in the substitution parameter "$$MAIL_JOB_NAME". Here is the script that sends the email:

smtp_to($G_Email, '[$G_JOB_NAME]' || ' failed in ' || current_configuration('Datastore_Name'), '[$G_JOB_NAME]' || ' failed in ' || current_configuration('Datastore_Name'), 10, 10);

The script takes the job name and the datastore's current configuration (DEV/UAT/PRD) and uses them in the subject and body of the email.

2) Upon completion, the job calls another script to send an email notification:

smtp_to($G_Email, '[$G_JOB_NAME]' || ' completed in ' || current_configuration('Datastore_Name'), '[$G_JOB_NAME]' || ' completed successfully in ' || current_configuration('Datastore_Name'), 26, 0);

 

Please note that we need to assign the substitution parameter ($$MAIL_JOB_NAME) to the $G_Email global variable.
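As a one-line sketch (assuming the substitution parameter holds the address list), this assignment can be done in an initialization script:

$G_Email = [$$MAIL_JOB_NAME];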

IDocs and BODS - Message Variant


Introduction

IDoc processing using BODS is used for both real-time and batch processing of data. IDocs are increasingly used for both data migration and data integration.

There is hence a possibility that IDocs from one stream/process overlap with those from another, creating a need to identify the specific group of IDocs that came from a given process or program. Message Variants come in handy under these circumstances, making it possible to group the IDocs from a specific process or even a specific BODS job execution.

 

Steps to Define Message Variant

 

Message Variants can be defined for IDocs in their partner profile for the given message type. The example below shows the message variant definition for the IDoc message type "CRMXIF_PARTNER_SAVE" in SAP CRM. The example demonstrates the message variant creation from the Inbound Parameters; the same can be performed for the Outbound Parameters too.


To open and maintain the Partner Profiles use transaction code WE20.


In Fig1 below, the message type "CRMXIF_PARTNER_SAVE" does not have a message variant defined.

 

Fig1.jpg

 

Hence, reviewing the IDocs that were generated by a given execution of a BODS job becomes difficult, as the generated IDocs can only be filtered using the creation time stamps and the sender partner profile. Since the same partner profile is usually used by several BODS jobs, it is useful to have message variants defined.

 

In order to add message variant for the IDoc message Type click on the "Create Inbound Partner" button on the maintain partner profile screen as shown in Fig2 below.

 

 

Fig2.jpg

 

In the "Partner profiles: Inbound parameters" screen enter the parameters as shown in Fig3 below.

 

Fig3.jpg

The message code holds the message variant value. The Message Variant can be three characters long; any unique combination of three (alphanumeric) characters can be used for defining the Message Variant. Fig4 below shows the created Message Variant under the inbound parameters.

 

Fig4.jpg

Use Message Variant in BODS


The defined message variant can be used in BODS either as a constant value in the MESCOD mapping or as a global variable.
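For example, the mapping of the MESCOD field in the Query transform that feeds the IDoc target could be either the literal value or a global variable (the variable name and the value 'BD1' are only illustrative):

MESCOD:  $G_MESSAGE_VARIANT    # or simply the constant 'BD1'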

 

View IDocs by the message variant

 

The IDocs created using this message variant can be viewed using the WE02 transaction either for a given timestamp range and sender partner or for the entire IDocs list generated. A screenshot of the parameter options in the WE02 transaction code is shown below in Fig5.

 

Fig5.jpg

 

The same message variant value can be used for filtering the list of IDocs in table EDIDC, to find the IDocs generated under this Message Variant for the given Message Type and Partner Number. If the partner number and message type are not specified, the same message code, if used in more than one partner profile or message type, can produce results that contain IDocs not generated by the specific BODS job.

Alias and their application during runtime


Introduction

 

There is an increasing number of scenarios where the database schema connected to by SAP Data Services in databases like Oracle and SQL Server changes from one environment to another. For example, an Oracle schema called DEV_PRJ1 can be assigned to a datastore for the development repository and TST_PRJ1 can be assigned for the test repository. In development there is also a possibility that multiple development database schemas are available, so that each developer in a multi-user environment is assigned a specific database schema associated to their repository, which provides the ability to perform unit testing in a more realistic fashion. So there can be database schemas like DEV1_PRJ1, DEV2_PRJ1, DEV3_PRJ1, and so on, each being a database schema that can be connected through a different BODS repository assigned to a user for development and unit testing.

 

In such cases there is a need for a consistent naming convention, so that the metadata of database objects like tables, views, etc. that are imported into one BODS repository remains consistent when the code is exported to another repository. This is where the use of Aliases in the datastore comes in handy.

 

Alias Fig1.jpg

 

In Fig1 above, a database schema ABC is assigned to the alias DBO for an MS SQL Server datastore. In this case, the schema name dbo is the standard schema used in environments like pre-production and production, where there will be a single database schema to host the data for the BODS objects to connect to during runtime. However, there can be multiple schemas defined under a single database in the dev and test environments, to cater for parallel development and testing to enable delivery.

 

Do please provide feedback on adding any more information that can be of use in this context.


The EIM Bulletin


Purpose

 

The Enterprise Information Management Bulletin (this page) is a timely and regularly-updated information source providing links to hot issues, new documentation, and upcoming events of interest to users and administrators of SAP Data Quality Management (DQM), SAP Data Services (DS), and SAP Information Steward (IS).

 

To subscribe to The EIM Bulletin, click the "Follow" link you see to the right.

 

Hot Issues

 

IMPORTANT: 2015 date conversion issue - all versions of SAP Data Services

 

  • Are you seeing the year 1915 in your output?

 

After installation of IS 4.2 SP3 Patch 0-1, accessing the Information Steward link via the CMC or launching the IS application results in exceptions:

 

  • Error: "HTTP Status 500 - Servlet execution threw an exception" .............. javax.servlet.ServletException: Servlet execution threw an exception com.businessobjects.pinger.TimeoutManagerFilter.doFilter(TimeoutManagerFilter.java:159)
  • IS 4.2 SP3 Patch 0-1 is NOT supported for use with BIP/IPS 4.1 SP5. Please review KBA 1740516.
  • The resolution to the issue will be in IS 4.2 SP3 Patch 2 (tentatively due out mid to end of January); alternatively, move to IS 4.2 SP4 (you must also move to DS 4.2 SP4).

 

Latest Release Notes

 

 

New Resources

(To be determined)

 

Selected New Knowledge Base Articles

 

  • 2120830 - How to compile Text Data Processing rules - Data Services 4.x
  • 2116648 - Salesforce Dot Com, SFDC, drops SSL support
  • 2109863 - Overview of system status is green despite job failures - Data Services 4.2
  • 2103998 - Duplicates being created when running a Data Services dataflow that includes a hierarchy flattening transform - Data Services

 

Events

 

  • SAP SAPPHIRE NOW - Orange County Convention Center | Orlando, Florida | May 5 - 7

 

Recent Ideas

(Enhancements for EIM products suggested via the SAP Idea Place, where you can vote for your favorite enhancement requests, or enter new ones.)

 

Didn't find what you were looking for? Please see:


Note: To stay up-to-date on EIM Product information, subscribe to this living document by clicking "Follow", which you can see in the upper right-hand corner of this page.

How to remove header and footer rows from a flat file?


Case 1: process a flat file with header information


 

If you know how many header lines there are, removing them can easily be done by specifying the correct value for “Skipped rows” in the File Format:

 

 

The output will look like the Data Preview in the File Format Editor. Just make sure to add a query transform so that the correct column headers are generated:

 




Case 2: process a flat file with header and footer information


Unfortunately, the “Skipped rows” setting only deals with rows at the beginning of the file, and there is no equivalent setting for the end. If you don’t do anything special, your data flow may run into errors when processing the extraneous rows that are in a different format.

 

Removing header and footer lines can be done in a single data flow:

 

 

To that end, an alternative single-field fixed-width File Format is defined, which is used for both source and target in the data flow:

 

 

The ReadFile Query transform reads all rows from the file and adds a sequence number:

 

 

The Sort Query transform orders the lines in descending order:

 

 

The GenRowNum Query transform adds a 2nd sequence number. Because the rows were sorted in descending order, this sequence will start counting from the bottom of the file:

 

 

Finally, the Filter Query transform strips off all header and footer rows:

 

 

The global variables $RowsInHDR and $RowsInFTR contain the number of header and footer lines respectively. The end result will look like this, a file apt for further processing:
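Putting the pieces together, a minimal sketch of the four Query transforms is shown below (the source file format instance is assumed to be called FileIn, its single field LINE, and the sequence columns NUM1 and NUM2 are assumed names):

# ReadFile Query:   LINE = FileIn.LINE,  NUM1 = gen_row_num()      (counts from the top of the file)
# Sort Query:       ORDER BY NUM1 DESC
# GenRowNum Query:  NUM2 = gen_row_num()                           (counts from the bottom, because of the sort)
# Filter Query:     WHERE NUM1 > $RowsInHDR AND NUM2 > $RowsInFTR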

 

 

How to transform a flat file into a master-detail structure using lookup_seq?


Have you ever tried to understand the description of the lookup_seq function in the DS Reference Guide? And if you managed to do so, are you still wondering what it can be used for? Well, here’s a nice example.

 

Case: process a flat file with 2 row types and link master and details together

 

 

This file contains travel data of the employees of ACME Inc. For every employee there is

  • one row with name and ID
  • a number  of rows with travel details, one for each trip

The requested output is two tables in a database linked by a master-detail relationship:

 

 

DS is not able to directly deal with two different row formats within a single file, but you bet there is a workaround. Just start by defining a generic fixed-width Flat File Format:

 

 

And use that file as a source in a data flow:

 

 

In the Query transform, exclude the row header lines (starting with ‘Trip,’) from the input file and store all remaining (data) rows in a staging table with 2 extra control columns:

 

 

HDRFLAG will contain 1 for any row with employee information, 0 for every trip detail line.

ROWNUM is just a sequence number, from 1 till the number of rows in the file.

 

The staging table will then serve as a source for generating the master data:

 

 

Note the usage of the word_ext built-in function to derive the value of the individual attributes:
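For reference, word_ext() returns the n-th word of a string for a given separator; with a purely illustrative input line:

word_ext('ACME,John Smith,12345', 2, ',')    # returns 'John Smith'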

 

 

The output will look like this:

 

 

Then, the details are also derived from the same source in staging:

 

 

With the Query transform:

 

 

The foreign key column is calculated with another built-in function:

 

     lookup_seq(HANA_DS_STAGE.DS_STAGE.SCN_TRIPINFO, ROWNUM, 0, ROWNUM, SCN_TRIPINFO.ROWNUM , HDRFLAG, 1)

 

Don’t worry too much about the syntax of this function call and the meaning of its parameters, as it can easily be generated with the Function Wizard:

 

 

And here are the results:

 

 

Every row with trip details is magically linked to the employee who made the trip.

 

Note: The Function Wizard allows you to specify one Compare column / Expression pair only. But in fact lookup_seq can accept more (just like the normal lookup function can). In case your comparison is based on multiple columns, generate the call with one comparison in the wizard first and then manually add more.
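For instance, adding a hypothetical second compare column COMPANY to the call shown earlier would just append another pair:

lookup_seq(HANA_DS_STAGE.DS_STAGE.SCN_TRIPINFO, ROWNUM, 0, ROWNUM, SCN_TRIPINFO.ROWNUM, HDRFLAG, 1, COMPANY, 'ACME')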

Process to create Batch Script to START & STOP Services For SAP Business Objects Application


Purpose:

The purpose of this document is to create batch scripts to START/STOP the 'Server Intelligence Agent' (SIA) and the Tomcat application on Windows Server, step by step, for SAP Business Objects Data Services.


Overview:

Environment Details:

Operating system: Windows Server 2008 64 Bit

Database: Microsoft SQL Server 2008 R2

Web Application: Tomcat

SAP Business Objects Tools: SAP Business Objects Information Platform Services 4.1 SP2; SAP Data Services 4.1 SP2

Repository version: BODS 4.X

 

Description:
The requirement for the scripts is to automate housekeeping activity during off-business hours.

 

In this post, I will go through the necessary steps to create, set up and schedule batch scripts to START/STOP the Tomcat application, the Server Intelligence Agent and the SAP Business Objects Data Services application service on Windows Server 2008.

 

Steps can be followed for SAP BusinessObjects XI3.0, XI3.1 & BI4.0 or BI4.1 or BOIPS 4.X:

 

  • Scripts can be scheduled on multiple Windows Servers across the network
  • They can be very well utilized in environments where backup activities happen automatically using 3rd party tools
  • The same scripts can be embedded with the backup tools to ensure BI services are stopped and started automatically before and after the backup process runs.

 

Configuration Steps for Automate SAP Business Objects Application:

 

Step I: Create script for automate stop of Tomcat, Data Services & Server Intelligence Agent Application

  • Log in to Windows Server using Administrator account/ Domain user_id.
  • Go to (START >> Administrative Tools >> Services)
  • Look for the service called as ‘Apache Tomcat for BI 4’
  • Go to its properties screen and COPY its service name, for e.g. (BOEXI40Tomcat)

1.png

  • Look for the service called as ‘SAP Data Services’
  • Go to its properties screen and COPY its service name, for e.g. (DI_JOBSERVICE)

2.png

  • Look for the ‘Server Intelligence Agent’ (SIA) service
  • Go to its properties screen and COPY its service name, e.g. (BOEXI40SIABI4SERVER)

3.png

  • Create 3 batch script files to stop the SAP Business Objects applications – follow the screenshots below to create them using Notepad

Batch Script File 1 >>

  • Open Notepad and type the following, as shown in the screenshot, to stop the Tomcat application

4.png

  • Open Notepad and type the following, as shown in the screenshot, to stop the SAP Business Objects Data Services application

5.png

 

  • Open Notepad and type the following, as shown in the screenshot, to stop the Server Intelligence Agent (SIA)

6.png

 

 

Step II: Create script for automate start of Tomcat, Data Services & Server Intelligence Agent Application

  • Create 3 batch script files to start the SAP Business Objects applications – follow the screenshots below to create them using Notepad

Batch Script File 2 >>

  • Open Notepad and type the following, as shown in the screenshot, to start the Tomcat application

7.png

  • Open Notepad and type the following, as shown in the screenshot, to start the SAP Business Objects Data Services application

8.png

 

  • Open Notepad and type the following, as shown in the screenshot, to start the Server Intelligence Agent (SIA)


9.png

 

Step III: Now you can schedule the above batch files using Windows Task Scheduler or other 3rd party tools. I am using Windows Task Scheduler in this blog.

  • Go to, (START >> Control Panel >> Administrative Tools >> Task Scheduler)

10.png

1. Create a New Folder “Tomcat Script”

11.png

2. Click on BOBJ Script Folder and Create Task

12.png


3. On the General tab, Set the NAME/ DESCRIPTION for the task:

13.png

4. On Triggers Tab, click on new button and set the followings:

14.png

5. On Actions Tab, click on New button and set the followings:

15.png

6. In the settings window, select the ‘BO_TOMCATSTOP_Script.bat’ file from the Automate Script directory

7. Set the following settings on Conditions & Settings tab

16.png

8. Click on the OK button; the program will prompt for the Username/Password window. Please enter the domain credentials.


9. Task Scheduler will show the created task as follows:

17.png

 

10. Repeat steps 2 to 7 for the Stop Tomcat, Start & Stop SIA, and finally Start & Stop Data Services scripts


11. Once completed, the Task Scheduler window should look as follows:

18.png

  • In this step select the 2nd script created to start the Tomcat application, and likewise for Start & Stop SIA and Start & Stop Data Services
  • As per the backup activity and the duration required for the entire process, set the start time of each script accordingly.
  • This is to ensure the Tomcat application has sufficient time to shut down and restart its services appropriately.


Step IV: You can also monitor the services. Below is the list of executables of the SAP Business Objects application services:


BusinessObjects Server and its Windows process:

1. 32-bit Connection Server: ConnectionServer32.exe
2. 64-bit Connection Server: ConnectionServer.exe
3. Tomcat Application: tomcat7.exe
4. Adaptive Job Server: JobServer.exe
5. Adaptive Job Server Child: JobServerChild.exe
6. Central Management Server: cms.exe
7. Crystal Reports 2011 Processing Server: crproc.exe
8. Crystal Reports Cache Server: crcache.exe
9. Dashboards Cache Server: xccache.exe
10. Dashboards Processing Server: xcproc.exe
11. Event Server: EventServer.exe
12. File Server: fileserver.exe
13. Report Application Server: crystalras.exe
14. Web Intelligence Processing Server: wireportserver.exe
15. SAP Data Services Job Server: al_engine.exe
16. SAP Data Services Access Server: al_accessserver.exe

 
Step V: Now you can check the log file status.


Log File Status:

19.png

After completion of the Windows scheduled job, below is the log file status:

20.png

Status in Log File, For e.g Tomcat_Stoplog

21.png

& Tomcat_Startlog

  22.png

Reference Material:

  • For execution of the batch scripts, below are the rights required for the domain user (Windows environment).
  • The following rights are required on the domain server for execution of the batch scripts:
    • Act as part of the operating system
    • Allow log on locally
    • Create a token object
    • Log on as a batch job
    • Log on as a service
    • Replace process level token


Link:

http://www.blog.c-bip.ch/

 

http://scn.sap.com/docs/DOC-50471

 

SAP Knowledge Base Article:

 

http://www.forumtopics.com/busobj/viewtopic.php?t=124355

 

1305228 - How to start or stop SIA and Tomcat with command line or Windows scripts in Business Objects XI3.x?

https://service.sap.com/sap/support/notes/1305228

 

1203539 - How to use a Windows script to start or stop BusinessObjects services

https://service.sap.com/sap/support/notes/1203539

 

1287046 - How to use Windows script to start or stop BusinessObjects services

for Business Objects XI 3.0?

https://service.sap.com/sap/support/notes/1287046

 

1406782 - How do you schedule a batch script for stopping and restarting the SIA properly on Windows platform?

https://service.sap.com/sap/support/notes/1406782

 

1634962 - BI4 - How to import or export a LifeCycle Manager BIAR file using command line utility

https://service.sap.com/sap/support/notes/1634962

 

2058607 - How to restart BI4.1 Explorer services automatically on linux using a script?

https://service.sap.com/sap/support/notes/2058607

 

1292866 - How to Automatically start services which go down in BOE XI R2

https://service.sap.com/sap/support/notes/1292866

Featured Content Data Services and Data Quality


The EIM Bulletin

The EIM Bulletin is a timely and regularly-updated information source providing links to hot issues, new documentation, and upcoming events of interest to users and administrators of SAP Data Quality Management (DQM), SAP Data Services (DS), and SAP Information Steward (IS).

 

EIM Integration Use Cases

Read Ina Felsheim's blog to find out about integration use cases related to SAP Data Services, Information Steward and Master Data Governance.

 

 

SAP Announces End of Life for SAP Rapid Marts

Important information for our Rapid Mart customers

 

 

IMPORTANT: 2015 date conversion issue

We are currently aware of a 2015 date conversion issue which started January 1, 2015. It affects all versions of SAP Data Services.

 

For further information, please read KBA 2114308.

 

 

Data Quality Performance Guide

This document from Dan Bills provides performance throughput numbers for data quality capabilities within Data Quality Management and Data Services v4.2.


SAP Data Services Product Tutorials

This document outlines several videos, blogs, tutorials, and other resources to help you learn Data Services.
