Channel: SCN : Document List - Data Services and Data Quality

BIW-230108 Load data to BW failed The current application triggered a termination with a short dump


Symptom:

Error when executing a BODS job with an SAP BW Transaction Target Structure, triggered through an SAP BW process chain:

BIW-230108  Load data to BW failed

The current application triggered a termination with a short dump.[SAP NWRFC 711][SAP Partner 700 ][SYSNAME][APPSERVER_NAME][RFC_USER_NAME][4102]

 

Environment:

 

SAP BASIS VERSION 7.0 SP31

 

SAP BW Version: 7.0

 

SAP BODS Version: 4.2 SP4

 

DATABASE VERSION: DB2 10.5.4

 

Cause: The RFC destination on the application server was set to non-Unicode.

 

Resolution: Change the Unicode setting of the RFC destination (transaction SM59) from non-Unicode to Unicode.

 

 

 



How to get the number of rows in a file using BODS without loading the data


Hi All,

 

I have created this document to show you how to get the row count of a file without loading it into a table.

I have used the BODS exec() function to get the row count of the file.

My BODS Job Server is installed on a Linux server.

 

-----------------------------------------------

Suppose I have a file named contract_master.csv.

In this file there are 10 columns and 100 rows.

Now, if I want to get the row count of the file, I have to write the following code in a script:

 

$row_count = exec('/bin/sh','-c "wc -l /C:/Incoming_Files/contract_master.csv"',0);

print($row_count);

 

where $row_count is a global variable of type varchar.

/C:/Incoming_Files is the path where the file is kept.

 

Now when you run the job you will get the following output.

 

100 /C:/Incoming_Files/contract_master.csv

 

Here the leading 100 is the total number of rows in the file.

 

Now, to remove the file path and file name and keep only the count, use the following code:

 

$actual_count = rtrim_blanks( ltrim_blanks( replace_substr( $row_count, '/C:/Incoming_Files/contract_master.csv', '')));
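
If you only need the numeric count, an alternative is to take the first blank-separated word of the exec() output with word_ext() instead of stripping the path. A minimal sketch, assuming $row_count already holds the output shown above and $actual_count is another global varchar variable:

# wc -l prints "<count> <file name>", so the first word is the row count.
$actual_count = word_ext($row_count, 1, ' ');
print('Row count: ' || $actual_count);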

 

I hope this will help you.

 

You can also use this for testing very easily. Let's say you have 100 files in your project and you have to load all of them; after loading, you can apply the logic mentioned above to verify whether all the data was loaded properly.

 

Feel free to ask any questions.

 

Thanks,

SB

How to extract data from different file types using Wait_For_File().


The requirement was to process various master data mapping files, such as chart-of-accounts and sub-department mappings, and also to process transactional monthly General Ledger and monthly Sales data files. We accomplished this by building a batch job with wait_for_file() calls and Conditional workflows.

 

Chart_of_Account: FileName_COA_MAP.TXT

SubDepartment: FileName_CC_MAP.TXT

GL File: FileName_GL.TXT

Sales File: FileName_Sales.TXT

 

wait_for_file('FolderName/COA_Map.csv',0,0,1,$FileType);

(If $FileType is NOT NULL)

Begin

Processing COA Work Flow

End

 

You can replicate the above wait_for_file() call and Conditional workflow for the other file types, such as “FileName_CC_MAP.TXT”, “FileName_GL.TXT” and “FileName_Sales.TXT”. A fuller sketch of the pattern follows.
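
Put together, the pattern is a script object that calls wait_for_file() and fills a global variable, followed by a Conditional whose If expression tests that variable. A minimal sketch in Data Services script, assuming a global varchar variable $FileType and a placeholder folder path:

# Script object placed before the Conditional. Check for the chart-of-accounts
# file (timeout and polling interval as in the example above) and capture the
# matched file name in $FileType; it stays NULL if no matching file is found.
$FileType = NULL;
wait_for_file('FolderName/FileName_COA_MAP.TXT', 0, 0, 1, $FileType);
print('COA file found: ' || nvl($FileType, 'none'));

# Conditional object following the script:
#   If expression: $FileType IS NOT NULL
#   Then branch:   the workflow that processes the COA mapping file
#   Else branch:   optional, e.g. a script that logs or raises an exception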

Error in BODS ETL job: ODBC data source error message for operation


Symptom:

Error when executing a BODS job that uses database views as a source, where one of the views has an incorrect definition. The job runs for some time and fetches some records from the database view, but errors out after a certain record count:

 

Error: ODBC data source <10.194.32.46> error message for operation <SQLFreeStmt>: <>.

 

 

Environment:

 

SAP BODS Version: 4.2 SP4

 

DATABASE VERSION: SQL Server 2008 R2

 

Cause: One of the columns in the view has an incorrect definition at the query level.

 

You can check this by running a SELECT DISTINCT on the column that causes the error. You will get the error message below:


Error: Msg 512, Level 16, State 1, Line 1

Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
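
You can also reproduce this check from a small Data Services script, which surfaces the underlying SQL Server error in the job error log instead of the generic SQLFreeStmt message. A sketch, assuming a datastore named DS_SQLSERVER pointing at the same database; the view and column names are placeholders:

# Run the distinct check against the suspect view; if the column definition is
# wrong, the job error log shows the real SQL Server error (Msg 512).
print(sql('DS_SQLSERVER', 'SELECT DISTINCT <column_name> FROM <view_name>'));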

 

 

Resolution: With the help of the DBA team, correct the view definition for the column that is returning the incorrect value.

How to apply a Permanent License Key in BODS


This document describes, with screenshots, the process of applying a license in a BODS system. However, it is not a substitute for the main administration guide, which is available on the SAP Support Portal (Marketplace).

 

License Manager 

 

After the initial BODS installation, the system runs on a trial license for 90 days. After that, we have to apply a permanent license key. License Manager can be used only in command-line mode. You can use it to manage your product activation keycodes, the alphanumeric codes that are referred to each time you run certain software. By using License Manager, you can view, add, and remove product activation keycodes for SAP solution portfolio software (such as SAP Data Services) that requires them.

 

License Manager accesses keycodes on the local system only; you cannot access the keycodes from a remote system. When updating keycodes, make the changes on all SAP Data Services computers by launching License Manager on each computer, including Designer and Job Server computers.

 

If you are running a Windows operating system, you will not be able to add or remove license keycodes unless you have Administrator privileges. For those with non-administrator privileges, only the -v and --view parameters are available for use.

 

ScreenHunter_1220.jpg

Starting License Manager on Windows

 

You can run License Manager after the SAP Data Services installation has completed.

 

  1. Choose Start-->Programs-->SAP Data Services 4.2-->SAP License Manager-->Run as administrator.

A command line window opens displaying License Manager command line options.

 

ScreenHunter_1221.jpg

 

Request a permanent license key

 

Please follow SAP Note 1251889 for the steps to request a permanent license key.

 

ScreenHunter_1224.jpg

 

Apply License

 

Use the command below to apply the license:

LicenseManager -a <Key downloaded from Marketplace>

ScreenHunter_1226.jpg

 

SAP Data Services System & Performance Optimization


The following post is an attempt to summarize the Data Services Performance Optimization Guidelines and make them shorter and easier to read. To achieve an improvement, I suggest you stick to the following sequence: 'measure', 'act', 'measure'.

Before moving to actual DS optimization it is recommended to first test your environment.

1.     Enhance the system capabilities

     - Source OS and database server

    • Increase the value of the read-ahead protocol offered by Windows
    • Expedite the SELECT statement of the DB through indexing, caching and increase of the database server’s I/O size

     - Target OS and database server

    • Turn on the asynchronous I/O
    • Expedite the INSERT and UPDATE of your target database

     - The network

    • Set network buffers to reduce the number of round trips to the database servers across the network

2.      Measure job performance

     - Check system utilization

    • CPU utilization
    • Memory utilization

     - Read log files

     - Read Monitor Logs

     - Read Performance Monitor

     - Read Operational Dashboard

 

To compare previous execution times you can view the Operational Dashboard or the Job Execution History found in the Administrator section, both located in the Management Console.

3.     Job execution strategies

     - Push down operations- applicable for database sources and targets

 

          For SQL sources and targets, SAP Data Services creates database-specific SQL statements based on the data flow diagrams in a job. The software generates SQL SELECT statements to retrieve the data from source databases. To optimize performance, the software pushes down as many SELECT operations as possible to the source database and combines as many operations as possible into one request to the database. The operations within the SELECT statement that the software can push down to the database are: aggregations, distinct rows, filtering, joins, ordering, projection, and functions.

     - Improving throughput

    • Use caching as much as possible
    • Bulk load to the target
    • Minimize extracted data
    • Change array fetch size
    • Increase the rows per commit
    • Increase the Degree of parallelism

Connect Data Services 4.2 SP4 With Mainframe ADABAS


Good afternoon,

I would like to develop an ETL process using SAP Data Services 4.2 SP4, connecting to Adabas on the mainframe.

The product documentation mentions:

2.5.2.1 Mainframe interface

The software provides the Attunity Connector datastore, which accesses mainframe data sources through Attunity Connect.

The data sources that Attunity Connect accesses are in the following list. For a complete list of sources, refer to the Attunity documentation.

● Adabas

2.5.2.1.1 Prerequisites for an Attunity datastore

Attunity Connector accesses mainframe data using software that you must manually install on the mainframe

server and the client location (Job Server) computer. The software connects to Attunity Connector using its ODBC interface.

Attunity.jpg

Do I have to perform the installation on the mainframe server? Is it possible to access Adabas through this connector? Is there no need to buy and install the Attunity Server?

The Attunity site mentions:

The Business Objects Data Integrator product includes direct support for the Attunity Server ODBC interface.

 

Could someone send me the procedure required to set this up for Adabas on the mainframe?

 

Thank you

 

Hugs

 

This document was generated from the following discussion: Connect Data Services 4.2 SP4 With Mainframe ADABAS

New Feature of BODS 4.2 and BODS 4.1


This Document is to provide information on some of the new features of SAP Data Services 4.2 and 4.1 which could be useful while working on the DS.

 

New Features of BODS 4.1:


The major improvement of the 4.1 version is its integration with SAP and non-SAP systems.


Enhanced extraction capabilities for SAP Business Suite:

 

  • Data streaming in ABAP data flows: This version includes a new data transfer method option RFC, which lets us stream data from the source SAP system directly to the Data Services data flow process using RFC.

 

  • SAP table reader in regular data flows: The Data Services SAP table reader in regular data flows can now fetch data in batch from an SAP system.
    This new implementation allows the reader to process large volumes of data and mitigates out-of-memory errors. Also included are an Array fetch size option, which allows the data to be sent in chunks, avoiding large caches on the source side, and an Execute in background (batch) option, which lets us run the SAP table reader in batch mode (using a background work process) for time-consuming transactions.

 

  • Parallel reading from business content extractors: Data Services now supports multithreading for
    SAP extractors for improved performance.

 

  • New ABAP functions for improved security: This version includes several enhancements to
    functions, procedures, and authorizations that provide secure integration with
    SAP systems.

 

  • SNC authentication and load-balancing support in SAP data stores: All SAP data store types now include options for authentication using Secure
    Network Communications (SNC). SNC provides a more secure and efficient method for moving data from ABAP executable programs directly to the Data Services Job Server engine.


Hadoop integration: Data Services now offers connectivity to Hadoop sources including Hadoop distributed file systems (HDFS) and Hive data warehouses.

 

Monitor log enhancements: Time-based sampling.

SAP Solution Manager Improvements: Alerting and Heartbeat monitoring lets us use the SAP Solution Manager to check whether a component such as a Job Server or Access Server is up and running. We can also get information about real-time services for Access Servers.

 

Improved XML support through the XML_Map transform: This version of Data Services adds a new XML_Map transform that simplifies support for hierarchical data structures such as XML and iDocs. The new XML_Map transform provides a simplified interface for nesting and un-nesting hierarchical data and converting from one hierarchical structure to another without an intermediate flattened-data step.

 

Enhanced SAP HANA support:

  • SAP HANA repository support
  • SAP HANA performance improvements
  • Bulk updates enhancement
  • Support for HANA stored procedures   

 

Data Services Workbench: The Data Services Workbench is a new application that simplifies the migration of data and schema information between different database systems. The Workbench automates this migration process. Instead of creating many data flows manually, we now provide connection information for the source and target databases and select the tables that we want to migrate. The Workbench automatically creates Data Services jobs, workflows, and data flows and imports them into a Data Services repository. We can execute and monitor these jobs from within the Workbench. In addition, the Workbench supports more advanced options such as bulk loading and delta loading.

 

In this version of Data Services, the Workbench supports migration from Data Services-supported databases and SAP applications to SAP HANA, Sybase IQ, and Teradata targets.

 

1.jpg

 

Design-Time Data Viewer: Previously in the SAP Business Objects Data Services Designer, we could view data for only static sources in the object library such as tables, files, XML, COBOL copybooks, and corresponding sources and targets. To view data going through transforms, you had to use the Debugger. Now, the Design-Time Data Viewer feature lets us view and analyze the input and output for a data set in real time as we design a transform. The dataflow does not need to be complete or valid, although it must use a valid, accessible source that contains data. We can display and configure Design-Time Data Viewer from the Debug menu.


New Features of BODS 4.2:

 

The major improvement of the 4.2 version is its usability. It’s highly simplified compared to the previous one.

 

A new Platform transform added in SAP Data Services 4.2 SP4: The Data Mask transform enables you to protect personally identifiable information in your data.

 

 

SAP LT Replication Server integration: Data Services has been enhanced to integrate with SAP LT Replication Server (SLT) by leveraging the new version of ODP API.

 

  • The existing extractor interface in Data Services has been enhanced and replaced with ODP in the Object Library and Metadata browsing. ODP allows uniform access to all contexts provided by ODP API.
  • A new option has been added to the SAP data store: “Context”. SAP data store in Data Services has been enhanced to support the SLT objects. The ODP context allows us to connect to both the extractors and the SLT.
  • SLT enhances the CDC (Change Data Capturing) scenario in Data Services, because with the trigger-based technology SLT adds delta-capabilities to every SAP or non-SAP source table which then allows for using CDC and transferring the delta data of the source table.

 

Retrieving the time zone of a Management Console machine: The new Get_MC_Machine_Timezone operation allows us to retrieve the time zone of the Management Console machine.

 

There are some new Built-in functions added in DS 4.2.

 

Expanded Designer search capabilities: Using SAP Data Services Designer, now we can search for a text string in every part of the object, such as table name and variable name.

Native Microsoft SQL Server support on UNIX: Data Services 4.2 provides native Microsoft SQL Server support on UNIX as a source or a target. When using the UNIX job server or engine to access the MS SQL Server datastore, the following functionality is available:

 

  • CDC support, which includes the Replication method for SQL Server 2008 and later and the CDC and Change Tracking methods
    for SQL Server 2008 and later.
  • Bulk loading support. 
  • Allow merge or upsert option support for SQL Server 2008 and later.
  • Linked data stores support, which provides a one-way communication path from one database server to another.

Batch Mode functionality added to the XML_Map transform: We can now use the XML_Map transform in Batch mode to accumulate data as a block
of rows instead of a single row. This block of rows is sent as a unit to the next transform.

 

Introduced the Google BigQuery application datastore:

 

 

5.jpg

 

 

 

 

Included MongoDB support:


6.jpg

DS 4.2 supports HDFS data preview:

7.jpg

 

Additional database support for Replication Server real-time CDC: Replication functionality introduced
in SP03 has been extended in SP04 to include MS SQL Server, IBM DB2 and SAP ASE.

 

Harness the power of HANA:

8.jpg

 

Thanks,

Tanvi


Hierarchy visualization of objects


Ever got into a situation where you had to list out the child workflows and dataflows of a job? Maybe for documentation, or maybe for checking object usage. The real challenge is when you have to represent it as an organizational chart.

 

Here is an example:


For better view - click on the image, on the preview window, right click and save to desktop.

data1.png

How do we do?

 

Of course you can drill down into every object in the Designer, navigate, and draw the chart in Microsoft Visio, MS Word, etc. But how about generating the hierarchy chart from repository metadata rather than drawing it?

 


It's just three steps away.

 

1. Start by populating the AL_PARENT_CHILD table, either from the Designer or from the command line.

 

          a. From designer:

Untitled2.png

 

 

          b. From command line using al_engine.exe

          command: "%LINK_DIR%\bin\al_engine.exe" -NMicrosoft_SQL_Server -StestSQLhost -Udb_user -Pdb_pass -Qtestdb -ep

Untitled2.png

 

2. Log in to the repository database and execute the query below:

 

WITH CTE AS (
      SELECT [PARENT_OBJ]
            ,[PARENT_OBJ_TYPE]
            ,[DESCEN_OBJ]
            ,[DESCEN_OBJ_TYPE]
        FROM [AL_PARENT_CHILD] PC
       WHERE 'JOB_CORD_BW_TD_OHS_POPULATE_SDL' = PARENT_OBJ
      UNION ALL
      SELECT PC.[PARENT_OBJ]
            ,PC.[PARENT_OBJ_TYPE]
            ,PC.[DESCEN_OBJ]
            ,PC.[DESCEN_OBJ_TYPE]
        FROM CTE
             INNER JOIN [AL_PARENT_CHILD] PC ON CTE.DESCEN_OBJ = PC.PARENT_OBJ
)
SELECT 2 AS ID, [PARENT_OBJ] + '->' + [DESCEN_OBJ] CODE
  FROM CTE
 WHERE PARENT_OBJ_TYPE IN ('Job','WorkFlow','DataFlow')
   AND [DESCEN_OBJ_TYPE] IN ('Job','WorkFlow','DataFlow')
UNION
SELECT 1 AS ID, 'digraph a { node [shape=rectangle]'
UNION
SELECT 3 AS ID, '}'

 

 

Untitled1.png

 

From the output, copy only the contents of the second column, without the header.

 

My data looks like this:

digraph a { node [shape=rectangle]
JOB_CORD_BW_TD_OHS_POPULATE_SDL->WF_Job_Workflow_SSP_Container_CORD_BW_TD_OHS__SDL
WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL->DF_BW_Z_BODS_01_STG_TO_BW_Z_BODS_01_SDA
WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL->DF_S_BW_Z_BODS_01_to_SDA_BW_Z_BODS_01_SDL_1
WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL->DF_S_BW_Z_BODS_01_to_SDA_BW_Z_BODS_01_SDL_2
WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL->DF_BW_Z_BODS_02_STG_TO_BW_Z_BODS_02_SDA
WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL->DF_S_BW_Z_BODS_02_to_SDA_BW_Z_BODS_02_SDL_01
WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL->DF_S_BW_Z_BODS_02_to_SDA_BW_Z_BODS_02_SDL_02
WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_01_SDL->WF_CORD_BW_TD_OHS_BW_To_Z_BODS_01_SDL
WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_02_SDL->WF_CORD_BW_TD_OHS_BW_Z_BODS_02_SDL
WF_Job_Workflow_SSP_Container_CORD_BW_TD_OHS__SDL->WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL
WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL->WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_01_SDL
WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL->WF_CORD_BW_TD_OHS_POPULATE_S_BW_Z_BODS_02_SDL
WF_Job_Workflow_SSP_Group_CORD_BW_TD_OHS_SDL->WF_Master_Workflow_Staging_CORD_BW_TD_OHS_SDL
WF_Master_Workflow_Staging_CORD_BW_TD_OHS_SDL->WF_Staging_Workflow_Container_CORD_BW_TD_OHS_SDL
WF_Staging_Workflow_Container_CORD_BW_TD_OHS_SDL->WF_Staging_Z_BODS_01_to_S_BW_Z_BODS_01_SDL
WF_Staging_Workflow_Container_CORD_BW_TD_OHS_SDL->WF_Staging_Z_BODS_02_to_S_BW_Z_BODS_02_SDL
WF_Staging_Z_BODS_01_to_S_BW_Z_BODS_01_SDL->DF_OH_Src_Z_BODS_01_To_Stg_S_BW_Z_BODS_01_Map_SDL
WF_Staging_Z_BODS_02_to_S_BW_Z_BODS_02_SDL->DF_OH_Src_Z_BODS_02_To_Stg_S_BW_Z_BODS_02_Map_SDL
}

 

 

3. Open the webpage webgraphviz

  1. Clear the existing contents of the text box
  2. Paste the code you copied
  3. Click the Generate Graph button and scroll down to see the generated graph.

 


That's all; the org chart of your job is ready!

 


Note:

  1. The SQL query
    1. The given query works only on MS SQL Server.
    2. Modify the WHERE clause in the query to match your job.
    3. You can also use "IN" instead of "=" and put in multiple job names.
    4. The query is restricted to jobs, workflows and dataflows. You can modify the conditions to include other objects too.
  2. We are generating only the parent-child hierarchy, not the execution flow, i.e. two child nodes at the same level may not execute in parallel.
  3. Since it is not the execution flow, conditional workflows will not appear in the chart.
  4. Webgraphviz is an online alternative to the Graphviz tool, which supports command-line usage when installed.

 

 

Appreciate your comments/feedback. Cheers

TUTORIAL: How to duplicate a Job batch in SAP Data Services Designer


Introduction

This tutorial will guide you through the process of duplicating a batch job and its components. We will also explain how the duplication mechanism works.

 

 

 

Foreword

 

First of all, we must understand something before duplicating an object. For Data Services Designer (DSD), there are two kinds of objects when it comes to duplication: reusable objects and non-reusable objects (or single-use objects).

  • Non-reusable objects are duplicated when we copy a job and its components. In this category we can cite scripts, conditions, loops, global variables…
  • Reusable objects are not duplicated. In this category we can cite workflows and dataflows. These reusable objects end up being referenced, not copied.

 

For more information, see the SAP Data Services Designer documentation Page 54

 

 

 

The duplication logic illustrated


2015-08-17_14h06_16.png



To illustrate this DSD object policy, here is an example:

I made a copy of my batch job "Job_Batch_A" from my repository and named it "Job_Batch_B".

Here is what happens:

 

 

 

Schema.png

Diagram of duplication of a Job Batch

 

 

Job_Batch_B is duplicated, but the inherited reusable objects are not copied: what gets created are references to the original objects, not duplicated objects.

 

To be more specific, if I make a change in my Job_Batch_B, on Workflow_A or on one of its dataflows, these changes are also reflected in Job_Batch_A.

Only the reusable objects inherited from the duplicated object are references to the original objects! The copied object itself (here a batch job) really has been duplicated.

 

So be careful when you make a copy of your jobs, workflows and dataflows.


To further explain:

 

Job_Batch A ≠ Job_Batch B

Job_Batch_A Reusable objects = Job_Batch_B Reusable objects.

 

The logic may seem strange and complex, but it is simpler than it seems. To better illustrate this:

- If I remove Workflow_A from my Job_Batch_B, it will not also be removed from Job_Batch_A (and vice versa).

- But if I remove a DataFlow from Workflow_A via my Job_Batch_B, it will also be removed from Job_Batch_A, because the two jobs point to the same Workflow_A object (and vice versa).

 

 

The same logic applies to added components:

- If I add a Workflow to my Job_Batch_B, it will not be added to my Job_Batch_A (and vice versa).

- If I add a DataFlow to Workflow_A via my Job_Batch_B, it will also be added to Job_Batch_A (and vice versa).

 

 

I used as an example a batch job with an inherited workflow and dataflows, but this copy/reference logic for inherited objects remains the same at the lower level, between workflows and their dataflows.

 

 

Job_Batch_example.png

Hierarchy of reusable objects


The main thing to know about this logic:

Duplicating a reusable object creates a new object, but the inherited objects are not duplicated: they are references to the original objects!

 

 

This inheritance logic for reusable objects stops at the dataflows, because the objects contained in a dataflow are non-reusable objects (objects that cannot be reused and are therefore copied). So if I make a copy of a dataflow, the changes I make in the duplicated dataflow are not reflected anywhere else, because I am at the lowest level of reusable objects. This answers the question "If I make a change in a copied dataflow, will it affect other objects?". Please note, however, that not all objects contained in our batch jobs and workflows are reusable objects (e.g. scripts).

 

 

That is the logic of DSD duplication. It is surprising at first glance because it does not follow the traditional "copy/paste" logic we know, but with hindsight it brings a lot of benefits, including the reuse of elements and the propagation of changes.

Now that we understand the logic, we can answer the following question:

"How do I duplicate a job and make it independent?"

 

Let us return to the job illustrated above to explain the procedure to follow.

 

Schema 3.png

 

I want to duplicate Job_Batch_A into a duplicated and independent Job_Batch_B, where I can do whatever I want in Job_Batch_B without impacting Job_Batch_A, and vice versa.

 

 

 

Step 1

 

To duplicate a job, we start by duplicating Job_Batch_A: select the job to copy in the local object library, right-click it, and then choose "Replicate". Name this new job "Job_Batch_B".

 

2015-08-17_14h06_16.png

 

Then import the duplicated batch job into the project area of your choice.

 

 

 

Step 2 :

Delete all of Job_Batch_B's reusable objects (workflows) so that it is completely independent. In this case, just delete Workflow_A. Non-reusable objects are copied, so there is no need to remove them.

Note: If some reusable objects need to keep their dependence on the original object, and you do not mind, you can keep them, but beware of the consequences!

 

 

 

Step 3 :

Once Job_Batch_B is created, create the new workflow; we will name it Workflow_B:

 

2015-08-17_14h11_46.png

 

 

Once Workflow_B is created, we have reached the lowest level of reusable objects and we can place our copies of the dataflows.

 

 

 

Step 4 :

We will duplicate the dataflows that we have in Workflow_A. Again, we do this from the local object library.

 

2015-08-17_14h10_44.png

 

 

Rename them.

 

2015-08-17_14h12_57.png

 

 

 

Step 5 :

Place the duplicated dataflows in the newly created workflow. To place them, click and drag them from the local object library into the workflow.

We now have two identical but independent batch jobs.

 

2015-08-17_14h13_06.png

 

 

 

 

References :

https://scn.sap.com/thread/3762580

https://help.sap.com/businessobject/product_guides/boexir32SP2/en/xi321_ds_designer_en.pdf

Add an attachment to your mail in BODS - a step-by-step process


Hello Techies,

 

I had a requirement to send an email with an attachment in BODS.

I read through some of the articles on SCN, but none of them provided a detailed way of achieving it.

I found the article (Add an attachment to BODS Job Notification email using VB Script) somewhat interesting, but it was not working for me.

So I did a little research and came up with a solution that can be implemented in BODS.

 

Solution:

We'll do it using a VBScript and then call that script from our job.

 

Step 1: Use the code below to create a VBScript file.

Open Notepad, write the code below (making the necessary changes to the placeholder values), and then save it as email.vbs.


Option Explicit
Dim MyEmail

Set MyEmail = CreateObject("CDO.Message")

MyEmail.Subject  = "Subject Line"
MyEmail.From     = "no-reply@yourcompany.com"
MyEmail.To       = "helpdesk@yourcompany.com"
MyEmail.TextBody = "This is the message body."
MyEmail.AddAttachment "attachment file path"    ' NO EQUAL SIGN HERE

(Note: Attachment file path - this has to be a shared directory or location that is accessible by the DS Job Server. A common mistake is to include an equals sign ("=") with AddAttachment, which results in an error.)

MyEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2

'SMTP Server
MyEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "smtp relay server name"

'SMTP Port
MyEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25

MyEmail.Configuration.Fields.Update
MyEmail.Send

Set MyEmail = Nothing

 

 

Step 2: In the job, place a script (e.g. Script_Email) that calls the email.vbs file:

                    exec('cscript','filepath\email.vbs', 8);

where filepath is the directory in which email.vbs is located.
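
If you want the job to react when the mail script fails, you can capture the return value of exec() in a variable and check it. A minimal sketch in Data Services script, assuming a global varchar variable $G_MailResult and the same placeholder path as above, and assuming that with flag 8 the returned string starts with the command's return code:

# Call the VBScript and keep whatever exec() returns.
$G_MailResult = exec('cscript', 'filepath\email.vbs', 8);
print('email.vbs result: ' || $G_MailResult);

# Fail the job explicitly if cscript did not finish with return code 0.
if (substr($G_MailResult, 1, 1) <> '0')
begin
    raise_exception('Sending of the notification email failed: ' || $G_MailResult);
end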

Export each object to individual ATL

Hello Readers

 

Ever ended up in a situation to export each object to a separate ATL file?

Like, you have 500 jobs in a repository and you want to export it to 500 ATL files.

 

The classic way is to export each object one by one from the Designer, which is error-prone and time-consuming.

Here is the flow chart for an alternative way, which makes use of al_engine:

ObjectsExtractor.jpg

Well, that is pretty simple. Let's go through each step in more detail.

 

 

Reading user inputs:

 

  1. Get login details of repository
    1. Repository host name
    2. Repository database name
    3. Repository database user name
    4. Repository database Password
  2. Get object type, which can be one of the following.
    1. J - For Jobs
    2. W - For Workflows
    3. D - For Dataflows
    4. F - For Flatfiles
    5. S - For Datastores
    6. C - For Custom functions
  3. Get the folder location to store exported files

 

 

Loading all the object names of required type:

 

These are the queries, one of which needs to be executed against the repository database depending on user input.

 

Type | Name            | Query
J    | Job             | SELECT NAME FROM [AL_LANG] WHERE ([OBJECT_TYPE] = 0 AND [TYPE] = 0) AND ([NAME] not in ('CD_JOB_d0cafae2','di_job_al_mach_info'))
W    | Workflow        | SELECT NAME FROM [AL_LANG] WHERE ([OBJECT_TYPE] = 0 AND [TYPE] = 1) AND ([NAME] not in ('CD_JOB_d0cafae2','di_job_al_mach_info'))
D    | Dataflow        | SELECT NAME FROM [AL_LANG] WHERE ([OBJECT_TYPE] = 1 AND [TYPE] = 0) AND ([NAME] not in ('CD_DF_d0cafae2','di_df_al_mach_info'))
F    | Fileformat      | SELECT NAME FROM [AL_LANG] WHERE ([OBJECT_TYPE] = 4 AND [TYPE] = 0) AND ([NAME] not in ('di_ff_al_mach_info','Transport_Format'))
S    | Datastore       | SELECT NAME FROM [AL_LANG] WHERE ([OBJECT_TYPE] = 5 AND [TYPE] = 0) AND ([NAME] not in ('CD_DS_d0cafae2','Transport_Format'))
C    | Custom function | SELECT FUNC_NAME NAME FROM [ALVW_FUNCINFO]

 


Generating al_engine.exe commands

 

Commands for al_engine should be generated and executed in a loop for every row returned by one of the above listed query.

 

Example command-line for SQL server repository will look like this:

"%Link_Dir%\bin\al_engine.exe" -NMicrosoft_SQL_Server -passphraseATL -U<SQLUN>-P<SQLPWD>-S<SQLHost>-Q<SQLDB>-Xp@<ObjectType>@<path>\<RepoObject>.atl@<RepoObject>@D

 

Text within angle brackets is a placeholder:

  • <SQLUN>, <SQLPWD>, <SQLHost>, <SQLDB>, <ObjectType> and <path> are the parameters provided by the user.
  • <RepoObject> is the loop variable, which is nothing but the output of the query.

 

Windows users can implement this in VBScript; that served us best.

 

 

In case you don't want to write any code, you can create a simple command generator in Excel, as shown in the screenshot.

Capture.PNG

You will need the object names before proceeding, which can easily be obtained by executing the given queries.

 

Formula used in cell B11 is

="""%Link_Dir%\bin\al_engine.exe"" -NMicrosoft_SQL_Server -passphrase" & $B$7 & " -U" & $B$3 & " -P" & $B$4 & " -S" & $B$1 & " -Q"&$B$2 & " -Xp@" & $B$5 & "@""" & $B$6 & "\" & A11 & ".atl""@" & A11 & "@D"

 

Copy all the generated commands and paste them into a command prompt. The objects will be exported one by one to their individual ATL files.

 

Hope this helps you save time and effort.

Cheers

Different ways to execute/call Stored Procedure in BODS


Hi All,

 

Here are some ways you can call/execute a Stored Procedure in BODS -

 

       1. One way is to create a stored procedure in the database and then call it in the job using a script like the one below (see the parameterized sketch after this list).

              

               sql('datastore','exec Stored_Procedure_Name');

 

      2. Another way to achieve it is to import your stored procedure as a function:

               Go to Datastore -> Functions,

               right-click on Functions, choose Import, provide your stored procedure name and import it.

 

      3. If you have already imported the stored procedure, you can also call it through the lookup function in a Query transform:

               Once imported, you will be able to see the datastore name in the lookup; select the datastore and then you'll see the stored procedure name.



Note: You can also refer to this article for step-by-step information - Execute Stored Procedure from BODS and then start data extraction from SQL Table
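
As a small illustration of the first option, the statement passed to sql() can also carry parameter values by concatenating them into the string. A sketch, assuming a hypothetical procedure usp_Load_Stage with one varchar parameter, a datastore named DS_STAGE and a global variable $G_Load_Date:

# Build the load date and execute the procedure with it as a parameter;
# varchar values are wrapped in escaped single quotes (\').
$G_Load_Date = to_char(sysdate(), 'YYYY.MM.DD');
sql('DS_STAGE', 'exec usp_Load_Stage \'' || $G_Load_Date || '\'');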



Regards,

Gokul

+91-7588589933

SAP BODS Implementation options


The analysis below was done to implement SAP Data Services in an SAP landscape where the main purpose of Data Services is ETL. Data Services is used solely to extract data from non-SAP and flat-file sources, transform it, and load it into SAP BW on HANA. This document contains the information needed to set up the Data Services infrastructure, sizing and licensing.

The information was prepared with reference to Data Services 4.2.

 

Why SAP Data Services ?

 

SAP Data Services delivers a single enterprise-class solution for data integration, data quality, data profiling, and text data processing that allows you to integrate, transform, improve, and deliver trusted data to critical business processes.


Data Services deployment prerequisites

In a typical SAP Data Services landscape, you must first install one of the following products. These products provide platform services such as security, scalability, and high availability for Data Services:

  • SAP BusinessObjects Information platform services (IPS), if you only want to use the features of Data Services or Information Steward

  • SAP BusinessObjects Business Intelligence platform (BI platform), if you also want to use Business Intelligence clients such as Web Intelligence documents or Crystal Reports


DS1.PNG


Data Services deployment options


ds2.PNG

Below is the best practice:

ds3.PNG

 

 

SAP Data Services 4.2 compatibility with BIP and IPS

ds4.PNG

For more information, please refer to SAP Note 1740516 - SAP Data Services 4.x and SAP Information Steward 4.x compatibility with SAP Business Intelligence platform and Information platform services for active releases.

 

Data Services supported operating system

ds5.PNG

Data Services supported DBMS for Database Repository

ds6.PNG

ds7.PNG

For more information, please visit http://service.sap.com/PAM.

Data Services minimum Hardware requirements

These are the minimum hardware requirements to run Data Services on a single machine and in a distributed environment.


Single machine

The following are total requirements to install all BI/IPS and Data Services products in one system:

Minimum Hardware Requirements

4 processors (or 2 dual core processors) with minimum 2 GHz recommended

16-18 GB RAM Recommended

Disk Space Requirements (not including Operating System)

20 GB for default installation with English language only installed

23 GB for default installation with all languages


Distributed Landscape

Data Services Designer (supported on Windows only)

1 processor with a minimum of 1 GB RAM (4 GB recommended)

4 GB free disk space

Data Services Job Server

2 processors with a minimum of 2 GB RAM (8 GB recommended)

4 GB free disk space

Recommended minimum pageable cache directory size: 4 GB (8 GB recommended)

Data Services Management Console

1 processor with at least 512 MB RAM (1 GB recommended)

150 MB free disk space

 

Note: Data Services Management Console must be deployed with

SAP BusinessObjects BI Web Application component

Data Services Adaptive Processing Server (APS) Services

1 processor with a minimum of 2GB free RAM

1 GB free disk space (plus additional disk space for referential data, 2.71 to 9.34 GB)

  Data Services APS Services requires SAP BusinessObjects BI or Information platform services to deploy.


Before sizing Data Services, you need to consider the following points for successful sizing.

  • Please reference the Product Availability Matrix (PAM) for information regarding minimum requirements.
  • Please reference the Data Services Performance Optimization Guide in order to understand how your job design impacts the resources you need.
  • Run some test jobs in a sandbox environment, bench-mark under different loads, and then make estimates for individual jobs.
  • Carefully study your ETL needs. Your ETL window and how much data of what type and in which manner has to be integrated, cleansed and moved, as well as availability requirements of your system will impact appropriate sizing.




Data Services License

SAP BODS licenses are available in different formats; a few are listed below. For more information, please refer to the SAP standard documents.


SAP Data Services, enterprise edition

The total number of Cores licensed represents the maximum total cumulative Cores on which all of the Software included in SAP Data Services, enterprise edition may be installed and Used, excluding SAP Power Designer Enterprise Architect and SAP Replication Server which do not count against total Cores. SAP Data Services, enterprise edition includes ten (10) Concurrent Sessions of SAP Power Designer Enterprise Architect, and SAP Replication Server that may be deployed on a separate server with a maximum of two (2) Cores.


Note: Directories are not included and must be licensed separately.


SAP Business Objects Enterprise Information Management Solutions


SAP Business Objects Data Services. The following is included in each license of the SAP Business Objects Data Services:

Five Named Users of SAP Business Objects Data Insight (except for licenses bundled or otherwise provided in combination with or for use with a third party product)

Runtime license for 2 CPU licenses of SAP Business Objects Information Steward. Use of the Business Objects Information Steward is limited to Cleansing Package Builder and the Basic and Advanced Profiling capabilities that are contained in Data Insight

One license of each of Real Time Transactional Processing, Data Source Web Service Access, Multi-user Team Development and Grid Computing

Database Interface licenses to an uncapped number of Types of databases

Salesforce.com Technology Interface

JMS Technology Interface

 

Note: For more information on License of Data service with combination of other software please refer SAP Standard document


***********************************************************************************************************

The above information has been collected from the BODS Master Guide, various SAP Notes, and the BODS Performance Optimization Guide.

Step by step data loading from BODS to BW target



Configurations at BW system:

 

1) Log on to the SAP BW system.
2) Enter transaction code 'SALE' to create a new logical system:

1.jpg

3) To create a logical system, choose Define Logical System.

  • Enter a name for the logical system that you want to create.
  • Enter a description of the logical system.
  • Save your entries.

4) Go to Transaction RSA1 to create RFC connection.

5) Select Source Systems in the Modeling navigation pane on the left.

6) Navigate to BO DataServices right click and select create.

2.jpg

7) Enter Logical System Name and Source System Name as shown above and hit Continue.

3.jpg

8) Data Services will start an RFC server program and indicate to SAP BI that it is ready to receive RFC calls. To identify itself as the RFC server representing this SAP BI source system, a keyword is exchanged; in the screenshot above it is "BODREP". This is the Registered Server Program under which the Data Services RFC server will register itself with SAP. Therefore, provide the same Program ID that you want to use for the call of the RFC server on the Data Services side. All other settings for the source system can remain at the default settings.
To complete the definition of the source system, save it.

4.jpg

NOTE: We have to use the same Program ID while creating RFC connection in Management Console(BODS).

BO Data Service - Configure a RFC connection

 

  1. Log on to the SAP data services management console system.

5.jpg

2     Expand to the new "SAP Connections" node and open the "RFC Server Interface" item. In the Configuration tab a new RFC Server is added so that it can register itself inside the SAP System with the given PROGRAM_ID.

6.jpg 

3     Start the RFC server from the 'RFC Server Interface Status' tab:

7.jpg

4     Go to BW and check the connection :

8.jpg

  It will show a message like the one below:

9.jpg

 

Creating BW source:

  1. Double click on BODS connection :

10.jpg

   2    Right-click on the header and create a new application component (here it is ZZ_EMPDS):

11.jpg

12.jpg

   3    Right click on application component and create a new datasource:

13.jpg

   4    Fill the information for datasource as shown below :

         General Info. Tab

14.jpg

   Extraction Tab

15.jpg

   Fields Tab: Here we’ll define the structure of the BW target and save it.

16.jpg

   5   Now, BW will automatically create a new InfoPackage as shown below :

17.jpg

 

 

Creating BODS Datastore and Job:

1     Right click and create a new data store for BW target and fill the  required BW system detail as shown below :

18.jpg

2     Right-click on Transfer Structure and import the datasource (here it is the transaction datasource 'ZBODSTGT'):

19.jpg

20.jpg

3     Right click on File format and create a new file format as shown below :

21.jpg

4     Create a BODS job where Source is flat file and target is BW data source(Transfer structure) as shown below :

22.jpg

Where query mappings are as shown below:

23.jpg 

 

Execute the job:

A BODS job that loads data into BW can be executed from both systems, the BODS Designer and the BW system.

Before executing the job, we have to make the following configuration in the BW InfoPackage:

Go to BW, double-click on the InfoPackage for the respective datasource (ZBODSTGT), fill in the "3rd party selection" details as below, and save it.

Repository    : BODS repository name

JobServer     : BODS running jobserver name

JobName     : BODS job name

24.jpg

 

 

Job execution from the BW system: Right-click and execute the InfoPackage. It will trigger the BODS job, which will load the data into the BW datasource.

25.jpg

 

OR

Double-click on the InfoPackage, go to the 'Schedule' tab and click Start:

26.jpg

Job Execution from BODS designer: Go to BODS designer, Right click on the BODS job and Execute.

27.jpg

28.jpg

 

 

Verifying Data in BW :

  1. Go to the datasource, right-click and choose Manage:

29.jpg

When we execute the job, it will generate a request for data extraction from BODS :

30.jpg

2     Select the latest request and click on PSA Maintenance, which will show the data in the target datasource.

 

After loading data into the BW datasource, it can be mapped and loaded to any BW target, such as a DSO or a cube, using a process chain set in the 'Schedule' tab of the InfoPackage:

Go to the schedule options and enter the process chain name in 'AfterEvent'.



Execution of jobs and retrieve logs


This tutorial provides an overview of executing a job from the SAP Data Services Designer: the steps to run jobs, debug errors and change Job Server options. It also shows how to view the logs from the SAP Data Services Administration Console.

 

1. Overview of job execution
It is possible to execute jobs in three different ways. Depending on your needs, you can configure:
• Immediate jobs
Designer launches batch jobs and real-time jobs and executes them immediately from Designer. For these jobs, both Designer and the designated Job Server (where the job runs, usually on the same computer) must be running. You will probably run immediate jobs only during the development cycle.

• Scheduled jobs
Batch jobs are scheduled. To schedule a job, use the Administrator or a third-party scheduler.
When jobs are scheduled by third-party software:
o The job is started outside the software.
o The job runs from a batch file (or shell script on UNIX) that has been exported from the software.
When a job is invoked by a third-party scheduler:
o The corresponding Job Server must be running.
o Designer does not need to be running.

• Services
Real-time jobs are configured as continuous services that listen for requests from an Access Server and process the requests as they are received. Use the Administrator to create a service from a real-time job.

2. Preparing to run jobs

Validating jobs and job components
You can explicitly validate jobs and their components as you create them by:

 

Icon | Description
2015-09-10_16h45_40.png | Click Validate All in the toolbar (or select Validate > All Objects in View from the Debug menu). This checks the syntax of the object definition for the active workspace and of all objects called recursively from the active workspace view.
2015-09-10_16h46_28.png | Click Validate Current View in the toolbar (or select Validate > Current View from the Debug menu). This checks the syntax of the object definition for the active workspace.

 

 

You can set Designer options (Tools --> Options --> Designer --> General) to validate jobs started in Designer before job execution.

The default is not to validate. The software also validates jobs before exporting them.

If, during validation, the software discovers an error in an object definition, it opens a dialog box indicating that an error exists and then opens the Output window to display the error.

If there are errors, double-click the error in the Output window to open the editor of the object containing the error.

If you cannot read the complete error text in the window, you can access additional information by right-clicking the error in the list and selecting View from the context menu.

 

2015-09-10_16h50_16.png

 

Error messages have the following severity levels:

 

Severity | Description
2015-09-10_17h03_41.png | Informational message only; it does not prevent the job from running. No action is required.
2015-09-10_17h04_03.png | The error is not severe enough to stop job execution, but you may get unexpected results. For example, if the data type of a source column in a data flow does not match the data type of the target column, the software alerts you with a warning message.
2015-09-10_17h04_14.png | The error is severe enough to stop job execution. You must fix the error before the job will run.

 

Ensure that the Job Server is running

Before running a job (as an immediate or scheduled task), make sure that the Job Server associated with the repository is running. When Designer starts, it displays the status of the Job Server for the repository to which you are connected.

 

Icon | Description
2015-09-10_17h10_04.png | The Job Server is running
2015-09-10_17h10_21.png | The Job Server is inactive

 

3. Setting options for job execution

Options for jobs include debugging and monitoring. Although these are object options (they affect how the object works), they are set either in the Properties window or in the Execution Properties window associated with the job.
Execution options for jobs can be defined for a single instance or as a default.

  • The right-click Execute menu sets options for a single execution only and overrides the default settings.
  • The right-click Properties menu sets the default settings.

 

Set execution options for each job execution

    • In the Project area, right-click the job name and select Properties.
    • Select the options in the Properties window.

 

2015-09-10_17h20_55.png

 

 

Executing jobs as immediate tasks

Immediate or "on demand" tasks are initiated from Designer. Both Designer and the Job Server must be running for the job to execute.

  • In the Project area, select the job name.
  • Right-click and select Execute.

The software prompts you to save all the objects which have changes that have not been recorded.

2015-09-10_17h26_58.png

 


 

2015-09-10_17h28_59.png

Nested data management - XML File to Dimension Table


In this tutorial, learn how to design a batch job that extracts data from a hierarchical XML file, transforms the data, and loads it into an MS SQL Server 2012 table.

 

Requirements:

  • Access to an MS SQL Server 2012 database (or equivalent)
  • An XML file containing the raw data
  • Version: SAP Data Services 4.2
  • Application: Data Services Designer

2015-09-10_17h37_53.png

The steps that constitute this tutorial are:

 

  • Added "MtrlDim" job, work and dataflows
  • Import a DTD (definition of an XML schema format)
  • Definition of the data stream "DF_MtrlDim"
  • Validation of the data stream
  • Running the job

 

Adding components "Work" and "Data" flows in the Job design space

 

procedure

 

  • In the project area, create a new job entitled "JOB_MtrlDim":
  • right-click and select "New batch job".

2015-09-10_17h42_14.png

Create a "workflow"

 

In the project area, click the created job.

Select the "Workflow" icon in the tool palette.

Place the workflow in the job workspace and name it "WF_MtrlDim".

 

 

2015-09-10_17h46_13.png

The EIM Bulletin


Purpose

 

The Enterprise Information Management Bulletin (this page) is a timely and regularly-updated information source providing links to hot issues, new documentation, and upcoming events of interest to users and administrators of SAP Data Quality Management (DQM), SAP Data Services (DS), and SAP Information Steward (IS).

 

To subscribe to The EIM Bulletin, click the "Follow" link you see to the right.

 

Hot Issues

 

Best Practices for upgrading older Data Integrator or Data Services repositories to Data Services 4.2

  • Finally upgrading your old version of Data Integrator or Data Services? Please reference the guide above.

 

 

Latest Release Notes

(updated 2015-09-10)

  • 2129507 - Release Notes for DQM SDK 4.2 Support Pack 4 Patch 1 (14.2.4.772)
  • 2192027 - Release Notes for SAP Data Services 4.2 Support Pack 4 Patch 3 (14.2.4.873)
  • 2195658 - Release Notes for SAP Data Services 4.2 Support Pack 5 Patch 1 (14.2.5.894)
  • 2192015 - Release Notes for SAP Information Steward 4.2 Support Pack 4 Patch 3 (14.2.4.836)
  • 2195665 - Release Notes for SAP Information Steward 4.2 Support Pack 5 Patch 1 (14.2.5.851)

 

New Product Support Features
(Coming Soon)

 

 

Selected New KB Articles and SAP Notes

(updated 2015-08-28)

  • 2209609 - New built-functions are missing after repository upgrade - Data Services 4.2 SP5
  • 2191581 - Error: no crontab for userid
  • 2194903 - Getting error "CMS cluster membership is incorrect" from Data Services Designer
  • 2206478 - How to change location of pCache - Information Steward 4.x
  • 2198166 - Designer use of views as target only possible now from datastore library
  • 2194994 - Unable to start Data Services job server - SAP Data Services 4.x

 

Your Product Ideas Realised!

(new section 2015-06-25)

 

Enhancements for EIM products suggested via the SAP Idea Place, where you can vote for your favorite enhancement requests, or enter new ones.

 

Events

(To be determined)


New Resources

(To be determined)


Didn't find what you were looking for? Please see:


Note: To stay up-to-date on EIM Product information, subscribe to this living document by clicking "Follow", which you can see in the upper right-hand corner of this page.

Concatenating Multiple Row Column Values into Single Row Delimited Value


Description

This document describes a couple of approaches to a common problem encountered in day-to-day Data Services development, namely how to concatenate the values of a column across 2 or more rows into a single row with the values given as a delimited set of values in a single column.

The solution given here is SQL Server specific though I am sure similar solutions are possible in other databases.

To demonstrate the goal, given the data below in the table PAYMENT_METHODS:-

Master_Source | Primary Key | Company Code | Payment Method
MDS_001       | 00001       | IE01         | C
MDS_001       | 00001       | IE01         | D

The desired outcome is:-

Master_Source | Primary Key | Company Code | Payment Methods
MDS_001       | 00001       | IE01         | C, D

The 2 approaches that will be described here are specific to SQL Server and they are:-

  • Use a User Defined Function that returns a scalar value containing the delimited values
  • Use FOR XML Path

 

References

 

Much of the material here is based upon Aaron Bertrand's excellent piece http://sqlperformance.com/2014/08/t-sql-queries/sql-server-grouped-concatenation

User Defined Function

The user defined function given below uses the SQL Server COALESCE() function to construct a string containing the PAYMENT_METHOD values delimited by [comma][space], returning that string as a scalar.

CREATE FUNCTION dbo.Get_List_ZWELS ( @Master_Source nvarchar(20), @Primary_Key nvarchar(50), @Company_Code nvarchar(4) )
RETURNS NVARCHAR(4000) WITH SCHEMABINDING AS
BEGIN
DECLARE @s NVARCHAR(4000);

SELECT @s = COALESCE(@s + N', ', N'') + PAYMENT_METHOD
     FROM dbo.PAYMENT_METHOD
     WHERE
          MASTER_SOURCE = @Master_Source and
          PRIMARY_KEY = @Primary_Key and
          COMPANY_CODE = @Company_Code
     ORDER BY PAYMENT_METHOD

RETURN (@s);

END
GO

The COALESCE() function returns the first non-NULL value encountered in the list of expressions given.

So, walking through the example data: the first time COALESCE() is executed, @s is NULL, so COALESCE() returns N''; to this is concatenated the first PAYMENT_METHOD, so @s = 'C'.

On the second execution @s is not NULL, so COALESCE() returns @s + N', '; to this is concatenated 'D', so @s = 'C, D'.

There are only 2 rows in the sample dataset, so @s is now returned to the caller.

If we were to execute the SQL

SELECT

     MASTER_SOURCE, PRIMARY_KEY, COMPANY_CODE,

    PAYMENT_METHODS = dbo.Get_List_ZWELS( MASTER_SOURCE, PRIMARY_KEY, COMPANY_CODE )

FROM

     PAYMENT_METHOD

Then what would be returned is:

MASTER_SOURCE | PRIMARY_KEY | COMPANY_CODE | PAYMENT_METHODS
MDS_001       | 00001       | IE01         | C, D
MDS_001       | 00001       | IE01         | C, D

Simply adding a GROUP BY to this gives us the desired output:-

SELECT

     MASTER_SOURCE, PRIMARY_KEY, COMPANY_CODE, PAYMENT_METHODS = dbo.Get_List_ZWELS( MASTER_SOURCE, PRIMARY_KEY, COMPANY_CODE )

FROM

     PAYMENT_METHOD

group by

     MASTER_SOURCE, PRIMARY_KEY, COMPANY_CODE

Translating this approach into something usable from Data Services is as simple as:-

  • Import the Get_List_ZWELS() function into the Datastore that contains the PAYMENT_METHOD table
  • Create a dataflow using the PAYMENT_METHOD table as a source
  • Add a group by to the initial Query, group by MASTER_SOURCE, PRIMARY_KEY and COMPANY_CODE and add the aggregating count(*) to the output schema.
  • In the next Query insert the Get_List_ZWELS() function call into the Query transform passing the parameters MASTER_SOURCE, PRIMARY_KEY and COMPANY_CODE
  • Rename the return value as PAYMENT_METHODS
  • Output to target table.

This will give the desired output of:

MASTER_SOURCE | PRIMARY_KEY | COMPANY_CODE | PAYMENT_METHODS
MDS_001       | 00001       | IE01         | C, D
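
For illustration, the output mapping of the PAYMENT_METHODS column in that second Query transform might look like the sketch below. The datastore name DS_TARGET, the owner dbo and the upstream Query transform name are assumptions; use the names under which the function and table were actually imported:

DS_TARGET.dbo.Get_List_ZWELS(Query.MASTER_SOURCE, Query.PRIMARY_KEY, Query.COMPANY_CODE)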

FOR XML Path

The second method uses the SQL Server FOR XML feature.

SELECT
     MASTER_SOURCE, PRIMARY_KEY, COMPANY_CODE,
     PAYMENT_METHODS = STUFF((SELECT N', ' + PAYMENT_METHOD
                              FROM dbo.PAYMENT_METHOD AS p2
                              WHERE
                                   p2.MASTER_SOURCE = p.MASTER_SOURCE and
                                   p2.PRIMARY_KEY = p.PRIMARY_KEY and
                                   p2.COMPANY_CODE = p.COMPANY_CODE
                              ORDER BY PAYMENT_METHOD
                              FOR XML PATH(N''), TYPE).value(N'.[1]', N'nvarchar(max)'), 1, 2, N'')
FROM dbo.PAYMENT_METHOD AS p
GROUP BY MASTER_SOURCE, PRIMARY_KEY, COMPANY_CODE

Gives the same output as the final example in the UDF section above.

The difference is in the implementation in Data Services.

To use the FOR XML Path in Data Services it is necessary to specify the above SQL in an SQL transform which serves as the source data in a dataflow.

One of the drawbacks of this FOR XML approach is that if the data being processed contains character(s) that it is not possible to represent in XML then the SELECT will fail. An example of this that I have actually experienced is the character x'1A' embedded in the data. This will result in the error:

Msg 6841, Level 16, State 1, Line 51
FOR XML could not serialize the data for node 'NoName' because it contains a character (0x001A) which is not allowed in XML. To retrieve this data using FOR XML, convert it to binary, varbinary or image data type and use the BINARY BASE64 directive.

Summary

Aaron's article - referenced above - indicates that, from a performance perspective, the FOR XML approach will outperform the UDF approach.

From a Data Services point of view, I think that the UDF approach is more visible and more maintainable.
