Tuesday, 12 May 2015

Write Optimized DSO

Overview

The concept of the Write-Optimized DSO was introduced in BI 7.0. Unlike a Standard DSO, a Write-Optimized DSO has only one relational table, the active table, and no SIDs are generated during loading. As a result, loading data from a DataSource into a Write-Optimized DSO takes less time and uses less disk space.

Business Case

A data store is required for storing data at a detailed level, with immediate reporting or further update capability. No overwrite functionality is required.

Limitation of Standard DSO

  • A Standard DSO allows you to store information at a detailed level; however, the activation process is mandatory.
  • Reporting or further updates are not possible until activation is completed.
     

Write Optimized DSO - Properties

  • Primarily designed for the initial staging of source system data.
  • Business rules are only applied when the data is updated to additional InfoProviders.
  • Data is stored in its most granular form.
  • Can be used for faster uploads.
  • Records with the same key are not aggregated but inserted as new records, since every record receives a new technical key (see the sketch after this list).
  • Data is immediately available in the active version for further processing.
  • There is no change log table or activation queue.
  • Data is saved quickly.
  • Data is stored at request level, as in the PSA table.
  • Every record has a new technical key; only inserts are performed.
  • It allows parallel loads, which saves data loading time.
  • It can be included in a process chain, and no activation step is needed for it.
  • It supports archiving.
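
The following is a minimal sketch in plain Python (not SAP code; the field names, request IDs and package numbering are made up for illustration) of the insert-only behaviour described above: a Standard DSO overwrites records that share the same semantic key, while a Write-Optimized DSO keeps every record under a generated technical key.

standard_dso = {}          # keyed by semantic key -> the latest record wins
write_optimized_dso = {}   # keyed by technical key -> every record is kept

def load_request(request_id, records, package_size=2):
    """Simulate one load request split into data packages (illustrative numbering)."""
    for record_no, rec in enumerate(records, start=1):
        # Standard DSO: a record with the same semantic key overwrites the old one.
        standard_dso[rec["order"]] = rec

        # Write-optimized DSO: technical key = (request, data package, record number).
        package_no = (record_no - 1) // package_size + 1
        technical_key = (request_id, package_no, record_no)
        write_optimized_dso[technical_key] = rec

load_request("REQU_0001", [
    {"order": "4711", "amount": 100},
    {"order": "4711", "amount": 150},   # same semantic key, new technical key
    {"order": "4712", "amount": 200},
])

print(len(standard_dso))         # 2 -> the second '4711' record overwrote the first
print(len(write_optimized_dso))  # 3 -> all records kept, so the history is retained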

Write-Optimized DSO - Semantic Keys

The semantic key is used to identify erroneous or duplicate incoming records.
Semantic keys protect data quality: all subsequent records with the same key are written to the error stack along with the incorrect data records.
To process the erroneous or duplicate records, a semantic group is defined in the DTP.
Note: if we are sure there are no incoming duplicate or erroneous records, semantic groups need not be defined.
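
As a rough illustration of this duplicate handling, here is a small sketch in plain Python (not SAP code; the field names are invented) of a load that checks the semantic key and diverts later duplicates to an error stack, roughly as a DTP with a semantic group would:

def load_with_uniqueness_check(incoming, semantic_key=("order",)):
    """Keep the first record per semantic key; divert later duplicates."""
    active_table, error_stack, seen = [], [], set()
    for rec in incoming:
        key = tuple(rec[field] for field in semantic_key)
        if key in seen:
            error_stack.append(rec)      # duplicate semantic key -> error stack
        else:
            seen.add(key)
            active_table.append(rec)
    return active_table, error_stack

ok, errors = load_with_uniqueness_check([
    {"order": "4711", "amount": 100},
    {"order": "4711", "amount": 150},   # duplicate -> goes to the error stack
    {"order": "4712", "amount": 200},
])
print(len(ok), len(errors))  # 2 1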

 

Write Optimized DSO - Data Flow

1. Construct the data flow model.
2. Create the DataSource.
3. Create the transformation.
4. Create the InfoPackage.
5. Create the DTP.

 

Write-Optimized DSO - Settings

If we do not select the "Do not Check Uniqueness of Data" check box, the data coming from the source is checked for duplicates; i.e. if a record with the same semantic key already exists in the DSO, the current load is terminated.
If we select the check box, duplicate records are loaded as new records; the semantic keys have no relevance in this case.


When is a Write Optimized DSO Recommended?

  • For faster data loads, DSOs can be configured to be write-optimized.
  • When access to the source system is available only for a short duration.
  • It can be used as a first staging layer.
  • In cases where delta is not enabled in the DataSource, we first load data into a Write Optimized DSO and then perform a delta load to a Standard DSO.
  • When we need to load large volumes of data into InfoProviders, a Write Optimized DSO helps in executing complex transformations.
  • A Write Optimized DSO can be used to fetch history at request level, instead of going to the PSA archive.

Functionality

  • It contains only one table, the active data table (DSO key: Request ID, Data Package number, and Record number).
  • It does not have a change log table or an activation queue.
  • Every record in a Write Optimized DSO has a new technical key, and delta works record-wise.
  • In a Write Optimized DSO, data is stored at request level, as in the PSA table.
  • In a Write Optimized DSO, no SIDs are generated.
  • Reporting on a Write Optimized DSO is possible but is not good practice, as it affects the performance of the DSO.
  • In a Write Optimized DSO, BEx reporting is switched off.
  • A Write Optimized DSO can be included in an InfoSet or a MultiProvider.
  • Performance is better during data loads because there is no activation step involved; the system generates a unique technical key instead.
  • The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID), and the Data Record Number field (0RECORD); see the sketch after this list.
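
To make the technical key concrete, here is a minimal sketch in plain Python (not SAP code; the request IDs, package size and numbering are purely illustrative) showing how the combination of 0REQUEST, 0DATAPAKID and 0RECORD identifies every row uniquely, even when the same document is loaded again in a later request:

from collections import namedtuple

TechnicalKey = namedtuple("TechnicalKey", ["request", "datapakid", "record"])

def keys_for_request(request_guid, record_count, package_size=3):
    """Assign a technical key to each record of one load request."""
    keys = []
    for i in range(record_count):
        keys.append(TechnicalKey(
            request=request_guid,
            datapakid=i // package_size + 1,   # data package number
            record=i % package_size + 1,       # record number within the package
        ))
    return keys

rows = keys_for_request("REQU_A", 5) + keys_for_request("REQU_B", 5)
assert len(rows) == len(set(rows))  # every row has a distinct technical key
print(rows[0], rows[-1])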

 

Points to Remember

  • Generally a Write Optimized DSO is not preferred for reporting, but if we want to use it for reporting, it is recommended to define a semantic key in order to ensure the uniqueness of the data.
  • Write-optimized DSOs can force a check of the semantic key for uniqueness when data is stored.
  • If this option is active and duplicate records are loaded with regard to the semantic key, these are logged in the error stack of the Data Transfer Process (DTP) for further evaluation.
  • If we need to use the error stack in our flow, we need to define the semantic key at the DSO level.
  • A semantic group definition is necessary for parallel loads.

Reporting

If we want to use a write-optimized DataStore object in BEx queries (not preferred), it is recommended to:
1. have a semantic key, and
2. ensure that the data is unique.
The technical key is not visible for reporting, so the object looks like any regular DSO.

Use

Data that is loaded into write-optimized DataStore objects is available immediately for further processing.
They can be used in the following scenarios:

  You use a write-optimized DataStore object as a temporary storage area for large sets of data if you are executing complex transformations for this data before it is written to the DataStore object. The data can then be updated to further (smaller) InfoProviders. You only have to create the complex transformations once for all data.

  You use write-optimized DataStore objects as the EDW layer for saving data. Business rules are only applied when the data is updated to additional InfoProviders.
The system does not generate SIDs for write-optimized DataStore objects and you do not need to activate them. This means that you can save and further process data quickly. Reporting is possible on the basis of these DataStore objects. However, we recommend that you use them as a consolidation layer, and update the data to additional InfoProviders, standard DataStore objects, or InfoCubes.

Structure

Since the write-optimized DataStore object only consists of the table of active data, you do not have to activate the data, as is necessary with the standard DataStore object. This means that you can process data more quickly.
The loaded data is not aggregated; the history of the data is retained. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation remains, however, so that the aggregation of data can take place later in standard DataStore objects.
The system generates a unique technical key for the write-optimized DataStore object. The standard key fields are not necessary with this type of DataStore object. If there are standard key fields anyway, they are called semantic keys so that they can be distinguished from the technical keys. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD). Only new data records are loaded to this key.
You can specify that you do not want to run a check to ensure that the data is unique. If you do not check the uniqueness of the data, the DataStore object table may contain several records with the same key. If you do not set this indicator, and you do check the uniqueness of the data, the system generates a unique index on the semantic key of the InfoObject. This index has the technical name "KEY".

Since write-optimized DataStore objects do not have a change log, the system does not create a delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
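
As a rough illustration of this request-wise delta, here is a minimal sketch in plain Python (not SAP code; the request IDs are invented): without a change log, the "delta" to a connected InfoProvider is simply the set of requests that have not yet been updated to that target.

def delta_requests(loaded_requests, already_posted):
    """Return the requests still to be updated to the connected InfoProvider."""
    return [req for req in loaded_requests if req not in already_posted]

loaded = ["REQU_A", "REQU_B", "REQU_C"]
posted_to_cube = {"REQU_A"}

print(delta_requests(loaded, posted_to_cube))  # ['REQU_B', 'REQU_C']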

Use in BEx Queries

For performance reasons, SID values are not created for the characteristics that are loaded. The data is still available for BEx queries. However, in comparison to standard DataStore objects, you can expect slightly worse performance because the SID values have to be created during reporting.
If you want to use write-optimized DataStore objects in BEx queries, we recommend that they have a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, you may experience unexpected results when the data is aggregated in the query.
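
The following is a small sketch in plain Python (not SAP code; the table and field names are invented) of why reporting is slightly slower here: a write-optimized DSO skips SID handling at load time, so the SID values have to be looked up or created while the BEx query runs.

sid_table = {}  # characteristic value -> surrogate ID (SID)

def get_sid(value):
    """Look up, or create on the fly, the SID for a characteristic value."""
    if value not in sid_table:
        sid_table[value] = len(sid_table) + 1
    return sid_table[value]

loaded_rows = [{"customer": "C100"}, {"customer": "C200"}, {"customer": "C100"}]

# Write-optimized DSO: rows are stored as-is, with no SID handling at load time.
active_table = list(loaded_rows)

# At query time the SIDs are resolved on the fly, which costs some performance.
resolved = [{**row, "customer_sid": get_sid(row["customer"])} for row in active_table]
print(resolved)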

 ********************************************************************************

A Write Optimized DSO is used when a data storage object is required for storing records at the lowest granularity, such as addresses, and when overwrite functionality is not needed. It consists of the active data table only, so no data activation is needed, which speeds up data processing. The data is available immediately for further processing, and the object can be used as a temporary storage area for large sets of data.
The Write-Optimized DSO has been designed primarily as the initial staging area for the source system data, from where the data can be transferred to a Standard DSO or an InfoCube.
 
  1. The PSA receives data unchanged from the source system.
  2. Data is posted at document level; after loading into Standard DSOs, the data is deleted.
  3. Data is posted to the corporate memory write-optimized DSO from the pass-through write-optimized DSO.
  4. Data is distributed from the write-optimized "pass-through" DSO to Standard DSOs as per business requirements.
Write Optimized DSO Properties:
  • It is used for the initial staging of source system data.
  • The data stored is of the lowest granularity.
  • Data loads can be faster since there is no separate activation step.
  • Every record has a technical key, so records are never aggregated; new records are inserted every time.
Creation Of Write-Optimized DSO:
Step 1)
  1. Go to transaction code RSA1
  2. Click the OK button.
Step 2)
  1. Navigate to the Modelling tab -> InfoProvider.
  2. Right click on Info Area.
  3. Click on “Create Data Store Object” from the context menu.
Step 3)
  1. Enter the Technical Name.
  2. Enter the Description.
  3. Click on the “Create” button.
Step 4)
Click on the Edit button of “Type of DataStore Object”.
Step 5)
Choose the Type “Write-Optimized”.
The technical key consists of the Request ID, Data Package, and Record Number. No additional objects can be included under it.
Semantic keys are similar to key fields; however, their uniqueness is not used for overwrite functionality. They are instead used in conjunction with the setting "Do not check uniqueness of data".
The purpose of the semantic key is to identify erroneous or duplicate incoming records.
Subsequent duplicate records are written to the error stack. The records in the error stack can be handled or reloaded by defining a semantic group in the DTP.
Semantic groups need not be defined if there is no possibility of duplicate or erroneous records.
If we do not select the "Allow Duplicate Data Record" check box, the data coming from the source is checked for duplicates, i.e. if a record with the same semantic key already exists in the DSO, the current load is terminated.
If we select the check box, duplicate records are loaded as new records. Semantic keys have no relevance in this case.
Step 6)
Activate the DSO.
