
Create Index is taking too long to complete

In my scenario, we load a full load from one DataSource and a delta load from a DSO (the DSO itself is fed by different DataSources) into the same target, 0FIAP_C30.
Because of the delta records we never delete data from the cube, so the record count in the cube grows day by day.
Due to this, the Create Index variant is taking a long time, around 2:30 hours.

Process chain flow:

Data from the 0FI_AP_30 DataSource is loaded into the cube directly with a full update,
while the data from 0FI_AP_3 and 0FI_AP_4 is loaded into a standard DSO with a full update; from that DSO we load the delta into the cube.

Could someone please help me reduce the completion time of the Create Index variant?

Solution:


I hope you are dropping the indexes before you load to the cube; this improves loading performance. Creating secondary indexes on cubes is not mandatory unless your reporting performance has degraded. If there is no such issue, you can remove the drop/build index steps from your process chain altogether. Eventually the data in the cube will grow, and it will become very difficult to maintain in terms of roll-up of aggregates, compression, etc.
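
The drop-before-load pattern described above is generic to relational databases, not BW-specific. A minimal sketch in Python with sqlite3 (the table, index, and column names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cube_fact (doc_no INTEGER, amount REAL)")
con.execute("CREATE INDEX idx_doc ON cube_fact (doc_no)")

def load_with_index_rebuild(rows):
    # Drop the secondary index so the bulk insert does not have to
    # maintain it row by row, then rebuild it once after the load.
    con.execute("DROP INDEX IF EXISTS idx_doc")
    con.executemany("INSERT INTO cube_fact VALUES (?, ?)", rows)
    con.execute("CREATE INDEX idx_doc ON cube_fact (doc_no)")
    con.commit()

load_with_index_rebuild([(i, i * 1.5) for i in range(100_000)])
```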

These are just secondary indexes on the cube. The actual performance of the cube depends on the design of your dimensions and characteristics; you can improve it with aggregates, compression, partitioning, etc.
Creating indexes will always be time-consuming in your case because the cube is loaded with full updates, so the data volume keeps growing rapidly.


It is not mandatory to drop the indexes before loading data into the cube. If your daily delta load is more than roughly 10 % of the data already in the cube (not a hard and fast rule; see the sketch after this list), it makes sense to drop the indexes and recreate them after the load. Also, if the cube is not compressed, index creation takes longer, because each request forms a segment and the index is created on each of these segments. So do the following for your issue:
1) Try loading without dropping the indexes.
2) If you get a DBIF_RSQL_SQL_ERROR during the load, which can happen when the indexes are not dropped, switch to dropping and recreating them.
3) Compress the cube if you are dropping and recreating indexes.
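
The 10 % guideline can be written as a small decision helper. This is only an illustration of the rule of thumb from the answer above; the function name and threshold are hypothetical, not any SAP API:

```python
def should_drop_index(cube_rows: int, delta_rows: int,
                      threshold: float = 0.10) -> bool:
    """Rule of thumb: drop and recreate the index only when the
    delta is a sizeable fraction of the data already in the cube."""
    if cube_rows == 0:
        return True  # empty cube: rebuilding the index is cheap anyway
    return delta_rows / cube_rows > threshold

# Example: a 1 % daily delta on a large cube -> keep the index in place.
print(should_drop_index(cube_rows=50_000_000, delta_rows=500_000))  # False
```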

If the data volume is high, index creation is going to take time.
Generally it is advisable to delete the indexes before the data load and rebuild them afterwards; the benefit is a faster load, since inserting into an already indexed table is slower than inserting into one without indexes.
Alternatively, keep the indexes as they are and load the data; here you are accepting a longer load time in exchange for the saved index-creation time.
One more thing to consider in the latter case: a deadlock can arise if your DTP uses more than one batch process, so set it to 1 to avoid an Oracle deadlock during loading (this further increases the data load time).
You have to make the decision based on your scenario and time window.
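
To see the trade-off described above, a rough benchmark with Python's sqlite3 can compare the two strategies. Absolute timings depend entirely on the database and hardware, but inserting into an indexed table is normally the slower path:

```python
import sqlite3
import time

rows = [(i, i % 97) for i in range(500_000)]

def timed_load(keep_index: bool) -> float:
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE fact (k INTEGER, v INTEGER)")
    con.execute("CREATE INDEX idx_k ON fact (k)")
    start = time.perf_counter()
    if not keep_index:
        con.execute("DROP INDEX idx_k")                # drop before load
    con.executemany("INSERT INTO fact VALUES (?, ?)", rows)
    if not keep_index:
        con.execute("CREATE INDEX idx_k ON fact (k)")  # rebuild after load
    con.commit()
    return time.perf_counter() - start

print(f"load with index kept:      {timed_load(True):.2f}s")
print(f"drop, load, rebuild index: {timed_load(False):.2f}s")
```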
