
Partition Count



What many DBAs and SAP Basis admins don't seem to fully grasp is that partitioning is not a static thing. Of course, the handling of partitions is very much like dealing with rather fixed objects such as tables and indexes. The latter you usually set up, use and then leave alone for the rest of their lifetime. Sometimes a reorg may be required, but that's pretty much it.

Partitioned objects, on the other hand, are usually way more 'vivid' (dynamic, volatile, changing... it's hard to find a good matching word for this).
These objects change with the data you store in them.
And this data changes over time.
So your partitioned table from yesterday will be a different one than the one you have today.

In SAP BW we use partitioning for InfoCubes in two ways:
1. The F-fact table is partitioned by request.
Every new request that gets loaded into the InfoCube is stored in its own partition.
That way, we can easily remove requests, e.g. if the data is not correct, or during compression/condensing.

2. The E-fact table CAN be partitioned by a time-InfoObject.
With that, we can improve query and archiving performance whenever these actions are based on a time dimension-InfoObject (which is most often the case).
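
If you want to see both schemes on your own system, the Oracle data dictionary exposes them directly. Here's a minimal sketch, assuming a hypothetical InfoCube Z123456 with fact tables /BIC/FZ123456 and /BIC/EZ123456:

-- One partition per uncompressed request in the F-fact table:
select partition_name, high_value
  from user_tab_partitions
 where table_name = '/BIC/FZ123456'
 order by partition_position;

-- The column the E-fact table is partitioned by (typically the SID
-- column of the chosen time-InfoObject, e.g. 0CALMONTH):
select name, column_name, column_position
  from user_part_key_columns
 where name = '/BIC/EZ123456';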


The problem now is that the first kind of partitioning is done fully automatically.
Whenever a new request is loaded into an InfoCube, the F-fact table gets a new partition and the data is stored in it.
What doesn't happen fully automatically is the removal of these partitions.
To remove a partition from the F-fact table, the corresponding request (and all requests that have been loaded before it) needs to be compressed, or better: condensed, into the E-fact table.
Basically, this operation does nothing else than add up the numbers from the F-fact table partition and the E-fact table, store the result in the E-fact table and then drop the partition from the F-fact table.
Of course, you can no longer remove the data based on the loading request, since it has been summed together with the other data in the E-fact table. On the other hand, this addition doesn't need to be performed at query runtime anymore, and the database can use the partitioning scheme of the E-fact table for a more efficient execution plan.
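
Conceptually, condensing boils down to something like the following SQL sketch. BW of course implements this with its own programs and a lot more bookkeeping; all table, column and partition names here are hypothetical and heavily simplified:

-- Roll one request partition up into the E-fact table. The real run
-- also maintains BW internals such as the reference point.
merge into "/BIC/EZ123456" e
using (select key_time, key_dim1, sum(amount) amount
         from "/BIC/FZ123456" partition (REQ_0000000042)
        group by key_time, key_dim1) f
   on (e.key_time = f.key_time and e.key_dim1 = f.key_dim1)
 when matched then
   update set e.amount = e.amount + f.amount
 when not matched then
   insert (key_time, key_dim1, amount)
   values (f.key_time, f.key_dim1, f.amount);

-- Once the data is in the E-fact table, the request partition
-- can simply be dropped from the F-fact table:
alter table "/BIC/FZ123456" drop partition REQ_0000000042;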


Besides performance issues, having many partitions can lead to multiple problems:
  • aggregate tables tend to have even more partitions than their basic cubes (for technical reasons), so there is a multiplication effect
  • DDL statements that are generated for the F-fact tables can become too large for export/import/migrations or reorganisations on DB level
  • index creation can become very slow with so many partitions, since all indexes on F-fact tables are locally partitioned as well: again a multiplication factor (see the sketch after this list)
  • during attribute change runs, a high number of partitions can lead to crashes, as explained in these notes:
    #1388570 - BW Change Run
    #903886 - Hierarchy and attribute change run
  • it may even happen that change runs or the compression of requests are no longer possible at all if there are too many partitions!
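
To get a feeling for the multiplication effect mentioned above, you can count the local index partitions hanging off each F-fact table. A small sketch, again only valid for the standard name schema:

-- Local index partitions per F-fact table: every table partition
-- is mirrored in each locally partitioned index.
select i.table_name, count(*) index_partitions
  from user_indexes i, user_ind_partitions p
 where p.index_name = i.index_name
   and i.table_name like '/BI_/F%'
 group by i.table_name
 order by index_partitions desc;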
For all these reasons, there has been a recommendation out there for a long time now:

COMPRESS! COMPRESS! COMPRESS!

Note #590370 - Too many uncompressed requests (f table partitions)
I really don't know how many support messages have already been closed by simply compressing the requests.
Because of that, and because it's so simple to figure out whether there are F-fact tables with too many partitions (usually no more than 20 - 30 are recommended), I decided to start a little competition here.
Just run the following little SELECT command on your BW database to get a list of F-fact tables that have more than 50 partitions:
select table_name, substr(table_name, 7) infocube_name, partition_count
  from user_part_tables
 where table_name like '/BI_/F%'
   and partition_count > 50
 order by partition_count desc;

-----------------------------------------------
|TABLE_NAME     |INFOCUBE_NAME|PARTITION_COUNT|
-----------------------------------------------
|/BIC/FZ123456  |Z123456      |         8.279 |  <<< come on, beat this :-)
|/BIC/F123456784|123456784    |           999 |
|/BIC/FTPEDBIF5X|TPEDBIF5X    |           636 |
|/BI0/F0RKG_IC3 |0RKG_IC3     |           375 |
|/BIC/F100197   |100197       |           334 |
|/BIC/FRSTTREP01|RSTTREP01    |           281 |
|/BIC/FRS_C5    |RS_C5        |           253 |
|/BIC/F100184   |100184       |           238 |
|/BIC/F100183   |100183       |           238 |
[...]
-----------------------------------------------
(Be aware that this statement obviously only works for InfoCube tables in the standard name schema /BIC/, /BI0/, /BI... - you can of course adapt it to your naming scheme.)
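For example, if your objects lived in a hypothetical customer namespace /XYZ/, the filter would simply become:

 where table_name like '/XYZ/F%'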
If you like, just post your TOP partition count in the comments section - it would be interesting to see what extreme examples come up...
Although there's no prize to win, you'll at least become aware that this is something to keep an eye on in your BW database.
