Partition count contest!
via Content in SCN by Lars Breddemann on 4/23/12
Let's have a contest!
One of my last blog posts started off a discussion (e.g. Bala posted his thoughts in a blog) about the importance of the partitioning feature for a SAP BW installation. No doubt about it - partitioning is something we use heavily for SAP BW.
Breaking the work down into smaller pieces that are easier to handle is one of the great common problem solving heuristics and it really works wonders in many cases.
Beware, it moves!
What many DBAs and SAP Basis admins don't seem to fully grasp is that partitioning is not a static thing. Of course, the handling of partitions is very much like dealing with rather fixed objects such as tables and indexes. For the latter, you usually set them up, use them and leave them alone for the rest of their lifetime. Sometimes a reorg may be required, but that's pretty much it. Partitioned tables are different: these objects change with the data you store in them.
And this data changes over time.
So your partitioned table from yesterday will be a different one than today's.
For SAP BW, two kinds of partitioning are used:
1. The F-fact tables are partitioned by request.
Every new request that gets loaded into the InfoCube is stored in its own partition.
That way, we can easily remove requests, e.g. if the data is not correct, or during compression (condensing).
2. The E-fact table can be partitioned by a time dimension InfoObject.
With that, we can improve query and archiving performance when these actions are based on that time dimension InfoObject (which is most often the case).
So far so good.
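To see this request-wise partitioning for yourself, you can list the partitions of a single F-fact table via the Oracle data dictionary. A small sketch - the table name /BIC/FZSALES is just a made-up example, use one of your own cubes:

select partition_name, partition_position, num_rows
  from user_tab_partitions
 where table_name = '/BIC/FZSALES'   -- made-up example F-fact table
 order by partition_position;

(num_rows will only be current after statistics have been gathered.)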
The problem now is that the first kind of partitioning happens fully automatically. Whenever a new request is loaded into an InfoCube, the F-fact table gets a new partition and the data is stored in it.
What doesn't happen automatically is that the partitions are removed again.
To remove the partitions from the F-fact table, the corresponding request (and all requests that have been loaded before it) needs to be compressed or, better put, condensed into the E-fact table.
Basically, this operation does nothing else than adding up the numbers from the F-fact table partition and the E-fact table, storing the result in the E-fact table and then dropping the partition from the F-fact table.
Of course, you now cannot remove the data based on the loading request anymore, since it has been summed together with the other data in the E-fact table. On the other hand, this addition no longer needs to be performed at query runtime, and the database can use the partitioning scheme of the E-fact table for a more efficient execution plan.
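Heavily simplified, condensing one request partition boils down to something like the following sketch. BW of course does all this internally (and also merges into already existing E-fact rows); all table, partition and column names here (/BIC/EZSALES, ZSALES_P0042, KEY_TIME, AMOUNT) are invented for illustration:

-- add up the request's numbers into the E-fact table;
-- the package dimension key is set to 0 there
insert into "/BIC/EZSALES" (key_pkg, key_time, amount)
select 0, key_time, sum(amount)
  from "/BIC/FZSALES" partition (ZSALES_P0042)
 group by key_time;

-- ... and get rid of the request partition in the F-fact table
alter table "/BIC/FZSALES" drop partition ZSALES_P0042;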
Our performance is good - so what's the hassle about?
Besides performance issues, having many partitions can lead to multiple problems:
- Usually, aggregate tables tend to have even more partitions than their basic cubes (for technical reasons), so there is a multiplication effect.
- DDL statements that are generated for the F-fact tables can become too large for export/import/migrations or reorganisations on DB level.
- Index creation can become very slow for so many partitions, since all indexes on F-fact tables are also locally partitioned - again a multiplication factor.
- During attribute change runs, a high number of partitions can lead to crashes, as explained in notes #1388570 (BW Change Run) and #903886 (Hierarchy and attribute change run).
- It may even happen that it's no longer possible to perform change runs or compression of requests at all if there are too many partitions!
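By the way, to get a quick overall picture of how many F-fact table partitions your system carries in total, a simple aggregation over the same dictionary view as used below does the trick (standard naming scheme assumed, adapt as needed):

select count(*)             as f_fact_tables,
       sum(partition_count) as total_partitions,
       max(partition_count) as worst_table
  from user_part_tables
 where table_name like '/BI_/F%';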
COMPRESS! COMPRESS! COMPRESS!
Note #590370 - Too many uncompressed request (f table partitions)
I really don't know how many support messages have already been closed by simply compressing the requests.
Because of that, and because it's so simple to figure out whether there are F-fact tables with too many partitions (usually no more than 20-30 are recommended), I decided to start a little competition here.
Just run the following little SELECT command on your BW database to get a list of F-fact tables that have more than 50 partitions:
select table_name, substr(table_name, 7) infocube_name, partition_count
  from user_part_tables
 where table_name like '/BI_/F%'
   and partition_count > 50
 order by partition_count desc;

-----------------------------------------------
|TABLE_NAME     |INFOCUBE_NAME|PARTITION_COUNT|
-----------------------------------------------
|/BIC/FZ123456  |Z123456      |          8.279|   <<< come on, beat this :-)
|/BIC/F123456784|123456784    |            999|
|/BIC/FTPEDBIF5X|TPEDBIF5X    |            636|
|/BI0/F0RKG_IC3 |0RKG_IC3     |            375|
|/BIC/F100197   |100197       |            334|
|/BIC/FRSTTREP01|RSTTREP01    |            281|
|/BIC/FRS_C5    |RS_C5        |            253|
|/BIC/F100184   |100184       |            238|
|/BIC/F100183   |100183       |            238|
[...]
-----------------------------------------------
(Be aware that this statement obviously only works for InfoCube tables in the standard naming scheme /BIC/, /BI0/, /BI... - you can of course adapt it to your naming scheme.)
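And in case you are not connected as the SAP schema owner, the same check works against dba_part_tables, assuming you have the privileges to select from it:

select owner, table_name, partition_count
  from dba_part_tables
 where table_name like '/BI_/F%'
   and partition_count > 50
 order by partition_count desc;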
If you like to, just post your TOP partition count in the comments section - it would be interesting to see what extreme examples come up...
Although there's no prize to win, you might at least become aware that there is something to keep an eye on in your BW database.