Jul 10, 2012

Partition count contest!


via Content in SCN by Lars Breddemann on 4/23/12

Let's have a contest!

One of my last blog posts started off a discussion (e.g. Bala posted his thoughts in a blog) about the importance of the partitioning feature for a SAP BW installation.

No doubt about it - partitioning is something we use heavily for SAP BW.
Breaking the work down into smaller pieces that are easier to handle is one of the great common problem solving heuristics and it really works wonders in many cases.

Still, partitioning is something that needs to be done right and unfortunately also something that can be done wrong.

Beware, it moves!

What many DBAs and SAP Basis admins don't seem to fully get is that partitioning is not a static thing. Of course, the handling of partitions is very much like dealing with rather fixed objects such as tables and indexes. The latter you usually set up, use, and then leave alone for the rest of their lifetime. Sometimes a reorg may be required, but that's pretty much it.

Partitioned objects, on the other hand, are usually way more 'vivid' (dynamic, volatile, changing... hard to find a good matching word for this).
These objects change with the data you store in them.
And this data changes over time.
So your partitioned table of yesterday will be a different one than the table of today.

In SAP BW we use partitioning for InfoCubes in two ways:
1. the F-fact tables are partitioned by request.
Every new request that gets loaded into the InfoCube is stored in its own partition.
That way, we can easily remove requests, e.g. if the data is not correct or during compression/condensing.

2. the E-fact table CAN be partitioned by a time-InfoObject.
With that, we can improve query and archiving performance when these actions are based on a time dimension-InfoObject (which is most often the case).

So far so good.

The problem now is that the first kind of partitioning is done fully automatically.
Whenever a new request is loaded into an InfoCube, the F-fact table gets a new partition and the data is stored in it.
What does NOT happen automatically is the removal of these partitions.
To remove the partitions from the F-fact table, the corresponding request (and all requests that have been loaded before it) needs to be compressed, or better, condensed into the E-fact table.
Basically this operation does nothing other than add up the numbers from the F-fact table partition with the E-fact table, store the result in the E-fact table, and then drop the partition from the F-fact table.
Of course, you now cannot remove the data based on the load request anymore, since it has been summed together with the other data in the E-fact table. On the other hand, this addition no longer needs to be performed at query runtime, and the database can use the partitioning scheme of the E-fact table for a more efficient execution plan.
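The condense step is easy to illustrate with a small Python sketch. This only models the idea; the request IDs, dimension keys, and key figures below are made up for the example and have nothing to do with the actual BW implementation:

```python
from collections import defaultdict

# F-fact table modeled as one partition per load request:
# request id -> list of (dimension_key, key_figure) rows.
f_fact = {
    "REQU_1": [("2012-06", 100), ("2012-07", 50)],
    "REQU_2": [("2012-06", 25)],
    "REQU_3": [("2012-07", 10)],
}

# E-fact table: dimension_key -> summed-up key figure.
e_fact = defaultdict(int)

def compress(request_id):
    """Condense one request: add its rows into the E-fact table,
    then drop the whole partition from the F-fact table."""
    for dim_key, key_figure in f_fact.pop(request_id):
        e_fact[dim_key] += key_figure

compress("REQU_1")
compress("REQU_2")

print(dict(e_fact))  # {'2012-06': 125, '2012-07': 50}
print(list(f_fact))  # ['REQU_3'] - only uncompressed requests remain
```

Note how the per-request detail is gone after `compress()` has run - which is exactly why compressed requests can no longer be deleted individually.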

Our performance is good - so what's the hassle about?

Besides performance issues, having many partitions can lead to multiple problems:
  • usually aggregate tables tend to have even more partitions than their basic cubes (for technical reasons), so there is a multiplication effect
  • DDL statements that are generated for the F-fact tables can become too large for export/import/migrations or reorganisations on DB level
  • index creation can become very slow with so many partitions, since all indexes on F-fact tables are also locally partitioned, again a multiplication factor
  • during attribute change runs a high number of partitions can lead to crashes, as explained in notes
    #1388570 - BW Change Run
    #903886 - Hierarchy and attribute change run
  • it may even happen that it's no longer possible to perform change runs or compression of requests if there are too many partitions!
For all these reasons there's a recommendation out for a long time now:

COMPRESS! COMPRESS! COMPRESS!

Note #590370 - Too many uncompressed request (f table partitions)
I really don't know how many support messages have already been closed by simply compressing the requests.
Because of that, and because it's so simple to figure out whether there are F-fact tables with too many partitions (usually no more than 20 - 30 are recommended), I decided to start a little competition here.
Just run the following little SELECT command on your BW database to get a list of F-fact tables that have more than 50 partitions:
select table_name, substr(table_name, 7) infocube_name, partition_count
  from user_part_tables
 where table_name like '/BI_/F%'
   and partition_count > 50
 order by partition_count desc;

-----------------------------------------------
|TABLE_NAME     |INFOCUBE_NAME|PARTITION_COUNT|
-----------------------------------------------
|/BIC/FZ123456  |Z123456      |         8.279 |  <<< come on, beat this :-)
|/BIC/F123456784|123456784    |           999 |
|/BIC/FTPEDBIF5X|TPEDBIF5X    |           636 |
|/BI0/F0RKG_IC3 |0RKG_IC3     |           375 |
|/BIC/F100197   |100197       |           334 |
|/BIC/FRSTTREP01|RSTTREP01    |           281 |
|/BIC/FRS_C5    |RS_C5        |           253 |
|/BIC/F100184   |100184       |           238 |
|/BIC/F100183   |100183       |           238 |
[...]
-----------------------------------------------
(be aware that this statement obviously only works for InfoCube tables in the standard naming scheme /BIC/, /BI0/, /BI... - you can of course adapt it to your naming scheme).
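If you prefer to post-process the result outside the database, the same check can be sketched in Python. The rows below are sample data, and the `substr(table_name, 7)` logic from the query is reproduced as a plain string slice:

```python
# Sample (table_name, partition_count) rows as the query would return them.
rows = [
    ("/BIC/FZ123456", 8279),
    ("/BIC/F100197", 334),
    ("/BI0/F0RKG_IC3", 375),
    ("/BIC/FSMALL", 12),
]

def too_many_partitions(rows, threshold=50):
    """Return (infocube_name, partition_count) for tables above the
    threshold, sorted descending - mirroring the SQL above."""
    hits = [
        (table_name[6:], count)  # substr(table_name, 7) in SQL terms
        for table_name, count in rows
        if count > threshold
    ]
    return sorted(hits, key=lambda hit: hit[1], reverse=True)

print(too_many_partitions(rows))
# [('Z123456', 8279), ('0RKG_IC3', 375), ('100197', 334)]
```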
If you like, just post your TOP partition count in the comments section - it would be interesting to see what extreme examples come up...
Although there's no prize to win, you might at least become aware that there is something to keep an eye on in your BW database.

Jul 9, 2012

What's new in SAP NetWeaver 7.3 - A Basis perspective Part-I


Some folks will say that not all of these features were introduced in 7.3 - some of them were introduced in 7.1 and 7.2 - but the author wanted to increase the number of features in this blog, so they are included as well :). Besides, it will be helpful to those who haven't had the chance to work with 7.1 or 7.2 and started directly with 7.3.

So let's begin...
1. SAP NetWeaver 7.3 Goes Green: With NW 7.3 you can save more energy from an architectural perspective; you can get the details of it here - I find it interesting.


2. SAP NetWeaver 7.3 – Lean Avatar: In the process integration, a Java-only, lightweight advanced adapter engine is now available for NetWeaver 7.3, eliminating the need to run SAP NetWeaver Process Integration (SAP NetWeaver PI) as a dual stack. 

From SAP NetWeaver 7.30, customers can reduce their hardware needs as a result of common deployment options for all Java usage types, including enterprise portals, SAP NetWeaver BW, and SAP NetWeaver Composition Environment (SAP NetWeaver CE), with one unified Java application server.
NetWeaver Portal 7.3 uses half as much memory on average to execute navigations.
NetWeaver Portal 7.3 server nodes start up much faster than in 7.01, with an average improvement of 33%.


3. Instance Naming Convention: As of SAP NetWeaver 7.1, the concept and naming of SAP system instances have changed. The terms "central instance" and "dialog instance" are no longer used. Instead, the SAP system consists of the following instances:
Application server instances 

Application server instances can be installed as "primary application server instance" (PAS) or "additional application server instances" (AAS).
Central services instance 
Database instance 


4. The Central Services Instance ABAP - ASCS: The central services instance for ABAP (ASCS instance) is now installed with every SAP ABAP system distribution option:

❶ Standard System ❷ Distributed System ❸ High-Availability System

The enqueue replication server instance (ERS instance) can now be installed together with the central services instance for every installation option:
Standard System (optional) 
Distributed System (optional) 
High-Availability System (mandatory) 


If we select the above option at the time of the (A)SCS installation, the ERS instance will be installed as well.


5. The Central Services Instances in ABAP+Java - ASCS and SCS: With the new Installation Master for ABAP+Java, SAPInst does not provide an option for separate ASCS and SCS instances. They can, however, be separated manually onto different hosts after the installation.


6. Split Off ASCS Instance: SAPInst now has an option to "Split Off ASCS Instance"

With the option Split Off ASCS Instance from Existing Primary Application Server Instance, you can split off a central services instance for ABAP (ASCS instance) from the primary application server instance of an existing ABAP system or ABAP+Java (dual-stack) system.


7. Solution Manager Key: As of SAP NetWeaver 7.3, the Solution Manager key is no longer asked for/required by SAPInst at installation time. Even in previous installations, people had found a way to generate the Solution Manager key from the SolMan system.


8. Start Profile Merged: As of SAP NetWeaver 7.3, the start profile has been removed as a separate file.
In earlier versions of NetWeaver there were 1 default profile per SAP system, 1 start profile per instance, and 1 instance profile per instance.

Now the start profile contents are merged into the instance profile. With the help of the new instance profile, SAP processes are started and, at the same time, instance-specific parameters are read.

This reduces the total number of profile files: 1 default profile per SAP system, 1 instance profile per instance.

Now the profile directory will look neater!!
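The bookkeeping is simple enough to verify with a quick worked example. Assuming one default profile per system plus, per instance, one start and one instance profile before 7.3, and only one instance profile per instance afterwards (the instance count below is just an illustration, not an official formula):

```python
def profile_files_before_73(instances):
    # 1 default profile + (1 start profile + 1 instance profile) per instance
    return 1 + 2 * instances

def profile_files_as_of_73(instances):
    # 1 default profile + 1 merged instance profile per instance
    return 1 + instances

# e.g. a system with an ASCS, a PAS and two AAS instances -> 4 instances:
print(profile_files_before_73(4))  # 9 files in the profile directory
print(profile_files_as_of_73(4))   # 5 files
```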


9. JSPM (Java Support Package Manager) Initial Credential Requirement Changed: When starting JSPM, the SDM password is no longer prompted; instead you need to provide the Java admin user ID and password.

While deploying and upgrading components, a restart of just Java - or sometimes of the complete SAP system - is needed, for which the OS-level admin user ID and its password are asked.


SDM is replaced by JSPM, and the SDM directory is removed altogether.


Un-deployment of SCAs/EAR files is not possible using JSPM; you have to use NWDI for this purpose.
There is no support for PAR files: all portal applications are now EAR (Enterprise Archive) based, and a PAR migration tool is available for converting PAR files to EAR files.


10. JCMON Changed Menu

After NetWeaver 7.01, the JCMON menu 20 (Local Administration Menu) is non-functional.


Here you will be able to see the state of the different components/nodes. Unlike in previous versions there is no refresh option here, so go back and enter this menu again for an up-to-date view.


11. Visual Admin vs. NWA: As of SAP NetWeaver 7.1, the Visual Administrator has been replaced by the SAP NetWeaver Administrator (NWA).



12. Support Pack Stack


Earlier, in NetWeaver 7.0, the release level of the BW component in a support package stack was generally 2 releases ahead of the ABAP and Basis components.

Now all components are released at the same level.


Author :  Ishteyaque Ahmad 

Jul 5, 2012

How to transport changes to very large ODSes in SAP BW


In almost all BW systems which have been live for 3-4 years, it is a standard sight to see the following:
1. ODSes like Delivery Items, Material Movements etc. having millions of entries (this of course depends on the usage of these functions in SAP)
Very often a change request comes along which requires an enhancement to these ODSes, and in view of the time this would take, a lot of sister ODSes spring up all around these mega ODSes, leading to a proliferation of InfoSets, cubes, and MultiProviders... which in turn leads us into very difficult data modeling choices.

Read more at : http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/27892

Jul 4, 2012

SAP BW 7.3 Hierarchy Framework – a real-life example


Introduction

So what do you do when your client asks you to activate the customer hierarchy in reporting? Simple, you say: you dive into ECC, look for the 0CUSTOMER hierarchy DataSource, and activate it. You do a quick RSA3 check and receive... no data.
Your day just became a little more interesting. After some investigation you find out your client did not implement "Classification" and therefore has no hierarchy on 0CUSTOMER. They do, however, have a hierarchy on the logistical view of the customer. As 0CUSTOMER is part of your data model, you decide not to introduce 0CUST_SALES, but instead load it to your 0CUSTOMER object.

Read more at : http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/27798

Jul 2, 2012

All about SAP HANA – composite post


Martin Maruskin



I'm very well aware that there are already initiatives to get all information about HANA into a single place. Therefore consider this blog as yet another HANA blog.
As I did with SAP BW 7.3, I try to collect all relevant information sources, in this case about SAP HANA, here. BTW: What's HANA? In short, it is an in-memory solution from SAP. It stands for High-Performance Analytic Appliance, and basically it is an appliance (plus its software components) that can absorb large volumes of data (e.g. terabytes) into its "operational" memory. The reason why it is so performant is that all the data is kept in memory, not stored on hard drives. It can be set up on top of an SAP ERP or BW system (plus non-SAP databases) without the necessity to materialize data via transformations, in contrast to current DWH solutions. HANA bundles several components: an in-memory computing engine, real-time replication services, data modeling, and data services. As it is delivered as an appliance, it depends on the hardware; currently a few vendors are supported: Fujitsu, HP, and IBM.
Basically we can say that HANA is the successor of SAP NetWeaver BW Accelerator, moving forward into in-memory computing. Here HANA acts as the persistence mechanism for SAP NetWeaver BW.
HANA components:

1. The core of HANA is called the SAP In-Memory Computing Engine (IMCE or ICE, also known as NewDB or BAE). It is a kind of in-memory database engine which uses row/column/object-based database technology to store data. It is built for parallel data processing using state-of-the-art CPU capabilities.

2. HANA Studio (client app, or Studio Repository; an Eclipse-based editor connected to a HANA server backend) consists of:
2.1 Administration console - administer and monitor the database
2.2 Information modeler - data modeling
2.3 Lifecycle management - provides HANA stack updates using the SAP Software Update Manager (SUM)

3. HANA Load Controller - resides in SAP HANA and coordinates the entire replication process: it starts the initial load of source system data into the IMDB in SAP HANA and communicates with the Sybase Replication Server to coordinate the start of the delta replication.

4. Host Agent - handles login authentication between the source system and the target system.

5. Sybase Replication Server - accepts data from the Replication Agent, then distributes and applies this data to the target database using ECDA/ODBC for connectivity.

Current versions:
SAP HANA 1.0 SP02 - as of 12th June 2011 in general availability


Operating system (only the following one is supported):
64-bit SuSE Linux Enterprise Server (SLES) 11 SP1 operating system

Upcoming version:
Nov 2011 - SAP starts the ramp-up program for customers to run BW on HANA as its database

Read more at : http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/28011