Jul 21, 2014

Major Lessons Learnt from BW 7.30 Upgrade


Dear SAP BW-ers,

This is about our journey upgrading from BW 7.01 SP07 to BW 7.30 SP05.
Whatever is written below is purely my opinion and may not be best practice.

It has been a painful yet rewarding project, since it enables BW to run on HANA and to integrate much better with the other BusinessObjects tools.

Data Modelling Issue
·         There is a new data validation check procedure in BW 7.30, and some of our daily batch jobs failed because of this new behavior. We had to remove the conversion routine on 0TCTTIMSTMP and write an ABAP routine to do the conversion during data loading; the same thing happened for 0WBS_ELEMT, and we wrote a routine to fix that as well. A minimal sketch of such a field routine is shown after this list.

·         An InfoSource could not be activated after the upgrade; run program RS_TRANSTRU_ACTIVATE_ALL in debugging mode and set the value of the parameter "v_without" to "x".

·         A 3.x DB Connect DataSource stopped working after the upgrade; it needed to be regenerated because it was corrupted during the upgrade go-live.
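
The shape of the field-routine fix mentioned in the first point is sketched below. This is only an illustration of the idea, not our actual code: it assumes the source delivers separate date and time fields (SOURCE_FIELDS-CALDAY and SOURCE_FIELDS-TIMEOFDAY are hypothetical names) and that the target 0TCTTIMSTMP expects a UTC timestamp; adapt it to your own DataSource.

    " Sketch of a BW 7.3 field routine body (routine editor) for 0TCTTIMSTMP.
    " CALDAY / TIMEOFDAY are assumed source field names, not from the original post.
    DATA lv_ts TYPE timestamp.

    IF source_fields-calday IS NOT INITIAL.
      " Build the UTC timestamp in the routine, since the conversion routine
      " on the InfoObject was removed to pass the stricter 7.30 checks.
      CONVERT DATE source_fields-calday TIME source_fields-timeofday
              INTO TIME STAMP lv_ts TIME ZONE 'UTC'.
      result = lv_ts.
    ENDIF.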

BWA Issues
·         If you are running BWA 7.20 at a revision lower than 22, set the query properties back to Individual access instead of Standard/Cluster access. You can do this as a mass maintenance of all queries on an InfoProvider in transaction RSRT. If you don't, you will have major performance problems; for example, a query that used to take 9 seconds before the upgrade came back in 800 seconds with cluster access enabled.

Reporting Issue
·         Error #2032 caused by dashboard crosstab data binding; we experienced this with most queries that contain 0CALDAY. You have to remove 0CALDAY and then put it back into the query so that the BICS connection from the query to the dashboard is refreshed.

·         UOM conversion issues after the upgrade for some queries; implement SAP Note 1666937 to solve them.

·         Cell-defined values were not affected by the scaling factor in BW 7.01, but in BW 7.30 they are. We had to make lots of adjustments in a few queries because of this.

·         A condition on a hidden key figure is no longer supported in BW 7.30; again, some queries had to be adjusted because of this.

·         Dashboard Design 4.0 could not connect to the BW 7.30 system until we upgraded the SAP GUI to version 7.20 SP10.

This is the list of major issues we have encountered so far, a week after go-live; I hope it helps your journey. Personally, I would say that to run the upgrade better you need a copy of production taken right before the upgrade, heavy testing done a few times, and the business involved while doing so. You can then expect only a few minor issues during go-live.

Jul 14, 2014

How to handle Year Wise Cubes in a Multiprovider


Applies to: Up to BI 7.0

Introduction

                 After a BW implementation goes live at any client, data from the source systems keeps being extracted to the BW system on a daily basis. The data volume grows incrementally day by day and, over a period of time, accumulates in all the InfoCubes we have built in our Data Mart Layer. Then the question of maintenance comes into the picture. That is exactly what I am going to demonstrate here: the steps and strategy to handle it.

Disadvantages with the current Infocubes

  • Large data volumes in the cubes, which leads to higher administration effort
  • Dropping and creating indices takes longer, which can delay process chains
  • Aggregate roll-up takes longer
  • The attribute change run takes substantial time during master data loads
  • Reporting performance becomes a big question
  • Aggregates may need to be recreated every now and then to meet reporting performance
  • The initial fill of new aggregates takes substantial time
  • Ad-hoc querying during year end is tough when fetching data from bigger cubes/aggregates

                 I am sharing my current Data Mart Layer, which sits under a MultiProvider. Go-live happened in 2008, so data is available in our InfoCubes from 2008 to date. Our implementation partner had created year-wise cubes as shown below.

[Figure: existing year-wise cubes in the MultiProvider - Yearwise MP.JPG]

                The layer above has served us well so far, because the data volume fit the available cubes. However, the volume has grown substantially from 2012 onwards: the (2012 and 2013) cube holds roughly 13 crore (130 million) records, and the previous years' data is also a big figure. We have other small cubes in the MultiProvider as well. I hope the bottleneck situation is clear now.

                I have proposed a solution to overcome all the disadvantages mentioned above. Archiving is not possible in my case, as it is against our management policy, although we are trying our best to convince management and get the approvals. So I have made a proposal to organize the data of all years according to the extent of usage; you can see "Very Less usage" and "Mostly Used in Trend Analysis" in the figure above for your understanding.

                Strategy 1:
I have proposed to club the older years' cubes together, keeping the extent of usage in mind. Usually we do trend analysis by considering last year's and the current year's figures. Some companies may go with the last three years as well, depending on the client's requirement.

[Figure: my proposed Data Mart Layer - Yearwise MP1.JPG]

                I have clubbed 2008 to 2011 into a single InfoCube, as that data is hardly used for business analysis. As you may have observed, I have also split the (2012 & 2013) cube into a separate cube for each year because, as stated in the beginning, there are 13 crore records for those two years in a single cube, and it has been observed that reports run slower when users do trend analysis as we approach the year end.

                 Eventually, we will have to club 2012, 2013 and (maybe) 2014 into a single InfoCube once the importance of that period becomes very low. The whole idea is to merge older cubes as time passes, based on the importance and usage of the data by the business analysts.

Advantages of this Strategy

  • Lower administration effort by focusing on the current cubes
  • Faster dropping and re-building indices during data loads
  • Trend Analysis can be done quickly
  • All the other disadvantages (which are mentioned in the beginning) can be overcome by this approach

Steps to carry out this entire activity without disturbing the existing setup

  1. Draw flow charts like the ones above on paper, so that you have a clear idea of what you are going to do in your system
  2. Take any one year-wise cube and make 4 copies (structures only) as per the second image
  3. Do not forget to partition these new cubes by 0CALMONTH according to their respective years, because the copied cubes will carry over the old partitions from the first image. This is a key step in organizing your data systematically
  4. Note: we have an EDW Layer (a central DSO, which is our total warehouse) underneath the Data Mart Layer, so I will be loading data into these new cubes from the central DSO
  5. Copy the transformation between the DSO and an existing cube (any year, because the transformation is uniform for all year-wise cubes) to connect the DSO and the new cubes
  6. Create full-load DTPs for all the new cubes and load each of them in chunks
  7. Copy the existing MultiProvider to a new technical name and extend the description with "Back up"
                    
         Strategy 2:  The idea behind copying is that I will detach the existing year-wise cubes from the original MultiProvider and add the new year-wise cubes to it (making sure the assignments are done for the new cubes). This way we need not worry about our BEx queries, WAD templates and the Portal, because all the queries based on the existing MultiProvider will continue to run as usual. The only difference is that the data will be retrieved from the new year-wise cubes. This strategy completely avoids the risk of touching the front end.

  8.   Regenerate all your queries in RSRT to refresh them at the database level. This ensures that all your queries fetch data from the new cubes
  9.   Make sure to change any 0INFOPROV restrictions in your queries to the new cubes
10.   Modify all your process chains to use the new cube variants
11.   Verify the data thoroughly in your Portal reports by comparing them with the data display of your Back up MultiProvider, using the same query restrictions/filters etc. (a rough record-count cross-check is sketched after this list)
12.   You can also verify the data by copying queries from your main MultiProvider to the Back up MultiProvider using transaction RSZC
13.   Observe your Portal reports for 2-3 days, comparing them with the Back up MultiProvider. If everything is OK, you can then drop all the old cubes
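
To support the verification in step 11, here is a rough record-count cross-check. It is not part of the original procedure: it simply counts, per calendar year, the records in the active table of the central DSO, so you have a figure to hold against the "transferred/added records" numbers of each chunked full DTP load and against the Back up MultiProvider comparison. The DSO name ZSD_O01 is a hypothetical placeholder, the check assumes 0CALDAY is a field of the DSO, and raw row counts are only a plausibility check, not a substitute for the query-level comparison.

    REPORT zyear_counts.

    " Active data of a standard DSO lives in table /BIC/A<DSO name>00.
    " ZSD_O01 is a hypothetical placeholder for the central DSO.
    DATA: lv_tab   TYPE tabname VALUE '/BIC/AZSD_O0100',
          lv_year  TYPE n LENGTH 4,
          lv_from  TYPE c LENGTH 8,
          lv_to    TYPE c LENGTH 8,
          lv_count TYPE i.

    DO 7 TIMES.                        " calendar years 2008 to 2014
      lv_year = 2007 + sy-index.
      CONCATENATE lv_year '0101' INTO lv_from.
      CONCATENATE lv_year '1231' INTO lv_to.

      " Count one year's records; CALDAY is the column generated for
      " 0CALDAY in the active table (assumption - check your own DSO).
      SELECT COUNT(*) INTO lv_count FROM (lv_tab)
        WHERE calday BETWEEN lv_from AND lv_to.

      WRITE: / lv_year, lv_count.
    ENDDO.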

Conclusion

                 This is a pure production support/maintenance activity, which has to be taken up over a period of time. As archiving is not in my scope, I have proposed the ideas above and implemented them.

Jul 8, 2014

Data Functions in Bex Query Designer


Hi All,

I have made an attempt to explain some of the data functions that can be used as operands when we define a formula or a calculated key figure.

1) COUNT()
 
      This data function is used when we would like to display a count: COUNT(<operand>) returns the value 1 if the operand is <> 0; otherwise it returns 0.
      Eg: to display the count of open sales orders (each open order contributes a 1, so the result rows add up to the count).

2) DATE
   
    The value is seen as the number of days from 01.01.0000 onwards and is output as a date.
    The value can be between 0 and one million, which results in a date between 01.01.0000 and 27.11.2738.

3) NDIV()
  
   NDIV(<expression>) is used in calculations that involve a division with a numerator and a denominator. It returns 0 if the expression causes a division by zero; otherwise the result is the value of the expression. It is used to avoid error messages, or so that we can continue calculating with a defined result.
  E.g.: NDIV(5/(2-2)) returns 0, since 2-2 = 0 would otherwise cause a division-by-zero error.

4) NOERR()

    Returns 0 if the calculation of the expression leads to an arithmetical error (for example, an undefined argument); otherwise the result is the value of the expression.

  E.g.: NOERR(sqrt(-1)) = 0, because the square root of -1 is not defined.

5) SUMCT

Returns the result of the operand to all rows or columns: each detail row shows the result of its group, and each result row shows the overall result, as in the example table below.

Example table for operator SUMCT

Year               Region    Sales    SUMCT "Sales"
1999               North        30               60
                   South        30               60
Result for 1999                 60              180
2000               North        60              120
                   South        60              120
Result for 2000                120              180
Overall Result                 180              180

6) SUMGT
Returns the overall result of the operand
Example table for operator SUMGT

Year               Region    Sales    SUMGT "Sales"
1999               North        30              180
                   South        30              180
Result for 1999                 60              180
2000               North        60              180
                   South        60              180
Result for 2000                120              180
Overall Result                 180              180

7) TIME  
The TIME operator takes any key figure or (partial) formula and makes a key figure of type TIME
For example, the formula TIME(Sales) turns the key figure Sales into a time value.
The system cuts off the decimal places, interprets the value as seconds, and displays it in the format +-hh:mm:ss. For example, 4812 is displayed as 1:20:12 (4,812 seconds = 1 hour, 20 minutes and 12 seconds).

Jul 3, 2014

Some key points about SAP HANA Technology & Architecture


The term SAP HANA is used both for the (in-memory) database and for an application platform.
SAP HANA is a platform offered both as an appliance and as a cloud service.

The SAP HANA in-memory database is a specialized database with the following features:
  • All read & write operations take place in memory.
  • Data can be stored in both row and column format.
  • Every column acts as a secondary index, so there is no need to maintain separate indexes.
  • It achieves much better data compression, as data from the same column is stored together.
  • It utilizes the distributed, multiprocessing & massively parallel processing capabilities of modern CPUs.

It comes with a tightly integrated, lightweight application server out of the box: SAP HANA XS, or Extended Application Services. SAP HANA XS is designed/engineered specifically for the SAP HANA database and is closely integrated with it to take care of
  • validation logic
  • control flow logic
  • service enablement logic
while most of the
  • calculation logic
  • data-intensive processing
is pushed down into the database layer to leverage the power of the features mentioned above.

HANA Extended Application Services
  • uses server-side JavaScript as the main programming language and
  • supports REST-based OData services with both JSON and Atom XML rendering,
with which complex transactional business applications can be built.

Data-intensive logic can be written in SQL and SQLScript. There is also built-in support for XML for Analysis (XMLA) and Multidimensional Expressions (MDX) access for analytics.