Aug 25, 2014

The HANA Journey: Determining When HANA is Right for You

Via Content in SCN

I would like to thank my colleague Robert Hernandez, Director of In-Memory Services, North America for his collaboration and input to this blog.

If you're reading this blog, SAP HANA has caught your attention.  By now, you've probably heard a significant amount about SAP HANA. It's the next-generation platform with blazing speed, real-time reporting, and powerful analytics, able to capitalize on unprecedented opportunities and deliver significant competitive advantages.  As the SAP HANA topic has matured, you've probably also heard of different ways to deploy it in your business; solutions such as SAP Business Suite accelerators, SAP HANA applications, SAP BW on HANA, SAP Business Suite on HANA, and more.  You've possibly heard so much, in fact, that you feel the need to take a step back and ask: so what does it all mean to me?  It's fast – great.  It can help me – fine.  But where do I start, and how do I take advantage of it?  Do accelerators help me?  Should I start with SAP BW on SAP HANA?  Does the introduction of SAP Business Suite on SAP HANA now change everything?  Like other breakthrough innovations, SAP HANA on its own will not deliver value.  But SAP HANA applied to a particular business problem or challenge can deliver exceptional value.  So, if you still have some questions about whether SAP HANA is right for you, or more importantly, how it's right for you, then read on to determine how best to realize value from SAP HANA.


Beginning the HANA Journey

When evaluating SAP HANA for your business, remember to keep the following in mind: (1) How can HANA help my business? (2) What type of HANA solution do I need to achieve this? (3) How should I plan my deployment strategy?  These high-level concepts are summarized below in Figure 1 and examined further in the text that follows.

[Figure 1: How can HANA help my business? What type of HANA solution do I need? How should I plan my deployment strategy?]

1.  Understanding Value Opportunities in Your Business: before deploying SAP HANA, an organization should take the time to understand where SAP HANA can deliver maximum benefit based on its business goals.  Some examples to consider:

    1. Are analytics a problem in your organization?  Do you have plans to grow the business, but need to stay lean in terms of your total operations?  Would access to real-time information allow you to accomplish that more economically?  Are you an SAP BW customer, looking to improve the way you deploy your analytics today?
    2. What about your day-to-day business processes?  If you could run your materials planning processes faster or differently, would that change your business?  For Consumer Products, do you have visibility into the real-time demand in your various markets, allowing you to focus on the right ones?  For Retail, could you grow customer loyalty and in-store excellence by giving your sales personnel access to customer data while your customers are still in the store?
    3. Finally, are there new business processes you could create today, something that could transform your business but that you haven't pursued because of technical limitations?  Could a new application be developed, purpose-built to bring these new ideas to reality?  For Healthcare, is there a way to manage patient data that allows you to better serve patients or deliver better medical care?

2.  Map Value Opportunities to SAP HANA Solutions: once an organization understands the business value – how SAP HANA can enable business solutions – the next step is to understand how to implement SAP HANA.  As identified in the opening of this blog, SAP continues to enhance HANA and offer additional capabilities.

    1. For the customer looking at real-time analytics to grow the business, perhaps an agile data mart deployed on SAP HANA is the answer.  Or, for existing SAP BW customers or customers requiring a complete Enterprise Data Warehouse, BW on HANA may be the right place to start.
    2. For the customers needing to enhance operational processes, perhaps SAP Business Suite on SAP HANA is the right option.  Or, if you need a smaller first step, beginning with a focused accelerator powered by SAP HANA targeting a single, specific process may be the logical place to begin.
    3. For the customers looking at creating a new, transformative solution, perhaps an application powered by SAP HANA is appropriate.

3.  Deploy SAP HANA: at this stage you've identified the business value and you understand what type of SAP HANA solution should be implemented to realize that value; now it's time to evaluate your deployment options.

    1. For those organizations with strong IT operational capabilities, deploying SAP HANA in-house, on-premise, may be the most efficient option.  It allows you maximum control of your own environment and positions you well to grow your HANA deployment alongside SAP's growing HANA coverage.
    2. If your organization is looking to deploy SAP HANA quickly and doesn't have the time or resourcing to manage it in-house, SAP's HANA Enterprise Cloud (HEC) may be the right approach.  This allows you to utilize all the benefits and capabilities of SAP HANA, but leaves the environment management to someone else, allowing you to focus on solving the business problem.
    3. Ultimately the right solution might involve a hybrid between an on-premise and SAP HANA Enterprise Cloud approach.  Perhaps business timelines can't wait on hardware procurement, and thus a HEC approach for your development and test environments, with an on-premise production system, allows you to deliver on time.  Or perhaps there are some applications you would prefer to deploy in the cloud, while your core business applications reside on SAP HANA in-house.  Either way, it's all about finding the right combination that fits your needs.
    4. Finally, whether on-premise or in the HEC, Rapid Deployment Solutions (RDS) should be part of any customer's HANA deployment decision process.  SAP has constructed many pre-packaged solutions in a box targeted at the most common HANA use cases.  Perhaps one of these RDS fits your business needs and provides a low-risk, out-of-the-box approach to quickly roll out SAP HANA.  Even if the RDS only covers a portion of your business need, it may provide a stable foundation that quickly delivers business value and on top of which you can then build.

Conclusion - Creating a Strategic Roadmap

And there you have it: you're on your way on your own SAP HANA journey.  Let the business drive the need, map that to how SAP HANA solutions can help, and plan out your deployment.  Now the next question – where to go from here?  Look to create a strategic project roadmap for SAP HANA.  For example, perhaps you are an existing SAP Business Suite customer and realize that you want to move to SAP HANA, but not just yet.  However, you recognize that in the short term you have a specific use case – say, how you manage your process for analyzing cost and profitability – that you want to accelerate.  Your roadmap could start with an early-phase deployment of the CO-PA accelerator, with perhaps a migration to Business Suite on SAP HANA at a later date.  Or maybe you have SAP BW deployed alongside your Business Suite and would like to start by migrating your BW to HANA.  The key is that your business drivers will feed your implementation plan, and that can all be represented through your strategic roadmap.

Future Blogs

Please check back in the coming weeks, as the content introduced here will be developed further in subsequent blogs.  We will take a more tactical look at SAP HANA; now that you understand the theory laid out above, what are some tactical steps to get started?  We will also examine some motivating factors for considering SAP Business Suite on SAP HANA.  Be sure to keep an eye out for these and other topics in future blogs.

Standardizing Data Flow Patterns using Data Flow Templates


Standardization is a key aspect of SAP BW Layered, Scalable Architecture (LSA), SAP's best practice in Enterprise Data Warehousing. One of the ways to realize standardization in the data staging process is using Data Flow Templates.
SAP BW release 7.3 introduced a new modeling object called Data Flow. A Data Flow acts as a container for storing the data modeling objects of a data flow, such as InfoProviders, Transformations, and DTPs. It can also be used to incorporate documentation belonging to its data modeling objects. Furthermore, it's possible to define customized, tailor-made Data Flow Templates to facilitate standardization of data flow patterns in the context of your SAP BW implementation and architecture guidelines. Please refer to SAP Help for more information on Graphical Modeling, Data Flows, and Data Flow Templates.
In this blog I would like to discuss standardizing data flow patterns using Data Flow Templates, creating new Data Flows based on such a Data Flow Template and the advantages of this approach.

Data Flow Templates

The purpose of a Data Flow Template is the standardization of data flow patterns. Every Data Flow should be based on a Data Flow Template. From an architecture point of view, any deviation from the Data Flow Templates should be justified and motivated. Such a deviation can potentially identify the need for an additional Data Flow Template, to be decided upon by the responsible person or team.

An example of a Data Flow Template can be found in the next screenshot.

Figure 1: Example of a Data Flow Template

The data modeling objects are represented by the blocks. These blocks act as placeholders for the future data modeling objects, which can either be created from scratch or be reused from already existing objects. The technical names give a hint for the proper naming convention to be applied.
You can also store documentation. In the context of Data Flow Templates, this is the place for modeling tips and procedural notes.

The following screenshot shows an LSA-compliant example implementation with Data Flow Templates. Please note the strict segregation between the Data Warehouse Layer and the Data Mart Layer.

Figure 2: Data Flow Templates

Data Flows

You create a new Data Flow in an appropriate InfoArea. Here you will see a blank canvas where you have to insert a Data Flow Template.

Figure 3: Create new Data Flow

All data modeling objects of the Data Flow Template are copied into the new Data Flow as placeholders. From here you can either create the data modeling objects from scratch or reuse already existing data modeling objects.

Figure 4: New Data Flow with placeholders

At any point in time you can use the function Complete Data Flow to add already existing additional objects, such as DTPs, Transformations, and InfoPackages. This is usually also necessary for Semantically Partitioned Objects (SPOs), to complete the Data Flow with all objects related to the SPO.

Figure 5: Complete Data Flow

You can complete the Data Flow in an incremental way until it's finished. Don't forget to add any interesting or crucial support information using the documentation feature.

Figure 6: Incremental completion of Data Flow

Conclusion

In this blog I presented a way to strengthen your (Enterprise) Data Warehouse architecture by standardizing data flow patterns using Data Flow Templates. In my opinion it is easy-to-use but very powerful functionality. I can highly recommend using Data Flow Templates, not only to increase the standardization of data flow patterns, but also to provide a guided implementation with documentation of the necessary steps and naming convention hints. Furthermore, Data Flows can help reduce the need for a complex InfoArea and Application Component Hierarchy. Another benefit is the documentation feature, which allows you to incorporate online documentation of your Data Flows. Last but not least, Data Flows are a great help in collecting the right data modeling objects when using the Transport Connection.

Aug 21, 2014

Where to find information on SAP BW on HANA migrations


I've been part of projects to migrate to BW on HANA recently, and one of the things that I noticed was that resources can be fragmented and tricky to find. I thought I'd curate a list of places to go to find information. If I have missed something, then please ping me so it can be added here.

1) Best Practice Guide

Boris Zarske maintains a Best Practice Guide - Classical Migration of SAP NetWeaver AS ABAP to SAP HANA, and this is a great place to start. It covers all aspects of a migration and should be in your toolkit, because Boris is aggregating information directly from the development team.

However, it only covers classical migrations, and if you're doing BW on HANA then you should ideally be considering DMO.

2) Database Migration Option (DMO)

Roland Kramer maintains SAP First Guidance - Using the DMO Option to Migrate BW on HANA and this is the place to find out information about this. It is applicable to BW 7.0 and above and can help automate the upgrade and migration to SAP HANA. DMO doesn't work in every scenario, so make sure that it can do what you need.

3) Migration Cockpit & Checklist

If you go to SAP Note 1909597 - SAP NetWeaver BW Migration Cockpit, you can install and configure the program ZBW_HANA_MIGRATION_COCKPIT. This program runs on BW 3.5 or above, which is very cool.

In addition, as recommended by Ali S Qahtani, you should consider applying SAP Note 1729988 - SAP NetWeaver BW powered by SAP HANA - Checklist Tool, which provides the program ZBW_HANA_CHECKLIST or ZBW_HANA_CHECKLIST_3x, depending on your version of BW. This is a pretty neat checklist, and a presentation is attached to the note.

4) Architecting BW on HANA

I wrote a blog on Licensing, Sizing and Architecting BW on HANA. In addition, Marc Hartz' guide on SAP NetWeaver BW Powered by SAP HANA Scale Out - Best Practices is important if you have a large system, as is Marc Bernard's How NOT to size a SAP NetWeaver BW system for SAP HANA.

5) Managing your BW on HANA Project

I wrote blogs on 10 Golden Rules for SAP HANA Project Managers, and 10 Golden Rules for SAP BW on HANA Migrations. Hopefully they are useful for you.

Roland Kramer also wrote Three things to know when migrating NetWeaver BW on HANA, which is worth reading. This refers to the SAP First Guidance Collection for SAP NetWeaver BW powered by SAP HANA, which in turn refers to Implementation - BW on HANA Export/Import, SAP First Guidance - Using the DMO Option to Migrate and SAP First Guidance - SAP-NLS Solution with Sybase IQ. Wow, this is recursive documentation!

6) HANA Basis Reference Guide

Andy Silvey has written the awesome The SAP Hana Reference for NetWeaver Basis Administrators, which is a go-to guide on HANA Administration. It is well worth reading if you're a Basis consultant moving to the HANAsphere.

7) ABAP Post-Copy Automation

Michaela Pastor wrote a very handy blog about ABAP Post-Copy Automation, which is all about reducing the time to do system copies, and using the same ABAP source system for more than one BW system.

8) Some additional blogs

Sunny Pahuja's blog on Some points to remember for Database Migration to HANA is very detailed, though a little out of date.

Final Words

Having written this, I realize that there is a lot of information out there, which may be overwhelming. I do encourage you, though, if you are planning a BW on HANA migration, to take a look at this information before you build out your plan. You will be much better informed, and I have no doubt that you will change your plan for BW on HANA as a result.

Thanks to all of those that helped curate this, especially Thomas Zurek, Klaus Nagel, Boris Zarske, Roland Kramer, Lloyd Palfrey, Marc Bernard, Lothar Henkes.

If you have some content that I should link to here, then please let me know!

Aug 18, 2014

How to Generate a Datasource from a Custom Report in ECC

Via Content in SCN

When more analysis on a custom report in the ECC system is required, users ask for a BW report that shows exactly the same data as the custom report in ECC. In standard cases, the rational behavior would be to search the business content for any corresponding content for the requirement. Most of the time we can't find it in the business content (that may be the reason why a custom report was written in ECC in the first place). In such cases we have some alternatives to go with. In this blog I am going to discuss three alternatives, compare their advantages, and explain in detail the solution I prefer most.
The information I give does not require any detailed ABAP knowledge, but gives an understanding of how we can handle these types of requirements. I am not an ABAP developer, so in this blog I will only give the ABAP code sufficient to make the changes in the necessary spots in your report and function modules.

The Alternative Solutions:
1. Creating a function module:

In this approach, we write a function module that behaves in exactly the same way as the program of the custom report. Then we create a datasource using the function module. This approach is not much different from creating a totally new datasource according to a new requirement; you can only reuse the logic of the program. You can directly upload your InfoProvider using this datasource.

2. Using the program of the custom report to fill in a Z table:

With a minor change to the program, we can add some code to fill a Z table. Then we create a datasource with extraction from a view, using the Z table as the source. This is an easier way compared to the previous alternative: you don't need to write the whole logic once more, since everything is thought out once. When a change request in the logic comes from the user, the change is implemented only in the program. As long as the fields of the custom report are not changed, there is no maintenance for change requests (I am assuming a full upload to the InfoProvider). Even if a change in the fields arrives, the only things we need to do are replicating the datasource and changing the Z table. A minimal sketch of such a change is shown below.
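
For illustration only, a sketch of that minor change, assuming a custom transparent table ZMAT_LIST whose fields match the report's output structure and an internal table it_result that the report already fills (both names are hypothetical, not from the original report):

* Added at the end of the report logic: persist the report output
* in the Z table so a view/table-based datasource can extract it.
  DELETE FROM zmat_list.                  " drop the previous snapshot
  MODIFY zmat_list FROM TABLE it_result.  " insert/update the current rows
  COMMIT WORK.                            " make the snapshot visible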

3. The final approach, which I am going to describe in detail, is changing the report code so that we can call it from another function module. Then we use this function module to create the datasource. This is even better than the second approach. Here we don't need to create a Z table, so we have some performance-related gains: we don't use space in ECC for the Z table, and we don't need the time to write into that table and then read from it again. When the function is called to upload an InfoProvider, the code calls the report code to generate the data we require. Now let's go into the details. I will explain this approach with a sample report that shows a small piece of information from the MARA table.


Detailed Explanation for 3rd Approach:

For the detailed explanation, I got some help from my ABAP developer colleague, Gozde Candan. We have created a very simple program that gets the list of materials for a material type selected on the selection screen. We have also created a transaction for this program. This is only for illustration; it does not matter how complicated the code is, you can reorganize it in the way I describe in this blog. We have written the program with a single SELECT statement, so that it is easier to show how we change it.

Suppose we have a transaction called ZMATLIST.  This transaction gets the list of materials according to a material type selected.
[Screenshot: the ZMATLIST selection screen]

To find the name of the program behind this transaction, we go to the system menu on top of the screen:
[Screenshot: the System menu at the top of the screen]

When we select Status, a screen appears showing the SAP data:
[Screenshot: the System > Status screen]

In the field "P ROGRAM", we see the name of the program that we make the changes so that the code inside can be called within a function module.
With the transaction code se38 we can view the code for this program:
resim4.JPG

For this program,  we have defined a structure zmaterial:
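
(The SE11 screenshot is not reproduced here. Judging from the WRITE statement later in the report, a minimal ABAP equivalent of the structure would look as follows; the actual object is a dictionary structure, and this field list is an assumption:)

TYPES: BEGIN OF zmaterial,
         matnr TYPE mara-matnr,  " material number
         ersda TYPE mara-ersda,  " created on
         ernam TYPE mara-ernam,  " created by
       END OF zmaterial.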

This structure is created with transaction SE11. This part is important, because we will use this same structure for the datasource. If the design of the code does not include a structure like this (most of the time it does, but in some cases it may not), then the code should be changed so that, before the data flows to the GUI screens, an internal table defined by such a structure is filled with the data.

Now we go on by editing this program. What we are going to do is add a flag to the program to understand whether it is being called by a function module. In the function module, we are going to set this flag, so we will import the value of this flag from the function module into this program. Reading the value of the flag, we are going to export the data to the function module. That is, we are doing one import for the flag value (from the function module) and one export for the data (to the function module).
[Screenshot: the edited report code, with the added statements marked]
The added statements are described by the comments in the listing below.

REPORT zmateriallist.
TABLES: mara.

DATA: it_mara TYPE zmaterial OCCURS 0 WITH HEADER LINE.
* pflag is the name of the flag we define. You can give it any name.
DATA: pflag(1).

* This part is the simple selection according to the parameter (material
* type) selected on the selection screen of the GUI. The data is filled
* into an internal table called it_mara. For complicated Z reports, this
* part can be much longer. The idea here is filling the internal table
* it_mara.
PARAMETERS: mtart TYPE mara-mtart.
SELECT * FROM mara INTO CORRESPONDING FIELDS OF TABLE it_mara
  WHERE mtart = mtart.

* With the IMPORT statement below we get the value of pflag from the
* function module, using memory id 'ZFLAGFROMBWFM'. The memory id is a
* unique id; we can give it any name. We will use this same id in the
* function module to export pflag.
IMPORT pflag FROM MEMORY ID 'ZFLAGFROMBWFM'.

* We add an IF statement to separate GUI output from the BW extraction.
IF pflag IS INITIAL.

* Since pflag is not set, we understand that the generated data should be
* shown on the GUI screen. So any GUI-related code is written in this part.
  LOOP AT it_mara.
    WRITE: / it_mara-matnr, it_mara-ersda, it_mara-ernam.
  ENDLOOP.

ELSE.

* Here pflag has been set to 'X' by the function module. Thus, we export
* the data from it_mara with the memory id 'ZMATERIALLISTTOBWFM'. This
* unique memory id is going to be used in the function module to import
* the material list into the internal table i_e_t_data defined there.
  EXPORT it_mara = it_mara[] TO MEMORY ID 'ZMATERIALLISTTOBWFM'.

ENDIF.

That is all we do in the report code. Now it is time to create a function module for our datasource. What we need is a function group that includes LRSAXD01 in its TOP include. I will not explain in detail how to create a function module for BW; the Import tab, Tables tab, and Exceptions of the function module we created are shown in the screenshots below.
[Screenshots: Import tab, Tables tab, and Exceptions of function module ZBW_MATERIAL_LIST]
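
For reference, here is a sketch of the function module's interface, reconstructed from the parameters used in the source code below and modeled on SAP's example extractor RSAX_BIW_GET_DATA_SIMPLE (the parameter types and OPTIONAL flags are therefore assumptions):

FUNCTION zbw_material_list.
*"  IMPORTING
*"     VALUE(I_REQUNR)   TYPE SRSC_S_IF_SIMPLE-REQUNR
*"     VALUE(I_DSOURCE)  TYPE SRSC_S_IF_SIMPLE-DSOURCE  OPTIONAL
*"     VALUE(I_MAXSIZE)  TYPE SRSC_S_IF_SIMPLE-MAXSIZE  OPTIONAL
*"     VALUE(I_INITFLAG) TYPE SRSC_S_IF_SIMPLE-INITFLAG OPTIONAL
*"  TABLES
*"     I_T_SELECT TYPE SRSC_S_IF_SIMPLE-T_SELECT OPTIONAL
*"     I_T_FIELDS TYPE SRSC_S_IF_SIMPLE-T_FIELDS OPTIONAL
*"     E_T_DATA   STRUCTURE ZMATERIAL OPTIONAL
*"  EXCEPTIONS
*"     NO_MORE_DATA
*"     ERROR_PASSED_TO_MESS_HANDLER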
And the source code is:


  DATA: l_s_select TYPE srsc_s_select.

  STATICS: s_s_if              TYPE srsc_s_if_simple,
           s_counter_datapakid TYPE sytabix,
           i_e_t_data          TYPE zmaterial OCCURS 0 WITH HEADER LINE.

  DATA: ls_rsselect TYPE rsselect.

* We need to define pflag here, too.
  DATA: pflag(1).

* This part is standard for all BW extraction functions. lr_mtart is
* defined to enable the material type selection. 'ZBW_MATLIST' is the
* name of the datasource we will define.
  RANGES: lr_mtart FOR zmaterial-matnr.

  IF i_initflag = sbiwa_c_flag_on.
    CASE i_dsource.
      WHEN 'ZBW_MATLIST'.
        s_s_if-t_select[] = i_t_select[].
      WHEN OTHERS.
        log_write 'E' 'R3' '009' i_dsource ' '.
        RAISE error_passed_to_mess_handler.
    ENDCASE.
    s_s_if-requnr  = i_requnr.
    s_s_if-dsource = i_dsource.
    s_s_if-maxsize = i_maxsize.
    APPEND LINES OF i_t_select TO s_s_if-t_select.
    APPEND LINES OF i_t_fields TO s_s_if-t_fields.
    s_counter_datapakid = 0.
  ELSE.
    IF s_counter_datapakid = 0.
      REFRESH: lr_mtart.
      LOOP AT s_s_if-t_select INTO ls_rsselect.
        CASE ls_rsselect-fieldnm.
          WHEN 'MTART'.
            MOVE-CORRESPONDING ls_rsselect TO lr_mtart.
            APPEND lr_mtart.
        ENDCASE.
      ENDLOOP.

* In this part, we do everything necessary for loading the data. As an
* initial step, we need to send the value of our flag to the report code
* so that it can export the data to this function. Remember that we use
* the same memory id in the report code to import the value of pflag.
* With this EXPORT statement, the value 'X' of pflag is written to a
* memory area called 'ZFLAGFROMBWFM'. The IMPORT statement has to be
* called from the report code with the same memory id to get the value
* of pflag.
      pflag = 'X'.
      EXPORT pflag TO MEMORY ID 'ZFLAGFROMBWFM'.

* Now the value of pflag is available to the program ZMATERIALLIST. When
* we submit, the report code is called. We also need to send the required
* selections in this SUBMIT command. If no selection is required for the
* BW data upload, we can remove the assignment of the material type here,
* but in that case we need to adapt the selection statement in the report
* code.
      SUBMIT zmateriallist WITH mtart = lr_mtart-low AND RETURN.

* And as the final step, we import it_mara from the ZMATERIALLIST program
* into our internal table i_e_t_data with the unique memory id
* 'ZMATERIALLISTTOBWFM'.
      IMPORT it_mara = i_e_t_data[] FROM MEMORY ID 'ZMATERIALLISTTOBWFM'.
    ENDIF.

    IF i_e_t_data[] IS INITIAL.
      RAISE no_more_data.
    ENDIF.

* Hand the data over to BW in packages of at most i_maxsize rows.
    DO i_maxsize TIMES.
      s_counter_datapakid = s_counter_datapakid + 1.
      READ TABLE i_e_t_data INDEX s_counter_datapakid.
      IF sy-subrc NE 0.
        CLEAR i_e_t_data[].
        EXIT.
      ELSE.
        APPEND i_e_t_data TO e_t_data.
      ENDIF.
    ENDDO.
  ENDIF.



Finally, we create our datasource using extraction by function module. We use the function ZBW_MATERIAL_LIST that we created, and as the extraction structure we select ZMATERIAL, which we used in both the report code and the function module itself.
As a result, this approach uses the same code for both the ECC and BW reports, thus helping to prevent repetitive software development. One more advantage is that when a change request arrives from the users, the change is made only in the report code; no extra effort is spent on the BW side.

Aug 13, 2014

Manage big data in SAP BW more cost-effectively and with optimized performance using the SAP NetWeaver BW Near-Line Storage (NLS) rapid-deployment solution


Near-line storage is a concept in the big data space that many analysts are talking about. What is it? Near-line storage means storing data that the organization is not continuously using in an easily accessible place, so that it can quickly be made available to the user's analytical tools on demand. While it is not real-time access like with SAP HANA, it is near-line: near the tools, but not exactly "off-line." Your data is within reach in SAP NetWeaver BW Near-Line Storage and available in seconds.

On October 1, SAP released the SAP NetWeaver BW Near-Line Storage rapid-deployment solution. Based on best practices, this rapid-deployment solution helps to implement NLS in a really short timeframe. We're talking about weeks.
 
Let me give you some background on the NLS solution and the rapid-deployment solution:


What's the concept of the near-line storage solution and how fast is it?

The SAP NetWeaver BW Near-Line Storage solution helps to have huge amounts of data on hand, quickly. This is valid for both types of data: current or frequently used data held in memory, as well as large volumes of rarely used data stored on disk.
 
With SAP NetWeaver BW Near-Line Storage, the users can access their data via SAP NetWeaver BW in almost real time, without any disruption in the look and feel of their reports.
"It feels like real time—that's how fast it is."

The NLS solution is a combination of the following main parts:
  
  • The native near-line storage connector in SAP NetWeaver BW. This connector enables organizations to manage, combine, and query online in-memory data from SAP HANA together with disk-based data from SAP Sybase IQ. It allows archiving online data from the SAP NetWeaver BW system and storing it in SAP Sybase IQ as a near-line store.

  • The SAP HANA database (optional). This flexible in-memory database allows you to analyze SAP NetWeaver BW data in real time, even at large volumes. Replacing the underlying database for SAP NetWeaver with SAP HANA gives you more speed, more flexibility, and an easier way of maintaining your IT landscape.

  • The SAP Sybase IQ software, which stores massive volumes of rarely used data cost-effectively on disk. It has great data compression rates (70%-90% smaller than the original input data; as a rough example, 10 TB of input data would then occupy only about 1-3 TB on disk), which will help to lower your TCO drastically through reduced data storage costs. Like SAP HANA, it is a column-oriented database management system: it stores and retrieves data by columns of attributes, a proven way to make data highly accessible for analysis.

As indicated by the word "optional" behind the SAP HANA database, the SAP NetWeaver BW Near-Line Storage solution can be implemented for traditional SAP NetWeaver BW (SAP BW) customers or for customers that are using SAP NetWeaver BW powered by SAP HANA (BW on HANA). In both cases, you will benefit from a leaner SAP BW system, which allocates less memory in your database. Considering this, you might lower the TCO for your SAP HANA database.


What are the benefits of this solution?

Besides achieving a complete data archiving strategy, you will
  • Gain a high-performance near-line storage with full query and reporting functionality
  • Reduce the total cost of ownership of your BW on HANA system by implementing a clear and scheduled data aging strategy (which reduces the storage on your HANA system)
  • Avoid further interfaces to the near-line storage, as SAP NetWeaver BW has full access to the data
  • Reduce maintenance efforts for SAP NetWeaver BW
  • Reduce system downtimes for SAP NetWeaver BW


How can SAP NetWeaver BW Near-Line Storage help you?

Are you wondering if NLS is the right solution for you? It'll be the right solution if you can answer one or more of the following questions with yes.
Do you (or your customer) have…

  • Massive amounts of data in your SAP BW system, combined with a high maintenance effort?
  • Long downtimes for SAP BW system maintenance?
  • A need to transfer data into a new data archiving landscape?
  • The intent to reduce the total cost of ownership of your SAP BW or BW on HANA system?
  • Heterogeneous system landscapes including non-SAP near-line storage solutions?
  • A need to have archived data still at hand?

If you found yourself nodding at one or more of these, NLS will be the right solution for you.


How can you implement the solution for your enterprise (or your customer)?

SAP offers the SAP NetWeaver BW Near-Line Storage rapid-deployment solution, which can be implemented in a matter of weeks. This rapid-deployment solution delivers additional content and services that guide you safely through the implementation of your data archiving strategy. A detailed step-by-step guide delivers all important assets and steps in a clear and understandable way. The rapid-deployment solution has the following main components:

  • Implementation of a data aging strategy – A document that guides you through the process of setting up a data aging strategy, with an example of a data aging strategy for sample InfoCubes in SAP BW
  • SAP Sybase IQ general configuration – A configuration guide and templates for a fast and easier installation and configuration of the SAP Sybase IQ software
  • Near-line storage configuration – A step-by-step guide that supports you through the process of configuring the SAP BW application for near-line storage with SAP Sybase IQ


Do you have the right skill set to implement the solution?

The typical skill set needed to implement the solution is SAP BW knowledge plus knowledge of SAP Sybase IQ (administration).