
Emerging Best Practices for SAP BW HANA Data Modeling Dr Berg

Transcript
Emerging best practices for SAP BW HANA data modeling & reporting: Q&A with Dr. Bjarne Berg (transcript)
by Dr. Berg and Molly Brien
August 6, 2013

The SAP HANA data model is still evolving. What's in store for InfoCubes, and what's the status of the LSA architecture? In the future, will we see more reporting directly from SAP ERP on SAP HANA? Comerit's Dr. Bjarne Berg took questions from our audience to discuss the emerging best practices for SAP BW on HANA data models and reporting.

Dr. Berg is an author, consultant, professor, and long-time speaker at our BI conferences. He is presenting SAPinsider's upcoming SAP BI seminar on dashboards and reporting, and is co-author, with IBM's Penny Silvia, of SAP HANA: An Introduction (2nd edition).

Molly Folan, SAPinsider Events: Welcome, Dr. Berg!

Dr. Berg: Before we start today's session, let me give some reflections on the need for BW InfoCubes.

Do We Need InfoCubes?

Currently, there is significant debate on Internet blogs and forums concerning whether InfoCubes are needed with an SAP HANA system. However, for the interim period, InfoCubes are needed for several reasons. First, transactional InfoCubes are needed for Integrated Planning (IP) and write-back options. InfoCubes are also needed to store and manage noncumulative key figures, and the direct write interface (RSDRI) only works for InfoCubes. In addition, the transition from SAP NetWeaver BW to SAP HANA is simplified by allowing customers to move to the new platform without having to rewrite application logic, queries, MultiProviders, and data transformations from DSOs to InfoCubes.

However, the continued use of InfoCubes has to be questioned. The introduction of the star schema, snowflakes, and other dimensional data modeling (DDM) techniques in the 1990s reduced costly table joins in relational databases, while avoiding the redundancy of data stored in first normal form (1NF) in operational data stores (ODSs).
The removal of the relational database from SAP HANA's in-memory processing makes most of the benefits of DDM moot, and continued use of these structures is questionable. In the future, we may see multilayered DSOs with different data retention and granularity instead. But, for now, InfoCubes will serve a transitional data storage role for most companies.

Thanks, Dr. Berg

Mariano Filiberto: Dear Dr. Berg, thanks for the opportunity to discuss this topic! It was a pity that it was not included in the SAP BI conference. It is a major topic that deserves more than one hour, but let's start somewhere. I am looking forward to your view/update on the evolution of the LSA model/framework. Some generic questions:
- LSA++?
- Is the distinction between a data warehouse layer and a reporting layer still valid in BW on HANA?
- How should we deal with the impact of BW on HANA on an LSA project already running?
- Reporting on BW, or directly on ECC/CRM on HANA?
More detailed questions to come.

Dr. Berg: Hi Mariano, many organizations have created a Layered Scalable Architecture (LSA) in their SAP BW systems. This is mainly done to partition and isolate the data: to preserve the original data, isolate transformations, scale the system using partitioning, provide performance for queries, and allow companies to create 'corporate memory' areas of old data that can be offloaded to technologies like NLS.

While many of these concepts are still valid, there is significant overhead to this approach. First, when creating the system as part of a project, it includes many layers and takes a very long time to develop. I recently worked on moving a 40 TB BW system to HANA that was using LSA. Almost all data flows (85) had all LSA layers and were also partitioned into 7 geographical areas. This resulted in over 40 InfoProviders for most data flows (3,000+ overall). It took forever to develop any new content using this approach.
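To make the multiplication effect concrete, here is a back-of-the-envelope sketch of how layers times partitions inflate the object count. The split between partitioned and shared layers is invented for illustration; the transcript reports only rough totals (over 40 InfoProviders per flow, 3,000+ overall).

```python
# Back-of-the-envelope sketch of how LSA layers multiplied by geographic
# partitions inflate the InfoProvider count. The partitioned/shared layer
# split below is an assumption for illustration, not from the transcript.

LSA_LAYERS = 7      # traditional LSA layers per data flow
REGIONS = 7         # geographical partitions in the example system
DATA_FLOWS = 85     # data flows in the 40 TB BW system

# Assumption: 6 of the 7 layers are partitioned by region, 1 is shared.
per_flow = (LSA_LAYERS - 1) * REGIONS + 1
total = per_flow * DATA_FLOWS

# LSA++ collapses to 3 layers (assume only 1 is still partitioned by region).
lsa_pp_per_flow = 1 * REGIONS + 2
removed_per_flow = per_flow - lsa_pp_per_flow

print(per_flow)                        # 43 InfoProviders per data flow
print(total)                           # 3655 system-wide
print(removed_per_flow)                # 34 removed per flow under LSA++
print(removed_per_flow * DATA_FLOWS)   # 2890 removed system-wide
```

The real system would not match this invented split exactly, but the arithmetic shows why a 7-to-3 layer collapse removes the large majority of InfoProviders, and why every schema change previously fanned out across dozens of objects.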
Secondly, any change, such as adding a field to existing data flows, required 40 changes to the InfoProviders and the same number of changes to all the data loads between the layers. It was simply not sustainable from a TCO standpoint.

With LSA++, we are removing layers and simplifying the architecture to take advantage of the inherent performance of HANA. For example, in the BW to HANA migration above, we could remove 31 of the 40 InfoProviders per data flow and almost 2,500 InfoProviders in the whole system. So, there are significant benefits in terms of new development, but also in the total cost of ownership of the existing architecture.

While not required, I recommend that almost all companies plan for a gradual changeover to the LSA++ simplified model with 3 layers instead of the traditional 7, and that partitioning of InfoCubes is done only for those that are extremely large, have very high load frequency, or can be offloaded to NLS. This is not easy, requires a lot of work, and should probably be done post-migration to HANA for most companies. But it allows for much simpler systems that take full advantage of HANA. You can read more on traditional LSA in Juergen Haupt's great blog, and see a great presentation by Juergen on BW on HANA LSA++.

Thanks, Dr. Berg

Mariano Filiberto: Dear Dr. Berg, thanks for your answer. If I understood you correctly (and I also read this somewhere on your site), first you need a technical migration, and then, in order to take full advantage of HANA, a functional migration. In the example that you mentioned (a big company with a full LSA, 3,000 cubes and more), it means that first you need to put a lot of money into the technical part (on your current database), and only when the functional part is completely done (after the conversion from LSA to LSA++) is a lot of memory freed up. Then how do you justify all the money put into HANA capacity for BW that is no longer used after the functional project?
(Answer from SAP -> for future growth; answer from senior management -> how much money, time, and ROI?) Is there any approach for dealing with this chicken-and-egg situation, given how expensive HANA is? Any methodology or best practice to follow? Or your advice? Kind regards, Mariano

Dr. Berg: Hi Mariano, great question... What we do first is a BW clean-up. This includes a 12-step program with lots of removal of non-needed data and temp tables in BW (see chapter 5 in my HANA book from SAP Press for all the transactions and a step-by-step approach). After we have a smaller system (normally 20-30% smaller), we then implement NLS to keep the volume down and save thousands in hardware and licensing costs. For example, the 40 TB HANA migration I mentioned above has another 72 TB of data on NLS. Only the 'smaller' system is moved. However, as you correctly observed, if the LSA++ remodeling were done prior to the move of BW to HANA, you could probably save even more...

And yes, getting a BW HANA project off the ground in large companies can be challenging. The planning steps can be many, and you often have to convince the organization that the system can actually be migrated. We created a short movie that shows the 5 steps for moving BW to HANA and published it this spring (14,000+ people have seen it so far). It shows the most critical pre-steps that you should take before you start your project, and also demonstrates some of the key tools available to you from SAP to find out what else needs to be done. It is very useful for those in the planning stages of a BW to HANA migration. You can read the blog and view the demo here.

Mariano Filiberto: But in order to convert to LSA++, I need to be on HANA first. LSA++ is the revamp of LSA for BW on HANA -> meaning I need to be on HANA first to be able to reduce from 7 to 3 layers -> again, chicken and egg.
If I understood your answer correctly, the steps are:
- BW cleanup (12 steps + chapter 5 of your HANA book)
- NLS (if required)
- Technical migration to HANA
- Functional migration -> conversion from LSA to LSA++ (use of HANA-optimized DSOs and, if required, HANA-optimized cubes)
...and after you see what you really need, pay SAP the license cost of BW on HANA :) (the last step is also optional) Kind regards, Mariano

Dr. Berg: Hi Mariano, that is funny :-) ... But you can actually start the LSA simplification in the Oracle DB for BW before the migration starts. Some are actually copying the BW system, keeping a cloned set of the delta queues (PCA tool), then modifying the LSA in the copy, and finally migrating the copy. This is very unusual, but technically possible if you need to save some HANA bucks :-) Thanks, Dr. Berg

Jeyakumar Gopalan: Hello Dr. Berg, first of all, I am a huge fan of yours. I always follow your lr.edu homepage whenever I get a chance. I have a couple of questions for you: 1. We are using a snapshot scenario for our materials movement inventory reports in BW. What are the possibilities for implementing a snapshot scenario ideally in HANA? 2. It would be great if we had an RDS on Inventory Management for BW. Any idea when it will be GA? We have noncumulative key figures sitting around the inventory InfoCube, and so I would like to see how HANA is addressing this. Thanks, Jeyakumar Gopalan

Dr. Berg: Hi Jeyakumar, yes, you have to plan a little around noncumulative key figures in cubes like inventory. Because SAP HANA loads the initial noncumulative, delta, and historical transactions separately, two DTPs are required for InfoCubes with noncumulative key figures (i.e., inventory cubes). In this case, one DTP is required to initialize the noncumulative data, and one is required to load delta and historical transactions (for more details, see SAP Notes 1548125 and 1558791).
Also, traditional InfoCubes with noncumulative key figures can only be converted to SAP HANA-optimized InfoCubes if they aren't included in a 3.x data flow.
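The two-DTP requirement follows from what a noncumulative key figure is: a reference point ("marker") plus signed movements, rather than a value that can simply be summed over time. A minimal sketch of the idea in Python, with invented data and a simplified opening-balance marker (BW actually keeps the marker at the most recent reference point and calculates backwards):

```python
# Minimal sketch of a noncumulative key figure such as stock quantity:
# a marker value plus signed movements, evaluated as of a date. The
# material numbers and quantities below are invented for illustration.
from datetime import date

initial_stock = {"MAT-001": 120}        # marker load (first DTP: initialization)

movements = [                           # delta/historical load (second DTP)
    (date(2013, 7, 1), "MAT-001", -20), # goods issue
    (date(2013, 7, 5), "MAT-001", +50), # goods receipt
    (date(2013, 7, 9), "MAT-001", -15), # goods issue
]

def stock_on(material: str, as_of: date) -> int:
    """Stock as of a date = marker value + all movements up to that date."""
    delta = sum(qty for d, mat, qty in movements
                if mat == material and d <= as_of)
    return initial_stock[material] + delta

print(stock_on("MAT-001", date(2013, 7, 6)))   # 120 - 20 + 50 = 150
print(stock_on("MAT-001", date(2013, 7, 31)))  # 120 - 20 + 50 - 15 = 135
```

Because the marker and the movements are semantically different loads, mixing them in one data transfer would corrupt the reference point; hence one DTP to initialize the noncumulative and a separate one for delta and historical transactions.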