Mobile Product Catalog


Last week, in Belgium, we had an event that could be considered a mini TechEd. Of course, it's not called TechEd here, as it is not organised by SAP but by the SapIence commission (a user group supported by SAP and partners).

In any case, it's the biggest SAP technology focused event we have, and it attracted quite a crowd. We at EoZen make it a point of honour to be present every year. We put quite some effort into creating demo applications and do not want to hold out on you.

Mobile Product Catalog

The Mobile Product Catalog combines the availability and user experience of your mobile device with the power of an SAP backend.
Your product data, pictures and attachments may come directly from your ERP, SRM, MDM, or CRM server and are stored locally on your device. This allows for quick access even when no connection is available.
Later versions will also feature direct sales order creation, availability features, even better user experience, and many more...

We created a small video (for back-up purposes) which we would like to share with you (note: the video has no sound). The actual demo was done live, and repeated several times at the bar on our own phone.

Functionalities which we show:

  • preferences: set username and password (the next version will also allow you to set the categories to refresh and the service URLs)
  • Offline storage: first access of the catalog shows the products which were available in the offline storage
  • triggering of synchronization
  • browsing through the catalog (the next version will provide search functionality as well)
  • details of a product:
      • Header details: fully flexible and dynamic. Just change the backend service to add fields, and the frontend will automatically generate them on the screen.
      • Pictures: attached to the material in the document management system
      • Attachments: non-picture attachments to the material in the backend system
  • Gestures to switch between screens

Video has no sound:

Technological remarks

Note that this application is actually running on an android device, which is not yet supported by Sybase. Therefore, the entire synchronization and storage layer is custom development.

The entire application took us about 15 man-days, spread out over several evenings and weekends, making this quite a remarkable effort.

Attachments and pictures are housed in the SAP document management system (a too-often neglected piece of functionality).

The services used for data retrieval are custom services built on an ERP system. However, they can just as well be implemented on a CRM, SRM or even an MDM system, making this a very flexible application.

How MDM PI adapter works

This blog explains how the MDM-PI adapter works. In 2007 I worked on integrating MDM (5.5 SP04) with a database using SAP XI. At that time there was no MDM-PI adapter, so I used the File adapter to place files on an FTP server; the MDM Import Server then imported the data from FTP and updated the MDM tables.

In the reverse direction, the MDM Syndicator syndicated XML files to the FTP server, from where the File adapter picked up the data and passed it on to the target system.

Now, however, if you are integrating MDM 7.1 with another system using SAP PI (7.0 or 7.1), you no longer need the File adapter: you can interact with MDM directly through the MDM-PI adapter.


The supported releases are:

  • SAP NetWeaver MDM 7.1
  • PI 7.0 (SP15 and higher)
  • PI 7.1 (SP07 and higher) or PI 7.11 (SP01 and higher)

I am not going to explain how to configure the MDM-PI adapter from the PI side, as it is quite simple.

Refer to the link below, which explains the MDM adapter configuration and deployment process in detail.

The MDM PI adapter uses the MDM Java API ports functionality to receive and send messages to MDM, and it supports only asynchronous communication (QoS: EO and EOIO).

1)     When you send data to the MDM system using the MDM PI adapter, it does not update the MDM tables directly. It first places the data in the MDM Ready folder and then calls the MDM Master Data Import Server (MDIS). MDIS picks up this file and runs the import map for it; the import map is responsible for importing the file into the MDM tables.

2)     In this process the data is updated in the MDM tables. Once the import completes successfully, MDIS sends an acknowledgment back to the MDM adapter saying the import was successful. If there is an error, MDIS reports it to the MDM adapter, and you will see it in the communication channel monitoring of PI.

3)     MDM imports the message independently using the MDM Import Server (MDIS). Once the import is completed, a processed import event is triggered.

4)     The MDM PI Adapter captures the processed import event and updates the PI monitoring system with the message import status.

5)     When the incoming message is marked with a request for an application ACK (acknowledgement), an acknowledgement message is sent back to the sender system (for example, when the incoming message is an IDoc).

6)     When you use the MDM PI adapter in sender mode to send data to PI, the data is available as XML file(s) in the corresponding port folder.

7)     When syndication completes, a syndication processed event is triggered. The MDM PI adapter captures this event and retrieves the data using the MDM Java API ports functionality.
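Taken together, steps 1 to 5 describe a receive-side pipeline: PI hands the file to the Ready folder, MDIS imports it, and an acknowledgement travels back. A minimal sketch of that sequence follows; the stage names are our own invention for illustration, not official MDM port statuses.

```python
from dataclasses import dataclass, field

# Illustrative stages a message passes through on the MDM receive side
# (names are ours, not official MDM statuses).
FLOW = ["received", "ready_folder", "mdis_import", "imported", "acknowledged"]

@dataclass
class Message:
    msg_id: str
    state: str = "received"
    history: list = field(default_factory=list)

    def advance(self) -> str:
        """Move the message to the next stage and record the old one,
        mimicking the PI adapter -> Ready folder -> MDIS -> ack sequence."""
        i = FLOW.index(self.state)
        if i < len(FLOW) - 1:
            self.history.append(self.state)
            self.state = FLOW[i + 1]
        return self.state

msg = Message("0001")
while msg.state != "acknowledged":
    msg.advance()
print(msg.history)  # ['received', 'ready_folder', 'mdis_import', 'imported']
```

The point of the sketch is that the acknowledgement only exists because every hand-off before it completed; an error at any stage surfaces in PI's communication channel monitoring instead.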

MDM Error Handling:

1) Suppose the MDM PI adapter delivered the data to the Ready folder, but the data was not updated in the MDM tables. From a PI consultant's point of view, our job is done; from an MDM point of view, there is a problem with the import server configuration. To identify it, the MDM consultant has to look into the import server configuration file.

2) A file sits in the "Ready" folder, waiting to be picked up by MDIS and imported into MDM. If the port stays in this status for a long time (generally more than 1-2 minutes), MDIS is not picking up the file and there is an issue with the MDIS server. Ask your MDM Basis team to look into this.

3) If the status says "Blocked", MDIS tried to pick up the file and run the import, but the XML schema of the incoming file (which you sent from PI) does not match the XML schema of the import map. In this case you need to either process the file manually (via Import Manager) or delete it and send a correct file from PI.
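The three troubleshooting cases above amount to a small decision table keyed on the port status and waiting time. A hypothetical helper making that triage explicit (the status strings and the rough 1-2 minute threshold come from the text; the function and its wording are illustrative):

```python
def triage_port(status: str, minutes_waiting: float = 0.0) -> str:
    """Suggest a next step for an MDM port, following the three
    troubleshooting rules described above."""
    if status == "Ready" and minutes_waiting > 2:
        # Case 2: file stuck in Ready folder beyond the usual window.
        return "MDIS is not picking up the file: check the MDIS server"
    if status == "Ready":
        return "Waiting for MDIS: no action yet"
    if status == "Blocked":
        # Case 3: schema of the incoming file does not match the import map.
        return ("Schema mismatch with the import map: process the file manually "
                "in Import Manager or resend a correct file from PI")
    # Case 1 and anything else: investigate from the MDM side.
    return "Unknown status: check the import server configuration and PI channel monitoring"
```

In practice the status comes from the MDM Console rather than from code; the function only encodes who acts next (PI consultant vs. MDM consultant vs. Basis).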

MDM-PI adapter Advantages:

1) The message flow between PI and MDM can be monitored end to end (confirming that messages have been received and properly updated into MDM).

2) Error handling is improved, because the communication channel in PI points to one specific repository port.

3) Messages can be restarted if a service is unavailable.

4) MDM Adapter can be used instead of the File adapter.

5) Higher reliability of data transfer.

6) It can handle large data volumes.


This year Sapphire 2011 in Orlando, FL will be a blast. In order to get started with the event in the right way, SAP is conducting 2 great workshops on Sunday before the actual Conference:

  1. Platform Innovation with SAP
    Learn from the Experts How SAP Technology Enables You to Unlock the Potential for Innovation
    This session will be hosted by Darren Crowder and Mike Stambeck.
    A few years ago, Darren, others, and I developed SAP's SOA Reference Architecture - so I will make sure to catch up with him. If you have never attended one of his whiteboarding/flipchart workshops, you should definitely attend.
    Here is the abstract of the workshop:
    Want to get the most out of your investment in SAP software while supporting strategic initiatives and leading your company’s march to more flexible target architectures? Join SAP experts to go deep into release 7.3 of the SAP NetWeaver technology platform and the rest of our technology offerings, all designed to integrate closely with your SAP application landscape and third-party systems. SAP product gurus will provide breakthrough insights into how to extend your business application architecture successfully by taking a service-oriented approach. We walk through SAP solutions that use service-oriented architecture (SOA), including recommendations on governance, Enterprise Services Repository, reference architecture, human and system process orchestration, enterprise business rules, and deployment. Each topic includes detailed, real-life examples of use cases that our master practitioners have personally implemented in partnership with SAP customers in a variety of industries.
  2. Master Data Governance
    Learn How to Tame the Beast of Inconsistent Data While Leveraging Your Investment in SAP Business Suite Software
    This session will be hosted by Vikas Lodha & Simer Grewal. Vikas conducted one of the top sessions at last year's TechEd Technology Roadmap Pre-Conference. So if you want to talk to some of the SAP experts around MDM/EIM, make sure to attend this session.

    Session Abstract:
    Lock your horns with SAP’s national leaders of data management to learn how the SAP Master Data Governance application, the newest solution in the master data management portfolio, is designed and what’s going on behind the scenes. Ever since SAP Master Data Governance went into ramp-up in December 2010, we have had an enthusiastic and overwhelming response from customers of the SAP ERP application. In this workshop, you learn how master data governance approaches the data management problem within the SAP software landscape with the utmost ease and efficiency. White board–led sessions provide insight into all aspects of the solution (data model, security, workflow, data replication, extensibility, and customization). The workshop equips you with more knowledge to continue your master data journey with SAP Master Data Governance and start your projects off on the right foot. This session will be led by Vikas Lodha and Simer Grewal from the enterprise information management (EIM) center of excellence and product management team.

To register for the workshop, please e-mail your inquiry to
Please include full contact information and indicate which track you want to attend.

SAP Enterprise Information Management Now on Twitter

Enterprise Information Management (EIM) is a multi-faceted strategy about treating information and data as an enterprise asset. The top-line and bottom-line benefits of trusted information materialize in reliable company-wide analytics leading to the right business decisions, streamlined operational business processes and transactions, as well as enterprise-wide data governance and compliance. This also includes, for example, disciplines like information governance, master data management, data archiving, system decommissioning, retention management and content management.

Ina Mutschelknaus has exemplified the great impact of EIM in her recent SDN blog.

Get in Touch. Stay Connected

To reflect the relevance of EIM, we have broadened our social communication channels and started to feature the main capabilities on Twitter. To stay up to date with what's going on in SAP's key EIM domains, you can follow these new SAP-run EIM Twitter accounts:


Group, sharing information about SAP solutions for master data management and governance (MDM)

Group, sharing information about SAP solutions for information lifecycle management (ILM), e.g. archiving, system decommissioning, retention management

Group, sharing information about SAP solutions for enterprise content management (ECM)

Group, sharing information about SAP BusinessObjects Enterprise Information Management (EIM) solutions for Data Services, Data Quality, Data Migration and Information Governance

I recently spotted a Bob Dylan tour poster saying "Don't You Dare Miss It!". This holds true here as well. So take this chance to also follow SAP EIM on Twitter.

Extending SAP Master Data Governance for Supplier – Part 2

Applies to:

ERP 6 EhP5 – Master Data Governance for Supplier


SAP Master Data Governance provides an out-of-the-box solution for the central management of various master data objects such as financial objects, supplier and material. But SAP Master Data Governance also provides the flexibility to customize the solution in cases where the predelivered content does not fully match customer requirements. This article series provides an introduction to extensibility for SAP Master Data Governance for Supplier. The first article focused on extending the data model. The following articles build on the extended data model and explain enhancements to the SAP Master Data Governance UIs, processes, validations and derivations, data replication, Enterprise Search and print output.


Lars Rueter  
Company :    SAP AG, Germany   
Created on:    4 March 2011
Author(s) Bio
Mr. Rüter works at SAP in the area of Master Data Management. In his 11 years at SAP he has held different positions in Asia Pacific and EMEA. He has extensive experience with SAP's Master Data Management product portfolio, Java development and SAP NetWeaver Portal. As a consultant and solution architect, Mr. Rüter has been involved in a large number of SAP implementations worldwide.


  • Part 1: Data Modeling
  • Part 2: UI Configuration
  • Part 3: Process Modeling
  • Part 4: Validation and Derivation
  • Part 5: Data Replication
  • Part 6: Enterprise Search (ES)
  • Part 7: Print Output

This article covers part 2.

UI Configuration

The UI is configured with the Floorplan Manager. The Floorplan Manager (FPM) is a Web Dynpro ABAP application that provides a framework for developing new Web Dynpro ABAP application interfaces consistent with SAP UI guidelines.

An FPM application is composed of a number of different Web Dynpro components (most of which are instantiated dynamically at runtime). However, the following two components are always present:

  • A floorplan-specific component (FPM_GAF_COMPONENT or FPM_OIF_COMPONENT)
  • A component for the Header Area (FPM_IDR_COMPONENT)

In simple terms, the configuration of an FPM application is the configuration of these two components.

Floorplan Manager Configuration

It is recommended to make your changes only in the Floorplan Manager customizing layer of the SAP-delivered UI configuration. This section gives step-by-step instructions on how to do this.

In addition to the approach outlined below, there are at least two other ways to make UI changes: you can either copy an SAP-delivered UI configuration and change the copy, or make your changes directly in the configuration layer of an SAP-delivered UI configuration. Neither approach is recommended, for the following reasons:

If you make changes directly in the configuration layer of an SAP-delivered UI configuration, they will be overwritten whenever a new version is imported from SAP. Changes in this layer are treated as modifications. Therefore you should not make changes in the configuration layer of an SAP-delivered UI configuration.

If you copy an SAP-delivered UI configuration and change the copy, you will also have to copy the UI BAdI. The disadvantage of copying the UI BAdI is that adjustments made by SAP do not automatically flow into your code.

In this example you should therefore make your changes only in the customizing layer of an SAP-delivered UI configuration. Changes in the customizing layer are client-dependent, and the SAP-delivered original remains unchanged. The customizing data contains only the differences to the original configuration, so you are not disconnected from corrections or new features delivered by SAP in future Support Packages or releases. At runtime the changes from the customizing layer are merged into the original configuration, and the application looks the way you customized it.
Activating administrator mode
You can launch the configuration editor in customizing mode by running the application in administrator mode. You can do this either by adding the URL parameter sap-config-mode=X, or by launching the application from SE80: navigate to the application (configuration) you want to start and press Shift+F8 (or use the menu 'Web Dynpro Configuration' -> 'Test' -> 'Execute in Administration Mode').

After starting the application in administrator mode you will see a link in the upper right corner of the application (Adapt Configuration), which leads you to the configuration editor in customizing mode. The editor looks identical to configuration mode; the difference is that your changes are not stored in the original configuration file but in a customer adaptation file.
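For the URL-parameter route, appending sap-config-mode=X to an existing application URL can be sketched as follows; the Web Dynpro path used here is a made-up example, not a real application:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def with_config_mode(url: str) -> str:
    """Return the given Web Dynpro application URL with sap-config-mode=X
    added, which starts the application in administrator mode."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))   # keep any existing parameters
    query["sap-config-mode"] = "X"
    return urlunparse(parts._replace(query=urlencode(query)))

# Hypothetical example URL, for illustration only:
print(with_config_mode("https://host:8000/sap/bc/webdynpro/sap/some_app?lang=EN"))
```

In practice you would simply type the parameter into the browser's address bar; the helper just makes explicit that the parameter is merged with, not substituted for, the existing query string.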

Figure 1: Execute in Administration Mode

Figure 2: Adapt Configuration Link

To call up administrator mode, you need the S_DEVELOP or S_WDR_P13N authorization profile.

If you do not see the Adapt Configuration link in the top right hand corner of the window after following the steps above you can also try to activate the administrator mode using the steps below.

Alternative method to activate administrator mode

  • On the SAP Easy Access screen, choose System -> User Profile -> Own Data
  • On the Maintain User Profile screen, select the Parameters tab.
  • Enter FPM_CONFIG_EXPERT for the parameter ID and A for the parameter value.
  • Choose Save

Adding custom fields
After starting the Floorplan Manager Editor via the Adapt Configuration link you can start adding additional fields to the UI.

For example, to add the field ZLARS01 to the Organizational Data tab in the Address subview, follow the steps below.

  • In the main window make sure Organizational Data and Address items are selected (default)
  • In the Address subview locate the top UI Building Block (Configuration Name: MDG_BS_BP_FORM_UIBB_ADMIN)
  • In this UI Building Block, click on Configure UIBB

Figure 3: Navigate to UIBB configuration

  • You are now in the Component Configuration for MDG_BS_BP_FORM_UIBB_ADMIN
  • Click the Add Group button
  • Locate the Group Attributes section
  • In the input field Text enter Extensibility as the group name and press return
  • You have created a group Extensibility
  • Press the Add Melting Group button
  • To add fields to the Melting Group press the Configure Melting Group button
  • In the new window move the fields you want to add from the Available Fields table to the Displayed Fields table
  • When you have finished adding fields click the OK button

Figure 4: Configure Group Dialog

  • You then assign the FPM event ID USMD_ENTER to the new field. Specifying this means that the system triggers a roundtrip when a value is selected from the dropdown list box, allowing dependent UIBB fields to be changed by means of a Business Add-In (USMD_UI_EVENT2).
  • Click the Save button to save the configuration changes

Figure 5: UI Field Extension

(Optional) UI BAdI Implementation

You can override the standard processing of the UI configuration by means of the Business Add-In: Adjust User Interface for Single Processing (USMD_UI_EVENT2). This BAdI allows for extensive UI adjustments. You have options for making changes in the following areas:

  • Adjust the definition of attributes or add new attributes
  • Initialize the displayed data (when creating a new entity type, for example)
  • Restrict the values displayed in a dropdown list field or selection field group
  • Restrict the values displayed in the input help
  • Dynamically control the visibility of fields on the user interface and of the property that determines if fields are required or display-only
  • Define navigation destinations of UI elements of the type hyperlink (or pushbutton)
  • Check if the lead selection of a table may be changed

Figure 6: BAdI Adjust User Interface for Single Processing

If you want to use fields or set default values that do not exist in the data model but that are instead calculated, derived, or defaulted on the UI, you must implement a User Interface BAdI (Business Add In).

Under UI Modeling -> Business Add-Ins -> BAdI: Adjust User Interface for Single Processing (BAdI USMD_UI_EVENT2), create your own implementation.

  • In your implementation, allow the methods that you want to adjust to be inherited from the implementing class of a delivered standard implementation. If the implementing class does not allow inheritance because it is marked as FINAL, you need to create a copy of the class.
  • Deactivate the existing BAdI implementation and activate your new BAdI implementation
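The inherit-and-override guidance above can be illustrated language-neutrally: the custom implementation subclasses the delivered class and redefines only the methods it actually needs, so everything else keeps following SAP's standard behaviour. The Python classes and method names below are purely an analogy, not the real USMD_UI_EVENT2 interface:

```python
class StandardUiEventHandler:
    """Stand-in for the SAP-delivered implementing class. The method
    names are illustrative, not the actual BAdI interface."""

    def adjust_attributes(self, fields: dict) -> dict:
        return dict(fields)  # standard behaviour: leave fields unchanged

    def restrict_dropdown_values(self, values: list) -> list:
        return values        # standard behaviour: no restriction

class CustomUiEventHandler(StandardUiEventHandler):
    """Custom implementation: inherit everything from the standard class
    and override only the one method that needs adjusting, mirroring the
    'allow methods to be inherited' guidance above."""

    def adjust_attributes(self, fields: dict) -> dict:
        fields = super().adjust_attributes(fields)
        # Mark the custom field from the earlier example as required.
        fields["ZLARS01"] = {"required": True}
        return fields
```

This is also why a FINAL implementing class forces a copy instead: without inheritance there is no way to reuse the standard methods selectively.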
Copying a UI BAdI

If you copied an SAP-delivered UI configuration in a previous step, you have to copy the corresponding UI BAdI as well and change the filter value to the new UI configuration ID.

Note that the copy does not automatically contain corrections from Support Packages or SAP Notes. Therefore ensure you check the relevant SAP Notes for the implementing class and manually implement any corrections.

In this activity you:

  • Copy enhancement implementation MDG_UI_EVENT_SUPPLIER to Z_LR_MDG_UI_EVENT_SUPPLIER
  • Copy implementation class CL_MDG_BS_UI_EVENT_SUPPLIER to Z_LR_CL_MDG_BS_UI_EVENT_SUPPLI

  • In your enhancement implementation Z_LR_MDG_UI_EVENT_SUPPLIER change the implementing class from CL_MDG_BS_UI_EVENT_SUPPLIER to Z_LR_CL_MDG_BS_UI_EVENT_SUPPLI

  • In your enhancement implementation Z_LR_MDG_UI_EVENT_SUPPLIER change the filter value to the previously copied UI configuration (for example MDG_BP_SUPPLIER_APPL_RT_1) and delete any obsolete filter entries.

  • Activate your changes

In order to test if your custom UI configuration works together with your copy of the UI BAdI you have to create a new Change Request Type in customizing (activity Create Change Request Type) and set the UI configuration parameter to your previously copied UI configuration (for example MDG_BP_SUPPLIER_APPL_RT_1).

Launching the Create Supplier UI will now allow you to select your custom change request type. This in turn will invoke your custom UI Configuration and customized UI and BAdI.

Testing data model changes

To verify that your changes work, create a change request for a new business partner and note the number of the change request in the My Change Requests table. When creating the business partner, make sure you enter a value for your custom field ZSDN_CI. To activate the change request you can go through the approval workflow. Alternatively, start the Business Object Builder (transaction SWO1), enter BUS2250 in the Object Type input field, and press the Test button. In the pop-up window enter your change request number. On the following screen, execute the method ACTIVATE_2 to activate the business partner.

After the activation, check tables LFA1 and BUT000 to verify that the value of your custom field was correctly transferred to the active area.


The following parts will be made available in the future.

  • Part 3: Process Modeling
  • Part 4: Validation and Derivation
  • Part 5: Data Replication
  • Part 6: Enterprise Search (ES)
  • Part 7: Print Output

Related Content

From apples to products: semantic technologies in SAP products

The interviews with Cirrus Shakeri about semantics at SAP and with Matthias Kaiser about the relevance of semantics in software ("why apple computers don't taste sweet") hopefully gave you a rough idea of what semantics is about.

In this blog I want to talk about an SAP project called "FindGrid" that addresses some of the concepts Cirrus and Matthias talked about in their interviews, such as self-learning machines.

A few questions beforehand. Have you ever conducted research work related to markets, products or customers? And if so, have you ever wished

  • when you came across existing, similar work by others, all of the knowledge and insights were easily accessible and conveniently stored in one place?
  • you could synthesize knowledge to create credible and reliable knowledge assets e.g. as basis for decision making?
  • you could easily access key information that is likely sitting only in unknown sources somewhere?

FindGrid can help you with these tasks.

FindGrid is software designed to "build organizational memory via a collaborative, self-learning environment that enables knowledge capture across people, teams and enterprises". In other words, this software constantly learns which knowledge is available in your company and connects it with relevant processes, roles, teams and individuals. How cool is that?

This software is available as a solution in the early-adoption phase: it is currently rolled out to Colgate-Palmolive's market researchers and in pilot stage at Fujitsu, Kaeser and other SAP customers. Their feedback is flowing into the next version, which is planned to be available as a standard product by the end of 2011.

To get an introduction to the solution concept and to see this software in action you can watch the following three videos:

The Story of FindGrid

Introduction to FindGrid

An application demo with the use case "Review SAP Brand Performance"

I'm planning to interview Stephan Brand, Vice President of Corporate Functions Platform, soon. Stephan worked together with Archim Heimann (Chief Architect for Semantic Business Applications) and colleagues from SAP's Technology Innovation Platform (TIP)  to implement semantic concepts into software solving requirements in the day-to-day enterprise-work.

Stephan will share, how SAP can solve real business issues by leveraging the capabilities of semantic technologies in a very pragmatic, end-user and solution-oriented way. Also, we will discuss how SAP could use a semantic framework to bring the power of semantics to a broader variety of users and applications.

So stay tuned and watch the Technology Innovation page on SDN for news around semantics and other technology topics at SAP.

SAP Insider Article on SAP ILM 702

This SAP Insider article highlights the most important features and enhancements of the new SAP NetWeaver Information Lifecycle Management 702 release. It comes straight from the core team that conceived and developed the release, and provides a quick yet comprehensive overview of what the new product brings along.

So take a look for yourself to learn how you can benefit most from SAP ILM 702, which is in ramp-up until mid-May 2011!


Enterprise Information Management for EAM

Asset Information Complexity

Information management of Enterprise Asset Management (EAM) data in asset-intensive industries is a highly complex and dynamic problem area.  Asset-intensive companies are typically very large, run multiple facilities, are geographically dispersed and have IT systems that are not always integrated.  This creates three major sources of strain in regards to information management: 1) standardization of master data, 2) effective asset data governance and 3) optimized asset operations and maintenance.  A weakness in one of these areas impacts the ability to manage the others.

The problem becomes even more complex when considering the full lifecycle of an asset, starting with engineering design and construction, hand over to production and ongoing asset operations.  Many systems of record are used during the design and construction phases of a project, and each stores a myriad of documents and data records that are crucial to that asset’s operational use and maintenance.  Complex assets can also undergo rework for alternate uses or for enhancements that go beyond what they were originally designed for.  In these cases, information will continue to flow from information sources that typically live outside of the boundaries of ERP and operational asset management systems.

The collection, aggregation and governance of this complex asset information are also impacted by a large range of user types.  IT is involved from a data quality and master data management perspective while system experts in ERP leverage the power inherent in complete end-to-end business processes.  But business users that may have little involvement with the core ERP systems also play a large role.  Maintenance workers that perform inspection rounds or experts with years of experience in specific types of machinery all contribute to the overall definition and understanding of an asset.  Therefore, information management systems for asset data must include technical data management tools as well as the business workflows and industry-specific user interfaces that allow all roles to participate in the information management process.


Unique Customer Requirements for Success

When considering the range of IT solutions for attacking this problem, the business must weigh factors such as the current state of business processes and IT systems in use as well as what the optimized or end-goal processes and systems should look like.  Is the business simply focused on data migration between applications or are ongoing asset operations impacted by the solutions that remain in place?  Will the resulting IT systems be centralized or will the processes surrounding asset data governance need to include “continual data migration” and integration between disparate systems?  Is the information problem purely an IT exercise or will business end-users play a role in the ongoing collection and infusion of critical asset information?  Questions such as these can muddy the water in terms of what an optimal solution will look like and what combination of IT solutions will meet the overall goals of the business.

In a follow-on post I will discuss technologies to consider for solving these complex issues.  I am also very interested in your own experiences with the topic so drop me a line or post a reply!

A lesson in information as an enterprise asset from the airline industry

All you travelers out there have undoubtedly heard about Southwest Airlines' issue last Friday: a Boeing 737-300 experienced a ripped fuselage shortly after take-off. Southwest has voluntarily cancelled at least 600 flights since Friday's emergency landing. Photo courtesy of Ross D. Franklin/AP.

After further inspection and information analysis, the "Federal Aviation Administration (FAA) said it will issue an emergency directive Tuesday to require operators of certain Boeing 737-300, -400 and -500 models that have accumulated more than 30,000 takeoff-and-landing cycles to conduct electromagnetic inspections for early signs of incipient fatigue damage. Those inspections must be repeated at intervals not more than 500 cycles." (See the Seattle Times article here.)

Imagine the information governance that the FAA, Boeing, and the individual airlines must have in place. First, they must (very quickly!) identify the causal issue, based on detailed maintenance logs, manufacturer data, and even the specific manufacturing processes involved.

Next, they must (just as quickly) do some predictive analytics to find where similar problems may be lurking to stop a disaster before it occurs. These predictive analytics had to be run across the individual airlines, and take into account multiple parameters: manufacturer, manufacturing process, model numbers, inspection logs, and more.

As if that wasn't enough, Southwest Airlines alone had to cancel flights, notify passengers, move aircraft around, and rebook at least 600 flights. Pronto. One example follows:

"Last night, another Southwest flight was diverted. The flight, headed from Oakland, Calif., to San Diego, Calif. made an emergency landing because of a burning electrical smell. "

To track all of this information and react quickly, these organizations had to, in advance, clearly identify critical data elements that they needed to track (maintenance logs, manufacturing processes, etc.). They also had to document their information policies: how long to keep information, how to analyze information in maintenance logs, service level agreements (SLAs) for notifying passengers of flight changes, and so on.

I can't overstate how the business processes above absolutely rely on clean, consistent information. If a model number is not stored according to the defined data standard, quick access to that aircraft's records would not happen. The risk of NOT applying information governance principles is well known and unacceptably high.

Now think about your business. Which information elements are absolutely critical to your business? Which business processes depend on those information elements being of high quality? What does "high quality" mean to each individual business process? Which elements must be dealt with first? Who owns understanding the ramifications of poor governance and driving the information governance program forward? Do you have an accurate assessment of where you are now? Could you quickly use predictive analytic techniques to proactively avoid large problems? Of course, SAP tools and services can help you with these tasks. Check out our Enterprise Information Management Suite, and throw in a healthy dash of SAP BusinessObjects Business Intelligence (for your predictive analytics), and SAP NetWeaver Business Process Management (for your business processes).

Above all, get your organization talking about how quality, timely information is critical to business function, value, and risk reduction. If independent agencies, government contractors, and the federal government can do it, it's hard for you to say "it's too hard." :-)


How to get started with Information Governance

By now, you’ve heard the phrase “data as a strategic asset” or “enterprise data assets”. On the surface, this sounds really good. But, seriously, how many of you know what it *means*? Most corporations are just scratching the surface of which practices to put in place to actually support data as an enterprise asset. Enter information governance.

Before we get to a definition, let’s talk about why you should care. If your company has any of the following characteristics, then governing your information is key to mitigating costs (operational and fines) and optimizing value:

  • Legal requirements
  • Regulatory and standards compliance
  • Information-heavy projects like business process engineering, predictive analytics, and strategic initiatives.

By following the approaches outlined below, you can deliver information-heavy projects both on-time and without excessive rework. AND you can use these new-found skills for your next information project, which reduces cost.  

Information governance is a management process that provides oversight to ensure successful execution of your information initiatives. It involves aligning people, information assets, and processes with policies, standards, and objectives, including the ongoing oversight necessary to manage data risks.

It can be applied to one project, can be started with one data domain, and should be extended to the enterprise.

Information Governance framework

In this article, let's talk about starting on a single, small project and then extending your skills to a larger, more strategic project.

First, how do you choose the project? Ideally, this is a project with tangible pain. A good place to start is in the mailroom. Your mailroom can likely quantify, both in terms of money and resources, the cost of poorly governed addresses. Work with that group to get a realistic cost number.

Step number two is to track down the business group that is feeling the pain as a result of this poorly-governed information. In the case of the mailroom, perhaps your marketing department is feeling the pain because their campaigns aren't as successful. Or perhaps your Accounts Receivable group is feeling the pain because their bills aren't getting paid in a timely manner. Talk to these groups. Find the most vocal group--who also understands the value of data--and partner with them.

With that small stakeholder group, establish an owner for the key data element. Sit down with the data owner and try to truly discover the current quality state of the applicable key data elements. SAP provides SAP BusinessObjects Information Steward for exactly that purpose.

Now that you know where you ARE, you can make more informed decisions on both (1) what "good" means for this data element; and (2) which metric to capture to show success.
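To make the idea of "which metric to capture" concrete, here is a minimal, hypothetical sketch of baselining one key data element before deciding what "good" means. The field names and sample records are invented for illustration; a real assessment would run inside a profiling tool such as SAP BusinessObjects Information Steward.

```python
# Hypothetical sketch: baseline the quality of one key data element
# (here: customer addresses) with a simple completeness metric.
# Field names and sample records are illustrative, not from any SAP system.

REQUIRED_FIELDS = ["street", "city", "postal_code"]

def completeness(records, required=REQUIRED_FIELDS):
    """Share of records in which every required field is filled."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if all(str(r.get(f, "")).strip() for f in required)
    )
    return complete / len(records)

customers = [
    {"street": "1 Main St", "city": "Springfield", "postal_code": "12345"},
    {"street": "", "city": "Shelbyville", "postal_code": "67890"},
    {"street": "9 Elm St", "city": "Ogdenville", "postal_code": ""},
    {"street": "4 Oak Ave", "city": "Capital City", "postal_code": "11111"},
]

print(f"Address completeness: {completeness(customers):.0%}")  # 50%
```

A number like this, tracked over time, is exactly the kind of success metric the stakeholder group can rally around.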

Now comes the hard part...get started!

SAP Rapid-Deployment Solution for Customer Data Integration

Unleash the combined Power of SAP NetWeaver MDM & SAP BusinessObjects Data Services

What is an SAP Rapid-Deployment Solution?

With this Rapid-Deployment Solution, SAP brings together software and services in a new offering that provides essential MDM and Data Services functionality quickly and affordably. The bundle includes the required software for the given business scenario, preconfigured business content, and predefined services to ensure a rapid deployment and implementation in only a couple of weeks.

Introduction to the RDS for CDI:

Companies suffering from sub-optimal customer relationships can now benefit from the SAP NetWeaver MDM Rapid-Deployment Solution (RDS) for customer data integration (CDI). This package includes SAP software, predefined services, and pre-configured content enabling one single view of customer information. The package helps companies to consolidate and harmonize customer data from heterogeneous systems into one single version of the truth.

Key Scenarios included:
  1. Customer Data Consolidation
  2. Collaborative Customer Data Correction
  3. Global Duplicate Check against Central Customer Hub

Overview of Scenario 1 - Customer Data Consolidation

In this key process, the RDS helps you with the following steps:

  1. Extract data from SAP and non-SAP systems using SAP BusinessObjects Data Services
  2. Increase the master data quality through certain cleansing steps, e.g. address cleansing against address directories
  3. Match data against each other to identify potential duplicates
  4. Validate against predefined data quality metrics and collect statistical information to ensure subsequent monitoring and data corrections
  5. Load data into the central customer master data repository based on SAP NetWeaver MDM

In the following:

  • A data steward interactively merges the data in MDM
  • SAP NetWeaver MDM keeps the golden record and provides the key mapping information
  • Afterwards this data can also be syndicated to connected remote systems
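The matching and survivorship steps above can be sketched in miniature. The following is an illustrative toy, not the actual Data Services / MDM matching logic: it groups records on a crude match key and keeps the most complete record per group as the golden record, remembering source keys for key mapping. All record and field names are invented.

```python
# Toy sketch of consolidation: normalize, group potential duplicates by a
# simple match key, and keep a "golden record" per group. Real matching in
# SAP BusinessObjects Data Services is far richer; names here are invented.

from collections import defaultdict

def match_key(rec):
    """Crude match key: lowercased name without punctuation + postal code."""
    name = "".join(ch for ch in rec["name"].lower() if ch.isalnum())
    return (name, rec.get("postal_code", ""))

def consolidate(records):
    groups = defaultdict(list)
    for rec in records:
        groups[match_key(rec)].append(rec)
    golden = []
    for members in groups.values():
        # Survivorship rule: the record with the most filled fields wins;
        # keep the source ids of all members for key mapping.
        best = max(members, key=lambda r: sum(1 for v in r.values() if v))
        golden.append(dict(best, source_ids=[m["id"] for m in members]))
    return golden

records = [
    {"id": "ERP-1", "name": "ACME Corp.", "postal_code": "12345", "phone": ""},
    {"id": "CRM-7", "name": "Acme Corp",  "postal_code": "12345", "phone": "555-0100"},
    {"id": "ERP-2", "name": "Widgets Ltd", "postal_code": "99999", "phone": ""},
]

for g in consolidate(records):
    print(g["name"], g["source_ids"])
```

The key-mapping list per golden record mirrors what SAP NetWeaver MDM maintains when it keeps the golden record for connected remote systems.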

Overview of Scenario 2 - Collaborative Customer Data Correction

In this key process, data that is already present in the customer master data hub is validated and any inconsistencies are flagged for correction. A line-of-business owner makes the required corrections, and the data steward then approves them to finalize the record in the central hub. All these activities are orchestrated by an SAP NetWeaver Business Process Management (BPM) workflow. Such a process can be triggered on a periodic basis.

Overview of Scenario 3 - Global Duplicate Check against Central Customer Hub

In this process, when a new record (contact, person) is to be added to an SAP CRM system, it is matched against the records in the CRM system (local check) and against the records in the MDM hub (global check). If matches are identified, the user is notified of the matching records, preventing duplicates from being created. Furthermore, the user can reuse and enhance the golden record information with local data.
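The two-tier check described above can be sketched as follows. This is a hedged illustration only: the stores are plain in-memory lists, whereas a real implementation would call the CRM and MDM hub matching services (e.g. via SAP BusinessObjects Data Quality Management).

```python
# Illustrative sketch of the local + global duplicate check: before creating
# a record, look for matches locally (CRM) and globally (MDM hub).
# The record stores are plain lists here; names are invented for the example.

def normalize(name):
    """Very simple normalization: lowercase, collapse whitespace."""
    return " ".join(name.lower().split())

def duplicate_check(new_name, local_records, hub_records):
    key = normalize(new_name)
    local_hits = [r for r in local_records if normalize(r) == key]
    global_hits = [r for r in hub_records if normalize(r) == key]
    return local_hits, global_hits

local = ["Jane Doe", "John Smith"]          # records in the CRM system
hub = ["John  Smith", "Erika Mustermann"]   # records in the MDM hub

local_hits, global_hits = duplicate_check("john smith", local, hub)
if local_hits or global_hits:
    print("Possible duplicates:", local_hits + global_hits)
```

In the RDS scenario the user would then be offered the matching golden record for reuse instead of creating a new one.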

The main technical components involved are:
  • SAP NetWeaver MDM 7.1
  • SAP BusinessObjects Data Services XI 3.2
  • SAP NetWeaver Composition Environment 7.2 (BPM)
  • SAP BusinessObjects Data Quality Management for SAP CRM
  • SAP BusinessObjects Dashboards

For more details and information on how to obtain this RDS, see the RDS for CDI site on SAP Service Marketplace (SMP log-in required) and the RDS site on

Cross System Master Data Processes with SAP Master Data Governance

Applies to:

ERP 6 EhP5 – Master Data Governance for Supplier


This article provides implementation details for a simplified cross system Supplier On-boarding scenario leveraging SAP’s Enterprise Master Data Management portfolio consisting of SAP NetWeaver Master Data Management (available since 2004) and SAP Master Data Governance (currently in Ramp-Up). The overarching process is modeled using SAP NetWeaver Business Process Management.


Lars Rueter  
Company :    SAP AG, Germany   
Created on:    4 March 2011
Author(s) Bio
Mr. Rüter works at SAP in the area of SAP Master Data Governance. In the past 11 years at SAP he has held different positions in Asia Pacific and EMEA. He has extensive experience in SAP's Master Data Management product portfolio, Java Development and SAP NetWeaver Portal. Mr Rüter has been involved in a large number of SAP implementations worldwide.

Cross system create supplier process

In our example we will build a cross-system supplier self-service registration and approval process. A supplier registers via a website and enters some initial data such as company name, street, city and postal code. These global attributes are stored in NetWeaver MDM for further distribution to non-SAP systems. When the supplier is approved by the master data specialist, a change request is automatically generated in SAP Master Data Governance. A workflow in SAP Master Data Governance ensures that all ERP-specific attributes are maintained. After final approval in SAP Master Data Governance the new supplier is activated and distributed. After activation a notification is sent to the original requester.

Figure 1: High-level process overview

In figure 2 below you see which systems are part of the process:

  • (A) SAP Master Data Governance – maintenance and distribution of ERP specific attributes
  • (B) SAP NetWeaver MDM - maintenance and distribution of global attributes
  • (C) NetWeaver CE – process runtime, process specific UIs, process worklist, web service consumption and provisioning
  • (D) NetWeaver Developer Studio – process designtime

Figure 2: System Landscape

As mentioned above, the process was implemented using NetWeaver BPM for the design and execution of the cross-system process. But we also leverage the out-of-the-box governance process in SAP Master Data Governance for maintenance of ERP-specific attributes.

Figure 3: Technical process overview

The figure above provides a more technical process overview using the Business Process Model and Notation (BPMN) from SAP NetWeaver BPM:

  • (1) The initial supplier registration web page triggers the start web-service of the BPM process
  • (2) The global attributes from the registration web-page are used to create a new supplier record in SAP NetWeaver MDM
  • (3) In this human interaction step the new supplier is approved
  • (4) BPM calls a SAP Master Data Governance Web Service to create a change request with the initial supplier data. This also triggers a SAP Business Workflow in SAP Master Data Governance.
  • (5) This step in BPM is called an intermediate message event. The process waits for a message to come in from Master Data Governance before the flow continues. Early in the SAP Business Workflow process we have inserted a task to call BPM. In this call we transmit the ID of the change request.
  • (6) BPM uses the change request ID from SAP Master Data Governance to send an e-mail to the original requestor. The e-mail contains a link to the SAP Business Workflow log. Using this link the original requestor can monitor the status of the change request in MDG.
  • (7) After sending an e-mail the BPM process waits again for a message from SAP Master Data Governance. This time SAP Master Data Governance sends a message at the end of the SAP Business Workflow process and after the Supplier has been finally approved and activated. The message includes the final ID of the Business Partner in the primary persistence.
  • (8) The last step in the BPM process informs the original requestor that the new Business Partner has been created and activated.

Implementation Steps

Integration between NetWeaver MDM and BPM has already been sufficiently documented on SDN. In this section the focus is on the integration between SAP Master Data Governance and NetWeaver BPM. Therefore we look specifically at the three integration points numbered step 4, step 5, and step 7 in figure 3 above. In step 4 we show how the inbound communication to SAP Master Data Governance was realized using a standard Web Service. Steps 5 and 7 are technically very similar in the sense that they both use a web service client proxy to transmit process status information from the SAP Business Workflow back to SAP NetWeaver BPM.

Using the inbound Business Partner Web Service

The ESR Web Service used to create a Business Partner in our scenario is called BusinessPartnerSUITEBulkReplicateRequest_In. In order to leverage this Web Service to automatically create a change request and key mapping in SAP Master Data Governance, the method INBOUND_PROCESSING of BAdI MDG_SE_BP_BULK_REPLRQ_IN in Enhancement Spot MDG_SE_SPOT_BPBUPA has to be implemented.


METHOD if_mdg_se_bp_bulk_replrq_in~inbound_processing.

  DATA ls_user_setting TYPE mdg_user_proxy_setting.
  DATA lt_user_setting TYPE mdg_user_proxy_setting_t.
  DATA lv_crtype       TYPE mdg_sup_change_req.

  " Only process messages flagged with business scope 'BPM'
  IF in-message_header-business_scope-id-content = 'BPM'.

    " Route the incoming data through the file-upload framework (staging area)
    ls_user_setting-field_name  = 'PROXY_PERSISTANCE'.
    ls_user_setting-field_value = '1'.
    APPEND ls_user_setting TO lt_user_setting.

    " Determine the change request type configured for process SUP1
    ls_user_setting-field_name = 'SUPPLIER_CHANGE'.
    SELECT SINGLE usmd_creq_type INTO lv_crtype
      FROM usmd1601
      WHERE usmd_process = 'SUP1'.
    ls_user_setting-field_value = lv_crtype.
    APPEND ls_user_setting TO lt_user_setting.

    " Hand the settings to the MDG file-upload framework
    CALL METHOD cl_mdg_bp_bupa_si_in=>if_mdg_upload_proxy~setup_for_file_upload
      EXPORTING
        iv_instance     = 1
        it_user_setting = lt_user_setting.

  ENDIF.

ENDMETHOD.



The code first checks the scope-id element in the message header. The SAP Master Data Governance load will only continue if the scope-id element is set to BPM. The proxy implementation of the inbound service uses the context of the SAP Master Data Governance file-upload framework to determine how the incoming data has to be processed. We use the enhancement spot to set the file-upload framework context in such a way that the incoming data is stored in the SAP Master Data Governance staging area and a change request of type SUPPL01 (Create Supplier) is created. If key-mapping information was sent as part of the Web Service call, the key mapping for the new supplier will automatically be updated.

The ABAP code in the Enhancement Spot looks for the first process type SUP1 in table USMD1601 and takes the change request type from that line. In our example LRDEMO will be selected as the change request type when the web service is called (refer to table USMD1601 in figure 4 below). You may have to adapt the ABAP code to ensure your custom change request type (as defined in the section Customizing the governance process below) is correctly assigned in the Enhancement Spot.

Figure 4: Table USMD1601

An example XML document to test the web service is attached to this wiki.

Test your scenario – Inbound Web Service

You should now test whether the implementation is working. Using a Web Service test tool such as the SAP Web Service Navigator you can call the Web Service. After successful execution you should find a new change request in the POWER List (Personal Object Work Entity Repository). You can access the POWER List via the supplier role in SAP Master Data Governance.

Customizing the governance process

Your Web Service is working? Good! Your inbound connection to SAP Master Data Governance is now ready. Next we need to establish the outbound connection to NetWeaver BPM. In our example we extend the governance process for create supplier by two additional SAP Business Workflow tasks. Each of the two tasks sends a message to NetWeaver BPM.

Since we do not want to modify the SAP-delivered workflow template and change request type, we first create copies.

  • Look up the id of the workflow template for Create Supplier: Open MDG IMG activity Create Change Request Type and find the row with change request type SUPPL01 (Create Supplier). In the same row you find the workflow template id for this change request type.

  • Open the Workflow Builder (transaction swdd) and create a copy of the SAP delivered workflow template for Create Supplier (use the workflow template id from the previous step). Do not forget to save and activate your new workflow template.

  • In MDG IMG activity Create Change Request Type create a copy of the SAP delivered change request type SUPPL01 (Create Supplier). Assign the new workflow template id from the previous step to the new change request. In our example we have created a custom change request type LRDEMO which is linked to workflow template WS99900008 (Figure 5 below).

Figure 5: IMG activity Create Change Request Type

  • In MDG IMG activity Define workflow step numbers create a copy of the rows from the SAP delivered workflow template for the create supplier workflow template. In your copied rows change the workflow template id to the id of your custom workflow template.

Figure 6: IMG activity Define Workflow Step Numbers

  • To ensure the receiver determination will work for your new change request type enhance the BRF+ table GET_AGENT. Start transaction BRF+ and search for MDG as shown in figure 7 below.

Figure 7: BRF+ Search

Navigate to the GET_AGENT table as shown in figure 8. Create one row in the table for each workflow step. In column CREQUEST_TYPE enter the name of your custom change request type. In columns OTYPE and OBJID enter the object type (e.g. user or organization) and the corresponding value respectively.

Figure 8: BRF+ table get_agent

Test your scenario – custom change request type

This is a good point to test your new change request type and workflow template. From the supplier role menu in SAP Master Data Governance choose the Create Supplier menu item. You should see your custom change request type in the drop-down box of the create supplier screen. Select your custom change request type. Enter the new business partner details and approve the individual workflow steps until activation. At the end of the workflow you should have created a new Business Partner in the primary persistence. You can check by doing a search for the business partner id. You may have to change the receiver determination in BRF+ to ensure you can approve all the workflow steps.

Test your scenario – inbound web service with custom change request type

If you have confirmed that your new change request type and workflow template are working, repeat the test using the BusinessPartnerSUITEBulkReplicateRequest_In web service. Verify that after calling the web service a change request is created and the associated custom workflow is started. You can use transaction swud to find the workflow instance.

Exchanging process context information between SAP Business Workflow and NetWeaver BPM

In our example we decided to implement the exchange of process context information between NetWeaver BPM and SAP Business Workflow using asynchronous web services, in particular for exchanging the MDG change request id and the business partner id. In NetWeaver BPM two Trigger Events were created (wsdl1, wsdl2) for this purpose. The consumption side is technically realized by generating an ABAP proxy for each service and calling this proxy from a task in the SAP Business Workflow. In step 5 (figure 3) we are using the Web Service call to transfer the change request id to the SAP NetWeaver BPM context. In step 7 (figure 3) we transfer the business partner id after activation in the back end. After transferring the change request id in step 5 (figure 3) we can use it in NetWeaver BPM to generate a URL to the SAP Business Workflow log for this change request. We use this URL in our BPM process to send an e-mail notification to the original requestor with the link to the workflow log.

Generating a SAP Business Workflow log URL from a change request id in SAP MDG

https://<HOSTNAME>:<PORT>/sap/bc/webdynpro/sap/usmd_crequest_protocol2?SAP-CLIENT=<CLIENT>&SAP-LANGUAGE=EN&CREQUEST=<insert the change request number here>
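A small helper that assembles this URL pattern might look as follows. Host, port, and client are placeholders you would take from your own system configuration; only the path and query parameters come from the pattern above.

```python
# Sketch of a helper mirroring the workflow-log URL pattern above.
# Host, port, and client values are illustrative placeholders.

def workflow_log_url(host, port, client, change_request_id):
    return (
        f"https://{host}:{port}/sap/bc/webdynpro/sap/usmd_crequest_protocol2"
        f"?SAP-CLIENT={client}&SAP-LANGUAGE=EN&CREQUEST={change_request_id}"
    )

print(workflow_log_url("mdg.example.com", 44300, "100", "123"))
```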

For sending the change request id to NetWeaver BPM a task was added after the Set Initial Status of Change Request task at the beginning of the workflow (figure 9). In the object method section of the task maintenance screen (figure 10) a custom class and method are selected. The called method calls the web service proxy to transmit the change request id. To be able to select your custom class from the workflow task it must implement the interfaces BI_OBJECT, BI_PERSISTENT and IF_WORKFLOW (figure 13). You can find more information regarding ABAP OO for workflow in the references section at the bottom of the article.

Figure 9: Create business partner workflow with custom task

Figure 10: Custom task maintenance


After successful activation of the Business Partner we can send the Business Partner Id to NetWeaver BPM. A new task was created after the Set Status of Change Request close to the end of the process (figure 11). In the task maintenance (figure 12) we assign a custom method that in turn uses the web service proxy to call NetWeaver BPM.

Figure 11: Create business partner workflow with custom task

Figure 12: Custom task maintenance

Figure 13: Interface tab of the class that gets called from the custom workflow task

Test your scenario – complete process

You can now call the inbound SAP Master Data Governance web service. In turn a change request will be created. The change request id will be transmitted to NetWeaver BPM. After approval and activation of the Business Partner the Business Partner id is transmitted to NetWeaver BPM. You can use transaction sxi_monitor to view incoming and outgoing messages.

NetWeaver BPM implementation considerations

While the focus of this article is not so much on NetWeaver BPM but more on the interfaces of SAP Master Data Governance, one important aspect with regard to intermediate message events in NetWeaver BPM should be mentioned here. In NetWeaver BPM it is necessary to define a message correlation between the incoming messages from SAP Master Data Governance and the process instance in NetWeaver BPM. Any unique id (e.g. a process id) can be used for this purpose. The unique id can be sent as part of the BusinessPartnerSUITEBulkReplicateRequest_In web service call, for example in the message header. Subsequent calls from SAP Master Data Governance to NetWeaver BPM can then transmit this unique id as part of the message payload. NetWeaver BPM uses the unique id to find the right process instance for the incoming message.
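The correlation idea can be illustrated with a minimal registry sketch. This is purely conceptual: NetWeaver BPM defines correlation declaratively in the process model, and all names below are invented.

```python
# Minimal sketch of message correlation: a unique id sent with the first call
# is later used to route incoming MDG messages to the waiting BPM process
# instance. Purely illustrative; NetWeaver BPM does this declaratively.

class ProcessRegistry:
    def __init__(self):
        self._instances = {}

    def start_process(self, correlation_id):
        """Register a new process instance waiting on its correlation id."""
        self._instances[correlation_id] = {"status": "waiting"}
        return self._instances[correlation_id]

    def deliver(self, correlation_id, payload):
        """Route an incoming message to its waiting process instance."""
        instance = self._instances.get(correlation_id)
        if instance is None:
            raise KeyError(f"No process instance for id {correlation_id!r}")
        instance.update(payload, status="resumed")
        return instance

registry = ProcessRegistry()
registry.start_process("proc-42")
inst = registry.deliver("proc-42", {"change_request_id": "CR-7"})
print(inst["status"], inst["change_request_id"])  # resumed CR-7
```

Without a matching correlation id, the message cannot be delivered, which is exactly why the id must travel in the first web service call and come back in every subsequent message payload.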


This article showed how SAP Master Data Governance can be used as part of a NetWeaver BPM cross-system master data process. It also highlighted how the capabilities of the built-in governance processes can easily be extended using SAP NetWeaver BPM. The combination of SAP Master Data Governance with SAP NetWeaver BPM and SAP NetWeaver Master Data Management addresses a wide range of scenarios in the area of Enterprise Master Data Management.

Related Content

Extending SAP Master Data Governance for Material

SAP Master Data Governance provides a flexible framework for extending and enhancing the predelivered solution components. The following guides provide supplementary documentation on selected extensibility topics.


MDM Data Modeling - Common Mistakes and How to Avoid Them

There are some well-known mistakes that data architects often commit while designing their MDM data model. This blog highlights the top 11 among them and details the impact that each fault can have. By understanding these mistakes, you can avoid typical problems before they arise. All listed caveats are valid for SAP NetWeaver MDM 7.1 and are derived from real-life observations of MDM implementations.

This blog also covers the impact details for some of the mistakes highlighted in section 5.1 of the guide How to Avoid Problems with your Data Model in SAP NetWeaver MDM – Do’s and Don’ts.

Common mistakes at Repository level

The following points relate to an MDM repository that contains the overall data model, the involved data types (e.g. transactional / operational data), and the languages used.

1: Storing target system specific master data attributes (special characteristics) in an MDM main table

Often, in order to support the execution of master data creation through a guided process, a local MDM store is defined. In such a case, the data model is designed to temporarily keep all local attributes of the master data owned by a specific target system to which MDM has to syndicate the data.

This approach does not follow SAP best practice for central master data management in SAP MDM. This design implies quite intensive data maintenance – a large number of syndication, import, data update, data deletion, workflow, record check-in and check-out, matching, and search operations – which forms a significant workload on the MDM server.

Caveat: By design, the MDM server places an exclusive repository lock each time it needs to process a data access operation that changes any data in the repository. When an exclusive data lock is set, the complete repository is blocked and all other requests to this repository are put on hold until the operation that initiated the lock completes and the lock is released. If a significant number of concurrent processes try to access the data in such a main table, there is a high risk that these processes will block each other, preventing smooth business operations. This is especially critical when a background process blocks a dialog process, as a user waiting for online feedback from the system may get a timeout instead.

What to do: First of all, proper performance tests are required on such models. If the results of the performance tests do not meet the business requirements with respect to the performance of the corresponding processes, the solution and the data model need to be re-designed.

2: Main table containing no primary master data

Typically, a main table should not be established to support any external processes, a BPM workflow process for example. The object contained in the data model may be directly related to the processing of the master data, but by nature it does not contain primary master data. This main table, as well as all the referenced objects (tuples and lookup tables) related to it, would obviously be updated during data maintenance processes in the repository and therefore creates an additional set of concurrent accesses to the repository. In fact, this is similar to the mistake above, but in this case MDM is used to keep additional information required for some external process to work, which is not recommended by SAP.

Caveat: The impact is the same as described in the issue above.

What to do: Consider keeping this information either in a dedicated repository or outside the MDM server.

3: Significantly bigger data model

In general, significant size and complexity of the data model can mostly be justified by a combination of two factors: the main business requirements of a given MDM implementation, and the handling of special characteristics of master data attributes in MDM.

For example, if we take an SAP ERP system, main attributes comprise all common mandatory attributes which define an object independently of its usage in the enterprise’s subsidiaries, organizations, and locations, while special attributes comprise all other specific characteristics of master data. The main idea behind this division is that the MDM repository keeps only those attributes and characteristics which are common to more than one receiving system to which the MDM data are distributed. All other (special) attributes and characteristics are maintained locally in each receiving system following its dedicated requirements for data content, validation, and business needs.

Caveat: The approach of keeping only global data attributes in MDM repository is generally recommended by SAP. It allows keeping the MDM data model compact, performing well, simple in maintenance, and cost effective in the implementation because the data structure does not need to be too complex and it is relatively easy to ensure a smooth and reliable integration with other systems (data maintenance and distribution).

If this approach is not followed, the corresponding risks arise and should be mitigated on a project basis. In general, the larger the data model is and the more data it holds, the slower the MDM Server will be for certain repository-level operations. Examples of such operations are: loading the repository, updating the repository indexes, and archiving.

In addition, there is a technical issue when both the repository schema and the total number of lookup values are large. When an MDM client needs to send a request to the MDM Server (MDS), a communication between this client and MDS needs to be established. Depending on the client architecture / realization, initializing this communication may imply the following main technical activities:

  1. getting a connection session,
  2. getting information about the schema, and
  3. getting indexes of lookup tables.

Actions 2 and 3 can be quite time consuming depending on the size of the schema and the volume of the lookup entries. Therefore, the size of the data model may negatively influence these activities.

What to do: Avoid storing target system specific special attributes in MDM.

Common mistakes at Table level

The following points relate to the tables in a given MDM repository: the tables’ definition (types and structure), identification of redundant or unreferenced tables, and the tables’ property settings (e.g. key mapping, display fields, unique fields).

4: Hidden Tables

Typically, unused tables and tables inherited from a past MDM implementation are merely marked as Hidden instead of being removed.

Caveat: The same as point 3 above.

What to do: Remove all unused tables from the repository.

5: Non-referenced tables

In many instances, non-referenced tables (tables not referencing any other table in the repository) can be found in a repository. Non-referenced tables are stand-alone tables in terms of the integrated structure of all repository objects.

Furthermore, there are cases where such tables contain no records at all. In a finalized data model this usually means that either these tables are obsolete and need to be removed, or they contain additional “technical” information and therefore cannot be directly addressed in the business data model, or they are involved in other processes which are not directly related to the main objectives of the existing data model. Of course, there might be other reasons.

Caveat: The overall performance of the MDM system may be negatively influenced by additional data maintenance that is unnecessary for the main data-flow processing of a particular repository. CPU and memory consumption are higher than expected because of the additional load from not directly referenced data.

What to do: Review the tables and make a decision for each one. If a table is not required, delete it. If it is required and belongs to the same business object, consider integrating it more tightly into the existing model. In general, stand-alone tables are possible but not recommended, for the reasons given above.

6: Not all tables have unique fields

It has often been observed that many repository tables have no fields defined as Unique Fields (UFs). Tables with defined UFs, or combinations of them (a combined UF key), benefit from data uniqueness enforced by the uniqueness of the corresponding field values or value combinations. Data that do not satisfy the uniqueness rules are rejected at creation time. Of course, the definition of Unique Fields or unique combinations is driven by the business requirements and the types of data handled.

Caveat: Data in tables without defined UFs do not benefit from this feature.

What to do: Review all the listed tables to ensure that at least one UF is defined per table where it makes sense.
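The effect of a combined UF key can be sketched in plain Python. This is an illustrative model of the uniqueness check, not MDM code; the table and field names are assumptions.

```python
# Sketch of the uniqueness enforcement that defined Unique Fields (UFs)
# provide: a record whose UF key (single field or combined key) already
# exists is denied at creation time.

class UniqueKeyTable:
    def __init__(self, unique_fields):
        self.unique_fields = unique_fields   # single UF or a combined UF key
        self.records = []
        self._seen_keys = set()

    def create(self, record):
        key = tuple(record[f] for f in self.unique_fields)
        if key in self._seen_keys:
            return False                     # denied: duplicate UF combination
        self._seen_keys.add(key)
        self.records.append(record)
        return True

table = UniqueKeyTable(unique_fields=("plant", "material_number"))
table.create({"plant": "1000", "material_number": "M-01", "descr": "Pump"})
ok = table.create({"plant": "1000", "material_number": "M-01", "descr": "Copy"})
# ok is False: the combined key ("1000", "M-01") already exists
```
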

7: Key Mapping activation for many tables

Enabling Key Mapping unnecessarily is yet another common mistake data modelers make. Often, key mapping is enabled not just for the main tables but also for lookup tables and hierarchies.

Caveat: The usage of Key Mapping (with maintained key values) will increase the loading time of the repository, and may negatively influence the performance of update operations as well as data syndication. Key Mapping can be considered an additional text field for each object for which key values are maintained.

What to do: Below are some use cases in which Key Mapping is genuinely useful for data maintenance:

Existing key mappings are always preserved, even for destination records set to be replaced with the Replace action, as follows:

  1. if a source field is mapped to the [Remote Key] field, MDM will reject the source record rather than perform the replace; and
  2. if a source field is not mapped to the [Remote Key] field, MDM will perform the replace after first transferring the [Remote Key] values of the target replaced records to the new record.

In each case, the Replace action will not result in the loss of [Remote Key] values.
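The two Replace-action rules above can be sketched as a small decision function. This is a simplified dict-based model for illustration only, not the MDM Import Manager API; the record layout and key format are assumptions.

```python
# Sketch of the [Remote Key] preservation rules of the Replace action:
# 1) source mapped to [Remote Key] -> reject the source record;
# 2) source not mapped to [Remote Key] -> replace, after transferring the
#    target's remote keys to the new record.

def replace_record(target, source, remote_key_mapped):
    """Return (record, accepted) following the [Remote Key] rules."""
    if remote_key_mapped:
        # Rule 1: reject rather than perform the replace
        return target, False
    # Rule 2: transfer the target's [Remote Key] values first, then replace
    new_record = dict(source)
    new_record["remote_keys"] = list(target.get("remote_keys", []))
    return new_record, True

target = {"remote_keys": ["ERP:100042"], "descr": "old description"}
source = {"descr": "new description"}
merged, accepted = replace_record(target, source, remote_key_mapped=False)
# merged keeps ["ERP:100042"]; no [Remote Key] values are lost either way
```
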

Each key in the source data is represented by a value in the list of source values. When a Source Field is mapped to a Destination Field that has key mapping enabled, the MDM Import Manager loads value mapping from the table of remote keys rather than from the map, automatically mapping each Source Value to a Destination Value if the source/destination value pair exists in the key mapping (similar to the way value mappings are restored when a map file is loaded).

[Remote Key] can be used as a record matching field. Each source record will match one destination record if a key mapping exists for the source remote key, and each remote key is associated with a single destination record during the matching step. If no matching record is found and the Create option is selected, the remote key will be associated with the newly created record. All details regarding the Key Mapping functionality can be found in the MDM Reference Guides.

Also verify the particular need for the Key Mapping feature for each repository object and disable it where it is not required. Key Mapping can be activated at any time in later stages of the solution development.

Common mistakes at Field level

The following points relate to table fields and field property settings (e.g. keyword search, sort indexing, search tab), identification of unnecessary fields (using fill-rate analysis), and verification of field type usage.

8: Change Tracking activation for many fields

In most MDM implementations, the Change Tracking functionality is enabled for a significant number of fields as a quick way to make the MDM repository auditable. Even where there is an obvious business requirement for auditing, the number of enabled fields should be kept as small as possible because of the potential negative impact of change tracking on MDM Server performance.

Caveat: Enabling a significant number of fields for Change Tracking will negatively impact the performance of all operations that update data in the repository, whenever those updates trigger many change-tracking activities.

What to do: In general, it is recommended to enable as few fields for Change Tracking as possible, to reduce the negative impact on the performance of all data update operations (import and maintenance).

Unfortunately, there is no out-of-the-box alternative if a significant number of fields need to be enabled for Change Tracking. The following action plan can therefore be adopted to mitigate the risk:

  • Verify whether the number of fields enabled for Change Tracking can be reduced; for instance, fields of non-referenced tables that are enabled for change tracking are candidates for deactivation.
  • Test the performance of all essential update operations using representative data volumes; verify if the system performance meets the business requirements;
  • If the performance is not satisfactory, run similar tests to check what level of improvement can be achieved with the Change Tracking functionality deactivated; if those results are satisfactory, an alternative project-based solution should be developed.
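Why broad change tracking is costly can be illustrated with a minimal sketch: every update to a tracked field writes an extra audit entry, so update cost grows with the number of tracked fields touched. The table and field names here are assumptions, not MDM internals.

```python
# Minimal change-tracking model: each write to a tracked field produces an
# additional audit-log entry on top of the data update itself.

import datetime

change_log = []                      # stands in for the change-tracking table
TRACKED_FIELDS = {"price", "plant", "material_group"}

def update_record(record, changes, user="import"):
    for field, new_value in changes.items():
        old_value = record.get(field)
        if field in TRACKED_FIELDS and old_value != new_value:
            change_log.append({      # extra write per tracked changed field
                "field": field, "old": old_value, "new": new_value,
                "user": user,
                "ts": datetime.datetime.now(datetime.timezone.utc),
            })
        record[field] = new_value
```

The more fields appear in `TRACKED_FIELDS`, the more audit writes each import or maintenance operation triggers, which is the overhead the recommendation above aims to minimize.
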

9: Enabling Sort Index for many fields

Often the Sort Index attribute is switched on for a large number of fields. It is always good to understand whether there is a definite requirement to sort values in all of these fields in the MDM Data Manager, or whether there are other reasons why the attribute has to be enabled.

Caveat: If the Sort Index attribute is activated for a field, the system needs to maintain an additional dedicated index. This increases CPU and memory consumption on the MDM Server side and slows down operations such as repository load, as well as any operation that implies a data update (change), because the corresponding indexes have to be updated as well. At the same time, there is a positive side effect: searches on fields with sorted indexes perform better.

What to do: Double-check whether each field really requires this attribute and disable it wherever it is not needed. The attribute is an additional technical definition of a field and can be enabled or disabled at any time. If business users later require this function, the attribute can be activated in the productive environment, with the only limitation that the repository must be re-loaded with the indices-update option.

It is also worth noting that all Display fields are already sorted, so there is usually no need to enable the Sort Index attribute for them in addition. Weigh the following capabilities of the Sort Index attribute when deciding whether to activate it:

  • the Sort Index field property makes a field sortable in the Records grid of the MDM Data Manager and through the API;
  • it accelerates free-form search for the equals and starts with Text operators, and for the =, <, <=, >, and >= numeric operators;
  • it also greatly improves matching speed on Equals matches in the Data Manager’s Matching mode.

Your decision should ultimately be based on all the pros and cons of using this property.
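The trade-off can be sketched with a plain sorted list: equality, prefix, and range searches become binary searches instead of full scans, but the index must be maintained on every data change. This is a generic Python illustration, not MDM server code.

```python
# Sketch of a sorted index: fast "equals" / "starts with" lookups via binary
# search, at the cost of extra work on every insert/update.

import bisect

class SortedIndex:
    def __init__(self, values):
        self.index = sorted(values)          # built on repository load

    def insert(self, value):
        bisect.insort(self.index, value)     # extra cost on each data change

    def equals(self, value):
        i = bisect.bisect_left(self.index, value)
        return i < len(self.index) and self.index[i] == value

    def starts_with(self, prefix):
        i = bisect.bisect_left(self.index, prefix)
        return [v for v in self.index[i:] if v.startswith(prefix)]

idx = SortedIndex(["pump", "valve", "pipe"])
# idx.equals("pump") is True; idx.starts_with("pi") returns ["pipe"]
```
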

10: Suspicious field definitions

Sample 1: Flat table with a field of datatype Text (255)

Sample 2: Flat table with a field of datatype Text Large

In many such cases, another field type, or the same type with a shorter length, could be chosen, or the field could be made multi-valued.

Impact: To ensure maximum MDM Server performance and minimum memory and CPU utilization, the data model should be as compact as possible. If this approach is not followed, all the corresponding risks apply.

Recommendations: Verify whether there are real business and technical requirements behind such a definition for each field, and in each case select the minimum required size (and field type).
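A quick fill-rate and length analysis over exported field values can show whether a Text (255) or Text Large definition is oversized. The following generic sketch is an assumption-laden illustration (sample data and thresholds are invented), not an MDM tool.

```python
# Scan a field's values and report the observed maximum length and fill
# rate; a Text (255) field whose longest value is short is a candidate
# for a smaller definition.

def analyze_field(values):
    non_empty = [v for v in values if v]
    max_len = max((len(v) for v in non_empty), default=0)
    fill_rate = len(non_empty) / len(values) if values else 0.0
    return {"max_len": max_len, "fill_rate": fill_rate}

stats = analyze_field(["Pump, centrifugal", "", "Valve"])
# stats["max_len"] == 17: far below 255, so the field is oversized here
```
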

11: Small cardinality values for fields

In this context, cardinality means the percentage of distinct values in a field. Where a field's cardinality is very small (far below the pre-defined threshold of 50%), the field could be changed from a normal text type to a lookup.

Impact: Many factors make the MDM Server perform well for a repository it manages; one of them is the size of the repository. Usually, the optimal approach is to define a few broadly used values in a lookup table instead of repeating those values thousands of times in a table field.
Furthermore, arranging frequently repeated values in a lookup table generally improves the performance of related read-type operations (searches, mapping, etc.) and provides additional opportunities to ensure better data quality.

Recommendations: Examine the fields with small cardinality and convert them from a normal text type to a lookup.
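The cardinality check itself is simple to express. The sketch below computes the distinct-value percentage for a field and flags lookup candidates; the sample data and the 50% threshold (taken from the text above) are illustrative.

```python
# Cardinality as the percentage of distinct values in a field; fields far
# below the threshold are candidates for conversion to a lookup table.

def cardinality(values):
    values = [v for v in values if v is not None]
    if not values:
        return 0.0
    return 100.0 * len(set(values)) / len(values)

colors = ["red", "blue", "red", "red", "blue", "green"] * 1000
card = cardinality(colors)          # 3 distinct values among 6000 rows
should_be_lookup = card < 50.0      # far below the 50% threshold
```
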

As said earlier, this list is not complete in any way, but it tries to highlight the most common mistakes.


MDM lifecycle approach

I am not introducing new terminology; most of us are well versed in a typical implementation life cycle, and SAP's ASAP (Accelerated SAP) methodology is certainly one of the best methods for any SAP implementation.

Let's consider the big picture of an MDM implementation life cycle, which blends business information integration (BII) and its overall quality, and also requires industry experts along with people handling master data operations. Follow the link for more on the MDM consulting approach.

Mapping the five phases of the ASAP methodology below gives a typical MDM implementation life cycle. I have tried to collect some universal steps, but they may vary according to your needs.

The picture is not yet complete: after these phases come many transition strategies and impacts, and a variety of change management, since MDM is a joint effort between business and IT. As a result, the golden rule is crystal clear: "MDM is an ongoing activity, not a one-time project". Again, thanks to SAP for coming up with the MDG approach, which is integrated with the SAP Business Suite applications and provides the "Nirvana" state for MDM landscapes through flexible governance along with some customized applications.

