Benefit from SAP NetWeaver MDM global data synchronization - GDS 2.1

SAP NetWeaver Master Data Management (MDM) global data synchronization - GDS 2.1 has successfully completed the Ramp-Up phase and is now generally available for customers.

Highlights of the new release include:

  • Additional connection to SA2Worldsync data pool
  • Ability to exchange price in the GS1 format
  • Enable the exchange of catalogue selections via peer-to-peer
  • Automated initial publication of trade items
  • Display of images assigned to trade items
  • Propagate data through a hierarchy based upon material number
  • Limit user visibility by target market
  • Create trade items for multiple target markets

For more information about global data synchronization, see the SAP NetWeaver MDM global data synchronization site.

An Eye Opener for Transporting MDM Repositories

It's been a long time since I last wrote a blog on SAP MDM. With SAP MDM being part of EIM, I was busy working in SAP BW, and there were some other personal reasons as well. Today I am going to talk about the transport of MDM objects, especially MDM repositories, across different environments (Development, Quality, and Production).
Once a developer is done with the repository design, the maps, and all the other artifacts, the repository needs to be transported to the Quality/Production environment. What is the right approach?
I would always suggest moving the repository schema with no records, or using Export/Import Repository Schema, first to the Quality environment and then finally to Production.
It is surprising how often people ask how to move the values of all lookup tables (LUTs) to a new environment without the main table records. So what do they usually do? They take the archive .a2a file of the Development repository, unarchive it in the Quality environment, keep the LUT records, and manually delete the main table data for this repository, which often causes a lot of issues.

Potential issues:

1) You have kept the same archive file of the Development repository with all its records. After deleting the main table records using the MDM Data Manager, those records may still exist in the database, because the database also keeps track of MDM records using an internal ID key. So even though you have deleted the records in the MDM Data Manager, the database may still cause issues when you create records in the Data Manager, such as "The requested record could not be found". After deleting main table records you should ideally load the repository with Update Indices to refresh the record links in the database as well, but even after doing so there is still a chance that you will face other issues.
2) Because you transported the LUT values along with the repository, you tend to skip integrating the ECC system with MDM for populating the reference table values, which is not correct. If you later add values to lookup tables in a particular ECC environment, they will not be reflected in MDM, because you took the LUT values from the ECC Development environment only, not from Quality; it becomes an even bigger threat if you do not stick to the basics and forget the same thing for the Production environment. You should configure MDMGX extraction for each environment to populate the reference table values in MDM from ECC for smooth business operations.
3) In each environment your remote system might have a different name and different LUT key values, so sometimes main table records cannot be saved along with their LUT values.
For example: In the lookup table Countries, the country India has the key IND in the ECC Development environment with the remote system ECC DEV. It may or may not be different in the ECC Quality environment. If the key for India is maintained as IN in the Quality ECC, importing will lead to data discrepancies. You can then face issues such as "Error 5611520 - Error Saving Key Mapping" and "Key mapping value must be unique. You cannot overwrite key".
4) If you have an Auto ID field in the main table of the repository, you need some alternative to make the Auto ID start counting from 0 (zero) again, because if you delete the existing records from an MDM repository that you took as a complete .a2a file from the Development environment, the counter does not restart from 0 (zero) automatically.
There may be other issues as well. So I would suggest always transporting a repository from one environment to another with no records, either as a schema-only .a2a file or with the help of Export/Import Repository Schema, in order to achieve a smooth implementation. One should not bother transporting all the lookup table values from one environment to another inside MDM repositories, as it can lead to data discrepancies.
Note: All of the above are my personal views; I cannot furnish strong evidence to support them, nor list every other issue you might face if you transport an MDM repository .a2a file from one environment to another with complete main table and lookup table records and then later delete the main table records in order to work on the repository.

Update on MDM Info Collector

MDM Info Collector Updates

After the initial version of the MDM Info Collector (Klaus David blogged about it), which proved quite successful in message handling by delivering the contextual information required for thorough analysis and streamlined processing, the tool now comes with additional features.

The MDM Info Collector creates a comprehensive snapshot (zip file) of MDM system info that can be used later by SAP Support for offline failure analysis. This accelerates the analysis and resolution of the reported failure.

What's New in Version 2.0 of MDM Info Collector

New features include:

  • Remote execution of the MDM Info Collector tool by "SAP MMC" (uses "Server Snapshot" functionality)
    If you do not have access to the MDM Server machine (e.g. the machine is managed by the IT department) but still need to provide the MDM system info to SAP Support, you can use remote execution.
    From your local Windows PC, run SAP MMC to collect info from a remote UNIX/Windows MDM machine, then download the snapshot (containing the info package) to your local machine. This requires the new SAP framework (720 patch 90) and an updated MDM profile (available in MDM 7.1 SP7 Patch 7).
  • Also supports the Master Data Import Server (MDIS) and the Master Data Syndication Server (MDSS), in addition to the Master Data Server (MDS)
  • Collects much more information than before
    This additional information shortens the analysis time of customer messages by SAP Support
  • Supports SSL
    As of MDM 7.1 SP7

For more info, see the how-to information (PDF) that is attached to SAP Note  1522125.

Hope this info is useful for you.

SAP Customers Talk MDM: Part Two - SentrySafe

General Information About the Blog Series

In this SCN blog series, we'd like to let you hear the customer's voice directly when it comes to mastering master data in the enterprise. The blogs include video statements from SAP MDM customers plus additional information on their MDM initiatives.

The Customer's Voice: SentrySafe

Part Two features Sentry Group, a privately held manufacturer of fireproof files and safes. The company uses SAP NetWeaver Master Data Management to get a single view of their product data.

Click the image to start the video and listen to Gregg Griebel, ERP Program Manager at Sentry Group.

More Information on SentrySafe's MDM Initiative

Type of MDM Initiative
Consolidation of Product Data to provide one single source of truth
Aggregate and consolidate product information from various sources into a central MDM repository. Feed the SAP ERP back end with the consolidated product information required for streamlined transactional processing.

Stay tuned for the next session.

Regards,

SAP Inside Track 2011: EIM sessions

SAP Inside Track 2011 Midwest is jointly held in two locations: St. Louis and Chicago. I'll be speaking at the Chicago meeting. Hey, did I mention that it's FREE?

Of course, my topic is Information Governance. I'm calling the session "Faster and Cheaper Information Governance. Seriously." If implementing information governance sounds like a 5-year plan involving more meetings than you can stand, then this is the session for you. Whether you are just getting started in your governance initiatives, or whether your company is struggling to reach *success* with governance, we can help. In this session, we'll discuss common pitfalls of information programs. Most of our time, however, will be spent on how to start small to ensure future success. In this session, you'll learn these skills:

  • identify good target projects
  • shape lean teams to accomplish the work
  • identify which technologies will accelerate your project
  • prove the value of the initiative to your organization

The stellar SAP Mentor Ginger Gatling will also be there. That alone is worth a trip. She wrote the book on workflow, you know. :-) Ginger has three sessions, spanning workflow, data migration, and EIM in support of workflow.

Understanding SAP Business Workflow and how it fits in with SAP's BPM Strategy: Sharpen your knowledge of SAP Business Workflow, what it can and can’t do, and how it fits into your SAP architecture. Find out how other companies are leveraging workflows to improve and streamline processes, and the business drivers that make SAP Business Workflow a critical piece of the SAP infrastructure.

Comprehensive Guide to Data Migration: This session takes a deep dive into SAP BusinessObjects Data Services and offers best practices for using both the integration and data quality management capabilities provided in SAP BusinessObjects Data Services. Get step-by-step instruction for getting started with SAP BusinessObjects Data Services, such as leveraging the pre-delivered migration content it provides and understanding the various connectivity options. Walk through a demonstration of SAP BusinessObjects Data Services migration content for SAP ERP and SAP CRM data migrations, showing the connectivity to SAP configuration tables for data validation and how to map common structures such as customer basic data. Explore the jobs, workflows, and transformation capabilities within SAP BusinessObjects Data Services, and learn how to ensure SAP application configuration is referenced for data validation. Discover the major data quality capabilities that can ensure your data migration project enables data management and data governance. Learn the pitfalls to avoid when using SAP BusinessObjects Data Services for data migration, such as not taking into consideration the customizations (such as Z tables) within your target SAP systems. Take home links to the data migration content within SAP BusinessObjects Data Services.

Enterprise Information Management and how it relates to business processing: This session will provide a "What is Enterprise Information Management?" overview and then discuss what this means for the SAP Business Suite, business processing, and business process experts. The goal of this session is to provide insight into Enterprise Information Management, including what it is, why it's important to business, how it fits into SAP's strategy, and where you can go to get more information.

Tammy Powlas will also be there. Join us at the SAP Inside Track 2011 Midwest meeting.

A bucket of smart ideas - Master Data Governance Framework for Enterprises

In May 2011, the SAP Master Data Governance solution, aka the embedded MDM solution, became generally available. Some brief information about the value proposition of this framework for your organisation can be read here.

Before I dive into describing some of the features of MDG that I find really useful, let me describe the main need that this solution targets.

The need for an embedded MDM solution (embedded on the ECC system): Managing a solution built on the same or a similar system where the master data resides is highly desirable from an application architecture point of view. It reduces the complexity of the solution to be built, which also helps reduce long-term maintenance cost, and it largely reduces the footprint of the application. Reusing components (such as out-of-the-box validations) also helps create uniformity between what the user experiences when creating data conventionally and when creating it via the MDG change request framework. This in turn makes change management activities, for example end-user training, easier to handle.

In my view, some of the smart features in the MDG application framework that can help you design an optimal Master Data Governance solution are:

1. Change request based processes: These provide a uniform user experience for different data domains such as suppliers, materials, etc. The framework also exposes historical data changes in the form of change documents (both from within change requests and outside of them).

2. Flexible processes: The processes (change requests for create, update, etc.) can be designed on a generic, out-of-the-box business workflow. In broader terms, this workflow solution has two components: 1. the business workflow and 2. a BRFplus application. Tied together, these two create a basic but effective BPM environment in which, via quick and easy configuration, a process can be set up or modified to meet business requirements. This helps expedite application building and also reduces process maintenance turnaround times.

3. Data model & related UI views and validations : The MDG solutions need a data model which mirrors the relational model of the master data in question for creation of change request UI, staging area (explained below) and to create validations (both out of the box or custom). The smart thing here is that the application inherits the validations related to master data directly into the change requests as well gives the end user option to code validations via BADI's or though BRFPlus rule sets.The framework also has the capability where data duplication can be prevented via configuration.

4. Staging area: Work-in-progress data is not stored in the master data tables but in a temporary (staging) database until it is completely approved by the responsible user. This helps control the availability of data in transactions.

5. Smart UI: The UI for the embedded MDM solution is primarily ABAP Web Dynpro based and can easily be configured using FPM configurations. The resulting views can be the same as the conventional master data maintenance transactions (SAP GUI) or can be modified as per business requirements.

6. Integration with Enterprise Search: Provides easy information search options.

7. EHP5-added functionality such as the BCV (Business Context Viewer): The BCV can be used as a side panel add-on in the NetWeaver Business Client (NWBC) to link key master data elements to other relevant data and view them in the different available options. For example, data relevant to a given material can be obtained from a PLM system or a CRM system. With this, information integration becomes seamless and the representation of data is better.

8. Deployment options: MDG can run standalone on one system (ECC 6.0 with EHP5) or on top of an existing ECC 6.0 system by updating it with EHP5. The MDG system can also integrate with other non-SAP systems to create one enterprise solution. More about this can be read here.

With these features, SAP has given customers the option to create solutions that are robust and do not compromise on data quality, but at the same time are flexible and have a low cost of development and maintenance.

One SAP for Data Quality: Building an Operational Data Management Organization

Lessons from SAP

Launch of CDM

Customer Data Management (CDM) came into sharp focus at SAP when the data management topic was launched as a board-sponsored program in 2008. To support the CDM program, a cross line-of-business leadership team was established and a multi-year business case and plan was developed. CDM's overall purpose was to address the highest priority customer master data issues. Improvement opportunities existed from both short- and long-term perspectives. The team set about fixing the immediate issues, such as getting the "basics" in place. In addition, long-term solutions included the design and build of an operational customer data management solution. The successful program spanned a year with a staff of six full-time resources supported by an extended team of subject matter experts from the business and IT.

CDM program scope and focus

Very quickly the CDM team realized that, to be successful, scope definition and control were required, as well as a sharp focus on the most important customer master data issues. It was apparent that customer data had not received attention over the years; without governance and quality management, many data challenges existed. The CDM team explored the issues in detail, from an outside-in perspective, and found that 20% of the master data elements were delivering the most value to SAP. That input was used to narrow the scope and evolved into a guiding principle to "get those right". To address the longer-term root-cause information governance issues, master data accountability and standard definitions were included in scope.

CDM approach overview

Given the CDM scope and the broad nature of the requirements, the program methodology reflected a phased rollout of global data standards which were tested via a pilot and then rolled out to a broader audience. 

Country pilots of the global data standards and ownership model (for the most critical elements of customer master data) were successful in delivering business benefits, particularly in the sales and marketing areas. Data quality reporting and cleansing services leveraged SAP's EIM suite and clearly demonstrated that the global CDM approach would successfully operate within a regional model.

The pilots also served to validate the design of the future-state data organization and transition plan, which reflected a phased rollout to other areas and lines of business within SAP. For example, the CDM program leadership evolved to form a Global Data Council, consisting of the newly appointed data leads within each line of business, many of whom were directly involved in the pilot launch.

Part of the operational plan also included ongoing data cleansing and resolving the priority customer master data issues. That scope ran parallel to the establishment of the data management organization. The tactical data quality cleansing enabled the program to stay on track and deliver business benefits in the shorter term while building the foundation for longer-term data management solutions.

The successful pilots and data quality improvements enabled the longer term CDM vision to move forward.  Upon program closure in Q1 2010, the team executed a smooth 4 month transition to the operational  “run” state.

SAP's current CDM organization

Fast forward to today: the transition is complete, with Maria now leading SAP's operational global data management team. Maria wears "two hats", leading both global data management across the lines of business and detailed data management within the global field organization. She is responsible for driving both the global master data strategy and the sales line-of-business data strategy. There is also close collaboration with the business process owners and a continuation of the data standards and governance established by the CDM program.

Cross LOB governance model

The global data management organization extends to all lines of business (LOB) which have identified data leads who are tasked with driving tactical and strategic data programs.  The LOBs are also responsible for delivering business process engineering and ensuring information governance and accountability is enforced throughout their organization. 

To support the cross-LOB collaboration, a Global Data Council was established as a vehicle for the business leads, along with IT and others, to work together to address common data issues and foster cross-LOB alignment. For example, the Council handles strategic functions such as defining information governance requirements, as well as having direct responsibility for business rollout and adoption. The Council also prioritizes and manages the IT portfolio for data tool and technology changes, providing one business voice to IT for customer master data related IT projects.

The Council is supported by a cross-LOB Executive Data Steering Committee which provides oversight and decision support.  Executive engagement is a critical success factor for the master data management approach as their attention to the topic results in budget and resource allocation to the data programs.  

Regional data centers will support process execution

One additional operational component established this year is the development of a Regional Data Management Center model.  Each region now has a team of skilled data resources who focus on delivering improvements to core master data such as enriching customer records, creating master data and updating account assignments.  The regional centers currently support marketing and sales data requirements but will soon expand to support other LOBs such as Finance and Customer Support.

These centers not only provide data services but are also engaged in driving quality process improvements via best practices and tools. The DMCs have the master data knowledge and are best positioned to deliver best-practice solutions. For example, business partner creation processes can be developed and tested by the global data team and, once proven, enabled via the regional data operations teams. The global data management team, under Maria, provides the DMCs with common global tools, processes, and resources, and has established global KPIs by which to measure the DMCs' effectiveness.

2011 areas of improvement

SAP’s global data quality program is measured on both specific KPIs such as reducing duplicates but also process improvements such as reduced data creation time.   KPIs are tracked with data quality reports which use EIM tools such as SAP BusinessObjects Explorer and Dashboard Design (formerly Xcelsius).  These online reports measure both regional and global data improvements and are tailored for the specific audience (e.g. executives vs. data managers) with drill down capabilities. 

Process improvements contribute to the overall data management business case in a significant way, particularly in areas that impact the bottom line, such as days sales outstanding. These metrics are tracked by the global data team and are reported to the executives through dashboards and scorecards.

Tracking of these improvements is formalized into data quality readouts to the Executive Data Steering committee.  We are now also tracking specific line of business contributions via a holistic scorecard that monitors each line of business’ support of the data portfolio and helps provide a complete picture of all contributions to master data quality.  

Data management technology vision

Technology and tools are critical enablers of SAP's data strategy and will allow us to scale more quickly and address additional master data requirements. The vision, or roadmap, centers on a partnership with IT to deliver solutions that align with best-in-class technology capabilities. For example, the roadmap reflects the use of tools to centralize workflow and business rules for easier management and control. Data models and management tools will help us diagnose and automate changes as needed for more active governance. Analytics and dashboard tools are required for transparency across all of the processes. Thus, a significant amount of funding is allocated to a multiyear portfolio of IT projects to deliver the information management tools required by the business and prioritized by the Global Data Council. We use SAP technology from the EIM product portfolio, as well as the SAP Business Suite. SAP uses SAP! We partner with the product development team to provide our feedback on future and current products.

EIM success factors

While information governance programs are tailored to a company's information maturity, pain points, and culture, there are many common success factors. First, the company has to be ready for a top-down enterprise governance approach where all functions participate, collaborate, and play a part. Business sponsorship is also key; the higher in the organization the better, as roadblocks will be encountered that will need to be resolved. Funding is another EIM success factor. Perhaps even more important is having the right visibility on the required data capabilities, along with business sponsorship and readiness for successful change management. Data tool delivery goes hand in hand with having an organization in place to best leverage the automation, with the right people and skills.

What does great data management look like?

Through the journey from a focused program to creating a data management business capability, we have defined a few key practices for "great" data management at SAP. It is not just about creating roles and building work teams; data management must become an integral part of the organization, with cross-business decision-making forums that actively engage in driving accountability. The organization must ensure the data programs enable business goals and don't exist just for the sake of creating "good" data. Great data management is embedded as a core part of the business and IT processes, so that whenever data is created, changed, or updated, information governance is clearly defined and easy to follow. This requires engagement from business leaders and IT leaders, with simple language and clear accountability. Data quality is not just an organizational practice but extends to trusted sources of data, as those sources should adhere to the same standards. These lessons learned have helped shape SAP's data management organization; our priorities reflect a desire to achieve high levels of data confidence across the company.

Support for SAP NetWeaver 7.30

Support for SAP NetWeaver 7.30 has been qualified for the following MDM components:

● MDM Web Dynpro Components

● MDM Portal iViews

● MDM Portal Content (Product and Business Partners)

● MDM Web Services (design time and runtime)

● MDM Enrichment Controller

● MDM PI Adapter

● MDM Java API

● MDM ABAP API

● Collaborative Processes for Material Master Data Creation

For more information, see the SAP NetWeaver MDM 7.1 Master Guide on SAP Service Marketplace at http://service.sap.com/installmdm71.

What's New in SAP NetWeaver MDM 7.1 - Release Notes

SAP NetWeaver MDM 7.1 SP07

SAP NetWeaver MDM 7.1 SP06

SAP NetWeaver MDM 7.1 SP05

SAP NetWeaver MDM 7.1 SP04

SAP NetWeaver MDM 7.1 SP00-SP03

Table Types

A traditional SQL DBMS stores data in the records and fields (rows and columns) of a collection of flat database tables. All tables have the same rectangular structure in SQL. A SQL database is relational because of the relationships set up between the different tables.

In a relational DBMS (RDBMS), information about a single record can be combined from multiple tables by relating values in matching columns. This helps to eliminate redundant data; beyond that, however, an RDBMS does not support any additional structuring of the data itself.

By contrast, the MDM system supports a variety of different table types that are specifically suited for the particular requirements of storing, organizing, structuring, classifying, managing, and publishing information in an MDM repository (including efficient support for category-specific attributes, which are inherently non-relational), as shown in the following table.

Table Type

Description

Main table and subtables

Flat

Main table or subtable. A flat table has the standard, rectangular SQL structure consisting of records and fields (rows and columns). The main table of an MDM repository is always a flat table.

Hierarchy

Subtable. A hierarchy table organizes information in a hierarchy, where each record is related to a parent record (even if the only parent is the root) and may also be related to sibling records and/or child records. The main table in an MDM repository typically contains some fields whose data may be hierarchical in nature. For example, a Manufacturer field may need to accommodate division and subdivision information for manufacturers. This hierarchical information is stored in a separate, hierarchy subtable associated with the Manufacturer lookup field in the main table. Most of the hierarchy tables used in an MDM repository contain lookup information for fields in the main table. Other hierarchy tables in MDM include taxonomy tables, the Masks table, and the Families table, described below. MDM supports hierarchies with an unlimited number of parent/child levels.

Note that a hierarchy table is useful even when it is flat (i.e. only leaf nodes below the root), because it stores the ordered sequence of sibling records, allowing you to override the unordered sequence of values in a flat table and instead put the values in a fixed order.

Taxonomy

Subtable. A taxonomy is the classification scheme that defines the categories and subcategories that apply to a collection of records. Categorizing records enables you to isolate subsets of records for various organizing, searching, editing and publishing purposes.

A taxonomy table in MDM stores a hierarchy of categories and subcategories and also supports attributes, “subfields” that apply to particular categories rather than to the entire collection of records. MDM supports multiple simultaneous taxonomies.

Qualified

Subtable. A qualified table in MDM stores a set of lookup records, and also supports qualifiers, “subfields” that apply not to the qualified table record by itself, but rather to each association of a qualified table record with a main table record. MDM supports multiple simultaneous qualified tables.

Qualified tables can be used to support product applications and application-based search, and also to store any large set of subtable records that contain fields whose values are different for each main table record, such as multiple prices for different quantities, divisions, regions, or trading partners, cross-reference part numbers, and additional distributor/supplier/customer-specific information for different distributors, suppliers, or customers.

Object tables

Images

A single table named Images. Stores image files, where each image is stored as a record in the table.

Text Blocks

A single table named Text Blocks. Stores blocks of text, where each text block is stored as a record in the table.

Copy Blocks

A single table named Copy Blocks. Stores blocks of text interpreted as copy, where each text block is stored as a record in the table.

Text HTMLs

A single table named Text HTMLs. Stores blocks of text interpreted as HTML, where each text block is stored as a record in the table.

PDFs

A single table named PDFs. Stores PDF files, where each PDF is stored as a record in the table.

Sounds

A single table named Sounds. Stores sound files, where each sound file is stored as a record in the table.

Videos

A single table named Videos. Stores video files, where each video file is stored as a record in the table.

Binary Objects

A single table named Binary Objects. Stores other binary object files, where each binary object file is stored as a record in the table.

Special tables

Masks

A single hierarchy table named Masks. In concept, a mask acts like a stencil, in that it blocks (“masks”) all main table records from view except the defined subset of records that are included in the mask, to allow the subset to be viewed and manipulated as a whole. A mask is a static snapshot of the set of records that are included in the mask (as opposed to a view or a named search, where the results set is determined dynamically every time the search is run). Each record in the Masks table is the name of a subset of main table records. MDM supports an unlimited hierarchy of masks.

Named Searches

A single flat table named Named Searches. A named search is a static snapshot of the search selections that were in effect when the named search was saved (as opposed to a mask, which is a snapshot of the subset of records), where the results set itself is determined dynamically when it is selected. Each record in the Named Searches table returns a subset of main table records. MDM supports 400 named searches per repository.

Families

A single hierarchy table named Families. Used to further partition main table records in each category into smaller groups based upon the values of other fields and/or attributes. You can associate family data (a paragraph, an image, bullets) once with a family of products rather than with each individual product, and also define the table layout of the field and/or attribute data (field order; stack, vertical, and horizontal pivots; and other display options). This table is available only in Family mode.

Image Variants

(Does not appear anywhere in the MDM Client)

A single table named Image Variants. Used to define the structure and format of each of the variants for each image. Each variant is a modified version derived from an original image; the original image is never modified. This table is managed in the MDM Console and is not visible in the MDM Client.

Relationships

(Does not appear anywhere in the MDM Client)

A single table named Relationships. Used to define each of the different record-level relationships. Each relationship can be either bidirectional (sibling) or unidirectional (parent-child). This table is managed in the MDM Console and is not visible in the MDM Client, although the relationships between records can themselves be created and edited in Record mode.

Workflows

A single table named Workflows. Stores the workflows of an MDM repository, where each workflow is stored as a record in the table. Workflows are created and edited in the MDM Client.

Data Groups

A single hierarchy table named Data Groups. Stores the hierarchy of data groups used to break the entire set of objects in the MDM repository into manageable subgroups.

Validation Groups

A single hierarchy table named Validation Groups. Stores the hierarchy of validation groups used to organize multiple validations for subsequent execution as a group.

System tables

Roles

(Does not appear anywhere in the MDM Client)

A single table named Roles. One of three tables used to implement MDM repository security and access control. Each role can selectively grant or deny access to any MDM function and to any table or field. This table is managed in the MDM Console.

Users

(Does not appear anywhere in the MDM Client)

A single table named Users. One of three tables used to implement MDM repository security and access control. Each user can have one or more roles. This table is managed in the MDM Console.

Logins

(Does not appear anywhere in the MDM Client)

A single table named Logins. One of three tables used to implement MDM repository security and access control. Contains an entry for each currently connected MDM client application, which can be terminated by the MDM Console user.

Change Tracking

(Does not appear anywhere in the MDM Client)

A single table named Change Tracking. Allows you to specify the fields for which adds, modifies, and deletes should be tracked and stored in the Change Tracking table.

Remote Systems

(Does not appear anywhere in the MDM Client)

A single table named Remote Systems. Used to define the different remote systems for import and export. Each remote system specifies whether it supports import only, export only, or both.

Ports

(Does not appear anywhere in the MDM Client)

A single table named Ports. Used to encapsulate the logistical and configuration info for inbound and outbound processing of MDM data, for consolidation and distribution respectively.

URLs

(Does not appear anywhere in the MDM Client)

A single table named URLs. Used to specify the URLs that can be used as the target of an embedded browser in the Web tab in the MDM Client.

XML Schemas

(Does not appear anywhere in the MDM Client)

A single table named XML Schemas. Used to identify the XML schemas for import and syndication. Each XML schema is the name of an .xsd file.

Reports

(Does not appear anywhere in the MDM Client)

A single table named Reports. Contains an entry for each report file generated by the various MDM repository operations, which can be accessed and viewed by the MDM Console user.

Logs

(Does not appear anywhere in the MDM Client)

A single table named Logs. Contains an entry for the log files generated by the MDM Server, which can be accessed and viewed by the MDM Console user.

MDM Repository Structure

An MDM repository consists of the following tables:

Main table

Every MDM repository has exactly one main table. The main table consists of the primary information about each main table record. For example, an MDM repository of product information would include an individual record for each product and an individual field for each piece of information that applies to all products, such as SKU, product name, product description, manufacturer, and price. Most of the time you will be looking at information in the main table.

Subtables

An MDM repository can have any number of subtables. A subtable is usually used as a lookup table to define the set of legal values to which a corresponding lookup field in the main table can be assigned; these tables hold the lookup information. For example, the main table of an MDM repository of product information may include a field called Manufacturer; the actual list of allowed manufacturer names would be contained in a subtable. Only values that exist in records of the subtable can be assigned to the value of the corresponding lookup field in the main table.

 

Lookup subtables are just one of the powerful ways that MDM enforces data integrity in an MDM repository. The set of legal values associated with lookup fields also makes the MDM repository much more searchable, since a consistent set of values is used across the entire repository.

Object tables

Object tables include the Images, Text Blocks, Copy Blocks, Text HTMLs, and PDFs tables. An object table is a special type of lookup subtable, where each object table is used to store a single type of object, such as images, text blocks, copy blocks, HTML text blocks, or PDF files. You cannot store an object directly in a main or subtable field in an MDM repository. Instead, each object is defined or imported into the repository once and then linked to a main or subtable field as a lookup into the object table of that type.

 

Object tables eliminate redundant information, since each object appears only once in the MDM repository even if it is linked to multiple records.

Special tables

Special tables include the Masks, Families, Image Variants, Relationships, Workflows, Data Groups, and Validation Groups tables.

System tables

System tables appear under the Admin node in the Console Hierarchy and include the Roles, Users, Logins, Change Tracking, Remote Systems, XML Schemas, Reports, and Logs tables.


MDM Modes

The MDM Client operates in five modes. Each mode is designed for manipulating specific types of tables and repository information, as follows:

Record mode

Allows you to search, view and edit the records of any table in the MDM repository. This is the mode you will use most often, primarily to view and edit records in the main table, but also to view and edit records in any of the subtables.

Hierarchy mode

Allows you to view and edit the hierarchy tables in the MDM repository, including regular hierarchy tables, taxonomy tables, and the Masks table. Though you can also view and edit the records of a hierarchy table in Record mode, Hierarchy mode specifically allows you to edit the parent/child relationships and the sibling ordering of the hierarchy.

Taxonomy mode

Allows you to view and edit the taxonomy tables in the MDM repository. You will use this mode to create and maintain the category hierarchy used in the repository, and to manage the attributes associated with each category and subcategory. Though you can also view and edit taxonomy tables in both Record mode (for searching) and Hierarchy mode (for editing the other fields of information associated with each category), Taxonomy mode is unique in that instead of focusing on the records of the taxonomy table, it allows you to create and manage the pool of attributes associated with the taxonomy table, and to assign attributes to categories on a category-by-category basis.

Matching mode

Allows you to identify and eliminate duplicate records within an MDM repository. When you view the main table in Matching mode, MDM allows you to perform “matching-and-merging” on and against any or all of its records, using various user-defined criteria to decide whether or not records are potential duplicates.

Family mode

Allows you to view and edit the Families table, which layers a hierarchy of families upon the taxonomy hierarchy to further break down each category into smaller groups of main table records. Use this mode to partition the categories of the taxonomy hierarchy by the values of other fields and/or attributes, and then to associate family data (such as an image, a paragraph, and bullets) once with each family of main table records rather than each individual record.

Testing and Monitoring an Interface Between MDM & XI Part 2

clip_image001

· Select a message and press Display.

clip_image002

You may notice that I have selected a message that contains an error and did not actually reach its destination. In Call Adapter -> SOAP Header take a look at Error. If you double-click that button, a screen will appear on the right-hand side that shows the details of the error.
clip_image003
This error tells us that something is wrong with the IDoc Adapter. It tells us that transaction IDX1 contains errors, but in this case the error is actually in the configuration of our communication channel, in which we have made reference to the wrong Port. If you select Call Adapter -> Payloads you can see the content of the XML message that came from MDM.
clip_image004
If you go back to SXMB_MONI, you may also want to take a look at the Processing Statistics program, which shows a good overview that can be helpful when testing your interface with thousands of materials.
clip_image005

3. Testing

Now we're going to go ahead and test the interface from end to end. I'm assuming that by now you have turned on the MDM Syndication Server and your XI interface is activated in the Integration Directory. Let's log into the MDM Data Manager and create a new material for testing purposes.

· Right click -> Add

clip_image006

· Enter enough data to satisfy your interface requirements (i.e. which fields must be populated?)

clip_image007

· Click on another material to save changes

· Close the MDM Data Manager

· Turn on your MDM Syndication Server (if it's not already turned on)

If your Syndication Server settings have been configured correctly, then we can assume that because you added a new material in the Data Manager, it will be syndicated as soon as your interval cycles through (set in the mdss.ini file on your server). Let's go ahead and move over to the Exchange Infrastructure Runtime Workbench to see if it has processed our message. Keep in mind that, depending on your interval time, it may take a few minutes. You should hopefully see something like this:
clip_image008
If the Runtime Workbench shows that the message transferred successfully, then let's log into ECC and see if the IDoc was posted.

· Log into ECC system

· Run transaction WE02

clip_image009

· Press F8

· In the left hand pane, select Inbound IDocs -> MATMAS

clip_image010

· In the right hand pane, select the IDoc that just transferred and double click on it

· In the IDoc display, on the left hand side expand E1MARAM and select E1MAKTM

clip_image011

· Verify that the material data is correct

clip_image012

· Expand Status Records -> 53 and double click the only record available

clip_image013

· In the pop up window, copy the message number that was issued to the IDoc

· Press Proceed

· Paste the message number that you copied

clip_image014

· Press F8

clip_image015

You may notice that my image says that material 11696 was created. This is because a modification was made to an ABAP program to create a material when an IDoc is processed with a certain code. In this blog, the ABAP modification is out of scope, but I'm assuming that if you are familiar with ALE then this process should be familiar as well. In any case, this is not a permanent solution, just a temporary one to finish our prototype. If we take that newly generated material number and run transaction MM02, we should be able to pull up the details of that material.
clip_image016
Press Select Views and select Basic Data and continue.
clip_image017
Hopefully if all went as planned, the material should have transferred smoothly, with no loss in data. This concludes the three part series on MDM and XI. Thanks for reading, hopefully it helps!

Testing and Monitoring an Interface Between MDM & XI Part 1

2. Exchange Infrastructure

Now we'll take a look at the second half of this scenario and test out our XI interface.

2.1 Check Configuration

The only configuration we are going to check is the outbound communication channel. This is what tells Exchange Infrastructure where to pick up which file (location, filename) and what to do with the file once it has been processed by the communication channel (processing mode, i.e. delete).

· Start your Integration Directory (Integration Builder: Configuration).

· Navigate to your outbound communication channel.

· Examine your File Access Parameters.

clip_image001

In my case, because this is a test scenario, I have a bash script picking up the files from the port directory and dropping them onto a drive that all of the SAP systems have access to, namely the /depot filesystem (a sketch of such a script follows after the next screenshot). As you can see, I made a temporary folder on that filesystem where the files for this interface are stored while waiting to be processed. Of course, the simplest way to do this would be to mount the port directory from your MDM machine on your XI machine. Next, take a look at your Processing Parameters and change the settings accordingly. For this particular scenario I have set the poll interval to 5 seconds for testing purposes. Also, notice that I am using delete as the processing mode. This is so that I can verify that the file was processed, and so the folder doesn't get cluttered up with files.
clip_image002
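
To make the hand-over concrete, here is a minimal sketch of the kind of transfer script I mean. The source path is the example port directory described in the MDM section of this series, and the /depot subfolder name is purely illustrative; adjust both to your installation, or skip the script entirely and mount the port directory instead.

#!/bin/bash
# Minimal sketch: move syndicated XML files from the MDM port's Ready folder
# to a staging folder on /depot that the XI file adapter polls.
# Both paths are examples only - adjust them to your own installation.
SRC=/opt/MDM/SID.WORLD_ORCL/Material/Outbound/ECC/Out_ECC/Ready
DST=/depot/mdm_to_xi

for f in "$SRC"/*.xml; do
  [ -f "$f" ] && mv "$f" "$DST"/   # XI deletes the file from the staging folder after processing
done

Scheduling this from cron at a short interval keeps the staging folder fed between the Syndication Server's runs and the file adapter's polls.
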
If everything is the way you want it, let's go ahead and take a look at some important locations that will come in handy for testing and debugging the interface.

2.2 Important Locations

2.2.1 Integration Repository - Map Testing

Start the Integration Repository (Integration Builder: Design) and navigate to the map that we built in Part II. Select the Test tab.
clip_image003
To test our map, we can actually use the XML document that MDM generated via the Syndication Server. Let's go ahead and try this.

· Press the "Load Test Instance" button.

clip_image004

· Select the XML file MDM generated.

clip_image005

· Press the "Start Transformation" button.

clip_image006

If everything went smoothly, you should see a pop-up screen that says "Executed successfully". Otherwise you will receive an error, from which you can begin your debugging process.
clip_image007

2.2.2 Runtime Workbench - Component Monitoring

The runtime workbench is one of the most powerful and useful features of Exchange Infrastructure. Here we can get very detailed descriptions of errors that may occur with each component of XI. The component that we will want to pay particular attention to is the Adapter Engine.

· Log into your runtime workbench and select Component Monitoring -> Display.

clip_image008

· Click the Adapter Engine link.

clip_image009

Here you can view the status of the adapter. If there is an error in your configuration of a particular adapter it will show up here.
clip_image010

2.2.3 Runtime Workbench - Message Monitoring

Follow a similar procedure to display the Message Monitoring.
clip_image011

· Select your time filter, in this case I will select the last hour.

· Press Start.

clip_image012

You can now see the list of messages that have been processed by the Adapter Engine over the last hour. On my system only one message has been processed in the last hour. You can press either Details or Versions to view more information about any particular message that was processed.
clip_image013

2.2.4 Integration Engine - Monitoring

This is a particularly useful component of Exchange Infrastructure that allows us to view various aspects of the messages that get processed. Let's start by logging into the XI system and taking a look.

· Run transaction SXMB_MONI.

clip_image014

· Double-click Monitor for Processed XML Messages.

· Press F8 or the Execute button.

Testing and Monitoring an Interface Between MDM & XI

Hello and welcome back to the last of a three-part series on integrating MDM with ECC via XI (sorry for the onslaught of acronyms). If you missed the first two, you can find them here: Part I, Part II. In this one we will focus on testing your scenario and on how to troubleshoot (where to look) in both MDM and XI. You may have already noticed that in the previous two parts of this series I used a sample scenario dealing with material master data and the MATMAS05 IDoc structure. Ultimately we want to generate a new material in ECC based on the creation of a material in MDM. So let's go ahead and get started.

1. MDM

First we'll start with the syndication process in MDM and make sure our settings are correct.

1.1 Check Configuration

1.1.1 Client-Side Settings

· Open the MDM Console

· Navigate to Ports in the repository in which your map is located.

clip_image001

· Verify that you have selected the correct map (built in Part I)

· Select your processing mode as Automatic

clip_image002

· Open the MDM Syndicator (Select the repository and press OK)

· Select File->Open

· Select the remote system representing ECC

· Select your map and press OK

· Select the Map Properties tab

clip_image003

· Check Suppress Unchanged Records so we automatically update only changed records.

· Close and save your map

clip_image004

1.1.2 Server-Side Settings

· Open your mdss.ini file on the MDM server

· Verify that Auto Syndication Task Enabled=True

· For testing purposes, change the Auto Syndication Task Delay (seconds) to something rather small, such as 30 or less, so you don't have to wait a long time for syndication when testing (a sample excerpt of the relevant mdss.ini entries is shown at the end of this section).

clip_image005

· Verify that the service is started.

· UNIX systems: ps -ef | grep mdss

· WINDOWS systems: open services, and look for entry regarding syndication server

· If the service is not running, run the command ./mdss (UNIX) or right-click -> Start Service (Windows)

clip_image006
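
For quick reference, the two settings we just changed appear in mdss.ini as plain key=value entries, roughly like the excerpt below. This is only a sketch: the real file contains many more entries, surrounding section headers are omitted, and the exact layout depends on your installation.

Auto Syndication Task Enabled=True
Auto Syndication Task Delay (seconds)=30

If a new interval does not seem to take effect, restart the Syndication Server (./mdss on UNIX, or restart the Windows service), since the file is typically read at startup.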

1.2 Important Locations

I'd like to go over some of the important locations (directories) on your server that will come in handy when troubleshooting and testing. One of the trickiest parts of working with MDM is figuring out where things go and where to look. Because it's so different from the SAP software that we are all used to, navigating the system is not as easy as running a transaction code. Also, MDM reacts to certain situations differently than you might expect, so it's important to know where to look when things aren't working properly. I'm working with MDM installed on HP-UX; however, I will try to address each topic as it would appear in Windows to the best of my knowledge.

1.2.1 Home

Log onto your MDM server and navigate to the home directory for the MDM application server. On the server I am working with (sandbox) it happens to be located on the opt filesystem, and the path looks like /opt/MDM. In this directory take note of several important directories:

/opt/MDM/Distributions
/opt/MDM/Logs
/opt/MDM/bin

The Distributions folder is very important because this is where the port directories get created. When you create a port in the MDM Console for a particular repository, it creates a set of folders in the Distributions directory based on which repository the port was created in and whether the port is inbound or outbound. For example, in our particular scenario we may navigate to the path /opt/MDM/Distributions/install_specific_directory/Material/Outbound/. Here we will notice a folder entitled ECC which (if you followed the first part of this series) corresponds to the port that we created earlier. This directory was created as soon as the port was created in the MDM Console. We will focus more on the contents of our port directory shortly.

The Logs folder contains several important log files; however, most of them will not apply to our particular scenario, because the logs that we will want to look at are specific to the syndication process and are located within the port directory. Nevertheless, it is worth remembering in other troubleshooting scenarios that these log files also exist.

The bin directory is critical because that is where the executables that start the application servers are located: the programs mds, mdss, and mdis.

1.2.2 Port

Your port directory is going to have the following format:
/MDM_HOME_DIRECTORY/Distributions/MDM_NAME/REPOSITORY/Outbound/REMOTE_SYSTEM/CODE/
For example, the one we created looks like this:
/opt/MDM/SID.WORLD_ORCL/Material/Outbound/ECC/Out_ECC/
Here you should see the following directories:

/Archive
/Exception
/Log
/Ready
/Status

The Archive directory is not as important during the process of syndication as it is with the process of importing data into MDM. This directory contains the processed data. For example, if you were to import an XML document containing material master data, a message would get placed in the archive directory for reference later if you ever needed to check.
The Exception directory is very important because often, when an error occurs, a file is generated in the Exceptions folder that looks similar to the file that the import server or the syndication server was attempting to import or syndicate. In other words, let's say you were attempting to import an XML document that contained material master data, but the map built in MDM has a logic error; the document will instead be passed to the Exceptions folder and the status of the port will be changed in the MDM Console to "blocked".
The Log directory is important for the obvious reason. Logs are created each time the syndication server runs. So if your interval is 30 seconds, then a log will be generated in this folder every 30 seconds. It will give you the details of the syndication process which ultimately can be critical in the troubleshooting process.
The Ready folder is the most important folder in our scenario. When the Syndication Server polls during its interval and performs the syndication, the generated XML messages appear in the Ready folder. So in our scenario, material master data will be exported to this directory, and ultimately Exchange Infrastructure is going to pick up the data and process it to ECC.
The Status directory contains XML files that hold certain information pertaining to the import / export of data. This information includes processing codes and timestamps.
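
If you are on UNIX, a couple of simple commands are handy for keeping an eye on these folders while testing. This is just a convenience sketch using the example port path above; on Windows you can browse to the same folders and open the newest log file in a text editor.

cd /opt/MDM/SID.WORLD_ORCL/Material/Outbound/ECC/Out_ECC
ls -lt Ready Exception                 # newest syndicated files, plus anything that was blocked
tail -f Log/$(ls -t Log | head -1)     # follow the most recent syndication log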

1.3 Testing

Now we are going to test our scenario and make sure that the export of materials works correctly. First things first, we need to create a new material in the MDM Data Manager. Make sure that your MDM Syndication Server is turned on! Remember, on UNIX we can start it by running ./mdss in the bin directory, and on Windows by simply starting the service.

1.3.1 MDM Data Manager

· Start MDM Data Manager

· Connect to Material repository.

clip_image007

· Add a new material by "right-click".

clip_image008

· Fill in required fields to satisfy the map built in Part I.

clip_image009

· Verify the new product is saved by clicking elsewhere in the Records screen, and then back to the new Material.

clip_image010

1.3.2 Check Syndication

We are now going to verify that the syndication process is taking place as it should, based on the settings in your mdss.ini file. If you have set the MDM Syndication Server to perform the syndication process every 30 seconds, as I did for testing purposes, then by the time you log into your server the syndication should have already occurred. Let's check by logging onto the server and navigating to the Ready folder in our port directory.
/opt/MDM/SID.WORLD_ORCL/Material/Outbound/ECC/Out_ECC/
If all went as planned your Ready folder may look something like this:
clip_image011
Those files are XML files that contain the data for each material in your repository that has changed. In this case the only materials in my repository are the two that I just added, so the MDM Syndication Server updated the Ready folder with both new materials. Now they are waiting for XI to pick them up and process them. Before we move over to the XI part, let's take a look at one of these files and verify that the data in them is correct. Keep in mind that if you have already configured XI to pick up the files from this directory and process them, it's possible you won't see them here because they have already been deleted by XI (based on the settings in your communication channel).

1.3.3 Verify Data

Let's go ahead and open one of these files. I copied the file from the server to my local Windows computer to examine it, but of course you can read the file straight from the server if you prefer. If your mapping was done similarly to mine, your file should look like a MATMAS05 IDoc in XML structure (a trimmed-down sketch of such a file follows after the screenshot). This makes it easier for XI to process, since we can export in this format from MDM without much difficulty.
clip_image012
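
For orientation, here is a heavily trimmed sketch of roughly what such a file can look like. The segment names E1MARAM and E1MAKTM come from the MATMAS05 IDoc type; the field values below are placeholders, and which fields actually appear depends entirely on the map you built in Part I.

<MATMAS05>
  <IDOC BEGIN="1">
    <E1MARAM SEGMENT="1">
      <MATNR>TEST_MATERIAL_1</MATNR>
      <E1MAKTM SEGMENT="1">
        <SPRAS>E</SPRAS>
        <MAKTX>Test material created in MDM</MAKTX>
      </E1MAKTM>
    </E1MARAM>
  </IDOC>
</MATMAS05>

The nested E1MAKTM segment carries the material description; it is the same segment you drill into when displaying the posted IDoc in WE02.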
