Why some Data Quality and MDM tools undermine Data Governance

If you ask most people involved with information management and data governance whether they understand data quality, you will get a resounding “yes!”  However, what actually constitutes quality data is a complex and subtle question.  Data that is “high quality” for one purpose – say, operations – may be nearly useless for another, such as invoicing.  In other words, data quality is relative to the business process that will use the data.

Traditional data quality tools generally address only the technical aspects of data quality – the cardinality of attributes, their data types, and so on.  Even address verification only validates that an address exists – it tells us nothing about whether the customer’s accounts payable department will actually receive an invoice mailed there.
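To make the distinction concrete, here is a minimal sketch (all names and rules are illustrative, not any specific tool’s API) of a record that passes every structural check a traditional tool would run, yet still fails a fitness-for-purpose rule that the invoicing process cares about:

```python
import re

record = {
    "customer_id": "C-1042",
    "name": "Acme Corp",
    "address": "100 Main St, Springfield, IL 62701",
}

def technically_valid(rec):
    """Checks of the kind traditional DQ tools perform: presence, type, format."""
    return (
        isinstance(rec.get("customer_id"), str)
        and bool(re.fullmatch(r"C-\d+", rec["customer_id"]))
        and bool(rec.get("name"))
        and bool(re.search(r"\d{5}", rec.get("address", "")))  # has a ZIP-like token
    )

def fit_for_invoicing(rec):
    """A fitness-for-purpose rule: does the record say where accounts
    payable actually receives mail?  (A hypothetical business rule.)"""
    return bool(rec.get("billing_address"))

print(technically_valid(record))   # True: every structural check passes
print(fit_for_invoicing(record))   # False: still not fit for invoicing
```

The point is not these particular rules but the gap between them: the second kind of check can only be written with knowledge of the consuming business process.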

Master Data Management (MDM) tools can certainly help address this issue, but many MDM tools have the potential to create a problem of their own.  To illustrate what this problem is, let’s take a look at one of the most common categories of information in MDM: customer.

I visited a major regional bank a while back that had “solved” its customer master problem.  It had collected all the customer data from across the bank (depository, retail credit, commercial lending, etc.) and built a single master repository of customer data in a popular CDM tool.  However, no one was using the data.  The reason?  The CDM tool’s view of the customer was not “their” view of the customer.  The master records were technically correct: they contained all the common information and correctly identified each customer.  However, all of the line-of-business (LOB) detail and “color” had been lost.

This is, I believe, one of the prime obstacles to agile and effective Data Governance.  A fixed, or at least difficult-to-change, information model invariably compromises the full value and utility of an MDM program, which is where many Data Governance rules are implemented.  To be effective, MDM implementations need to be customizable to meet the information needs of a particular enterprise.  Just as crucially, they need to be agile enough to evolve along with the changing needs of the business.
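One way such a customizable model can be structured – a hedged sketch, with illustrative names, not a prescription – is a shared master core plus per-LOB extension attributes, so that mastering the record does not flatten away each line of business’s detail:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerMaster:
    """Common master attributes plus named, per-LOB extension sets."""
    customer_id: str
    legal_name: str
    # LOB-specific attributes live in extension sets rather than being
    # forced into (or dropped from) a fixed schema.
    extensions: dict = field(default_factory=dict)

    def view(self, lob):
        """Each line of business sees the common core plus its own detail."""
        core = {"customer_id": self.customer_id, "legal_name": self.legal_name}
        return {**core, **self.extensions.get(lob, {})}

cust = CustomerMaster("C-1042", "Acme Corp")
cust.extensions["commercial_lending"] = {
    "credit_officer": "J. Smith",
    "facility_limit": 5_000_000,
}
cust.extensions["depository"] = {"relationship_tier": "Gold"}

# Commercial lending gets its "color" back; depository sees its own.
print(cust.view("commercial_lending"))
print(cust.view("depository"))
```

In the bank anecdote above, it was precisely these extension attributes that the one-size-fits-all master record discarded.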

So, since having truly high-quality data that is not only technically correct but also “fit for purpose” is important to the smooth operation of business processes, the toolset and procedures that collect, maintain, and deliver this information must be capable of being aligned exactly to the needs of those processes.  This necessitates an open, flexible, and easily customized information model.  It also requires extensive automation, in areas like data entry and review screens, to enable rapid response to changing requirements.  And it means, of course, that many aspects of traditional data quality enforcement must be incorporated into the MDM environment, and that these rules must adjust to changes in the information model.
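The last requirement – quality rules that track the information model – can be sketched as a metadata-driven design, where rules are declared alongside the model’s attributes rather than hard-coded in separate validation logic.  This is only an illustration under that assumption; the attribute names and rule vocabulary are invented:

```python
import re

# Rules are registered against model attributes, so adding or changing an
# attribute automatically brings its quality rules along with it.
model = {
    "customer": {
        "customer_id": {"required": True, "pattern": r"C-\d+"},
        "legal_name": {"required": True},
        "billing_address": {"required": True},
    }
}

def validate(entity, record):
    """Apply whatever rules the current model declares for this entity."""
    errors = []
    for attr, rules in model[entity].items():
        value = record.get(attr)
        if rules.get("required") and not value:
            errors.append(f"{attr}: missing")
        elif value and "pattern" in rules and not re.fullmatch(rules["pattern"], str(value)):
            errors.append(f"{attr}: malformed")
    return errors

print(validate("customer", {"customer_id": "C-1042", "legal_name": "Acme Corp"}))
# reports billing_address as missing, because the model now requires it
```

Because the validator walks the model rather than embedding the rules, extending the information model extends the enforcement with it – the agility the paragraph above calls for.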

Organizations that have embraced this sort of agile MDM strategy find it much easier to support a robust data governance strategy – not to mention more easily supporting agile data warehousing and business intelligence.
