I just finished reading a blog post by Forrester’s Connie Moore, “Do You Have a Single Version of Process?” on the Information Management website. Having lived parts of my career in both the Business Process Management and Data Management worlds, I couldn’t agree with her more! This piece brings us back to earth regarding the inextricable link between process and data, which I have written and spoken about many times. Good data fed into a bad process is as unpredictable as bad data fed into a good process.
I have seen countless organizations struggle to get to a single version of the truth – in both data and process. The reasons are many, but it often boils down to the appetite for instituting cultural change across an organization. Getting to a single version of process truth requires that someone in the enterprise give up “their way of doing things” to conform to a standard. Of course, in order to effect that change there has to be something in it for them. Data is not so different. Arriving at an enterprise definition for a term like “customer” requires compromise: someone has to change the way they view things to conform to the generally accepted definition. No one likes to compromise unless there is a very compelling reason to do so.
While these transformations are not easy and can make life miserable for the change agent at times, the value gained is well worthwhile. A clean, consistent version of master and reference data serves to eliminate, or at least illuminate, data as an influence on how a business process operates. This is an important first step before process optimization. Having confidence in the consistency and quality of the data fueling a process enables the business process management initiative to better understand how the process should behave, and opens the door to optimization practices that yield significant business performance benefits.
With data no longer a primary concern (because you’ve taken the time to “get it right”), you can conduct a thorough inspection of the AS-IS process states (plural) and make an easier transition to the TO-BE state (singular) – theoretically, at least. Moving from the academic exercise of modeling the process to the operational exercise of instituting and consistently executing it still requires the ability, and the will, to change organizational behavior.
I think an easier argument for changing the way a process is executed can be made once the exceptions caused by poor-quality data are eliminated. In many cases, the variable paths a process may follow exist to work around the variability of data. Eliminate the variability of data, and you eliminate some (though not all) of the variable pathways through the process. From there, a more pointed discussion, grounded in measurements of process throughput and efficiency, can facilitate the change in organizational behavior and bring more uniformity to the way processes are executed.
I realize that I’m making this sound simpler and more straightforward than the real world allows. The dynamics of organizational behavior will always make this a time-consuming and daunting task. However, removing one significant barrier to change (data variability) and instituting ongoing measurement and course correction (you can’t change what you don’t measure) can get your organization on the path to more predictable process execution, and pave the way to more effective optimization yielding greater business agility.