Definitive Process Library? Huh?

This morning PLEXENT, one of North America’s leaders in IT best-practice consulting, surprised me with a headline: “IT Improvement: What is a Definitive Process Library (DPL)?”

Aside from being a marketing term they made up, what exactly is a Definitive Process Library?

My conclusion after research: it is a marketing term they made up.

ITIL does not define a DPL. ITIL does define a Definitive Media Library (DML) in Service Transition (Release and Deployment Management):

One or more locations in which the definitive and authorized versions of all software CIs are securely stored. The DML may also contain associated CIs such as licenses and documentation. It is a single logical storage area even if there are multiple locations. The DML is controlled by Service Asset and Configuration Management and is recorded in the Configuration Management System (CMS).

Replace “software” with “processes” and you almost have a definition of a DPL, should you choose to coin one (for reasons other than marketing and self-promotion). But why would you?
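To see how little the word swap buys, here is a minimal sketch (the class and field names are mine, purely illustrative, not from ITIL): a “definitive library” record modeled on the DML definition above is structurally identical whether the controlled artifact is a software build or a process document.

```python
from dataclasses import dataclass, field

@dataclass
class DefinitiveLibraryEntry:
    """A definitive, authorized version of a controlled artifact,
    modeled loosely on the DML definition quoted above."""
    ci_id: str               # identifier under which the CI is recorded in the CMS
    name: str
    version: str
    artifact_type: str       # "software" for a DML; "process" for a so-called DPL
    storage_location: str    # one of possibly several physical locations
    authorized: bool = True  # only definitive, authorized versions belong here
    related_cis: list[str] = field(default_factory=list)  # licenses, documentation

# The only difference between a DML entry and a "DPL" entry is one field value:
dml_entry = DefinitiveLibraryEntry("CI-1001", "Billing App", "2.4.1",
                                   "software", "dml-store-01")
dpl_entry = DefinitiveLibraryEntry("CI-2001", "Incident Management Process",
                                   "1.3", "process", "dml-store-01")
```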

An organization oriented around services supported by processes would be deeply affected at all levels, including:

  • The organizational chart
  • Roles and responsibilities
  • Approval matrices
  • Authorization rights
  • Communication plans
  • Key Performance Indicators and reporting metrics
  • Human capital management
  • Automation tools

To name just a few. ITIL provides a conceptual framework for dealing with these challenges, including the Configuration Management Database (CMDB), the CMS, and the Service Knowledge Management System (SKMS).

For services, ITIL adds the Service Portfolio and the Service Catalog, concepts which could be handled through the broader framework of Knowledge Management.

Processes themselves are stored in the CMDB and managed through Change Management. No other consideration is required beyond how you publish, communicate, and manage the downstream impacts (some of which are mentioned above).
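As a sketch of that claim (hypothetical names, not any particular CMDB product’s API), a process document can be treated as an ordinary configuration item whose version only moves under a change record:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ProcessCI:
    """A process document treated as an ordinary configuration item."""
    ci_id: str
    name: str
    version: str
    location: str  # where the published copy lives: SharePoint, file share, ...

def apply_change(ci: ProcessCI, rfc_id: str, new_version: str) -> ProcessCI:
    """Update a process CI only under an approved Request for Change."""
    if not rfc_id:  # a real CMDB would validate this against Change Management
        raise ValueError("process CIs change only under an approved RFC")
    return replace(ci, version=new_version)

incident_mgmt = ProcessCI("CI-2001", "Incident Management Process", "1.3",
                          "https://intranet.example/processes/incident-mgmt")
incident_mgmt = apply_change(incident_mgmt, rfc_id="RFC-0042", new_version="1.4")
```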

In practice I have not observed any outstanding or notable practices here. I have seen processes stored and published on a file share, on SharePoint, on the intranet, in a CMDB, and as email attachments. Have you seen a practice that uniquely stands out? If so, let me know; I would love to hear about it.

HP’s $10 billion SKMS

In August 2011 HP announced the acquisition of the enterprise search firm Autonomy for $10 billion.

It is possible HP was just crazy, and former CEO Leo Apotheker was desperate to juice up HP’s stock price. With Knowledge Management.

Within ITSM the potential value is huge: tailored services and improved usage, faster resolution of Incidents, improved availability, faster on-boarding of new employees, and reduced turnover. (Ironically, improved access to recorded knowledge also reduces the loss when employees do leave.)
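A rough back-of-envelope sketch for just one of those benefits, faster Incident resolution. Every number below is a made-up assumption, not measured data:

```python
# Hypothetical inputs -- substitute your own measurements.
incidents_per_month = 2_000
minutes_saved_per_incident = 6   # e.g. via reusable knowledge articles
loaded_cost_per_hour = 60.0      # fully loaded cost of a service desk analyst

hours_saved_per_year = incidents_per_month * 12 * minutes_saved_per_incident / 60
annual_savings = hours_saved_per_year * loaded_cost_per_hour
print(f"{hours_saved_per_year:,.0f} hours/year, roughly ${annual_savings:,.0f}/year")
# -> 2,400 hours/year, roughly $144,000/year
```

Even with conservative assumptions, the number is usually large enough to justify a closer look.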

In 2011 Client X asked me for some background on Knowledge Management. I prepared an overview of ITIL’s Knowledge Management process, but it was never acted on. It seemed like too much work for too little benefit.

ITIL’s description does seem daunting. The process is riddled with abstractions like the Data → Information → Knowledge → Wisdom lifecycle. It elaborates on diverse sources of data such as issue and customer history, reporting, structured and unstructured databases, and IT processes and procedures. It overwhelms the reader with integration points between the Service Desk system, the Known Error Database, the Configuration Management Database, and the Service Catalog. Finally, ITIL defines a whole new improvement cycle (Analysis, Strategy, Architecture, Share/Use, and Evaluate), a continuous improvement method distinct from the CSI 7-Step Improvement Process.
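To pin the abstraction down, here is a toy sketch of the Data → Information → Knowledge steps using made-up Incident records (the Wisdom step, fittingly, resists code):

```python
from collections import Counter

# Data: raw incident records (entirely made up).
incidents = [
    {"id": 1, "service": "email", "cause": "full mailbox"},
    {"id": 2, "service": "email", "cause": "full mailbox"},
    {"id": 3, "service": "vpn",   "cause": "expired certificate"},
]

# Information: data in context -- incident counts per cause.
by_cause = Counter(i["cause"] for i in incidents)

# Knowledge: information made actionable -- a candidate known-error
# record for the most frequent cause.
top_cause, count = by_cause.most_common(1)[0]
known_error = {
    "title": f"Recurring incidents: {top_cause}",
    "occurrences": count,
    "workaround": "to be documented by the resolver group",
}
print(known_error)
```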

Is ITIL’s method realistic? Not really. It is unnecessarily complex: it focuses too much on architecture and on integrating diverse data sources, and not enough on use cases and quantifying value.

What are typical adoption barriers? Here are some:

  1. Data is stored in a variety of structured, semi-structured, and unstructured formats. Unlocking it requires disparate methods and tools.
  2. Much of the knowledge sits inside individual heads. Recording it requires time and effort.
  3. Publishing it requires yet another tool, or several.
  4. The rapid growth of data and complexity outpaces our ability to stay on top of it.
  5. Addressing all of this requires far too much management bandwidth.

In retrospect, my approach with Client X was completely wrong. If I could, I would go back and change that conversation. What should I have done?

  1. Establish the potential benefits.
  2. Identify the most promising use cases.
  3. Quantify the value.
  4. Identify the low-hanging fruit.
  5. Choose the most promising set of solutions to address the low-hanging fruit and the long-term growth potential.

What we need is a big red button that says “Smartenize”. Maybe HP knew Autonomy was on to something. There is a lot of value in extracting knowledge from information, and meaning from data. The rest of the world hasn’t caught up yet, but it will soon.