Validating Configuration Management Uses

Two weeks ago I discussed how the value of Configuration Management activities is derived from the improvements made to other processes.

I want to suggest another way this can happen, borrowing a concept that has recently started to surface in Project Management communities. That concept is called PRUB. It was introduced last year by Dr. Phil Driver, a professor from New Zealand, and published this year in the book “Validating Strategies: Linking Projects and Results to Uses and Benefits”. As a tool it is simple and fairly intuitive, yet it provides a framework for eliminating disconnects between the high-level desires of management and the practical day-to-day realities of operations. It helps us ensure there is a clear and understood path from the project charter through the deliverables to uses and benefits.

PRUB is an acronym for Projects, Results, Uses, and Benefits. Originally intended to map the projects that support a particular strategic direction, it is simple and flexible enough to apply to improvement initiatives in IT Service Management. In this case I am starting to use it as a tool for validating the uses associated with Configuration Management implementations, because it can help eliminate some of the shortcomings common to attempts to improve Configuration Management activities.

I also want to step through a specific case study from a recent implementation with a government entity. I should note that we did not perform a formal PRUB analysis for this set of CMDB use cases, but I mention it because their requirements clearly thought through each step of a PRUB analysis. They built some novel functionality into their CMDB implementation that I really liked.

I wrote up this and another example in the attachment: PRUB for CMDB. It contains a use case for Configuration Management with the Access Management process that was based loosely on this government entity.

[Figure: PRUB for CMDB drawing]

 

  • Project / Process: Configure the CMDB to improve Access Management (this is a novel usage).
  • Results: Create a Permission Configuration Item (CI) for each authorized access request to a system. Link the CI to the access request ticket. Create a relationship between the Permission CI and the system CI. In addition, import historical Permissions (granted prior to the system implementation) into the CMDB.
  • Uses: When visualizing planned changes to a CI, the relationships show who has been granted access. The CIs can be reported for scheduled reviews or audits of valid accesses. The specific access history can be pulled up from each Permission CI (based on the Access Request ticket). More broadly, this use of the CMDB provides a more flexible mechanism to visualize and review access requests.
  • Benefits: Improved assessment of the impact of planned changes. Improved reporting, controls and auditability for CIs.

There are other relationships associated with these Permission CIs, but I just wanted to provide a flavor of how the PRUB tool can demonstrate specific uses and benefits of the Configuration Management process.

Are the benefits measurable? If you can identify measurable benefits you should include them in your analysis. If you come across a CMDB implementation that does not clearly map out the results, how those results will be used, and how those uses will bring benefits to the users or the organization, then that should be a red flag.
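The red-flag test above can even be sketched in code. Below is a minimal illustration (the data model and field names are my own, not part of Dr. Driver's published method): it walks a PRUB chain and flags any Result with no mapped Uses, or any Use with no mapped Benefits.

```python
# Minimal sketch of a PRUB chain validator. The dictionary structure and
# field names here are illustrative assumptions, not part of the PRUB method.

def validate_prub(prub):
    """Return warnings for Results with no Uses and Uses with no Benefits."""
    warnings = []
    for result in prub["results"]:
        if not result.get("uses"):
            warnings.append(f"Result '{result['name']}' has no mapped uses")
        for use in result.get("uses", []):
            if not use.get("benefits"):
                warnings.append(f"Use '{use['name']}' has no mapped benefits")
    return warnings

# Example loosely based on the Access Management use case above
cmdb_prub = {
    "project": "Configure the CMDB to improve Access Management",
    "results": [
        {
            "name": "Permission CI per authorized access request",
            "uses": [
                {
                    "name": "Show granted access when visualizing planned changes",
                    "benefits": ["Improved assessment of change impact"],
                },
                {
                    "name": "Report CIs for scheduled access reviews and audits",
                    "benefits": ["Improved reporting, controls and auditability"],
                },
            ],
        },
        # Deliberately incomplete: a Result that no one has mapped to a Use
        {"name": "Historical Permissions imported to the CMDB", "uses": []},
    ],
}

for warning in validate_prub(cmdb_prub):
    print("RED FLAG:", warning)
```

Anything the validator prints is exactly the kind of disconnect PRUB is meant to expose before the implementation starts.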

The Balanced Improvement Matrix

Two weeks ago I presented to a customer how their IT improvement program could benefit from adopting principles from ITIL. I used this slide to illustrate another way to think about the issue.

[Figure: Benefit-Change Matrix (click to expand)]

Recipient of Benefits

The Y-axis describes who receives most of the immediate benefit of the activity. “Inside” refers to IT, either a component of IT or the entire department.

“Outside” refers to the outside stakeholders for IT services. Generally they fall into one of these groups:

  • Users: those who directly use the services. Generally the users also request the service.
  • Internal customers: those who request or authorize services on behalf of the users. Generally customers are the users, but sometimes they are distinct.
  • External customers: The ultimate customer who exchanges value with the organization.

Focus of Change

The focus or perspective of change describes where most of the change or improvement takes place. Again we describe this as inside IT or outside IT.

The change or improvement may or may not be limited to the primary location. There are often spillover benefits for related stakeholders that are less immediate.

Examining the Quadrants

Inside-In

This quadrant describes change or improvement activities that are limited exclusively to IT. Some examples may include:

  • Code refactoring
  • Recabling
  • Process improvement
  • Service Asset and Configuration Management
  • Training

Inside-In activities may be thought of as “charging the batteries”. External stakeholders will not see immediate benefits, but the benefits will accrue over time as the IT organization becomes more agile, flexible, efficient and effective.

Inside-Out

Inside-Out activities are those that modify the behavior of external stakeholders in order to maximize the capabilities of IT. Some examples may include Demand Management and Financial Management of IT Services, specifically charging for IT services in a way that encourages their efficient use.

Service Catalog Management and Service Portfolio Management also create activities in this quadrant, specifically those that describe prerequisites or costs to external stakeholders.

Outside-In

Outside-In activities are those that benefit external stakeholders by modifying the services or processes of IT. Service Level Management sits firmly in this area. The Service Improvement initiatives within CSI certainly fit here too. Alignment of IT with organizational strategy also reside predominantly in this quadrant.

Outside-Out

Does IT ever perform Outside-Out activities? Yes, with few exceptions, all IT organizations do.

Outside-Out efforts or improvement activities take place whenever IT acts as a consultant to the organization by bringing its unique capabilities and resources to business problems.

Some examples may include:

  • Strategic planning
  • Creating new lines of business
  • Due diligence of partnerships or acquisitions
  • Enterprise Risk Management and Business Continuity Planning

From an ITIL process perspective, the Outside-Out quadrant is best illustrated by Business Relationship Management (SS) and Supplier Management (SD), along with some activities of Change Management (ST) and Knowledge Management (ST).

Optimizing the matrix

I do not claim that any one quadrant is better than another. IT departments of the last century received criticism for focusing too much on inward benefits and losing sight of the broader context in which IT operates. That situation was expensive, frustrating to users, and ultimately untenable.

IT organizations in this century must and do perform activities in all four quadrants. Neglecting any quadrant can lead to the following outcomes.

[Figure: Benefit-Change Neglect Matrix (click to expand)]

Using frameworks such as ITIL, COBIT 5, or ISO/IEC 20000 to guide improvement initiatives can help IT organizations balance their efforts in all quadrants.

ITIL Certifications for 2013

The ITIL Exam Certification Statistics for 2013 are out, and we are now ready to present the final results.

All the images below may be expanded for higher resolution. All numbers are rounded to thousands (Foundation) or hundreds (Advanced) unless otherwise indicated.

Foundation

A total of 245,000 certificates were issued in 2013, up 3.6% from 236,000 in 2012. There are now 1.73 million little ITIL’ers in the world.

Pass rates increased about 1 percentage point to 91%. The compound annual growth rate (CAGR) of annual certificates since 2008 was 1.69%.
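For readers who want to check the arithmetic, the growth figures can be reproduced from the counts. This is a quick sketch; the 2008 base is derived by inverting the stated 1.69% CAGR rather than taken from the official statistics, and small differences from the quoted 3.6% annual growth come from the rounding to thousands.

```python
# Compound annual growth rate: (end / start) ** (1 / years) - 1
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# 2012 -> 2013 growth on the rounded certificate counts
growth = cagr(236_000, 245_000, 1)
print(f"{growth:.1%}")  # about 3.8% on the rounded figures

# Inverting the stated 1.69% CAGR over the five years 2008-2013
# implies a 2008 base of roughly 225,000 certificates
base_2008 = 245_000 / (1 + 0.0169) ** 5
print(f"{base_2008:,.0f}")
```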

The regional distribution of ITIL certificates shifted only slightly from North America and Europe toward Asia, whose share rose 1.1 percentage points to 33.8%.

Intermediate

Overall the Intermediate market is growing faster and changing more rapidly than the Foundation market.

A total of 33,300 Intermediate Lifecycle certificates were issued in 2013, up 26% from 2012. In addition 17,600 Intermediate Capability certificates were issued in 2013, up 10% from 2012. We don’t know how many unique individuals this represents, but we can assume that most individuals do not stop at one.

The market share of Lifecycle, adjusted by credit hours, increased from 55.3% in 2012 to 58.8% in 2013. Although gradual, the Lifecycle exams are slowly coming to dominate the Intermediate certification market.
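The credit-hour adjustment can be reproduced from the counts above, assuming the standard ITIL v3 credit values of 3 credits per Lifecycle certificate and 4 per Capability certificate (an assumption on my part; the official figures may weight slightly differently):

```python
# Credit-weighted 2013 market share of Lifecycle vs. Capability certificates,
# assuming 3 credits per Lifecycle and 4 credits per Capability certificate.
lifecycle_credits = 33_300 * 3
capability_credits = 17_600 * 4
share = lifecycle_credits / (lifecycle_credits + capability_credits)
print(f"{share:.1%}")  # close to the 58.8% quoted above; rounding accounts for the rest
```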

The MALC (alt. MATL) exam was passed 4,500 times in 2013, up 21% from 2012. There are now 25,000 ITIL Experts in existence. (Please note, this number differs slightly from the official number, I assume due to time delays in conferring the Expert certificate.)

The regional distribution of Intermediate exams is also shifting. The share of Intermediate certificates is still dominated by Europe, at 41%, down from 47% in 2010. North America declined from 32% in 2010 to 19% in 2013, while Asia increased from 12% to 30.5% over the same period. The numbers here represent regional distribution; the number of certificates awarded is up in each market, they are just rising faster in Asia.

 

2012 ITIL Exam Statistics

APM Group has released their ITIL exam statistics for the whole of 2012. I have compiled their statistics and present them here with a little more context.1

ITIL Foundation

  • Over 263,000 exams were administered in 2012, up 5% from 2011. Over 236,000 certificates were issued. 
  • This number finally exceeded the previous annual high which occurred in 2008 at 255,000. Annual exam registrations have climbed steadily since the global financial meltdown.
  • Overall pass rate was 90% in 2012, up steadily from 85% in 2010.
  • We have witnessed a shift in geographic distribution of Foundation certificates. North America’s representation in the global certificate pool dropped steadily from 25% in 2010 to 21.4% in 2012, while Asia’s has risen steadily from 29% to 32.7%.
  • Using unverified but credible data from another source that dates back to 1994, I estimate just under 1.5 million ITIL Foundation certificates have been issued total worldwide.

ITIL Intermediate

  • Over 3,700 ITIL Experts were minted in 2012. No V2 or V3 Bridge certifications were issued. 
  • Just under 54,000 ITIL V3 Intermediate exams were administered in 2012, up 21% from 2011. Over 42,000 Intermediate certificates were awarded (including MALC, which qualifies one for ITIL Expert).
  • The pass rates averaged 79% for the Lifecycle exams, and 78% for the Capability exams. Individual exam pass rates ranged from 75% (SO, ST) to 83% (SS).
  • Pass rate for the MALC (alt. MATL) was 66% in 2012, up steadily from 58% in 2010.
  • Of the distribution of Intermediate certificates, the global shift was even more striking than seen in Foundation. North America’s representation of certificates declined from 32% in 2010 to 20% in 2012, while Asia’s rose from 12% to 24%.
  • Europe’s representation of Intermediate certificates held steady at 47%.
  • Although interest in the ITIL Expert certification via MALC continues to climb, it will not exceed on an annual basis the 5,000 ITIL V3 Experts minted in 2011 via the Managers Bridge exam until 2014 at the earliest, based on historical trends.
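The 2014 projection in the last bullet can be sketched as a simple compounding exercise. The 21% annual growth rate is my assumption (it matches the recent growth in Intermediate exams), so treat the output as a rough trend line rather than a forecast:

```python
# Project annual ITIL Expert (MALC) counts forward from 2012, assuming
# growth continues at roughly 21% per year (an assumed rate, in line with
# the recent Intermediate exam growth reported above).
count = 3_700   # Experts minted in 2012
year = 2012
while count <= 5_000:   # the 2011 high-water mark set by the Managers Bridge
    year += 1
    count *= 1.21
print(year, round(count))  # first year the annual count would exceed 5,000
```

Under that assumption the annual count first crosses 5,000 in 2014, which is why "2014 at the earliest" is the reasonable reading of the trend.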

[Figures: 2012 ITIL Foundation, Advanced, and Regional statistics graphs (click to expand)]

1 Unless otherwise indicated, numbers are rounded to the thousands.

The Role of COBIT5 in IT Service Management

In Improvement in COBIT5 I discussed my preference for the Continual Improvement life cycle.

Recently I was fact-checking a post on ITIL (priorities in Incident Management) and I became curious about the guidance in COBIT5.

The relevant location is “DSS02.02 Record, classify and prioritize requests and incidents” in “DSS02 Manage Service Requests and Incidents”. Here is what it says:

3. Prioritise service requests and incidents based on SLA service definition of business impact and urgency.

Yes, that’s all it says. Clearly COBIT5 has some room for improvement.

COBIT5 is an excellent resource that complements several frameworks, including ITIL, without being able to replace them. For the record, the COBIT5 framework says it serves as a “reference and framework to integrate multiple frameworks,” including ITIL. COBIT5 never claims to replace other frameworks.

We shouldn’t expect to throw away ITIL books for a while. Damn! I was hoping to clear up some shelf space.

Incident Prioritization in Detail

One of the advantages of working with BMC FootPrints is the lack of definition “out of the box”.1 The tool provides easy configuration of fields, priorities, statuses, and workflows within multiple workspaces, but there are few defaults (besides sample workspaces that are not very usable). This lack of out-of-the-box configuration has exposed me to an enormous variety of choices used by different organizations.

Priority

One organization used the term Severity. This has the advantage of abbreviating to “Sev”, so incidents can be described as Sev1, Sev2, etc. Nevertheless, most organizations stick with Priority.

I have seen these range all the way from 2 levels (Critical, Normal) to as many as 7 or 8.

 

2          3        4          5          6                  7
Critical   High     Critical   Critical   Critical           P1
Normal     Medium   High       High       High               P2
           Low      Medium     Medium     Medium             P3
                    Low        Normal     Low                P4
                               Project    Service Request    P5
                                          Normal

The table above shows the more common configurations. In my experience the use of terms (High, Medium, Low) is more common than numbering (P1, P2, P3), but the latter is also used.

One of my clients had used numbering, P1 through P5, but they had overused P1 so badly they had to insert a new P0 to serve the original purpose of P1 (fortunately they have since fixed the issue). This reminds me of the project “prioritization” of a former employer: everything on the list was “High Priority”. They effectively said everything was the same priority, and so everything was low.

I encourage the use of “Normal” instead of “Low”, because no user wants their issue perceived as “low priority”. I have also seen a customer take this advice but substitute Medium for Low instead. Most organizations track Service Requests alongside Incidents, so we usually want some mechanism for differentiating them, but note that new priorities are not required (see Urgency below).

I also find it common to create a separate priority level, Project, for handling projects (or extended service requests) that are scheduled past normal Service Level targets. A Project choice is particularly useful when Service Level measurements are tied to Priority (my colleague, Evans Martin, has already written about this).

I have also seen duplicated sets of Priority choices tied to different Service Levels, depending on which team was assigned the work or on regional organizational differences. For example, a software issue assigned to a development group might have a separate set of service levels but remain in the same Service Desk system for tracking purposes. In this case they might have choices like P1, P2, P3, P1-Dev, P2-Dev, and P3-Dev.

Impact

Impact describes the level of business impact. Usually this is described in terms of the number or percentage of users impacted. I had one customer who described the percentage of configuration items (CIs) at its facilities that were impacted (see column 5 below).

 

1        2            3              4                    5
High     10+ People   Organization   Entire Company       80-100%
Medium   2-9 People   Department     One/Multiple Sites   50-80%
Low      1 Person     Individual     Department           20-50%
                                     VIP                  Under 20%
                                     Individual

I have seen organizations describe the number of people affected (column 2), but most common are the choices ranging from Entire Organization to Individual. The choices in between need to reflect your own organization. One customer who ran fitness outlets needed to distinguish corporate sites from fitness centers.

The default configuration High/Medium/Low (column 1) is too ambiguous in most cases, but I have seen it used.

Many organizations separate VIPs from non-VIP individuals. VIPs will often map to Priority similar to Departments.

Urgency

Urgency describes how quickly the incidents should be resolved. In the simplest case this can be High, Medium, and Low, but as with Impact this is usually too ambiguous to be useful.

 

1        2             3
High     0-2 Hours     Down
Medium   3-4 Hours     Affected
Low      4-8 Hours     Service Request
         1-2 Days      Project
         Over 3 Days

I have also seen Urgency described as resolution time frames (column 2). There are two issues with this: the time frames are easily confused with Service Level targets (which they are not), and they are ambiguous, especially in situations when no downtime is acceptable. I find Down, Affected, and Request to be useful.

The combination of column 4 in Impact and column 3 in Urgency produces combinations that read as plain English sentences: the Company is Affected, or the Individual is Down. I like this because the intent is clear.

Mapping Table

The mapping from Impact and Urgency to Priority can usually be described in a table like below. There are no right or wrong answers here, and it varies by organization and by choices for Impact and Urgency. In the table, Impact runs in the first column and Urgency runs in the first row.

 

                 Down       Affected   Request   Project
Entire Company   Critical   Critical   High      Project
Department       Critical   High       Medium    Project
VIP              High       Medium     Normal    Project
Individual       Medium     Normal    Normal     Project

In some cases multiple choices of Impact or Urgency will always map to the same priority. For example, VIP often maps like Department. Although I encourage simplicity, sometimes it makes sense to break them out in order to make the choices clear. (You could also stack choices, such as “Department / VIP”).

You will also need to decide whether to allow overriding these choices. If so, you will need to add a third field (called something like Override or Priority Override) to your mapping table.
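For illustration, the mapping table and the override rule can be sketched in code. This is a hypothetical configuration, not any vendor's API; the field names and the handling of Project urgency are my own choices:

```python
# Sketch of the Impact x Urgency -> Priority mapping from the table above,
# with an optional Override field as discussed. All names are illustrative.
PRIORITY_MAP = {
    ("Entire Company", "Down"): "Critical",
    ("Entire Company", "Affected"): "Critical",
    ("Entire Company", "Request"): "High",
    ("Department", "Down"): "Critical",
    ("Department", "Affected"): "High",
    ("Department", "Request"): "Medium",
    ("VIP", "Down"): "High",
    ("VIP", "Affected"): "Medium",
    ("VIP", "Request"): "Normal",
    ("Individual", "Down"): "Medium",
    ("Individual", "Affected"): "Normal",
    ("Individual", "Request"): "Normal",
}

def priority(impact, urgency, override=None):
    """Derive Priority from Impact and Urgency; an explicit Override wins."""
    if override:
        return override
    if urgency == "Project":  # Project urgency maps to Project at any impact
        return "Project"
    return PRIORITY_MAP[(impact, urgency)]

print(priority("Department", "Down"))                        # Critical
print(priority("Individual", "Affected"))                    # Normal
print(priority("Individual", "Down", override="Critical"))   # Critical
```

Keeping the table in one data structure makes it easy to review with stakeholders and to change without touching the derivation logic.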

Other Issues

  1. Start the discussion with minimal choices for Impact, Urgency, and Priority. Add choices only as necessary.
  2. If the tool has default choices, start with those.
  3. You may have Service Level Agreements tied to your Priority that need to be factored in.
  4. Avoid duplicating terms across fields, such as using High/Medium/Low in both Urgency and Priority.
  5. You need to decide whether customers / users can choose the Priority. I don’t encourage it, because the user may not be qualified to understand the Impact. Moreover they will always choose Critical. Nevertheless, many organizations do allow it.
  6. Decide if you want default choices for Impact and Urgency. Doing so may limit the usefulness of Priority (IT agents are lazy and often leave the defaults).
  7. As discussed before, you may need a policy for when and whether Priority can be changed.

1 Several customers preferred more options out of the box. I can understand the desire for the “standard configurations” provided by other vendors, but at the time it seemed strange and undesirable.

Changing Incident Priority

The correlation between sanity and LinkedIn Groups is inverse. I joined several groups because I like to stay connected with the industry, but the disinformation (and verbosity) can be infuriating. Recently I read the following, and several people agreed:

The priority of an incident must never be changed, once determined

For the record, here was my response:

Whether and how the priority should change is a policy issue for the organization. I am not aware of any “good practices” that say one way or the other. Some organizations allow the customer or user to provide the initial prioritization. The Service Desk should review the initial prioritization as a matter of good practice (and obvious necessity).

As Stephen suggested, and as described in ITIL 2011, the calculation of Priority will often be based on Urgency and Impact.

If you enforced this policy in the tool, just imagine the consequences of a simple data entry error that wasn’t detected prior to saving. Fortunately, few organizations use this policy, and ITIL 2011 is even more liberal.

It should be noted that an incident’s priority may be dynamic — if circumstances change, or if an incident is not resolved within SLA target times, then the priority must be altered to reflect the new situation. Changes to priority that might occur throughout the management of an incident should be recorded in the incident record to provide an audit trail of why the priority was changed.

In my experience few organizations create an audit trail for changes to incident prioritization (although some tools, such as FootPrints Service Core, track these changes in the History). As a general good practice I stand by my original comment.

I will discuss the details of incident priorities in an upcoming post.

Improvement in COBIT 5

In a previous post I discussed starting your service or process improvement efforts with Continual Service Improvement (or just Improvement).

I prefer COBIT5, and here is my issue with ITIL. The good news is that Continual Service Improvement is the shortest of the five core books of ITIL 2011. CSI defines a 7-Step Improvement Process:

  1. Identify the strategy for improvement
  2. Define what you will measure
  3. Gather the data
  4. Process the data
  5. Analyze the information and data
  6. Present and use the information
  7. Implement improvement

This method, as the name suggests, is heavily focused on service and process improvement. It is infeasible in situations where there is no discernible process, no metrics, and no activity that could be captured for measurement and analysis. Due to this lack of maturity, it is infeasible for most services and processes in most organizations.

I find the COBIT5 method more flexible. It also provides 7 steps, but it views them from multiple standpoints, such as program management, change enablement, and the continual improvement life cycle.

For example, the program management view consists of:

  1. Initiate program
  2. Define problems and opportunities
  3. Define road map
  4. Plan program
  5. Execute plan
  6. Realize benefits
  7. Review effectiveness

COBIT5 provides a framework that is more flexible and yet more concise, while still providing detailed guidance on implementation and improvement efforts in terms of a) roles and responsibilities, b) tasks, c) inputs, and d) outputs, among others.

Therefore I find the COBIT5 framework, particularly the COBIT5 Implementation guide, superior to the Continual Service Improvement book of ITIL 2011.

In addition COBIT5 offers a goals cascade: detailed guidance and mapping between organizational goals, IT-related goals, and the processes throughout the framework that may influence those goals. The goals cascade is useful guidance for improvement efforts, but alas it is the subject of another discussion.

Starting With Improvement

At last week’s Service Management Fusion 12 conference, I attended a brief presentation on Event Management that left a lot of time for questions and answers. One of the questioners raised a common concern for organizations starting down the road of “implementing ITIL”: where should we start?1

In this case the speaker demurred with ordinary consultant-speak: it depends on your organization and objectives. Event Management supports Incident Management, and that is where many organizations start their journey. I raised my hand and offered a brief alternative: start with Continual Service Improvement (CSI). I didn’t want to upstage the speaker, so I kept my comment brief and exited for another speaker whom I also wanted to see.

The 5 books of ITIL imply a natural flow: Service Strategy leads naturally to Service Design. Services are then ready for testing and deployment as part of Service Transition, and will then require support as defined by Service Operation. Once in production, services can be improved with Continual Service Improvement.

This is a natural life cycle for individual services and processes, but ITIL never says services or processes should be improved (or defined) in this order. In fact, ITIL does not offer much guidance on this at all. Because of this, and because organizations are all unique, each organization needs to define its own road map. CSI is one tool for doing this.

I encourage organizations to assemble a board to oversee the development and improvement of services and processes. The board may consist of stakeholders from IT and other functional units that depend heavily on IT’s services, as well as the executive management who oversee them. The composition will vary by organization, and the board would meet quarterly or monthly.

The board’s agenda will include several items: upcoming projects (new services), reviews or assessments of service and process maturity (if any), reviews of user satisfaction surveys or interviews, and reviews of existing implementation and improvement efforts. Most importantly, existing performance metrics should be summarized and reviewed. Care should be taken to avoid making this a project review meeting. Instead the focus is on the assessment and maturity improvement of overall IT services and processes in order to guide future development initiatives.

The board serves several purposes:

  • Ensures the prioritization of implementation and improvement efforts receives feedback from a variety of stakeholders.
  • Ensures there is a method or process for implementing and improving services and processes.
  • Provides a forum for reviewing service and process maturity.
  • Provides a mechanism for reviewing service and process performance metrics with various stakeholders.

The governance board concept presented here may not apply to all organizations; I have applied it to only one organization. But for IT organizations that are challenged with immature service definitions (the lack of a Service Catalog), poor operational dialog with other business units, or poorly understood service and process maturity, the board is one mechanism for prioritizing and overseeing improvement efforts.

I emphasize both concepts: implementation and improvement. The practices presented in ITIL v3/2011 are more complete and mature than those of most IT organizations. In fact I have encountered few organizations with maturity in more than a small fraction of the practices, and even fewer with usable performance metrics. Most of the time we start with implementation, because they have too little in place to engage in improvement, but the improvement board should still oversee and prioritize the implementations.

1 ITIL as a framework cannot be “implemented”. However, we can engage in improvement efforts using the framework as guidance.

 

Definitive Process Library? Huh?

This morning one of North America’s leaders in IT best practice consulting, PLEXENT, surprised me with a headline: IT Improvement: What is a Definitive Process Library (DPL)?

Besides being a marketing term they made up, what exactly is a Definitive Process Library?

My conclusion after research: it is a marketing term they made up.

ITIL does not define a DPL. ITIL does define a Definitive Media Library (DML) in Service Transition (Release and Deployment Management):

One or more locations in which the definitive and authorized versions of all software CIs are securely stored. The DML may also contain associated CIs such as licenses and documentation. It is a single logical storage area even if there are multiple locations. The DML is controlled by Service Asset and Configuration Management and is recorded in the Configuration Management System (CMS).

Replace “software” with “processes” and you almost have a definition of a DPL, if you chose to define one (for reasons other than marketing and self-promotion). But why would you?

An organization oriented around services supported by processes would be deeply affected by process changes at all levels, including:

  • The organizational chart
  • Roles and responsibilities
  • Approval matrices
  • Authorization rights
  • Communication plans
  • Key Performance Indicators and reporting metrics
  • Human capital management
  • Automation tools

To name just a few. ITIL provides a conceptual framework for dealing with these challenges, including the CMDB, the CMS, and the SKMS.

For services, ITIL has added the Service Portfolio and the Service Catalog, concepts which for knowledge management purposes could be handled through the broader framework of Knowledge Management.

As for processes, they are stored in the CMDB and managed through Change Management. No other consideration is required, beyond how you publish, communicate and manage the downstream impacts (some of which are mentioned above).

In practice I have not observed any outstanding or notable best practices. I have seen process documents stored and published on a file share, on SharePoint, on the Intranet, in a CMDB, and as email attachments. Have you seen practices that uniquely stand out? If so, let me know; I would love to hear about it.