Analytic Hierarchy Process

AHP Background

One model I want to explore today is the Analytic Hierarchy Process (AHP), developed by Thomas L. Saaty in the 1970s as a tool for choosing among options using a set of weighted criteria.

For example, we may choose a software package on the basis of criteria such as supported features and functions, scalability, quality (fitness for purpose, fitness for use), security, availability, and disaster recovery. AHP provides a mechanism for weighting the criteria: several members of staff are interviewed for one-by-one (pairwise) assessments of relative importance, which are then transformed into relative weightings using an eigenvector transformation.

The idea of using multiple criteria to assess multiple options is not new. AHP enhances the ability to weight the assessment criteria using feedback from multiple stakeholders with conflicting agendas. Rather than determining a “correct” answer, it identifies the answer most consistent with the organization’s understanding of the problem.
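The eigenvector transformation mentioned above can be sketched in a few lines of Python. This is a minimal illustration using a hypothetical three-criterion comparison matrix on Saaty's 1-9 scale, not a complete AHP implementation (it omits, for example, the consistency-ratio check):

```python
def ahp_weights(matrix, iterations=100):
    """Approximate the principal eigenvector of a pairwise
    comparison matrix via power iteration, normalized to sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        # One power-iteration step: w <- normalize(matrix @ w)
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

# Hypothetical judgments: quality is rated 3x as important as
# security and 5x as important as availability; security is
# rated 3x as important as availability.
pairwise = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
weights = ahp_weights(pairwise)  # roughly [0.63, 0.26, 0.11]
```

Note that each entry below the diagonal is the reciprocal of its mirror above the diagonal, which is what makes the matrix a valid set of pairwise comparisons.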

Other use cases can include project portfolio optimization, vendor selection, plant location, hiring, and risk assessment. More information can be found at the International Journal of the Analytic Hierarchy Process (free registration).

Simple AHP hierarchy with associated default priorities.

Applications in ITSM

In the field of ITSM there are several papers that describe instances in which AHP was used.

The paper “EDITOR-IN-CHIEF ENRIQUE MU USES AHP TO HELP CITY OF PITTSBURGH MOVE TO THE CLOUD” (free registration) briefly discusses Professor Enrique Mu’s application of the AHP for the City of Pittsburgh’s efforts to migrate IT functions to cloud providers. The decision period spanned several months and was considered strategic for the city.

Another paper, “The critical factors of success for information service industry in developing international market: Using analytic hierarchy process (AHP) approach” (paywall), discusses the use of AHP to analyze critical success factors (CSFs) in international market diversification for information service providers in Taiwan. The authors interviewed 22 participants (CEOs, experts, consultants) to generate pairwise comparisons of CSFs, which the AHP method distilled into factor weightings. These weightings could then be used by individual information service providers to decide whether to enter specific markets.

In “A Method to Select IT Service Management Processes for Improvement” (free access to PDF), professors from the School of Management & Enterprise at the University of Southern Queensland used AHP as part of a method for ranking ISO/IEC 20000 process priorities for improvement. This paper is worth exploring in much greater detail because, in my experience, the prioritization of process or service improvement initiatives can be very painful at organizations, particularly those with multiple influential stakeholders with incompatible or conflicting requirements.

Last but not least, in “DECISION SUPPORT IN IT SERVICE MANAGEMENT: APPLYING AHP METHODOLOGY TO THE ITIL INCIDENT MANAGEMENT PROCESS” (free registration), professors at the FH JOANNEUM University of Applied Sciences in Graz, Austria discuss the use of AHP in prioritizing Incidents. Their implementation used four decision criteria to prioritize Incidents:

  1. Number of customers affected
  2. Whether “important” customers are affected
  3. Penalty or cost of the outage
  4. Remaining time until violation of the service level target
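As a sketch of how such criteria might be combined, here is a hypothetical weighted-sum scoring in Python. The weights and the 0-to-1 normalization are invented for illustration; the paper's actual model derives its weights from full AHP pairwise comparisons:

```python
# Hypothetical criterion weights (in practice these would come
# from an AHP pairwise-comparison exercise among stakeholders).
WEIGHTS = {
    "customers_affected": 0.30,
    "important_customer": 0.25,
    "outage_cost": 0.25,
    "time_to_sla_breach": 0.20,
}

def incident_priority(scores):
    """Combine per-criterion scores (each normalized to 0..1,
    where 1 = most severe) into a single priority score."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

major = incident_priority({
    "customers_affected": 0.9,  # many customers affected
    "important_customer": 1.0,  # a key account is down
    "outage_cost": 0.8,
    "time_to_sla_breach": 0.7,  # breach of the target is near
})
routine = incident_priority({
    "customers_affected": 0.1,
    "important_customer": 0.0,
    "outage_cost": 0.1,
    "time_to_sla_breach": 0.2,
})
# major scores 0.86, routine 0.095; the major outage is queued first
```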

IT organizations typically use simplified “rules of thumb” for prioritizing Incidents based on Impact and Urgency. Notably, three of these four factors typically appear in variants of that schema. See my discussion in Incident prioritization in detail (which also avoids the explicit use of SLAs in evaluating Incident resolution).

I don’t find the prioritization of Incidents to be a particularly strong candidate for AHP analysis. High-priority Incidents are relatively rare and are generally handled one at a time or by non-overlapping resources. Lower-priority Incidents (routine break-fixes for the Service Desk) can be handled first-come-first-served or using the relatively crude but quick methods described in ITIL.

Prioritization of Problems seems a more suitable candidate for AHP because a) Problem resolution can require days or weeks, b) multiple Problems may be active and contending for resources, and c) the resolution of a Problem can deliver much greater long-term financial impact for organizations. The principles and underlying support system would be similar.

Other uses of AHP that merit further investigation include:

  • Prioritization of service and process improvement initiatives
  • Selection of ITSSM tools
  • Selection of vendors (in support of the Supplier Management function / process of Service Design) and/or cloud providers
  • Prioritization of activities for shared resources in multi-queuing environments (leveling of activities across multiple processes and/or projects)


The Balanced Improvement Matrix

Two weeks ago I presented to a customer how their IT improvement program can be improved by adopting principles from ITIL. I used this slide to illustrate another way to think about the issue.

[Figure: Benefit-Change matrix]

Recipient of Benefits

The Y-axis describes who receives most of the immediate benefit of the activity. “Inside” refers to IT, either a component of IT or the entire department.

“Outside” refers to the external stakeholders for IT services. Generally they fall into one of these groups:

  • Users: those who directly use the services. Generally the users also request the service.
  • Internal customers: those who request or authorize services on behalf of the users. Generally customers are the users, but sometimes they are distinct.
  • External customers: The ultimate customer who exchanges value with the organization.

Focus of Change

The focus or perspective of change describes where most of the change or improvement takes place: within IT or outside of IT.

The change or improvement may or may not be limited to the primary location. There are often spillover benefits for related stakeholders that are less immediate.

Examining the Quadrants

Inside-In

This quadrant describes change or improvement activities that are limited exclusively to IT. Some examples may include:

  • Code refactoring
  • Recabling
  • Process improvement
  • Service Asset and Configuration Management
  • Training

Inside-In activities may be thought of as “charging the batteries”. External stakeholders will not see immediate benefits, but the benefits will accrue over time as the IT organization becomes more agile, flexible, efficient and effective.

Inside-Out

Inside-Out activities are those that modify the behavior of external stakeholders in order to maximize the capabilities of IT. Some examples may include Demand Management and Financial Management of IT Services, specifically charging for IT services in a way that encourages their efficient use.

Service Catalog Management and Service Portfolio Management also create activities in this quadrant, specifically those that describe prerequisites or costs to external stakeholders.

Outside-In

Outside-In activities are those that benefit external stakeholders by modifying the services or processes of IT. Service Level Management sits firmly in this area. The Service Improvement initiatives within CSI certainly fit here too. Alignment of IT with organizational strategy also resides predominantly in this quadrant.

Outside-Out

Does IT ever perform Outside-Out activities? Yes; with few exceptions, all IT organizations do.

Outside-Out efforts or improvement activities take place whenever IT acts as a consultant to the organization by bringing its unique capabilities and resources to business problems.

Some examples may include:

  • Strategic planning
  • Creating new lines of business
  • Due diligence of partnerships or acquisitions
  • Enterprise Risk Management and Business Continuity Planning

From an ITIL process perspective, the Outside-Out quadrant is best illustrated by Business Relationship Management (SS) and Supplier Management (SD), along with some activities of Change Management (ST) and Knowledge Management (ST).

Optimizing the matrix

This is not to claim that any one quadrant is better than another. IT departments of the last century received criticism for focusing too much on inward benefits while losing sight of the broader context in which IT operates. That situation was expensive, frustrating to users, and ultimately untenable.

IT organizations in this century must and do perform activities in all four quadrants. Neglecting any quadrant can lead to the following outcomes.

[Figure: Benefit-Change neglect matrix]

Using frameworks such as ITIL, COBIT 5, or ISO/IEC 20000 to guide improvement initiatives can help IT organizations balance their efforts in all quadrants.

ITIL Certifications for 2013

The ITIL Exam Certification Statistics for 2013 are out, and we are now ready to present the final results.

All the images below may be expanded for higher resolution. All numbers are rounded to thousands (Foundation) or hundreds (Advanced) unless otherwise indicated.

Foundation

A total of 245,000 certificates were issued in 2013, up 3.6% from 236,000 in 2012. There are now 1.73 million little ITIL’ers in the world.

Pass rates increased about 1 percentage point to 91%. The compound annual growth rate (CAGR) of annual certificates since 2008 was 1.69%.
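The CAGR formula can be inverted to estimate the implied 2008 figure, a back-of-the-envelope sketch since the 2008 certificate count is not stated here:

```python
# Reported figures: 245,000 certificates in 2013 and a 1.69%
# CAGR over the five years since 2008. The 2008 count itself
# is not given, so we back it out from the CAGR definition:
#   end = start * (1 + cagr) ** years
cagr = 0.0169
certs_2013 = 245_000
years = 5
implied_2008 = certs_2013 / (1 + cagr) ** years  # about 225,000
```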

The regional distribution of ITIL certificates shifted only slightly from North America and Europe to Asia, whose share rose 1.1 percentage points to 33.8%.

Intermediate

Overall the Intermediate market is growing faster and changing more rapidly than the Foundation market.

A total of 33,300 Intermediate Lifecycle certificates were issued in 2013, up 26% from 2012. In addition 17,600 Intermediate Capability certificates were issued in 2013, up 10% from 2012. We don’t know how many unique individuals this represents, but we can assume that most individuals do not stop at one.

The market share of Lifecycle, adjusted by credit hours, increased from 55.3% in 2012 to 58.8% in 2013. Although gradual, the Lifecycle exams are slowly coming to dominate the Intermediate certification market.

The MALC (alt. MATL) exam was passed 4,500 times in 2013, up 21% from 2012. There are now 25,000 ITIL Experts in existence. (Note: this number differs slightly from the official number, I assume due to time delays in conferring Expert certificates.)

The regional distribution of Intermediate exams is also shifting. The share of Intermediate certificates is still dominated by Europe, at 41%, down from 47% in 2010. North America declined from 32% in 2010 to 19% in 2013, while Asia increased from 12% to 30.5% over the same period. The numbers here represent regional shares; the number of certificates awarded is up in every region, just rising faster in Asia.

 

ITIL Exams for Oct 2013

Axelos has updated their ITIL Exam Performance Statistics through October of last year. There are no major breakthroughs since the statistics were last reported here for 2012. I am providing only major highlights.

ITIL Foundation

196,000 ITIL Foundation certificates have been awarded so far in 2013, out of 216,000 attempts, an overall pass rate of 91%, up from 90% in 2012. There are now a little over 1.1 million ITIL V3 certificate holders.

The overall attempt rate is flat compared with the same 9 months in 2012. As a result, I predict the total number of new certificates in 2013 will hold flat with 2012, at around 236,000.

Intermediate and Expert

A total of 39,000 Intermediate certificates have been issued during the first 10 months of 2013, up from 35,000 during the same period in 2012. The total number of certificates should reach 47,000 for the whole year.

Intermediate pass rates ranged from 73.8% (Capability SOA) to 82.3% (Lifecycle CSI). Overall pass rates for Lifecycle are slightly higher (80% vs. 78%). The pass rate for MALC is 66.1% in 2013, the lowest overall.

The “market share” of the Lifecycle track rose to 58% so far in 2013, compared with 55% in 2012. This is the total number of exams of each type taken, adjusted by the number of credits awarded.

3,200 ITIL V3 Experts have been minted so far in 2013, compared with 3,000 for the same period in 2012. I am reporting a total of 23,720 ITIL V3 Experts, while Axelos reports 23,141. Axelos may be under-reporting due to time lags. I may be over-reporting based on pass rates of the MALC exam.

2012 ITIL Exam Statistics

APM Group has released their ITIL exam statistics for the whole year 2012. I have compiled their statistics and present them with a little more context. 1

ITIL Foundation

  • Over 263,000 exams were administered in 2012, up 5% from 2011. Over 236,000 certificates were issued. 
  • This number finally exceeded the previous annual high which occurred in 2008 at 255,000. Annual exam registrations have climbed steadily since the global financial meltdown.
  • Overall pass rate was 90% in 2012, up steadily from 85% in 2010.
  • We have witnessed a shift in geographic distribution of Foundation certificates. North America’s representation in the global certificate pool dropped steadily from 25% in 2010 to 21.4% in 2012, while Asia’s rose steadily from 29% to 32.7%.
  • Using unverified but credible data from another source that dates back to 1994, I estimate just under 1.5 million ITIL Foundation certificates have been issued total worldwide.

ITIL Intermediate

  • Over 3,700 ITIL Experts were minted in 2012. No V2 or V3 Bridge certifications were issued. 
  • Just under 54,000 ITIL V3 Intermediate exams were administered in 2012, up 21% from 2011. Over 42,000 Intermediate certificates were awarded (including MALC, which qualifies one for ITIL Expert).
  • The pass rates averaged 79% for the Lifecycle exams, and 78% for the Capability exams. Individual exam pass rates ranged from 75% (SO, ST) to 83% (SS).
  • Pass rate for the MALC (alt. MATL) was 66% in 2012, up steadily from 58% in 2010.
  • Of the distribution of Intermediate certificates, the global shift was even more striking than seen in Foundation. North America’s representation of certificates declined from 32% in 2010 to 20% in 2012, while Asia’s rose from 12% to 24%.
  • Europe’s representation of Intermediate certificates held steady at 47%.
  • Although interest in the ITIL Expert certification via MALC continues to climb, it will not exceed on an annual basis the 5,000 ITIL V3 Experts minted in 2011 via the Managers Bridge exam until 2014 at the earliest, based on historical trends.

[Charts: 2012 ITIL Foundation, Advanced, and regional exam statistics]

1 Unless otherwise indicated, numbers are rounded to the thousands.

The Role of COBIT5 in IT Service Management

In Improvement in COBIT5 I discussed my preference for the Continual Improvement life cycle.

Recently I was fact-checking a post on ITIL (priorities in Incident Management) and I became curious about the guidance in COBIT5.

The relevant location is “DSS02.02 Record, classify and prioritize requests and incidents” in “DSS02 Manage Service Requests and Incidents”. Here is what it says:

3. Prioritise service requests and incidents based on SLA service definition of business impact and urgency.

Yes, that’s all it says. Clearly COBIT5 has some room for improvement.

COBIT5 is an excellent resource that complements several frameworks, including ITIL, without being able to replace them. For the record, the COBIT5 framework says it serves as a “reference and framework to integrate multiple frameworks,” including ITIL. COBIT5 never claims it replaces other frameworks.

We shouldn’t expect to throw away ITIL books for a while. Damn! I was hoping to clear up some shelf space.

Improvement in COBIT 5

In a previous post I discussed starting your service or process improvement efforts with Continual Service Improvement (or just Improvement).

I prefer the approach in COBIT5, but let’s start with ITIL. The good news is that Continual Service Improvement is the shortest of the five core books of ITIL 2011. CSI defines a 7-Step Improvement Process:

  1. Identify the strategy for improvement
  2. Define what you will measure
  3. Gather the data
  4. Process the data
  5. Analyze the information and data
  6. Present and use the information
  7. Implement improvement

This method, as the name suggests, is heavily focused on service and process improvement. It is infeasible where there is no discernible process, no metrics, and no activity that could be captured for measurement and analysis. That lack of maturity describes most services and processes in most organizations.

I find the COBIT5 method more flexible. It also defines 7 steps, but it views them from multiple standpoints, such as program management, change enablement, and the continual improvement life cycle.

For example, the program management view consists of:

  1. Initiate program
  2. Define problems and opportunities
  3. Define road map
  4. Plan program
  5. Execute plan
  6. Realize benefits
  7. Review effectiveness

COBIT5 provides a framework that is more flexible yet more concise, while still providing detailed guidance on implementation and improvement efforts in terms of a) roles and responsibilities, b) tasks, c) inputs, and d) outputs, among others.

Therefore I find the COBIT5 framework, particularly the COBIT5 Implementation guide, superior to the Continual Service Improvement book of ITIL 2011.

In addition COBIT5 provides a goals cascade that provides detailed guidance and mapping between organizational and IT-related goals and processes throughout the framework that may influence those goals. The goals cascade is useful guidance for improvement efforts, but alas it is the subject of another discussion.

Starting With Improvement

At last week’s Service Management Fusion 12 conference, I attended a brief presentation on Event Management that left a lot of time for questions and answers. One of the questioners had an ordinary concern for organizations starting down the road of “implementing ITIL”: where should we start?1

In this case the speaker demurred using ordinary consultant speak: it depends on your organization and objectives. Event Management supports Incident Management, which is where many organizations start their journey. I raised my hand and offered a brief alternative: start with Continual Service Improvement (CSI). I didn’t want to upstage the speaker, so I kept my comment brief and left to see another speaker.

The 5 books of ITIL imply a natural flow: Service Strategy leads naturally to Service Design. Services are then ready for testing and deployment as part of Service Transition, and will then require support as defined by Service Operation. Once in production, services can be improved with Continual Service Improvement.

This is a natural life cycle for individual services and processes, but ITIL never says services or processes should be improved (or defined) in this order. In fact, ITIL does not offer much guidance on this at all. Because of this, and because organizations are all unique, each organization needs to define its own road map. CSI is one tool for doing this.

I encourage organizations to assemble a board to oversee the development and improvement of services and processes. The board may consist of stakeholders from IT and from other functional units that depend heavily on IT’s services, as well as the executive management who oversee them. The composition will vary by organization; the board would meet monthly or quarterly.

The board’s agenda will include several items, including upcoming projects (new services), reviews or assessments of service and process maturity (if any), reviews of user satisfaction surveys or interviews, and review of existing implementation and improvement efforts. Most importantly, existing performance metrics should be summarized and reviewed. Care should be taken to avoid making this a project review meeting. Instead the focus is on the assessment and maturity improvement of overall IT services and processes in order to guide future development initiatives.

The board serves several purposes:

  • Ensures the prioritization of implementation and improvement efforts receives feedback from a variety of stakeholders.
  • Ensures there is a method or process for implementing and improving services and processes.
  • Provides a forum for reviewing service and process maturity.
  • Provides a mechanism for reviewing service and process performance metrics with various stakeholders.

The concept of a governance board presented here may not apply to all organizations; I have applied it to only one. But for IT organizations that are challenged with immature service definitions (lack of a Service Catalog), poor operational dialog with other business units, or poorly understood maturity of services and processes, the board is one mechanism for prioritizing and overseeing improvement efforts.

I emphasize both concepts of implementation and improvement. The practices presented in ITIL v3/2011 are more complete and mature than those of most IT organizations. In fact I have encountered few organizations with maturity in more than a small fraction of them, and even fewer with usable performance metrics. Most of the time we start with implementation, because organizations have too little in place to engage in improvement; the improvement board should still oversee and prioritize the implementations.

1 ITIL as a framework cannot be “implemented”. However, we can engage in improvement efforts using the framework as guidance.

 

ITIL Exam Statistics Updated for July 2012

APM Group has released their ITIL exam statistics through July 2012. I have compiled their statistics and present them with a little more context.

ITIL Foundation

  • Over 148,000 Foundation exams have been administered so far in 2012, resulting in over 132,000 certificates to date.
  • Pass rate is 90% in 2012, up steadily from 85% in 2010.
  • Total results for 2012 are on a trajectory for 10% growth over 2011, which ended with 250,000 exams taken and 220,000 Foundation certificates issued.
  • Asia has overtaken Europe in July at 40% of exams taken globally. This is partially attributable to seasonal cycles in both regions, but Asia’s share has risen steadily from around 25% in the first half of 2010.
  • Using unverified but credible data from another source that dates back to 1994, I estimate just under 1.4 million ITIL Foundation certificates have been issued total worldwide.

ITIL Advanced Certificates

  • No V2 or V3 Bridge certifications were issued in 2012.
  • Almost 29,500 intermediate exams were taken in 2012, resulting in over 23,000 intermediate certificates. (Note: a certificate does not imply a unique individual.)
  • Interest in Lifecycle track continues to rise relative to Capability track. Adjusting for credit disparities, Lifecycle track constituted 69% of the certificates in 2012, up from 59% in 2009.
  • Over 2,000 ITIL V3 Experts have been minted thus far in 2012, via the Managing Across the Lifecycle (MALC; alt. MATL) exam.
  • Although interest in the ITIL Expert certification via MALC continues to climb, it will not exceed on an annual basis the 5,000 ITIL V3 Experts minted in 2011 via the Managers Bridge exam until 2014 at the earliest.
  • Europe continues to dominate the advanced ITIL certification market at over 40%. However, Asian interest continues to climb and now constitutes over 30% of the advanced certifications.


Definitive Process Library? Huh?

This morning one of North America’s leaders in IT best practice consulting, PLEXENT, surprised me with a headline: IT Improvement: What is a Definitive Process Library (DPL)?

Besides a marketing term they made up, it made me wonder, what exactly is a Definitive Process Library?

My conclusion after research: it is a marketing term they made up.

ITIL does not define a DPL. ITIL does define a Definitive Media Library (DML) in Service Transition (Release and Deployment Management):

One or more locations in which the definitive and authorized versions of all software CIs are securely stored. The DML may also contain associated CIs such as licenses and documentation. It is a single logical storage area even if there are multiple locations. The DML is controlled by Service Asset and Configuration Management and is recorded in the Configuration Management System (CMS).

Replace “software” with “processes” and you almost have a definition of DPL, if you would choose to do so (for reasons other than marketing and self-promotion). But why would you?

An organization oriented around services supported by processes would be deeply affected at all levels, including:

  • The organizational chart
  • Roles and responsibilities
  • Approval matrices
  • Authorization rights
  • Communication plans
  • Key Performance Indicators and reporting metrics
  • Human capital management
  • Automation tools

To name just a few. ITIL provides a conceptual framework for dealing with these challenges, including the CMDB, the CMS, and the SKMS.

For services, ITIL has added the Service Portfolio and the Service Catalog, concepts which for knowledge management purposes could be handled through the broader framework of Knowledge Management.

Processes themselves are stored in the CMDB and managed through Change Management. No other consideration is required besides how you publish, communicate, and manage the downstream impacts (some of which are mentioned above).

In practice I have not observed any outstanding or notable best practices. I have seen processes stored and published on a file share, on SharePoint, on the intranet, in a CMDB, and as email attachments. Have you seen practices that uniquely stand out? If so, let me know; I would love to hear about it.