Analytic Hierarchy Process

AHP Background

One model I want to explore today is the Analytic Hierarchy Process (AHP), developed by Thomas L. Saaty in the 1970s as a tool for choosing among options using a set of weighted criteria.

For example, we may choose a software package on the basis of criteria such as supported features or functions, scalability, quality (fitness for purpose, fitness for use), security, availability and disaster recovery. AHP provides a mechanism for weighting the criteria by interviewing several members of staff for pairwise assessments of relative importance, which can then be transformed into relative weightings using an eigenvector calculation.
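
To make the weighting step concrete, here is a minimal sketch in Python (assuming the numpy library is available). The three criteria and the pairwise judgements are invented for illustration; only the reciprocal comparison matrix and the principal-eigenvector step follow Saaty's method.

```python
import numpy as np

# Saaty-scale pairwise comparison matrix for three hypothetical criteria
# (features, security, availability). Entry [i][j] answers "how much more
# important is criterion i than criterion j?" on the 1-9 scale; the matrix
# is reciprocal by construction.
A = np.array([
    [1.0,       3.0,       5.0],
    [1.0 / 3.0, 1.0,       2.0],
    [1.0 / 5.0, 1.0 / 2.0, 1.0],
])

# Relative weights are the principal eigenvector (largest eigenvalue),
# normalized so the weights sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(A)
principal = eigenvectors[:, np.argmax(eigenvalues.real)].real
weights = principal / principal.sum()

# Saaty's consistency index flags contradictory judgements.
n = A.shape[0]
lambda_max = eigenvalues.real.max()
consistency_index = (lambda_max - n) / (n - 1)

print("criteria weights:", np.round(weights, 3))
print("consistency index:", round(consistency_index, 3))
```

In practice the consistency index would be compared against Saaty's random-index table; a consistency ratio above roughly 0.1 usually means the interview judgements should be revisited.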

The idea of using multiple criteria to assess multiple options is not new. AHP enhances the ability to weight the assessment criteria using feedback from multiple stakeholders with conflicting agendas. Rather than determining a “correct” answer, it identifies the answer most consistent with the organization’s understanding of the problem.

Other use cases can include project portfolio optimization, vendor selection, plant location, hiring, and risk assessment. More information can be found at the International Journal of the Analytic Hierarchy Process (free registration).

[Figure: Simple AHP hierarchy with associated default priorities.]

Applications in ITSM

In the field of ITSM there are a number of papers that describe instances in which AHP has been used.

The paper “EDITOR-IN-CHIEF ENRIQUE MU USES AHP TO HELP CITY OF PITTSBURGH MOVE TO THE CLOUD” (free registration) briefly discusses Professor Enrique Mu’s application of AHP to the City of Pittsburgh’s efforts to migrate IT functions to cloud providers. The decision period spanned several months and was considered strategic for the city.

Another paper, “The critical factors of success for information service industry in developing international market: Using analytic hierarchy process (AHP) approach” (paywall), discusses the use of AHP for analyzing critical success factors (CSFs) in international market diversification for information service providers in Taiwan. The authors interviewed 22 participants (CEOs, experts, consultants) to generate pairwise comparisons of CSFs, which the AHP method distilled into factor weightings. These factor weightings could be used by specific information service providers to determine whether or not they should enter specific markets.

In “A Method to Select IT Service Management Processes for Improvement” (free access to PDF), professors from the School of Management & Enterprise at the University of Southern Queensland used AHP as part of a method for ranking ISO/IEC 20000 process priorities for improvement. This particular paper is worth exploring in much greater detail because, in my experience, the prioritization of process or service improvement initiatives can be very painful at organizations, particularly those with multiple influential stakeholders with incompatible or conflicting requirements.

Last but not least, in “DECISION SUPPORT IN IT SERVICE MANAGEMENT: APPLYING AHP METHODOLOGY TO THE ITIL INCIDENT MANAGEMENT PROCESS” (free registration), professors at the FH JOANNEUM University of Applied Sciences in Graz, Austria discuss the use of AHP in prioritizing Incidents. In their specific implementation they used four decision criteria to prioritize Incidents (a rough sketch of how such weighted criteria could be combined follows the list):

  1. Number of customers affected
  2. Whether “important” customers are affected
  3. Penalty or cost of the outage
  4. Remaining time until violation of the service level target
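
Here is a rough sketch in Python, with all numbers invented, of the AHP synthesis step: once the four criteria have been weighted, each open Incident is scored against each criterion and the weighted sum gives its overall priority. In a full AHP treatment the per-criterion scores would themselves come from pairwise comparisons of the open Incidents rather than being assigned directly.

```python
# Hypothetical criterion weights, e.g. produced by an eigenvector step like
# the one sketched earlier; the numbers are invented.
weights = {
    "customers_affected": 0.40,
    "important_customers": 0.25,
    "outage_cost": 0.20,
    "time_to_sla_breach": 0.15,
}

# Invented local scores (0..1) for two open Incidents against each criterion.
incidents = {
    "INC-1001": {"customers_affected": 0.9, "important_customers": 0.2,
                 "outage_cost": 0.6, "time_to_sla_breach": 0.3},
    "INC-1002": {"customers_affected": 0.3, "important_customers": 1.0,
                 "outage_cost": 0.8, "time_to_sla_breach": 0.7},
}

def overall_priority(scores):
    """AHP synthesis: weighted sum of the local scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Highest overall priority first.
for name, scores in sorted(incidents.items(),
                           key=lambda item: overall_priority(item[1]),
                           reverse=True):
    print(name, round(overall_priority(scores), 3))
```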

IT organizations typically use simplified “rules of thumb” for prioritizing Incidents based on Impact and Urgency; notably, three of these four factors typically appear in variants of that schema. Please see my discussion in Incident prioritization in detail (which also avoids the explicit use of SLAs in evaluating Incident resolution).
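
For contrast, the usual rule of thumb amounts to a simple lookup table. The 3×3 matrix below is a typical illustration of the idea, not a quotation from any particular framework text.

```python
# Priority looked up from Impact and Urgency; the labels and mappings are
# illustrative only.
PRIORITY_MATRIX = {
    ("high",   "high"):   "P1",
    ("high",   "medium"): "P2",
    ("high",   "low"):    "P3",
    ("medium", "high"):   "P2",
    ("medium", "medium"): "P3",
    ("medium", "low"):    "P4",
    ("low",    "high"):   "P3",
    ("low",    "medium"): "P4",
    ("low",    "low"):    "P5",
}

def incident_priority(impact: str, urgency: str) -> str:
    return PRIORITY_MATRIX[(impact, urgency)]

print(incident_priority("high", "medium"))  # P2
```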

I don’t find the prioritization of Incidents to be a particularly strong candidate for AHP analysis. High-priority incidents are relatively rare and are generally handled one at a time or by non-overlapping resources. Lower-priority incidents (routine break-fixes for the Service Desk) can be handled first-come-first-served or using the relatively crude but quick methods described in ITIL.

Prioritization of Problems seems a more suitable candidate for AHP because a) Problem resolution can require days or weeks, b) multiple Problems may be active and contending for resources, and c) the resolution of a Problem can deliver much greater long-term financial impact for an organization. The principles and underlying support system would be similar.

Other uses of AHP that merit further investigation include:

  • Prioritization of service and process improvement initiatives
  • Selection of ITSSM tools
  • Selection of vendors (in support of the Supplier Management function / process of Service Design) and/or cloud providers
  • Activity prioritization for resources in multi-queuing environments (leveling of activities across multiple processes and/or projects)


Models in Service Management

For the time being I am focusing my attention on the use of models in service management (more here). Models are useful because they help us understand correlations, understand system dynamics, make predictions, and test the effects of changes.

There are few, if any, models in regular use in service management. There may be valid reasons for this. There are few, if any, good models in regular use in business generally (there are many bad models, and many more that are fragile and not applicable outside narrow domains).

ITIL1 2011 does make use of models where appropriate. The service provider model is one such example that helps us understand the nature of industry changes.

More are needed, and I am making a few assumptions here:

  1. There are robust models in use in other domains that can be applied to the management of IT operations and services
  2. These haven’t been adopted because we are unaware of them, or
  3. The conditions in which these models can be applied haven’t been explored.

It is time for us to explore other models that may be applicable and useful.

Oh, and Happy New Year! I wish everyone a happy and prosperous 2017.

1 ITIL is a registered trademark of AXELOS.

A Critique of Pure Data: Part 1

Rationalism was a European philosophy popular in the 18th and 19th centuries that emphasized discovering knowledge through the use of pure reason, independent of experience. It rejected the assertion of Empiricism that no knowledge can be deduced a priori. At the center of the dispute was cause and effect–whether effects could ever be determined from causes, whether causes could ever be deduced from effects, or whether they had to be learned through experimentation. Kant, a Rationalist, observed that both positions are necessary to understanding.

Modern science descended from Empiricism, but like Kant is pragmatic, neither accepting nor rejecting either position entirely. Scientists observe nature, deduce models, make predictions using the models, and test the predictions against observations. They describe the assumptions and limits of the models, and refine the models to adapt to new observations.

The old quip says all models are wrong, but some are useful. Scientific models are useful only to the extent they are demonstrated to be useful. At their simplest, they are abstract representations of the real world that are simpler and easier to comprehend than the complex phenomena they attempt to explain. They can be intuited from pure thought, or induced from observation. The benefit of models is their simplicity–they are easier to manipulate and analyze than their real-world counterparts.

Models are useful in some situations and not useful in others. Good models are fertile, meaning they apply to several fields of study beyond those originally envisioned. For example, agent models have demonstrated how cities segregate despite widespread tolerance of variation. Colonel Blotto outcomes can be applied to electoral college politics, sports, legal strategies, and screening of candidates.

To be useful, models are predictive, meaning they can infer effects from causes. For example, a model can predict that a given force (e.g. a rocket) applied to an object of a given mass (e.g. a payload) will cause a given amount of acceleration, which causes an increase in velocity over time. Models predict that clocks on satellites orbiting Earth run slightly faster than those on the surface, a result of the gravitational time dilation predicted by general relativity. Models may be useful in one domain but not appropriate for another. Users have to be aware of a model’s capabilities and limitations.
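
As a trivial worked instance of the first prediction (all figures invented, and ignoring gravity losses and the decreasing mass of a real rocket):

```python
# Invented figures for a hypothetical launch vehicle.
force = 7.6e6    # newtons of thrust
mass = 5.5e5     # kilograms of vehicle plus payload

acceleration = force / mass           # Newton's second law: a = F / m
burn_time = 160.0                     # seconds of constant thrust
delta_v = acceleration * burn_time    # velocity gained, v = a * t

print(f"acceleration: {acceleration:.1f} m/s^2")
print(f"velocity gained after {burn_time:.0f} s: {delta_v:.0f} m/s")
```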

Models give us the ability to distinguish causation from correlation. We may correlate schools running equestrian programs with higher academic performance, but we would be unwise to accept causation. We would have to create a model to show how aspects of equestrian activities improve cognitive development, and to discount the relevance of other models that may attribute causation to other factors. We would then search out data that can confirm or deny the effects of equestrian development on cognition. (It is more likely there are other causal factors acting on both equestrian programs and academic performance.) Whether or not models can show causal connections to all real-world phenomena, they can guide us to better questions.
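
A small simulation makes the point about hidden common causes. Here a made-up “funding” variable drives both equestrian programs and academic performance, and the two end up strongly correlated even though neither causes the other (Python, assuming numpy; all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools = 1000

# Hidden common cause: better-funded schools can afford equestrian programs
# and also tend to perform better academically.
funding = rng.normal(size=n_schools)
equestrian = 0.8 * funding + rng.normal(scale=0.6, size=n_schools)
performance = 0.7 * funding + rng.normal(scale=0.6, size=n_schools)

r = np.corrcoef(equestrian, performance)[0, 1]
print(f"correlation(equestrian, performance) = {r:.2f}")
# The correlation is clearly positive even though neither variable was
# generated from the other.
```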

For this discussion we are interested in computation, and that means Alan Turing, who in 1936 devised the Universal Turing Machine (UTM), a simple model of a computer. Turing showed the UTM can be used to compute any computable sequence. At the time this conclusion was astonishing. The benefit of the UTM lay not in its practicality–it is not a practical device–but in the simplicity of the model. In order to prove a problem is computable, you just need to demonstrate a program for it on the UTM. Separately, Turing also gave us the Turing Test, an approximate model of intelligence.
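
To give a flavour of what “demonstrating a program” means, here is a minimal sketch of a single-tape Turing machine simulator in Python. The machine itself (a transition table that flips every bit on the tape) is invented for illustration and is far simpler than a true universal machine.

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run a single-tape machine until it reaches the 'halt' state."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# (state, symbol read) -> (symbol to write, head move, next state).
# This invented machine flips every bit on the tape, then halts on the blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001
```

The appeal of the model is exactly this economy: a tape, a head, and a transition table suffice to express any computable sequence.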

Those who use models to make predictions have been shown to be more accurate than experts or non-experts relying on intuition. This last point is the most important, and is the main reason we develop and use them.

The IT Service Management industry lacks academic rigor because it has never been modeled. Most academic research consists of largely vain attempts to measure satisfaction and financial returns. Lacking a model, it is impossible to predict the effect of an “ITIL Implementation Project” on an organization, or how changes to the frameworks will affect industry performance. Is ITIL 2011 any better than ITIL V2? We presume it is, but we don’t know.

Continued in Part 2

All I Really Needed to Know About Titanic’s Deck Chairs I Learned From ITIL

Headlines have a tendency towards hyperbole to grab attention, and there is no shortage of them describing the dwindling role of the CIO. Here is a recent one: IT department ‘re-arranging deckchairs on the Titanic’ as execs bypass the CIO.

As reporting of technological change increases, the probability of comparing IT to re-arranging deck chairs on the Titanic approaches 100%.1 – Tucker’s Law2

It is true that CIOs are challenged by technological change. The widespread adoption of cloud-based services by business units bypassing the traditional IT function is well documented. Support and adoption of consumer devices, including mobile phones, tablets, and non-standard operating systems such as Mac OS and Android, are also challenging traditional IT functional units.

Specific to the cloud example, some further examination is helpful. We don’t need to reinvent a framework, because ITIL 2011 already provides one. Service Strategy section 3.3 (p.80) describes three types of service providers:

  • Type I — Internal service provider. Embedded in an individual business unit. Have in-depth knowledge of business function. Higher cost due to duplication.
  • Type II — Shared services unit. Sharing of resources across business units. More efficient utilization.
  • Type III — External service provider. Outsourced provider of services.

Furthermore, ITIL describes the movement from one type of service provider to another as follows.

Current challenges to the CIO role come from two directions:

  1. Change from Type II to Type I, or dis-aggregation of services to the business units.
  2. Change from Type II to Type III, or outsourcing of services (presumably) to cloud providers.

In fact the CIO may be seeing both at the same time, as traditional in-house applications are replaced with cloud services and the management of those services and the information supply chain are brought back to the business unit. The combination of those two trends could be called a value net reconfiguration, or simply, re-arranging the deck chairs on the Titanic.

Is this a necessary and permanent shift? Maybe, but probably not. I personally believe that part of the impetus is simply to bypass organizational governance standards such as enterprise architecture and security policies. Business Units can get away with this for a while, but as these services suffer failures and downside risks materialize, aggregated IT functions will have to take back control.

This does not mean the end of cloud adoption. Far from it. It means that the CIO will orchestrate the cloud providers, in order to optimize performance and manage risk. The CIO is as necessary as ever albeit with a different set of requirements.

Peter Kretzman has successfully argued that the reported demise of the CIO has it backwards: IT consumerization, the cloud, and the alleged death of the CIO.

Kretzman has also argued the dangers of uncoordinated fragmentation: IT entropy in reverse: ITSM and integrated software.


1 In 1990 Mike Godwin formulated the observation that as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.0. As an experiment in memetics, it taught us a lot about psychological tendencies towards hyperbole. The observation is now termed Godwin’s Law of Nazi Analogies. http://en.wikipedia.org/wiki/Godwin%27s_law
2 Consulting firm Forrester has provided us with enough ammunition to justify Tucker’s Law as a corollary.