Much Ado Over Gamification

Gamification is coming, whether you need it or not.

In December 2013, BMC announced a partnership with Bunchball to integrate its game mechanics engine with RemedyForce. I am told other vendors have game mechanics on their roadmaps.

Two and a half years ago I expressed my skepticism about including game mechanics in non-game scenarios. We are still waiting for anything to happen. And waiting, and waiting…

The idea behind gamification is to promote desired behaviors. Examples include first-call resolution, submission of knowledge base articles, use of KB articles in issue resolution, on-time resolution of Incidents and Requests, implementation of Changes without causing an Incident, and so on.

Any behavior you ever wanted to encourage with traditional tools can now be reinforced with game mechanics. Traditional tools include dashboards, metrics reports, disciplinary actions, public humiliation, and performance rewards. To these we are adding badges, leaderboards, and progress bars.
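
To make concrete just how thin the mechanic is, here is a minimal sketch in Python (the behaviors, thresholds, and badge names are invented): a counter, a threshold, and an announcement.

```python
# Minimal sketch of a badge mechanic: count desired behaviors per analyst
# and announce a badge when a threshold is reached. Purely illustrative.

BADGES = {
    "first_call_resolution": (25, "First-Call Hero"),
    "kb_article_submitted": (10, "Knowledge Contributor"),
}

def record_event(scores, analyst, behavior):
    """Increment the analyst's counter and award a badge at the threshold."""
    counts = scores.setdefault(analyst, {})
    counts[behavior] = counts.get(behavior, 0) + 1
    threshold, badge = BADGES[behavior]
    if counts[behavior] == threshold:
        print(f"{analyst} earned the '{badge}' badge")

scores = {}
for _ in range(25):
    record_event(scores, "alice", "first_call_resolution")
# -> alice earned the 'First-Call Hero' badge
```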

It seems to me like much ado over very little.

ITIL Certifications for 2013

The ITIL Exam Certification Statistics for 2013 are out, and we are now ready to present the final results.

All numbers are rounded to thousands (Foundation) or hundreds (Advanced) unless otherwise indicated.

Foundation

A total of 245,000 certificates were issued in 2013, up 3.6% from 236,000 in 2012. There are now 1.73 million little ITIL’ers in the world.

Pass rates increased about one percentage point, to 91%. The compound annual growth rate (CAGR) of annual certificates since 2008 is 1.69%.
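
For anyone checking the arithmetic, the CAGR works out as follows (the 2008 volume is back-calculated from the stated rate, so treat it as approximate):

\[
\text{CAGR} = \left(\frac{N_{2013}}{N_{2008}}\right)^{1/5} - 1
\approx \left(\frac{245{,}000}{225{,}000}\right)^{1/5} - 1 \approx 1.7\%
\]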

The regional distribution of ITIL certificates shifted only slightly from North America and Europe toward Asia, whose market share rose 1.1 percentage points to 33.8%.

Intermediate

Overall, the Intermediate market is growing faster and changing more rapidly than the Foundation market.

A total of 33,300 Intermediate Lifecycle certificates were issued in 2013, up 26% from 2012. In addition, 17,600 Intermediate Capability certificates were issued in 2013, up 10% from 2012. We don’t know how many unique individuals this represents, but we can assume that most individuals do not stop at one.

The market share of Lifecycle, adjusted by credit hours, increased from 55.3% in 2012 to 58.8% in 2013. The shift is gradual, but the Lifecycle exams are slowly coming to dominate the Intermediate certification market.
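
For anyone reproducing the figure, the credit weighting works roughly like this, assuming the standard 3 credits per Lifecycle module and 4 per Capability module (the small gap to 58.8% is rounding in the underlying counts):

\[
\text{Lifecycle share} \approx \frac{33{,}300 \times 3}{33{,}300 \times 3 + 17{,}600 \times 4}
= \frac{99{,}900}{170{,}300} \approx 58.7\%
\]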

The MALC (alt. MATL) exam was passed 4,500 times in 2013, up 21% from 2012. There are now 25,000 ITIL Experts in existence. (Please note, this number differs slightly from the official number, I assume due to time delays in conferring Expert certificates.)

The regional distribution of Intermediate exams is also shifting. The share of Intermediate certificates is still dominated by Europe, at 41%, down from 47% in 2010. North America declined from 32% in 2010 to 19% in 2013. Meanwhile, Asia increased from 12% to 30.5% over the same period. These numbers represent regional shares; the number of certificates awarded is up in every market, just rising faster in Asia.

 

Goals Versus Outcomes

Examples of Goals

- Revenue
- Market Share
- Sales Targets
- Expanded Customer Base
- Unicorn in 5 years, then Cash Out (Kaching!)

 

Fairly common goals, right? Sorry, people. These are outcomes of goals, but they are not goals themselves.

I have seen too many people confuse them. I have worked for too many such “leaders”.

These are better goals.
- “Fuck yeah, those guys rocked.”
- “Those guys keep every promise they make.”
- We remember the little things that the customers forget.
- Our software is incredibly easy to use.
- Customers cannot believe how fast we return the data.
- Our customers never knew how much they knew.

ITIL Exams for Oct 2013

Axelos has updated their ITIL Exam Performance Statistics through October of last year. There are no major changes since the statistics were last reported here for 2012. I am providing only the major highlights.

ITIL Foundation

196,000 ITIL Foundation certificates have been awarded so far in 2013, out of 216,000 attempts, an overall pass rate of 91%, up from 90% in 2012. There are now a little over 1.1 million ITIL V3 certificate holders.

The overall attempt rate is flat compared with the same period in 2012. As a result, I predict the total number of new certificates in 2013 will hold flat with 2012, at around 236,000.

Intermediate and Expert

A total of 39,000 Intermediate certificates have been issued during the first 10 months of 2013, up from 35,000 during the same period in 2012. The total number of certificates should reach 47,000 for the whole year.

Intermediate pass rates ranged from 73.8% (Capability SOA) to 82.3% (Lifecycle CSI). Overall pass rates for Lifecycle are slightly higher (80% vs. 78%). The pass rate for MALC is 66.1% in 2013, the lowest overall.

The “market share” of the Lifecycle track rose to 58% so far in 2013, compared with 55% in 2012. This is the total number of exams of each type taken, adjusted by the number of credits awarded.

3,200 ITIL V3 Experts have been minted so far in 2013, compared with 3,000 for the same period in 2012. I am reporting a total of 23,720 ITIL V3 Experts, while Axelos reports 23,141. Axelos may be under-reporting due to time lags. I may be over-reporting based on pass rates of the MALC exam.

ITIL Lifecycle or Capability Track?

For the ITIL V3 Expert I used Art Of Service’s e-learning program, and recommended it. If I were starting now I would seriously consider the HP VISPEL program. Candidates should probably consider the 360-day license unless they have a lot of spare time to complete the 180-day program on time.

Regarding Lifecycle vs. Capability track, the Lifecycle track is more popular and more closely aligned with the core books. The Capability track uses other books for study, but is well suited to practitioners who will stop at Intermediate. Some certification figures are found on this site. I will update the stats once the June (mid-year) figures are released.

http://itsminfo.com/2012-itil-exam-statistics/

Should Backups be Tracked as CIs?

Short answer: probably not.

Is there any business value in tracking backups? In other words, can you improve operational efficiency, increase revenue, or avoid costs in excess of the expense of tracking them?

One scenario for tracking backups as CIs arises when the cost of loss (e.g. to reputation) significantly exceeds the cost of tracking, and the status of the CIs can be updated throughout their lifecycle. This is especially true if tracking can be automated.
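
As a rough rule of thumb (the symbols are mine, not from any framework), tracking backups as CIs is worthwhile only when

\[
p_{\text{loss}} \times C_{\text{loss}} \;>\; C_{\text{tracking}}
\]

where \(p_{\text{loss}}\) is the probability of an incident that an accurate CI record would have prevented or shortened, \(C_{\text{loss}}\) is the cost of that incident (including reputation), and \(C_{\text{tracking}}\) is the ongoing cost of keeping the records current.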

I am not aware of any companies doing this. In most cases a process or operating manual provides sufficient control of backups.

Six Steps to Successful CMDB Implementations

Have you been asked to implement a CMDB? Here are a few pointers for doing it successfully.


  1. Find the “low hanging fruit” where you will obtain the most benefit for the least cost. Implement that.
  2. Configuration Management should be focused on improving processes, not implementing a database. A database is the presumed tool, but you need to look at how your processes will be improved.
  3. The leading candidates for improvement include: Request Fulfillment, Incident Management, Change Management, Problem Management, Availability Management, and Capacity Management (not necessarily in that order).
  4. Configuration Management is not about building a database. Your CMDB can be a spreadsheet if that provides the most benefit for the lowest cost (a minimal sketch follows this list). If necessary you can migrate to a more robust database later. However, see the next caveat.
  5. Maintaining the CMDB will be costly. This leads to two points: 1) make sure you understand what data you need and why, and 2) automate data collection as much as possible.
  6. Implement your CMDB in phases, in conjunction with Continual Service Improvement.
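
As point 4 suggests, the tooling can start out almost trivially simple. Here is a minimal, spreadsheet-style sketch in Python: one CSV file, one record type. The attribute list is hypothetical; in practice it should be dictated by the process you are trying to improve.

```python
import csv
from dataclasses import dataclass, asdict, fields

# A deliberately small, spreadsheet-style CMDB: one CSV file, one row per CI.
# The attributes below are examples only; track the fields a target process
# (e.g. Change or Incident Management) actually needs, and nothing more.

@dataclass
class ConfigurationItem:
    ci_id: str
    name: str
    ci_type: str        # e.g. "server", "application", "service"
    owner: str
    status: str         # e.g. "live", "retired"
    related_to: str     # comma-separated ci_ids, kept deliberately simple

def save_cmdb(items, path="cmdb.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ConfigurationItem)])
        writer.writeheader()
        writer.writerows(asdict(item) for item in items)

def load_cmdb(path="cmdb.csv"):
    with open(path, newline="") as f:
        return [ConfigurationItem(**row) for row in csv.DictReader(f)]

save_cmdb([
    ConfigurationItem("CI-001", "payroll-db01", "server", "ops", "live", "CI-002"),
    ConfigurationItem("CI-002", "payroll-app", "application", "hr-it", "live", "CI-001"),
])
print(load_cmdb()[0])
```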

Don’t expect your CMDB to include everything that it could conceivably contain according to ITIL. That would be too costly for the value provided to most organizations.

A Critique of Pure Data: Part 2

Please see Part 1 here.

Enter Big Data

In the June 2013 issue of Foreign Affairs (“The Rise of Big Data”), Kenneth Cukier and Viktor Mayer-Schoenberger describe the phenomenon as more than larger sets of data. It is also the digitization of information previously stored in non-digital formats, and the capture of data, such as location and personal connections, that was never available before.

They describe three profound changes in how we approach data.

  1. We collect complete sets of data, rather than samples that must be interpreted with traditional techniques of statistics.
  2. We are trading our preference for curated, high-quality data sets for larger, messier ones, because their benefits outweigh the cost of the messiness.
  3. We tolerate correlation in the absence of causation. In other words, we accept the likelihood of what will happen without knowing why it will happen.

Big data has demonstrated significant gains, and a notable one is language translation. Formal models of language never progressed to a usable point, despite decades of effort. In the 1990s IBM broke through using statistical translation based on a French-English corpus gleaned from high-quality Canadian parliamentary transcripts. Then progress stalled until Google applied massive memory and processing power to much larger and messier data sets measuring in the billions of words. Machine translations are now much more accurate, and Google Translate covers some 65 languages (which it can detect automatically when most humans could not).

Another notable success was the 2011 victory of IBM’s Watson over former winners in the game Jeopardy. Like Google Translate, the victory was based primarily on the statistical analysis of 200 million pages of structured and unstructured content. It was not based on a model of the human brain. Watson falls short of a true Turing Test, but it is significant nonetheless.

The loss of causality is not, by definition, a loss of useful information. UPS uses sensors to diagnose likely engine failures without understanding the cause of failure, reducing time spent on the roadside. Medical researchers in Canada have correlated small changes in large data streams of vital statistics to serious health problems, without understanding why those changes occur.
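
To illustrate the pattern with synthetic data (this is not UPS’s actual method, and the sensor is invented), a purely correlational maintenance flag looks something like this:

```python
import numpy as np

# Synthetic example: correlate a hypothetical vibration reading with later
# breakdowns and flag vehicles above a threshold. There is no causal model of
# why the engine fails, only "readings like this tend to precede failures."

rng = np.random.default_rng(0)
vibration = rng.normal(1.0, 0.3, size=500)               # fake sensor readings
failed = (vibration + rng.normal(0, 0.2, 500)) > 1.5      # fake failure outcomes

print("correlation:", round(float(np.corrcoef(vibration, failed)[0, 1]), 2))

threshold = 1.4                                           # chosen from history, not theory
flagged = vibration > threshold
print("flagged for preventive service:", int(flagged.sum()), "of", len(vibration))
```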

Given these successes, and the presence of influential political movements that attempt to discredit the validity of scientific models in areas such as evolutionary biology and climate science, it is tempting to announce the death of models. Indeed, many pundits have lately written obituaries for causation.

I believe these proclamations are premature. For starters, models in the form of data structures and algorithms are the backbone of big data. The rise of big data derives not only from the increased availability of processing power, memory, and storage, but also from the algorithms that use these resources more efficiently and enable new methods of identifying correlations. Some of these techniques are implicit, such as the rise of NoSQL databases that eliminate structured data tables and table joins. Others are innovative ways to find patterns in the data. Regardless, understanding which algorithms to apply to which data sets requires understanding them as abstract models of reality.

As practitioners discover correlations that were never known before, researchers will ask more and better questions about why those correlations exist. We won’t get away from the why entirely, in part because the new correlations will be so intriguing that causation will become more important. Researchers will not only ask better questions; they will also have new computational techniques and larger data sets with which to establish the validity of new models. In other words, the same advances that enable big data will enable the generation of new models, albeit with a time lag.

Moreover, as we press for more answers from the large data sets, we will find it increasingly hard to establish correlations. Analysts will solve this in part by finding new sets of data, and there will always be more data generated. However, much of that data will be redundant with existing data sets, or of poorer quality. As the correlations become more ambiguous, analysts will have to work harder to ask why. Analysts will inevitably have to establish causation in order to improve the quality of their predictions.

Please note that I don’t discount the successes of big data. This is one of the most important developments in the industry. Rather, I conclude that the availability of new data sources and the means to process them does not spell the death of modeling. It is leading instead to a great renaissance of model creation that advances hand in hand with big data.

Managing to Design

I realized yesterday that I almost didn’t buy my iPhone 5 because of a cable.


Apple introduced the new Lightning connector with the iPhone 5 and the fourth-generation iPad to replace the older 30-pin iPod connector. The new cable is smaller and reversible but, unlike some Android devices, supports only the slower USB 2.0 standard.

Lack of higher speeds and incompatibility with earlier peripherals are flimsy excuses to switch platforms. The write speed of the phone’s flash storage does not exceed USB 2.0 performance, and I don’t have any incompatible peripherals anyway. It just seemed that Apple was simply advancing its agenda of incompatibility with the rest of the tech world.

The episode is a reminder of how easy it is to fixate on shiny objects and small road bumps, and to take our eyes off the goal. Whatever our specific intentions, our broad goals are similar: to improve our lives, and to make our businesses more productive in pursuit of their goals.

Technological developments are important. Like Apple, we build technical architectures to maximize the current use of technology as a function of cost, while also maximizing our ability to adapt to change, again as a function of cost. Good design considers both current and expected future use of technology.

In my opinion, one of our worst behaviors is over-responding to user requests in ways that compromise a designed service. I know this statement is heresy in our “customer service” culture, in which “the customer is always right.” In The Real Business of IT, Richard Hunter and George Westerman explain it this way:

In the absence of a larger design, delivering without question on every request is a value trap. Over time, setting up IT as an order taker produces the complicated, brittle, and expensive legacy environments that most mature enterprises have. It hurts the business’s ability to deliver what’s needed for the future.

Our colleagues outside of IT are not customers. Our colleagues are just that–colleagues. We collaborate with each other as colleagues to create outcomes that deliver value to the customers who purchase our organization’s products. Like IT, our colleagues in HR, sales, marketing, and accounting consider the short-term and long-term ramifications of their decisions in the execution of their services.

This is not an excuse or empowerment to simply say no to our colleagues. There are right and wrong ways to refuse a request, and reflexive pushback is not one of the right ones. One way is to explain why we do things the way we do, and to offer alternatives that meet the immediate need without compromising longer-term objectives. We can also offer to review our modes of operation if they appear incompatible with the changing needs of the organization.

As the Project Manager for a call center and data center relocation, I remember the back-and-forth discussions between management and the construction firm, with me in the middle, over the layout of the new facility. The construction firm held traditional mindsets, but it did not blindly refuse requests. Instead it politely and patiently described the byzantine fire and construction regulations and the cost implications of various design trade-offs.

We eventually achieved a design that met most current and future needs. Whatever Apple’s specific designs, I prefer the Lightning connector.

A Critique of Pure Data: Part 1

Rationalism was a European philosophy, popular in the 17th and 18th centuries, that emphasized discovering knowledge through the use of pure reason, independent of experience. It rejected the assertion of Empiricism that no knowledge can be deduced a priori. At the center of the dispute was cause and effect: whether effects could ever be determined from causes, whether causes could ever be deduced from effects, or whether they had to be learned through experimentation. Kant observed that both positions are necessary to understanding.

Modern science descended from Empiricism, but like Kant is pragmatic, neither accepting nor rejecting either position entirely. Scientists observe nature, deduce models, make predictions using the models, and test the predictions against observations. They describe the assumptions and limits of the models, and refine the models to adapt to new observations.

The old quip says all models are wrong, but some are useful. Scientific models are useful only to the extent they are demonstrated to be useful. At their simplest, they are abstract representations of the real world that are simpler and easier to comprehend than the complex phenomena they attempt to explain. They can be intuited from pure thought, or induced from observation. The benefit of models is their simplicity: they are easier to manipulate and analyze than their real-world counterparts.

Models are useful in some situations and not in others. Good models are fertile, meaning they apply to fields of study beyond those originally envisioned. For example, agent models have demonstrated how cities segregate despite widespread tolerance of variation. Colonel Blotto outcomes can be applied to electoral college politics, sports, legal strategies, and the screening of candidates.
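
For the curious, a toy version of such an agent model (a rough sketch in the spirit of Schelling’s segregation model, not a faithful reproduction) fits in a few dozen lines:

```python
import random

# Toy Schelling-style model: agents of two types on a wrapping grid move to a
# random empty cell whenever fewer than `threshold` of their neighbours share
# their type. Even a mild preference tends to produce visible clustering.

def neighbours(grid, x, y, size):
    cells = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                cells.append(grid[(x + dx) % size][(y + dy) % size])
    return cells

def step(grid, size, threshold):
    empties = [(x, y) for x in range(size) for y in range(size) if grid[x][y] is None]
    for x in range(size):
        for y in range(size):
            agent = grid[x][y]
            if agent is None:
                continue
            nbrs = [n for n in neighbours(grid, x, y, size) if n is not None]
            if nbrs and sum(n == agent for n in nbrs) / len(nbrs) < threshold:
                nx, ny = empties.pop(random.randrange(len(empties)))
                grid[nx][ny], grid[x][y] = agent, None
                empties.append((x, y))

def segregation(grid, size):
    """Average share of like-type neighbours: a crude clustering measure."""
    shares = []
    for x in range(size):
        for y in range(size):
            agent = grid[x][y]
            if agent is None:
                continue
            nbrs = [n for n in neighbours(grid, x, y, size) if n is not None]
            if nbrs:
                shares.append(sum(n == agent for n in nbrs) / len(nbrs))
    return sum(shares) / len(shares)

size = 20
grid = [[random.choice(["A", "B", None, "A", "B"]) for _ in range(size)] for _ in range(size)]
print("before:", round(segregation(grid, size), 2))
for _ in range(30):
    step(grid, size, threshold=0.3)
print("after: ", round(segregation(grid, size), 2))
```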

To be useful, models are predictive, meaning they can infer effects from causes. For example, a model can predict that a given force (e.g. from a rocket) applied to an object of a given mass (e.g. a payload) will cause a given amount of acceleration, which increases velocity over time. Models predict that clocks in orbit on Earth satellites run slightly faster than those on the surface, a result of the gravitational time dilation predicted by general relativity. Models may be useful in one domain but not appropriate for another. Users have to be aware of their capabilities and limitations.
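
As a trivial worked instance of the first example (the numbers are invented for illustration), and a sketch of the second (the standard weak-field approximation, ignoring the partially offsetting velocity effect):

\[
a = \frac{F}{m} = \frac{1{,}000\ \text{N}}{100\ \text{kg}} = 10\ \text{m/s}^2,
\qquad
\frac{\Delta \tau}{\tau} \approx \frac{GM_\oplus}{c^2}\left(\frac{1}{r_{\text{surface}}} - \frac{1}{r_{\text{orbit}}}\right) > 0
\]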

Models give us the ability to distinguish causation from correlation. We may correlate schools running equestrian programs with higher academic performance, but we would be unwise to accept causation. We would have to create a model to show how aspects of equestrian activities improve cognitive development, and to discount the relevance of other models that may attribute causation to other factors. We would then search out data that can confirm or deny the effects of equestrian activity on cognition. (It is more likely there are other causal factors acting on both equestrian programs and academic performance.) Whether or not models can show causal connections for all phenomena, they can guide us to better questions.

For this discussion we are interested in computation, and that means Alan Turing, who in 1936 devised the Universal Turing Machine (UTM), a simple model of a computer. Turing showed the UTM can be used to compute any computable sequence. At the time this conclusion was astonishing. The benefit of the UTM lies not in its practicality (it is not a practical device) but in the simplicity of the model. To prove a problem is computable, you just need to demonstrate a program for the UTM that solves it. Separately, Turing also gave us the Turing Test, an approximate model of intelligence.
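
To show how little machinery the model needs, here is a toy simulator (the transition-table format and the example bit-flipping machine are my own choices, not Turing’s notation):

```python
# A minimal Turing machine simulator, purely illustrative of how simple the
# model is. The example machine flips every bit on the tape and halts at the
# first blank. Transition table format: (state, symbol) -> (new_state, write, move)

def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break                   # no matching rule: halt
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)), state

# Bit-flip machine: invert 0s and 1s, halt on blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_tm(flip, "10110"))   # -> ('01001', 'start')
```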

Those who use models to make predictions have been demonstrated to be more accurate than experts or non-experts relying on intuition. This last point is the most important, and is the main reason we develop and use models.

The IT Service Management industry lacks academic rigor because it has never been modeled. Most academic research consists of largely vain attempts to measure satisfaction and financial returns. Lacking a model, it is impossible to predict the effect of an “ITIL Implementation Project” on an organization, or how changes to the frameworks will affect industry performance. Is ITIL 2011 any better than ITIL V2? We presume it is, but we don’t know.

To be continued…