Changing Incident Priority

The correlation between sanity and LinkedIn Groups is inverted. I joined several groups because I like to stay connected with the industry, but the disinformation (and verbosity) can be infuriating. Recently I read the following, and several people agreed:

The priority of an incident must never be changed, once determined

For the record, here was my response:

Whether and how the priority should change is a policy issue for the organization. I am not aware of any “good practices” that say one way or the other. Some organizations allow the customer or user to provide the initial prioritization. The Service Desk should review the initial prioritization as a matter of good practice (and obvious necessity).

As Stephen suggested, and as described in ITIL 2011, the calculation of Priority will often be based on Urgency and Impact.
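To make that calculation concrete, here is a minimal sketch of the usual lookup-table approach, loosely in the spirit of the ITIL 2011 example. The three-level scales and the cell values are illustrative assumptions; every organization defines its own matrix.

```python
# A minimal sketch of an Urgency x Impact priority matrix.
# The scales and cell values below are illustrative assumptions,
# not the definitive ITIL mapping.

# Priority 1 = critical ... 5 = planning
PRIORITY_MATRIX = {
    ("high", "high"): 1,   ("high", "medium"): 2,   ("high", "low"): 3,
    ("medium", "high"): 2, ("medium", "medium"): 3, ("medium", "low"): 4,
    ("low", "high"): 3,    ("low", "medium"): 4,    ("low", "low"): 5,
}

def priority(urgency: str, impact: str) -> int:
    """Derive incident priority from urgency and impact."""
    return PRIORITY_MATRIX[(urgency, impact)]

print(priority("high", "medium"))  # -> 2
```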

If you enforced this policy in the tool, just imagine the consequences of a simple data entry error that wasn’t detected prior to saving. Fortunately, few organizations use this policy, and ITIL 2011 is even more liberal.

It should be noted that an incident’s priority may be dynamic — if circumstances change, or if an incident is not resolved within SLA target times, then the priority must be altered to reflect the new situation. Changes to priority that might occur throughout the management of an incident should be recorded in the incident record to provide an audit trail of why the priority was changed.

In my experience few organizations create an audit trail for changes to an incident’s priority (although some tools, such as FootPrints Service Core, track these changes in the History). As a general good practice I stand by my original comment.
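For illustration, here is a hypothetical sketch of what capturing that audit trail inside the incident record itself might look like. The field names and structure are my own invention, not any particular tool’s schema:

```python
# A hypothetical sketch of recording priority changes in the incident
# record itself, so the audit trail ITIL calls for exists even when a
# tool's built-in History is not suitable for reporting.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Incident:
    number: str
    priority: int
    history: list = field(default_factory=list)

    def reprioritize(self, new_priority: int, changed_by: str, reason: str) -> None:
        """Change priority and append an audit entry explaining why."""
        self.history.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "old": self.priority,
            "new": new_priority,
            "by": changed_by,
            "reason": reason,  # the "why" is what most tools fail to capture
        })
        self.priority = new_priority

inc = Incident("INC0001", priority=3)
inc.reprioritize(2, "service.desk", "SLA target breached; impact re-assessed")
```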

I will discuss the details of incident priorities in an upcoming post.

Process Before Tool (right)?

Tonight at the IT Service Management Fusion 12 conference I ran into an old colleague. It was nice to see him again. We worked together at an IT good practice consultancy, and like me he later moved on to a tool vendor.

This isn’t unusual. Most work in this industry involves the configuration or customization of tools to meet the specific needs of the organization. Consultants need to earn a living and that means going where the work is, or much of it anyway.

He and I still focus on good practices, but now from a different perspective. Operational excellence, consistently executed, usually requires defining the process at some level. Sometimes this definition is informal, in the heads of the stakeholders, and sometimes the process is defined more formally, using Visio diagrams and descriptions of process details and statements of policy.

In many organizations the sum total of the process is expressed in the configuration of the tool. This is not good practice and I don’t advise it, but it happens a lot.

Consultants in this space repeat “process before tool” ad nauseam. Another variation is “a fool with a tool is still a fool (or a faster fool).” At conferences and in presentations there is no shortage of this advice, and I expect to hear it repeated several times this week. Tweeting that will make me a faster fool too.

There are, however, some problems with this advice. A process defined completely in the abstract, devoid of any tool considerations, is unlikely to be useful. It will demand process steps that cannot be readily automated, or cannot be enforced through automation. Or it will demand complex configurations (or customizations) that make the tool brittle. It will ignore current-state processes implemented in the tool and try to supplant them with something foreign.

We almost never define services devoid of any tool considerations.

The definition and improvement of the services and processes go hand-in-hand with the implementation and configuration of the automation. The industry calls it Continual Service Improvement (CSI), and it is important to get this right sooner rather than later. CSI is internal to the organization and very organic. It is not a binder delivered by credentialed IT Service Management consultants or the tool vendors.

The automation of IT service delivery and process execution is underway. It has been for several years, and new tools are appearing to make this easier and better. Publicly-traded BMC Software acquired Numara Software in February 2012. ServiceNow went public in June 2012.

Not only will the trend continue, it will accelerate. In fact, I believe the “Continuous Highly Automated Service Management” organization will require integrated automation that is several orders of magnitude more effective than today. Crossing that chasm will take a lot of work from vendors and their customers, and we have some hard problems to solve.

And yes, it will be outside-in, as well as outside-out, inside-out, and inside-in. In short, it will be awesome, but we will develop this theme in more detail later.

Key takeaways:

  • Get Continual Service Improvement right first
  • Improve services and process together with the automation
  • Automation of services and processes will accelerate non-linearly and disruptively (a chasm)

Is IT Value Intrinsically Linked to Organizational Strategy?

Revenues of Inditex Group, the Spanish parent of the global Zara fashion store chain, grew from 756 million euros in 1994¹ to 13.8 billion euros in 2011, a compound annual growth rate (CAGR) of 19%. The number of stores rose from 424 to 5,044 over the same period (CAGR 16%). By comparison, the US apparel industry grew 1.9% in 2010 and the entire apparel industry grew 5.83% in 2011.
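For the curious, the quoted growth rates fall out of the standard CAGR formula applied over the 17 years from 1994 to 2011; a quick sketch:

```python
# Compound annual growth rate: (end / start) ** (1 / years) - 1

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"Revenue: {cagr(756e6, 13.8e9, 17):.0%}")  # ~19%
print(f"Stores:  {cagr(424, 5_044, 17):.0%}")     # ~16%
```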

Zara’s competitive success is the result of good strategy. In “Good Strategy, Bad Strategy: The Difference and Why It Matters,” author Richard P. Rumelt identifies three necessary components of the kernel, or core, of a good strategy:²

1. A diagnosis of the main competitive challenges,
2. Guiding policies that address the diagnosis, and
3. A coherent set of actions that implement the policies

Zara, once a low-cost manufacturer, correctly diagnosed that as the industry moved toward low-cost manufacturing centers in Asia, a shorter supply chain with design and manufacturing remaining in Spain could compete by 1) remaining close to customers and 2) rapidly responding to quickly changing fashion trends.

Zara designed a coherent set of actions across the company to adapt and respond to these insights. Designers create new designs in two weeks. Manufacturing remains in Spain. Designers work closely with manufacturing to ensure a new design can scale up production quickly with reasonable controls on cost. Store managers watch customer trends closely and enter orders nightly via handheld terminals. Orders ship daily regardless of the percent utilization of the vehicles (other retailers hold shipments until the truck is full). The entire system is designed to keep feedback loops as short as possible.

The company does not advertise in order to shape or explain customer value. Instead Zara listens and responds to changing customer preferences much more rapidly than competitors. Inventory is kept small, and discounting due to overstocks is the lowest in the industry. Where is IT in this story? IT is not central to the organization, and IT spend is lower than the rest of the industry. Information technology is used, for example, to optimize logistics routes to reduce shipping times and CO2 emissions. IT supports operational processes, as at most organizations, and, as at any public company, is used to manage risk. The technical environment is intentionally kept as simple as possible, and the number of applications is minimized in order to reduce costs and risks.

Zara is just one example of how information technology supports organizational strategy. It is particularly revealing because the organization actually has such a strategy, but it is not the only example. Wal-Mart Stores has been analyzed in great detail already, but one point worth examining is Wal-Mart’s use of bar-code scanners. The adoption of bar-code scanners is almost synonymous with Wal-Mart Stores, but the firm neither invented the technology nor was an early adopter. Kmart began adopting bar-code scanners at the same time as Wal-Mart in the early ’80s, and they were in use in grocery stores before that. However, Wal-Mart seemed to benefit more than anyone else. The firm integrated bar-code data into its logistics system faster than its competitors, and traded its bar-code data with suppliers in return for product discounts.³ The important point is that Wal-Mart’s use of bar-code data integrated with and supported the rest of its logistical system as part of an integrated and self-reinforcing design. Bar-code scanners were not one CIO’s pet project, tangential to the rest of the organization.

Organizations try to provide superior value to customers over a sustained period of time. This alone, however, is insufficient. The organization must also capture a significant portion of that value in a way that is difficult for competitors to imitate. As an internal Type I or Type II provider, the IT organization needs to support and enhance the strategy of the organization. Operational excellence (warranty) is at times necessary but is never sufficient. (Indeed, a sole focus on operational excellence is a race to the bottom, as all industry participants spend more to produce decreasingly differentiated products.) Utility as defined in ITIL is not, by itself, the goal. More utility is not better.

Enough utility to enable, support, and integrate with the organization’s competitive differentiators is what we seek to create. The CIO deserves a seat at the table. As both information and technology change, improve, increase, and differentiate, the need to have the CIO at the table will only increase, both to manage risks and to define and improve the organization’s strategy. In addition, IT will need to execute that strategy as part of a coherent and integrated set of actions across the organization.

1 Spanish pesetas converted at 166.386 ESP/EUR, the official rate fixed when the euro was introduced in 1999.
2 Bad strategy, by contrast, is not the absence of good strategy. Rather, bad strategies are fat analysis documents that fail to focus resources and actions, or performance targets that fail to diagnose the underlying competitive challenges to growth.
3 Good Strategy, Bad Strategy: The Difference and Why It Matters

All I Really Needed to Know About Titanic’s Deck Chairs I Learned From ITIL

Headlines have a tendency toward hyperbole to grab attention, and there is no shortage of headlines describing the dwindling role of the CIO. Here is a recent one: IT department ‘re-arranging deckchairs on the Titanic’ as execs bypass the CIO.

As reporting of technological change increases, the probability of comparing IT to re-arranging deck chairs on the Titanic approaches 100%.¹ – Tucker’s Law²

It is true: CIOs are challenged by technological change. The widespread adoption of cloud-based services by business units bypassing the traditional IT function is well documented. Support and adoption of consumer devices, including mobile phones, tablets, and non-standard operating systems such as Mac OS and Android, are also challenging traditional IT functional units.

Specific to the cloud example, some further examination is helpful. We don’t need to reinvent a framework, because ITIL 2011 already provides one. Service Strategy section 3.3 (p.80) describes three types of service providers:

  • Type I — Internal service provider. Embedded in an individual business unit. In-depth knowledge of the business function. Higher cost due to duplication.
  • Type II — Shared services unit. Sharing of resources across business units. More efficient utilization.
  • Type III — External service provider. Outsourced provider of services.

Furthermore, ITIL describes the movement from one type of service provider to another as follows.

Current challenges to the CIO role come from two directions:

  1. Change from Type II to Type I, or dis-aggregation of services to the business units.
  2. Change from Type II to Type III, or outsourcing of services (presumably) to cloud providers.

In fact the CIO may be seeing both at the same time, as traditional in-house applications are replaced with cloud services and the management of those services and the information supply chain is brought back to the business unit. The combination of those two trends could be called a value net reconfiguration, or, simply, re-arranging the deck chairs on the Titanic.

Is this a necessary and permanent shift? Maybe, but probably not. I personally believe that part of the impetus is simply to bypass organizational governance standards such as enterprise architecture and security policies. Business units can get away with this for a while, but as these services realize failures and downside risks, aggregated IT functions will have to take back control.

This does not mean the end of cloud adoption. Far from it. It means that the CIO will orchestrate the cloud providers in order to optimize performance and manage risk. The CIO is as necessary as ever, albeit with a different set of requirements.

Peter Kretzman has successfully argued that the reported demise of the CIO has it backwards: IT consumerization, the cloud, and the alleged death of the CIO.

Kretzman has also written about the dangers of uncoordinated fragmentation: IT entropy in reverse: ITSM and integrated software.


1 In 1990 Mike Godwin offered the observation that as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.0. As an experiment in memetics, it taught us a lot about psychological tendencies toward hyperbole. The observation is now termed Godwin’s Law of Nazi Analogies. http://en.wikipedia.org/wiki/Godwin%27s_law
2 Consulting firm Forrester has provided us enough ammunition for Tucker’s Law to stand as a corollary.

HP’s $10 billion SKMS

In August 2011 HP announced the acquisition of enterprise search firm, Autonomy, for $10 billion.

It is possible HP was just crazy and former CEO, Leo Apotheker, was desperate to juice up HP’s stock price. With Knowledge Management.

Within ITSM the potential value is huge. Value can be seen in tailored services and improved usage, faster resolution of Incidents, improved availability, faster on-boarding of new employees, and reduced turnover. (Ironically, improved access to knowledge can also reduce the knowledge lost through employee attrition.)

In 2011 Client X asked me for some background on Knowledge Management. I did prepare some background information on ITIL’s Knowledge Management that was never acted on. It seemed like too much work for too little benefit.

ITIL’s description does seem daunting. The process is riddled with abstractions like the Data → Information → Knowledge → Wisdom lifecycle. It elaborates on diverse sources of data such as issue and customer history, reporting, structured and unstructured databases, and IT processes and procedures. ITIL overwhelms one with integration points between the Service Desk system, the Known Error Database, the Configuration Management Database, and the Service Catalog. Finally, ITIL defines a whole new improvement cycle (Analysis, Strategy, Architecture, Share/Use, and Evaluate), a continuous improvement method distinct from the CSI 7-Step Method.

Is ITIL’s method realistic? Not really. It is unnecessarily complex. It focuses too much on architecture and integrating diverse data sources. It doesn’t focus enough on use-cases and quantifying value.

What are typical adoption barriers? Here are some:

  1. Data is stored in a variety of structured, semi-structured, and unstructured formats. Unlocking this data requires disparate methods and tools.
  2. Much of the data sits inside individual heads. Recording this requires time and effort.
  3. Publishing this data requires yet another tool or multiple tools.
  4. Rapid growth of data and complexity outpaces our ability to stay on top of it.
  5. Thinking about this requires way too much management bandwidth.

In retrospect, my approach with Client X was completely wrong. If I could, I would go back and change that conversation. What should I have done?

  1. Establish the potential benefits.
  2. Identify the most promising use cases.
  3. Quantify the value.
  4. Identify the low hanging fruit.
  5. Choose the most promising set of solutions to address the low hanging fruit and long-term growth potential.

What we need is a big, red button that says “Smartenize”. Maybe HP knew Autonomy was on to something. There is a lot of value in extracting knowledge from information, meaning from data. The rest of the world hasn’t caught up yet, but it will soon.

BMC and Numara: What it Means

Disclaimer: I contracted at Numara Software as an Implementation Consultant in Professional Services from July 2007 to June 2010.

Chris Dancy said it best: this is about as close to J-Lo and Marc Anthony as we get in the IT industry. On January 30, 2012 BMC Software announced the acquisition of Numara Software.

My initial reaction was shock—both have been stable mainstays of the industry. Shock gave way to disappointment. Disappointment soon gave way to cautious optimism about the future of the combined company.

Stephen Mann and David Johnson of Forrester fame have written some initial reactions. Here are mine.

Track-It

The Track-It family of products is the core of the original Blue Ocean Software, which was acquired by Intuit and later spun off as Numara Software.

The initial indicators are that BMC intends to allow Numara to operate as an independent unit and to continue Track-It as a standalone product. Track-It forms the low end of the product line but generates high-margin maintenance fees.

Track-It is profitable on its own and does not undercut sales of FootPrints Service Core or BMC Remedy. BMC may choose to scale back feature development, but it cannot make significant reductions in commitment or support without jeopardizing the highly repeatable and stable revenue stream of maintenance and support.

FootPrints Service Core

I joined Numara Software as a contractor shortly after the acquisition of FootPrints from UniPress and watched the company struggle to make the transition from high-volume transactions to high-touch solutions. It made this transition successfully, though it has become apparent in the last couple of releases that the code base is brittle. Version 12, to be released in 2012, will be a major refactoring of the code base to a new programming language. I expect development to continue along its current path and schedule unless the refactoring seriously jeopardizes backward compatibility, in which case product management should revisit the product line.

FootPrints Service Core competes more directly with BMC Remedy, and I have been engaged at several customers where FootPrints replaced Remedy or beat Remedy in a competitive comparison. FootPrints provides easy configuration and rapid ROI, yet is flexible enough to support several business processes. Although there are built-in workspace templates, they aren’t very useful, and customers usually start from scratch. What would be customizations in Remedy are web-based configurations in FootPrints.

Nevertheless, while there is competitive overlap, FootPrints usually sells at a lower price point in smaller environments. The question is whether BMC’s sales force is up for the transition to the lighter-touch customer service and reduced professional services requirements of FootPrints customers. It depends on whether BMC sales staff consider this a threat of reduced revenues or an opportunity to retain valid sales leads where Remedy isn’t competitive. I suspect Remedy has been squeezed by competition (primarily Service-Now and Hornbill) and will welcome a competitive solution that can be sold on-premise or SaaS.

FootPrints Asset Core

FootPrints Asset Core (formerly Numara Asset Management Platform, or NAMP) has always been an enterprise product designed for stability and scalability. It is a product based on client agents that provide hardware and software inventory, software deployment, patching, and policy deployment for Windows, Mac, and Linux devices.

Asset Core competes more directly with BMC’s BladeLogic Client Automation, and BMC will need to pay more attention to how it positions these product lines. Asset Core is weak at automated discovery and agentless inventory, which makes it complementary to Atrium Discovery & Dependency Mapping (ADDM) in the mid-market. I anticipate BMC will strip some functionality and complexity out of Asset Core and keep it focused on the mid-market, leaving BladeLogic in the enterprise.

Mobile Device Manager

Numara Cloud is a repackaging of FootPrints Service Core, FootPrints Asset Core, and a new product called Mobile Device Manager (MDM), which Numara acquired in 2011. The strategic positioning was brilliant and the growth potential is huge. BMC does not offer much in this area, and this addition should be welcomed in its product line.

Conclusions

Numara has traditionally been weak in several areas.

  • Mobile computing: no solution here, nor even a hint of future product development. BMC could capitalize on this with mobile solutions that integrate across the spectrum of IT Service Management and asset management products.
  • Social networking and chat integration: While FootPrints provides web-based messaging capability, the functionality is slow and dismal and it provides no workflow or issue integration. FootPrints provides no integration or API for social networks.
  • Configuration Management and Service Catalog: the initial release of the CMDB functionality was promising, but the company has failed to improve on it. Reporting, data federation, and data reconciliation functions are very poor. Product management has mostly focused on integration with Asset Core.


I would look for changes and improvements in these areas. BMC would be wise to focus product management here in order to capitalize on the relative strengths of both organizations.

For existing or prospective customers of both organizations, I don’t expect much to change in 2012. For many organizations I don’t expect much to change through 2015 beyond “normal” product feature evolution that would have occurred anyway.

The reactions of my current and former colleagues inside Numara have been very positive. If BMC is planning major reductions in force (RIFs), it has been very quiet about it. I don’t expect many RIFs beyond back-office staff, where Numara has already been very efficient. I don’t expect many reductions in development, product management, or sales because the products are either high-margin and non-strategic (Track-It) or strategic and complementary (MDM).

I am rushing this post in order to get out some initial reactions. Overall I believe the two companies provide complementary product coverage that, if utilized and coordinated, could provide a lot of future growth for customers and BMC. I will keep an eye on things and let you know what transpires, but please feel free to provide feedback and updates.

OBASHI and the CMDB

In September 2010 APMG unveiled its new OBASHI Certification, based on the OBASHI methodology developed in 2001 in the oil & gas industry. I won’t go into detail here, but there is at least one book available from APMG-Business Books, though apparently not on Amazon, and certainly not in Kindle format. There is also an OBASHI Explained white paper. Confession: I haven’t yet read the book.

This is just a first impression, and it was this: OBASHI looks a lot like the CMDB analysis I have done several times in the past. Here is a CMDB framework I have commonly seen used in industry.

At the top are the products that your company offers to its customers. Those products are provided by Users (of services provided by IT), which may be individual users, departments, divisions, teams, etc. The Users use services, which may be provided by other services or by applications. Applications run on servers and computers, which are connected by networks.

That sounds obvious, but I have found some people find it a bit abstract until they start laying out practical examples. The important thing to remember is that the objects (rectangles on a diagram) are connected by arrows. In CMDB parlance, the objects are Configuration Items (CIs) and the arrows are relationships. I typically label the diagrams.
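To make the model concrete, here is a minimal sketch of CIs and labeled relationships expressed as plain data. The CI types and relationship labels are illustrative assumptions, not a standard:

```python
# A minimal sketch of the CI-and-relationship model described above.
# CI types and relationship labels are illustrative, not a standard.

from dataclasses import dataclass

@dataclass(frozen=True)
class CI:
    name: str
    ci_type: str  # e.g. "Product", "User", "Service", "Application", "Server"

@dataclass(frozen=True)
class Relationship:
    source: CI
    target: CI
    label: str  # the labeled arrow on the diagram

portal = CI("Customer Portal", "Service")
app = CI("PortalApp", "Application")
server = CI("web01", "Server")

cmdb = [
    Relationship(portal, app, "is provided by"),
    Relationship(app, server, "runs on"),
]

for r in cmdb:
    print(f"{r.source.name} --{r.label}--> {r.target.name}")
```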

The OBASHI framework seems to use the same concepts. When modeling a CMDB I usually allowed the customer a little more flexibility in CI Types and Relationships, depending on their requirements. OBASHI seems a little more rigid in its use of components and data flows.

At first I wondered what the purpose of OBASHI was; after further thought, I like it. Although it describes data flows, it is easy to envision it describing other flows, such as process flows. It is a framework for analysis that effectively communicates complex systems. It doesn’t require the full implementation of an expensive CMDB to achieve its benefits, and the information collected can readily be reused in a later CMDB implementation.

The Problem of Revealed Preferences

Consumers express their rational interests through their behaviors. Economists call these revealed preferences.

IT Service Management trainers and consultants tell other companies how they should run their businesses, based on industry best practice frameworks. We seldom examine revealed preferences, but perhaps we should.

One of my first engagements as an ITSM consultant involved the planning and implementation of a Problem Management process at an organization that had committed to widespread ITIL adoption. For several years afterward I was an acolyte of Problem Management.

I have implemented Problem Management at a dozen customers and tried to sell it to even more. Among the latter, most resisted. The excuses often included “we are too busy putting out fires”, “we aren’t ready for that”, and “that’s not our culture”. Perhaps I wasn’t selling it well enough.

Most organizations do ad hoc Problem Management, but few have defined processes. Their reasons probably have some legitimacy.

Organizations do need to be in control of their Incident Management process before they are ready for Problem Management. They do need to be in control of putting out fires and fulfilling service requests. Even organizations under control find themselves with a backlog, and that is all right. A reporting history is also a prerequisite.

Organizations must also be in control of their Changes. The resolution of known errors takes place through Change Management, and organizations in control of their Changes are better able to prioritize the resolution of their known errors. At most organizations, in any case, an effective Change Management process is more beneficial than Problem Management.

I usually told customers they were ready for Problem Management only if they could devote, at a minimum, one-quarter to one-half of an FTE to the Problem Manager role, and this person would need to have a good overview of the architecture or be well connected throughout the organization.

In other words, the Problem Manager should be reasonably senior and independent of the Service Desk Manager. Without this the Problems will be starved of resources. Someone needs to liaise with senior management, ascertain business pain points, prioritize tasks, acquire resources, and hold people accountable. In other words, Problem Management requires commitment at senior levels, and it isn’t clear all organizations have this. Many don’t.

There is another reason that is more important. Organizations that are focused on executing strategic projects won’t have the resources to execute Problem Management processes consistently. There are several ways this can occur. In one instance the organization had acquired another and had dozens of projects to deliver on a tight deadline. In another, the reverse was occurring, as the organization built functions to replace those provided by a former parent company. In yet another, the organization simply put a lot of focus on executing projects in support of its new strategy.

Some organizations plainly require Problem Management processes. I have seen more rigorous adherence among organizations that provide IT services to other technical organizations, such as data center outsourcers, hosting providers, or firms providing services such as video distribution or mapping to telcos. When services are interrupted, the customers demand root cause analyses.

But it is probably true that many organizations don’t need or aren’t ready for Problem Management. Problem Management is an investment, and like all investments it should deliver returns that exceed not only its own costs but also the benefits of alternative efforts the organization might undertake. So it has been revealed.