The varied responses to this question are interesting

The responses fall into several camps:
1. This guy is incompetent and should be fired (fairness seeking).
2. This guy is incompetent and will be fired anyway (realism).
3. This guy is incompetent and you should help him to acquire the skills he needs for his job (compassion).
4. Talk to your manager (practical).
5. It depends on the organization and culture (consultant speak).

In general the correct answer is to address any concerns with your immediate manager, while being willing and able to offer suggestions should he/she request them.

Incompetent new employee-should I advise HR/ethics? – Best Practices – Spiceworks

https://community.spiceworks.com/topic/1976832-incompetent-new-employee-should-i-advise-hr-ethics

Bill Gates and Steve Jobs agreed on little

But both agreed that healthcare was ripe for disruption. That is still true, but the pace is slower than we envisioned a decade ago.

One reason is the high cost of certification. Consumer-grade equipment produces interesting data for casual self-analysis. Producing data to be used in medical diagnoses requires greater confidence in the accuracy of the data and the consistency of the devices used to produce it. In the case of robotics, makers have to demonstrate in clinical trials that the equipment is safer and produces tangibly better outcomes.

Another reason for the slow pace of disruption is maintaining the confidentiality of patient data. Device makers collect and store patient data, but they need mechanisms to authorize and interface with medical providers on behalf of patients. Extending the value chain requires complex protocols and interfaces, while there is little incentive for any single party to develop them.

These are some random musings on research I performed a few years ago. If I have overlooked any recent developments, please feel free to leave feedback.

A robotic revolution in healthcare – BBC News

http://www.bbc.com/news/uk-scotland-39330441

Analytic Hierarchy Process

AHP Background

One model I want to explore today is the Analytic Hierarchy Process (AHP), developed by Thomas L. Saaty in the 1970s as a tool for choosing among options using a set of weighted criteria.

For example, we may choose a software package on the basis of criteria such as supported features or functions, scalability, quality (fitness for purpose, fitness for use), security, availability, and disaster recovery. AHP provides a mechanism for weighting the criteria by interviewing several staff members for pairwise assessments of relative importance, which are then transformed into relative weightings using an eigenvector calculation.
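To make the eigenvector step concrete, here is a minimal Python sketch. The criteria and comparison values are illustrative assumptions of mine, not figures from Saaty or from any of the papers discussed below.

```python
# Minimal AHP weighting sketch (illustrative comparison values only).
import numpy as np

# Pairwise comparison matrix for three hypothetical criteria
# (features, security, scalability) on Saaty's 1-9 scale.
# A[i][j] = how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The relative weights are the principal eigenvector, normalized to sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(A)
principal = eigenvectors[:, np.argmax(eigenvalues.real)].real
weights = principal / principal.sum()
print(dict(zip(["features", "security", "scalability"], weights.round(3))))

# Saaty's consistency ratio flags sets of judgments that contradict one another.
n = A.shape[0]
consistency_index = (eigenvalues.real.max() - n) / (n - 1)
consistency_ratio = consistency_index / 0.58  # 0.58 = Saaty's random index for n = 3
print("consistency ratio:", round(consistency_ratio, 3))  # rule of thumb: keep below 0.10
```

In practice each interviewed stakeholder supplies a comparison matrix, and the individual judgments are aggregated (for example, by geometric mean) before the weights are computed.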

The idea of using multiple criteria to assess multiple options is not new. AHP enhances it by weighting the assessment criteria using feedback from multiple stakeholders with conflicting agendas. Rather than determining a “correct” answer, it identifies the answer most consistent with the organization’s understanding of the problem.

Other use cases can include project portfolio optimization, vendor selection, plant location, hiring, and risk assessment. More information can be found at the International Journal of the Analytic Hierarchy Process (free registration).

Simple AHP hierarchy with associated default priorities.

Applications in ITSM

In the field of ITSM there are several papers that describe instances in which AHP was used.

The paper “EDITOR-IN-CHIEF ENRIQUE MU USES AHP TO HELP CITY OF PITTSBURGH MOVE TO THE CLOUD” (free registration) briefly discusses Professor Enrique Mu’s application of AHP to the City of Pittsburgh’s effort to migrate IT functions to cloud providers. The decision period spanned several months, and the decision was considered strategic for the city.

Another paper, “The critical factors of success for information service industry in developing international market: Using analytic hierarchy process (AHP) approach” (paywall), discusses the use of AHP for analyzing critical success factors (CSFs) in international market diversification for information service providers in Taiwan. The authors interviewed 22 participants (CEOs, experts, consultants) to generate pairwise comparisons of CSFs, which the AHP method distilled into factor weightings. These factor weightings could then be used by individual information service providers to determine whether or not they should consider entering specific markets.

In “A Method to Select IT Service Management Processes for Improvement” (free access to PDF), professors from the School of Management & Enterprise at the University of Southern Queensland used AHP as part of a method for ranking ISO/IEC 20000 process priorities for improvement. This particular paper is worth exploring in much greater detail because, in my experience, the prioritization of process or service improvement initiatives can be very painful at organizations, particularly those with multiple influential stakeholders with incompatible or conflicting requirements.

Last but not least, in “DECISION SUPPORT IN IT SERVICE MANAGEMENT: APPLYING AHP METHODOLOGY TO THE ITIL INCIDENT MANAGEMENT PROCESS” (free registration), professors at the FH JOANNEUM University of Applied Sciences in Graz, Austria, discuss the use of AHP in prioritizing Incidents. In their specific implementation they used four decision criteria to prioritize Incidents (a simple scoring sketch follows the list):

  1. Number of customers affected
  2. Whether “important” customers are affected
  3. Penalty or cost of the outage
  4. Remaining time until violation of the service level target
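To make the weighted-criteria idea concrete, here is a minimal Python sketch. The weights and normalization scales are my own illustrative assumptions, not values taken from the paper, which derives its weights from AHP pairwise comparisons.

```python
# Illustrative Incident scoring with AHP-style criterion weights.
# The weights and value scales below are assumptions for illustration only.

WEIGHTS = {
    "customers_affected": 0.30,
    "important_customers": 0.25,
    "outage_cost": 0.25,
    "time_to_sla_breach": 0.20,
}

def priority_score(customers_affected: int, important_customers: bool,
                   outage_cost: float, hours_to_breach: float) -> float:
    """Return a 0-1 score; higher means the Incident should be handled sooner."""
    # Normalize each criterion onto a 0-1 scale (crude, illustrative normalizations).
    scores = {
        "customers_affected": min(customers_affected / 1000.0, 1.0),
        "important_customers": 1.0 if important_customers else 0.0,
        "outage_cost": min(outage_cost / 100_000.0, 1.0),
        # Less time remaining before the service level target is violated -> more urgent.
        "time_to_sla_breach": max(0.0, 1.0 - hours_to_breach / 24.0),
    }
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

print(priority_score(250, True, 20_000, 2))   # widespread outage, important customers, near breach
print(priority_score(3, False, 500.0, 20))    # routine break-fix
```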

IT organizations typically use simplified “rules of thumb” for prioritizing Incidents based on Impact and Urgency. Notably, three of these four factors typically appear in variants of that schema. Please see my discussion in Incident prioritization in detail (which also avoids the explicit use of SLAs in evaluating Incident resolution).

I don’t find the prioritization of Incidents to be a particularly strong candidate for AHP analysis. High-priority Incidents are relatively rare and are generally handled one at a time or by non-overlapping resources. Lower-priority Incidents (routine break-fixes for the Service Desk) can be handled first-come, first-served or using the relatively crude but quick methods described in ITIL.

Prioritization of Problems seems a more suitable candidate for AHP because a) Problem resolution can require days or weeks, b) multiple Problems may be active and contending for resources, and c) the resolution of a Problem can deliver much greater long-term financial impact for organizations. The principles and underlying support system would be similar.

Other uses of AHP that merit further investigation include:

  • Prioritization of service and process improvement initiatives
  • Selection of ITSSM tools
  • Selection of vendors (in support of the Supplier Management function / process of Service Design) and/or cloud providers
  • Activity prioritization for shared resources in multi-queuing environments (leveling of activities across multiple processes and/or projects)


Models in Service Management

For the time being I am focusing my attention on the use of models in service management (more here). Models are useful because they help us understand correlations, understand system dynamics, make predictions, and test the effects of changes.

There are few, if any, models in regular use in service management. There may be valid reasons for this. There are few, if any, good models in regular use in business (there are many bad models, and many more that are fragile and not applicable outside narrow domains).

ITIL1 2011 does make use of models where appropriate. The service provider model is one such example; it helps us understand the nature of industry changes.

More are needed, and I am making a few assumptions here:

  1. There are robust models in use in other domains that can be applied to the management of IT operations and services
  2. These haven’t been adopted because we are unaware of them, or
  3. The conditions in which these models can be applied haven’t been explored.

It is time for us to explore other models that may be applicable and useful.

Oh, and Happy New Year! I wish everyone a happy and prosperous 2017.

1 ITIL is a registered trademark of AXELOS.

DevOps Changes Everything

…and DevOps changes nothing.

I was holding off writing about DevOps because:

  1. I don’t work with customers who implemented it successfully.
  2. I have nothing new to offer.

Both points are still true. There isn’t anything new to say about DevOps except that the hype machine is still in overdrive and the loop machine is wearing out.

DevOps is an improvement to Release and Deployment Management. There’s no conceptual abstraction on top that changes the way we think about releases and deployments. It arose in response to new layers of technical abstraction that enabled new capabilities: server virtualization (VMware), the availability of automated deployment tools (Chef, Puppet), and the rise of containers (Docker) with their supporting programming interfaces and orchestration tools.

Together they allow us to make an order-of-magnitude improvement in the performance of the Release and Deployment Management and Service Validation and Testing processes. This is a very good thing, because it drastically decreases the cost of software deployment and enables more rapid experimentation when addressing new customers or improving services for existing customers. Moreover, developers can do this with tools (programming languages) familiar to them. This is also a good thing.

However, organizations adopting DevOps practices must understand the requirements of various stakeholders and minimize barriers of communication across silos: see Can We Stop Talking About DevOps?. Moreover, these organizations have to be in control of how applications are released (Release and Deployment Management), and you still have to know what you are automating (Service Asset and Configuration Management).

People. Process. Technology. In that order. Nothing has changed.

Without them, DevOps will fail, except in limited instances or for newly developed (or recently deployed) services that receive abundant management attention and are not encumbered by legacy configurations.

In the long run the automated deployment of production environments has a bright future, as even more functions of IT are virtualized (networking, storage). In a sense, DevOps will have won; it will be pervasive. We just won’t call it DevOps any longer. We will go back to calling it Service Management.

IT Isn’t Broken

It’s just a hell of a lot harder than we are given credit for.

Enter FiveThirtyEight Science on Science Isn’t Broken (“It’s just a hell of a lot harder than we give it credit for”). Science is messy by design. Scientific methods are adversarial. Scientists are competitive for mind share as well as funding. Science pushes into the unknown and forces us to recognize uncomfortable new truths.

When thinking about this,1 it occurred to me that traditional modes of managing IT are failing us. That isn’t necessarily the fault of IT departments or their supporting frameworks, or not entirely. Over the last decade I have worked with over 200 IT organizations in a variety of industries and geographies. Some IT departments are better than others, but overall IT staff have become cooperative, thoughtful, and motivated to fulfill the needs of their stakeholders. This is a major shift in mindset from when we regularly disparaged “users” as “lusers”, “PEBKACs”, or “identity (or ID10T) errors”.2

All is not well;
I doubt some foul play; would the night were come!
– Hamlet (1.2.254), Hamlet, alone on the platform.

This is not to say all is well in IT Service Management or ITIL®.3

  • Interest in ITIL® certifications is flat, and declining in most regions except Europe and Asia, the latter primarily meaning India. ITIL is largely irrelevant in the rest of the world, or rapidly becoming so.
  • None of the frameworks built in parallel with ITIL®, including ISO/IEC 20000 and COBIT5, have gained any traction.
  • Best practices and good practices are, by definition, past practices. The framework is ineffective in complex environments in which cause and effect relationships are obvious only in retrospect, and in which emergent behaviors are unpredictable.
  • The ITIL® framework, itself long in the tooth, was last updated in 2011 with no refresh in sight.
  • AXELOS and ISACA have increasingly turned their attention to information security with their RESILIA™ framework and Cybersecurity Nexus portal, respectively. This is a natural extension for ISACA and a slight departure for AXELOS.

  • IT is so hard in part because it is complex. In a few short years the industry has transitioned from traditional servers to virtual servers, and increasingly to containers. Container orchestration is improving just as hybrid containers and hyperconverged infrastructure standards are appearing. IT services are increasingly delivered via a combination of in-house and cloud vendors, each of which operates with different standards and APIs. Meanwhile, security attacks are becoming increasingly sophisticated as the attackers become professionals.

    Progress is arriving from outside of IT operations, in the form of Lean and Agile practices, and codified (more or less) under the umbrella of DevOps (Let’s Not Paper Over the Differences), many of which specifically reject the bureaucracy erected by the process focus of traditional frameworks.4

    What will emerge in two years is anyone’s guess. If your head isn’t spinning, you aren’t paying attention. IT departments are paying attention, even if they do not have the tools for managing it. What they don’t need is 4 more processes in the next revision of ITIL® or more control points in COBIT5.

    One thing we do need is a framework for managing complexity. The Cynefin Framework shows promise for helping to manage the trade-offs between discovering emergent behaviors and exploiting them. Cynefin is not a panacea, but once the nature of complexity is understood, it follows that panaceas only exist in ancient mythologies. IT departments, meanwhile, will continue muddling along, hobbled but not broken.



    1. Science is an imperfect metaphor for IT Service Management because IT should be cooperative, not competitive. However, IT does function in economic environments that are, by nature, competitive. IT relies heavily on vendors whose interests are not necessarily aligned with those of their customers.
    2. The fact that most people stopped referring to them as “users” is major progress, but perhaps this is because IT has stopped being the tail that wags the dog — and become the dog. 
    3. ITIL and RESILIA are registered trademarks of AXELOS Limited. 
    4. Although ITSM has shifted its focus from processes to services, the latter are largely misunderstood, even by IT. Stakeholders outside of IT are largely uninterested in Service Catalogs.

    Change Types

    Introduction

    Changes to an IT production environment come in a variety of shapes and sizes. Industry practices have converged on a set of Change Types that have served us well for several years as a starting point for discussion. However, they are insufficient and require further elaboration. Historically, these Change Types are:

    • Standard: a pre-authorized Change that is low risk, relatively common, and follows a procedure or work instruction.
    • Emergency: a Change that must be implemented as soon as possible, for example, to resolve a major Incident or implement a security patch.
    • Normal: any service Change that is not a Standard or Emergency Change.

    In this post I elaborate on and refine the above definitions. In addition I propose two new Change Types: Escalated and Latent.

    Change Types

    In the diagram above we see the emphasis and utility of the Change Types across two axes: the source of the Change (external vs. internal) and the level of risk to the service provider and the users of the service. The ultimate source of a Change can vary widely. External sources include regulators, vendors, partners, and customers, who may request specific functions or may make changes that affect the service provider. Internal sources include Incidents, component upgrades, and code refactoring. Somewhere in the middle are patches, COTS software upgrades, and the like. We would see a similar diagram if we replaced source with urgency.

    Assessing the risk of a Change is another discussion, and I also set aside the broader question of what constitutes a Change. These definitions are organization-specific and are the subject of a later post.

    Normal Change

    Normal is the default mode for Changes. Unless otherwise specified, Changes are Normal.

    Normal Change is also the place to start when defining or improving the Change Management process. Other Change types are variations of the Normal Change process.

    Emergency Change

    Sometimes Incident resolution requires a Change. This may include restarting a component of a service, applying a patch or update, or modifying configuration files. In order to facilitate prompt resolution of Incidents, we define an Emergency Change process that is faster than that for Normal Changes.

    Emergency Changes are created in order to resolve an Incident. The Emergency Change record should be linked to the Incident record. Emergency Changes are distinct from Escalated Changes, which are not created in response to an active Incident.

    There is no standard Emergency Change process. The authorization of Emergency Changes will be different in each organization, but it should be faster and simpler than Normal Change authorization. Emergency Change authorization may require only the approval of the functional supervisor or manager, the service owner, or even a peer review by another team member.

    The pool of eligible approvers should be flexible. Emergencies sometimes occur in the middle of the night or on a weekend, when staff availability is limited.

    Emergency Change approvals may also be given verbally. In this case the Emergency Change record may not even exist until after the Change is approved and implemented. The Change should always be recorded after the fact, typically when the Incident record is updated, and should include who gave the approval. Emergency Changes that are verbally approved may be treated as Normal Changes after the fact, in order to reflect on what occurred, whether the actions were appropriate, and whether any follow-up actions are required to monitor, modify, or roll back the Emergency Change.

    Standard Change

    In order to avoid clogging up the Normal Change management queue with high-volume, low-risk Changes, organizations may pre-approve specific classes of Changes that may be implemented, at will, within the constraints specified by the approval. Such Changes are called Standard Changes.

    Constraints on Standard Changes may include limitations on the scope of the Change, the specific systems or services on which the Change may or may not be implemented, or time-based windows for the Changes, to name just a few.

    A specific class of a Standard Change should be approved as a Normal Change. Thereafter it may be implemented as required, with only logging of the Change. In the ITSM tool this may be implemented as a “Quick Template” in order to pre-fill much or most of the Change record data.

    Organizations may refer to Standard Changes as Preapproved Changes, in order to eliminate ambiguity between the terms “Standard” and “Normal”.

    Escalated Change

    A Change that must be approved or implemented outside of the Normal Change process or window, but is not an Emergency Change, is an Escalated Change.

    A typical example of an Escalated Change is one that is identified on Thursday and must be implemented the following day, Friday, prior to the next CAB meeting.

    Escalated Changes can arrive from a variety of sources:

    • External sources such as regulators or auditors,
    • New requirements generated in response to Latent Changes by vendors, customers, or partners,
    • New requirements generated in response to a recently implemented Change,
    • Internal organizational or political changes.

    The Escalated Change approval process may be the same as that for an Emergency Change, or it may be a variation of either the Emergency Change or Normal Changes approval process.

    Because a preponderance of Escalated Changes reduces the overall efficiency of an organization’s Change and Release Management processes and incurs a higher number of Incidents, it is important to track and log them separately from other Change Types. Organizations frequently do so in order to discourage their excessive or unnecessary use.

    Latent Change

    Latent Changes are Changes to Services or components that were, or will be, implemented without any action by the service provider, but which may impact the provider’s Services.

    The Latent Change type is growing in importance along with the increased reliance on Software as a Service and managed service providers. These providers may make changes to their applications or infrastructure that affect multiple customers. An individual customer may record such a Change as a Latent Change.

    A Latent Change may also result from an automated action, such as a patch applied by an ITAM system, or a server restart initiated by an Event Monitoring system.

    Finally, a Latent Change may also be an unauthorized Change detected automatically by an ITAM system or manually by a system administrator.

    In all cases the Latent Change is recorded and examined, but is not approved. A Latent Change may result in another Change that will roll back, repair, or replace the Latent Change.

    It is important to log Latent Changes because they happen, and because they affect Services.
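To summarize the taxonomy above, here is a minimal, hypothetical sketch of how the five Change Types might be represented and selected in an ITSM tool. The flags and their precedence are my own illustrative assumptions, not a standard, and a real implementation would add risk assessment and approval routing.

```python
# Hypothetical sketch of the five Change Types and a simple routing rule.
# The flags and their precedence are illustrative assumptions only.
from enum import Enum

class ChangeType(Enum):
    NORMAL = "Normal"
    STANDARD = "Standard"
    EMERGENCY = "Emergency"
    ESCALATED = "Escalated"
    LATENT = "Latent"

def classify_change(linked_to_active_incident: bool,
                    preapproved_class: bool,
                    implemented_without_provider_action: bool,
                    needs_out_of_cycle_approval: bool) -> ChangeType:
    """Pick a Change Type; Normal is the default when nothing else applies."""
    if implemented_without_provider_action:
        return ChangeType.LATENT       # e.g. SaaS vendor change, automated patch, detected unauthorized change
    if linked_to_active_incident:
        return ChangeType.EMERGENCY    # created to resolve an active Incident
    if preapproved_class:
        return ChangeType.STANDARD     # low-risk, pre-authorized class of Change
    if needs_out_of_cycle_approval:
        return ChangeType.ESCALATED    # urgent, but not Incident-driven
    return ChangeType.NORMAL

# A Change identified on Thursday that must be implemented before the next CAB meeting:
print(classify_change(False, False, False, True))  # ChangeType.ESCALATED
```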


    Rebooting ITSMinfo

    Wow, 2016 is upon us already. Sadly, 2015 passed me by without a single post on ITSMinfo.

    To some extent I was just busy. As an independent consultant I am paid by the hour and I have to watch billable hours closely. Fortunately, I stayed busy in 2015.

    I also exercised regularly, running and lifting weights at the gym, and I slept regularly, as much as possible, given my global coverage (living in Japan while working North American hours).

    To some extent my lack of productivity on the ITSMinfo blog was simple laziness and poor time management. I didn’t track blog ideas as they came to me. I didn’t follow up on ideas I already had. I am resolved to fix these habits this year.

    I do expect to increase output on the ITSMinfo blog to at least monthly. I want posts that are useful to the practitioner community. I will follow recent trends and updates from practitioners, consultants, and academics. I hope to contribute new ideas to the ITSM community. I have updated my WordPress theme to the latest refresh.

    I will not deliver:

    • Content dumbed down to the fifth grade level. I assume that people in this community are smart. If we want to move this industry forward, we must have frank, adult discussions.
    • How to implement XYZ process in 5 simple steps.
    • Personal rants or ravings on non-IT topics. You can also find personal rants on my 2G16 blog.

    As always I want this blog to be part of a conversation. Sometimes I am wrong. Okay, more than sometimes. I am happy to be corrected, even insulted. This conversation will extend into related forums on Twitter, LinkedIn, Facebook, and Google Plus. On this blog I have updated the commenting system to use Disqus, which I hope will simplify usage.


    The Secret of Change Management

    The secret is there isn’t one secret.

    There is no single aspect of Change Management that makes it successful. There is no right or wrong way to design your Change Management process.

    I have worked with two dozen customers on Change Management, and I have found few consistent threads. Every organization is different.

    It’s important that:

    • You have a process
    • You define changes (more on this later)
    • You review and improve the process periodically

    You also need to define “change” in a way that is appropriate to your organization. I once worked for an outsourced data center provider that required a Change for access to the data center: even one-time access was a Change. This is an extreme example, but it clarifies the point.

    A weekly meeting of the Change Advisory Board (CAB) is not required. Half the customers I worked with never defined any group resembling a CAB. It can work well for some organizations, but most organizations are better off without one.

    Accountability for implementing unauthorized changes is also important. Most companies build “Unauthorized” or “Out of Process” Changes into the process. One customer called them “Poorly Planned Changes”, and the CIO had to approve them. The rate of such changes dropped significantly.

    Otherwise the standard advice applies.

    • Define your KPIs. Identify your performance metrics.
    • Assign roles that are appropriate to your organization.
    • Automate approvals and notifications where possible.
    • Use “Standard” (pre-approved) changes in order to reduce the volume of management approvals.

    Validating Configuration Management Uses

    Two weeks ago I discussed how the value of Configuration Management activities is derived from the improvements made to other processes.

    I want to suggest another way this can happen, borrowing a concept that has recently started to surface in Project Management communities. That concept, called PRUB, was introduced last year by a professor from New Zealand, Dr. Phil Driver, and published this year in the book “Validating Strategies: Linking Projects and Results to Uses and Benefits”. As a tool it is simple and somewhat intuitive, and it provides a framework for eliminating disconnects between the high-level desires of management and the practical day-to-day realities of operations. It helps us ensure there is a clear and understood path from the project charter through the deliverables to uses and benefits.

    PRUB is an acronym for Projects, Results, Uses, and Benefits. Originally intended to map the projects that support a particular strategic direction, it is simple and flexible enough that we can map it to improvement initiatives in IT Service Management. In this case I am starting to use it as a tool for validating the uses associated with Configuration Management implementations, because it can help eliminate some of the shortcomings common to attempts to improve Configuration Management activities.

    I also wanted to step through a specific case study from a recent implementation with a government entity. I should note that we did not perform a formal PRUB analysis for this set of CMDB use cases, but I mention it because their requirements clearly reflected each step of a PRUB analysis. They built some novel functionality into their CMDB implementation that I really liked.

    I wrote up this and another example in the attachment: PRUB for CMDB. It contains a use case for Configuration Management with the Access Management process that was based loosely on this government entity.

    PRUB_for_CMDB_drawing


    • Project / Process: Configure the CMDB to improve Access Management (this is a novel usage).
    • Results: Create a Permission Configuration Item for each authorized access request to a system. Link the CI to the access request ticket. Create a relationship between the Permission CI and the system CI. In addition, import historical Permissions (granted prior to system implementation) into the CMDB.
    • Uses: When visualizing planned changes to a CI, the relationships show who has been granted access. The CIs can be reported on for scheduled reviews or audits of valid accesses. The specific access history can be pulled up from each Permission CI (based on the Access Request ticket). More broadly, this use of the CMDB provides a more flexible mechanism to visualize and review access requests.
    • Benefits: Improved assessment of the impact of planned changes. Improved reporting, controls, and auditability for CIs. (A data-model sketch follows this list.)
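Below is a minimal sketch of the data model implied by the Results and Uses above. The class and field names are my own illustrative assumptions; they do not reflect the entity's actual CMDB schema.

```python
# Hypothetical sketch of the Permission CI model described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConfigurationItem:
    ci_id: str
    name: str

@dataclass
class PermissionCI(ConfigurationItem):
    grantee: str = ""
    access_request_ticket: str = ""   # link back to the approved access request
    target_system_ids: List[str] = field(default_factory=list)  # related system CIs

def who_has_access(system: ConfigurationItem, permissions: List[PermissionCI]) -> List[str]:
    """Support impact assessment and audits: list grantees related to a system CI."""
    return [p.grantee for p in permissions if system.ci_id in p.target_system_ids]

erp_db = ConfigurationItem("CI-100", "ERP production database")
permissions = [
    PermissionCI("CI-201", "perm-jdoe-erp", grantee="jdoe",
                 access_request_ticket="REQ-4521", target_system_ids=["CI-100"]),
]
print(who_has_access(erp_db, permissions))  # ['jdoe']
```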

    There are other relationships associated with these Permission CIs, but I just wanted to provide a flavor of how the PRUB tool can demonstrate specific uses and benefits of the Configuration Management process.

    Are the benefits measurable? If you can identify measurable benefits you should include them in your analysis. If you come across a CMDB implementation that does not clearly map out the results, how those results will be used, and how those uses will bring benefits to the users or the organization, then that should be a red flag.