
Posts

Reasoning for Problem Management

When it comes to Problem Management, two things should come to mind: Root Cause Analysis (RCA) and finding a permanent resolution. How often have you thought about what it takes to conduct these aspects of Problem Management? An important underlying aspect of conducting a Root Cause Analysis and finding the permanent resolution is the reasoning approach used. Three basic reasoning approaches are:

Inductive: reasoning from specific examples to general rules
Deductive: reasoning from general rules to specific examples
Abductive: reasoning to the most likely answer

Each has its own uses and can be applied to problem solving and Problem Management at different times and for different reasons. However, when performing the Problem Management process we should be open to using all three reasoning approaches. They all complement each other, with Inductive and Deductive reasoning forming two ends of a spectrum while Abductive thought looks for the balance between the other two…
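As a toy illustration of the abductive approach, the sketch below (all names and data are invented) ranks candidate root causes by how many of the observed incident symptoms each one would explain, then picks the most likely answer:

```python
# Hypothetical sketch: abductive reasoning as "pick the hypothesis that
# best explains the evidence". Symptoms and causes are made-up examples.
def most_likely_cause(symptoms, hypotheses):
    """Return the hypothesis explaining the largest share of symptoms."""
    def coverage(explained):
        return len(symptoms & explained) / len(symptoms)
    return max(hypotheses, key=lambda h: coverage(hypotheses[h]))

observed = {"slow logins", "timeouts", "disk alerts"}
candidates = {
    "failing disk":   {"disk alerts", "timeouts", "slow logins"},
    "network outage": {"timeouts"},
    "expired cert":   {"slow logins"},
}
print(most_likely_cause(observed, candidates))  # -> failing disk
```

A real RCA would of course weigh evidence far more carefully; the point is only that abduction selects the best available explanation rather than proving it.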

Incident vs Problem

In a recent conversation I was asked about the difference between an Incident and a Problem. This is one of the most often confused points in all of IT Service Management and ITIL. Part of the confusion comes from the fact that both words are used (at least in the English language) to express similar ideas. Each references some kind of issue occurring that potentially could lead to human action. In ITIL, however, words are more clearly defined and have particular contexts for usage.

Incident: any unplanned event that causes, or may cause, a disruption or interruption to service delivery or quality
Problem: the cause of one or more incidents, events, alerts or situations

While Incidents have to do with disruption of delivery or quality, Problems have to do with causes. From these distinct definitions we can see that not every incident results in a problem, and not every problem even needs to be related to an incident. Keep in mind that “Incidents never grow up to be Problems.”

The Service Design Package (SDP)

I was recently asked by one of my followers if I might have an example of a Service Design Package (SDP). When seeking to implement ITSM and ITIL, we often look for examples and models that can give us more guidance. This is no less true of the SDP. Unfortunately, when we try to seek out specific examples of an SDP, it can be difficult, if not nearly impossible, to find them. So why is it hard to find actual examples of an SDP? It goes to the very nature of the guidance we call best practice. ITIL is not prescriptive as to what should go into an SDP or what one might look like. It provides best practice guidance on the types of items contained, but not the exact look and feel of the content. Therefore each SDP will be unique to the organization that creates it. The organization of the content, the type of content, what the content says, and how it is managed are all decisions made by each organization to meet the needs of their customers and users. Just like a Service Catalog or a set of Service…

Quick Wins

Not too long ago we discussed John Kotter’s Eight Steps toward Leading Organizational Change. The sixth step outlined the necessity of establishing quick wins. As IT Service Management professionals we need to show upper management service improvements within a short time frame. We also need to get our IT staff on board with the ITIL program, and what better way than showing benefits quickly? I have outlined 10 quick wins: some are for those who are just starting their service improvement journey, and some are for those at a higher maturity level. To help illustrate this, we are going to try something new. The ITSM Professor would like to solicit your opinions and success stories on quick wins and IT Service Management improvements. We may publish your stories in upcoming blogs on topics such as:

Recording every Incident and Service Request
Defining models for your frequently occurring Incidents
Starting to create a Standard Change library
Producing trending…

Is ITIL Best Practice or Good Practice?

By definition, ITIL is a set of best practices (refer to the glossary and section 1.2.3 of any of the books). It is also considered a "source" of good practice. While this may be confusing, it is important to understand the distinction. There are many sources of good practice, but not all of those sources are validated as "best" practice. While the term is loosely used, best practices should be repeatedly proven to demonstrate tangible value in actual organizations. In fact, today's documented best practice could be tomorrow's good or common practice. That's how service management evolves, improves and becomes institutionalized. Building a service management program can also involve other sources of good practice (i.e., other frameworks, standards, and proprietary knowledge). ITIL makes it clear that its best-practice guidelines are not intended to be prescriptive. Each organization is unique and must 'adapt and adopt' the…

Juran and Quality

Several key individuals played a significant part in the movement to develop the ideas and use of quality in the production of goods and services. Joseph Juran was one of these main contributors. His main efforts came in the form of the Quality Trilogy and the Quality Roadmap. Each of these approaches helped to set the foundational concepts and practices of achieving quality for customers. Juran began with a basic definition of quality: “Quality is conformance to, or fulfillment of, customer requirements.” The Quality Trilogy states that there are three basic stages or aspects of achieving quality:

Quality Planning: quality does not happen by accident
Quality Control: checks and balances, structure and governance ensure quality
Quality Improvement: we must always seek a better way to do things

Once we have our basic aspects laid out, we can then create a “roadmap” or plan for attaining quality. For the delivery of goods and services we could try to create this…

The 2011 Edition of the ITIL Library

The Professor has kindly lent me his blog to share the latest ITIL Library update. - Jayne

It was recently announced that the 2011 Edition of the ITIL Core Library will publish on July 29, 2011. Notice that I did not refer to the new release with a new version number, and with good reason. All future revisions to the ITIL publications will be referenced by the year in which they were published. ITIL will finally just be ITIL. If you regard the 2011 ITIL update as the equivalent of the revision of a college textbook, you'll understand why this is not a big deal. Academic textbooks are revised on a regular schedule without much fanfare or impact on prior or future students. Past attendees do not retake their final exams or replace their textbooks. It is just a normal course of continual improvement. Do you base your decision on whether to take a college course on the edition of the textbook being used? Not really. What's important is the relevancy of the topic. The 2011 ITIL Edition…

Measuring Service Management Maturity

I was recently asked how to measure service management maturity when the maturity of individual processes is not equal. Frankly, it’s a bit of a chicken-and-egg situation. It can be difficult to define where your organization is as a whole compared to each individual process when the processes are at different levels. When we look at a specific process we have to judge it against a specific set of criteria. Each organization will develop these criteria based on its organizational goals and objectives. Each process may have a different set of criteria and a different level of benefit or impact, and therefore a different level of need-based maturity. For example, for organizations that are highly dependent on suppliers and outsourcing, the need for a mature Supplier Management process is critical. Other organizations may not focus on Supplier Management but instead invest their focus and resources in other processes such as Configuration Management. The maturity of individual Service Management processes…
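One way to reconcile unequal process maturities is to weight each process by how critical it is to your particular organization. The sketch below is a hypothetical illustration of that idea (the process names, levels and weights are invented, not ITIL guidance):

```python
# Illustrative sketch: roll per-process maturity levels (1-5) into a
# single weighted score, where weight reflects organizational criticality.
def weighted_maturity(levels, weights):
    """Weighted average maturity across processes."""
    total_weight = sum(weights.values())
    return sum(levels[p] * weights[p] for p in levels) / total_weight

levels  = {"Supplier Mgmt": 1, "Change Mgmt": 3, "Incident Mgmt": 4}
weights = {"Supplier Mgmt": 5, "Change Mgmt": 3, "Incident Mgmt": 2}  # criticality
print(round(weighted_maturity(levels, weights), 2))  # -> 2.2
```

Here a supplier-dependent organization scores low overall despite decent Incident Management, because the process it needs most is the least mature.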

The Difference Between a Service and a Good

What is the difference between a service and a good? When answering questions like these, I attempt to look at things from a simplified view. I try to stay away from complexity and decompose or deconstruct the parts of an answer to their most basic form. As a result, for me the difference between a service and a good is very simple. A service is intangible (an abstraction or an idea) while a good is tangible (having physical characteristics). In terms of the creation or production of each, they are both “manufactured” or “created” using the exact same approach. Raw materials are “processed” into a finished output. So both services and goods could be considered “products” of a manufacturing “process.” In this way services and goods are the same thing. The difficulty arises for many people when it comes to the idea of “manufacturing” services. Because they cannot see, taste, smell or feel a service as it makes its way through the “manufacturing” process, they end up thinking or believing that…

Managing Conflict

The workplace can be a stressful environment. Personality differences, team dynamics, budget constraints, technology issues, achieving business alignment and customer satisfaction are all contributing factors to this stress. Conflict inevitably arises. I recently attended an ITSM Leadership workshop and realized that conflict is neither negative nor something that we should avoid. As IT Service Management leaders we must understand that our stakeholders often have a different set of concerns and issues. There will be varied opinions on the right way to implement service management. In our goal of effective leadership it is important to solicit and listen to differing opinions. We have to embrace our differences, work through the issues and implement strategies to limit the negative aspects of conflict. Conflict can actually be valuable to an organization. It is in the effective management of this conflict that our teams can be made stronger and our relationships with our customers improved…

Enterprise Monitoring and the Service Lifecycle

I was recently asked where an entity such as Enterprise Monitoring resides in an organization. Should it be equivalent to the Operations, Engineering and Security areas rather than reporting to one of those areas? Yes, in some ways it does make sense to have Enterprise Monitoring at the same level. However, we must remember that there is a clear distinction and separation between the functions as proposed in ITIL and the activities that they perform. “Monitoring” is an activity related to a process (including but not limited to Event Management, Availability, Capacity, Service Level, etc.). So this work gets performed across an enterprise and not by a single, particular group. Anyone who needs to “monitor” something should use the associated processes to do so. Someone doing “monitoring” does not need to be located in any specific part of an organization. Functions are organization, location and structure agnostic. A function is not a place or a management structure but rather an abstract grouping…

Evolution of the Balanced Scorecard

The balanced scorecard (BSC) has evolved from simple metrics and performance reporting into a strategic planning and management system. It is no longer a passive reporting document that shows pretty pictures. It has transformed into a framework that not only provides performance measurements but also helps analysts identify gaps and continual service improvement programs. It enables our senior staff to truly execute their strategic goals and objectives. Dr. R. Kaplan and David Norton did extensive research and documentation on this framework in the early 1990s. In Kaplan and Norton’s writing, the four steps required to design a BSC are as follows:

Translating the vision into operational goals
Communicating the vision and linking it to individual performance
Business planning; index setting
Feedback and learning, and adjusting the strategy accordingly

In the late 1990s an updated version of the traditional balanced scorecard was introduced, called the Third Generation Balanced Scorecard…

Narrowing Tool Selection Criteria Based on Stakeholder Requirements

One of our followers recently asked how to handle the CIO's concern about security in a cloud environment when evaluating tool solutions. To my mind, the CIO is expressing a potential requirement that should be considered and that may narrow your selection criteria. Your selection criteria should help you achieve two outcomes. One is to narrow down the list of providers and their products to a workable number so that you are not spending undue amounts of time evaluating too many vendors. The other is to ensure that the products you have selected to evaluate really do meet 80% of your stated requirements out of the box. You will need to develop three criteria sets. The first is a set of criteria for what you would like the tool to do in terms of supporting your documented and defined processes (call these functional requirements). Functional requirements are those things that help you to achieve utility of your processes and services. You will also need a set of…
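As a hypothetical sketch of the narrowing step (the vendor names and requirement counts are invented), a tool stays on the shortlist only when its out-of-box coverage of the stated functional requirements reaches the 80% threshold mentioned above:

```python
# Illustrative sketch: shortlist tools meeting >= 80% of stated
# functional requirements out of the box. All figures are made up.
REQUIRED_COVERAGE = 0.80

def shortlist(vendors, total_requirements):
    """Keep vendors whose out-of-box features cover at least 80% of requirements."""
    return [name for name, met in vendors.items()
            if met / total_requirements >= REQUIRED_COVERAGE]

# Requirements met out of the box, of 50 stated functional requirements
vendors = {"Tool A": 42, "Tool B": 35, "Tool C": 45}
print(shortlist(vendors, total_requirements=50))  # -> ['Tool A', 'Tool C']
```

Security concerns like the CIO's would typically enter as additional (non-functional) criteria applied the same way, shrinking the list further.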

Metrics that Matter to Customers

I was recently asked to elaborate on a previous blog that discussed reducing metrics and reporting on those that matter to customers. In terms of any metrics, especially those that are important to customers, you should always think about or add the phrase “with quality”. Remember that the term “quality” is defined as “conformance to customer requirements”. So all metrics and measurements should ensure the work or actions you perform remain focused on the customer and their needs. Also, in terms of how you phrase a metric, it can often be more beneficial to measure in terms of increases and decreases rather than specific quantities. Given that, here are some metrics that you might think about using:

Increased Customer Quality Satisfaction %: perhaps the most important of all metrics
Increased First Line Call Resolution [with quality] %: helps reduce costs but also builds a perception of preparedness and knowledge in the eyes of the customer
Decreased Mean Time to Restore Service…
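To phrase a metric as an increase or decrease rather than a raw quantity, a small sketch comparing two reporting periods is enough (the sample figures below are invented):

```python
# Minimal sketch: report metrics as signed percentage change between
# periods rather than as absolute quantities. Sample figures invented.
def percent_change(previous, current):
    """Signed percentage change from one reporting period to the next."""
    return (current - previous) / previous * 100

# First-line call resolution rate: last quarter 62.0%, this quarter 68.2%
print(round(percent_change(62.0, 68.2), 1))  # -> 10.0 (a 10% increase)
```

Reporting "first-line resolution up 10% with quality" keeps the focus on direction of travel, which is usually what customers and sponsors care about.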

Culture Shift

When one thinks about how things work in the world, the word paradigm might come to mind. Paradigm (n.): a system of assumptions, concepts, values, and practices that constitutes a way of viewing reality. As the definition shows, a paradigm represents “how things are” in our current world. Another way I like to think about the idea of a paradigm is to use the term “culture.” Culture (n.): the known environment in which a person, thing or idea exists. If you know a foreign language or how to play an instrument, it is part of your own personal culture, or paradigm. If you do not speak a foreign language or cannot create music, those capabilities are not part of your culture or paradigm. And just as an individual has a culture or personal paradigm, so can an organization. Often it is this culture or paradigm that wreaks havoc with our ability to understand and implement IT Service Management. So how do we understand and use the knowledge of our cultures or paradigms to our advantage when…

Keeping the Momentum Going

The Continual Service Improvement publication describes the Continual Service Improvement model. One of the questions asked in this model is “How do we keep the momentum going?” This question becomes especially important when your ITSM implementation efforts have been in place for a significant amount of time. The question then becomes more one of “How do we stop losing the momentum and effort invested up to this point?” Or perhaps “How do we avoid returning to the old ways?” For all our efforts to become efficient, effective and economical, there is a potential danger that we will fall into comfortable yet poor habits. So how do we ensure that we do not fall into bad habits such as taking shortcuts, pushing aside process, and just “getting things done” instead of following established methods and processes and doing proper planning? We must begin by being confident in the strides we have made to this point. If we have followed the Continual Service Improvement model faithfully…

Process Maturity Framework (PMF) - Part 3

The Professor was recently asked: “I am having difficulty communicating the business risk of having processes like Change Management and Incident Management sit at Initial (Level 1) maturity. Can you address some of the common business risks and costs companies see by having immature processes?” Great question! Many organizations do not recognize the inherent risks in having immature critical processes such as Incident Management and Change Management. Both processes strive to increase service availability, either by identifying and mitigating risk before a change is made or by minimizing the impact of a failure after a service is deployed. To refresh our memories, I have included a description of each aspect of Level 1 in the Process Maturity Framework, with its associated risks:

Vision and Steering: Minimal funds and resources with little activity. Results temporary, not retained. Sporadic reports and reviews. No formal objectives and targets. Wasted…

MOF and Standard Changes

Organizations looking for help defining standard changes will find it in the Microsoft Operations Framework (MOF). A white paper, Using Standard Changes to Improve Provisioning, describes what standard changes are in relation to other changes as well as in relation to service requests, along with guidelines for establishing standard changes. The MOF Action Plan: Standard Changes offers a more succinct, step-by-step look at how to create standard changes. There are also a number of “MOF Reliability Workbooks” in the MOF Technical Library (e.g., the Reliability Workbook for Active Directory® Certificate Services) that describe proposed standard changes for a given system or service, presented in a checklist-like fashion that allows each proposed change to be verified as a standard change. The MOF Reliability Workbooks are Microsoft Excel spreadsheets that also look at things such as Monitoring Activities, Maintenance Activities, and Health Risks. These and other tools, such as an…
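In the same checklist spirit, a change could be verified as standard only when every precondition holds. The criteria below are illustrative assumptions of mine, not taken from the MOF workbooks:

```python
# Hypothetical checklist-style verification: a change qualifies as
# "standard" only if every precondition is true. Criteria are illustrative.
STANDARD_CHANGE_CHECKLIST = (
    "documented, repeatable procedure",
    "low, well-understood risk",
    "pre-authorized by change management",
)

def is_standard_change(change):
    """True only when the change satisfies every checklist item."""
    return all(change.get(item, False) for item in STANDARD_CHANGE_CHECKLIST)

routine_patch = {item: True for item in STANDARD_CHANGE_CHECKLIST}
print(is_standard_change(routine_patch))  # -> True
print(is_standard_change({**routine_patch, "low, well-understood risk": False}))  # -> False
```

The all-or-nothing check mirrors the workbooks' intent: one failed criterion means the change must go through the normal change process instead.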

Process Maturity Framework (PMF) - Part 2

In one of my previous blogs I wrote about the “Process Maturity Framework” (Appendix H, pg 263, from the V3 ITIL Service Design book). I mentioned that you can utilize this framework to measure your Service Management processes individually or your Service Management program as a whole. With this discussion I would like to speak to the five areas that the assessment should be completed against at each level. The five areas are:

Vision and Steering
Process
People
Technology
Culture

Initial (Level 1)
Vision and Steering: Minimal funds and resources with little activity. Results temporary, not retained. Sporadic reports and reviews.
Process: Loosely defined processes and procedures, used reactively when problems occur. Totally reactive processes. Irregular, unplanned activities.
People: Loosely defined roles or responsibilities.
Technology: Manual processes or a few specific, discrete tools (pockets/islands).
Culture: Tools and technology based and driven, with strong…
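An assessment like this can be captured as simple data so that per-area ratings are comparable across processes. The sketch below is my own illustration: the ratings are invented, and the "overall level equals the weakest area" rule is an assumption of mine, not part of the PMF:

```python
# Illustrative sketch: record a PMF assessment per area and derive an
# overall level. The ratings and the min() rule are assumptions.
PMF_AREAS = ("Vision and Steering", "Process", "People", "Technology", "Culture")

def overall_level(ratings):
    """Overall level taken as the lowest area rating (weakest-link assumption)."""
    assert set(ratings) == set(PMF_AREAS), "rate all five areas"
    return min(ratings.values())

change_mgmt = {
    "Vision and Steering": 2,
    "Process": 3,
    "People": 2,
    "Technology": 3,
    "Culture": 1,
}
print(overall_level(change_mgmt))  # -> 1, limited by Culture
```

Structuring the assessment this way makes it obvious which area is holding a process back, which is usually where the next improvement effort should go.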

Process Maturity Framework (PMF) - Part 1

I am often asked about the best way to measure process maturity. While there are several process maturity models available, I prefer the “Process Maturity Framework” (Appendix H, pg 263) from the V3 ITIL Service Design book. You can utilize this framework to measure your Service Management processes individually or Service Management as a whole. The five areas that the assessment should focus on are:

Vision and Steering
Process
People
Technology
Culture

The major characteristics of the Process Maturity Framework (PMF) are the following:

Initial (Level 1): The process has been recognized, but there is little or no process management activity and it is allocated no importance, resources or focus within the organization. This level can also be described as ‘ad hoc’ or occasionally even ‘chaotic’.

Repeatable (Level 2): The process has been recognized and is allocated little importance, resource or focus within the operation. Generally, activities related to the process are…