
Posts

Showing posts with the label Metrics

ITIL® 4: It’s time to focus on people, not just SLAs

Originally posted on devclass.com, June 22, 2021 and written by Joseph Martins. Sponsored Experience is everything when it comes to delivering IT-enabled products and services. But it’s no longer about how many deadlines your team smashed, how often you’ve exceeded service-level agreements (SLAs), or how many lines of code you’ve spat out. Rather, it’s about how the services and products you deliver affect the rest of the organisation’s ability to do its job, increase productivity, deliver customer satisfaction and co-create value. “Experience” may be seen as subjective, even ephemeral, compared to traditional IT metrics, deadlines and SLAs. But if you want proof of its importance, consider how ITIL® 4, the latest revision of the best practice framework for service management from AXELOS, focuses on improving the user experience of digital services and how this enhances productivity right across the organisation. Ian Aitchison, VP Product Management at Nexthink, the leader in digital

A Dash of Neuroscience – DevOps Leaders Listen Up!

As leaders, we need to understand the people we are leading. This is a new world: if we are to lead the global enterprise into a successful future, we must understand the strategic, tactical and operational objectives of our organization, and we must have a passion for learning. “A Dash of Neuroscience” is one of many topics introduced by the DevOps Institute in the newly updated DevOpsLeader course. This information is taken from that course and is just a smattering of what you will learn as you prepare for your certification. Learn how to optimize speed to value as a DevOps Leader. Live in a perpetual world of learning. Many people feel their brains limit their potential and prevent them from learning. However, learning can change our brains in terms of function, connectivity, and structure. Our brains shape our learning, but learning also shapes our brains, and research has shown that simply knowing about brain plastici

DevOps Metrics – Time vs. Cost

There are three main principles that will help optimize your DevOps initiative. You may have heard them referred to as The Three Ways. All three principles have a role to play, but for the purpose of Time vs. Cost I would like to focus on the first way, “The Flow of Work from Left to Right”. When considering this flow of work, think of the value stream from left to right as running from the time a request is made until the time that value is realized. Using Lean methods and applying techniques like the Theory of Constraints, we can increase velocity and apply just the right cadence to meet evolving business demand. These practices, along with DevOps integration, continuous delivery pipelines and automation, will radically reduce the time to value for products and services. Time is a key metric. DevOps organizations use “time” as the primary measurement tool. Why time is a better metric than cost: Time is used to set goals beyon
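As a minimal sketch of treating time as the primary measurement, the snippet below computes lead time across the value stream, from the date a request is made to the date value is realized. The work items and dates are hypothetical, not taken from the post.

```python
from datetime import datetime

def lead_time_days(requested: str, value_realized: str) -> int:
    """Lead time: elapsed days from request to realized value."""
    fmt = "%Y-%m-%d"
    start = datetime.strptime(requested, fmt)
    end = datetime.strptime(value_realized, fmt)
    return (end - start).days

# Hypothetical work items flowing left to right through the value stream.
items = [
    ("2021-03-01", "2021-03-15"),
    ("2021-03-02", "2021-03-09"),
    ("2021-03-05", "2021-03-08"),
]
times = [lead_time_days(r, v) for r, v in items]
avg = sum(times) / len(times)
# Time, not cost, is the metric being tracked period over period.
print(f"Average lead time: {avg:.1f} days")
```

Tracking how this average trends after each pipeline or automation improvement is what makes time actionable as a metric.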

Metrics That Matter to Customers

I was recently asked to elaborate on a previous blog that discussed reducing metrics and reporting on those that matter to customers. For any metrics, especially those that are important to customers, you should always think about or add the phrase “with quality”. Remember that “quality” is defined as “conformance to customer requirements”. So all metrics and measurements should ensure the work or actions you perform remain focused on the customer and their needs. Also, in terms of how you phrase a metric, it can often be more beneficial to measure in terms of increases and decreases rather than specific quantities. Given that, here are some metrics that you might think about using:

Increased Customer Quality Satisfaction %--perhaps the most important of all metrics
Increased First Line Call Resolution [with quality] %--helps reduce costs but also builds a perception of preparedness and knowledge in the eyes of the customer
Decreased Mean Time to Restore Serv
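Phrasing metrics as increases and decreases amounts to reporting a signed percentage change between periods rather than a raw quantity. A small sketch, using hypothetical quarter-over-quarter readings (the metric names follow the list above; the numbers are invented for illustration):

```python
def pct_change(previous: float, current: float) -> float:
    """Signed percentage change from one reporting period to the next."""
    return (current - previous) / previous * 100

# Hypothetical readings: (previous quarter, current quarter).
metrics = {
    "customer satisfaction %":      (82.0, 88.0),  # want an increase
    "first line resolution %":      (60.0, 66.0),  # want an increase
    "mean time to restore (hours)": (8.0, 6.0),    # want a decrease
}
for name, (prev, curr) in metrics.items():
    print(f"{name}: {pct_change(prev, curr):+.1f}%")
```

The sign tells the customer-facing story directly: a positive change in satisfaction and resolution, a negative change in restore time.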

Metrics and Business Value

IT managers gather and distribute metrics that reflect their group’s performance on a regular and timely basis. But outside of their immediate organizations, do these metrics have any real meaning or impact? Do these measurements really define the value that IT is delivering? Business executives shouldn’t have to work to see the positive impact of IT performance. It should be made readily visible, in language they can grasp quickly and easily. Many IT organizations continue to focus their reporting on the performance of the technology rather than the value being delivered to the business. This emphasis continues to create a gap between IT and the rest of the organization. (1) What metrics do you employ? Service metrics measure the end-to-end performance of your services, based on your technology metrics. Technology metrics measure the performance of your components and applications. Are they available when needed? Do you have the correct levels of capacity to meet d

Roadmap

Most executives understand that a business’ performance is only meaningful when it is benchmarked against its competitors. As an accepted business best practice, it is expected that an individual organization’s performance will be measured against that of similar organizations. This practice of benchmarking against competitors should be no different for any IT organization. There can be no better instrument than benchmarking to determine whether the IT operation is providing a competitive product. Without this peer-to-peer comparison it would be difficult at best to determine whether IT’s performance is weak, competitive or industry leading. Of course, in order to benchmark you must first determine whether your processes are mature enough to gather the significant data needed for this undertaking. If not, your resources would be better utilized in first assessing your processes’ maturity through tools such as the ITIL Process Maturity Framework (PMF

Defining Business Benefit

In a previous blog I wrote about the need for a high performance Service Desk, with the value proposition being reduced re-work, less down time, better utilization of higher cost resources (knowledge management), increased stability and predictable levels of IT services. In order to deliver this value, we must effectively communicate goals and business benefits in a language that the business finds relevant and meaningful. Consequently, metrics and reporting should reflect business outcomes and business needs. Each IT support metric maps to an IT goal:

Average speed of answer: less down time, lower abandon rate, quicker speed of answer.
First Call Resolution: less down time, lower abandon rate, greater use of knowledge bases.
Average Escalation Duration: less down time, predefined escalation paths, greater cooperation between technical resources.
Total # of incidents recorded by Service, CI and assignment team: precise picture of which services and CIs are having the greatest impact on t

The Best of CSI, Part 3

Creating a Metrics Program. Originally published on November 30, 2010. Every organization should create a metrics program to ensure that business and process improvement goals are being achieved. We need the ability to show that processes achieve results, and we must review and schedule process audits. A metrics program describes the measurements needed to achieve business goals. It also identifies how to collect the data and how to use the information to continually improve performance. An effective program focuses on what you should measure to achieve business goals, individual process performance and process interfaces. Each of the best practice frameworks stresses metrics as a way of assuring continual improvement. ITIL defines the "7 Step Process" for identifying, collecting, analyzing and using data. The Deming Cycle's "Check" stage requires that we have methods for monitoring and measuring processes. The Balanced Scorecard looks at performa

Service Measurement

Before my life as an ITSM professor, I was responsible for delivering the monthly IT reports at a large specialty retail organization with remote locations in several states across America. I delivered many of the standard reports for Service Desk, Change Management and System Availability. System availability was a standard report that reviewed, from a system/hardware perspective, just how available the systems and their supporting components were throughout the month. This was delivered in percentages, and the goal was to maintain 100% infrastructure availability. Even though many of the individual systems and components were meeting their required SLAs, our customers were still not satisfied with the availability and performance of critical services. We needed to re-address what we should be measuring and how we should be reporting achievements back to the business and customers. We decided to report on the end-to-end delivery of our services and the a
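The gap between component SLAs and customer experience is easy to show with arithmetic: for a service that depends on a serial chain of components, end-to-end availability is the product of the individual availabilities, so it is always lower than the weakest component. A minimal sketch, with a hypothetical four-component chain:

```python
from functools import reduce

def service_availability(component_availabilities):
    """End-to-end availability of a serial chain of components:
    the product of the individual availabilities (as fractions)."""
    return reduce(lambda a, b: a * b, component_availabilities)

# Hypothetical chain: network, server, database, application.
components = [0.999, 0.995, 0.997, 0.998]
end_to_end = service_availability(components)
print(f"End-to-end availability: {end_to_end:.2%}")
```

Each component can meet its individual SLA while the service the customer actually consumes falls short, which is why reporting on end-to-end delivery matters.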

The Myth of Metrics

The world record for the 100 meter dash is 9.58 seconds. The world record for the mile is 3 minutes 43 seconds. The record for running a marathon is 2 hours and 3 minutes. The 100 meter freestyle swimming record is 44 seconds. The world land speed record is 760 miles per hour (1,223 km/h). And the list goes on and on. So what do these records have to do with ITSM? First, these records are metrics: results of measurements. They measure the performance of processes (structured steps or actions undertaken to achieve an objective) and indicate the level of performance of the people and vehicles doing the process. Second, these data points reveal the myth of metrics. This myth is the belief that a person or organization can pick a data point or metric (desired result) before doing a process and, through sheer willpower or force of action, achieve that point. For example, a person could not pick a time such as one (1) hour and say they are going to run a full-length marathon in one hour. The only w

First Call Resolution

I was recently asked, "Do you have an average for the service desk of first call resolution? We are trying to set a target for the team and I cannot find any data which gives me any indication what a good target would be." First call resolution (sometimes called "first contact resolution" or FCR) is an industry-recognized metric for the performance of the Service Desk. Analysts are measured on their ability to restore service to a user and close an incident during the first call or contact. This is a difficult metric to benchmark across all organizations and all incidents. Factors such as incident complexity, service desk skills and empowerment, outsourcing and remote control capabilities can influence the ability (or inability) to restore service during the first contact. While ITIL acknowledges FCR as an important Service Desk metric, it steers clear of offering a target or benchmark. Industry experts generally accept an FCR range of 65 to 80%.
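The calculation itself is simple: FCR is the share of incidents resolved during the first contact. A minimal sketch with hypothetical monthly counts, checked against the commonly cited 65-80% band:

```python
def first_call_resolution(resolved_first_contact: int, total_incidents: int) -> float:
    """FCR: percentage of incidents resolved during the first contact."""
    return resolved_first_contact / total_incidents * 100

# Hypothetical month of service desk data.
fcr = first_call_resolution(resolved_first_contact=720, total_incidents=1000)
in_band = 65 <= fcr <= 80  # the 65-80% range industry experts generally accept
print(f"FCR: {fcr:.1f}% | within industry range: {in_band}")
```

The hard part, as the post notes, is not the arithmetic but deciding what counts in the numerator and denominator for your organization.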

Should Service Requests be Included in First Call Resolution metrics?

I recently had a question regarding the inclusion of Service Requests in metrics for First Call Resolution. As always, the answer is “it depends”! ITIL now treats Service Requests and Incidents as two different processes: Service Request Fulfillment and Incident Management. Both are generally logged into the same tool and owned by the Service Desk. They are also measured by their own key performance indicators and metrics. ITIL does not consider first call resolution a process metric; it is more of a service desk performance measurement. First call resolution historically helps measure the handling of incidents by the Service Desk. The definition of an incident is usually pretty clear. However, since the definition of a service request can vary greatly from organization to organization, the value of including requests in incident metrics may also vary. If your definition of a service request includes pre-authorization and funding, then the Service Desk’s ability t
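Why "it depends" matters is easy to see numerically: routine service requests are often closed on first contact, so folding them into the incident population inflates the FCR figure. A sketch with invented monthly counts:

```python
def fcr_percent(resolved_first: int, total: int) -> float:
    """FCR as a percentage of records resolved on first contact."""
    return resolved_first / total * 100

# Hypothetical monthly counts, split by record type.
incidents = {"total": 800, "resolved_first_contact": 520}
requests  = {"total": 400, "resolved_first_contact": 360}  # routine, often closed immediately

fcr_incidents_only = fcr_percent(incidents["resolved_first_contact"], incidents["total"])
fcr_combined = fcr_percent(
    incidents["resolved_first_contact"] + requests["resolved_first_contact"],
    incidents["total"] + requests["total"],
)
print(f"incidents only: {fcr_incidents_only:.1f}%")
print(f"with requests:  {fcr_combined:.1f}%")  # inflated by easy-to-close requests
```

Whichever definition you choose, reporting the two populations separately keeps the incident-handling signal from being masked.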

Evolution of the Balanced Scorecard

The balanced scorecard (BSC) has evolved from simple metrics and performance reporting into a strategic planning and management system. It is no longer a passive reporting document that shows pretty pictures. It has transformed into a framework that not only provides performance measurements but also helps analysts identify gaps and continual service improvement programs. It enables our senior staff to truly execute their strategic goals and objectives. Dr. R. Kaplan & David Norton did extensive research and documentation on this framework in the early 1990s. In Kaplan & Norton’s writing, the four steps required to design a BSC are: translating the vision into operational goals; communicating the vision and linking it to individual performance; business planning and index setting; and feedback and learning, adjusting the strategy accordingly. In the late 1990s an updated version of the traditional balanced scorecard was introduced, called the Third Generation Balanced S

Use CSI to Meet Changing Customer Needs

You can’t always go home again, but you can use Continual Service Improvement (CSI) to meet the changing needs of your customers. I recently posted a blog about returning to a service desk I had managed and spoke about how the changing business environment had impacted management’s ability to sustain the current list of Critical Success Factors (CSFs) and Key Performance Indicators (KPIs). The first question asked was “What should we measure?” Within the new business reality, we reviewed how the corporate vision, mission, goals and objectives had changed. We spoke with service owners, business process owners, business analysts and customers and asked what was critical to them. Which of the services we were providing created the most value for them and enabled them to meet these new goals and objectives? Management then identified the gaps between “what we should measure” and “what we can measure”. From this a more customer-focused list was developed. The overriding objective

Service Desk Metrics

Earlier in my career I had the pleasure of managing a Service Desk. This function is the unsung hero of IT support! We had a multitude of measurements and metrics that were taken every day and then meticulously charted, reported and analyzed. At the time, it’s what we did. I recently had the opportunity to visit my old Service Desk and found, to my horror, that many of these metrics were no longer being used. I was also informed that customer satisfaction had not dropped significantly and that some of the KPIs still being measured were well within an acceptable range. Now that I am no longer in the thick of it, I took some time to really think about what it was we were measuring and what it really meant. As an organization we did all of the industry best practice measurements:

Speed of answer
Call duration
Number of calls per day/week/month and analyst
Abandoned calls
Number of tickets opened versus number of tickets closed
Percentage resolved on first call
Customer satisfaction

The 7-Step Improvement Process

One of the most interesting concepts that I've found in the V3 Continual Service Improvement (CSI) book is the 7-Step Improvement Process. This process provides a structure for defining, analyzing and using metrics to improve services and service management processes. Prior to beginning the process, it is important to determine the vision, strategy, tactical goals and operational goals. These will be defined during Service Strategy (vision and strategy) and Service Design (tactical and operational goals). With that in place, the process consists of 7 practical steps:

1. Define what you should measure
2. Define what you can measure (then do a gap analysis between this and Step 1)
3. Gather the data
4. Process the data
5. Analyze the data
6. Present and use the information
7. Implement corrective actions

This process provides a framework for ensuring that the data being collected and resulting metrics align with the strategic and tactical goals of the organizatio
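The seven steps can be sketched as a small data pipeline. The metric, thresholds and sample values below are hypothetical; only the step structure comes from the process itself:

```python
# A minimal sketch of the 7-Step Improvement Process as a data pipeline.

should_measure = {"incident_resolution_hours"}   # 1. define what you should measure
can_measure    = {"incident_resolution_hours"}   # 2. define what you can measure
gap = should_measure - can_measure               #    gap analysis between steps 1 and 2

raw = [5.0, 9.5, 3.2, 12.0, 4.1]                 # 3. gather the data (hours per incident)
processed = sorted(raw)                          # 4. process the data into usable form
average = sum(processed) / len(processed)        # 5. analyze the data
report = f"avg resolution: {average:.1f}h"       # 6. present and use the information
corrective_action_needed = average > 6.0         # 7. trigger corrective actions (hypothetical threshold)

print(report, "| action needed:", corrective_action_needed)
```

Because step 7 feeds back into step 1, the pipeline is meant to run continually, not once.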