Part 1 of this series explores the activities involved in planning and preparing for practice capability assessments. This blog focuses on conducting assessments.
Organizations typically use three broad types of practice capability assessments, each differing in purpose, depth, and rigor.
Self-assessments – internally led, often using surveys or worksheets where practitioners rate their own practices.
- Raises awareness of strengths and weaknesses and establishes a baseline
Facilitated assessments – led by a neutral internal facilitator or cross-functional team (e.g., a Certified Process Design Engineer (CPDE), a process improvement team, or members of a Service Management Office (SMO)) that guides stakeholders through structured discussions and consensus-based scoring exercises.
- Provides a shared understanding of capability across stakeholder groups
Independent or formal external assessments – conducted by experienced consultants, certified assessors or accredited organizations using recognized standards or models.
- Provides objective, auditable validation of capability levels and can be used to benchmark performance against external standards or industry peers
There are pros and cons to each assessment type related to factors such as time, cost, objectivity, and credibility.
In practice, organizations often progress through these assessment types – starting with self-assessments for awareness, moving to facilitated assessments to build consensus and agree on priorities, and eventually to formal external assessments for validation and benchmarking.
The remainder of this blog focuses on self-assessments, but the guidance is generally applicable to any type of assessment. Regardless of type, the core structural elements of assessment models are the same. They include:
Assessment scope – as described in Part 1 of this series.
Capability levels – the progressive stages that describe how well a practice is defined, managed, measured, and continually improved.
Scoring mechanism – the method used to rate the degree to which each criterion is achieved.
Evidence – the artifacts, affirmations, or observations used to validate achievement.
Criteria – the specific indicators used to evaluate whether a practice demonstrates the behaviors, controls, or outcomes expected at each capability level.
Scoring worksheets – structured worksheets that list the criteria and corresponding rating scales for each capability level.
Part 3 of this series describes how to analyze, interpret and report on the results of your scoring exercise and translate assessment findings into an actionable improvement plan or roadmap.
There are several models and standards that can be used to assess practice capability, including Capability Maturity Model Integration (CMMI), the ITIL Maturity Model, and ISO/IEC 33020. ISO/IEC 33020 is part of the ISO/IEC 33000 family of International Standards, which provides a framework for assessing process quality characteristics and organizational maturity.
You can choose to use one of these existing models or standards, or use them as inspiration to build your own. Whichever approach you choose, the key is consistency.
Consistently applying the same assessment approach ensures that results are comparable and repeatable over time. This builds credibility and confidence in the findings, enabling fair benchmarking across practices, teams, or assessment cycles.
Repetition and practice are prerequisites to mastery.
Capability levels
- Level 1 – Initial – no structured practice – activities are ad hoc and inconsistent
- Level 2 – Managed – recognized practice – practice is planned and controlled
- Level 3 – Defined – standardized practice – practice is institutionalized across the organization
- Level 4 – Predictable – quantitatively managed practice – decisions and improvements are driven by reliable, quantitative data
- Level 5 – Optimizing – continuously improved practice – innovation opportunities that support business goals are identified and implemented
Source: Adapted from The ITSM Process Design Guide
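If you plan to track assessments in a script or spreadsheet, encoding the levels up front helps keep results comparable across cycles. A minimal Python sketch, assuming the five-level scale above; the code itself is illustrative, not part of any standard:

```python
from enum import IntEnum

class CapabilityLevel(IntEnum):
    """Five-level scale adapted from The ITSM Process Design Guide."""
    INITIAL = 1      # no structured practice; activities are ad hoc
    MANAGED = 2      # recognized practice; planned and controlled
    DEFINED = 3      # standardized practice; institutionalized
    PREDICTABLE = 4  # quantitatively managed practice
    OPTIMIZING = 5   # continuously improved practice

# IntEnum keeps levels ordered and directly comparable:
assert CapabilityLevel.DEFINED > CapabilityLevel.MANAGED
```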
Scoring mechanism
At the criterion level, most models use a binary or categorical judgment.
In its simplest form, scoring can be yes or no. While this mechanism has limitations, it is quick and easy to apply and ideal for introductory self-assessments.
A more nuanced approach could provide a scale such as:
- Achieved – criterion is fully and consistently demonstrated – there is objective evidence
- Partially achieved – criterion is inconsistently demonstrated – evidence may be incomplete or informal
- Not achieved – criterion is absent or ad hoc – no credible evidence
This approach supports data-driven decisions and makes it possible to track trends over time.
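For assessments tracked in a tool or script, the three-point scale translates naturally into categorical values. A minimal sketch in Python; the numeric weights (1.0 / 0.5 / 0.0) are an illustrative assumption, not drawn from any standard:

```python
from enum import Enum

class Rating(Enum):
    """Three-point criterion scale; weights are illustrative only."""
    ACHIEVED = 1.0            # fully and consistently demonstrated, with evidence
    PARTIALLY_ACHIEVED = 0.5  # inconsistently demonstrated; evidence incomplete
    NOT_ACHIEVED = 0.0        # absent or ad hoc; no credible evidence

def attainment(ratings: list[Rating]) -> float:
    """Percentage attainment for a set of criteria at one level."""
    return 100 * sum(r.value for r in ratings) / len(ratings)

print(attainment([Rating.ACHIEVED, Rating.PARTIALLY_ACHIEVED, Rating.NOT_ACHIEVED]))  # 50.0
```

Numeric weights like these are what make trend tracking possible from one assessment cycle to the next.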
Evidence requirements
Evidence is objective proof that a capability criterion has been met. It can take the form of:
- Documents (e.g., policies, procedures, guidelines)
- Records (e.g., completed forms, logs, minutes)
- Data and reports (e.g., dashboards, charts)
- Interviews (e.g., employee confirmations)
- Observations (e.g., workflow inspections)
Criteria
Capability refers to how well the practice is defined, how embedded it is in the organization’s culture, and how capable it is of being continually improved through measures tied to business goals.
Models tend to categorize assessment criteria under a high-level structure. For example, ITIL 4 practices align with practice success factors (PSFs) and the four dimensions of service management. The PSFs and capability criteria are available via PeopleCert Plus.
For organizations building their own model, example characteristics might include:
- Objective attainment and business alignment
- Cost effectiveness and funding
- Governance, accountability, and compliance
- Skills, competencies, and culture
- Lifecycle management
- Digital enablement and information quality
- Knowledge capture
- Proactive and predictive activities
- Customer satisfaction
Once characteristics are defined, identify criteria (indicators) within each category that help assessors determine the capability level, as the sketch below illustrates.
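One lightweight way to organize a home-grown model is a catalog keyed by capability level and category, from which scoring worksheets can be generated. A sketch in Python; the categories mirror the example characteristics above, but the criterion wording is invented purely for illustration:

```python
# Hypothetical criteria catalog: level -> category -> criteria.
# Criterion wording below is invented for illustration only.
criteria_catalog = {
    2: {
        "Governance, accountability, and compliance": [
            "A practice owner is formally designated",
            "Practice activities are planned and tracked",
        ],
        "Skills, competencies, and culture": [
            "Staff performing the practice are trained in its procedures",
        ],
    },
    3: {
        "Lifecycle management": [
            "A documented, standardized procedure is used organization-wide",
        ],
    },
}

# Flatten into worksheet rows: (level, category, criterion, rating, evidence).
worksheet_rows = [
    (level, category, criterion, "", "")
    for level, categories in criteria_catalog.items()
    for category, criteria in categories.items()
    for criterion in criteria
]
```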
Scoring worksheets or surveys
These list the criteria and corresponding rating scales for each capability level. Participants score each criterion based on their perception and provide supporting evidence.
The highest level at which all criteria are achieved with supporting evidence reflects current capability; partially achieved or not achieved criteria represent improvement opportunities. The sketch below shows how this rule can be applied once worksheets are scored.
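A minimal Python sketch of the level-determination rule, assuming a simple mapping of level to per-criterion results; the data shown is invented for illustration:

```python
def current_capability(scores: dict[int, list[str]]) -> int:
    """Highest level at which every criterion, at that level and all
    lower levels, is rated 'achieved'.

    scores maps capability level -> list of criterion ratings, each
    'achieved', 'partially achieved', or 'not achieved'.
    """
    level = 1  # Level 1 (Initial) is the floor; it has no criteria to meet
    for lvl in sorted(scores):
        if all(r == "achieved" for r in scores[lvl]):
            level = lvl
        else:
            break  # a gap at this level caps the result
    return level

# Invented example: all level-2 criteria achieved, one level-3 gap.
scores = {
    2: ["achieved", "achieved", "achieved"],
    3: ["achieved", "partially achieved"],
    4: ["not achieved"],
}
print(current_capability(scores))  # 2 - the level-3 gaps are improvement opportunities
```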
Use worksheets or surveys as a starting point for stakeholder conversations. Techniques like observation, interviews, and workshops clarify perspectives and encourage engagement.
Analyzing, Interpreting, and Reporting Results
Once criteria are scored, the next step is to aggregate and interpret the results, transforming the assessment into an actionable improvement plan or roadmap.
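As a preview of that step, aggregation can be as simple as averaging criterion ratings per category. A minimal sketch using invented results and the illustrative weights from the scoring sketch above:

```python
from collections import defaultdict

# Invented worksheet results: (category, rating weight) pairs.
results = [
    ("Governance, accountability, and compliance", 1.0),
    ("Governance, accountability, and compliance", 0.5),
    ("Skills, competencies, and culture", 1.0),
    ("Lifecycle management", 0.0),
]

totals = defaultdict(list)
for category, weight in results:
    totals[category].append(weight)

for category, weights in totals.items():
    print(f"{category}: {100 * sum(weights) / len(weights):.0f}% attainment")
```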
Other relevant blogs include:
- ITIL Maturity and Practice Capability Assessments
- Assessing Practice Capability – Part 1 – Planning and Preparation
- Assessing Practice Capability – Part 3 – Analyzing and Reporting Results
Relevant ITSM Academy certification courses include:
- Certified Process Design Engineer (CPDE)
- Value Stream Mapping Fundamentals (VSMF)
- ITIL 4 Foundation (a prerequisite for all advanced ITIL 4 courses)
Our advisory services also include Process to Practice Workshops to help your team evaluate and improve selected service management practices.
In the ITIL 4 Qualification Scheme, a Practice Manager designation validates skills in specific practice areas. Each ITIL 4 Managing Professional and Strategic Leader course introduces practices relevant to its focus.
Click here to learn more about the ITIL 4 Qualification Scheme