Measuring the impact of learning interventions is crucial for organizations to ensure effectiveness, drive performance outcomes, and make data-driven decisions to improve their learning programmes. Unfortunately, many training programmes suffer from a lack of clear outcomes from the start, leading to suboptimal return on investment (ROI). To address this, Learning and Development professionals must adopt a mindset for measurement, focusing on quantified business goals and meaningful partnerships with cross-functional stakeholders. By contextualizing the learning experience and managing expectations, organizations can transform their learning initiatives, maximizing ROI and achieving desired outcomes.
The man who starts out going nowhere, generally gets there.
Dale Carnegie
Understanding the present landscape of learning measurement is a crucial step to instigate a transformation.
For far too many organizations, the measurement of learning interventions often turns out to be an afterthought rather than an integrated part of the learning process.
ATD surveyed 779 talent development professionals. Half were managers, directors, or executives. 46% were from mid-sized organizations (500-9,999 employees), 31% from large organizations (10,000+ employees), and the remaining 23% from smaller organizations (<500 employees).
The survey used the Kirkpatrick and Phillips model to categorize learning evaluation levels.
Levels 1 and 2 were the most widely used. Level 1 measures learners' reactions (e.g., smile sheets), and Level 2 measures skills and knowledge acquisition through quizzes. Approximately 80% of organizations used these levels.
Level 3, which focuses on the application of new skills on the job, was measured by 54% of organizations through surveys or observation.
Levels 4 and 5 were less commonly used. Only 38% used Level 4, which measures business or mission effects, and just 16% used Level 5, which measures financial results.
According to the survey report, just 40% said their learning evaluation efforts helped them meet their organization’s business goals. The main barriers reported are explored below.
Feedback collected at the end of trainings is notoriously superficial, focusing on the immediate experience rather than the effectiveness and application of the learning. This superficial feedback gives a skewed perception of success, misleading organisations and learners about the actual impact of the learning intervention, and it does nothing to elevate the position of the learning expert as a valued cross-functional partner.
John Sharpe, a committed Learning and Development professional at MedEquip Inc., a high-tech medical equipment company, is entrusted with developing and implementing a sales training programme. His primary stakeholder, Robert, a determined first-line sales manager, needs an immediate boost in his sales team's languishing performance.
Under this time-sensitive mandate, John puts together a sales training programme. On its completion, there’s a clear sense of accomplishment. The sales team is full of enthusiasm, demonstrating an improved understanding of the products during a standard post-training quiz.
Months later, Robert notices a frustrating lack of improvement in the team's actual sales performance. The quiz results were positive, but there’s no significant upswing in the numbers that truly matter: sales.
As John gears up for round two of the training a few months later, he finds himself retracing familiar steps, a feeling of déjà vu pervading his preparations. The issue is not the content, but the lack of depth in the evaluation. The quiz he uses measures surface knowledge, but no strategies are put in place to track the long-term application of the training or its direct impact on sales.
John’s training intervention is a bit like filling a bucket with a large hole at the bottom. The immediate results look promising, but with no thorough measurements or reinforcement strategies, the newly acquired skills leak out, leaving little to no impact on the sales performance.
Confronting the Fear of Poor Results
The spectre of poor results can be an intimidating obstacle to implementing effective measurement practices. For many, it's easier to bask in the immediate positivity that comes after a learning intervention, like glowing feedback on a post-training survey, than to delve into the gritty details of how that training truly impacts performance over time.
However, avoiding the reality of poor results is not just an exercise in ignorance, but it can also lead to a cyclical pattern of ineffective training. When the fear of confronting weak outcomes prevents thorough evaluation, learning professionals and managers may find themselves trapped in a loop of recurring issues.
The key to breaking free from this cycle lies in embracing the possibility of unfavourable results. This is not about seeking failure, but about making failure a stepping stone to success. In fact, poor results can provide the most useful insights, revealing the gaps in training that need to be addressed, and paving the way for truly impactful learning interventions.
Barriers to Measurement
Despite the undeniable importance of measurement, several barriers often hinder its effective implementation. These include the following:
1. Lack of time
2. Lack of clarity
3. Resource allocation issues
Overcoming these barriers requires a commitment to creating a culture where measurement is seen as an essential part of learning and development — not an afterthought, but a core component of strategy and execution.
I believe learning fulfills its highest purpose with performance activation. For me, L&D detective work is measuring fulfillment of purpose.
Kevin M Yates, 'Learning Detective'
Having clear measurement criteria from the outset is pivotal for effectively evaluating the success of learning interventions. Think of it like setting out on a journey – having a defined destination enables you to plan the most efficient route and measure your progress along the way.
If you don't know where you are going, you'll end up someplace else.
Yogi Berra
In his L&D Detective Kit for Solving Impact Mysteries, Kevin M Yates makes a compelling case for the adoption of ‘Impact Standards’. Not all learning programmes are created equal, so it is important to first identify the training programmes that “require the time, resources and effort to measure impact.”
Yates’ Impact Standards require key questions to be asked from the outset during what he calls “Impact Opportunity Interviews” to enable “proactive impact planning prior to the design and deployment of your learning solution”.
Impact standards indicate how well a learning solution is designed to impact performance and business goals. The more standards met, the greater the potential for impact. Yates also notes that without these Impact Standards learning solutions are hard to measure. The standards are essential to establish the “purpose and intention for impact” first.
Examples of KPIs - via the Learning Detective
Market share, employee performance, customer satisfaction, operations, quality, time, growth, errors, volume and engagement.
a) Applying the Kirkpatrick Model for ROI Measurement
The Kirkpatrick Model, developed by Donald Kirkpatrick in the 1950s, has stood the test of time as a valuable tool for evaluating the effectiveness of training programmes. This model comprises four levels, explained very simply here:
1. Reaction: Did they enjoy the training?
2. Learning: Did they pass the assessment?
3. Behaviour: Do they work better?
4. Results: Did business metrics improve?
The Kirkpatrick Model offers a structured way to assess the impact of training, allowing stakeholders to gauge the return on their investment and identify areas for improvement.
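At Level 5 of the Kirkpatrick and Phillips framework, the ROI calculation itself is simple arithmetic: net monetary benefits attributed to the programme, divided by programme costs, expressed as a percentage. A minimal sketch with entirely hypothetical figures:

```python
def training_roi(monetary_benefits: float, programme_costs: float) -> float:
    """Phillips ROI formula: net programme benefits over costs, as a percentage."""
    if programme_costs <= 0:
        raise ValueError("programme costs must be positive")
    return (monetary_benefits - programme_costs) / programme_costs * 100

# Hypothetical example: a sales training programme costing 50,000
# that is credited with 120,000 of isolated monetary benefit.
roi = training_roi(120_000, 50_000)
print(f"ROI: {roi:.0f}%")  # prints "ROI: 140%"
```

The hard part, of course, is not the formula but isolating which monetary benefits can credibly be attributed to the training rather than to other factors.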
Using the Kirkpatrick Model, Indiana University Health increased on-the-job compliance scores by over 20% and decreased medication errors with a severity level E or higher by 67% over a three-year period. Emirates Airline increased customer satisfaction ratings in four key areas and created a double-digit decrease in customer complaints.
However, the Kirkpatrick Model is not without its limitations. One important criticism is that the model doesn’t tell us much about the ongoing measurement of the four levels over time. While the learning intervention may have an initial impact, that impact may fade over time. Evaluations should be ongoing and measure lasting impact.
b) Anderson's Value of Learning Model
Anderson’s Value of Learning Model was developed by Valerie Anderson and published by the Chartered Institute of Personnel and Development. The model is a three-stage cycle intended to be applied at the organization level rather than to specific learning interventions:
Stage 1: Determine current alignment against strategic priorities, e.g. driving sales, reaching a new market, leadership development. A learning programme that significantly improves the technical skills of the target audience may seem successful. However, if the organization's primary need is to enhance leadership and communication abilities, then the programme is poorly aligned with the organization's development priorities.
Stage 2: Use a range of methods to assess and evaluate the contribution of learning across four recommended areas of evaluation.
Stage 3: Establish the most relevant approach for your organization; this will depend on the stakeholders’ objectives and values. The model suggests considering four categories when making this choice.
The Anderson model has three notable advantages. Its main criticisms are that it does not offer in-depth analysis of individual training programmes, nor direction on how to evaluate individual learning and development initiatives. For that reason, combining the Anderson model with other evaluation approaches is recommended to establish whether strategic priorities are being met.
c) The Performance Consulting Model and its Role in Learning Measurement
The Performance Consulting model is a sound framework for measuring learning impact. Rooted in the concept of partnership, it is a seven-step model.
In common with other models, the Performance Consulting model will require the investment of time and resources to derive its benefits.
Measure what is measurable and make measurable what is not so.
Galileo
Learning Detective Kevin Yates emphasises the importance of focusing on performance outcomes rather than learning objectives when measuring learning impact. This perspective shifts the framing of the learning: “You will know how to handle conflict” becomes “Manage and resolve conflict by interpreting behaviour instead of responding emotionally.” Outcomes are thus action-based and describe the skill or capability.
Incorporating diverse evaluation methods at different stages of the learning process can provide a comprehensive understanding of their effectiveness. For example, using pre- and post-training assessments, job simulations, and on-the-job observations can assess knowledge acquisition, application, and behaviour change.
Leveraging technology-enabled learning analytics platforms can facilitate data collection and analysis, allowing for real-time tracking of learning outcomes and performance improvements. Regularly reviewing and analyzing the collected data, along with involving stakeholders in the evaluation process, can provide valuable insights for continuous improvement.
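As a minimal illustration of the pre- and post-training comparison described above, the simplest analysis pairs each learner's scores and averages the gains (all scores here are hypothetical):

```python
# Hypothetical pre- and post-training assessment scores for the same
# five learners, used to estimate average knowledge gain.
pre_scores = [55, 60, 48, 72, 66]
post_scores = [78, 81, 70, 90, 84]

# Pair each learner's scores and compute their individual gain.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
average_gain = sum(gains) / len(gains)
print(f"Average score gain: {average_gain:.1f} points")
```

A gain score like this only speaks to Level 2 (knowledge acquisition); on-the-job observation and business metrics are still needed to evidence Levels 3 and 4.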
Organizations must foster a culture of shared accountability and prioritize measurement in learning and development strategies. The stakeholders initiating the learning request should take responsibility for defining desired outcomes and collaborating with their learning partners on evaluation methods to assess their achievement.
Measurement, in its essence, is not just about collecting data or ticking boxes. It's an exercise that demands we dive deep into the heart of our efforts and initiatives, asking the tough questions that some might shy away from. This translates to questioning the efficacy of our strategies, the relevance of our content, and the resonance of our delivery methods.
Questions about the efficacy of strategies, the relevance of content, and the resonance of delivery methods are exactly what a committed learning and development professional or sales manager needs to ask.
The goal is to turn data into information, and information into insight.
Carly Fiorina
Measurement is key to Purposefully Blended’s signature IMPACT approach to learning and development. We are trusted by global clients to measure learning impact and maximise the ROI of their learning and development initiatives. For information on how we can support your organization, get in touch today.
Performing in a volatile, uncertain, complex and ambiguous world warrants support from a trusted partner. Purposefully Blended continues to support global Learning and Development Managers with the capabilities or additional expert resource they need to identify, build and implement effective blended learning solutions consistently and at scale. We continue to equip First-Line Managers with coaching capabilities that embed, apply and sustain the learning. When these two roles work in harmony, they have a dramatic, transformative impact on outcomes.
Interested in getting our help to drive performance in your organisation?
Lucy Philip, Purposefully Blended, Founder
Lucy Philip founded Purposefully Blended in 2015, out of a profound sense of mission and possibility.

A highly experienced leader, ICF certified coach and mentor to Learning Partners and Leaders, Lucy has witnessed firsthand the unique challenges and pressures faced by those in Learning and Development (L&D) and Leadership Roles.
Purposefully Blended is a boutique Learning and Development Consultancy that blends learning design expertise with high-impact leadership practices to drive transformational change in both organisations and individuals.
Over the last decade the company has established a strong reputation for helping global organisations through tailored programmes that incorporate formal, informal, mentoring and coaching approaches to learning.
We support and develop leaders at all levels to develop the confidence and skills around: Positive Intelligence (Strength of Mind), Emotional Intelligence (Depth of Heart) and Intrinsic Motivation (Purpose and Drive).
About | First-Line Manager Development | Training Manager Development | Learning Consultancy | Events | Resources | Contact | Creditation | Terms and Conditions | Cookie Policy | Privacy Policy
Copyright Purposefully Blended 2025
Site designed by MV Create