“Perhaps what you measure is what you get. More likely, what you measure is all you’ll get. What you don’t (or can’t) measure is lost” – H. Thomas Johnson
Increasingly, professional education programs are being required, like many other business investments, to show clear and measurable results. In their 2019 study, UNICON reported that 56% of L&D executives think ROI for executive education programs is more critical today than it was in previous years.
Corporate L&D leaders are asking for quantifiable evidence that the professional education programs they are investing in bring forth organizational improvements to justify the cost of these programs and the employees’ time away from their desks.
Like all ROI calculations, it takes a lot of thought and effort to understand what should be measured, how, and by whom. A strong collaboration between program sponsors and the professional learning organization is essential to successfully match learning objectives with the organization's strategic goals and to define ways to capture reliable data points that prove a program's success.
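The arithmetic itself is the easy part. As a rough illustration using hypothetical figures: if a program costs $200,000 to design and deliver, and the monetary benefits credibly attributed to it are estimated at $260,000, then ROI = ($260,000 − $200,000) / $200,000 × 100 = 30%. The thought and effort go into agreeing on which benefits can be attributed to the program and how they are converted into monetary terms.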
To help develop a measurable and impactful learning journey for professionals, we recommend thinking through these four steps.
The first step in developing a program is for the company's program sponsor to define the program's vision, mission, and goals. The professional learning organization can then collaborate with the sponsor to translate these goals into learning objectives and the specific skills participants should acquire during the program, for example:
Measuring impact requires capturing data from many sources throughout the organization. Without a commitment from important stakeholders, the information-gathering efforts can fall short, affecting the relevance of the data.
To ensure you have the right people supporting the program and have access to the information needed to measure impact, consider these questions:
A recognized model for analyzing learning effectiveness is Kirkpatrick's four levels of evaluation. This model breaks learning impact down into reaction, learning, behavior, and results.
Ideally, impact should be measured across all levels of Kirkpatrick's evaluation model. The levels help define what information should be captured and which assessment tools should be used.
To choose the levels of impact to focus on, look to answer these questions:
Once you know what impact looks like, you can move on to determining how it will be measured across the chosen evaluation levels. Tools to assess progress on the selected skills can then be developed. These tools can include self-assessment and 360 surveys, in-session polls, evaluated individual or group projects, and coaching feedback, among other options.
Questions to help determine which tools to develop include the following:
Below are some examples of tools to measure impact at each level:
Level 1 - Engagement. You may consider a participant's activity in class as a form of engagement. For example, you can capture a participant's “Talk Time” (the number of minutes each participant speaks in plenary and breakout discussions, which is more easily measured in digital environments) as one data point for understanding engagement. Other tools to capture engagement could include program feedback surveys about the overall quality of the learning experience.
Level 2 - Learning. This level is about measuring participants’ retention of the theory covered in the program. Participants’ absorption of the learning content can be evaluated during sessions through reflection polls and exercises, and during the learning journey through coaching and group projects.
Level 3 - Behavior. At this level, we are trying to identify if participants are applying the skills they have learned to their work. You may want to include quick in-session polls to ask specifically how participants will apply the learning on the job. If you can, gather feedback from coworkers and managers on participants’ application of the learning in “real life”. Participant self-assessments and 360 assessments completed before and after the program can be used to track changes in behaviors at work.
Level 4 - Results. This can be the most difficult level to assess and may require measurement that extends well beyond the learning journey itself. At this level, we are looking to evaluate the impact the program has had beyond individuals' progress. Depending on how “results” are defined, this can mean looking into the program's impact on business units, the entire organization, and even society. To capture this type of impact, KPIs (for example, employee retention, customer satisfaction, or business-unit revenue) need to be defined early on, then monitored and updated as needed over long periods of time.
An impactful learning journey needs to go beyond just providing new ideas. For learning to result in long-term desired behavioral changes, it needs to be grounded in the science of learning. Sessions should be designed to allow participants to actively engage with the defined learning objectives. Research has demonstrated the importance of collaborative learning, spaced practice, and frequent feedback in creating engaging and impactful learning journeys (Kosslyn, S. M., 2017).
Decades of cognitive and behavioral research show that participants retain material they engage with interactively far better than material they receive passively. Participants should be continuously engaged in some form of active participation. This can include polls, breakout discussions, voting, simulation exercises, and other collaborative learning techniques. Rather than relying on the traditional focus on lectures and information dissemination, learning journeys that create a collaborative environment facilitate peer learning and provide real-time formative feedback.
By incorporating deliberate, spaced practice, a learning journey fosters both depth of understanding and the ability to apply learned concepts broadly across multiple contexts. This structured progression, or scaffolding, is grounded in scientific evidence and helps learning journeys build mastery over time. It also allows reinforcement of learning and opportunities to apply knowledge in different professional contexts; learners need time to digest new skills and apply them to their specific environment.
Additionally, involving organizational leaders and domain experts during the learning journey can lead to greater engagement and more effective application of newly learned skills on the job.
Some questions to consider for this topic include:
With the right partner and commitment, showing impact and calculating the ROI of an executive program is possible. Following the four steps outlined above and working through a collaborative process will help ensure the program has been developed with the organization’s needs and goals firmly in mind.
Kosslyn, S. M. (2017). The science of learning. In S. M. Kosslyn & B. Nelson (Eds.), Building the Intentional University: Minerva and the future of higher education. Cambridge, MA: MIT Press.