Reading Time: 7 minutes

Start with what the research actually says

Before the inference, the honest picture

Let us start with something the L&D profession is not always comfortable saying out loud. The ROI case for training in general is weak. Not non-existent, but weak in the specific sense that it is hard to prove, rarely measured rigorously, and frequently overstated. Organisations invest an estimated $60 billion annually in leadership development alone, and the evidence base for what actually works is, as one PMC review put it, characterised by programmes that “underperform or fail, resulting in wasted time and money.”

The foundational problem is transfer. Research consistently reports that only around 10% of what is learned in training is transferred back to the workplace (Holton and Baldwin, 2000). A more optimistic study by Saks and Belcourt (2006) found that 62% of employees apply training immediately after the event, but that figure drops to 44% after six months and 34% after a year. Something is happening, but most of it fades.

This is the context in which any claim about challenge-driven learning and ROI has to be made, not against an imaginary world of perfectly effective training, but against the actual baseline: an industry that spends enormous sums on programmes that mostly do not stick.

“When not built in, research has shown consistently that transfer is limited and that participants typically revert to previous behaviors.”
PMC review, Maximising the Impact of Leadership Development, 2024

So is there a direct academic link between challenge-driven learning specifically and measurable ROI? The honest answer is: not a clean one. The research on challenge-based learning is largely drawn from educational settings, and the studies that measure business outcomes from experiential and simulation-based learning are fragmented, use different methodologies, and rarely control for confounding variables. Anyone who tells you there is a published study proving that challenge-driven programmes deliver X% more revenue growth than traditional training is either citing vendor research or overstating what the evidence shows.

What there is, however, is something more useful for practical decision-making: a strong inferential chain built from several robust bodies of evidence that, taken together, make a compelling case.

The transfer problem is the ROI problem

Why most training fails to deliver value is well understood

The single biggest driver of poor L&D ROI is not bad content. It is not poor facilitation or insufficient budget. It is the failure of learning to transfer from the training environment to actual workplace behaviour. This is called the transfer problem, and it has been studied in detail for four decades.

Wilson Learning’s analysis of 32 studies covering 66 learning transfer activities found that combining the right transfer-supporting techniques could increase the impact of learning by 186%. That is a remarkable figure. It is not primarily about the quality of the programme itself; it is about whether learning is designed to transfer. And the factors that drive transfer are precisely the ones that challenge-driven learning builds into its structure.

Understanding transfer through the Kirkpatrick model

L1 – Reaction
Did participants find it useful and engaging?
Challenge-driven link: real stakes and genuine ambiguity produce high engagement as a structural output, not a facilitation aspiration.

L2 – Learning
Did participants acquire the knowledge, skill or attitude?
Challenge-driven link: neuroscience shows that effortful retrieval under uncertainty produces significantly stronger encoding than passive instruction.

L3 – Behaviour
Did the learning change what people actually do at work?
Challenge-driven link: producing a real artefact and experiencing real consequence creates behavioural commitment that discussion-based learning rarely produces.

L4 – Results
Did the business see measurable improvement?
Challenge-driven link: when L3 transfer occurs, L4 follows. The question for most organisations is not whether better capability drives results; it is whether the training actually produced better capability.

Kirkpatrick’s four-level model (updated by Phillips to include a fifth ROI level) is the most widely used framework for thinking about this. The insight it provides for the challenge-driven ROI argument is not complicated: the reason most training fails to show ROI at Level 4 is not that business outcomes do not follow from real capability change. They do. The reason is that the training fails at Level 3. Behaviour does not change. And the reason behaviour does not change is usually that the learning was not designed to transfer.
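Phillips's fifth level expresses ROI as net programme benefits over programme costs. A minimal sketch of that arithmetic, with entirely hypothetical figures (the function names and numbers below are illustrative, not drawn from any cited study):

```python
def roi_percent(benefits: float, costs: float) -> float:
    """Phillips Level 5 ROI: net benefits as a percentage of programme costs."""
    return (benefits - costs) / costs * 100

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """Benefit-cost ratio: total attributed benefit per unit of cost."""
    return benefits / costs

# Hypothetical programme: 100,000 cost, 250,000 in attributed benefits.
print(roi_percent(250_000, 100_000))         # 150.0 (% ROI)
print(benefit_cost_ratio(250_000, 100_000))  # 2.5
```

The formula is trivial; the hard part, as the rest of this section argues, is that the benefits figure is only defensible if Level 3 behaviour change actually occurred and can be attributed to the programme.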

Challenge-driven learning addresses the transfer problem at source, not through follow-up mechanisms bolted on after the event. The real consequences, the genuine ambiguity, the tangible artefact, the public accountability of a screening or presentation: these are structural transfer mechanisms, not support scaffolding.

Building the inferential chain

Three bodies of evidence that, taken together, make the case

Given the absence of a single clean study linking challenge-driven learning directly to financial ROI, the honest approach is to build the argument from three bodies of evidence that are themselves robust, and show how they connect.

Body 1: Neuroscience of retention

Challenge-driven conditions produce stronger memory encoding

The neuroscience of desirable difficulties (Bjork, 1994) is one of the most consistently replicated findings in cognitive psychology. Conditions that slow initial learning, including genuine struggle, prediction error and emotional arousal, produce dramatically stronger long-term retention and transfer than conditions that feel productive in the moment. Challenge-driven learning is structurally built on these conditions. Retrieval practice research (Roediger and Karpicke, 2006) shows roughly double the long-term retention versus re-reading the same material. If what is learned is retained and transferred, the probability of behavioural change, and therefore business impact, increases substantially.

Bjork, 1994; Roediger & Karpicke, 2006; PMC, 2024

Body 2: Simulation ROI evidence

Business simulations show measurable commercial outcomes

While challenge-driven learning and simulation-based learning are not identical, they share critical features: consequence, ambiguity, and active decision-making under pressure. The simulation evidence base is more developed than the CDL evidence base and provides the closest available proxy. Studies consistently report measurable improvements in decision quality, shorter time-to-competency, and in commercial training specifically, improvements in metrics like bid quality, win rates, pricing discipline and margin management. Accenture’s research found that companies which invest well in training receive $4.53 back for every dollar spent. That figure depends entirely on transfer, which simulations support far better than passive methods.

Accenture; Industry Masters; Wilson Learning, 2024

Body 3: Transfer science

The conditions that produce transfer are the conditions CDL is built on

Transfer research identifies the key drivers of whether training changes workplace behaviour: relevance to actual work conditions, emotional engagement with the material, practice under realistic pressure, peer accountability, and tangible outputs that create post-event commitment. Wilson Learning’s meta-analysis found that adding transfer-supporting activities to any training programme could increase impact by up to 186%. Challenge-driven learning does not add these as supplements. It builds them into the design. The consequence is real. The output travels back to the business. The peer accountability happens in the room.

Wilson Learning; Saks & Belcourt, 2006; Holton & Baldwin, 2000

The inferential conclusion

Challenge-driven learning produces stronger retention than passive methods. Stronger retention produces better transfer. Better transfer produces behaviour change. Behaviour change produces business results. None of those steps is contested. The gap in the evidence is not about whether the chain holds; it is about the difficulty of isolating the contribution of any single learning intervention in a complex business environment.

Why measuring it is genuinely difficult

And why that difficulty is not the same as the absence of impact

The attribution problem in L&D ROI measurement is real and should be acknowledged rather than papered over. A commercial capability programme runs in March. Win rates improve in September. How much of that improvement came from the programme and how much from a change in market conditions, a new sales director, a competitor withdrawing a product, or simply the passage of time? Isolating the contribution of a learning intervention is methodologically hard, and organisations that claim clean ROI numbers often have soft assumptions buried somewhere in their calculation.

Research from the Institute for Corporate Productivity found that only 35% of organisations have formal processes in place to measure the transfer of learning at all. Most are operating without a baseline and without post-programme measurement against consistent metrics. In those conditions, ROI calculation is largely fiction.

The difficulty of measurement does not mean the impact is not there. It means that most organisations are not measuring it properly. Challenge-driven learning actually makes this problem more tractable, not less, for one specific reason.

The artefact advantage

Because challenge-driven learning produces a tangible output, it creates a natural measurement anchor. A commercial proposal produced in a simulation can be assessed against quality criteria before and after a programme. A film produced in a leadership challenge can be evaluated for clarity of commercial message. The artefact is not just an embedding mechanism; it is also a measurement tool. It externalises capability in a way that makes assessment possible without relying entirely on self-report or manager observation.

The right response to the measurement challenge is not to abandon the ROI conversation but to reframe it. The question is not “can we prove that this programme delivered X% revenue growth?” It is “what would we expect to see in the business if the capability we are developing is genuinely improving, and are we seeing that?” That is a tractable question, and challenge-driven learning, because of the artefacts and the behavioural specificity of what it develops, is better placed to answer it than most other approaches.
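That reframing can be made concrete. One way artefact-anchored measurement might work in practice is a fixed rubric applied to artefacts produced before and after a programme, with the per-criterion shift as the headline metric. The criteria names, scale and scores below are hypothetical, offered only as a sketch:

```python
# Score artefacts (e.g. commercial proposals) against the same rubric
# before and after a programme, then compare per-criterion averages.

def mean(xs):
    return sum(xs) / len(xs)

def rubric_shift(pre: dict, post: dict) -> dict:
    """Average post-programme score minus average pre-programme score, per criterion."""
    return {c: round(mean(post[c]) - mean(pre[c]), 2) for c in pre}

# Hypothetical scores: three assessors rating artefacts on a 1-5 scale.
pre  = {"pricing_logic": [2, 3, 2], "scope_clarity": [3, 3, 2]}
post = {"pricing_logic": [4, 4, 3], "scope_clarity": [4, 3, 4]}

print(rubric_shift(pre, post))
# {'pricing_logic': 1.33, 'scope_clarity': 1.0}
```

The design choice here is the point: because the artefact externalises capability, the same rubric can be applied by independent assessors before and after, which is exactly the baseline-plus-measurement discipline most organisations lack.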

What practitioners actually report

The anecdotal evidence, offered honestly as an anecdote

Anecdotal evidence is not academic evidence. It should be offered as what it is: the accumulated observation of practitioners who have run enough programmes to see patterns. With that caveat stated, here is what practitioners consistently report.

Commercial challenge programmes where teams bid, design and deliver against a brief under competitive pressure tend to produce more durable change in commercial behaviour than equivalent programmes built around content delivery and case study discussion. The change shows up in how people talk about commercial problems, in how they frame bids, in their willingness to question assumptions about scope and pricing. It is not always measurable in revenue terms, but it is consistently observable in behaviour.

Leadership challenge events where participants produce something, present it to peers and receive public feedback tend to produce stronger personal commitment to specific behaviour changes than events that end with an action plan completed on a feedback sheet. The accountability of having made something real, in front of colleagues, appears to create a different quality of intention than the accountability of private self-reflection.

Whether that translates into financial returns, and over what timescale, depends on factors the programme cannot control: whether leaders are given opportunities to apply what they have learned, whether the organisation reinforces the behaviours the programme develops, whether the capability gap being addressed is actually the binding constraint on performance. Training cannot fix an organisation that is not organised to use what training produces.

“The goal is not to produce learning that participants remember fondly. It is to produce learning that changes what they do when a real problem lands on their desk and there is no framework to reach for.”
MDA Training

What the ROI conversation should actually look like

The L&D profession does itself no favours by either overstating ROI claims or retreating entirely into the position that learning impact cannot be measured. Both positions are wrong, and both damage credibility with the commercial stakeholders whose support is needed.

The honest ROI conversation about challenge-driven learning runs roughly as follows. The evidence base for experiential approaches producing better transfer than passive methods is solid. The evidence that better transfer drives business outcomes is well established. The evidence specifically linking challenge-driven learning to financial returns is inferential rather than direct, but the inference is strong and the individual links in the chain are well supported. The measurement challenge is real but tractable, particularly because challenge-driven learning produces artefacts that create natural measurement anchors.

Against a baseline industry where only 10% of training transfers to the workplace, an approach that structurally addresses the transfer problem at source is not making a modest claim. It is making a claim about fundamentally changing the odds of training actually working. That is the ROI case. It does not require a single clean study to be worth taking seriously.