Evaluating learning: did it work?

If learning is important to organisational success, measuring and evaluating its effectiveness is crucial. Doing so depends, first, on clearly identifying each learning programme’s objectives – how it will meet identified needs – and then on evaluating outcomes against costing and benchmarking criteria to answer the basic question: “Has it worked?”

While calculating and benchmarking the cost of learning and training activities should not be complex, the growing use of informal learning interventions rather than traditional ‘training’ can make evaluation harder. The shift from training to learning has placed extra responsibilities on line managers, yet the CIPD reports that fewer organisations now have difficulty measuring the effectiveness of learning activity (60%, down from 74% in 2013), with lack of management priority for the task the most commonly reported problem.
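As a simple illustration of why the costing side need not be complex – the figures here are hypothetical – a cost-per-learner calculation for benchmarking purposes might run:

    cost per learner = total programme costs ÷ number of learners
                     = (£40,000 direct + £10,000 indirect) ÷ 100 learners
                     = £500

The resulting figure can then be compared against internal budgets or published sector benchmarks.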

A further finding, with implications for future ‘big data’ scenarios, is that 58% of organisations had difficulty accessing data for evaluation, and a quarter rarely use the evaluation data they do collect. Other CIPD respondents reported some surprisingly informal methods of evaluation, including “gut feel” and “corridor conversations”.

According to the influential and widely used model developed by Donald Kirkpatrick, evaluation can be carried out on four levels:

reaction – did the individual enjoy the training, and did they rate it highly?

learning – what did the employee learn?

behaviours – what is the employee doing differently, and what learning is being applied in the workplace?

results – has performance improved and is the employee more effective?

The typical evaluation method differs for each level, as the table below shows.

    Level         Typical evaluation method
    reaction      post-course questionnaires and feedback forms (‘happy sheets’)
    learning      tests or assessments before and after the programme
    behaviours    observation, appraisal and follow-up interviews back in the workplace
    results       organisational performance measures, such as productivity, quality and retention

Surveys have found that most evaluation attention focuses on ‘reactions’, because of the difficulty and time cost of measuring the other outcomes. But while other evaluation techniques (such as ROI analysis) have been found equally wanting, there is unanimity on the need to focus on the main outcome: did it work for the organisation? Standard HR metrics on retention, engagement and performance – even sickness and absence records – have been training’s traditional yardstick, combined with Kirkpatrick evaluation mostly at the reactions level – a pattern the CIPD describes as being “stuck in a recurring loop of evaluation”.
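For reference, training ROI analysis of the kind mentioned above typically rests on simple arithmetic – the figures below are illustrative only:

    ROI (%) = (programme benefits – programme costs) ÷ programme costs × 100
            = (£75,000 – £50,000) ÷ £50,000 × 100
            = 50%

The arithmetic is the easy part; the difficulty that leaves the technique wanting lies in isolating, and putting a monetary value on, the benefits attributable to the training.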

One way of breaking out of this loop is to harness the data that is often ‘siloed’ within organisations. While questions remain about how this will be done and how big data will be managed, the change is likely to prove as transformational as the current shifts towards collaboration and social learning, lifelong learning, and the full alignment of learning with business objectives.
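As a minimal sketch of what joining up siloed data could look like in practice – the file names and column names here are assumptions for illustration, not a prescribed approach – learning records might be merged with standard HR metrics like this:

    # Minimal sketch: join 'siloed' sources - LMS completions,
    # performance ratings and absence records - on an employee ID.
    # File and column names are hypothetical.
    import pandas as pd

    lms = pd.read_csv("lms_completions.csv")       # employee_id, course
    perf = pd.read_csv("performance_reviews.csv")  # employee_id, rating
    absence = pd.read_csv("absence_records.csv")   # employee_id, days_absent

    # Bring the three sources together so learning activity can be
    # compared against the HR metrics the organisation already holds.
    merged = (
        lms.merge(perf, on="employee_id", how="left")
           .merge(absence, on="employee_id", how="left")
    )

    # Do completers of a given course score higher at review, on average?
    # (A correlation only - not proof the learning caused the difference.)
    print(merged.groupby("course")["rating"].mean())

Even a simple join like this moves evaluation beyond ‘reactions’, by linking learning activity to the results-level metrics organisations already collect.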