Measuring return on investment for Mentoring & Coaching Programmes

by Dr Jill Andreanoff
23 September 2019

In my previous blog on the difference between mentoring and coaching, I finished by stating the need to have clearly defined aims and objectives for any mentoring or coaching intervention by which to measure success. Not only is this important so that participants know exactly what their meetings are supposed to achieve, it also enables evidence to be provided for stakeholders and funders to secure the future of the programme. Many stakeholders now require irrefutable evidence of success that goes beyond the qualitative, self-reported, anecdotal evidence that may have been acceptable in the past. Whilst initially you may consider this an impossible task when it comes to mentoring and coaching, there are many means by which to measure success regardless of the organisation or institution in which you work. But in order to do this there must be some intensive planning and development before the intervention takes place. 
Firstly, the nature of the intervention or support being offered should be fully described (as referred to in my previous blog). Is it group coaching or one-to-one? A directive, coach-as-expert model or an enquiry-based approach? This is important so that readers can compare and contrast your findings with those of other, similar interventions. Sadly, much misconception and misunderstanding still exists around mentoring and coaching, so the nature of the intervention needs to be stated very clearly. 
Once you know what you hope to achieve from the mentoring or coaching programme, whether that is improved employee or student retention rates, a reduction in stress-related absence or improved sales, you can start to develop your approach to measuring success. 

Utilise existing data as much as you can, such as the previous year's retention rates or individual sales figures. Then you have a baseline against which to compare results following your coaching intervention. The key to data such as this is having enough numbers to give confidence in your findings. For example, if you only have 10 members of sales staff receiving coaching, even if you see an improvement in sales figures post-coaching, the numbers are too low to be convincing and the improvement may even be attributable to something else. If you can demonstrate increased sales figures for different members of staff over three iterations of the coaching intervention, this starts to hold more water, especially when compared to previous sales figures. However, some consideration needs to be given to other possible influencing factors, such as how the coachees are selected. Perhaps they self-select for coaching, in which case they may be more motivated individuals and more likely to succeed anyway. These types of variables can be factored into your statistical calculations when dealing with higher numbers. 
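As a rough sketch of one way such influencing factors could be taken into account (this is not a prescribed method, and the file and column names are entirely hypothetical), a regression can estimate the effect of coaching on post-intervention sales while controlling for prior sales and length of service:

    # Sketch only: estimating the coaching effect on sales while controlling for
    # possible confounds. File name and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    sales = pd.read_csv("sales_staff.csv")   # one row per salesperson (hypothetical data)

    # 'coached' is 1 if the salesperson received coaching, 0 otherwise.
    model = smf.ols("post_sales ~ coached + prior_sales + years_employed", data=sales).fit()
    print(model.summary())   # the 'coached' coefficient is the adjusted estimate of impact
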
Useful data to collect as a matter of routine, which will help with ROI calculations, include:
  • Employee turnover (including voluntary and severance data)
  • Staff sickness absence
  • Performance (low, medium or high)
  • Salary (including bonuses)
  • Sales figures 
  • Employee engagement survey data 
  • Career progression/advancement 
  • Recruitment costs 
  • External training costs  
  • Training evaluations 
Retention of employees or students, or sickness absence, can easily be measured and compared with that of employees or students who have not taken part in your coaching or mentoring programme; these effectively make up your control group. Control groups, as described by Mosely (1997), allow two populations to be compared where one benefits from an intervention and the other does not. The most effective method is to collect both baseline and post-intervention data from those who received the intervention and from those who did not. As mentioned previously, these two groups need to be carefully selected so that they are as identical as possible with regard to age, gender, years employed, seniority and so on. The larger your sample sizes, the more the differences between your intervention group and control group are balanced out, and the greater the confidence you can have in your findings. 
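As an illustrative sketch (the counts below are invented), retention in the intervention and control groups can be compared with a simple chi-squared test on the numbers retained and lost in each group:

    # Invented figures: comparing retention between a coached group and a control group.
    from scipy.stats import chi2_contingency

    coached_group = [92, 8]    # [retained, left] - hypothetical counts
    control_group = [81, 19]   # [retained, left] - hypothetical counts

    chi2, p_value, dof, expected = chi2_contingency([coached_group, control_group])
    print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")   # a small p suggests a genuine difference

A low p-value gives you the kind of statistical confidence that stakeholders increasingly expect, provided the two groups were matched as described above.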

Using a statistical software package, the data can be analysed and the groups compared. I have successfully used these techniques in my own doctoral study of peer coaching. I compared the academic performance (module grades) of students who were coached with a group of students who did not receive coaching (but were offered it), and I can assure you that I am no statistical whizz kid! The analysis uncovered far more than I was looking for. I was able to determine that coached males made better progress than coached females, and that females who were not coached (in the control group) performed only slightly better than the females who received coaching. Draw what conclusions you like from this, but it certainly made for some interesting findings, especially when it was revealed that the coaching impact was greatest for students in their first year. This can reveal valuable knowledge, as it shows where best to target your intervention when resources are limited. 
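A sub-group breakdown of this kind can be produced with very little statistical expertise. The sketch below (with hypothetical file and column names, not my actual study data) simply compares mean module grades by coached/control status, gender and year of study:

    # Sketch only: where does the coaching impact appear greatest?
    import pandas as pd

    grades = pd.read_csv("student_grades.csv")   # hypothetical file: one row per student
    summary = grades.groupby(["coached", "gender", "year_of_study"])["module_grade"].agg(["mean", "count"])
    print(summary)   # e.g. may show the largest difference for first-year students
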
In another piece of work for one of my clients (a 5-year longitudinal study), I was able to determine a correlation between the number of sessions attended and the impact on isolation, satisfaction with social life and perceived quality of life. The more sessions attended, the greater the improvement seen in these aspects, all of which were measured pre- and post-intervention. All this takes a great deal of effort and planning, but the external funders were so impressed with the beneficial impact on the participants that it led to further funding, so it was well worth that extra effort. 
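A correlation of this kind is straightforward to test. The sketch below uses invented values to illustrate the idea of relating sessions attended to the pre-to-post change in a perceived quality-of-life score:

    # Invented data: is attendance related to improvement?
    from scipy.stats import pearsonr

    sessions_attended = [2, 4, 5, 6, 8, 9, 10, 12]
    quality_of_life_change = [1, 2, 2, 4, 5, 5, 7, 8]   # post score minus pre score

    r, p_value = pearsonr(sessions_attended, quality_of_life_change)
    print(f"r = {r:.2f}, p = {p_value:.3f}")   # a positive r supports 'more sessions, more improvement'
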

The latter example demonstrates how ‘self-perception’ data can also be measured, and well-established instruments for collecting it are often readily available. For example, when working with young people, Rosenberg’s measure of self-esteem can be used. For higher education students, Sander and Sanders’ (2009) Academic Behaviour Confidence scale measures self-efficacy, an essential characteristic for academic success. For those in the workplace, there are many well-known, existing questionnaires to measure leadership qualities, entrepreneurial flair and so on. Failing this, bespoke measures and tools can be developed to suit your own organisational needs. 
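Whichever instrument you choose, administering it before and after the intervention allows a simple paired comparison. The sketch below uses invented scores to show the idea:

    # Invented scores: paired pre/post comparison on a self-perception scale.
    from scipy.stats import ttest_rel

    pre_scores = [14, 16, 12, 18, 15, 13, 17, 16]    # before the programme
    post_scores = [17, 18, 15, 19, 18, 14, 20, 18]   # after the programme (same participants, same order)

    t_stat, p_value = ttest_rel(post_scores, pre_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")    # a small p suggests a real shift in self-perception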

Once you have your participant data available, you can work out a formula to determine ROI. In the case of student retention, this could be measured in terms of lost fees (remembering that a student lost in their first year results in two further years of lost funding). 
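As a back-of-the-envelope illustration (all figures are invented), the fee saving from improved first-year retention might be worked out along these lines:

    # Invented figures: fee saving from retaining additional first-year students.
    annual_fee = 9250                  # fee per student per year (hypothetical)
    extra_students_retained = 12       # first-years retained compared with the previous year
    years_of_fees_saved = 3            # first year plus the two further years of funding

    fees_saved = annual_fee * extra_students_retained * years_of_fees_saved
    print(f"Fees saved: {fees_saved}")   # 9,250 x 12 x 3 = 333,000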

For employee retention, this can be worked into your formula. A commonly accepted cost for replacing an employee is around 1.5 x salary. If you can determine the number (or percentage) of additional staff retained, the resulting saving can easily be calculated, giving you the benefit side of the ROI calculation. 

In both the above examples, the costs of delivering the intervention can then be deducted, giving you the resulting ROI. The cost of delivery is easy to calculate if external coaches are used, or can be determined from the pro-rata time of the staff involved in delivering an internal mentoring or coaching programme (obviously including the cost of any external training if used). 
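As a rough worked example bringing these two steps together (all figures are invented), the retention saving and the resulting ROI might look like this:

    # Invented figures: retention saving using the 1.5 x salary rule of thumb,
    # with the cost of delivering the programme deducted.
    average_salary = 35000
    replacement_cost_per_leaver = 1.5 * average_salary   # cost avoided per employee retained
    extra_staff_retained = 6                             # retained compared with the control group

    benefit = replacement_cost_per_leaver * extra_staff_retained
    programme_cost = 40000                               # external coaches, staff time, training

    roi_percent = (benefit - programme_cost) / programme_cost * 100
    print(f"Benefit: {benefit}, ROI: {roi_percent:.0f}%")   # (315,000 - 40,000) / 40,000 = approx. 688%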

In the case of reduced stress-related absence, the saving can be calculated based on salaries and/or the cost of replacement or interim staff. 
Final words of advice: always look for unintended outcomes in your data, and elicit feedback from third parties (such as clients, line managers and reportees), especially when trying to measure behavioural change. You may be so focussed on your original outcomes that you lose the opportunity of finding others. 

One commonly missed outcome is the possible benefit for ‘internal’ coaches or mentors. In my own study of peer coaching, I was so focussed on the outcomes for the recipients of the coaching and their academic performance that I missed the opportunity to measure, in a quantitative way, the benefits for the student coaches. In focus groups it became apparent that they too perceived improved academic attainment as a result of providing the coaching. It now seems obvious, but I neglected to make the necessary arrangements to explore the coaches’ academic grades too. If I had, this too could have been included in my ROI formula.