I can’t get a clear view on how much I should be failing. Thomas Edison said, ‘I haven’t failed, I’ve just found 10,000 ways that won’t work.’ But equally, a colleague and business leader of mine recently argued that it’s a myth that you learn more from failure: when the big deal doesn’t come through, the reaction is to drag everyone into the boardroom and pore over what happened, whereas the success is followed by a retirement to the nearest bar and several drinks. He felt the reverse should be true, and that if we only spent more time analysing our successes and putting our failures aside, we’d learn a lot more about what to do next time.

In the context of automated decision support, however, it is clear that there is a massive bias towards measuring and managing the metrics of success, and I think in this lies an opportunity to significantly improve performance: simply put, by shifting that emphasis and spending more time understanding failure.

Measuring success in decision support

There is a natural bias towards measurement of success in automated decision support platforms, whether these are fully automated or involve human interaction. It’s perhaps useful to look at the funnel for all possible customer treatments that arise as a result of decisioning.
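To make the funnel concrete, here is a minimal sketch in Python of the stages discussed below as a simple counting structure. The stage names and the shape of the event log are illustrative assumptions, not taken from any particular decisioning platform.

```python
# Hypothetical funnel stages for customer treatments, best-to-worst attrition order.
from collections import Counter

FUNNEL_STAGES = [
    "all_possible",           # every treatment in the proposition matrix
    "eligible",               # survives eligibility rules
    "prioritized",            # ranked per customer
    "displayed_to_agent",     # surfaced on the agent desktop
    "presented_to_customer",  # agent confirms it was actually discussed
    "accepted",               # the customer said yes
]

def funnel_counts(events):
    """Count how many treatment instances reach each stage.

    `events` is assumed to be an iterable of (treatment_id, stage) tuples
    logged by the decisioning platform.
    """
    counts = Counter(stage for _, stage in events)
    return {stage: counts.get(stage, 0) for stage in FUNNEL_STAGES}
```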

What does analysis of each of these points tell us – and what tends to get done?

All possible treatments – this is pretty easy – it’s your proposition matrix, although implicit in this and not shown here is the dropout between ‘all’ and ‘eligible’. 

It’s useful to take a normalized view of offer performance with eligibility factored out. Offer A might generate ten times the accept volume of offer B, but if current eligibility rules mean offer B goes forward for consideration a hundred times less often, then per opportunity B is actually the stronger performer.

The outcome is to review eligibility rules where possible to level the playing field and give potentially more effective offers more opportunities.
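As a rough illustration of that normalization, here is a short sketch, assuming we can count accepts and eligible considerations per offer; the field names and numbers are illustrative only.

```python
# Normalize accept volume by the number of eligible considerations per offer.
def normalized_performance(offers):
    """`offers` maps offer_id -> dict with 'accepts' and 'eligible_considerations'."""
    results = {}
    for offer_id, stats in offers.items():
        considered = stats["eligible_considerations"]
        results[offer_id] = stats["accepts"] / considered if considered else 0.0
    return results

# Offer A has 10x the accept volume, but offer B is considered 100x less often,
# so per opportunity B is the stronger performer.
example = {
    "offer_A": {"accepts": 1000, "eligible_considerations": 100000},
    "offer_B": {"accepts": 100,  "eligible_considerations": 1000},
}
print(normalized_performance(example))  # offer_A: 0.01, offer_B: 0.10
```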

Prioritized – this is the stage at which the eligible treatments are ranked for each individual customer.

This stage is not usually analysed – the next category, ‘displayed’, is seen as more relevant – but key insights into the business logic can sit here. Most typical is the concept of dominant variables and the offers associated with them. To illustrate: the Miami Dolphins might be a good team, but because of the league structure they play a disproportionate number of games against the New England Patriots, who consistently perform and rate highly. Analogously, an offer may appear to be underperforming simply because it is consistently being ranked alongside offers that are much stronger.

The starting point is simply understanding whether this is happening at all.
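One way to check is to count, for each offer, how often one particular rival sits above it in the per-customer ranking. The sketch below assumes the platform can export the ranked list of offers for each customer; the structure of that export is an assumption.

```python
# Detect 'dominance': for each offer, find the rival that most often outranks it
# and the share of appearances in which that happens.
from collections import defaultdict

def dominance_report(ranked_lists):
    """`ranked_lists` is an iterable of per-customer lists of offer ids, best first."""
    outranked_by = defaultdict(lambda: defaultdict(int))
    appearances = defaultdict(int)
    for ranking in ranked_lists:
        for pos, offer in enumerate(ranking):
            appearances[offer] += 1
            for better in ranking[:pos]:
                outranked_by[offer][better] += 1
    report = {}
    for offer, rivals in outranked_by.items():
        rival, count = max(rivals.items(), key=lambda kv: kv[1])
        report[offer] = (rival, count / appearances[offer])
    return report
```

An offer that is outranked by the same rival in, say, 90% of its appearances may not be underperforming at all – it may simply never get a clear run.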

Displayed to the agent (for environments with facilitated interaction)

The top three offers are often easy to report on, though things are complicated when agents also have the option to review all available offers. This is about understanding the degree to which the decision logic is outperforming the front-line agent’s selection skills, and the degree to which this is distorted by positioning. If an offer is not spontaneously displayed but has a high level of selection, why is this? This is the front line telling you that the relevance/prioritization isn’t working for them. It also implies that the weighting of such an over-reaching offer might be worth reconsidering.
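A simple way to surface this is to measure, per offer, how often it is selected despite not appearing in the spontaneously displayed top three. The sketch below assumes each interaction logs the ordered list of displayed offers and the agent’s eventual selection; those field names are illustrative.

```python
# Compare display position with agent selection behaviour.
from collections import defaultdict

def selection_vs_display(interactions, top_n=3):
    """`interactions` is an iterable of dicts with 'displayed' (ordered list of
    offer ids) and 'selected' (offer id or None)."""
    stats = defaultdict(lambda: {"selected": 0, "selected_outside_top": 0})
    for call in interactions:
        chosen = call.get("selected")
        if chosen is None:
            continue
        stats[chosen]["selected"] += 1
        if chosen not in call["displayed"][:top_n]:
            stats[chosen]["selected_outside_top"] += 1
    return {
        offer: s["selected_outside_top"] / s["selected"]
        for offer, s in stats.items()
    }
```

An offer with a high ratio is being hunted out by agents despite its ranking – exactly the signal that its weighting deserves a second look.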

Presented to the customer

This tends to be a tough metric to collect accurately, largely because it requires an explicit human action that can’t be corroborated by the decision software. I know the agent eyeballed it (‘displayed to the agent’), but was it actually discussed? How was it discussed? All I can collect is that the agent selected ‘offer was presented’, and even that requires a data collection point in the call flow. It also means there is an operational-efficiency imperative to assume that if ‘presented to the customer’ isn’t selected, the offer was not discussed.

All of this can be covered to a limited extent by call monitoring, but in the author’s experience this is, at best, marginally effective, and often doesn’t even prevent situations where ‘presented’ is habitually selected just to move through the call flow, even when the offer was never actually discussed.
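If per-agent logs of the ‘presented’ flag are available, a crude screen for habitual clicking is to flag agents whose flag rate is an outlier against the population. This is only a sketch under that assumption, and an outlier is a prompt for targeted call monitoring, not proof of bad behaviour.

```python
# Flag agents whose 'presented to the customer' rate is far above the norm.
from statistics import mean, pstdev

def suspicious_agents(flag_rates, z_threshold=2.0):
    """`flag_rates` maps agent_id -> share of calls where 'presented' was set."""
    rates = list(flag_rates.values())
    mu, sigma = mean(rates), pstdev(rates)
    if sigma == 0:
        return []
    return [
        agent for agent, rate in flag_rates.items()
        if (rate - mu) / sigma > z_threshold
    ]
```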

And this is a shame, as this is probably the most important metric you can possibly capture – if you follow Edison’s thinking. This is the immediate point of failure – for whatever reason, this was your best shot and it didn’t engage the customer. Was it considered, or did it miss by a mile? This is the point on the performance curve where there is the most learning to be had, and anything that can be done to get feedback from the interaction into logic and offer design is going to reap rich dividends.

Offer accepted / Offer rejected

‘Offer rejected’ is really another cut of ‘presented to the customer’ – by default, any presented offer that is not accepted. ‘Offer accepted’ is of course our ultimate measure of success, and the metric most people dwell on, but I’d argue that without context it is over-indexed in reporting. This isn’t to say it’s not important – obviously – but it is measured so much largely because it is easy to measure. Offer accepts, whether as rates or volumes, can be pretty meaningless where there are wide differences between offer categories. Some offers are no-brainers for the customer, e.g. prepaid top-up incentives for customers who (unbeknownst to you) were going to top up anyway. Others may only ever be made to customers who already have two feet and most of their body out of the door – they may be great offers, but they are only being used in extremely difficult circumstances.

A really good illustration of the shortfalls in measuring success comes from the medical world, in individual surgeons’ success rates. Even within ostensibly comparable operations, the very best surgeons take on cases that less skilled individuals will not attempt, and accordingly fail more often. What does a metric on successful completion rate actually tell us, then? Virtually nothing, unless very tight constraints around measurement and control are in place.
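One modest step towards adding that context is to break accept rates out by some measure of difficulty, for example a churn-risk band, so offers used only in hard situations are not punished for it. The segment labels and field names below are illustrative assumptions.

```python
# Accept rate per (offer, difficulty segment) rather than a single raw rate.
from collections import defaultdict

def accept_rate_by_segment(outcomes):
    """`outcomes` is an iterable of dicts with 'offer', 'segment' and 'accepted' (bool)."""
    tally = defaultdict(lambda: [0, 0])  # [accepts, presentations]
    for o in outcomes:
        key = (o["offer"], o["segment"])
        tally[key][1] += 1
        if o["accepted"]:
            tally[key][0] += 1
    return {key: accepts / total for key, (accepts, total) in tally.items()}
```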

Conclusion

Don’t get hung up on measuring success alone, as it doesn’t give the full picture. Learn to break out of the habit of only measuring the easy things; it’s a useful maxim that the more difficult it is to measure, the more the result is going to tell you. Don’t be afraid to discuss failure because it is the bread and butter of machine-based learning and the basis of future success.

And finally, be like Thomas Edison: it’s only really a failure if you don’t learn from it.

What are some ‘failures’ you’ve learned from when measuring success in decision support? 



Alex Ford

Consultant

Alex Ford works in CVM transformation, and has implemented change from working level to board level in many operators. He has worked in telecommunications all his life, including six years in Asia, and is equally at home in leadership roles in technology and marketing. At Everything Everywhere in the UK he laid the foundations for a world-leading capability in 121 marketing. His experience includes a range of operational and consultancy roles in Asia, Africa, Europe and the US. He has a particular interest in how human and technology-driven decision making interplay, and how to build on the strengths and weaknesses of both. Alex is currently working with a major telco in the US on retention operations.