Webcast question: How do you make sure that Vision AI works in the real world?
Tim Hartley responds
The problem we have is that if we add to the number of interventions (the times when a member of staff has to come over and do something) on top of those that already happen, the efficiency of the whole self-checkout experience goes down.
So shoppers will hate it, the store assistants will hate it, and they'll switch it all off. There are examples of that happening.
So not thinking about how this integrates into the real world puts you in a really bad place. So absolutely, what we're looking at is integrating this to a level where the retailer can control the balance of what we call nudges versus alerts.
A nudge is something that doesn't involve the store assistant at all; it's a message to the shopper saying 'did you mean to scan something?' in a very non-aggressive way. Studies have shown that a pretty high percentage of people, when presented with the opportunity to do the right thing, will do the right thing.
So you can reduce shrink and you can reduce interventions. The rules the retailer sets and controls allow them to say 'You know what, if after the first nudge or maybe the second nudge they still haven't corrected themselves, then we should have an intervention at that point'.
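To make the nudge-versus-alert escalation concrete, here is a minimal Python sketch of the kind of retailer-controlled rule being described. Everything in it (the EscalationPolicy class, the method names, the two-nudge threshold) is a hypothetical illustration, not the actual product's API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a retailer-controlled escalation rule.
# Class, method, and threshold names are illustrative assumptions,
# not the actual system discussed in the webcast.

@dataclass
class EscalationPolicy:
    max_nudges: int = 2   # retailer-tunable: nudges allowed before an alert
    nudges_sent: int = 0

    def on_suspected_missed_scan(self) -> str:
        """Decide how to respond to a suspected missed scan."""
        if self.nudges_sent < self.max_nudges:
            self.nudges_sent += 1
            # A nudge goes to the shopper only; no staff involvement.
            return "nudge: 'Did you mean to scan that item?'"
        # Repeated, uncorrected events escalate to a staff intervention.
        return "alert: send a store assistant to the lane"

    def on_shopper_corrected(self) -> None:
        """The shopper fixed the issue, so reset the escalation state."""
        self.nudges_sent = 0


policy = EscalationPolicy(max_nudges=2)
print(policy.on_suspected_missed_scan())  # first nudge
print(policy.on_suspected_missed_scan())  # second nudge
print(policy.on_suspected_missed_scan())  # now escalates to an alert
```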
But you need to be able to tune that even while the shop is running. If it gets busier, for example, you're suddenly going to care much more about efficiency and perhaps a little less about shrink. So being able to dynamically change the sensitivity of the system is another important part of it.
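The dynamic tuning mentioned here could look something like the sketch below: raise the confidence the system needs before flagging an event as the lanes get busier, trading a little shrink detection for throughput. The function names, the linear formula, and the queue-length signal are assumptions for illustration only.

```python
# Hypothetical sketch of dynamic sensitivity: as queues grow, the system
# needs more confidence before it nudges or alerts, so fewer shoppers are
# interrupted at peak times. All names and constants are assumptions.

def flagging_threshold(queue_length: int,
                       base: float = 0.70,
                       ceiling: float = 0.95,
                       step: float = 0.05) -> float:
    """Return the minimum detection confidence required to flag an event.

    Quiet store (queue_length = 0): threshold stays at `base`, catching
    more potential shrink. Busy store: threshold climbs toward `ceiling`,
    favouring checkout efficiency over shrink detection.
    """
    return min(ceiling, base + step * queue_length)


def should_flag(confidence: float, queue_length: int) -> bool:
    """Flag a suspected missed scan only if the model is confident enough
    for the current level of store traffic."""
    return confidence >= flagging_threshold(queue_length)


print(should_flag(0.80, queue_length=0))  # True: quiet store, act on it
print(should_flag(0.80, queue_length=6))  # False: busy store, let it go
```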
So yeah, to target the real-world problem you have to think along these lines: how it's going to fit into how people want to do what they want to do.