Operationalizing Computer Vision in retail: from concept to ROI
8th August 2024
Transforming retail with computer vision: from visibility to value
In the evolving landscape of FMCG retail, the integration of AI, and particularly computer vision (which enables computers to derive information from images, video and other inputs), is transforming how we interact with physical spaces.
Historically, brick-and-mortar locations like supermarkets have been akin to closed boxes, with limited visibility into customer behaviour and store operations. This opacity has posed significant challenges in optimising store layouts, enhancing customer experience, and reducing loss.
How is computer vision used in retail?
Advancements in computer vision are now offering unprecedented insights, allowing retailers to ‘see’ inside their environments. By tracking movement, monitoring inventory, and analysing shopper behaviour, computer vision not only opens the box but also ensures that retailers can extract actionable value from these insights, paving the way for more efficient and profitable operations.
In fact, IHL predicts that the AI revolution will bring an economic impact of $9.2 trillion to retail, with 72% of the overall benefit going to retailers over $1 billion in size.
As we rolled out the SeeChange platform, we witnessed firsthand what computer vision is really needed for and how it is useful in practice. The questions have moved from ‘what can we see?’ to ‘why do we want to see it?’, ‘what will we do with the visual intelligence being delivered?’ and, crucially, ‘is there enough commercial benefit to make implementation worthwhile?’
Operationalizing computer vision: laying the foundations
As a computer vision and vision AI provider, we have engaged with businesses across diverse industries, each facing unique challenges. Despite these differences, we have found that the underlying principles for operationalizing computer vision remain consistent across use cases.
To make best use of computer vision, it is essential to understand where its application fits within existing processes. This involves asking a handful of relatively simple questions:
- What current processes can computer vision make more efficient?
- How are these processes currently being executed?
- What are the existing reporting methods?
- What actions are currently being taken based on these processes?
- Does the benefit of computer vision outweigh the cost of deployment?
These questions may seem straightforward, but they are fundamental to a successful deployment.
Case Study: Spill detection and resolution in retail with computer vision
Based on a real deployment, let’s consider a hypermarket retailer aiming to improve in-store health and safety by deploying a computer vision AI hazard detection solution. Alerts for in-aisle liquid spills are sent, with evidence clips, to store colleagues for resolution via a tablet located in the checkout area, and an audit trail is stored with HQ.
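To make the workflow concrete, the sketch below shows what such a spill alert and its HQ audit-trail record could look like. It is a minimal illustration only; the field names and structure are assumptions, not the schema used in the actual deployment.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid


@dataclass
class SpillAlert:
    """Hypothetical payload pushed from the vision pipeline to the store tablet."""
    store_id: str
    aisle: str
    evidence_clip_url: str                     # short clip backing the detection
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    alert_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    acknowledged_by: Optional[str] = None      # colleague who accepted the alert
    resolved_at: Optional[datetime] = None     # when the spill was cleared


def audit_record(alert: SpillAlert) -> dict:
    """Flatten an alert into the audit-trail entry retained by HQ."""
    return {
        "alert_id": alert.alert_id,
        "store_id": alert.store_id,
        "aisle": alert.aisle,
        "detected_at": alert.detected_at.isoformat(),
        "acknowledged_by": alert.acknowledged_by,
        "resolved_at": alert.resolved_at.isoformat() if alert.resolved_at else None,
        "time_to_resolve_s": (
            (alert.resolved_at - alert.detected_at).total_seconds()
            if alert.resolved_at else None
        ),
        "evidence_clip_url": alert.evidence_clip_url,
    }
```

The audit trail is exactly what makes the adoption problem described next so acute: once an unanswered alert is on record, ignoring it is no longer a neutral act.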
Employees initially find the new tablet devices for spill notifications novel. However, they soon begin to ignore them, as the devices add to their workload without integrating into their routines. This places the retailer in a difficult position: should an accident now happen in store, there would be evidence that the hazard had been identified and notified, and that the lack of response subsequently led to a customer incident. Action has to be taken to realise the value.
Add to this the fact that it can take up to three years for a claim to be made, and that the claim is paid by HQ rather than the store, and there is a natural dissociation between cause and effect. This example highlights the importance of thoroughly understanding who benefits from a deployment and of integrating computer vision powered solutions with existing workflows to ensure successful adoption and real-world impact.
Avoiding distractions: ensuring effective computer vision implementation in retail
A crucial aspect of deploying computer vision is asking “so what?”. Computer vision AI-enabled solutions can tell you what happened and when, but you need to decide how to use this information to improve efficiency.
We were recently asked if computer vision could be used to detect when bins are full and send alerts to the cleaning staff.
The answer is invariably yes, but for what purpose, and at what cost? The building already had a cleaning crew patrolling the floors hourly. Introducing computer vision to monitor bin levels could disrupt an already efficient process, and interrupting the cleaning crew’s scheduled routine with frequent ‘bins need emptying’ alerts would likely lead to frustration. Add to that the cost of training a model for this purpose and the compute required to run it: if the outcome is the same cleaning route that already exists, the deployment serves no purpose.
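As a rough way of framing that trade-off, a back-of-envelope comparison along the lines of the sketch below is often enough to decide whether a use case is worth pursuing. All inputs are placeholders to be supplied by the business; this is a framing aid, not a financial model.

```python
def simple_roi_check(
    annual_benefit: float,        # e.g. labour saved or losses avoided per year
    model_training_cost: float,   # one-off cost to train or tune the model
    hardware_cost: float,         # cameras, edge devices, tablets (one-off)
    annual_compute_cost: float,   # inference, bandwidth and storage per year
    annual_support_cost: float,   # maintenance and operational overhead per year
    years: int = 3,
) -> dict:
    """Back-of-envelope check: does the benefit outweigh the cost of deployment?"""
    total_cost = model_training_cost + hardware_cost + years * (
        annual_compute_cost + annual_support_cost
    )
    total_benefit = years * annual_benefit
    return {
        "total_cost": total_cost,
        "total_benefit": total_benefit,
        "net_value": total_benefit - total_cost,
        "worthwhile": total_benefit > total_cost,
    }
```

In the bin-monitoring example, the annual benefit is close to zero because the hourly patrol already covers the task, which is why the last question in the checklist above matters so much.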
This scenario is common when scoping out potential computer vision and vision AI deployments. As humans, once we understand what computer vision can do, we quickly start imagining new scenarios and “it could also do this” ideas. One of the most challenging aspects of operationalizing computer vision is finding the right use case with the best potential to deliver ROI, and sticking with it rather than allowing the ‘can it also’ mentality to derail projects and dilute the impact.
What does it mean to successfully operationalize computer vision for retail?
Forget fancy algorithms: successful Vision AI is about making it work in your world.
It’s a full-cycle approach, seamlessly integrating with existing workflows or creating new, user-friendly ones. The key? Start with a clear vision and understand the value of deployment: does the business case for the technology deliver the right cost-benefit balance? As mentioned earlier, identify your problem and determine how Vision AI can solve it efficiently.
Take self-checkouts for example.
Different retailers have different goals – reducing fraud, speeding up checkouts, or minimizing interventions. Whatever the challenge, successful Vision AI integration requires mapping your current process and pinpointing where it can add value.
By outlining rules for alerts, nudges, and interventions, you control how technology enhances your existing workflows, addressing specific pain points in each store.
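As an illustration of what such rules might look like, the sketch below maps hypothetical self-checkout events to responses. The event names, actions and escalation logic are assumptions for illustration, not a real product configuration; the point is that the retailer, not the technology, decides when to nudge the shopper and when to call an attendant.

```python
# Hypothetical per-event rules for a self-checkout deployment. The event names,
# actions and escalation logic are illustrative assumptions, not a real product
# configuration.
RULES = {
    "item_not_scanned": {"first_time": "nudge_customer", "repeat": "alert_attendant"},
    "weight_mismatch": {"first_time": "nudge_customer", "repeat": "show_context_to_attendant"},
    "age_restricted_item": {"first_time": "require_attendant_approval",
                            "repeat": "require_attendant_approval"},
}


def decide_action(event_type: str, repeat: bool) -> str:
    """Return the configured response for an event, defaulting to a gentle nudge."""
    rule = RULES.get(event_type, {})
    return rule.get("repeat" if repeat else "first_time", "nudge_customer")
```

Because the rules sit outside the vision models themselves, each retailer, or even each store, can tune when a shopper is nudged and when an attendant is called without retraining anything.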
Empowering, not replacing, retail employees with computer vision and vision AI
New technology, automation and job displacement have been concerns since the industrial revolution, and computer vision, like AI in general, generates the same debate.
Yet many successful computer vision deployments require human oversight and expertise. The focus should be on meaningful workflows (existing or new) and asking, “Where can humans best contribute?”.
Case study: self-checkout interventions by employees
Self-checkout blocks often require employee intervention without context. Attendants manage multiple stations, and this can cause delays leading to frustration for both them and customers.
Computer vision and AI-powered solutions can guide customers to self-correct, reducing unnecessary interventions. Similarly, weigh-scale discrepancies are often cleared by attendants without proper investigation due to workload pressures. AI self-checkout solutions can streamline this process, providing important context, highlighting issues and empowering employees to make informed decisions.
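As a sketch of what ‘providing context’ could mean in practice, the example below bundles a hypothetical weigh-scale discrepancy into a summary an attendant can act on, rather than a bare prompt to clear the error. All field names and the summary format are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class WeightMismatchContext:
    """Hypothetical context bundle surfaced to an attendant instead of a bare
    'clear weight error' prompt. Field names are illustrative assumptions."""
    station_id: str
    scanned_item: str          # what the till believes was scanned
    recognised_item: str       # what the vision model sees on the scale
    confidence: float          # model confidence for the recognised item
    expected_weight_g: float
    measured_weight_g: float
    evidence_clip_url: str     # short clip of the scan for quick review


def summarise_for_attendant(ctx: WeightMismatchContext) -> str:
    """One-line summary shown on the attendant's device."""
    return (
        f"Station {ctx.station_id}: scanned '{ctx.scanned_item}' "
        f"({ctx.expected_weight_g:.0f} g expected) but the scale reads "
        f"{ctx.measured_weight_g:.0f} g; vision suggests '{ctx.recognised_item}' "
        f"({ctx.confidence:.0%} confidence)."
    )
```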
The result? Fewer interruptions, less stress, and better on-the-job decision-making for employees. By alleviating burdens and providing context, AI self-checkouts become a valued partner, not a threat.
Operationalizing computer vision and AI
The image above outlines the key stages of operationalizing computer vision and AI deployments, each crucial for success. Throughout the process, collaboration is key: this is as much an internal business process as it is a collaboration with the solution provider.
Case Study: AI-powered fresh produce recognition
Imagine a retail operations team wants to use an AI fresh produce recognition solution at self-checkouts, while another team plans a separate solution for the aisles. Disconnected efforts like this create a fragmented system with isolated model learning and solutions that address a single issue without broader benefit.
As isolated deployments grow, they demand more processing power and bandwidth, and place an unnecessary hardware and maintenance burden on the retailer.
The future of computer vision in retail and beyond
The potential of computer vision is easy to extrapolate. Imagine a spill in a supermarket aisle being detected, prompting an automated clean-up by a robot. This not only addresses the issue smoothly and efficiently but also frees up employees to focus on more complex tasks.
The key to unlocking this future lies in a thoughtful and strategic approach. By applying the lessons from our experience, retailers can quickly capture the value of computer vision to drive efficiency, improve customer experiences, and empower their workforce.