Utilizing Machine Learning To Solve a $60M Inventory Loss Problem


A closer look at how the emerging tech and analytics teams applied machine learning to Operational Technology, increasing product identification accuracy to 90%

Snapshot

One of the world’s leading food processors was losing millions of dollars annually because of an overly manual, error-prone process that resulted in a higher-than-tolerable incidence of misidentified and lost inventory on the factory floor. Leaders knew the existing process was woefully inefficient and riddled with repeated patterns of human error, and nearly all agreed that significant change was needed: a much more efficient system that reduced the load on factory staff while increasing the accuracy and reliability of the labeling process.

Increased inventory accuracy to 90%, a 20% increase over the previous process
Solved a critical component of what amounted to a $64M inventory problem
Initial prototype and data model deployed to production facilities nationwide
Services

Research

Prototyping

Technical Architecture

Data Modeling

Upskilling


We’re trying to apply the most cutting-edge technology in order to derive new insights so that we can enable new ways of working.

— VP, Emerging Technology & Analytics

The Challenge
What We Were Up Against

The existing system was both manual and error-prone

Accurately tracking products as they move through production facilities across the country is critical; the consequences of misidentified or mislabeled products—typically a result of human error—are immediate, resulting in spoiled products and a lot of wasted money. Still, the identification and labeling processes used at most of the plant locations across the country were too dependent on human accuracy. 

Tracking inventory through this particular production facility was a largely manual process that relied heavily on human judgment before critical data was entered into a system. What’s more, that process wasn’t always accurate. In fact, more often than not, it was wrong, which led to a number of unwanted outcomes, including both under- and over-processing products. Each of these outcomes cost the company millions of dollars.

Spotlight ———

For all involved, the challenge was difficult: How might we design a more modern, efficient system that (1) drastically increases the efficiency and accuracy of critical product identification workflows, and (2) demonstrates a clear, quantifiable improvement over what exists today?

Any new solution would first need to pass an initial test

Together, we quickly zeroed in on a number of potential solutions that might help mitigate the product identification and processing errors, many of which utilized machine learning (ML) and computer vision (CV) to increase the accuracy and efficiency of the identification process. Still, each of these solutions was untested and unproven. So, we set out to quickly prototype and test in a real-world environment to prove or disprove the efficacy of such a solution before investing more fully. The goal was simple: test and deploy the ML and CV models in a small, controlled fashion before a larger planned rollout across other processing facilities.

We ran up against a few other challenges, too
01—
There wasn’t any existing training data, so we had to create it from scratch

Bespoke enterprise machine learning initiatives rarely come with existing training data, and this effort was no exception. Most notably, there wasn’t an existing way to get an overhead view of the packed product inside the plant; we had to devise a way to capture that data ourselves. In short, we associated the timestamps from the weight scale with the timestamps from overhead streaming video to generate a base training data set. Finally, because the images were known to be imperfect, we validated every single one and balanced the training set so that the various classes were represented equitably, ensuring quality results.
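The timestamp-association and balancing steps described above can be sketched roughly as follows. This is a minimal illustration, not the project’s actual code: the function names, the two-second matching tolerance, and the downsample-to-smallest-class strategy are all our own assumptions.

```python
from bisect import bisect_left
from collections import defaultdict
import random

def label_frames(scale_events, frame_times, tolerance_s=2.0):
    """Pair each weigh-in event with the nearest overhead video frame.

    scale_events: list of (timestamp_s, sku) tuples from the weight scale
    frame_times:  sorted list of frame timestamps (seconds) from the video
    Returns a list of (frame_index, sku) training labels.
    """
    labels = []
    for ts, sku in scale_events:
        i = bisect_left(frame_times, ts)
        # Consider the frames on either side of the event and keep the closer one
        candidates = [j for j in (i - 1, i) if 0 <= j < len(frame_times)]
        best = min(candidates, key=lambda j: abs(frame_times[j] - ts))
        if abs(frame_times[best] - ts) <= tolerance_s:
            labels.append((best, sku))
    return labels

def balance(labels, seed=0):
    """Downsample each SKU class to the size of the smallest class."""
    by_sku = defaultdict(list)
    for frame, sku in labels:
        by_sku[sku].append(frame)
    n = min(len(frames) for frames in by_sku.values())
    rng = random.Random(seed)
    return [(f, sku) for sku, frames in by_sku.items()
            for f in rng.sample(frames, n)]
```

In practice, a validation pass over every matched image (as the team did here) would sit between these two steps, dropping mislabeled pairs before balancing.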

02—
We didn’t quite know how human staff would respond to machine augmentation

We simply didn’t know how staff (those working inside the production facilities) would actually take to a new, tech-driven solution. What’s more, there aren’t yet any well-defined rules or best practices to follow here. So, to give ourselves the best chance at success, we worked closely with staff to create a more efficient experience without the complexity one might expect from a machine learning solution. Doing so not only created a better solution from the start, but also instantly generated the next round of training data to dial in the model even further.

03—
There were a lot of potential “gotchas” and unknowns when it came to deployment

It’s common practice to deploy transactional user software in the cloud, but it wasn’t that simple in this case: there wasn’t a straightforward method to deploy an intricate computer vision system, complete with live video streams, machine learning, and user-interaction dialogs, within a production facility. Ultimately, we evaluated a number of options, which, coupled with the strategic aims of the effort, informed a truly modern combination of services used to bring the solution to life.


The driving goal of this effort—and our number one priority—was to find a way to correct the existing inventory loss issues. We knew there was some promise in using tech like computer vision and machine learning, coupled with the most modern infrastructure, to devise a solution that worked well within a really complex environment. Ultimately, the fusion of design, machine learning engineering, DevOps and production operations gave us a fighting chance.

— Colin Shaw, Director of Machine Learning, RevUnit

The Partnership

How We Tackled the Challenge, Together
Phase 01: Research
Timeline: 4-5 Weeks
Key Activities:

First and foremost, we had to understand the existing processes and workflows in each production facility. So, we conducted several “ride along” visits, carefully watching how different facilities each handled the same tasks. Additionally, we spent time with those actually doing the work, observing and asking questions along the way. These field visits and interviews, which were but a small part of our initial research, gave us enough of a starting point to move forward quickly and confidently.

Phase 02: Data Modeling & Prototyping
Timeline: 3 Weeks
Key Activities:

Next, we moved quickly to develop a machine learning model that could consume real-world training data and power a working prototype. First, we began collecting critical training data from cameras stationed at various locations inside the production facilities. Second, we created multiple training sets while simultaneously validating their accuracy with several trained sets of eyes. Finally, through repeated trial and error, we arrived at a reliable data model for prototyping purposes. After just three weeks, we had a functional model working at 90% accuracy.

Phase 03: Delivery & Upskilling
Timeline: 2 Months
Key Activities:

Lastly, we created a front-end application connected to the training model so that all involved could showcase technical feasibility and accuracy to senior leadership. In effect, the front-end UI allowed us to show how the product identification process could be vastly improved using a combination of machine learning and computer vision, allowing plant workers to easily verify accuracy while focusing on higher-level tasks. Finally, as part of the handoff of the model and prototype, we spent several weeks working hand-in-hand with a variety of internal teams, training each on the tools and skills needed to maintain and scale the solution to other production facilities.


We’re trying to apply technology everywhere we can as a company to drive down costs and drive up efficiency and business value.

— VP, Emerging Technology & Analytics

The Solution

What We Created Together
A functional prototype that’s helping solve a $64M problem

Together, we quickly developed a machine learning model and accompanying front-end UI that allows production-plant staff to more efficiently and accurately label massive quantities of products so that they can be processed correctly. The functioning model produced a 90% accuracy rate after just three weeks of development, testing, and validation, which represents an improvement of more than 20% when compared with the existing, manual process. 

In short, the model is able to very quickly identify a specific product type, the associated stock-keeping unit (SKU), and a variety of other associated data inputs, all of which are absolutely critical for accurate inventory processing. An automated scale then records the weight of the product before an operator simply verifies the correct weight and SKU number. The system is also able to more accurately detect flaws or impurities in either the product or the processing equipment, which has led to elevated food safety measures.
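The identify-weigh-verify flow above could be wired together along these lines. This is a hypothetical sketch, not the deployed system: the confidence floor, the per-SKU weight ranges, and the operator-confirmation callback are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    sku: str
    product_type: str
    confidence: float  # model's confidence in [0, 1]

CONFIDENCE_FLOOR = 0.90  # below this, route to manual identification

def verify(prediction, scale_weight_kg, expected_weights, operator_confirms):
    """Combine the model's SKU prediction with the automated scale reading.

    expected_weights:   SKU -> (min_kg, max_kg) acceptable weight range
    operator_confirms:  callback asking the operator to approve the label
    Returns the SKU to record, or None to fall back to manual handling.
    """
    if prediction.confidence < CONFIDENCE_FLOOR:
        return None  # model unsure: send to manual identification
    lo, hi = expected_weights.get(prediction.sku, (0.0, float("inf")))
    if not (lo <= scale_weight_kg <= hi):
        return None  # weight inconsistent with the predicted SKU
    if operator_confirms(prediction, scale_weight_kg):
        return prediction.sku
    return None
```

The design keeps the operator as the final check, which mirrors the case study’s point: the model does the identification work, while plant staff simply verify the weight and SKU.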

New Capabilities

Introduced machine learning into an overly manual and outdated process

Utilized existing infrastructure and hardware to drastically improve efficiency

Created near real-time access to product inventory data, more accurate reporting

Increased product identification accuracy to 90% (a massive improvement)

Allowed plant employees to focus their efforts on higher-level tasks

Internal teams acquired new skills to allow for ongoing maintenance and upkeep

Increased confidence among senior leadership teams, specifically operations leaders

Modeled how to work quickly and efficiently, shipping a prototype at scale

Spotlight ———

The process and results gave senior leadership more confidence; specifically, they now had another “proof point” that objectively showed how more novel emerging technologies could be used to potentially solve a variety of critical operational challenges at scale. What’s more, the success of the prototype helped to unlock additional investment for similar initiatives.

The Results

What We Accomplished Together
Increased inventory accuracy to 90%, a 20% increase over the previous process
Solved a critical component of what amounted to a $64M inventory problem
Initial prototype and data model deployed to production facilities nationwide

Meet the leadership team at RevUnit.

Josh Stanley
President

Josh aligns leadership goals across the organization, and works diligently on growing RevUnit into a world-class digital solutions provider for the future of work.

You’ll likely catch him collaborating with a team on client work with a Rockstar energy drink in hand, or whiteboarding ideas — sometimes simultaneously.

Joe Saumweber
Co-founder

Joe is a thought leader in enterprise product development. He’s spent the last eight years working alongside Fortune 500 partners to create digital products used by millions of employees.

As if founding a company isn't enough, Joe also owns a farm and brings in fresh eggs for the Bentonville office every once in a while.

Michael Paladino
Co-founder

As a Co-founder, Michael has been at RevUnit from the beginning, which means he’s done just about everything from writing code, to taking out the trash, to leading RevUnit’s Marketing efforts. 

That extends to his competitive life, too. Michael won a meat eating competition in college and placed 3rd in his 8th grade math competition. His next goal: Beat Will in a game of NBA Jam.

Brian Hughes
CFO

Brian is a CFO, technologist, futurist, and exponential leader. He is passionate about enabling performance through building an exponential organization (ExO).

As our resident Renaissance Man, Brian also speaks fluent French, has unbeatable ping pong skills, and runs a local chapter of Singularity University.

Joe Payne
COO

Joe is focused on leveraging the power of design and systems-thinking to empower people. He’s dedicated to helping people use those capabilities to make meaningful differences in the world.

When not at work, Joe spends his time woodworking and fixing up his motorcycle. He just started wearing a leather jacket to reflect his alter-ego.

Will Bowlin
CTO

Will heads up the technology team. His ultimate goal: bolstering our ability to help our enterprise clients leverage technology to solve their most pressing business challenges. 

Will likes to play NBA Jam on our arcade machine for a break at work, and you can often hear his yells across the office. He’s always on fire.
