Industry Insights

The Rise of AI-Based Shopping Agents, Part 2

February 15, 2024

Welcome to our Vision for Personalized AI Shopping Agents!
You can find Part 1 here on our website.  

Precision nutrition[i] research and precision manufacturing[ii] are two rapidly evolving fields that are reshaping the landscape of product proliferation, particularly in the food and health sectors. 

Precision nutrition research is a burgeoning field that leverages genomics, metabolomics, and other advanced technologies to understand how individual genetic and metabolic variations influence dietary needs and responses. This research is paving the way for actionable personalized dietary recommendations and interventions, which in turn are driving the proliferation of new, tailored food products.  

On the other hand, precision manufacturing is revolutionizing how these new products are produced. It uses advanced machinery and techniques to produce goods with a high degree of accuracy and consistency. This approach is particularly beneficial for personalized food products, where precision is key to ensuring that each product meets the specific dietary needs of the individual consumer.

The convergence of precision nutrition research and precision manufacturing is creating a new wave of product proliferation. This proliferation is not just about increasing the number of products in the market, but also about enhancing the quality and specificity of these products to meet individual dietary needs. This trend is expected to support data-driven shopping, where consumers can make informed decisions based on their unique dietary requirements and preferences. 

The alignment between a shopper’s unique needs and the specificity of new products designed to meet them is a positive development. Yet this proliferation will also increase cognitive load in an already complex domain. Shoppers will need help finding and choosing appropriate products.

[i] Journal of Translational Medicine highlights the potential of precision nutrition to guide the development of personalized dietary interventions for disease prevention and management (link). 

[ii] A report by Capstone Partners reveals that the demand for precision manufacturing has surpassed pre-pandemic levels, indicating a favorable outlook for manufacturers. The report also notes that the increasing need for precise equipment in advanced end markets, coupled with the growing capabilities of precision manufacturers, is expected to drive sector growth in the long term (link). 


In the 1990s, the Universal Product Code (UPC), retailer Point-of-Sale (POS) systems, retailer loyalty systems, market share data, mainframe computing, and data warehousing created a new data foundation. The AI apps built on this foundation led to a significant shift in knowledge, outcomes, and power from brands to retailers.

Today, there is a new data foundation emerging. This foundation both includes and transcends the old foundation. This new foundation is created by market transparency (product, price, and availability), search & social media (search history, health concern data), shopping apps (personal preference data), smartphones (location & wearable data), precision nutrition (genomic, metabolomic, and microbiome data), and personal health data. 

Access to these emerging data sources requires an alignment of ecosystem resources and stakeholders. Purposefully structured for vertical integration into the shopper value equation, these emerging data sources can be mined for implicit and explicit expressions of a shopper’s ever-changing values and context. Once mined, they can be modeled to better quantify the why behind the buy.  

Expect this foundation to unfold with ever increasing precision, context, speed, predictive power, and relevancy. 

AI algorithms extract insights from structured data by translating it into AI features that predict shopper switching and stocking behaviors: price elasticity, emotional potency, awareness (including advertising, promotions, placement, facings, and signage), seasonal trends, comparable pricing, and availability. With price as an interpreter of a shopper’s ever-evolving values (their truths), these models predict a shopper’s likelihood to switch and stock up. Forecast errors help adjust and improve these models, providing a feedback mechanism for better understanding a shopper’s ever-changing values and context.
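To make the forecast-error feedback mechanism concrete, here is a minimal sketch. The demand model (constant price elasticity), the coefficients, and the learning rate are all hypothetical illustrations, not the production method described in the text; the point is only that each forecast error nudges the elasticity estimate toward the value that explains observed sales.

```python
import math

def predict_units(base_units, base_price, price, elasticity):
    """Constant-elasticity demand: units scale with (price/base_price)^elasticity."""
    return base_units * (price / base_price) ** elasticity

def update_elasticity(elasticity, actual, predicted, price_ratio, lr=0.5):
    """Nudge the elasticity estimate in the direction that shrinks forecast error.

    Works in log space, where the model is linear:
    log(units) = log(base_units) + elasticity * log(price_ratio).
    """
    error = math.log(actual) - math.log(predicted)  # log-scale forecast error
    return elasticity + lr * error / math.log(price_ratio)

# Hypothetical item: true elasticity is -2.0, the model's estimate starts at -1.0.
base_units, base_price = 100.0, 4.00
true_elasticity, est = -2.0, -1.0
for _ in range(10):
    price = 3.00  # a promoted price (25% off base)
    predicted = predict_units(base_units, base_price, price, est)
    actual = predict_units(base_units, base_price, price, true_elasticity)
    est = update_elasticity(est, actual, predicted, price / base_price)
print(round(est, 3))  # converges toward -2.0
```

Each pass halves the gap between the estimate and the truth, which is the feedback loop in miniature: forecast, observe, correct.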

Like an archeological record, shopper values are often captured in a retailer’s systems both explicitly (i.e., when a shopper clicks a preferred attribute, such as “organic” from within a shopping app) and implicitly when mined from a retailer’s historical sales data. The objective when mining data for shopper values is to quantitatively understand the why behind a buy.  

When a shopper’s purchase history is aligned with market pricing history, a material part of a shopper’s context can be modeled and known. Ultimately, the goal of AI in this context is to predict a shopper’s tipping point: the price threshold where a shopper switches (or tips) from one quantity to another, one brand to another, one product attribute to another, one channel to another, or one store to another. Structured in this way, information serves as an input into the shopper value equation.
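One simple way to formalize a tipping point is a logistic model of switching probability as a function of the price gap between a shopper's preferred brand and a competitor. The model form and the coefficients below are hypothetical stand-ins for whatever a production system would fit from purchase history:

```python
import math

def switch_probability(price_gap, b0, b1):
    """P(switch) under a simple logistic model of the price gap
    (preferred brand's price minus the competitor's, in dollars)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * price_gap)))

def tipping_point(b0, b1):
    """Price gap at which P(switch) crosses 0.5, i.e. where b0 + b1*gap = 0."""
    return -b0 / b1

# Hypothetical fitted coefficients for one shopper segment:
# b0 captures baseline loyalty, b1 the sensitivity per dollar of gap.
b0, b1 = -1.5, 3.0
gap = tipping_point(b0, b1)
print(round(gap, 2))                              # 0.5 -> a $0.50 gap tips this segment
print(round(switch_probability(gap, b0, b1), 2))  # 0.5 at the tipping point
```

The same structure extends to tipping between quantities, channels, or stores by swapping in the relevant price (or cost) gap.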

While complex, the structured data and AI methods to make tipping points predictable are in operational use at scale today.  

The why behind a shopper’s buying decision takes place within a context. In retail, as with most industries, price is the dominant driver of sales (the dominant context). To understand the “why,” it is therefore paramount to compare like-for-like product pricing, offers, and availability across competing market baskets. This requires highly structured, comparable product information at both frequency and scale (e.g., product attributes normalized by price, size, claims, nutrients, ingredients, packaging, and origin).

Today, B2C price comparison engines focus primarily on higher-ticket items, as opposed to consumables. Comparison information, when it exists, is often limited to product taxonomy (e.g., category, sub-category) or direct UPC/model-number comparisons. It is not structured for attribute-normalized price comparisons. In this way, today’s B2C price comparison engines do not provide the structure needed to support complex like-for-like basket-level price comparisons.
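The difference between taxonomy-level comparison and attribute-normalized comparison is easy to illustrate. In this sketch (the listings, retailers, and attributes are invented for illustration), two organic items in different pack sizes become directly comparable once price is normalized by size, while a non-organic item is excluded from the like-for-like set:

```python
# Hypothetical listings for "like-for-like" organic peanut butter across retailers.
listings = [
    {"retailer": "A", "size_oz": 16, "price": 4.79, "organic": True},
    {"retailer": "B", "size_oz": 28, "price": 7.49, "organic": True},
    {"retailer": "C", "size_oz": 16, "price": 3.99, "organic": False},
]

def unit_price(item):
    """Normalize price by pack size so different sizes compare directly."""
    return item["price"] / item["size_oz"]

# Like-for-like comparison: same attribute profile (organic), normalized by size.
comparable = [x for x in listings if x["organic"]]
best = min(comparable, key=unit_price)
print(best["retailer"], round(unit_price(best), 3))
```

Retailer C is cheaper per ounce, but it fails the attribute match; among the true like-for-like set, the larger pack at retailer B wins on unit price even though its shelf price is higher. That attribute-level normalization is exactly what today's B2C engines lack.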

While not yet available to shoppers, this comparable product information is operational in the B2B world. Retail methodologies for like-for-like basket-level price comparisons remain limited but are becoming common. Recent advancements in AI-driven Natural Language Processing (NLP) for linking like-for-like products enable retail experts (buyers and merchants) to train AI and maintain comparability as the marketplace changes (pack sizes, assortments, formulations, product claims, seasonal variations, taxonomy, cross-channel availability, etc.). The result: retailers now compare and maintain complex attribute-level product catalog matching at ever-increasing precision, frequency, and scale. For the first time, the infrastructure needed to understand the why-behind-the-buy at scale is in place.


Retail is evolving from the era of price, where every product has one price, to the era of prices. In the era of prices, there is only a probability of a given price at any point in time.  

For an AI agent to provide market context at scale requires a near real-time census of the comparable marketplace. The challenge: this requirement introduces an impossible cost barrier. As an example, a near census of the US grocery marketplace on a weekly basis would require a volume of more than 60 billion comparable prices per year (24,000 locations, 277 metropolitan areas, all channels, all grocery products). A census like this would cost over $3 billion per year to maintain (about 0.5% of sector revenue), making it cost prohibitive.
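A quick back-of-envelope check shows how these figures hang together. The inputs are the article's own numbers; the per-observation cost and implied sector revenue are derived from them, not independent data:

```python
# Back-of-envelope check of the census figures above (inputs from the text).
prices_per_year = 60e9   # comparable price observations needed per year
annual_cost = 3e9        # stated cost to maintain the census, USD

cost_per_price = annual_cost / prices_per_year          # implied cost per observation
implied_sector_revenue = annual_cost / 0.005            # cost stated as ~0.5% of revenue
obs_per_store_week = prices_per_year / 52 / 24_000      # observations per store, per week

print(cost_per_price)           # 0.05 -> about five cents per price observation
print(implied_sector_revenue)   # 6e11 -> ~$600B implied sector revenue
print(round(obs_per_store_week))
```

Roughly 48,000 priced items per store per week, at a few cents per observation, is what makes a brute-force census uneconomical.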

This is where AI & optimization come in. They break the cost barrier by about 6,000X (600,000%). How? They make the marketplace predictable. With confidence in predicting competitor price changes, it becomes possible to forecast market prices. With high levels of confidence, forecasted prices can be substituted, in part, for the price collection volume needed to provide a market census. The cost for forecasted prices is measured in compute time and is a fraction of collection costs. In this way, forecasted prices break the cost barrier. Today, this capability empowers retailers to expand competitive visibility while minimizing data collection costs. 

How is this possible? All retailers attempt to project and maintain a consistent price-based market position that reinforces their brand image. With this, there is a consistency to their pricing methods relative to the market. With price transparency, it becomes possible to collect and compare online data at frequency and scale. This then serves as a precision data input for probabilistic AI models and for optimization. Models are then trained to forecast across different channels, locations, and time. Mathematical optimization allocates data collection budget dollars to minimize market uncertainty (forecast errors). 
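The budget-allocation step can be sketched with a greedy heuristic. This is a simplified stand-in for the mathematical optimization described above: it assumes (purely for illustration) that a market's residual forecast uncertainty after n collected samples behaves like sigma² / (n + 1), and sends each marginal collection dollar wherever it reduces total uncertainty most.

```python
import heapq

def allocate_samples(variances, budget):
    """Greedy allocation: each sample goes to the market where it most
    reduces forecast uncertainty, modeled (a simplifying assumption)
    as residual variance sigma2 / (n + 1) after n samples."""
    counts = [0] * len(variances)

    def gain(i):
        # Marginal uncertainty reduction from one more sample in market i.
        n = counts[i]
        return variances[i] / (n + 1) - variances[i] / (n + 2)

    # Max-heap keyed on marginal gain (negated for Python's min-heap).
    heap = [(-gain(i), i) for i in range(len(variances))]
    heapq.heapify(heap)
    for _ in range(budget):
        _, i = heapq.heappop(heap)
        counts[i] += 1
        heapq.heappush(heap, (-gain(i), i))
    return counts

# Hypothetical forecast variances for four markets: volatile markets get
# most of the fixed collection budget; stable ones are mostly forecasted.
variances = [9.0, 4.0, 1.0, 0.25]
print(allocate_samples(variances, budget=12))  # [6, 4, 2, 0]
```

The most predictable market receives no collection budget at all; its prices are forecasted instead, which is the substitution that breaks the cost barrier.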

As we transition into the era of prices, leveraging probabilistic AI models for competitive price forecasting and optimization for intelligent sampling provide a foundation for leveraging price as an interpreter of value.


Both retailers and brands know there are emotionally potent price points that surprise and delight shoppers. When prices drop below these points, unit sales volumes jump upward. The challenge is to understand, with predictive accuracy, what these points are and how high unit sales will jump. This is often a point of guessing, negotiation, and even contention between retailers and brands. The holy grail in this collaboration is agreement on sales volume for a given event.  
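One simple way to locate such a price point in historical data is to scan candidate thresholds and measure the jump in average unit volume at or below each one. The weekly observations below are invented for illustration, and a production model would control for the other demand drivers named earlier (seasonality, awareness, availability):

```python
# Hypothetical weekly (price, units) observations for one item.
history = [
    (4.49, 100), (4.29, 105), (4.09, 112), (3.99, 180),
    (3.89, 190), (4.19, 108), (3.99, 175), (4.39, 102),
]

def potent_price_point(observations):
    """Scan candidate thresholds for the price at-or-below which average
    unit volume jumps the most versus average volume above it."""
    best = None
    prices = sorted({p for p, _ in observations})
    for threshold in prices[:-1]:  # need observations on both sides
        below = [u for p, u in observations if p <= threshold]
        above = [u for p, u in observations if p > threshold]
        jump = sum(below) / len(below) - sum(above) / len(above)
        if best is None or jump > best[1]:
            best = (threshold, jump)
    return best

threshold, jump = potent_price_point(history)
print(threshold)  # 3.99 -> volumes jump at-or-below $3.99
```

In this toy data, crossing $3.99 lifts average weekly units from roughly 105 to roughly 180, a quantified version of the "surprise and delight" jump that retailers and brands negotiate over.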

In retail, predicting pre-season unit sales volumes is a major source of inefficiency. Lack of predictability challenges organizational AUR (Average Unit Retail[i]) targets. One source of inefficiency comes when one buyer’s items are used to drive sales (sacrificing AUR) to generate incremental AUR from complementary product sales. Another source of inefficiency is driven by availability risk. No retailer (and thus no buyer) wishes to be out-of-stock on items that drive sales. This creates an organizational bias to overbuy. For most retailers, as seasonal sales decline, excess inventory goes into a markdown cycle (lowering price to clear it) or gets warehoused. When cleared, these marked-down items often cannibalize sales of higher-margin items, lowering category AUR. Excess inventories are also a source of conflict between buyers and their brand partners. Promotional plans often include a mutually agreed sell-through target, and warehousing excess is not the plan.

Empowered shoppers and continued downward pressure on margins put the business models of both brands and retailers at risk. Spending over $1 trillion annually to influence switching behaviors, the top consumer goods brands are challenged to demonstrate ROI, often losing 12% of every dollar allocated to trade spending (influencing trial, switching, and stocking behavior). When it comes to incentivizing choice, brands are challenged to differentiate between offers that incentivized brand switching and those that gave shoppers discounts on products that were “already sold.” Without these brand programs flowing dollars through retail, some retailers claim their businesses would fail.

With predictive accuracy, the emotional potency of price places the shopper’s voice front and center. In this way, the shopper’s values serve as a thread to unite the ecosystem’s need for a shared expectation (forecast) of what’s possible and to quantify the risk of an out-of-stock relative to forecast. The AI features that predict shopper switching and stocking behaviors (preferences, price elasticity, emotional potency, awareness, seasonality, comparable pricing, availability, etc.) strengthen collaborative decision-making. With this, advanced pre-season planning strategy can lead in-season tactics, a key unlock to material mutual efficiency. In this shopper-values-informed planning scenario, retail buyers, manufacturer brand managers, dietitians, and insurer cost-of-care managers would collaboratively strategize seasonal events with healthy options that hold potential to drive trips (advertising, marketing, signage, shelf-space, and offers).

Today, a growing number of major retailers are adopting the emotional potency of price as a key operating KPI. A natural next step is to create shared advanced planning processes. 

[i] AUR in the retail industry stands for “Average Unit Retail.” It is a metric used to track the per unit retail value of inventory sold. To calculate AUR, you simply take the total revenue (or net sales) and divide it by the number of units sold. For example, if you had $500 in net sales and sold 50 units, the AUR would be $10.