Industry Insights

The Rise of AI-Based Shopping Agents, Part 3 

March 5, 2024 (updated March 12, 2024)

Welcome to our Vision for Personalized AI Shopping Agents!
You can find Part 2 here on our website.

OPTIMIZATION CREATES OUTCOMES

Optimization technology enables a shopper or stakeholder to evaluate all possible marketplace configurations in support of their desired outcomes (objectives).

With a focus on values, optimization is applied to predicting a shopper’s behavioral tipping point, the key to relevance and deep personalization. This is where price, as an interpreter of value, is leveraged with maximum efficiency: price intersects the tipping points between supply and demand to align marketplace participants.

BUY-SIDE AGENTS

A buy-side agent serves as a trusted advocate, helping a shopper find, choose, and afford products that best meet their values while minimizing risk and saving both time and money.

Buy-side agents leverage shopper-specific AI models to deeply personalize marketplace choices relative to a shopper’s expressed preferences and shopping list (service, quality, convenience, and budget).

With an AI-driven approach, a shopper can instruct the agent to select “organic” options, then ask the agent to save more on this trip by allowing non-organic options where they are least toxic. In this way, knowledge-based substitutions can be dynamic, based on a shopper’s context.

AI can create personalized awareness that reduces a shopper’s cognitive load by dynamically generating product briefs that feature the claims, ingredients, and nutrients that best align with a shopper’s values (all of this requires highly structured data). For shoppers with dynamic health situations, like pregnancy or recovery from an illness, AI would emphasize nutrients or ingredients beneficial for their current state and explain why. With ease, a shopper can quickly go deeper to gain insights and even educate themselves based on relevant research. For shoppers concerned about specific ingredients (e.g., artificial preservatives), AI can highlight products that align with their clean-label preferences. AI can even cross-reference product claims with scientific databases to ensure accuracy, building trust with shoppers.
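To make the substitution idea concrete, here is a minimal sketch (in Python, not drawn from any actual agent) of a dynamic, knowledge-based substitution rule: prefer organic, but allow a cheaper non-organic option when the shopper asks to save and the category is low-risk. The Product fields, the pesticide_risk attribute, and the example data are hypothetical placeholders.

```python
# Minimal sketch of a dynamic, knowledge-based substitution rule.
# All data, fields, and thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float
    organic: bool
    pesticide_risk: str  # "low" | "high" -- assumed attribute from structured product data

def choose(options: list[Product], save_mode: bool) -> Product:
    organic = [p for p in options if p.organic]
    if not save_mode:
        # Honor the expressed preference: cheapest organic option if any exists.
        return min(organic or options, key=lambda p: p.price)
    # In save mode, accept a cheaper non-organic option only where risk is low.
    cheapest = min(options, key=lambda p: p.price)
    if cheapest.organic or cheapest.pesticide_risk == "low":
        return cheapest
    return min(organic or options, key=lambda p: p.price)

bananas = [Product("organic bananas", 0.79, True, "low"),
           Product("conventional bananas", 0.59, False, "low")]
print(choose(bananas, save_mode=True).name)   # conventional bananas
print(choose(bananas, save_mode=False).name)  # organic bananas
```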

Top-ranked choices are placed in a shopper’s buy box. When a shopper selects the top-ranked choice, it validates model accuracy; selecting a different choice signals the need for additional training. For example, if a shopper explicitly expresses a quality preference for “organic” and selects a different organic brand than the top-ranked choice, this provides a strong clue that the price gap between the two brands was not the tipping factor within the context of that purchase. When no preference is expressed, AI ranks from preferences implied by a shopper’s past purchase history and the history of shoppers with similar purchasing profiles.

In this way, every interaction enables models to collect increasingly valuable shopper-level information. These explicit and implicit expressions are then used to further train a shopper’s personal AI agent and to enrich the choice history of shoppers with similar purchasing profiles. Over time, the agent improves the relevance of its personalized choice rankings. Trust is earned when a shopper is consistently served deeply relevant, personalized choice rankings against a backdrop of ever-increasing market complexity and the shopper’s ever-evolving values and context.
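A minimal sketch of that feedback loop, assuming a simple weighted preference score and hypothetical feature names: candidates are ranked into a buy box, and a selection of anything other than the top-ranked choice becomes a pairwise training signal.

```python
# Illustrative sketch only, not the article's system. Weights and features are made up.
def rank(candidates, weights):
    """Rank candidate products by a simple weighted preference score."""
    def score(p):
        return sum(weights.get(k, 0.0) * v for k, v in p["features"].items())
    return sorted(candidates, key=score, reverse=True)

def feedback(ranked, chosen_id):
    """Convert a shopper's selection into an implicit training label."""
    top = ranked[0]
    if top["id"] == chosen_id:
        return {"event": "validated", "pair": None}
    chosen = next(p for p in ranked if p["id"] == chosen_id)
    # The shopper preferred `chosen` over `top`: a pairwise example for retraining.
    return {"event": "needs_training", "pair": (chosen["id"], top["id"])}

candidates = [
    {"id": "brand_a", "features": {"organic": 1.0, "price_gap": -0.10}},
    {"id": "brand_b", "features": {"organic": 1.0, "price_gap": 0.00}},
]
weights = {"organic": 1.0, "price_gap": 2.0}
ranked = rank(candidates, weights)
print([p["id"] for p in ranked])                 # buy-box order
print(feedback(ranked, chosen_id="brand_a"))     # a different organic brand was chosen
```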

While buy-side agents are not yet available, a first test with a panel of shoppers demonstrated this methodology: aligning shopper values with market context enabled near-perfect modeling of shopper-level demand.

SELL-SIDE AGENTS

The goal of a sell-side agent is to close sales while achieving organizational targets.

With shopper-specific AI models, sell-side agents generate awareness and emotionally potent offers that motivate choice switching (tipping a shopper from one quantity to another, one brand to another, one product attribute to another, one channel to another, or one store to another) while minimizing the personal discounts needed to achieve targeted outcomes.

Sell-side agents allocate incentive budgets to maximize choice-switching behaviors within a given segment of their business (shopper, market, line-of-business, etc.) and only pay for closed sales (providing a guaranteed ROI). Forecasting predicts the volume of switching behaviors to be expected for a given budget. Forecast-to-actual reporting and closed-loop AI learning continuously improve forecast accuracy and enable stakeholders to trust the forecast. With this, stakeholders can strategically take calculated risks with confidence. In this way, precision information and shopper values lead to price-market fit: aligning the price of a product with the market’s perceived value of it. This enables a stakeholder to fully leverage cost efficiency, scale, and innovation in support of getting and keeping customers against a backdrop of ever-increasing market complexity and the shopper’s ever-evolving values and context.
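As an illustration of the budget-allocation idea, here is a minimal sketch, assuming a model has already estimated each offer’s switch probability; it greedily funds the offers with the best expected switches per discount dollar and forecasts the switching volume that budget should buy. The offer names, costs, and probabilities are made up.

```python
# Illustrative greedy incentive-budget allocation; all numbers are hypothetical.
offers = [
    # (offer_id, discount_cost, predicted_switch_probability)
    ("shopper_17:brand_swap", 0.50, 0.40),
    ("shopper_23:size_up",    0.30, 0.15),
    ("shopper_41:channel",    0.80, 0.70),
    ("shopper_55:brand_swap", 1.20, 0.35),
]

def allocate(offers, budget):
    # Rank offers by expected switches per dollar of discount.
    ranked = sorted(offers, key=lambda o: o[2] / o[1], reverse=True)
    funded, spend, expected_switches = [], 0.0, 0.0
    for offer_id, cost, p in ranked:
        # Pay-for-performance: the discount is only paid when the sale closes,
        # so expected spend is cost * probability of closing.
        expected_cost = cost * p
        if spend + expected_cost > budget:
            continue
        funded.append(offer_id)
        spend += expected_cost
        expected_switches += p
    return funded, spend, expected_switches

funded, spend, switches = allocate(offers, budget=1.00)
print(funded)              # which offers to fund
print(round(spend, 2))     # forecast spend (only on closed sales)
print(round(switches, 2))  # forecast switching volume
```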

For retailers, sell-side agents built on this behavioral approach are just now entering the marketplace and operating at scale. One retailer described the results as “incredible”: lowering overall price perception by 1.8% (roughly equal to dropping prices by 2%) created a 6.2% increase in sales AND improved gross profit by 3.2%.

THE ART OF SCIENCE

When it comes to predicting shopper switching behaviors and optimizing for targeted outcomes, the barriers are material. Overcoming these challenges necessitates deep domain expertise, mastery of modeling and optimization technology, and large volumes of historical data from which to train models. With an effective translation of art into science, price recommendations and unit sales forecasts should be directly actionable without the need for artful manual adjustments. This precision and automation will be essential for AI-based sales agents.

SUSTAINABLE COMPUTING

Finding the “global optimum”, the best possible outcome, often requires a computer to make a near-infinite number of calculations. In retail, we often see product categories with over 30 choices. Assuming 10 possible price points for each product, a mathematical optimization would need to evaluate 10^30 pricing configurations (a 1 followed by 30 zeros) to find the global optimum for this category of 30 products. The computations needed to maximize choice-switching behaviors within a segment, market, or line-of-business require optimization across product assortments that number into the thousands. As the size of the optimization problem increases, the time and processing power required to find the exact solution grow exponentially, making it infeasible for even the most powerful computers.
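The arithmetic behind that explosion is easy to verify: with p candidate price points for each of n products, an exhaustive search must evaluate p^n pricing configurations.

```python
# Back-of-the-envelope check of the combinatorial explosion described above.
def configurations(n_products: int, price_points: int) -> int:
    return price_points ** n_products

for n in (3, 10, 30):
    print(n, "products:", configurations(n, 10), "configurations")
# 3 products: 10**3; 10 products: 10**10; 30 products: 10**30.
# Even at a generous billion evaluations per second, 10**30 configurations
# would take roughly 3 * 10**13 years -- exhaustive search is not an option.
```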

Until the age of quantum computing arrives, time and cost constraints mean researchers often settle for a solution that is not optimal but “good enough.” Yet the difference between the global optimum and “good enough” is often what makes a recommendation actionable.

While not yet in the public domain, there are proprietary approaches that find the global optimum quickly, without the need for supercomputers or the era of quantum computing.

LOW SIGNAL-TO-NOISE RATIOS

The purpose of AI models is to step away from guessing and make a shopper’s behavior predictable. With self-learning algorithms, AI finds the strong price-elasticity signals captured in a retailer’s sales history. Using probabilistic models, the aim is to assign probabilities to different pricing outcomes, largely based on historical data. The problem is that most retail price changes are too small (and too discrete) to create a detectable change in unit sales volume. This is further complicated by real-world factors like seasonality, availability, errors, and competitive pricing, which introduce noise (randomness). Like static between radio stations, noise makes it challenging to discern the true underlying patterns. For most items, discrete price changes and noise prevent AI models from capturing valid signals. From an AI learning perspective, this means most retail data has a low signal-to-noise ratio.
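A small simulation (not from the article, with assumed values for base demand, elasticity, and noise) makes the point: a 2% price cut under a true elasticity of -1.5 shifts average weekly demand by only about 3%, which is well inside typical week-to-week randomness.

```python
# Illustrative simulation of why small price changes are hard to learn from.
import random

random.seed(0)
BASE_DEMAND, ELASTICITY, NOISE_SD = 100, -1.5, 15  # assumed values

def weekly_sales(price_change_pct):
    expected = BASE_DEMAND * (1 + ELASTICITY * price_change_pct)
    return max(0, round(random.gauss(expected, NOISE_SD)))

before = [weekly_sales(0.00) for _ in range(8)]   # 8 weeks at the old price
after = [weekly_sales(-0.02) for _ in range(8)]   # 8 weeks after a 2% cut
print("mean before:", sum(before) / 8, "mean after:", sum(after) / 8)
# The true lift is ~3 units per week, but with noise of this size and only a
# few weeks of data, the observed difference is dominated by randomness:
# the elasticity signal is not reliably detectable from a sample this small.
```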

The question then becomes: how do we amplify the signal?

THE HIGH COST OF PRICE TESTING (GUESSING)

In dynamic markets, such as ecommerce, some retailers use AI-based price testing to probe for a demand signal. The challenge with price testing is that it learns at the expense of profit, trust, and time. Trust is sacrificed when a shopper reacts negatively to a higher price. Profits are sacrificed when a shopper fails to respond to a lower price. Given the low signal-to-noise ratio in retail data and the range of possible prices, it takes a lot of testing to train models to an acceptable level of confidence (especially when attempting to localize elasticity models to a given business segment). Time is sacrificed during the testing phase as the model gains confidence. When a retailer’s online pricing mirrors its in-store pricing, these losses are compounded by limitations on a retailer’s ability to dynamically change pricing (i.e., labor, laws, policy, etc.). Further, the knowledge gained is temporary, as shopper values and markets are ever-changing; price testing methods always lag. Given these risks, price testing is often limited to a small range of products and a small range of prices, leaving the remaining products to be governed by experts and static pricing rules (as opposed to shopper values).
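One common flavor of AI-based price testing is a multi-armed bandit. The epsilon-greedy sketch below (illustrative only, with made-up prices, costs, and conversion rates) shows the trade-off described above: exploration spends real transactions at prices that sacrifice margin or goodwill before the model converges, and the learned curve goes stale as the market shifts.

```python
# Illustrative epsilon-greedy price test; not a prescribed method.
import random

random.seed(1)
PRICES = [2.79, 2.99, 3.19]                              # candidate test prices (hypothetical)
TRUE_CONVERSION = {2.79: 0.30, 2.99: 0.25, 3.19: 0.18}   # unknown to the agent
UNIT_COST, EPSILON = 2.00, 0.2

profit = {p: 0.0 for p in PRICES}
trials = {p: 0 for p in PRICES}

for _ in range(5000):                                    # 5,000 shopper visits
    if random.random() < EPSILON or not any(trials.values()):
        price = random.choice(PRICES)                    # explore: guess a price
    else:                                                # exploit: best average margin so far
        price = max(PRICES, key=lambda p: profit[p] / max(trials[p], 1))
    trials[price] += 1
    if random.random() < TRUE_CONVERSION[price]:
        profit[price] += price - UNIT_COST

for p in PRICES:
    print(p, "tested", trials[p], "times, profit", round(profit[p], 2))
# Every exploratory visit at a sub-optimal price is margin (or goodwill) spent
# to buy information -- and since elasticities drift, the spend recurs.
```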

There is a way out, but this way requires careful consideration of the business objectives, available computational resources, and the domain-specific nuances of the data.

THE COMPLEXITY BARRIER

Domain-specific knowledge plays an important role in data science.

In retail, it is common knowledge that price drives sales: shoppers tend to be price sensitive. It is also common knowledge that pricing a product at $2.99 is better than $3.00, because shoppers perceive the difference as more than a penny; perception is behavioral. And no matter what price is offered, sales volumes go to zero when an item is unavailable. While random noise makes it difficult to predict behavior, several retail-specific AI features are known to be materially important in signaling the why behind a buy (a sketch of how these features might be assembled follows the list). These include:

  • awareness features (promotion types, promo budget, promo discount),
  • behavioral features (emotional potency, price elasticity, placement/ranking),
  • market features (market trends, competitive pricing, pricing strategies),
  • product features (comparability, lifecycle),
  • time features (seasonal trends, holidays, paydays),
  • service features (omni-channel, assortment, return policy),
  • convenience features (location proximity), and
  • operational features (out-of-stocks, planogram changes, pricing errors).
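
A minimal, hypothetical sketch of how those feature groups might land in a training table for a demand model; the column names and values are illustrative placeholders, not a prescribed schema.

```python
# Hypothetical feature table for a demand model; columns and values are illustrative.
import pandas as pd

observations = pd.DataFrame([
    {
        "week": "2024-02-05", "store": "S014", "item": "SKU-1182",
        # awareness
        "promo_type": "flyer", "promo_discount": 0.15,
        # behavioral
        "price": 2.99, "shelf_rank": 3,
        # market
        "competitor_price": 3.09,
        # product / time
        "lifecycle_stage": "mature", "holiday_week": False, "payday_week": True,
        # operational
        "in_stock_rate": 0.97, "planogram_change": False,
        # target
        "units_sold": 41,
    },
])
print(observations.dtypes)
```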

With domain knowledge as a guide, data scientists design models that more accurately mirror the real world in a time- and computation-efficient manner. Looking at retail data, a scientist would say that:

  • it is sparse (low signal),
  • it has a high degree of randomness (it’s stochastic),
  • it does not follow a bell curve (it’s non-parametric),
  • there are massive non-linear spikes in demand (it’s discontinuous),
  • sales come in multiples of 1 (it’s quantized),
  • a multitude of factors simultaneously impact volumes (it’s multi-collinear),
  • it is error prone (data is not normalized, there are missing periods, etc.), and
  • elasticities are not constant.
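A small diagnostic sketch (with made-up numbers) shows how a few of those properties (sparsity, over-dispersion, and quantized sales) can be checked directly from a single item’s sales history.

```python
# Illustrative diagnostics on one hypothetical item/store weekly sales history.
weekly_units = [0, 0, 3, 1, 0, 0, 12, 2, 0, 1, 0, 0, 45, 4, 0, 1]  # made-up data

n = len(weekly_units)
mean = sum(weekly_units) / n
var = sum((x - mean) ** 2 for x in weekly_units) / n

print("share of zero-sales weeks:", sum(x == 0 for x in weekly_units) / n)   # sparse
print("variance / mean:", round(var / mean, 1))    # >> 1: over-dispersed, spiky demand
print("all integer quantities:", all(isinstance(x, int) for x in weekly_units))  # quantized
```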

Without overfitting the data, the design goal is to amplify the deterministic patterns (the signal) while minimizing the impact of random fluctuations (the noise). Given a sufficiently strong signal, this process needs to allow models to operate at the lowest levels of granularity. To prevent overfitting, the process must also identify and prioritize the variables most likely to be relevant to a given prediction. This requires data preprocessing, purpose-built modeling, and iterative refinement.
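One standard way, though not necessarily the approach described here, to prioritize the variables most relevant to a prediction while guarding against overfitting is L1 regularization, which shrinks the weights of uninformative features toward zero. The sketch below uses synthetic data.

```python
# Illustrative L1-regularized feature prioritization on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n = 200
X = rng.normal(size=(n, 6))          # 6 candidate features
# Only features 0 and 1 actually drive demand; the rest are noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

model = Lasso(alpha=0.1).fit(X, y)
print(np.round(model.coef_, 2))      # weights of irrelevant features shrink to ~0
```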

AI modeling is an iterative innovation process. Experimentation and validation on hold-out data sets are crucial to ensure that the chosen approach meets the desired accuracy levels without excessive computational costs. By evaluating model performance on a validation set, data scientists can identify areas where the model might be capturing noise instead of the signal. They can then revisit factor selection, model design, or other steps to improve the signal-to-noise ratio.

The costs of complexity, domain expertise, and compute time often lead researchers to sacrifice predictability in favor of less effective learning methods. For retailers, evaluating vendor claims of superiority can be challenging. A first indicator of a less effective learning method is the need for price testing to learn, or an extensive use of pricing rules to govern the outcome. A more in-depth vendor evaluation would involve giving multiple vendors identical sets of training data and requesting that each vendor forecast unit sales for a known hold-out period. Vendors can then be evaluated on the accuracy of their unit sales predictions relative to the hold-out actuals.
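A minimal sketch of that hold-out comparison: each vendor forecasts unit sales for a withheld period, and the forecasts are scored against actuals with a simple error metric such as weighted absolute percentage error (WAPE). The numbers below are made up for illustration.

```python
# Illustrative hold-out scoring of vendor forecasts; all numbers are made up.
actual_units = [120, 95, 40, 310, 0, 75]    # hold-out period actuals

def wape(actual, forecast):
    """Weighted absolute percentage error: total absolute error / total actual units."""
    abs_err = sum(abs(a - f) for a, f in zip(actual, forecast))
    return abs_err / sum(actual)

vendor_forecasts = {
    "vendor_a": [110, 100, 55, 290, 5, 70],
    "vendor_b": [150, 60, 20, 380, 0, 120],
}
for vendor, forecast in vendor_forecasts.items():
    print(vendor, "WAPE:", round(wape(actual_units, forecast), 3))  # lower is better
```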