Adaptive analytics

Online, adaptive models play a crucial part in Pega's next-best-action decision strategies, as they are used to predict a customer's propensity for all available actions. Adaptive models are an important element in providing highly personalized and relevant actions to each individual customer, helping brands achieve the goal of true 1:1 customer engagement. Pega Adaptive Decision Manager (ADM) provides a full set of capabilities in Prediction Studio that data scientists can use to create, train, and manage their self-learning models.

Video

Transcript

This video shows you how adaptive analytics supports Pega Customer Decision Hub™ in the selection of the next best action to take for each customer.

Adaptive Decision Manager (ADM) is a component of Pega Decision Management that businesses can use to implement online, adaptive models that drive predictions about customer behavior, like clicking or ignoring a web banner that offers a credit card on a bank's website. When customer Troy logs in to the U+ Bank website, a decision request is sent to a client node of Pega Platform.

ADM retrieves all available customer data, which may include the customer profile, the interaction context, past customer behavior, and model scores.

Using this data as input, the system runs a decision strategy to determine which credit card is the best offer for Troy, balancing customer relevance and business priority.
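
A minimal way to picture this arbitration (purely illustrative; the real logic lives in the configured decision strategy, and the action names, propensities, and priority weights below are hypothetical) is to rank the eligible actions by a combination of model propensity and a business priority weight:

# Illustrative sketch: rank eligible actions for one customer by combining the
# model propensity (customer relevance) with a business priority weight.
# The actions, propensities, and weights are hypothetical example values.

actions = [
    {"name": "Standard Card", "propensity": 0.032, "priority": 1.0},
    {"name": "Rewards Card",  "propensity": 0.021, "priority": 1.2},
    {"name": "Premier Card",  "propensity": 0.008, "priority": 1.5},
]

def arbitration_score(action):
    # Simple relevance-times-priority ranking; real strategies can include
    # additional components such as expected value and business levers.
    return action["propensity"] * action["priority"]

next_best_action = max(actions, key=arbitration_score)
print(next_best_action["name"])   # -> Standard Card in this example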

The result of the decision strategy, the Next-Best-Action for customer Troy, is the Standard Card, which is then displayed on the website. There are two possible outcomes of the interaction.

If Troy is interested and clicks the web banner, ADM records the outcome as target behavior. If he ignores the banner, ADM records the outcome as alternative behavior. The prediction calculates the probability that a customer shows the target behavior.

The adaptive models that drive the widely used predictions that ship with Customer Decision Hub™ use a Bayesian algorithm. The prediction that calculates the propensity that a customer will click on the web banner is the Predict Web Propensity prediction. The adaptive model rule that drives this prediction is the Web Click Through Rate model.

A data scientist configures the settings of both the prediction and the adaptive model rule, including the customer fields that are available as features to the model rule. Customer fields that are unsuitable as features, such as the customer ID, should be excluded.

An adaptive model rule typically generates many adaptive model instances without human intervention, because each unique combination of action, treatment, direction, and channel generates a model instance the first time a decision strategy that references the model rule runs.
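
To make the idea of model instances concrete, the sketch below keeps one model object per unique (action, treatment, direction, channel) key and creates it the first time that key is referenced. The class, fields, and smoothing are illustrative stand-ins, not ADM internals:

# Illustrative sketch: one adaptive model instance per unique context key,
# created the first time that combination is requested.

from dataclasses import dataclass

@dataclass
class AdaptiveModelInstance:
    positives: int = 0   # responses recorded as target behavior
    negatives: int = 0   # responses recorded as alternative behavior

    def propensity(self):
        # Smoothed success-rate estimate as a stand-in for the model score.
        return (self.positives + 0.5) / (self.positives + self.negatives + 1.0)

model_instances = {}

def get_model(action, treatment, direction, channel):
    key = (action, treatment, direction, channel)
    # The first decision that references this combination creates the instance.
    return model_instances.setdefault(key, AdaptiveModelInstance())

model = get_model("StandardCard", "WebBanner", "Inbound", "Web")
print(round(model.propensity(), 2))   # 0.5 before any responses are recorded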

ADM captures the responses and updates the adaptive model instances regularly, so they continuously learn from customer responses and adapt to account for changing customer interests and needs. The ADM server is physically separated from the nodes that process decision strategies and model executions, so that the laborious process of updating models does not impact decisioning speed.

The Bayesian algorithm that generates and updates the model instances consists of four steps: preprocessing, feature selection, scoring, and the transformation of scores to propensities. Preprocessing involves binning of the predictor values. For numeric predictors, ADM creates intervals with similar behavior. For example, customers aged 42 and 43 may have similar propensities to show the target behavior and, after binning, reside in the same interval.
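
As a simplified stand-in for this step (ADM derives its intervals from the observed behavior; the sketch below just uses equal-frequency bins over synthetic data), the target-behavior rate per age interval can be inspected like this:

# Simplified stand-in for numeric binning: equal-frequency intervals over age,
# with the observed target-behavior rate per interval. This is only an illustration.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
age = rng.integers(18, 80, size=1000)
clicked = rng.random(1000) < (0.01 + 0.001 * (age - 18))   # synthetic target behavior

df = pd.DataFrame({"age": age, "clicked": clicked})
df["age_bin"] = pd.qcut(df["age"], q=5)                    # five equal-frequency intervals
print(df.groupby("age_bin", observed=True)["clicked"].mean())  # observed rate per interval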

For symbolic predictors, ADM groups values with similar customer behavior.
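
The same idea in a simplified sketch: compute the observed target-behavior rate per symbolic value and group values whose rates are close together. The data, threshold, and greedy grouping are illustrative only, not the ADM algorithm:

# Simplified illustration of symbolic binning: group values whose observed
# target-behavior rates are close together. Synthetic data.

import pandas as pd

df = pd.DataFrame({
    "occupation": ["nurse", "teacher", "engineer", "analyst", "driver"] * 200,
    "clicked":    [1, 0, 0, 0, 0, 0, 1, 0, 0, 0] * 100,
})

# Observed target-behavior rate per symbolic value, lowest to highest.
rates = df.groupby("occupation")["clicked"].mean().sort_values()

# Greedy grouping: adjacent values whose rates differ by less than 0.05 share a bin.
bins, current = [], [rates.index[0]]
for name in rates.index[1:]:
    if rates[name] - rates[current[-1]] < 0.05:
        current.append(name)
    else:
        bins.append(current)
        current = [name]
bins.append(current)
print(bins)   # e.g. [['analyst', 'driver', 'engineer'], ['nurse', 'teacher']]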

The granularity of the binning is a trade-off between performance and the statistical robustness of the predictor. Appropriate default settings for binning are provided and should only be changed by an experienced data scientist for specific use cases.

Next, ADM selects features based on their individual univariate performance against the outcome, measured as the area under the curve (AUC) of an ROC graph. By default, the univariate performance threshold is set to 0.52 AUC. A value of 0.5 represents no performance of a predictor, and the default threshold will exclude only features with a very low performance.
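
A sketch of this univariate screening step, using scikit-learn's AUC implementation on synthetic data and the default 0.52 threshold mentioned above (the predictor names and data are made up):

# Univariate feature screening sketch: keep predictors whose individual AUC
# against the recorded outcome is at least 0.52.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 50_000
outcome = rng.random(n) < 0.03                           # 1 = target behavior (click)
predictors = {
    "age":         rng.normal(45, 12, n) + 5 * outcome,  # weakly informative
    "customer_id": rng.permutation(n).astype(float),     # pure noise
}

THRESHOLD = 0.52
selected = []
for name, values in predictors.items():
    auc = roc_auc_score(outcome, values)
    auc = max(auc, 1 - auc)        # the direction of the relationship does not matter
    if auc >= THRESHOLD:
        selected.append(name)
    print(f"{name}: AUC = {auc:.3f}")
print("selected:", selected)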

Additionally, ADM groups predictors that are highly correlated and then selects the best predictor from each group, to reduce unwanted complexity.
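
A simplified way to picture this step, not the ADM implementation: compute pairwise correlations, then greedily keep the best-performing predictor and drop everything highly correlated with it. The correlation threshold and the assumed univariate AUC scores below are illustrative:

# Sketch of grouping correlated predictors and keeping the strongest one per group.

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 2000
income = rng.normal(50_000, 15_000, n)
X = pd.DataFrame({
    "income":       income,
    "credit_limit": income * 0.4 + rng.normal(0, 2_000, n),  # highly correlated with income
    "age":          rng.normal(45, 12, n),                    # roughly independent
})
univariate_auc = {"income": 0.61, "credit_limit": 0.58, "age": 0.55}  # assumed scores

corr = X.corr().abs()
remaining = list(X.columns)
selected = []
while remaining:
    # Keep the best remaining predictor, then drop everything highly correlated with it.
    best = max(remaining, key=univariate_auc.get)
    selected.append(best)
    remaining = [c for c in remaining if c != best and corr.loc[best, c] < 0.8]
print(selected)   # ['income', 'age'] -- credit_limit is dropped as redundant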

Next, a Naïve Bayes calculation is executed for the model using all selected predictors.

This simple and scalable calculation is based on Bayes' theorem, which states that the probability of A, if B is true, is equal to the probability of B, if A is true, times the probability of A being true, divided by the probability of B being true.
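
In symbols, with A the behavior of interest and B the observed evidence: P(A | B) = P(B | A) × P(A) / P(B).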

Naïve Bayes relies on the assumption that the predictors are independent. The grouping of correlated predictors in the previous feature selection step minimizes the uncertainty introduced by this assumption. The ADM algorithm uses the posterior log odds - that is, the logarithm of the posterior probability of target behavior divided by one minus this probability - as the score.
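
The sketch below shows how such a score can be computed for one customer from binned response counts under the naive independence assumption. The bin statistics are made-up example counts, and the logistic back-transform at the end is only for readability; the actual propensity mapping is the postprocessing step described next:

# Naive Bayes log-odds score for one customer, computed from per-bin counts of
# target (positive) and alternative (negative) responses. Example counts only.

import math

# Per-predictor bin statistics: (positives, negatives) observed in the bin that
# this customer falls into for each selected predictor.
customer_bins = {
    "age: 35-44":      (60, 1800),
    "income: 40k-60k": (45, 1500),
    "card_holder: no": (30, 2200),
}
total_pos, total_neg = 300, 9700   # overall response counts (the prior)

# score = log(prior odds) + sum of log likelihood ratios over the predictors,
# with a small additive smoothing to avoid division by zero.
log_odds = math.log(total_pos / total_neg)
for pos, neg in customer_bins.values():
    log_odds += math.log(((pos + 1) / (total_pos + 2)) / ((neg + 1) / (total_neg + 2)))

propensity = 1 / (1 + math.exp(-log_odds))   # logistic back-transform, for readability only
print(round(log_odds, 3), round(propensity, 4))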

The final postprocessing step transforms the raw Naïve Bayes scores into true propensities. The algorithm creates score intervals in such a way that the propensity always increases from one interval to the next, which optimizes the accuracy of the models.
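
One common way to obtain such a monotonically increasing mapping from scores to propensities is isotonic regression; the sketch below uses scikit-learn's implementation on synthetic scores as a stand-in for this postprocessing step, not as ADM's actual method:

# Monotonic score-to-propensity calibration sketch using isotonic regression.
# Synthetic scores and outcomes.

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
scores = rng.normal(0, 1, 10_000)                    # raw model scores
true_propensity = 1 / (1 + np.exp(-(scores - 3)))    # low overall click rates
outcomes = rng.random(10_000) < true_propensity      # observed clicks

calibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrator.fit(scores, outcomes)

# The fitted step function is non-decreasing: a higher score never maps to a
# lower propensity, which is the property described above.
for s in (-2.0, 0.0, 2.0):
    print(s, round(float(calibrator.predict([s])[0]), 4))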

When you create a new adaptive model, you can choose between an ADM Bayesian adaptive model rule, which is based on Naïve Bayes calculations, and an ADM gradient boosting (AGB) model rule, which is based on decision trees. In contrast to ADM Bayesian model instances, which are focused on a single combination of action, treatment, direction, and channel, an AGB model instance learns from all combinations of actions and contexts.

AGB models can achieve higher predictive power and deliver more accurate predictions, which leads to higher success rates, retention, and lifetime value. However, as AGB models are more complex, they come with an important trade-off between accuracy and transparency. Your choice will therefore depend largely on the transparency requirements of your use case.

This video has concluded. What did it show you?

  • How adaptive analytics supports Pega Customer Decision Hub in the selection of the next best action for each customer.
  • How ADM generates Bayesian models.
  • How AGB models can deliver more accurate predictions.
  • How the choice of a Bayesian model or an AGB model depends on the transparency requirements in a specific use case.
