Run guide experiments

Note: This article is relevant for Visual Design Studio.

Guide experiments determine whether a behavior change results from a guide intervention or from other confounding factors. Considerable time and effort goes into creating quality guides that improve the user experience. In any guide program, it's important to prioritize the guides that influence user behavior, and to fix or discontinue those that don't produce results. Guide experiments help you say with confidence that your guide content produces real behavior change.

Guide experiments enable you to measure and validate the impact the guide has on product outcomes. In a controlled experiment, the goal is to measure the impact a single independent variable has on a predicted outcome across a population, while controlling as many other variables as possible. To measure the change produced by the intervention, we need to compare behavior between a group that has seen the intervention, the experimental group, and a different group with no outside influence, the control group.

In a guide experiment, the guide is the independent variable influencing behavior change. The predicted outcome is a Feature, Page, or Track Event that you expect visitors to use after seeing a guide, identified as the guide goal target. The guide's target segment is the population for the experiment. In an active experiment, when an eligible visitor triggers the guide, they're randomly assigned to the experimental group and see the guide, or to the control group and don't see the guide. Pendo automatically tracks which group visitors are in and monitors their behavior after entering the experiment. The experiment tracks the guide goal adoption rates for the experimental and control groups. An effective guide shows a higher guide goal adoption rate in the experimental group compared to the control group.
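The random assignment described above can be sketched as a simple percentage-based split. This is an illustration of the general technique only; `assign_group` is a hypothetical helper, not Pendo's actual implementation:

```python
import random

def assign_group(experiment_group_pct: float) -> str:
    """Assign an eligible visitor to a group when they trigger the guide.

    experiment_group_pct is the share of visitors who should see the
    guide; everyone else falls into the control group.
    """
    if random.random() < experiment_group_pct:
        return "experimental"  # sees the guide
    return "control"           # doesn't see the guide

# With a 50/50 split, roughly half of 10,000 eligible visitors see the guide.
groups = [assign_group(0.5) for _ in range(10_000)]
experimental = groups.count("experimental")
```

Because assignment is random, each group is an unbiased sample of the same population, which is what lets differences in guide goal adoption be attributed to the guide.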



Prerequisites

  • Pendo Guide Creator or Content Editor user roles
  • Web Agent 2.87.1 or higher
  • Mobile SDK 2.8.0 or higher
  • Pro or Enterprise-tier Pendo subscription

How to run an experiment

All good experiments start with a hypothesis. In many cases, you publish a guide because you want a user to do something: a new Feature announcement pushes users to try the Feature, a marketing guide aims to increase new subscriptions, and a self-help guide points users in the right direction to continue a workflow.

The basic hypothesis is "If I publish a guide telling users about [my product], then users are more likely to engage with [my product] and improve outcomes related to low or no usage." Selecting the target of the experiment, predicting the outcomes of increased usage, and designing the guide content is up to you. Use Pendo to run a control group experiment and measure whether your guide changed user behavior.

A control group experiment has three primary steps:

  1. Set the guide goal.
    A guide goal can be set up independently of an experiment and is useful for guide analytics on its own. For more information, see Set guide goals.
  2. Set up the control group and notification time.
  3. Review the results and take action.

Set up an experiment

Use the Experiment tile to set up a control group experiment that measures how effectively your guide influences user behavior, comparing the experimental group, who see the guide, with the control group, who don't. Configure the experiment after the other guide details have been set up. A guide goal must be defined before experiment setup, and the target segment is also crucial because it determines the size of the overall experiment population.

  1. On the guide's details page, in the Experiment tile, select + Create Experiment.

  2. Set the experiment follow-up notification time and experiment group size.


    The follow-up notification displays an alert in Pendo when the reminder period ends, prompting you to check the results of your experiment. Experiments don't stop on their own; you must end them manually. You might end an experiment early if the results are clear, or let it continue if you need more data. This is difficult to predict before you see the data.

    Experiment Group Size sets the percentage of the target segment that sees the guide; the remainder forms the control group, which doesn't see the guide. Don't worry: visitors in the control group can see the guide after the experiment ends.

    • Try to get more than 1,000 visitors total, with at least 500 visitors in each group.
    • A larger sample size, generated by the eligible visitors in the target segment for the guide, generally produces more data and better results.
    • Increasing the size of the experiment group with a small overall sample size increases the number of opportunities for users to convert after seeing the guide.
    • Having a statistically significant control group size is important to say with confidence that your guide is influencing behavior and your guide goal adoption rate isn't the result of other factors.

    Tip: Use caution with extremely large sample sizes in large populations. The sample should be large enough that the results aren't swayed by random chance, but not so large that the sheer volume of data makes a trivial difference look like a meaningful result.

    • After thousands of visitors have participated in the experiment, the experimental group may have several hundred more guide goal adoptions than the control group.
    • This appears to be statistically significant, but the difference between the experimental and control group adoption rates might be so small that it isn't practically significant, meaning the guide won't have a tangible effect on your users or any expected business outcomes.
  3. Select Save to save the experiment.

  4. Set the guide to Public to start the experiment.
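The group-size guidance in step 2 can be sanity-checked with a short sketch. `group_sizes` is a hypothetical helper for illustration, not part of Pendo:

```python
def group_sizes(segment_size: int, experiment_group_pct: float) -> tuple[int, int]:
    """Expected experimental and control group sizes for a target segment."""
    experimental = round(segment_size * experiment_group_pct)
    return experimental, segment_size - experimental

# A 2,000-visitor segment at a 50% split meets the 500-per-group guideline.
print(group_sizes(2000, 0.5))  # (1000, 1000)

# An 800-visitor segment can't reach 500 in both groups at any split.
print(group_sizes(800, 0.5))   # (400, 400)
```

If the target segment is too small to put at least 500 visitors in each group, consider broadening the segment or letting the experiment run longer before drawing conclusions.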


Review results

Your experiment starts running automatically when the guide is published. You can monitor the results from the Experiment Results tab. On this tab, you can see a summary of the experiment results, a chart displaying guide goal adoption over time for the experimental and control groups, and an experiment data table you can download.

Let your experiment run until you're satisfied with the results. The confidence score is included with the guide goal data to help you weigh the significance of the results and decide when to end the experiment. A Confidence percentage greater than 95% is marked as Significant. You receive a follow-up notification when the reminder you set expires.



Confidence Score

Confidence is the statistical probability that the outcome was caused by the guide and not by random chance. The score is determined by comparing the guide goal adoption rates, variance, and number of results for each group, and calculating the probability that the results of the experiment are statistically significant. When an experiment reaches a confidence score of 95% or greater, it's flagged with (Significant) next to the Confidence percentage.

This doesn't mean that the magnitude of the outcome is significant, only that the results are probably not due to random chance. For example, an experiment might show that visitors are more likely to convert, show no change in the rate of guide goal adoption, or show that visitors are less likely to convert, and the results can still be significant with a confidence score greater than 95%. This gives you the confidence to keep or discontinue your guide based on the results of the experiment. It isn't an indication that the magnitude of the outcome is the expected conversion rate over time. Determining the average outcome over time requires running multiple experiments and comparing the results, or monitoring the guide goal adoption rate over time to track your guide's sustained impact on users.
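The distinction between statistical and practical significance can be made concrete with a relative-lift calculation. `relative_lift` is a hypothetical illustration, not a Pendo metric:

```python
def relative_lift(rate_exp: float, rate_ctrl: float) -> float:
    """Relative improvement in guide goal adoption for the experimental
    group over the control group."""
    return (rate_exp - rate_ctrl) / rate_ctrl

# A 0.2-point absolute difference on a very large sample can clear the 95%
# confidence bar while the practical effect stays tiny.
lift = relative_lift(0.302, 0.300)
print(f"{lift:.1%}")  # 0.7%
```

A statistically significant result with a lift this small may not justify keeping the guide; weigh the confidence score against the size of the effect.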

For statisticians: in this split test, we want the probability that the experimental group's results would occur under the null hypothesis, which is represented by the control group. Pendo finds the guide goal adoption rates and variance for the experimental and control groups, uses those values to calculate the p value, and derives the final confidence score: the probability that the null hypothesis is false.


In this calculation, p is the distribution of the difference between the experimental and control group distributions, and sigma (σ) is the variance of the experimental and control groups.

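The calculation described above follows the shape of a standard two-proportion z-test. The sketch below is one plausible implementation under that assumption; Pendo's exact formula may differ:

```python
from math import sqrt
from statistics import NormalDist

def confidence_score(adoptions_exp: int, n_exp: int,
                     adoptions_ctrl: int, n_ctrl: int) -> float:
    """Confidence that the difference in adoption rates isn't random chance.

    Standard unpooled two-proportion z-test; an assumption for
    illustration, not Pendo's published formula.
    """
    p_e = adoptions_exp / n_exp    # experimental adoption rate
    p_c = adoptions_ctrl / n_ctrl  # control adoption rate
    # Standard error of the difference between the two rates
    se = sqrt(p_e * (1 - p_e) / n_exp + p_c * (1 - p_c) / n_ctrl)
    z = (p_e - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p value
    return 1 - p_value

# 30% vs. 25% adoption with 1,000 visitors per group clears the 95% bar.
score = confidence_score(300, 1000, 250, 1000)
print(f"{score:.1%}")
```

In this sketch, a returned score of 0.95 or greater corresponds to the 95% threshold at which an experiment is flagged as Significant.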

End the experiment

Experiments run until you stop them manually. You might end an experiment early if the results are clear, or let it continue if you need more data. This is difficult to predict before you see the data. If possible, wait until you have at least 500 visitors in each group.

You can end an experiment:

  1. Go to the Experiment tile on the Summary tab, or the Results tile on the Experiment Results tab.

  2. Select End Experiment.

  3. Select End & Keep Public or End & Disable.
    • End & Keep Public shows the guide to users who were in the Control Group and did not receive the guide during the experiment and to any users who have not seen it yet. We recommend you select this option if your experiment ended with a positive outcome.
    • End & Disable ends the experiment and disables the guide. Users in the targeted segment will no longer receive the guide, but you can access the experiment metrics from the Experiment Results tab or the Experiments tab on the Guide List.

Share results

Experiments tab

All Pendo users can access every experiment from the Experiments tab in the Guides List. The Experiments tab shows:

  • Results summaries for any guides that have experiments configured, even if the experiments haven't started yet.
  • Links from each summary to the guide goal target and the guide.

You can:

  • Filter the summaries on this page by experiment status, and sort them from newest to oldest.
  • Delete a guide to remove its experiment summary from the Experiments tab.



Experiment Results dashboard widget

Experiment summaries can be added as a dashboard widget and shared when you share a dashboard with other Pendo users in your subscription.

Add experiment results to the dashboard from the experiment results summary in the Experiments tab of the Guide List or the Experiment Results tab in Guide Details.

  1. In the Experiment Results summary, select the ellipsis (...) and select Add To Dashboard.

  2. Select the dashboard from a dropdown menu of your current dashboards.

  3. Select Add to Dashboard to add the widget to the selected dashboard.