Help Center

Choose the type of your problem
  • A/B Basics
  • Test parameters
  • Test results
  • Dashboard

A/B Basics

How do I create an A/B test?

Creating an A/B test is as simple as plugging in different variations of Prices, Images, Titles, and Descriptions to see how they perform against each other. The more tests you run, the more data insights you gain to make your store more profitable. The sections below provide a step-by-step guide to using Optimal A/B.

What is A/B Testing?

A/B testing is the practice of presenting your visitors with different variations of the same page or product listing with the intention of optimizing its performance. To ensure tests are run properly, only one variable can be tested at a time for your product listings, such as Price, Images, Title, or Description. You can learn more about the practice of A/B testing here:

https://www.bigcommerce.com/articles/ecommerce/ab-testing/

Will Optimal A/B be compatible with my store?

Optimal A/B is compatible with all Shopify stores (including Shopify Plus stores).

Will Optimal A/B impact my store's performance?

Since Optimal A/B only makes changes to products through Shopify directly, our app has no impact on your store’s performance.

How does Optimal A/B work?

Depending on what kind of test you run, Optimal A/B will make changes to a single aspect of your product listing on a set interval. These changes are based on the variable you want to test, such as different variations of Prices, Images, Titles, or Descriptions. As each variant is tested, we capture data insights that let you see which variant has the highest impact on your store’s profitability (see the “Test results” section for more on this).

As part of creating an A/B test, you will need to decide on a frequency for how often the app will change and update the product listing to each test variant.

Then you will need to select an end goal for the test:

No End Date = The test will run indefinitely until you select a winner or deactivate it.

Visitor Count = You specify the minimum number of visitors each variant must receive before the test ends.

Custom Parameters = A premium feature that lets you use statistical inputs to pinpoint your visitor goal.


“Visitor Count” and “Custom Parameters” tests offer the option for us to select a winner for you at the end of the test (based on the best performing variant). Enter your email address if you wish to receive an email notification once the test is complete.
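
To see how these settings fit together, here is a minimal sketch of the pieces a single test brings together. The field names below are illustrative assumptions for this help article, not the app’s actual schema:

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class ABTestConfig:
      # Illustrative shape of an Optimal A/B test; every field name here is an assumption.
      product_id: str                     # the single product under test
      test_type: str                      # "price", "image", "title", or "description"
      variants: list                      # variations to cycle through (up to 3 on Basic, 6 on Premium)
      change_frequency_hours: int         # how often the listing switches to the next variant
      end_goal: str                       # "no_end_date", "visitor_count", or "custom_parameters"
      visitor_goal: Optional[int] = None  # minimum visitors per variant for visitor-based goals
      auto_select_winner: bool = False    # let the app pick the best performing variant at the end
      notify_email: Optional[str] = None  # optional email for the completion notification

  # Example: a price test that switches variants every 12 hours and ends once
  # each variant has received at least 1,000 unique visitors.
  example = ABTestConfig(
      product_id="gid://shopify/Product/123",
      test_type="price",
      variants=[19.99, 24.99, 29.99],
      change_frequency_hours=12,
      end_goal="visitor_count",
      visitor_goal=1000,
      auto_select_winner=True,
      notify_email="owner@example.com",
  )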


Here is a step-by-step guide for using Optimal A/B:

How to run Price tests?

When running a price test, the product “Cost” (your cost for the product) and “Price” (what your customer will pay for the product) are both required fields. The “Compare at price” field is optional. Optimal A/B will calculate: (profit)/(views) for each price point you want to test against. This will help you determine which price point generates the highest “Average Profit Per View” for the product listing (which is the default metric we use to determine the best performing variant).
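
As a rough illustration of the math behind “Average Profit Per View” (a simplified sketch; the app computes this for you and may account for more than what is shown here):

  def avg_profit_per_view(price: float, cost: float, units_sold: int, unique_views: int) -> float:
      # Average Profit Per View = total profit / unique views for one price variant.
      if unique_views == 0:
          return 0.0
      total_profit = (price - cost) * units_sold
      return total_profit / unique_views

  # Example: a $29.99 price point with a $12.00 cost, 40 units sold, 1,000 unique views
  print(avg_profit_per_view(price=29.99, cost=12.00, units_sold=40, unique_views=1000))  # ~0.72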

How to run Image tests?

Identify the “Image(s)” for your product listing that best capture your visitors’ interest. Optimal A/B will calculate: (revenue)/(views) for each image configuration you want to test against. This will help you determine which Image(s) generate the highest “Average Revenue Per View” (which is the default metric we use to determine the best performing variant).

How to run Title tests?

Identify the product “Title” for your product listing that best captures your visitors’ interest. Optimal A/B will calculate: (revenue)/(views) for each title you want to test against. This will help you determine which Title generates the highest “Average Revenue Per View” (which is the default metric we use to determine the best performing variant).

How to run Description tests?

Identify the product “Description” for your product listing that best captures your visitors’ interest. Optimal A/B will calculate: (revenue)/(views) for each description you want to test against. This will help you determine which Description generates the highest “Average Revenue Per View” (which is the default metric we use to determine the best performing variant).
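
The same per-view idea applies to Image, Title, and Description tests, just with revenue instead of profit. Here is a minimal sketch of how the best performing variant could be identified (the numbers are made up for illustration):

  def avg_revenue_per_view(revenue: float, unique_views: int) -> float:
      # Average Revenue Per View = total revenue / unique views for one variant.
      return revenue / unique_views if unique_views else 0.0

  # Hypothetical results for a two-variant description test
  variants = {
      "Description A": {"revenue": 850.00, "unique_views": 1200},
      "Description B": {"revenue": 910.00, "unique_views": 1150},
  }

  winner = max(
      variants,
      key=lambda name: avg_revenue_per_view(
          variants[name]["revenue"], variants[name]["unique_views"]
      ),
  )
  print(winner)  # "Description B" (~0.79 per view vs ~0.71 for A)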

How do I know if my A/B test is statistically significant?

If you want to ensure you are running tests that are “statistically significant”, refer to this blog article:

https://www.invespcro.com/blog/how-to-analyze-a-b-test-results/
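
If you would like to sanity-check significance yourself, a common approach for comparing conversion rates is a two-proportion z-test. The sketch below uses only the Python standard library and made-up numbers; it is not how Optimal A/B evaluates tests internally:

  from math import sqrt, erf

  def two_proportion_z_test(conversions_a: int, views_a: int, conversions_b: int, views_b: int) -> float:
      # Returns the two-sided p-value for the difference in conversion rate between A and B.
      p_a, p_b = conversions_a / views_a, conversions_b / views_b
      pooled = (conversions_a + conversions_b) / (views_a + views_b)
      se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
      z = (p_a - p_b) / se
      # Two-sided p-value from the standard normal CDF
      return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

  # Example: 60 conversions out of 2,000 views for variant A vs 85 out of 2,000 for variant B
  p_value = two_proportion_z_test(60, 2000, 85, 2000)
  print(f"p-value: {p_value:.3f}")  # roughly 0.034; below 0.05 suggests a significant difference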

Can I edit A/B tests?

Only “Draft” and “Scheduled” status tests can be edited by clicking on the gear icon from the Dashboard page:


If a test has already started running, you can press the “Copy” button instead:


Why can I only run one test per product at a time?

Running multiple tests for the same product skews the data for both tests, making it impossible to know which test variable helped or hurt the overall test performance. Instead, Optimal A/B requires that you only test one variable at a time for each product, leaving all other variables the same.

“In an experiment, scientists only test one variable at a time to ensure the results can be attributed to that variable. If more than one variable is changed, scientists cannot attribute the changes to one cause.”

-Google AI

How many test variants can I create per test?

“Basic” plan subscribers can test up to 3 variants per test.

“Premium” plan subscribers can test up to 6 variants per test.

See additional plan details here.

Why can I only test one variable at a time?

Testing multiple variables for the same product skews the test data, which makes it impossible to know which test variable helped or hurt the overall test performance. Instead, Optimal A/B requires that you only test one variable at a time for each product, leaving all other variables the same.

“In an experiment, scientists only test one variable at a time to ensure the results can be attributed to that variable. If more than one variable is changed, scientists cannot attribute the changes to one cause.”

-Google AI

Differences between Basic and Premium plans?

Basic Plan Benefits:

  • Run up to 5 A/B tests at once
  • Test up to 3 variants per test
  • Automation settings
  • Detailed analytics
  • Customer Support

Premium Plan Benefits:

  • Run unlimited A/B tests at once
  • Test up to 6 variants per test
  • Custom parameter tests
  • Automation settings
  • Detailed analytics
  • Customer Support

Downgrading plans?

Premium Plan to Basic Plan Downgrade:

If you received an error message when trying to downgrade your subscription from the “Premium” plan to the “Basic” plan, you must ensure that you are not using any of the following three “Premium” plan benefits before you are eligible to downgrade:

  • On the Dashboard, you need to have a total of 5 or fewer “Running” or “Scheduled” tests
  • None of your “Running” or “Scheduled” tests can have more than 3 test variants
  • None of your “Running” or “Scheduled” tests can have custom parameters in use (a “Custom” end goal)

To quickly check how many running/scheduled tests you have, you can use the “Filter” feature and select “Running” & “Scheduled” tests only:

To check the number of test variants for running/scheduled tests, click “View Results” and count the number of variants:

To check if a running/scheduled test is using custom parameters, locate the “End Goal” column on the Dashboard. Tests using custom parameters will say “Custom”:

How to make “Scheduled” tests eligible for plan downgrade (click the gear icon and choose one of the following):

  1. Edit the test (and modify it accordingly)
  2. Delete the test

How to make “Running” tests eligible for plan downgrade (choose one of the following):

  1. Delete the test
  2. Wait until the test is finished
  3. Pause/deactivate the test
  4. Select a Winner

What each test status means?

Running: If a test is Running, that means Optimal A/B is actively changing your product listing on a scheduled basis using each test variant you created. This also means we are actively collecting data for each variant to ultimately find the variation that is the best performing “Winner”.

Winner selected: This status means that you have selected a winner yourself (on the test results page) or you enabled “Auto Select Winner” and the test reached its required visitor goal. This status also indicates that the test is no longer actively cycling through test variants and we are no longer collecting data for this test.

Scheduled: This means that once a certain point in time is reached, the test will automatically start running for the first time.

Draft: A test that has not started running yet. This means you can still “Edit” the test. There is no limit to how many Draft tests you can create.

Inactive: An inactive test is a test that was running at one point but was manually paused/deactivated. While you cannot “Edit” Inactive tests, you can still copy or delete them.

Complete: Occurs if each variant (within the test) reaches the test’s required visitor goal. It also means you did NOT enable “Auto Select Winner” when setting up the test. If a test has a status of “Complete”, this means we changed the test variable back to how it was before you started the test. Lastly, this status indicates that the test is no longer actively cycling through test variants, nor are we collecting data any longer for this test.

Product Deleted:  This status indicates that you deleted a product listing outside of the app while running an A/B test for that same product. It is never recommended to edit, modify, or delete a product listing outside of the app while there is a test running for that same product. Deleting a product will also prevent the test from being activated again in the future.

Test parameters

Naming tests and selecting products?

Naming Tests: Creating a descriptive name for your A/B test will help you identify key elements of the test at a glance. Customers will not see the test name; its only purpose is to assist you in keeping track of your A/B tests:

Selecting Products: Click “Select Product” on the “Test Details” screen. Keep in mind that if you wish to change the product selection, you will first need to click the checkmark next to the selected product to de-select it:

Setting up Test Variants?

The right portion of the “Variants” page is where you configure different variations of Prices, Images, Titles, or Descriptions, depending on what type of test you are running:

Optimal A/B will automatically cycle through all the different variants you set up here, and we will record data for each variant to identify a clear best performing “Winner”.

Selecting a start time?

Optimal A/B changes your product listings on a scheduled interval. If your test starts or is scheduled to start at 8:30 PM for example, and you select a change frequency of 12 hours, the app will alternate between changing your product listing at 8:30 AM and 8:30 PM every day for each variant until the test is no longer running.
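
Here is a small sketch of that rotation schedule, mirroring the example above (this is only an illustration, not the app’s actual scheduler):

  from datetime import datetime, timedelta
  from itertools import cycle

  def change_schedule(start: datetime, frequency_hours: int, variants: list, changes: int):
      # Yields (timestamp, variant) pairs showing when each variant goes live.
      times = (start + timedelta(hours=frequency_hours * i) for i in range(changes))
      for when, variant in zip(times, cycle(variants)):
          yield when, variant

  # Example: a two-variant test starting at 8:30 PM with a 12-hour change frequency
  for when, variant in change_schedule(
      datetime(2024, 6, 1, 20, 30), frequency_hours=12,
      variants=["Variant A", "Variant B"], changes=4,
  ):
      print(when.strftime("%b %d %I:%M %p"), "->", variant)
  # Jun 01 08:30 PM -> Variant A
  # Jun 02 08:30 AM -> Variant B
  # Jun 02 08:30 PM -> Variant A
  # Jun 03 08:30 AM -> Variant B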

How often should I have Optimal A/B change variants?

If you are running a paid promotion campaign to drive traffic to certain product pages on your website, it is important to select a change frequency that will allow adequate visitor exposure to each test variant. For example, choosing a “Daily” change frequency would be a bad idea if you are only running the paid promotion for one day, but a “Daily” change frequency might be fine if you were going to run the paid promotion for 30 days (depending on how much traffic your store is getting).

Determining the end of a test?

End Goal

No End Date: These tests will run indefinitely until you “Select a Winner” or “Pause/Deactivate” the test.

Visitor Count: You specify the minimum number of visitors each variant must receive before the test ends.

Custom Parameters: Define custom test parameters to create a custom sample size (visitor goal) that each test variant must achieve before the test is considered complete (and variant changes no longer occur):

This feature is best suited for store owners who have very clear objectives for their A/B testing and want to ensure their tests are run in a statistically significant manner. See this article to learn more about statistically significant A/B testing:

https://www.invespcro.com/blog/how-to-analyze-a-b-test-results/
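
As a rough guide, a custom visitor goal typically comes from a standard sample-size calculation for comparing two conversion rates. The inputs below (baseline conversion rate, minimum detectable effect, significance, power) are common assumptions for this kind of calculation and may not match the exact fields Optimal A/B exposes:

  from math import sqrt, ceil

  def visitors_per_variant(baseline_rate: float, min_detectable_effect: float,
                           z_alpha: float = 1.96, z_power: float = 0.84) -> int:
      # Approximate visitors needed per variant (defaults: 95% significance, 80% power).
      p1 = baseline_rate
      p2 = baseline_rate + min_detectable_effect
      p_bar = (p1 + p2) / 2
      numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                   + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
      return ceil(numerator / (p1 - p2) ** 2)

  # Example: 3% baseline conversion rate, looking for at least a 1-point lift
  print(visitors_per_variant(baseline_rate=0.03, min_detectable_effect=0.01))  # roughly 5,300 per variant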

Why are fields greyed out?

Greyed Out Fields

Default Product Listing Snapshot: You will notice that all fields are greyed out in the Default Product Listing/Snapshot view. That is because the Snapshot view’s only purpose is to show you what the product listing looked like at the time the test was created:

Test Variants: Fields are greyed out within a test variant window because Optimal A/B will not be changing or testing those variables. For example, if you are running an “Image test”, the Title, Description, and Price fields will all be disabled/greyed out, because the app will not be changing those variables once you begin running the image test:

Why do you need my product cost for Price Tests?

Importance of “Cost” in Price tests

We believe the most effective way to determine the success of different price points is to identify the price variant (price point) that yields the highest “Average Profit Per View”, which is our default metric for Price tests. To calculate this, we must know your product’s cost. Don’t worry, we won’t share your cost information with your customers, or anyone else.

What are Custom Parameter End Goal tests?

Custom Parameter tests allow you to fine-tune the variables below so that you can pinpoint the exact sample size (visitor goal) that each variant must achieve for the test to be considered complete:

Test results

What do the test analytics mean?

Avg Revenue Per View: The average revenue generated by a single view for a given test variant. This is the default metric used to determine the winning variation for Image, Title, and Description tests.

Avg Profit Per View (only available for Price tests): The average profit generated by a single view for a given variant. This is the default metric used to determine the winning variation for Price tests.

Units Sold: The total number of units of the product sold for each test variant.

Conversion Rate: The percentage of product page views that lead to a purchase, for each test variant.

Cart Adds: The total number of times the product has been added to the cart for each test variant.

Avg Order Value: The average dollar value of each order that includes the test product.

Unique Views: The number of unique visitors that land on the product page for a given test variant.
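
For reference, here is a minimal sketch of how these per-variant metrics relate to raw counts. The numbers are made up, and the way revenue and orders are attributed to a variant is simplified here; the app computes all of this for you:

  # Hypothetical raw counts for one test variant
  unique_views = 1600
  cart_adds = 95
  orders = 28                  # orders that included the test product
  units_sold = 31
  product_revenue = 1240.00    # revenue from the test product under this variant
  product_profit = 460.00      # profit from the test product (Price tests; requires cost)
  order_value_total = 1530.00  # combined value of the orders that included the product

  avg_revenue_per_view = product_revenue / unique_views  # ~0.78
  avg_profit_per_view = product_profit / unique_views    # ~0.29 (Price tests only)
  conversion_rate = orders / unique_views                # ~1.75%
  avg_order_value = order_value_total / orders           # ~54.64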

How does "Selecting Winners" work?

If a test has already started running (currently or at some point), you can “Select a Winner”. Selecting a winner will end running tests and immediately change the product listing to the selected variant. This variant will remain indefinitely unless you re-activate the test later or change the product listing outside of the app. To select a winner, click “View Results” and then “Select Winner”:

Highlighting Best Performing Variants?

Once a test starts receiving activity, Optimal A/B will automatically highlight the best performing variant in green:

Price tests will default to highlighting the test variant with the highest “Average Profit Per View”.

All other tests will default to highlighting the test variant with the highest “Average Revenue Per View”.

You can also change which variant is highlighted in green to indicate the best performing variant using other metrics:

Current Variant in use indicator?

The “Current variant in use” icon lets you know which test variant is currently in use on your product listing:


View Variant Links?

The “View Variant” link allows you to quickly view how each test variant was setup:


Dashboard

Pausing/ Deactivating Tests?

Click the gear icon of a running test to find the “Pause/Deactivate” option:

Pausing a test indefinitely stops it from cycling through test variants, and we will stop collecting data for the test. When you pause a test, you will need to tell our app whether we should revert the test variable back to how it was when you started the test (Default Variant) or leave the current test variant in place (Current Variant):

You can quickly tell which variant is currently in use by this indicator:

To re-activate a test, click the gear icon and select “Resume/re-activate”:

How do I "Delete" tests?

Click the gear icon to find the “Delete” test option:

Deleting a test will clear all data and test variants associated with that test.

When you delete a test, you will need to tell our app whether we should revert the product listing back to how it was when you started the test (Default Variant) or leave the current test variant in place (Current Variant):

You can quickly tell which variant is currently in use by this indicator:

How do I "Copy" tests?

Click the gear icon to find the “Copy” test option:

How do I "Edit" tests?

You can only edit tests that have not started running yet. This would include tests with a status of “Draft” and tests with a status of “Scheduled”. To edit a test, you will find the option under the gear icon:

Can I "Re-activate" tests?

You can only “Resume/re-activate” tests that have not reached their visitor goal yet. Re-activating a test will cause it to resume cycling through your test variants and we will start collecting data again for each test variant.  Click the gear icon to resume/re-activate the test:

Start Date Link?

Clicking the “Start Date” link for a running test will show when the test is scheduled to change variants next and how often the app cycles through your test variants:

Clicking the “Start Date” link for a test that is NOT running will show what time the test started running and how often the app is set up to cycle through your test variants.

Test Conflict/New Test Conflict?

If you received a test conflict error, this means that you are trying to schedule or run more than one test at a time for the same product. Running or scheduling more than one test for the same product can lead to multiple variable changes during the duration of a test, which would make it impossible to know which variable change helped or hurt the overall test performance. Instead, Optimal A/B requires that you only test one variable at a time for each product, leaving all other variables the same.

Change Winner Conflict?

If you receive the “Change Winner Conflict” error message, this means you are trying to “Select a Winner” for a product that currently has a Running test (for a different variable). This would lead to multiple variable changes that overlap during the duration of the test and would make it impossible to know which variable change helped or hurt the overall test performance. This also means you should never modify a product listing outside of the app when you have a running test for that same product (for the same reason).