Split Testing

Test different advertising strategies on mutually exclusive audiences to see what works best. The API automates audience division, ensures there is no overlap between groups, and helps you test different variables. Test the impact of different audience types, delivery optimization techniques, ad placements, ad creative, budgets, and more. You or your marketing partner can create, initiate, and view test results in one place. See the Ad Study Reference.

Guidelines

  • Define KPIs with your marketing partner or internal team before you create a test.
  • Confidence Level: Determine this before creating a test. Tests with larger reach, longer schedules, or higher budgets tend to deliver more statistically significant results.
  • Select only one variable per test. This helps determine the most likely cause of difference in performance.
  • Comparable Test Sizes: When you test volume metrics, such as number of conversions, scale your results and audience sizes so that both test groups are comparable.
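
The "comparable test sizes" guideline above can be sketched as follows. This is an illustrative helper, not part of the Marketing API: it normalizes a volume metric, such as conversions, by audience size so two unevenly sized test cells can be compared per thousand people reached.

```python
def conversions_per_thousand(conversions, audience_size):
    """Scale a raw conversion count to conversions per 1,000 people reached."""
    if audience_size <= 0:
        raise ValueError("audience_size must be positive")
    return conversions * 1000 / audience_size

# Cell A reached twice as many people as cell B, so raw counts alone mislead:
# A's 400 conversions are a lower per-person rate than B's 300.
rate_a = conversions_per_thousand(conversions=400, audience_size=200_000)  # 2.0
rate_b = conversions_per_thousand(conversions=300, audience_size=100_000)  # 3.0
```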

Test Restrictions

  • Max concurrent studies per advertiser: 100
  • Max cells per study: 100
  • Max ad entities per cell: 100
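
A quick client-side check against the limits above can save a rejected API call. This is a hypothetical helper (the function and constant names are our own, not Marketing API fields); it validates a planned study's structure before you build the request.

```python
# Documented limits for split tests (see Test Restrictions above).
MAX_CELLS_PER_STUDY = 100
MAX_AD_ENTITIES_PER_CELL = 100

def validate_study(cells):
    """cells: list of lists, each inner list holding one cell's ad entity IDs."""
    if len(cells) > MAX_CELLS_PER_STUDY:
        raise ValueError(
            f"study has {len(cells)} cells; limit is {MAX_CELLS_PER_STUDY}")
    for i, entities in enumerate(cells):
        if len(entities) > MAX_AD_ENTITIES_PER_CELL:
            raise ValueError(
                f"cell {i} has {len(entities)} ad entities; "
                f"limit is {MAX_AD_ENTITIES_PER_CELL}")

# Two cells with one ad set each: within every limit, so no error is raised.
validate_study([["<AD_SET_ID_A>"], ["<AD_SET_ID_B>"]])
```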

Variable Testing

While you can test many different types of variables, we recommend you only test one variable at a time. This preserves the scientific integrity of your test, and helps you identify the specific difference that drives better performance.

For example, consider a split test with ad set A and ad set B. If A uses conversions as its delivery optimization method and automatic placements, while B uses link clicks for delivery optimization and custom placements, you cannot determine whether the different delivery optimization methods or the different placements drove better performance.

By contrast, if both ad sets used conversions for delivery optimization but had different placements, you would know that placement strategy was responsible for any difference in performance.

To set up this test at the ad set level:

curl \
-F 'name="new study"' \
-F 'description="test creative"' \
-F 'start_time=1478387569' \
-F 'end_time=1479597169' \
-F 'type=SPLIT_TEST' \
-F 'cells=[{name:"Group A",treatment_percentage:50,adsets:[<AD_SET_ID>]},{name:"Group B",treatment_percentage:50,adsets:[<AD_SET_ID>]}]' \
-F 'access_token=<ACCESS_TOKEN>' \
https://graph.facebook.com/<API_VERSION>/<BUSINESS_ID>/ad_studies

Testing Strategies

You can test two or more strategies against one another. For example, do ads with the conversions objective have a greater impact on your direct response marketing than ads with a website visits objective? To set up this test at the campaign level:

curl \
-F 'name="new study"' \
-F 'description="test creative"' \
-F 'start_time=1478387569' \
-F 'end_time=1479597169' \
-F 'type=SPLIT_TEST' \
-F 'cells=[{name:"Group A",treatment_percentage:50,campaigns:[<CAMPAIGN_ID>]},{name:"Group B",treatment_percentage:50,campaigns:[<CAMPAIGN_ID>]}]' \
-F 'access_token=<ACCESS_TOKEN>' \
https://graph.facebook.com/<API_VERSION>/<BUSINESS_ID>/ad_studies

Evaluating Tests

To determine which test performs best, choose the strategy or variable that achieves the highest efficiency on the metric tied to your campaign objective. For example, in a test of the conversions objective, the ad set that achieves the lowest cost-per-action (CPA) performs best.
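
As an illustrative sketch only: the CPA comparison described above can be computed from each cell's spend and conversion count. The field names and result format here are assumptions, not actual API output.

```python
def cpa(spend, actions):
    """Cost per action; a cell with zero conversions gets an unbounded CPA."""
    if actions == 0:
        return float("inf")
    return spend / actions

def best_cell(results):
    """results: dict of cell name -> (spend, conversions). Lowest CPA wins."""
    return min(results, key=lambda name: cpa(*results[name]))

# Both cells spent the same, but Group B converted more, so its CPA is lower:
# Group A CPA = 5.00, Group B CPA = 4.00.
winner = best_cell({"Group A": (500.0, 100), "Group B": (500.0, 125)})
```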

Avoid evaluating tests with uneven test group sizes or significantly different audience sizes. In that case, scale the size and results of one split so that it is comparable to your other tests. If your budget is not proportionate to the size of the test group, consider the volume of outcomes in addition to efficiency.

You should also use an attribution model that makes sense for your business, and agree on it before initiating a split test. If your current attribution model needs reevaluation, contact your Facebook representative to run a lift study, which can show the true causal impact of your conversion and brand marketing efforts.

Budgeting

You can use custom budgets with your split tests, and you can choose to test different budgets against each other. However, budget directly impacts reach for your test groups. If your test groups end up with large differences in reach or audience size, increase budget to improve your results and make your test comparable.
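
One simple way to keep reach comparable, sketched here under our own assumptions (this helper is not part of the Marketing API), is to split a total budget across cells in proportion to each cell's audience size:

```python
def proportional_budgets(total_budget, audience_sizes):
    """Split total_budget across cells in proportion to audience size."""
    total = sum(audience_sizes.values())
    if total <= 0:
        raise ValueError("audience sizes must sum to a positive number")
    return {name: total_budget * size / total
            for name, size in audience_sizes.items()}

# Group A's audience is three times larger, so it receives three times
# the budget: 750.0 vs. 250.0 out of a 1000.0 total.
budgets = proportional_budgets(1000.0, {"Group A": 300_000, "Group B": 100_000})
```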