Split Testing

Available October 22, 2016. The Split Testing API allows you to test different advertising strategies on mutually exclusive audiences to see what works and what doesn't. The API automates audience division, ensures no overlap between groups and allows you to test variables like audience types, delivery optimization techniques, ad placements, ad creative, budgets and more. You no longer have to manually build unique audiences and run your test campaigns separately. The API allows you or your Facebook Marketing Partner to create, initiate and view the results of a test all in one place.

Best Practices

  • Define KPIs with your Marketing Partner or internal team before creating a split test.
  • Select only one variable per test. This will help you understand the most likely reason for differences in performance.
  • Determine an acceptable confidence level before creating a test. Tests with larger reach, longer schedules, or higher budgets tend to deliver more statistically significant results.
  • When testing volume metrics, such as the number of conversions, remember to scale results or populations so the test groups are comparable.

Test Restrictions

  • Max concurrent studies per advertiser = 100
  • Max cells per study = 100
  • Max ad entities per cell = 100

Variable Testing

While the Split Testing API allows you to test many different types of variables, we recommend you test only one variable at a time. Doing so preserves the scientific integrity of your test and allows you to home in on the specific difference that drove better performance.

For example, consider a split test at the ad set level: ad set A vs. ad set B. If ad set A uses conversions as its delivery optimization method and automatic placements, while ad set B uses link clicks for delivery optimization and custom placements, you'd be unable to tell whether the different delivery optimization methods or the different placements drove better performance.

In the same example, if both ad sets used conversions for delivery optimization but had different placements, you'd know that placement strategy could be the only thing responsible for differences in performance.

Here's how to set up this test at the ad set level:

curl \
-F 'name="new study"' \
-F 'description="test creative"' \
-F 'start_time=1478387569' \
-F 'end_time=1479597169' \
-F 'type=SPLIT_TEST' \
-F 'cells=[{name:"Group A",treatment_percentage:50,adsets:[<AD_SET_ID>]},{name:"Group B",treatment_percentage:50,adsets:[<AD_SET_ID>]}]' \
-F 'access_token=<ACCESS_TOKEN>' \
https://graph.facebook.com/<API_VERSION>/<BUSINESS_ID>/ad_studies
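
The creation call responds with the new study's ID. As a minimal sketch of how you might read the study back to confirm its configuration, assuming a standard Graph API GET on the study node and the same cells fields used in the creation payload (<STUDY_ID> stands for the ID returned above):

# Hypothetical read-back; field names mirror the creation payload and may vary by API version.
curl -G \
-d 'fields=name,start_time,end_time,cells{name,treatment_percentage,adsets}' \
-d 'access_token=<ACCESS_TOKEN>' \
https://graph.facebook.com/<API_VERSION>/<STUDY_ID>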

Testing Strategies

You may also wish to test two or more overarching strategies against each other. For example, does a campaign with a conversions objective have a greater impact on your direct response marketing efforts than a campaign that drives website visits?

Here's how to set up such a test at the campaign level:

curl \
-F 'name="new study"' \
-F 'description="test creative"' \
-F 'start_time=1478387569' \
-F 'end_time=1479597169' \
-F 'type=SPLIT_TEST' \
-F 'cells=[{name:"Group A",treatment_percentage:50,campaigns:[<CAMPAIGN_ID>]},{name:"Group B",treatment_percentage:50,campaigns:[<CAMPAIGN_ID>]}]' \
-F 'access_token=<ACCESS_TOKEN>' \
https://graph.facebook.com/<API_VERSION>/<BUSINESS_ID>/ad_studies

Winning Tests

When determining the winner of a test, choose the strategy or variable that delivers the best efficiency on the metric tied to your campaign objective. For example, in a test with conversions as its objective, the ad set that achieves the lowest cost per action (CPA) is the winner.
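
One way to pull the numbers for this comparison is the Insights edge on each ad set in the test. Here's a minimal sketch, assuming the standard Insights fields spend, actions and cost_per_action_type and an illustrative date range that matches the study schedule; swap in the fields that map to your own KPIs:

# Hypothetical results pull for one cell's ad set; repeat for each cell and compare cost per action.
curl -G \
-d 'fields=spend,actions,cost_per_action_type' \
-d 'time_range={"since":"2016-11-06","until":"2016-11-19"}' \
-d 'access_token=<ACCESS_TOKEN>' \
https://graph.facebook.com/<API_VERSION>/<AD_SET_ID>/insights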

Be wary if your test uses uneven test group sizes or has vastly different audience sizes. In these instances, it may make sense to scale the size and results of one split so that it is directly comparable to the others. If your budget isn't proportionate to the size of the test group, consider the volume of outcomes in addition to efficiency.
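
For example, here's a small sketch, using hypothetical numbers, of normalizing raw conversion counts to a rate per 1,000 people reached before comparing two splits of different sizes:

# Hypothetical results: Group A reached 2,000,000 people with 1,200 conversions,
# Group B reached 1,000,000 people with 700 conversions.
echo "scale=2; 1200 * 1000 / 2000000" | bc   # Group A: .60 conversions per 1,000 people reached
echo "scale=2; 700 * 1000 / 1000000" | bc    # Group B: .70 conversions per 1,000 people reached

On raw counts Group A looks stronger, but per person reached Group B actually converts at a higher rate.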

It’s important to use an attribution model that makes sense for your business, and to have agreed upon it internally before initiating a split test. If you believe your current attribution model needs to be reevaluated, contact your Facebook representative to run a lift study. Lift studies can show the true causal impact of your conversion and brand marketing efforts.

Budget

You may customize the budgets of your split tests, and even choose to test different budgets against each other. However, be aware that budget directly impacts the reach of your splits. If you end up with splits that have vast differences in reach or audience size, you may need to scale your results to make the splits comparable.