The 10-Step Guide to Meta Ads Creative Testing πŸ’‘

App Masters
7 May 2024 · 09:07

Summary

TL;DR: The video outlines a 10-step creative testing framework for Meta app-install campaigns, emphasizing data-driven insights to refine ad creatives. It suggests starting with a manageable number of concepts and variations, setting a baseline hypothesis, and defining the testing environment. It also covers event optimization, budget considerations, and the importance of ongoing testing to identify successful ad creatives. Finally, it discusses how new advertisers can overcome budget constraints, and highlights the use of Aggregated Event Measurement (AEM) campaigns and hooks in creatives to boost performance.

Takeaways

  • πŸ“ The importance of creative testing in Meta for app installs campaigns is emphasized, highlighting the need to gather data on creatives and define clear winners and losers for continuous iteration.
  • πŸ” The suggestion to start with a manageable number of creatives, typically three concepts with five variations each, to ensure focused and effective testing.
  • πŸ“ The necessity of establishing a baseline and forming hypotheses to understand if new creative concepts will outperform existing ones, considering elements like background or character changes.
  • 🌐 The need to define the environment for testing, including OS, language, and market version, to ensure consistency in the testing conditions.
  • 🎯 The significance of setting clear objectives and optimization events for the creative testing, determining how many variants to run and the frequency of ongoing tests.
  • πŸ’° The budget is a crucial factor in creative testing, with the cost per acquisition or install being pivotal in determining the worth of the test.
  • ⏳ The recommendation to run creative tests for at least one week, accounting for Android's seven-day attribution window and the SKAN (SKAdNetwork) postback delay of 48–72 hours on iOS.
  • πŸ“Š The process of waiting for the attribution window to close before drawing conclusions and iterating over the winners, with the encouragement to select multiple winners and continue testing.
  • πŸ“ˆ The strategy of using aggregated event measurement solutions — ADC on TikTok and AEM campaigns on Meta — to overcome budget constraints and ensure effective testing.
  • πŸ“‰ An example provided where a test campaign achieved a 12% reduction in CPI and 44% reduction in cost per trial compared to the best-performing ads, demonstrating the impact of testing.
  • 🎯 The use of custom columns and dimensions in reporting to gain insights per concept and identify promising creative directions.
  • πŸ”‘ The role of psychological hooks in creatives to capture attention and the importance of understanding the target audience's psychology for effective advertising.

Q & A

  • What is the purpose of creative testing in Meta's advertising platform?

    -The purpose of creative testing is to gather data on different creatives, understand which ones perform better, and iterate on the successful ones while discarding those that do not work well.

  • How many creatives are typically involved in the initial testing phase according to the transcript?

    -Typically, the initial testing phase involves three concepts with five variations each, totaling 15 creatives.

  • What is the importance of defining a baseline before starting creative testing?

    -Defining a baseline helps to understand the current situation and the hypothesis for testing new concepts, which is crucial for determining if the new creatives will work better or not.

  • What factors should be considered when defining the environment for creative testing?

    -Factors to consider include the operating system, language, market, and version to ensure that the testing is fair and consistent.

  • Why is it important to have an ongoing test for creatives?

    -Ongoing tests ensure that you continuously gather insights and can make informed decisions about which creatives to keep iterating on and which to discard.

  • What is the recommended minimum duration for running a creative test?

    -It is recommended to run a creative test for at least one week to gather enough data for meaningful insights.

  • How does the budget affect the outcome of creative testing?

    -The budget is crucial as it defines the scope of testing and the number of creatives that can be run. It also impacts the cost per acquisition or install.

  • What is the significance of the attribution window in the context of creative testing?

    -The attribution window is important because it determines the period after which the data is considered final and can be used for making decisions about the creatives.

  • What is the role of feedback and iterations in creative testing?

    -Feedback and iterations are essential for refining the creatives based on the test results, selecting winners, and continuously improving the advertising strategy.

  • How can a newbie with a constrained budget overcome the challenges of running Meta ads?

    -A newbie can overcome budget constraints by using aggregate event measurement solutions like ADC in TikTok or AEM campaigns in Meta, which allow for more efficient testing and learning.

  • What are some examples of hooks used in creatives that perform well?

    -Examples of hooks include statements that provoke curiosity or surprise, such as 'This is why your current skincare doesn't work' or 'This is going to blow your mind.'

  • How can advertisers set up an Aggregated Event Measurement (AEM) campaign in Meta?

    -Setting up an AEM campaign in Meta involves selecting the AEM toggle when creating a new campaign for iOS, which automatically enables aggregated event measurement.

  • Is it necessary to have a Mobile Measurement Partner (MMP) for AEM campaigns, or is the Meta SDK sufficient?

    -While an MMP can be used for more comprehensive data, if the promotion is solely within Meta, the SDK is sufficient for most advertisers.

Outlines

00:00

πŸ“Š Creative Testing Framework for Meta Ads

This paragraph introduces a 10-step blueprint for creative testing in Meta, developed by Admiral Media, focusing on app-install campaigns. The framework aims to gather data on creatives, identify winners and losers, and iterate based on insights. It suggests starting with three concepts, each with five variations, and emphasizes the importance of a baseline hypothesis. The environment setup, including OS, language, and market version, is crucial for fair testing. The paragraph also discusses event optimization, budget considerations, and the frequency of testing. It advises running tests for at least a week, accounting for the attribution window and the SKAN (SKAdNetwork) postback delay on iOS. The goal is to continuously test and refine creatives to improve ad performance.

05:01

πŸ’° Budget Management and Testing Strategies in Meta

The second paragraph delves into budget considerations for testing in Meta, suggesting a daily investment calculation based on expected installs and cost per install (CPI). It recommends adjusting the daily budget for each creative asset and provides a strategy for potential budget changes after the initial testing phase. The speaker addresses concerns about the high costs of testing on Meta and introduces Aggregated Event Measurement (AEM) as a solution, which is also available on TikTok as ADC. The paragraph shares an example of a successful test campaign that achieved lower CPI and cost per trial by grouping ads by concept and optimizing towards installs. It also touches on the use of custom dimensions for reporting and the importance of aiming for a 1% click-through rate. Additionally, it discusses the use of hooks in creatives to capture audience attention and the setup process for AEM in Meta.
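The budget arithmetic described above can be sketched in a few lines. This is only an illustration using the video's example figures ($7.2 CPI, 88 required daily installs, five concepts, a seven-day test); the function name and rounding are my own, not anything from Meta's tooling:

```python
def creative_test_budget(cpi: float, daily_installs: int,
                         num_concepts: int, days: int) -> dict:
    """Budget arithmetic per the video: total daily spend is
    CPI x the daily installs needed for optimization, split
    evenly across concepts (one ad set per concept)."""
    daily_total = cpi * daily_installs           # e.g. 7.2 * 88 = 633.6
    per_asset_daily = daily_total / num_concepts # e.g. 633.6 / 5 ~= 127
    return {
        "daily_total": round(daily_total),
        "per_asset_daily": round(per_asset_daily),
        "test_total": round(daily_total * days),
    }

# Video example: $7.2 CPI, 88 installs/day, 5 concepts, 7-day test
print(creative_test_budget(7.2, 88, 5, 7))
# -> {'daily_total': 634, 'per_asset_daily': 127, 'test_total': 4435}
```

This also reproduces the ~$127 per-ad-set daily figure the speaker mentions when discussing lowering the budget after the first week.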

Keywords

πŸ’‘Creative Testing

Creative testing refers to the process of evaluating different advertising creatives to determine which ones resonate best with the target audience. In the video, it's about launching successful creative tests on the Meta platform, which is crucial for optimizing app installs campaigns. The script discusses the importance of testing various concepts and variations to identify the most effective creatives.

πŸ’‘Meta

Meta, in this context, refers to the company formerly known as Facebook, which offers a platform for advertising and app installs. The video script outlines a framework for testing creatives specifically designed for Meta's advertising system, emphasizing the platform's capabilities and the strategies needed for successful campaigns.

πŸ’‘App Installs Campaigns

App installs campaigns are advertising efforts aimed at driving users to download and install a particular app. The video discusses the framework for creative testing within Meta to enhance the performance of these campaigns, highlighting the need for continuous iteration and optimization based on testing insights.

πŸ’‘Concepts and Variations

In the script, concepts refer to the initial ideas for ad creatives, while variations are the different executions of these ideas. The video mentions that typically, clients start with three concepts, each with five variations, totaling 15 creatives for testing. This approach helps in understanding what aspects of the creatives contribute to their success.

πŸ’‘Baseline

The baseline represents the starting point or the current situation that leads to the decision to test something new. In the video, establishing a baseline is crucial for understanding how new creative concepts might perform in comparison to existing standards or benchmarks within the advertising campaign.

πŸ’‘Hypothesis

A hypothesis in the context of creative testing is an educated guess about how changes in creative elements might affect performance. The script uses the example of changing backgrounds or characters to illustrate how hypotheses guide the testing process and help in predicting potential improvements.

πŸ’‘Environment

Environment, in the context of the video, refers to the settings within which the creative tests are conducted, such as the operating system for app installs, language, and market version. The script emphasizes the importance of maintaining consistent settings to ensure fair and accurate testing.

πŸ’‘Events and Optimization

Events and optimization pertain to the specific actions or outcomes that the campaign aims to achieve, such as app installs or in-app events. The video script discusses setting up these events for testing and determining the number of creative variants to run, which is a key part of the ongoing testing process.

πŸ’‘Budget

Budget is a critical factor in creative testing, as it defines the scope and duration of the tests. The script mentions that the budget should be considered in relation to the cost per acquisition or install, and it provides examples of how to calculate the daily budget for each creative asset based on expected installs.

πŸ’‘Attribution Window

The attribution window is the period after an ad is viewed during which a conversion (like an app install) is counted towards the ad's performance metrics. The video script explains the importance of waiting for this window to close before analyzing test results, with specific mention of the different windows for Android and iOS.
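The waiting rule described above can be sketched as a small helper that computes the earliest date to draw conclusions. The delay figures are only those quoted in the video (seven days for Android, up to 72 hours for iOS SKAN postbacks); actual windows depend on your campaign settings and SKAN version:

```python
from datetime import date, timedelta

def earliest_readout(test_end: date, platform: str) -> date:
    """Earliest date to analyze results, per the video's guidance:
    wait out Android's 7-day attribution window, or the SKAN
    postback delay (up to ~72 hours) on iOS. Figures are the
    video's examples, not universal constants."""
    delays = {"android": timedelta(days=7), "ios": timedelta(hours=72)}
    return test_end + delays[platform.lower()]

# A test ending 7 May 2024:
print(earliest_readout(date(2024, 5, 7), "android"))  # 2024-05-14
print(earliest_readout(date(2024, 5, 7), "ios"))      # 2024-05-10
```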

πŸ’‘Feedback and Iteration

Feedback and iteration are essential components of the creative testing process, allowing advertisers to learn from the results and refine their strategies. The video emphasizes the importance of selecting winning creatives based on test outcomes and continuously testing new ideas to improve ad performance.

πŸ’‘Aggregated Event Measurement (AEM)

Aggregated Event Measurement is a solution developed by Meta to help advertisers measure the performance of their ads more accurately. The script discusses how AEM can be used in the context of testing, especially when dealing with constraints like limited budgets, to ensure that the data collected is reliable and actionable.

πŸ’‘Mobile Measurement Partner (MMP)

A Mobile Measurement Partner is a third-party service that provides attribution and analytics for mobile advertising campaigns. The video script mentions that while MMPs can be used in conjunction with Meta's SDK, there can be issues with reporting, and it may be sufficient to rely solely on Meta's SDK for campaigns promoted exclusively on the platform.

πŸ’‘Creative Hooks

Creative hooks are elements within an ad designed to capture the audience's attention and interest. The script provides examples of hooks, such as intriguing statements or visual effects, that are used to engage viewers and make the ad memorable. These hooks are crucial for the success of creatives in capturing the target audience's psychology.

Highlights

The importance of creative testing in Meta for app installs campaigns and the 10 steps framework by Admiral Media.

The necessity of data collection on creatives to identify new concepts and insights for further iteration.

Defining clear winners and losers in creative testing to iterate and remove ineffective elements.

Starting with three concepts and five variations each for a total of 15 creatives to test.

Developing creatives based on a baseline hypothesis to test new concepts against existing ones.

Defining the testing environment including OS, language, and market version for consistency.

Setting up events and optimization goals for creative testing with variants and ongoing tests.

The significance of budget in creative testing and its impact on the cost per acquisition or install.

Recommendation to run creative tests for at least one week considering the attribution window for Android and iOS.

Creating feedback and iterations over the winners to continuously improve advertisement performance.

Meta's Aggregated Event Measurement (AEM) campaigns and TikTok's equivalent, the ADC (Advanced Dedicated Campaign), as tools for constrained budgets.

The impact of budget constraints on outcomes and how to leverage Meta's solutions for better results.

An example of a test campaign with a limited budget that achieved 12% lower CPI and 44% lower cost per trial.

Using custom columns and naming conventions for better classification and insights in reporting.

The psychological aspect of creative hooks and their effectiveness in capturing audience attention.

Setting up Aggregated Event Measurement (AEM) in Meta with a simple toggle in the campaign creation process.

The debate on the necessity of a Mobile Measurement Partner (MMP) versus relying on the Meta SDK for reporting.

The challenges with MMP reporting and the preference for using the Meta SDK for certain setups.

The importance of testing and continuous improvement in advertising strategies for better performance.
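The naming-convention and 1% click-through-rate ideas in the highlights above can be sketched as a small reporting helper. This is a hypothetical illustration — the ad names, metrics, and parsing scheme are my own assumptions, not Meta's reporting API:

```python
# Hypothetical sketch: parse a "concept_format_version" ad-naming
# convention and flag concepts whose aggregate CTR clears the 1% bar
# the video treats as promising. All data below is made up.
from collections import defaultdict

def concept_report(rows, ctr_threshold=0.01):
    """rows: (ad_name, impressions, clicks) tuples with names like
    'concept1_reels_versionA'. Returns {concept: (ctr, promising)}."""
    agg = defaultdict(lambda: [0, 0])
    for name, impressions, clicks in rows:
        concept = name.split("_")[0]   # first token = concept label
        agg[concept][0] += impressions
        agg[concept][1] += clicks
    return {c: (clicks / imps, clicks / imps >= ctr_threshold)
            for c, (imps, clicks) in agg.items()}

rows = [
    ("concept1_reels_versionA", 10_000, 130),
    ("concept1_reels_versionB", 8_000, 60),
    ("concept2_stories_versionA", 12_000, 90),
]
print(concept_report(rows))
```

Grouping by the first name token is what lets a custom dimension in the Reports tab surface per-concept insights rather than per-ad noise.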

Transcripts

[00:00] One — you want to kick it off with your creative testing framework: the 10 steps you have to take into account when launching a successful creative test in Meta, and how to properly use it for your app installs campaigns on this platform. So this is the blueprint to creative testing in Meta by Admiral Media. When we are testing, what we want is to get data on creatives, because there are cases where we have new concepts and creatives but aren't able to run them all and get insights on them. After that we're going to define clear winners and losers, so we can keep iterating on what worked and remove what didn't.

"Are you saying we need hundreds of creatives to start testing — about 50 or more?" Nothing that massive. Normally what I have with my clients is three concepts, each containing five variations, so that means 15 in total.

[01:03] When testing, first of all we need to think about how to develop these creatives properly, and we have to start from a baseline: what is the situation I'm facing that led me to think of testing something? The hypothesis is going to help me understand whether these new concepts will work better or not — for example, if I change the background or the character, would that make the difference? Once I have these two variables clear, I need to define the environment: the OS (in this case, for app installs), the language I'll be using, the market, and the version as well, because we need to make sure everything runs under the same settings to keep the creative test fair.

The fourth step is about events and optimization: what's the event I want to achieve when setting this up for testing, and how many creative variants am I going to run? For example, will it be five variants from a single concept, or three variants from five different concepts? It's about playing with the number of creatives per test, but the idea is to have ongoing tests happening frequently — every week, or several tests running once or twice a month.

[02:22] It's important to be clear that whenever you're testing, budget is the crucial factor that defines whether the test is worth it or not, because it also depends on the cost per acquisition or install. I recommend running every creative test for at least one week, but bear in mind that Android has a seven-day attribution window, and for iOS you need to take into account the SKAN (SKAdNetwork) postback delay, which is 48 to 72 hours depending on your SKAN version. Once I have defined my timeframe — let's suppose one week — and I run the test, I wait for the attribution window to close (in this case, seven days) before I report back, and then I have my conclusions. Then I can create feedback and iterations over the winners — and don't be afraid of selecting two or three winners. As David Ogilvy said, never stop testing, because testing will make your advertising better. So it's always about testing, and that's why we came up with these steps.

[03:25] In the case of Meta, if you're running a test campaign and optimizing towards an in-app event, you need to be getting 88 daily installs. If you're testing for iOS — suppose I'm picking the auction approach — I want to run a separate campaign, because I already tried to run my new creatives in the usual campaign and of course those creatives aren't getting any data. I need to do something different, so I launch a whole new campaign, and this is where this chart comes in. This is the CPI I have for the US based on the SKAN data in Meta for the last 14 days: $7.2. I want to run five different creative concepts, each containing five variations. For example — let's talk about a subscription app — my first concept focuses on the benefits for sleeping, my second on the benefits for breathing, the next on work, the next on exercise, and the next on anxiety. Each of these concepts will contain different variations. Try not to make the variants very different from each other, because that won't help you understand what the winning factor is inside each concept.

[04:45] If I do a quick calculation, we would need to invest daily 7.2 × 88, which equals about $634 a day. That's a lot, but it's within the range many experts also recommend. Once you have this budget, you divide it by five, and that gives you the daily budget for each asset. If you then multiply by the number of days the campaign will be active, it gives you the total spend you need to account for in order to run this hypothetical test for seven days.

"Only after seven days — if, let's say, we hit that threshold — can we then lower the daily budget?" You can also increase it, because you already have some learnings in the campaign. Or you can decrease it: in this case, from $127 you might be able to go down to around $100 or $90.

[05:43] "My concern with running Meta ads is the amount of money needed to do testing, as Juan just illustrated, before I can get a good ROAS. How can a newbie overcome this?" The answer is the Aggregated Event Measurement (AEM) solution that Meta has developed — and TikTok has also launched one a couple of weeks ago, called ADC (Advanced Dedicated Campaign), which works like an AEM campaign.

"So how does a constrained budget affect outcomes?" Once you have the SKAN numbers, you wait for the attribution window to close, and then you'll have the data you need to actually pick the winners. That way you can upload them into the business-as-usual campaign, and you can start testing knowing you can be more confident about what you're running.

[06:30] Here's an example: this was a test I ran for seven days where I spent just $1K, ran seven different ads, and optimized towards installs. What I did was group them by concept. This is an AEM campaign, but I had two concepts grouped here — in this case F and H, and G and D — so they sum to the $200 daily I was assigning per ad set. When we look at the information at the ad level, of course, this makes the difference. "Were you doing this under one ad group with a limited budget?" This is one campaign but five ad groups, so each ad group had $200 of spend. After the first week we were able to achieve 12% less CPI and 44% less cost per trial compared to the best-performing ads.

[07:21] In the Reports tab there are some custom columns you can create. If you have a good naming convention — concept one, reels, version A, and so on — you'll be able to classify them in the reporting tab through custom dimensions, and if you do that, you'll have insights per concept.

"Are you underlining the 1% because that's the percentage we're trying to aim for — a 1% click-through rate?" Exactly. This is very good, and also a reason to consider a concept promising.

[07:52] "Juan, can you give examples of hooks used in creatives that are performing well?" "This is why your current assets — or your current skincare — don't work." This will basically catch people, because they want to know what they're doing wrong. Another one: "This is going to blow your mind." There are also some visual effects being used lately for hyper-casual games, where you just start seeing some crazy animation and then the ad starts — you're already watching the ad after five seconds and you don't realize it. Play with psychology — I think this is the most valuable asset: play with the psychology of the target audience.

[08:23] "How do we set up AEM in Meta?" It's just a toggle you can select when you're creating a new campaign for iOS — you just click the toggle and it will be Aggregated Event Measurement, and that's it.

"Do you still need an MMP for that, or is the Facebook SDK good enough?" The MMPs are having issues with the AEM reporting, and normally they report back the SKAN data. Most advertisers aren't seeing their Meta SDK numbers in their MMP, though some are — so it depends on the setup and the integration. You can use an MMP to get the SKAN data — I would recommend using an MMP — but if you're only promoting in Meta, it's fine to just stay with the SDK.

