If you ask ten people how best to communicate a product or service in a marketing campaign, you will probably get ten opinions. Sounds tedious? It’s actually a good thing. After all, none of us has a monopoly on wisdom, and it can be worthwhile to consider different communicative approaches. That’s the basic idea behind the testing approach we take at Content Garden in our Native Advertising campaigns. In this magazine post, we explain why this is such a hot topic, where the challenges lie and what the possible benefits are.
Think out of the box!
Of course, as an advertiser you carry around a whole backpack full of brand identity, corporate wording and cemented slogans; sometimes there isn’t much room to manoeuvre. And sure, even we in content management often have a hunch about which content might go down particularly well with the target group. However, neither established brand messages nor our gut feeling should always have the last word. So it’s fair to ask: Why not try out several different – and sometimes completely new – approaches? Perhaps we’ll find the golden egg. What we always find, at least, are insights and learnings.
What is testing actually? A definition.
To put it very simply: testing takes place when at least two equally valid variants of an asset are tried out in parallel, in a comparable setting, to see which performs better. A simple example: we show 100 people poster A and poster B and ask which one they like better. Native Advertising, of course, is not about posters, but about different advertorials, teasers, ad creatives or even specific wordings and images. And we don’t even have to ask people for their opinion here – we get the feedback directly through their online click behaviour. This also allows us to test many more options and parameters simultaneously than just A versus B. Potentially, the possibilities are limitless. Nevertheless – or precisely because of this – you have to approach it carefully, because testing does not always make sense.
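For the technically curious: the logic behind comparing two variants can be sketched roughly like this. This is an illustrative simplification with made-up numbers, not our actual tooling – a standard two-proportion z-test that checks whether the difference between variant A and variant B is more than random noise.

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided two-proportion z-test: is A's rate really different from B's?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # pooled rate under the assumption that both variants perform equally
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# hypothetical poll: 58 of 100 preferred poster A, 42 of 100 preferred poster B
z, p = two_proportion_z(58, 100, 42, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value (conventionally below 0.05) suggests the preference is a real signal rather than chance – which is exactly why a testing setup needs a comparable setting and enough participants.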
When and where does testing make sense?
Brand messages have to remain stable to a certain extent so that they ensure identification; that’s why you shouldn’t constantly send out new, changing messages. But there are certain (advertising) settings in which you can continually test opportunities and limits – and Native Advertising offers such a setting on a silver platter. Targeted testing takes hold where the ready-made slogan is of no use: in the preferences of the potential target group. Hardly anywhere can these preferences be tested as efficiently as in online content marketing. Through clearly measurable metrics such as click-through rate, dwell time or bounce rate, we can evaluate which approaches elicit the most response. This is how we gain learnings – for ourselves and for our customers, the advertisers.
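To make those metrics concrete, here is a toy calculation with invented session records (not real campaign data, and not our in-house system): click-through rate over all impressions, average dwell time of those who clicked, and the share of clickers who left after a single page.

```python
from statistics import mean

# hypothetical session records: (clicked_teaser, seconds_on_page, pages_viewed)
sessions = [
    (True, 95, 3), (False, 0, 0), (True, 12, 1),
    (True, 180, 4), (False, 0, 0), (True, 8, 1),
]

impressions = len(sessions)
clicked = [s for s in sessions if s[0]]

ctr = len(clicked) / impressions                              # click-through rate
dwell = mean(s[1] for s in clicked)                           # avg dwell time (s)
bounce = sum(1 for s in clicked if s[2] <= 1) / len(clicked)  # bounce rate

print(f"CTR {ctr:.0%}, dwell {dwell:.0f}s, bounce {bounce:.0%}")
```

Run per content variant, numbers like these make the comparison objective: the variant that draws clicks but bounces everyone immediately tells a very different story from the one people actually stay and read.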
Testing in practice at Content Garden
The extent of our testing possibilities depends, of course, on the specific topic, because some products or services simply have more facets than others. It also depends on how broad the target group is, how established the brand message is and how strict any legal requirements are. Overall, however, our testing palette is huge: it ranges from different content approaches and variations in addressing the target group to targeted checks of how we achieve the most click-outs. A few examples?
Testing level #1: Exploring content approaches
- Promotional or editorial? In the course of advertorial testing, we can write several articles on the same product or topic in different styles. The classic approach is to test a strongly promotional text against one with an editorial touch. A very promotional text focuses primarily on brand messages, puts the product front and centre and tends towards persuasive language. An editorial text, in contrast, is linguistically more neutral: it focuses on information, entertainment or inspiration, revolves around a (brand- or product-related) meta-theme and sometimes approaches the product “via the back door”.
- Storytelling or fact-heavy? These approaches can also lead to very different content – regardless of whether it is promotional or editorial. The test here is whether the readership is primarily interested in factual information, or whether an emotional presentation – for example a relatable story, perhaps with a fictitious buyer persona in the leading role – can win readers’ hearts.
- Which type of text? Every topic can be presented in several very different formats: from the sober informational text to the helpful guide, the popular listicle (“6 things you should know about the testing approach”), the interview and the narrated product test. What do you think attracts the readership the most?
Testing level #2: Addressing target group(s)
- Through linguistic nuances alone, entire articles can be conceived in completely different ways and thus reach different target groups. For example, it can make a big difference whether we address our readers directly or neutrally, or – in German – whether we use “Sie” or “du”. Tonality is also an important factor. It can be worthwhile to write one article in a provocative and humorous way and another – on the same topic – in a sober and respectful way. Who knows what insights will come out of it?
- In the course of our teaser testing, we send several different headlines with different lead texts into the race as standard. Together with a header image, the headline and lead text make up a “teaser” – and we test these teasers against each other. Here, readers’ tendencies can often be measured very clearly through their click behaviour. We learn: Which headlines are most popular? Which wordings fall flat? Do questions, calls-to-action or factual statements work better? Does a teaser have to be particularly “clicky”, or is it better for the specific product to radiate factual restraint? Questions upon questions – teaser testing answers them.
- The image level also plays a decisive role in addressing the target group. To name just a few examples: Do colourful or sober images work better in combination with the headline? High-contrast or homogeneously coloured images? With or without people? Close-up or panorama? Product photos or images with an editorial feel? We will be happy to find out!
Testing level #3: Triggering engagement
Of course, we also want to get our readers to look further into a topic. In other words, we want to generate clicks and click-outs. There is no one-size-fits-all formula for triggering this engagement – so it is always advisable to test different possibilities here as well. These include, for example:
- Ad split tests: We have various ads in our portfolio. Knowing which one works best provides important insights and, among other things, helps with subsequent marketing measures.
- With regard to engagement, we take another look at the image level – under different circumstances than before. Here we can sometimes take completely different paths than with header images. A small example: product cut-outs (isolated product images, which are not suitable as header images on daily news media) can be highly relevant here and are even often our “test winners” in certain sectors – which is why we naturally use them more, and with success, in those sectors. Thank you, testing!
- In ads, too, the testing approach is often a game changer at the linguistic level, and there are always interesting learnings here. Questions could be: Should the tonality of the ads adapt to the article or stand out from it? Is it better to use strongly promotional copy or to remain neutral and informative? Do readers prefer a strong call-to-action, or do they respond better to factuality or even mysterious wording?
- Of course, we also have to consider the content level in terms of engagement. Should we tease out more details about the product, or should we emphasise the familiar brand values once again? Is it worth triggering the “fear of missing out” by talking about limited availability or limited editions? And which product USPs are most likely to encourage clicks? We want to know. That’s why we test!
Non-stop insights: Our technology makes it possible
The beauty of it is that none of these efforts come to nothing with us. We can not only implement testing approaches creatively, but also distribute the content via our publisher network, administer it via our internal system and analyse it, with data support, via our in-house technology. The analysis already takes place during a campaign’s runtime: we optimise the content in real time, discard elements that are not performing well and replace them with new ones where necessary. In short, testing provides us with continuous insights that help us become even better every day.
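The principle of favouring what performs while still leaving poorly performing variants a chance to prove themselves can be sketched with a simple epsilon-greedy rule. This is a generic textbook illustration with hypothetical teaser names and numbers, not a description of our actual technology:

```python
import random

def epsilon_greedy(stats, epsilon=0.1):
    """Mostly serve the variant with the best observed CTR; sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(stats))  # explore: pick any variant at random
    # exploit: pick the variant with the highest clicks-per-view so far
    return max(stats, key=lambda v: stats[v]["clicks"] / max(stats[v]["views"], 1))

# hypothetical running tallies per teaser variant
stats = {
    "teaser_a": {"views": 1000, "clicks": 40},  # 4.0% CTR
    "teaser_b": {"views": 1000, "clicks": 25},  # 2.5% CTR
}

random.seed(42)
picks = [epsilon_greedy(stats) for _ in range(1000)]
print(picks.count("teaser_a"))  # the better-performing teaser dominates
```

In a live setting the tallies would be updated after every impression, so a variant that starts to underperform is served less and less – the “discard and replace” behaviour described above.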
Have the courage to test! (Your brand, too, can be different.)
A little euphoria at the end can’t hurt. That’s why we say: Let’s test for all we’re worth! Gut feeling and established brand messages are all well and good, but with approaches that go outside the box, everyone wins: the advertisers, the creative minds in content creation and the readers. After all, testing is about finding out what the target group is really interested in – so that we can create advertising content that people enjoy consuming. And that just happens to be the declared goal of Content Garden!
By the way, you can find out more about how this goal also unites us as a team by following this understated link. Or, if you prefer, here: You can’t miss this! Because testing is simply our thing.