Split testing comes up from time to time, but people forget one thing: split-testing simple elements like the colour of a button tells us almost nothing. Why? Because the changes are almost always too subtle to measure over the long term.
On Facebook for example, ads that produced amazing results a week ago can fail miserably the week after (Ed: tell me about it!).
If we split-tested two differently coloured Buy Now buttons and left the test running for a month or two, I’m willing to bet the results would be almost identical (assuming both buttons could actually be seen and read, of course).
This is because the button colour takes no account of the changing audience (i.e. different people), the changing needs of each person in that audience, how compelling the content is, or even anomalies such as the zeitgeist of the moment.
These types of split-test attempts are done to fine-tune ads, but they’re a waste of time if our ad isn’t performing in the first place.
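To put rough numbers on why subtle tweaks are so hard to measure, here's a quick back-of-the-envelope sample-size sketch. The conversion rates are hypothetical (a 2.0% baseline versus 2.2% for the new button colour, i.e. a generous 10% relative lift for a colour change) and use the standard normal-approximation formula for comparing two proportions:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect p1 vs p2 at the given
    significance level and power (normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z(power)            # power threshold
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical rates: 2.0% baseline vs 2.2% for the new colour
n = sample_size_per_variant(0.020, 0.022)
print(n)  # roughly 80,000 visitors per variant
```

Tens of thousands of visitors per variant just to detect a generous lift from a colour change, which is exactly why a radically different idea (where the gap between variants is large) is the only thing worth testing.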
So is there any point split-testing? And what should be tested? Radically different ideas, that’s what.
For example, in the monocular ad (see parts 1 and 2), we might split-test the anchor itself (comparing the price to a $100 telescope versus a $10,000 one). We might also split-test having an anchor against not having one at all (to prove that price anchoring works on this particular product in the first place).
I saw an example of just that this week for our monocular ad:
“Is this $47 monocular better than a $3000 telescope?” [$3000 price anchor]
“See everything from miles away with this $47 monocular” [no price anchor]
When I clicked the price-anchored ad again today, it linked to a different landing page than the one I saw a couple of days ago. This time the landing page used the $3000 anchor. So the advertiser is not just testing ad headlines, but mixing it up with landing pages too.
But we can go a lot further with this. Click here to read part 4.