Content Marketing in an Age of (Faulty) Analytics
Science now pervades digital marketing (or it should). But how do you manage the prospect of bad science?
I have to preface this post to avoid misconceptions: We love science. And we practice it. From in-depth Google Analytics for our own and clients’ content marketing efforts to marketing automation, landing page analytics and split and multivariate testing, we do it. And we believe that it’s an important part of any digital marketing practice.
Now that’s out of the way, let’s admit it: There’s lots of crap science floating around in digital marketing.
Marketing directors wrongly attribute an uptick in web traffic to the latest blog post. Product managers A/B test three different elements of a landing page, and draw conclusions about only one of the elements. Bad science is as ubiquitous as good science, or more so.
This post isn’t about how to create good tests, though. It’s about how to navigate, manage and succeed in digital marketing when the foundations we’re operating on are fallible. How do you make the best of science, when some (or much) of it may be bad?
First, here are a few ways that bad science can hurt you and your marketing organization:
1. False positives and false negatives: Bad science leads marketers to draw incorrect conclusions. The false negative is unfortunate (“Oh, that didn’t work, so we won’t do it again.”). The false positive is deadly (“That was great. Let’s pour our money into that.”).
2. Viral ignorance: Bad science trickles down through the marketing organization. Marketing data that the CMO recognizes as somewhat faulty, but good enough for rough guidance, tends to become law the farther down it goes in the organization. In this way, false assumptions can become company gospel.
3. Ossification of marketing: In an environment where people gain power by debunking or supporting ideas with data, managers hurl (questionable) figures at each other to win points. The willingness to experiment and change (the life force of a brand) gets sacrificed on the altar of science.
4. Over-reliance on analytics: A pervasively scientific environment can become paralyzing. Arguments and counterarguments freeze all action. (For most B2B companies, this is not a danger for the foreseeable future.)
So, if you can’t prevent bad science from happening, and you want to continue using science to guide your marketing, what can you do?
Document and codify findings.
Like any good scientist, marketing analysts need to describe and record in detail how they derived their data. This, along with the findings, should be preserved in a central place for anyone in the organization to see. This at least helps cut down on incorrect anecdotal citations of science, and allows bad science to be identified and discarded.
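One lightweight way to codify findings is a structured experiment record kept in a shared, searchable store. Here is a minimal sketch in Python; the field names and the sample entry are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ExperimentRecord:
    """One entry in a central experiment log (illustrative fields only)."""
    name: str
    hypothesis: str
    method: str          # how the data was derived, in enough detail to retest
    sample_size: int
    start: str           # ISO dates keep entries sortable
    end: str
    result: str
    caveats: list = field(default_factory=list)

# A hypothetical entry -- the numbers and dates are made up for illustration.
record = ExperimentRecord(
    name="Landing page headline test",
    hypothesis="A benefit-led headline lifts signups",
    method="50/50 split test, signups per unique visitor",
    sample_size=4200,
    start="2014-03-01",
    end="2014-03-14",
    result="Variant B showed a lift, but see caveats",
    caveats=["Traffic mix changed mid-test due to a paid campaign"],
)

# Serialize to JSON so findings live in one place anyone can inspect.
print(json.dumps(asdict(record), indent=2))
```

Recording the method and caveats alongside the result is what lets later readers spot, and discard, the bad science.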
Experiment constantly.
Experiments are the lifeblood of science. You need to test out wild hypotheses, retest earlier results, challenge accepted truths, revisit old failures and invite people to build on your science. The degree to which an organization experiments is, in my experience, directly linked to its maturity and health. Sadly, many fail here (marketers are career-minded, risk-averse animals by nature).
Educate your team.
The more people around you who understand science and data, the less likely you are to trip up over the bad stuff. The first lesson: Doubt science. Yes, you should act on scientific insights, but you should not take the soundness of your foundations for granted. A few lessons in basic statistics also help.
Share your knowledge.
This relates to both codification and experimentation. Please share your insights with your marketing agency. Agencies are, for obvious reasons, often on the sharp end of experimentation. Why not equip them with your findings before they start experimenting for you? Some of your more data-savvy agencies may even be able to help you improve the quality of your findings (ahem).
That’s my advice on coping with science, good and bad. What’s your experience with analytics?