Does Online Marketing Work?
Welcome to Marketing BS, where I share a weekly article dismantling a little piece of the Marketing-Industrial Complex — and sometimes I offer simple ideas that actually work.
If you enjoy this article, I invite you to subscribe to Marketing BS — the weekly newsletters feature bonus content, including follow-ups from the previous week, commentary on topical marketing news, and information about unlisted career opportunities.
Thanks for reading and keep it simple,
On November 6, The Correspondent published an article that included some audacious claims about digital marketing. Authors Jesse Frederik and Maurits Martijn asserted that online advertising was the new “dot com bubble.”
The article presented two core arguments:
Digital marketing is much less effective than most people — even marketers — believe.
Money is wasted on ineffective marketing due to misaligned incentives.
Today’s newsletter reflects on both of those bold ideas.
Let’s start with an examination of their first point, about the general ineffectiveness of digital marketing campaigns.
Auctions within Auctions
The Frederik/Martijn piece focuses on the story of Steve Tadelis, an economist who worked at eBay back in 2011. Convinced that eBay’s digital marketing spend was being wasted, Tadelis couldn’t find anyone who would listen to his concerns. He eventually refuted eBay’s fanciful claims about advertising effectiveness by spotting a fundamental error in the way the company evaluated its digital strategies:
Picture this. Luigi’s Pizzeria hires three teenagers to hand out coupons to passersby. After a few weeks of flyering, one of the three turns out to be a marketing genius. Customers keep showing up with coupons distributed by this particular kid. The other two can’t make any sense of it: how does he do it? When they ask him, he explains: “I stand in the waiting area of the pizzeria.”
It’s plain to see that junior’s no marketing whiz. Pizzerias do not attract more customers by giving coupons to people already planning to order a quattro stagioni five minutes from now.
Economists refer to this as a “selection effect.”
Tadelis was dumbfounded when the eBay team explained their use of brand keywords on Google ads:
Brand keyword advertising … was eBay’s most successful advertising method. Somebody googles “eBay” and for a fee, Google places a link to eBay at the top of the search results. Lots of people, apparently, click on this paid link. So many people, according to the consultants, that the auction website earns at least $12.28 for every dollar it spends on brand keyword advertising — a hefty profit!
Tadelis didn’t buy it. … His rationale? People really do click on the paid-link to eBay.com an awful lot. But if that link weren’t there, presumably they would click on the link just below it: the free link to eBay.com. The data consultants were basing their profit calculations on clicks they would be getting anyway. [Emphasis mine]
In my experience, the selection effect in marketing is very real, surprisingly common, and often ignored. Just like Tadelis, I have found myself stressing the selection effect problem to a room full of marketers who believe they’ve struck gold with their digital marketing strategies. Selection effects explain why re-targeting is almost always a terrible strategy (that’s a topic for a whole other newsletter someday).
Tadelis suspected that brand advertising on Google was completely non-incremental. In other words, every single person who clicked on the paid ad would’ve still clicked on a free result — even in the absence of the paid ad.
How could eBay test Tadelis’s theory? By halting every Google ad that used the keyword “eBay.” The results were illuminating.
Was eBay wasting most of the money they spent on Google ads? Tadelis thought so, and he convinced the company to run a more aggressive experiment:
...[Tadelis] was permitted to halt all of eBay’s ads on Google for three months throughout a third of the United States. Not just those for the brand’s own name, but also those targeted to match simple keywords like “shoes,” “shirts” and “glassware.”
The marketing department anticipated a disaster: sales, they thought, were certain to drop at least 5%. ...
The experiment continued for [eight weeks]. What was the effect of pulling the ads? Almost none. For every dollar eBay spent on search advertising, they lost roughly 63 cents, according to Tadelis’s calculations. [Emphasis mine]
During my time at A Place for Mom, we ran experiments to measure the incremental effect of display advertising. We selected a representative set of consumers and divided them into two groups. For one group, we ran ads for APFM; for the other, we ran ads for an unrelated company (which partnered with us on the experiment). After the test, we were able to separate the absolute lift generated by the ads from the “selection effect.” The results weren’t pretty — there was some true impact, but most of it was selection effect. In similar fashion, Facebook ran a series of tests to identify selection effect. Their research was startling: in 13 of their 15 tests, more than 50% of the measured sales were due to selection effect.
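A test like the one we ran at APFM reduces to simple arithmetic: the placebo group’s conversion rate tells you how many sales you would have gotten anyway, and only the difference counts as incremental. Here is a minimal sketch; the group sizes and conversion counts are entirely hypothetical, for illustration only.

```python
# Sketch of a holdout (placebo-ad) incrementality test.
# All numbers below are hypothetical.

def incremental_lift(test_conversions, test_size,
                     control_conversions, control_size):
    """Conversion lift attributable to the ads, net of selection effect.

    The control group saw placebo ads, so its conversion rate captures
    the people who would have converted anyway.
    """
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    return test_rate - control_rate

# Hypothetical results: 50,000 consumers per group.
lift = incremental_lift(1200, 50_000, 1050, 50_000)
print(f"Incremental conversion rate: {lift:.4%}")

# Share of the measured conversions that were pure selection effect:
selection_share = 1050 / 1200
print(f"Selection effect share: {selection_share:.0%}")
```

With these made-up numbers, naive last-click attribution would credit all 1,200 conversions to the ads, when only 150 of them were truly incremental.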
Remember: Tadelis collaborated with eBay back in 2011 and he co-authored research papers about his experiences in 2014. Major media outlets like The Economist, The Atlantic, and the BBC covered Tadelis’s conclusions, as did a plethora of marketing blogs.
In other words, we’ve known — for years — that (1) selection effect is significant, and (2) digital advertising is not as effective as “last click attribution” might make you believe.
Question: Why would The Correspondent publish an article about this topic — in 2019? Answer: Because very few people learned their lesson from eBay.
An Incentive to be Afraid
Advertising is scary and dangerous, at least according to popular perception. Case in point: many people shield their children from TV commercials or online ads, out of fear that the ads might warp their kids’ young minds. Cambridge Analytica became the villain of the decade not only for using Facebook’s data without proper permissions, but also because their sophisticated ads swung the presidential election for Donald Trump (never mind that Trump’s campaign fired CA over the belief that their ads were NOT effective). Facebook, in particular, finds itself in the crosshairs of Congress due to rampant worries that the social network is able to manipulate the masses with their trove of data and analytics. Even non-tech companies seem proficient at identifying and influencing our behavior. Back in 2012, The New York Times published an article about Target that claimed the retailer’s analytics programs could predict when women were pregnant — before they even knew themselves.
If advertising were really able to change minds as effectively as pundits allege, it’s a little surprising that so many of the companies that own marketing channels rank so poorly in reputational value. People’s trust in tech companies — and social media platforms in particular — is at an all-time low. Just last week, Amazon wasn’t even able to persuade voters to elect its slate of preferred candidates for Seattle City Council, despite the fact that the company contributed a record $1.5 million toward a pro-business Super PAC. If these powerful companies are able to influence us — without our knowledge, no less — they have a funny way of showing it.
Maybe advertising just isn’t as effective as most people think.
From The Correspondent piece:
According to the Israeli thinker [Yuval Noah Harari], it’s only a matter of time before big data systems “understand humans much better than we understand ourselves.”
In a highly acclaimed new book, Harvard professor Shoshana Zuboff predicts a “seventh extinction wave,” where human beings lose “the will to will.” Cunning marketers can predict and manipulate our behaviour. Facebook knows your soul. Google is hacking your brain. [Emphasis mine]
Marketers like to believe they possess incredible power; prognosticators like to believe them. Like most fantastical claims, both are wrong. Without question, marketing can have an effect, but it is unlikely to hack your brain (whatever that means). Marketing works by (1) making you aware of something, (2) reminding you of something, (3) increasing your consideration for a product in a specific use case (e.g., McDonald’s for breakfast), and/or (4) letting you know something is socially acceptable. Good marketing strategies succeed when they capture your attention enough to do one of those things, or when they thrust themselves in front of the right person at the right time so the impression is not wasted. But let’s be clear: any marketing tactic currently employed in the real world (i.e., not just in the fantasy of fearmongers) is a long way from “knowing your soul” or understanding you better than you know yourself.
The Correspondent piece provided a well-timed reminder that maybe society’s pervasive fear of advertising is not warranted. Perhaps we shouldn’t perceive technology companies as scary, but incompetent.
But are advertisers really as inept as Frederik and Martijn seem to have concluded?
Selection versus Synergy
Just as the selection effect can cause the impact of advertising to be overstated, “synergy effects” can cause the impact of some advertising tactics to be understated. Most TV advertising, for example, delivers a far greater effect than what is captured by measuring website traffic for the five minutes after a spot runs — a technique that many media agencies use to estimate ad effectiveness.
Frederik and Martijn highlighted some glaring examples of money being wasted on digital advertising, but we shouldn’t conclude that ALL digital advertising is worthless.
For a number of different organizations, I have measured the incremental lift of brand spend on Google. I have identified “selection effects” ranging from 30% to 94% (i.e., between 6% and 70% of the traffic generated by the ads was actually incremental — we would have received the rest of the traffic anyway).
What drives that huge range in the prevalence of the selection effect? Three important factors:
The power of the brand. When a brand is new, traffic tends to be more exploratory and less navigational. Navigational clicks are frequently the result of selection effect.
Brand/category confusion. Some brands are so well known that their name becomes synonymous with the category. Think Rollerblade, Kleenex, or Xerox. In these cases, many people search for the brand when they actually intend to search for the category. In this situation, being on top of the brand search — which is really a category search — becomes far more incremental and less navigational.
The types of searches. If someone searches for [your company], versus [your competitor], or [your company] + reviews, or [your company] + [company category], being able to influence what page they see next can have real incremental (non-selection effect) impact. When someone conducts a simple search for [your company], any clicks are far more likely to result from the selection effect.
Here’s something I find interesting about digital advertising: Google clicks from brand searches are highly variable in price, as is the conversion rate on those clicks. For example, I have seen companies where unadjusted brand traffic attribution ROI was 200x, but I’ve also found companies where it’s less than 2x.
Over time, I’ve noticed a consistent pattern: the higher the apparent ROI, the higher the selection effect. When a company is receiving 200x ROI, the selection effect is always 90%+. Sometimes, of course, that level of selection effect is okay. Suppose you get a 200x apparent return, but 95% of that is selection effect; in that situation, your “true return” is still [200 x 5%] 10x. A 10x return is pretty incredible! When unadjusted brand return is 2x, the selection effect is usually very small — somewhere around 30%. Keep in mind, though, that your “true return” would still be positive [2 x 70%] at 1.4x. And while 1.4x isn’t as impressive as 10x, it’s still really, really good.
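The adjustment in those examples is just multiplication: apparent ROI times the incremental (non-selection) share of the traffic. A minimal sketch of that arithmetic:

```python
def true_return(apparent_roi, selection_effect):
    """Adjust an apparent marketing ROI for selection effect.

    selection_effect: the fraction (0.0 to 1.0) of measured
    conversions that would have happened anyway.
    """
    return apparent_roi * (1 - selection_effect)

# The two scenarios from the text:
print(f"{true_return(200, 0.95):.1f}x")  # 10.0x -- huge apparent ROI, mostly selection
print(f"{true_return(2, 0.30):.1f}x")    # 1.4x  -- modest apparent ROI, mostly real
```

The takeaway is that a sky-high apparent ROI and a modest one can both be genuinely profitable once you strip out the selection effect; what matters is the product of the two numbers, not the headline figure.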
After running these tests, I have frequently determined that brand advertising on Google was not as effective as the marketers believed. That said, I have also always found that the results are better than NOT advertising brand terms on Google.
My personal experiences informed my general opinions about Frederik and Martijn’s findings:
They are correct that most marketers possess an irrational (and incorrect) belief in their own marketing effectiveness.
They miss their mark by suggesting that most Google ads are worthless. As I noted above, even strategies with overstated impacts usually generate some return — and almost always provide a better outcome than not advertising at all.
How Could this Happen?
So far, I’ve focused my attention on Frederik and Martijn’s first argument, about the idea that digital marketing is less effective than most people think. I’d like to wrap up today’s newsletter with a comment about their second point: that a lot of marketing money is wasted due to misaligned incentives. Here’s the core section from their article in The Correspondent:
It might sound crazy, but companies are not equipped to assess whether their ad spending actually makes money. It is in the best interest of a firm like eBay to know whether its campaigns are profitable, but not so for eBay’s marketing department. Its own interest is in securing the largest possible budget... Within the marketing department, TV, print and digital compete with each other to show who’s more important, a dynamic that hardly promotes honest reporting.
“Bad methodology makes everyone happy,” said David Reiley, who used to head Yahoo’s economics team and is now working for streaming service Pandora. “It will make the publisher happy. It will make the person who bought the media happy. It will make the boss of the person who bought the media happy. It will make the ad agency happy. Everybody can brag that they had a very successful campaign.” [Emphasis mine]
I think this perspective is dead on. Whenever a headhunter contacts me about a CMO role, the first question is inevitably, “How big was your budget?” followed closely by “How large was your team?” To date, no one has ever asked me, “How effective was your marketing?” Investigating a potential CMO’s effectiveness would seem like the fundamental task for a search firm.
But there’s a reason that headhunters don’t inquire about effectiveness: they know how easily people can spew BS metrics to illustrate their previous successes. Candidates would have a much harder time blurring the details (read: lying) about the size of their previous budgets and teams. As such, the search firms stick to those questions.
If CMOs (and VPs and Directors) really want to accelerate their careers, then they need to grow their budgets and the size of their teams. How? One method is pretty straightforward: demonstrate positive results. If a marketer can reliably turn every dollar under their control into ten dollars of customer spending, you can bet that CFOs and CEOs will push more and more dollars to that marketer. This sounds fine in theory, but it’s more complicated in practice, for the simple reason that evaluating the effectiveness of marketing campaigns is very difficult. More importantly, the person spending the money is the exact same person assessing the effectiveness of the spend. As you can imagine, there is a very strong incentive for a marketer to CLAIM that their plans are working — even if (1) the results aren’t that great, or (2) the marketer lacks the knowledge to accurately evaluate their results. In some situations, marketers are incentivized to focus their marketing spend on activities that provide the appearance of effectiveness. If you can produce a report that shows you are obtaining a 10x return on your marketing spend, that might be almost as good — or maybe even better — than actually achieving a 10x return on spend (especially if the impact of the 10x return isn’t immediately apparent).
It takes a strong mind to believe something that hurts your own pocketbook — especially when simply following the crowd will earn you a promotion.
This type of misaligned incentive contributes to the piles of Marketing BS that plague our industry. I started this newsletter to expose these problems by calling BS whenever I see blind adherence to beliefs over facts.
Thanks for reading, and I hope you’ll join me on a mission to clean up the marketing world.
Keep it simple,
Edward Nevraumont is a Senior Advisor with Warburg Pincus. The former CMO of General Assembly and A Place for Mom, Edward previously worked at Expedia and McKinsey & Company. For more information, including details about his latest book, check out Marketing BS.