
Why it’s never lupus

“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” – Sherlock Holmes

You will not find a more agonized fan than the one awaiting Season 4 of the popular BBC show "Sherlock". Diehards have been waiting more than two years for the next installment of Benedict Cumberbatch running around the streets of London and solving crimes alongside Martin Freeman's Dr. Watson. Adapted from Sir Arthur Conan Doyle's 1892 classic, the storyline in its modernized form still captivates audiences today as much as it did in 19th-century Britain.

Why? Humans are natural problem-solvers – goal-oriented, adaptive, and curious. We are very good at assessing our surroundings and deducing information from our experience. We are always looking for that one conclusion we can draw from our observations. In fact, the scientific method was one of the earliest creations of human society, passed down from the Ancient Greeks through Aristotle.

Eliminating the Impossible

Fortunately for progress, the reach of the scientific method of deduction has not been limited to ancient Greek culture or the fictional confines of 221B Baker Street. In the medical world, its veritable equivalent is known as Occam’s razor. Translated from the original Latin, it posits: “Among competing hypotheses, the one with the fewest assumptions should be selected.” Simply put, a simple explanation is preferred to a complex one.

This is the moment in each episode of House when Dr. Gregory House has his flash of clarity about the one obvious and perfect diagnosis that explains all the symptoms and solves his patient's mysterious disease before (and sometimes after) they expire. Hint: it’s not lupus.

The theory of Occam’s razor was challenged centuries later by a man named John Hickam, MD. Hickam found it ineffective to eliminate causes and exhaustively speculate on rare diseases that might explain all of a patient's strange symptoms. He thought it was far more likely for patients in these cases to have a set of common diseases, rather than one perfect cause that explained every symptom. This led to his blunt conclusion: “Patients can have as many diseases as they damn well please.”

Double, Double Toil and Trouble

In the public affairs world, the world that Navigator maps out on a daily basis, the same theories find similar applications. Clients come to the experts expecting one all-encompassing solution to the issue, or set of issues, they face. Most of the time, the solution may well be simple. For the rest, a more layered approach is required.

In a 1973 treatise, design theorists Horst Rittel and Melvin Webber coined the term “wicked problem” to describe problems of such scope that they could not be solved like the more “tame” problems found in mathematics and puzzle games. “Wicked problems” were far more complex (social, political, environmental) and required strategies built on collaboration, unconventional approaches, and “outside-the-box” thinking.

We are living in a world where our collective problems are becoming increasingly “wicked” and difficult to solve in a narrow sense. Whether you’re talking about climate change, armed conflict and terrorism, or democratization, these are not problems that can be solved in isolation or independent from the efforts of other global players.

Smart, Honest Counsel

Wicked problems are not confined to the international sphere; they are increasingly entering the world of public affairs. Yet few experts have abandoned the Occam’s razor approach in favour of Hickam’s dictum, diagnosing these problems as layered and proposing collaborative strategies. What the rest fail to realize is that some of these challenges are simply too complex and unique, essentially uncharted territory for many clients, and clients will need someone who understands this new environment.

Gone are the days when you can simply knock on government’s door, make your case, and get an answer. To increase your chances of success, you may need to work with third parties with aligned interests. You may need to mobilize your online and offline supporters. You may need to get public opinion among Canadians onside with your proposal before coming to the table.

At Navigator, the solution will not always be the simple one for your company. Whether you’re seeking social licence for a massive cross-country infrastructure project, engaging in a major financial transaction affecting national interests, or activating a public advocacy campaign, you will need the smart, honest counsel that will get your public affairs goals to the finish line.

Understand your users: lessons from Adblock Plus

Last week, Adblock Plus announced it would begin selling ads. That is not a misprint. The company literally named ‘adblock’ (as in, it stops or prevents advertisements) and ‘plus’ (to tell users it does so very well) is doing the exact opposite of the thing it promises to do.

For those unfamiliar, Adblock Plus was one of the leading adblockers, if not the leading one. As a plugin or browser extension, users could install it to prevent advertisements from appearing on the websites they visited. The program is generally credited with popularizing online adblocking, which has had a major impact on how information travels on the Internet. Selling ads after selling the ability to block them is a complete 180. In fact, it looks like the company built on preventing ads from being served never really understood how users and advertisers interact with the ads they do see in the first place.

Predictably, initial reaction to the announcement was harsh. This could very well be the beginning of the end for the company. Technically, Adblock Plus is expanding its Acceptable Ads initiative with ‘a fully functional ad-tech platform that will make whitelisting faster and easier’ that promises to ‘turn the model on its head.’ According to Adblock Plus, the new program offers advertisers auction-based or real-time bidding (RTB), just like Google or Facebook. The difference is that all of the ads are, theoretically, vetted by Adblock Plus’ users. This is supposed to act as a kind of guarantee that they will not detract from any website’s browsing experience. If you ask the company, Adblock Plus is offering an alternative to RTB — instead of targeting options offered by every other RTB platform, user experience determines which ads are ultimately served.

Basically, Adblock Plus is hoping to enter the supply side of the digital advertising market. The new service will allow publishers and bloggers to buy ads vetted by Adblock Plus or users of Acceptable Ads because these ads are not disruptive to the browsing experience. Yes, that was supposed to sound weird. There are lots of problems with this strategy. First is the problem of perception: Adblock Plus, a celebrated adblocker, is selling advertisements to online publishers. IAB UK CEO Guy Phillipson alluded to some of the other strategic issues in a statement, comparing the company’s new direction to a protection racket:

‘We see the cynical move from Adblock Plus as a new string in their racket. Now they’re saying to publishers we took away some of your customers who didn’t want ads, and now we are selling them back to you on commission. The fact is, in the UK ad blocking has stalled. It’s been stuck at 21% throughout 2016 because the premium publishers who own great content, and provide a good ad experience, hold all the cards. More and more of them are offering ad blocking consumers a clear choice: turn off your ad blocking software or no access to our content. And their strategy is working, with 25% to 40% turning off their blockers. So with their original business model running out of steam, Adblock Plus have gone full circle to get into the ad sales business.’

Adblock Plus’s decision and the initial reaction to it prove the company misunderstood its old customer base as well as the publishers and advertisers it is hoping to turn into customers. First, Adblock Plus assumed its current users, people who downloaded something that, again, is named Adblock Plus, want to filter ads instead of blocking them. It also misjudged how appealing RTB in its current form is for advertisers and, as Phillipson said, failed to realize that users are actually willing to put up with highly targeted ads from the content suppliers they enjoy. Most importantly, as a brand or someone paying for an ad, why switch to a system with less control when there is no substantial opposition to the current RTB model?

Besides Adblock Plus, there are other similar adblocking programs that provide practically ad-free browsing experiences. Many of them have capitalized on the negative reaction to Adblock Plus’s announcement by doubling down on their stated promise of actually blocking ads. Most of these programs use a process similar to the ‘whitelisting’ service Adblock is offering, allowing users to view ads from the sites they deem safe. This gives users the sense of control Adblock Plus is convinced it just invented.

Adblock Plus’ new take on whitelisting ignores the dynamic its previous version helped establish between users, publishers, ads, and advertisers. In the original model, training users to whitelist sites instead of individual ad units placed credit or blame for ads appearing with the publishers who accept revenue from them. Once a publisher or website was whitelisted, they remained whitelisted until the Adblock Plus user manually reversed their decision. Giving the ‘pass’ to publishers, instead of individual ads, made a ton of sense: ads change much more frequently on a random site than on a site someone frequently visits, and publishers generally adhere to the same standards when deciding which ads they’ll allow on their site.
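To make the distinction concrete, here is a minimal sketch in TypeScript, with an invented whitelist and invented function names rather than Adblock Plus' actual code, of how domain-level whitelisting differs from vetting individual ad units:

```typescript
// Domain-level whitelisting: the user approves a site once, and every ad
// that site serves is allowed until the user reverses the decision.
const whitelistedDomains = new Set<string>(["example-news.com"]);

function shouldBlockAd(pageUrl: string): boolean {
  const pageDomain = new URL(pageUrl).hostname;
  return !whitelistedDomains.has(pageDomain);
}

// A per-ad model would instead need a separate verdict for every creative,
// e.g. a lookup keyed on the ad unit's own ID -- far more decisions for the
// user to make, and decisions that go stale as ad rotations change.
const approvedAdUnits = new Set<string>(["acceptable-ad-123"]);

function shouldBlockAdUnit(adUnitId: string): boolean {
  return !approvedAdUnits.has(adUnitId);
}
```

The first model matches the dynamic described above: credit or blame sits with the publisher, and a single decision covers everything the site serves.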

This system was successful because it was simple. It also let advertisers actually advertise, which is by nature intrusive. Crucially, by placing agency on the sites, ads were presented as a necessary evil to support the content that users enjoyed. The new model abandons that simplicity by asking users to vote on the ads themselves, and it changes the criteria for whitelisting. No advertiser in their right mind would choose an ad unit that is sanctioned for its inability to draw attention when alternative advertising models exist. Adblock Plus seems to have forgotten that publishers need ads, and ads need to be somewhat disruptive in order to be effective, which is why there was a desire to block them in the first place. Also, the RTB marketplace Adblock Plus envisions would require a staggering number of sanctioned ads in order to give publishers enough variety to compensate for the (likely) reduced appeal among actual advertisers. Adblock Plus probably doesn’t have the user base necessary to vet that many ad units, especially after losing so many customers in the wake of its announcement. In fact, RTB, or the auction model Adblock Plus is attempting to adopt, is dependent on the relationship between site owners or publishers and users. Before, the company played a part in emphasizing this relationship; now it’s neglecting it at its own peril.

Real-time bidding is practically the only way to advertise on social media and search engines. First popularized by Google, versions of this bidding can be found on practically every other search engine, the largest social networks (like Facebook, Twitter, and LinkedIn), and leading content marketing services like Outbrain and Taboola. These platforms employ continuous streams of content, and the auction model was the only way to account for how they disseminate information in real time. The auction system accounts for the practically infinite variations of a given user’s news feed or search results. Instead of paying for a predetermined placement, advertisers bid to appear in the most relevant possible placements as they become available. The innumerable choices in social or search platforms that determine where ads could be shown mean users are exposed to an incalculable number of ad units. This only works if users trust the website or publisher to choose ‘acceptable’ ad units; going through each ad individually would take forever. Even in content marketing, the seemingly random units appearing in a given site’s ad spaces are driven by browsing data, which creates essentially the same degree of randomness as social media. It is more practical to establish trust between sites and the people who visit them than between users and individual ads. One could argue any RTB system needs to be based on demographics instead of user experience, because targeting needs to be grounded in something that correlates to the person actually seeing the ad, to account for the endless contexts in which it can appear.
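As a rough illustration of the auction mechanics described above, here is a minimal sketch in TypeScript with invented bid values. Real exchanges add bid floors, timeouts, fees, and second-price rules, but the core idea is that each impression is sold to the highest bidder the moment it becomes available:

```typescript
// A bid submitted for one impression (a slot in a feed or results page).
interface Bid {
  advertiser: string;
  cpm: number; // price offered per thousand impressions
}

// Pick the winning bid for a single impression as it becomes available.
function runAuction(bids: Bid[]): Bid | null {
  if (bids.length === 0) return null;
  return bids.reduce((best, bid) => (bid.cpm > best.cpm ? bid : best));
}

const winner = runAuction([
  { advertiser: "BrandA", cpm: 4.1 },
  { advertiser: "BrandB", cpm: 3.6 },
]);
console.log(winner); // { advertiser: "BrandA", cpm: 4.1 }
```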

Demographic information was key to popularizing RTB because advertisers love getting this data. They gain access to unprecedented numbers of users in a single ad buy, and a targeting system that is infinitely more specific than any other format. Social and search advertising use demographic categories based on static information (as well as in-platform decisions, but that’s for another column), like registration data, as static endpoints in a given user’s ever-evolving data set. Essentially, this allows advertisers to bid on who sees an ad, unlike older models where they paid for placement. RTB took a lot of the guesswork out of the question ‘is the type of person I want to see my ad guaranteed to see my ad?’ Though users may complain about advertisers using their private information to build RTB campaigns, the information advertisers actually get to work with cannot identify individual users. There are certainly issues with privacy and RTB, though they are not close to significant enough to overthrow the system.
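A hedged sketch of that difference: instead of buying a fixed placement, the advertiser declares which audience attributes it will bid on, and only impressions matching those attributes enter its auction. The attribute names and values here are invented for illustration, not any platform's real schema:

```typescript
// Coarse, non-identifying attributes attached to an impression,
// e.g. derived from registration data.
interface Audience {
  ageRange: string;
  region: string;
  interests: string[];
}

interface Campaign {
  advertiser: string;
  maxCpm: number;
  target: Partial<Audience>;
}

// The advertiser bids only when the person about to see the ad matches its
// targeting: who sees the ad is bought, not where the ad sits on the page.
function bidFor(campaign: Campaign, impression: Audience): number | null {
  const { ageRange, region } = campaign.target;
  if (ageRange && ageRange !== impression.ageRange) return null;
  if (region && region !== impression.region) return null;
  return campaign.maxCpm;
}
```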

Right now, it looks like although Adblock Plus understood the trends in online advertising, it failed to contextualize its role in a changing digital landscape. People generally care whether ads are on their screen, but the vast majority do not worry about how they were targeted. Though users are growing more tolerant of ads, and perhaps more concerned over how they are delivered, high-quality content keeps them coming back to sites using granular tracking options in their RTB units. People understand that websites need to pay the bills and, for the most part, they are willing to let them serve targeted ads in exchange for the services they provide. Up until recently, users who were unwilling to make that trade relied on Adblock Plus. Since, until last week, its entire business was blocking ads, the company is still considered toxic by many of the groups it now hopes to count as customers. Its chances of quickly building up a new customer base are slim: it has lost a considerable portion of its existing users, and it does not seem to have the quality content needed to attract new ones.

The promise of an ‘acceptable’ advertising experience is nice, but it’s a job for a plugin, not a content publisher or even an ad broker, which is what Adblock Plus is trying to morph into. A better strategy might have been to slowly roll out a different plugin, branded around the Acceptable Ads initiative, that works to accelerate the whitelisting process (perhaps by gathering information about which ads users find acceptable and later selling it to publishers) while still maintaining the original service or line of business. Anything to avoid having to say: ‘Hi, we are called Adblock Plus, though we will no longer be blocking ads so much as asking sites to pay us to show ads to users they attracted without any real help from us.’ Adblock Plus had already alienated online publishers; trying to quickly pivot and turn those people into customers may have cost it the customers it did have for its original service. Unless content providers, advertisers, and users radically change how they interact with ads online, Adblock Plus may end up using its new RTB platform to sell its office space instead of actual ad units.

Why the crackdown on fake news is a good thing

Do you know that the Washington Post cranks out more than 1,200 news articles per day? The New York Times produces at least 230 articles per day. Good luck tracking them all down. BuzzFeed published 6,365 stories and 319 videos in April alone—or about 222 pieces of content per day. These are but a few of the news organizations producing so much daily content, and no human being could realistically consume it all in a day. The Internet contains a near-infinite amount of information—we just can’t keep up with it.

So what do we do? We rely on the convenience of social algorithms to tell us what matters. We pull up our Facebook mobile feed and let the miracle and science of its algorithm find the diamonds in the rough. It’s a wonderful experience. We literally have no work to do: no newspaper to flip through, no news channels to suffer through, and no photo albums to thumb through. It’s all there for us, conveniently sorted and available at a swipe of a finger. Just about everything we read has to make its way through a filter before we see it.

Think about the apps you use most on mobile. I’m willing to bet that you get a lot of information through Facebook, Twitter, and Google. Facebook notoriously tweaks its news feed on a regular basis to ensure it’s properly calibrated to give you content you want to consume. Twitter finally realized that people find a raw news feed overwhelming and now uses an algorithm of its own to prioritize content it thinks you want to see. Even your search results are filtered. For some time now, Google has tailored its search results to make them more relevant to you, the user, based on your browsing and search history. All of these platforms have an incentive to give you information you want instead of the information that is the most up-to-date or relevant: their bottom line depends on it. If they fail to give you the content you want, you’ll tune out. And if you tune out, you’re one less person they can serve ads to. And if a whole lot of you start doing the same, revenues take a hit, membership numbers stagnate, and Wall Street gets cranky. So, these three digital behemoths need to give you quality content, which is a lot easier said than done.

For years, publishers have focused on producing huge volumes of content. Most of this content was (and remains) thoroughly cheap and unfulfilling. Think about the scourge of ‘click-bait’ articles that used to fill up our social feeds and rank highly in search results. The headlines were catchy—we couldn’t help but click on the link, only to discover that the resulting article was barely 100 words long and often completely different from what the headline promised. Sadly, this type of content continues to plague the Internet. It’s a serious problem for curators like Facebook, Twitter, and Google. When this content appears in feeds or results and we click on it, only to get angry about where we landed, it diminishes the user experience.

Considering this, it makes complete sense that Facebook and Twitter are taking steps to remove this type of content, this fake news, from their feeds. The pair are joining at least 30 major news outlets—the Washington Post and The New York Times among them—to crack down on fake news articles more effectively, in the hopes of improving the quality of the information in social feeds. In some ways, it’s encouraging—even heartening—to see these major platforms recognize that, as the primary news source for most of their consumers, they should ensure a basic level of quality for the news they serve. This newly formed network is backed by Google and is working to create a voluntary code of practice and a verification system for journalists and social media companies to ensure a basic level of integrity in news coverage. Of course, partisans of all stripes will laugh at such a statement, since news organizations are hardly seen as objective actors. But if we can set our biases aside, most of us will concede that ‘traditional’ news outlets are bound by journalistic standards (fact-checking, legal checks, etc.) that ought to be the norm. Of course, they’re far from perfect, but they serve as a basic foundation.

We live in a world where most news breaks online. People at the site of the news event are the ones posting raw video and images online. Eyewitnesses don’t wait for a reporter to arrive on the scene before sharing what they’ve seen first-hand. Stories that would never have been reported in the pre-smartphone era now become global movements because someone took out their smartphone and captured an event or altercation. And of course, fake news and hoaxes, like everything else online, have become much more sophisticated, and tougher to crack down on.

In this context, it doesn’t help that all news looks the same in our news feeds. It can be tough to sort out the real stuff from the hoaxes. In truth, the Internet has democratized content creation. Anyone with an Internet connection and a keyboard can become a publisher. Nothing stops me from starting a new website today, writing completely egregious or false content, and publishing it to the major social platforms. Or I could write breaking stories, uncovering facts and perspectives that others can’t, or are unwilling to, investigate. But whether that same piece of content should be subjected to the same filtering and standards as fact-checked and verified stories is a matter of debate. I, for one, am OK with it, even if it means traditional news outlets regain some level of clout.

However, these recent developments further entrench the shift towards a highly filtered Internet. And cracking that filter is no easy task, especially if you’re not a pre-approved news outlet. It means, more than ever, that brands and organizations need to double down on quality content. Stop producing content for the sake of it—focus on providing value and you may get to join the ranks of the ‘Big 30.’ And if that fails, you may need to dust off your traditional media relations skills—‘traditional’ outlets may soon get a bump in clout.

Curated content and the demand for excellence

For the past few years we’ve been experiencing a shift towards curated content emphasizing customization and personalization. We are demanding more, but by more we mean better: filtering for the mass amounts of noise on the Internet. The trend toward curated content is no longer new or novel; we now expect, to some degree, a level of filtering, and it seems to be reaching some sort of zenith. According to a Princeton study, Internet tracking has intensified to include something called ‘fingerprinting’, which surveils your computer for information, such as your battery level and browser window size, to determine your online activity. For some time, Google has tracked searches to give you personalized suggestions based on your prior keyword searches. Now it is moving towards even more individual services that tailor search away from the common experience to a more individualized environment.
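To give a sense of how little 'fingerprinting' needs to ask of you, here is a minimal browser-side sketch in TypeScript of the kinds of signals the Princeton study describes, such as battery status and window size. The exact signal set and any hashing or matching a real tracker performs are assumptions here, not a specific tracker's code:

```typescript
// Collect a handful of innocuous-looking signals that, combined, can narrow
// a visitor down to a near-unique profile without any cookie being set.
async function collectFingerprintSignals(): Promise<Record<string, string>> {
  const signals: Record<string, string> = {
    userAgent: navigator.userAgent,
    language: navigator.language,
    screen: `${screen.width}x${screen.height}`,
    window: `${window.innerWidth}x${window.innerHeight}`,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
  };
  // The Battery Status API is one of the signals cited in the study;
  // not every browser still exposes it, so guard the call.
  if ("getBattery" in navigator) {
    const battery = await (navigator as any).getBattery();
    signals.battery = `${Math.round(battery.level * 100)}%`;
  }
  return signals;
}
```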

Within the social media sphere, platforms are constantly updating their algorithms to show you more content from people they think you like the most. Twitter rearranged its feed to show tweets in non-chronological order to prioritize quality over quantity. The platform’s ‘While you were away‘ feature relieves you of the fear of missing tweets from those (Twitter determines) you care about most. Rather than reading it all, it assumes you’d prefer to read the tweets that count.

This is part of the idea that, arguably, mainstream culture as we know it has become fragmented. This is most obvious with music. The rise of streaming services means you don’t need to rely on the radio. There are still Top 40 charts and songs of the summer, and certain artists and songs still appeal to the masses, but there is less of a cultural consensus on what ‘good’ music is or should be, and more independent blogs, critics, and services than ever before. However, the idea of a fractured ‘mainstream’, whether good or bad, is extending to other spheres.

Recently, Nathan Heller wrote in the New Yorker that ‘language of common values has lost common meaning.’ To a certain degree, Heller sees this as part of an overarching trend tied to the rise of personalization, as we remove ourselves from the truly public sphere to one that reflects our own beliefs and discourses. Our curated content feeds suggest we are independent thinkers and individuals; it’s how we’re self-identifying. From there, he talks about the unassailability of Trump’s linguistic nonsense, how he predictably and reliably divorces words from their meaning. ‘To know what Trump means, despite the words that he is saying, you have to understand — or think you understand — the message before he opens his mouth.’ This is a leap his followers are willing to make. Similarly, knowing what your consumers are looking for before they even try to look for it is what marketers have attempted to do since the beginning of time, and what the Internet is getting increasingly good at.

Heller makes the grander point that the rhetoric of change has become disconnected from the process of actually making it happen, not just online but everywhere. For example, we can all identify with the word ‘feminism’, but we’ve become dissociated from what it really means on a day-to-day scale. This is the general worry for the Internet in particular: that online behavior doesn’t necessarily translate into real-world action. On a smaller, shallower level, this applies to cultural trends and terms. What we all ‘know’ and consider as part of ‘us’ isn’t what it used to be. This becomes more pronounced as the generational divide between those who didn’t grow up with the Internet and those who did becomes more stark, with the latter moving into the workforce and exercising greater purchasing power. As the younger generation shifts into decision-making and directing roles, the comparisons between what each group wants out of work, out of each other, and out of life in general become more obvious and direct. And on a search-engine scale, the more personalized the search results, the less universal the meaning of the search terms and the information attached to them.

With personalization and the withdrawal from mass consumption, some techniques have emerged to make the best of both worlds. Within this trend, newsletters have gone from spammy email blasts to a more sophisticated form of content delivery. Aggregators of taste, style, and substance have risen to the fore: similar to the automation of search and social media, we prefer to have things automatically filtered, such as through the lens of an appointed purveyor of whatever it is we’re searching for. For example, Lena Dunham, of HBO’s Girls fame, has a weekly newsletter called Lenny Letter that delivers curated content to your inbox. The Skimm aims to make American news more digestible by giving you the top headlines for the day, complete with pop culture references and policy explanations. For an even more personal touch, TinyLetter has quietly found its own corner of the Internet and is helping fledgling companies and writers distribute their content to people who truly want it. Acquired by MailChimp in 2011, the service allows anyone to send out a newsletter — as often or as rarely as you like — to a relatively small list of subscribers. As many individuals use TinyLetter as a way to keep in touch with a group of friends, colleagues, or a small fan base as do new and growing organizations. With this kind of content, we create the sense of a conversation that is more targeted and more private.

Ironically, the very things that are providing this illusion of privacy are, perhaps, the most invasive. The entire idea of curated content is only possible by mining more and more of your personal data and online behavior. And this is a leap we are willing to make: we’re exchanging the mass public for the seemingly private by forfeiting our details; in attempting to gain more control over the content we see, we are sacrificing control over access to our ‘individualized’ information. However, most of us give this up willingly, or at least prefer not to think about it too hard, in our search for both substance and convenience.

The demand for excellence recently played out very clearly in another form in popular culture and music, namely album drops. Dropping an album has become a bit of an all-encompassing thing: there’s the announcement, the hype, the previews, the leaks; often there are sites dedicated not to the artist but to the album itself. Afterward, there are endless think pieces on the relative importance or irrelevance of said album. Frank Ocean is a reclusive artist who let four years pass between Channel Orange, his first critically acclaimed album, and his second, Blonde. The public outcry from Frank Ocean fans for a new album was loud, and it only became louder and more expectant when the reported release date for Blonde came and went with no sign of new music. Online, the mood went from excited, to angry, to betrayed. Few artists can put the world on pause, but those who do are the ones who let the anticipation build to a breaking point.

However, the pressure mounts with this kind of discipline. The expectation is that if you are going to make people wait, you are doing so for a good reason. For a popular musician, failing to deliver on that restraint is to play a dangerous game with your fans’ idealized version of you, which is partially why, when done well, it pays off. Retreating has its own cachet these days. Patience is a virtue that has left most of us, and today it signals a level of self-control that few seem to possess anymore: it makes you a grown-up of the Internet age, capable of a level of resistance, a dignified power move.

But this is a remove we value, perhaps because most are incapable of it, or perhaps because sometimes there is an overwhelming amount of information. In the search for individualized content, there is something to be said for giving people both a good enough product and enough time to miss you, because it means your product is so uniquely you that you can afford to gamble on it. The bigger the gamble, the bigger the get — while other artists focus on being nothing if not consistent, with new release after new release, some choose to remind us that there is a difference between being timely and being timeless.

Of course, within a digital world and within public affairs, intent still matters and decides timelines. Crises often require quick responses, and SEO efforts are different from true content creation. It is easier to be choosy from an artistic standpoint than a business one. But still, there is some untold magic in restraint. Within the zeitgeist, we’re looking to those who remove themselves to a certain degree because they seem to be the ones that dictate rather than mimic, that lead rather than follow. Since we can communicate with anyone en masse, the exclusivity and seemingly intimate nature of communicating with a few — or communicating with a purpose — is appealing. It’s also a luxury that comes with talent and confidence in that talent.

But, whether we have the luxury or not, there is a palpable shift toward wanting to feel like we’re getting such an experience. We’ve become a demanding set: we want access to all of the information on the Internet, and we also want special content, whether it’s in the form of genius or something tailored just for us. And if Heller is right, it’s because this is how we showcase ourselves to the world: these are the excellent things that I like, these are the personal selections that create my online personality. At 28, Frank Ocean is part of the generation that straddles the Internet divide, remembering a time before it but having had it for most of his life. His recent album includes a one-minute speech about being dumped for refusing to add someone as a Facebook friend. Like most of us, he both celebrates and rebukes the Internet age. While he is exceptional in many other ways, in this one he’s just like the rest of us.

It’s Google’s world and we’re just living in it

Google already won. As the most dominant search engine in the world, it has unprecedented control over each individual’s access to information. While most people already knew that, some forget that Google is also among the world’s largest data brokers, listing services, and news providers. Google may now be too big to fail and too ubiquitous to worry about whether its actions anger users or bother legislators (many of whom still do not understand search engines’ role in contemporary communications). Two recent events—the company’s decision to essentially begin charging for accurate keyword data, and a European Union proposal to charge search engines for news headlines indexed in their results pages—underscore how Google can do whatever it wants and nobody in power has any idea how to stop it.

Last week, Google changed how research is done in any field involving online communications, specifically in SEO and digital marketing. The company significantly reduced the amount of free data available via its popular Keyword Planner tool. This is a big deal. In order to enjoy full access to Keyword Planner, you now have to be a paying Google customer, which wasn’t the case before. Technically, full access to Keyword Planner now requires account holders to have active AdWords campaigns. Using the search engine, Gmail, or YouTube is not enough — you have to be paying Google every month to be considered enough of a customer to use its keyword planning service.

So why the outrage? First, because Keyword Planner has always been free. People will always resent having to pay for something that previously cost nothing. Technically, they can still use Keyword Planner without paying, but the tool now returns broad ranges instead of specific numbers for each search term:

Before:

[Keyword Planner screenshot: exact monthly search volume, e.g. 2,900]

After:

[Keyword Planner screenshot: volume range, e.g. 1K-10K]

As you can imagine, the difference between ‘2,900’ and ‘1K-10K’ is huge. That switch—from being able to pull precise search volumes to an estimate that could be off by 10,000—has digital strategists fuming. Suddenly we’re planning campaigns using data points with massive ranges when we had been working with exact figures our entire careers. More importantly, how do we explain to clients why we can no longer be confident in next month’s projections beyond the nearest 10K, when we’ve previously provided much more precise estimates?
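One stopgap strategists can use, sketched below in TypeScript with hypothetical helper names, is to parse the new bucket labels into a numeric range and plan against low, midpoint, and high scenarios instead of a single figure. The bucket format mirrors the '1K-10K' style shown above; this is a workaround, not anything Google provides:

```typescript
interface VolumeRange {
  low: number;
  high: number;
}

// Turn a Keyword Planner bucket label like "1K-10K" into numbers.
function parseBucket(label: string): VolumeRange {
  const toNumber = (s: string): number => {
    const upper = s.trim().toUpperCase();
    const multiplier = upper.endsWith("M") ? 1_000_000 : upper.endsWith("K") ? 1_000 : 1;
    return parseFloat(upper) * multiplier;
  };
  const [low, high] = label.split("-").map(toNumber);
  return { low, high };
}

const bucket = parseBucket("1K-10K");
const scenarios = {
  low: bucket.low,                     // 1,000
  mid: (bucket.low + bucket.high) / 2, // 5,500 -- versus the old exact 2,900
  high: bucket.high,                   // 10,000
};
console.log(scenarios);
```

It restores a planning number, but the honest answer to clients is still that the real figure now sits anywhere inside that bucket.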

Thankfully—for now, at least—there are ways to bypass Google Keyword Planner’s recent restrictions by using reputable third-party software. Still, these tools rely on Google’s data, and the restrictions Google is implementing for individual users are having a trickle-down effect on them. One of these third-party providers recently sent its customers a letter explaining that it had no real idea what was going on, acknowledging the current situation is less than ideal, and asking for patience while it works on a solution. Most of these third-party providers have recovered from Google’s changes after a long four-day adjustment period. But the problem for people working with this software every day is that each tool serves a different, specialized function as part of a holistic keyword research strategy. For example, some programs are ideal for on-page SEO and competitor research on specific URLs or domains, while others are designed for ecommerce applications or to find specific ‘long tail keywords’. Most people who work with keyword data regularly consider Google’s Keyword Planner a primary source; it is by far the most trusted source for volume and competition levels, which can be applied in almost any SEO context. Honestly, making people pay for Keyword Planner would not be worth writing about if it were only used for AdWords campaigns. If the tool were only used for ads on Google results pages, advertisers would be monthly customers by default. Even if that were not the case, the impact of less targeted ads within the search engine would only be felt because of Google’s near-universal reach; it would not really impact people’s lives. Things are different because, just as ‘Google’ has become shorthand for searching the Internet, its Keyword Planner is behind just about any task that involves building audiences online, and Google knows this. In a world where users interact with far more than just ads, keyword research—which helps us understand the words people use to navigate between online entities—is incredibly important to any business’s digital marketing efforts, especially in helping companies study how customers talk about products.

Google likely annoyed just as many digital marketing professionals with its excuse for restricting Keyword Planner as it did with the restriction itself. Google explained that it limited Keyword Planner to stop bots from accessing keyword data. There are voices in the SEO/digital marketing community who feel ‘bots’ has become standard Google nomenclature for so-called black hat SEO practices, some of which do involve robots. However, many more do not. These aggressive techniques attempt to expedite the long process of changing search results by catering exclusively to ‘technical’ or ‘computer’ algorithmic factors, instead of providing content of actual value to humans. Things get hazy when you remember keyword research is by no means a black hat tactic. The truth is, there are already a number of mechanisms in place to prevent these black hat techniques from undermining Google’s value as an information aggregator for real people. This is why practitioners are skeptical of Google’s rationale. Remember, Google’s Keyword Planner was considered the best primary source for search volume and some corresponding demographic data. While it was a tool originally developed for search advertisers, it has many applications well beyond advertising. Digital strategists have long relied on Google’s Keyword Planner for things like content optimization or technical SEO analysis. Most suspect Google made this decision to increase revenues, knowing nobody outside of the SEO community would raise an eyebrow if it blamed ‘bots’ or ‘black hats’. The argument that bots were using Keyword Planner to a serious extent is thin. Because it enjoys broad market dominance from a position of highly specialized knowledge, Google doesn’t have to care. There is simply no entity with the necessary combination of reach and authority that users could turn to instead. To someone working in the digital analytics business, the Keyword Planner decision is incredibly frustrating and a sure sign of Google using its influence to limit anyone’s ability to effect change within its platform without buying ads. That said, justifying what would otherwise be a very unpopular decision with ‘bots’ makes Google seem diligent and is a brilliant public messaging strategy.

Forcing people to pay for Keyword Planner will have noticeable consequences. Without reliable data, marketers are less likely to be able to craft strategies that legitimately improve their site’s positioning in Google. This will force them to spend more on ads to increase traffic, and it will decrease their ability to influence how their properties appear in what is, by far, the world’s largest source of information. Thus, Google has greater control over what appears online than ever before, and marketers need to pay to play. It’s great news for Google shareholders, but troubling for people concerned about the level of influence one corporate entity wields over our access to information.

Google built its incredible market share on extreme competence and a vastly superior product compared to competitors like Bing, Yahoo, and Ask. It is a phenomenally successful company that doesn’t owe anyone anything. However, extra responsibility seems like a fair consequence of unprecedented success. The ideal time to seriously debate Google’s place in the world, and whether we as a society should place restrictions on companies that decide how information is disseminated, has likely already passed. That conversation still needs to happen. Search engines should be required to disclose some details of how the general public interacts with their platforms, so that businesses can plan accordingly. Though they are the ‘gateway’ to information, without content creators or publishers there would be no need for search engines, and working with them can create a better experience in the long run for developers, marketers, and even Google. The Internet is also far more entrenched in daily life than when Google first started; for new technologies that become as popular as Google, some kind of regulation is required for the greater good. Unfortunately, most laws attempting to regulate Google, or almost anything to do with the Internet, rely on thinking from a pre-digital age. In the past, implementing online regulations has done more harm than good in terms of access to information.

For example, Tim Worstall of Forbes magazine does a great job explaining how lawmakers have no understanding of the economic benefits search engines provide publishers. Consider the EU’s recent proposal to charge search engines for displaying news headlines. When Spanish regulators tried to force Google to pay publishers for the news headlines it indexes, Google refused and shut down the service in the country. Now there is no Google News in Spain. As a result, publishers are suffering far more than Google, having lost referral traffic and the advertising revenues that come with it. The proposed EU legislation could force search engines to choose between headlines, likely for monetary reasons. This effectively creates a scenario where digital news goes to the highest bidder, which seems like the opposite of what the law intended.

Beyond the publishing industry, these laws could have longer-term consequences. By forcing search engines to pay to list news results, lawmakers are creating barriers to competition and strengthening Google’s stranglehold on the search market. Now, any new search engine trying to establish itself has an expense Google did not have to account for when building its audience.

The EU’s proposal is the culmination of smaller attempts to give publishers more influence in Spain and Germany. Those attempts did not work. The fact that European regulators have now tried three versions of the same arrangement between search engines and publishers, despite it failing twice, suggests they may not know how to strike the right balance between cracking down on powerful search engines and protecting domestic interests. This ignorance allows the current situation to continue, where Google can do things with far-reaching implications without notice, discussion, or material consequence.

No entity exists to determine what information, if any, Google must freely disclose so the general public can best manage websites in an environment where it influences most of the relevant traffic. One is probably coming soon; people are noticing Google’s disproportionate influence and growing wary. In the meantime, with legislators struggling to make sense of search engines and everyone who relies on them being made to pay for their best data source, the importance of applicable SEO knowledge has never been more apparent. Until the next major change, messaging online will be about accepting that Google will do whatever it wants and learning how to leverage that toward your goals.