
Understand your users: lessons from Adblock Plus

Last week, Adblock Plus announced it would begin selling ads. That is not a misprint. The company whose very name combines ‘adblock’ (as in, stop or prevent advertisements) and ‘plus’ (to tell users it does so very well) is doing the exact opposite of the thing it promises to do.

For those unfamiliar, Adblock Plus was one of, if not the, leading adblockers. As a plugin or browser extension, users could install it to prevent advertisements from appearing on the websites they visited. The program is generally credited with popularizing online adblocking, which has had a major impact on how information travels on the Internet. Selling ads after selling the ability to block them is a complete 180. In fact, it looks like the company built on preventing ads from being served never really understood how users and advertisers interact with the ads they do see in the first place.

Predictably, initial reaction to the announcement was harsh. This could very well be the beginning of the end for the company. Technically, Adblock Plus is expanding its Acceptable Ads initiative with ‘a fully functional ad-tech platform that will make whitelisting faster and easier’ that promises to ‘turn the model on its head.’ According to Adblock Plus, the new program offers advertisers auction-based or real-time bidding (RTB), just like Google or Facebook. The difference is that all of the ads are, theoretically, vetted by Adblock Plus’ users. This is supposed to act as a kind of guarantee that they will not detract from any website’s browsing experience. If you ask the company, Adblock Plus is offering an alternative to RTB — instead of targeting options offered by every other RTB platform, user experience determines which ads are ultimately served.

Basically, Adblock Plus is hoping to enter the supply side of the digital advertising market. The new service will allow publishers and bloggers to buy ads vetted by Adblock Plus or users of Acceptable Ads, because these ads are not disruptive to the browsing experience. Yes, that was supposed to sound weird. There are lots of problems with this strategy. First is the problem of perception: Adblock Plus, a celebrated adblocker, is selling advertisements to online publishers. IAB UK CEO Guy Phillipson alluded to some of the other strategic issues in a statement, comparing the company’s new direction to a protection racket:

‘We see the cynical move from Adblock Plus as a new string in their racket. Now they’re saying to publishers we took away some of your customers who didn’t want ads, and now we are selling them back to you on commission. The fact is, in the UK ad blocking has stalled. It’s been stuck at 21% throughout 2016 because the premium publishers who own great content, and provide a good ad experience, hold all the cards. More and more of them are offering ad blocking consumers a clear choice: turn off your ad blocking software or no access to our content. And their strategy is working, with 25% to 40% turning off their blockers. So with their original business model running out of steam, Adblock Plus have gone full circle to get into the ad sales business.’

Adblock Plus’s decision, and the initial reaction to it, prove the company misunderstood both its old customer base and the publishers and advertisers it is hoping to turn into customers. First, Adblock Plus assumed its current users, people who downloaded something that, again, is named Adblock Plus, want to filter ads instead of blocking them. It also misjudged how appealing RTB in its current form is to advertisers and, as Phillipson said, how willing users actually are to put up with highly targeted ads from the content suppliers they enjoy. Most importantly, as a brand or someone paying for an ad, why switch to a system with less control when there is no substantial opposition to the current RTB model?

Besides Adblock Plus, there are other similar adblocking programs that provide practically ad-free browsing experiences. Many of them have capitalized on the negative reaction to Adblock Plus’s announcement by doubling down on their stated promise of actually blocking ads. Most of these programs use a process similar to the ‘whitelisting’ service Adblock is offering, allowing users to view ads from the sites they deem safe. This gives users the sense of control Adblock Plus is convinced it just invented.

Adblock Plus’ new take on whitelisting ignores the dynamic its previous version helped establish between users, publishers, ads, and advertisers. In the original model, training users to whitelist sites instead of individual ad units placed credit or blame for ads appearing with the publishers who accept revenue from them. Once a publisher or website was whitelisted, they remained whitelisted until the Adblock Plus user manually reversed their decision. Giving the ‘pass’ to publishers, instead of individual ads, made a ton of sense: ads change much more frequently on a random site than on a site someone frequently visits, and publishers generally adhere to the same standards when deciding which ads they’ll allow on their site.
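
That publisher-level logic is easy to picture in code. Here is a minimal sketch of the old whitelisting model; the function and data shapes are illustrative, not Adblock Plus's actual implementation:

```python
from urllib.parse import urlparse

# The user's whitelist holds publisher domains, not individual ad units.
whitelist = {"example-news.com", "favorite-blog.net"}

def should_block(ad_request_url: str, page_url: str) -> bool:
    """Block the ad unless the page's publisher has been whitelisted."""
    publisher = urlparse(page_url).netloc
    return publisher not in whitelist

# Ads on a whitelisted publisher pass through, wherever they are served from...
print(should_block("https://ads.example.com/banner.js", "https://example-news.com/story"))  # False
# ...while every ad on a non-whitelisted site is blocked.
print(should_block("https://ads.example.com/banner.js", "https://random-site.com/page"))    # True
```

The key design point is that the decision keys on the page's domain, not the ad's: one user action covers every ad the publisher will ever run, until the user manually reverses it.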

This system was successful because it was simple. It also let advertisers actually advertise, which is by nature intrusive. Crucially, by placing agency on the sites, ads were presented as a necessary evil to support the content that users enjoyed. The new model abandons that simplicity by asking users to vote on the ads themselves, and it changes the criteria for whitelisting. No advertiser in their right mind would choose an ad unit that is sanctioned for its inability to draw attention when alternative advertising models exist. Adblock Plus seems to have forgotten that publishers need ads, and ads need to be somewhat disruptive in order to be effective, which is why there was a desire to block them in the first place. Also, the RTB marketplace Adblock Plus envisions would require a staggering number of sanctioned ads in order to provide enough variety to publishers to compensate for the (likely) reduced appeal among actual advertisers. Adblock Plus probably doesn’t have the user base necessary to vet that many ad units, especially after losing so many customers in the wake of its announcement. In fact, RTB, or the auction model Adblock Plus is attempting to adopt, is dependent on the relationship between site owners or publishers and users. Before, the company played a part in emphasizing this relationship; now it’s neglecting it at its own peril.

Real-time bidding is practically the only way to advertise on social media and search engines. First popularized by Google, versions of this bidding can be found on practically every other search engine, the largest social networks (Facebook, Twitter, and LinkedIn), and leading content marketing services like Outbrain and Taboola. These platforms employ continuous streams of content, and the auction model was the only way to account for how they disseminate information in real time. The auction system accounts for the practically infinite variations of a given user’s news feed or search results. Instead of paying for a predetermined placement, advertisers bid to appear in the most relevant possible placements as they become available. The innumerable choices in social or search platforms that determine where ads could be shown mean users are subject to an incalculable number of ad units. This only works if users trust the website or publisher to choose ‘acceptable’ ad units; going through each ad individually would take forever. Even in content marketing, random units appearing in a given site’s ad spaces are subject to browsing data that essentially creates the same degree of randomness as social media. It is more practical to establish trust between sites and the people who visit them than between users and ads. One could argue any RTB system needs to be based on demographics instead of user experience, because targeting needs to be grounded in something that correlates to the person actually seeing the ad, to account for the endless contexts in which it can appear.
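
The auction mechanics described above can be sketched in a few lines. This is a simplified second-price auction, the general form these platforms popularized; real RTB systems also weight bids by predicted click-through rate and ad quality, and the advertiser names here are illustrative:

```python
def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Pick the highest bidder; they pay the second-highest bid.

    `bids` maps advertiser name to bid in dollars. In a second-price
    auction the winner pays just enough to beat the runner-up, which
    encourages advertisers to bid their true value.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

# Each time an impression becomes available, a fresh auction decides who fills it.
winner, price = run_auction({"brand_a": 2.50, "brand_b": 1.75, "brand_c": 3.10})
print(winner, price)  # brand_c 2.5
```

The point of the sketch is what is missing from it: nothing in the auction itself knows anything about the user's experience, which is exactly the gap Adblock Plus claims its vetting layer would fill.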

Demographic information was key to popularizing RTB because advertisers love this data. They gain access to unprecedented numbers of users in a single ad buy, and a targeting system that is far more specific than any other format. Social and search advertising use demographic categories based on static information (as well as in-platform decisions, but that’s for another column), like registration data, as fixed endpoints in a given user’s ever-evolving data set. Essentially, this allows advertisers to bid on who sees an ad, unlike older models where they paid for placement. RTB took a lot of the guesswork out of the question ‘is the type of person I want to see my ad guaranteed to see my ad?’ Though users may complain about advertisers using their private information to build RTB campaigns, the information advertisers actually get to work with cannot identify individual users. There are certainly issues with privacy and RTB, though they are not close to significant enough to overthrow the system.

Right now, it looks like although Adblock Plus understood the trends in online advertising, it failed to contextualize its role in a changing digital landscape. People generally care whether ads are on their screen, but the vast majority do not worry about how those ads were targeted. Though users are growing more tolerant of ads, and perhaps concerned over how they are delivered, high quality content keeps them coming back to sites using granular tracking options in their RTB units. People understand that websites need to pay the bills and, for the most part, they are willing to let them serve targeted ads in exchange for the services they provide. Up until recently, users who were unwilling to make that trade relied on Adblock Plus. Since, until last week, its entire business was blocking ads, the company is still considered toxic by many of the groups it now hopes to count as customers. Any chance of building up the user base quickly is slim, having lost a considerable portion of existing customers, and it does not seem to have the quality content needed to attract new ones. The promise of an ‘acceptable’ advertising experience is nice, but it’s a job for a plugin, not a content publisher or even an ad broker, which is what Adblock Plus is trying to morph into. A better strategy might have been to slowly roll out a different plugin, branded around the Acceptable Ads initiative, that works to accelerate the whitelisting process, perhaps by gathering information about which ads users find acceptable and later selling it to publishers, all while maintaining the original service. Anything to avoid having to say, ‘Hi, we are called Adblock Plus, though we will no longer be blocking ads so much as asking sites to pay us to show ads to users they attracted without any real help from us.’ Adblock Plus already alienated online publishers. Trying to quickly pivot and turn those people into customers may have cost it the customers it did have for its original service. Unless content providers, advertisers, and users radically change how they interact with ads online, Adblock Plus may end up using its new RTB platform to sell its office space instead of actual ad units.

Why the crackdown on fake news is a good thing

Do you know that the Washington Post cranks out more than 1,200 news articles per day? The New York Times produces at least 230 articles per day. Good luck tracking them all down. Buzzfeed published 6,365 stories and 319 videos in April alone—or about 222 pieces of content per day. These are but a few of the news organizations producing so much daily content, and no human being could realistically consume it all in a day. The Internet contains a near-infinite amount of information—we just can’t keep up with it.

So what do we do? We rely on the convenience of social algorithms to tell us what matters. We pull up our Facebook mobile feed and let the miracle and science of its algorithm find the diamonds in the rough. It’s a wonderful experience. We literally have no work to do: no newspaper to flip through, no news channels to suffer through, and no photo albums to thumb through. It’s all there for us, conveniently sorted and available at a swipe of a finger. Just about everything we read has to make its way through a filter before we see it.

Think about the apps you use most on mobile. I’m willing to wager that you get a lot of information through Facebook, Twitter, and Google. Facebook notoriously tweaks its news feed on a regular basis to ensure it’s properly calibrated to give you content you want to consume. Twitter finally realized that people find a raw news feed overwhelming and now uses an algorithm of its own to prioritize content it thinks you want to see. Even your search results are filtered. For some time now, Google has tailored its search results to make them more relevant to you, the user, based on your browsing and search history. All of these platforms have an incentive to give you information you want instead of the information that is the most up-to-date or relevant: their bottom line depends on it. If they fail to give you the content you want, you’ll tune out. And if you tune out, you’re one less person they can serve ads to. And if a whole lot of you start doing the same, revenues take a hit, membership numbers stagnate, and Wall Street gets cranky. So, these three digital behemoths need to give you quality content, which is a lot easier said than done.

For years, publishers have focused on producing huge volumes of content. Most of this content was (and remains) thoroughly cheap and unfulfilling. Think about the scourge of ‘click-bait’ articles that used to fill up our social feeds and rank highly in search results. The headlines were catchy—we couldn’t help but click on the link only to discover that the resulting article was barely 100 words long, and often, completely different than what the headline promised. Sadly, this type of content continues to plague the Internet. It’s a serious problem for curators like Facebook, Twitter, and Google. When this content appears in feeds or results and we click on it, only to get angry about where we landed, it diminishes the user experience.

Considering this, it makes complete sense that Facebook and Twitter are taking steps to remove this type of content, this fake news, from their feeds. The pair are joining at least 30 major news outlets—the Washington Post and The New York Times among them—to crack down on fake news articles more effectively, in the hopes of improving the quality of the information in social feeds. In some ways, it’s encouraging—even heartening—to see these major platforms recognize that, as the primary news source for most of their consumers, they should ensure a basic level of quality for the news they serve. This newly formed network is backed by Google and is working to create a voluntary code of practice and a verification system for journalists and social media companies to ensure a basic level of integrity in news coverage. Of course, partisans of all stripes will laugh at such a statement, since news organizations are hardly seen as objective operatives. But if we can set our biases aside, most of us will concede that ‘traditional’ news outlets are bound by journalistic standards (fact-checking, legal checks, etc.) that ought to be the norm. Of course, they’re far from perfect, but they serve as a basic foundation.

We live in a world where most news breaks online. People at the site of the news event are the ones posting raw video and images online. Eyewitnesses don’t wait for a reporter to arrive on the scene before sharing what they’ve seen first-hand. Stories that would never have been reported in the pre-smartphone era now become global movements because someone took out their smartphone and captured an event or altercation. And of course, fake news and hoaxes, like everything else online, have become much more sophisticated, and tougher to crack down on.

In this context, it doesn’t help that all news looks the same in our news feeds. It can be tough to sort out the real stuff from the hoaxes. In truth, the Internet has democratized content creation. Anyone with an Internet connection and a keyboard can become a publisher. Nothing stops me from starting a new website today, writing completely egregious or false content, and publishing it to the major social platforms. Or that same publisher could write breaking stories, uncovering facts and perspectives that others can’t or are unwilling to investigate. But whether that piece of content should be subjected to the same filtering and standards as fact-checked and verified stories is a matter of debate. I, for one, am OK with it, even if it means traditional news outlets regain some level of clout.

However, these recent developments further entrench the shift towards a highly-filtered Internet. And cracking that filter is no easy task, especially if you’re not a pre-approved news outlet. It means, more than ever, that brands and organizations need to double-down on quality content. Stop producing content for the sake of it—focus on providing value and you may get to join the ranks of the ‘Big 30.’ And if that fails, you may need to dust off your traditional media relations skills—’traditional’ outlets may soon get a bump in clout.

Curated content and the demand for excellence

For the past few years we’ve been experiencing a shift toward curated content emphasizing customization and personalization. We are demanding more, but by ‘more’ we mean better, to filter the mass amounts of noise on the Internet. The trend toward curated content started a few years ago, and while it is no longer new or novel (we now expect, to some degree, a level of filtering), it seems to be reaching some sort of zenith. According to a Princeton study, Internet tracking has intensified into something called ‘fingerprinting’, which surveils your computer for behavioral information, such as your battery life and browser window, to determine your online activity. For some time, Google has tracked searches to give you personalized suggestions based on your prior keyword searches. Now it is moving toward even more individual services that tailor search away from the common experience to a more individualized environment.
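
To see why fingerprinting worries researchers, consider how little code it takes to combine a handful of ordinary traits into a nearly unique identifier. This is only a toy illustration; real trackers run in browser JavaScript and collect far more signals, and the attribute names here are examples:

```python
import hashlib

def fingerprint(attributes: dict[str, str]) -> str:
    """Hash a set of browser/device traits into a stable identifier.

    Individually, traits like screen size or time zone are common;
    combined, they can single out one browser among millions, with
    no cookie required to re-identify the visitor later.
    """
    # Sort keys so the same traits always produce the same hash.
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "UTC-5",
    "installed_fonts": "Arial,Helvetica,Noto Sans",
}
print(fingerprint(visitor))
```

Because the identifier is derived from the traits themselves, clearing cookies does nothing; only changing the traits breaks the link.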

Within the social media sphere, platforms are constantly updating their algorithms to show you more content from the people they think you like most. Twitter rearranged its feed to show tweets in non-chronological order, prioritizing quality over quantity. The platform’s ‘While you were away’ feature relieves you of the fear of missing tweets from those (Twitter determines) you care about most. Rather than making you read it all, it assumes you’d prefer to read the tweets that count.

This is part of the idea that, arguably, mainstream culture as we know it has become fragmented. This is most obvious with music. The rise of streaming services means you don’t need to rely on the radio. There are still Top 40 charts and songs of the summer, and certain artists and songs still appeal to the masses, but there is less of a cultural consensus on what ‘good’ music is or should be, and more independent blogs, critics, and services than ever before. However, the idea of a fractured ‘mainstream’ — whether good or bad — is extending to other spheres.

Recently, Nathan Heller wrote in the New Yorker that ‘language of common values has lost common meaning.’ To a certain degree, Heller sees this as part of an overarching trend to the rise of personalization, as we remove ourselves from the truly public sphere to one that reflects our own beliefs and discourses. Our curated content feeds suggest we are independent thinkers and individuals — it’s how we’re self-identifying. From there, he talks about the unassailability of Trump’s linguistic nonsense, that he predictably and reliably divorces words from their meaning. ‘To know what Trump means, despite the words that he is saying, you have to understand — or think you understand — the message before he opens his mouth.’ This is a leap his followers are willing to make. Similarly, to know what your consumers are looking for before they even try to look for it, is what marketers have attempted to do since the beginning of time and what the Internet is getting increasingly good at.

Heller makes the grander point that the rhetoric of change has become disconnected from the process of actually making it happen, not just online but everywhere. For example, we can all identify with the word ‘feminism’, but we’ve become dissociated from what it really means on a day-to-day scale. This is generally the worry for the Internet in particular: that online behavior doesn’t necessarily translate into real-world action. On a smaller, shallower level, this applies to cultural trends and terms. What we all ‘know’ and consider part of ‘us’ isn’t what it used to be. This becomes more pronounced as the generational divide between those who didn’t grow up with the Internet and those who did becomes more stark, with the latter moving into the workforce and exercising greater purchasing power. As the younger generation shifts into decision-making and directing roles, the comparisons become more obvious and direct between what each group wants out of work, out of each other, and out of life in general. And on a search-engine scale, the more personalized the search results, the less universal the meaning of the search terms and the information attached to them.

With personalization and the withdrawal from mass consumption, some techniques have emerged to make the best of both worlds. Within this trend, newsletters have gone from spammy email blasts to a more sophisticated form of content delivery. Aggregators of taste, style, and substance have risen to the fore: similar to the automation of searches and social media, we prefer to have things automatically filtered, such as through the lens of an appointed purveyor of whatever it is we’re searching for. For example, Lena Dunham, of HBO’s Girls fame, has a weekly newsletter called Lenny Letter that delivers curated content to your inbox. The Skimm aims to make American news more digestible by giving you the top headlines for the day, complete with pop culture references and policy explanations. For an even more personal touch, TinyLetter has quietly found its own corner of the Internet and is helping fledgling companies and writers distribute their content to people who truly want it. Acquired by MailChimp in 2011, the service allows anyone to send out a newsletter — as often or as rarely as you like — to a relatively small list of subscribers. As many individuals use TinyLetter as a means to keep in touch with a group of friends, colleagues, or a small fan base as do new and growing organizations. With this kind of content, we create the sense of a conversation that is more targeted and more private.

Ironically, the very things providing this illusion of privacy are, perhaps, the most invasive. The entire idea of curated content is only possible by mining more and more of your personal data and online behavior. And this is a leap we are willing to make: we’re exchanging the mass public for the seemingly private by forfeiting details; in attempting to gain more control over the content we see, we are sacrificing control over access to our ‘individualized’ information. However, most of us give this up willingly, or at least prefer not to think about it too hard, in our search for both substance and convenience.

In another form, the demand for excellence recently played out very clearly in popular culture and music — namely, album drops. Dropping an album has become a bit of an all-encompassing thing: there’s the announcement, the hype, the previews, the leaks; often there are sites dedicated not to the artist, but to the album itself. Afterward, there are endless think pieces on the relative importance or irrelevance of said album. Frank Ocean is a reclusive artist who let four years pass between Channel Orange, his first critically acclaimed album, and his second, Blonde. The public outcry from Frank Ocean fans for a new album was loud, and only became louder and more expectant when the reported release date for Blonde came and went with no sign of new music. Online, things went from excited, to angry, to betrayed. Few artists can put the world on pause, but those who do are the ones who let the anticipation build to the breaking point.

However, the pressure mounts with this kind of discipline. The expectation is that if you are going to make people wait, you are doing so for a good reason. As a popular musician, failing to deliver on your restraint is to play a dangerous game with your fans’ idealized version of you, which is partially why, when done well, it pays off. Retreating has its own cachet these days. Patience is a virtue that has left most of us, and today it signals a level of self-control that few seem to possess anymore: it makes you a grown-up of the Internet age, capable of a level of resistance, a dignified power move.

But this is a remove we value, perhaps because most are incapable of it, or perhaps because sometimes there is an overwhelming amount of information. In the search for individualized content, there is something to be said for giving people both a good enough product and enough time to miss you, because it means your product is so uniquely you that you can afford to gamble on it. The bigger the gamble, the bigger the get — while other artists focus on being nothing if not consistent, with new release after new release, some choose to remind us that there is a difference between being timely and being timeless.

Of course, within a digital world and within public affairs, intent still matters and decides timelines. Crises often require quick responses, and SEO efforts are different from true content creation. It is easier to be choosy from an artistic standpoint than a business one. But still, there is some untold magic in restraint. Within the zeitgeist, we’re looking to those who remove themselves to a certain degree because they seem to be the ones that dictate rather than mimic, that lead rather than follow. Since we can communicate with anyone en masse, the exclusivity and seemingly intimate nature of communicating with a few — or communicating with a purpose — is appealing. It’s also a luxury that comes with talent and confidence in that talent.

But, whether we have the luxury or not, there is a palpable shift toward wanting to feel like we’re getting such an experience. We’ve become a demanding set: we want the access to all of the information on the Internet and we also want special content, whether it’s in the form of genius or something tailored just for us. And if Heller is right, it’s because this is how we showcase ourselves to the world: these are the excellent things that I like, these are the personal selections that create my online personality. At 28, Frank Ocean is part of the generation that straddles the Internet line of remembering a time before, but also having had it for most of his life. His recent album includes a one-minute speech about being dumped for refusing to add someone as a Facebook friend. Like most of us, he both celebrates and rebukes the Internet age. While he is exceptional in many other ways, in this is one in which he’s just like the rest of us.

It’s Google’s world and we’re just living in it

Google already won. As the most dominant search engine in the world it has unprecedented control over each individual’s access to information. While most people already knew that, some forget that Google is also among the world’s largest data brokers, listing services, and news providers. Google may now be too big to fail and too ubiquitous to worry about whether its actions anger users or bother legislators (many of whom still do not understand search engines’ role in contemporary communications). Two recent events—the company’s decision to essentially begin charging for accurate keyword data and a European Union proposal to charge search engines for news headlines indexed in their results pages—underscore how Google can do whatever it wants and nobody in power has any idea how to stop it.

Last week, Google changed how research is done in any field involving online communications, and specifically in SEO and digital marketing. The company significantly reduced the amount of free data available via its popular Keyword Planner tool. This is a big deal. In order to enjoy full access to Keyword Planner you now have to be a paying Google customer, which wasn’t the case before. Technically, full access requires account holders to have active AdWords campaigns. Using the search engine, Gmail, or YouTube is not enough — you have to be paying Google every month to be considered enough of a customer to use its keyword planning service.

So why the outrage? First, because Keyword Planner has always been free, and people will always resent having to pay for something that previously cost nothing. Technically, they can still use Keyword Planner without paying, but now the tool returns ranges instead of round numbers for each search term:

Before:

[Screenshot: Keyword Planner showing an exact monthly search volume, e.g. ‘2,900’]

After:

[Screenshot: Keyword Planner showing only a range, e.g. ‘1k-10k’]

As you can imagine, the difference between ‘2,900’ and ‘1k-10k’ is huge. That switch—from being able to pull a precise search volume to an estimate that could be off by thousands—has digital strategists fuming. Suddenly we’re planning campaigns using data points with massive ranges when we had been working with exact figures our entire careers. More importantly, how do we explain to clients why we aren’t confident in next month’s projections beyond the nearest 10k, when we’ve previously provided much more precise estimates?
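
For what it’s worth, strategists stuck with the new buckets can at least treat each one as a numeric interval and plan against its bounds. A rough sketch of what that looks like (the bucket labels mirror what the tool now displays; the parsing logic is my own, not anything Google provides):

```python
def parse_range(label: str) -> tuple[int, int]:
    """Turn a Keyword Planner bucket like '1k-10k' into numeric bounds."""
    def to_num(part: str) -> int:
        part = part.strip().lower()
        if part.endswith("k"):
            return int(float(part[:-1]) * 1_000)
        if part.endswith("m"):
            return int(float(part[:-1]) * 1_000_000)
        return int(part)
    low, high = label.split("-")
    return to_num(low), to_num(high)

# Before: an exact volume. After: bounds that differ by an order of magnitude.
low, high = parse_range("1k-10k")
print(low, high)   # 1000 10000
print(high - low)  # 9000 -- the uncertainty a monthly projection now has to carry
```

A projection built on these buckets has to be reported as a band rather than a point estimate, which is exactly the conversation with clients that strategists are dreading.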

Thankfully—for now, at least—there are ways to bypass Google Keyword Planner’s recent restrictions by using reputable third-party software. Still, these tools rely on Google’s data, and the restrictions Google is implementing for individual users are having a trickle-down effect on them. One of these third-party providers recently sent its customers a letter explaining that it had no real idea what was going on, acknowledging the current situation is less than ideal, and asking for patience while it works on a solution. Most of these providers have recovered from Google’s changes after a long four-day adjustment period. But the problem for people working with this software every day is that, in a more specialized sense, each tool serves a different function as part of a holistic keyword research strategy. For example, some programs are ideal for on-page SEO and competitor research on specific URLs or domains, while others are designed for ecommerce applications or for finding specific ‘long tail keywords’. Most people who work with keyword data regularly consider Google’s Keyword Planner a primary source; it is by far the most trusted source for volume and competition levels, which can be applied in almost any SEO context.

Honestly, making people pay for Keyword Planner would not be worth writing about if the tool were only used for AdWords campaigns. If it were only used for ads on Google results pages, advertisers would be monthly customers by default. Even if that were not the case, the impact of less targeted ads within the search engine would only be felt because of Google’s near-universal reach; it would not really impact people’s lives. Things are different because, just as ‘Google’ has become shorthand for searching the Internet, its Keyword Planner is behind just about any task that involves building audiences online, and Google knows this. In a world where users interact with far more than just ads, keyword research—which helps us understand the words people use to navigate between online entities—is incredibly important to any business’s digital marketing efforts, especially in helping companies study how customers talk about products.

Google likely annoyed just as many digital marketing professionals with its excuse for blocking Keyword Planner as it did with the block itself. Google explained that it restricted Keyword Planner to stop bots from accessing keyword data. Some voices in the SEO and digital marketing community feel 'bots' has become standard Google nomenclature for so-called black hat SEO practices, some of which do involve robots, though many do not. These aggressive techniques attempt to expedite the long process of changing search results by catering exclusively to 'technical' or algorithmic ranking factors instead of providing content of actual value to humans. Things get hazy when you remember that keyword research is by no means a black hat tactic. The truth is, a number of mechanisms already exist to prevent black hat techniques from undermining Google's value as an information aggregator for real people, which is why practitioners are skeptical of Google's rationale. Remember, Google's Keyword Planner was considered the best primary source for search volume and some corresponding demographic data. Though originally developed for search advertisers, it has many applications well beyond advertising; digital strategists have long relied on it for things like content optimization and technical SEO analysis. Most suspect Google made this decision to increase revenues, knowing nobody outside the SEO community would raise an eyebrow if it blamed 'bots' or 'black hats'. The argument that bots were abusing Keyword Planner to any serious extent is thin. Because it enjoys broad market dominance built on highly specialized knowledge, Google doesn't have to care: there is simply no entity with the necessary combination of reach and authority that users could turn to instead.
To someone working in digital analytics, the Keyword Planner decision is incredibly frustrating and a sure sign of Google using its influence to limit anyone's ability to effect change within its platform without buying ads. That said, justifying what would otherwise be a very unpopular decision with 'bots' makes Google seem diligent, and it is a brilliant public messaging strategy.

Forcing people to pay for Keyword Planner will have noticeable consequences. Without reliable data, marketers are less likely to craft strategies that legitimately improve their sites' positioning in Google. This will force them to spend more on ads to increase traffic, and it will diminish their ability to influence how their properties appear in what is, by far, the world's largest source of information. Google thus has greater control over what appears online than ever before, and marketers need to pay to play. It's great news for Google shareholders, but troubling for anyone concerned about the level of influence one corporate entity wields over our access to information.

Google built its incredible market share on extreme competence and a vastly superior product compared to competitors like Bing, Yahoo, and Ask. It is a phenomenally successful company that doesn't owe anyone anything. However, extra responsibility seems like a fair consequence of unprecedented success. The ideal time to seriously debate Google's place in the world, and whether we as a society should place restrictions on companies that decide how information is disseminated, has likely passed already, but that conversation still needs to happen. Search engines should be required to disclose some details of how the general public interacts with their platforms, so that businesses can plan accordingly. Though search engines are the 'gateway' to information, without content creators and publishers there would be no need for them, and working with those creators can produce a better long-run experience for developers, marketers, and even Google. The Internet is also far more entrenched in daily life than when Google first started, and for new technologies that become as popular as Google, some kind of regulation is required for the greater good. Unfortunately, most laws attempting to regulate Google, or almost anything to do with the Internet, rely on thinking from a pre-digital age, and past attempts at online regulation have done more harm than good in terms of access to information.

For example, Tim Worstall of Forbes does a great job explaining how lawmakers misunderstand the economic benefits search engines provide publishers. Consider the EU's recent proposal to charge search engines for displaying news headlines. When Spanish regulators tried to force Google to pay the publishers of the news headlines it indexes, Google refused and shut down Google News in the country. As a result, publishers are suffering far more than Google, having lost referral traffic and the advertising revenue that comes with it. The proposed EU legislation could likewise force search engines to choose which headlines to display, likely for monetary reasons, effectively creating a scenario where digital news goes to the highest bidder, which seems like the opposite of what the law intended.

Beyond the publishing industry, these laws could have longer-term consequences. By forcing search engines to pay to list news results, lawmakers are creating barriers to competition and strengthening Google’s stranglehold on the search market. Now, any new search engine trying to establish itself has an expense Google did not have to account for when building its audience.

The EU's proposal is the culmination of smaller attempts to give publishers more influence in Spain and Germany. These attempts did not work. The fact that European regulators have now tried three versions of the same arrangement between search engines and publishers, despite it failing twice, suggests they may not know how to strike the right balance between cracking down on powerful search engines and protecting domestic interests. This ignorance allows the current situation to continue, where Google can do things with far-reaching implications without notice, discussion, or material consequence.

No entity exists to determine what information, if any, Google must freely disclose so the general public can best manage websites in an environment where Google influences most of the relevant traffic. One is probably coming soon: people are noticing Google's disproportionate influence and growing wary. In the meantime, with legislators struggling to make sense of search engines and the people who use them being made to pay for their best data source, the importance of applicable SEO knowledge has never been more apparent. Until the next major change, messaging online will be about accepting that Google will do whatever it wants and learning how to leverage that toward your goals.

Risk and reward: expressing support online

The end of Gawker

Last week, Gawker Media announced that it was closing its doors after a protracted legal battle with multibillionaire Peter Thiel, who seems to have made it his mission to destroy the site.

Gawker was a website that pioneered much of the content form and style of online writing we see today. As noted in its closing statement, which included site data, Gawker predates today's tracking tools, including Google Analytics. Along with its affiliate sites like Deadspin, Jezebel, and Gizmodo, the 'Gawker Network' popularized the snarky, informal, and incisive style that is now characteristic of online writing. As the network's flagship website, gawker.com was the first to publish posts on topics, and from people, that arguably wouldn't have found a home otherwise.

As Adrian Chen notes, Gawker was a unique place to become a journalist because it put writers in front of the masses to 'express themselves how they wanted.' The site was the birthplace of much of Internet culture. Despite the ephemerality of the net, and our tendency to valorize something and then dispose of it, Gawker will likely live on as a legend, if for no other reason than that most of its writing staff have gone on to other sites and publications, where they continue publishing content that perpetuates, in some shape or form, the style the site originated.

Gawker was generally the first in any situation to point out when companies, publications, and people, particularly people of power or influence, were being self-congratulatory, smug, or overly indulgent. Gawker was also a pioneer in the comments section, which was one of the site's draws. It created the Kinja discussion thread, which works much like comments and replies currently do on Facebook posts; it was ahead of its time, and you can still see it in action on Gawker's affiliated sites. Kinja attempted to create an online discussion between bloggers, writers, journalists, and commenters, and this worked (or didn't) to varying degrees throughout the site's history.

Online conversations and commentary on any given issue can feel circular, and it often seems like everyone on a particular social media platform is talking about the same thing. Recently, NPR decided to shut down the comments section on its site for partly the same reason: in July, its site drew nearly 33 million unique users and 491,000 comments, but those comments came from just 19,400 commenters, or about 0.06% of users. Whatever discussion its posts were generating, it was among the same small group of people.
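The arithmetic behind that 0.06% figure is worth spelling out. A quick back-of-the-envelope check, using NPR's reported July figures of roughly 33 million unique users and 19,400 distinct commenters:

```python
# Share of NPR.org visitors who actually commented, per NPR's reported July figures.
unique_users = 33_000_000   # approximate unique visitors to NPR.org that month
commenters = 19_400         # distinct users behind the 491,000 comments

share = commenters / unique_users * 100
print(f"{share:.2f}% of users commented")  # → 0.06% of users commented
```

In other words, fewer than one visitor in a thousand ever joined the conversation, which is why NPR concluded the discussion was happening among the same small group.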

But, just as in life, not all social spaces are created equal. On Twitter, Gene Demby, a writer at PostBourgie and NPR's Code Switch, explained the trouble with comments sections and why it's difficult to make them resemble anything constructive:

[Screenshot: Gene Demby's tweets on comments sections]
So in the wake of Gawker's demise and NPR removing its comments section, it seems fitting to look at some trends in our particular social spaces. Public online spaces of interaction can be exhausting or even harmful for some people, mostly minority groups. But what about the areas we think of as more 'private', or at least limited by permissions and friend requests?

Real friends — how many of us?

In June, Facebook retooled its algorithm to show more posts from your friends over media outlets in your news feed. Not only that, but it reportedly prioritizes the content from friends that you care about — aka the people you interact with most on the site. Facebook used to limit the number of posts you would see in a row from the same person. However, with this new update, this rule is less stringent, allowing your friends to dominate your news feed more than ever.

But the truth is, Facebook is no longer an accurate representation of your social circle. Most of us don't talk to 300-plus people a day, let alone probe them on their political and moral beliefs. Yet more than any other platform, Facebook has become the place for moralizing and politicizing. It is at once personal and public, which means publicizing your views on sensitive or controversial topics feels safe and scary in equal measure.

Self-care

In many ways ‘Self-care’ is the Internet term of 2016. To be short about it — the year has felt bad. World-falling-apart-at-the-seams kind of bad. In case you’ve somehow forgotten, the year has been filled with terrorist attacks, police brutality, peaceful protests-turned-violent-attacks, unsavory politics, and a myriad of other now seemingly regular disasters. For many, social media has, if not perpetuated, at the least amplified, the feeling of constant bad news.

Self-care takes different forms. Some feel checking out from social media altogether is necessary, in part because some people face more discrimination and harassment online than others for sharing their views. As for sharing your political views on Facebook, you can't really make blanket statements about how that does or does not go, because the stakes differ dramatically depending on who is doing the sharing.

Quartz published a (somewhat sketchy) article on data collected by Rantic suggesting that your political Facebook posts do not alter people's behavior or views. The study's authenticity can't be verified; anecdotally, however, it feels true: most people who post political updates on Facebook are posting to an audience that by and large already agrees with them, and these posts are not changing anyone's political views.

On the flip side, The Ringer explored the mainstreaming of the Black Lives Matter movement and the role of social media. Looking at how the hashtag has spread gradually, and then rapidly following horrifying cases of brutality caught on camera, it compares online tipping points of adoption to the moment Facebook introduced the equals sign as a profile picture option to show support for marriage equality.

To be clear, there are vast differences between expressing your political views in a well-thought-out post detailing your experience or beliefs and changing your profile picture to include a watermark filter that identifies you as an ally to a particular cause or issue. The chasm between replacing your photo with a Facebook-created-and-sanctioned filter and reaching a point of frustration and confusion at systemic and institutional marginalization is miles wide. The Ringer article makes this point, but also suggests there are crossover lessons.

Based on a study Facebook conducted on the equals-sign profile picture, the number of people posting about an issue does affect whether or not you follow suit: the median user changed their photo after eight friends had already done so. Once something becomes popular, there's less social risk in associating yourself with the cause; you can change your photo and update your status with impunity, without risking political backlash or losing friends who vehemently disagree with your viewpoint.

Over the past couple of years, though, a performative element has become part of the Facebook post, status update, or profile picture change. This doesn't exclude people from posting about causes or social issues they genuinely care about; performance and genuine identification are not mutually exclusive. You receive social recognition and validation for such posts, have warm and self-affirming feelings about your beliefs, and are encouraged to post more on similar topics.

So there is some truth to both sides. Facebook posts may not alter behavior, but they can grant people who would not post of their own volition permission to enter a digital and social space. Depending on the issue, commenting on social media can be in direct opposition to someone's sense of self-care. Put another way, posting your political views on social media requires you to assess your level of comfort and safety within your online network, the same way you would in a real-life social setting. Studies that look at whether issues are adopted operate on the assumption that all issues are created equal, or can be responded to with equal weight. Depending on the topic, the people discussing an issue might not be the ones most affected by it, and for some of those who are, the political and personal blowback from engaging online isn't worth the trouble.

And among those who do discuss a politically or socially sensitive topic, there's a line: sometimes, identifying yourself as an ally to a given cause can come off as more self-congratulatory than supportive. Like when someone feels the need to tell you about the nice thing they did, or are planning to do, for you rather than simply doing it, it can feel like cashing in social justice credit.

#woke

There's a whole segment of the population that doesn't know what being woke is, which is an interesting phenomenon, because woke is now common enough to have come almost full circle, performing double duty as both a positive and a negative term. As The New York Times put it in April, 'Think of "woke" as the inverse of "politically correct." If "P.C." is a taunt from the right, a way of calling out hypersensitivity in political discourse, then "woke" is a back-pat from the left, a way of affirming the sensitive.' More to the point: 'It means wanting to be considered correct, and wanting everyone to know just how correct you are.'

Although being woke is becoming a casualty of Internet performance, its origins are in black rights movements, and it essentially means being aware of the complexities and problems underlying how our society functions and the multiplicity of ways in which systemic racism and prejudice manifest, and refusing to accept them as the status quo. Its popularization is usually attributed to Erykah Badu, who urged people to 'stay woke' in her 2008 song 'Master Teacher'. Today, it can still mean that, but it can also mean that you're so woke you haven't slept in days, what with pointing out all the ways you are more awake to the world's injustices than everyone else. In this negative context, being woke is not only performative but condescending and competitive.

Woke friends, in the negative sense, usually beget woke friends, because woke users are generally very active on social media in order to prove their wokeness; public recognition is a core component. As an insult, woke is more popular on Twitter, but un-self-aware woke behaviour is arguably more prominent on Facebook. Twitter is generally a more public forum: you're interacting with, or at least reading, tweets from people writing for a larger and more public audience than on Facebook. Twitter users are therefore quicker to call out this kind of behaviour in one another, and you're less likely to find the kind of affirmation you would get from your friends on Facebook.

The rules of engagement

Basically, this is all a word to the wise and a consideration for your online objectives. If our social networks are generally composed of people with similar political views, and online discussions are usually dominated by a few voices, spreading a message and garnering support requires a tipping point before it reaches a wider audience. But what that looks like, and how it works, depends on your audience and your issue. As the Facebook study on the equals sign found, the number of friends who adopted the photo played a role in whether someone would also change their display picture, but so did demographic characteristics such as age, gender, and education, along with factors relevant to marriage equality, such as religion.

As mentioned above, not all spaces are created equal. Assuming everyone takes on the same amount of political risk when adopting a given issue assumes your entire audience is on the same playing field. When one group needs to actively check out for self-care while another has the luxury of becoming incredibly woke because the issue is, for them, at most performative, finding the tipping point is much more nuanced than simply trying to get people onside.

This isn't to say you can't use social media to do this; it's to say you need to consider which group your message addresses most, whether intentionally or not. Performative support isn't the kind that correlates to action, and worse, it's the kind that can alienate real supporters from your efforts. If your issue is high-stakes for you, realizing it could be high-stakes for others is the minimum level of consideration you should give when trying to persuade them to publicly support your cause. When people are actively avoiding engagement, whether to avoid being lumped in with one group or as an act of self-care, there's an added level of sensitivity required in what you're saying and how you're saying it.

To take it back to Gawker — Gawker was unafraid to challenge the status quo, the powerful, and it took chances on content, sources, and the types of stories it ran. Much of its content was controversial, but it also gave a voice to a lot of people who wouldn’t have had one otherwise, and for that it will forever hold a place within the history of the Internet. And in the end, to the dismay of many, that level of confidence and brashness didn’t survive and it was forced out. Although it’s now bankrupt, Gawker Media as a whole will continue through its affiliate sites. But the brand and the ideals of gawker.com have been effectively shut down. The end of Gawker is bad for many reasons, implications for free speech being one of them. As much as the Internet seems like a free-for-all, it’s still worth asking who has room to participate and what’s at stake for them if they do.