By Sparrow
At the ad agency where I work, the owner brags to clients that we could serve ads to their intern’s mom. It’s not an exaggeration. The vast swathes of user data collected by social media platforms, websites, apps, and the smart devices that colonize our homes grant modern advertisers a staggering ability to target ads to hyper-specific audiences.
In the past several months, tech companies have come under public scrutiny following a series of scandals—from the whistleblower leaks in October 2021, which exposed (among other things) Facebook’s inconsistent policy enforcement, to the revelation in January 2022 that Apple was skirting its own privacy policies to allow certain companies to continue to collect data from people who had opted out of tracking. Yet while these scandals have brought questions of user privacy and safety to the forefront, popular discourse proceeds almost entirely from the perspective of users or the tech companies themselves. The ad industry—the financial engine of this system and the ultimate purchaser of user data—remains an inscrutable behemoth.
When we approach questions of user privacy without a strong understanding of how modern advertising works, how advertisers access data, and how they exploit it in the service of capital, our proposed solutions address only data collection and security, without addressing that data’s ultimate use and abuse. I hope that by using my vantage point from within the ad industry to explore these questions, I can add valuable context to the conversation around privacy and surveillance advertising.
From Mad Men to the Metaverse
Historically, advertisers had a limited ability to target ads to specific audiences. If my agency wanted to reach the intern’s mom in 1985, we might have bought ad space in Better Homes and Gardens magazine or on a billboard near her house or on NBC during an episode of Cheers. But a whole new ad frontier emerged with the birth of the internet and digital advertising.
Modern digital advertising functions on a system of real-time ad buying, wherein algorithms hold near-instantaneous auctions each time a user is eligible to be served an ad. While traditional advertising methods rely on buying ad space in a context where interested customers are expected but not guaranteed to see the ad, digital ad exchanges allow advertisers to buy ads on a case-by-case basis, depending on whether the consumer is likely to be interested in or receptive to the ad in the first place.
This subtle difference is critical: If a jewelry company buys an ad in a print magazine, they pay a flat fee for that ad space and everyone who reads the magazine will see the ad, regardless of whether they’re interested in jewelry. If the same company buys a targeted ad on Facebook, they pay for each time someone sees the ad, but instead of the ad being seen by everyone on Facebook, they can select targeting parameters and exploit Facebook’s user data to show the ad only to people who are interested in jewelry (or whatever other criteria they choose).
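To make that difference concrete, here is a deliberately simplified sketch, in Python, of a per-impression auction. Everything in it (the campaign names, the bids, the targeting format) is invented for illustration and doesn’t correspond to any real exchange’s API, but the shape of the logic is the same: the ad is bought only when this particular user matches the advertiser’s parameters, and the highest eligible bidder wins that single impression.

from dataclasses import dataclass, field

@dataclass
class Campaign:
    brand: str
    bid_usd: float                                  # price offered for one impression
    targeting: dict = field(default_factory=dict)   # e.g. {"interests": {"jewelry"}}

    def matches(self, user_profile: dict) -> bool:
        # Eligible only if the user satisfies every targeting parameter.
        return all(user_profile.get(key, set()) & wanted
                   for key, wanted in self.targeting.items())

def run_auction(user_profile, campaigns):
    # One page load, one auction: the highest-bidding eligible campaign wins.
    eligible = [c for c in campaigns if c.matches(user_profile)]
    return max(eligible, key=lambda c: c.bid_usd, default=None)

user_profile = {"interests": {"jewelry", "gardening"}, "age_bracket": {"45-54"}}
campaigns = [
    Campaign("jewelry_brand", 0.012, {"interests": {"jewelry"}}),
    Campaign("energy_drink", 0.020, {"interests": {"gaming"}, "age_bracket": {"18-24"}}),
]
winner = run_auction(user_profile, campaigns)
print(winner.brand if winner else "no ad served")   # -> jewelry_brand

The jewelry brand pays only when its ad is shown to someone the data says is interested in jewelry; everyone else never even triggers a bid, which is precisely what makes the underlying user data so valuable.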
User data forms the backbone of this system. The more an advertiser refines their ability to target receptive users, the less money they “waste” on people who don’t make purchases, and the higher the return on investment for the brand when people do make purchases.
In 2022, in addition to all the strategies my agency could have used in 1985, we could also reach the intern’s mom by targeting members of a Facebook group for parents of students at the intern’s university. Or we could set up a page on the client’s website showing off intern projects and then serve ads to the people who have visited the page. Or we could use social media to target first-degree connections of the client’s employees. Ultimately, advertisers’ ability to create extremely specific targeting parameters is as limitless as the data they have access to.
In modern advertising, advertisers and the mega-corporations at the center of the privacy debate are locked in an incestuous relationship. Tech companies control advertisers’ access to digital ad space. Together with Amazon, Meta (née Facebook) and Alphabet (Google’s parent company) hold a triopoly on digital ads—GroupM estimated that the three companies facilitated 80-90% of all digital advertising in 2021. In turn, advertisers provide tech companies’ main source of cash flow, with advertising sales bringing in 81% of Alphabet’s 2021 annual revenue. For Meta, that number jumps to a whopping 97%.
The algorithms of all three platforms further reinforce the value of user data by favoring ads that are more relevant to users, making well-targeted ads literally cheaper to buy. Not coincidentally, all three platforms also happen to directly offer advertisers access to a dazzling hoard of user data.
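As a rough, made-up illustration of that pricing logic (the real ranking formulas are proprietary and far more complicated), platforms rank competing ads by something like bid times predicted relevance, which means a well-targeted ad can win an auction while offering less money:

def ad_rank(bid_usd: float, predicted_relevance: float) -> float:
    # Simplified stand-in for proprietary ad-ranking formulas: bid x relevance.
    return bid_usd * predicted_relevance

print(ad_rank(0.010, 0.80))   # well-targeted ad   -> 0.008
print(ad_rank(0.015, 0.40))   # poorly targeted ad -> 0.006: loses despite bidding 50% more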
Thus, the parasites feed each other. The relentless pursuit of profit incentivizes advertisers to constantly refine their audience targeting capabilities and incentivizes the platforms to continue to collect user data and to sell advertisers ever more precise mechanisms for exploiting that data.
Data Points
So, what data are we talking about exactly? At the risk of sounding alarmist, pretty much anything, since in theory any device with an internet connection can collect data on people. Broadly speaking, however, there are two overarching categories: first-party and third-party data.
First-party data refers to data that a company or brand collects about their own customers, such as email addresses or phone numbers. Most loyalty programs exist to populate these lists by getting high-value customers—that is, customers likely to make valuable purchases—to share their personal information in exchange for discounts or access to special deals. Another example of first-party data is when companies track website visitors, like the page we set up earlier to find the intern’s mom. Via a piece of code installed on their website, brands can pipe a log of visitors directly into an advertising audience.
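As a hypothetical sketch of what that piece of code amounts to (the names below are invented; real products such as the Meta Pixel or the Google tag wrap the same idea in their own APIs):

from datetime import datetime, timezone

page_view_events = []           # raw log reported back by the embedded snippet
intern_page_visitors = set()    # the resulting advertising audience

def on_pixel_fire(browser_id, page_url):
    # Runs every time the snippet on the brand's website loads.
    page_view_events.append({
        "browser_id": browser_id,
        "page_url": page_url,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    # The ad platform matches the browser ID to a known account and files it
    # into whichever custom audiences the advertiser has defined.
    if "/intern-projects" in page_url:
        intern_page_visitors.add(browser_id)

# The intern's mom visits the page we set up for exactly this purpose...
on_pixel_fire("browser-8c2d91", "https://client.example/intern-projects")
# ...and ads bought against this audience can now follow her around the web.
print(len(intern_page_visitors))   # -> 1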
Third-party data, on the other hand, is collected by outside actors—including apps and smart devices, the ad platforms themselves, and a whole sub-industry of companies dedicated to compiling data specifically for advertisers. The most common categories include demographic data, like age or gender; behavioral data, such as the amount of time someone spends on a given social media site; interest data, ranging from broad categories like “beauty” or “cosmetics” down to specific parameters like “pink lipstick”; and geographic data, which encompasses not only where a person is at the moment the ad is served to them, but also the places that they have traveled within a given time period.
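Strung together, those categories become a single targeting specification. A hypothetical example (the field names are invented; every platform has its own vocabulary, but the shape of the request is roughly the same):

lipstick_campaign_targeting = {
    "demographic": {"age_min": 25, "age_max": 54, "gender": "female"},
    "behavioral":  {"min_daily_minutes_on_platform": 30},
    "interest":    ["beauty", "cosmetics", "pink lipstick"],
    "geographic":  {"current_city": "Portland", "cities_visited_last_30_days": ["Las Vegas"]},
}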
It’s a common misconception that companies like Meta and Google sell user data to advertisers. While companies that sell data certainly exist, Meta and Google are not technically among them (I reference Meta and Google here specifically, but this also applies to most social media and major tech companies in general). What these companies actually sell is access to user data, meaning that an advertiser can use Meta’s data, for example, to target ads bought through the Facebook ad platform, but they cannot at any point view the data directly or use it on another ad platform. But this distinction, while important for the sake of understanding the relationship between tech companies and advertisers, is ultimately semantic. Whether a company sells user data directly or “just” access to it, the end result is the same.
If my agency targets the intern’s mom by serving social media ads to the first-degree connections of the people who work at our client’s company, we rely exclusively on data that advertisers can access via a social media site’s ad platform, without ever owning the data ourselves. Moreover, it’s data that users voluntarily supply to the social media site when they list their job in their profile and connect with their acquaintances.
When used for something like my boss showing off to a client, it seems fairly innocuous. But just imagine how easily it could instead be used more nefariously: for example, to serve union-busting ads to the friends and family of workers trying to unionize. Regardless of whether the advertiser or tech company has access to the raw data, and regardless of people’s ostensible consent to their data’s collection, the very use of this data represents a massive and predatory privacy invasion.
To an Ad-Free Future
Advertising is one of capital’s most ubiquitous instruments of control. It influences where we spend our money, the food we eat, how we pass our time, the people we vote for, even the values we hold. When we trust the ad industry with user data and the ability to target highly specific audiences, they will always use it to manipulate us and to profit off us. Granting advertisers access to any form of user data inherently invites abuse.
Unfortunately, it’s unlikely that the advertising industry will abandon targeted advertising any time soon. Although a bill introduced to the US Congress in January 2022 would ban surveillance advertising, its chances of being enacted are slim—at least in part thanks to how heavily political campaigns rely on serving targeted ads to constituents.
There are basic steps everyone should take to protect their privacy against advertisers. All social media sites have privacy settings of some sort, which you should check regularly, although the level of control they actually give the user is often unclear. A good ad blocker will block not only ads but also the tags that track you across the internet. Likewise, switching to a privacy-conscious, open-source browser like Firefox gives you built-in, customizable tracking protection. A VPN adds extra security, preventing companies from accessing your location, which is sometimes used to serve ads even when users have opted out of tracking. Furthermore, autonomous tech collectives offer alternatives to tech companies’ monopoly on the internet, with non-ad-funded options for secure email, collaboration, and document sharing, just to name a few.
But what makes this problem so critical is its inescapability: completely avoiding surveillance advertising in its modern iteration requires either a high level of tech literacy or abstaining from tech altogether. Furthermore, individual solutions don’t address the root of the problem or combat surveillance advertising as a system.
On a cultural level, we should reduce our consumption habits across the board. Advertisers and tech companies are incentivized as agents of capitalism to convince us to over-consume—everything from food to clothing to “content.” Consciously reducing our consumption undermines the power that advertisers exert over us, limits their ability to steal our data and profit from its exploitation, and frees us to build and immerse ourselves in alternate systems.
Beyond this, we need to become more comfortable with inconvenience. A lot of people like personalized ads because they’re incredibly convenient: companies can serve you an ad for the exact thing that you’re looking for at the exact time that you’re looking for it. Tech companies likewise use the data they collect to customize the content you see, save your settings, and personalize your overall experience with the brand.
But it’s these very “benefits” that ultimately reveal themselves as self-serving scams. Companies only care about things like your convenience or “brand experience” insofar as that brand experience leads to a return on investment for the brand.
Take for example the ever-mysterious social media algorithms, supposedly designed to enhance user experience by prioritizing relevant content. The most widely publicized of the 2021 whistleblower leaks revealed that Facebook’s own data had indicated for years that Instagram’s algorithm was psychologically addictive and harmful to users’ mental health, particularly among tween and teen girls. But as recently as March 2021, Mark Zuckerberg had publicly denied accusations that his company’s platforms had negative impacts on mental health.
Similarly, among the slightly less well publicized leaks was the revelation that in 2018, Zuckerberg had personally rejected proposed measures to fix the Facebook algorithm’s proclivity to promote outrage. He cited concerns that the fixes might cause users to interact with the platform less.
In both cases, the apparent “bugs” were ignored or suppressed because they directly serve the explicit purpose of the algorithm: to keep people on the platform as long as possible, because the longer someone is on a given platform, the more opportunities the platform has to collect their data and serve them ads.
We must reject experiences that are constantly curated to our convenience, mediated by algorithms and advertisements, and designed to extract maximum profit. We can’t divorce discussions about social media and algorithms from tech companies’ relationships with advertisers. The very real harm inflicted by Meta, Google, and their ilk, allegedly in order to bring us a maximum level of convenience, is incentivized by advertisers at every turn. We must strive, both individually and in our communities, to reclaim our attention and our privacy.
Fundamentally, the most essential part of the advertising “ecosystem” is not the platforms, as advertising leaps from medium to medium, nor the advertisers, who spend and manipulate while never producing anything tangible, but the people they call “consumers.” Advertisers may provide the financial capital, but value is derived from the users themselves—giving users a surprising degree of power. Advertisers know this, and ad industry publications have spent much time and energy over the past year fretting about the shift to privacy as a fundamental threat to modern advertising. Without so-called “consumers,” the advertiser-tech partnership becomes nothing but an insatiable ouroboros, eating its own tail.