Algorithmic Harms of Surveillance Tech on Consumers

TL;DR – Surveillance tech is creating social, political, and economic problems through targeted advertising. Consumers are concerned about their privacy and policymakers are proposing significant changes to regulate digital advertising.  

Using personal data to predict behavior and customize advertisements is not new. Perhaps you have heard about Target’s pregnancy prediction score. Target’s advertising algorithms analyzed shoppers’ purchasing patterns and assigned each shopper an internal score, which personalized the coupons for maternity and baby products mailed to their homes. The pregnancy prediction score gained national attention in 2012 when a young woman, attempting to hide her pregnancy from her family, was outed after advertisements for baby clothes and cribs were mailed to her family’s home address.

Fast forward 10 years and ads of a personal nature are regularly at the top of search results, next to newsfeeds, and in our Stories. Social media platforms are free and accessible due to the business model that sustains them.

Digital advertising is a $200 billion business that relies heavily on the collection of personal data through the use of devices, apps, and websites to target and personalize advertisements. The data are tracked using cookies and pixels and shared between applications, depending on privacy settings.
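The pixel side of that tracking can be sketched with a toy example. A “pixel” is a tiny image whose URL carries identifying parameters back to an ad server when the browser fetches it. This is a minimal sketch of the general mechanism only — the domain, parameter names, and functions below are illustrative assumptions, not any real platform’s API:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical sketch of a 1x1 "tracking pixel" URL. When a page embeds
# an image at this URL, the browser's request delivers the parameters
# to the ad server. All names here are made up for illustration.
def build_pixel_url(user_id: str, page: str, event: str) -> str:
    params = urlencode({"uid": user_id, "page": page, "ev": event})
    return f"https://ads.example.com/pixel.gif?{params}"

# The ad server recovers the same data from its request logs.
def parse_pixel_hit(url: str) -> dict:
    qs = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in qs.items()}

url = build_pixel_url("abc123", "/hiking-boots", "view")
hit = parse_pixel_hit(url)
# hit → {"uid": "abc123", "page": "/hiking-boots", "ev": "view"}
```

Cookies play a complementary role: the browser attaches them automatically to the image request, letting the server tie this page view to earlier ones by the same user.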

But it is about more than selling advertisements; it is also about access to the behavioral data of the roughly 72% of American adults who use social media. Facebook’s Data Policy clearly states that it collects data about a person across devices and Products (Facebook, Messenger, Instagram, WhatsApp), including location, content, time spent on websites, and information others share about you. Google’s Privacy Policy covers the collection of data through its suite of Products (Search, Maps, Home, YouTube, Chrome, and Android devices), including everything Facebook captures as well as videos watched, browsing history, and activity on third-party sites that use Google services.

While on Facebook you will see an ad for the new hiking boots you are thinking of purchasing alongside an ad about a highly contested political issue in your battleground state. Platforms are not only a place to sell products but also politics. Facebook’s Ad Library allows some transparency into the ad spend of businesses and organizations. In the last week, the highest ad spends were by those selling ideas.

More recently targeted advertisements on social media platforms have included misinformation campaigns and fake Instagram Shops. This led to pushback by members of Congress with the introduction of the Banning Surveillance Advertising Act of 2022 (H.R.6416 and S.3520), which seeks to prohibit advertisements that use personal data including protected class information and information purchased from a data broker.

Preliminary results from my study on awareness of algorithmically targeted advertising show that 58% of those surveyed are not aware of the personal information Facebook collects to personalize advertisements.

This is not surprising since most American users claim not to read the Terms and Conditions or Privacy Policy. And how could they when it would take 15 weeks to read the terms of all apps and websites the average person uses?

And it is not only social media platforms. As recently reported in the New York Times, people are acquiring a food preservative to die by suicide, and enough such purchases have been made that Amazon’s algorithm recommends, alongside the preservative, other products that aid in the process.

What is being done?

Initiatives are underway to help assess and regulate the potential harm of algorithmic systems that drive targeted advertising. For example, Data & Society’s Algorithmic Impact Assessment report provides a framework to evaluate algorithmic impact assessment protocols. Algorithmic decision-making or delivery systems are not created equal and therefore require nuance to assess them appropriately. The recommended 10 components of assessment make it easier for policymakers, scholars, and journalists to evaluate the social, political, and economic harms of algorithmic systems. 

More recently, the Federal Trade Commission (FTC) collected comments on a Petition for Rulemaking by Accountable Tech. The proposed rule invokes the FTC’s unfair methods of competition (UMC) authority, noting that Facebook and Google together hold two-thirds of the digital advertising market. The FTC’s UMC authority is broadly defined and substantive, with five different interpretive thresholds:

  1. Conduct that directly violates the Sherman or Clayton Act
  2. Actions that constitute incipient violations of those acts
  3. Violations of antitrust laws
  4. Breaches of recognized competitive standards
  5. Activity that results in substantial harm to the competitive process, which competition policy exists to stop

While the first three thresholds are clearly defined, the last two are the basis for the current rule proposal. The rulemaking petition seeks to ban surveillance advertising as an unfair method of competition since the practices and harms of the technology are integrated and inescapable.

Digital advertising firms that manage online ads for corporations are understandably against the proposal, citing dramatic profit loss and ineffective ad campaigns. DuckDuckGo, Public Knowledge, Public Citizen, and Fight for the Future were among those supporting the rule, citing issues of privacy, mental health, and dis/misinformation campaigns.

Social media platforms will likely fight the FTC’s proposal as well. Meta is already forecasting a $10 billion loss in ad revenue due to iOS updates, CFO David Wehner reported at last week’s shareholders meeting. When Apple pushed out iOS 14.5 in April 2021, users gained the option to stop apps from tracking their behavior across other apps and websites, and many iPhone users chose to do so. The loss of personal and behavioral data massively changes how Facebook targets its advertisements and makes the platform less appealing to advertisers.

What does this mean for the consumer? 

For the privacy-minded consumer, these proposals are a welcome change to tech policy. For everyone else, the proposals are drawing attention to how private corporations collect and use personal data to manipulate their online experience. Advertising in their newsfeeds and search results might become less relevant for a time, but the feeling of being watched or listened to would no longer be a concern.

Do you have…

2 minutes?

Professor Casey Fiesler: What is Privacy to You? 

30 minutes?

IRL Podcast: The Surveillance Economy 

Next week: How automated filters, intended to reduce copyright infringement, can impact independent creators. 
