Facebook, the social media giant now known as Meta, has built an empire on the collection and monetization of user data. With nearly 3 billion monthly active users across its family of apps, Facebook has access to a vast trove of personal information that it uses to power its targeted advertising business.
However, the company's data practices have also landed it in hot water with regulators, privacy advocates, and its own users. Two recent class action lawsuits, Davis v. Facebook and Lundy v. Meta, have put the spotlight on Facebook's alleged tracking of users without proper consent.
In this article, we'll take a deep dive into the details of these lawsuits and settlements, explore Facebook's history of privacy controversies, and analyze what it all means for user privacy in the age of big tech.
Davis v. Facebook: Tracking Users Across the Web
The Davis v. Facebook lawsuit, filed in 2011, accused the company of using cookies to track users' web browsing activity even after they had logged out of the social network.
According to the complaint, whenever a user visited a website that featured a Facebook "Like" button or other social plugin, Facebook would receive cookies revealing that user's identity and the sites they visited. This tracking allegedly occurred regardless of whether the user actually clicked the Like button or was even logged into Facebook at the time.
The plaintiffs argued that this practice violated the federal Wiretap Act, the Stored Communications Act, and the California Invasion of Privacy Act, as well as Facebook's own privacy policy, which stated that the company would not track users' web browsing after they logged out.
Facebook denied the allegations but, after years of litigation, ultimately agreed to settle the case in 2022 for $90 million, one of the largest data privacy settlements in U.S. history. As part of the settlement, Facebook also agreed to delete the improperly collected data.
How Facebook's Cookie Tracking Worked
To understand how Facebook was able to track users across the web, it helps to know how cookies work. Cookies are small text files that websites can store on a user's computer to remember preferences, login status, and other information.
When a user logs into Facebook, the site sets several cookies on their browser, including a "c_user" cookie that contains their unique user ID. Even after the user logs out, the c_user cookie remains on their browser, allowing Facebook to continue identifying them as they browse other websites.
According to the lawsuit, Facebook took advantage of this persistent cookie. Whenever a user visited a third-party site that included a Facebook Like button or other plugin, the user's browser would fetch the plugin content from Facebook's servers, and that request would automatically carry the c_user cookie along with the address of the page being visited.
This allowed Facebook to match the user's identity with their browsing activity, building a detailed profile of their interests and behaviors that could be used for ad targeting. The plaintiffs alleged that this tracking occurred on a vast scale, affecting millions of users across millions of websites.
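The mechanism described above can be illustrated with a small simulation. This is not Facebook's actual code; the `Browser` class and the site URL are hypothetical, and only the `c_user` cookie name comes from the lawsuit. The key point it demonstrates is that a browser's cookie jar is shared across sites, so a third-party plugin request can carry an identifying cookie even after logout:

```python
# Illustrative simulation (not Facebook's real code): a browser whose
# cookie jar is attached to every request made to facebook.com,
# including requests triggered by Like buttons embedded on other sites.

class Browser:
    """Minimal browser model with a single cookie jar for facebook.com."""

    def __init__(self):
        self.cookies = {}

    def login_to_facebook(self, user_id):
        # Logging in sets the c_user cookie containing the user's ID.
        self.cookies["c_user"] = user_id

    def logout_of_facebook(self):
        # Per the complaint, logout did not clear the identifying cookie;
        # we model that behavior by deliberately leaving c_user in place.
        pass

    def visit_page_with_like_button(self, page_url, tracker_log):
        # Rendering the embedded Like button makes the browser request it
        # from facebook.com, automatically attaching facebook.com's cookies
        # along with the address of the page being visited.
        tracker_log.append({"page": page_url,
                            "c_user": self.cookies.get("c_user")})


facebook_log = []          # what Facebook's servers would see
browser = Browser()
browser.login_to_facebook(user_id="12345")
browser.logout_of_facebook()
browser.visit_page_with_like_button("https://news.example/story", facebook_log)

# Even after logout, the request reveals both the page and the user ID.
print(facebook_log)
```

The fix users expected (and the privacy policy promised) corresponds to `logout_of_facebook` actually deleting `c_user`, which would make the logged request anonymous.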
The Settlement and Its Implications
Under the terms of the $90 million settlement, Facebook agreed to establish a fund to pay out claims to affected users. Class members who submitted valid claims by the September 2022 deadline are eligible for a pro rata share of the fund, with individual payouts depending on the number of valid claims.
While the individual payouts are small, the settlement is significant in that it holds Facebook accountable for tracking users without consent. It also requires the company to make changes to its data practices and privacy disclosures to prevent similar tracking in the future.
However, some privacy advocates have criticized the settlement as a "slap on the wrist" that does little to fundamentally change Facebook's business model or incentives. With over a hundred billion dollars in annual ad revenue at stake, a $90 million settlement is a relatively small price for the company to pay.
Lundy v. Meta: Secretly Collecting Location Data
Fast forward to 2018, and Facebook was hit with another class action lawsuit, this time alleging that it was secretly collecting location data from Android users without permission.
The complaint, filed by plaintiff Brendan Lundy, claimed that Facebook exploited a loophole in Android's permission settings to infer users' locations even when they had explicitly turned off location services for the app.
According to the lawsuit, Facebook used a variety of signals to estimate users' locations, including their IP addresses, nearby WiFi networks, and cellular towers. This allowed the company to build detailed profiles of users' movements and behaviors, which could be used for ad targeting and other purposes.
Lundy argued that this practice violated California's Unfair Competition Law and Consumers Legal Remedies Act, as well as the state's constitutional right to privacy. In August 2022, Meta (Facebook's parent company) agreed to settle the case for $37.5 million.
How Facebook Inferred User Locations
To appreciate the significance of the Lundy lawsuit, it's important to understand how location tracking works on smartphones. Android devices use a combination of GPS, WiFi, and cellular signals to determine a user's location, which apps can request access to through the device's permission settings.
However, even if a user denies an app access to their precise location, Android still allows the app to collect coarse location data based on network signals like IP addresses. This data is less precise than GPS but can still place a user within a city or neighborhood.
According to the lawsuit, Facebook took advantage of this loophole to infer users' locations even when they had turned off location services for the app. By collecting IP addresses, WiFi networks, and cellular tower data, Facebook could triangulate a user's position and movements over time.
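A rough sketch of this kind of inference is shown below. This is not Meta's actual pipeline; the lookup tables, MAC addresses, and coordinates are made-up examples (a real system would query a commercial IP-geolocation database and a crowdsourced WiFi access point map). It illustrates the fallback logic: prefer nearby WiFi access points with known coordinates, and fall back to the coarser city-level location implied by the IP address:

```python
# Illustrative sketch of coarse location inference from network signals,
# assuming GPS permission is denied. All data below is hypothetical.

# Hypothetical IP-prefix -> city centroid table.
IP_PREFIX_TO_CITY = {
    "203.0.113.": ("San Francisco", 37.7749, -122.4194),
    "198.51.100.": ("New York", 40.7128, -74.0060),
}

# Hypothetical map of known WiFi access points (by MAC) to coordinates.
WIFI_AP_LOCATIONS = {
    "aa:bb:cc:01": (37.7750, -122.4180),
    "aa:bb:cc:02": (37.7740, -122.4200),
}

def infer_location(ip_address, visible_aps):
    """Estimate a device's location without GPS.

    WiFi first (more precise): average the coordinates of visible access
    points we recognize. Otherwise fall back to the IP prefix's city
    centroid. Returns (method, lat, lon) or None if nothing matches.
    """
    known = [WIFI_AP_LOCATIONS[ap] for ap in visible_aps
             if ap in WIFI_AP_LOCATIONS]
    if known:
        lat = sum(p[0] for p in known) / len(known)
        lon = sum(p[1] for p in known) / len(known)
        return ("wifi", lat, lon)
    for prefix, (_city, lat, lon) in IP_PREFIX_TO_CITY.items():
        if ip_address.startswith(prefix):
            return ("ip", lat, lon)
    return None

# With visible access points, the estimate is block-level; with only an
# IP address, it degrades to a city centroid.
print(infer_location("203.0.113.7", ["aa:bb:cc:01", "aa:bb:cc:02"]))
print(infer_location("203.0.113.7", []))
```

Even the IP-only fallback is enough to place a user in a city or neighborhood over time, which is why turning off location services alone did not stop the profiling alleged in the lawsuit.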
The plaintiffs alleged that Facebook used this location data for a variety of purposes, including serving targeted ads, recommending nearby events and businesses, and even tracking which stores users visited in the real world.
The Settlement and Its Implications
Under the proposed settlement, which still needs final approval from a judge, Meta has agreed to establish a $37.5 million settlement fund to pay out claims to affected users. Class members who file valid claims could receive payouts of $200 to $400 each.
In addition to the monetary relief, the settlement also requires Meta to make changes to its data practices and disclosures. The company must be more transparent about its location tracking in its privacy policy and provide users with clearer options to control how their location data is used.
However, some critics argue that the settlement does not go far enough in curbing Facebook's location tracking practices. The company will still be able to collect and use location data from users who have granted permission, and it's unclear how effectively the new disclosure requirements will communicate the risks to users.
Facebook's Privacy Problems: A Pattern of Broken Promises
The Davis and Lundy lawsuits are just the latest in a long line of privacy scandals that have plagued Facebook over the years. From the Cambridge Analytica scandal to the use of facial recognition technology, the company has repeatedly come under fire for its handling of user data.
At the heart of these controversies is Facebook's business model, which relies on the collection and monetization of vast amounts of personal information. The more data Facebook has on its users, the more effectively it can target them with ads and keep them engaged on the platform.
This has created a powerful incentive for Facebook to push the boundaries of what is acceptable in terms of data collection and use. The company has been accused of using dark patterns to nudge users into sharing more information, making privacy settings difficult to find and understand, and even collecting data on non-users through tools like the Facebook Pixel.
A Timeline of Facebook's Privacy Controversies
Here are just a few of the major privacy issues Facebook has faced over the years:
- 2007: Facebook launches Beacon, a controversial ad program that shared users' off-site purchases and activities with their friends without clear consent. The program was shut down in 2009 after a class action lawsuit.
- 2010: The Wall Street Journal reports that Facebook apps like FarmVille were sharing users' personal data with advertisers without permission. Facebook tightened its rules on app permissions in response.
- 2011: The Federal Trade Commission charges Facebook with deceiving users about the privacy of their information. Facebook agrees to a consent decree requiring it to obtain clear consent before sharing user data and to undergo regular privacy audits.
- 2014: Facebook's "emotional contagion" experiment, which manipulated the news feeds of nearly 700,000 users without their knowledge or consent to study how emotions spread on social media, is published and draws widespread criticism.
- 2015: Facebook expands its facial recognition technology to help users tag photos of themselves and their friends. The feature raises concerns about the company's growing biometric data collection.
- 2018: The Cambridge Analytica scandal reveals that a political consulting firm improperly accessed the data of up to 87 million Facebook users for voter profiling and targeting. Facebook CEO Mark Zuckerberg testifies before Congress, and the company tightens its rules on third-party data access.
- 2019: The FTC fines Facebook a record $5 billion for violating the 2011 consent decree in the wake of the Cambridge Analytica case. The company also agrees to new privacy oversight measures.
- 2021: Leaked internal documents dubbed "The Facebook Files" reveal that the company has long known about the negative impacts of its products on users' mental health and civic discourse but failed to address them.
These incidents paint a picture of a company that has repeatedly prioritized growth and engagement over user privacy and well-being. While Facebook has often promised to do better and made changes to its policies and practices in response to scandals, critics argue that the underlying business model and incentives remain unchanged.
The Economic and Ethical Implications of Targeted Advertising
At the core of Facebook's privacy issues is the company's reliance on targeted advertising as its primary source of revenue. By collecting vast amounts of data on users' demographics, interests, behaviors, and locations, Facebook is able to offer advertisers the ability to reach highly specific audiences with tailored messages.
This business model has proven incredibly lucrative: advertising accounted for roughly $115 billion of Meta's $117.9 billion in revenue in 2021. However, it also raises serious questions about the economic and ethical implications of pervasive data collection and profiling.
On the economic front, some argue that targeted advertising can lead to more relevant and useful ads for consumers, as well as more efficient allocation of ad spending for businesses. However, others worry that it gives large tech platforms like Facebook an unfair advantage over smaller competitors and can lead to higher prices for goods and services.
From an ethical perspective, targeted advertising raises concerns about privacy, manipulation, and discrimination. By building detailed profiles of users' personal characteristics and behaviors, Facebook and other ad platforms can enable advertisers to exploit vulnerabilities, reinforce stereotypes, and exclude certain groups from opportunities.
There are also questions about the consent and transparency of the data collection that powers targeted ads. Many users may not fully understand the extent to which their data is being harvested and used, or may feel powerless to opt out of tracking without sacrificing access to important services.
Alternative Models and Solutions
As awareness of these issues grows, there is increasing pressure on Facebook and other tech giants to reform their data practices and explore alternative business models that prioritize user privacy and well-being.
One potential solution is to give users more control over their data and make privacy the default setting rather than an opt-in. This could include tools to easily see and delete the data that Facebook has collected, as well as clear prompts to opt out of tracking and profiling.
Another approach is to shift away from the targeted advertising model altogether and explore alternative revenue streams such as subscriptions, micropayments, or affiliate marketing. This could align incentives more closely with user interests and reduce the pressure to collect ever-more personal data.
There are also emerging technologies and platforms that aim to give users more privacy and control by design. Decentralized social networks like Mastodon and Diaspora use distributed architectures and open-source code to avoid concentrating data and power in the hands of a single company. Privacy-focused browsers like Brave and search engines like DuckDuckGo offer tracking-free alternatives to Google's suite of ad-supported services.
Ultimately, the path forward will likely require a combination of technological innovation, business model experimentation, and policy reform. Governments around the world are already starting to take action, with laws like the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) imposing new requirements on companies that collect and use personal data.
However, regulation alone may not be enough to fundamentally change the incentives and power dynamics of the digital advertising ecosystem. It will also require a cultural shift in how we think about privacy, data ownership, and the role of technology in our lives.
Conclusion
The Davis and Lundy lawsuits against Facebook are a reminder that even the most powerful tech companies are not above the law when it comes to user privacy. The settlements, while small in the grand scheme of Facebook's business, send an important message that there are consequences for collecting and using personal data without proper consent and transparency.
However, the lawsuits also highlight the deeper structural issues with Facebook's business model and the online advertising industry more broadly. As long as the incentives are aligned around monetizing user attention and data at all costs, we can expect to see more privacy scandals and breaches of trust in the future.
Solving these issues will require a concerted effort from regulators, advocates, technologists, and everyday users to demand more accountability, transparency, and control over our personal information online. It will also require a willingness to imagine and build alternative models for connecting and sharing online that put privacy and human rights at the center.
As the debate over online privacy continues to evolve, one thing is clear: the stakes could not be higher. Our personal data is not just a commodity to be bought and sold, but a fundamental part of our identity, autonomy, and social fabric. It's up to all of us to ensure that it is treated with the care and respect it deserves.