Facebook’s privacy game – how Zuckerberg backtracked on promises to protect personal data
Facebook promised its users privacy, then quietly abandoned those promises in pursuit of profits. Now it faces antitrust regulation
Facebook CEO Mark Zuckerberg expected laughs when he joked about the company’s poor record on protecting the privacy of its customers at its annual developer conference.
“Now look, I get that a lot of people aren’t sure that we are serious about this,” a grinning Zuckerberg told the audience at F8. “I know that we don’t have exactly the strongest reputation on privacy right now, to put it lightly.” He was met with an embarrassing silence from the audience.
The CEO used the F8 developer conference to announce the company’s biggest change in direction so far. For Facebook, the future is private. Facebook will offer end-to-end encryption across all its instant messaging services, and the platform will be redesigned to encourage people to share with smaller, more private communities of like-minded people.
“This is not just about a few new features, this is a change in how we build these products and how we run this company. It is not going to happen overnight. And, to be clear, we don’t have all the answers for how this is going to work yet,” the 35-year-old multibillionaire told the audience in the McEnery Convention Center in San Jose, California.
Facebook under pressure on privacy
For a business that is facing multiple investigations into privacy breaches, the hijacking of its platform to spread Russian propaganda, and calls for Facebook to be broken up under US antitrust rules, relaunching the company around privacy is a shrewd move.
Zuckerberg has faced calls to step down from Facebook’s board over lapses in risk oversight that have “greatly tarnished Facebook’s reputation”.
The US Federal Trade Commission (FTC) is under pressure from US politicians, not just to fine Facebook, but to hold Zuckerberg personally responsible for the policy failings that led to Cambridge Analytica harvesting the personal details of 87 million Facebook users worldwide, which were subsequently used to target political advertising at swayable voters in the US presidential election and the UK Brexit referendum. The US Securities and Exchange Commission, the Department of Justice, the FBI and European data protection agencies are also investigating.
The social network’s proposed launch of a Facebook cryptocurrency, Libra, has raised eyebrows among banking experts, including the Bank for International Settlements, which warned that, if left unregulated, use of the currency could create data privacy issues.
Facebook’s track record on privacy
History has shown that Facebook’s commitment to privacy until now has been, at best, half-hearted. Privacy is barely mentioned in a cache of over 7,000 pages of confidential documents obtained by Computer Weekly and NBC News (US) featuring emails from Zuckerberg and other senior executives. When privacy is mentioned, it is often an afterthought, or a later public relations (PR) spin to explain commercially driven changes that developers or Facebook’s competitors might find unpalatable.
The documents were disclosed by Facebook and placed under seal during court proceedings brought by app developer Six4Three against Facebook in the US. Six4Three, which was developing image recognition technology, had created an app called Pikinis which was able to search people’s Facebook friends to find pictures of them in swimsuits.
Some of the sealed documents had previously been obtained by the Digital, Culture, Media and Sport (DCMS) Committee of Parliament in November 2018 as part of its investigation into “fake news”. The DCMS Committee’s hard-hitting report, published on 18 February, described Facebook and its executives as “digital gangsters”.
Facebook declined to answer questions on the content of the leaked documents. But Paul Grewal, vice-president and deputy general counsel of Facebook, said in a statement: “As we’ve said many times, Six4Three – creators of the Pikinis app – cherry-picked these documents from years ago as part of a lawsuit to force Facebook to share information on friends of the app’s users. The set of documents, by design, tells only one side of the story and omits important context.”
In this exclusive investigation, Computer Weekly presents 22 occasions when Facebook has exploited users’ private data and disregarded privacy, drawing together leaked documents, academic research and newspaper reports.
The full report follows below, expanding on each of the examples summarised in this list.
1. Facebook breaks promise not to track users
Facebook’s first experiment in technology capable of surreptitiously tracking users’ web browsing habits ended in failure when researchers discovered how its Beacon software really worked.
2. There is nothing hidden that will not be revealed
Facebook caused a backlash from users, privacy groups and politicians when it changed its privacy policy to make information that users had wanted to keep private accessible to anyone.
3. What’s not to Like?
Undeterred by the fiasco over its Beacon plug-in, Facebook encouraged thousands of websites to install its Like button. Despite its denials, researchers discovered the button was capable of tracking users’ browsing habits, forcing Facebook to redesign the technology.
4. Sensitive data comes in 57 varieties
Austrian lawyer Max Schrems used European law to discover that Facebook held 57 categories of data about its members, including their religious beliefs and credit card details. Regulators later identified a further 42 categories of personal data.
5. Facebook quietly files tracking patent
After repeatedly denying that it was interested in tracking people, Facebook quietly filed a patent for technology to track people as they browse the web, watch TV or go out shopping.
6. Cocktails all round – whatever your age
Facebook discovered during tests that an app for drinks manufacturer Diageo was serving up alcoholic cocktail recipes to youngsters below the legal drinking age. Engineers discovered that Facebook had been sharing information about young people that should have remained private.
7. Facebook settles allegations of “unfair and deceptive” claims
Having reached a settlement with the US Federal Trade Commission over charges that it had deceived customers by failing to keep its privacy promises, Facebook made undertakings not to make misrepresentations over privacy in future.
8. Zuckerberg plans to grab data from app developers
Facebook executives developed a plan to use “leverage” to persuade developers to share private data about Facebook users’ activities on mobile phone applications and websites back to Facebook.
9. Democracy – no thanks!
Facebook had previously tried to assuage users’ objections to its privacy decisions by allowing them to vote in referendums on the development of its privacy policies. It later proposed a motion to abolish users’ voting rights altogether. Facebook ignored the vote result, preparing the way for it to expand its collection of private data.
10. Facebook lobbies to water down GDPR
Facebook executives Sheryl Sandberg and Marne Levine used the World Economic Forum at Davos to hold private discussions with world leaders as part of a huge lobbying operation over the General Data Protection Regulation.
11. Facebook develops VPN to spy on users
Facebook offered users Onavo Protect as a privacy-enhancing virtual private network. But highly confidential internal documents show that Facebook used Onavo to surreptitiously monitor the activities of more than 30 million users.
12. Internal documents reveal plans to harvest more user data
Confidential privacy logs reveal that Facebook’s product team had been gathering data about Facebook users’ phone calls and SMS messages without the knowledge of Facebook’s legal unit. Facebook planned a partnership with financial services company Argus, which had data on 90% of all US credit card transactions, and with Cisco to collect data from Facebook users’ mobile phones when they signed into Wi-Fi.
13. Onavo rises from the ashes
Facebook started an under-the-radar programme to pay users, including teenagers, up to $20 a month to give away their privacy by installing a “Facebook Research VPN” on their mobile phones. When the project became public, Apple revoked Facebook’s iPhone enterprise certificate, bringing the company’s internal operations almost to a standstill for two days.
14. How Facebook’s privacy policy became less private over time
Facebook’s privacy policy has grown from 900 words to more than 12,000 over the years, but as the policy has grown in size, it has become less transparent, harder for users to understand, and offers people fewer protections over the privacy of their data.
15. Facebook breaks its promises not to track users
By early 2014, Facebook had overtaken its rival social media companies. After seven years of promising it would not track people outside its own platform, Facebook announced it would start recording people’s browsing habits across the web to deliver targeted advertising.
16. How Facebook stopped people opting out of tracking
Facebook decided not to honour the “do not track” setting in web browsers so it could continue to track people’s web browsing activities. Instead, it directed people to an advertising industry-funded opt-out website that was deliberately designed to be difficult to use. When the public protested by installing ad-blockers, Facebook engineers worked out how to override them.
17. If you shop at Big Gay Ice Cream, Facebook will know
Facebook began a project to expand its surveillance into the real world by using Bluetooth beacons to track Facebook users’ mobile phones as they shop. The companies shortlisted for the Project Gravity trial included the Museum of Modern Art, Grand Central Oyster Bar and Big Gay Ice Cream.
18. Outrage as people discover Facebook has their phone records
Facebook users, horrified at the Cambridge Analytica scandal, began deactivating their accounts and downloading their data. Many were shocked to find that Facebook had kept a record of their phone call and text message histories.
19. Privacy ‘bugs’ go unfixed for years
Facebook’s own staff complained about privacy bugs that exposed their private data when they signed up to mobile phone apps, but Facebook engineers were slow to take action.
20. Advertisers use Facebook data to discriminate on race and gender
Studies reveal that companies can use Facebook to target advertisements to people in discriminatory ways. The US Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act by restricting who could view ads for housing based on demographic data, including race or gender.
21. Cambridge Analytica sparks antitrust and privacy investigations worldwide
Facebook faces multiple investigations from regulators around the world following the Cambridge Analytica data breach. In the US, the FTC is under pressure to take enforcement action against Facebook for breaching the terms of its 2011 privacy settlement, and to hold Zuckerberg personally responsible.
22. Facebook shares data with devices
New questions were raised about Facebook’s commitment to privacy when it emerged that the company had been sharing people’s private data with mobile phone and other device manufacturers.
Facebook’s surveillance network
If Zuckerberg is serious, turning Facebook into a privacy-focused social network will fundamentally change its business model.
Facebook makes tens of billions of dollars in revenues each year. Over 98% of its revenue comes from selling advertisements to businesses around the world, which can be targeted with great precision at Facebook users, based on huge troves of data it collects on its members.
Facebook’s databases could make government eavesdropping agencies such as GCHQ in the UK or the National Security Agency (NSA) in the US envious. Facebook holds highly sensitive information about people’s religious views, their health, racial or ethnic origin, philosophical beliefs, trade union membership and life events.
Facebook has gone far beyond harvesting data on its users when they are on its own site. The social network has, in effect, built a surveillance network across millions of sites across the internet, by persuading publishers, retailers and other businesses to install Facebook widgets and plug-ins on their websites.
These buttons allow people to share articles or like products on Facebook, but the social media network also uses them to track visitors to the sites and feed back their activities. Facebook is able to track people’s activities whether or not they are members of its social network and whether or not they are logged in to their accounts.
Facebook’s plug-ins include the Like button, which, as of April 2018, appeared on 8.4 million websites, and the Share button, on 275 million pages. Third-party websites also install Facebook pixels, which are used to provide advertising services, but can also feed back information on users’ behaviour to Facebook. There are some 2.2 million Facebook pixels installed across the internet.
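The mechanics behind these buttons and pixels are simple. When a browser renders a page that embeds such a widget, it fetches that element from Facebook’s servers, automatically attaching the address of the page being read (the Referer header) and any cookies previously set for Facebook’s domain. A minimal sketch of that server-side pattern follows; the endpoint, cookie name and log format are invented for illustration, not Facebook’s actual code.

```python
# A minimal sketch of third-party widget tracking, under the assumptions
# stated above. The cookie name ("vid") and log format are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie
from datetime import datetime, timezone

class WidgetHandler(BaseHTTPRequestHandler):
    """Serves an embedded widget (a button or 1x1 pixel) and logs the visit."""

    def do_GET(self):
        # The browser attaches two things to every widget request, with no
        # click required: the Referer header (the page being read) and any
        # cookies set for the widget's domain (a stable visitor ID).
        page_visited = self.headers.get("Referer", "unknown")
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        visitor_id = cookies["vid"].value if "vid" in cookies else "new-visitor"

        # This one log line is the whole tracking mechanism:
        # visitor X read page Y at time Z.
        print(datetime.now(timezone.utc).isoformat(), visitor_id, page_visited)

        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        if visitor_id == "new-visitor":
            # First-time visitors get a long-lived ID for future correlation.
            self.send_header("Set-Cookie", "vid=abc123; Max-Age=31536000")
        self.end_headers()
        self.wfile.write(b"GIF89a")  # placeholder 1x1 "pixel" payload

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), WidgetHandler).serve_forever()
```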
Facebook maintains that it does not create what it calls “shadow profiles” of people who are not Facebook members, and said in 2018 that it would not serve ads from its partners to non-Facebook members.
But according to Dina Srinivasan, a specialist in digital advertising technology who spent four years at WPP, one of the world’s largest advertising and PR agencies, it is not difficult to connect data gathered by trackers back to an individual. Indeed, she says, it is standard practice in the advertising industry. “Everyone who tracks consumers is in the game of identifying the consumer,” she says.
Advertisers are able to match cookie information with a computer or a mobile device’s location. They can de-anonymise people’s browsing history by matching their location data between the hours of 2am and 5am – when most people are typically home – against commercially available residential databases.
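Srinivasan’s point can be made concrete with a toy example. The sketch below shows the night-time matching she describes; the pings, rounding precision and residential records are all invented, and real ad-tech pipelines operate at vastly larger scale.

```python
# A toy version of de-anonymisation by night-time location matching.
# All data here is fabricated for illustration.
from collections import Counter

# Location pings seen for one tracking cookie: (hour_of_day, lat, lon)
pings = [
    (3, 40.7433, -73.9921),   # 3am - almost certainly at home
    (4, 40.7434, -73.9920),   # 4am - same place
    (14, 40.7580, -73.9855),  # 2pm - out and about, ignored
]

# A commercially available residential database: rounded location -> occupant
residential_db = {
    (40.743, -73.992): "J. Smith, 123 W 23rd St, New York",
}

def likely_home(device_pings, precision=3):
    """Most frequent rounded location seen between 2am and 5am."""
    night = [
        (round(lat, precision), round(lon, precision))
        for hour, lat, lon in device_pings
        if 2 <= hour <= 5
    ]
    return Counter(night).most_common(1)[0][0] if night else None

# The anonymous cookie resolves to a named household.
print(residential_db.get(likely_home(pings)))
# -> J. Smith, 123 W 23rd St, New York
```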
Back to the future
It was not always this way. When Zuckerberg started Facebook in his dorm room in 2004 with a handful of friends, privacy was the social network’s biggest selling point. Facebook’s first privacy policy was only 900 words long, and it made important promises to protect the privacy of its customers’ data and never to track them.
“We do not and will not use cookies to collect private information from any user,” it stated.
Facebook’s public commitment to privacy was driven by commercial considerations in the first few years of the 2000s, Srinivasan argues in a major study, The antitrust case against Facebook: A monopolist’s journey towards pervasive surveillance in spite of consumers’ preference for privacy.
When Facebook launched, there were many companies trying to build social networks, with names such as Friendster, Hi5, Orkut and Bebo. The most successful platform was MySpace, founded in 2003. Within three years it had overtaken Google to become the most visited website in the US. But it was plagued by negative headlines, which blamed its poor attention to privacy for assaults, suicides and murders.
Facebook was trailing behind, but it attracted customers from rival social networks by promising better privacy. It hired a chief privacy officer, Chris Kelly, to enforce its privacy policies. And it ensured that, unlike MySpace, its default settings meant that people’s profiles were visible only to their friends or university classmates.
“When Facebook entered the market, the consumer’s privacy was paramount. The company prioritised privacy, as did its users – many of whom chose the platform over others due to Facebook’s avowed commitment to preserving their privacy,” Srinivasan writes in her report.
But as Facebook attracted more customers, at the expense of competitors, its commitment to privacy waned. The company sparked outrage, and an apology from Zuckerberg, when it introduced its News Feed, making users’ previously private information publicly accessible without their consent.
Facebook breaks promise not to track users
Just three years after its launch, Facebook had more than 50 million users and was on course to overtake its main rival, MySpace. The now emboldened Facebook quietly reneged on its promise not to track users when they visited other websites.
More than 40 websites participated in the launch of Facebook’s website plug-in, Beacon, including online auction site eBay, the travel agency Travelocity and Sony Pictures. Facebook marketed it as an opportunity for Facebook users to share articles, movies and products they were interested in with their friends.
But as Sean Lane, a 28-year-old technical project manager in a printing company, discovered, Beacon often shared users’ browsing habits with their friends whether they intended it to or not.
When he bought a diamond eternity ring as a surprise for his wife, the purchase appeared on his Facebook timeline, together with details of the 51% discount he was offered. Within two hours, he received an instant message from his wife asking, “Who is this ring for?”. The incident ruined his Christmas.
Facebook denies secret tracking
Any suspicions that Facebook was surreptitiously using Beacon to track people without their consent were quickly denied by Facebook’s executives. In an interview with the New York Times, Chamath Palihapitiya, vice-president of product marketing and operations at Facebook, dismissed such suggestions as “misinformation that is being propagated unnecessarily”.
That was before Stefan Berteau, senior research engineer at the Threat Research Group of CA, a US software company, analysed Beacon and showed Palihapitiya’s comments were untrue. The code surreptitiously deposited tracking cookies and passed data about users’ activities on third-party websites back to Facebook, whether they had clicked Facebook’s pop-up to share information with their friends or not. More disconcertingly, Berteau discovered Beacon passed information about web browsing back to Facebook even when he had logged out of his account.
He presented his findings in a research note, Facebook’s misrepresentation of Beacon’s threat to privacy. Beacon allowed Facebook to profile consumer behaviour at “a nearly unprecedented level of detail”, he revealed. It could combine data from third-party websites with the data it already held on a user’s circle of friends, education, communication patterns and geographic locations.
Facebook issued denials for five days, until Zuckerberg finally backtracked and apologised. “We’ve made a lot of mistakes building this feature, but we’ve made even more with how we’ve handled them. We simply did a bad job with this release,” he said. In future, people would have to opt in to share information from third-party websites with their friends, and Facebook introduced an option to opt out of Beacon completely.
The settlement of a class action lawsuit brought by Sean Lane and 19 others finally brought an end to Beacon. Facebook later commemorated the disaster by naming a conference room in Beacon’s honour.
There is nothing hidden that will not be revealed
Facebook took its first major step back from privacy in late 2009. Almost overnight, it changed its privacy settings to make all user profiles and photographs publicly searchable, even though many Facebook members had chosen to keep them private.
The change prompted a backlash from privacy groups, which accused Facebook of behaving deceptively by failing to fully disclose the impact of its privacy policy changes on users. People were worried private information, such as trade union membership, political affiliations, or simply embarrassing photographs, would become public.
This was serious. Researchers at MIT found that they could use people’s now public friends lists to determine whether they were gay. In Iran, the government issued threats against Iranians living abroad who had used Facebook to criticise the government, and carried out reprisals against their relatives still living in Iran.
Barry Schnitt, Facebook’s director of corporate communications and public policy, offered a solution, saying “users are free to lie about their home town or take down their profile picture to protect their privacy”, apparently unaware that doing so would be a clear violation of Facebook’s terms of service.
By default, people’s personal information also became available to third-party app developers and websites that were affiliated to Facebook. That included Facebook users’ names, location, work and educational history, political views, relationship status, copies of photographs in their timeline, friends list, dating interests, and the books and movies they were interested in.
A 2007 study by the University of Virginia found that the majority of Facebook apps were already accessing more personal information than they needed in order to function. Did they really need more?
What’s not to Like?
Zuckerberg announced the Like button at the 2010 F8 developer conference as a new way for people to share websites, photos and blog posts. Facebook would serve one billion Like buttons on the web within the first 24 hours, he said.
The case for publishers in particular was compelling. Justin Osofsky, then director of media partnerships, claimed in a presentation that newspaper groups had seen huge increases in traffic since adding Facebook’s social plug-ins, including ABC News up by 290%, Gawker up by 200% and Sporting News up by 500%. “Likers” had more friends and brought a younger readership to newspapers.
Following the conference, four Democratic senators, led by Charles Schumer, wrote an open letter raising concerns about Facebook’s privacy policies. They were right to do so.
Facebook’s extending tentacles
By the end of the year, a Dutch researcher, Arnold Roosendaal, published a research note which revealed that Facebook’s Like button could silently track people’s internet activities whether or not they were members of Facebook. “Facebook’s tentacles reach far beyond their own platform and members,” he wrote.
“The most prominent concern is that web users are somehow misled,” Roosendaal wrote. “Due to the way the button is presented, web users do expect to have data transferred when they use the button. That data are transferred even when the button is not clicked upon is difficult to imagine for the ordinary web user.”
The Like button breached data protection laws in three significant ways: data collection took place without people’s knowledge or consent; Facebook had failed to make the purposes of the data collection clear; and data subjects had no right to review their data or to ask for it to be corrected or deleted.
A subsequent investigation for the Wall Street Journal by Brian Kennish, a former Google engineer, found that Facebook obtained browsing data from visitors to more than 330 of the 1,000 most popular websites, as ranked by Google.
Bret Taylor, Facebook’s chief technology officer, told the paper: “We don’t use them for tracking and they are not intended for tracking.” Taylor acknowledged that the main Facebook site also deposited cookies on the computer of anyone who visited Facebook’s home page, but said they were used to protect the site from cyber attacks and for other functions.
But the issue did not go away. In September 2011, blogger Nik Cubrilovic followed up on Roosendaal’s research. He found that when users logged out of Facebook, the social network was able to track them every time they visited a page with a Facebook plug-in.
“Even if you are logged out, Facebook still knows and can track every page you visit that has Facebook integrated. The only solution is to delete every Facebook cookie in your browser, or to use a separate browser for Facebook interactions,” he wrote.
According to Srinivasan, Facebook moved quickly to reassure the public they were not being spied on, mobilising engineers and public relations staff to respond. Gregg Stefancik, an engineer who worked on login systems at Facebook, wrote on Cubrilovic’s blog that “Facebook has no interest in tracking people” and “cookies are not used for tracking”.
Facebook responded to Cubrilovic’s findings by changing the way its cookies were stored, so that they no longer recorded people’s account information. But questions remained, Srinivasan argues, as to why Facebook had built its cookies to contain users’ ID numbers at all if the company was not interested in tracking.
Facebook vaults contain 57 varieties of sensitive data
In 2011, then 23-year-old law student Max Schrems used European law to acquire a copy of his Facebook data. He received a 1,200-page file containing 57 different categories of data, including highly sensitive personal information such as his religious beliefs and credit card details.
Facebook kept much of his information in perpetuity, including links he shared with friends, posts he made on his Facebook wall, and the locations where he checked into Facebook, regardless of whether or not he had deleted the data.
Regulators later identified a further 42 categories of data collected by Facebook, including details about relationships to friends, friends’ email addresses, removed tags and conversion tracking.
Facebook quietly files a tracking patent
Facebook quietly filed a patent to carry out the very tracking that it had promised its customers it would never do. The patent, dated September 2011, proposed gathering data about the activities of Facebook users on non-Facebook sites. It would hold profiles containing biographic and demographic details, work experience, educational history, hobbies or preferences, and credit card transactions, which could be used to generate advertising revenue.
Facebook envisioned encouraging people to download a mobile phone app that would use GPS to record their movements. Televisions could transmit messages indicating what programme people were watching at a particular time on a particular channel. The possibilities were “limitless”.
Cocktails all round – whatever your age
In March 2011, Facebook staff discovered that an application from the global drinks maker Diageo could share alcoholic cocktail recipes with Facebook users who were under 21 – the minimum age in the US for buying alcohol – and potentially as young as 13, confidential documents reveal.
The incident raised questions internally about whether Facebook was adequately protecting the data of young people when they used Facebook to sign up to third-party applications. The social network had created an “alcohol age-gating” function to ensure that Facebook would never serve alcohol-related information to underage users.
David Schatz, a solutions engineer, discovered there were some serious problems with it. The alcohol age-gating control allowed apps to download lists of people’s friends, even if they were under 21, and worse, the apps could share information on alcoholic drinks with Facebook users under the legal drinking age.
His colleague, Arthur Rudolph, a young software engineer, looked into the problem. He found a note in the archives that showed that, although age restrictions worked for items on display on the Facebook canvas, they did not apply to information that third-party apps obtained from Facebook’s application programming interfaces (APIs).
“This has been the implementation from day one of app restrictions, probably five or six years ago,” he wrote to colleagues during an online chat. “I assume someone from legal knows about this.”
Facebook settles allegations of “unfair and deceptive” claims
Facebook’s activities had prompted a slew of complaints about its “unfair and deceptive practices”. Facebook reached a settlement with the US Federal Trade Commission in November 2011 over charges that it had deceived consumers by failing to keep privacy promises.
The FTC’s complaint accused Facebook of making claims that were unfair, deceptive and in violation of federal law, and of making promises to protect its customers’ data that it then failed to keep.
The FTC alleged:
- The social networking site had promised users that it would not share their personal information with advertisers, but then did so.
- Facebook had claimed that third-party apps that plugged into Facebook, such as mobile phone apps, would only have access to the data they needed. In fact, they had access to nearly all of a user’s personal data.
- Facebook had told users that they could restrict their sharing of data to limited audiences, for example to “friends only”. However, the setting did not prevent their information being shared by applications their friends used.
- Facebook had changed the design of its website to make information public that Facebook users had previously marked private, without seeking approval first.
- Facebook had wrongly claimed that it was compliant with the EU-US Safe Harbour agreement for transferring data between the EU and the US.
Facebook expressly denied the allegations as part of the agreed settlement wording. Nevertheless, the FTC ordered Facebook not to “misrepresent in any manner” how it protects the privacy and security of its customers’ data in future. The settlement also required Facebook to make clear to users what private data it shares with third parties, set up an internal privacy programme, carry out privacy risk assessments, and submit to independent privacy audits every two years.
There was one dissenting voice against the agreement, from commissioner J Thomas Rosch, who made the prescient observation that the settlement did not require Facebook to address the “deceptive information sharing practices of apps” that used Facebook’s data. Had his warning been heeded, Facebook might have avoided the Cambridge Analytica scandal that rocked it in 2018.
Rosch also criticised the FTC itself for allowing Facebook to deny its allegations as part of the settlement, as there was no provision in the FTC’s rules for such a move. The FTC, he said, could only accept a consent agreement if there was “reason to believe” that an organisation was engaging in an unfair or deceptive practice.
Zuckerberg plans to grab data from app developers
In 2012, with Facebook’s initial public offering (IPO) under his belt, Zuckerberg began developing plans to leverage Facebook’s most valuable commodity – the data it collects on Facebook users and their friends – to grow the company and collect more data on people’s activities outside of Facebook.
Zuckerberg proposed a new business model which he called “full reciprocity”. His idea was that Facebook would agree to give app developers access to its social graph – data on its subscribers – if they agreed to share with Facebook all of the social actions users take on their platform.
The potential for Facebook to gather extra data on people’s activities on third-party apps and websites was enormous. According to SEC filings, more than nine million websites and apps had, by that time, built integrations with Facebook.
Mike Vernal, then vice-president for product and engineering at Facebook, explained Zuckerberg’s plan to senior managers in November 2012. In future, he said, Facebook would “require partners to give us an API, so we’re both exposing APIs to each other and can access each other’s data”.
“The vast majority of our time (80% plus) should be spent on getting data or money from developers,” he said. Facebook’s core goals became “how to maximise data acquisition or maximise revenue acquisition”.
Facebook managers put pressure on staff to move quickly. Justin Osofsky, then vice-president, complained to Vernal that Facebook was not making enough progress with “apps that read lots of data with little reciprocal value exchange”.
An internal note shows that Facebook planned to reserve the right to “crawl” its partners’ websites or apps if they refused to share their users’ data with Facebook.
Group product manager Rose Yao wrote: “If the partner has an API, we will use that mechanism. We also reserve the right to crawl the partner website for the user’s data.”
She went on: “Partners cannot blacklist or block Facebook from crawling [their] site or using the API. If they do, Facebook reserves the right to block the partner from using our APIs.”
However, Facebook needed a bargaining chip to make application developers comply. Executives reasoned that if Facebook’s social graph was still widely available through APIs, developers would have no incentive to share their data.
On the other hand, if Facebook strictly controlled the APIs, and controlled their access to user data, developers that relied heavily on them would be forced to reciprocate.
The company began restricting its application programming interfaces, removing automatic access for developers to Facebook data. At the same time, it began a whitelisting programme to give favoured application developers continued access to its APIs.
Conversations between Zuckerberg and Sam Lessin, vice-president for product management, suggest that Facebook executives used privacy as a gloss to make the company’s plans to turn off its APIs to developers and the public more palatable.
“The message to the ecosystem becomes that we are deprecating a few things for privacy reasons/to simplify our model for users, we are enforcing non-competitive terms we have always had and we are opening up a series of new whitelist APIs for the best companies that want to build the best social services and want to work with us deeply,” wrote Lessin.
Facebook announced on 30 April 2014 that it would restrict developers’ access to its APIs, but internal documents reveal that the social network had already whitelisted 5,200 developers by November 2013.
Democracy – no thanks!
According to Srinivasan, Facebook faced a roadblock if it wanted to use the code from the Like buttons and other plug-ins to bypass the privacy of its users. So in late 2012, it set about dismantling a referendum process it had instituted four years previously to allow users to vote on its privacy policies.
Back in 2009, Facebook had tried to assuage its users’ anger after unilaterally making their private data public by giving them voting rights over its future privacy policies. “No other company has made such a bold move towards transparency and democratisation,” said Simon Davies, director of Privacy International, at the time.
At the time, Facebook committed itself to allowing users to decide the contents of key contractual documents, including its privacy policy, the Facebook principles, and the statement of rights and responsibilities.
Later, Facebook proposed a series of changes that would pave the way for it to reduce users’ privacy. One of its proposals was to abolish future referendums altogether.
Some 80% of users voted against the plans, but it hardly mattered. Facebook had a get-out clause: for a vote to be binding, 30% of its users had to take part. With more than one billion users and just under 600,000 votes cast – a turnout of well under 0.1% – Facebook was able to discard the result of the election and push on with its plans to build its surveillance network.
Facebook’s global lobbying campaign against GDPR
Facebook’s internal voting procedures were one thing, government legislation another. By 2013, Sheryl Sandberg, Facebook’s chief operating officer (COO), and Zuckerberg were increasingly concerned about the impact of the proposed European data protection rules, which would later become the General Data Protection Regulation (GDPR).
Facebook mobilised its staff for a huge lobbying campaign at the World Economic Forum in Davos, holding private meetings with policy-makers, in what executives viewed as an uphill battle to ensure Europe’s data protection standard was not “overly prescriptive”.
The COO took full advantage of her influential book Lean In: Women, Work, and the Will to Lead to try to build rapport with European decision-makers, including Viviane Reding, EU commissioner for data protection, and Neelie Kroes, commissioner for communications technology.
Internal documents record a private meeting with the chancellor of the exchequer, George Osborne. Sandberg urged him to become “even more active and vocal in the European data protection debate and to really shape the proposals”.
Sandberg and Marne Levine, Facebook’s vice-president of global public policy who went on to become COO of Instagram, had private discussions with the Irish prime minister, Enda Kenny. Facebook had made huge investments in Dublin, creating thousands of jobs, and Ireland was just about to take over the presidency of the European Union.
“We used the meeting to press them to make the EU Data Protection Directive a priority for their presidency. The prime minister said they could exercise significant influence as president of the EU, even though technically Ireland is supposed to remain neutral in this role,” Facebook’s highly confidential memo records.
Facebook develops VPN to spy on users
Facebook was working with Onavo, an Israeli company that had developed a virtual private network (VPN), which Facebook eventually bought in October 2013.
Onavo had developed a number of applications for Android and Apple mobile phones, which helped mobile phone owners to use less mobile data and to protect their mobile phone privacy.
Its flagship app was Onavo Protect, a VPN that purported to protect people’s privacy as they browsed the web. According to advertising on Facebook, “Onavo Protect helps keep you and your data safe when you browse information on the web”.
But highly confidential internal documents show that far from protecting users’ privacy, Facebook was using Onavo’s technology to track more than 30 million users who had downloaded the app.
Facebook was able to use data gathered from Onavo’s applications for its own commercial advantage, accessing non-public, sensitive commercial information about the download and open rates of mobile phone apps that it regarded as competitive.
Onavo showed Facebook what country the user was in and the model of their device, allowing the social network to track the growth of apps across different geographies.
A confidential Industry Update presentation reveals that Facebook was combining Onavo user data with Facebook’s own data, contrary to its public representations, as early as March 2013.
Javier Olivan, Facebook’s vice-president of growth, gave a presentation that compared the open rates of Facebook, Twitter, Instagram, WhatsApp, Snapchat, Pinterest and other apps on mobile phones.
A similar presentation a month later looked at an expanded pool of apps, including Line, Tumblr, Foursquare, Skype and MessageMe.
All of this data was obtained via Onavo users, who were unaware that their information was being used in this way.
By December 2014, Facebook had produced a spreadsheet ranking 82,000 apps based on engagement and reach tracked from Onavo users.
Facebook also used Onavo in its startup programme, FBStart, which offered early-stage startups mentoring and support.
The social network actively tracked the startups to decide which companies should be accepted on to the programme and which were competitive threats.
Onavo’s days became numbered when Facebook insiders disclosed publicly to the Wall Street Journal in 2018 that Facebook had used data from Onavo to build a database to track rivals and the most promising startups.
In a now-deleted blog post, security expert Will Strafach wrote that Onavo Protect enabled Facebook to collect identifying data about the user, including the cellular network name, mobile network code, mobile country code and the Apple mobile software version. Strafach claimed it also sent back details of how much Wi-Fi data and cellular data each user had consumed.
Following complaints from Apple that Onavo had broken its terms of service, the social media company agreed to withdraw Onavo from Apple’s iPhone store. Facebook ultimately abandoned Onavo in May 2019.
Internal documents reveal plans to harvest more user data
Confidential logs compiled by Facebook’s head of privacy reveal that Facebook’s product team had been gathering data about Facebook users’ phone calls – including the duration and frequency, incoming and outgoing calls, and text messages – without the knowledge of Facebook’s legal and policy unit.
Facebook had been using the data to recommend “people you might know” to Facebook users. Managers had put the practice on hold pending legal advice, but the product team was keen to continue harvesting call and text data, a note by Matt Scutari, Facebook’s manager of privacy and public policy, revealed.
“We’ve been working to understand privacy risks associated with several Android permissions that will go out in the next release, including permissions associated with reading call logs and SMS,” Scutari wrote.
Facebook planned to “mitigate policy risks” by creating an FAQ to give the public a general explanation of Android permissions with a link to further information on the Google Play store.
But, following pressure to be more open, it settled on an FAQ that gave “non-exhaustive examples” of the five most sensitive permissions.
The social media company was also considering a partnership with a financial services company, Argus, which had access to 90% of credit card transactions and 30% of debit card transactions in the US.
According to the confidential note, Argus bought anonymised transaction data directly from banks, and then worked with a data marketing company, Epsilon, to re-identify the data.
Other plans underway included a project with Cisco to collect anonymised insights about Facebook users when their mobile devices connect to Wi-Fi services in shops.
Facebook was also developing a quiz called “How well do you know your friends?”, designed to encourage users’ friends to fill in the blanks in their profiles. Initial feedback, Scutari wrote, “is that this flow suggests we are trying to trick users into providing data on their friends, but legal and PR have signed off on this”.
Onavo rises from the ashes
The death of the Onavo app did not stop Facebook using VPNs to collect data on people’s web and mobile phone activity.
Facebook re-used parts of the code from Onavo to develop a Facebook Research VPN and was paying users, including teenagers, up to $20 a month to give up their privacy, TechCrunch revealed.
In a programme dubbed Project Atlas, Facebook had been paying users aged 13 upwards up to $20 a month to spy on their mobile phone use by downloading a research application to their phones. Facebook used third-party organisations to distance itself from the app.
One registration page, run by Applause, encouraged people to sign up to “an exciting opportunity”. Participation, it said, “requires only that participants install an application on their mobile phones”. In return, they would receive a payment each month if they kept the application installed.
In a reflection of the sensitivity of the project for Facebook, participants were urged to keep their involvement in the project “strictly confidential” and not to disclose information about the project to anyone else.
Security researcher Strafach told TechCrunch that the app gave Facebook the capability to access “astonishingly detailed” information, including users’ private social media messages, instant messaging chats, photographs and videos, web browsing and physical location. In some cases, Facebook could collect information even though it had been encrypted by an app.
Facebook came up with a ruse to distribute the app on iPhones without breaching Apple’s terms of service: it distributed the app under its “enterprise developer certificate”. Apple provides these certificates on the understanding that they will be used only for apps built for a company’s own staff, not the public, so the tactic was hardly within the spirit of Apple’s terms of use.
Apple’s response was swift when the matter became public. It revoked Facebook’s enterprise certificate. Because Facebook relied heavily on Apple devices for its day-to-day operations and planning, it was brought almost to a standstill for two days.
Politicians reacted angrily to Facebook’s behaviour. US senator Richard Blumenthal, for example, told TechCrunch that the research app was “another astonishing example of Facebook’s complete disregard for data privacy and eagerness to engage in anti-competitive behaviour”.
Facebook later disclosed to Blumenthal that the app had obtained personal data on 187,000 people using it, including 4,300 teenagers. Some 31,000 records came from people in the US, and the rest from India.
How Facebook’s privacy policy became less private over time
As Facebook gained popularity, it gradually backtracked on its promises to protect users’ privacy. Between 2005 and 2015, Facebook’s privacy policy grew from 900 words to more than 12,000 words. As the policy grew in size, it became less transparent, harder for users to understand and contained fewer options for users to control their personal data.
Jennifer Shore and Jill Steinman, researchers at Harvard College, ranked each new version of Facebook’s privacy policy against a standard framework, and found a decline in 22 out of 23 of the standards they measured.
Over time, the score the academics awarded Facebook for transparency of its use of tracking technologies in its privacy policy dropped from the maximum of four to zero. Facebook’s explanation of how and when it disclosed data to third parties followed the same pattern, as did other important measures of privacy disclosure. The longer Facebook’s privacy policy became, the less helpful it was.
Facebook breaks its promises not to track users
By early 2014, Srinivasan reports in her groundbreaking study, most of Facebook’s competitors had fallen by the wayside. MySpace, Friendster and many long-forgotten networks such as Mixi, Cyworld, hi5, BlackPlanet, Yahoo’s 360 and Bebo had left the market. By June 2014, even Facebook’s nemesis, Google, had closed its social network. There was no longer any competitive pressure to maintain privacy.
Facebook swooped, announcing that it would start tracking people’s activities across the web using its network of third-party web pages containing Facebook code. After spending seven years promising it would not track people online, it now planned to repurpose its Like buttons, pixels and other plug-in code to gather data on the articles people read, the products they bought and the websites they viewed.
“Facebook kept making statements – ‘no we are not tracking you using these devices’, ‘they are not intended for tracking’, ‘they are only for your safety and protection’, and on and on and on,” says Srinivasan. “Meanwhile, they quietly file a patent to do exactly what they say they are not doing, and they wait, wait, wait for Google to exit the market, and then they start tracking.”
The announcement was bland enough. Facebook was responding to its users’ demands to see ads that were more personally relevant to them: “Today, we learn about your interests primarily from the things you do on Facebook, such as pages you like. Starting soon in the US, we will also include information from some of the websites and apps you use.”
Over those seven years, Facebook had persuaded millions of websites to install its plug-ins while, Srinivasan says, repeatedly representing that it would not use its social widgets to track consumers. Some 30% of the top one million most visited sites, including major news websites such as the Wall Street Journal, the Washington Post and the Guardian in the UK, had signed up.
“They got publishers to start adopting these buttons because the explicit representation to publishers was ‘we are only receiving information about your users that click on the Like button and we are not going to use that information to compete against you’. At the time, I think it was true, but that is how they got all the publishers on board,” says Srinivasan.
From 2014, once it had turned on tracking, Facebook was in a position to know as much about the readers of each newspaper as the newspaper itself. Plus, it knew what those same readers were reading on other websites, or buying online, and could out-compete online media for ad sales.
But by then, says Srinivasan, it was too late for publishers to turn off their Facebook buttons – they had become reliant on them. “Facebook has monopoly power in the social network,” she says, “so it is incredibly difficult now for these publishers to say no, because how else are they going to distribute their content to people at large?”
Tracking all over the world
Facebook was able to use a combination of technology and the data it held on individuals, not only to track them across the web, mobile devices and mobile applications, but to associate those actions with unique individuals.
Facebook deployed its Atlas Ad Server to track users across multiple devices. Atlas used a combination of cookies and the Facebook ID to identify users browsing on their PCs. On mobile, it used a mixture of the Facebook ID, Facebook software developer kits installed in mobile apps, and unique identifiers from Apple and Android.
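Conceptually, this kind of cross-device resolution is a join keyed on one stable account identifier. The sketch below illustrates the idea; the field names and structure are assumptions made for illustration, not the Atlas system’s real design.

```python
# A hedged sketch of account-keyed identity resolution: one stable account
# ID knits separate device identifiers into a single profile.
from dataclasses import dataclass, field

@dataclass
class Profile:
    account_id: str                                # stable, real-name key
    cookie_ids: set = field(default_factory=set)   # desktop browser cookies
    device_ids: set = field(default_factory=set)   # mobile ad identifiers
    events: list = field(default_factory=list)     # browsing/app activity

profiles: dict[str, Profile] = {}

def record_event(account_id: str, source_id: str, source_type: str, event: str):
    """Attach an event from any device to the single account-level profile."""
    profile = profiles.setdefault(account_id, Profile(account_id))
    if source_type == "cookie":
        profile.cookie_ids.add(source_id)
    else:
        profile.device_ids.add(source_id)
    profile.events.append(event)

# The same person, seen on a laptop and on a phone, collapses into one record.
record_event("user-123", "cookie-7f3a", "cookie", "read: news article")
record_event("user-123", "idfa-99b1", "mobile", "opened: shopping app")
print(profiles["user-123"])
```

In a model like this, deleting cookies clears only the browser-side key; the account-level record, and hence the profile, survives, which is exactly the persistence Srinivasan describes below.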
A subsequent report for the Belgian Privacy Commission found that Facebook’s tracking was far more invasive than tracking carried out by advertisers and other tracking services on the web.
“Facebook is in a unique position, as it can easily link the browsing behaviour of its users to real-world identities, social network interactions, offline purchases and highly sensitive data such as medical information, religion and political preferences,” it said.
According to Srinivasan, if a user deleted Facebook’s cookies, the profile Facebook had compiled could live on under the user’s real name, and the next time the user visited the social media site, Facebook could replace the cookies.
“To Facebook, it was not user 123456789 that was reading Coming out to your wife, it was simply Jacob Greenberg,” she writes.
How Facebook stopped people opting out of tracking
Facebook did not give people the option to opt out of being tracked over the internet. It declined to honour the Do Not Track setting in web browsers, created to protect people from surveillance of their online activities.
Instead, it directed people to an advertising industry-funded website, created by the Digital Advertising Alliance in the US, the Digital Advertising Alliance of Canada, and the European Interactive Digital Advertising Alliance, that claimed to protect them from tracking by advertisers.
The DAA site was cumbersome and time-consuming to use and often reported technical failures. Even when consumers succeeded in opting out, Srinivasan reveals, they were only able to opt out of receiving targeted adverts, not from Facebook surveillance.
And a technical report for the Belgian Privacy Commission found that even when Facebook users opted out of receiving advertising on the North American and European websites, Facebook continued to track their browsing activities using social plug-ins.
Consumers responded by turning to ad-blocking software to protect their privacy. Shortly after Facebook turned on its tracking software, searches for “how to block ads” peaked, more than doubling in the space of a year.
Facebook already had the technology to overcome ad blockers on mobile phones, but ad blockers on desktop computers were a serious threat to its business model. Facebook noted in its filings to the Securities and Exchange Commission that ad-blocking software posed a serious financial threat: “These technologies have had an adverse effect on our financial results and, if such technologies continue to proliferate, in particular with respect to mobile platforms, our future financial results may be harmed.”
Facebook raced to develop countermeasures. By August 2016, it announced that it had found a way to circumvent ad blockers – by redesigning the Facebook website so that advertisements were indistinguishable from other content.
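The logic of the countermeasure is straightforward: ad blockers hide elements by matching stable markup patterns, such as fixed CSS class names containing “ad” or “sponsored”, so markup that is regenerated on every page load and looks identical to ordinary posts gives them nothing to match. A toy illustration follows, with an invented, deliberately simplified filter rule.

```python
# A simplified illustration of defeating pattern-based ad blocking.
# The filter rule and markup are invented for the example.
import random
import re
import string

# Filter lists work roughly like this: hide elements whose class
# list contains the standalone word "ad".
AD_BLOCK_RULE = re.compile(r'class="[^"]*\bad\b[^"]*"')

def blocked(html: str) -> bool:
    return bool(AD_BLOCK_RULE.search(html))

def random_class() -> str:
    # Eight random letters: a fresh, meaningless class name per page load.
    return "".join(random.choices(string.ascii_lowercase, k=8))

# Before: the ad carries a predictable marker the blocker can key on.
old_ad = '<div class="sponsored ad">Buy now!</div>'

# After: the ad is wrapped in generated class names indistinguishable
# from those on ordinary posts, so no static rule can single it out.
new_ad = f'<div class="{random_class()} {random_class()}">Buy now!</div>'

print(blocked(old_ad))  # True  - the filter rule fires
print(blocked(new_ad))  # False - looks like any other post
```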
By the third quarter of 2016, Facebook’s chief financial officer (CFO), David Wehner, was able to report an “18% year-over-year growth in revenues from desktop ads”, claiming that such acceleration was “largely due to our efforts on reducing the impact of ad blocking”.
If you shop at Big Gay Ice Cream, Facebook will know
Facebook began to pursue plans to “tactfully bring Facebook identity into a real world filled with connected devices everywhere”, known internally as Project Gravity.
Facebook’s aim was to extend its online surveillance to Facebook users’ physical locations, using Bluetooth beacons to track their shopping habits without having to upload GPS data.
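Beacons sidestep GPS because the beacon’s broadcast identifier is itself a location: if an app on the handset hears a known identifier at all, the user must be standing within a few metres of wherever that beacon was installed. The sketch below illustrates the pattern; the identifiers and the reporting format are invented, with only the venue names taken from the leaked documents.

```python
# A sketch of beacon-based presence reporting, under the assumptions above.
from datetime import datetime, timezone

# The operator knows in advance which beacon is installed where.
BEACON_REGISTRY = {
    "uuid-0001": "Big Gay Ice Cream, NYC",
    "uuid-0002": "Grand Central Oyster Bar, NYC",
    "uuid-0003": "Museum of Modern Art, NYC",
}

def report_sighting(user_id: str, beacon_uuid: str) -> dict:
    """What a phone app could send home after hearing a beacon broadcast.

    No coordinates are ever computed on the phone: hearing "uuid-0001"
    at all means the user is inside that shop right now.
    """
    return {
        "user": user_id,
        "venue": BEACON_REGISTRY.get(beacon_uuid, "unknown"),
        "seen_at": datetime.now(timezone.utc).isoformat(),
    }

# A routine Bluetooth scan on the handset picks up a nearby broadcast.
print(report_sighting("user-123", "uuid-0001"))
```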
“Partnerships will be critical, as we begin to explore” the idea, said Ime Archibong, Facebook’s vice-president for partnerships, in February 2014.
By November of that year, Facebook had identified 18 merchants in New York City to take part in a Project Gravity pilot, including the Museum of Modern Art, Brooklyn Bowl, Big Gay Ice Cream, Grand Central Oyster Bar and the Strand Bookstore.
“The goal is to whittle the list down to 10 partners who 1) believe in the project’s vision and 2) are educated about the risks (both real and perceived) associated with the technology,” Archibong told the product partnerships team.
“To date, we’ve purposefully omitted the use of terms ‘beacon’ and ‘hardware’ in our discussions, and have only referenced ‘leveraging an array of location technologies’ in order to pique our partners’ appetites without disclosing any material information before we are actually prepared to manage its dissemination.”
Shortly after the pilot launched in late January 2015, the team working on Gravity were planning a Bluetooth update for Android mobile phones that would allow the in-store hardware beacons to start tracking users.
Executives worried that enterprising journalists could discover surveillance plans
But internally, some Facebook staff began to worry that the Gravity tracking programme might spark a privacy backlash, particularly as Facebook intended to launch it together with a controversial update for Android phone users.
According to internal documents, Facebook’s growth team planned an update to Android phone users that would allow Facebook to continually upload their text and phone call history, unless they actively chose to opt out.
Michael LeBeau, who was hired by Facebook to manage the Gravity programme, put it bluntly: “This is a pretty high-risk thing to do from a PR perspective, but it appears that the growth team will charge ahead and do it.”
He warned that “enterprising journalists” could dig into what exactly the new update was requesting, then write stories such as “Facebook uses new Android update to pry into your private life in ever more terrifying ways – reading your call logs, tracking you in businesses with beacons, etc”.
Gravity has “potentially scary bits”, he wrote. “We’re still in a precarious position of scaling without freaking people out. If a negative meme were to develop around Facebook Bluetooth beacons, businesses could become reticent to accept them from us, and it could stall the project and its strategy entirely.”
LeBeau said “the safest course of action” would be “to avoid shipping our [Bluetooth] permission at the same time”.
Facebook’s deputy chief privacy officer, Yul Kwon, offered a solution. “The growth team is now exploring a path where we only request ‘read call log’ permission, and hold off on requesting any other permissions for now,” he said.
Initial tests had shown that this would allow Facebook to upgrade the Facebook app for Android users, without them having to go through a permissions dialogue to give Facebook approval to access their call log data.
“It would still be a breaking change, so users would have to click to upgrade, but no permissions dialogue screen. They’re trying to finish testing by tomorrow to see if the behaviour holds true across different versions of Android,” he said.
Outrage as people discover Facebook has their phone records
Facebook users remained oblivious to the practice until the aftermath of the Cambridge Analytica scandal, when a movement to delete Facebook gained momentum. Facebook urged users to take the less drastic step of deactivating their accounts and to download copies of their private data.
Many were shocked to find their call histories embedded in their Facebook data. One user, Dylan McKay, reported on Twitter that Facebook had kept a record of all the contacts he had ever had on his phone, including contacts he had deleted. Facebook had also recorded the time, date and duration of every call he had made, and his SMS messages.
Facebook came clean in a blog post. It had been uploading people’s call and text message histories since 2015 on its Messenger and later Facebook Lite apps, but only when people had agreed by opting in to having their data collected.
“Contact importers are fairly common among social apps and services as a way to more easily find the people you want to connect with,” it wrote, but failed to explain why Facebook wanted to collect people’s call and texting history, or how it used that data.
Staff report privacy bugs, but engineers don’t rush to fix them
Some of Facebook’s own staff had been raising questions since 2011, when they discovered that other people could see their private data when they used third-party apps.
Simon Cross, a partner engineer, sparked an internal debate after discovering that friends using the Guardian newspaper app could see data he had marked as private on Facebook.
Three years later, Connie Yang, a product designer at Facebook, complained that photographs she had marked as private were accessible on Facebook apps.
Facebook engineers were well aware of the problem. In fact, for some apps, including dating apps such as Tinder, the ability to access photographs was essential.
“In Connie’s case, the experience was poor,” wrote Eddie O’Neil, then a product manager for Facebook. “In Tinder’s case, the experience of letting people explicitly choose to widen the audience of ‘only me’ or ‘friends’ photos to everyone using the app is pretty good.”
In another incident, Facebook failed to act on reports of a bug that could result in people’s data being disclosed to mobile apps without them realising.
In October 2014, a Facebook employee created a task headed “Apps Others Use privacy permissions do not persist after turning Platform off/on”.
This meant that if Facebook users turned off an app’s access to Facebook data and later reinstated it, the app would be able to access their private information without their knowledge, regardless of their previous privacy settings.
Facebook engineers debated which team in Facebook would be responsible for dealing with the problem, and whether the apparent problem was a feature rather than a bug.
In January 2015, O’Neil closed the task without having fixed the problem, stating: “Friend permissions are deprecated and being removed this year – given that, I don’t expect we will make changes to how this works.”
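The behaviour described in the task is easy to model in miniature. The following sketch uses entirely hypothetical names and is not Facebook’s code; it simply shows per-app revocations being discarded, rather than suspended, when the platform is toggled off, which is the flaw the task reported:

```kotlin
// Minimal sketch of the reported bug, with invented names -- not
// Facebook's actual code. Per-app "Apps Others Use" choices are wiped
// when the platform is switched off, so an app silently regains access
// to previously revoked fields once the platform is switched back on.
class PlatformSettings {
    private var platformEnabled = true
    // appId -> data fields the user has revoked for that app
    private val revokedFields = mutableMapOf<String, MutableSet<String>>()

    fun revokeField(appId: String, field: String) {
        revokedFields.getOrPut(appId) { mutableSetOf() }.add(field)
    }

    fun disablePlatform() {
        platformEnabled = false
        revokedFields.clear() // BUG: revocations are discarded, not suspended
    }

    fun enablePlatform() {
        platformEnabled = true
    }

    fun canAppRead(appId: String, field: String): Boolean =
        platformEnabled && field !in (revokedFields[appId] ?: emptySet())
}

fun main() {
    val settings = PlatformSettings()
    settings.revokeField("some-app", "birthday")
    println(settings.canAppRead("some-app", "birthday")) // false
    settings.disablePlatform()
    settings.enablePlatform()
    println(settings.canAppRead("some-app", "birthday")) // true -- the bug
}
```

A fix along the lines the reporter presumably expected would suspend access while the platform is off but leave the per-app revocations intact.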
Advertisers use Facebook data to discriminate on race and gender
The data Facebook collects for advertising is not always used in an ethical way.
In the US, the Department of Housing and Urban Development (HUD) charged Facebook with housing discrimination, alleging that its targeted advertising platform violates the Fair Housing Act by restricting who can view ads based on sensitive demographic data such as race or gender.
A research paper (which is yet to be peer reviewed) has questioned whether the alleged discrimination is a result of advertisers’ targeting choices or the way the platform itself has been built.
The researchers said that Facebook’s ad delivery process can significantly alter the intended audience chosen by advertisers, adding that “we observed skewed delivery along racial and gender lines”.
This is not the first time Facebook has come under scrutiny for discriminatory advertising practices. In 2016, for example, ProPublica found that the company’s advertising portal explicitly allowed advertisers to exclude users assigned black, Hispanic and other “ethnic affinities” from seeing ads.
Cambridge Analytica leads to worldwide regulatory investigations
Facebook’s practice of providing app developers with the personal data of people who sign up to an app, along with data about their friends who also sign up, was to prove its undoing.
In March 2018, stories in the Observer and the New York Times revealed that Cambridge Analytica, a company owned by a hedge fund billionaire, and once headed by former Trump advisor Steve Bannon, had harvested 50 million profiles from Facebook in a privacy breach of unprecedented scale. Facebook later raised the figure to up to 87 million profiles.
Whistleblower and former Cambridge Analytica employee Christopher Wylie disclosed that the company had taken personal information from millions of Facebook users without authorisation in early 2014, to profile voters in the US and to target them with personalised political advertisements.
Cambridge Analytica had obtained the data through Aleksandr Kogan, a Cambridge University academic, and his company, Global Science Research (GSR). Kogan had built a personality test app called thisisyourdigitallife, as part of a programme to identify people who would be most susceptible to political advertising.
Global Science Research, together with Cambridge Analytica, paid hundreds of thousands of users to take the test and have their data collected for academic study. However, the app was also able to collect data on the Facebook friends of everyone who took the test, creating a pool of 50 million data subjects.
Facebook’s platform policy allowed organisations to legitimately harvest data from the friends lists of people who signed up to apps, but only to improve the user experience; selling the data for advertising purposes was strictly prohibited.
Facebook tried to prevent the story becoming public by threatening the Observer newspaper with legal action and putting the story’s author, Carole Cadwalladr, under pressure. Publication had dramatic repercussions for Facebook, which now faces lawsuits, regulatory actions and political enquiries around the world.
In the wake of the scandal, the FTC began an investigation into whether Facebook had violated its 2012 consent order, which required it to step up its privacy and security practices and to communicate honestly with its users. Facebook is bracing itself for fines of between $3bn and $5bn. The regulator is under pressure to name Zuckerberg personally in a new complaint against Facebook.
The CEO did what he has often done when Facebook comes under pressure: he publicly apologised. Under fire over fake news, Russian interference in elections, and the harvesting of Facebook data by app developers, Zuckerberg acknowledged: “It’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well. That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy.”
Zuckerberg made another startling admission. He told journalists that most of Facebook’s 2.3 billion users had had their personal details scraped. Facebook’s default setting made it possible for people to look users up by their mobile phone number or email address and obtain their public profile data.
“So, I certainly think it is reasonable to expect that if you had that setting turned on, that at some point during the last several years, someone has probably accessed your public information in this way,” admitted the CEO.
Chief technology officer Mike Schroepfer went further, admitting that the feature had been abused by malicious actors. “Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way. So we have now disabled this feature. We’re also making changes to account recovery to reduce the risk of scraping as well,” he said.
Facebook shares data with devices
New questions were raised over Facebook’s compliance with the FTC agreement in June 2018, when the New York Times revealed that Facebook had entered into agreements with at least 50 device makers, including Apple, Amazon, BlackBerry, Microsoft and Samsung, to give them access to Facebook’s user data.
After the Cambridge Analytica scandal, Facebook had cut off the APIs that allowed third-party apps to download data belonging to users’ friends. But the social network failed to disclose that it had continued to give mobile phone and other device makers access to the same data.
Facebook had briefly disclosed to German lawmakers that it shared information on its users with BlackBerry. The newspaper found that Facebook had deals in place with device manufacturers under which they could retrieve information about Facebook users’ relationship status, religion, political leaning and upcoming events, and in some cases access sensitive data on their friends. US senator Dianne Feinstein wrote to Zuckerberg demanding answers.
Damage limitation: Zuckerberg’s privacy pivot
Zuckerberg used the F8 developer conference in April 2019 to announce a repositioning of Facebook towards privacy. The centrepiece was a plan to extend end-to-end encryption across WhatsApp, Instagram and Facebook Messenger, allowing people to send private messages between the services.
Facebook compared itself to a digital town square where there was an expectation that everything Facebook users did would be made public. In future, Zuckerberg promised, Facebook would also offer private spaces – digital living rooms – where people can have private conversations with small groups of people or share content that “does not stick around forever”.
“This is the next chapter of our service. In addition to the digital town square, we also need a digital equivalent of the living room that is just as built out as a platform with all of the different ways that we want to interact privately, with messaging and small groups and sharing, where the content does not stick around forever,” he said.
Zuckerberg said Facebook would also develop new sources of income, for example by allowing people to buy products on its Instagram photo sharing service.
Facebook is hooked on collecting data
But it is impossible to escape the fact that almost all of Facebook’s quarterly revenues of $15bn come from targeted advertising, and that Facebook will need to continue gathering data about its users’ activities and using that data to make predictions about their behaviour.
For Srinivasan, that makes Facebook’s pivot to privacy more window dressing than reality.
“That is the entire thing; they make 98% of their revenues from advertising. You cannot have a privacy pivot without breaching your fiduciary responsibility to your shareholders,” she says.
Facebook does not need to analyse the contents of messages and posts to deliver targeted advertising. What matters is the metadata: who people are communicating with, and which websites they are visiting.
“Your metadata is not private. And Facebook uses this information fully in its ad business, but it is able to maintain this public-facing pivot to privacy,” she says.
By encrypting message contents, Facebook has very little to lose. “So they gave up something that is worth nothing to them,” says Srinivasan.
She gives a macabre example: if a person standing on San Francisco’s famous Golden Gate Bridge calls their doctor for two minutes, then their psychiatrist for three minutes, and then the suicide hotline for 14 minutes – the contents of the phone calls are irrelevant.
“Facebook does not know what you said in those conversations. But it knows you are standing at the foot of the Golden Gate Bridge at 11.39pm on a Friday, calling a suicide hotline,” she says.
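Her point is that a single metadata record is sensitive even when the content is unreadable. The sketch below makes this concrete with an invented record type; the field names are hypothetical and do not correspond to any real Facebook schema:

```kotlin
import java.time.Duration
import java.time.Instant

// Illustrative only -- an invented record, not a real Facebook schema.
// End-to-end encryption hides what was said, but a record like this
// (who contacted whom, when, for how long, and from where) remains
// visible to the platform, and it is what ad targeting relies on.
data class CallMetadata(
    val caller: String,
    val callee: String,
    val startedAt: Instant,
    val duration: Duration,
    val latitude: Double,
    val longitude: Double
)

fun main() {
    // Srinivasan's example: the platform never hears the call, yet the
    // metadata alone reveals something deeply sensitive.
    val record = CallMetadata(
        caller = "user-123",
        callee = "suicide-hotline",
        startedAt = Instant.parse("2019-04-26T23:39:00Z"),
        duration = Duration.ofMinutes(14),
        latitude = 37.8199,   // Golden Gate Bridge
        longitude = -122.4783
    )
    println(record)
}
```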
Even if Facebook manages to substantially increase its revenues by taking a cut of sales from sellers who advertise on Instagram, it will still need to gather data on Facebook users’ activities and interests to target them with products they are interested in.
Srinivasan likens Facebook’s pivot to privacy to “the biggest oil driller in the history of humankind saying they are going to drill less oil – you know you are going to have to be very suspect”.
Breaking up is hard to do
US regulators are entering new territory when it comes to regulating technology. There is little precedent for applying antitrust laws to companies that offer their services for free, like Facebook.
Democratic presidential candidate Elizabeth Warren is backing the break-up of US big-tech companies using antitrust laws and, in Facebook’s case, unwinding its acquisitions of WhatsApp and Instagram. “Facebook would face real pressure from Instagram and WhatsApp to improve the user experience and protect our privacy,” she wrote.
Chris Hughes, who co-founded Facebook with Mark Zuckerberg at Harvard University, argued in a New York Times op-ed along similar lines.
Increasingly, regulators are beginning to think about privacy as an antitrust issue. The US Department of Justice’s assistant attorney general for antitrust, Makan Delrahim, argued in June that privacy is an important element of competition law, saying: “By protecting competition, we can have an impact on privacy and data protection.”
US state attorneys general have meanwhile written a joint letter to the FTC urging the regulator to consider issues beyond consumer prices, including the impact on privacy, quality and innovation, in antitrust cases.
The FTC is expected not just to look at whether Facebook has violated the terms of its consent agreement, but also at whether its acquisitions of the likes of WhatsApp and Instagram are in breach of competition law.
Under US competition law, companies have a duty to behave in a way that does not mislead, and according to Srinivasan, Facebook has failed that test. There are many ways to mislead: it is not limited to telling outright lies, but extends to a company’s conduct.
“In competition law you can’t build a monopoly by engaging in illicit behaviour,” she says.
A break-up by itself, though, is unlikely to be enough, argues Srinivasan. More important, she says, is that Facebook should be made interoperable with other social networks.
“The crazy thing with Facebook is that it had this interoperability fully, completely functional as part of its API. But it shut it down and revoked it as it built up its market power. And now it’s benefiting from this closed communications network, that now nobody can compete with,” she says.
Facebook is said to be cooperating with the FTC’s current investigation, providing the regulator with tens of thousands of documents, emails and files.
Antitrust enforcement takes a long time. Under US law, cases can be brought by the FTC, state attorneys general, the Department of Justice and private individuals.
“It is difficult for me to imagine a world 10 years from now, where we did not see Facebook and Google antitrust litigation,” says Srinivasan.
Editorial advisor Duncan Campbell. Interactive graphic designed by Crina Boros.