
Facebook Admits It Might Be Poisoning Democracy

As user trust tumbles, tech giants are forced into a risky gamble.
Photograph of Mark Zuckerberg by Justin Sullivan/Getty Images.

Last week, Apple C.E.O. Tim Cook, whose company is inching towards a trillion-dollar market valuation, told a crowd at Harlow College in Essex, England, that he’s leery of social media’s effects on younger generations. “I don’t have a kid, but I have a nephew that I put some boundaries on,” he said, adding, “There are some things that I won’t allow; I don’t want them on a social network.” He went on to say that he does not “believe in overuse [of technology] . . . I’m not a person that says we’ve achieved success if you’re using it all the time.” With the exception of some early employees, tech leaders have engaged in relatively little public hand-wringing over the possibility that Facebook and other social-media platforms could have a net negative impact on society, a question that is so far unresolved. But users’ eroding trust has spurred tech giants to grapple with the issue, gambling that the appearance of transparency will counteract any damage done to their bottom lines.

Facebook has, in large part, spearheaded the trend. The company’s overtures continued on Monday when it published an essay series entitled “Hard Questions,” which examines the social-media giant’s larger impact. Ultimately, Facebook Civic Engagement Product Manager Samidh Chakrabarti writes in one essay, there’s no guarantee that Facebook is a net good for democracy, or that the “positives are destined to outweigh the negatives.” Still, he says, Facebook has a “moral duty” to understand how its technology affects democracy, and is “working diligently to neutralize [the] risks” of malicious actors weaponizing its platform. In another essay, Harvard Law professor Cass Sunstein writes that social-media platforms “are terrific for democracy in many ways, but pretty bad in others. And they remain a work in progress, not only because of new entrants, but also because the not-so-new ones (including Facebook) continue to evolve.”

The launch of the series follows Friday’s news that Facebook will begin to survey its users about the news sources they trust, in what amounts to an effort to rank publications by trustworthiness. “As part of our ongoing quality surveys, we will now ask people whether they’re familiar with a news source and, if so, whether they trust that source,” C.E.O. Mark Zuckerberg wrote in a Facebook post. “The idea is that some news organizations are only trusted by their readers or watchers, and others are broadly trusted across society, even by those who don’t follow them directly.” In a clunky effort to improve transparency, Facebook is putting its users in charge: the same users who struggle to distinguish misinformation from truth when fake news stories populate the News Feed. (The plan was immediately panned by the tech press: “What could possibly go wrong?” asked BuzzFeed News tech editor Mat Honan, while Recode’s Tony Romm pointed out that “designing the survey is going to require Facebook to make the sort of editorial choices that it’s trying to avoid.”)

For Facebook, the fear that users have soured on the platform is well founded: in 21 of 28 countries recently surveyed by the communications marketing firm Edelman, trust in search and social-media platforms has declined. Sixty-four percent of respondents don’t believe that online companies are sufficiently regulated, and 63 percent think they lack transparency. Sixty-two percent believe that social-media platforms sell users’ data without their knowledge, and more than half of respondents (57 percent) think that platforms like Facebook and Twitter take advantage of their users’ loneliness.

The changes Facebook and other social-media giants have put in place all point to the same internal cost-benefit analysis: overhauling News Feed to prioritize posts from friends and family over posts from publishers; allowing users to pick and choose whom they deem trustworthy; releasing new data about Russian meddling efforts (last week, Twitter revealed that nearly 700,000 users were exposed to posts from more than 50,000 automated accounts with ties to the Kremlin). Each reflects a wager that earning back users’ trust is worth casting doubt on their own products. But with few exceptions, the “techlash” tide is strong, and noises of contrition from those who stand to lose the most may not be enough to reverse it.