Facebook Created a Frankenstein's Monster, and It May Not Be Able to Control It


 

Victor Frankenstein, looking over a creature he had made, eventually realized that he couldn’t control his creation. Credit: Hammer Film, via Photofest
[This article was originally published in The New York Times.]
On Wednesday, in response to a ProPublica report that Facebook enabled advertisers to target users with offensive terms like “Jew hater,” Sheryl Sandberg, the company’s chief operating officer, apologized and vowed that the company would adjust its ad-buying tools to prevent similar problems in the future.
As I read her statement, my eyes lingered over one line in particular:
“We never intended or anticipated this functionality being used this way — and that is on us,” Ms. Sandberg wrote.
It was a candid admission that reminded me of a moment in Mary Shelley’s “Frankenstein,” after the scientist Victor Frankenstein realizes that his cobbled-together creature has gone rogue.
“I had been the author of unalterable evils,” he says, “and I lived in daily fear lest the monster whom I had created should perpetrate some new wickedness.”
If I were a Facebook executive, I might feel a Frankensteinian sense of unease these days. The company has been hit with a series of scandals that have bruised its image, enraged its critics and opened up the possibility that in its quest for global dominance, Facebook may have created something it can’t fully control. Facebook is fighting through a tangled morass of privacy, free-speech and moderation issues with governments all over the world. Congress is investigating reports that Russian operatives used targeted Facebook ads to influence the 2016 presidential election. In Myanmar, activists are accusing Facebook of censoring Rohingya Muslims, who are under attack from the country’s military. In Africa, the social network faces accusations that it helped human traffickers extort victims’ families by leaving up abusive videos.
Few of these issues stem from willful malice on the company’s part. It’s not as if a Facebook engineer in Menlo Park personally greenlighted Russian propaganda, for example. On Thursday, the company said it would release political advertisements bought by Russians for the 2016 election, as well as some information related to the ads, to congressional investigators.
But the troubles do make it clear that Facebook was simply not built to handle problems of this magnitude. It’s a technology company, not an intelligence agency or an international diplomatic corps. Its engineers are in the business of building apps and selling advertising, not determining what constitutes hate speech in Myanmar. And with two billion users, including 1.3 billion who use it every day, moving ever greater amounts of their social and political activity onto Facebook, it’s possible that the company is simply too big to understand all of the harmful ways people might use its products.
“The reality is that if you’re at the helm of a machine that has two billion screaming, whiny humans, it’s basically impossible to predict each and every possible nefarious use case,” said Antonio GarcĂ­a MartĂ­nez, author of the book “Chaos Monkeys” and a former Facebook advertising executive. “It’s a Whac-a-Mole problem.”
Elliot Schrage, Facebook’s vice president of communications and public policy, said in a statement: “We work very hard to support our millions of advertisers worldwide, but sometimes — rarely — bad actors win. We invest a lot of time, energy and resources to make these rare events extinct, and we’re grateful to our community for calling out where we can do better.”

When Mark Zuckerberg built Facebook in his Harvard dorm room in 2004, nobody could have imagined its becoming a censorship tool for repressive regimes, an arbiter of global speech standards or a vehicle for foreign propagandists.
But as Facebook has grown into the global town square, it has had to adapt to its own influence. Many of its users view the social network as an essential utility, and the company’s decisions — which posts to take down, which ads to allow, which videos to show — can have real life-or-death consequences around the world. The company has outsourced some decisions to complex algorithms, which carries its own risks, but many of the toughest choices Facebook faces are still made by humans.
“They still see themselves as a technology middleman,” said Mr. García Martínez. “Facebook is not supposed to be an element of a propaganda war. They’re completely not equipped to deal with that.”
Even if Mr. Zuckerberg and Ms. Sandberg don’t have personal political aspirations, as has been rumored, they are already leaders of an organization that influences politics all over the world. And there are signs that Facebook is starting to understand its responsibilities. It has hired a slew of counterterrorism experts and is expanding teams of moderators around the world to look for and remove harmful content.
On Thursday, Mr. Zuckerberg said in a video posted on Facebook that the company would take several steps to help protect the integrity of elections, like making political ads more transparent and expanding partnerships with election commissions.
“We will do our part not only to ensure the integrity of free and fair elections around the world, but also to give everyone a voice and to be a force for good in democracy everywhere,” he said.
But there may not be enough guardrails in the world to prevent bad outcomes on Facebook, whose scale is nearly inconceivable. Alex Stamos, Facebook’s security chief, said last month that the company shuts down more than a million user accounts every day for violating Facebook’s community standards. Even if only 1 percent of Facebook’s daily active users misbehaved, it would still mean 13 million rule breakers, about the number of people in Pennsylvania.
In addition to challenges of size, Facebook’s corporate culture is one of cheery optimism. That may have suited the company when it was an upstart, but it could hamper its ability to accurately predict risk now that it’s a setting for large-scale global conflicts.
Several current and former employees described Facebook to me as a place where engineers and executives generally assume the best of users, rather than preparing for the worst. Even the company’s mission statement — “Give people the power to build community and bring the world closer together” — implies that people who are given powerful tools will use those tools for socially constructive purposes. Clearly, that is not always the case.
Hiring people with darker views of the world could help Facebook anticipate conflicts and misuse. But pessimism alone won’t fix all of Facebook’s issues. It will need to keep investing heavily in defensive tools, including artificial intelligence and teams of human moderators, to shut down bad actors. It would also be wise to deepen its knowledge of the countries where it operates, hiring more regional experts who understand the nuances of the local political and cultural environment.
Facebook could even take a page from Wall Street’s book, and create a risk department that would watch over its engineering teams, assessing new products and features for potential misuse before launching them to the world.
Now that Facebook is aware of its own influence, the company can’t dodge responsibility for the world it has helped to build. In the future, blaming the monster won’t be enough.