Fake News, Propaganda and Lies Can Destroy a Democracy

Who was the first black president of America? It’s a fairly simple question with a straightforward answer. Or so you would think. But plug the query into a search engine and the facts get a little fuzzy.
When I checked Google, the first result – given special prominence in a box at the top of the page – informed me that the first black president was a man called John Hanson in 1781. Apparently, the US has had seven black presidents, including Thomas Jefferson and Dwight Eisenhower. Other search engines do little better. The top results on Yahoo and Bing pointed me to articles about Hanson as well.
Welcome to the world of “alternative facts”. It is a bewildering maze of claim and counterclaim, where hoaxes spread with frightening speed on social media and spark angry backlashes from people who take what they read at face value. Controversial, fringe views about US presidents can be thrown centre stage by the power of search engines. It is an environment where the mainstream media is accused of peddling “fake news” by the most powerful man in the world. Voters are seemingly misled by the very politicians they elected and even scientific research - long considered a reliable basis for decisions - is dismissed as having little value.
For a special series launching this week, BBC Future Now asked a panel of experts about the grand challenges we face in the 21st Century – and many named the breakdown of trusted sources of information as one of the most pressing problems today. In some ways, it’s a challenge that trumps all others. Without a common starting point – a set of facts that people with otherwise different viewpoints can agree on – it will be hard to address any of the problems that the world now faces.
Having a large number of people in a society who are misinformed is absolutely devastating and extremely difficult to cope with – Stephan Lewandowsky, University of Bristol 
The example at the start of this article may seem a minor, frothy controversy, but there is something greater at stake here. Leading researchers, tech companies and fact-checkers we contacted say the threat posed by the spread of misinformation should not be underestimated.
Take another example. In the run-up to the US presidential elections last year, a made-up story spread on social media claimed a paedophile ring involving high-profile members of the Democratic Party was operating out of the basement of a pizza restaurant in Washington DC. In early December a man walked into the restaurant - which does not have a basement - and fired an assault rifle. Remarkably, no one was hurt.
After a malicious rumour spread online about a pizza restaurant in Washington DC, a man walked into the restaurant and fired an assault rifle (Credit: Alamy)
Some warn that “fake news” threatens the democratic process itself. “On page one of any political science textbook it will say that democracy relies on people being informed about the issues so they can have a debate and make a decision,” says Stephan Lewandowsky, a cognitive scientist at the University of Bristol in the UK, who studies the persistence and spread of misinformation. “Having a large number of people in a society who are misinformed and have their own set of facts is absolutely devastating and extremely difficult to cope with.”
A survey conducted by the Pew Research Center towards the end of last year found that 64% of American adults said made-up news stories were causing confusion about the basic facts of current issues and events.
Alternative histories
Working out who to trust and who not to believe has been a facet of human life since our ancestors began living in complex societies. Politics has always bred those who will mislead to get ahead.
But the difference today is how we get our information. “The internet has made it possible for many voices to be heard that could not make it through the bottleneck that controlled what would be distributed before,” says Paul Resnick, professor of information at the University of Michigan. “Initially, when they saw the prospect of this, many people were excited about this opening up to multiple voices. Now we are seeing some of those voices are saying things we don’t like and there is great concern about how we control the dissemination of things that seem to be untrue.”
There is great concern about how we control the dissemination of things that seem to be untrue – Paul Resnick, University of Michigan 
We need a new way to decide what is trustworthy. “I think it is going to be not figuring out what to believe but who to believe,” says Resnick. “It is going to come down to the reputations of the sources of the information. They don’t have to be the ones we had in the past.”
We’re seeing that shift already. The UK’s Daily Mail newspaper has been a trusted source of news for many people for decades. But last month editors of Wikipedia voted to stop using the Daily Mail as a source for information on the basis that it was “generally unreliable”.
Yet Wikipedia itself - which can be edited by anyone but relies on teams of volunteer editors to weed out inaccuracies - is far from perfect. Inaccurate information regularly appears on the site, and anyone relying on it needs to check carefully.
For example, the Wikipedia page for the comedian Ronnie Corbett once stated that during his long career he had played a Teletubby in the children’s TV series. The claim was false, but when he died it cropped up in some of his obituaries, whose writers had turned to Wikipedia for help.
Several obituaries for the comedian Ronnie Corbett falsely claimed he had once played a Teletubby because this statement appeared in his Wikipedia entry (Credit: Getty Images)
Other than causing offence or embarrassment - and ultimately eroding a news organisation’s standing - these sorts of errors do little long-term harm. There are some who care little for reputation, however. They are simply in it for the money. Last year, links to websites masquerading as reputable sources started appearing on social media sites like Facebook. Stories about the Pope endorsing Donald Trump’s candidacy and Hillary Clinton being indicted for crimes related to her email scandal were shared widely despite being completely made up.
“The major new challenge in reporting news is the new shape of truth,” says Kevin Kelly, a technology author and co-founder of Wired magazine. “Truth is no longer dictated by authorities, but is networked by peers. For every fact there is a counterfact. All those counterfacts and facts look identical online, which is confusing to most people.”
For every fact there is a counterfact and all those counterfacts and facts look identical online – Kevin Kelly, co-founder Wired magazine 
For those behind the made-up stories, sharing them widely on social media means a slice of the advertising revenue that comes from clicks as people follow the links to their webpages. Many of the stories were traced to a small town in Macedonia, where young people were running them as a get-rich-quick scheme, paying Facebook to promote their posts and reaping the rewards of the huge number of visits to their websites.
“The difference that social media has made is the scale and the ability to find others who share your world view,” says Will Moy, director of Full Fact, an independent fact-checking organisation based in the UK. “In the past it was harder for relatively fringe opinions to get their views reinforced. If we were chatting around the kitchen table or in the pub, often there would be a debate.”
But such debates are happening less and less. Information spreads around the world in seconds, with the potential to reach billions of people. But it can also be dismissed with a flick of the finger. What we choose to engage with is self-reinforcing and we get shown more of the same. It results in an exaggerated “echo chamber” effect.
People are quicker to assume they are being lied to but less quick to assume people they agree with are lying, which is a dangerous tendency – Will Moy, director of Full Fact 
“What is noticeable about the two recent referendums in the UK - Scottish independence and EU membership - is that people seem to be clubbing together with people they agreed with and all making one another angrier,” says Moy. “The debate becomes more partisan, more angry and people are quicker to assume they are being lied to but less quick to assume people they agree with are lying. That is a dangerous tendency.”
The challenge here is how to burst these bubbles. One approach that has been tried is to challenge facts and claims when they appear on social media. Organisations like Full Fact, for example, look at persistent claims made by politicians or in the media, and try to correct them. (The BBC also has its own fact-checking unit, called Reality Check.)
Research by Resnick suggests this approach may not be working on social media, however. He has been building software that automatically tracks rumours on Twitter, dividing people into those who spread misinformation and those who correct it. “For the rumours we looked at, the number of followers of people who tweeted the rumour was much larger than the number of followers of those who corrected it,” he says. “The audiences were also largely disjoint. Even when a correction reached a lot of people and a rumour reached a lot of people, they were usually not the same people. The problem is, corrections do not spread very well.”
The problem is that corrections do not spread very well – Paul Resnick, University of Michigan 
One example of this that Resnick and his team found was a mistake that appeared in a leaked draft of a World Health Organisation report that stated many people in Greece who had HIV had infected themselves in an attempt to get welfare benefits. The WHO put out a correction, but even so, the initial mistake reached far more people than the correction did. Another rumour suggested the rapper Jay Z had died and reached 900,000 people on Twitter. Around half that number were exposed to the correction. But only a tiny proportion were exposed to both the rumour and correction.
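Resnick’s finding is, at heart, a set-overlap measurement: how many of the people reached by a rumour were also reached by its correction? The sketch below shows the shape of that calculation; the accounts and follower sets are made up for illustration, whereas a real study would collect them from Twitter for every account that tweeted the rumour or the correction.

```python
# A minimal sketch of the audience-overlap measurement Resnick describes.
# The accounts and follower sets are hypothetical; a real study would
# collect them from Twitter for each account that tweeted the rumour
# or the correction.

def audience(followers_of, tweeters):
    """Union of the followers of every account in `tweeters`."""
    reached = set()
    for account in tweeters:
        reached |= followers_of.get(account, set())
    return reached

followers_of = {
    "rumour_acct_1": {"a", "b", "c", "d"},
    "rumour_acct_2": {"e", "f"},
    "corrector_1": {"f", "g"},
}

rumour_audience = audience(followers_of, ["rumour_acct_1", "rumour_acct_2"])
correction_audience = audience(followers_of, ["corrector_1"])

print("reached by rumour:", len(rumour_audience))                      # 6
print("reached by correction:", len(correction_audience))              # 2
print("reached by both:", len(rumour_audience & correction_audience))  # 1
```

In the Jay Z case, the first two numbers were in the hundreds of thousands, while the third - the overlap - was tiny, which is exactly the pattern that makes corrections so ineffective.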
This lack of overlap is a specific challenge when it comes to political issues. Moy fears the traditional watchdogs and safeguards put in place to ensure those in power are honest are being circumvented by social media.
“On Facebook political bodies can put something out, pay for advertising, put it in front of millions of people, yet it is hard for those not being targeted to know they have done that,” says Moy. “They can target people based on how old they are, where they live, what skin colour they have, what gender they are. We shouldn’t think of social media as just peer-to-peer communication - it is also the most powerful advertising platform there has ever been.”
We shouldn’t think of social media as just peer-to-peer communication, it is also the most powerful advertising platform there has ever been – Will Moy, director of Full Fact 
Twitter and Facebook both insist they have strict rules on what can be advertised, particularly when it comes to political advertising. But that may count for little. “We have never had a time when it has been so easy to advertise to millions of people and not have the other millions of us notice,” says Moy.
Regardless, the use of social media adverts in politics can have a major impact. During the run-up to the EU referendum, the Vote Leave campaign paid for nearly a billion targeted digital adverts, mostly on Facebook, according to one of its campaign managers. One of those was the claim that the UK pays £350m a week to the EU - a figure Sir Andrew Dilnot, the chair of the UK Statistics Authority, described as misleading. In fact, because of a rebate, the UK pays around £276m a week to the EU.
“We need some transparency about who is using social media advertising when they are in election campaigns and referendum campaigns,” says Moy. “We need to be more equipped to deal with this - we need watchdogs that will go around and say, ‘Hang on, this doesn’t stack up’ and ask for the record to be corrected.”
Many people are worried that fundamental disagreement over basic facts is damaging the democratic process (Credit: Getty Images)
Social media sites themselves are already taking steps. Mark Zuckerberg, founder of Facebook, recently spelled out his concerns about the spread of hoaxes, misinformation and polarisation on social media in a 6,000-word letter he posted online. In it he said Facebook would work to reduce sensationalism in its news feed by looking at whether people have read content before sharing it. It has also updated its advertising policies to reduce spam sites that profit from fake stories, and added tools to let users flag fake articles.
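Zuckerberg’s letter does not spell out how such a signal would work, but the idea can be illustrated with a toy ranking adjustment: stories that are widely shared but rarely opened get demoted. Everything in this sketch - the field names, scores and threshold - is hypothetical; Facebook’s real ranking system is not public.

```python
# A toy "read-before-share" signal of the kind Zuckerberg's letter hints at.
# All field names, scores and thresholds are hypothetical.

def demote_unread_shares(stories, min_reads_per_share=0.5, penalty=0.5):
    """Downrank stories shared far more often than they are read,
    a pattern typical of sensational headlines."""
    for story in stories:
        if story["shares"] and story["reads"] / story["shares"] < min_reads_per_share:
            story["rank_score"] *= penalty
    return sorted(stories, key=lambda s: s["rank_score"], reverse=True)

stories = [
    {"title": "Measured report", "reads": 900, "shares": 300, "rank_score": 1.0},
    {"title": "Shocking headline!", "reads": 100, "shares": 400, "rank_score": 1.2},
]
for story in demote_unread_shares(stories):
    print(story["title"], round(story["rank_score"], 2))
```

The sensational story starts with the higher score but ends up demoted, because so few of the people sharing it actually read it.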
Other tech giants also claim to be taking the problem seriously. Apple’s Tim Cook recently raised concerns about fake news, and Google says it is working on ways to improve its algorithms so they take accuracy into account when displaying search results. “Judging which pages on the web best answer a query is a challenging problem and we don’t always get it right,” says Peter Barron, vice president of communications for Europe, the Middle East and Africa at Google.
“When non-authoritative information ranks too high in our search results, we develop scalable, automated approaches to fix the problems, rather than manually removing these one by one. We recently made improvements to our algorithm that will help surface more high quality, credible content on the web. We’ll continue to change our algorithms over time in order to tackle these challenges.”
Judging which pages on the web best answer a query is a challenging problem and we don’t always get it right – Peter Barron, Google 
For Rohit Chandra, vice president of engineering at Yahoo, more humans in the loop would help. “I see a need in the market to develop standards,” he says. “We can’t fact-check every story, but there must be enough eyes on the content that we know the quality bar stays high.”
Google is also working with fact-checking organisations like Full Fact to develop new technologies that can identify and even correct false claims. Together they are creating an automated fact-checker that will monitor claims made on TV, in newspapers, in parliament or on the internet.
Initially it will target claims that have already been fact-checked by humans, sending out corrections automatically in an attempt to shut down rumours before they get started. As artificial intelligence gets smarter, the system will also do some fact-checking of its own.
“For a claim like ‘crime is rising’, it is relatively easy for a computer to check,” says Moy. “We know where to get the crime figures and we can write an algorithm that can make a judgement about whether crime is rising. We did a demonstration project last summer to prove we can automate the checking of claims like that. The challenge is going to be writing tools that can check specific types of claims, but over time it will become more powerful.”
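Full Fact has not published the internals of that demonstration, but the shape of such a check is straightforward: fetch the official series and apply a trend rule. The sketch below uses made-up figures and a deliberately naive rule (compare the average of recent years with the average of earlier ones); a production system would pull real statistics, for example from the Office for National Statistics, and apply a more careful trend test.

```python
# A minimal sketch of an automated check for a claim like "crime is rising".
# The figures and the trend rule are hypothetical; a real checker would
# fetch official statistics and use a proper statistical test.

def is_rising(figures, recent_years=3):
    """Compare the average of the most recent years with the average
    of the years before them."""
    if len(figures) <= recent_years:
        raise ValueError("not enough data to judge a trend")
    recent = figures[-recent_years:]
    earlier = figures[:-recent_years]
    return sum(recent) / len(recent) > sum(earlier) / len(earlier)

# Hypothetical annual offence counts, oldest first:
crime_figures = [4_800_000, 4_650_000, 4_500_000, 4_400_000, 4_300_000]
verdict = "rising" if is_rising(crime_figures) else "not rising"
print("Claim 'crime is rising' judged:", verdict)  # not rising
```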
What would Watson do?
It is an approach being attempted by a number of different groups around the world. Researchers at the University of Mississippi and Indiana University are each working on automated fact-checking systems. One of the world’s most advanced AIs has also had a crack at the problem. IBM has spent several years working on ways its Watson AI could help internet users distinguish fact from fiction. The company built a fact-checker app that could sit in a browser, use Watson’s language skills to scan the page and give a percentage likelihood that it was true. But according to Ben Fletcher, senior software engineer at IBM Watson Research, who built the system, it was unsuccessful in tests - but not because it couldn’t spot a lie.
“We got a lot of feedback that people did not want to be told what was true or not,” he says. “At the heart of what they want was actually the ability to see all sides and make the decision for themselves. A major issue most people face without knowing it is the bubble they live in. If they were shown views outside that bubble they would be much more open to talking about them.”
We got a lot of feedback that people did not want to be told what was true or not – Ben Fletcher, IBM Watson Research 
This idea of helping break through the isolated information bubbles that many of us now live in comes up again and again. By presenting people with accurate facts it should be possible to at least get a debate going. But telling people what is true and what is not does not seem to work. For this reason, IBM shelved its plans for a fact-checker.
“There is a large proportion of the population in the US living in what we would regard as an alternative reality,” says Lewandowsky. “They share things with each other that are completely false. Any attempt to break through these bubbles is fraught with difficulty as you are being dismissed as being part of a conspiracy simply for trying to correct what people believe. It is why you have Republicans and Democrats disagreeing over something as fundamental as how many people appear in a photograph.”
One approach Lewandowsky suggests is to build search engines that offer up information subtly conflicting with a user’s world view. Similarly, firms like Amazon could suggest films and books that provide an alternative viewpoint to the products a person normally buys.
There is a large proportion of the population living in what we would regard as an alternative reality – Stephan Lewandowsky, University of Bristol 
“By suggesting things to people that are outside their comfort zone, but not so far outside that they would never look at them, you can keep people from self-radicalising in these bubbles,” says Lewandowsky. “That sort of technological solution is one good way forward. I think we have to work on that.”
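Neither Lewandowsky nor the platforms describe a concrete mechanism, but his suggestion can be sketched as a re-ranking step: reserve a small share of the recommendation slots for items from outside the user’s usual viewpoint, filtered so they stay near the comfort zone he describes. All of the names, scores and thresholds below are hypothetical.

```python
# A minimal sketch of the gentle diversification Lewandowsky suggests.
# Candidates are (title, viewpoint, affinity) tuples, relevance-ranked;
# all names, scores and thresholds are hypothetical.

def diversify(candidates, user_view, slots=5, outside_share=0.2,
              min_affinity=0.4):
    """Fill most slots as usual, reserving a share for items outside the
    user's bubble that are still close enough to their tastes."""
    outside_slots = max(1, int(slots * outside_share))
    inside = [c for c in candidates if c[1] == user_view]
    outside = [c for c in candidates
               if c[1] != user_view and c[2] >= min_affinity]
    # Keep both groups in their original, relevance-ranked order.
    return inside[:slots - outside_slots] + outside[:outside_slots]

# Hypothetical catalogue: (title, viewpoint, affinity to this user 0..1)
candidates = [
    ("Film A", "view_x", 0.9), ("Film B", "view_x", 0.8),
    ("Film C", "view_y", 0.7), ("Film D", "view_x", 0.6),
    ("Film E", "view_y", 0.3), ("Film F", "view_x", 0.5),
]
for title, _, _ in diversify(candidates, user_view="view_x"):
    print(title)  # four familiar picks plus one from outside the bubble
```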
Google is already doing this to some degree. It operates a little-known grant scheme that allows certain NGOs to place high-ranking adverts in response to certain searches. It is used by groups like the Samaritans so that their pages rank highly when someone searches for information about suicide, for example. Google says anti-radicalisation charities could similarly promote their message on searches about so-called Islamic State.
But there are understandable fears about powerful internet companies filtering what people see - even within these organisations themselves. For those leading the push to fact-check information, a preferable approach would be better tagging of accurate information online, allowing people to make up their own minds.
Search algorithms are as flawed as the people who develop them – Alexios Mantzarlis, director of the International Fact-Checking Network 
“Search algorithms are as flawed as the people who develop them,” says Alexios Mantzarlis, director of the International Fact-Checking Network. “We should think about adding layers of credibility to sources. We need to tag and structure quality content in effective ways.”
Mantzarlis believes part of the solution will be providing people with the resources to fact-check information for themselves. He is planning to develop a database of sources that professional fact-checkers use and intends to make it freely available.
But what if people don’t agree with official sources of information at all? This is a problem governments around the world are facing as the public treats what it is told with increasing scepticism.
Nesta, a UK-based charity that supports innovation, has been looking at some of the challenges that face democracy in the digital era and how the internet can be harnessed to get people more engaged. Eddie Copeland, director of government innovation at Nesta, points to an example in Taiwan where members of the public can propose ideas and help formulate them into legislation. “The first stage in that is crowdsourcing facts,” he says. “So before you have a debate, you come up with the commonly accepted facts that people can debate from.”
When people say they are worried about people being misled, what they are really worried about is other people being misled – Paul Resnick, University of Michigan 
But that means facing up to our own bad habits. “There is an unwillingness to bend one’s mind around facts that don’t agree with one’s own viewpoint,” says Victoria Rubin, director of the language and information technology research lab at Western University in Ontario, Canada. She and her team have been working to identify fake news on the internet since 2015. Will Moy agrees. He argues that by slipping into lazy cynicism about what we are being told, we allow those who lie to us to get away with it. Instead, he thinks we should be interrogating what they say and holding them to account.
Ultimately, however, there’s an uncomfortable truth we all need to address. “When people say they are worried about people being misled, what they are really worried about is other people being misled,” says Resnick. “Very rarely do they worry that fundamental things they believe themselves may be wrong.” Technology may help to solve this grand challenge of our age, but it is time for a little more self-awareness too.

  • By Richard Gray
