“We encourage everyone to seek election and voting information from reliable sources…and to be thoughtful, careful, and discerning consumers of information online.”
F.B.I. Director Christopher Wray made this statement at an October 21 press conference warning that America’s democratic system was under attack from Russia and Iran.
Technology has made it easier for misinformation to spread, whether it’s created by foreign powers or by groups within the U.S. It’s important to be aware of what’s circulating online and to be able to distinguish what’s factual from what’s not. Social platforms increasingly see a role for themselves in this.
As we get closer to Election Day, let’s take a look at the world of misinformation and how technology companies are trying to protect America’s elections and democratic values.
Registering to vote
Every election cycle, we’re inundated with “Get Out The Vote” ads encouraging us to register. This year, tech companies are leveraging their respective platforms to support the effort. Perhaps the biggest such initiative came from Facebook. The social networking company launched a nationwide voter registration campaign in June with a goal of signing up 4 million new voters before Election Day — on Monday, it announced it had registered 4.4 million people. Not to be outdone, Snapchat, TikTok and Twitter have all launched their own voter registration drives.
Curbing misinformation
Foreign entities, super PACs, and even politicians are all playing a part in sowing uncertainty over the upcoming election. While social media has made many of us feel more connected, it has also allowed disinformation, misinformation and propaganda to spread quickly, and Silicon Valley firms are wary of letting that happen, especially during an election. From unsubstantiated claims of mail-in voter fraud to premature victory declarations, tech companies say they’re taking steps to ensure that Americans get accurate information from official sources.
Facebook would like to avoid repeating its mistakes of the 2016 race and has announced a series of steps it’s taking to, as CEO Mark Zuckerberg puts it, “protect our democracy.” It has expanded its fact-checking operations, scrutinizing posts by President Trump; banned new political ads in the week before the election; and says it’ll label posts that prematurely or falsely declare victory for a candidate before official results are announced. That said, Facebook’s enhanced ad policy is already being put to the test, and the company says it has already removed 120,000 posts that sought to obstruct voting and affixed warning labels to more than 150 million others.
Twitter eliminated political ads from its platform some time ago and in recent months has stepped up its fact-checking operation, labeling some of Trump’s tweets as misinformation. The company recently tightened its policies around election misinformation, placing deceptive tweets from politicians, candidates, political parties and high-profile U.S. users behind a warning screen. It has also temporarily modified its retweet feature in a way it hopes will force users to pause and reflect before potentially spreading misinformation further.
For its part, Google says it’s banning political ads for at least a week after the election, anticipating that a winner won’t be officially declared on November 3. On Election Day, YouTube will apply a warning label to election-related search queries and videos, letting viewers know that results may not be final.
Wikipedia is also locking down its platform on Election Day, implementing new measures it hopes will curb misinformation from its community, such as requiring anyone making edits to a Wikipedia page to have a user history longer than 30 days, and maintaining a “watch list” of political pages, congressional races, and each state’s election page.
What does it all mean?
Even though tech companies are taking steps to stop the spread of misinformation and disinformation on their platforms, there is no foolproof solution. We must view every piece of news and information shared by our friends, family, and group members with a healthy dose of skepticism, and wait for confirmation from election officials and reliable media sources. The popularity of Facebook, Twitter, Google, YouTube and other social platforms can make it easy to find the information we want, but that convenience isn’t without its dangers, so before you share that post, first make sure you’re not unwittingly spreading misinformation.
If you’re curious, here’s how we’re trying to make sense of it all.
—Ken Yeung, Flipboard’s Technology and Science editor, is currently curating Tech 2020.