[Image: A person partially obscured by the flags of Israel and Palestine. Credit: AP Photo/Oded Balilty]

By now, most thoughtful observers of social media recognize that an unfettered algorithm can be a dangerous thing. Beyond elevating annoying clickbait, it turns social media into an outrage machine, fueling polarization. Even worse, because algorithms are so often optimized for engagement, they can amplify false and dangerous information.

At Elon Musk’s X, blue-check accounts that pay for membership get a boost in the system, regardless of the quality or accuracy of the content they share. My former employer Microsoft has recently come under fire for replacing its talented human editors with an algorithm-driven approach that has served spectacularly awful and false stories to MSN readers. Meanwhile, Facebook is deprioritizing news in users’ home feeds.

All of this amounts to bad news for people who want real news, particularly at a time when the world seems to be on fire. Where do users go to get well-reported, fair content from a variety of reliable sources? At Flipboard, our machine-learning algorithms remain a core component of the product, serving users content that matches their interests. But as we explained in a recent blog post, we place controls on our artificial intelligence to ensure accuracy and ethical behavior.
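To make that idea a little more concrete, here is a minimal sketch, in Python, of the general shape of such a control: an interest-matching ranker that only considers sources which clear a trust check. The trust scores, threshold, and names below are illustrative stand-ins, not our production code.

    from dataclasses import dataclass

    @dataclass
    class Article:
        title: str
        source: str              # publisher domain
        interest_score: float    # how well the piece matches a reader’s interests, 0–1

    # Hypothetical trust ratings assigned during editorial review (illustrative only).
    SOURCE_TRUST = {
        "example-newswire.com": 0.95,
        "partisan-blog.example": 0.30,
    }

    TRUST_THRESHOLD = 0.7  # assumed cutoff; low-trust sources never get ranked

    def rank_feed(candidates: list[Article]) -> list[Article]:
        """Rank by interest match, but only among sources that clear the trust gate."""
        trusted = [a for a in candidates
                   if SOURCE_TRUST.get(a.source, 0.0) >= TRUST_THRESHOLD]
        return sorted(trusted, key=lambda a: a.interest_score, reverse=True)

The point of the gate is that interest- or engagement-style signals never get the chance to promote a source that hasn’t been cleared in the first place.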

Beyond training our AI, we hand-pick sources for topics that often attract disinformation campaigns, such as #News, #Israel-Hamas War and #Coronavirus. Our team carefully vets the publishers included in these topics to weed out sites that spit out misinformation, highly partisan views or poorly sourced articles. And for major news events we offer hand-curated Magazines that include only articles chosen by experienced human editors. The most recent example is our Israel-Hamas War collection, in which we try to present the situation accurately, fairly and from many perspectives. We’re following the same playbook for the Ukraine War and will do the same for the 2024 elections.
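In code terms, that hand-picking amounts to a per-topic allowlist. Again, this is only an illustrative sketch; the publisher domains and helper function here are hypothetical, not our real configuration.

    # Hypothetical per-topic allowlists for sensitive topics (illustrative domains).
    CURATED_TOPICS = {
        "#News": {"example-times.com", "example-wire.org"},
        "#Israel-Hamas War": {"example-times.com", "example-intl.net"},
        "#Coronavirus": {"example-health.org"},
    }

    def allowed_in_topic(topic: str, publisher_domain: str) -> bool:
        """For hand-vetted topics, only allowlisted publishers may appear;
        other topics fall back to the regular algorithmic pipeline."""
        allowlist = CURATED_TOPICS.get(topic)
        if allowlist is None:
            return True  # topic is not specially curated
        return publisher_domain in allowlist

The practical effect is that an article from an unvetted site is simply never eligible to appear in, say, the #Israel-Hamas War topic, no matter how well it might score on engagement.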

How do we vet publications? First, we have long-standing relationships with the major media outlets in the U.S. and around the globe, so they are prioritized. But smaller and newer publications can offer unique and fresh perspectives, so we’re always looking to introduce new sources to Flipboard users. Here are some of the criteria we use:

  • Does the publisher follow standard journalistic best practices?
  • Does the publisher produce original reporting? (vs. merely rewriting the work of others)
  • Does the publisher disclose its ownership?
  • Do the articles have bylines to show who’s written a piece?

Second, we’re constantly reviewing and adjusting our largest and most popular feeds to ensure users are getting the most accurate, up-to-date information. And we respond to user feedback when readers flag something that doesn’t look quite right.

It’s a never-ending job, but one that’s crucial to our goals, which include informing and inspiring users with content they can trust.

— Carl Sullivan, North America managing editor, co-curates The House and The Senate Magazines on Flipboard’s Politics Desk.