Facebook has announced the latest steps in its efforts to stop the spread of fake and misleading news on the platform, releasing a new 12-minute video on its back-end processes, along with several explainers and a new website dedicated to providing insight into the News Feed process.
The video looks at Facebook’s work to address concerns around fake news following the 2016 US Presidential Election. In an accompanying blog post, John Hegeman, Facebook’s new Head of News Feed, has also outlined additional measures, including a news literacy campaign which will provide users with tips for spotting false news (appearing at the top of News Feed and in print ads), and a partnership with academic research teams to measure the impacts of fake news and address related issues.
The new measures will provide additional transparency over the News Feed process, and go hand-in-hand with Facebook’s hiring of thousands of new moderators to reduce the impact of such content.
And the insights here are definitely valuable – never before has Facebook provided as much access to the inner workings of its algorithm, and how its team addresses specific concerns. The platform has also introduced a range of related measures over the last year, including labels for issues-based ads, additional links for further context, and indicators on potentially fake news.
How effective they’ll be, however, is a totally different question.
The issue of fake or misleading news, and the way that it spreads through social networks, is, understandably, a tough one to address.
Using the reach and ubiquity of social platforms, those who would seek to influence conversations and opinions have found that they can weaponize the medium, and its capacity to fuel division and tribalism. This is especially true in the age of algorithms, which work to show you more of what you like and agree with, and less of what you don’t.
As noted by Wired:
“[Facebook’s] News Feed has been tuned, for years, to maximize our attention – and in many ways our outrage. The same features that incentivized publishers to create clickbait are the ones that let false news fly.”
Indeed, the system has been taught to respond to your personal triggers, creating a self-affirming feedback loop – if you don’t like Donald Trump, you’re far more likely to see more and more posts which reinforce that opinion. And with a growing number of users relying on Facebook for their daily news intake, that’s a dangerous combination.
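That feedback loop can be illustrated with a toy simulation. Note this is a purely hypothetical sketch of engagement-driven ranking in general – the topic names, click probabilities, and weight updates below are illustrative assumptions, not Facebook’s actual system:

```python
import random

def rank_feed(items, weights):
    """Order candidate posts by the user's learned topic weights."""
    return sorted(items, key=lambda post: weights[post["topic"]], reverse=True)

def simulate(rounds=50, seed=0):
    random.seed(seed)
    topics = ["politics", "sports", "science"]
    weights = {t: 1.0 for t in topics}          # the feed starts with no preference
    # Hidden user taste: a slight lean toward one topic (illustrative numbers)
    user_bias = {"politics": 0.9, "sports": 0.4, "science": 0.4}

    for _ in range(rounds):
        candidates = [{"topic": random.choice(topics)} for _ in range(10)]
        feed = rank_feed(candidates, weights)
        for post in feed[:3]:                   # the user only sees the top 3 posts
            if random.random() < user_bias[post["topic"]]:
                weights[post["topic"]] += 0.1   # a click reinforces that topic
    return weights

weights = simulate()
```

Even a small initial preference compounds: clicks boost a topic’s weight, the boosted weight wins more top slots, and more exposure produces more clicks – the preferred topic crowds out the others within a few dozen rounds.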
The question, then, is what can Facebook do about it? Is it even possible for Facebook to alter the News Feed enough to accomplish its aim of keeping users actively engaged, while also reducing the prevalence of divisive content – and doing so without impeding free speech?
There’s obviously a heap to consider in that equation, and while Facebook’s latest measures seem to be a step in the right direction, the very process of journalism, and the incentives behind news publication, have changed as a result of the growth in digital content consumption.
Facebook needs to take responsibility for its role in this, for sure, and should seek to address what it can. But the broader balance of more targeted advertising, with the conflicting effort to diversify media inputs, provides a challenge not previously seen.
No one has the answers yet.
From a digital marketing perspective, the new measures should have little direct impact. However, the information resources do provide new insight into how the News Feed algorithm works, which can be hugely valuable for those trying to maximize their Facebook reach.