Meta Platforms Inc. admitted today that for six months its News Feed algorithm did exactly what it was designed not to do: it increased views of dubious content instead of suppressing it.
According to The Verge, which first reported the matter, engineers working on Meta’s Facebook platform described a “massive ranking failure” in the News Feed. An internal report said the problem began in October last year and persisted for about six months.
Instead of suppressing posts that third-party human fact-checkers or artificial intelligence had already flagged as possible misinformation, the algorithm boosted views of that content by as much as 30%. According to the same report, it also promoted other objectionable material, including posts depicting violence and nudity, as well as posts from Russian state media – content that Meta, along with other tech platforms, had only recently clamped down on.
Meta didn’t deny the embarrassing gaffe. Spokesperson Joe Osborne told The Verge it was a software bug that has now been fixed, saying Facebook had “detected inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases to internal metrics.”
Facebook has never been especially transparent about its downranking of content or the practice known as shadow banning. Last year’s leaks revealed that the company had granted a kind of moderation immunity to millions of accounts.
Meanwhile, some activists have complained that social media platforms such as Facebook downrank their content even when their posts don’t break any rules. Facebook does, however, have a policy on what it calls “borderline” content.
Still, in 2021 Facebook said it had made changes to give people more control over the content that appears in their feeds. “People should be able to better understand how the ranking algorithms work and why they make particular decisions, and they should have more control over the content that is shown to them,” the company said.