Less than 24 hours after we learned about Meta’s smear campaign against rival social network TikTok, it has been confirmed that Facebook was showing harmful content to users over a period of six months.
The failure was detailed in an internal document obtained by The Verge, which described a “massive ranking failure” in which Facebook’s systems failed to suppress posts containing nudity, violence and propaganda from Russian state media.
Why we care. Facebook wants to create a brand-safe environment. It’s failing. When Facebook allows ads to appear alongside the types of content it failed to downrank here, that’s incredibly troubling for brands and publishers. Facebook has a history of self-inflicted wounds, scandals and a lack of accountability when issues like these have been exposed and made headlines. To date, none of it has irreparably hurt the company. The big question is how long brands will keep investing money in a platform that has shown great interest in taking their money but little interest in protecting them from being associated with such harmful content.
What happened. Over a period of six months, due to a ranking bug, Facebook’s feed distributed an unknown amount of dubious content, including debunked misinformation, that it typically downranks. The bug increased views of this content by as much as 30% globally, The Verge reported.
Meta demotes several types of content – clickbait, engagement bait, and several types of low-quality content and spam. You can read the full list here.
What Meta said. In a statement, Meta spokesperson Joe Osborne confirmed the company had detected inconsistencies in demoting posts on five occasions, starting in October, which correlated with small, temporary increases in internal metrics. The company attributed the issue to a software bug and applied the needed fixes on March 11. Osborne said the bug did not have “any meaningful, long-term impact on our metrics.”