
Facebook says it invested over $13 billion on safety and security

Following a series of leaks published by the Wall Street Journal, the company outlines how it has acted on many of the issues raised

In the past five years, Facebook has spent over $13 billion and employed nearly 40,000 people on ‘safety and security’.

The company revealed this in a blog post after weeks of leaks published by the Wall Street Journal alleging that the company was aware of its platforms’ negative effects on users but did little to correct the issues. The figures are meant to demonstrate how seriously the company takes safety and security.

Among the issues raised by the WSJ were the invasion of users’ privacy; the inability to tackle misinformation, especially during the COVID-19 pandemic; how Instagram was a toxic place for young girls; and how some privileged users were exempt from Facebook’s general rules.

In the blog, Facebook stated: “How technology companies grapple with complex issues is being heavily scrutinised, and often, without important context. There is a lot more to the story. What is getting lost in this discussion is some of the important progress we’ve made as a company and the positive impact that it is having across many key areas.

“We firmly believe that ongoing research and candid conversations about our impact are some of the most effective ways to identify emerging issues and get ahead of them. This doesn’t mean we find and fix every problem right away. But because of this approach, together with other changes, we have made significant progress across a number of important areas, including privacy, safety and security, to name a few. Just as the world has changed a lot, so has Facebook.”

Facebook admitted that it did not address safety and security challenges early enough in the product development process. Instead, it “made improvements reactively in response to a specific abuse”. The company insisted it had fundamentally changed that approach.

“Today, we embed teams focusing specifically on safety and security issues directly into product development teams, allowing us to address these issues during our product development process, not after it. Products also have to go through an Integrity Review process, similar to the Privacy Review process, so we can anticipate potential abuses and build in ways to mitigate them. Here are a few examples of how far we’ve come.”

Among the most important changes of recent years, the company cited the 40,000 people working on safety and security and the investment of more than $13 billion in teams and technology in this area since 2016. Facebook’s security teams have disrupted and removed more than 150 covert influence operations, and its advanced AI helped block 3 billion fake accounts in the first half of 2021 alone. The company said it has also gotten better at keeping people safe on the platform.

Facebook said it proactively removes content that violates its standards on hate speech, now taking down 15 times more of such content across Facebook and Instagram than in 2017. It has also started using technology that understands the same concept in multiple languages and applies learnings from one language to improve its performance in others.

On combating misinformation, Facebook said: “Misinformation has been a challenge on and off the internet for many decades. People are understandably concerned about how it will be handled for future internet technologies. At Facebook, we’ve begun addressing this comprehensively — rather than treating it as a single problem with a single solution.

“This means we’ve gotten better at addressing this complex challenge. We’ve worked to develop and expand our systems to reduce misinformation and promote reliable information.”

The company said it removes false and harmful content that violates its Community Standards, including more than 20 million pieces of false COVID-19 and vaccine content. It has also built a global network of more than 80 independent fact-checking partners who rate the accuracy of posts in more than 60 languages across its apps.

“We’ve displayed warnings on more than 190 million pieces of COVID-related content on Facebook that our fact-checking partners rated as false, partly false, altered or missing context.

“We’ve helped over 2 billion people find credible COVID-19 information through our COVID-19 Information Center and News Feed pop-ups,” it added.