Meta Tightens Noose on Harmful Content on Facebook
- Prevalence of hate speech on Facebook continued to decrease for the fourth quarter in a row. We’ve cut prevalence by more than half within the last year through improvements in our technology and changes we made to reduce problematic content in News Feed.
- For the first time, we’re sharing prevalence metrics for bullying and harassment on both Facebook and Instagram, prevalence metrics for hate speech on Instagram and all metrics for violence and incitement.
- To validate that our metrics are measured and reported correctly, we will undergo an independent audit by EY covering the fourth quarter of this year, with results to be released in spring 2022.
A summary of the report is available to download from Meta for Business.
How Facebook Reduces the Harmful Content People See
“Abuse of our products isn’t static — and neither is the way we approach our integrity work. We know we’re never going to be perfect in catching every kind of violating content, but we’re always working to improve. Our goal is to reduce the prevalence of harmful content, that is, the harmful content people actually see on our apps, while minimizing the mistakes that we make,” according to a statement by Meta.
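As context for the numbers above, prevalence is typically reported as the estimated share of content views that contain violating content, measured from a random sample of views. The following sketch illustrates that calculation; the function name, data shape, and labeling predicate are illustrative assumptions, not Meta's actual methodology or API.

```python
# Illustrative sketch of a prevalence estimate: the fraction of sampled
# content views that are labeled as violating. Names are hypothetical.

def estimate_prevalence(sampled_views, is_violating):
    """Estimate prevalence from a random sample of content views.

    sampled_views: one entry per sampled view of a piece of content
    is_violating: predicate labeling a viewed item as violating
    """
    if not sampled_views:
        return 0.0
    violating = sum(1 for view in sampled_views if is_violating(view))
    return violating / len(sampled_views)

# Example: 3 violating views out of 2,000 sampled gives 0.15% prevalence,
# i.e. roughly 15 views of violating content per 10,000 views.
views = ["ok"] * 1997 + ["bad"] * 3
print(estimate_prevalence(views, lambda v: v == "bad"))  # 0.0015
```

Because the metric counts views rather than posts, one widely seen violating post raises prevalence far more than many posts nobody sees, which is why ranking changes in News Feed can cut prevalence without removing more content.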
The company added that “We’re also using warning screens to educate and discourage people from posting something that may include hostile speech, such as bullying and harassment, that violates our Community Standards or Guidelines. The screens appear after someone has typed a post or comment, explaining that the content may violate our rules and may be hidden or have its distribution reduced.”
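The warning-screen flow described in the statement can be sketched roughly as follows: after a draft is typed, a classifier scores it, and if it may violate the rules a warning is shown before publishing. The scoring function, threshold, and message text below are illustrative assumptions, not Meta's implementation.

```python
# Hedged sketch of a pre-publish warning-screen flow. The classifier,
# threshold, and wording are hypothetical stand-ins.

def review_draft(text, score_fn, threshold=0.7):
    """Return a decision for a drafted post or comment before publishing."""
    score = score_fn(text)
    if score >= threshold:
        return {
            "action": "warn",
            "message": ("This may violate our Community Standards. "
                        "If posted, it may be hidden or its distribution "
                        "reduced."),
        }
    return {"action": "allow", "message": None}

# Toy keyword-based scorer standing in for a real hostile-speech classifier.
def toy_score(text):
    hostile = {"idiot", "loser"}
    return 1.0 if set(text.lower().split()) & hostile else 0.0

print(review_draft("you are a loser", toy_score)["action"])  # warn
print(review_draft("have a nice day", toy_score)["action"])  # allow
```

Note the design choice implied by the statement: the user is warned but not blocked, and the downstream consequence (hiding or reduced distribution) is applied only after posting.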