Here are some key takeaways from the tens of thousands of pages of internal documents.
In one SEC disclosure, Haugen alleges “Facebook misled investors and the public about its role perpetuating misinformation and violent extremism relating to the 2020 election and January 6th insurrection.”
“While this was a study of one hypothetical user, it is a perfect example of research the company does to improve our systems and helped inform our decision to remove QAnon from the platform,” a Facebook spokesperson told CNN.
In response to these documents, a Facebook spokesperson told CNN, “The responsibility for the violence that occurred on January 6 lies with those who attacked our Capitol and those who encouraged them.”
Lack of global support
Although Facebook’s platforms support more than 100 different languages globally, a company spokesperson told CNN Business that its content moderation teams are made up of “15,000 people who review content in more than 70 languages working in more than 20 locations” around the world.
A Facebook spokesperson told CNN the company added hate speech classifiers for “Hindi in 2018, Bengali in 2020 and Tamil and Urdu more recently.”
According to one internal report from September 2019, a Facebook investigation found that “our platform enables all three stages of the human exploitation lifecycle (recruitment, facilitation, exploitation) via real-world networks. … The traffickers, recruiters and facilitators from these ‘agencies’ used FB [Facebook] profiles, IG [Instagram] profiles, Pages, Messenger and WhatsApp.”
“We prohibit human exploitation in no uncertain terms,” Facebook spokesperson Andy Stone said. “We’ve been combatting human trafficking on our platform for many years and our goal remains to prevent anyone who seeks to exploit others from having a home on our platform.”
Inciting Violence Internationally
Internal documents indicate Facebook knew its existing strategies were insufficient to curb the spread of posts inciting violence in countries “at risk” of conflict, like Ethiopia.
Facebook relies on third-party fact-checking organizations to identify, review and rate potential misinformation on its platform, using an internal tool that surfaces content flagged as false or misleading through a combination of AI and human moderators.
Impact on Teens
According to the documents, Facebook has actively worked to expand the size of its young adult audience even as internal research suggests its platforms, particularly Instagram, can have a negative effect on their mental health and well-being.
Although Facebook has previously acknowledged young adult engagement on the Facebook app was “low and regressing further,” the company has taken steps to target that audience. In addition to a three-pronged strategy aimed at having young adults “choose Facebook as their preferred platform for connecting to the people and interests they care about,” the company pursued a variety of strategies to “resonate and win with young people.” These included “fundamental design & navigation changes to promote feeling close and entertained,” as well as continuing research to “focus on youth well-being and integrity efforts.”
Algorithms fueling divisiveness
In 2018, Facebook pivoted its News Feed algorithm to focus on “meaningful social interactions.” Internal company documents reviewed by CNN reveal that Facebook discovered shortly afterward that the change was fueling anger and divisiveness online.
A late 2018 analysis of 14 publishers on the social network, entitled “Does Facebook reward outrage,” found that the more negative comments a Facebook post incited, the more likely users were to click the link in the post.
“The mechanics of our platform are not neutral,” one staffer wrote.
CNN’s Clare Duffy, Donie O’Sullivan, Eliza Mackintosh, Rishi Iyengar and Rachel Metz contributed reporting.