It also explains some of the reasons - usually external events, or advances in the technology used to detect objectionable content - for large swings in the number of violations found between Q4 and Q1.
These releases come in the wake of the Cambridge Analytica scandal, which has left the company battling to restore its reputation with users and developers - though employees have said the decision to release the Community Standards was not driven by recent events.
The data also illustrates where Facebook's AI moderation systems are effectively identifying and taking down problematic content - and the areas where they still struggle. Of all the content viewed by users, between 0.22% and 0.27% of views were of rule-breaking content showing graphic violence.
Over the past year, the company has repeatedly touted its plans to expand its team of reviewers from 10,000 to 20,000.
"Yes there are clear skews in many of these metrics", said Schultz.
The number of pieces of content depicting graphic violence that Facebook took action on during the first quarter of this year was up 183% on the previous quarter.
Facebook's new Community Standards Enforcement Report "is very much a work in progress and we will likely improve our methodology over time", Chris Sonderby, VP and deputy general counsel, wrote in a blog post about the report. AI did, however, flag 99.5% of terrorist content on Facebook and 95.8% of posts containing nudity.
In total, the social network took action on 3.4 million posts or parts of posts that contained such content.
However, it declined to say how many minors - legal users who are between the ages of 13 and 17 - saw the offending content. Additionally, the company acted on 21 million pieces of content containing nudity or sexual activity, 3.5 million posts that displayed violent content, 2.5 million examples of hate speech and 1.9 million pieces of terrorist content.
This led to both old and new content of this type being taken down. For every 10,000 views of content on Facebook in the first quarter, the company said, roughly 8 were removed for featuring sex or nudity, up from 7 at the end of the previous year.
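Those per-10,000 figures and the percentage prevalence range quoted earlier describe the same views-based metric at different scales: violating views divided by total views. A minimal sketch of the conversion, using only the figures cited in this piece (the helper names are illustrative, not Facebook's):

```python
# Illustrative only: prevalence = violating views / total views,
# quoted either as a percentage (0.22-0.27% for graphic violence)
# or per 10,000 views (~8 for sex/nudity).

def per_10k_to_pct(views_per_10k: float) -> float:
    """Convert a per-10,000-views rate to a percentage of all views."""
    return views_per_10k / 10_000 * 100

def pct_to_per_10k(pct: float) -> float:
    """Convert a percentage of all views to a per-10,000-views rate."""
    return pct / 100 * 10_000

# ~8 violating views per 10,000 is 0.08% of all views.
print(round(per_10k_to_pct(8), 2))  # 0.08
# The 0.22-0.27% graphic-violence range is 22-27 views per 10,000.
print(round(pct_to_per_10k(0.22)), round(pct_to_per_10k(0.27)))  # 22 27
```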
"Hate speech content often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards", the company added in the report.
The social network says that taking action on flagged content does not necessarily mean it has been taken down. Its automated tools worked particularly well for content such as fake accounts and spam: the company said it managed to use them to find 98.5% of the fake accounts it shut down, and "nearly 100%" of the spam. In Q1, it disabled 583 million fake accounts, down 16% from 694 million a quarter earlier.
The 1.9 million pieces of terrorist content represented a rise of nearly three quarters from 1.1 million in the previous quarter, which Facebook attributed to improvements in its ability to find such content using photo-detection technology.
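The quarter-over-quarter swings quoted throughout the report reduce to simple percent-change arithmetic. A minimal sketch checking two of the figures above (again, the helper name is illustrative):

```python
# Illustrative only: the report's quarter-over-quarter comparisons
# are plain percent-change arithmetic.

def pct_change(previous: float, current: float) -> float:
    """Percent change from the previous quarter to the current one."""
    return (current - previous) / previous * 100

# Fake accounts disabled: 694 million in Q4 to 583 million in Q1.
print(round(pct_change(694, 583)))  # -16, i.e. "down 16%"

# Terrorist content actioned: 1.1 million in Q4 to 1.9 million in Q1.
print(round(pct_change(1.1, 1.9)))  # 73, i.e. "up by nearly three quarters"
```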