Facebook already has a base of 1.56 billion daily active users, and it disabled more than 3 billion fake accounts in the six months through the first quarter of 2019. Those figures come from the company's latest Community Standards Enforcement Report, and the details are even more disturbing than the headline numbers.
Facebook recently published its third Community Standards Enforcement Report. For the first time, it includes data on appeals and restored content, as well as data about regulated goods on the platform.
Facebook now tracks metrics across nine policy areas: adult nudity and sexual activity, bullying and harassment, child nudity and sexual exploitation of children, fake accounts, hate speech, regulated goods, spam, global terrorist propaganda, and violence and graphic content.
“We have a responsibility to protect people’s freedom of expression in everything we do,” Facebook CEO Mark Zuckerberg said on a call with reporters shortly after the Community Standards update was published. “But at the same time, we also have a responsibility to keep people safe on Facebook and prevent harm from playing out across our services.”
In other words, the company is dealing with high volumes of harmful content, and that is not the whole story: the numbers reveal more than you might expect.
Here are some of the most significant points from the report:
- In Q1 2019, the company disabled 2.19 billion fake accounts. For comparison, the site has 1.56 billion daily active users and 2.38 billion monthly active users.
- According to the report, about 5% of monthly active accounts are fake.
- In the first quarter, Facebook took down almost 4 million hate speech posts. It says it cannot yet share the “prevalence” of hate speech.
- Justin Osofsky, Facebook’s vice president of global operations, said on a press call that Facebook will run a pilot program in which some of its content reviewers specialize in hate speech.
- In the last three quarters, Facebook took action on 21 million pieces of content involving child nudity and the sexual exploitation of children.
- Facebook took action on an estimated 900,000 pieces of drug sale content, 83.3% of which it said it caught proactively. In the same period, it took action on 670,000 pieces of firearm sale content, 69.9% of which it caught proactively.
- Facebook states that before taking action on a piece of content it notifies the user who posted it, and in many cases gives them the ability to say if they think a mistake was made, though it also admits its “enforcement isn’t perfect.”
- “Our budget [for Facebook’s safety and security systems] in 2019 is greater than the whole revenue of our company in the year before we went public, in 2012,” Zuckerberg told reporters Thursday. A Facebook spokesperson had earlier clarified, in a report in Variety, that Zuckerberg was referring to the company’s 2011 financials. That year, FB pulled in $3.7 billion in revenue.
In its recent blog post, Facebook highlighted two primary metrics: “prevalence” and “content actioned.” Prevalence estimates how much harmful content appears on the site, including content the company hasn’t yet identified. “Content actioned” counts how many pieces of content Facebook took action on. Facebook says it put this data together “by periodically sampling content viewed on FB and then reviewing it to see what percent violates our standards.” On the other hand, we cannot verify that Facebook’s statistics are accurate, because the company does not give outsiders access to its full trove of content. (Facebook has set up a Data Transparency Advisory Group to check its work independently, but even that group is not allowed access to all of the data across the site.)
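The sampling approach Facebook describes can be illustrated with a minimal sketch. Everything here is hypothetical: the function name, the toy data, and the violation check are all made up for illustration, not taken from any Facebook system.

```python
import random

def estimate_prevalence(viewed_content, sample_size, violates):
    """Estimate the share of viewed content that violates policy
    by reviewing a random sample, as in Facebook's description."""
    sample = random.sample(viewed_content, sample_size)
    violations = sum(1 for item in sample if violates(item))
    return violations / sample_size

# Toy population: 1 in 20 views is "violating" in this made-up data.
population = ["violating"] * 50 + ["benign"] * 950
estimate = estimate_prevalence(population, 200, lambda x: x == "violating")
print(f"Estimated prevalence: {estimate:.1%}")
```

Because the estimate comes from a random sample, it will vary from run to run around the true rate, which is why such figures are usually reported with a margin of error.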
Facebook also reports a “Proactive Rate” metric: the share of actioned content in a given policy area that its AI systems flagged before any person reported it. For hate speech, it now proactively detects 65% of the content it removes, up from 24% just over a year ago.
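The arithmetic behind a proactive rate is simple division; a short sketch using the drug-sale figures quoted above (the function and intermediate numbers are illustrative, not Facebook's own code):

```python
def proactive_rate(flagged_by_ai_first, total_actioned):
    """Share of actioned content flagged by automated systems
    before any user report."""
    return flagged_by_ai_first / total_actioned

# ~900,000 pieces of drug sale content actioned, 83.3% caught
# proactively, implies roughly 749,700 caught before a report.
total = 900_000
proactive = int(total * 0.833)
print(f"Proactive rate: {proactive_rate(proactive, total):.1%}")
# prints: Proactive rate: 83.3%
```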
Alex Stamos, Facebook’s former chief security officer, tweeted a caution about how the billions of disabled accounts should be read: the figure reflects both the scale of attacks on Facebook and the firmness of the company’s action in policing them.
The intentional misuse of this statistic by some folks in the media is why the companies don't give us transparency.
This is an attacker-influenced metric, you have to be extremely careful in how you use it. https://t.co/hY5z3ZezLX
— Alex Stamos (@alexstamos) May 6, 2019
The problem will never be solved completely, as Facebook’s own chief technology officer admits. “It’s never going to go to zero,” Mike Schroepfer said of problematic posts on Facebook in a recent interview with the New York Times. “AI is not a silver bullet,” Guy Rosen, Facebook’s vice president of integrity, said in a call with reporters Thursday.