Meta is launching an updated Community Feedback Policy in the United States to ensure that reviews on Facebook are based on real purchasing experiences. Although the company has already taken steps against abusive reviews, the new policy lays out these rules and guidelines in writing.
In a blog post, Meta said the policy prohibits review manipulation, incentivized reviews, irrelevant content, graphic content and spam on business pages on Facebook. The new guidelines are meant to address situations where people are paid to leave positive reviews on a business’s page, as well as instances where people leave fake negative reviews as a way to get refunds.
“Our Community Feedback Policy is intended to provide equal voice for all viewpoints that comply with Meta policies, including the full range of positive, negative and neutral feedback,” Meta said in a blog post. “As such, we treat all positive and negative feedback equally. We do not subject negative feedback to greater scrutiny when reviewed for policy violations nor do we alter feedback in any way before publishing.”
Meta notes that it relies on automated technology and human reviewers to help ensure all feedback complies with its policies. The company says it can take some time for its enforcement mechanisms to learn how to enforce new policies, but that its machine learning models and human reviewers will improve over time.
The company says that although it will continue to improve its detection and enforcement to identify policy violations, it also encourages people to report suspicious reviews. You can report a suspicious review on Facebook by clicking the three dots in the upper-right corner of the review. Businesses can report violations through Commerce Manager.
“We create and update all of our policies in line with the industry and technology as it evolves and we design our policies to keep misleading, false and abusive content off our platforms,” Meta notes. “While we know our work is never done, we are committed to making our technologies a trusted place for our community.”
The updated policy should, in theory, help curb the fake reviews that have plagued the platform for quite some time.
The updated policy also follows Facebook’s removal last year of 16,000 groups that were trading fake reviews on its platform. The company carried out the removals after an intervention by the UK’s Competition and Markets Authority (CMA).
At the time, the social media giant made further changes to the systems it uses for “identifying, removing and preventing the trading of fake and/or misleading reviews on its platforms to ensure it is fulfilling its previous commitments.” This included suspending or banning users who repeatedly create Facebook groups and Instagram profiles that promote, encourage or facilitate fake and misleading reviews. It also included the introduction of new automated processes to improve the detection and removal of this content.
Facebook also made it harder for people to use its search tools to find fake and misleading review groups and profiles on Facebook and Instagram. Lastly, the company said it was putting dedicated processes in place to ensure these changes continue to work effectively and stop the problems from reappearing.