Meta has unveiled a new community feedback policy in the U.S.
It targets fake reviews that appear in the Ratings and Reviews or Questions and Answers features on Marketplace and Shops.
Why Meta created the policy. Meta said it wants reviews to be based on real purchase experiences from real customers and “to keep irrelevant, fraudulent and offensive feedback off our platforms.”
The policy. Meta has put together a list of five things that could get reviews flagged:
- Manipulation. Community Feedback must not be used to misrepresent, deceive, defraud or exploit others for a financial or personal benefit.
- Incentivization. Community Feedback must not be directly or indirectly incentivized unless disclosed in compliance with Meta’s policies on Branded Content. We define incentivization as a business partner or seller providing something of value, such as a monetary payment, free gift, or refund, in exchange for Community Feedback ratings, reviews, or answers.
- Irrelevance. Community Feedback must be based on a reviewer’s direct experience with a product, business, or seller. Additionally, the feedback must be related to the intended use of the product and/or business.
- Graphic or Inappropriate Content. Community Feedback must not include any content or media that is excessively graphic, inflammatory, violent, discriminatory, or threatening to any person or group.
- Spam. Community Feedback must comply with Meta’s policies on Spam. This includes, but is not limited to, engagement baiting and high-frequency content posting, sharing, or promoting.
How Meta will enforce the policy. Meta said it will use a mix of automation and human reviewers. And based on this passage from their announcement, it sounds like we can all expect some false positives – and more “Facebook jail” sentences – as Meta tries to figure this out.
“When we launch a new policy, it can take time for the various parts of our enforcement mechanisms to learn how to correctly and consistently enforce the new standard. But as we gather new data, our machine learning models, automated technology and human reviewers will improve in their ability to ensure our Community Feedback tools—especially reviews—maintain their integrity, relevance and authenticity.”
Review sentiment shouldn’t impact enforcement. “Our Community Feedback Policy is intended to provide equal voice for all viewpoints that comply with Meta policies, including the full range of positive, negative and neutral feedback. As such, we treat all positive and negative feedback equally. We do not subject negative feedback to greater scrutiny when reviewed for policy violations nor do we alter feedback in any way before publishing,” according to Meta.
Fake Facebook reviews are an ongoing issue. It’s still cheap to buy fake reviews, as one consumer group proved as recently as October.
And Meta has been slow to take action.
In April 2021, Facebook removed 16,000 groups dealing in fake and misleading reviews, but only after they were identified by the UK’s competition regulator, the Competition and Markets Authority (CMA).
Facebook also agreed to make changes to its systems:
- Suspend or ban users who repeatedly create Facebook groups and Instagram profiles that promote, encourage or facilitate fake and misleading reviews.
- Introduce new automated processes to improve the detection and removal of fake and misleading reviews.
- Make it harder for people to use Facebook’s search tools to find fake and misleading review groups and profiles on Facebook and Instagram.
- Create dedicated processes to ensure these changes are working effectively.
In March, Meta announced a lawsuit against one person who had provided a fake Facebook engagement service.
Why we care. Fake reviews have long been an issue on all platforms, not just Meta. If you rely, or have relied, on fake reviews, tread carefully. Weigh your long-term success against some artificially inflated ratings. And even if all your reviews are legit, be warned: Meta’s review processes are heavily flawed, so expect some false positives.