AI moderation
What is AI moderation
AI moderation in Lipscore is an automated solution that instantly screens reviews before publication, preventing inappropriate content from appearing on your website.
How to enable AI moderation
By default, AI moderation is turned off. You can turn it on anytime in your Lipscore account. Go to Settings – General and scroll down to the AI moderation section. Expand the section and switch the toggle to enable AI moderation.

Depending on which features are active in your Lipscore account, you will notice one or two lists of guidelines for moderation below – one for Product reviews and another for Service reviews. Both lists have the same set of default guidelines:
Guidelines:
- General Conduct
  - Respect Others
  - No Hate Speech
  - No Harassment or Bullying
- Content Restrictions
  - No Adult Content
  - No Graphic Violence
  - No Misinformation
  - No Personal or Confidential Information
- Safety and Security
  - No Encouragement of Harm
  - No Illegal Activities
  - Avoid “Doxxing”
- Miscellaneous
  - No Trolling
  - No Baiting
  - Use Proper Language
The list is customizable; however, to prevent misuse, new guidelines can only be added by the Lipscore team. If you need more specific guidelines – for instance, due to legal provisions regulating what content may be published on websites of businesses in your industry – do not hesitate to reach out to our Support or your Customer Success Manager with a request to expand the list.
Example: your company sells CBD oils. By law, reviews on your website cannot contain health claims – details of illnesses, or information about pain relief or sleep improvement. You may request that guidelines such as “No mention of sickness” or “No mention of sleep or pain” be added to the settings in your account.
Testing if the guidelines work well for your use case
You are more than welcome to test whether the filters work as you expect, using the Test your review box.
If a review meets the guidelines, you will see a green notification saying “The review does not violate the instructions”.

If a review goes against the guidelines, you will see a red notification that specifies which guideline is violated.

Management of AI moderated reviews
If AI moderation is activated in your Lipscore account, moderation runs continuously in the background. If a review violating the specified guidelines comes in, it will not be published automatically. Instead, it will be stored in the Reviews – Not published – Need attention section of your Lipscore account, and you will receive an email notification entitled “A review has been flagged and needs your attention”.

You will also see an indicator showing the number of reviews awaiting action in this section of your Lipscore account.

In the Need attention section you will notice an AI moderated label and a Reason for AI moderation on the review.

You can decide what to do next. If you think the review is acceptable, hit the Publish button to publish it to the site.
If not, use the Private message to author function – briefly explain to your shopper why their review is on hold and instruct them on the changes required for the review to be published. Once you send the private message, the review will be moved to the Pending – Author messaged tab, awaiting the review update.
Notifications
After you activate AI moderation, reload the page and scroll up to the Notifications section. Make sure that an Email address for general notifications is added and switch on the AI Moderation notification to receive emails.

If both Support trigger & AI moderation are active
If a review is caught by the Support trigger and flagged by AI moderation at the same time:
- You will get a Support trigger email notification and can go through the follow-up process with the shopper as usual. In your follow-up message you may also mention that the shopper’s review was flagged by moderation and request that they change whatever violates the guidelines. You will see the review in the Not published – Pending – Support trigger tab, with the AI moderated label and Reason for AI moderation on the review.
- After 5 working days, if the review was updated accordingly, it will be republished to the website. If it still contains inappropriate content, it will not be published but moved to the Not published – Need attention section of your Lipscore account, awaiting the required update.
The correctness of AI moderation
Please note that since the moderation is done by AI, it may not always be 100% correct. If you notice a few reviews flagged by AI moderation for no apparent reason, simply publish them manually to the website. If that happens frequently, consider more specific guidelines and reach out to our Support or your CSM with a request for additional criteria to be added to the AI moderation settings.

