Anti-vaxxers ‘use carrot emoji’ on Facebook to avoid detection by moderators

Researchers have warned social media algorithms are often poor at picking up harmful use of emojis
Josh Salisbury, 16 September 2022

Anti-vaxxers are using the carrot emoji as a code on Facebook to avoid detection by moderators, according to a report.

Facebook groups espousing anti-vaccine views are using the emoji instead of the word “vaccine” in an effort to beat Facebook’s moderation algorithms.

According to a BBC investigation, these groups are being used to share unverified claims of people being either injured or killed by vaccines.

One of the groups had more than 250,000 members and instructed them to use “code words for everything” and never to use the words “Covid”, “vaccine” or “booster”.

Marc Owen-Jones, a disinformation researcher and associate professor at Hamad Bin Khalifa University in Qatar, told the BBC he was invited to join one such group.

“It was people giving accounts of relatives who had died shortly after having the Covid-19 vaccine,” he said.

“But instead of using the words ‘Covid-19’ or ‘vaccine’, they were using emojis of carrots.

“Initially I was a little confused. And then it clicked: it was being used as a way of evading, or apparently evading, Facebook’s fake news detection algorithms.”
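
As a rough illustration of why the substitution works, consider a minimal keyword filter. This is a hypothetical sketch in Python, not Facebook’s actual system, and the blocked terms are assumed from the group’s own instructions:

    # Hypothetical sketch of a naive keyword-based moderation filter.
    # Not Facebook's real system; the block list is assumed from the
    # words the group told members to avoid.
    BLOCKLIST = {"covid", "vaccine", "booster"}

    def is_flagged(post: str) -> bool:
        """Flag a post if any blocklisted word appears, case-insensitively."""
        words = (w.strip(".,!?\"'") for w in post.lower().split())
        return any(w in BLOCKLIST for w in words)

    print(is_flagged("She fell ill after the vaccine"))  # True: caught
    print(is_flagged("She fell ill after the 🥕"))        # False: slips through

Replacing a flagged word with an emoji leaves a purely lexical filter nothing to match on, which is the evasion the researchers describe.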

The NHS advises that vaccines are both safe and effective, and side effects of vaccination are typically very mild and do not last long.

The Standard contacted Meta, Facebook’s parent company, for comment and did not receive a response.

But a spokesperson told the BBC: “We have removed this group for violating our harmful misinformation policies and will review any other similar content in line with this policy.

“We continue to work closely with public health experts and the UK government to further tackle Covid vaccine misinformation.”

Last year, research published by the Oxford Internet Institute found that algorithms on social media sites often cannot recognise abusive or harmful use of emojis.

“Despite having an impressive grasp of how language works, AI language models have seen very little emoji. They are trained on corpora of books, articles and websites, even the entirety of English Wikipedia, but these texts rarely feature emoji,” said researcher Hannah Rose Kirk.

“The lack of emoji in training datasets causes a model to err when faced with real-world social media data, either by missing hateful emoji content (false negatives) or by incorrectly flagging innocuous uses (false positives).”
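
One common mitigation, sketched below on the assumption of a text-only classifier, is to translate emoji into plain-text descriptions before a post reaches the model. The tiny mapping here is illustrative only; the third-party Python “emoji” package offers a fuller demojize helper along the same lines:

    # Hypothetical sketch: normalise emoji to words before classification,
    # so a model trained on emoji-free text at least sees ordinary tokens.
    # The mapping is illustrative, not drawn from the Oxford study.
    EMOJI_TO_TEXT = {"🥕": "carrot", "💉": "syringe"}

    def demojize(post: str) -> str:
        """Replace each known emoji with a plain-text name."""
        for symbol, name in EMOJI_TO_TEXT.items():
            post = post.replace(symbol, f" {name} ")
        return " ".join(post.split())

    print(demojize("She fell ill after the 🥕"))
    # -> "She fell ill after the carrot"; the model now sees a word,
    # though it must still learn that "carrot" is being used as code.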
