Facebook is rolling out a new set of rules aimed at curbing the spread of manipulated media as the specter of highly convincing deepfake videos looms large over the US presidential election and beyond.
An announcement by the platform’s vice president of global policy management, Monika Bickert, reveals that Facebook is deploying a multi-pronged approach to deal with the growing threat of manipulated media created to spread disinformation and sway public opinion.
For one thing, Facebook will remove manipulated content that ticks both of these boxes:
- it has been “edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say”, and
- it is “the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic”.
It follows that the tighter policy applies to deepfake technology, which is an example of how machine-learning algorithms can be deployed for nefarious purposes. (Deepfakes were also singled out by ESET experts as one of the cybersecurity trends to watch out for in 2020.)
So far so good, but the ban won’t extend to other types of doctored media. More precisely, the social network won’t remove “video that has been edited solely to omit or change the order of words” or content altered for the sake of parody or satire.
One issue that may arise, then, is where to draw the line: how do you decide that a piece of content is meant to be humorous?
Nevertheless, Facebook vows not to sit on its hands when it comes to media that have been doctored, including with less advanced methods, but don’t meet the criteria for removal. Such content may still be subject to an independent fact-check and ultimately regulated, as it were.
“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false,” said Bickert.
She argued that taking such videos down wouldn’t stop people from viewing them elsewhere – all the while being unaware that the videos are fake. Leaving them up and labeling them as false instead will provide people with crucial information and context, she said.
ESET security specialist Jake Moore acknowledged Facebook’s move, but also noted that bans can only go so far and that we need to be more discerning, as well as ready for what’s to come. “Not only do we need better software to recognize these digitally manipulated videos, but we also need to make people aware that we are moving towards a time where we shouldn’t always believe what we see,” said Moore.