From congressional hearings to meetings with law enforcement and intelligence officials, social media companies have been in the spotlight in the run-up to the 2020 election season. Indeed, the 2016 US presidential election, which was mired in controversy and allegations of election meddling, is still fresh in many people’s minds. So, it comes as no surprise that the social media behemoths are upping their game and preparing measures to ward off threats to election integrity.
With the 2020 US presidential election just over a year away, Facebook, for one, announced measures earlier this week that are designed to stop election interference. Its efforts rest on three main pillars: fighting interference, increasing transparency, and reducing misinformation. Each of these pillars is broken down into a number of steps the social network is taking to achieve these goals.
Combating inauthentic behavior is one such step. An example is Facebook’s removal of Pages, Groups, and accounts on both Facebook and Instagram that exerted influence through coordinated manipulative activity. The plug was pulled based on the behavior of these accounts rather than purely on the content they shared.
The social network has also launched a new Facebook Protect feature, which adds an extra layer of security to the accounts of political figures and their staff. The feature includes mandatory two-factor authentication, and accounts enrolled in Facebook Protect will be actively monitored for signs of hacking. That said, the social media giant admits that the measure is not foolproof, stating in its press release that “because campaigns are generally run for a short period of time, we don’t always know who these campaign-affiliated people are, making it harder to help protect them”.
To curb misinformation on both Facebook and Instagram, the company reduces its distribution so that it reaches fewer people and removes it from the Explore and News Feed features. Content rated false or partly false by a third-party fact-checker will be labeled as such, leaving it to users’ discretion to decide whether it is trustworthy. If users attempt to share such content, a pop-up will warn them that the post in question contains false information debunked by the fact-checker.
To increase transparency, Facebook now lets users check a Page’s provenance by showing its primary country location and whether it has merged with other Pages. In addition, extra context is provided through a new “Organizations That Manage This Page” tab, which lists the organization’s legal name, verified city, phone number, and website. This tab will appear on Pages that have large US audiences and have gone through Facebook’s business verification process.
Facebook is also taking a new approach to state-controlled media, meaning outlets that are wholly or partially under the editorial control of their governments. These outlets will be labeled both on their Pages and in Facebook’s Ad Library and will be held to a higher standard of transparency.
Meanwhile, Twitter disclosed its own efforts to prevent platform manipulation last month. It plans to keep enhancing its work in this area by routinely disclosing data related to state-backed information operations on its network.