As U.S. voters wait to hear who the next president will be, Twitter, Facebook, Google and other internet firms will be busy doing something else: monitoring their sites and deciding if and when to stop the spread of misinformation.

After the 2016 U.S. election, in which internet firms were criticized for allowing foreign-sponsored actors to use their networks to spread misinformation, they vowed to take steps to better protect their sites.

Once the coronavirus pandemic hit, companies began to more directly tackle misinformation related to the health crisis, observers say, and turned to more automated ways to moderate content, such as artificial intelligence. Those practices have carried over to efforts to address misinformation around the election, said Spandana Singh, a policy analyst with New America’s Open Technology Institute.

“A number of the policies and practices that they adopted for the U.S. elections were largely informed by their COVID-19 response,” she said.

Now that they’ve signaled more of a willingness to address misinformation, the tech firms are walking a tightrope: take steps to stop misinformation about the election from spreading, or allow people to express themselves, whether they’re sharing truth or falsehoods.
FILE – People wearing face masks during the coronavirus pandemic walk by the Twitter logo outside the New York City headquarters in Manhattan, Oct. 14, 2020.

Facebook said it could turn to its so-called “break-glass options.” What exactly that means, the company hasn’t said. But the Wall Street Journal reported that the company may turn to measures taken in Sri Lanka and Myanmar, such as deactivating hashtags related to false information about election results or suppressing viral posts that spread messages of violence or fake news.

“This election cycle is a really good testing ground for a number of new policies and practices,” Singh said. “Should they be effective, I definitely think they will be rolled out globally.”

FILE – The Facebook application is displayed on a mobile phone at a store in Chicago, July 30, 2019.

One problem with online misinformation is that it can spread widely before internet sites, which are also sensitive to claims they are suppressing certain viewpoints, decide to act, said Shannon McGregor, an assistant professor at the University of North Carolina, Chapel Hill.

“I worry if they will break the glass as quick as it might need to be done depending on what is happening in our post-election period,” she said.

While U.S. voters chart the future course of the nation, this Election Day is another test case of whether social media helps or hurts the democratic process.