In part one of this series we covered the effect of media on democracy in the digital world. Though many people still get their news from newspapers and TV, much of today’s news is shared on social networks. There’s no doubt that social media can enhance a political campaign; the most high-profile example was the 2016 U.S. presidential election.
However, is the increasing move to digital helpful or harmful to the democratic process? This article examines the relationship between social media and democracy in the digital era. Consider this: do you know where your news comes from? You may assume that the article a friend shared on Facebook or Twitter is legitimate. People usually don’t examine a piece of media too closely when it’s shared by a trusted friend or associate.
The rise of so-called fake news has put an end to much of that trust. Political candidates are spending millions trying to influence voters, most of whom are on social media. Fake news travels much faster than real news because it’s sensational and evokes strong emotions.
Turkey is one example of how a digital campaign can hurt democracy. As part of a campaign to stop the free spread of information, there was a move to control all online content. One could argue that this restricts the voting populace’s access to news media to government-controlled sources only. On the other side of the coin is social media’s ability to give people a voice and encourage democracy. In the United Kingdom, the Labour Party used social media in its 2017 campaign to increase voter registration. This especially affected the younger generation, and the party saw its share of the vote among 18- to 34-year-olds rise as a result.
Social media companies and responsibility
Users on social media consume the content, and political parties produce it. The middleman in all of this, the platform, is often overlooked.
Should social media companies be responsible for what’s shared on their sites?
After all, Facebook, Google, Instagram, and the like are all companies that want to make money, and they have proven to be bad at regulating the spread of misinformation on their platforms. Facebook tried to address it with a “fake news” flag, a project it has since scrapped. Studies have shown that even when a non-credible source is flagged, reading the content still affects the user’s views.
Governments have tried to regulate the platforms, such as Germany’s move to fine platforms that don’t remove abusive posts. The potential fines run into the millions, which is bad news for tech companies: if they don’t regulate themselves, governments will impose harsh measures that cut into their profits. Even with regulation of the content that is shared, oppressive political regimes can still restrict access to the platforms themselves. All of this means that in the future we will have to be more mindful of where our news comes from.