We have recently seen social media become increasingly volatile. The proliferation of ‘fake news’ and the widening divide between left and right are not only disrupting politics but also presenting significant challenges to those in charge of a corporation’s social media policy.
The algorithmic structure of social media platforms (such as Facebook and Twitter) feeds information to users based on the content they interact with most. Designed to boost engagement with the platform, this has resulted in users being fed only information they agree with. This is known as a ‘social media bubble’. It is in these bubbles that ideological concentration is fostered, allowing for the creation, intensification and normalisation of misinformation. Viral ‘fake news’ is a deliberate exploitation of social media algorithms and of the normalisation of fabrication across news feeds.
The threat to corporations lies in the potential for savvy activists (and trolls) to exploit the bubble and create rogue content that harms an organisation’s reputation amongst its consumers. For example, customer complaints posted directly on social media, whether real or false, often garner significant attention from users. In more serious cases, they attract negative media attention that can damage reputation and consumer relations. During an external crisis, the same processes can also intensify issues by fuelling public outrage. If left unmanaged, the damage may far exceed that caused by attacks from mainstream media alone.
Further, the administrators of corporate social media pages are responsible for all third-party content and may be liable for content that is defamatory. During a social media crisis, vast amounts of customer content may be posted to a company’s social media channels, and it is vital that administrators monitor every interaction.
This means the aims of a social media policy must be not only to generate positive interaction but also to prepare for rogue content. As current political events are revealing, correcting viral misinformation is not enough to remedy reputational damage. With the algorithm literacy of social media users set to increase, corporate social media policy must stay one step ahead. Corporates and their social media managers must understand these algorithms if they are to prevent and manage misinformation before it escalates into a crisis. The best way to achieve this is to ensure a crisis preparation manual includes a clear plan for social media and its use and misuse in issues management.
James Mort, Intern