WEDNESDAY 15 NOV 2023 10:11 AM

MISINFORMATION: THE CONSEQUENCE OF GOING VIRAL ON SOCIAL MEDIA

Richard Bagnall, co-managing partner at CARMA, explores the perils of misinformation and how technology can help to tackle it.

Misinformation has become a rampant force that challenges the trust between organisations and consumers. From the AI-generated “Barbenheimer” advert to AI-dubbed celebrity voices covering Kanye West and Drake songs, it’s a trend and a risk that’s just getting started and only set to grow.

The era of accountability and cancel culture demands that organisations be on high alert. Acting fast is essential to effectively combat the influx of misinformation circulating online. With research by Newsworks revealing that 79% of consumers in the UK are concerned about fake news, legacy technology is no longer enough to provide businesses with clarity amid the chaos.

The proliferation of artificial intelligence (AI) means content generation is no longer limited to a person with a pen and a computer. As the scale and velocity of AI-generated content grow, misinformation is following suit. So, how can communications professionals use AI and data for good to be crisis-ready?

How to flag zero or hero content

In the battle against misinformation, time is of the essence, and misleading narratives can spread like wildfire. Consider the 2020 U.S. presidential election, when AI-manipulated deepfake videos of Joe Biden put words in the now-president’s mouth and sold a distorted version of reality to the public. The videos called the integrity of the election into question by sowing doubt and confusion among voters.

AI capabilities built into monitoring and measurement tools can be vital for identifying and flagging hero or zero content. To implement this effectively, organisations can use automation to enhance their crisis management strategies. By automating the analysis of large volumes of data, AI can summarise the sentiment of a hashtag or a volume of articles in a way that a crisis team could never achieve manually.
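To make that concrete, here is a minimal sketch of automated sentiment summarisation across a batch of posts. The article does not name a specific toolset; this example assumes Python with NLTK’s VADER analyser, and the sample posts and thresholds are purely illustrative.

```python
# Minimal sketch: tally positive/negative/neutral sentiment across a batch of
# posts collected under one hashtag. The posts and thresholds are placeholders.
from collections import Counter

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download


def summarise_sentiment(posts: list[str]) -> Counter:
    """Classify each post and return a count of positive, negative and neutral items."""
    analyser = SentimentIntensityAnalyzer()
    tally = Counter()
    for text in posts:
        score = analyser.polarity_scores(text)["compound"]  # -1.0 (negative) to +1.0 (positive)
        if score >= 0.05:
            tally["positive"] += 1
        elif score <= -0.05:
            tally["negative"] += 1
        else:
            tally["neutral"] += 1
    return tally


if __name__ == "__main__":
    # Hypothetical posts pulled from a monitoring feed for one hashtag.
    sample_posts = [
        "Great response from the brand, very transparent.",
        "This is a cover-up, do not trust them.",
        "Press conference scheduled for 3pm.",
    ]
    print(summarise_sentiment(sample_posts))
```

Running the same tally on every new batch of mentions gives a crisis team a rolling read on how a hashtag is trending without anyone reading each post by hand.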

AI can also analyse content to detect subtle inconsistencies and discrepancies, enabling organisations to debunk false narratives before they inflict lasting damage. By incorporating these measurement approaches into workflows, organisations can significantly enhance their ability to respond swiftly and effectively to crises. 
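The piece does not spell out how such inconsistency checks work in practice, and production systems weigh many signals at once (provenance, language models, network patterns). As a simplified illustration only, the sketch below flags posts whose quoted figures diverge sharply from a verified reference value; the posts, reference figure and tolerance are all hypothetical.

```python
# Simplified illustration of one narrow kind of inconsistency check: comparing
# numbers quoted in circulating posts against a verified reference figure.
import re


def extract_numbers(text: str) -> list[float]:
    """Pull bare numeric figures out of a post."""
    return [float(n) for n in re.findall(r"\d+(?:\.\d+)?", text)]


def flag_discrepancies(posts: list[str], verified_value: float, tolerance: float = 0.1) -> list[str]:
    """Flag posts quoting figures more than `tolerance` (10%) away from the verified value."""
    flagged = []
    for post in posts:
        for number in extract_numbers(post):
            if abs(number - verified_value) / verified_value > tolerance:
                flagged.append(post)
                break
    return flagged


if __name__ == "__main__":
    posts = [
        "Official report confirms 120 sites affected.",
        "Sources say over 5000 sites were affected!",  # inconsistent with the verified figure
    ]
    print(flag_discrepancies(posts, verified_value=120))
```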

Curb a crisis in real-time

Organisations need more than sporadic assessments of the situation to combat the spread of misinformation effectively. They can start by integrating monitoring and analytics solutions that measure the main drivers of discussion, identify which platforms and stakeholders are influencing the narrative, and assess how the audience is reacting.

For instance, during the COVID-19 pandemic, real-time media monitoring played a pivotal role in tracking the spread of misinformation about the virus. To implement analytics effectively, organisations can establish automated monitoring systems that continuously track how coverage is distributed across media channels and how sentiment evolves in real time. This proactive approach, powered by technology, helps organisations stay ahead of the curve and identify potential threats before they take on a life of their own.
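As a rough sketch of that kind of monitoring loop (the article does not describe a specific architecture), the example below polls a hypothetical fetch_latest_mentions() feed, averages sentiment per channel and raises an alert when negative coverage reaches meaningful volume; the provider call, thresholds and channel names are all assumptions.

```python
# Rough sketch of a real-time monitoring loop: track mention volume and average
# sentiment per channel, and alert when negative coverage spikes.
import time
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Mention:
    channel: str      # e.g. "news", "x", "tiktok"
    sentiment: float  # -1.0 (negative) to +1.0 (positive), scored upstream


def fetch_latest_mentions() -> list[Mention]:
    """Placeholder for whichever monitoring provider an organisation actually uses."""
    return []


def monitor(poll_seconds: int = 60, alert_threshold: float = -0.3, min_volume: int = 20) -> None:
    while True:
        mentions = fetch_latest_mentions()
        by_channel: dict[str, list[float]] = defaultdict(list)
        for m in mentions:
            by_channel[m.channel].append(m.sentiment)

        for channel, scores in by_channel.items():
            average = sum(scores) / len(scores)
            # Alert only when negativity is both strong and high-volume, so a
            # handful of posts does not trigger a crisis response.
            if average <= alert_threshold and len(scores) >= min_volume:
                print(f"ALERT: {channel} averaging {average:.2f} across {len(scores)} mentions")

        time.sleep(poll_seconds)
```

In practice the alert would feed a dashboard or notify the crisis team rather than print to a console, but the structure of the loop stays the same.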

Armed with these insights, organisations can craft targeted responses to their stakeholders, whether that means emailing customers directly with the facts or issuing a press release to journalists to set the record straight.

Closing gaps in a tech stack

Legacy systems often lack the agility and processing power required to navigate the fast-paced world of misinformation. Take the Volkswagen emissions scandal in 2015, for example. The German automobile manufacturer faced widespread criticism when it was revealed that it had manipulated vehicle emissions data. Its rumoured reluctance to upgrade its data systems proved costly, not only in financial terms but also in significant damage to its reputation and consumer trust. As a result, Volkswagen’s commitment to environmental standards was called into question.

Outdated digital architecture, poor data structures, a lack of updates, and an inability to scale efficiently all contribute to the limitations of existing tech stacks. Organisations can begin modernising their digital infrastructure with a thorough assessment of where the gaps in their existing tech stack lie and what needs updating. Once an audit is done, organisations can swiftly identify and rectify issues that could negatively affect the business in both the short and long term.

Moderating misinformation

The scale and velocity of AI will only continue to grow over the next twelve months, and so will the challenge of curbing the spread of misinformation. Communicators face a choice: adapt and put safeguards in place to monitor and verify content, or risk fuelling the spread of dangerous and inaccurate information.

A wealth of tools is available to help scan, analyse and disseminate content, but humans play a fundamental role in moderating them. Both need to work together to consistently track conversations and sentiment online. Technology can help organisations and people make informed decisions and tackle a rising misinformation epidemic that, if left unmanaged, will have devastating consequences for everyone.