MONDAY 18 DEC 2023 4:40 PM

THE DANGERS OF DISINFORMATION FOR COMPANIES

This year, digital bank runs and reputational crises have served as a warning to business leaders to take the influence of social media on stakeholders much more seriously.

The dramatic collapse of Silicon Valley Bank this year has been described by politicians as the “first Twitter-fuelled bank run.” Having invested heavily in its relationships with depositor customers through lavish events and expensive wines, the bank likely felt confident in the loyalty of its customer base.

However, rising interest rates and SVB’s large portfolio of long-term, low-interest assets fuelled liquidity concerns, and soon the highly networked, active and demanding depositors had worked themselves into a frenzy stoked by WhatsApp notifications and Twitter threads. It suddenly made having a sleepy, disengaged depositor base seem much more appealing.

Like many other banks, governments and businesses, SVB was dependent on a relationship of trust with its customers, and yet the spread of information and heightened emotions across social media platforms saw this hard-won trust unravel breathtakingly fast. While SVB may be an extreme case, it has exposed a larger issue: most organisations are ill-prepared to deal with communications crises in a modern world, where social media moves faster than it can be tracked and leaves organisations acutely vulnerable to disinformation attacks.

A recent example is the controversy around Zara, which drew criticism for an advertising campaign featuring models alongside mannequins wrapped in white - imagery that some viewers found insensitive for its resemblance to photographs of dead bodies in Gaza. The campaign, which Zara has since pulled, triggered calls for a boycott, and criticism escalated from online backlash to protests outside some of the chain’s stores. An analysis by social threat intelligence company Cyabra identified 39% of the profiles interacting with Zara’s X (formerly Twitter) account as fake.

Originally built to work with government intelligence units, Cyabra is increasingly being approached by publicly listed companies for help. “We’re seeing a massive increase in the same techniques and methods that are used in elections - against governments and against societies - now being used against companies,” says Rafi Mendelsohn, vice president of marketing at Cyabra. “We’re seeing impersonation, brand reputation attacks, stock market manipulation and the use of social media in ‘phishing’ attempts.

“And this is particularly stark when it comes to crisis communications: the old playbook is being re-written.”

One of the biggest concerns around such attacks is that, naturally, emotions run high in times of crisis, and brands rush to respond. Mendelsohn explains: “In such scenarios, it’s hard for communicators to understand the scale of the conversation and how quickly it is growing, in a very short space of time. We call this the cyber-snowball effect.

“There is definitely a gap in the tools that crisis communicators have access to, in terms of being able to make those sophisticated distinctions between the real and the fake.”

Information moves - and snowballs - so quickly in cyberspace that it is hard to detect what is accurate and what is not. And while this leaves many disgruntled bankers keen to muzzle social media, this seems an impossibility while X is run by self-described “free speech absolutist” Elon Musk. Brands are still learning how to respond to such crises. “There's a huge pressure for brands to move quickly and then the pressure is probably applied even more so on the communications teams,” Mendelsohn explains. “And the first response is to apologise and take the campaign down.

“I wonder if those professionals had better, quicker and easier tools to be able to run analyses [for disinformation] quickly, then the conversation internally would be different - it would be more sophisticated.

“Being able to differentiate between real and fake conversations allows brands to know what real people are saying and to gauge the true sentiment among real people.”

Providing a more nuanced and informed response to disinformation crises has a preventative appeal too. “The other thing to consider as well is the cybersecurity issue – cybersecurity hackers are much more sophisticated,” Mendelsohn says.

“If brands have slightly more advanced monitoring and tracking, and are able to distinguish between the real and the fake - and communicate that as well - then the malicious actors behind the attacks are less likely to continue their attack on that brand.”

Recent research by communications agency Kekst CNC shows that 95% of FTSE 100 companies were affected by non-credible reporting in 2023. In the first six months of the year, more than 100 websites known for spreading false or misleading narratives drove an estimated 9,650,000 impressions and 348,000 shares on social media about FTSE 100 businesses. “Disinformation is increasingly targeting businesses and the corporate world,” says agency director Michael White.

“That threat is likely to increase as generative AI is mobilised by hostile actors in new and novel ways, from targeted and coordinated bot networks that amplify stories at unprecedented scale, to the creation of synthetic media used to mislead unwitting audiences.”

As we approach the new year, Mendelsohn is at least not wholly pessimistic about future efforts to combat disinformation. “We’re definitely seeing increasing awareness and education around disinformation in general society, and across the communications industry.

“However, with AI tools able to make more believable content, and fake accounts becoming more authentic to the naked eye, we can expect more of these occurrences.”