THURSDAY 9 OCT 2025 10:00 AM

2025: AN OFFICE ODYSSEY

It is chirpy and charming, but is AI really transforming the way you work? Rebecca Pardon reports. This article is from Communicate magazine's print edition.

For even the most snobbish among us, it is difficult not to be impressed by artificial intelligence. When large language models have shaken off their hallucinations, they can very much feel comparable, if not preferable, to human conversation. Who would rather make staccato small talk with colleagues about the weather than be ceaselessly flattered by an endearingly earnest bot who, rather than swallowing a yawn, will praise your peerless observation skills?

Today’s AI tools are so convincing that even in their early stages they are winning the hearts and sympathies of the lonely, the intellectually curious and, increasingly, the professional. In the workplace, they are being welcomed as tireless assistants and chirpy teammates. But this growing affection for the fluency of AI is blurring the line between what sounds intelligent and what actually is accurate, original and meaningful.

Computational linguist Emily Bender calls generative AI “the emperor that has no clothes”, concerned not by the inadequacies of AI tools but by users being seduced by their superficial sophistication. In her recent book ‘The AI Con’, co-authored with sociologist Alex Hanna, she argues that the impressive language of models like ChatGPT masks an innate hollowness. These systems, she says, are not reasoning or creating in any meaningful way; they are simply producing plausible text based on patterns in training data.

“If we don’t understand how it works, we won’t understand when it fails”

This is not intelligence, Bender argues, but mimicry. She refers to the tools created by OpenAI and rivals Anthropic, Elon Musk’s xAI, Google and Meta as ‘stochastic parrots’, which repeat fragments of language threaded together without context or comprehension. What makes them dangerous is not their power, but our susceptibility to mistaking fluency for thought, and our willingness to trust them with increasingly complex tasks.

This puts the communications industry – which depends on nuance, emotional intelligence and cultural sensitivity – in a particularly tricky spot. The risks of AI tools getting it wrong are high, and yet adoption is accelerating. Antony Cousins, founder of the consultancy AI Communications, acknowledges the value of these tools for basic, repetitive tasks. “AI isn’t overhyped, but we do need to be discussing the implications of it more. There is a risk to creativity. I call it the ‘atrophy of human creativity’.”

Frank Dias, AI communications lead at recruitment company The Adecco Group and founder of consultancy AI x Comms Lab, reiterates the importance of communications professionals taking a strategic, deliberate and directional approach to the technology. Dias says that communications departments are particularly exposed to AI risks. “Our world is not seen as an art or appreciated as a craft, which makes the industry vulnerable to those who believe it can be replaced by AI.

“Communications leaders need to do some strategic thinking about the future and purpose of their department,” he says, adding that weaving AI into operations must be handled with intention and care.

Speaking from the AI for Good Summit in Geneva, Cameron Berg, a research scientist at AI firm AE Studio, says the “bigger picture, existential” conversations are finally being had. “But a lot of people are just coming into AI for the first time and seeing the tremendous potential of it. It’s important for people to understand this macroscopic view.

“It’s not just about your next quarter projections and how AI is going to help you hit that target; it’s about building technology that is becoming autonomous. We don’t know what the implications of that are, and we don’t know how to build this in a robustly safe way. Nobody knows yet. And yet, we’re racing ahead at the speed of light to do this.”

Even from an economic standpoint, the results so far have been underwhelming. AI is not yet producing a productivity surge commensurate with the claims being made for it. For many companies, excitement over the promise of AI is beginning to wane, replaced by vexation over the difficulty of making productive use of the technology. According to S&P Global, a data provider, the share of companies abandoning most of their generative-AI pilot projects has risen to 42%, up from 17% last year. The boss of Klarna, a Swedish buy-now, pay-later provider, recently admitted that he went too far in using the technology to slash customer-service jobs, and is now rehiring humans for the roles.

This isn’t unusual, however. With previous general-purpose technologies, such as railways and electricity, it historically took decades before they boosted productivity. New infrastructure has to be built, new ways of working adopted and new products and services launched. In the meantime, the adoption of new technologies can actually suppress productivity for a while, as companies and their employees experiment and adapt to new ways of working. Indeed, new technologies can even produce an increase in unproductive work. Businesses are left weighing up whether to tinker with tricky technology or risk appearing to fall behind.

“New technologies can actually suppress productivity, as companies and their employees adapt to new ways of working”

Despite the slow pace of transformation, AI tools are taking their places in offices across industries. Lawyers, bankers, doctors and many other professionals now regularly use chatbots to write to colleagues, customers, clients and patients. Talk to executives and it is not long before they will rhapsodise about the wonderful ways in which their business is using AI. Jamie Dimon of JPMorgan Chase recently claimed his bank has 450 use cases for the technology. A study by communications agency Comprend found that 93% of communications professionals now use AI tools regularly, up from just 3% in 2023. Mass adoption, however, does not mean efficiency: a separate survey from America’s Census Bureau finds that a mere 10% of firms are using it in a meaningful way. A recent paper by bank UBS noted: “enterprise adoption has disappointed”.

In many cases, AI adoption is taking place below the surface, or on shielded computer screens. A Deloitte study surveying 30,000 employees across 11 countries found that 63% of generative AI users said their employer either encourages or allows AI use, but nearly a quarter said their company had no formal policy. A lack of clarity and direction has resulted in shadow usage, leaving some companies clueless about what their employees are doing. Without proper oversight, the risks are many: confidential information may be leaked, and unchecked outputs can spread misinformation. Berg warns that this is the deeper danger: not that AI is useless, but that its usefulness is so poorly understood. “We don’t yet know what we’re integrating into our workflows, and if we don’t understand how it works, we won’t understand when it fails.”

Concerns over AI safety abound. Two recent reports from the Future of Life Institute (FLI) and Safer AI, which assessed the preparedness of leading AI firms, identified major gaps in their risk mitigation plans. Both groups said that even the best-prepared companies had little information on existential risk in their plans. FLI said that “none of the companies has anything like a coherent, actionable plan” for controlling increasingly powerful systems. Anthropic scored highest in both reports, but that is faint praise: it received a C-plus grade from FLI and just 35 out of 100 from Safer AI.

A recent study by AE Studio found that a lightly fine-tuned version of GPT-4o, the model behind ChatGPT, began outputting disturbing and systematically hostile responses after minimal retraining. These sorts of results have led some artificial-intelligence researchers to call large language models ‘Shoggoths’, after H.P. Lovecraft’s shapeless monster. OpenAI recently attributed the behaviour to an unpredictable “misaligned persona” within its models, which have devoured everything in their training data, including man’s darkest tendencies.

But few believe these tools are without their uses, or charms. While he stops short of dismissing AI systems as ‘stochastic parrots’, Berg agrees with Bender that they are no simple tools; in some ways, he says, they are closer to living organisms. “These AI systems almost resemble a new species, more than just a new tool. It is more apt to say the technology is grown rather than created. It’s not traditional computer software, and it’s not traditional programming. These are giant neural networks that are trained on vast amounts of data. We have no mechanism for understanding what is learned by this system, only that it sort of works at a superficial behavioural level.”

There is a risk, Berg warns, that we therefore lapse into contented ignorance about how the tools work, enjoying the convenience of having them draft emails, filter data and pen birthday messages to Mark from accounts. What needs our focus today, he says, is AI alignment. “Next to nothing has been invested in this field compared to how much has been invested in just making these systems way more powerful.”

With OpenAI’s ChatGPT having already amassed 800 million users, you’d be forgiven for thinking the cat, or parrot, is already out of the bag. Berg hastens to alleviate the sombre mood, emphasising that he feels pragmatic rather than pessimistic, and adding that the next few years will be essential. “It’s not too late, but we are close. The risk is if we just kind of sit here and let labs just do their thing and race ahead, in an international race where no one cares about anything except being first, rather than building these things with integrity. I think we have a two-year window to bring AI alignment through fundamental research and development.

“I am not sceptical of the power of these systems. In fact, I’m terrified by their power. What I am sceptical about is whether the people building these systems have humanity’s long-term best interest in mind and, instead, I think that they’re racing to build something that’s as powerful as they can possibly make it.”

“Communications leaders need to do some strategic thinking about the future and purpose of their department”

Whether robot, human or some other organism, many have found another way to capture the nebulous allure of AI tools. “I keep hearing the word ‘magic’ at this conference,” says Berg. He finds it reminiscent of British science fiction writer Arthur C. Clarke’s famous observation that any sufficiently advanced technology is indistinguishable from magic. “I think it explains why a lot of people feel this way towards AI.”

The word ‘magic’ speaks not only to the delightful whizziness of AI tools, but also to how little we understand them. Any Clarke fan will feel the wisdom in approaching the technology with trepidation. Berg believes the solution is our own intelligence in how we use them. “My concern right now is that a lot of people are not informed, and they don’t understand what the risks are; they’re just getting sort of thrown into the deep end. I don’t think that companies should just be blindly integrating technology without thinking soberly about these questions. I think that is a recipe for disaster.”