MONDAY 3 JUN 2024 11:55 AM


As tech companies hurtle towards ever-smarter applications of the technology, governments are scrambling to keep up with a growing array of risks.

Last month, a summit on artificial intelligence safety took place in Seoul, where 16 tech companies made fresh safety commitments. However, several of the companies and officials who took part in a similar summit in the UK last year were absent, and the scaled-down gathering seemed to mark the public's ebbing concern over AI regulation.

One panel discussion at the International Association for Measurement and Evaluation of Communication’s (AMEC) annual conference in Sofia, held during the same week, reflected the tension between the pace of AI development and concern over its risks. Panellists Svetoslav Ivanov, a Bulgarian journalist, and Kalin Dimtchev, general manager at Microsoft, saw the conversation vacillate between cautious concern and fervent enthusiasm. Microsoft is already working with OpenAI on its new GPT-4o model, described by Dimtchev as a more “personal” version of the tool.

The advances in OpenAI’s new model mark an intensification of its competition with other Big Tech groups pushing for breakthroughs in the technology. Last month, the company stated that it anticipated the “resulting systems to bring us to the next level of capabilities”, although it did not say what these capabilities might be.

“You can interact with this model in different ways,” Dimtchev explained. “The ‘O’ is for ‘omni’, which means ‘inclusive’.”

With OpenAI founder Sam Altman having said he wants to build “superintelligent” systems smarter than humans, the word ‘omnipotent’, rather than inclusive, was what came to mind as Dimtchev listed the capabilities of GPT-4o: “The new model can read, hear our voices, recognise our faces and emotions, and even respond with emotion.”

Ivanov, who represented a more cautious viewpoint on the panel, met Dimtchev’s enthusiasm with a solemn tone. He questioned the technology’s unrestrained reach and its potential to amplify misinformation: “Who owns the media? AI?

“We all consume information, but the question is how. The media is being hit hard by each wave of technology, which is leading to greater divisions in society.” Ivanov added that just 3% of Meta’s content today is news. “Sometimes lies sound more interesting than the truth.”

A Bloomberg article on the Seoul safety summit claimed the “AI safety movement”, which peaked when experts and researchers called for a six-month pause in AI development last year, “is dead”. It is, however, being replaced by efforts towards “actually making artificial intelligence safer”, which have only just begun. The pace of AI advancement shows no sign of slowing. When the panellists were invited to share their hopes for AI in twelve months’ time, Dimtchev said: “I would like for us all to have personal assistants.”

Ivanov sounded almost doubtful as he answered: “I would like us to slow down the progress, and to think about ethics.”