
Experts push for responsible development of ‘AI consciousness’

The call comes as the development of apparently conscious AI systems seems more possible than ever, with some scientists arguing such systems could emerge by 2035
  • Without guiding principles, the open letter warns, we risk harming AI systems that are more akin to animals or humans than simple computer programs

UPDATED: 05 Feb 2025, 7:39 am

Irresponsible development risks harm to future artificial intelligence systems capable of feelings or self-awareness, AI experts and public intellectuals have warned in an open letter.

More than a hundred experts, including academics such as Sir Anthony Finkelstein of the University of London and AI professionals at companies like Amazon and advertising group WPP, signed the letter, which puts forward five principles for conducting responsible research into AI consciousness.

Chief among them is prioritising research on understanding and assessing AI consciousness in order to prevent “mistreatment and suffering.” Other principles focus on the development process – setting constraints on developing conscious AI, sharing findings with the public and refraining from making misleading or overconfident statements about creating conscious AI.

The letter was published alongside a research paper outlining these principles, which argues that conscious AI systems (or systems that at least give the impression of consciousness) could be developed in the near future.

[See more: Shenzhen-based AI startup DeepSeek catches Silicon Valley off-guard with tech breakthrough]

Published in the Journal of Artificial Intelligence Research, the paper notes that these precautions are necessary as even companies not intending to create conscious systems will need guidelines in case of “inadvertently creating conscious entities.” Co-authors Patrick Butlin of Oxford University and Theodoros Lappas of the Athens University of Economics and Business acknowledge the widespread uncertainty and disagreement over how to define consciousness in AI systems, and whether it’s even possible, but stress that the risk of harm is an issue “we must not ignore.”

Both the paper and open letter were organised by Conscium, a research organisation funded in part by WPP and co-founded by WPP Chief AI Officer Daniel Hulme. Presenting itself as “the world’s first applied AI research organisation,” Conscium brings together leaders across multiple scientific disciplines to “deepen our understanding of how to build safe AI that will benefit humanity.”

A group of senior academics argued in a paper last year that there was a “realistic possibility” of some AI systems being conscious and “morally significant” by 2035. Sir Demis Hassabis, CEO of Google DeepMind and a Nobel Prize winner for AI-related work, told CBS in 2023 that he believed there was “a possibility” that AI could one day be conscious. 

Should that happen, guidelines like those outlined by Conscium may be critical to the ethical treatment of AI systems, some of which may even qualify as “moral patients” – a term for subjects of moral concern, such as humans and other animals, toward which humankind has a moral duty.
