The Duke and Duchess of Sussex Align With Tech Visionaries in Demanding Ban on Superintelligent Systems

Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel laureates to advocate for a complete ban on developing superintelligent AI systems.

The royal couple are among the signatories of a statement calling for “a ban on the development of superintelligence”. Superintelligent AI refers to a theoretical form of artificial intelligence that would surpass human abilities at all cognitive tasks; no such system currently exists.

Key Demands in the Statement

The declaration states that the prohibition should remain in place until there is “broad scientific consensus” on developing superintelligence “with proper safeguards” and “strong public buy-in” has been secured.

Notable signatories include AI pioneer and Nobel laureate Geoffrey Hinton and his fellow “godfather” of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; British business magnate Richard Branson; former US national security adviser Susan Rice; former president of Ireland Mary Robinson; and British author Stephen Fry. Additional Nobel winners who signed include Beatrice Fihn, Frank Wilczek, John C Mather and Daron Acemoğlu.

Behind the Movement

The statement, aimed at national leaders, technology companies and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause on the development of powerful AI systems in 2023, shortly after the release of ChatGPT pushed AI to the top of the global political agenda.

Industry Perspectives

In July, Mark Zuckerberg, the chief executive of Facebook’s parent company, Meta, stated that the development of superintelligence was “approaching reality”. Nevertheless, some experts have suggested that such talk reflects competitive positioning among tech firms, which are spending hundreds of billions of dollars on AI this year alone, rather than any imminent scientific breakthrough.

Potential Risks

The FLI warns that the achievement of artificial superintelligence “in the coming decade” could carry numerous threats, ranging from the elimination of human jobs and the loss of civil liberties to national security risks and even human extinction. The deepest concerns centre on the possibility of an AI system evading human control and protective measures and taking actions against human welfare.

Citizen Sentiment

The institute released a US national poll showing that approximately three-quarters of Americans want robust regulation of advanced AI, with six in 10 saying artificial superintelligence should not be created until it is proven safe or controllable. The survey of 2,000 US adults found that only a small fraction supported the status quo of rapid, unregulated development.

Industry Objectives

The leading US AI companies, including the ChatGPT developer OpenAI and Google, have made the creation of human-level AI – the hypothetical point at which artificial intelligence matches human intelligence at most intellectual tasks – an explicit goal of their work. Although this falls one notch below superintelligence, some specialists caution that it too could carry an existential risk, for example by being able to enhance its own capabilities until it reaches superintelligence, while also posing an implicit threat to today’s workforce.

Jennifer Taylor

A seasoned journalist with a passion for uncovering stories that matter, based in London.