Elon Musk and hundreds of experts call for AI pause, citing ‘major risks to humanity’

AI researchers, including French researchers interviewed by Le Figaro, call for a "step back" from a "dangerous race towards unpredictable black boxes." They target AIs more powerful than GPT-4.

Elon Musk and hundreds of artificial intelligence experts from around the world on Wednesday signed a call for a six-month pause in research into AIs more powerful than GPT-4, the OpenAI model launched in mid-March, citing "major risks for humanity". French scientists are among the signatories. "I don't want a society whose pace is dictated by tech giants. There is a democratic issue at stake," says Colin de la Higuera, professor and AI researcher at the University of Nantes, who signed the call "without a shadow of hesitation."

"The past few months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can reliably understand, predict or control," reads the open letter published on the website of the American foundation Future of Life Institute. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

The signatories therefore call for a moratorium until safety systems are established, including new dedicated regulatory authorities, "oversight and tracking of highly capable AI systems", "robust public funding for technical AI safety research", "a robust auditing and certification ecosystem", techniques to help distinguish the real from the artificial, and institutions capable of managing the "dramatic economic and political disruptions (especially to democracy) that AI will cause".

Call for regulation

This call "does not mean a pause in AI development in general, but merely a step back from the dangerous race towards ever-larger, unpredictable black-box models with emergent capabilities."

"Should we let machines flood our information channels with propaganda and lies? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that might one day outnumber us, outsmart us, make us obsolete and replace us?" the signatories continue. "Such decisions must not be delegated to unelected tech leaders."

The head of OpenAI (ChatGPT), Sam Altman, has himself admitted to being "a little bit scared" of his creation if it were used for "large-scale disinformation or cyberattacks". "Society needs time to adapt," he told ABC News in mid-March.

The petition brings together figures who have already publicly expressed their fears of out-of-control AI that would outperform humans, including Elon Musk, owner of Twitter and founder of SpaceX and Tesla, and Yuval Noah Harari, author of "Sapiens" and "Homo Deus". Signatories also include Apple co-founder Steve Wozniak, members of Google's DeepMind AI lab, Stability AI boss Emad Mostaque (an OpenAI competitor), engineering executives at Microsoft (OpenAI's ally), as well as many American and European academics.

French researchers among the signatories

Régis Sabbadin, a researcher in artificial intelligence at Inrae (the French National Research Institute for Agriculture, Food and the Environment), is one of the signatories. "I am convinced that AI, and in particular generative AI, will be at the origin of a new industrial revolution that will transform certain professions," he tells Le Figaro. "Like the internal combustion engine or agricultural fertilizers, AI will generate progress. But it is a double-edged sword." The researcher does not, however, disagree with the part of the letter that evokes the dystopian risk of a humanity ultimately dominated by AIs more powerful than itself.

"AI is a major breakthrough, and this technology is profoundly transforming our societies," adds Malik Ghallab, emeritus researcher in robotics and artificial intelligence at the CNRS. "It would therefore be advisable to move forward with caution and to give time for social deliberation. The problem is that progress is now moving far too quickly for this public debate to take place. Yet there is a need for regulation and global governance around AI."

Malik Ghallab warns that "technology is moving faster than science" and that engineers and researchers no longer have full control over the most powerful AIs. "Their capabilities exceed what we had anticipated, and their designers find themselves in the dangerous position of sorcerer's apprentices, despite their best intentions." The researcher cites the example of the aviation sector, which for decades has been governed by strict safety standards. "We need pioneers, like the Wright brothers, who will take risks. But those risks should not affect society as a whole."
