Elon Musk, CEO of Tesla, is warning humanity against the dangers of artificial intelligence (AI). The staggering progress of ChatGPT has him worried that “machines will flood our information channels with propaganda and untruths” to the point of “losing control of our civilization.”
This message should be taken very seriously. Elon Musk is arguably one of the most legitimate people in the world to talk about AI. He co-founded OpenAI, the company behind ChatGPT, and is behind self-driving cars and AI-assisted reusable rockets.
He also founded Neuralink, a company specializing in brain-controlled interfaces that require no mouse or joystick, and is launching a new competitor to ChatGPT. He has even taken on Mark Zuckerberg and Bill Gates over their “limited” knowledge of AI.
A threat to which he contributes
Musk’s message is supported by hundreds of recognized experts. Eight eastern European prime ministers are urging Big Tech (the likes of Google, Amazon, Tesla, Alphabet, Microsoft, Meta, and Intel) to contain Russia’s disinformation war, which aims to “destabilize our countries and weaken our democracies.” NATO recognizes that while propaganda isn’t new, its scope and intensity are growing in a disturbing way. Italy has decided to block access to ChatGPT.
However, we should not be fooled by Musk’s game of denouncing a threat to which he himself contributes, with fake news proliferating on Twitter since he took control of the platform. This game also promotes the interests of his company Neuralink. The firm wants to put implants in our brains in order to keep machines under control, but is facing refusal from the FDA (Food and Drug Administration). In a caricature of the science-fiction style of which he is so often accused, he goes on about the threat of a Terminator – an instrument of authoritarianism – so that we will let him build his RoboCop, a supposedly more intelligent cyborg.
The European Parliament (EP) should immediately summon Musk for a hearing in Brussels, followed by his counterparts from other Big Tech companies. The Tesla CEO’s message signals a willingness to cooperate between the public and private sectors, and the magnitude of the threat justifies the urgent development of countermeasures against disinformation.
The first line of defense is identifying misinformation. We already know that the main source of danger lies in the way AI is programmed, resulting in a “black box” whose results nobody, not even the programmers, can explain. Malicious or military use of this tool is therefore beyond the control of its designers.