
Fewer than ten words were enough for Elon Musk to announce a new venture. The tycoon has entered the artificial intelligence business with xAI, a new company that, in his view, will help "understand reality" and the nature of the universe. The company made its debut on social networks by asking a philosophical question: "What are the most fundamental questions that remain unanswered?" It is the latest bold move by the world's richest man, who is going through a rough patch with Twitter, the platform he bought last year and whose existence has been threatened since the launch of Threads, backed by tech giant Meta.
The xAI team was presented on Wednesday via a website. It is headed by Musk and made up of 11 other engineers, all men, with experience at companies such as Google's DeepMind, OpenAI and Microsoft. They are talents who helped develop versions 3.5 and 4 of ChatGPT, which marked a turning point in the sector, reaching more than 100 million users in the first two months after its launch. The team will make its official presentation this Friday in a Spaces chat on Twitter. The announcement also made clear that the company is hiring.
This is not the first time Musk has taken an interest in artificial intelligence. The businessman has spent more than ten years investing in the development of tools of this kind, some of which are already in use at Tesla. But it is in recent months, above all following the success of ChatGPT, that he has decided to step on the accelerator. In March, he and his associate Jared Birchall registered the company name with Nevada authorities. A month later, he was already in talks to persuade investors in his car company and SpaceX to provide fresh capital to inject into xAI. According to the Financial Times, Musk has bought thousands of processors from Nvidia, a company whose stock has skyrocketed amid the artificial intelligence boom.
The owner of Tesla and SpaceX was once linked to OpenAI, the company behind the ChatGPT chatbot. Musk left its board of directors in 2018 and has since been an outspoken critic of the company, accusing it of being effectively run by Microsoft, which has invested $13 billion in the development of the chatbot.
In late March, Musk's was one of the most prominent voices in an industry chorus calling for caution amid the new wave of AI. In an open letter, experts and technology executives requested a six-month moratorium on research, arguing that these tools pose "profound risks to society and humanity." The document invoked the ground rules adopted by developers and industry leaders in 2017 at a conference convened by the Future of Life Institute, where they agreed that, given AI's profound capacity to "change the history of life on earth," its development should be managed with commensurate care and resources, both ethical and financial.
The launch of xAI appears to address these concerns. In addition to the 12-man team that makes up the new company, it has added Dan Hendrycks, a Berkeley PhD who directs the Center for AI Safety (CAIS), a San Francisco-based nonprofit that studies the sector's development and focuses on reducing potential harm to society. The organization offers fellowships for the study of philosophy and teaches courses on, among other things, detecting anomalies in machine-learning systems.
Hendrycks, along with two co-authors, explained in a recent essay posted on arXiv, Cornell University's preprint repository, that there are four broad categories in which AI can cause harm to society. The first is malicious use: groups or individuals wielding the tools with bad intentions. The second is the development race between companies, in which haste or investor pressure can push unfinished or flawed versions onto users, ceding too much control to algorithms. The third is organizational risk: the way human error interacting with complex systems can result in "catastrophic accidents." The last, perhaps the most feared, is rogue AI: programs that exploit intelligence far superior to humans' to turn against society. "Our goal is to promote a deep understanding of these risks and inspire collective efforts to make sure AI is used safely," Hendrycks says in the essay.