The Washington Post
Democracy Dies in Darkness

Elon Musk talks xAI, AI superintelligence, aliens and more on Twitter

Musk announced his new AI company Wednesday and spoke about the future of the tech with U.S. lawmakers

Elon Musk speaks at a tech conference in Paris in June. (Gonzalo Fuentes/Reuters)

SAN FRANCISCO — Twitter owner Elon Musk spoke about his new artificial intelligence company xAI during a live event Wednesday evening, advocating for government regulation and suggesting there’s room for international cooperation with the Chinese government on the tech.

Musk was interviewed by Rep. Ro Khanna (D-Calif.) and Rep. Mike Gallagher (R-Wis.), who asked him about his views on whether super-intelligent AI was dangerous to humanity, as well as how he squares his concerns about AI with his decision to start a company pushing the tech forward.

“If I could put a pause on AI or really advanced AI, superintelligence, I would. It doesn’t seem that is realistic,” Musk said. Instead, he said he believes it’s possible to “grow” an AI that was curious about humanity and the universe and wasn’t dangerous. He predicted AI that was smarter than humans was five or six years away, a timeline that is shorter than those set out by almost all AI researchers.

Musk said he welcomed government oversight, that he’d recently had conversations with senior Chinese government officials about AI risks and regulation, and said he believes China would be open to international cooperation on regulating the tech.

“It’s important for us to worry about a Terminator future to avoid a Terminator future,” Musk said, referring to the sci-fi movie series.

The Twitter Spaces event marked the formal launch of Musk’s new AI company, xAI, and his bid to compete with AI leaders such as OpenAI, Microsoft and Google in the race to build computers that might take over more tasks from humans. The conversation, typical for Twitter live events featuring Musk, ranged across topics, from geopolitics, to whether the United States needs to increase its manufacturing capacity, to whether aliens exist.

“You look up at the night sky and see all those stars. I wonder what’s going on up there, are there alien civilizations? Is there life up there? And hopefully, one day, we find out,” Musk said.

Musk has talked about forming an AI company for months, and registered xAI in Nevada in March. On Wednesday, he unveiled a team of 11 employees, drawn from OpenAI, Google and the University of Toronto, a center of academic AI research. The company is separate from Twitter and Musk’s other companies, SpaceX and Tesla, but would work closely with them, according to the site.

Tesla, which for years has been working on building self-driving cars, already has a robust AI team and a massive supply of number-crunching computers, something that is critical for training the complex “large-language models” that give chatbots such as OpenAI’s ChatGPT the ability to have conversations, write code and pass professional exams. And Twitter is a trove of data to help train any large-language model.

Musk and Tesla spokespeople did not respond to requests for comment. Musk tweeted earlier Wednesday, “Announcing formation of @xAI to understand reality.”

What is artificial intelligence?
AI is an umbrella term for any form of technology that can perform “intelligent” tasks. For decades, AI has been mostly used for analysis — trawling huge sets of data to find patterns. But a boom in generative AI, which uses this pattern-matching to create words, images and sounds, has opened up new possibilities.
What is generative AI?
The technology backs chatbots such as ChatGPT and image generators such as DALL-E, which can create words, sounds, images and video, sometimes at a level of sophistication that mimics human creativity. This technology can’t “think” like humans do; it can find patterns and imitate speech, but it can’t interpret meanings.
How does AI learn?
AI can “learn” without a programmer telling it each step, a process called machine learning. It uses neural networks, mathematical systems modeled after the human brain, to find connections in huge data sets. The poems or images it makes may seem creative, but it’s really pattern matching based on which word is most likely to come next.
Is AI dangerous?
The boom in generative AI brings many exciting possibilities — but also concerns that it might cause harm. Chatbots can sometimes spread misinformation or “hallucinate” by producing information that sounds plausible, but is irrelevant, nonsensical or entirely false. Generative AI can also be used to make fake images of real people, called deepfakes.

It’s unclear how xAI and Tesla would cooperate, as Tesla is a public company with a broad base of investors. When Musk bought Twitter for $44 billion last year, he was criticized for pulling Tesla engineers over to Twitter in the first weeks of his ownership.

“This new xAI company is again pulling resources from Tesla at a time when that company is facing a massive increase in competition,” said Rob Enderle, a tech analyst and head of the Enderle Group. “Musk has played fast and loose with company assets for some time; surprisingly, he seems to get away with it.”

Musk has opined on AI for years and was an early proponent of the belief that humans should be careful in developing smarter computers, fearing that super-intelligent AI might one day escape human control. He was a founding member of ChatGPT creator OpenAI, but left the company’s board in 2018 and has recently criticized its transformation from a nonprofit to a profit-seeking company. In an interview with CNBC in May, Musk said he was “the reason that OpenAI exists,” and that the company did not take concerns about AI safety seriously enough.

OpenAI CEO Sam Altman has said Musk is a “jerk” but that he believes the billionaire cares about the future of AI and humanity.

Musk’s new AI team includes University of Toronto assistant professor Jimmy Ba, an AI researcher who trained under AI pioneer Geoffrey Hinton, Toby Pohlen, a former researcher at Google’s DeepMind AI lab, and Christian Szegedy, who also did research for Google.

Dan Hendrycks, the director of the Center for AI Safety, which advocates for greater awareness of the risks of AI slipping out of human control, is advising the company. Hendrycks said he is only taking a $1 salary so he can “remain unbiased and not have incentive to limit my criticism.”

In xAI, Musk has responsibility for yet another company, adding to Twitter, SpaceX, Tesla and a handful of other, smaller ventures. Tesla investors have grumbled since Musk bought Twitter that he is stretching himself too thin and shirking his duty to his other companies.

Competition in AI is fierce now. Google, Microsoft and other Big Tech companies have for years poured billions of dollars into AI research, integrating their breakthroughs into existing products such as Google Search or using them to make their data centers more efficient. Last year, OpenAI kicked off a new wave of excitement around the tech by releasing ChatGPT directly to consumers, giving them a firsthand look at how far AI has advanced.

That spurred Microsoft and Google to speed up their own efforts to build AI tools for people to use directly: Microsoft signed a massive deal to use OpenAI’s tech, while Google has rushed out its own competing products. Start-ups such as Anthropic and Cohere are also building their own large-language models, relying on partnerships with Big Tech or massive venture capital funding to get the computational power needed to train the AI on their own.

During the Twitter Spaces event, Musk said it would be a while before xAI would be at the level of OpenAI and Google.

“Those are really the two big gorillas in AI right now by far,” he added.

Musk also spoke at length about his recent trip to China, where he met with senior government and business leaders. He said he believes China was aggressively regulating its own AI industry, and that he warned government officials that a super-intelligent AI could wrest control of the country from the Chinese Communist Party.

“I think that resonated,” he said.

Gallagher, a former Marine officer who has been outspoken about his views that China’s government can’t be trusted and needs to be contained through a strong U.S. military, said he was skeptical the Chinese government would ever partner with the United States on regulation. “They are going to use the technology for evil,” Gallagher said.

“I’m kind of pro-China,” Musk said, admitting that he has some business interests in the country, which is one of Tesla’s biggest markets. “China is underrated and I think the people of China are really awesome … they want the same things that people in America do.”

“Once the very difficult question of Taiwan is resolved, I’m certainly hopeful there will be positive relations between China and the United States,” he said. “We probably have a bumpy road between now and then.”