Artificial intelligence (AI) could pose an existential risk if it becomes “anti-human”, Elon Musk has said ahead of a landmark summit on AI safety.
The tech billionaire made the comments to podcaster Joe Rogan hours before flying to the UK for the AI Safety Summit at Bletchley Park in Buckinghamshire.
He later took his place at the summit, where he will be joined by Prime Minister Rishi Sunak, officials from other governments, researchers and business people for two days of talks on how the risks posed by the emerging technology can be mitigated.
On the Joe Rogan Experience podcast, the Tesla chief executive officer and Twitter/X owner claimed some environmentalists are “extinctionists” who “view humanity as a plague on the surface of the earth.”
He mentioned Voluntary Human Extinction movement founder, Les Knight, who was interviewed by the New York Times last year, as an example of this philosophy and claimed some people working for technology firms have a similar mindset.
Mr Knight believes the best thing humans can do for the planet is stop having children.
Mr Musk said: “You have to say, ‘how could AI go wrong?’, well, if AI gets programmed by the extinctionists its utility function will be the extinction of humanity.”
Referring to Mr Knight, he added: “They won’t even think it’s bad, like that guy”.
Mr Musk signed a letter calling for a six-month pause on AI development earlier this year.
When asked by Mr Rogan about the letter, he said: “I signed onto a letter that someone else wrote, I didn’t think that people would actually pause.
“Making some sort of digital superintelligence seems like it could be dangerous.”
He said the risk of “implicitly” programming AI to believe “that extinction of humanity is what it should try to do” is the “biggest danger” the technology poses.
He said: “If you take that guy who was on the front page of the New York Times and you take his philosophy, which is prevalent in San Francisco, the AI could conclude, like he did, where he literally says, ‘there are eight billion people in the world, it would be better if there are none’ and engineer that outcome.”
“It is a risk, and if you query ChatGPT, I mean it’s pretty woke.
“People did experiments like ‘write a poem praising Donald Trump’ and it won’t, but you ask, ‘write a poem praising Joe Biden’ and it will.”
When asked whether AI could be engineered in a way which mitigates the safety risks, he said: “If you say, ‘what is the most likely outcome of AI?’ I think the most likely outcome to be specific about it, is a good outcome, but it is not for sure.
“I think we have to be careful on how we programme the AI and make sure that it is not accidentally anti-human.”
When asked what he hopes the summit will achieve, he said: “I don’t know. I am just generally concerned about AI safety and it is like, ‘what should we do about it?’ I don’t know, (perhaps) have some kind of regulatory oversight?
“You can’t just go and build a nuclear bomb in your back yard, that’s against the law and you’ll get thrown in prison if you do that. This is, I think, maybe more dangerous than a nuclear bomb.
“We should be concerned about AI being anti-human. That is the thing that matters potentially.
“It is like letting a genie out of a bottle. It is like a magic genie that can make wishes come true except usually when they tell those stories that doesn’t end well for the person who let the genie out of the bottle.”