Richard Branson believes the environmental costs of space travel will “fall even lower.”
Patrick T. Fallon | AFP | Getty Images
Dozens of prominent figures in business and politics are calling on world leaders to address the existential risks of artificial intelligence and the climate crisis.
Virgin Group founder Richard Branson, former United Nations Secretary-General Ban Ki-moon, and Charles Oppenheimer, grandson of American physicist J. Robert Oppenheimer, signed an open letter calling for action against the escalating dangers of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.
The letter calls on world leaders to adopt a long-term strategy and a “determination to resolve intractable problems, not just manage them, the wisdom to make decisions based on scientific evidence and reason, and the humility to listen to all those affected.”
Signatories called for urgent multilateral action, including financing the transition away from fossil fuels, signing an equitable pandemic treaty, restarting nuclear arms talks, and building the global governance needed to make AI a force for good.
The letter was released Thursday by The Elders, a nongovernmental organization founded by former South African President Nelson Mandela and Branson to address global human rights issues and advocate for world peace.
The letter is also backed by the Future of Life Institute, a nonprofit founded by MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, which works to steer transformative technologies like AI toward benefiting life and away from large-scale risks.
Tegmark said The Elders and his organization wanted to convey that, while the technology itself is not “evil,” it remains a “tool” that could have dire consequences if left to advance rapidly in the hands of the wrong people.
“The old strategy for steering toward good uses [when it comes to new technology] has always been learning from mistakes,” Tegmark said in an interview with CNBC. “We invented fire, then later the fire extinguisher. We invented the car, then we learned from our mistakes and invented the seat belt, traffic lights, and speed limits.”
“But when the technology passes a certain threshold of power, that strategy of learning from mistakes … well, the mistakes are going to be terrible,” Tegmark added.
“As a geek myself, I think of it as safety engineering,” he said. “Before we sent people to the moon, we carefully thought through everything that could go wrong when you put a person in explosive fuel tanks and send them somewhere no one can help them. And that’s why it ultimately went well.”
He continued, “That wasn’t ‘doomerism.’ That was safety engineering. We need this kind of safety engineering for our future, too, whether it’s nuclear weapons, synthetic biology, or ever more powerful AI.”
The letter was issued ahead of the Munich Security Conference, where government officials, military leaders and diplomats will discuss international security amid escalating global armed conflicts, including the Russia-Ukraine war and the Israel-Hamas war. Tegmark plans to attend the event to advocate for the letter’s message.
Last year, the Future of Life Institute published an open letter backed by leading figures including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling on AI labs such as OpenAI to pause work on training AI models more powerful than GPT-4, currently the most advanced model from Sam Altman’s OpenAI.
The technologists called for a pause in the development of such powerful AI to avoid a “loss of control” of civilization, which they warned could result in mass job losses and computers outsmarting humans.