AI was a major theme at Davos 2024. According to Fortune, the event had more than 20 sessions focused directly on AI, covering everything from AI in education to AI regulation.
Some of the biggest names in AI were in attendance, including OpenAI CEO Sam Altman, Inflection AI CEO Mustafa Suleyman, AI pioneer Andrew Ng, Meta Chief AI Scientist Yann LeCun, and Cohere CEO Aidan Gomez.
Moving from wonder to realism
At Davos 2023, the conversation was full of speculation based on the then-just-released ChatGPT, but this year it was more muted.
“The conversation last year was, ‘Wow,’” Chris Padilla, IBM’s vice president of government and regulatory affairs, said in an interview with The Washington Post. “Now, the question is, ‘What are the risks? What do we have to do to make AI trustworthy?’”
Among the concerns discussed at Davos were the proliferation of misinformation, job losses, and widening economic disparities between rich and poor countries.
Perhaps the most discussed AI risk at Davos was the threat of large-scale misinformation and disinformation, often in the form of deepfake photos, videos, and voice clones, which can blur reality and undermine trust. A recent example is a robocall made before the New Hampshire presidential primary that used a voice clone impersonating President Joe Biden, apparently with the purpose of suppressing votes.
AI-powered deepfakes can create and spread false information by making it appear as if someone said something they never said. “This is just the tip of the iceberg of what can be done in terms of voter suppression and attacks on election officials,” Carnegie Mellon University professor Kathleen Carley said in an interview.
Enterprise AI consultant Reuben Cohen also recently told VentureBeat that, with new AI tools, we should expect a flood of deepfake audio, images, and video in time for the 2024 election.
Despite considerable efforts, there is still no reliable way to detect deepfakes. As Jeremy Kahn wrote in a Fortune article: “We had better find a solution soon. Mistrust is insidious and corrosive to democracy and society.”
AI mood swings
This mood shift from 2023 to 2024 led Suleyman to argue in Foreign Affairs that a “Cold War strategy” is needed to contain the threats posed by the proliferation of AI. He said that foundational technologies such as AI keep getting cheaper and easier to use, permeating all levels of society and spreading to all manner of uses, both positive and harmful.
“The ability of hostile governments, fringe parties, and lone actors to create and broadcast material that is indistinguishable from reality could sow chaos, and the generation systems could outpace the verification tools designed to stop them.”
Concerns about AI go back decades, among the first and best-known being the 1968 film 2001: A Space Odyssey. Since then, worries have surfaced periodically, including those surrounding Furby, the robotic pet that was wildly popular in the late 1990s. The Washington Post reported in 1999 that the National Security Agency (NSA) banned the devices from its premises over concerns that they could function as eavesdropping devices and leak national security information. Recently released NSA documents from that period describe the toys’ ability to “learn” using an “onboard artificial intelligence chip.”
Thinking about the future trajectory of AI
Concerns about AI have become more acute recently as a growing number of AI experts claim that artificial general intelligence (AGI) may soon become a reality. Although the exact definition of AGI remains vague, it is generally understood as the point at which AI becomes smarter and more capable than a college-educated human across a broad range of activities.
Altman said he believes AGI may not be far from becoming a reality and could be developed in the “fairly near future.” Gomez echoed this view, saying, “I think the technology is within reach.”
However, not everyone agrees with these aggressive AGI timelines. LeCun, for example, is skeptical that AGI is imminent. He recently told the Spanish outlet El País that “human-level AI is not just around the corner. This will take a long time. And it will require new scientific advances that we don’t know of yet.”
Public awareness and future direction
Understandably, uncertainty remains about the future direction of AI technology. In the 2024 Edelman Trust Barometer, presented at Davos, global respondents were split on whether they reject (35%) or embrace (30%) AI. People recognize AI’s great potential, but also its attendant risks. According to the report, people are more likely to embrace AI and other innovations if they are vetted by scientists and ethicists, if they feel they have control over how AI affects their lives, and if they believe it will bring them a better future.
While it is tempting to rush toward solutions that “contain” the technology, as Suleyman suggests, it is useful to remember Amara’s Law, coined by Roy Amara, former president of the Institute for the Future: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Although a great deal of experimentation and early adoption is currently underway, widespread success is not guaranteed. Rumman Chowdhury, CEO and co-founder of Humane Intelligence, a nonprofit organization that conducts AI testing, has suggested that generative AI may not be the earth-shattering technology we have been led to believe it is.
2024 may be the year we find out. In the meantime, most people and businesses are still learning how best to leverage generative AI for personal and business benefit.
“We’re still in a place where everyone’s excited about the technology but not connected to the value of it,” Accenture CEO Julie Sweet said in an interview. The consulting firm is now conducting workshops for C-suite leaders to learn about the technology as a critical step toward realizing its potential and moving from use case to value.
The benefits and the most harmful effects of AI (and AGI) may therefore be imminent, but not necessarily immediate. As we navigate the complexities of AI, we stand at a crossroads: with wise stewardship and an innovative spirit, we can steer toward a future in which AI technologies expand human potential without sacrificing humanity’s collective integrity and values. We must harness our collective courage to imagine and design a future in which AI serves humanity, not the other way around.
Gary Grossman is vice president of Edelman’s technology practice and global leader of the Edelman AI Center of Excellence.