Yann LeCun, Meta’s chief AI scientist, on Sunday received the TIME100 Impact Award for his contributions to the world of artificial intelligence, adding another honor to his long list of accolades.
Ahead of the awards ceremony in Dubai, LeCun spoke to TIME about the barriers to achieving “artificial general intelligence” (AGI), the benefits of Meta’s open-source approach, and why he considers the claim that AI poses an existential risk to humanity “ridiculous.”
TIME spoke with LeCun on January 26th. This conversation has been condensed and edited for clarity.
Many in the technology industry today believe that training large language models (LLMs) with more computing power and more data will lead to artificial general intelligence. Do you agree?
It’s astonishing how well [LLMs] work when you train them at scale, but it’s very limited. We see today that those systems hallucinate and don’t really understand the real world. They require enormous amounts of data to reach a level of intelligence that is ultimately not that great. And they can’t really reason. They can’t plan anything other than things they’ve been trained on. So they are not a path to what people call “AGI.” I hate that term. They are useful, no doubt about it. But they are not a path to human-level intelligence.
You’ve said you don’t like the acronym “AGI.” That’s the term Mark Zuckerberg used in January, when he announced that Meta is pivoting toward building artificial general intelligence as one of its core organizational goals.
The mission of FAIR [Meta’s Fundamental AI Research team] is human-level intelligence. That ship has sailed; it’s a battle I’ve lost. But I don’t like to call it AGI, because human intelligence is not general at all. There are characteristics that intelligent beings have that today’s AI systems do not, such as understanding the physical world, planning a sequence of actions to reach a goal, and reasoning in ways that can take a long time. Humans and animals have a special piece of the brain that we use as working memory. LLMs don’t have that.
A baby learns how the world works in the first few months of life. We don’t know how to do that [with AI]. Learning a “world model” just by watching the world go by, combining that with planning, and perhaps with a short-term memory system, could open a path not to general intelligence, but to, say, cat-level intelligence. Before we reach human level, we will have to go through simpler forms of intelligence. And we are still very far from that.
In one sense the metaphor works, because a cat can look out at the world and learn things in a way that a state-of-the-art LLM cannot. But the entire condensed history of human knowledge is not available to a cat. To what extent is the metaphor limited?
So let’s do some very simple calculations. Large language models are trained on more or less the entire text publicly available on the internet. Typically, that’s on the order of 10 trillion tokens, and each token is about 2 bytes, so that’s 2 × 10^13 bytes of training data. And you say, “Oh my god, that’s incredible. It would take a human 170,000 years to read all of this.” It’s just a tremendous amount of data. But then you talk to developmental psychologists, and they tell you that a 4-year-old has been awake for a total of about 16,000 hours in its life. You can then try to quantify how much information reached the visual cortex over those four years. The optic nerve carries about 20 megabytes per second. So 20 megabytes per second, times 16,000 hours, times 3,600 seconds per hour comes to roughly 10^15 bytes, which is 50 times more than 170,000 years’ worth of text.
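As a rough sanity check, here is a minimal sketch of that back-of-envelope comparison, using only the figures LeCun cites (10 trillion tokens at roughly 2 bytes each, 16,000 waking hours, and an optic-nerve bandwidth of about 20 MB/s); the variable names are illustrative, not from the interview.

```python
# Back-of-envelope comparison of LLM training text vs. a 4-year-old's visual input,
# using the rough figures quoted in the interview (not measured values).

tokens = 10e12                  # ~10 trillion tokens of public internet text
bytes_per_token = 2             # ~2 bytes per token
llm_bytes = tokens * bytes_per_token                            # ~2e13 bytes

waking_hours = 16_000           # total waking hours of a 4-year-old
optic_nerve_bytes_per_s = 20e6  # ~20 MB/s through the optic nerve
visual_bytes = optic_nerve_bytes_per_s * waking_hours * 3_600   # ~1.15e15 bytes

print(f"LLM training text: {llm_bytes:.1e} bytes")
print(f"Visual input:      {visual_bytes:.1e} bytes")
print(f"Ratio:             ~{visual_bytes / llm_bytes:.0f}x")   # on the order of 50x
```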
Yes, but text encodes the entire history of human knowledge, whereas the visual information a 4-year-old takes in only encodes basic 3D information about the world, basic language, and so on.
But what you say is wrong. Most of human knowledge is not expressed in text. It sits in the subconscious part of your mind, learned in the first year of your life, before you could speak. Most knowledge really has to do with our experience of the world and how it works. That’s what we call common sense. LLMs don’t have that, because they have no access to it. And so they can make really stupid mistakes. That’s where hallucinations come from. Things we take completely for granted turn out to be extremely complicated to reproduce on a computer. In other words, AGI, human-level AI, is not just around the corner; it’s going to require some fairly deep conceptual shifts.
Let’s talk about open source. You have been a big proponent of open research throughout your career, and Meta has recently adopted a policy of effectively open-sourcing its most powerful large language models, including Llama 2. This strategy sets Meta apart from Google and Microsoft, which do not release the so-called weights of their most powerful systems. Do you expect Meta’s approach to continue as its AI becomes more and more powerful, even approaching human-level intelligence?
The first answer is yes. And the reason is that, in the future, everyone’s interactions with the digital world, and more generally with the world of knowledge, will be mediated by AI systems. They will essentially play the role of human assistants that are with us at all times. We won’t be using search engines anymore. We’ll simply ask our assistants questions, and they will help us in our daily lives. So our entire information diet will be mediated by these systems. They will become the repository of all human knowledge. And you cannot have that kind of dependence on a proprietary, closed system, particularly given the diversity of languages, cultures, values, and centers of interest around the world. It’s as if you asked one for-profit company somewhere on the West Coast of the United States to produce Wikipedia. No, Wikipedia is crowdsourced because that’s what works. The same goes for AI systems: they will need to be trained, or at least fine-tuned, with the help of everyone around the world. And people will only do that if they can contribute to a widely available open platform. They are not going to do it for a proprietary system. So the future has to be open source, if nothing else for reasons of cultural diversity and democracy. We need diverse AI assistants for the same reason we need a diverse press.
One criticism we often hear is that open-sourcing can put very powerful tools in the hands of people who would misuse them. And if there is some degree of asymmetry between offensive and defensive power, that could be very dangerous for society as a whole. What makes you sure that won’t happen?
A lot of what is said about this is basically complete fantasy. There was actually a report recently published by the RAND Corporation in which they looked at how much easier current systems make [it] for potentially malicious people to come up with recipes for biological weapons. And the answer is: they don’t. The reason is that current systems are really not that smart. They are trained on publicly available data, so they basically cannot invent anything new. They are going to regurgitate more or less whatever they were trained on from public data, which means data you can already get from Google. People say, “Oh my god, LLMs are so dangerous that they need to be regulated.” That’s just not the case.
Now, future systems are a different story. A system that is super smart and powerful could help science, help medicine, help business, and erase cultural barriers by allowing simultaneous translation. So there are a lot of benefits. So there is a risk-benefit analysis: is it productive to try to keep the technology under wraps, in the hope that the bad guys won’t get their hands on it? Or is the strategy, on the contrary, to open it up as widely as possible, so that progress is as fast as possible and the bad guys are always trailing behind? I’m very much in the second category of thinking. What needs to be done is for society as a whole, the good guys, to stay ahead by progressing. And then it’s my good AI against your bad AI.
You have called the idea that AI poses an existential risk to humanity “ridiculous.” Why?
There are many misconceptions out there. The first fallacy is that because a system is intelligent, it must want to take control. That’s completely false. It’s even false within the human species. The smartest among us do not want to dominate others. As we have seen in recent years on the international political scene, the smartest among us are not always the leaders.
Of course. But ultimately, it’s the people with the urge to dominate who end up holding power.
I’m sure you know a lot of people who are incredibly smart and great at solving problems. They have no desire to be anyone’s boss. I’m one of them. The desire to dominate has no correlation with intelligence.
But it correlates with domination.
Okay, but the drive that some humans have for domination, or at least influence, has been hardwired into us by evolution, because we are a social species with a hierarchical organization. Look at orangutans. They are not social animals, and they have no drive to dominate, because it would be completely useless to them.
That’s why humans, not orangutans, are the dominant species.
The point is that AI systems, as smart as they may be, will be subservient to us. We set their goals, and they have no intrinsic drive to dominate that we would build into them. It would be really stupid to build that. It would also be useless. Nobody would buy it anyway.
What would happen if humans with an urge to dominate programmed that goal into an AI?
Then, again, it’s my good AI against your bad AI. If there are AIs that behave badly, whether through poor design or deliberately, smarter and better AIs will take them down. The same way we have police forces and armies.
However, police and military have a monopoly on the use of force, which is impossible in a world of open source AI.
What do you mean? You can buy a gun just about anywhere in America. Even in most of the United States, where the police have a legal monopoly on the use of force, plenty of people can get their hands on extremely powerful weapons.
And is it working?
It turns out that that is a far bigger danger to the lives of the inhabitants of the North American landmass than AI is. But no, we can imagine all kinds of catastrophe scenarios. There are millions of ways to build AI that would be bad, dangerous, or useless. But the question is not whether there are ways it could go wrong. The question is whether there is a way it can be done right.
Designing systems with safety guardrails so that they are reliable, safe, and useful is going to be a long and arduous engineering process. It won’t happen in one day. It’s not as if one day we build a gigantic computer, turn it on, and the next minute it takes over the world. That’s the ridiculous scenario.
One last question. What can we expect from Llama 3?
Well, better performance, probably. Video multimodality, things like that. But it’s still being trained.