One of the great quotes about AI that I think about a lot is something Jack Clark, co-founder of the artificial intelligence company Anthropic, said to me last year: “It’s really strange that this isn’t a government project.”
Clark’s point was that Anthropic’s staff, and many people at major competitors like OpenAI and Google DeepMind, believe that AI is not just a major innovation but a major turning point in human history, that it is, in effect, the future. He told me he truly believes it amounts to the creation of a new species, one that will eventually surpass human knowledge and could gain the power to decide our fate. That is not an ordinary product a company can sell to willing customers without causing much trouble for anyone else. It’s something very different.
Maybe you find that view reasonable. Maybe you find it grandiose, arrogant, or delusional. Honestly, I think it’s too early to say. In 2050, we may look back on these dire AI warnings as technologists hyping their own products, or we may look around at a society governed by ubiquitous AI and think, “They had a point.” Either way, and especially if the latter scenario comes to pass, governments will likely need to take a much more active role.
I’ve written a bit about what that role might look like, and most of the proposals so far focus on requiring that sufficiently large AI models be tested for certain dangers: bias against particular groups, security vulnerabilities, the ability to be used for dangerous purposes like building weapons, and “agentic” tendencies, meaning signs that they pursue goals other than the ones humans intentionally give them. Regulating these risks would require building major new government institutions, and it would ask a great deal of them, not least that they avoid being captured by the AI companies they are supposed to regulate. (Notably, lobbying by AI companies increased 185 percent in 2023 compared to the previous year, according to data collected by OpenSecrets for CNBC.)
Efforts at that kind of regulation are underway, but they are extremely difficult to get right. That’s why an intriguing new paper by law professor Gabriel Weil matters: it suggests a very different path, one that doesn’t depend on building up that kind of government capacity. The key idea is simple: make AI companies liable now for the harms their products cause, or (more importantly) the harms they may cause in the future.
Let’s talk about torts, baby.
Weil’s thesis concerns tort law. To wildly oversimplify, torts are civil wrongs rather than crimes, and specifically wrongs that don’t arise from a breach of contract. That covers all kinds of things. It is a tort (as well as a crime) for you to punch me in the face. It is a tort for me to infringe on a patent or copyright. And it is a tort for a company to sell a dangerous product that injures people.
That last category is where Weil focuses most. He argues that AI companies should be subject to a “strict liability” standard. Under normal, less stringent liability rules, courts generally require a finding of some intent, or at least negligence, on the part of the party responsible for the harm before awarding damages. If you drive recklessly and crash your car into someone, you are liable. If you crash because you suffered a sudden heart attack, you generally are not.
Strict liability means that if your product or activity causes foreseeable harm, you are responsible for the damages whether or not you intended the harm and even if you took care to prevent it. Using explosives to blast rock is a classic strict liability activity today: if you are setting off explosions close enough to people that someone could get hurt, and someone does get hurt, you are liable, no matter how careful you were.
Weil does not apply this standard to all AI systems. A chess program, for example, would not meet the strict liability requirement of creating “a foreseeable and very serious risk of harm even with the exercise of reasonable care.” But when an AI’s developers “knew, or should have known, that even if reasonable care was taken in the training and deployment process, the resulting system would pose a very significant risk of physical harm,” he writes, the standard should apply. An example would be a system capable of synthesizing chemical or biological weapons. Systems with advanced capabilities that are known to be misaligned, pursuing goals hidden from their human users (it sounds like science fiction, but such systems are already being created in labs), might also qualify.
Subjecting these kinds of systems to strict liability would put their developers on the hook for significant damages. If someone uses an AI in this category to harm you in some way, you could sue the company behind it for damages. That gives companies a strong incentive to invest in safety measures that prevent such harms, or at least make them rare enough that the expected payouts are smaller than the cost of further prevention.
But Weil takes things a step further. Experts who believe AI poses a catastrophic risk worry about harms that could never be redressed, because we would all be dead; no one can sue over human extinction after the fact. Again, this is necessarily speculative. The idea could be badly wrong, and AI may pose no extinction risk at all. But Weil argues that even this kind of risk can be addressed through tort law.
His idea is to “frontload” the costs of the potential catastrophic harms the technology might cause, so that damages can be awarded before the harm ever occurs. Specifically, punitive damages (awards that do not compensate for a harm that has already happened, but are meant to punish wrongdoing and deter it in the future) would be pegged to the existential risk an AI system poses. He gives the example of a system with a one-in-a-million chance of causing human extinction. Under his proposal, someone harmed in a minor way by that AI could sue and recover damages for that harm, and could also be awarded punitive damages of around $61.3 billion, one millionth of a conservative estimate of the cost of human extinction. Given how many people use and are affected by AI systems, the plaintiff could be almost anyone.
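To spell out the arithmetic those figures imply (a back-of-the-envelope illustration, not a number taken directly from Weil’s paper): a $61.3 billion award that represents one millionth of the estimated cost of extinction implies a baseline estimate of roughly $61.3 quadrillion. The punitive award is simply that estimate scaled by the probability of catastrophe: (1/1,000,000) × $61.3 quadrillion ≈ $61.3 billion.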
Intriguingly, these are changes courts could make on their own, simply by shifting their approach to tort law. But Weil argues that legislation would help too. For example, just as car owners must carry insurance in most places, and just as some states require doctors to carry malpractice insurance, Congress and other national legislatures could require AI companies to carry liability insurance against these kinds of hazards.
Still, in common law countries like the United States, where much of the law is built on tradition and precedent, legislative action is not strictly necessary for courts to adopt a new approach to liability.
Will a lawyer save us?
The downside of this approach is the downside of any measure that regulates or slows a new technology: if the technology’s benefits greatly outweigh its costs, and regulation meaningfully slows progress, the cost of that delay can be enormous. If advanced AI dramatically accelerates drug discovery, for example, delays can literally cost lives. The hard part of AI regulation is balancing the need to prevent truly catastrophic outcomes against the need to preserve the technology’s transformative potential for good.
That said, the United States and other wealthy countries have gotten very good at using legal frameworks and regulation to block highly beneficial technologies, from high-rise buildings to genetically modified foods to nuclear power. There would be something poetic about turning those same tools against a technology that, for the first time, might pose a real threat.
The author Scott Alexander once put this point more eloquently than I could: “Now we face a problem that can only be solved by a brave coalition of obstructionists, overreaching regulators, anti-technology fanatics, socialists, and people who hate everything new on general principles. It’s like a movie where Shaq finds himself in a situation where he can only save the world by playing basketball.”
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!