This is also why the governance debacle at OpenAI in late November 2023 was so troubling. (I resigned from the Board of Directors on June 1, 2023, to pursue the Republican presidential nomination.) In just five days, four of the six members of the Board of Directors removed the chairman and fired the CEO, fellow board member Sam Altman. The board ultimately reinstated Altman after more than 90 percent of the remaining employees threatened to quit. The current OpenAI board has three members: one continuing member and two new members.
We still don’t really understand why the board did this, and OpenAI certainly has fundamental questions about governance to answer. Can a four-person board run a $90 billion company? Is the structure of OpenAI, perhaps the most advanced AGI company in the world, too complex?
This controversy, however, raises larger philosophical questions about the development of AGI. Who can be trusted to develop such powerful tools and weapons? Once a tool is created, who should be entrusted with it? Will the advent of AGI be a net positive for humanity rather than an extinction-level event? How can we be sure?
As this technology becomes less science fiction and more scientific fact, its governance can no longer be left to the whims of a few. As in the nuclear arms race, bad actors, including our adversaries, are advancing without regard for ethics or humanity. This moment is not just about corporate politics. It is a call to action to put guardrails in place so that AGI becomes a force for good rather than a harbinger of catastrophic consequences.
Legal liability
Let’s start by assigning legal liability. All AI tools must comply with existing laws, with no special exemptions shielding developers from liability when their models break the law. With AI, we cannot afford to repeat the mistakes we made with software and social media.
The current landscape is a fragmented set of city and state regulations, each targeting specific applications of AI. AI technologies in sectors such as finance and healthcare generally operate without AI-specific guidance, relying instead on interpretations of the existing legal frameworks that apply to those industries.
This patchwork approach, combined with intense market pressure on AI developers to be first to market, tempts the brightest minds in the field to repeat the regulatory and legal leniency seen in other technology sectors. The result can be gaps in accountability and oversight that undermine the responsible development and use of AI.
By 2025, cybercrime is projected to cost $10.5 trillion annually. Why? One reason is that our legislatures and courts do not consider software to be a product, so it is not subject to strict liability.
Social media has caused an increase in self-harm among teenage girls, allowed white supremacists to spread hatred, allowed anti-Semitic groups to promote bigotry, and given foreign intelligence services an opening to try to manipulate elections. Why? One reason is that Congress exempted social media from the regulatory rules that radio, television, and newspapers must follow.
When AI is used in banking, those who build the tools and those who deploy them must comply with all existing banking laws and be held accountable. No industry should be granted an exemption because AI is “new.”
Protecting intellectual property in the age of AI
Second, protect intellectual property. The creators whose work trains these models should be appropriately compensated when that work is used in AI-generated content.
If someone writes a book, profits from it, and in the process uses material from my blog beyond what fair use allows, I am entitled to royalties. The same framework should apply to AI.
Companies like Adobe and Canva already allow creators to earn royalties when their content is used. Applying and adapting existing copyright and trademark law to AI, so that companies follow the existing rules for paying creators for their content, would also secure a steady stream of data to train algorithms. That, in turn, encourages a robust industry of content creators to keep producing quality content.
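To make the idea concrete, here is a minimal sketch in Python of how a training-data royalty pool might be apportioned in principle. Everything in it is a hypothetical assumption for illustration: the record fields, the weights, and the proportional-split rule are invented here, not drawn from any existing royalty system or company API.

```python
from dataclasses import dataclass

@dataclass
class TrainingWork:
    creator: str   # who made the content (hypothetical identifier)
    weight: float  # assumed share of the training corpus this work represents

def apportion_royalties(works: list[TrainingWork], revenue: float) -> dict[str, float]:
    """Split a revenue pool among creators in proportion to each work's
    share of the training data. A real scheme would need audited provenance
    tracking; this only shows the apportionment arithmetic."""
    total = sum(w.weight for w in works)
    return {w.creator: revenue * (w.weight / total) for w in works}

# Illustrative use: $1,000 of model revenue split across three creators.
works = [
    TrainingWork("news_archive", 0.50),
    TrainingWork("blog_author", 0.25),
    TrainingWork("photo_library", 0.25),
]
print(apportion_royalties(works, revenue=1000.0))
# {'news_archive': 500.0, 'blog_author': 250.0, 'photo_library': 250.0}
```

The hard part of any such scheme is not the arithmetic but the provenance tracking behind the weights, which is exactly why adapting existing copyright and trademark law matters.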
Safety permit enforcement
Third, implement safety permits. Just as companies need permits to build nuclear power plants or parking lots, developers of powerful AI models should need permits too. This would ensure that powerful AI systems are safe, reliable, and operated according to agreed-upon standards.
The Biden administration made a valiant effort to continue the trend, established by American presidents since Barack Obama, of addressing AI through executive orders. But President Joe Biden’s recent executive order on AI missed the mark. It amounts to saying, “Hey guys, if you’re doing something interesting with AI, please let Uncle Sam know.”
The White House should use its convening power to develop a truly strong definition of powerful AI. I urge the White House to prioritize defining it by a system’s level of autonomy and decision-making ability, especially where AI decisions significantly affect individuals’ rights, safety, and privacy. We must also be wary of AI systems that process large amounts of personal or sensitive data, or that can easily be repurposed for harmful or unethical ends.
To ensure comprehensive protection against the risks of truly powerful AI, every company that builds an AI model meeting this new standard should be required to obtain clearance from the National Institute of Standards and Technology before releasing its product to the public.
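As a thought experiment only, the criteria above could be encoded as a pre-release checklist. This is a minimal sketch assuming invented thresholds and field names; any real standard would come from NIST rulemaking, not from anything shown here.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    autonomy_level: int               # 0 = passive tool; higher = more independent action
    affects_rights_or_safety: bool    # decisions with significant impact on individuals
    processes_sensitive_data: bool    # large amounts of personal or sensitive data
    easily_repurposed_for_harm: bool  # dual-use risk

def requires_permit(m: ModelProfile, autonomy_threshold: int = 3) -> bool:
    """Hypothetical test for whether a model would fall under the proposed
    'powerful AI' definition and need clearance before public release.
    The threshold and the OR-of-criteria rule are illustrative assumptions."""
    return (
        m.autonomy_level >= autonomy_threshold
        or m.affects_rights_or_safety
        or m.processes_sensitive_data
        or m.easily_repurposed_for_harm
    )

# Example: a highly autonomous system making loan decisions would qualify.
loan_model = ModelProfile(4, affects_rights_or_safety=True,
                          processes_sensitive_data=True,
                          easily_repurposed_for_harm=False)
assert requires_permit(loan_model)
```

The point of the sketch is that a definition built on autonomy, decision impact, data sensitivity, and repurposability is testable: a regulator can ask concrete questions about a model and get a yes-or-no permitting answer.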
Vision for the future of AI
At the heart of all these regulations are transparency and accountability. Transparency means being able to understand how an AI system works, so that experts can evaluate how decisions are made; this is important for preventing hidden biases and errors. Accountability makes clear who is responsible for fixing AI systems when they cause harm or make mistakes; this is essential for maintaining public trust and ensuring the responsible use of AI technology.
These values are especially important as AI tools become more integrated into critical sectors such as healthcare, finance, and criminal justice, where decisions have a significant impact on people’s lives.
The events at OpenAI offer a vital lesson and serve as a beacon for action. Governance of artificial general intelligence is not just a corporate issue; it is a global concern that touches every aspect of our lives.
The path forward requires a strong legal framework, respect for intellectual property, and rigorous safety standards, akin to the care we take with nuclear energy. But beyond regulation, we need a shared vision: one in which technology serves humanity and innovation is balanced with ethical responsibility. We must seize this moment with wisdom and courage and make a united effort toward a future that uplifts all of humanity.