Technological advances have a track record of productively transforming society.
Many technological innovations, from sewing machines to automobiles to elevators, have flourished under the setting of industry standards and government oversight designed with ethical guidelines, transparency, and responsible implementation in mind.
However, in many cases these policy frameworks were enacted much later, after the impact of the technology had been assessed.
Seat belts, for example, were not mandatory equipment in cars until 1968.
When it comes to artificial intelligence, however, governments around the world need to ensure the responsible development and deployment of the technology on a more accelerated schedule, amid growing concerns about the innovation’s far-reaching capabilities and the potential impact of misuse, as AI takes on a growing role in every field, including work, politics, and daily life.
The United States on Thursday (February 8) announced a new consortium to support the secure development and deployment of generative AI. The consortium will be supported by more than 200 organizations, including academic institutions, leading AI companies, nonprofits, and other key players in the fast-growing AI ecosystem.
The newly formed U.S. AI Safety Institute Consortium (AISIC), established by the National Institute of Standards and Technology (NIST), will foster collaboration between industry and government to advance the safe use of AI. The goal is to help prepare the United States to address the capabilities of next-generation AI systems with appropriate risk management strategies.
“AI is moving the world into a very new realm,” Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and Director of NIST, said in a statement. “And, as with any new technology or new application of technology, we need to know how to measure its capabilities, limitations, and impacts. That’s why NIST is bringing together this incredible collaboration of representatives from industry, academia, civil society, and government to address issues of national importance.”
At a press conference announcing the creation of AISIC, Secretary of Commerce Gina Raimondo emphasized that the work the safety institute is doing “cannot be done in a bubble, disconnected from what’s going on in the industry and in the real world.”
See also: How AI companies plan to build and control superhuman intelligence
AI pioneers continue to lead the way
Among AISIC’s more than 200 members, companies representing the AI space include Adobe, OpenAI, Meta, Amazon, Palantir, Apple, Google, Anthropic, Salesforce, IBM, Boston Scientific, Databricks, Nvidia, Intel, and many more. But they are not alone.
Financial institutions such as Bank of America, JPMorgan Chase, Citigroup, and Wells Fargo, as well as financial services companies such as Mastercard, have also agreed to support the safe and responsible development of the domestic AI industry.
“Progress and responsibility must go hand in hand,” Nick Clegg, Meta’s president of global affairs, said in a statement. “Collaboration across industry, government and civil society is essential to developing common standards for safe and reliable AI. We are eager to be part of this consortium and work closely with the AI Safety Institute.”
Arvind Krishna, IBM Chairman and CEO, added: “The new AI Safety Institute will play a critical role in ensuring that American-made artificial intelligence is used responsibly and in ways people can trust. We are proud to support the Institute and applaud Secretary Raimondo and the administration for making responsible AI a national priority.”
Also read: NIST says protecting AI systems from cyberattacks is ‘still an open question’
NIST has been pushed to the forefront of the U.S. government’s approach to dealing with AI, with a White House executive order tasking it with developing national guidelines for evaluating and red-teaming AI models, facilitating the development of consensus-based standards, and providing test environments for evaluating AI systems.
According to a PYMNTS Intelligence study, approximately 40% of executives believe generative AI needs to be adopted immediately, and 84% of business leaders believe generative AI will have a positive impact on their workforce.
“[AI] is the general-purpose technology most likely to lead to significant productivity gains,” Avi Goldfarb, Rotman Professor of AI and Healthcare and Professor of Marketing at the University of Toronto’s Rotman School of Management, told PYMNTS in an interview posted in December. “…The important thing to remember in all discussions of AI is that if you slow down AI, you will also slow down its benefits.”
But AISIC will have its work cut out for it. AI safety is a multifaceted, many-headed beast.
“There is a belief that there is a difference between cybersecurity and AI security,” Kojin Oshiba, co-founder of end-to-end AI security platform Robust Intelligence, told PYMNTS in an interview published in January. “CISOs know the different components of cybersecurity, such as database security, network security, and email security, and have solutions for each. But when it comes to AI, what constitutes AI security and what needs to be done for each component is not widely known. The landscape of risks and required solutions is unclear.”
By combining the efforts and perspectives of the more than 200 ecosystem stakeholders supporting it, AISIC could help create a more robust and responsible framework for the development and deployment of generative AI technologies.
For all of PYMNTS AI coverage, subscribe to the daily AI Newsletter.