Artificial intelligence holds promise for innovation and advancement, but it also has the potential to cause harm. To enable the responsible development and use of AI, the International Organization for Standardization (ISO) recently released a new standard for AI management systems, ISO/IEC 42001. According to ISO, the standard “provides organizations with the comprehensive guidance they need to use AI responsibly and effectively, even as the technology rapidly evolves.”
As AI rapidly matures and is widely deployed around the world, a patchwork of standards has emerged from major AI companies such as Meta, Microsoft, and Google. (Meta, however, reportedly disbanded its Responsible AI team in November.) The Responsible AI Institute, based in Austin, Texas, also offers a certification program. But maintaining consistent standards and practices has been a perennial challenge throughout the history of technology, and standards organizations like ISO and IEEE are a natural place to look for a widely agreed-upon set of parameters for the responsible development and use of AI.
“When you see this kind of buy-in from organizations that promote the responsible development and use of AI, others will follow.” —Virginia Dignum, Umeå University, Umeå, Sweden
In ISO’s case, the standard centers on an AI management system: a catalog or inventory of the different AI systems a company is using, along with information about how, where, and why those systems are being used, said Umang Bhatt, a faculty fellow at New York University and an advisor to the Responsible AI Institute. As specified in the standard, the purpose of an AI management system is to establish “policies and objectives related to the responsible development, delivery, and use of AI systems, and processes for achieving those objectives.”
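To make the catalog idea concrete, here is a minimal sketch, in Python, of what one entry in such an AI system inventory might record. The field names and example values are hypothetical illustrations, not taken from ISO/IEC 42001 itself:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One hypothetical entry in an organization's AI system inventory.

    Field names are illustrative, not drawn from ISO/IEC 42001.
    """
    name: str                 # which system this entry describes
    purpose: str              # why the system is used
    deployment_context: str   # where and how it is used
    owner: str                # who is accountable for it
    objectives: list[str] = field(default_factory=list)  # responsible-AI objectives
    monitoring: list[str] = field(default_factory=list)  # processes for achieving them

# Example entry:
record = AISystemRecord(
    name="loan-approval-model-v3",
    purpose="Prioritize loan applications for human review",
    deployment_context="Internal underwriting workflow, US market",
    owner="risk-analytics-team",
    objectives=["Document training data sources", "Audit for demographic bias"],
    monitoring=["Quarterly fairness audit", "Drift alerts on input distributions"],
)
print(record.name, "-", record.purpose)
```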
ISO’s new standard thus provides a set of specific guidelines to support responsible AI, rather than just high-level principles, said Hoda Heidari, co-director of the Responsible AI Initiative at Carnegie Mellon University. Heidari said the standard provides confidence that “proper processes are in place in the creation and evaluation of systems before release, and that appropriate processes exist to monitor the system and address any adverse effects.”
What IEEE, ISO, and governments are considering
Meanwhile, IEEE Spectrum’s parent organization, IEEE, also maintains and develops a wide range of standards across many technical areas. As of this writing, Spectrum has learned of at least one effort currently underway within IEEE’s broader global standards-development organization to create responsible-AI standards, reportedly an outgrowth of a 2020 recommended practice for the development and use of AI. Additionally, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has published guidance to promote the ethically aligned development of autonomous systems.
Like many standards in the technical realm, ISO standards are not mandatory. “What will make companies adopt this? The standard itself is not enough. There needs to be a reason and an incentive for developers to adopt it,” says Chirag Shah, founding co-director of RAISE, the center for responsible AI at the University of Washington. He added that organizations may also view the standard as an added burden, especially smaller companies without sufficient resources or larger companies that already have their own standards in place.
“This is really just a trial, and we hope it becomes part of the culture of the software development community.” —Umang Bhatt, New York University
Virginia Dignum, professor of responsible AI and director of the AI Policy Lab at Umeå University in Sweden, agrees, saying that the standard “will only be useful in practice if enough organizations adopt it, and in doing so identify what works and what doesn’t in the standard.” To that end, Dignum proposes persuading large technology companies to adopt the standard, because “this kind of buy-in from organizations promoting the responsible development and use of AI will encourage other companies to follow.” Amazon’s AWS, for example, participated in the creation of the standard and is currently in the process of adopting it.
Another motivation for adopting the standard is to prepare for, and create a framework around, upcoming regulations from other bodies that may align with ISO’s new standard. The US government, for example, recently announced an executive order on AI, and the European Union’s AI Act is expected to be fully implemented by 2025.
Trust is also important
An additional incentive for AI companies to adopt the standard is to foster trust with end users. In the United States, for example, people express more concern than excitement about the impact AI will have on their daily lives, with worries ranging from the data used to train AI to its bias and inaccuracy to its potential for abuse. “If you can assure consumers that standards and best practices exist and are being followed, they will trust the system more and be more willing to use it,” Heidari said.
Much like a car’s braking system, which is built and tested to specific standards and specifications, “things are developed a certain way and problems are addressed even if the user doesn’t understand what the standards are. That includes auditing, checking, and overseeing what is being developed,” Dignum says.
For AI companies considering adopting the standard, Bhatt advises viewing it the same way as the practices they have already established for tracking issues in their AI systems. “These standards will be deployed in a very similar way to the continuous monitoring tools that you build and use,” he says. “This is really just a trial, and we hope it becomes part of the culture of the software development community.”
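As a rough illustration of Bhatt’s analogy, here is a minimal sketch of the kind of continuous-monitoring check he describes. The model name, threshold, and alerting behavior are hypothetical, not prescribed by the standard:

```python
# Hypothetical continuous-monitoring check: compare a model's live
# error rate against a budget recorded in the AI management system.
THRESHOLDS = {"loan-approval-model-v3": 0.05}  # max acceptable error rate

def check_model_health(model_name: str, recent_errors: int, recent_predictions: int) -> bool:
    """Return True if the model is within its documented error budget."""
    error_rate = recent_errors / max(recent_predictions, 1)
    within_budget = error_rate <= THRESHOLDS[model_name]
    if not within_budget:
        # In practice this would notify the system's owner and log an incident.
        print(f"ALERT: {model_name} error rate {error_rate:.2%} exceeds budget")
    return within_budget

check_model_health("loan-approval-model-v3", recent_errors=12, recent_predictions=180)
```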
Beyond implementation, Heidari hopes that ISO’s new standard will prompt a change in the mindset of AI companies and of the people who build these systems. She cites design choices made when training machine-learning models as an example. It may seem as though you’re making purely engineering or technical decisions with no meaning beyond the machine you’re working on, but “all of these choices have major implications when the resulting models are used to automate decision-making processes and practices in the field,” she says. “The most important thing for the developers of these systems is to keep in mind that many of the choices they make have real-world consequences, whether they know it or not, and whether they accept it or not.”