NVIDIA has joined the National Institute of Standards and Technology’s new Artificial Intelligence Safety Institute Consortium (AISIC) as part of the company’s efforts to advance safe, secure, and trustworthy AI.
AISIC will work to create tools, methodologies, and standards to facilitate the safe and reliable development and deployment of AI. As a member, NVIDIA will work with NIST (an agency of the U.S. Department of Commerce) and other consortium members to advance the consortium’s mission.
NVIDIA’s participation builds on its track record of working with governments, researchers, and companies of all sizes to help ensure that AI is developed and deployed safely and responsibly.
NVIDIA enables AI safety through a wide range of development initiatives, including NeMo Guardrails, open-source software that helps ensure the responses of large language model applications are accurate, relevant, on-topic, and secure.
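To illustrate how such guardrails are applied in practice, the sketch below shows a minimal NeMo Guardrails setup in Python that loads a rails configuration and generates a guarded response. The `./config` directory and the example prompt are hypothetical placeholders, and exact details may vary across library versions.

```python
# Minimal sketch of wrapping an LLM with NeMo Guardrails (illustrative only).
# Assumes a ./config directory containing a config.yml (model settings) and
# Colang flow definitions; the directory layout and prompt are placeholders.
from nemoguardrails import LLMRails, RailsConfig

# Load the guardrails configuration (model, rails, and dialogue flows).
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Generate a response; the configured rails can keep the answer on-topic
# and block unsafe or irrelevant outputs before they reach the user.
response = rails.generate(messages=[
    {"role": "user", "content": "Can you summarize our return policy?"}
])
print(response["content"])
```

In this pattern, the guardrail logic lives in the configuration rather than the application code, so safety policies can be updated without changing how the model is called.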
In 2023, NVIDIA supported the Biden administration’s voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the National Science Foundation’s National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to advance responsible AI discovery and innovation.
AISIC research focus
Through the consortium, NIST aims to accelerate innovation in trustworthy AI by fostering knowledge sharing and advancing applied research and evaluation activities. AISIC members include more than 200 of the nation’s leading AI creators, academics, government and industry researchers, and civil society organizations, bringing technical expertise in areas such as AI governance, systems and development, and psychometrics.
In addition to participating in working groups, NVIDIA plans to share best practices for using computing resources to implement AI risk management frameworks and improve AI model transparency, along with several open-source AI safety, red-teaming, and security tools it has developed.
Learn more about NVIDIA’s guidelines for trustworthy AI.