
OpenAI, Anthropic to give model access to NIST’s AI Safety Institute


OpenAI and Anthropic have signed an agreement with the National Institute of Standards and Technology’s (NIST) AI Safety Institute (AISI) to grant the government agency access to the companies’ AI models, NIST announced Thursday.

The Memorandums of Understanding signed by the creators of the ChatGPT and Claude generative AI platforms provide a framework for the AISI to access new models both before and after their public release.

“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models. For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!” OpenAI CEO Sam Altman said in a statement on X.

The U.S. agency will use this access to conduct testing and research evaluating the capabilities and potential safety risks of major AI models. The institute will also offer the companies feedback on how to improve the safety of their models.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said U.S. AISI Director Elizabeth Kelly. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The U.S. AISI is housed under NIST, which is part of the U.S. Department of Commerce. The institute was established in 2023 as part of President Joe Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Early efforts by Anthropic, OpenAI to work with feds

OpenAI and Anthropic have previously made proactive efforts to work with U.S. government entities on improving AI safety; for example, both companies joined the U.S. AI Safety Institute Consortium (AISIC) in February to help develop guidelines for AI testing and risk management.

Both companies were also among a group of seven major AI companies that made voluntary commitments to the White House last year to prioritize safety and security in the development and deployment of their AI models, share information across industry, government and academia to aid in AI risk management, and provide transparency to the public regarding their models’ capabilities, limitations and potential for inappropriate use.

Last year, Anthropic publicly called for $15 million in additional funding for NIST to support research into AI safety and innovation. More recently, the company pushed for amendments to California’s controversial AI safety bill that were intended to ease concerns the bill would stifle AI innovation by placing undue burdens on AI developers.

Previously, Anthropic allowed pre-deployment testing of its Claude 3.5 Sonnet model by the U.K.’s AI Safety Institute, which shared its results with its U.S. counterpart as part of an ongoing partnership between the two institutes.

“Looking forward to doing a pre-deployment test on our next model with the US AISI! Third-party testing is [a] really important part of the AI ecosystem and it’s been amazing to see governments stand up safety institutes to facilitate this,” Anthropic co-founder Jack Clark said in a statement on X.
