In this Politico article, Brenda Leong explains why the White House AI Action Plan’s requirement that the National Institute of Standards and Technology (NIST) remove references to “misinformation” is misguided.
A day after President Donald Trump unveiled plans to accelerate the adoption of artificial intelligence in health care, one expert warns that some aspects of the initiative could complicate state laws aimed at preventing discrimination in health care.
Dan Silverboard, a health care attorney at Holland & Knight, says the White House AI Action Plan’s requirement that the National Institute of Standards and Technology remove references to diversity, equity and inclusion could create significant challenges for state regulations.
As the nation’s primary technology standards body, NIST issues standards and guidance on AI development and implementation. Within the health care sector, there’s considerable concern that AI could make decisions that discriminate against certain patients. In 2022, NIST addressed those concerns by releasing recommended practices for managing bias and discrimination in AI, guidance that Silverboard says may soon disappear.
To understand the implications of the upcoming changes to NIST’s framework, Ruth sat down with Silverboard to discuss the potential impact on patient care and state regulations.
This interview has been edited for length and clarity.
How do states use the NIST AI framework?
The NIST framework is basically a compliance plan for addressing risk posed by AI, including discrimination. The National Association of Insurance Commissioners came out with a model bulletin that will require insurance companies to have programs to mitigate risk caused by AI, things like unfair claims practices, unfair business practices and also algorithmic discrimination. And that bulletin has been adopted, I think, in 24 states, red and blue.
If you have the NIST framework in place, that would satisfy this requirement.
And if the NIST framework no longer specifies how to mitigate discrimination risk?
It bungles that.
You also have specific state laws, in Colorado and California, for example, that prohibit insurance companies from using AI in a way that results in algorithmic discrimination.
The NIST changes complicate the enforcement of these laws.
How else might removing DEI from NIST’s AI framework impact how companies and developers are designing their technology?
Multinational companies have to comply with more than just U.S. law. There is EU law out there. So, how the EU requirements might conflict with the requirements that come at a federal level is anybody’s guess, but a multinational company would have no choice but to comply with other standards in place on the international level.
Silverboard isn’t the only one raising concerns about the AI Action Plan. Brenda Leong, director of the AI division at the law firm ZwillGen, says the call to remove references to “misinformation” from NIST’s risk frameworks is misguided.
“AI systems’ tendency to generate factually inaccurate, misleading, or confidently wrong outputs — hallucinations — is a well-documented challenge,” she said. “The plan shifts away from acknowledging this fundamental technical and safety hazard.”
Originally Published in Politico on July 24, 2025 by Ruth Reader and Erin Schumaker