The White House issued new rules on how government can use AI. Here's what they do

Vice President Harris watches as President Biden signs an executive order on artificial intelligence on Oct. 30. On Thursday, the Biden administration issued new rules on how government agencies can implement AI. (Brendan Smialowski / AFP via Getty Images)

The Biden administration announced new guidance to federal agencies on how they can and cannot use artificial intelligence, in a memo released by the Office of Management and Budget.

It's a significant step toward ensuring the safe use of AI, a challenge that private companies and other countries are also grappling with.

A draft of the guidance was released last fall, ahead of Vice President Harris' trip to the first global AI summit in the United Kingdom. The draft was then opened for public comment before being released in its final form Thursday.

Harris said the guidance was "binding" and emphasized the need for guidelines to put public interest first on a global scale.

"President Biden and I intend that these domestic policies serve as a model for global action," Harris said in a call with reporters Wednesday. "We will continue to call on all nations to follow our lead and put the public interest first when it comes to government's use of AI."

The guidance to agencies tries to strike a balance between managing the risks of artificial intelligence and encouraging innovation.

It also requires that each agency appoint a chief artificial intelligence officer, a senior role that will oversee implementation of AI. And it outlines how the government is trying to grow the workforce focused on AI, including by hiring at least 100 professionals in the field by this summer.

"The public deserves confidence that the federal government will use the technology responsibly," said Shalanda Young, the director of the Office of Management and Budget (OMB).

Agencies have until Dec. 1 to implement AI safeguards

The guidance to agencies says that any AI technology they use has to have proper safeguards in place by Dec. 1. If they can't provide those safeguards, they have to stop using the technology, unless they can prove using it is necessary for the function of the agency.

The required safeguards include assessing, testing and monitoring the impacts of the AI technology, but the guidance leaves the specifics of that process unclear.

Alexandra Reeve Givens, the president and CEO of the Center for Democracy and Technology, told NPR that she still has questions about what the testing requirements are and who in the government has the expertise to greenlight the technology.

"I see this as the first step. What's going to come after is very detailed practice guides and expectations around what effective auditing looks like, for example," Reeve Givens said. "There's a lot more work to be done."

One of the next steps Reeve Givens is eyeing is the guidance the administration will release on the procurement process, including what requirements will apply to companies whose AI technology the government wants to buy.

"That really is the inflection point when a lot of decisions and values can be made and a lot of testing can be done before the government is spending dollars on the system in the first place," she said.

Transparency from agencies will allow for better scrutiny

Reeve Givens said the part of the guidance from OMB on transparency was particularly noteworthy.

Under the new guidance, agencies are required to publish online each year an accessible inventory of how they're using AI and what risks are associated with those uses. That provision is "key," Reeve Givens said.

"We can then ask questions about, well, 'What testing did you do? What did that look like?' There can be more eyes and more public scrutiny on those use cases, but this gives us the hook to be able to start that public conversation," she said.

The Defense Department and intelligence agencies are exempt from sharing their use of AI, though.

The guidance could be a "catalyst" for more use of AI

The OMB guidance also sets out to encourage innovation through AI. Ifeoma Ajunwa, a law professor at Emory University, told NPR the guidance sends a signal to agencies that it's OK to look into using AI technology.

"I think this will be a catalyst for agencies that may perhaps have had some trepidation or reservation about using AI technologies," she said.

"I don't want agencies to take this as carte blanche to use AI technologies in all instances," Ajunwa added. "But I do want them to see this as an opening, as a catalyst that they can use it when appropriate and when safety guardrails have been put in place."

Several government agencies already use artificial intelligence, but the memo from the Biden administration outlines other ways the technology could be impactful — from forecasting extreme weather events to tracking the spread of disease and opioid use.

Copyright 2024 NPR. To see more, visit https://www.npr.org.

Deepa Shivaram
Deepa Shivaram is a multi-platform political reporter on NPR's Washington Desk.