Virginia AG leads coalition on recommendations for artificial intelligence governance
The letter expresses the desire to properly handle the technology of AI – with due deference to the harm it could cause – while not stifling innovation and technological progress.
(The Center Square) - A bipartisan coalition of 23 attorneys general, co-led by Virginia Attorney General Jason Miyares, submitted a letter advising the National Telecommunications and Information Administration on governance policies on artificial intelligence.
As part of developing policy recommendations on AI, the NTIA invited commentary from policymakers and subject matter experts.
In the letter, the attorneys general urge the NTIA, part of the U.S. Department of Commerce primarily responsible for “advising the president on telecommunications and information policy,” to support policies that “prioritize robust transparency, reliable testing and assessment requirements, and allow for government oversight and enforcement for high-risk uses.”
The letter repeatedly echoes the language of the NTIA and of President Biden, expressing the desire to handle AI responsibly – with due regard for the harm it could cause – without stifling innovation and technological progress.
“As with other emerging technologies,” says the letter, “a critical challenge in this area is to encourage and oversee the proper development of dynamic and trustworthy tools without hampering innovation.”
The attorneys general acknowledge early on that not all AI is equal. Some AI does not handle sensitive personal information, such as bank account numbers or legal data, so the regulatory approach to each type of AI should match the level of risk involved.
To promote transparency, they specifically recommend that consumers be notified of the presence of AI, how it is used in a particular product or service, and their avenues of redress – whether pursuing legal action, if necessary, or simply correcting inaccurate personal information.
They also suggest creating a rating system that lets consumers know the level of risk associated with the AI they’re using – for example, high risk if the AI is handling legal or banking information – so they can opt out if uncomfortable.
The attorneys general also recommend that the NTIA establish standards for “transparency, testing, assessment, and audit[s],” along with standards for certified auditors and a code of ethics for AI. Once those criteria are codified, how an AI system measures up to them could be made public through a similar rating system.
The letter states that AI should undergo frequent evaluation and testing to ensure it functions correctly and accurately and does no harm.
Lastly, the attorneys general suggest ways that any newly developed standards could work cooperatively with existing state regulations that protect “individual privacy and rights,” and argue that the attorneys general “should have concurrent enforcement authority in any Federal regulatory regime governing AI.”