Agencies across the federal government are taking steps to regulate artificial intelligence, seeking to promote safety and minimize the technology’s harms, as the overnight explosion of AI tools such as ChatGPT spurs scrutiny from policymakers around the globe.
The Commerce Department took its most significant action to address the emerging technology Tuesday, when it asked the public to weigh in on how it could create regulations that would ensure AI systems work as advertised. The agency raised the possibility of an auditing system, which could assess whether AI systems include harmful bias or distort communications to spread misinformation or disinformation.
New assessments and protocols may be needed to ensure AI systems work without negative consequences, much like financial audits confirm the accuracy of business statements, said Alan Davidson, an assistant commerce secretary.
The Federal Trade Commission is inspecting the ways generative AI could exacerbate scams or fraud, Sam Levine, the director of the agency’s consumer protection bureau, said in a Tuesday interview. The agency issued a blistering February blog post warning companies it is watching for deceptive claims about AI-powered technology.
“Some of the recent developments around generative AI bring a whole new set of risks,” Levine said.
Meanwhile, the National Institute of Standards and Technology, a federal laboratory that develops standards for new technology, has developed frameworks for ways to responsibly use artificial intelligence.
No clear consensus has emerged over what method regulators might use to oversee the technology, or which agency would take the lead. In a recent meeting in Silicon Valley between tech executives and lawmakers, participants floated the idea that the NIST might coordinate AI efforts across the government, according to a person familiar with the meeting, who spoke on the condition of anonymity to describe private conversations.
As an AI arms race heats up in Silicon Valley, Washington agencies are grappling with what role they should play in regulating the rapidly evolving technology. Turf wars, political divisions and limited tech expertise have stymied attempts to govern the tech industry, which has no centralized regulator and falls under the jurisdiction of multiple agencies.
AI, which is embedded in critical areas of American life, presents a particularly thorny task for regulators. Interrogating these often opaque systems requires advanced technical operations, and providing auditors with access to the large data sets used to train AI raises privacy concerns. Some technologists have warned compliance burdens could hamstring American companies’ ability to compete.
But in the absence of new laws allocating resources and increasing regulatory powers, agencies are hustling to apply their existing tools to the Wild West of generative AI, in an attempt to govern an area that many consumer advocates warn is ripe for exploitation.
Adam Thierer, a senior fellow for the technology and innovation team at the think tank R Street Institute, said the chances of Congress passing comprehensive AI laws “are slim to none.”
“All of the action therefore is really at the agency level federally,” he said.
In Silicon Valley last week, members of the House panel focused on competition with China huddled with tech executives and discussed how the government could develop more AI expertise, according to the person familiar with the meeting.
The executives praised NIST’s AI frameworks for industry, which describe best practices for industry to address AI risks, and speculated that the agency could play a role in strengthening the government’s work on AI, much like the Cybersecurity and Infrastructure Security Agency coordinates the government’s response to cyberattacks.
NIST did not immediately respond to a request for comment.
In recent weeks, the government’s interest in AI has accelerated, as consumer advocates and technologists descend on Washington, aiming to influence oversight of a technology said to be as transformative as the internet. As companies compete to bring new AI tools to market, policymakers are struggling to foster innovation in the tech sector while limiting public harms.
Many policymakers express a desire to move quickly on AI, having learned from the slow process of assembling proposals for social media.
As pressure grows on the industry, AI executives are holding their own meetings to discuss public policy standards and rules. Veteran venture capitalist Ron Conway on Wednesday plans to gather executives from Google, OpenAI and other companies for a meeting on AI policy, according to a representative for Conway. (Axios first reported the meeting.)
Washington agencies’ increased focus on AI has so far received mixed reviews. The Commerce Department on Tuesday unveiled its efforts to hold AI more accountable at an event at the University of Pittsburgh. Standing at a lectern at the school’s University Club, Davidson presented the agency’s plan to explore an auditing process.
“Accountability policies will help us shine a light on these systems, and help us verify whether they are safe, effective, responsible and lawful,” Davidson said at the event.
The announcement was celebrated by lawmakers concerned about AI risks, who have been pressuring companies to do more to promote safety.
“The use of AI is growing — without any required safeguards to protect our kids, prevent false information, or preserve privacy,” tweeted Sen. Michael F. Bennet (D-Colo.). “The development of AI audits and assessments can’t come soon enough.”
Yet venture capitalists in Silicon Valley expressed unease over greater government regulation.
“This is very concerning language coming from Washington,” tweeted David Ulevitch, a general partner at the venture capital firm Andreessen Horowitz. “AI innovation is an American imperative — our adversaries are not slowing. Officials would be wise to clear a path, not create roadblocks.”
The National Telecommunications and Information Administration, the FTC and other government agencies have weighed the risks of AI for years. Last year, the Biden administration unveiled a “blueprint for an AI bill of rights,” which said that consumers should not face discrimination by algorithms, and that people should be protected from abusive data practices. But the guidelines are voluntary and stopped short of setting new restrictions around the use of AI.
“We can’t afford to confuse any of these approaches with enforceable regulation,” said Amba Kak, a former FTC AI adviser who now serves as the executive director of the AI Now Institute. “Industry certainly knows the difference.”
Absent new powers or systems, the government is creatively wielding existing tools to respond to new AI risks.
Levine, the FTC consumer protection chief, said the FTC Act — the more than 100-year-old law that the FTC uses to address privacy abuses, scams and other deceptive practices — could be used to address new abuses of generative AI.
Levine said, for instance, that the targeted advertising business model, which is powered by algorithms, has been a “gold mine” for scammers eager to find prey online. With the advent of recent tools, Levine said the agency is now concerned not just about AI being used to target ads, but also to generate advertising that impersonates people and companies.
“What we see time and time again with new technology is that scammers are often the first to figure out how to take advantage, and that’s exactly our fear here,” Levine said.
The agency is also on the lookout for companies that are falsely marketing the capabilities of their products.
Kak said the recent activity at the FTC is a signal that the federal government is not starting with a “blank slate” with AI. But she warned against allowing companies making generative AI to lead the debate about how to address safety in artificial intelligence.
“It’s regulators, and the public, that must determine what these standards of evaluation are, and when they are sufficiently met,” she said.