About damn time. That was the response from AI policy and ethics wonks to news last week that the Office of Science and Technology Policy, the White House’s science and technology advisory agency, had unveiled an AI Bill of Rights. The document is Biden’s vision of how the US government, technology companies, and citizens should work together to hold the AI sector accountable.
It’s a great initiative, and long overdue. The US has so far been one of the few Western nations without clear guidance on how to protect its citizens against AI harms. (As a reminder, these harms include wrongful arrests, suicides, and entire cohorts of schoolchildren being graded unjustly by an algorithm. And that’s just for starters.)
Tech companies say they want to mitigate these sorts of harms, but it’s really hard to hold them to account.
The AI Bill of Rights outlines five protections Americans should have in the AI age, including data privacy, the right to be protected from unsafe systems, and assurances that algorithms shouldn’t be discriminatory and that there will always be a human alternative. Read more about it here.
So here’s the good news: The White House has demonstrated mature thinking about different kinds of AI harms, and this should filter down to how the federal government thinks about technology risks more broadly. The EU is pressing on with regulations that ambitiously try to mitigate all AI harms. That’s great but incredibly hard to do, and it could take years before its AI law, called the AI Act, is ready. The US, on the other hand, “can tackle one problem at a time,” and individual agencies can learn to handle AI challenges as they arise, says Alex Engler, who researches AI governance at the Brookings Institution, a DC think tank.
And the bad news: The AI Bill of Rights is missing some pretty important areas of harm, such as law enforcement and worker surveillance. And unlike the actual US Bill of Rights, the AI Bill of Rights is more an enthusiastic recommendation than a binding law. “Principles are frankly not enough,” says Courtney Radsch, US tech policy expert for the human rights organization Article 19. “In the absence of, for example, a national privacy law that sets some boundaries, it’s only going part of the way,” she adds.
The US is walking a tightrope. America doesn’t want to seem weak on the global stage when it comes to this issue. The US plays perhaps the most important role in AI harm mitigation, since most of the world’s biggest and richest AI companies are American. But that’s also the problem: globally, the US lobbies against rules that would set limits on its tech giants, and domestically it’s loath to introduce any regulation that could potentially “hinder innovation.”
The next two years will be critical for global AI policy. If the Democrats don’t win a second term in the 2024 presidential election, it is very possible that these efforts will be abandoned. New people with new priorities might drastically change the progress made so far, or take things in a completely different direction. Anything could happen.