Senate Democrats are trying to ‘codify’ Anthropic’s red lines on autonomous weapons and mass surveillance

Anthropic’s fight with the Pentagon is expanding to Congress. Sen. Adam Schiff (D-CA) is working on a new bill to “codify” Anthropic’s red lines and ensure humans make the ultimate decisions in questions of life and death, and Sen. Elissa Slotkin (D-MI) recently introduced a bill to limit the Defense Department’s ability to use AI for mass surveillance of Americans.

The Trump administration blacklisted Anthropic earlier this month, designating it a supply chain risk after the company set limits on how the military could use its AI models. Anthropic has filed suit, accusing the government of violating its constitutional rights. The company has insisted that the Pentagon avoid using its products for fully autonomous weapons and mass domestic surveillance — resisting terms signed by major competitor OpenAI — and is now waiting to hear whether a court will block the supply chain risk designation.

“I was alarmed to see the Pentagon take aim at Anthropic because Anthropic was simply trying to insist on policies that the vast majority of American people agree with,” Schiff told The Verge in a phone interview last week. “The idea that they would therefore then try to turn around and kill the company, kill one of the preeminent leaders of AI is such a hostile, dictatorial kind of an act. They would set back America’s leadership in AI, and Anthropic is one of the very best.”

Schiff’s office is still drafting the legislation, but he said the aim is to ensure AI isn’t used for “certain illicit purposes.” Slotkin introduced a similar bill last week, called the AI Guardrails Act, to reinforce protections against domestic mass surveillance and the use of autonomous lethal weapons without human intervention. It’s not yet clear how Schiff’s bill will differ or align on key points, though it covers similar ground; Schiff spokesperson Ruby Robles Perez said his office is continuing to talk with stakeholders and industry leaders before finalizing the bill. Slotkin’s bill restricts the Defense Department’s ability to use AI to detonate a nuclear weapon or to track people or groups in the US, but it also outlines how the defense secretary can notify Congress in the event that “extraordinary circumstances” necessitate the use of AI to deploy autonomous lethal weapons.

In the bill Schiff is drafting, the specifics about what constitutes an autonomous weapon or domestic surveillance are still the subject of discussion, but he said they are also looking to existing frameworks from the Biden administration. “We haven’t resolved all of those questions yet, including how this language would apply to those who were non-citizens, but people who are lawfully in the country are deserving of protection. And then as a human rights matter, it may go beyond that as well,” Schiff said.

“We don’t want to delegate that kind of responsibility over life and death to an algorithm”

One principle guiding this effort is the idea of a human in the loop. “Whenever a technology has the capability of taking a human life, there needs to be a human operator in the chain of command. We don’t want to delegate that kind of responsibility over life and death to an algorithm,” Schiff said.

But that doesn’t mean there’s no role for AI on the battlefield. “There are certainly circumstances in which, because AI can operate faster than human beings can, you want AI to be able to tip and cue information for human operators either that need to take steps to defend the country or that need to adjust given what it can see in real time on the battlefield,” Schiff said. “So the applications are very significant. They can be very beneficial from a national security and defense perspective. But they can also mean life or death. They can mean distinguishing between a civilian target and a military target, or getting those things wrong.”

With Democrats in the minority in both chambers, the bill’s short-term prospects may depend on Republicans’ willingness to be seen as critical of the administration. With midterms approaching, passing new legislation will only get harder through the end of the year, though the balance of power could shift if Democrats retake one or both chambers. It could still take at least another week or two to unveil the proposal, and Schiff is eyeing legislative vehicles like the National Defense Authorization Act (NDAA) to move it forward.

“There’s certainly bipartisan support in the public for these kinds of limitations,” Schiff said. “As always, you confront the issue when you’re taking steps to prevent any kind of administrative abuse that it raises issues with some of my colleagues about whether it can be read as an implicit criticism of the administration. So we’ll have to deal with that, but I’m hoping that we can make it bipartisan.”

Since Anthropic’s standoff with the Pentagon began, OpenAI has scrambled to defend its decision to sign terms that have drawn public pushback. Even with OpenAI saying it will now insist on the same terms, Schiff said he’d rather not have to place that trust in the Pentagon or any CEO. “I would have a lot more confidence, frankly, if these were statutory requirements, than relying on the lawfulness of the Pentagon or the word of an AI CEO,” he said.
