Bombing Iran immediately after the president goes on an unhinged rant about Anthropic, and after the Secretary of War (emphasis mine) labels them a “supply-chain risk” (in the NIST sense: essentially claiming that using Claude could expose government users to vulnerabilities because, presumably, Anthropic is an “out-of-control, Radical Left AI company”), seems entirely on-brand for the chaos-monkey style of politics this administration loves. Flood the zone, etc.

I appreciate that Anthropic is making a stand, but it’s a fairly milquetoast one. The two red lines they’ve drawn are:

There are a lot of very scary things that can be (and presumably are being) enabled by frontier models today that fall short of these. Anil Dash puts it well:

To be clear: I am glad that Dario, and presumably the entire Anthropic board of directors, has made this choice. However, I don’t think we need to be overly effusive in our praise. The bar cannot be set so impossibly low that we celebrate merely refusing to directly and intentionally enable war crimes like the repeated bombing of unknown targets in international waters, in direct violation of both U.S. and international law. That is basic common sense, and it’s shocking and inexcusable that any other technology platform would enable a sitting official of any government to knowingly commit such crimes.

Of course, not everyone clears even that low bar. And we live in a world where you can be excoriated by the President for any step out of line, however milquetoast. Anthropic surely knew that.

FWIW, I think this was all far more about power than it was about capabilities. I don’t think the Pentagon has fully autonomous weapons waiting to be unleashed as soon as their Claude Code API key goes live. It was about the power to force a high-flying company to bend the knee, and the rage when they refused.

Terminators when?

The difference between partially and fully autonomous weapons is an interesting one. Guided missiles are a kind of partially autonomous weapon; they don’t choose the target or when to fire, but they autonomously track and destroy the target once launched. Ship-based antimissile weapons, like the US Navy’s famous CIWS, are yet another level—they don’t choose when to turn on, but once on they choose and engage their targets autonomously (sometimes incorrectly) because their targets (low-flying antiship missiles) are too fast to have humans in the loop.

Most of the drones used in Ukraine aren’t autonomous at all; they’re actively flown by human pilots, and severing the link between drone and pilot (either by jamming the control signal or breaking the fiber-optic cable they spool out behind themselves) renders the drone ineffective. Clearly, nearly fully autonomous drones (like the Russian V2U) are coming. Perhaps they’re still launched by humans and told to go hunt in a particular area, but after that, they decide what to attack and how.

It’s unclear (to me at least, but I’m just an interested spectator) how frontier models would play a role in this world. Presumably it would be in the targeting and launch-decision part of the kill chain rather than in onboard intelligence, given the size and expense of the equipment required to run them. I’m slightly less worried about that use case, given how much control we already cede to machines in running weapons systems. But a future where the equivalent of Opus 4.6 runs on an embedded system at real-time speeds, in control of a flying bomb launched by another autonomous system on the basis of sensor data too fast and voluminous for any human to comprehend, is rightfully terrifying.

Parting gift: this IEEE article is a great introduction to the current state of drones and anti-drone technology operating in Ukraine.
