Donald Trump and the US administration have been feuding with AI company Anthropic over its refusal to allow utilisation of their AI model Claude for autonomous weapons. But, what are autonomous weapons, and why are there major concerns for the use of AI?
Autonomous weapons are weapons that require no human decision making — utilising AI to carry out complex military tasks and missions. They are currently used in warzones such as Israel and Ukraine, however, they perform very specific tasks, and expanding these weapons to integrate layers of decision making is a challenge.
The most famous type of AI are large language models (LLMs), the chatbots we are so accustomed to. In advanced autonomous weapons, LLMs can be used for planning, interpretation, and coordination, all tasks that require accuracy. The problem is that LLMs generate responses based on statistical relationships between words and concepts alone, with no worldly grounding. This means LLMs have no human doubt or consideration.
This causes a big issue. LLMs can make stuff up, not to deceive, but simply because a falsity can be more statistically likely than the truth. This is known as hallucination. A more unknown scenario will increase the likelihood of hallucination, which, given the unpredictable nature of warfare can lead to increasingly unreliable outputs.
Another problem is that LLMs are often far too confident in their outputs. Even if something is hard to work out, the LLM may not reflect this. Overconfidence in decision making is what makes the idea of a fully autonomous soldier so concerning, a killer not just without conscience but without the ability to be indecisive.
Further concerns come through tampering, an LLM will make decisions from given inputs, but what if the inputs are wrong? If you can alter what an LLM can see, or what it knows to be true, you can make it do all manner of things without ever doubting itself. A soldier that cannot doubt itself is very dangerous if tricked.
These worries led Anthropic to reject the US Department of Defense’s request to use Claude, to their own detriment. In February 2026 they were designated a “supply-chain risk” by the US government. Anthropic defended their decision, saying “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values” and added “AI systems are simply not reliable enough to power fully autonomous weapons.” It is clear the company has concerns with attempting to shove its technology into weaponry. However the issue is far from over. Within weeks of Anthropic’s decision, rivals OpenAI signed a new deal with the Pentagon.
When a private Tech company becomes the apparent voice of reason in policy, questions are raised about the morality of elected officials. Who is making the big decisions, and who is attempting to shift the most difficult ones to unaccountable AIs?
“Claude AI by Anthropic” by RyanDonegan is licensed under CC BY 2.0.

