The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent catastrophic misuse of its software.
In other words, it fears that its AI tools might tell someone how to make chemical or radioactive weapons, and wants an expert to ensure its guardrails are sufficiently robust.
In the LinkedIn recruitment post, the firm says applicants should have a minimum of five years of experience in chemical weapons and/or explosives defense as well as knowledge of radiological dispersal devices – also known as dirty bombs.
The firm told the BBC the role was similar to jobs in other sensitive areas that it has already created.
Anthropic is not the only AI firm adopting this strategy. A similar position has been advertised by ChatGPT developer OpenAI, which lists a vacancy for a researcher in biological and chemical risks, with a salary of up to $455,000 (£335,000), reportedly nearly double that offered by Anthropic.
Some experts have expressed alarm at this approach, warning that it involves providing AI tools with information about weapons, even if they have been instructed not to use it. Dr Stephanie Hare, a technology researcher, asks: "Is it ever safe to use AI systems to handle sensitive chemicals and explosives information, including dirty bombs and other radiological weapons?"
"There is no international treaty or regulation governing this type of work or the use of AI with these types of weapons. All of this is happening out of sight," she says.
The matter has taken on greater urgency as the US government steps up its demands on AI firms amid ongoing military operations.
In parallel, Anthropic has initiated legal action against the US Department of Defense, which labeled the firm a supply chain risk over its insistence that its systems not be used for fully autonomous weapons or for mass surveillance of American citizens.
The potential risks associated with AI technology continue to mount, raising ethical dilemmas as development proceeds without significant regulatory oversight.