Anthropic seeks weapons expert to prevent AI misuse

Artificial intelligence firm Anthropic is seeking a weapons expert to help reduce the risk of harmful uses of its technology. The company said it wants to prevent what it described as “catastrophic misuse” of its AI systems.

The role is expected to focus on assessing and limiting potential threats linked to advanced AI tools, including the possibility that such systems could be used to cause serious harm. Anthropic said the move forms part of its wider efforts to improve safety and oversight as its technology develops.

AI companies have faced growing scrutiny over how their systems might be misused. Experts have warned about risks linked to powerful AI models, including their possible use in military or weapons-related contexts. Firms in the sector have said they are working to strengthen safeguards and reduce potential dangers as the technology becomes more advanced.
