WSJ: Anthropic's Claude reportedly used in operation against Maduro
The Wall Street Journal reported that Anthropic's Claude model was used by the U.S. military during an operation aimed at capturing former Venezuelan president Nicolas Maduro.
Report details
According to the report, U.S. forces employed Claude in the operation to seize Nicolas Maduro, with commercial AI tools used alongside other capabilities in the field. The report does not provide technical specifics about the model's exact role or how it was integrated into operational workflows.
Anthropic policy and response
Anthropic's policies explicitly prohibit using Claude to facilitate violence, develop weapons, or enable mass surveillance of civilians. Anthropic said it would not comment on specific operations and reiterated that all uses must comply with its published use policy. That policy establishes boundaries the company frames as intended to prevent applications that could cause physical harm or violate civil liberties.
Partnership and internal concerns
The deployment is linked in the report to a partnership between Anthropic and Palantir, which has contracts with government customers. Palantir provides data analysis and software platforms used by defense and intelligence clients. Earlier reporting indicated that some employees inside Anthropic had expressed concerns about potential military applications of the company's models.
The Wall Street Journal's account highlights the tension between commercial AI deployments and corporate usage policies when such models are used in defense contexts. According to the report, neither Anthropic nor Palantir has provided operational details beyond their statements about policy and partnerships.