Tech giant Palantir has pushed back against concerns that military use of its AI platforms could lead to unforeseen risks, insisting in an exclusive interview with the BBC that responsibility for how the technology is used lies with its military customers. It comes as experts have expressed concern over the use of Palantir's AI-powered defence platform, the Maven Smart System, during wartime and its reported use in US attacks on Iran.

Analysts have warned that the military's use of the platform, which helps personnel plan attacks, leaves little time for meaningful verification of its output and could lead to incorrect targets being hit.

But the company's UK and Europe head, Louis Mosley, told the BBC in a wide-ranging interview that while AI platforms like Maven have been instrumental in the US management of the war with Iran, responsibility for how their output is used must always remain with the military organisation.

"There's always a human in the loop, so there is always a human that makes the ultimate decision. That's the current set-up," he said.

The Maven Smart System grew out of Project Maven, launched by the Pentagon in 2017, and is designed to speed up military targeting decisions by bringing together masses of data, including a range of intelligence, satellite and drone images. The system analyses this data and can then provide recommendations for targeting. It can also suggest the level of force to use based on the availability of personnel and military hardware, such as aircraft.

But scrutiny has grown over the use of such tools in warfare. In February, the Pentagon announced that it would be phasing out Anthropic's Claude AI system - which helps to power Maven - after the company refused to allow the use of its AI in autonomous weapons and surveillance. Palantir says alternatives can replace it.

Since the war with Iran began in February, the US has reportedly used Maven to plan strikes across the country.

Pushed by the BBC on the risk that Maven might suggest incorrect targets - which could include civilians - Mosley insisted that the platform is only meant to serve as a guide to speed up the decision-making process for military personnel and that it should not be seen as an automated targeting system.

"You could think of it as a support tool," Mosley said. "It's allowing them to synthesise vast amounts of information that previously they would have had to do manually one by one."

However, Mosley deferred to individual militaries when challenged by the BBC on the risk of time-pressured commanders ordering their officers to treat Maven's output as rubber-stamped.

"That's really a question for our military customers. They're the ones that decide the policy framework that determines who gets to make what decision," he said. "That's not our role."

Since 28 February, the US has launched more than 11,000 strikes against Iran, many reportedly identified by Maven. Adm Brad Cooper, head of the US military in the Middle East, has praised AI systems for helping officers to "sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react".

But some worry AI's involvement in mission planning creates significant risks. Prof Elke Schwarz of Queen Mary University of London has warned that the prioritisation of speed and scale, combined with the use of force, leaves very little time for meaningful verification of targets to avoid civilian casualties.

In recent weeks, Pentagon officials have faced questions over whether AI tools like Maven were used to identify targets in a deadly strike on a school in Iran that killed 168 people, including around 110 children, on the opening day of the war. In Congress, senior Democrats have urged increased scrutiny of AI platforms like Maven and called for clearly enforced rules governing their use in military operations.

Mosley, however, countered that the speed of the platform reflects the increased efficiency it has brought to military operations, rather than a rushing of decisions.

Citing operational security, the Pentagon has not commented on future uses of, or accountability for, AI systems like Maven, but officials appear to be moving ahead with its long-term integration: the Pentagon recently designated Maven as an official program of record.