WASHINGTON — The National Geospatial-Intelligence Agency is moving to establish guidelines and standards for the use of artificial intelligence (AI) technology in critical areas such as identifying potential targets using satellite imagery.

Based in Springfield, Virginia, NGA collects, analyzes and distributes geospatial intelligence derived from satellite and aerial imagery to support national security, military operations and disaster response efforts.

Vice Adm. Frank Whitworth, director of NGA, announced last week that the agency is launching a pilot program aimed at ensuring the reliability and trustworthiness of AI models used by its analysts. The initiative seeks to create guidelines for evaluating the performance and accuracy of computer vision models employed in the analysis of satellite imagery and other geospatial data.

“Accreditation will provide a standardized evaluation framework, implement risk management, promote a responsible AI culture, enhance AI trustworthiness, accelerate AI adoption and interoperability and recognize high quality AI while identifying areas for improvement,” Whitworth said during a meeting with reporters.

The move comes as NGA and other intelligence agencies increasingly rely on AI-powered computer vision to rapidly process the vast amounts of satellite imagery and geospatial data collected daily. By developing a consistent method for evaluating these AI models, NGA aims to bolster confidence in the AI-generated insights that inform military operations and national-security decision-making.
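NGA has not said what its evaluation framework will measure. As a purely illustrative sketch of what standardized scoring of a computer-vision model can look like, the example below matches hypothetical detections against analyst-labeled ground truth and reports precision and recall. The function names, bounding boxes and thresholds are invented for this example and do not reflect any NGA system.

```python
# Illustrative sketch only: scoring a hypothetical object detector against
# analyst-labeled ground truth. This does not reflect NGA's actual framework.

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def score_detections(predictions, ground_truth, iou_threshold=0.5):
    """Match predicted boxes to labeled boxes and return precision and recall."""
    matched = set()
    true_positives = 0
    for pred in predictions:
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(pred, gt) >= iou_threshold:
                matched.add(i)
                true_positives += 1
                break
    precision = true_positives / len(predictions) if predictions else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Hypothetical detections and analyst labels on one image (pixel coordinates).
detected = [(10, 10, 50, 50), (200, 120, 260, 180)]
labeled = [(12, 8, 52, 48), (400, 400, 440, 440)]
p, r = score_detections(detected, labeled)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.50
```

An accreditation process along the lines Whitworth described would presumably run many such checks across curated test imagery and track results over time, but the agency has not released those details.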

Targeting is ‘one of the hardest things we do’

Whitworth emphasized the need for accuracy in intelligence gathering as human lives are at stake. “We try to be certain on the distinction between a combatant and noncombatant, an enemy and a non-enemy,” he stated. “And that’s hard, and I will tell you, based on my 35-plus years of experience, that one of the hardest things we do is targeting.”

The pilot program is still in its early stages, with many specifics yet to be determined. Broadly speaking, Whitworth said it aligns with the Department of Defense’s guidelines for the ethical use of AI and responds to a recent White House executive order on the subject.

The agency has also established a training program on responsible AI for all coders and users of geospatial intelligence data, with the goal of creating a culture of responsible AI use throughout the intelligence community.

The new AI program comes at a time when the volume and complexity of geospatial data are growing. AI helps manage this data deluge by automating the detection and classification of objects in images, allowing human analysts to focus on critical tasks and interpretation.
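To illustrate that division of labor, the sketch below shows a hypothetical triage loop in which a placeholder detector screens a batch of scenes and forwards only confident detections to a human review queue. The detector, scene identifiers and confidence threshold are all assumptions made for the example, not NGA tooling.

```python
# Illustrative triage loop, not NGA tooling: a placeholder detector screens
# a batch of images and forwards only flagged scenes to a human analyst queue.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "aircraft", "vessel"
    confidence: float   # model score in [0, 1]

def fake_detector(image_id: str) -> list[Detection]:
    """Stand-in for a real computer-vision model; returns canned results."""
    canned = {"scene_0042": [Detection("aircraft", 0.91)],
              "scene_0043": []}
    return canned.get(image_id, [])

def triage(image_ids, min_confidence=0.8):
    """Queue scenes with confident detections for analysts; archive the rest."""
    review_queue, archived = [], []
    for image_id in image_ids:
        hits = [d for d in fake_detector(image_id) if d.confidence >= min_confidence]
        (review_queue if hits else archived).append(image_id)
    return review_queue, archived

queue, skipped = triage(["scene_0042", "scene_0043"])
print(queue, skipped)  # ['scene_0042'] ['scene_0043']
```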

Whitworth highlighted the agency’s role in current global conflicts, noting that NGA provides geospatial intelligence support to Israel in its war against Hamas-led Palestinian militant groups in Gaza and to Ukraine in its defense against Russian aggression. “Our responsibility is to ensure that Israel and Ukraine can defend themselves,” he said, underlining the importance of accurate and reliable intelligence in these sensitive situations.

“Let’s not forget that there’s some very clever adversaries out there that will confound some of the training data and confound some of these model solutions,” Whitworth cautioned, emphasizing the need for rigorous evaluation of AI systems.
