As artificial intelligence takes center stage in the technology landscape, U.S. national security agencies see huge opportunities but also frightening risks.
The rise of AI — and what it means for the future of intelligence — was a hot topic at the recent GEOINT 2023 conference held in St. Louis.
During a panel discussion, officials from the National Geospatial-Intelligence Agency hailed the value of AI and machine learning tools for analyzing thousands of satellite images and quickly surfacing critical insights. They also alluded to the dark side of AI and the dangers it creates when it is exploited for misinformation.
“The future of geoint is about going faster; it’s about using artificial intelligence,” said James Griffith, NGA’s director of source operations and management. He noted that NGA is responsible for collecting, analyzing and distributing data in support of national security as well as civilian agencies.
Philip Sage, director of NGA’s analytics technology office, said there is a growing realization that in the age of deepfakes, the agency has to make the safety of geoint products a top priority.
Sage said he and many of his NGA colleagues were alarmed by news reports on May 22 that a phony image of an explosion at the Pentagon, generated by artificial intelligence, was shared by a verified Twitter account bearing a blue check mark that falsely claimed an association with Bloomberg News. The post caused widespread confusion and a brief dip in the stock market before authorities confirmed no explosion had occurred.
“It was a terrifying tweet,” said Sage, and a reminder of how easily images can be manipulated for nefarious purposes.
“Data assurance, that’s something that we do within this agency very well,” he said. But AI is going to put the system to the test.
The bogus image of the Pentagon on fire was “a very simple and straightforward example of some of the things that we’re seeing today,” said Sage. While that particular item was swiftly debunked as a fake, there may be other instances where counterfeit data is less obvious and more difficult to detect.
“In our foundation models, we’re trying to discover ways to determine if bias is actually being introduced into these models,” said Sage. “If bias is being introduced, how can we discover it quickly and remove it?” He called that an emerging challenge for intelligence professionals.
A made-up image of a Pentagon explosion is just an inkling of what’s to come, said Jay Moeder, senior geoint adviser at NGA.
“We have to be on the lookout for those little things” that could lead to erroneous analysis of pixels on satellite images, said Moeder.
As NGA increasingly relies on commercial data and analytics support from the private sector, data integrity is going to require collaboration between government and industry, he said. “One of the challenges we all face together is the integrity of the data.”
Much of the talk about AI today is about what human jobs it will replace. At NGA, the bigger concern is how the agency will prepare its workforce to harness the new technologies and understand the risks.
What’s happening with AI is “a big deal,” said Mark Munsell, director of data and digital innovation at NGA.
“It’s going to scale as fast as we can throw compute power at it,” he said. “In our community, we’re learning to apply those techniques being used to build large language models to imagery.”
Munsell said the rapid growth of AI is going to create a demand for “super analysts” who can leverage models built with all the available knowledge and data about every U.S. intelligence problem and can accurately identify objects of interest on imagery “with a level of precision that we haven’t seen before.”
More importantly, these super analysts will have to ensure the data can be trusted.
This article originally appeared in the ‘On National Security’ commentary feature in the June 2023 issue of SpaceNews magazine.