Credit: SpaceNews AI-assisted illustration by B. Berger

Over the past few years, we’ve seen dramatic advances in space, including NASA’s launch of a giant rocket, deployment of a magnificent telescope, and flight testing of a tiny Martian helicopter. Billionaires Jeff Bezos, Richard Branson and Elon Musk launched themselves (or, in Musk’s case, a car) into space. Foreign adversaries demonstrated capabilities for following, moving and destroying objects in space. These spectacles herald a new generation of space exploration and technological revolution after a long period of linear evolution. As the United States, China and Russia jostle for global influence, space is again moving to the forefront as a major arena for superpower competition.

How can the United States come out on top as the space frontier expands?

Rather than being decided by dominance of physical space or the production of military hardware, the current era will be shaped to a large extent by which nation-states capture the decisive edge in groundbreaking technologies – most notably artificial intelligence (AI).

AI will transform how decisions are made across every human activity, especially as nations expand how they operate and compete in space. We are already seeing a rise in the number of satellites, debris and other objects that can potentially collide in orbit. Furthermore, the short decision cycles required to defeat increasingly capable threats demand new, innovative approaches to information fusion, decision support and automation. The integration of carbon-based and silicon-based decision systems (sometimes called human-machine teams) will increasingly determine whether nations can preserve freedom of access across all domains as joint intelligence becomes mission-critical to national security.

AI will revolutionize ISR

AI will revolutionize Intelligence, Surveillance and Reconnaissance (ISR), a process still run today with limited automation and simple rule-based systems. One advantage of AI-based algorithms is that they need not be biased toward preconceived plans or assumptions. Some algorithms can learn patterns over time and discover anomalies in large, multi-source data sets that no human would ever notice.
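To make that concrete, here is a minimal sketch of unsupervised anomaly detection over fused multi-source observations, using scikit-learn’s Isolation Forest. The sensors, feature values and contamination rate are all invented for illustration; an operational system would train on real, validated telemetry.

```python
# Minimal sketch: unsupervised anomaly detection over fused multi-source
# observations. All feature names and values here are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend each row fuses features from several sensors for one tracked object:
# [radar cross-section, orbital-period drift, RF emission power, thermal signature]
normal = rng.normal(loc=[10.0, 0.0, 3.0, 250.0],
                    scale=[1.0, 0.05, 0.5, 10.0], size=(5000, 4))
# Inject a handful of off-nominal observations (e.g., an unexpected maneuver).
odd = rng.normal(loc=[10.0, 0.8, 6.0, 250.0],
                 scale=[1.0, 0.05, 0.5, 10.0], size=(5, 4))
observations = np.vstack([normal, odd])

# The Isolation Forest learns the "pattern of life" and flags outliers as -1.
model = IsolationForest(contamination=0.002, random_state=0).fit(observations)
flags = model.predict(observations)
print(f"flagged {np.sum(flags == -1)} of {len(observations)} observations")
```

The point is not this particular model but the workflow: the algorithm learns what “normal” looks like from the data itself, with no preconceived plan for what an anomaly should be.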

As the U.S. integrates technology and communications systems and creates an information home base for joint forces, the concept of Joint All-Domain Command and Control (JADC2) requires effective, optimized and resilient use of space for remote sensing and communications. In a competition against a peer adversary, centers of gravity, including critical communications assets, will surely be among the first targets. Analysts manually re-routing data through the remaining channels will not be able to respond quickly enough to get the right data to the right users in time to make a difference. AI-based optimization routines can consider billions of alternatives in seconds, react to adversary actions faster than real time and, in some cases, anticipate future moves in a complex space-cyber multi-domain battlespace. Simulations and wargames that explore concepts for human-machine teaming are needed to test these concepts under realistic conditions and to build trust that we can depend on validated algorithms when they absolutely have to work.
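At toy scale, the re-routing problem described above reduces to shortest-path search over whatever links survive. A minimal sketch, assuming the networkx library; the node names, topology and latencies are invented:

```python
# Minimal sketch: automated re-routing when a relay is lost. The node names,
# link latencies and topology below are invented for illustration.
import networkx as nx

links = [  # (node_a, node_b, latency_ms)
    ("sensor", "relay_sat_1", 40), ("sensor", "relay_sat_2", 70),
    ("relay_sat_1", "ground_a", 30), ("relay_sat_2", "ground_b", 35),
    ("ground_a", "fob_echo", 20), ("ground_b", "fob_echo", 25),
]
network = nx.Graph()
network.add_weighted_edges_from(links)

# Nominal lowest-latency route from the sensor to the forward operating base.
print(nx.shortest_path(network, "sensor", "fob_echo", weight="weight"))

# An adversary destroys relay_sat_1; re-optimize over the surviving links.
network.remove_node("relay_sat_1")
print(nx.shortest_path(network, "sensor", "fob_echo", weight="weight"))
```

An operational planner would weigh far more variables (bandwidth, priority, jamming, orbital geometry), but the speed advantage over manual re-planning comes from exactly this kind of exhaustive, machine-driven search.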

This past year was also a remarkable one for Generative AI. People worldwide experimented with AI-generated art applications like DALL-E and Stable Diffusion, and we were captivated by OpenAI’s large language model ChatGPT, with many futurists postulating a new era in how we interact with computer systems. These technologies could someday revolutionize how we collect and disseminate space-based data. For example, a conversational model like ChatGPT could alter the way we develop intelligence collection plans for space assets by allowing human operators to pose natural-language queries to machine-learning models trained on everything that has previously been successful. A user might simply say, “SpaceGPT, please generate a multi-INT collection plan to maximize the information gain on conventional military force movements on the Ukraine-Russia border” or “SpaceGPT, please optimally route this 10-terabyte file from here to Forward Operating Base Echo.”
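In practice, such a request would be wrapped in a structured prompt before it reaches a model. A minimal sketch in Python; “SpaceGPT,” the system prompt and submit_to_llm() are hypothetical stand-ins, not a real product or API:

```python
# Minimal sketch of wrapping a natural-language tasking request for an LLM.
# "SpaceGPT" and submit_to_llm() are hypothetical stand-ins, not a real API.
import json

SYSTEM_PROMPT = (
    "You are a collection-planning assistant. Given an operator request, "
    "return a JSON multi-INT collection plan with fields: sensors, targets, "
    "revisit_rate_hours, and dissemination_path."
)

def build_request(operator_query: str) -> list[dict]:
    """Package the operator's plain-language query as a chat transcript."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": operator_query},
    ]

def submit_to_llm(messages: list[dict]) -> str:
    """Stand-in for a call to a vetted, access-controlled model endpoint."""
    raise NotImplementedError("wire this to an approved model endpoint")

messages = build_request(
    "Generate a multi-INT collection plan to maximize information gain "
    "on conventional force movements along a contested border."
)
print(json.dumps(messages, indent=2))
```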

Thousands of work roles that depend on laborious calculations, human-driven optimization and deconfliction may become automated or augmented by Generative AI.

Yet, the path forward for this technology is not simple.

While AI is increasingly being used across every industry and in almost every imaginable capacity, its capabilities still have limits. Notably, AI decision-making is only as good as the data used to train the models. Space systems are proliferating with few standard formats for data interchange and fusion: even though the amount of data grows exponentially every year, combining multiple formats and making sense of them together remains a tremendous challenge.
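A small illustration of the format problem: even two providers tracking the same object may disagree on field names, units and structure, so any fusion pipeline begins with normalization. The schemas below are invented and do not correspond to any real interchange standard:

```python
# Minimal sketch: normalizing two invented telemetry formats into one record
# type. Field names and units here are illustrative, not real standards.
from dataclasses import dataclass

@dataclass
class TrackReport:
    object_id: str
    epoch_utc: str
    position_km: tuple[float, float, float]

def from_provider_a(msg: dict) -> TrackReport:
    """Provider A reports position in meters under nested keys."""
    x, y, z = (msg["state"]["pos_m"][k] / 1000.0 for k in ("x", "y", "z"))
    return TrackReport(msg["norad_id"], msg["ts"], (x, y, z))

def from_provider_b(msg: dict) -> TrackReport:
    """Provider B reports kilometers in a flat, differently named layout."""
    return TrackReport(msg["objectId"], msg["epoch"],
                       (msg["xKm"], msg["yKm"], msg["zKm"]))

a = {"norad_id": "25544", "ts": "2023-02-01T00:00:00Z",
     "state": {"pos_m": {"x": 4510000.0, "y": 1900000.0, "z": 4740000.0}}}
b = {"objectId": "25544", "epoch": "2023-02-01T00:00:10Z",
     "xKm": 4509.2, "yKm": 1901.1, "zKm": 4740.8}

print(from_provider_a(a))
print(from_provider_b(b))
```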

Thankfully, catastrophic events are infrequent, in space as everywhere else. But that very infrequency makes it challenging to train a model on what a normal “pattern-of-life” looks like. To prepare the U.S. for space dominance, we must expose AI to constant, rigorous modeling in synthetic environments and situations. Companies that evaluate and test AI capabilities in synthetic environments generate valuable data that inform better decision-making by machines. Because AI learns by doing, it must be trained to respond to a large number of possible events, with humans observing its reactions to determine its effectiveness and the limits of its capability. Labeling a data set as a cat or a dog is easy, but it is still hard to explain to a computer why the answer is right or wrong.
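One way to compensate for the scarcity of real catastrophic events is to manufacture them. Here is a minimal sketch of Monte Carlo generation of synthetic conjunction events, assuming numpy; the distributions and the five-kilometer danger threshold are invented for illustration:

```python
# Minimal sketch: Monte Carlo generation of synthetic "rare event" training
# data. The distributions and danger threshold are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

def simulate_pass() -> tuple[float, int]:
    """One synthetic close approach: miss distance (km) and a label."""
    miss_km = rng.lognormal(mean=5.0, sigma=1.5)   # most passes are distant
    label = int(miss_km < 5.0)                      # 1 = dangerous conjunction
    return miss_km, label

samples = [simulate_pass() for _ in range(100_000)]
dangerous = sum(label for _, label in samples)
# Rare positives like these are exactly what live telemetry seldom provides;
# synthetic generation lets a model see thousands of them before deployment.
print(f"{dangerous} dangerous conjunctions in {len(samples)} simulated passes")
```

Synthetic positives like these give a model thousands of examples of events that live telemetry almost never provides, while humans observe its responses to map the limits of its capability.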

Learning to trust AI

The ultimate barrier to adoption is our general lack of trust in AI, shaped by observed failures and by decades of dystopian fiction about machines gone awry. Humans must stay in the decision-making loop during experimental phases so that AI can ultimately be trusted to act independently on time-critical decisions. As AI overtakes human speed and abilities, we urgently need to ensure that algorithms operating autonomously in space can do so consistently and reliably without human oversight.

The burden of experimentation must not fall on the U.S. Armed Forces alone, working in a vacuum. The more friendly actors there are in this area, the better: when it comes to AI in space, we are dealing with many unknowns and no predetermined path to success.

New approaches for collaborating across the public and private sectors and with foreign partners, and new mindsets for transforming operations with AI, will help ensure our national security in a new era of Great Power competition.

This article originally appeared in the Commentary section of the February 2023 issue of SpaceNews magazine.

Patrick Biltgen, Ph.D., is a principal at Booz Allen Hamilton and a leader in the firm’s artificial intelligence practice.