
Leveraging AI in Space: Experts Assess the Impact on the Defense and Space Domain

AI is poised to be transformative in space in areas such as exploration, in-space servicing, command-and-control decision making, and more resilient communications. Space Force officials continue to call for more AI and machine learning investments to maintain the country’s air and space superiority.

August 21, 2024
Anne Wainscott-Sargent

The disruptive potential of artificial intelligence (AI) has consumed the terrestrial world, especially with the arrival of large language models like ChatGPT that are rewriting the nature of work and what’s possible from an efficiency standpoint.

Excelling at processing and analyzing large data sets, AI is poised to be similarly transformative in space in areas such as exploration, in-space servicing, command-and-control decision making, and more resilient communications. AI-powered tools are bringing new levels of autonomy to everything from spacecraft and crew health monitoring to satellite navigation, collision avoidance and situational awareness. They arrive at a time when the end game for the United States is ensuring a stable and sustainable space domain, even as threats to that domain continue to escalate.

The threat to the United States’ historic space dominance is real, contends Audrey Schaffer, vice president of Policy and Strategy at Slingshot Aerospace, who last year held the post of space policy director for the National Security Council staff in the Biden White House.

Noting that China intends to launch several large-scale constellations that collectively could amount to 35,000 satellites in orbit, Schaffer says, “This isn’t some far-off future hypothetical problem that the government might one day have to grapple with. It’s a problem at our doorstep.”

She adds that analyzing data from so many spacecraft simultaneously “is beyond what a human analyst or teams of analysts can do, at least in operationally relevant timelines the Space Force operates in.”

It’s no surprise that Space Force officials continue to call for more AI and machine learning investments to maintain the country’s air and space superiority.

Data and AI Strategic Action Plan

The government clearly understands the gravity of what’s at stake. In May, the U.S. Space Force published its Data and Artificial Intelligence FY 2024 Strategic Action Plan to modernize its analytic capabilities. The goal is to enable secure discovery, access, integration and use of intelligence data at the speed of mission requirements.

The action plan outlines four areas: enterprise-wide data and AI governance; advancing a data- and AI-driven culture; reoptimizing data, advanced analytics, and AI technologies; and strengthening government, academic, industry, and international partnerships.

“Partnerships equal progress and there are a number of opportunities for commercial companies to partner with the Department,” says Chandra Donelson, acting chief data and AI officer in the Office of the Chief Information Officer, Chief Data and AI Office. The goal is to “ensure there is never a day without space data for our nation, allies and partners,” and that the services are resilient enough to endure while under attack, she adds.

Another way the government is encouraging AI for space is Front Door, a Space Command resource hub for commercial space AI innovators who want to work with the U.S. Space Force. It guides early-stage startups on how to do business with the government and align their technologies to help the U.S. maintain its dominance in space.

Government agencies, commercial space firms and top research universities are teaming up to accelerate AI research and technology advancements and, once that work is validated, to fast-track basic research into operational reality.

NGA: Reliant on Commercial AI Expertise

An early mover in AI inside the Department of Defense (DoD) is the National Geospatial-Intelligence Agency, the Pentagon’s source for geospatial intelligence, or GEOINT, which is key to giving military commanders the exact location of U.S. forces, coalition partners, adversaries and noncombatants. NGA set up its Data and Digital Innovation Directorate three years ago to lay out its AI implementation strategy and get ready for what was “coming down the pike,” recalls Mark Munsell, the directorate’s director.

For NGA, that was Project Maven, focused on bringing AI to warfare by enabling the U.S. military to use automatic target recognition in combat.

“We’re always trying to make our computer vision models that detect objects of interest from video or satellites better and better by making them accurate and faster,” explains Munsell, who adds that “even when we build a capability for the United States government, we rely heavily on commercial companies, either to source us with a foundational model, or to provide expertise to help us build models with our own data.”

Brad Boyd, visiting fellow at the Hoover Institution, and a lecturer in public policy and international policy at Stanford University, agrees.

“Most of the resources that allow cutting-edge AI are in the commercial sector – the resources, the processors and algorithms, and the salaries to pay the top experts. The benefit is that commercial industry is moving very fast, but it’s in the direction of commercial interests.”

Boyd, a 30-year veteran of the U.S. Army and Marines and an expert on autonomy of military systems, emphasizes that commercial space partners and government end users need each other to leverage AI’s potential: The government’s “secret sauce” is its vast stores of data, while commercial firms are constantly innovating new AI models, which depend on a steady diet of large data sets.

“Where the friction comes is the government sits on this highly classified data, and you’ve got a young startup that may have employees from different countries that can help – how do you make those pieces fit together?” Boyd questions.

What needs to happen to open up innovation in AI? “For organizations or government to build a development environment for AI research that academics can use that’s as powerful as the stuff you find in the commercial sector,” he says.

Cutting-edge AI Research Collaborations

There are some encouraging signs of early cross-sector synergy around AI in space. Since 2019, the Department of the Air Force and MIT have jointly run the DAF-MIT AI Accelerator to advance AI for Department operations. The Center for AEroSpace Autonomy Research (CAESAR) at Stanford University formed a year ago with founding corporate sponsors Redwire Space and Blue Origin, with a focus on spacecraft autonomy.

CAESAR is led by well-known Stanford researchers “who understand classical approaches and the impact that AI is having on space,” says Al Tadros, Redwire’s CTO, noting that Redwire plans to apply AI across its technology portfolio, including modeling and simulation, robotics and autonomy, satellite servicing, intelligent vision systems, as well as power management and avionics.

He’s passionate about partnering with academia for good reason: “Too often our SBIR programs and other small business research programs are mismatched with the national mission needs and warfighter needs. University relations allows a company like Redwire to connect the dots between what we directly see as mission needs to where cutting-edge research is being done. Guiding where AI is applied is one example of that,” he explains.

Redwire Space’s initial research interest is in augmenting the machine vision capabilities of its camera systems, which flew on the Artemis I mission in November 2022 and are slated to be on board the Orion spacecraft for Artemis II in 2025 and Artemis III, NASA’s first human mission to the lunar south pole, currently planned for 2026.

“We’re looking at how we can integrate algorithms that can advance the capability of the cameras both in the far range, when you’re just resolving an image, and the nearer range for rendezvous proximity operations,” says Tadros.

AI Driving Mission Results in Space

Redwire’s chief technologist contends that having these algorithms operating on orbit in the next year or two “will open doors on missions that we weren’t able to accomplish before.”

“Imagine a spacecraft being able to monitor its surroundings while conducting its mission, process imagery on-board to determine intent and conduct proximity operations, all autonomously and have results with actionable information delivered to the operator. That’s mission results versus satellite operations,” Tadros says. “Imagine a world where you can launch a slew of satellites that comprehensively observe cislunar space, and have results come back with identified risks and threats versus focus on individual satellites operations. The result is true space domain awareness, space supremacy and space operations that are consistent with terrestrial systems.”

Computer vision paired with large language models, adds Munsell of NGA, is the next killer application for generative AI. The approach depends on transformer technology, originally used in large language models like ChatGPT, which taps into tagged image descriptors from across the internet so a model can classify everything about an image.

“All the big companies like OpenAI, Anthropic, Amazon, Google and Microsoft are all beginning to greatly improve their computer vision capabilities by applying that transformer technology to these large vision models. Where in the past we might have had a trained computer vision model to look for certain types of military equipment, these large vision models are trained to look for millions of different types of objects,” Munsell explains.

The Pentagon’s R&D agency, the Defense Advanced Research Projects Agency (DARPA), is also making AI inroads by working closely with commercial space innovators. In June, Slingshot Aerospace announced its work with DARPA to create Agatha, an AI-enabled system that can scan entire satellite constellations to find anomalies.

“The Agatha system uses an ensemble of models, which means, if there are simple answers, it finds simple answers, but if there are more sophisticated investigations that need to be done to find those answers, it does those using something more cutting edge called inverse reinforcement learning,” says Dylan Kesler, director of data science at Slingshot Aerospace.

The technique evaluates the satellite’s behavior to determine its intentions. For example, is a malfunctioning spacecraft just that, or has it been hidden in a large constellation for nefarious purposes such as spying?

Good training data feeds the AI models, and in Agatha’s case, there weren’t enough large constellations deployed to use, so Slingshot simulated 60 years of constellation data – 30 constellations for two years each. Once the model was trained on the simulated data, Slingshot Aerospace then had Agatha test its ability to find “outliers,” or satellites not behaving as expected, on constellations already launched in space. When an outlier satellite was found, Slingshot contacted the satellite operator to validate Agatha’s findings.
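Agatha’s internals aren’t public, but the workflow the article describes, training on simulated constellation behavior and then flagging deployed satellites that deviate from the learned baseline, can be illustrated with a deliberately simplified sketch. It uses a basic statistical outlier test as a stand-in for Slingshot’s far more sophisticated ensemble; all names, features and thresholds here are hypothetical:

```python
import random
import statistics

def simulate_constellation(n_sats, n_days, seed=0):
    """Generate nominal daily station-keeping drift (km) per satellite.

    Stand-in for the simulated constellation data described in the
    article; real training data would be far richer.
    """
    rng = random.Random(seed)
    return {f"SAT-{i:03d}": [rng.gauss(0.0, 0.05) for _ in range(n_days)]
            for i in range(n_sats)}

def fit_baseline(fleet):
    """Learn what 'normal' drift looks like from the simulated fleet."""
    all_drift = [v for series in fleet.values() for v in series]
    return statistics.mean(all_drift), statistics.stdev(all_drift)

def flag_outliers(observed, baseline, z_threshold=4.0):
    """Flag satellites whose mean drift deviates from the baseline.

    A real system would fuse many more features (maneuvers, RF
    emissions, attitude) and use far more capable models.
    """
    mu, sigma = baseline
    return [sat_id for sat_id, series in observed.items()
            if abs(statistics.mean(series) - mu) / sigma > z_threshold]

# Train on simulated data, then score "live" observations.
baseline = fit_baseline(simulate_constellation(30, 730))
live = simulate_constellation(10, 30, seed=1)
live["SAT-009"] = [0.9] * 30          # inject one misbehaving satellite
print(flag_outliers(live, baseline))  # → ['SAT-009']
```

As in the article’s account, once a satellite is flagged the follow-up, contacting the operator to distinguish a malfunction from something deliberate, remains a human step.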

“They did tell us in those cases that maybe the satellite was having a malfunction or was in the process of being moved to a new mission,” Schaffer explains.

Ultimately, Agatha will be capable of predicting future behavior of a satellite that is identified as potentially anomalous.

Slingshot Aerospace isn’t the only satellite business focused on driving insights from AI. Kratos Defense, which has a broad footprint of products across the satellite ground segment, is actively working to make its data products deliver more value with AI. Its offerings include providing intelligence for space domain awareness. A key push is to make it easy for customers to do data fusion, that is, to run their own AI models on top of Kratos’s data.

“No matter how much we invest in AI, we still wouldn’t capture all the places that AI could create value from our data. That’s why we try to produce data and data sets and formats that are easily consumable,” says Stuart Daughtridge, VP of Advanced Technology at Kratos.

Key Issues Hindering AI in Space

Without question, AI in space will contribute significantly to the United States’ ability to keep pace with adversaries in the space domain. But significant challenges remain, from ensuring quality data and adequate training data for the AI models, to addressing issues of governance and trust.

On people’s tendency to underestimate the importance of data quality to drive AI projects, Donelson, the acting Department of the Air Force chief data and AI officer, cites a metaphor from car racing: “AI is just the vehicle; data is the gasoline. The best F1 driver can get in a car and no matter what type of engine he has, if he doesn’t have fuel, he is not going to perform.”

Munsell contends that the U.S. has been focused on implementing AI responsibly as evidenced by last October’s White House Executive Order on the safe, secure, and trustworthy development and use of AI, and the multiple meetings about the implementation of AI.

Despite that, the NGA leader is realistic about the hurdles, including the difficulty of getting cybersecurity officials to approve the use of large models.

“There are some very dangerous and unethical uses for these models. There’s just a ton of policy, law, ethics and governance to be worked out,” he admits.

Chris Badgett, vice president of Technology at Kratos Defense, sees a challenge around how much decision-making autonomy customers want their systems to have. He cites as an example how a user on the Army side may want satcom information fused with other data, but accessible at the lower-level tactical echelon.

“They don’t get the option to call back to the strategic planners, so their definition of autonomy and what AI tools they’re allowed to use is smaller than the big enterprise folks like the Space Force that see across these different systems,” he says.

Several experts say the biggest challenge is access to the large amounts of training data needed to avoid hallucinations, in which AI systems produce inaccurate, biased or unintended information.

Often the main culprit is weaknesses in training data, asserts Kratos’s Daughtridge: “How do I get really good training data and then how do you get a set of validation data? In every AI project I’ve ever been in, that has always been the challenge. That’s 80 percent of the problem and 20 percent is the algorithm.”

Boyd predicts AI modelers in commercial or government will increasingly face cognitive and automation biases as they look to advance the state of the art in decision support systems.

“Right now, people don’t really know how to deal with those problems. When you create this technology, what sort of organizational and human cognitive biases come with it?” he asks.

AI leaders agree that having large enough training sets is an ongoing challenge. Unlike in the terrestrial world, where there are billions of data sets available on the ground from the cataloging of the internet and YouTube, space-based data is relatively limited, explains Redwire’s Tadros. That limitation forced his team to rely instead on simulated data to train the company’s Argus system.

Trust is another big hurdle, with Donelson noting that mixed feelings toward AI are common: some fear it may make their jobs obsolete or irrelevant, while others are strong advocates.

“So, for large-scale adoption of AI across an organization, you need buy-in. We must be able to show how humans and machines can work together,” Donelson says. VS

Anne Wainscott-Sargent is an award-winning content strategist and journalist covering space, science, and deep tech.