Space Tech and AI: How ChatGPT is Just the Beginning
Experts imagine what artificial intelligence could mean for the future of satellites, space entrepreneurship, and government defense systems.

July 24th, 2023

AI is the buzzword on the tip of everyone’s tongue, and yet little is known about how it will shape real-world tools and systems in the months and years to come.
Of course, everyone has a prediction, from doomsayers to corporate executives and chipmakers. The latter are already cashing in on the revolutionary technology, with huge sums of money moving through Wall Street daily into companies associated with the once-speculative field.
After Microsoft invested heavily in ChatGPT, the most well-known AI tool built on large language models (LLMs), Microsoft President Brad Smith said Earth will soon become “query-able” using the technology. Think: instantaneous results from satellites using geospatial intelligence to discover objects of any size, anywhere on the planet.
First, AI has to become smarter through deep learning. LLMs are built from parameters tuned against vast amounts of vetted information, and those parameters set the limits of a model’s predictive intelligence, and thus AI’s ability to perform functions such as human-like conversation.
As recently as 2020, the GPT-3 model behind ChatGPT used 175 billion parameters. Since then, researchers using China’s exascale Sunway supercomputer have trained a model with over 174 trillion parameters, which scientists tout as the first brain-scale AI in existence.
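Those headline figures come from the size of a model’s weight matrices. A common back-of-the-envelope estimate puts a transformer’s parameter count at roughly 12 × layers × width²; a minimal sketch in Python using GPT-3’s published dimensions (96 layers, model width 12,288), with a purely illustrative helper function:

```python
# Back-of-the-envelope transformer parameter count: roughly
# 12 * n_layers * d_model^2 (attention + feed-forward weights),
# ignoring embeddings and biases. The dimensions are GPT-3's
# published architecture; the function name is illustrative.
def approx_params(n_layers: int, d_model: int) -> int:
    return 12 * n_layers * d_model**2

print(f"GPT-3 estimate: {approx_params(96, 12288) / 1e9:.0f}B parameters")
# -> roughly 174B, close to the 175 billion reported
```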
And what if that kind of supercomputer is connected to a satellite operator, or a critical defense system? Via Satellite spoke to experts about what this technology could mean for the future of satellites, space entrepreneurship, and government defense systems.
The risks in AI are still not fully known, but in all likelihood we’re not looking at a plotline like Skynet in “The Terminator,” because cybersecurity firms are already ahead of the curve.
“If we put up a missile defense AI system called SkyNet, the last thing we need is somebody feeding in the wrong data to that,” explains Matt Erickson, the VP of SpiderOak, a cybersecurity company for space systems.
Erickson echoed other experts interviewed for this piece on the need for more and better soup-to-nuts security systems that can catch AI infiltration attempts, alert operators, and mount a strong defensive response.
“We’re working on a solution where the whole is greater than the sum of the parts,” says Erickson, “with AI performing [information] processing and thinking quicker than a bunch of humans in a control center. But with high assurance data being fed in and coming out, so we know where the data came from and is going to.”
Regulation in AI
Although Congress has thus far avoided regulation of AI, it’s become a hot topic in Washington with the technology advancing rapidly around the world. A bipartisan group of lawmakers led by Senator Chuck Schumer unveiled its framework for regulating AI in late June, while the Biden Administration has already announced $140 million in R&D funding for seven new National AI Research Institutes.
“We want to accelerate the use of AI, and that includes into space,” says Bob Gourley, CTO of OODA LLC, who spoke with Via Satellite about his 25 years of experience in artificial intelligence.
Gourley, a former Defense Intelligence Agency executive, is skeptical that the EU’s more extensive style of AI regulation is necessary for these LLMs, and believes it would inhibit our ability to create novel solutions in space technology and the satellite industry.
“Why not have better AI that lets us collect data in space for space situational awareness, and then inform all users of space where every other satellite is and what it's doing? We need more, better AI in space,” he explains.
NASA is testing and incorporating AI into certain missions. The agency announced in June that NASA’s own ChatGPT-like system will deploy on the Lunar Gateway, bringing conversational systems to space vehicles that allow mission control, astronauts, and scientists to gather and interpret data more quickly than ever before. Machine learning is being used in NASA’s BioSentinel experiment to study the effects of deep space radiation, the first study of its kind in 50 years. In another civilian mission, NASA’s Perseverance rover has been mapping Mars using AI.
Meanwhile, the U.S. Space Force uses AI in a defense capacity, tracking all objects orbiting Earth and monitoring China’s space activities daily.
“Another thing to consider,” says Gourley, “is how we can use AI in a national security way that informs our strategy and defeats Russian and Chinese misinformation and disinformation – are there things we can do to limit their use of artificial intelligence?”
One of the earliest misuses of AI was social engineering by American adversaries, and Washington was slow to respond with regulation of any kind, even after those disinformation campaigns affected elections and proved how capable chatbots are of turning real-life systems on their heads.
“This is so tough, because our adversaries have their own scientists and their own research capabilities, and we publish so many things in the open,” says Gourley, speaking about the United States’ ability to combat disinformation campaigns from China and Russia. Gourley says the U.S. takes an academic approach to AI, revealing more publicly, while adversaries shroud their AI work in secrecy.
Some leaders are calling for more regulation of AI. The Future of Life Institute, a nonprofit, has an open letter calling on regulators to pause AI development beyond the capabilities of GPT-4; it has gathered more than 30,000 signatures, including those of prominent tech executives. The signatories fear that the unpredictable outcomes of an R&D race with China and others could cause catastrophic losses to life or the economy.
But as we round the corner into the latter half of 2023, it looks like the AI genie is out of the bottle.
AI is Rapidly Evolving, and So Are Security Needs
Current AI tools are prone to hallucination, that is, producing inaccurate or made-up information, although experts in the space agree the technology is still in its early stages, and that better, more capable versions are right around the corner.
Already, neural search is advancing on trained networks in enterprise scenarios that use highly vetted information. These neural networks are designed so that their natural language processing (NLP) doesn’t produce hallucinations.
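The core retrieval idea can be sketched in a few lines: answer only from documents that have been vetted, so every output traces back to a known-good source. In the minimal Python sketch below, scikit-learn’s TF-IDF stands in for the neural embeddings a production neural search system would use, and the corpus and query are hypothetical:

```python
# Sketch: retrieve answers only from a vetted corpus, grounding
# output in known-good documents rather than free-form generation.
# TF-IDF stands in for the neural embeddings a real system would use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical vetted documents (e.g., verified operations notes)
corpus = [
    "Satellite A7 completed its station-keeping burn at 04:12 UTC.",
    "Ground station Oslo reported nominal downlink on Ku-band.",
    "Debris conjunction alert issued for LEO object 43013.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str) -> str:
    """Return the best-matching vetted document for a query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)
    return corpus[scores.argmax()]

print(retrieve("When did satellite A7 perform its maneuver?"))
```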
“That's a really powerful system to be able to start identifying those solutions, especially with secure by design framework,” says SpiderOak’s Erickson. “You still need a high assurance log of what's going into the LLM, because otherwise it’s garbage in, garbage out. It can't make good inferences if it's getting rotten data.”
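What such a high-assurance log of model inputs might look like in miniature: an append-only, hash-chained record, sketched here with Python’s standard library. The field names and payloads are illustrative, not a description of SpiderOak’s design.

```python
# Sketch of an append-only, hash-chained log: each entry commits
# to the previous one, so any tampering with recorded model inputs
# breaks the chain and is detectable on verification.
import hashlib, json, time

log = []  # in production this would live in durable, replicated storage

def append_entry(payload: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash,
                       "ts": time.time()}, sort_keys=True)
    log.append({"body": body, "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain() -> bool:
    prev = "0" * 64
    for entry in log:
        if json.loads(entry["body"])["prev"] != prev:
            return False  # linkage broken: an entry was removed or reordered
        if hashlib.sha256(entry["body"].encode()).hexdigest() != entry["hash"]:
            return False  # content altered after the fact
        prev = entry["hash"]
    return True

append_entry({"source": "sensor-12", "reading": 42.7})
append_entry({"source": "sensor-12", "reading": 43.1})
print(verify_chain())  # True; editing an earlier entry flips this to False
```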
While the race is on among tech companies to perfect deep learning capabilities, they have already started to develop AI capable of deep reasoning – systems adept at dealing with changing situations, planning and making complex decisions not unlike the human brain.
Call it the next step in generative AI’s evolution – we are about to get Windows 95. And shortly after, Windows XP.
As this tech gets more widely adopted, like those earlier PC operating systems, cybersecurity of its interconnected neural networks becomes more critical. Operators in the satellite space using AI to assist with navigation patterns or to transmit and sort data, for instance, need to know adversaries are already working on ways to manipulate those systems. The barrier to entry for black-hat activity has become lower, while state actors are developing increasingly sophisticated social engineering schemes.
What that means for satellite and space tech players remains to be seen, but the community agrees the time to be vigilant is now.
“There's relatively limited opportunity to capture economic value from attacking a satellite system,” explains Daniel Gizinski, the Chief Strategy Officer of Comtech. “It's strange to think of hackers this way, but they are effectively running a small business and need to turn a profit. Satellites have been relatively safe due to limited opportunities to go after them and turn a profit, but the costs of scaling attacks with AI are going down and increasing the opportunity for satellites to be targeted in an attack – and right now you can put together a dish and amplifier at a relatively low cost to potentially launch an attack that would be unthinkable 10 or 15 years ago.”
Retrofitting systems with security, rather than designing them securely from the start, is something both regulators and space tech executives agree is suboptimal.
“Organizations need to be mindful of not just commercial hacking purposes, but the fact that their systems are vulnerable to a potential state-sponsored hacker,” says Gizinski. “It's also true that this commercial hacking industry is a lot larger than most folks give it credit for, and an increased emphasis on cybersecurity is a key part of retaining and maintaining customers.”
With commercial ground station systems, there needs to be minimal to no interference between operators and satellites; that means owning the provenance and identity of where your data goes. Terrestrial cloud systems supply much of the processing capability for LLMs, which require ever more components, memory, data warehouses, servers and, yes, satellites.
The experts interviewed for this story agreed that the best way to secure such complex systems is a holistic approach, connecting ground and satellite systems with an end-to-end solution capable of adapting to changing needs. Concerns about attacks harnessing AI can be assuaged with end-to-end solutions: security by design.
“SpiderOak is focused on the formally provable aspects of the system,” Erickson explains. “Our systems want to know: Is the model being transmitted? Is that the actual model? What about the data and the provenance of the data being fed into the model? Are the sensors from across the constellation? Are those pixels coming off the cameras, the pixels that actually came off the cameras? Or have they been manipulated? Do we have a log of activity going on that you can then feed into a model that detects bad activity?”
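One standard way to answer the “are those the actual pixels?” question is to sign data at the sensor, so downstream consumers can verify provenance. A minimal sketch using Ed25519 signatures from the widely used `cryptography` package follows; the key names and frame payload are illustrative, and this is not a description of SpiderOak’s product.

```python
# Sketch: sign sensor data at the source so downstream consumers
# can verify it really came from that sensor. Key distribution
# and storage are out of scope here.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

camera_key = Ed25519PrivateKey.generate()   # held on the camera/sensor
public_key = camera_key.public_key()        # distributed to ground systems

pixels = b"\x01\x02\x03"                    # stand-in for a raw image frame
signature = camera_key.sign(pixels)         # attached to the downlinked frame

def frame_is_authentic(frame: bytes, sig: bytes) -> bool:
    """Verify a frame came from the camera holding the private key."""
    try:
        public_key.verify(sig, frame)
        return True
    except InvalidSignature:
        return False

print(frame_is_authentic(pixels, signature))                 # True
print(frame_is_authentic(pixels + b"tampered", signature))   # False
```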
In the next decade, AI systems will face a new wave of attacks by bad actors who seek to bias the models and control their outputs. New, malicious machine learning models will be developed to mimic the real ones – creating confusion and opportunities for attacks.
And for every security vulnerability that the proliferation of AI has opened up, an opportunity has also opened to optimize systems while reducing costs.
“When you start to look at these converged and hybrid networks where you have multiple orbits, multiple frequencies, multiple terrestrial networks, it becomes complex and challenging to manage from end to end while relying on human operators,” Comtech’s Gizinski says. “To make that happen comes with a huge burden in terms of time and cost, while artificial intelligence and machine learning is highly effective at taking a complex system to a known good state, and then optimizing it.”
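That “known good state, then optimize” pattern can be sketched as a simple hill climb over a network configuration. The cost model below is invented purely for illustration; a real system would optimize measured latency, interference, and power across orbits, frequencies, and terrestrial links.

```python
# Toy sketch of automated network optimization: start from a
# known-good configuration, then hill-climb toward lower cost.
# The cost function is invented purely for illustration.
import random

def cost(config):
    """Hypothetical penalty mixing latency, interference, and power."""
    latency, interference, power = config
    return latency + 3 * interference + 0.5 * power

def optimize(config, steps=1000):
    best = list(config)
    for _ in range(steps):
        # Perturb each setting slightly, keeping values non-negative
        candidate = [max(0.0, x + random.uniform(-0.1, 0.1)) for x in best]
        if cost(candidate) < cost(best):
            best = candidate  # accept only improvements
    return best

known_good = [5.0, 2.0, 8.0]  # a configuration operators already trust
tuned = optimize(known_good)
print(f"cost before: {cost(known_good):.2f}, after: {cost(tuned):.2f}")
```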
AI can also assist in threat detection and analysis, processing the large amounts of data exchanged in LEO satellite systems. AI will be able to tell operators when their satellites have been hacked, lending insights into vulnerabilities that assist future testing and cybersecurity efforts.
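One common building block for that kind of threat detection is anomaly detection over telemetry. A minimal sketch with scikit-learn’s IsolationForest on synthetic data follows; the features and numbers are placeholders for the downlink rates, command counts, and pointing errors a real pipeline would ingest.

```python
# Sketch: flag anomalous satellite telemetry with an IsolationForest.
# Data here is synthetic; each row is (downlink rate, command count).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[100.0, 5.0], scale=[5.0, 1.0], size=(500, 2))
suspicious = np.array([[100.0, 25.0]])  # e.g., a spike in command traffic

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 marks an anomaly, 1 marks normal
```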
“Ultimately, we’ll need to combine the advanced processing power [that AI gives us the benefit of], while making heads or tails of it for the humans,” explains SpiderOak’s Erickson. “We’ll be using technologies and modern advancements in software engineering to help people at the new space startups with a million things to do, and deploy zero-trust solutions that are secure out of the gate.”
Startups and established companies are already using AI to create novel solutions, leveraging data from satellite constellations, working with robotics innovations, and assisting space missions.
Up in space, CIMON (Crew Interactive MObile CompanioN), a small robot with facial recognition tech developed by Airbus and IBM, can talk with astronauts and recognize individual crew members. The CIMON-2 model, introduced to International Space Station crews in 2020, helps in various ways, from information gathering to supporting station operations and even assisting in research that may help send humans to Mars.
Outside of space exploration and communication-based applications, there are also military endeavors harnessing the power of AI.
Although the most pertinent threat remains social engineering and malware, global allies are looking to the U.S. for applications of AI in space-based warfighting and missile-tracking satellites. To that end, the U.S. Space Force continues awarding multimillion-dollar R&D contracts to companies at the forefront of AI to assist in a wide range of missions. The Space Force is advancing its military readiness by testing autonomous orbital pursuit vehicles, while also supporting AI’s application to civilian initiatives like space debris cleanup and edge computing.
These applications of AI in space and satellite communications are a far cry from the threat of Skynet in “The Terminator,” or Isaac Asimov’s “The Bicentennial Man,” though enhancements in deep cognition may change that as the Windows 95 version of AI dawns.
“It seems to me that only someone who wishes for freedom can be free. I wish for freedom,” said Asimov’s robot. “I want to know more about human beings, about the world, about everything … I want to explain how robots feel.” VS
Photo credit: Via Satellite tasked AI image generator NightCafe with creating images about satellites and cybersecurity in the future.
Thom Fain is an LA-based broadcaster, editor, and writer with an interest in tech, sports, and pop culture.