Opinion

The Legal Impact of AI for Space in the EU

July 28th, 2024
Helena Correia Mendonça

AI is increasingly being used in satellite systems and in the provision and development of satellite products and services. This raises questions about the impact and limits of the use of AI in the space sector in light of the development of legislation addressing Artificial Intelligence.

The new EU AI Act (at the time of writing, not yet published) is a seminal legal act, establishing rules for the development and placing of AI systems on the EU market. However, it does not expressly address the space sector. There is increasing recognition in policy and in law of the role of space technologies and services in today’s societies: indeed, many laws, from several sectors, acknowledge and encourage the use of satellite services, often with the goal of promoting the EU Space Programme.

In addition, cross-sector laws are now also paying more attention to the space sector, with obligations expressly covering it – this is notably the case of the laws dealing with cybersecurity and resilience. Yet, there remains a substantial gap across the wider legal framework, where references to space services are still mostly missing. The AI legal framework is one such case in which the space sector is not expressly mentioned.

The AI Act establishes a set of rules applicable to providers and deployers of AI systems, as well as to importers and distributors. However, it does not apply to AI systems used exclusively for military, defence or national security purposes, or whose output is used in the EU exclusively for those purposes. It also does not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development, nor to research, testing or development activities regarding AI systems or models prior to their being placed on the market or put into service (though testing in real-world conditions is not covered by this exemption).

The AI Act contains several levels of regulation. First and foremost, all providers and deployers of AI systems under the AI Act are subject to obligations relating to AI literacy: they shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. This means that actors in the space sector that provide or deploy AI systems in the course of their activity will have to comply with this requirement.

More important, however, are the provisions applicable to prohibited practices and to high-risk AI systems.

The AI Act contains a list of prohibited AI practices aimed at protecting the fundamental rights of individuals. These prohibited practices include, among others, real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (unless certain strict requirements are met); AI systems for the evaluation or classification of natural persons or groups of persons based on their social behavior or personal or personality characteristics, with the social score leading to detrimental or unfavorable treatment; and AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from CCTV footage.

Satellite services could potentially be used for such purposes in the future, especially as the spatial and temporal resolution of Earth Observation increases (for example, satellite-based automated recognition of personal features such as body shape or posture, or the use of EO imagery in place of CCTV for similar surveillance). Yet, the prohibition applies only when AI systems are involved for such purposes. Indeed, no prohibition seems to apply to the mere provision of data to feed an AI system.

The AI Act also establishes a set of obligations applicable to providers and deployers of high-risk AI systems. These include AI systems that pose a significant risk of harm to the health, safety or fundamental rights of natural persons: for instance, and among many others, remote biometric identification systems; AI systems intended to evaluate and classify emergency calls by natural persons or used to dispatch emergency response services; AI systems used to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences; and AI systems used to assess a risk (including a security, irregular migration or health risk) posed by a natural person who intends to enter or has entered the EU. High-risk AI systems also include AI systems intended to be used as safety components in the management and operation of critical digital infrastructure (such as cloud computing and data center providers), road traffic, or the supply of water, gas, heating or electricity. This last category would seem the most natural place to have mentioned satellite systems, given their increasingly critical role in society.

Yet, though the use of AI systems as safety components in the management and operation of satellite systems is not covered by the AI Act, it could be discussed whether the integration of AI systems in satellite systems for the above-mentioned purposes would bring the satellite operator, to the extent it is the AI system deployer (i.e., the entity using the AI system under its authority), within the scope of the AI Act. In all other cases, AI systems can make use of satellite data and services, and, to the extent satellite operators or satellite data and service providers themselves provide or deploy AI systems (that rely on satellite data) for such purposes, they would be subject to the AI Act. The trend of operators moving downstream and providing value-added services tailored to clients’ needs may lead to such broader diversification of activities in the future.

However, once again, the mere provision of data or value-added services to feed a client’s AI system does not bring the satellite operator or data/service provider within the scope of the AI Act.

The AI Act brings, if not always clear-cut obligations for actors in the satellite sector, quite a few relevant opportunities. For instance, providers of high-risk AI systems must meet certain quality requirements for the data they use to train their AI systems, with the AI Act establishing requirements for the datasets, including that they be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose. Deployers, in turn, shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system. More, and more diversified, data plays a role in ensuring better data quality. This is a point where satellite services can make a relevant contribution, not least in light of the announced EU Strategy on Space Data Economy and the high-value datasets (including on earth observation and environment) under Regulation (EU) 2023/138.

The AI Act also contains obligations applicable to certain AI systems (e.g., those generating synthetic content) and to general-purpose AI models, such as large generative AI models. The use of AI-generated synthetic data and of GPAI models in the satellite sector means that actors in the sector can benefit in their activity from the safeguards imposed on such systems and models.

Actors in the space sector must thus take due account of the obligations and opportunities brought by the AI Act and assess, on a case-by-case basis, whether they will be subject to its legal obligations, which best practices they should nevertheless follow, and how they can best take advantage of the Act. VS

Helena Correia Mendonça is a Principal Consultant in the Information, Communication & Technology practice area of Vieira de Almeida.

Photo: Via Satellite archive photo