The European Union’s AI Act, which came into force on 1 August 2024, introduces substantial new requirements for generative AI, in particular regarding the disclosure of copyrighted works used in training these systems. Here’s what this means for AI developers and providers in the space sector.
Background on the AI Act Amendments
The amendments to the AI Act specifically target generative AI, requiring providers to publicly disclose details of the copyrighted works used to train their models. This addresses growing concerns about copyright infringement in the training of generative AI systems. These systems, capable of creating text, images, and other media, have seen explosive growth and investment, exemplified by OpenAI’s ChatGPT and Microsoft’s significant investments in AI.
For the space sector, where AI and machine learning are increasingly used for satellite data analysis, mission planning, and autonomous navigation, these new requirements mean that developers must be transparent about the data sources used to train their models.
New Obligations for Generative AI Providers
Under the Act, providers of general-purpose AI models must put in place a policy to comply with EU copyright law, including respecting rights holders’ reservations of their text and data mining rights, and must make publicly available a sufficiently detailed summary of the content used to train their models.
Broader IP Considerations
The AI Act aims to balance innovation with the protection of fundamental rights, including IP rights. How that balance will be struck in practice remains to be seen, but the legislation emphasises protecting the intellectual property of AI system developers, such as trade secrets, while mandating public disclosure under certain conditions. For the space sector, this means ensuring that suppliers of AI tools provide the information necessary for compliance without compromising their own IP or trade secrets, and that authorities applying the AI Act also protect these interests.
Impact on the UK and Future Developments
Post-Brexit, the UK’s approach to AI regulation has diverged from the EU’s. The UK has not implemented the EU’s text and data mining exceptions and limits its own exception to non-commercial research. While the UK Government is consulting on broader exceptions, it has yet to propose any obligation on AI providers to disclose the copyrighted works used in training.
The UK aims for a lighter regulatory touch to foster innovation within the AI space. However, this approach may face challenges in a globalised market where the EU’s stringent regulations could set a de facto standard. The UK’s narrower exceptions and different regulatory frameworks could impact its competitiveness in AI development compared to the EU.
Practical Steps for AI Developers in the UK
UK-based AI developers in the space sector should collaborate closely with IP lawyers to navigate the complex landscape of training data and copyright compliance.
Key steps include auditing and documenting the sources of data used to train models, confirming that any copyrighted material is properly licensed or falls within an applicable exception, ensuring that contracts with AI tool suppliers oblige them to provide the information needed for compliance, and monitoring the diverging UK and EU regimes, including the EU’s disclosure requirements.
Compliance Deadlines
This landmark regulation takes a risk-based approach, categorising AI applications into four levels: minimal, limited, high, and unacceptable risk. Compliance deadlines vary by risk category: bans on “unacceptable risk” applications take effect six months after the Act’s entry into force, while compliance requirements for general-purpose AI, together with the accompanying Code of Practice, apply within twelve months. Most other obligations will apply two years after the Act’s entry into force, giving companies and national authorities time to align their practices with the new regulations.
Conclusion
If you need help navigating this intellectual property ‘space’, feel free to contact ip21.