Issues around AI (artificial intelligence/machine learning) are fast moving, and the Government recently issued a white paper consultation on how AI could be regulated: A pro-innovation approach to AI regulation. Among other things, the paper posed questions on what AI regulators should be doing, whether organisations should be required to make it clear when they are using AI to ensure adequate transparency, what is required for confidence and trust in the use of AI technology, and what routes should exist to compensation for AI-related harms.
The white paper outlines five government principles that regulators should consider in order to best facilitate the safe and innovative use of AI in the industries they monitor. The principles are:
• safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
• transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and explain a system’s decision-making process at a level of detail that matches the risks posed by the use of AI
• fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR
• accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
• contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI
Instead of specific legislation, the white paper proposes that existing regulators apply these principles. AOI responded, making the case for illustrators, stressing the central importance of copyright where images are used to train AI and the requirement for regulators to ensure respect for creators’ rights, and commenting on the government’s preference for voluntary solutions between AI organisations and creators (rather than legislation).
We emphasised that illustrators support progress in technology but also recognise the need for protections and safeguards where technology is applied. There are valid concerns over the unauthorised use of their images to train AI/machine learning for text-to-image platforms, and over the possible undermining of their livelihoods by AI-generated imagery. We said the default position should be that creators’ work will not be used for machine learning without their express permission, and that the legal framework for AI-related copyright infringement needs to be clear and in line with the UK’s existing copyright regime.
Copyright was not addressed anywhere in the consultation document, and we said that recognition of and compliance with our copyright laws is fundamental to any workable solution to the issues raised by AI. New AI businesses should not be seen as a legitimate reason for the erosion of creators’ rights; livelihoods and human skills need to be given special protections, or authentic and individual voices may be lost.
There should be transparency over the use of copyrighted works in the training of AI, and developers should be required to disclose all the sources of data used to develop their systems. For illustration, this would include which images make up their training datasets and how those images have been sourced.
We also covered, amongst other points, that it is essential to protect individual voices. The text prompts for AI text-to-image platforms can allow the input of words such as ‘in the style of (name)’, meaning that the generated images may be produced in the visual style of the named artist, and the use of an artist’s name in prompts could create direct competition with their own work.
In terms of helping individuals and consumers confidently use AI technologies, we said that illustrators using AI as an assistive tool will require a clear definition of what constitutes an AI-generated versus an AI-assisted work, and an understanding of how this affects the copyright status of their work. Their clients will also require a clear understanding of this status to be able to confidently use and publish commissioned artwork that may be AI assisted.
Further responses covered questions on the approach regulators should take to monitoring and evaluation, and on who should be involved to most effectively address capability gaps and help regulators apply the white paper principles.
See the full response here.
For AI updates go here.
Thanks to the CRA and BCC