This briefing is the first in a series by Codified Legal on the key legal issues posed by artificial intelligence.
Much of the current concern surrounding AI centres on large language models and generative AI, such as ChatGPT (sometimes referred to as "weak AI" or "narrow AI"), which can perform tasks to a high level under direction from humans. Other forms, such as sentient "True AI" (sometimes referred to as "strong AI" or "general AI"), capable of operating independently much as human intelligence does, are currently theoretical, although research is ongoing and strong AI could become possible in the not-too-distant future. This briefing focuses on the legal issues surrounding AI as it is currently available.
THE CURRENT UK POSITION
There is currently no specific, standalone AI law or regulation in the UK, although some existing laws address particular AI-related issues, notably the UK GDPR rules on automated decision making. Moreover, most UK law and regulation is "technology neutral": if an AI product or service creates an outcome that is contrary to existing laws (for example, the Equality Act 2010 or product safety laws), then those existing laws will apply.
UK GOVERNMENT'S NEW PROPOSALS
On 29 March 2023, the UK government published a white paper setting out its plans for AI regulation. With the UK aiming to maintain its position as one of the world's leaders in AI technology, the white paper takes a pro-innovation approach, encouraging investment in AI and building trust in it rather than placing a regulatory burden on AI developers.
A white paper has no legally binding status; instead, it seeks to gauge views on a subject and inform discussion on key topics. Growth in AI is happening at an exponential rate, which raises the question of whether a white paper is moving quickly enough. Elon Musk and other tech leaders have called for a six-month halt on AI development to allow regulators to get up to speed, and there are major fears that, if not kept in check, AI development could become an uncontrollable arms race. The UK approach does have some potential advantages in terms of flexibility, but there are concerns that potential economic gain is being prioritised over other issues.
The white paper argues that the government does not want to rush into creating legislation that is not fit for purpose or that would stifle the industry's growth, particularly with regard to small businesses and startups. It suggests five principles that existing regulators and future regulation should adhere to:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
For the time being, these principles have no statutory effect; they will instead be applied by existing regulators, who will be left to define their applicability and scope themselves. In time, the plan is to impose a statutory duty on those regulators to have due regard to the principles, though the paper is definitive that there will not be a new AI regulator. The main intention is to allow maximum flexibility while minimising disruption for business and increasing public confidence in AI.
The white paper does not lay down definitive plans for what the regulatory regime will look like; however, the government intends to produce a road map within the next six months that will add further clarity.
The paper provides some direction as to how this process will operate, listing under each principle what it anticipates will need to be done. Under safety, security and robustness, for example, the anticipated tasks are to “provide guidance about this principle”, “refer to a risk management framework, which AI life cycle creators should apply”, and “consider the role of available technical standards”. By the nature of a white paper, these are somewhat vague.
It is important to note that the white paper makes no mention of a blanket ban on certain forms of AI, opting instead for what could be quite a reactive position. This approach could have downsides: harmful AI outcomes may be more likely to slip through the regulatory net, or it may be too late to act once harm has already occurred, which may be worrying for consumers who could be on the receiving end.
We are already witnessing the harmful elements of various AIs. In one instance of racial bias in US healthcare provision, an AI was trained to predict who would require additional healthcare based on expenditure; because some racial groups spent less on the same health issues, the AI failed to identify patients from those groups who potentially needed additional care. Other examples include AI being used to cheat in exams, or "hallucinating" output that it deems plausible despite it being false.
There is, nonetheless, a clear intention in the UK to promote AI innovation and investment, with a bias towards encouraging the UK tech industry and assisting the UK economy. Encouraging AI also has huge potential benefits for society, such as predicting which people are likely to suffer certain diseases, or picking up on data that doctors may have missed and alerting them, increasing the chances of a correct medical diagnosis.
The UK's approach of light-touch, devolved regulation has some advantages. Devolving power to individual regulators is faster than getting a major piece of AI legislation onto the statute books (directly at odds with the EU approach outlined below). However, it risks creating a patchwork of regulation that lacks consistency and increases uncertainty about how the rules apply, especially in areas of overlap (such as the use of data). The white paper does propose a small central function to monitor and co-ordinate the regime.
Although there is growing excitement over AI, there is certainly fear too, and whether this less prescriptive, more reactive approach strikes the right balance between the benefits and opportunities of AI and its risks remains to be seen.
EU REGULATORY APPROACH
As noted above, the GDPR already contains provisions on automated decision making under Article 22, which restrict the use of AI in certain scenarios.
In terms of future regulation in the EU, there is a far more stringent and centralised regulatory regime being created. The EU is not willing to take the flexible path that the UK is currently pursuing, evidenced by the proposed EU AI Act, which is set to be the first AI-specific legal framework passed by a major regulator. It seeks to cement the place of the EU as, arguably, the world's leading technology regulator, building on the reach created by GDPR in 2018.
There are some similarities between the EU and UK proposals, one key driver in both being increasing trust in AI. The EU AI Act will do this through a risk-based approach, designating four tiers of risk. The highest tier, "unacceptable risk", will target the most socially harmful AIs by banning them outright: those that pose a “threat to the safety, livelihoods and rights of people”, such as “toys using voice assistance that encourages dangerous behaviour”.
A step down from that, "high risk", will include AIs used in areas such as construction, education and the administration of justice. These AI tools will be subject to authorisation by judicial or supervisory bodies, and their use will require evidence of human oversight as a further safety measure.
The final two levels will be called "limited risk" and "minimal or no risk". Under the "limited risk" category, transparency obligations will still be imposed, such as having to notify someone that they are interacting with a machine in the case of chatbots like ChatGPT and Bard.
Most AI currently used in the EU will fall within the lowest category ("minimal or no risk"); for example, spam filters and AI-enabled video games. Use of those AIs is to be unrestricted in the EU, subject to other regulations such as GDPR.
The proposed Act also carries substantial sanctions for non-compliance: up to €30 million or 6% of turnover for breaching a prohibition, €20 million or 4% of turnover for an infringement of obligations, and €10 million or 2% of turnover for supplying misleading, incomplete or incorrect information. An EU Artificial Intelligence Board will also be established to supervise the operation of the Act, make recommendations and provide guidance, among other duties.
The passing of the AI Act into legislation had seemed imminent, with the European Parliament due to finalise its position (now likely towards the end of April) before the Commission, Council and Parliament discuss the final details. The intention had been for the Act to be passed by the end of 2023, but that deadline seems increasingly unlikely to be met, and how long the delay could be is not entirely clear. The disadvantage of the EU approach of passing a large piece of AI legislation is that the process is much slower than the pace at which AI is growing.
INTERACTION BETWEEN THE UK AND EU REGIMES
When in force, the EU AI Act could affect the UK's regulatory position, potentially forcing the UK off the flexible path and towards a more comprehensive, legislated regime as many UK businesses seek to meet EU standards for trading purposes.
Although the circumstances with GDPR were different (it was agreed before the UK left the EU), UK business welcomed working to a single standard rather than designing processes that worked only in the UK with separate arrangements for EU customers or operations. The same is likely to apply to AI: a UK development team will want its AI tools and software to be usable by customers in the EU, and will not want to have to change its development processes retrospectively to ensure this is the case.
The UK accounts for more than one third of Europe's AI companies, and the white paper seems focused on maintaining or increasing that share. The relaxed approach may encourage more companies to base themselves in the UK, but as they mature they will likely want to operate in the EU market, and that will be a driver to adopt EU regulations.
AI IN THE US
The US is also beginning to make AI regulatory changes, proposing AI regulatory frameworks to accompany existing regulations. New York, Illinois and Maryland, for example, have regulation in place to reduce the risks of AI used in employment decisions; New York law now prescribes that automated employment decision tools be subject to annual bias audits.
Other states (such as California) are implementing privacy rules similar to the GDPR's treatment of automated decision-making tools. A number of state legislatures are also discussing draft AI legislation, which tends to focus on protecting individuals in high-impact areas of AI and on requiring transparency where AI tools are used. Pennsylvania is proposing a state registry of businesses operating AI systems, which would have to include details of the systems used.
Congress has passed bills on US government AI systems, and executive orders and voluntary guidance have been issued. However, there is little sign of federal law being passed on the more general aspects of AI.
At a regulatory level, the Federal Trade Commission (FTC) has published ground rules largely aimed at increasing fairness and requiring AIs to be trained in ways that remove bias. The FTC has also taken enforcement action against companies misusing AI, requiring them to delete certain algorithms and training data.
The US position on AI regulation seems to be driven by the states and by regulators rather than at federal level (again creating a patchwork of rules, much like there could be in the UK), and state legislatures seem to be looking to aspects of EU regulation (partly embodied in the existing GDPR) for inspiration on how to respond to the challenge of AI.
In conclusion, the speed at which AI technology is advancing may mean that, by the time the best approach is worked out, we are living in a world of pervasive AI that is too difficult to undo. However, it may be that large language models and generative AI are the wake-up call required to get our regulatory house in order before we become capable of creating strong, sentient AI, which would pose even greater challenges for our societies. That type of AI may need much stricter control than even the EU is currently proposing.