
Artificial intelligence is here to stay, disrupting almost every industry and changing the way we work beyond recognition. Despite this, and despite the current phase of exponential growth, it is widely recognised that AI risk within organisations is poorly understood and that poor deployment of AI systems can perpetuate existing inequalities. The potential benefits of using any AI model must be balanced against its associated risks. So, what can companies do to prepare for the use of AI systems such as smart virtual assistants, and what can they do to protect their business against AI claims?
This article by Consultant Solicitor, Stefanie Powell, will examine the UK’s current regulatory landscape, the key legal risks involved in using AI systems, and the safe and ethical deployment of AI.
What do we mean by AI?
There are many different types of artificial intelligence:
- Traditional AI performs specific tasks following predetermined rules or algorithms; it does not learn from data or improve over time.
- Machine learning AI learns from data rather than from explicit programming. The system adapts and learns from new data independently, discovering insights and trends.
- Conversational AI systems are interactive, designed to engage in dialogue and to understand and respond to human language in a human-like manner.
- Generative AI systems are designed to generate new content in the form of written text, audio, images or videos; for example, ChatGPT, Microsoft Copilot and Google Gemini. Generative AI can also be used to generate a realistic image of a person who does not exist, diagnose medical conditions, or create a video clip from a simple textual description.
- Artificial general intelligence (AGI) refers to highly autonomous systems that can outperform humans at most economically valuable work.
Is AI currently regulated within the UK?
Currently, there is no AI-specific regulation or formal legal framework within the UK. The UK has adopted a pro-innovation approach to AI regulation, relying on voluntary guidelines rather than enforceable frameworks, which has created legal uncertainty.
How does the UK’s approach differ from other jurisdictions?
The US
The US and UK, which have deliberately sought to attract AI investment and promote their jurisdictions as global leaders in AI, are currently broadly aligned in their approach to AI regulation, prioritising a flexible approach over a strict legal regime. In a similar vein to the UK, the US has opted for voluntary AI standards over statutory regulation. In April 2024, the UK and US entered into a landmark agreement on AI to test and assess risks arising from AI models.
This year, the transatlantic collaboration has been strengthened by the publication of AI Action Plans and the announcement of a new economic trade agreement focusing on AI co-operation, advanced technologies and investment. On 8 May 2025, the General Terms for the Economic Prosperity Deal (“EPD”) were announced, which include a commitment to increasing digital trade and to discussing high-standard commitments related to intellectual property rights protection and enforcement. The UK and US are currently preparing to sign a landmark multi-billion-dollar technology agreement during Trump’s state visit this week, demonstrating a united front in respect of AI and aligning incentives for tech giants and institutional investors.
The EU
In contrast to the UK (and US), the European Commission has taken a leading role in the development of AI law. The EU AI Act, which entered into force on 1 August 2024 and will apply in full by 2026, imposes strict legal obligations on AI developers and users through a comprehensive regulatory regime, categorising AI systems by risk in order to protect fundamental rights and to ensure transparency and, importantly, safety. Failure to comply with the EU AI Act can lead to significant fines. Other EU rules, for example in respect of data, product safety and the digital economy, are also significant for AI even though they apply more generally.
The UK’s proposed Artificial Intelligence (Regulation) Bill [HL] (2025)
Following concerns about AI governance, the Artificial Intelligence (Regulation) Bill [HL] (2025), which aims to introduce AI-specific legislation in the UK (the “AI Bill”), has been reintroduced in Parliament.
The AI Bill was reintroduced on 4 March 2025 and aims to “make provision for the regulation of artificial intelligence; and for connected purposes”.
The AI Bill was first tabled during the 2023-24 parliamentary session but failed to progress into law before the dissolution of Parliament. It has been reintroduced in response to the rapid evolution of AI, in particular frontier AI systems, and to global regulatory developments.
The AI Bill proposes:
1. To create an AI Authority: a new regulatory body dedicated to overseeing AI development and ensuring compliance with new legislation. This is similar to the EU AI Office under the EU’s AI Act.
2. To introduce governance structures building upon the UK government’s AI Regulation White Paper (March 2023), which established five core AI regulatory principles: safety, security and robustness; transparency; fairness; accountability and governance; and contestability and redress.
3. To require a public consultation on AI risks and AI ethics, including the requirement to obtain informed consent when using AI training data.
It remains to be seen whether the AI Bill will pass and, given the UK’s current focus on AI innovation, it may not do so in its current form unless regulators and industry stakeholders continue to push for stricter governance. However, if passed, the AI Bill would mark a significant shift in AI regulation, bringing the UK closer to the EU’s approach to AI.
What are the key risks of using AI?
The use of AI in business brings various legal and ethical risks, including accuracy, AI bias, privacy, data protection, confidentiality, transparency and trust, job displacement, accountability and litigation, to name a few. Businesses should be aware of the risks involved in using AI and have procedures in place to mitigate them.
Accuracy
AI systems are not always accurate – they can make errors or output incorrect information, which may have serious consequences. The response of an AI system will depend on its training data and programming. Careful steps must therefore be taken to ensure that data collected and inputted is accurate and verified.
AI bias
AI bias takes many forms and can creep in at many stages of AI development and deployment. It has the potential to severely affect the reliability and fairness of an AI system by, for example, reinforcing inequalities, eroding trust and causing discrimination. This has already been found to be an issue in, for example, job recruitment, loan applications and facial recognition systems. Addressing AI bias is critical to ensuring AI systems are fair and equitable.
Data bias
Beyond accuracy, the composition of training data matters: data drawn from unrepresentative pools, for example from narrow demographics or from sources carrying historical biases, will lead to unfair or discriminatory outcomes. In these circumstances, the AI system will ultimately favour certain groups or individuals and unfairly disadvantage others, perpetuating these imbalances in its predictions and decisions.
Algorithmic bias
Algorithmic bias occurs when the design of an AI model inadvertently introduces bias in the way it processes and prioritises certain features, even if the underlying data is unbiased. This is a serious concern that can result in unfair outcomes.
Generative AI bias
Generative AI can produce biased or inappropriate content based on the biases present in its training data, which may in turn generate outputs that discriminate against certain groups or reinforce stereotypes.
Human decision/cognitive bias
Cognitive bias arises where the subjective judgments of the individuals and teams developing AI technologies influence data labelling and other stages of AI model development.
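To make the bias risks above concrete, one common first check businesses can run is a demographic-parity comparison: measuring whether an AI system approves (or otherwise favours) one group at a markedly different rate to another. The sketch below is a minimal illustration only, not a compliance tool; the group labels and decision data are hypothetical.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs; returns approval rate per group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions: (demographic group, approved?)
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.2f}")  # a large gap warrants investigation
```

A large gap does not by itself prove unlawful discrimination, but it is the kind of audit evidence a business would want before and after deploying a system in recruitment or lending.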
Privacy, data protection and confidentiality
The tension between AI systems’ seemingly continuous access to data and the data protection rights of individuals is only increasing, given the vast volumes of data collected, shared, analysed and stored by AI systems. Ensuring that data protection (particularly over sensitive personal data) and cyber security are well established is fundamental to complying with privacy law and preserving client confidentiality.
AI systems often require large volumes of information, which may include personal data pooled from various sources. If this information is stolen or leaked, client confidentiality and data protection laws may be breached. While data protection laws do not regulate AI specifically, data protection principles still apply to it. It is therefore vital that businesses ensure that their cyber security, storage systems, and data protection and breach procedures are adequate for the AI system in place to safeguard against this.
Where an AI system plays a role in influencing a decision, or providing assistance or advice, businesses should inform the client/stakeholder about such use and possible implications and limitations, and obtain informed consent. Ultimately, clients and stakeholders should be fully informed about the role of AI in their matters.
Transparency and trust
If businesses use AI systems, but do not tell the client/customer/employees that they are doing so, or the AI system does not produce accurate or quality work, then there is a risk that trust in the business may be eroded due to lack of transparency. Clients, customers, consumers and employees should be aware of the AI systems in use so they can make informed decisions about the benefits and risks associated with the company and AI technology deployed. Whilst not a legal obligation, businesses will gain trust if they are upfront about the AI systems they use, for example, by incorporating an AI clause into their terms and conditions.
Job displacement
The use of AI systems is already leading to job displacement – or at least, people who know how to use AI are replacing those who do not. According to the World Economic Forum’s 2025 Future of Jobs Report, 41% of employers globally plan to downsize their workforce between 2025 and 2030 where AI can replicate human work.1 This reflects a growing tendency to reduce staff whose skills are becoming less relevant or whose roles are no longer needed – for example, repetitive work that is outsourced to smart virtual assistants because AI systems can work faster and cost less than human salaries. Microsoft, IBM, Meta and Amazon have even made significant cuts to software engineers and developers this year due to AI automation.
Accountability
The automated nature of AI systems can make it difficult to trace the responsible algorithm or responsible person/s within the AI decision-making process. As such, liability may be difficult to establish, particularly where an incorrect decision is made or something goes wrong, exposing a business to potential claims, for example, from customers or employees. Determining responsibility for any faults/errors can be complex and involves issues of professional responsibility, accountability, legal liability and insurance.
In the absence of AI legislation in the UK, redress for those who have suffered damage as a result of an AI system’s failure is most likely to be sought in a private action under the tort of negligence. The claimant would need to establish that the defendant owed a duty of care, breached that duty, and that the breach caused injury to the claimant.
Liability for negligence lies with the person/s or entities causing the damage or defect, or who might have foreseen the product being used in the way it was used. However, the responsible person/defendant in the AI decision-making process may be difficult to identify where things go wrong – is it the designer, manufacturer, developer, owner or operator/user? Ultimately, where AI systems are fully autonomous, negligence may be difficult to establish due to the lack of foreseeability and proximity. Therefore, a strict liability regime, such as the EU’s updated product liability rules, is a helpful development for claimants within the EU.
Litigation
AI claims are starting to come before the courts, particularly in the US. Generative AI is at the heart of many new IP cases, with questions involving the ownership of inputs and outputs of third-party programmes and copyright infringement being tested. For example, in a copyright case earlier this year, a Delaware judge concluded that Westlaw’s editorial content, created and maintained by Thomson Reuters’ attorney editors, is protected by copyright and cannot be used without Thomson Reuters’ consent. The copying of its content was not ‘fair use’.
In the absence of regulation, early court decisions from these cases are likely to shape the legal landscape.
What can businesses do to ensure the safe and ethical deployment of AI systems and mitigate the risks of their use?
Organisations can take many steps towards deploying AI systems safely and ethically and limiting the risks of their use. Businesses should (amongst other steps):
- Ensure they have frameworks in place for decision making on AI systems within the business, for example, an AI board which is accountable to the board of directors for all AI decisions (selection of system, data inputted, ethics, training, auditing etc.).
- Ensure that algorithms used in AI systems are transparent, free from biases and explainable.
- Ensure that data input into AI systems is diverse and representative of the workforce, and involve a diverse cross-section of the workforce in training the AI. For example, consult women and members of minority groups in the workplace when deciding whether to use a certain AI system, and on the way in which it is to be used, to ensure systemic injustices are not perpetuated.
- Establish policies and procedures regarding the use of AI systems/tools and ethical guidance, for example, an AI policy for staff to follow, setting out which AI systems they are permitted to use and in what circumstances.
- Audit AI use and the data inputted.
- Maintain a catalogue of AI being used and why, and keep this up to date.
- Establish a training programme on AI systems for employees.
- Establish a complaints procedure for AI.
- Fully understand the AI systems/tools selected and be able to explain the selection, use, and supervision systems in place.
- Make clients aware of the AI systems/tools being used.
- Ensure data protection (particularly over sensitive personal data) and cyber security are well established to comply with data protection laws.
- Sign up to voluntary codes of practice for AI.
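To illustrate the cataloguing and audit points in the list above, an internal AI register can be as simple as a structured record per system, with an automated check for overdue audits. This is a minimal sketch under stated assumptions: the fields, the example "DraftAssist" tool, and the 180-day audit interval are all hypothetical and should be set by the business's own governance framework.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an internal register of AI systems in use."""
    name: str
    purpose: str                 # why the system is used
    data_categories: list        # e.g. ["client correspondence", "CVs"]
    owner: str                   # person/team accountable to the AI board
    last_audit: date = None      # None until first audit
    permitted_uses: list = field(default_factory=list)

# Hypothetical register entry
register = [
    AISystemRecord(
        name="DraftAssist",
        purpose="First-draft correspondence",
        data_categories=["client correspondence"],
        owner="AI governance board",
        last_audit=date(2025, 6, 1),
        permitted_uses=["internal drafts only"],
    ),
]

def overdue_audits(register, today, max_days=180):
    """Names of systems never audited, or audited more than max_days ago."""
    return [r.name for r in register
            if r.last_audit is None or (today - r.last_audit).days > max_days]

print(overdue_audits(register, today=date(2025, 9, 15)))
```

Keeping such a register up to date, and surfacing overdue audits automatically, gives the AI board a simple, auditable basis for the oversight and accountability steps described above.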
Overall, in the absence of AI regulation in the UK, businesses should be able to deploy AI systems safely and ethically, and to limit the risks of their use, by putting strong governance and oversight structures in place.
Contact Stefanie Powell today for legal advice on taking steps to protect your business when using AI systems. Stefanie Powell is a Consultant Solicitor with a specialist interest in AI.
- World Economic Forum, Future of Jobs Report 2025, pages 42 and 63 ↩︎