Using AI as a legal tool

AI tools are not always correct or confidential, so don’t rely on them for legal advice on your IP

In the context of intellectual property (IP), a hidden and potentially costly issue with using AI tools is confidentiality. This is particularly important when providing details of an invention for which a patent application is yet to be filed.

If chat history is left on – as many users leave it – any input a user provides to an AI tool may be used to further train and improve the language model. This carries the risk that prompts could become available to the public before an application is filed. On top of this hazard, there is no attorney-client privilege and, therefore, no legal protection for any ‘dialogue’ a user has had with an AI tool.

Further security issues exist, given that most interactions with tools such as ChatGPT occur online and data is processed in the cloud. Finally, it is not always clear whether any formal agreement, such as an NDA, ensures non-disclosure of sensitive and/or confidential information.

Unfortunately, these factors do not cover everything that can go wrong when engaging legal services ‘rendered’ by AI tools. Several recent high-profile cases in the US and UK have seen lawyers damage their reputations, and people representing themselves burned, by relying on AI.

In the Ayinde case, Mr Justice Ritchie held: “on the balance of probabilities, I consider that it would have been negligent for this barrister, if she used AI and did not check it, to put that text into her pleading”.

This puts trained legal professionals using AI tools on notice: they should be cautious and thoroughly check any work produced with such tools. It also raises the stakes for those with no legal training, who may be at an even greater disadvantage in discerning when AI tools are leading them astray.

These cases highlight the limits of AI tools in the legal sector and the risks of relying blindly on AI-generated material, as well as the responsibilities and ethical obligations of legal service providers around this issue.

In its Risk Outlook report, the Solicitors Regulation Authority warned against the use or misuse of AI tools, stating: “All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this. That is because they work by anticipating the text that should follow the input they are given, but do not have a concept of ‘reality’. The result is known as ‘hallucination’, where a system produces highly plausible but incorrect results”.

It would be naïve to dismiss the role of AI in future society. However, while AI has the potential to assist European patent attorneys by automating routine tasks and improving efficiency, it cannot yet replace the human expertise, judgment, and creativity that are essential to the role. The work of a patent attorney involves a complex interplay of legal, technical, commercial and interpersonal skills. These aspects of the role are deeply rooted in human intelligence and cannot yet be replicated by AI. As such, patent attorneys will continue to play a vital role in the patent system, working alongside AI tools to provide the highest level of service to their clients.

If you have an invention or innovation you think might be patentable, speak to one of the authors, Oliver Pooley or David Combes, who will be able to confidentially advise and guide you through the process, or contact an AI specialist in our Computing & Software sector team.
