From text generation to patent prosecution: could ChatGPT be the next legal tech breakthrough?

The use of Artificial Intelligence (AI) within Intellectual Property (IP) is an issue that continues to make headlines. The most noteworthy news piece is the ongoing saga of Dr Stephen Thaler and his AI ‘Creativity Machine’ called DABUS, which has received widespread attention within the world of IP and in the wider media.

Dr Thaler claims that DABUS is the inventor of two separate inventions and has filed numerous applications in many jurisdictions. So far, the consensus is that an AI cannot be regarded as an inventor, with the UKIPO, USPTO and EPO issuing decisions to that effect. The matter is ongoing, having been heard at the UK Supreme Court on 2 March 2023. At the moment we seem to be some way off AI actually inventing very much, outside of very carefully curated circumstances.

Perhaps it is more interesting to ask what role AI could play in the wider IP process. Recent developments in the field have many industries questioning what their future may look like in a world that is becoming increasingly integrated with, and reliant upon, AI. The IP industry should be no different and, as we will see below, we are at a technological juncture where it makes sense to consider these issues.

A key event that has prompted much of the present discussion surrounding AI was the launch of ChatGPT (Chat Generative Pre-trained Transformer) in November 2022: an AI chatbot developed by OpenAI to mimic human conversation through the use of a large language model. ChatGPT is able to answer exam questions; write songs, short stories and poems; and even generate and debug computer code, all from relatively simple text inputs. In fact, the title for this article was generated by ChatGPT after being asked to “create a title for an article about whether ChatGPT would make a good patent attorney”. A quick search for ChatGPT online will show countless examples of some rather impressive capabilities. This was closely followed by the release of GPT-4 on 14 March 2023, a large multimodal model that allows for both text and image inputs. Early indications show that this AI is substantially more powerful than its chatbot predecessor, and clearly this is a technology that is rapidly becoming a reality.

But these developments have caused many spectators to question what the implications of such capabilities may be. The reaction has been mixed to say the least. AI does have the potential to revolutionise many aspects of our daily lives, and some practical applications of the technology are already starting to become available. Microsoft, a major backer of OpenAI, has announced that ChatGPT‑like AI will be integrated into its Bing search engine to improve search algorithms.

There is widespread concern that these text-generation tools could be disastrous, particularly in the modern age of online misinformation. US academic journal Science quickly updated its editorial policies to specify “that text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools”. Meanwhile, education professionals are concerned by reports that AI has ‘passed’ university-level law, business and medical exams (amongst other things), which may require a wholesale rethink of how students are assessed.

So, it seems that now is a good time to reflect on what AI could mean for the patent profession (and other areas of IP), if not as an inventor, as part of the patent prosecution process. Could AI be a useful tool for professionals going forward? Could it even replace some roles altogether? And just how good is ChatGPT today at handling the intricacies of patent law?

We have been putting ChatGPT to the test, seeing how it fares with some patent law queries of varying complexity. Given the worries regarding AI being used to cheat on tests, this seems like an obvious starting point. Section A of CIPA’s foundation level FC1 UK Patent Law exam is made up of short, simple questions designed to test students’ recall of key sections of the Patents Act and Rules. 50% is the pass mark for the paper (though section B has longer and generally more in-depth, contextual questions).

We asked ChatGPT the section A questions from the 2021 paper, without any prior training or interaction. The results? A score of 18/40, just below the ‘pass’ mark. Within this, there was a mixed bag of answers.

ChatGPT performed quite well on question 1: “Your client claims to have invented a perpetual motion machine. List three grounds on which the UKIPO might raise an objection”. This question is not a simple ‘recall a particular section of the Patents Act’ task, but ChatGPT was able to correctly explain that: (i) the invention is not capable of industrial application (such a device would be impossible, so would not perform a useful function); and (ii) the application would lack sufficiency, as the description could not enable the skilled person to replicate the invention.

The AI did not pick up the third mark given in the mark scheme – that the claims would not be supported by the description – but did quite reasonably state that the invention would likely lack novelty/inventive step, given that many different perpetual motion machines have been proposed and designed in the past.

Additionally, training the chatbot is usually straightforward. For example, ChatGPT was not able to list any of the periods given in Section 20A of the Patents Act for which reinstatement is not possible. But once told what the correct answer should have been, the AI can repeat this back when later asked the same question. Running the test a second time (after providing the correct answers during the first test), ChatGPT was able to score 31/40. Scoring 45% on a first attempt with no revision and 78% on a second attempt is promising.

We also tested whether ChatGPT would be able to handle case-specific deadlines, the kind that are a daily occurrence when working as a patent attorney. In summary, the AI performed poorly, even when it had been provided with training information.

When questioned on the compliance period for a UK patent application (Rule 30) and the first renewal date and renewal period for a UK patent (Rule 37), it failed almost entirely. Even when provided with the rules and having been corrected multiple times, it struggled to account for the different factors that must be considered and how to determine when the relevant periods start/end (e.g., accounting for the deadline being at the end of the month).
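To show the kind of date arithmetic the AI stumbled over, the end-of-month behaviour can be sketched in a few lines of code. This is an illustration only, assuming the straightforward case in which the first renewal date is the last day of the month containing the fourth anniversary of the filing date; late grant and other complications under the Rules are deliberately ignored.

```python
import calendar
from datetime import date

def first_renewal_date(filing: date) -> date:
    """Sketch of the simple case: the first renewal date is the last
    day of the month containing the fourth anniversary of filing.
    (Late-grant scenarios and other complications are not handled.)"""
    anniversary_year = filing.year + 4
    # Find the number of days in the anniversary month, which may
    # differ from the filing month's length (e.g. leap years).
    last_day = calendar.monthrange(anniversary_year, filing.month)[1]
    return date(anniversary_year, filing.month, last_day)

# An application filed 15 March 2019 would fall due at month end:
print(first_renewal_date(date(2019, 3, 15)))  # 2023-03-31
```

Even this toy version has to handle varying month lengths correctly, which is exactly the detail ChatGPT repeatedly got wrong.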

When asked about the deadline for Article 19 PCT amendments, ChatGPT instead talked about the 30/31 month deadline for entering the national/regional phase. However, it was able to answer this once it had been corrected and told of the actual deadline (the later of two months from the transmittal of the International Search Report (ISR) or 16 months from the priority date).

But even then, ChatGPT struggled to acknowledge when it was lacking information. For example, when told that the ISR was issued on 1 December 2022, it would confidently state that the deadline for making Article 19 amendments was 1 February 2023. Although this is two months from the date of the ISR, it had not been told the priority date, so it could not determine the deadline with certainty.
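The two-limb calculation the AI missed can be made concrete. The sketch below (an illustration, not legal advice, using simple calendar-month arithmetic with end-of-month clamping) shows why the priority date cannot be ignored: with an ISR transmitted on 1 December 2022, the deadline is only 1 February 2023 if the 16-month limb has already expired.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of short months."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def article_19_deadline(isr_transmittal: date, priority: date) -> date:
    """Later of: two months from transmittal of the ISR, or
    16 months from the priority date."""
    return max(add_months(isr_transmittal, 2), add_months(priority, 16))

isr = date(2022, 12, 1)
# Late priority date: the 16-month limb wins, not 1 February 2023.
print(article_19_deadline(isr, date(2021, 11, 1)))  # 2023-03-01
# Early priority date: the two-month limb wins.
print(article_19_deadline(isr, date(2021, 6, 1)))   # 2023-02-01
```

Without the priority date, the function simply cannot be evaluated; a careful attorney would say so, whereas ChatGPT answered anyway.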

In fact, ‘overconfident’ is probably how ChatGPT can be described in one word. When faced with a question it cannot answer, it confidently provides a response that seems intelligible but is in fact nonsense. It also tends to greatly over-explain, providing superfluous or irrelevant details that look convincing but are often incorrect.

This is a common issue for large language models and is known as ‘hallucination’. OpenAI are aware of this and have made real efforts to prevent ChatGPT from making harmful or incorrect statements. But the issue is a difficult one because, primarily, chatbot AIs are trained to predict the next word of a sentence so as to sound as human as possible, even if the content of what is being said is incorrect or nonsensical.

Overall, ChatGPT is not currently a functional tool for providing legal advice in relation to patent prosecution. This is not particularly surprising – having been trained as a language model using general material, it simply does not have the information to answer certain questions. If it has never been provided with a point of law, such as what a particular deadline is, it will obviously not be able to answer questions on this.

But that is not to say that AI will never be a useful tool. We have seen (and other sectors have shown) that, when trained with relevant data, ChatGPT does have an impressive ability to understand instructions and notice subtle contextual prompts. One could imagine that in the future an AI could be trained specifically using legal text or perhaps a law firm’s actual case files. Provided with this information, it may well be able to provide useful answers.

It is unlikely that we will see AI being integrated into day-to-day work in the IP profession in the very near future. But as these capabilities become better trained and more specialised, it is reasonably likely that AI could at least be used to supplement existing tools and workflows. At the very least, it seems possible that in the near future a patent attorney may first ask an AI-backed search engine for a niche deadline they have forgotten, rather than thumbing the index of a hefty textbook.

Barker Brettell has a dedicated computer and software sector group that can assist and advise you on AI matters. To continue the conversation, please contact the authors, Matthew Philpotts and David Combes, or your usual Barker Brettell attorney.