If artificial intelligence (AI) can write a book, unravel the story hidden within document collections for e-discovery, and beat a grandmaster at chess, surely it is capable of creating a patentable invention. But the question on the minds of the intellectual property community worldwide is: Should we recognize AI as a patent inventor of its own merit?
In 2019, Dr. Steven Thaler began filing patent applications around the world, naming his artificial intelligence DABUS—which stands for “Device for the Autonomous Bootstrapping of Unified Sentience”—as the “inventor.” Since then, much deliberation has led to dismissals in courts across the US, EU, and UK.
However, in August, Australia’s Federal Court ruled that an artificial intelligence system is capable of being named the “inventor” of a patentable invention.
Curious about what such a landmark decision could mean for the future of AI, patents, and IP litigation, I recently spoke with Aaron Hayward, a senior associate in Herbert Smith Freehills’ Sydney office, to ask a few questions:
Jacque Flaherty: What are your key takeaways from Thaler v. Commissioner of Patents?
Aaron Hayward: The key finding the Court made is that, under Australia’s Patents Act, an AI system can be named as an “inventor” of an invention that is the subject of a patent.
It’s important to say from the outset, though, that the Court’s finding is more technical and limited in scope than it might first appear. The need to identify an inventor is really a formal requirement before a patent can be granted, and under Australian patent law it is used for things such as determining who the rightful owner of the patent is (which, in practice, is often the inventor’s employer). That’s why a lot of the decision seemed to focus on seemingly arcane legal questions about property ownership, such as 18th century commentaries and cases about who owns the fruit from a tree you might have planted on your land.
But what the Court did not, strictly speaking, decide, is whether an invention created by an AI system should be patentable. On the specific facts of this case, a decision against Dr. Thaler would have meant that this particular patent application would have been refused, but that was because Dr. Thaler and the team who filed the patent applications deliberately only identified the AI system—DABUS—as the inventor. As I’m sure we’ll discuss, there may be other ways that AI-assisted or generated inventions can be protected by patents.
What are the arguments for and against allowing an AI system to be named an “inventor” for a patent?
As I mentioned before, it’s important to keep in mind the difference between what the Court actually decided—whether an AI system can be identified as an “inventor”—and the question of whether an invention created using (or, perhaps, by) an AI system should be patentable.
In relation to that second question, AI systems are already being used in any number of different fields to aid in research and development. Indeed, the Court referred to the various ways AI systems have been used in pharmaceutical research as an illustration, where AI has been used to identify molecular targets, identify potential “hit” or “lead” pharmaceutical agents, and assist in “polypharmacology” (the design of pharmaceutical agents that act on multiple biological targets). We can readily assume the use of AI in those fields, and others, is only going to increase. There is an obvious benefit in rewarding and encouraging investment in systems that assist in research like that. By contrast, as the DABUS team argued, it would seem to discourage that investment if the patent system only protected an invention with a patent if it was created using a less efficient manual method, or if a human had been artificially (pun intended) inserted into the process, even though that wasn’t required.
However, the real debate in this case was how the patent system should accommodate such inventions. Overseas, the US Patent and Trademark Office has suggested that the “inventors” should be identified as the humans who contributed to the invention. A similar suggestion was raised by the English Patents Court in a judgment relating to the UK equivalent of the same DABUS patent as in this case, where the Court remarked (without deciding) that it did not think it would be “improper” to argue that the owner or controller of an AI system could be identified as the “inventor” on a UK patent.
Part of the challenge is that, in some jurisdictions, there remains some (perhaps romantic) notion that the “inventor” must have engaged in what you or I might naturally regard as an “inventive process.” That isn’t the case in Australia, though, where an invention can be patented even if it is “remembered from a dream.” Here, however, the challenge was more procedural. The Commissioner of Patents argued that an AI system cannot own property—something that the DABUS team, and the Court, agreed with. However, since Australia’s Patents Act states that a patent can only be granted to the inventor, or someone who derives title from the inventor, the Commissioner argued that it was necessary for the “inventor” to be someone who could, legally, own property, since they must be able to “own” it before they could assign it to someone else. This was essentially the same reasoning that led to the English Patents Court finding against the DABUS team.
The DABUS team, on the other hand, argued that one can “derive” title to property without the creator of the property ever owning it—thus the (otherwise seemingly odd!) references to fruit from trees and offspring from livestock. They argued that having to identify a human, somewhat arbitrarily, as the “inventor” of something created by AI generates confusion and uncertainty, and pointed to an example in which an AI system owned by Siemens created a new and unexpected design for a product, yet none of the engineers involved were willing to say they were the “inventor.” By contrast, they argued, identifying the AI system can incentivise disclosure of the AI system and, above all, would reflect reality.
As we now know, the Court agreed with the DABUS team.
The decision differed when the same case was brought in the UK, the EU, and the US. What kind of international precedent does Australia’s decision set for intellectual property?
The simple answer is that the Australian decision strictly applies only to Australia and won’t act as binding precedent overseas. That is particularly so given the case concerned a relatively technical question that depended on the wording of Australia’s Patents Act. Even the UK Act, which has a similar provision, uses slightly different wording, and that alone could be enough to distinguish the cases.
In practice, however, even though overseas decisions don’t create any form of binding precedent, given that patent law has at least some degree of international harmony, Courts do commonly consider decisions from other jurisdictions to the extent that they might provide guidance, especially in relation to questions that are fundamental to what the patent system seeks to protect. That is particularly so among common law countries such as Australia, New Zealand, the UK, and Canada, and to some extent the US. For example, we have seen ultimate courts of appeal in those countries do exactly this in relation to questions concerning the patenting of genetic material and computer-implemented methods. We know that judgments of the IP specialist judges of the Federal Court of Australia on IP matters are well regarded by IP specialist judges in those jurisdictions. As such, the fact that the Federal Court of Australia has found that it is possible, within the confines of its existing patent law, to recognise AI inventors may well have some influence on similar decisions overseas.
What does this mean for the application of AI in Australia?
Obviously, the decision is a significant milestone for the use of AI in inventive fields in Australia, since people or organisations designing or using AI systems can have confidence that Australia’s patent system will seek to reward that investment if it produces something new, inventive, and useful. To some extent, that is the case even if the DABUS team is ultimately unsuccessful in any appeals, since the decision—and the debate surrounding it—will no doubt continue to agitate public discussion about the need, and method used, to encourage and protect investment in AI systems in scientific or technical fields.
As you mentioned, this has been a widely debated decision worldwide and may be subject to further appeals. What challenges do you think both parties could face moving forward?
Yes, and indeed the Commissioner of Patents has recently filed an appeal against the decision, and we shouldn’t be surprised if whoever is unsuccessful in that appeal in turn seeks leave to take the matter to the High Court of Australia (Australia’s highest court and ultimate court of appeal). And, as you say, decisions overseas relating to equivalent patents are also currently in various stages of being argued, ruled upon, and appealed. The coming months will be an interesting time to keep watch on patent systems around the world as they grapple with this question.
One thing that is clear is that, if the (ultimate) decision in any particular jurisdiction is that an AI system cannot be named as an inventor, it is likely that the relevant patent office, and perhaps legislature, will be under some pressure to ensure there’s a workable alternative to ensure AI-derived inventions can be protected.
A more general issue that patent offices around the world, and eventually their corresponding courts, are likely to have to deal with in the not-too-distant future is the impact that recognising the role of AI in inventions will have on the assessment of whether a patented invention involves an “inventive step,” which is necessary for the patent to be valid. In many jurisdictions, including Australia, that assessment is “objective”—that is, it is not concerned with what the inventor actually did, but what a hypothetical researcher can be expected to have done—and so it does not strictly change depending on whether the AI is regarded as the “inventor.” But the Court in this case nonetheless recognised that, with the increasing use of AI in research and development, such a question might well arise in the foreseeable future. For example, would that hypothetical researcher have also had access to an AI system? No doubt that will be a hotly contested issue among different interested parties. So, we are sure to be in for a fascinating time for those interested in AI and innovation!
Jacque Flaherty is a senior marketing manager at Relativity, focusing on advocating for our user communities in EMEA and APAC.
Artwork for this article was created by Natalie Andrews.