Published in X-Patents.ai.
Stephen Hou, VP & COO of American Patent Agency and long-time customer of Solve Intelligence™, explores the impact of artificial intelligence on patent law, focusing on new regulations for AI patent drafting and AI inventorship across major global patent offices. Stephen describes how jurisdictions in the IP5 (China, Europe, Japan, Korea, and the United States) are updating their guidelines to address the challenges of AI in the patent process, specifically stating that AI cannot be an inventor while recognizing its role in aiding human-driven patentable innovations.
SSRN article: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4843648
Author: Stephen M. Hou [LinkedIn Profile]
Vice President & Chief Operating Officer, American Patent Agency PC
Send comments to: stephen@apvusa.com
In response to the rapid rise of artificial intelligence (AI), major patent offices around the world are issuing rules on the inventorship of AI inventions, the patentability requirements for such inventions (subject matter eligibility, novelty, inventive step, and sufficiency of disclosure), the use of AI tools in patent practice, the impact of generative AI on prior art, and how AI affects the “person having ordinary skill in the art” (PHOSITA) standard. This article summarizes the most recent directives from the IP5 jurisdictions: the People’s Republic of China, Europe, Japan, the Republic of Korea, and the United States.
China’s revised guidelines expand patent claim categories for computer-related innovations and clarify that AI and big data algorithms that enhance computer system performance or user experience are pertinent to subject matter eligibility and inventiveness assessments. Europe issued new provisions for AI patents, requiring detailed disclosure of machine learning algorithms and the characteristics of the training data affecting the technical effect of an algorithm, but not requiring the training data itself. Japan added new case examples for AI-related technologies, highlighting the eligibility of methods that integrate software and hardware, the sufficiency of disclosing data correlations, and requirements for non-obviousness. Korea’s updated guidelines stress the detailed disclosure of AI implementation, such as a correlation between input and output data in trained models, and explain that inventive step is assessed based on the technical features and unexpected effects of AI training data, modeling, and applications in specific fields. The U.S. clarifies that while AI cannot be an inventor, it can be used in the conception or reduction to practice of an innovation, with patent protection contingent on a natural person’s substantial contribution to the inventive process. Further U.S. guidance focuses on the ethical use of AI in patent practice, emphasizing the duty of candor and verification of AI-generated documents, the disclosure of AI’s role in drafting if material to patentability, and the responsibility for AI-assisted filings and maintaining confidentiality. Finally, the U.S. is seeking public input on the impact of generative AI on prior art and the PHOSITA standard, addressing the challenges AI-generated content poses to patentability assessments and the potential shift in the non-obviousness and enablement requirements due to AI integration in the inventive process.
The IP5 is a consortium of the world’s five largest intellectual property offices: the People’s Republic of China (PRC) National Intellectual Property Administration (CNIPA), the European Patent Office (EPO), the Japan Patent Office (JPO), the Korean Intellectual Property Office (KIPO), and the United States Patent & Trademark Office (USPTO). To explore the growing impact of AI on the patent system, an IP5 expert roundtable was held in Munich, Germany in October 2018 to begin framing an ongoing discussion of the legal implications of AI for the patent system, particularly inventorship, ownership, patent eligibility, sufficiency of disclosure, and inventive step. Key points from the 2018 roundtable include the following (see [IP5 2018]):
From the perspective of inventorship, the roundtable identified three categories of AI-related inventions: (a) human-made inventions using AI for the verification of the outcome; (b) inventions arising from a human identifying a problem and using AI to find a solution; and (c) AI-made inventions, i.e., AI identifies a problem and proposes a solution without human intervention. All IP5 jurisdictions currently maintain the stance that inventors must be human beings, but each recognizes the challenges in discerning whether an AI or a human has created a particular innovation.
From the perspective of patentability, AI-related innovations are subject to the same patent eligibility criteria as other types of innovations within each IP5 office. This includes the exclusion of abstract ideas, natural phenomena, and mathematical methods from patentability. With that said, AI innovations themselves are typically classified as computer-implemented innovations (CII), and any examination guidance related to CII from the respective office may be applicable.
Each IP5 office also imposes disclosure requirements to ensure the reproducibility and repeatability of an invention. However, AI innovations may present challenges in disclosure due to the opaque nature of AI decision-making processes. Despite this, the IP5 offices uphold the sufficiency of disclosure requirements. In the context of AI innovations, the disclosure requirements can be satisfied, for example, by detailing the training process of the AI model and providing the data used for training. Disclosure requirements may also vary based on the particular invention: if the inventive aspect is the algorithm, the algorithm should be disclosed; however, if the inventive aspect lies in the use of data rather than the algorithm, then the algorithm may not need to be disclosed.
The proliferation of AI tools in various industries may also redefine the baseline knowledge and skills of a “person having ordinary skill in the art” (PHOSITA). In effect, a PHOSITA is presumed to have access to the typical knowledge and technology within their industry, which will now include AI tools. Consequently, the inventive step threshold may rise, making it more challenging for innovations to meet the non-obviousness criteria. Furthermore, as AI technologies become more integrated into standard practices, the volume of prior art is expected to grow.
In December 2023, the People’s Republic of China (PRC) National Intellectual Property Administration (CNIPA) promulgated an update to the CNIPA Examination Guidelines (Office Order No. 78), effective January 2024, which introduced a wide range of revisions to PRC patent practice (see [CNIPA 2023] for details). Summaries of the updates are provided by [Bénetière 2024], [Rowe 2024], [Su 2024], [R. Wang 2024], and [S. Wang 2024]. In particular, the subject matter eligibility and inventiveness assessments for computer-implemented inventions with respect to AI technology were clarified (see [Meng 2024] and [Zhuo 2024]).
First, two new claim categories of computer-related subject matter were introduced: (a) computer-readable storage media, and (b) computer program products. Following this amendment, eligible subject matter in China now includes “a method,” “a computer device/apparatus/system,” “a computer-readable storage medium,” and “a computer program product.” This revision continues the trend of the CNIPA gradually expanding the claim categories for computer-related inventions since the PRC first promulgated patent law in 1985.
Furthermore, two new AI and big data algorithm scenarios were introduced to guide practitioners. One scenario states that AI and big data algorithms may be eligible subject matter if: (a) the algorithm has a specific technical relationship with the internal structure of the computer system, and (b) it solves a technical problem by improving the internal performance of the computer system in conformity with the laws of nature. The internal performance mentioned here includes not only the hardware structure, but also data storage and data scheduling of the computer system. Another subject matter-eligible scenario requires the solution of the claim to be directed to the processing of big data in a specific application field by: (a) using internal relationships in the data mining that conform to the laws of nature, and (b) solving the technical problem of how to improve the reliability or accuracy of big data analysis in that specific application field, as well as achieving corresponding effects.
With regard to the inventive step requirement, two directives were provided. First, if the algorithm, which has a specific technical relationship with the internal structure of the computer system, achieves an improvement of the internal performance of the computer system, then the algorithm shall be considered as contributing to inventiveness. Next, if a solution produces an improvement in user experience, and such improvement is brought about by technical features, or by technical features and algorithm features, or by business rules and method features that mutually support and interact with one another, then such an improvement in user experience shall be taken into consideration when assessing inventiveness. Some features of the user experience that can be considered include operation comfort, sensual pleasure, and shorter waiting times. Note that all these features reflect an objective technical effect rather than a subjective preference. If an invention objectively improves user experience, it would be prudent to further expound on this feature in the specification and drawings.
The European Patent Convention (EPC) created the European Patent Office (EPO) in 1973. Today, 39 states, including all 27 European Union (EU) members, are bound by the EPC. In January 2024, the EPO announced an update to the Guidelines for Examination in the EPO, effective March 2024, which included several types of revisions to European patent practice (see [EPO 2024] for details). Summaries of the updates are provided by [Brooks 2024], [Haywood 2024], [Hughes 2024], [Rodriguez 2024], and [Rose 2024]. In particular, in addition to emphasizing that an inventor must be a “natural person,” the new guidelines provided further clarifications regarding the sufficiency of disclosure requirements for AI inventions (see [Antoine 2024], [Cupitt 2024], [Gilliam-Scott 2024], [Winlow 2024], and [Woodhouse 2024]). Although a chapter on “Artificial Intelligence and Machine Learning” had been added in October 2018, the EPO Guidelines did not treat AI any differently from other technologies with respect to disclosure requirements prior to the March 2024 update.
A new passage under the heading “Artificial Intelligence and Machine Learning” has been added to section G-II, 3.3.1: “The technical effect that a machine learning algorithm achieves may be readily apparent or established by explanations, mathematical proof, experimental data or the like. While mere allegations are not enough, comprehensive proof is not required, either. If the technical effect is dependent on particular characteristics of the training dataset used, those characteristics that are required to reproduce the technical effect must be disclosed unless the skilled person can determine them without undue burden using common general knowledge. However, in general, there is no need to disclose the specific training dataset itself (see also F-III, 3 and G-VII, 5.2).”
The phrase “particular characteristics of the training dataset used” is clarified in another new passage entitled “Insufficient Disclosure” under section F-III, 3: “Another example can be found in the field of artificial intelligence if the mathematical methods and the training datasets are disclosed in insufficient detail to reproduce the technical effect over the whole range claimed. Such a lack of detail may result in a disclosure that is more like an invitation to a research programme (see also G-II, 3.3.1).”
The practical implication of these guidelines is that applicants and their legal representatives must ensure that patent applications for AI inventions contain ample technical information to enable replication of the invention. The level of detail required varies depending on the nature of the invention. For instance, if the invention employs well-known machine learning (ML) algorithms, then referencing those algorithms by name and explaining their usage may be sufficient. However, if a novel ML technique is central to the invention, the application must offer a comprehensive description of its implementation, including details such as neural network structures, topology, activation functions, end conditions, and learning mechanisms.
Regarding training data, the guidelines specify that it is unnecessary to disclose the actual data itself. Instead, characteristics of the training data that affect the technical effect of an ML algorithm must be disclosed. This requires explaining how to obtain or generate suitable training data so that a person having ordinary skill in the art (PHOSITA) can reproduce the invention without undue difficulty. In some cases, an alternative to disclosing the training data itself might be disclosing learned coefficients and/or weights of a model. However, practitioners should await ongoing developments on these issues, as best practices will emerge only after further AI inventions proceed through examination and opposition procedures.
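As an illustration of the level of detail contemplated here, consider a hypothetical disclosure of a small feed-forward classifier. Everything in the sketch below (the 4–8–3 topology, the ReLU and softmax activations, and the idea of publishing learned weights in lieu of training data) is an illustrative assumption, not drawn from any cited guideline or case; it merely shows the kind of concrete architectural information an application might recite:

```python
import numpy as np

# Hypothetical disclosure-level detail for a small feed-forward classifier:
# topology 4 -> 8 -> 3, ReLU hidden activation, softmax output layer.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Learned coefficients/weights could be disclosed in lieu of the training
# data itself (an alternative the guidelines leave open); random values
# stand in for them here.
W1 = rng.normal(scale=0.1, size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 3))
b2 = np.zeros(3)

def forward(x):
    """Forward pass of the disclosed architecture."""
    h = relu(x @ W1 + b1)
    return softmax(h @ W2 + b2)

# The output is a probability distribution over the three classes.
probs = forward(np.ones((1, 4)))
```

A real application would, of course, describe such details in prose and drawings rather than code, but the principle is the same: a PHOSITA reading the disclosure should be able to reconstruct the model and reproduce the claimed technical effect.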
While these guidelines do not signify a fundamental shift in the EPO’s approach to such issues, they extend reasoning from previous cases where patent applications were rejected due to insufficient disclosure. These updates are consistent with the findings of recent Technical Board of Appeal (TBA) rulings, where a lack of information on details of AI training data could lead to an invention being considered insufficiently disclosed, and the new guidelines may direct examiners to scrutinize the technical detail in patent applications more closely. Hence, attorneys drafting applications for submission to the EPO should be meticulous in describing the algorithms and training data used in the invention.
As discussed in [Fox 2024], [Harkness 2023], [Lawrence 2024], [McNamee 2023], [San Martin 2024], [Trigg 2023], [Ward 2023], and [Winlow 2024], there is also corresponding guidance in the United Kingdom (UK). Despite its 2020 exit from the European Union (EU), a pan-European entity separate from the EPO, the UK remains a party to the European Patent Convention (EPC), which governs the EPO. In particular, in November 2023, the High Court in Emotional Perception AI Ltd v Comptroller-General of Patents, Designs and Trade Marks ruled that artificial neural networks (ANNs), which aim to emulate how humans semantically perceive information, are indeed patentable subject matter. By considering both hardware and software implementations, the court concluded that neither involves a computer program, despite acknowledging that both implementations involve a computer. This is the first time in the UK that the computer program patentability exclusion has been considered in the context of AI.
In response to this decision, the UK Intellectual Property Office (UKIPO) revised its September 2022 Guidelines for Examining Patent Applications Relating to Artificial Intelligence (AI) so that examiners are now directed to refrain from objecting to inventions involving ANNs under the computer program patentability exclusion. The guidelines were further updated in May 2024 (see [UKIPO 2024]) to include a new section (paragraphs 32–39) on ANNs specifically. First, the update clarifies that “[t]o qualify as an invention involving an ANN, the invention must either claim an ANN itself or include claim limitations to training or using an ANN.” Although the update affirms the subject matter eligibility of ANNs, it warns that other considerations may exclude an ANN invention from patentability: “For example, devoid of any application, an ANN is an abstract mathematical model, so the mathematical method exclusion may apply in appropriate cases. Further, following Merrill Lynch, a computer implemented invention (such as an ANN) may be rejected as a method of doing business as such. If an invention involving an ANN performs nothing more than a method of doing business, then it is excluded under the business method exclusion.” The new section concludes by delineating the applicability of the ANN guidelines: “If a claimed invention is not directed to an ANN, its training, or its use, then the computer program exclusion must be considered. For example, if a claim merely refers to machine learning or training a model, then it engages the computer program exclusion. The allowability of such claims should be determined by asking whether the invention makes a relevant technical contribution.” Despite the updated guidelines, the UKIPO’s approach to the patentability of AI inventions still diverges from EPO practice, particularly with regard to computer-implemented inventions and technical effects considerations in the assessment of inventive step.
The Japan Patent Office (JPO) maintains a list of case examples on AI-related technologies. After five case examples were first published in March 2017, ten additional case examples were published in January 2019, and yet another ten case examples were published in March 2024 (see [JPO 2024] for details). The updated set of case examples and their accompanying explanatory notes provide insights into how subject matter eligibility, disclosure requirements, and inventive step evaluations are developing in Japan (see [Phelan 2020] and [Rogitz 2019]).
Regarding subject matter eligibility, the case examples distinguish between training data, data structure, and trained models. Generally, cases where training data itself is a “mere presentation of information” or where the trained models are not a “program” are ineligible for a patent. In contrast, cases where the method for generating training data satisfies the requirement that the software and hardware work together are indeed eligible. Furthermore, cases where the data structure used is “equivalent to a program” or where the trained models are a “program” are eligible, so long as these claims satisfy the aforementioned software-hardware requirements.
The updated case examples are accompanied by detailed tables that compare and contrast aspects of satisfying the various patentability requirements. For the enablement and support requirements in particular, much of the analysis focuses on correlations among various data. When drafting a Japanese patent application involving AI technology, it is prudent to address the relationships between the various data types used in machine learning. The application may satisfy the description requirements by demonstrating either: (a) a discernible correlation among the data types, as explicitly stated in the description, or (b) an inferred correlation based on general technical knowledge. However, it is not mandatory to detail a specific correlation among the data types. The clarity of AI-related claims is maintained if the application, considering the description, drawings, and common knowledge at the time of filing, clearly identifies the claimed subject matter as a “program,” regardless of the terminology used (e.g., “module,” “library,” “neural network,” “support vector machine”); such cases do not violate the clarity requirements. Conversely, if the claims use terms such as “trained model” without any reference to a “computer” and it is not apparent that the claims refer to a computer, then the claims may lack clarity.
Furthermore, in cases where AI is applied to a technical field, it is insufficient to merely state the application of AI, such as using a neural network to detect emotions in photographs. Instead, the disclosure of the patent application should include specific correlations that the AI applies to function effectively. For instance, rather than simply stating that a neural network is used, one might also describe example correlations like “a smile correlates with joy” or “wide eyes correlate with astonishment.” It is possible to rely on common general technical knowledge to establish these correlations, but this approach may pose limitations and risks. For computer products that presumably include an AI function, it is prudent to provide actual test results or other forms of validation for the AI model, unless the performance of a product that has been physically created by the AI can replace such an evaluation. For example, if an AI is used to determine the composition of a new material that achieves specific chemical requirements, one may include test results from the AI model’s operation or evidence that the model’s output has been verified for accuracy.
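To make the emotion-detection illustration concrete, the disclosed correlations amount to a mapping from observable facial features to emotional states. The following minimal sketch is purely hypothetical (the feature-emotion pairs are the illustrative examples mentioned above, plus one invented addition); it shows the kind of explicit correlation a specification might spell out rather than merely asserting that a neural network is used:

```python
# Hypothetical disclosed correlations between facial features and emotions,
# of the kind the JPO case examples expect to accompany a bare statement
# that "a neural network is used".
CORRELATIONS = {
    "smile": "joy",                # a smile correlates with joy
    "wide eyes": "astonishment",   # wide eyes correlate with astonishment
    "furrowed brow": "anger",      # illustrative addition
}

def infer_emotion(features):
    """Return the emotions suggested by the observed facial features."""
    return [CORRELATIONS[f] for f in features if f in CORRELATIONS]

infer_emotion(["smile", "wide eyes"])
# → ["joy", "astonishment"]
```

In a real neural network these correlations are learned rather than hand-coded, but the disclosure point is the same: the specification should identify which input features are expected to correlate with which outputs, so that the claimed function is more than a black box.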
With respect to inventive step evaluation, the 2019 case examples focused on three aspects: the mere application of AI, the modification of training data, and the pre-processing of training data. The most recent 2024 case examples expand the scope to include the application of generative AI, the changes in AI estimation methods, and the systematization of human work. In general, straightforward adaptations of the state of the art are deemed obvious to a person having ordinary skill in the art (PHOSITA), and the compiled case examples provide guidance regarding how the JPO considers specific areas of AI.
In January 2021, the Korean Intellectual Property Office (KIPO) released Examination Guidelines for AI-Related Inventions. These guidelines were updated in March 2022 (see [KIPO 2023] for details). The original and updated guidelines address subject matter eligibility, claim formats, enablement, and inventive step (see [Ahn 2021], [Byun 2021], [Lee 2021], and [Son 2022]).
Like in many other jurisdictions, the KIPO treats AI inventions as computer-implemented inventions for the purposes of subject matter eligibility. This requires: (a) the execution of information processing through a combination of software and hardware, and (b) claims that exclude human mental activities or offline activities. Notably, the KIPO assesses eligibility without considering prior art. While method claims in Korea do not have distinctive requirements, product claims are subject to specific formats acceptable to the KIPO, such as “an apparatus (device),” “a computer-readable medium with a program recorded therein,” “a computer-readable medium with a data structure recorded therein,” “a computer program or application stored in a computer-readable medium,” and “a computer program for implementing a training model stored in a computer-readable medium.”
The updated guidelines provide clarity on enablement for AI-related patents. In particular, detailed disclosure of the specific means for implementing the AI technology is mandatory unless a person having ordinary skill in the art (PHOSITA) would find it obvious. This includes training data, data pre-processing steps, learning models, and loss functions. Furthermore, the correlation between input and output data in the learning model is to be disclosed unless it is apparent to a PHOSITA. If a claim employs a conventional machine learning method or algorithm, a detailed description of the conventional component is unnecessary.
For AI-related patent applications, it is imperative to establish a clear correlation between the input and output data of a trained model. This correlation is adequately described when the following elements are detailed: (a) the learning data utilized for training the model is explicitly identified, (b) a correlation between the characteristics of the learning data and the technical problems addressed by the innovation is established, (c) the learning model or method intended for training with the learning data is concretely described, and (d) a trained model capable of resolving the technical issues of the innovation is produced using the specified learning data and method.
In the event of a preliminary rejection due to an inadequately explained correlation in learning data, the applicant may contend that a PHOSITA could deduce the omitted correlation from the common technical knowledge available at the time of the application filing. If such a contention is not viable, the applicant may consider removing any claims that are specifically tied to the questioned learning data. The KIPO does not permit the introduction of additional explanations or working examples regarding the correlation in response to the preliminary rejection, as this would constitute the addition of new matter.
If data pre-processing is a distinctive feature of the innovation, but either the application fails to detail the steps and functions of data pre-processing, or the application does not clearly describe the correlation between the raw data and the learning data, or a PHOSITA would struggle to infer this relationship, then the application does not meet the enablement criterion.
Finally, for AI innovations that utilize reinforcement learning, if the application does not specifically outline the method of reinforcement learning, including the interactions among the agent, environment, state, action, and reward, or if a PHOSITA would find it challenging to deduce the method of reinforcement learning, then the application does not fulfill the enablement criterion.
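As a rough illustration of what outlining the method of reinforcement learning might entail, the sketch below implements a textbook tabular Q-learning loop on a hypothetical toy environment (a five-cell corridor). All specifics, including the environment, the epsilon-greedy selection, and the learning parameters, are illustrative assumptions; the point is that each element KIPO names is made explicit: the agent (the Q-table and action selection), the environment (the step function), and the state, action, and reward that flow between them:

```python
import random

# Hypothetical toy environment: a five-cell corridor. The agent starts at
# cell 0; reaching cell 4 yields a reward of 1.0 and ends the episode.
N_STATES = 5
ACTIONS = (1, -1)                    # actions: step right or left
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
random.seed(0)

# Agent: a Q-table of value estimates over (state, action) pairs.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: return (next state, reward)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection by the agent
        # (ties broken toward the first action, i.e., stepping right).
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Q-learning update rule: the disclosed "method of reinforcement
        # learning" connecting state, action, and reward.
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# The greedy policy learned from Q moves right from every interior cell.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
```

A specification would describe these interactions in prose and figures rather than code, but the disclosure target is the same: each of the named elements, and the update rule connecting them, should be identifiable from the application.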
The KIPO evaluates inventive step by the difficulty in developing the technical features of an AI technology and by its remarkable or unexpected effects over the prior art. In the updated examination guidelines, AI technologies are categorized into: (a) AI training data, (b) AI modeling, and (c) AI applications. The inventiveness of each category is assessed according to its own criteria. For AI training data, inventiveness exists if the claim describes in detail how raw data is processed and how an unexpectedly advantageous effect is achieved from the features of the training data. For an AI modeling invention, inventiveness exists if the claim specifically defines the configuration of a learning model, and there is an unexpectedly advantageous effect resulting from such configuration. For example, simply replacing a recurrent neural network (RNN) in a prior art invention with a convolutional neural network (CNN) would not satisfy the inventive step requirement unless the change shows a superior effect. Finally, when an AI invention is applied in another field, inventiveness exists if the invention resolves a long-term problem or technical difficulty in the specific field or if unexpectedly advantageous effects are found from its application to that specific field.
Pursuant to President Joe Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence issued in October 2023, the U.S. Patent & Trademark Office (USPTO) has taken action by issuing three directives: (1) Inventorship Guidance for AI-Assisted Inventions (February 2024); (2) Guidance on the Use of Artificial Intelligence-Based Tools in Practice (April 2024); and (3) a Request for Comments Regarding the Impact of the Proliferation of Artificial Intelligence on Prior Art, the Knowledge of a Person Having Ordinary Skill in the Art, and Determinations of Patentability Made in View of the Foregoing (April 2024). Each of these developments is discussed below.
In February 2024, the USPTO issued guidance that clarifies the role that AI plays in the invention process (see [USPTO 2024a] for details). In addition to addressing human/AI inventorship, the guidance also discusses the duties owed to the USPTO (the duty of disclosure and the duty of reasonable inquiry), the naming of inventors, the requirements for information, the inventor’s oath or declaration, applicant and ownership concerns, and the benefit of priority claims to prior-filed applications (see [De Vellis 2024], [Hanson 2024], [Lamaute 2024], [Lin 2024a], [Masutani 2024], and [S&C 2024]).
The guidance emphasizes that a central purpose of the patent system is to incentivize innovation: “[w]hile AI-assisted inventions are not categorically unpatentable, the inventorship analysis should focus on human contributions, as patents function to incentivize and reward human ingenuity. Patent protection may be sought for inventions for which a natural person provided a significant contribution to the invention.” Accordingly, AI-assisted inventions are not automatically precluded from patent protection, but care must be taken to analyze the interaction between human conception and AI tools, particularly on a claim-by-claim basis. The guidance reiterates the ruling from Thaler v. Vidal (CAFC 2022): “[o]nly a natural person can be an inventor, so AI cannot be.” Thus, AI cannot be listed as the sole inventor or even a co-inventor of a U.S. patent, though the use of AI in the conception or reduction to practice of an invention does not negate human inventorship. The analysis then centers on the question: Did a natural person significantly contribute to the invention?
To answer this question, the guidance draws inspiration from Pannu v. Iolab (CAFC 1998), a case directed to the determination of proper joint inventorship among a group of human contributors. The “Pannu factors” established by the court state that an inventor must: (a) contribute in some significant manner to the conception or reduction to practice of the invention; (b) make a contribution that is not insignificant in quality, when that contribution is measured against the dimension of the full invention; and (c) do more than merely explain well-known concepts or the current state of the art. The guidance goes on to explain the “conception or reduction to practice” aspect of the first factor, citing Fina Oil & Chemical Co. v. Ewen (CAFC 1997) and Hitzeman v. Rutter (CAFC 2001). Conception remains the cornerstone of inventorship, and mere reduction to practice of an invention conceived by another is insufficient to constitute inventorship, regardless of the significance of that reductive work. “The conception or reduction to practice of the invention” in the first Pannu prong concerns the “doctrine of simultaneous conception and reduction to practice,” which is invoked in cases where an inventor may only be able to establish a conception by pointing to a reduction to practice through a successful experiment. This principle is most apparent in unpredictable arts (such as biochemistry), where there may not be a reasonable expectation that the claimed invention could be made in accordance with theory, in which case conception does not occur until the reduction to practice occurs. This clarifies that the reference to “reduction to practice” in the first Pannu factor merely accounts for such scenarios, and it does not create a new rule that reduction to practice alone is sufficient for invention in the absence of conception.
The guidance also notes that interference proceedings, which involved the determination of the date of conception for competing parties under the “first to invent” system in place before the America Invents Act (AIA) was enacted, generated a large body of case law that addressed the various issues surrounding conception and inventorship. For example, there must be a contemporaneous recognition and appreciation of the invention for there to be conception, and, as articulated by the court in Invitrogen Corp. v. Clontech Labs., Inc. (CAFC 2005), conception does not occur when there is merely an “unrecognized accidental creation.”
When the Pannu factors are applied to AI-assisted inventions, each claim of a patent must have a human inventor, but there is no requirement that each named human inventor on the patent contribute to each and every claim. Precise answers to questions surrounding the Pannu factors for AI inventions “may be difficult to ascertain, and there is no bright-line test,” but non-exhaustive guiding principles for the analysis are provided:
First, AI assistance is permitted: “A natural person’s use of an AI system in creating an AI-assisted invention does not negate the person’s contributions as an inventor. The natural person can be listed as the inventor or joint inventor if the natural person contributes significantly to the AI-assisted invention.”
Second, recognizing a problem is insufficient: “Merely recognizing a problem or having a general goal or research plan to pursue does not rise to the level of conception. A natural person who only presents a problem to an AI system may not be a proper inventor or joint inventor of an invention identified from the output of the AI system.” The guidance notes: “However, a significant contribution could be shown by the way the person constructs the prompt in view of a specific problem to elicit a particular solution from the AI system.”
Third, reduction to practice is insufficient: “Reducing an invention to practice alone is not a significant contribution that rises to the level of inventorship. Therefore, a natural person who merely recognizes and appreciates the output of an AI system as an invention, particularly when the properties and utility of the output are apparent to those of ordinary skill, is not necessarily an inventor… [h]owever, a person who takes the output of an AI system and makes a significant contribution to the output to create an invention may be a proper inventor.” The guidance continues: “Alternatively, in certain situations, a person who conducts a successful experiment using the AI system’s output could demonstrate that the person provided a significant contribution to the invention even if that person is unable to establish conception until the invention has been reduced to practice.”
Fourth, developing an essential building block may be sufficient: “A natural person who develops an essential building block from which the claimed invention is derived may be considered to have provided a significant contribution to the conception of the claimed invention even though the person was not present for or a participant in each activity that led to the conception of the claimed invention.” What this principle means for AI inventions is instructive: “In some situations, the natural person(s) who designs, builds, or trains an AI system in view of a specific problem to elicit a particular solution could be an inventor, where the designing, building, or training of the AI system is a significant contribution to the invention created with the AI system.”
Finally, mere ownership or oversight of an AI system is insufficient to claim inventorship: “Maintaining ‘intellectual domination’ over an AI system does not, on its own, make a person an inventor of any inventions created through the use of the AI system… [t]herefore, a person simply owning or overseeing an AI system that is used in the creation of an invention, without providing a significant contribution to the conception of the invention, does not make that person an inventor.”
To illustrate the application of these principles, the USPTO provides two highly detailed examples along with scenarios to analyze. The first is a mechanical engineering example, where engineers design a transaxle for a remote-control car using an AI system that receives natural language prompts as input and generates text, images, and other media as output. Several scenarios are then posed, where the engineers and the AI contribute to the conception of the invention in varying respects. For this example, three scenarios do not have proper human inventorship: (1) mere application of the output of an AI, (2) mere reduction of an AI output to practice, and (3) mere oversight of the creation and training of an AI. In contrast, two scenarios where humans are proper inventors are provided: (4) substantial modification of an AI output, and (5) further modification of a design using AI. The next example involves the development of a therapeutic compound, where a professor is researching drugs to treat prostate cancer. Seeking to identify lead drug compounds that selectively target a specific protein, she consults with an expert in the field of AI and instructs him to use a deep neural network (DNN)-based prediction model to find viable candidates among a large dataset of compounds. A data scientist trains the prediction model on sets of compounds and targets from previous experiments. The professor selects the output compounds that indicate potential for high efficacy. Various scenarios surrounding this example are explored, and the inventive contributions from each party are then analyzed in accordance with the aforementioned principles.
Interestingly, the guidance even considers the possibility of foreign jurisdictions recognizing AI inventorship and addresses how this would impact a U.S. patent application that claims priority to a foreign application in which AI is recognized as a proper inventor. If AI is listed as a joint inventor in the priority foreign application, then the AI must be removed before filing in the U.S. Logically, if the AI is the sole inventor, then the application cannot be filed in the U.S. at all because there is no human inventor.
Strikingly, the guidance did not state any requirement or duty to disclose to the USPTO that the inventor used AI as part of the invention process. This contrasts with the United States Copyright Office’s policy on AI applications, which does require the disclosure of AI tools used in the generation of creative works and a further explanation of the human author’s contribution. However, in view of the Guidance on the Use of Artificial Intelligence-Based Tools in Practice that the USPTO issued just two months later, inventors should note that they may be compelled to disclose known facts under their duty of candor and good faith where there is a need to assess whether the contributions by natural persons qualify as inventorship (see [USPTO 2024b]).
The USPTO recognizes that the “inventorship guidance on AI-assisted inventions is an iterative process and may continue with periodic supplements as AI technology continues to advance and/or as judicial precedent evolves,” so practitioners and patent applicants are advised to pay attention to ongoing developments in this field.
In April 2024, the USPTO issued guidance that outlines the ethics and rules regarding the use of AI in patent practice (see [USPTO 2024b] for details). With the growing role that AI is playing in the legal profession, the USPTO is addressing the need to ensure that the use of AI in practice before the USPTO adheres to the highest standards for fulfilling ethical obligations, generating accurate information, and representing applicants’ interests. The guidance reviews the USPTO’s existing rules and policies regarding the duty of candor and good faith, signature requirements and corresponding certifications, confidentiality, foreign filing licenses, export regulations, electronic systems’ policies, and duties owed to clients. The guidance then applies the existing rules to the use of AI, particularly generative AI, in practice before the USPTO, including the use of computer tools for document drafting, filing documents with the USPTO, accessing IT systems, confidentiality, national security considerations, fraud, and intentional misconduct (see [Crouch 2024a], [Das 2024], [Dever 2024], [Doop 2024], [Hanks 2024], [Rich 2024], [Shieh-Newton 2024], [Smith 2024], and [Wolf 2024]).
Regarding the use of computer tools for document drafting, the guidance notes the increasingly sophisticated capabilities of these tools and states that, although there is currently no restriction on their use for preparing documents to be submitted to the USPTO and no general mandate to disclose their use to the USPTO, practitioners must still adhere to their obligations to the USPTO and to clients when utilizing these tools. For instance, presenting any paper to the USPTO implies a certification that the statements within are true to the presenter’s knowledge and that a reasonable inquiry has been conducted. To fulfill these certifications, the presenter of the document is obligated to thoroughly review and verify its contents. Sole reliance on an AI tool’s accuracy does not constitute a reasonable inquiry. Given the potential for generative AI systems to produce errors or fabricate information, it is the duty of the presenter to ensure the veracity of the paper’s statements.
Moreover, individuals involved in USPTO matters may be compelled to disclose known facts under their duty of candor and good faith. For example, patent claims require a substantial contribution from a human inventor (see [USPTO 2024a] regarding the February 2024 Inventorship Guidance for AI-Assisted Inventions). Therefore, if an AI system has been used to draft claims without such contribution, this fact must be disclosed to the USPTO. Furthermore, when AI contributes to drafting specifications or claims, it is imperative to assess whether the contributions by natural persons qualify as inventorship. Likewise, errors or omissions identified in a document drafted with AI assistance necessitate correction before submission. Submitting a paper with inaccuracies or material omissions could lead to penalties, including the striking of the paper or termination of the proceedings. While there is no general duty to inform the USPTO of AI tool usage in document drafting, practitioners are expected to competently represent their clients, which includes having the requisite knowledge to do so, rather than relying on AI to fill in any gaps. Similarly, while notifying the USPTO of AI tool usage in the inventive process is not mandatory, the duty of disclosure remains paramount. If the use of an AI tool is material to patentability, it must be disclosed. This includes situations where an AI system’s assistance in drafting a patent application introduces alternative embodiments not conceived by the inventor. The duty of disclosure extends to ensuring that all material information is submitted to the USPTO, and it cannot be delegated to an AI tool or another person.
The guidance affirms that AI tools may also assist with the procedural aspects of filing documents with the USPTO, such as autocompleting forms and uploading documents. However, users of these tools are responsible for ensuring compliance with USPTO rules and policies. All correspondence filed with the USPTO requires a natural person’s signature, and this task cannot be delegated to an AI tool or another entity. The signer is accountable for ensuring submissions adhere to USPTO rules and policies. While AI tools may access and interact with USPTO information technology (IT) systems generally, the guidance emphasizes that only authorized users, such as applicants, registrants, practitioners, or sponsored support staff, may file documents or access information through the USPTO electronic filing system (EFS) in particular. Users are responsible for ensuring that the tools do not exceed their authorized levels of access.
The use of AI in practice before the USPTO may inadvertently disclose sensitive or confidential information. Practitioners are urged to be vigilant in maintaining client confidentiality when using AI tools or third-party services. Additionally, national security, export control, and foreign filing license issues may arise if AI tools utilize servers outside the United States or if data breaches occur.
In April 2024, the USPTO published a Request for Comments (RFC) that invites public input regarding the impact of generative artificial intelligence (GenAI) on prior art and its effect on the person having ordinary skill in the art (PHOSITA) standard as part of its ongoing engagement with stakeholders to understand the impact of AI on patent policy (see [USPTO 2024c] for details). In particular, the massive growth in published content generated by AI potentially produces prior art disclosures that could undermine the patentability of later human-conceived innovations. Additionally, the integration of AI into the standard toolkit of inventors may elevate the threshold for an innovation to be deemed non-obvious. Stakeholders have diverse opinions on whether AI-generated prior art presents new challenges for patentability assessments as well as on the interplay between AI and the determination of obviousness and enablement (see [Crouch 2024b] and [Lin 2024b]).
A claimed invention may face rejection under 35 U.S.C. § 102 if it is not novel. There are several ways in which a § 102 bar may be triggered, but the relevant trigger for this discussion concerns AI-generated disclosures that qualify as printed publications. Although such printed publications must be sufficiently enabling to serve as prior art under § 102, non-enabling printed publications may be prior art under § 103, which would render the claimed invention obvious. To be considered a “printed publication,” a prior art reference must be publicly accessible, meaning that it is available to such an extent that PHOSITAs, exercising reasonable diligence, can locate the reference (see MPEP 2128 and In re Wyer (CCPA 1981)). The use of AI to generate a large volume of disclosures, potentially without human involvement, supervision, or review, raises questions about the assumption that a PHOSITA would be aware of such art (see MPEP 2141.03). Furthermore, the RFC questions whether AI-generated disclosures should enjoy the same rebuttable presumption of operability and enablement as their human-generated counterparts.
The PHOSITA standard appears throughout patent law. Most notably, a patent specification must contain sufficient information to enable a PHOSITA to make and use the invention without undue experimentation (the “enablement” requirement), and a claimed invention must be non-obvious to a PHOSITA (the “non-obviousness” or “inventive step” requirement). Therefore, whether and how AI plays a role in determining the skill level of and knowledge available to a PHOSITA would have far-reaching effects on patent law and policy. For example, if the skill level of a PHOSITA rises because a typical scientist now has access to AI, then non-obviousness would present a higher bar to patentability, but enablement would be an easier hurdle to clear.
The RFC has posed fifteen thought-provoking questions targeting the aforementioned issues. The major themes include: Can AI-generated content be recognized as “prior art” under current U.S. patent law, or is the generation of prior art limited to human creators? Is there a rationale for treating AI-generated disclosures differently from those generated without AI, especially considering the potential for AI to produce inaccurate information (e.g., hallucinations), and how might this influence their status as prior art? At what point could the volume of AI-generated prior art be sufficient to create an undue barrier to the patentability of inventions? What if this volume is sufficient to detract from the public accessibility of prior art (i.e., a PHOSITA exercising reasonable diligence may not be able to locate relevant disclosures)? In what ways does the accessibility of AI tools alter the perceived skill level of a PHOSITA, particularly as AI becomes more widespread? Will it really be more difficult for an invention to be considered “non-obvious”? Given that current law stipulates that prior art be “analogous” to the claimed innovation to render an idea obvious, what are the implications if an AI can identify connections across disparate, interdisciplinary (“non-analogous”) fields that would be beyond the reasonable capabilities of human inventors? Should the criterion of “analogous art” be abandoned?
Some commentators, most notably Dennis Crouch in [Crouch 2024b], are not troubled by AI-generated prior art per se, as long as the work advances the art. Instead, they are concerned about the abundance of unread AI-generated data that would never be seen or reviewed by humans but may be used to further train other AI, the potentially fictional nature of AI-generated content, and the need to reconsider the “motivation to combine” aspect of obviousness. Crouch points out parallels between the disclosure and non-obviousness requirements for a patent to be valid: if it is improper to claim a genus based upon a disclosure that includes a large number of inoperative species because fully practicing the claim would require too much additional research by another party, one could similarly argue that sifting through and synthesizing a flood of AI-generated disclosures to arrive at a useful invention should be considered non-obvious if such work is burdensome. Crouch goes on to mention other scholars and practitioners who contemplated these issues even before the RFC was published. For example, in their article Patents in an Era of Infinite Monkeys and Artificial Intelligence [Hattenbach 2015], Ben Hattenbach and Joshua Glucoft suggest a balancing test involving both quality and accessibility, and advise against extending the presumption of enablement to all AI-generated disclosures. They warn that companies are already harnessing brute-force computing power to compose volumes of patent claims covering potentially novel inventions and also to generate defensive publications as prior art designed to prevent others from obtaining patent rights. Lucas Yordy argues in The Library of Babel for Prior Art: Using Artificial Intelligence to Mass Produce Prior Art in Patent Law [Yordy 2021] that AI-generated disclosures may decrease the patent incentive to research and disclose ideas unless fundamental changes are made to patent law.
Yordy proposes a “conception” requirement for disclosures to count as prior art to “ensure that AI-generated disclosures have actually contributed to public knowledge and have undergone some evaluation before they can render an invention unpatentable.”
Following a review of public feedback on these issues, the USPTO anticipates issuing official guidelines to further delineate the fast-evolving relationship between AI and patent law. Given the pressing nature of the questions posed, practitioners and inventors alike eagerly await the first set of rules surrounding AI-generated prior art and the impact of AI on the PHOSITA standard.