‘AI-Generated Inventions’: Time to Get the Record Straight?

This article attempts to clarify the notion of an ‘AI-generated’ invention, an issue which has triggered an intense debate on the future of patent law and policy. While there is a general consensus that such inventions are incompatible with the concept of human inventorship, it remains largely unclear to what extent concerns regarding ‘non-human’ ingenuity can be justified. Most uncertain is how AI ‘autonomously generates’ inventions, and in what way ‘AI-generated’ inventions differ from inventions developed with the aid of AI. Drawing on an extensive literature review, this article depicts AI techniques as methods of computational problem solving. It emphasises that such methods should not be equated with a computer’s ‘cognitive autonomy’. Further, it clarifies that the types of AI that have been most debated in the patent law literature – artificial neural networks and evolutionary algorithms – essentially require detailed instructions that determine how the relation between inputs and outputs is derived through computation. Accordingly, it is argued that, as long as computers rely on instructions defined by a human as to how to solve a problem, the separation between human and non-human (algorithmic) ingenuity is, in itself, artificial. Ultimately, the article calls for a broader technical inquiry that would elucidate the relevance of the currently debated normative concerns over ‘non-human inventorship’ against the background of the technological state of the art.


I. Introduction
The debate surrounding 'AI-generated' inventions continues to build momentum, reaching the agenda of policymakers at the international level 2 as well as prompting numerous scholarly inquiries. 3 On 21 December 2019, the European Patent Office (EPO) announced its refusal of two patent applications designating an AI system, DABUS, as the inventor, on the formal ground of failure to fulfil the requirement of the European Patent Convention that 'an inventor designated in the application has to be a human being, not a machine'. 4 Shortly before this, the World Intellectual Property Organisation (WIPO) issued a call for comments raising, among other things, the question of how patent law and policy should react to inventions 'autonomously generated by AI'. 5 That initiative was preceded by a request for comments by the U.S. Patent and Trademark Office (USPTO) addressing similar issues. 6 Concerns have been raised that, under the current patent system, third parties can indicate themselves as inventors of technologies generated by intelligent systems, and that the grant of such rights would impose an unjustified welfare loss on society. 7 Proposals have been made as to how the patent system should be adjusted in the wake of artificial ingenuity, if not 'abolished altogether'. 8 Yet, it remains largely unclear: What do we mean by AI-generated inventions? How do we define computer autonomy during the inventive process? The amount of legal writing highlighting the incompatibility 9 of the existing patent system with 'artificial inventions' stands in stark contrast to the apparent absence of technical inquiries into the very source of those concerns – the phenomenon of the 'autonomous generation of inventions' by computers.
10 It is remarkable that, when raising the fundamental question of how patent law needs to be adjusted with the advent of 'artificial inventions', policymakers neither provide an operative technical definition of such inventions, nor clarify how they differ from AI-aided inventions, nor review the technological state of the art. 11 Patent law literature on this topic refers to a handful of examples 12 without providing or referencing a technical analysis that could explain how the 'intelligent systems' were designed, and how the overall computational process leading to an invention was set up. Rather, the existence of 'artificial inventions' is taken as a premise 13 for legal and policy discussions.
AI is often portrayed as yielding inventions with a wave of a magic wand – or a magic click, 14 or simply by asking 15 – autonomously from humans. However, researchers in the field of automatic programming acknowledge that the aspiration to make computers perform tasks by giving orders in a high-level language, without specifying how they should be accomplished, is 'unrealistic, at least in the foreseeable future'. 16 Moreover, experts in AI and robotics caution that characteristics such as 'autonomous', 'unpredictable' and 'self-learning' are 'based on an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities [. . .], a robot perception distorted by Science-Fiction and a few recent sensational press announcements'. 17 While it is highly uncertain when Artificial General Intelligence ('Strong AI') 18 can be achieved, the mean estimate, according to a survey conducted among prominent researchers in the field of AI, is the year 2099. 19 Meanwhile, the anthropomorphic rhetoric in relation to AI – while being 'helpful when explaining complex models to audiences with minimal background in statistics and computer science' 20 – has been criticised as 'misleading and potentially dangerous'. 21 Such views should, at a minimum, raise curiosity as to what legal and policy discussions mean by 'autonomously generated' inventions. A more plausible scenario is, perhaps, one where AI is applied as a set of computational methods in the course of solving problems in various fields of research and development. 22 In such situations, however, it appears unclear what degree of AI involvement should be considered prejudicial to recognising a human as an inventor, especially given that the use of problem-solving tools and methods has not been a material factor from an inventorship perspective.
(Otherwise, we should also be concerned about situations where microorganisms are used in the research and development of biotechnological inventions, as they appear to be more viable candidates to act as 'autonomous agents' having a consciousness of their own. 23 )

7 See eg WEF (n 2) 9 (stating that 'more patents, resulting from AI-generated inventions, will increase social costs and monopolies, and stifle the entry of new ventures, thereby hampering innovation'); Massimo Craglia and others, Artificial Intelligence: A European Perspective (Publications Office of the European Union 2018) 66-67 <https://ec.europa.eu/digital-single-market/en/news/trends-and-developments-artificial-intelligence-challenges-intellectual-property-rights> accessed 3 March 2020 (doubting 'whether incentives are needed [for AI-generated inventions], especially in cases where the investment cost is low, and what consequences such rights might have on the market, including on creations or inventions made by humans'); Yanisky-Ravid and Liu (n 3) 2221 (pointing out that the autonomy and creativity of AI systems 'make justifications such as personality theories and incentive/efficiency arguments irrelevant').
8 Yanisky-Ravid and Liu (n 3) 2215, 2222 (stating that '[t]raditional patent law has become outdated, inapplicable and irrelevant with respect to inventions created by AI systems' and arguing 'for abolishing patent protection of inventions by AI altogether').
9 See eg Hattenbach and Glucoft (n 3) 32 ('The coming wave of computer-generated material is on a collision course with our patent laws.'); Vertinsky and Rice (n 3) 575 (envisioning that 'machines will perform the majority of the work in the invention process and originate novel solutions not imagined by their human operators, transforming the invention process in ways not easily accommodated within the current U.S. patent system').
10 The search conducted by the author could not identify any expert assessment report on this subject.
The topic, however, has been discussed in blogs and internet publications. See eg Angela Chen, 'Can an AI Be an Inventor? Not yet.' (MIT Technology Review, 8 January 2020) <https:// www.technologyreview.com/s/615020/ai-inventor-patent-dabus-intellec tual-property-uk-european-patent-office-law/> accessed 3 March 2020 (stating that '[a] more fundamental problem is that we're nowhere near general artificial intelligence, so few people will believe that the AI is truly the inventor'); Rose Hughes, 'The first AI inventor -IPKat searches for the facts behind the hype' (IPKat, 15 August 2019) <http://ipkitten.blog spot.com/2019/08/the-first-ai-inventor-ipkat-searches.html> accessed 3 March 2020 (pointing out that '[b]efore the legal questions are considered, it is important to note that evidence demonstrating the capabilities of the inventive algorithm has not yet been provided'). 11 For an overview of policy inquiries on this subject, see below at II.1. 12 See below at II.2. 13 See eg WEF (n 2) 6 ('The fact that patents have already been granted for inventions created by AI [...] raises concerns [. . .]' (emphasis added)); 'WIPO Conversation on IP' (n 2) 3 (stating that 'it would now seem clear that inventions can be autonomously generated by AI' (emphasis added)). 14 See eg Feldman and Thieme (n 3) 77 (picturing the process as follows: 'an AI [. . .] takes as input a topic ("toothbrushes") and after a button press, spits out a new product (novel toothbrush bristle designs)' (emphasis added)). In his book with the telling title 'Genie in the Machine', Robert Plotkin contemplates that 'the role of human inventors in the Artificial Invention Age [will be] to formulate high-level descriptions of the problem to be solved, not to work out the details of the solution [. . .] Once given this problem description (wish), the artificial invention software (genie) produces a design for a concrete product [. . .] that solves the stated problem.'). 
The main objective of this paper is to highlight the need for a further inquiry into the technical underpinnings of 'artificial inventions' and to identify the starting points in the relevant technical literature. Part II frames the issue: it reviews recent inquiries on patent policy and AI, lists instances of reported 'artificial inventions', points out the distinction between automation and autonomy, and formulates the legal uncertainty regarding implications for inventorship in the absence of autonomously acting computers. Part III synthesises insights gained from the literature review on computational problem solving and sketches out a basic understanding of how the inventive process is automated through computational methods such as artificial neural networks (ANNs) and evolutionary algorithms (EAs). Part IV argues that the design of the overall procedure, which determines how the given inputs are transformed into the intended outputs, plays the decisive role in computational problem solving. It highlights that, as long as instructions on the derivation of the input-output relation are provided by a human, the delineation between human and non-human (algorithmic) ingenuity is pointless. Part V concludes by reinforcing the point that, without an in-depth inquiry into the technological state of the art, challenges to patent law and policy cannot be identified adequately.

1. The lack of technical definitions in policy inquiries
Notably, while raising the questions of how patent law and policy should respond to 'autonomously generated' inventions, none of the reviewed policy documents provides a technical definition of such inventions. For instance, the WIPO draft issues paper states that 'it would now seem clear that inventions can be autonomously generated by AI'. 24 While no explicit reference is provided in support, 25 it is worth noting that, only recently, this scenario was considered by WIPO to be 'a science fiction'. 26 The World Economic Forum white paper assumes that 'AI is no longer "just crunching numbers" but is generating works of a sort that have historically been protected as "creative" or as requiring human ingenuity'. 27 However, no technical literature but only legal sources are referenced. 28 Somewhat puzzlingly, the request for comments initiated by the USPTO uses the term 'AI inventions' to refer to both inventions that utilise AI and inventions developed by AI. 29 It is assumed that both types can comprise elements such as 'the application of AI, the structure of the database on which the AI will be trained and will act; the training of the algorithm on the data; the algorithm itself; the results of the AI invention through an automated process'. 30 (This view, however, requires further precision. 31 For instance, if machine learning is applied in the process of drug discovery and development, the AI technique involved in that process would not be part of the resulting drug claimed as an invention.) Further, the document deliberates that, in both cases, a natural person can contribute to the conception of an invention, including by 'designing the algorithm and/or weighting adaptations, structuring the data on which the algorithm runs, running the AI algorithm on the data and obtaining the results'. 
32 This suggests that, in the USPTO's view, the development of an invention 'by AI' can still involve human input, which raises a critical question as to where to draw the line between situations where AI 'develops' an invention and situations where it is used as a tool (i.e. as a problem-solving technique).

2. Examples of 'artificial inventions'
Legal narratives of AI-generated inventions often refer to almost the same set of examples: the Oral-B toothbrush and other accomplishments of the 'Creativity Machine' designed by Stephen L. Thaler, 33 the NASA antenna, 34 achievements in the field of genetic programming reported by John Koza, 35 and AI applications in drug discovery and development. 36 More recently, the project 'Artificial Inventor' 37 presented several inventions attributed to the connectionist system DABUS: 38 a method for constructing and simulating artificial neural networks, 39 a food container, 40 and devices and methods for attracting enhanced attention. 41 None of the reviewed legal sources, however, provides a technical explanation of how the computational process

24 'WIPO Conversation on IP' (n 2) 3.
25 The text refers to 'several reported cases of applications for patent protection in which the applicant has named an AI application as the inventor' (supposedly, inventions developed by the connectionist system DABUS). See also below (n 37-38).
26 WIPO, 'Background document on patents and emerging technologies' SCP/30/5 para 55 (WIPO, 28 May 2019) <https://www.wipo.int/edocs/mdocs/scp/en/scp_30/scp_30_5.pdf> accessed 3 March 2020.
27 WEF (n 2) 6.
28 ibid (in particular referencing Fraser (n 3), Vertinsky and Rice (n 3), Hattenbach and Glucoft (n 3)).
29 USPTO (n 6) para 1.
30 ibid.
31 On a sharper delineation between AI-generated, AI-assisted inventions (ie where AI is applied as a tool to invent), and AI-implemented inventions (ie where AI is implemented as part of the invention), see Josef Drexl and others, '

49 Yet, one does not find in these accounts the language that would refer to inventions generated by 'autonomous entities'. Rather, they depict processes of designing computational systems and applying computational approaches as instances of computer-aided problem solving, design, and engineering. 50 In contrast to legal narratives claiming that computers generate inventions 'autonomously', 51 technical literature usually uses the term 'automated'. 52

3. Automation vs. autonomy
Automation means that a task can be carried out by a device without human participation during the performance of a function. 53 The term automation can be used equally with regard to physical labour (robotics) and to cognitive phenomena and functions, such as problem solving. 54 Machine learning, for instance, is defined as 'a field of computer science that studies algorithms and techniques for automating solutions to complex problems that are hard to program using conventional programing methods'. 55 Automation should not be equated with autonomy. While autonomy implies self-determination or self-rule, 56 it is doubtful whether computers can, at all, be autonomous from humans and perform computation 'on their own'. The impossibility of reproducing a self-organising system is considered to be the fundamental limitation of automation. 57 For any operation to be run on a computer, it needs to be programmed 58 (even in the case of self-improving software 59 ). Put figuratively: 'Programmers are the hand that feeds AI. It's improbable that they'll get bitten anytime soon.' 60 Admittedly, one does come across the terms 'autonomous' and 'automated' being used interchangeably in the technical literature. However, unlike in legal scholarship, 'autonomous' is there used (rather inaccurately) as a synonym of 'automated' – i.e. referring to processes executed without human intervention. 61 (Implications of this distinction for inventorship are further discussed in Section II.5. and Part IV of this paper.)

51 Ryan Abbott, 'I think, Therefore I invent' (n 3) 1083 (stating that '[c]omputers have been autonomously creating inventions since the twentieth century').
52 A representative example is the space antenna developed by NASA scientists. See Abbott (n 15) 29 (stating that 'NASA recruited an autonomously inventive machine to design an antenna' (emphasis added)). But see the account by NASA scientists: Gregory S Hornby and others, 'Automated Antenna Design with Evolutionary Algorithms' (American Institute of Aeronautics and Astronautics, 2006) <https://arc.aiaa.org/doi/pdf/10.2514/6.2006-7242> 1 ('Whereas the current practice of designing antennas by hand is severely limited because it is both time and labor intensive and requires a significant amount of domain knowledge, evolutionary algorithms can be used to search the design space and automatically find novel antenna designs that are more effective than would otherwise be developed. Here we present automated antenna design and optimization methods based on evolutionary algorithms' (emphasis added).). For another example, see Fraser (n 3) 318-319 (claiming that '[s]o-called robot scientists are systems that integrate AI algorithms with physical laboratory robotics to autonomously conduct scientific experimentation' (emphasis added), and that such 'technology represents a marked step towards autonomous scientific discovery over the status quo where humans are primarily responsible for these functions' (emphasis added) (referencing Ross D King and others, 'Functional Genomic Hypothesis Generation and Experimentation by a Robot Scientist' (2004) 427 Nature 247)). But see the original paper by King and others, referring throughout the text to the automation of research and automatic systems.
53 Shimon Nof, 'Automation: What It Means to Us Around the World' in Nof (n 43) 14 ('Automation involves machines, tools, devices, installations, and systems that are all platforms developed by humans to perform a given set of activities without human involvement during those activities.').
54 George A Schillinger, 'Automation' in Carl Mitcham (ed), Encyclopedia of Science, Technology, and Ethics (Macmillan Reference 2005) 146 (referring to automation as a process implemented by utilising a device 'as a substitute for human physical or mental labor').
55 Gopinath Rebala, Ajay Ravi and Sanjay Churiwala, An Introduction to Machine Learning (Springer 2019) 1 (emphasis added). See also Nof (n 53) 22 (pointing out that the significant advantage of AI is that 'it can function automatically, ie, without human intervention during its operation/function' (emphasis added)); Wolfgang Ertel, Introduction to Artificial Intelligence (Springer 2017) 8 (defining AI as 'a practical science of thought mechanization [that] could [. . .] only begin once there were programmable computers'); Ivan Jureta, The Design of Requirements Modelling Languages (Springer 2015) 18 (noting that the use of AI in problem solving is equal to 'making languages and algorithms that can automate specific [. . .] problem solving tasks').
56 Online Etymology Dictionary, 'Autonomy' <https://www.etymonline.com/word/autonomy> accessed 3 March 2020. See also Sara Goering, 'Autonomy' in Mitcham (n 54) 155-157, 155 (noting that the concept of autonomy, 'like freedom, combines two aspects: the negative condition of freedom from external constraints and the positive condition of a self-determined will').
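The point that 'self-learning' software executes human-authored instructions can be made concrete with a minimal, purely illustrative sketch (hypothetical code, not drawn from any of the cited sources): a perceptron that 'learns' the logical AND function does nothing autonomous – it mechanically applies a decision rule, a training loop and an update rule written in advance by a programmer, on data selected by a programmer.

```python
# Minimal perceptron sketch: the "learning" consists entirely of
# executing a human-written update rule over human-chosen data.

# Human-selected training data: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Human-chosen initial weights, bias, and learning rate.
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    # Human-specified decision rule: a thresholded weighted sum.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Human-specified training loop and update rule (the perceptron rule).
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # the learned AND function: [0, 0, 0, 1]
```

Every 'decision' the program makes – the threshold rule, the weight update, the stopping condition – is a human-defined instruction; only their repeated execution is automated.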

4. No 'genie in the machine'
While AI is sometimes portrayed as 'the genie in the machine', capable of fulfilling human wishes for new inventions, 62 this allegory does not (yet) correspond to reality. The design of systems that would respond to commands given in a high-level language, without any instructions as to how to perform a task, has been a research aspiration in the field of genetic programming. 63 Yet, researchers acknowledge that such an aspiration is 'unrealistic, at least in the foreseeable future', 64 and that '[t]he mere possibility of recursively self-improving software remains unproven'. 65 Thus, 'machine intelligence' appears to be a function of designing computational systems and programming computers and, hence, an expression of human intelligence. 66 Moreover, the fundamental limitation of computational systems in performing cognitive operations is seen in the fact that such operations can be reproduced through computer programs only to the extent to which the human designers of those systems understand the underlying intellectual mechanisms. 67

5. Does the use of AI as a problem-solving technique pose a legal uncertainty regarding inventorship?
As far as the allocation of the inventor entitlement is concerned, patent law, in principle, does not discriminate between inventions that might simply occur to the inventor's mind and those that might be developed with the help of problem-solving techniques and instruments. 68 Neither does it matter whether an invention came into being by sheer chance 69 or through an intentional process of trial and error. Instead, this factor, 70 as well as the means of problem solving, 71 can be relevant for the definition of the person skilled in the art in the context of the inventive step assessment. Even more so, any legal constraints on the use of problem-solving techniques would be at odds with the very rationale of patent law to promote the diffusion of knowledge. The use of problem-solving tools has not been prejudicial to the allocation of the inventor entitlement to a natural person, even where such tools surpass human capabilities (e.g. optical instruments), or where biological organisms – which, unlike computers, are self-organising – might be involved in research. 72

57 Richard D Patton and Peter C Patton, 'What Can Be Automated? What Cannot Be Automated?' in Nof (n 43) 305 (stating that, even though it would be 'a brilliant act of creating a new form of life (ie, a self-organizing system), [. . .] that is certainly not what automation is all about'). Further, they note that 'the mechanistic model lacks the system's inherent capability for self-organization'. ibid 308.
58 John P Sullins III, 'Artificial Intelligence' in Mitcham (n 54) 111, 112 (pointing out that a computer needs to be programmed in order to 'display advanced levels of intelligence').
59 See below (n 64-65) and the accompanying text.
60 Niamh Reed, 'Artificial Intelligence and the Future of Programming' (Datafloq, 13 June 2018) <https://datafloq.com/read/artificial-intelligence-future-of-programming/5124> accessed 3 March 2020.
61 Dilip Kumar Pratihar and Lakhmi C Jain, 'Towards Intelligent Autonomous Systems' in Dilip Kumar Pratihar and Lakhmi C Jain (eds), Intelligent Autonomous Systems: Foundations and Applications (Springer 2010) 1 (defining an autonomous system as a system that can 'perform the assigned task without continuous human guidance'); Nof (n 53) 22 (stating that 'the reliance on a process that can proceed successfully to completion autonomously, without human participation and intervention, is an essential characteristic of automation' (emphasis added)). In the context of policymaking, 'autonomous driving' is a prime example. See eg European Commission, 'Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee, the Committee of the Regions "On the road to automated mobility: An EU strategy for mobility of the future"' COM(2018) 283 final (17 May 2018) (primarily referring to automated (mobility, vehicles) throughout the document but, at times, using automated interchangeably with autonomous (driving, systems). Notably, when defining the levels of automation, the Communication uses the classification of the Society of Automotive Engineers-SAE, which defines full automation as where a 'system can cope with all situations automatically in a defined use case'. 62 Plotkin (n 3). 63 O'Neill and Spector (n 16) 1 (stating that genetic programming has been 'described as an "invention machine" that is capable of generating human-competitive solutions'). 64 ibid 2. 
See also Sai Sumathi, Thiag Hamsapriya, Paneerselvam Surekha, Evolutionary Intelligence: An Introduction to Theory and Applications with Matlab (Springer 2008) 172 (pointing out that '[t]he idea of a computer automatically programming itself is a very old, desirable and elusive goal', that 'it is as difficult to get a computer to program as it is to get it to do anything else', and that 'when early attempts at automatic programming largely failed to deliver what they promised, people began to [. . .] stay away from the subject').
, 930 (pointing out that certain aspects of cognitive processes and phenomena (such as perception, language understanding, reasoning, etc.) can be reproduced in computational artifacts, such as computer programs, to the extent to which they can be captured by computational modelling).
67 See John McCarthy, 'What is Artificial Intelligence' (Stanford University, 12 November 2007) 4 <http://www-formal.stanford.edu/jmc/whatisai.pdf> accessed 3 March 2020 (noting that '[w]henever people do better than computers on some task or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently').
68 Drexl and others (n 31) para 13 (pointing out that '[o]ne needs first to consider why the patent system has never required disclosure of how an invention came into being – not of least importance is the reason that such a requirement might simply be unfeasible to apply and enforce').

Perhaps the main concern arising with regard to the application of AI techniques in the inventive process is that they automate cognitive functions, such as problem solving and data and information processing. 73 On the one hand, one may doubt whether the contribution to the inventive process by a natural person can be deemed to be sufficient to give rise to the inventor entitlement.
On the other hand, one can question whether it is possible, at all, to distinguish between the roles of a human and an 'intelligent system', given that the latter is designed by the former. Assuming that fully autonomous AI systems – i.e. systems capable of performing tasks in the absence of any instructions – are not yet on the horizon, AI can be applied as a set of problem-solving techniques during the inventive process. The question arises: Can the automation of problem solving through such techniques reach an extent at which it no longer fits the concept of human inventorship, and what qualifying criteria can, or should, be applied to assess the sufficiency of the human contribution?
As far as the normative criteria are concerned, we may not find explicit provisions under existing patent laws. 74 The rules on co-inventorship appear to be rather inapplicable to human-machine interaction. Yet, the underlying principle that there should be a substantial contribution to the development of an invention, as a qualifying factor for the inventor entitlement, can still be valid in situations where a technical solution might be found by applying AI. For instance, most would probably agree that merely switching on a computer, or giving a computer a command in natural language – 'Solve this!', or 'Design a new product!' – cannot be deemed a sufficient human contribution. While this scenario is not yet realistic, 75 the question is whether there are interim steps performed by a computer that might be more decisive for solving a problem and outweigh the human contribution. For that, we need to take a closer look at the human-computer interaction in the process of developing an invention.

III. Automation of the inventive process: A basic understanding
Viewing an invention as a technical solution to a technical problem, this section situates AI applications in the context of the literature on computational methods enabling the (partial) automation of problem solving.

1. Computational paradigm of problem solving
While a problem is generally defined as a goal that is not 'immediately attainable', 76 problem solving refers to a process of achieving the goal, starting from the initial state, through a sequence of actions. 77 Such a view essentially reflects the concept of computation as a process of deriving the intended output from the given inputs. 78 Moreover, problem solving has been compared to calculating a mathematical function, whereby x values are transformed into y values. 79 Computation connects computer science and cognitive science. 80 Computational approaches – including those that are often called 'AI' – are also known as computational intelligence, 81 computational thinking, 82 and intelligent computing. 83 Even though the extent to which the analogy with computation can be applied to cognition is disputable, 84 computational modelling is regarded as 'a definitional factor' of cognitive science 85 and a key methodology for understanding cognitive phenomena. 86 Search is another fundamental concept that relates problem solving, computation, and AI. 87 Computational problem solving underlies Herbert Simon's theory that conceptualises the process of problem solving as the search through a problem space. 88 Machine learning algorithms reflect this idea. 89 Attempts to automate problem solving processes have been pursued in the fields of computer science and AI for decades.

73 On the automation of problem solving through AI, see Jureta (n 55) 9-19. See also above (n 54) and the accompanying text.
74 For instance, the EPC does not explicitly stipulate any qualitative or quantitative criteria with regard to the inventing activity that should give rise to the inventor entitlement; such criteria can be found under national patent law statutes and courts' jurisprudence, especially related to disputes over co-inventorship. See
90 Computational problem solving represents an area of interdisciplinary research and integrates approaches from cognitive science, mathematics, logic, computer science, neuroscience, biology, and psychology, among others. The notion of computation is a junction point where these disciplines intersect. 91 It is important to distinguish between 'computational' and 'computer-implemented' problem solving. The former is a broader concept and implies 'abstract[ing] away from the material details of the device [used] to make the calculations, be it an abacus, pen and paper, or [a] programming language and processor'. 92 For the purpose of understanding the human-computer interaction in the process of developing an invention, it is helpful to outline the main stages of computational problem solving.
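Simon's conception of problem solving as search through a problem space can be sketched in a few lines of illustrative code (a hypothetical toy problem, assumed purely for exposition): reaching a goal state from an initial state using only the operators that a human has specified in advance.

```python
from collections import deque

# Toy state-space search: reach the goal state from the initial state
# using only human-specified operators. Every element of the search -
# states, operators, goal test, and strategy - is defined in advance.

def solve(start, goal, operators, max_depth=10):
    # Breadth-first search through the problem space.
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path  # sequence of actions from start to goal
        if len(path) < max_depth:
            for name, op in operators:
                nxt = op(state)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None  # goal not reachable within max_depth

# Human-defined operators: the only moves the computer may make.
operators = [("add 3", lambda x: x + 3), ("double", lambda x: x * 2)]
print(solve(2, 11, operators))  # -> ['add 3', 'add 3', 'add 3']
```

The computer finds the sequence of actions, but the states, the operators, the goal test and the search strategy – the problem space itself – are all given to it.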

2. Abstraction and computational modelling
For a problem to be solved by computational methods, it needs to be represented in 'the right abstraction'. 94 Abstraction constitutes the 'essence of computational thinking'. 95 It involves the reduction of a phenomenon of interest - e.g. an object, system, or process - 'to a set of essential characteristics for a particular modelling purpose', 96 and the encoding of the key mathematical, logical or symbolic relations between its constituting elements. 97

Activity - Description
Problem formulation - The identification of the problem to be solved through computation
Abstraction and modelling - The reduction of a problem to the elements and relations that are necessary for understanding and solving it, and their representation in a formal structure (e.g. a computational model)
The design of an algorithm (or the adjustment of a pre-existing algorithm) - The specification of a sequence of steps that can transform given inputs into the intended output
Programming - The coding of an algorithm in a way it can be executed on a physical computer
Data manipulation - The preparation of the selected data to be used during computation
Execution - The execution of an algorithm on a computer
Interpretation and communication of results - The analysis of the results of computation and their representation in a way they can be communicated

Let us take a closer look at those components that are most closely related to the design of the problem-solving mechanism.

Computational modelling refers to the conception and the formal representation of how the input-output relation can be derived. 98 While computational modelling integrates approaches from various disciplines (among others: computer science, engineering, mathematics and physics 99 ), mathematical principles, rules and tools - e.g. different types of equations 100 - play a crucial role in determining the way in which computation proceeds. 101 A computational model captures the relations between inputs and outputs by 'map[ping them] into appropriate mathematical expressions [such as a set of equations]'. 102 In this sense, it essentially represents a causal mechanism connecting the inputs and the desired outcome through a sequence of states. 103 Computational models vary greatly as to their purposes, 104 types, 105 and complexity. 106 They are considered to be powerful tools applied in problem solving 107 across the fields of science 108 and engineering. 109 What enables such broad deployment of computational approaches is that many problems across disciplines (including biology, physics, chemistry, engineering, etc.) can be 'cast as optimization problems and thereby benefit from [. . .] the reservoir of knowledge of mathematical optimization [. . .,] numerical analysis, computational methods, and other branches of mathematics'. 110 Notwithstanding their diversity and complexity, computational methods of problem solving, at their basis, serve one purpose: to transform the given inputs into the desired output by way of executing the given instructions.
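The point that a computational model merely maps inputs into outputs through human-specified mathematics can be illustrated with a minimal sketch. The objective function, the target value and the search space below are purely hypothetical choices made for the example; the computer contributes nothing beyond evaluating them.

```python
# Illustrative sketch: casting a problem as a mathematical optimisation task.
# The 'model' is an objective function chosen by a human; the computer merely
# evaluates it over a search space that a human has defined.

def objective(x):
    # Hypothetical model: squared distance from a target value of 3.0
    return (x - 3.0) ** 2

# Human-defined search space and procedure (exhaustive grid search)
candidates = [i / 100.0 for i in range(0, 1001)]   # 0.00 .. 10.00
best = min(candidates, key=objective)
print(best)  # the candidate closest to the target value
```

Every element here - the formalisation of the goal as an equation, the granularity of the grid, the search procedure - is an instruction supplied upfront by a human.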

Designing an algorithm and programming
In order to address a problem with a computer, any executed operation needs to be specified 'with absolute precision [. . .] at least with the machines we have access to today'. 111 While the search for a problem solution can be viewed as a sequence of states, an algorithm specifies the instructions that determine the transitions between those states. By definition, an algorithm is 'an effective procedure to solve a given problem, that is, a finite sequence of elementary and totally explicit (= well defined and not ambiguous) instructions'. 112 An algorithm and a code 'together indicate how to organize and describe a series of actions to achieve a desired result: the algorithm constitutes the stage of designing and evaluating the strategy on which to build single actions, while coding reflects the operational phase that leads to the execution of those actions on a particular computing device, such as a PC'. 113 In itself, a programming language does not 'offer any approach to problem solving beyond a means of formulating algorithms'. 114 After all, 'programming isn't hard when you know how to solve the problem'. 115 Thus, nothing 'esoteric' is going on when computational models are executed by a computer. 116 Notwithstanding a model's complexity, computers contribute to problem solving by 'crunching numbers' 117 obediently, 118 and it is by 'brute force computation' 119 that they can outperform humans.
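The distinction between an algorithm (the human-conceived strategy) and its coding (the operational phase) can be sketched with a textbook example, Euclid's procedure for the greatest common divisor:

```python
# Euclid's algorithm: a classic example of 'a finite sequence of elementary
# and totally explicit instructions'. The strategy (repeated remainder-taking)
# is the algorithm; the Python statements are merely its coding for execution
# on a particular computing device.

def gcd(a, b):
    while b != 0:          # step: repeat until the remainder is zero
        a, b = b, a % b    # step: replace (a, b) with (b, a mod b)
    return a               # the last non-zero remainder is the answer

print(gcd(48, 36))  # -> 12
```

The same strategy could be coded in any programming language, or carried out with pen and paper; the language adds nothing to the problem-solving approach itself.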

Soft computing
ANNs and EAs - the two subfields of AI which gave rise to the recent discussions on patent policy and 'AI-generated' inventions - represent the so-called soft computing approaches. 120 Such methods are applied to problems characterised by high uncertainty and complexity. 121 A detailed explanation of how soft computing works goes beyond the scope of this paper. 122 From the inventorship perspective, the most relevant question is: What determines the way computation is performed when methods based on ANNs or EAs are run on a computer?

a) Artificial neural networks
ANNs comprise a variety of computational models with biologically motivated structures 123 and form a subfield of machine learning, defined as 'the study of methods for programming computers to learn', 124 which evolved from computational learning theory and pattern recognition. 125 What an algorithm 'learns' in the course of processing training data is how to correlate inputs (independent variables) with outputs (dependent variables) by inferring a function. While a function is sometimes playfully called 'a magical artifact' 126 that turns inputs into outputs, 'instead of using magic, [one] actually [uses] an instruction (algorithm) of how to transform the x to get the y, by using simpler functions such as addition, multiplication and exponentiation'. 127 Thus, in the course of learning, 'the algorithm defines, refines, and executes a [. . .] function[, which] is always specific to the kind of problem being addressed by the algorithm'. 128 As each artificial neuron 'solves a small piece of the problem, [. . .] using many neurons in parallel solves the problem as a whole'. 129 Ultimately, ANNs constitute nothing more than 'long sequences of summations and multiplications'. 130 Even though it is often said that machine learning techniques can perform tasks without being 'explicitly' programmed, 131 this does not denote the absence of any instructions determining how the input-output relation is derived through computation. Instead of being 'explicitly' programmed in a conventional sense (i.e. by providing a workflow-type list of commands 132 ), machine learning leverages mathematical and statistical methods. 133 Thus, the learning process is, on the one hand, 'purely mathematical', 134 whereby computational operations are guided by the formulas, equations and functions that constitute a part of an algorithm. On the other hand, it is 'basically just a statistical matter of which variables are most correlated with the outcome'. 135 That being said, commentators argue that machine learning has 'nothing to do with understanding', 136 and that a more suitable term for it would be 'automated model fitting[, which would not sound] cool enough to attract the same level of investment and innovation interest'. 137
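A minimal sketch can make the 'automated model fitting' point concrete. The example below trains a single linear neuron on made-up data; the data, the model form, the loss, the update rule and the learning rate are all hypothetical choices supplied upfront by a human, and every 'learning' step is ordinary arithmetic:

```python
# Minimal sketch of 'learning' as automated model fitting: a single linear
# neuron y = w*x + b, trained by gradient descent on synthetic data drawn
# from y = 2x + 1. The model form, the error measure, the update rule and
# the learning rate are all human-specified instructions.

data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]  # hypothetical training set
w, b, lr = 0.0, 0.0, 0.01                          # human-chosen initial state

for _ in range(2000):                # human-chosen number of iterations
    for x, y in data:
        pred = w * x + b             # 'inference': multiplication and addition
        err = pred - y               # error on this training example
        w -= lr * err * x            # gradient step for the weight
        b -= lr * err                # gradient step for the bias

print(round(w, 2), round(b, 2))  # approaches the underlying 2.0 and 1.0
```

The fitted function is derived through the interaction of the algorithm with the data, but at no point does the computer depart from the prescribed sequence of summations and multiplications.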

b) Evolutionary algorithms
Evolutionary (or genetic) algorithms represent a category of stochastic search algorithms 138 that are applied to solving complex problems, 139 such as the optimisation of multiple, potentially conflicting parameters of a system. 140 In essence, evolutionary algorithms generate and evolve a set of candidate solutions (a population 141 ) through reiterative modifications - mutation, recombination, selection 142 - and reach the 'best-scoring' solution based on the principle of natural evolution that the fittest survives. 143 EAs implement mathematical optimisation, deriving optimal values of a given function (the objective function) subject to specific conditions (constraints). 144 While the term 'stochastic' implies randomisation, it does not mean that a computer does something 'out of the blue': The application of stochastic local search algorithms requires a set of prerequisites that ultimately determine how computation is executed. In particular, the components that need to be predefined are the search space, candidate solutions, neighbourhood relation, memory states, initialisation function, step function, and termination predicate. 145 For instance, the initialisation function within an evolutionary algorithm 'specifies the search initialization in the form of a probability distribution over initial search positions and memory states', 146 while the step function 'determines the computation of search steps by mapping each search position and memory state to a probability distribution over its neighboring search positions and memory states'. 147 To summarise, both ANNs and EAs rely on computational instructions - including functions and equations embedded in an algorithm - that determine how computation is executed.
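A toy sketch can illustrate how the predefined components govern an evolutionary search. The fitness function, search space, mutation operator, population size and termination condition below are all invented for the example; the randomness is itself drawn from human-specified probability distributions:

```python
import random

# Toy evolutionary algorithm maximising a human-defined fitness function.
# Every 'stochastic' step is governed by components specified upfront:
# the search space, the fitness function, the mutation operator, the
# selection rule and the termination predicate.

random.seed(0)                      # fixed seed: the randomness is reproducible

def fitness(x):
    # Hypothetical objective: a single peak at x = 5
    return -(x - 5.0) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]  # initialisation

for generation in range(100):       # termination predicate: 100 generations
    # mutation: each individual produces a slightly perturbed offspring
    offspring = [x + random.gauss(0, 0.5) for x in population]
    # selection: keep the 20 fittest of parents and offspring combined
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

best = max(population, key=fitness)
print(round(best, 1))  # converges towards the peak at 5.0
```

Run twice with the same seed, the 'stochastic' search produces identical results: the outcome is fully determined by the predefined components and the pseudo-random sequence they consume.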
IV. Implications for human inventorship

1. The design of a computational method as the decisive factor of computational problem solving

Of crucial importance from an inventorship perspective is an understanding of the extent to which the functioning of 'intelligent systems' can be attributed to the way they are designed and applied by humans. If we accept that problem solving 'occurs when a problem solver determines how to solve a problem, that is, how to accomplish the goal', 148 then, in the case of computational problem solving, the 'how' refers to the conception of the overall computational procedure, which, besides an algorithm, can encompass multiple ingredients. In the case of ANNs, of particular importance is the selection of datasets. Collectively, such elements determine how the input-output relation is computed. In other words, the design of the overall computational method can be viewed as the conception of a problem-solving mechanism and, hence, of an invention.
The reviewed literature suggests that the conception of a computational procedure occurs before an algorithm is encoded in a programming language and executed by a computer. 149 The overall process of how the input-output relation can be derived through computation is conceptualised by a human 150 and constitutes a causal mechanism embodied in an algorithm and a computer program. 151 Notably, the characterisation of a problem is considered to be the 'hardest part of problem solving', 152 which invokes the famous postulate that a problem well stated is half solved. Further, defining 'the right abstraction is critical' 153 for computational problem solving. 154 It is worth noting that algorithms vary in complexity and uniqueness: While routine tasks are performed by established algorithms, new algorithms can be designed for complex problems. 155 Even though AI applications are, at times, portrayed in the legal scholarship as being able to 'discover complex rules and patterns [. . .] given only an abstract problem definition and simple rules for generating and evaluating possible solutions to the problem', 156 the simplicity of the given rules and the complexity of the derived rules ought not to be generalised. For instance, in the case of EAs, the definition of the fitness function is considered to be the most difficult part, 157 as it requires human judgement. More importantly, if a computer learns new rules based on the given rules, one cannot view such new rules as being generated by the computer autonomously, driven by its 'own will'. Furthermore, the adjustment of an existing algorithm to a problem at hand should not be downplayed. 158 Even where such adjustment concerns an algorithm, or a system, designed to address a highly specific problem, it may not happen instantaneously or effortlessly. The design of the NASA antenna is an apt example - it took the scientists about one month to adjust the system to the changed technical specifications of a mission's parameters and to prototype the final antenna. 159 (It is worth mentioning that NASA scientists had reportedly spent two years developing the evolutionary system for designing the antenna. 160 ) Moreover, it is important to emphasise that, even though AI is often characterised as a general purpose technology, 161 and even claimed to be 'a general-purpose method of invention', 162 there is no single 'general-purpose' algorithm, or a 'general-purpose' model, capable of solving any problem. 163 Quite to the contrary, scalability and generalizability are well-known problems of AI. 164 Thus, computational problem solving is about designing 'intelligent' computational systems and algorithms, whereas 'computers [. . .] are incapable of formulating algorithms and even so-called "intelligent" systems rely on a human being to formulate the algorithm'. 165 Notably, the future of AI is associated with the development of new algorithms. 166 Thus, as long as an algorithm contains instructions that determine computational operations, and as long as computers are bound by such instructions, it would be unjustified to attribute 'cognitive autonomy' to an algorithm, or a software system, and to view 'intelligent systems' as 'standalone' problem solvers and inventors. In light of the foregoing, the delineation between the human and non-human (algorithmic) contributions to an invention appears, in itself, artificial. 167

2. What about 'black-box' models?
It is common to refer to some types of AI, especially deep neural networks, as 'black box' models. 168 While such characterisation generally implies the limited explainability of models, it is worth noting that neither a universally accepted definition of explainability, nor a clear delineation between explainability and related terms - comprehensibility, transparency, interpretability - seems to exist. 169 While all these qualities can be desirable and even indispensable for various regulatory reasons, the relevance of AI explainability for patent law has at least two distinct aspects. First, it matters for the fulfilment of the disclosure requirement, in particular in cases where the claimed technical effect is enabled through the working of AI (akin to 'computer-implemented' inventions). 170 Second, from an inventorship perspective, the question is whether the characterisation of a model as a 'black box' may denote certain decision-making autonomy of a computer as to how to perform computational operations. 171 One can argue that, if it is unclear how exactly a problem is solved within a 'black box', 172 a human cannot and should not be credited for finding the solution. However, it is important to clarify what exactly the 'black box' problem refers to, and what factors account for the limited explainability of ANN models. A 'black box' generally implies computational complexity, and the contributing factors include the non-linearity of a model, 173 the complexity of data representation within a neural network (commentators note that, 'even if [they] understand the underlying mathematical principles and theories, [ANN] models lack an explicit declarative representation of knowledge' 174 ), the problem of data retrieval from a neural network, 175 and a limited understanding of the causality behind the 'learned' statistical correlations. 176 The latter factor can explain why the way a model has arrived at a prediction might not appear straightforward. 177 In other words, it is often unclear whether the statistical correlations 'learned' from the training data actually reflect a genuine causality between the features. 178 Yet, a limited understanding of data representations, or of the 'learned' correlations, does not denote a lack of understanding of how an ANN has been trained.
Furthermore, it might be interesting to ponder whether the explainability of how an invention came into being can be relevant at all for the question of the allocation of the inventor entitlement. Can we explain how thoughts and solutions occur and become perceptible to a human mind? History provides examples where great ideas were received in a dream state, 179 or serendipitously, 180 phenomena which might be more complex to explain than computational processes during model training. What matters is that none of the above-mentioned factors of limited explainability of ANNs denotes the absence of causality between the instructions provided to a computer and the outcome of computation, irrespective of whether that outcome can be (readily) interpreted or not. 181

3. What about the unpredictability of the solution?
Legal narratives about AI-generated inventions sometimes highlight 'a surprising effect' of AI applications. 182 One can argue that, since a human could not imagine or foresee the results, s/he cannot be deemed to be an inventor. Yet, this contention seems misplaced. First of all, the underlying premise that a solution should be known upfront is flawed, as it contradicts the very definition of problem solving - i.e. reaching an objective that is not 'immediately attainable'. 183 For instance, in the case of inventions resulting from experimentation, it would be absurd to stipulate the foreseeability of the outcome as a prerequisite for the inventor entitlement.
Second, the task for which an ANN model is trained, or to which an EA is applied, is always known upfront. What one perhaps cannot envisage is what exactly the function relating the input and output variables will look like, since it is the interaction of an algorithm with training data that creates the correlations (in the case of an ANN). 184 In this regard, the hypothetical that the solution could not have been imagined by a human is highly speculative: If a human could, theoretically, make the same calculations on paper - even if that would take a lifetime - s/he could eventually reach the same outcome.
Third, even less sophisticated methods, such as pattern recognition, 185 can uncover relations that a human applying them may not foresee. 186 However, irrespective of the level of complexity of computational methods, what matters from an inventorship perspective is that the problem-solving mechanism - i.e. the trajectory and the determinants of computational operations - is provided by a human. If so, there is seemingly no reason to consider an algorithm that embodies such a mechanism to be an 'autonomous problem solver'.
Curiously, one of the arguments raised by the applicants for patents designating the connectionist system DABUS as an inventor was that the computer 'identified the novelty of its own idea before a natural person did'. 187 Philosophy of AI is perhaps the better suited discipline to answer the question of whether computers can, at all, identify or conceive ideas. What can be reasonably assumed is that certain data representations, as a result of computational operations, can be formed within a computational system before they are perceived by the person using a computer to perform such operations. However, the case of data generated in the course of training is not unique in this regard. Likewise, one can argue that, when data is mined, a computer is first to 'see' a pattern, or that microorganisms used as analytical tools 188 are first to 'discover' certain biological phenomena, or that a chemical reagent is first to 'establish' a chemical reaction. Thus, if such a 'priority' factor were to be material for the allocation of the inventor entitlement to a natural person, it would need to be applied across the board. The question to what extent the human mind is independent in making discoveries might be a subject for an epistemological discussion. From a more practical perspective, the problem solver is the one who elaborates the steps of how the problem at hand can be solved. In the case of machine learning, as discussed earlier, the results obtained through the training of a model and its application are essentially determined by the instructions provided by a human. 189 Furthermore, it is important to emphasise that, as such, data representations or 'learned' statistical correlations do not constitute a readily applicable solution to a problem. The issue of the interpretability of ANNs is prima facie evidence that computational outcomes can be transformed into meaningful, actionable and communicable knowledge only when they are interpreted by a human.

4. Towards a broader dialogue
The view presented here, based on the literature review, is that computational methods of problem solving - including ANNs and EAs - essentially rely on instructions that determine how inputs are mapped into outputs through computation. Thus, as long as a computer is bound by an algorithm, there is no reason to ascribe 'cognitive' autonomy to it, irrespective of the complexity of the computational process. As an operative test to prove the decisive role of such instructions, one can run a counterfactual in which a computer would need to solve a problem in their absence.
As computational techniques inevitably become more and more sophisticated, the questions that need to be explored with experts in computer science on a broad basis are: Under what conditions can a computer deviate from the algorithm provided by a human? Under what conditions might it be possible for a computer to derive the relation between inputs and outputs without instructions, provided upfront by a human, as to how this should be done?
Distilling an adequate understanding of the capabilities of computational systems can be challenging due to the significant divergence in opinions as to how far the developments in the automation of cognitive functions can reach. 190 The automation of scientific research is an apt example. Some commentators envisage that 'the next logical step in laboratory automation' is where Robot Scientists are able to 'automate all aspects of the scientific discovery process[:] generate hypotheses from a computer model of the domain, design experiments to test these hypotheses, run the physical experiments using robotic systems, and then analyse and interpret the results'. 191 Other researchers argue that, even though computational methods of scientific discovery 'are an increasingly important tool in science[,] the role of the human scientist remains, for the foreseeable future, essential'. 192 Furthermore, the diversity of scenarios of problem solving through computational techniques needs to be further examined, especially where multiple contributors are involved in the design of a computational model. The eclectic nature of computational techniques can have implications for the allocation of the inventor entitlement. For instance, in some cases, not only the designer of the original algorithm, but also the user who adjusts ('tweaks') it to the problem at hand can be viewed as an equal contributor to problem solving. In other cases, a standard algorithm may be applied, but the choice and the handling of data by a data scientist might play the decisive role. This aspect can be explored through case studies, as the constellations can be as diverse as the problems solved through computational techniques. However, while the scenarios may vary greatly, they might not pose new legal uncertainties and can eventually be resolved under the rules on co-inventorship.
This issue is not covered within the scope of this paper, as the focus here lies mainly on the human-computer interaction in the context of the inventive process and its implications for the concept of inventorship.

V. Concluding remarks
This paper has highlighted that the ongoing policy inquiries and the recent legal scholarship on the topic of 'AI-generated' inventions lack a comprehensive technical basis and tend to underappreciate the distinction between 'autonomous' and 'automated' systems. Paraphrasing the opening quote, there might be a profound disconnect between the questions raised by policymakers concerning 'non-human inventorship' and the technological state of the art.
The article has shown that AI techniques represent computational methods of problem solving enabling the partial automation of inventive activity. As such, the application of such techniques cannot be prejudicial to the allocation of the inventor entitlement to a natural person. As long as a human specifies instructions that determine how the input-output relation is derived through computation, and as long as computers are bound by such instructions, there is seemingly no reason why AI-aided - allegedly 'AI-generated' - inventions should be treated under patent law differently than inventions assisted by other types of problem-solving tools and methods as far as inventorship is concerned. Instead, the use of such techniques should be a matter of the assessment of inventive step. 193 As we do not personify the laws of physics or chemistry, neither should we attribute a mystic personality to computational processes carried out according to the laws of mathematics and statistics. Even though it has become common to use language that anthropomorphises algorithms, this tendency has been viewed as 'an obstacle to properly conceptualizing' the legal and societal challenges posed by AI techniques, 194 as well as misguiding the policy priorities. 195 If computers only execute the problem-solving mechanism - defined in this paper as instructions as to how the input-output relation should be derived through computation - there seems to be no ground for treating them as anything other than tools in the hands of human inventors.

189 Above (n 181).
190 See generally Ford (n 19).
191 Andrew Charles Sparkes and others, 'Towards Robot Scientists for Autonomous Scientific Discovery' 2(1) Automated Experimentation 1-12 (2010).
192 Sozou and others (n 48) 731. See also Langley (n 158) 231, 234 (noting that 'the more common perspective is that [computational] discovery systems should aid scientists rather than replace them').
193 Peter Blok, 'The Inventor's New Tool: Artificial intelligence - how does it fit in the European patent system?' 39(2) E.I.P.R. 69 (2017) (pointing out that 'an artificial intelligence application should be seen as a tool, and that inventions made with that tool are patentable as long as the artificial intelligence application is not a tool the average skilled person would use routinely').
194 Watson (n 20) 417.
195 In the words of Yann LeCun, the concerns that one day 'somehow we'll come up with [. . .] artificial general intelligence, and that we'll create a human-level intelligence that will escape our control [is] a bit like we haven't invented the internal combustion engine yet and we are already worrying that we're not going to be able to invent the brake and the safety belt'. Ford (n 19) 135-136.