Artificial Intelligence and Unfair Competition – Unveiling an Underestimated Building Block of the AI Regulation Landscape

The article illustrates the underestimated role unfair competition law (UCL) can play as a building block of the regulatory landscape relating to Artificial Intelligence (AI). To this end, it examines to what extent overarching prominent principles of AI regulation such as Fairness, Transparency, Autonomy or Innovation are reflected in paradigms of UCL, and on this basis evaluates how the latter can contribute to the realization of the former. In the course of this analysis, prominent problems raised by AI that are commonly discussed under different legal regimes are reconsidered from a UCL perspective, showing that this perspective may complement or even substitute traditional regulatory approaches. Finally, the article indicates how AI could inversely give impulses for the doctrinal advancement of UCL as a still ambiguous and insufficiently understood body of law.


I. Introduction
For quite some time, "Artificial Intelligence" has been at the centre of attention of intellectual property and competition law scholars. However, as opposed to IP and antitrust, 1 the role unfair competition law (in the following: UCL) can and should play in the AI regulatory landscape has so far largely been neglected. 2 Certainly, the fact that UCL is a complex matter, the understanding of which as an area of law in its own right is debated and the systematic location and design of which in the legal order varies widely across EU member states, let alone worldwide, is one explanatory factor for this shortcoming. All the more, it seems worth bringing the potential of this body of law also and especially to the attention of legal orders with an as yet underdeveloped focus on it.

* Ass. iur., Junior Research Fellow at the Max Planck Institute for Innovation and Competition, Munich. All online information was accessed before 27 November 2020.
1 Although hinting at the Anglo-American legal sphere, the term "antitrust" is preferred in this analysis over "competition law" in order to avoid terminological confusion vis-à-vis "unfair competition law", since (from a European perspective) both regimes can be considered subsets of "competition law" understood as an umbrella term.
2 But see for example WIPO Conversation on Intellectual Property (IP) and Artificial Intelligence (AI), Second Session, Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence (21 May 2020) para 8: "No separate section concerning AI and unfair competition has been added. However, recognizing that IP law and competition law clearly relate, questions have been added in the various sections (…)."
In order to fill this analytical gap, 3 the present paper examines to what extent general principles widely proclaimed as key pillars and guiding paradigms of AI regulation are reflected in specific sub-equivalents stemming from the realm of UCL, thus illuminating UCL's potential to contribute to their achievement. In analytical terms, a particular focus of this assessment lies in reconsidering prominent problems raised by AI that are commonly discussed under different legal regimes from a UCL perspective, showing that this perspective may complement or even substitute traditional approaches. In substantive terms, special attention will be given to UCL's contribution to the AI innovation ecosystem. Finally, on a reverse note, the potential of AI as an impulse for further developing the doctrinal framework of UCL and its relevance for the global order of competition will be considered.
II. Setting the scene: What is AI, what is UCL, and what can the latter contribute to the regulation of the former?
AI and UCL share the feature that it is hard to say what they actually are. AI is a "catch-all" term for certain new technologies revolving around Big Data analysis and advanced algorithms, comprising visions of "autonomy" and "self-learning". In demystifying technical terms, for the purposes of this analysis, Machine Learning, as the most important and most prominent AI technology, will be considered the main reference point. 4 UCL is a less fashionable, but similarly ambiguous phenomenon: a body of law first enshrined on the international level in Art. 10bis of the Paris Convention for the Protection of Industrial Property of 1883, which has historically been perceived as safeguarding the "morals" or "business ethics" of competition, relying on the ideal model of the "honourable merchant".
Modern scholarship construes UCL by taking recourse to functional economic considerations, postulating an ultimate complementarity with antitrust law and the protection of competition as an institution as its central goal. 5 Still, the exact design and understanding of UCL rules varies considerably across EU member states and worldwide: in systematic terms ranging from codifications in competition law to consumer law, public law or criminal law, in substantive terms oscillating between the protection of competitors, consumers and competition as an institution. 6 Whereas the b2c dimension of UCL has been harmonized in the EU via the Unfair Commercial Practices (UCP) Directive, 7 the b2b dimension has so far not been. 8 However, this ambiguity does not have to be a disadvantage as regards UCL's potential for contributing to the AI regulatory landscape. Admittedly, given the divergences outlined, it will hardly have (immediate) benefits as regards the harmonization of regulation. Yet, first, there may be gains arising from the idea of "regulatory competition". Especially the fact that b2b UCL is not harmonized at the EU level should in this light be considered an opportunity. The regulatory landscape regarding AI is as dynamic as the technology it seeks to regulate. Competing national approaches to treating AI under UCL may be considered a "regulatory sandbox" 9 in their own right: The best solutions found can then be exported into other jurisdictions, both on the legislative level and on the level of judicially interpreting general clauses via comparative legal methods. Second and relatedly, the inherent and characteristic flexibility of UCL, which ultimately unites the understanding of all legal orders disposing of such a body of law, fits very well with the dynamic nature of the AI field. UCL can play a viable role as a "fall-back" regime to address new and unforeseen competitive risks in the absence of specific legislation.
This fall-back character, preparing the ground for doctrinal developments that can later be explicitly codified, belongs to the traditional traits UCL is known for. 10 It gains even more relevance in the digital economy. Now, to what extent may AI regulation principles be reflected in UCL paradigms? The debate about the regulatory framework for AI is dynamic and ongoing, and it is certainly too early to speak of an established acquis.

5 Cf. Reto M. Hilty, 'The Law Against Unfair Competition and its Interfaces', in Reto M. Hilty and Frauke Henning-Bodewig (eds), Law Against Unfair Competition - Towards a New Paradigm in Europe? (Springer 2007) 1; Rupprecht Podszun, 'Der "more economic approach" im Lauterkeitsrecht' (2009) WRP 509.
6 For an overview, see Frauke Henning-Bodewig, International Handbook of Unfair Competition (Beck/Hart/Nomos 2013); illustrative of the scattered nature Richard Arnold, 'English Unfair Competition Law' (2013) 44 (1) IIC 63, 77: "It is still the case that English law does not recognise any general tort of unfair competition. It does not follow, however, that there is no English law of unfair competition."; on the difficulties of determining UCL see also Frauke Henning-Bodewig and Achim Spengler, 'Conference Report: "Framing" - The "Hard Core" of Unfair Competition Law' (2016) GRUR Int. 911.
7 Directive 2005/29/EC of the European Parliament and of the Council concerning unfair business-to-consumer commercial practices in the internal market.
8 As far as the b2c dimension is concerned, this article will focus on European law; as far as the b2b dimension is concerned, on German law as an illustrative and doctrinally advanced example or blueprint.
9 On regulatory sandboxes for data sharing cf. Rupprecht Podszun, 'Datenpools: Ausprobieren statt differenzieren' (2019) WUW 289.
Nevertheless, a certain consensus regarding overarching and recurring paradigms, reflected both in academic debate and numerous policy guidelines of public as well as private institutions, can be identified. Among the principles invoked over and over again are an overall realization of "Ethics", Fairness, Transparency, Accountability, Autonomy, and the promotion of innovation. 11 The following considerations will shed light upon ways in which specifically UCL can contribute to achieving these goals.
III. "AI Ethics" and "Business Ethics": The (non-)convergence of regulatory chimeras

To start with, one can generally reflect on whether there might be a connecting line between the widely proclaimed wishes for "AI ethics" 12 and the notion of "business ethics" often, or at least historically, associated with UCL. This point obviously goes to the very heart of the debate on what UCL is all about. In its historical roots, as already mentioned, UCL used to be the area of law addressing the "ethics" of and in competition. 13 Whereas this understanding has to a large extent been overridden by the modern, economic-functional approach, fragments of the old understanding still permeate laws, judgments, and scholarly debate, with varying emphasis from member state to member state. If one were to follow the notion that there is still a place for "business ethics" within the legal order and that this place is UCL, then aligning the respective principles with the demands for "ethical AI" does not seem far-fetched. The stance of this article, however, is not to promote this claim, but rather to point out the dire need for demystification of the "ethics" narrative. First and foremost, there is hardly an "ethical" value without a "legal" mirror image, in particular a fundamental or human right relating to the respective value, 14 which makes the whole notion of "ethics" more confusing than helpful for the purposes of legal scholarship. Second, conduct that may be deemed "unethical" often converges with anticompetitive conduct. In any case, it appears clear that only those parts of "AI ethics" relating to or impacting on markets and competition can gain relevance in the realm of UCL.

13 It is worth noting, however, that, irrespective of the "moral" rhetoric and underpinnings, the practical application of the law also in former times often followed a functional balancing of interests.
Ultimately, when it comes to concrete legal operationalization, all such issues, irrespective of their metaphysical provenance, come down to a balancing of legitimate interests of all market participants. Such balancing lies at the doctrinal heart of UCL. The following considerations will thus embrace legal, not "ethical" reflections.

IV. Fairness
The prima facie most obvious, yet at the same time most complicated potential "common ground" of AI and UCL is the "fairness" principle itself. On the surface, the "fairness" of UCL and the "fairness" invoked in the AI context might be perceived as having little in common apart from terminology: Whereas "fairness" in the AI debate is mostly understood as referring to the principle of equality and the prohibition of "biased" discrimination, the "fairness" of UCL is teleologically entrenched in safeguarding competition or at least competition-related interests. 15 However, not only do both concepts share an inherent openness and vagueness. 16 One should also not overlook the many ways AI can be (mis-)used to the detriment specifically of competition. Whereas this is primarily exemplified in antitrust scenarios such as "algorithmic collusion", 17 there are also manifold instances in which AI can impact areas traditionally associated with the realm of UCL, especially those affecting "consumer protection". Examples will be provided below. However, the distinct feature of UCL's general clause(s), prohibiting "unfair" commercial practices, is of course the capability of addressing new and unforeseen anticompetitive risks, which for logical reasons evade further elaboration in this article.

14 Cf. High-Level Expert Group (n 11) 37, however considering fundamental rights a mere sub-realization of ethics.
15 Of course, some phenomena of "discrimination" have immediate competitive relevance, for example the prohibition on dominant companies applying dissimilar conditions under Art. 102 (c) TFEU; on the connection between anti-discrimination legislation and UCL see also section VIII.2. below.
16 Cf. High-Level Expert Group (n 11) 12: "(…) we acknowledge that there are many different interpretations of fairness (…)."
Whereas this is not the place to delve deeper into the ongoing and long-standing debate about the substantive meaning of "fairness" (or rather: the manifold dimensions it entails), one very concrete aspect of UCL's contribution to a "fair" market order is worth highlighting: its regulatory complementarity to antitrust law. In substantive terms, UCL is equipped to address competition problems that fall short of the antitrust requirement of market dominance. 18 This is all the more relevant in light of the difficulties associated with determining market power in data-driven markets. 19 Of course, heeding doctrinal systematics, one has to be cautious not to circumvent or undermine conclusive decisions of antitrust law as to the non-illegality of certain conduct of a non-dominant player by referring to UCL. Yet, especially if one follows the "modern" understanding of UCL that puts safeguarding competition as an institution at the centre of teleological attention, its general clauses can become a building block for addressing AI-induced market failures outside the realm of antitrust.

V. Transparency
Transparency is a core mantra of AI regulation. Both the involvement of AI as such (as opposed to purely human decision making) and the concrete way AI reaches a decision (commonly labelled the "black box" problem, mirrored by efforts to achieve "Explainable AI") are widely desired to be transparent. 20 Transparency comes in many facets, but an important one is certainly market transparency. The traditional systematic realm of safeguarding market transparency is UCL, which prohibits misleading commercial practices. 21

1. AI based personalization
Among the main and economically most valuable areas of AI application is its use for strategies of personalization, notably including personalized pricing and personalized advertising. 23 A fierce debate has evolved around whether such personalization strategies should be banned or limited, even in the event that they are overall welfare-enhancing, on the grounds that they are widely perceived by consumers as "unfair" or "unjust". 24 Without going deeper into that discussion, one thing appears undisputed: It is essential for the consumer to know that he or she is subjected to a personalization strategy and is not receiving a standard offer. 25 To the extent the consumer is not acting on the basis of an autonomous and informed decision, personalization may thus violate transparency rules imposed by UCL. 26 Especially a lack of price transparency constitutes an informational asymmetry detrimental to economic welfare, as it eradicates the possibility of comparing prices, which is essential for competition. 27 Of course, the exact information requirements are subject to debate: They have to be balanced in order not to evoke an information overload. 22

Beyond personalization, there are further instances where UCL can resolve transparency problems of AI-related marketing activities. First, in light of the ambiguity surrounding the term "AI" as such, one can think of considering the marketing of "normal" computer software under the catchy promise of "AI" a misleading practice. Second, companies increasingly proclaim codes of conduct relating to AI, in which they make more or less concrete statements as to the way in which they intend to use AI for the good of society and to refrain from undesired behaviour. 31 Such codes can be considered part of the "Corporate Digital Responsibility" phenomenon, a digitized continuation of "Corporate Social Responsibility". 32 In case a company acts contrary to its statements in such a code, UCL plays an important role in combating the deception lying therein and restoring market transparency. 33 For if companies want to employ their "good conduct" as a competitive advantage vis-à-vis consumers valuing such behaviour, competition on these grounds can only function if the promises made are actually kept. The main problem in the application of the law in this regard is the vagueness of many statements. 34 For example, one can hardly draw consequences from promises such as using AI in a "socially beneficial" 35 way as such.

28 In this regard, personalization offers interesting chances: Each consumer could get personalized information, exactly suiting his or her capabilities, situation and needs. Ultimately, this is one aspect of what is currently discussed under the vision of "personalized law", cf. (
Third, another body of cases that may increasingly gain relevance stems from the realm of intellectual property law and relates to the necessity of distinguishing whether an intangible good, especially one that looks like a "work" in the copyright sense, has been created by humans or with considerable help of AI. The question whether considerable human guidance is essential for the justification of IP protection for "AI generated" output has kept and will continue to keep IP scholars busy. 36 Yet what is certain is that any legal distinction between human-made and "AI generated" subject matter faces the practical challenge of having to discern the respective origin. A market solution, relying on consumers valuing human-made works over AI generated ones, does not work if one cannot actually be told from the other. 37 If an AI generated "work" is marketed as human-made, such marketing, be it active or by way of passive concealment of the AI origin, may constitute an act of unfair competition: a misleading practice. 38

VI. Accountability
Ensuring accountability of companies for damages "autonomously" caused by their AI is the most "classic" legal problem relating to AI. 39 The prime example is the autonomous car running over pedestrians, yet AI might also "autonomously" harm intellectual property rights or competition in general. In the case of UCL, the issue at stake is to determine liability for unfair commercial actions committed "by" or with the help of the AI of a company. The need for a holistic concept with respect to such "attribution issues", uniting the somewhat fragmented doctrinal landscape revolving around ideas such as "secondary liability" into a coherent framework, has rightly been emphasized in the recent scholarly debate. 40 When construing such a framework specifically with a view to AI, rather than starting from scratch and inventing entirely new concepts, it appears wise to build on the acquis, i.e. the manifold role models various legal regimes have already developed in the field of liability attribution. UCL can be one of these doctrinally inspiring regimes. 41 In Germany, the concept of "liability for breaches of duty of care in competition" has been developed on the grounds of the UCL general clause as an alternative in particular to the "Störerhaftung" of IP law. 42 It provides doctrinal guidelines for attributing anticompetitive acts to a company by making the company responsible for not having fulfilled its duties to prevent the respective act. Transferring this concept as a potentially adequate role model to AI-induced violations of antitrust law has already been proposed. 43

38 In this regard, a distinction has to be made: The potential non-registrability of subject matter generated "autonomously" by AI may also lead companies to conceal the use of AI when registering their inventions or designs, cf. WIPO (n 2) para (vii). Such deceptive acts before an IP office, however, do not fall under UCL, as they do not (immediately) happen in a market context, but are directed at a public institution; Sven Hetmank and Anne Lauber-Rönsberg, 'Künstliche Intelligenz - Herausforderungen für das Immaterialgüterrecht' (2018) GRUR 574, 581 suggest that a labelling requirement as to AI involvement could be introduced as a protection criterion for AI generated products to establish transparency.
39 An aspect worth highlighting in this context is the overestimation of the relevance of "autonomy" notions: In many cases, it is simply decisive whether there has been (in-)sufficient guidance or foreseeability of certain AI-induced results "on the human side", irrespective of the degree of "autonomy" "on the AI side".

VII. Autonomy
The ultimate threat of AI, fuelled by science-fiction-inspired notions, is that of replacing humans. Conversely, preserving human autonomy lies at the core of AI regulation principles. 44 UCL builds on and aims at safeguarding a very important sub-aspect of human autonomy: the autonomy of consumers as participants in the market, who make the very concept of competition work by exercising their "role as arbitrator". The problems raised by AI in relation thereto are twofold.

1. Autonomy threats of AI use by suppliers
Where AI is used on the supply side, personalization strategies in particular, basing new offers and advertisements solely on previous preferences, may capture consumers in a "filter bubble". Autonomous choice from a variety of market options may get lost as such preference-tailored systems proliferate. Yet, the good news is that UCL generally disposes of the means to address such threats: As already mentioned, transparency requirements at least mitigate the tension. 45 Consumers voluntarily and informedly entering into or staying in filter bubbles make an autonomous choice to do so, although the paradox and dangers of voluntary self-incapacitation are well known. In this capacity, UCL appears to be the competition-oriented sub-pillar of fighting the overarching "filter bubble" problem. 46

2. Autonomy threats of AI use by consumers

(Even) more problematic is the mirror dimension of AI use by consumers, especially when relying on Internet of Things applications, for which the term "algorithmic consumers" has been coined. 47 An example is the "autonomous fridge" in the "smart home", which orders new food (based on previous preferences) without the human consumer being (actively) involved. Whereas on the one hand such use may constitute a welcome "fight fire with fire" counter-strategy vis-à-vis detrimental use of AI by companies, restoring the technical and informational balance, it may at the same time, from an anthropological perspective, deprive consumers of their very capability of acting as rational market agents, since "all" of their decisions are taken over by their AI tools. 48 As regards UCL's potential answer to this threat to its very foundations, a doctrinal acceptance of and adjustment to "algorithmic consumers", meaning in particular a re-construction of the "average consumer" standard, seems necessary, 49 but insufficient to address the autonomy problem.
Rather, as in other fields of law, it would probably be necessary to "keep the human in the loop": the fridge might, for example, be forced to check back with the consumer from time to time to ask whether preferences have changed or whether a new offer might be of interest. Such an obligation would generally have to be realized outside UCL. Nonetheless, UCL, with its rich experience in matters of consumer choice, may provide theoretical guidelines for policymakers to assess how much decision-making power can be delegated to "algorithmic consumers" and how much cannot be without undermining the functioning of the market order as such. In particular, UCL doctrine can in this regard influence the debate on implementing the respective parameters by design. 50

45 Cf. section V.1. above; Wagner and Eidenmüller (n 24) 590 on personalized pricing: "An obligation to disclose the application of first-degree price discrimination appears innocuous and potentially effective to leverage consumer autonomy."
46 A parallel problem regarding "filter bubbles of opinion" threatening democracy is
49 The "average consumer" standard, against which misleading practices are judged, is not only challenged by personalization phenomena that question the very concept of "average" (cf. Peter Rott, 'Der "Durchschnittsverbraucher" - ein Auslaufmodell angesichts personalisierten Marketings?' (2015) VuR 163). Also, with a view to "algorithmic consumers", a "technicized" reconstruction of this hypothetical figure as "average algorithmic consumer" may become necessary.

VIII. UCL as an enforcement tool for AI related extra-UCL market conduct rules?

1. Locating UCL in the enforcement landscape

UCL can act as an (additional) enforcement pillar for a variety of market conduct rules outside UCL, the violation of which negatively impacts competition, via the doctrine of "breach of statutory duty". In procedural terms, this option unleashes the enforcement possibilities via competitors and consumer associations, on which the UCL of many legal orders relies, thus providing for an institutional enrichment beyond the state authorities associated with antitrust law. Such enforcement is quicker and more flexible than lengthy administrative proceedings and thus displays characteristics especially fit for AI and the digital economy. In substantive terms, "breach of statutory duty" appears an apt doctrinal vehicle for operationalizing the ongoing discussion about a growing convergence of areas of law relating to the protection of consumer interests in the digital economy. Among the numerous breaches of law that can potentially be sanctioned via these mechanisms, three appear especially relevant in the AI context: discrimination, protection of personal data, and cybersecurity. 51

2. AI based discrimination in market relevant contexts
Anti-discrimination legislation is the standard against which to legally judge "AI bias" issues. Although anti-discrimination rules are not as such rules relating to market conduct, as required for coming within the ambit of UCL, they can be in certain contexts. An obvious example are again the above-mentioned personalization strategies in commercial contexts, namely in case personalization is based on traits anti-discrimination law prohibits referring to, such as race or gender. Although these aspects are at the outset grounded in non-economic values such as human dignity and personality, they still shape and limit the way companies act on the market.

50 Cf. IEEE (n 12).
51 The phenomena of discrimination and personal data protection can be seen in conjunction with the personalization problem outlined above, as personalization can be based on data gathering in violation of data protection rules and, if the personalization relies on traits protected by anti-discrimination laws, it may also violate the latter.

3. Competition and privacy: Friends or foes?
The most fundamental concrete 52 threat raised by AI for society is its capacity to establish all-embracing surveillance, both by the state and by private companies. 53 It is thus key to align strong data protection rules with the market- and welfare-oriented economic goals pursued by competition rules. 54 A fierce debate on the relationship between competition and data protection law has been sparked by investigations of the German competition authority Bundeskartellamt against Facebook, alleging an abuse of a dominant position based primarily on a breach of data protection rules. 55 At the same time, there is a discussion on whether data protection violations can be sanctioned as a breach of statutory duty under UCL. 56 If one follows the above-mentioned idea of a teleological complementarity of antitrust and UCL, considering them two bodies of law essentially aimed at the very same target of safeguarding functioning competition (or maximizing welfare), then it seems crucial to discursively align these two strands of discussion and construe them in conjunction. 57 While from an antitrust perspective the test is whether the conduct in violation of data protection rules can be considered as falling within the established categories of an "exploitation" of customers or an "impediment" of competitors by a market-dominant actor, in UCL terms a violation of a market conduct rule and a considerable effect on the (competition-related) interests of market participants are required. Yet, the uniting issue in both dimensions appears to be to what extent data protection rules have an inherent connection to competition, or which competition-specific "plus" is required for deriving a harm to competition from a breach of data protection law. 58 The answer to this question is complex, and the reflections are ongoing.
Yet, there are some theoretical guideposts this article wants to highlight: First, the efforts of understanding "privacy" as an economic good and thus integrating it within economic welfare theories need to be further pursued and advanced. 59 This way, privacy as a central consumer interest of the digital economy might eventually be captured as part of "consumer welfare". According to a widespread line of thinking, this is the normative standard competition laws should pursue, and it is at the same time in need of reconstruction and adaptation to the digital age. 60 Second, the doctrinal acquis regarding conceptual overlaps of privacy/personality 61 and intellectual property should be integrated into the discussion: Both regimes are, although with varying emphasis, embedded in both economic and personality-based justifications as the basis for rights in intangible subject matter, while at the same time the understanding of the relationship between IP and competition law seems far more advanced than that between privacy and competition law. 62 Third, quite likely the result of such reflections will in any case be a hybrid character of data protection rules as comprising elements that can be subjected to economic paradigms, and others that cannot. 63 Fourth and finally, notwithstanding these horizons of teleological pluralisation and overlaps, a basic systematic dividing line must be heeded: Where there is no harm to competition, data protection cannot and should not be enforced via competition regimes purely on the grounds of "enforcement assistance". 64

58 As regards breach of statutory duty in the EU, the discussion is overlaid by the systematic issue of whether the GDPR sanction regime is conclusive and thus prevents relying on additional enforcement mechanisms. This question is outside the scope of this paper, as it gives no guidance on the substantive relationship between data protection law and competition law.
59 Welfare theory is ultimately about the (Pareto-)optimal allocation of goods: If privacy can be understood as a good that has to be optimally allocated, it may well be included in an overall welfare doctrine spanning both competition and data protection law; on the economics of privacy see Alessandro Acquisti, Curtis Taylor and Liad Wagman, 'The Economics of Privacy' (2016) 52 (2) Journal of Economic Literature; pessimistic Bertin Martens at the Consumer Law Days 2019 (n 48) 231, considering the economic value of privacy still insufficiently understood and economics thus of little help for balancing welfare with data protection interests; optimistic Ryan Calo, 'Privacy and Markets: A Love Story' (2016) 91 (2) Notre Dame Law Review 649.
60 Cf. European Data Protection Supervisor, Preliminary Opinion: Privacy and competitiveness in the age of big data: The interplay between data protection, competition law and consumer protection in the Digital Economy (2014) https://edps.europa.eu/sites/edp/files/publication/14-03-26_competitition_law_big_data_en.pdf para 71: "Given the reach and dynamic growth in online services, it may therefore be necessary to develop a concept of consumer harm, particularly through violation of rights to data protection, for competition enforcement in digital sectors of the economy".
61 There are complex differentiations regarding the concepts of "privacy" and "personality" and their interrelation. Elaborating on these lies beyond the scope of this article.
62 On the relationship between privacy and intellectual property cf. Diana Liebenau, 'What Intellectual Property Can Learn from Informational Privacy, and Vice Versa' (2016) 30 (1) Harvard Journal of Law and Technology 285; on a historical sidenote, it seems illustrative to recall that influential German scholar Josef Kohler once

Cybersecurity
Cybersecurity is of key importance for functioning of and trust in AI and IoT ecosystems. The "autonomous car", which needs to be prevented from being hacked, again furnishes an illustrative example. While the legal theory of cybersecurity still appears in its infancy, its character as market conduct rules seems rather undisputed. 65 Liability under UCL in case of violation of such rules can act as an (additional) incentive for companies to adequately safeguard the respective standards. 66

IX. The contribution of UCL to the regulatory framework for fostering AI innovation
Another key promise of AI is fostering innovation. UCL can contribute to an innovation-enhancing legal framework in at least three dimensions, which are to be outlined in the following.
considered the whole body of (b2b) UCL as protecting the "personality interests" of companies (cf. Josef Kohler, Der unlautere Wettbewerb (Rothschild 1914) 17 ff. One can still reflect on whether to locate in particular trade secrecy interests purely in the realm of economics, to view them from an IP angle, or to theorize them in conjunction with privacy and "corporate personality" paradigms. 63

1.
Data access on UCL grounds
Data access is key for AI innovation. Especially "Machine Learning" heavily relies on data. The debate on access to such data has advanced quite far over the last couple of years. 67 Yet, only rarely has the option of inferring data access regimes from the realm of UCL been considered in the discussion. 68 If data access interests can be located in realms traditionally associated with UCL, they should be located there and not elsewhere, for reasons of systematic coherence. 69 In addition, however, the potential of UCL to act as an innovative "catch basin" for competition-related issues for which no other systematic realm is intuitively or prominently compelling or in sight should also be explored. 70 A UCL approach could address access issues relating both to the b2b and to the b2c dimension, and does not seem to require explicit legislation de lege ferenda, 71 although such legislation would certainly have benefits for legal clarity. Rather, for the time being, UCL's general clauses could "do the job".
First, horizontal claims could result as a consequence of a "deliberate obstruction of competitors", which in German UCL is an established "small general clause" for b2b conduct that is deemed unfair according to an overall balancing of interests. 72 74 This should also be seen against the backdrop of the discussion on potentially "exporting" the concept of "relative market power" anchored in German antitrust law (section 20 GWB), which has no equivalent on the European level, but potentially considerable relevance for regulating the digital economy. 75 It does not appear compelling to base such considerations, should other jurisdictions consider adopting them, in antitrust; rather, a valid option is to construe them, as systematic hybrid phenomena, under UCL as well. In substantive terms, some kind of power asymmetry (yet below the dominance threshold) would probably (co-)determine the reference point for intervention. 76 Second, especially regarding access desires of consumers, UCL appears the ideal systematic place, as its b2c dimension is commonly categorized as belonging to the realm of "consumer protection law". 77 To the extent access corresponds with portability, a UCL-based portability regime could be theorized in conjunction with the role model of Art. 20 GDPR under a common vision of digital consumer welfare. As regards the substantive standard for granting such access, it has been proposed to rely on the necessity of certain data for the optimal use of a connected device and to structure the claim as a "claim to connectivity" that goes even beyond portability. 78

73 (…) element of FRAND with a claim based on unfair competition law appears apt at least on the terminological surface.
74 For an overview of potential market failures relating to data access see Bertin Martens, 'Data Access, Consumer Interests and Social Welfare: An Economic Perspective' (2020) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3605383.
75
78 According to Drexl (n 48) 238 it appears "fair" to grant data access to consumers who need such access in order to use their device in an economically sound manner; on consumer access needs in the IoT cf. also Drexl (n 68); Josef Drexl, Data access and control in the era of connected devices (2019) https://www.beuc.eu/publications/beuc-x-2018-121_data_access_and_control_in_the_area_of_connected_devices.pdf.

2.
Locating UCL in the "AI & IP" discourse: market-sensitive investment protection

While the whole academic world seems to be talking about traditional IP protection for AI and its outputs, especially copyright and patent protection, 79 the protection of the respective subject matter via UCL has received little scholarly attention. 80 It is time to fill this void. 81

a) Protecting AI innovation via UCL: Practical perils and theoretical horizons

A long and controversial discussion revolves around the extent to which protection against the imitation of intangible subject matter can be granted on UCL grounds in parallel or in addition to existing IP law. The concrete design of such doctrines varies across EU member states and internationally. 82 Whereas the traditional approach was to prohibit, beyond UCL-specific cases such as deceptive imitations, "slavish" or "parasitic" imitation on "moral" grounds, 83 irrespective of market effects, thus ultimately de facto expanding IP protection, modern doctrine highlights the potential of UCL to act as a flexible and market-sensitive protection regime. 84 Recognizing this distinction is key for the following considerations: Whereas in practical terms UCL in its current form as applied by the courts, partly still adhering to moral doctrines of old, poses a danger of additionally overprotecting public domain subject matter, 85 the modern, market-sensitive economic understanding bears considerable potential. This potential may manifest itself in three dimensions: First, on an abstract legal theory note, as symbolizing an overarching regulatory paradigm for an approach to the protection of intangible goods tailored, namely flexible enough, for the data economy. Second, on a note drawing therefrom and relating to de lege ferenda considerations, as an alternative to the introduction of new IP rights in instances of uncertainty.
Third, on a classical and again intertwined note relating to the function of granting protection supplementary to IP de lege lata, as taking account of the fact that AI might fundamentally alter the IP landscape, and correspondingly also reshape the interaction of this landscape with UCL. Although these three dimensions are obviously closely entangled, the following analysis will build on them as a broad three-fold structure.
Starting with the overarching legal theory characteristics of UCL in its modern understanding, these appear to make it a perfect match for regulating AI innovation in general. To put these characteristics in a nutshell: UCL protection is conduct-based rather than subject-matter-oriented, 86 it is highly flexible instead of relying on standardized, pre-determined criteria, and it is sensitive to welfare economic insights, i.e. granted to the extent necessary to remedy market failure, irrespective of whether this market failure results from over- or under-protection in the IP realm. The detrimental downsides are a lack of legal certainty, the numerous remaining shortcomings of economic wisdom, and the complexities of practical application. 87 Among the flexibility features is notably that UCL protection has no pre-determined term: It thus theoretically bears the potential of lasting exactly as long as needed for the amortisation of investments, 88 whereas a gap between the formal term of protection and the actual need for protection has long been found a welfare-endangering problem of IP law. 89 This becomes all the more pertinent in the AI context, which is characterized by very dynamic production cycles that are hard to align with abstract protection terms. 90 Also, protection can be granted in a sector-specific manner. 91 As regards the feature of conduct-reliance, a common problem in the AI and IoT realm are difficulties both in the definition 92 and in the allocation 93 of the subject matter of protection. In such dubious instances, it seems a viable "way around" to rather look at the welfare effects of conduct, shifting the problem from the realm of the technical to the realm of the economic. 94 Furthermore, the economic-functional characteristics of UCL seem especially fitting for the protection of "AI-generated" intangible goods. The problematic cases of interest for legal academic debate are characterized by the absence of notable human effort or guidance. 95 Thus, the market focus of UCL seems an especially suitable regulatory match. Protecting the personality and interests of human "creators" (understood in a wide sense not limited to copyright) has always been a key justification for granting intellectual property rights. 96 Yet, in the absence of humans whose interests have to be given weight in the balancing exercise, with "inventors" replaced by "investors", 97 a not only "more" but "purely economic approach" to such constellations on the doctrinally apt grounds of UCL appears an appropriate framework. 98 In each case, one would have to investigate who made relevant investments and whether their recoupment is endangered by free-riding. On a legal theory note, one could thereby uphold a differentiation between an anthropocentrically construed, "classic" intellectual property law in the continental European "droit d'auteur" tradition on the one hand, and a purely economic market regime for AI on the other hand. 99

Coming to the lex ferenda dimension of UCL considerations, UCL has traditionally been attributed a "pacesetter function", meaning that protection was granted on UCL grounds before the respective doctrines eventually materialized into a full-fledged IP right. 100 This feature should be kept in mind when reflecting on potential new protection regimes, especially for computer-generated "works", but also for data or ML models. 101 As long as it is simply unclear whether there is an economic need for introducing such rights, i.e. whether there is a market failure in need of remedy, 102 it seems wise to refrain from hastily and prematurely establishing new and potentially dysfunctional full IP rights. Rather, one could thoroughly monitor how things develop, gather economic evidence and insights, flexibly grant protection on UCL grounds, and codify the parameters established in this course once a constant need for protection has materialized. 103 Of course, the economic costs of potentially dysfunctional market intervention on uncertain grounds 104 have to be weighed against those of legal uncertainty in the absence of a clearly defined right, 105 and against those associated with the lack of harmonisation of b2b UCL. 106

Lastly, as regards the concrete application of UCL protection, the need to fine-tune the doctrinal requirements of the assessment is to be highlighted. Specific criteria common in legal orders can be categorized in two strands: First, on historical and systematic grounds, they materialize as "specific unfairness" parameters aligned with other traditional paradigms of UCL, in particular confusion over origin as a problem of market transparency, or means of knowledge gathering related to breach of trade secrecy. 107 Second, they can serve as functional equivalents to IP protection thresholds, such as the German doctrine requiring subject matter to dispose of "competitive originality" (wettbewerbliche Eigenart). This criterion has always remained dubious, and, as it relies on paradigms of visuality, its suitability is additionally challenged in digital contexts. 108 Yet such criteria are, within the boundaries of legal methodology, generally open to flexible development by courts and scholarship, and they should be developed accordingly with a view to the needs and characteristics of the digital economy. 109 Ultimately, their goal must be to give, on a more concrete level of abstraction, guidance to courts for operationalizing the market failure assessment. In this context, if one combines the notions of data access regimes on UCL grounds on the one hand (see section IX.1. above) and data protection regimes on the other hand, an integrated UCL approach bears the potential of progressively contributing to finding the widely sought optimal balance between access and protection. UCL could provide the breeding ground for considering totally new approaches from scratch.

86 Cf. Drexl (n 73) 278 para 112: "This however questions the very appropriateness of a property approach to regulating that economy. IP systems are largely based on the paradigm of protecting intangible assets, such as technologies in particular, that play a role as input in the production of physical goods. Such a paradigm does not seem to fit a world in which customers have to rely on real-time and accurate information as an input."
87 Cf. Rupprecht Podszun, 'Der "more economic approach" im Lauterkeitsrecht'
90 "(…) wrong decisions, asking the question of how long data should be protected will simply miss the needs of this economy."
91 Contemplating sector-specific protection so to say constitutes the mirror image of the current debate on sector-specific data access regimes.
92 This relates in particular to the dynamism of subject matter such as self-learning or "evolutionary" algorithms.
93 This is reflected in the prominent debate on who should own the rights in "AI generated" output; on a more visionary note, a general blurring of "actors" within global informational networks has been diagnosed, with the proposal of, in response, ultimately holding "conduct itself" liable, cf. Gunther Teubner, 'Digitale Rechtssubjekte?' (2018) 218 AcP 155, 202.
94 Comparable proposals have been made as to the re-construction of copyright law: namely, instead of technically looking at "reproductions", undertaking a "principle-
99 Of course, this goes with the caveat that the "romantic", anthropocentric understanding of IP has to a certain extent been overridden by industry-determined market realities, see Hilty, Hoffmann and Scheuerer (n 37) 27.
100 On the "pacesetter function" of UCL vis a vis introducing new intellectual property rights see Zech (n 10) 161 f; Ohly (n 84) 522 f; Kur (n 84) calls UCL an "incubator" for new IP rights; emphasizing the "interim" character of a UCL solution in the AI context Dornis (n 80) 44; id. (n 80) 1252.
101 Cf. Céline Castets-Renard, 'The Intersection between AI and IP: Conflict or Complementarity?' (2020) 51 (2) IIC 141, 142: "(…) the lawmaker may be led to consider that a sui generis system of IP rights for AI-generated inventions should be raised to adjust innovation incentives for AI."; in favour of new IP regimes Dornis (n 80) 1257 and 1264.
102 Not identifying market failure regarding AI tools and outlining context-dependency of market failure regarding AI outputs Hilty, Hoffmann and Scheuerer (n 37) 15 ff; considering market failure regarding training data possible Philipp Hacker, 'Immaterialgüterrechtlicher Schutz von KI-Trainingsdaten' (2020) GRUR 1025, 1033; assuming an economic need for protection Dornis (n 80) 1264; yet all authors acknowledge the lack of clear empirical evidence. Absent such evidence, the whole market failure standard ultimately comes down to an allocation of the burden of proof or burden of justification, with the main starting point options being either the status quo or the freedom principle.
103 Critical on the introduction of new IP rights for trained AI Zech (n 97) 1146: "Any reaction of IP law beyond jurisprudence and interpretative guidance has to be handled with care. New investment protection rights should only be introduced if otherwise a clear market failure is to be expected. In the area of artificial intelligence, this seems not to be the case."; on sufficiency of (inter alia) UCL with regard to protection of AI data cf. also Peter R Slowinski, '
One concrete area of relevance seems to lie in giving impulses for the reform of the sui generis database protection right: The calls for such reform are getting more and more nuanced, 110 and they include the finding that data protection and data access need to be seen in conjunction when forming a new and adequate regime.

b) Application to AI components

When concretely applying these considerations to AI, it appears apt to structure the assessment along the steps of the Machine Learning process, i.e. training data, learning process, and output. 111 A substantive evaluation of market failure regarding these phenomena lies beyond the scope of this paper. 112 Rather, this section aims at illuminating some abstract doctrinal paradigms for meeting potential market failures. Starting with training data, applying UCL protection to data 113 in general has been discussed for quite some time, in particular as an alternative approach to, or argument against, the introduction of a new property right in data. 114 Consequently, UCL can also constitute a means of protection against the misappropriation of a specific sub-phenomenon of data, namely AI training data, meaning protection against the creation of another AI model by using the same training data as a competitor. 115 The trait of temporal dynamism has led scholars to compare data with fashion, and correspondingly to consider the dynamic legal protection of fashion under UCL a role model for the legal protection of data. The assumption goes that both are of high value but short-lived, so that at least registered IP rights seem(ed) inappropriate for optimal protection. 116 Whether one considers these parallels convincing or not, in any (and every) case the specific economic features of AI training data, in particular the investments needed to generate or obtain them, have to be taken into account. 117 As far as the assessment further depends on the abovementioned doctrinal requirements of the respective legal order, 118 it is in particular debated whether data dispose of "competitive originality", and it is assumed that in most cases they do not. 119

Coming to protection for AI algorithms, a technical and a legal distinction have to be made: The optimization algorithms on the basis of which a model is trained constitute classical software to the extent they are written in computer code, 120 whereas algorithms as such never fall under IP protection. Thus, not only is their treatment under copyright and patent law the same as that of classical software, 121 but the usual paradigms of UCL protection for software also apply. 122 In this regard, it is worth noting that the fashion argument outlined above has also been made regarding computer programs. 123 These can generally be protected under UCL, 124 yet conclusive decisions of copyright and patent law as to the scope of their (non-)protection must be heeded, which leaves little room for practical relevance of UCL in light of the IP protection given to these phenomena. The case is more complex for trained AI models, i.e. the actual AI tools: Whether or to what extent such models, including the "weights" they comprise, are subject to protection under copyright and patent law is debated, 125 with especially their dynamic, changing nature potentially altering traditional IP paradigms. Thus, on the one hand, one may consider the relevance of UCL greater than with respect to optimization algorithms, since due to its conduct-based flexibility, the application of UCL to trained ML models could flexibly address market failure in these uncertain contexts. 126 On the other hand, if models do not fall under IP software protection, this decision should generally not be circumvented or overridden by UCL protection.
Lastly, as regards AI generated output, heeding the systematic decisions of IP law must again be the key directive for applying UCL. As already mentioned above, its potential relevance lies especially in cases where IP protection for AI generated output is not given due to the lack of a human author, inventor or designer. 127 Yet whether this lack is systematically deliberate or accidental remains ambiguous: "AI generated output" was simply not imaginable at the time the respective laws were enacted, an argument that can be turned in one direction or the other. 128 In any case, two developments relating to AI generated intangible goods need to be closely monitored: First, empirical economic insights into the market structures, in order to identify market failure or its absence. Second, the legal theory discussion on the impact the lack of human involvement could have on the IP paradigm of the "public domain". 129

3.
The UCL dimension of trade secret protection: doctrinal guidance and theoretical background for flexible hybrid regimes

Lastly, the UCL dimension of trade secret protection can act as a legal theory cornerstone for regulating the AI economy. In the EU, trade secret protection has meanwhile been codified as a separate body of law. 130 Yet, it still has roots in and references to UCL, in particular when relying on an "(un)fairness" standard as a subsidiary general clause covering infringing acts. 131 Trade secret protection is an important building block in the AI/IP protection landscape. 132 Data, algorithms, models and outputs can all be protected as trade secrets. 133 Notwithstanding certain welfarist ambiguities, 134 the European trade secret regime is widely praised as a balanced and adequate system with regard to the optimal trade-off between exclusivity and access. 135 This is inter alia attributed to the UCL-inspired, flexible, conduct-based approach it adopts. 136 Instead of adopting a full-fledged property angle, 137 the regime was constructed as a doctrinal hybrid between IP and UCL, uniting the "best of both worlds". 138 By these virtues, the trade secret directive can be considered a concrete materialization of the overall legal theory features of a UCL approach outlined above. Not least for the sake of legal coherence, it follows that trade secret protection and UCL should not be considered separate worlds, but (still) be understood and interpreted with a mutual view to one another. After all, the explicit reliance of the TS Directive on UCL standards also bears the potential to revitalize UCL as an area of interest and importance, and to give impulses to refuel the European harmonisation discourse regarding its b2b dimension.

X. Conclusion and outlook
It has been shown that UCL will have a viable role to play in shaping the contours of a market order increasingly determined by AI. It can in numerous ways contribute to the achievement of central regulatory paradigms aiming at the optimal utilization of this new technology for the good of society. Thereby, UCL can and should not only take the passive role of reacting and adjusting its established standards, but, by virtue of its doctrinal flexibilities, pro-actively partake in developing the new standards required to address the manifold challenges AI raises. At the same time, legal issues induced by AI provide the occasion to further reflect on and refine the nature and core of UCL as a still insufficiently understood body of law. Due to its characteristic flexibility, UCL displays an extraordinary reliance on and responsiveness to societal, economic and technological changes. Such changes are currently, and will for some time to come be, significantly driven by Artificial Intelligence. UCL may in this course be considerably reshaped and advanced as a genuine building block of the regulation of the digital economy, the competitive order, and society.

132
134 The effects of trade secret protection on AI innovation are ambivalent insofar as, on the one hand, the regime provides some extent of exclusivity, thereby protecting investments and safeguarding innovation incentives, while on the other hand it also creates obstacles vis a vis third parties that want to use e.g. certain data to train their ML models.
135 Leistner (n 104) 18 ff.
136 Drexl (n 73) 269 para 56: "(…) such further limited protection can be considered as better suited to serve the purposes of the data economy, by focussing on the particular way in which a third party has specifically acquired access to the data instead of granting exclusive protection against the use of data."
137 Ibid. 291, para 182: "Rather than recognising exclusive control over any use of protected information, as would be typical for intellectual property regimes, EU trade secrets law implements a tort law approach that bans specific conduct related to the acquisition, dissemination and use of trade secrets that can be considered unfair."
138 On the advantages of legal hybrids see Ohly (n 84) 86 ff.