Prohibited artificial intelligence practices according to Article 5 of the European Union’s regulation on AI—between the ‘too late’ and the ‘not enough’

Uroš Ćemalović

International Journal of Law and Information Technology, Volume 32, 2024, eaae023, https://doi.org/10.1093/ijlit/eaae023
Abstract
In June 2024, the European Union adopted the AI Act (AIA), its first comprehensive legal instrument in this field. Notwithstanding the undoubted importance of the very adoption of this act, it is much less certain whether it will significantly contribute to more trustworthy and human-centric AI tools and systems. Focusing on a critical examination of the provisions of the AIA dedicated to prohibited AI practices, this paper argues that it will not, mainly because of the AIA’s late adoption, but also because it does not go far enough in regulating AI.
Introduction
Over the last ten years, the issue of artificial intelligence (AI) has been ever more intensively discussed in various national, transnational (intergovernmental), and supranational fora, bringing together not only officials but also experts and civil society. One of the most outstanding examples is the Global Partnership on Artificial Intelligence (GPAI), ‘a multistakeholder initiative’1 launched in 2020, which ‘aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities’.2 However, the regulation of AI—especially if we focus on legally binding acts—is critically lagging behind this conspicuous enthusiasm for the debate on AI-related issues. While there is a proliferating number of ethics guidelines and rules of conduct—prepared and published by a plethora of different entities and bodies in order to limit the use of unwanted and potentially harmful AI practices and to contribute to what is often referred to as ‘responsible AI’3—the legal acts belonging to so-called ‘hard law’ on AI are significantly less numerous.
There is, however, a clear tendency to regulate AI in a growing number of national legislations all over the globe. In January 2024, the International Association of Privacy Professionals published a document entitled ‘Global AI Law and Policy Tracker’,4 examining ‘the development of comprehensive legislation, focused legislation for specific use cases, national AI strategies or policies, and voluntary guidelines and standards’5 in 23 countries, as well as in the EU. In an overwhelming majority of the analysed countries, national ‘laws and policies’ applicable to some aspects of AI predominantly cover issues such as personal data protection, consumer protection, the digital economy, and various questions related to intellectual property rights (IPR), but rarely comprise legally binding regulation dedicated exclusively to AI. In any case, the EU Regulation of 13 June 2024 laying down harmonized rules on artificial intelligence6 (hereinafter referred to as the AI Act—AIA) is the first supranational legally binding act of this kind.
The proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence, published by the European Commission in April 2021, was the first formal step leading to the adoption of the Union’s regulatory framework exclusively dedicated to AI. This text was the fruit of long and laborious discussions, during which two important milestones were the Commission’s communication entitled ‘Artificial Intelligence for Europe’7 (April 2018) and its Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics8 (February 2020). On 13 March 2024, almost three years after its initial publication, the European Parliament (EP)—at first reading and according to the ordinary legislative procedure—officially adopted the proposal for the AIA.9 Two months later, on 21 May 2024, the Council of the EU approved the text,10 and it was published in the EU’s Official Journal on 12 July 2024.
By the simple fact that it represents the first supranational legal effort to comprehensively regulate AI, the AIA ‘must be welcomed, especially as it provides a basis for an urgently needed constructive dialogue on a matter of extreme and ubiquitous importance’.11 In the same vein, the legislator’s laudable effort to justify the document’s background and its various regulatory solutions—the AIA has 180 recitals—speaks strongly in favour of the open, inclusive, and democratic process in which it has been elaborated. However, the particularities of the decision-making process within the EU and its intrinsic slowness have led to the belated adoption of an act that—striving to satisfy numerous and often contradictory economic, political, and other interests—did not go far enough, potentially leading either to its limited effects or to significant problems in its future enforcement in the EU Member States. Through a critical examination of the provisions of the AIA dedicated to prohibited AI practices, the objective of this paper is to assess whether and to what extent the AIA will serve the purpose for which it was adopted—to promote ‘the uptake of human-centric and trustworthy artificial intelligence’.12 The focus will be on the eight prohibited AI practices defined by Article 5-1, and on how and to what extent the numerous legal standards used in the definition of these practices can lead to significant uncertainties regarding their interpretation.
Prohibited AI practices and difficulties of their enforcement
Article 5-1 of the AIA is exclusively dedicated to the enumeration of prohibited AI practices, providing an exhaustive list of those practices and therefore not allowing the prohibition of any future use of AI that is not provided for in points 5-1(a) to 5-1(h). This nomotechnical choice can be understood from the point of view of general considerations related to the rule of law and legal certainty. However, the extremely rapid development of AI and the various ways of its misuse will soon require the adoption of ever more substantial amendments to the AIA. Taking into consideration both the complexity and the slowness of the EU’s legislative procedures—issues which have been summarily examined in the last chapter—it is very likely that the existing provisions of the AIA will soon be (at least partially) insufficient, inadequate, or inapplicable. Nevertheless, in the current state of the development of AI, the AIA offers more or less adequate answers to the main concerns, especially when AI is misused to ‘provide novel and powerful tools for manipulative, exploitative and social control practices’ (recital 28 of the AIA). In any case, seven of the eight points of Article 5-1 prohibit ‘the placing on the market, the putting into service or the use’ of various AI systems (points (a) to (f)) or biometric categorization systems (point (g)), while the eighth, point (h), concerns only the use of ‘real-time’ remote biometric identification systems.
Subliminal, manipulative, or deceptive techniques
Linguistic, contextual, and teleological interpretation of the provision of Article 5-1(a) indicates that it is a complex legal norm, composed of five mandatory cumulative constitutive elements. In other words, an AI-related practice cannot be deemed prohibited under EU law if it does not fulfil all the conditions, of which four are specific to this particular kind of practice, while the fifth represents a common characteristic of all cases listed in points (a) to (g).13 Therefore, the focus here will be on the four specific conditions characterizing only the practice that can summarily be referred to as the one leading to the distortion of behaviour.
2.1.1. First, an AI system has to deploy at least one of the following three techniques: (i) subliminal techniques beyond a person’s consciousness; (ii) purposefully manipulative techniques; or (iii) deceptive techniques.
2.1.2. Second, the deployment of such a technique (or techniques) has to be characterized by a specific objective or effect which, in itself, has two elements. The first element has to be objectively manifested in reality and consists of a material distortion in the behaviour of ‘a person or a group of persons’; the second element, on the other hand, is situated entirely (and, consequently, can be observed exclusively) on the level of the individual consciousness of the ‘manipulated’ natural person, because the above-mentioned distortion has to be caused by an ‘appreciable’ impairment of the person’s ability to make an informed decision. This impairment can be seen as a subjective element, intrinsically correlated with the objective one; moreover, the level of impairment referred to as ‘appreciable’ can only be judged from the point of view of the latter.
2.1.3. Third, even if it is confirmed—potentially using techniques that are far beyond purely legal considerations and can vaguely be situated somewhere between psychoanalysis and forensic psychology—that the distortion in the behaviour of ‘a person or a group of persons’ was appreciable, this is not sufficient to establish the existence of a prohibited AI practice. It is also necessary to have an objective manifestation of this distortion in behaviour: the fact that this particular person has taken ‘a decision that that person would not have otherwise taken’. This element is probably the weakest point of the entire construction of this particular type of prohibited AI practice. Even if we imagine a situation in which it is established, first, that a technique used had the objective/effect of ‘materially distorting’ a behaviour and, second, that the person’s ability to make an informed decision was ‘appreciably impaired’, how can we make sure that the fact that this particular person has taken a different decision than the one he/she usually takes is a direct consequence of an AI practice? What can be considered a legitimate change in a person’s decision-making routines, so that it can be certified as authentic, genuine, and not influenced by ‘an AI system that deploys subliminal techniques’? In other words, how can we make sure that this particular person ‘would not have otherwise taken’ a different decision even in the total absence of any AI system deploying ‘subliminal, purposefully manipulative or deceptive techniques’? Is this provision of the newly adopted EU act indirectly implying that human behaviour is usually almost entirely predictable, and that ‘an informed decision’—at least in a universe limited to a person’s individual cognitive abilities and ethical preferences—is a mathematically precise value, the disturbance of which can easily be observed, thus entirely excluding the possibility of any ‘unwanted’ decision? To put it bluntly, whose intelligence is really artificial here? All these questions touch on at least two very sensitive and mainly meta-legal issues. The first is widely discussed in philosophy and concerns the complex relations between human perception and free will,14 while the second concerns the importance of both the liberty of consciousness and the freedom of opinion, not only for the values of human rights and democracy but also for the concept of the society that gave birth to them.
2.1.4. Fourth, the ‘unwanted’ decision referred to in the previous paragraph has to cause, or be likely to cause, significant harm to a broadly defined list of entities (the person who made the decision, but also another person or a group of persons). While its apparent objectivity suggests that this element of the provision will not lead to problems of interpretation, it nevertheless raises at least two other important questions. First, in order for an AI practice to be prohibited under the provision of Article 5-1(a), the harm has to be a ‘direct effect’ of the distorted behaviour that has led to an unwanted decision; in other words, a causal link has to be established between, on the one hand, the decision of a person/group of persons (taken under the influence of an AI system) and, on the other, the harm caused. Second, this harm has to be significant, which is yet another standard potentially leading to diverging interpretations.
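Although a legal test cannot be reduced to boolean inputs, the cumulative structure reconstructed above can be summarized in a short illustrative sketch. All identifiers below are hypothetical and introduced purely for exposition; the Regulation contains no such checklist, and the sketch simply restates this sub-chapter’s reading of Article 5-1(a), under which the failure of any single element defeats the prohibition.

```python
from dataclasses import dataclass, fields


@dataclass
class Article51aAssessment:
    """Illustrative decomposition of Article 5-1(a) AIA as read in this sub-chapter.

    The flags are expository assumptions, not statutory terminology.
    """

    # Common element of points (a) to (g): placing on the market,
    # putting into service, or use of the AI system.
    placed_on_market_put_into_service_or_used: bool
    # (1) Deployment of subliminal, purposefully manipulative or deceptive techniques.
    deploys_subliminal_manipulative_or_deceptive_technique: bool
    # (2) Objective or effect of materially distorting behaviour (objective element).
    materially_distorts_behaviour: bool
    # (2) Appreciable impairment of the ability to make an informed decision (subjective element).
    appreciably_impairs_informed_decision: bool
    # (3) A decision is taken that the person would not have otherwise taken.
    decision_not_otherwise_taken: bool
    # (4) Significant harm is caused or is reasonably likely to be caused.
    causes_or_is_likely_to_cause_significant_harm: bool

    def prohibited(self) -> bool:
        # The elements are cumulative: if any single one is missing,
        # the practice falls outside the prohibition of Article 5-1(a).
        return all(getattr(self, f.name) for f in fields(self))
```

On this reading, the counterfactual element (the decision ‘not otherwise taken’) is the flag most likely to remain unprovable in practice, which is precisely the fragility discussed above.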
Exploitation of vulnerability
As was the case with the techniques leading to the distortion of behaviour, this type of prohibited AI practice is composed of several mandatory cumulative constitutive elements, one of which is exactly the same (the material distortion of behaviour), but devoid of the component related to the appreciable impairment of the person’s ability to make an informed decision. The second common element of the two types of prohibited AI practices is the significant harm caused, albeit with some terminological and substantive specificities compared to the practice defined in Article 5-1(a). It is, therefore, necessary to examine in detail all the constitutive elements of this prohibited AI practice.
2.2.1. First, an AI system has to exploit ‘any of the vulnerabilities of a person or a specific group of persons’, but only if they are due to the person’s/group’s ‘age, disability or a specific social or economic situation’. It should be noted that there are a number of concrete situations in which a person/group can find itself that could only be encompassed by the notion of ‘specific social situation’ if it is interpreted lato sensu. For example, can citizenship, ethnic or racial belonging, mother tongue, or other languages spoken be considered a specific social situation? In spite of a tendency in sociology and political science to establish a connection between citizenship and social class,15 recent studies are more inclined to question this connection deeply, claiming that the crisis of the welfare state is accompanied by ‘the tensions of present-day unsustainable society where social inequalities are growing dramatically’.16 Moreover, the settled interpretation of the Court of Justice of the EU undoubtedly goes in the direction of a neat decoupling of the notions of citizenship (nationalité) and social situation,17 arguing in favour of an interpretation whereby ‘social status’ indicates all other material conditions of life that are not necessarily related to income.18 Therefore, it is difficult to imagine that the EU legislation on prohibited AI practices will be applicable if a person’s/group’s vulnerability is due to their citizenship, ethnicity, or the language(s) they (do not) speak, even if it is technically possible to design an AI system exploiting vulnerabilities based on the above-mentioned criteria.19
2.2.2. Second, the exploitation of a vulnerability analysed in the previous paragraph has to have the objective or the effect of materially distorting the behaviour of a person. At first glance, this condition can seem identical to the first of the two elements analysed in paragraph 2.1.2., in the context of subliminal and other techniques leading to the distortion of behaviour. However, unlike for the latter, the distortion of behaviour cannot affect ‘a person or a group of persons’ but only ‘that person or a person belonging to that group’. Consequently, the vulnerability referred to in paragraph 2.2.1. can be either individual or collective but, in order for an AI practice to be prohibited under EU law, the distortion of behaviour can only concern a natural person, independently of whether it exploits her/his individual or collective vulnerability. In other words, vulnerability as such can exist in foro externo, but only if it affects individual behaviour.
2.2.3. Third and finally, the material distortion of behaviour has to be followed by a ‘significant harm’, yet another element comparable to the one already examined in paragraph 2.1.4. However, there are two important specificities. First, by analogy with what has already been concluded in the previous paragraph, the harm can only be individual and not inflicted on the group. Second, unlike for subliminal, manipulative, or deceptive techniques, it is not necessary that the significant harm has actually taken place; it is sufficient that this harm is ‘reasonably likely’ to happen. Even if both the literature and the case law are abundant when it comes to the interpretation of this terminus technicus, it is not yet known how the Court of Justice of the EU will interpret what constitutes a reasonable likelihood of harm in the context of prohibited AI practices.
Evaluation or classification of persons or groups of persons
While the two types of prohibited AI practices analysed so far concern techniques aimed at changing the behaviour of a person, the one examined in this sub-chapter takes social behaviour as one of its input values. There are four fundamental cumulative requirements, of different levels of complexity, which have to be fulfilled in order to consider this AI practice prohibited under EU law.
2.3.1. The purpose of an AI system placed on the market, put into service, or used has to be either the evaluation or the classification of persons or groups of persons. In other words, this system is not characterized by the fact that it exerts any kind of influence over natural persons: it does not have to deploy subliminal techniques or exploit a person’s vulnerabilities; its major constitutive element is the aim with which it was initially designed. This does not mean that, in order to be prohibited, this AI practice can be entirely devoid of real-life consequences (see 2.3.4. below); however, its fundamental differentia specifica is the fact that its ‘intended use’ is either to evaluate or to classify persons/groups of persons. The only requirement in this respect is that this evaluation or classification is based on inputs gathered ‘over a certain period of time’. Even if, for a potential end user, the score/characteristic assigned by an AI system to a person/group can seem to be produced instantly, it is the consequence of what can be called observation and tracking.
2.3.2. The evaluation/classification performed by an AI system has to be based either on the social behaviour or on the personal/personality characteristics of the person or group concerned. It is particularly terrifying to further elaborate on all the potential methods and techniques by which AI can gather and process data on the behaviour or personality of the observed individual(s). For some of those techniques, one does not have to look far—‘untargeted scraping of facial images from the internet or CCTV footage’, but also the ‘profiling’ mentioned in points (e) and (d) of the same Article 5-1 (see sub-chapters 2.5. and 2.4.)—while others may be provided by the extremely rapid development of AI, but also by the Internet of Things (IoT)20 and various recording technologies. Given that ‘paradigms such as the IoT are transforming society, bringing humans closer to their devices than ever before’ (Nieto & Rios 2019, 78), the potential use of data produced by IoT devices for the purposes of evaluation or classification of those same humans offers an eerie perspective on technology.
2.3.3. In order to be considered prohibited under EU law, an AI system having as its purpose the evaluation or classification of natural persons or groups of persons must also be able to produce some kind of ‘social score’. Even if, from a technical—and, more precisely, mathematical—point of view this score has to have a numerical expression (eg, a value on a scale from 1 to 5 or from 0 to 10, where the highest value would represent the best possible social score), this precise numerical value is not indispensable from the point of view of the AIA. Even if an AI system would, in the background, almost certainly operate with one or several mathematically expressible values, the element the EU legislation insists on is only that this score can be used for the purposes of evaluation or classification of natural persons or their groups, leading to at least one of two potential concrete kinds of treatment (see 2.3.4. below). Therefore, this score has to have as a constitutive element the classification of all targeted persons/groups into at least two categories, on the basis of which different treatments can be applied to them. For example, a higher social score could be used as a basis for the attribution of certain rights and/or advantages to which access is denied to persons/groups having a lower score.21
2.3.4. Finally, apart from having a specific purpose (the evaluation or classification of persons/groups, paragraph 2.3.1.), a concrete source of data upon which it is based (social behaviour or personal/personality characteristics, paragraph 2.3.2.), and a social score as an indispensable element (paragraph 2.3.3.), the AIA prohibits this kind of AI practice only if it can have some real-life consequences. Namely, the mere existence of a social score is not enough; it is also necessary that it leads to at least one concrete kind of ‘detrimental or unfavourable treatment’ of a person/group. First, this treatment can take place ‘in social contexts that are unrelated to the contexts in which the data was originally generated or collected’ [point (i)]; second, it can be ‘unjustified or disproportionate’ to the person’s/group’s social behaviour or the gravity of such behaviour [point (ii)]. Even if, at least in theory, it is understandable22 why the EU legislators insisted on this fourth element in order to ban this AI practice, the future enforcement of the provision of Article 5-1(c), points (i) and (ii), will encounter at least two significant problems of interpretation. As for point (i), in the context of the rapid development of social media and changing cultural paradigms, the notion of an ‘unrelated social context’ can be blurred, not to mention that its understanding can vary significantly as a function of national, regional, ethnic, cultural, and other specificities. When it comes to point (ii), the same can be said about the entire formulation ‘unjustified or disproportionate to their social behaviour or its gravity’. According to which standards will the above-mentioned proportionality and gravity be interpreted? How would the diverging national legal heritages of the different EU Member States—especially when it comes to the legal framework related to the gravity of ‘social behaviour’ and its judicial and administrative interpretation—deal with the need for uniform interpretation and full enforcement of the AIA? It seems that the EU legislators, led by the wish to be exhaustive and frightened by the prospect of antagonizing AI developers, have opened a wide path to the inapplicability of the entire provision of Article 5-1(c).
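The test reconstructed in paragraphs 2.3.1. to 2.3.4. can be sketched in a similarly hedged way: the first three requirements are cumulative, while the fourth is met by either of the two kinds of detrimental or unfavourable treatment. Again, the function and its parameter names are hypothetical illustrations of this sub-chapter’s reading of Article 5-1(c), not an operational compliance tool.

```python
def social_scoring_prohibited(
    purpose_is_evaluation_or_classification: bool,        # paragraph 2.3.1.
    based_on_social_behaviour_or_personal_traits: bool,   # paragraph 2.3.2.
    produces_social_score: bool,                          # paragraph 2.3.3.
    treatment_in_unrelated_social_context: bool,          # paragraph 2.3.4., point (i)
    treatment_unjustified_or_disproportionate: bool,      # paragraph 2.3.4., point (ii)
) -> bool:
    """Illustrative reading of Article 5-1(c) AIA: three cumulative requirements
    plus one alternative fourth requirement."""
    return (
        purpose_is_evaluation_or_classification
        and based_on_social_behaviour_or_personal_traits
        and produces_social_score
        and (
            treatment_in_unrelated_social_context
            or treatment_unjustified_or_disproportionate
        )
    )
```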
Risk assessment through human profiling
The use of technology to prevent the commission of criminal offences is not a new phenomenon; while, most often, it focuses on the criminal investigation23 of offences already committed, the rapid changes brought about by the third and fourth industrial revolutions have led to increased interest in new methods of crime prevention and policing in ‘the world in which crime is committed and prevented becomes ever more technologically-based’.24 Ineluctably, this development has recently come to include AI-based tools and systems, leading to a concrete provision of the AIA dedicated to this issue [Article 5-1(d)]. In order for an AI system performing risk assessments of natural persons for crime prevention purposes to be considered prohibited under EU law, three fundamental cumulative requirements have to be fulfilled, accompanied by one exception.
2.4.1. The most general requirement defined by the EU legislation refers to the very function of an AI system: its aptitude to perform risk assessments of natural persons. Due to the limited competencies of the EU in matters related to policing, criminal investigation, and the prevention of crime, the Union is devoid of a normative framework specifically dedicated to this issue. Therefore, when the Union’s legislation or the various legally non-binding acts adopted by EU instances and bodies mention risk assessment, it is usually in contexts such as cross-border health threats,25 sanitary and environmental risk management,26 or customs.27 In all the above-mentioned cases, even when a risk assessment can be directly or indirectly related to a natural person, it can be considered neither as profiling nor as concerning personality traits and characteristics (see 2.4.3.) and is, therefore, entirely devoid of use value when it comes to the prevention of crime (see 2.4.2.).
2.4.2. Apart from being able to make risk assessments of natural persons, an AI system has to have a specific aim: assessing or predicting the likelihood of a natural person committing a criminal offence. To some extent, this requirement is similar to the purpose of an AI system described in detail in subparagraph 2.3.1. More precisely, while the evaluation or classification of persons/groups of persons is intended to lead to a certain social score, in this case an AI system is supposed to produce an estimation of the likelihood of criminal behaviour. In both situations, the operation performed by AI is essentially the same: putting the entire sample of observed humans into at least two categories, of which, when it comes to risk assessment tools, one is considered to be more inclined to commit an offence. Of course, it is more likely that such a system would be able to produce more nuanced results, essentially comparable to a system of social scores, this time focused only on the probability that the observed individual will commit an offence.
2.4.3. The essential requirement the EU legislation imposes for the prohibition of this type of AI practice is the one related to the basis upon which the system performs the risk assessment: it has to be done either via the profiling of a person or through the assessment of her/his personality traits and characteristics. In both cases, the input data mostly concern behavioural patterns and individual psycho-social profiles, both of which can be considered as belonging to the private sphere, often touching the quintessence of intimacy. In such a context, the most sensitive legal, ethical, and technical question is the one related to the ways an AI system has gathered and processed the input data, almost all of which are potentially subject to the EU and national provisions on data protection. Therefore, this element of the provision of Article 5-1(d) has to be interpreted and enforced having particular regard to the EU’s General Data Protection Regulation, as well as to its interpretation by the Court of Justice.
2.4.4. Finally, even if an AI system cumulatively fulfils all three above-mentioned conditions, it can still be exempted from the prohibition if it is used to support the human assessment of the involvement of a person in a criminal activity. However, this exemption is applicable only if the assessment performed ‘is already based on objective and verifiable facts directly linked to a criminal activity’. While the double conditionality of this exemption can be considered appropriate and, to some extent, reassuring, it remains to be seen how the notions of objective and verifiable facts will be interpreted by the various EU and national judicial and administrative instances.
Facial recognition databases
Taken independently of the use and expansion of AI, automated facial recognition systems (AFRS) have been under development for some 60 years. Back in 1964, the American researcher Bledsoe and his collaborators28 worked on facial recognition computer programming, conceiving ‘a semi-automatic method, where operators are asked to enter twenty computer measures, such as the size of the mouth or the eyes’.29 While, over at least the next two decades, different AFRS were improved in various ways (mainly by adding new markers), the first elements of what could be called an AI-based data-processing tool were introduced in 1988. However, in spite of significant progress related to the accuracy of various AFRS, ‘the precision of this technology is sometimes uncertain and might lead to adverse investigative repercussions’,30 while researchers have documented numerous cases of racial discrimination in the outputs of AI-based facial recognition technologies.31 In such a context, it is clear what motivated the EU legislators to include a provision dedicated to facial recognition databases in the AIA [Article 5-1(e)]. This provision has two important components: the nature of the action performed by an AI system (2.5.1.) and the source of the data it uses (2.5.2.).
2.5.1. If one takes the notion of AFRS in a broad sense, the provision of the AIA is dedicated to only one aspect of this technology: the creation or expansion of facial recognition databases. Even if it is clear that such databases play a pivotal role in any AFRS, the captures used to develop them represent only one of the three aspects of the potential use of AI in facial recognition-based identification: ‘(i) face detection to locate the human in images and videos; (ii) face capture to process analog information into digital data using facial features; and (iii) face matching for verification of identity’.32 Therefore, it can be concluded that, in the current state of its development, the EU legislation covers only some of the three above-mentioned situations.
2.5.2. In order to be considered prohibited under EU law, an AI system that creates or expands facial recognition databases has to do so ‘through the untargeted scraping of facial images from the internet or CCTV footage’. While it is perfectly understandable why Article 5-1(e) focuses on the two most common ways AI obtains its input data (the internet and CCTV footage), it remains dubious why other possible ways to ‘feed’ facial recognition databases have remained outside the scope of this provision. It is entirely imaginable that whole databases of facial images can be supplied in numerous other ways, including, eg, various institutional or personal footage, thus avoiding the ‘untargeted scraping’ mentioned in the AIA. Once again, either out of a wish to be precise and exhaustive—or, even worse, in order not to antagonize the business interests of AI developers—the EU legislators have not gone far enough in regulating prohibited AI practices. While Recital 43 of the AIA rightfully points out that AI systems creating or expanding facial recognition databases ‘should be prohibited because that practice adds to the feeling of mass surveillance and can lead to gross violations of fundamental rights, including the right to privacy’, it remains unclear why other potential sources have been left out, especially when we know that national and international institutions and numerous big companies have entire series of facial images at their disposal.
Inferring human emotions
The ability to have emotions and to express them can be considered one of the most ‘human’ characteristics of mankind, even though ‘it becomes conspicuous that one central human property is never used to define human beings: our emotionality’.33 This becomes particularly noticeable when we take into consideration that—at least for now—there is no machine, automated system, or AI-based tool that would be able to express (and not merely simulate) genuine emotions. Even if one can argue34 that some kinds of animals—such as endothermic vertebrates—are able to behave in certain ways that could be defined as emotional,35 emotions remain fundamentally bio-cultural processes, of which any kind of automated system is intrinsically incapable. However, AI can be used as a tool to infer human emotions and, under certain conditions, the AIA considers this practice prohibited. It is, therefore, first necessary to focus on the way the EU legislation understands the notion of emotions (2.6.1.), before turning to the concrete AI practices that can be considered prohibited (2.6.2.) and the exceptions that can legitimize the use of these practices (2.6.3.).
2.6.1. The normative part of the AIA does not provide a definition of the notion of emotions. Nevertheless, its Recital 18 stipulates that ‘The notion of “emotion recognition system” […] refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, and amusement’ and, therefore, ‘does not include physical states, such as pain or fatigue’, nor the ‘mere detection of readily apparent expressions, gestures, or movements, unless they are used for identifying or inferring emotions’. It is interesting to note that, while recital 18 also refers to ‘intentions’, the provision of Article 5-1(f) mentions only emotions. Given that recitals are, in principle, devoid of legally binding effect and can be used only to interpret other provisions of the act, it will be interesting to see how the EU courts interpret this discrepancy. The need to protect privacy and to limit the spread of potentially dangerous AI practices strongly speaks in favour of the broadest possible interpretation of the notion of emotions.
2.6.2. In order to be considered prohibited under the AIA, AI systems capable of inferring human emotions have to be deployed ‘in the areas of workplace and education institutions’. It is not easy to speculate what concretely motivated the EU legislators to adopt such a restrictive approach, especially given that the provision of Article 5-1(f) also comprises significant exceptions (see 2.6.3. below). There are numerous entirely public places (such as, eg, all non-private and non-restricted urban areas, transportation, and various administrative and judicial institutions) in which the use of such AI systems would be equally or even more detrimental. It is foreseeable that the development and deployment of such systems in the relatively near future will impose the need to amend this provision.
2.6.3. To the restrictive approach described in the previous paragraph, the EU legislators have added one significant exception: AI systems for the detection of emotions are not forbidden by the AIA when their use ‘is intended to be put in place or into the market for medical or safety reasons’. While the notion of medical reasons would, most probably, raise only moderately difficult problems of interpretation, the reasons related to safety could represent a sticking point for the proper implementation of this provision. For example, different employers may have significantly different understandings of the notion of safety in the workplace, while the protection of the safety of educators and school children could be used as a pretext for the deployment of AI systems for the purpose of identifying or inferring emotions.
Biometric categorization of humans
In some of its important elements, the biometric categorization of natural persons governed by the provision of Article 5-1(g) of the AIA is similar to the risk assessment through human profiling already analysed in sub-chapter 2.4. However, there are two crucial differences. First, while profiling is focused on the personality traits and characteristics of natural persons, biometric categorization has as its objective ‘to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation’; in other words, while the latter focuses on elements related to a person’s belonging to a group (race, political opinions, religious or philosophical beliefs, sexual orientation), the former is more individualized (eg, it can concern a person’s emotional self-control, or her/his tendency to exhibit socially unacceptable or aggressive behaviour or to consume psychoactive substances). Second, in order to be considered prohibited under the AIA, risk assessment through human profiling has to be focused on the evaluation/prediction of the likelihood of a natural person committing a criminal offence, while, for biometric categorization, the AIA does not contain a component related to the prediction of a person’s future behaviour. Therefore, the further analysis will first focus on the source used by biometric categorization systems (2.7.1.), before turning to the issue of their prohibited use (2.7.2.); finally, one exception will be examined, defining the situation in which this AI practice cannot be considered prohibited (2.7.3.).
2.7.1. Biometric categorization systems are prohibited under the AIA if their purpose is to generate categorized datasets of natural persons. In spite of the fact that the provision of Article 5-1(g) uses the expression ‘systems that categorize […] persons based on their biometric data’, teleological and contextual analysis allows the conclusion that, while biometric data represent the input value upon which the system is ‘based’, it is not impossible to imagine that this AI practice is still prohibited under EU law when some of its inputs are not biometric data taken stricto sensu, but other relevant information the system uses to classify each individual into an appropriate group.
2.7.2. Probably the weakest point of the entire provision of Article 5-1(g) is its very restrictive definition of the prohibited use of biometric categorization systems. The AIA comprises an exhaustive enumeration (numerus clausus) of the potential collective characteristics of the classified individuals, which are, at the same time, the key element of their group identity. In other words, according to Article 5-1(g), a prohibited biometric categorization system is one that uses biometric data to deduce or infer one of the following characteristics of a person: race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. On the other hand, recital 16 of the AIA stipulates that the notion of ‘biometric categorization’ referred to in the Regulation ‘should be defined as assigning natural persons to specific categories’, which ‘can relate to aspects such as sex, age, hair colour, eye colour, tattoos, behavioural or personality traits, language, religion, membership of a national minority, sexual or political orientation’. A logical comparative analysis of the two provisions leads to the conclusion that the personal characteristics enumerated in the former are exclusively those that are ‘deduced or inferred’, while the notion of biometric categorization taken lato sensu contained in the latter covers characteristics that can ‘relate to’ a longer list of personal information (including, eg, hair and eye colour, tattoos, and even some behavioural or personality traits). In any case, the enumeration in Article 5-1(g) is exhaustive and does not cover the use of biometric categorization systems that deduce or infer belonging to a national or linguistic minority, nor a person’s behavioural or personality traits. It is not necessary to further elaborate on how this restrictive definition can be potentially detrimental to privacy, as well as to the protection of numerous individual and collective human rights.
2.7.3. Biometric categorization systems exempted from interdiction under the AIA are those related to the ‘labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement’. This provision is to be taken strictissimo sensu, meaning that, in order to benefit from this exemption, biometric datasets have to fulfil two cumulative conditions: first, they should be lawfully acquired and, second, they can be used only ‘in the area of law enforcement’. It is, however, to be noted that the latter expression is rather imprecise and would, most probably, be subject to divergent interpretations by the various EU and national instances. It would have been much more appropriate if Article 5-1(g) had used the formulation ‘for the purposes of law enforcement’ that can be found in some other provisions of the AIA (recital 159, Article 5-1(h), Article 5-2, Article 14-5). However, even the latter formulation can be subject to divergent interpretations in different EU Member States.
Real-time remote biometric identification systems36
As the analysis in the previous sub-chapter has shown, a sound and predictable legal framework for the use of biometric data is crucial for the protection of privacy and human rights and, to an important extent, for safeguarding the rule of law. Mutatis mutandis, the same can be said for real-time remote biometric identification systems in publicly accessible spaces. However, the crucial difference is that, when it comes to biometric categorization systems, the fact that such a system is used for the purposes of law enforcement can, under certain circumstances, ‘make it legal’ under the AIA (see 2.7.3. above). On the contrary, according to the same act, ‘real-time’ remote biometric identification systems are, in principle, considered a prohibited AI practice, ‘unless and in so far as their use is strictly necessary’ for one of the three objectives defined in Article 5-1(h), points (i) to (iii). It is, therefore, first necessary to examine the two major conditions the EU legislation requires in order to consider this AI practice prohibited (2.8.1.), before focusing on the three exceptions that can legitimize it (2.8.2.).
2.8.1. According to the AIA, the use of ‘real-time’ remote biometric identification systems is forbidden if it fulfils the following two conditions: (i) it takes place in a publicly accessible space and (ii) it is carried out for the purposes of law enforcement. In order to ensure the appropriate enforcement of this provision, the two normative standards comprised in the above-mentioned conditions should be further clarified. According to Article 3, point 44 of the AIA, a publicly accessible space means ‘any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions’. For a change, the EU legislators have opted for a broad, comprehensive definition—especially when it comes to the nature of ownership, access restrictions, and the capacity of the place—and this choice will positively influence the future enforcement of this provision. On the other hand, the AIA does not include a definition of the formulation ‘for the purposes of law enforcement’. Following the teleological interpretation of this and other sources of EU law, a broader definition should be opted for, encompassing not only law enforcement taken stricto sensu (policing, border control, and the investigation and repression of criminal activities) but also the enforcement of legal, administrative, and judicial acts belonging to private law.
2.8.2. In order to be exempted from the interdiction described in the previous sub-paragraph, a ‘real-time’ remote biometric identification system has to be strictly necessary for at least one of the following three objectives: (i) the targeted search for persons in specific situations (victims of trafficking, abduction, or sexual exploitation, or missing persons); (ii) the prevention of narrowly defined threats to life or physical safety and of threats of a terrorist attack;37 and (iii) the localization or identification of a person suspected of having committed a criminal offence, under precise and well-defined conditions (only for the purposes of a criminal investigation, prosecution, or the execution of a criminal penalty). Unlike in many other provisions of the AIA, the EU legislators have in this case adopted narrower and more detailed definitions, allowing the national and other EU instances competent for the future enforcement of this act a smoother application of the principle exceptiones sunt strictissimae interpretationis. On the other hand, the main weakness of this sudden enthusiasm of the EU legislators for narrow and precise definitions is that it may still allow political authorities with less democratic tendencies to abuse ‘real-time’ remote biometric identification systems for purposes such as the intimidation of political opponents and free media.
Conclusion
The AIA recently adopted by the competent EU authorities is—as the first act of this scope and ambition—a very important step in the regulation of rapidly developing AI systems. As underlined in its recital 2, its main objective is to facilitate ‘the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI’. However, from the very wording of this provision, it is clear that the EU legislators were fully aware that the wish to boost innovation and employment, on the one hand, and to make AI human-centric and trustworthy, on the other, may often be conflicting objectives. In such a context, a more detailed analysis of the provisions of Article 5-1—which, as this paper argues, in many ways represents the AIA’s core provision—has shown that, in numerous respects, it does not go far enough, introducing a series of legal standards that will raise issues of interpretation and thus potentially lead to an unsatisfactory level and quality of enforcement. For example, it was shown to what extent sound and predictable legal norms on the use of biometric data—as in the case of remote biometric identification systems—are important for the protection of privacy, human rights, freedom of the media, and democracy. Nevertheless, while the anti-democratic and illiberal tendencies of certain political stakeholders (both in Europe and worldwide) represent, and will continue to represent, a serious threat of the deployment of various intrusive and illegitimate AI systems and practices, no less invasive are numerous technology companies and other AI developers in their endless race for quick profit. In such a context, even if it came too late and has not gone far enough, the advent of this new piece of EU legislation brings a ray of hope that a regulated development of AI is possible.
Footnotes
1. GPAI official website <https://gpai.ai/about/>, accessed 20 May 2024.
2. ibid.
3. See Silja Voeneky et al., The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives (CUP 2022).
4. <https://iapp.org/media/pdf/resource_center/global_ai_law_policy_tracker.pdf>, accessed 20 May 2024.
5. ibid, p. 2.
6. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), OJ L series, 12.7.2024.
7. COM(2018) 237 final <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN>, accessed 21 May 2024.
8. COM(2020) 64 final <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020DC0064>, accessed 21 May 2024.
9. European Parliament legislative resolution (P9_TA(2024)0138) <https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html#title1>, accessed 21 May 2024.
10. The working version of the text approved by the EU Council <https://data.consilium.europa.eu/doc/document/PE-24-2024-INIT/en/pdf>, accessed 22 May 2024.
11. Rostam Neuwirth, The EU Artificial Intelligence Act—Regulating Subliminal AI Systems (Routledge 2023).
12. Recital 1 of the AIA.
13. An AI practice can be prohibited only if it consists of at least one of the following three actions: (1) the placing on the market, (2) the putting into service, or (3) the use of the AI system(s) in question. When it comes to point (h), the only action that can be considered prohibited is ‘the use’ of ‘real-time’ remote biometric identification systems in publicly accessible spaces.
14. The literature on this issue is enormous and exceeds the limits of this paper. For an overview of several possible approaches to the topic, see Uri Maoz and Walter Sinnott-Armstrong (eds), Free Will: Philosophers and Neuroscientists in Conversation (OUP 2022); Alfred Mele (ed.), Surrounding Free Will: Philosophy, Psychology, Neuroscience (OUP 2015).
15. See Thomas Humphrey Marshall, Citizenship and Social Class and Other Essays (CUP 1950).
16. David Benassi and Enzo Mingione, ‘Citizenship and the welfare state—T.H. Marshall’ in Marisol García Cabeza and Thomas Faist (eds), Encyclopedia of Citizenship Studies (Elgar Publishing 2024) 53–57.
17. Whenever the Court of Justice of the EU or its Court of First Instance (CFI) refer to the ‘social situation’ of a natural person, they examine it in the light of the broader context of the person’s material conditions of living. For example, in its judgement of 26 September 2000 in case Bärbel Kachelmann (C-322/98), the Court indicates that ‘Bankhaus continues to employ on a full-time basis a female member of staff whose duties are comparable to those of Ms Kachelmann and, “in view of her social situation”, Ms Kachelmann must be considered as having priority in terms of job protection’ (point 13). In a similar vein, in its judgement of 11 May 2005 in Saxonia Edelmetalle GmbH v Commission of the European Communities (joined cases T-111/01 and T-113/01), the CFI mentions the applicant’s ‘economic and social situation and viability’ (point 84), ECLI:EU:T:2005:166.
18. For example, in its judgement of 24 November 2022 in case MCM v Centrala studiestödsnämnden (C-638/20), the CJEU ‘notes that, in accordance with Chapter 3, Paragraph 23, the first subparagraph of the Law on student financial aid, the entitlement to such assistance, which depends neither on the income of the applicant’s parents nor on any other social situation’ (point 14), ECLI:EU:C:2022:916.
19. Concrete examples are numerous, and the limits of potential abusive uses would only depend on the imagination and needs of the designers of malicious AI systems. For example, it is possible to imagine an AI practice that would exclusively or predominantly target the users of a certain language spoken by an ethnic minority living within an EU Member State (MS). Even if the mere fact of belonging to such an ethnic/linguistic group cannot in itself be seen as a vulnerability, it certainly becomes one when AI-based targeting is founded on such an individual characteristic, especially in the light of the minority status (among all other citizens) of this group. The same can be said about the citizens of one MS living in another MS if, for example, they can be targeted not because of their ‘social situation’ taken stricto sensu, but owing to their specificities in comparison to the majority of other citizens.
20. Numerous researchers, in different contexts, have already examined the issue of data produced by IoT devices and their potential use for various, including malicious, purposes; see Safi et al, ‘A Survey on IoT Profiling, Fingerprinting, and Identification’ (2022) ACM Trans Internet Things, <https://www.researchgate.net/profile/Sajjad-Dadkhah/publication/360976914_A_Survey_on_IoT_Profiling_Fingerprinting_and_Identification/links/62a33c7755273755ebe1d786/A-Survey-on-IoT-Profiling-Fingerprinting-and-Identification.pdf>, accessed 27 May 2024; Lee et al, ‘ProFiOt: Abnormal Behavior Profiling (ABP) of IoT devices based on a machine learning approach’ (2017) Proceedings of the 27th International Telecommunication Networks and Applications Conference (ITNAC).
21. One of the most successful, convincing, and artistically very well-executed fictional representations of an operational system of evaluation and classification of persons is given in episode one (entitled ‘Nosedive’) of the third season of the British science fiction television series ‘Black Mirror’, created by Charlie Brooker. Even if this episode follows a person obsessed with her social media ratings—where the ‘score’ is attributed not by an AI-based system, but by the users of the same social media, who are entitled to rate other users on a scale from one to five stars—it marvellously depicts a society where not only social status but also access to numerous fundamental rights exclusively depends on a score attributed by an automated system taking over the prerogatives usually reserved for legally very well regulated and controlled institutions. Moreover, the episode ‘Nosedive’, as well as the entire television series ‘Black Mirror’, has been treated from numerous different angles not only in general media and blogs but also in scientific publications; see, eg, François Allard-Huver and Julie Escurignan, ‘Black Mirror’s Nosedive as a new Panopticon: Interveillance and Digital Parrhesia in Alternative Realities’ in Angela M. Cirucci and Barry Vacker (eds), Black Mirror and Critical Media Theory (Rowman & Littlefield Publishing Group 2018) 43–54; Steven Keslowitz, The Digital Dystopias of Black Mirror and Electric Dreams (McFarland & Company, Jefferson 2020); David Kyle Johnson, Black Mirror and Philosophy: Dark Reflections (e-book, 2019).
22. While the evaluation or, even more so, the classification of human beings on the basis of their social behaviour or personal/personality characteristics—especially when it is accompanied by some kind of ‘social score’—can be considered ethically unacceptable from numerous points of view, it would be difficult to justify the systematic ban of AI systems allowing it in a context where those persons are not treated differently in connection with the results of such evaluation or classification. However, once a hypothetical AI system has already collected and processed a bundle of data related to personal/personality characteristics allowing the evaluation or classification of human beings, their detrimental, unfavourable or discriminatory treatment is only one small, too small step away. In any case, by adding the fourth condition necessary to ban this AI practice, the EU legislators have significantly diverged from their wish, expressed in the first recital of the AIA, ‘to promote the uptake of human centric and trustworthy artificial intelligence’.
23. There is a plethora of studies dedicated to this issue; see, eg, Andy Bain (ed.), Law Enforcement and Technology: Understanding the Use of Technology for Policing (Palgrave 2017); Evelien De Pauw, Paul Ponsaers, Kees van der Vijver and Willy Bruggeman (eds), Technology-led Policing (Maklu 2011); Laura J. Moriarty (ed.), Criminal Justice Technology in the 21st Century (Charles C. Thomas Publishing 2005).
24. Paul Ekblom, ‘Technology, Opportunity, Crime and Crime Prevention—Current and Evolutionary Perspectives’ in Benoit Leclerc and Ernesto Savona (eds), Crime Prevention in the 21st Century (Springer 2017) 212.
25. For example, Regulation (EU) 2022/2371 of the European Parliament and of the Council of 23 November 2022 on serious cross-border threats to health and repealing Decision No 1082/2013/EU, OJ L 314, 6.12.2022, pp. 26–63.
26. In March 2013, three independent scientific committees providing consultancy for the European Commission adopted the opinion entitled ‘Making Risk Assessment More Relevant for Risk Management’ <https://ec.europa.eu/health/scientific_committees/consumer_safety/docs/sccs_o_130.pdf>, accessed 10 July 2024.
27. The EU Strategy and Action Plan for Customs Risk Management was adopted in 2014; see <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2014%3A0527%3AFIN>, accessed 10 July 2024.
28. Woodrow Wilson Bledsoe, The Model Method in Facial Recognition, Technical Report, Panoramic Research, Inc., Palo Alto, 1964.
29. Insaf Adjabi et al., ‘Past, Present, and Future of Face Recognition: A Review’ (2020) 9(8) Electronics 1188.
30. Zubair Ahmed Khan and Asma Rizvi, ‘AI Based Facial Recognition Technology and Criminal Justice: Issues and Challenge’ (2021) Turkish Journal of Computer and Mathematics Education 3384–3392.
31. See, eg, Alex Najibi, ‘Racial Discrimination in Face Recognition Technology’, 24 October 2020, Harvard University Blog <https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/>, accessed 11 July 2024.
32. Xudong Sun, Pengcheng Wu and Steven Hoi, ‘Face detection using deep learning: An improved faster RCNN approach’ (2018) Neurocomputing 42–50.
33. Birgitt Röttger-Rössler and Hans Jürgen Markowitsch, Emotions as Bio-cultural Processes (Springer 2009) 16.
34. Shigeru Watanabe and Stan Kuczaj (eds), Emotions of Animals and Humans: Comparative Perspectives (Springer 2012).
35. For a detailed, comparative analysis, see Shigeru Watanabe and Stan Kuczaj (eds), Emotions of Animals and Humans: Comparative Perspectives (Springer 2012).
36. The issue of real-time remote biometric identification systems in publicly accessible spaces is regulated by the AIA in a detailed manner, given that not only the provision of Art. 5-1(h) (the core provision) is dedicated to it, but also the provisions of Art. 5-2 to 5-7 (auxiliary provisions). Due to limited space and in accordance with the general objective of this paper, the focus here will be only on the core provision and its main features.
37. According to Art. 5-1(h), point (ii), the threat to the life or physical safety of natural persons has to be ‘specific, substantial and imminent’, while the threat of a terrorist attack has to be ‘genuine and present’ or ‘genuine and foreseeable’.