
Contents

A Introduction
B Machines: The first instances
  I ADM systems as socio-technical systems
C Humans: ‘Human intervention’ and ‘human oversight’
D Courts: Between humans and machines
  I The SyRI case
  II The Buona Scuola case
  III The Schufa case
  IV The Uber case
E Concluding remarks
7 Between Humans and Machines: Judicial Interpretation of the Automated Decision-Making Practices in the EU
Published: August 2024
Abstract
In the EU, judges have recently provided more precise interpretations of automated decision-making (ADM) practices through their judicial rulings. Prominent instances of ADM practices such as fraud detection, teacher placement, credit scoring, and dismissing workers offer valuable insights into how judges interpret these practices. This chapter aims to systematize these practices and to explore the role of judicial interpretation in defining these activities and the involvement of humans in decision-making processes. The chapter is divided into three sections focusing on machines, humans, and courts. It begins by exploring the concrete uses of ADM in the EU (‘machines’). From there, it delves into how these systems are currently being utilized by taking into account two official reports published by the EU institutions. The research then identifies the socio-technical quality of ADM practices and argues that this quality necessitates meaningful human participation in decision-making processes. The chapter then examines the human-centric provisions of the relevant EU legal instruments surrounding ADM systems and targeting human participation (‘humans’). Finally, it examines four judicial cases which surfaced in public and private contexts in the Netherlands, Italy, and Germany (‘courts’). In conclusion, the chapter identifies three key dimensions of judicial interpretation regarding ADM practices: (i) epistemic; (ii) substantial, encompassing socio-technical and legal dimensions; and (iii) methodological. It argues that these dynamics demonstrate the pivotal role of judicial interpretation in comprehending the technical aspects of automation and ensuring meaningful human participation in decision-making processes.
A Introduction
In the European Union (EU), recent judicial rulings have provided more precise interpretations of automated decision-making (ADM) practices. The right not to be subject to automated decisions, as described in Article 22 of the General Data Protection Regulation (GDPR),1 was brought before the Court of Justice of the European Union (CJEU) on 16 March 2023, marking the first instance of such consideration.2 The case concerns credit scoring used in Germany, known as ‘Schufa’, and whether credit scoring can be considered an automated decision. Another significant case related to Article 22 of the GDPR emerged in the Netherlands. The Court of Appeal in Amsterdam (Gerechtshof Amsterdam) found that several automated processes, including assigning rides, calculating prices, rating drivers, calculating ‘fraud probability scores’, and deactivating drivers’ accounts in response to suspicions of fraud on the Uber and Ola platforms, constitute automated decisions.3
In light of these contemporary examples within the EU, this chapter aims to systematize these ADM practices and to explore the role of judicial interpretation in defining these activities and the involvement of humans in decision-making processes. The chapter is divided into three sections focusing on machines, humans, and courts. It begins by exploring the concrete uses of ADM in the EU (section B. Machines). From there, it delves into how these systems are currently being utilized by taking into account two official reports published by the EU institutions. The objective is to provide nuanced insights into the current applications of ADM systems, avoiding overly broad generalizations. Following this analysis, the research identifies the socio-technical quality of ADM practices and argues that this quality necessitates meaningful human participation in decision-making processes. The chapter then examines the human-centric provisions of the relevant EU legal instruments surrounding ADM systems and targeting human participation (section C. Humans). Finally, it examines four judicial cases which surfaced in public and private contexts in the Netherlands, Italy, and Germany (section D. Courts). In conclusion, the chapter identifies three key aspects of judicial interpretation regarding ADM practices: (i) epistemic; (ii) substantial, encompassing socio-technical and legal dimensions; and (iii) methodological. It argues that these aspects demonstrate the pivotal role of judicial interpretation in comprehending the technical aspects of automation and ensuring meaningful human participation in decision-making processes.
B Machines: The first instances
In the artificial intelligence (AI) age, the decision-making landscape is undergoing a profound transformation. Significant decisions about modern life are increasingly delegated from human hands to algorithmic machines.4 Algorithms are ‘a series of instructions that instruct a software package to take a dataset and learn a model or discover some underlying pattern’.5 An ADM system ‘augments or replaces human decision-making by using computational processes to produce answers to questions either as discrete classifications or continuous scores’.6 Such decision-making has been implemented in complex areas involving public and private contexts, including social benefits, migration and border control, and loan or mortgage applications. As such systems become more prevalent in modern life, it is important to consider the complexities they introduce to decision-making and to examine their social and legal impacts thoroughly.
However, as noted by an Italian court in 2019, ADM systems possess the quality of ‘multidisciplinary characterization’ (caratterizzazione multidisciplinare), requiring not only legal but also technical, computer, and statistical skills.7 This makes it even more difficult to understand the complexities posed by such systems. Legal scholars therefore often resort to analogical thinking, first exploring the similarities between the new digital technology in question and earlier technologies. However, the use of analogy in such cases has proved insufficient to capture the nature of a particular technology and thus misses many of its unique features.8 It is important to note that incorrect conceptualizations of technologies, which often rest on the use of analogy, can lead to incorrect normative results in the legal sphere. This was also recognized by a recent judgment of the District Court of the Hague (Rechtbank Den Haag), which held that if we base our knowledge of technologies on properties and terms such as ‘self-learning’, and draw wrong analogies between the human person and a new technology, we become unable ‘to properly justify actions and to properly substantiate decisions’ in an administrative system.9
Considering this issue, this chapter takes into account two official reports published by the EU institutions which provide empirical research on the current uses of automated systems in the public sector: ‘Getting the Future Right: Artificial Intelligence and Fundamental Rights’, published by the Fundamental Rights Agency in 202010 (hereafter Report I), and ‘AI Watch Artificial Intelligence in Public Services: Overview of the Use and Impact of AI in Public Services in the EU’, published by the Joint Research Centre in 2020 (hereafter Report II).11 The purpose is to be concrete about the current uses of ADM systems and to avoid examining the systems at stake in an overgeneralized fashion. Considering the two reports, the most critical examples in the public sector are observed in the fields of social benefits and biometrics.
The Organisation for Economic Co-operation and Development (OECD) defines social benefits as ‘current transfers received by households intended to provide for the needs that arise from certain events or circumstances, for example, sickness, unemployment, retirement, housing, education or family circumstances’.12 Report I underlines that automated systems used in public administration include areas such as benefit calculations, fraud prevention and detection, eligibility assessments, and risk scoring.13 The purpose of governments is to enhance the efficiency of decision-making on these issues. In the context of social benefits, Report I explains two important areas where ADM systems are used for decisions—housing and unemployment benefits.14 In these areas, rule-based decision-making is applied, defined in terms of ‘if–then’ rules.15 For instance, a person will be eligible for a certain benefit if she/he has an income below a certain threshold.16 Table 7.1 sketches three AI practices in the field of social benefits, defining their purposes, data sources, and ‘black-box’ aspects.17 (A minimal illustrative sketch of such an if–then rule follows the table.)
Table 7.1 ADM systems in the area of social benefits

| ADM Systems in the Area of Social Benefits | Purpose | Data Source | Techniques—The ‘Black Box’ Aspects |
|---|---|---|---|
| Deciding on Housing Benefits (Report I) | Efficiency (to speed up tasks) | Internal database containing data on benefit application processes; data is pseudonymized | Processing applications; rule-based decision-making; decision-tree model following the rules. In particular, ‘a simple statistical model (linear regression) is used where the input is the income and the cost limits, and the outcome is the amount of benefit’.19 |
| Deciding on Unemployment Benefits (Report I) | Efficiency | Various databases, including the population register and tax authorities’ databases, to obtain information about salaries and work experience | Processing applications; rule-based decision-making: ‘if all conditions are fulfilled, the system calculates the period of payments and the amount of benefits in the light of the period of payments and the average daily salary’.20 |
| Automating Various Social Assistance Decisions (Report II):21 processing applications on homecare, sickness benefits, unemployment benefits, and taxes | Efficiency | Personal data through the self-service portal | Robotic Process Automation (RPA);22 rule-based decision-making |
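To make the ‘if–then’ logic described in Report I more tangible, the following minimal sketch shows how a rule-based housing-benefit decision of this kind might be encoded. It is an illustration only: the income threshold, cost limit, and benefit formula are hypothetical and are not taken from Report I or from any Member State system.

```python
from dataclasses import dataclass

# Hypothetical parameters for illustration only; Report I does not disclose
# the actual thresholds or coefficients used by any national system.
INCOME_THRESHOLD = 2_000.0   # monthly income ceiling for eligibility
COST_LIMIT = 900.0           # maximum housing cost taken into account
BENEFIT_RATE = 0.4           # share of the eligible housing cost covered

@dataclass
class Application:
    monthly_income: float
    monthly_housing_cost: float

def decide_housing_benefit(app: Application) -> float:
    """Rule-based ('if-then') decision: returns the monthly benefit amount."""
    # Rule 1: income at or above the threshold -> not eligible, benefit is zero.
    if app.monthly_income >= INCOME_THRESHOLD:
        return 0.0
    # Rule 2: only housing costs up to the cost limit are taken into account.
    eligible_cost = min(app.monthly_housing_cost, COST_LIMIT)
    # A simple linear formula: the benefit grows with the eligible cost and
    # shrinks as income approaches the threshold (cf. the 'linear regression'
    # noted in Report I, where income and cost limits are the inputs).
    income_factor = 1 - app.monthly_income / INCOME_THRESHOLD
    return round(BENEFIT_RATE * eligible_cost * income_factor, 2)

if __name__ == "__main__":
    application = Application(monthly_income=1_200.0, monthly_housing_cost=1_000.0)
    print(decide_housing_benefit(application))  # 144.0 under these toy parameters
```

Even in this toy form, the example shows why the ‘black-box’ aspects matter: the outcome depends entirely on parameters and rules that the applicant never sees.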
In this field, these systems first process the applications and then decide on them. Due to the complexity of the systems used, the decision-making techniques should be considered the ‘black-box’ aspects of the ADM practice at stake, meaning that such systems produce results without clear or understandable explanations of how those results have been reached.23 This quality is particularly significant because, as the Hague Court highlighted, citizens can neither anticipate the intrusion into their private life nor guard themselves against it.24
All three practices are developed as rule-based decision-making. As noted by Report II, the last practice, based on robotic process automation (RPA), has been used in the municipality of Trelleborg, Sweden, since 2016.25 Despite in-depth coverage in major media outlets, neither the public officials nor the company that developed the system has provided satisfactory answers regarding how the system works or how it makes decisions.26 Furthermore, such a system makes decisions on the basis of its data sources. It is therefore necessary to take into consideration the fact that a data source may not provide sufficient, necessary, or even correct information. This critical observation emphasizes the importance of human intervention in these areas.
In the case of biometric systems, two fields are prominent in public use: predictive policing, and migration and border control management. Table 7.2 sketches the two use cases in this area based on Report I27 and Report II,28 defining their purposes, data sources, and ‘black-box’ aspects. (A minimal illustrative sketch of the ‘heat map’ technique described in the table follows it.)
Table 7.2 ADM systems in the area of biometrics—law enforcement

| ADM Systems in the Area of Biometrics—Law Enforcement | Purpose | Data Source | Techniques—The ‘Black-Box’ Aspects |
|---|---|---|---|
| Predictive Policing (mapping crime patterns, detecting online hate speech,30 preparing risk assessments on gender-based violence) (Report I) | Efficiency (to speed up tasks); security | Historical crime and police data (containing crime reports, witness statements, suspect declarations); environmental data such as population density, the presence of certain public places and services, and major events or holidays; personal data (real-time and historical) used in predicting potential perpetrators and victims (including criminal records, addresses, phone numbers, location data) | Data mining and machine learning processes, predictive analytics, simulation, and data visualization; analysing data to identify common patterns and trends and creating models on the basis of this analysis to predict crimes, perpetrators, or victims; creating a ‘heat map’ outlining the prevalence of certain crimes in certain areas; in the case of gender-based violence, the AI system produces a ‘risk score’ on the basis of the risk of repetition, which is evaluated by the police in the light of the level of gravity and the nature of the threats (Report I) |
| Migration and Border Control Management | Efficiency and security | Personal data evaluating facial expressions and behaviours | Biometric identification, biometric categorization, and emotion recognition systems |
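To illustrate the kind of aggregation that underlies the ‘heat map’ technique described in Report I, the sketch below counts historical incident locations per grid cell and flags the cells with the highest counts. The grid size, the sample data, and the flagging threshold are all hypothetical; real predictive-policing systems combine far more data sources and considerably more complex models.

```python
from collections import Counter

# Hypothetical historical incident records: (x, y) coordinates of past reports.
# Real systems draw on crime reports, environmental data, and personal data.
incidents = [(1, 2), (1, 2), (1, 3), (4, 4), (1, 2), (4, 4), (2, 2)]

CELL_SIZE = 1          # width of one grid cell (arbitrary unit)
HOTSPOT_THRESHOLD = 2  # cells with more than this many incidents are flagged

def build_heat_map(points):
    """Aggregate incident coordinates into per-cell counts (the 'heat map')."""
    cells = Counter()
    for x, y in points:
        cell = (x // CELL_SIZE, y // CELL_SIZE)
        cells[cell] += 1
    return cells

def hotspots(heat_map):
    """Return the cells whose incident count exceeds the threshold."""
    return {cell: n for cell, n in heat_map.items() if n > HOTSPOT_THRESHOLD}

if __name__ == "__main__":
    heat_map = build_heat_map(incidents)
    print(hotspots(heat_map))  # {(1, 2): 3} with this toy data
```

Even this toy aggregation shows how such decisions come to rest on data about similarly situated people rather than on any single individual, a point taken up in the next subsection.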
Neither Report I nor Report II provides a specific use case of AI-driven ADM systems used for migration and border control management. However, newer AI techniques are being developed to control borders and to provide a decision support system for border authorities.31 Moreover, this area is particularly significant as all three biometric systems mentioned in the table (biometric identification, categorization, and emotion recognition) can be used in this field.32 According to a recent empirical study, the existing uses of digital technologies across European immigration and asylum systems include forecasting tools, processing of short- and long-term residency and citizenship applications, risk assessment and triaging systems, speech recognition, distribution of welfare benefits, matching tools, mobile phone data extraction, and electronic monitoring.33 Due to such a rich variety of advanced technological systems, this field has been called a ‘human laboratory’,34 where people are used as ‘test subjects’ for such systems.
As a case study, the ‘iBorderCtrl’ project, funded under EU Horizon 2020 over a thirty-six-month period, can be considered in this regard. The main objective of this project is ‘to enable faster and thorough border control for third country nationals crossing the land borders of EU Member States (MS), with technologies that adopt the future development of the Schengen Border Management’.35 The project brings together many technologies, including biometric verification, automated deception detection, document authentication, and risk assessments, in one system.36 The project team evaluated these technologies with the assistance of border control officers from Hungary, Latvia, and Greece, the project’s three end-users.37 However, the project has received severe criticism from digital rights journalists,38 scholars,39 and civil society organizations.40 They have highlighted concerns regarding the technology’s accuracy as well as issues related to bias, discrimination, privacy, due process, and procedural fairness.
Furthermore, in 2018, ‘Homo Digitalis’, an organization focusing on the protection of digital rights in Greece and a member of European Digital Rights, filed a petition to the Greek Parliament regarding the pilot implementation of the iBorderCtrl project on the Greek border.41 It underlined concerns regarding the lack of transparency and trust in the actual capabilities of the AI systems employed in the project. The petition also underscored the high risk of discrimination against individuals based on specific categories of personal data.42 However, the ‘black-box’ aspects of the project have not been unlocked. A case was brought before the CJEU seeking disclosure of the ethics reports and the legal assessments regarding the technological system concerned and how it works.43 The General Court recognized that the public interest justifies the disclosure of the relevant documents:
there is a public interest in participating in an informed, open, and democratic debate regarding the question, whether control technologies, as the one mentioned, are desirable, if they should be funded via public money, and that this public interest must be duly respected.44
However, the Court concluded, somewhat questionably, that the public interest in disclosing information begins only after the completion of the research.45 It is also important to note that where an ADM system is used for law enforcement purposes, such as the prevention, detection, or prosecution of criminal offences, it is subject to the Law Enforcement Directive46 (lex specialis), which provides lower standards than the GDPR in terms of transparency and data protection rights.47
I ADM systems as socio-technical systems
The use cases examined above show that ADM systems are more than just technological systems. They are social systems that mediate social institutions and structures.48 They are used in different social, public, and human services. In particular, AI-driven ADM systems for deciding on social benefits, making risk assessments and producing risk scores of individuals, and identifying, categorizing, and detecting individuals and their behaviours or emotions49 clearly affect and shape social structures and institutions.
In the field of the philosophy of technology, the social aspect of AI-driven ADM systems is identified through the notion of relational ethics.50 According to the relational understanding of AI, such systems register the interaction between people and technology and the ways in which complex infrastructures are affected by society and by human behaviour. In other words, AI-driven ADM systems have an impact on people, interpersonal interactions, and society as a whole because they pick up these social components of their environment. Therefore, the notion of relational ethics proposes that AI should be considered a socio-technical system that is much more than an automation technique. In this regard, it suggests an investigation of the dynamics of the situation in which the decision is taken to see what is ‘right’.51
The relational understanding of AI helps separate the technical and social aspects of AI, although the two are closely intertwined. While the technical aspect is related to the black-box aspects or decision trees of an AI system, thereby presenting an epistemic problem, the social aspect is related to the data it collects. Indeed, the data possesses a social dimension, given that it originates from societal sources. The AI systems examined above obtain social data, such as social expressions, behaviours, events, emotional reactions, and common patterns or mistakes. This overview underscores a crucial aspect of AI systems: individuals are subjected not only to their own data regarding their own actions and choices but also to aggregated data collected from similarly situated individuals, which weaves together individuals’ social contexts. In the context of AI systems used in the field of migration and border control, this social context is built broadly, encompassing different publics from different countries.
In other words, the judging framework of AI-driven ADM systems is built on the basis of actions and behaviours not attributable to a single individual. This means that the outcome of an ADM system examined above will never be a purely personal decision but a social one that encompasses not only a single social community but diverse communities. Therefore, it is necessary to protect not only individual interests but also collective interests.52 Such systems thus have the potential to produce significant consequences for individuals, minorities, and society in general. They are particularly sensitive in terms of the protection of fundamental rights and can pose life-changing social consequences.53 It is therefore imperative to engage human input in decision-making processes to avert such outcomes. This, in turn, necessitates a critical assessment of the human-centric provisions of the relevant EU legal instruments to understand whether they adequately facilitate human participation in decision-making processes.
C Humans: ‘Human intervention’ and ‘human oversight’
European legal instruments on digital technologies acknowledge the importance of the human factor in the era of automation.54 Despite being in its early stages, the European digital legal framework is strongly committed to ensuring that humans play an active role in decision-making. This human-centric approach sets Europe apart on a global scale, distinguishing it from the United States and China.55 While the United States is adopting a market-driven approach and China is promoting a state-driven approach, the EU is pursuing a rights-driven, human-centric approach.56 On 26 January 2022, the Commission also published the European Declaration on Digital Rights and Principles for the Digital Decade, defining the European position on the digital transition as ‘putting people at the centre’ and emphasizing that rights and freedoms should be duly respected online, just as they are offline.57
Reflections of this perspective and its connection with ADM systems can be found in both the GDPR and the draft AI Act. Under Article 22 of the GDPR,58 data subjects have the right not to be subjected to decisions with legal and ‘significant effects’ ‘based solely on automated decision-making’ or profiling.59 This means that the GDPR generally prohibits automated decision-making that does not involve meaningful human intervention. It allows such decision-making only in specific circumstances, according to the conditions set in the second paragraph: (i) if it is necessary for contractual aims, (ii) if it is authorized by Union or Member State law, or (iii) if it is based on the data subject’s explicit consent. When automated decisions are exceptionally allowed in one of these circumstances, the data controller shall implement safeguarding measures for the data subject, such as the right to be informed, the right to obtain human intervention, and the right to challenge the decision.60 Furthermore, the GDPR also limits the use of sensitive data in ADM systems to mitigate potential discriminatory effects. Processing such data61 is permissible only with the explicit consent of the data subject or on grounds of substantial public interest.62
However, legal scholarship has underlined that the wording of Article 22 leaves an extensive room for interpretation, and that interpretation plays a key role in clarifying its scope.63 In particular, the question of whether there is a meaningful human intervention in an ADM process that can circumvent the prohibition defined in Article 22 can only be understood on a case-by-case basis.64
While this situation poses a critical challenge for judges in terms of setting clear and consistent interpretations, the very fact that their interpretations hold decisive authority reveals the significance of judicial interpretation on this issue.
Another critical legal instrument of the EU, the draft AI Act,65 also mandates ‘human oversight’ requirements to ensure that fundamental rights of individuals are protected:66
Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.67
Moreover, in the fourth paragraph of this provision, the draft AI Act recognizes individual autonomy by authorizing the human person who is responsible for human oversight to ‘decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output’.68
However, the normative power of the Article is not sufficient to achieve this purpose, as it does not consider the difference between AI-driven ADM systems and human beings in terms of ‘cognition’.69 It is obvious that humans are not capable of examining the entire data mining process or of validating the outputs of AI systems in a meaningful way.70 The human person might only detect obvious failures. This also makes it difficult to detect human gender bias replicated in the ADM system.71 Therefore, human oversight provisions alone cannot be considered an effective remedy for the fundamental rights challenges that ADM systems pose. On their own, they can legitimate neither the use of the ADM system nor the decisions that the system takes. In fact, the system cannot be considered a legitimate device in a democratic society unless its use is proven legal, necessary, and proportionate.72
However, the human oversight requirement is still crucial and can be advanced as one of the ways of humanizing digital government. Indeed, the proposal requires the human person responsible for oversight to remain ‘aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system’.73 This tendency refers to ‘undue deference to automated systems by human actors that disregard contradictory information from other sources or do not (thoroughly) search for additional information’,74 known as ‘automation bias’.75 It is promising that the proposal is aware of this tendency.76 Still, other aspects must be considered. In future iterations, the AI Act should also consider that the excessive digitalization of government services and the automation of decision-making might exclude the unique human capacity to forgive trivial mistakes, such as misspelling names or dates when filling out benefit applications or tax returns.77 Overall, a rigorous approach must consider that the system itself might be biased, that the human person reviewing the system might also be biased (‘automation bias’), and that the individual subject to the system might make trivial mistakes due to, for instance, digital illiteracy or ignorance. The current version of the proposal is far from meeting this standard, as it does not sufficiently consider the first and last points presented here. However, as the following section explains, judicial interpretation plays a critical role in providing guidance for such concerns.
D Courts: Between humans and machines
In this section, the chapter focuses on four leading judgments on concrete ADM practices observed in EU Member States. The purpose is not to discuss all the ADM-related cases observed in the EU but rather to shed light on the role of courts in clarifying the socio-technical and legal aspects of automation and the human factor in decision-making processes. In this regard, the section examines the SyRI, Buona Scuola, Schufa, and Uber cases, respectively, which surfaced in public and private contexts in the Netherlands, Italy, and Germany.
I The SyRI case
The SyRI case from the Netherlands stands out as one of the most significant tech-related cases in the world.78 It provides a clear example of the global trend towards the digitalization of the welfare state and the legal concerns surrounding it. As the UN Special Rapporteur on extreme poverty and human rights noted in 2019, the ‘welfare state is gradually disappearing behind a webpage and an algorithm, with significant implications for those living in poverty’.79
The Dutch government was the first in the EU to have applied AI-driven digital welfare technologies and, as a result, to have been found to violate the rights of individuals. On 5 February 2020, the District Court of the Hague (Rechtbank Den Haag) ruled that the use of the SyRI algorithm system (‘System Risk Indication’), a digital welfare fraud detection system applied by the Dutch government, violated Article 8 of the European Convention on Human Rights (ECHR),80 which guarantees the right to respect for private and family life, home, and correspondence.81
According to the Dutch Legislator, SyRI was a technical infrastructure linking and analysing data anonymously, with the ability to generate risk reports that address legal or natural persons considered ‘worthy of investigating with regard to possible fraud, unlawful use and non-compliance with legislation’.82 Certain bodies of the Dutch government applied this algorithm collaboratively, exchanging data to identify the perpetrators of such abuses.
The claimants, several human rights activists and non-governmental organizations, argued that the national legislation on SyRI did not provide sufficient safeguards for the protection of private life and should therefore be declared to have no binding effect. The District Court of the Hague reviewed the legislation and the use of the algorithm by the Dutch government mainly on the basis of Article 8 of the ECHR, the Charter of Fundamental Rights of the European Union (CFR), and the principles established in the GDPR, particularly the principles of transparency, purpose limitation, and data minimization. In this context, the Court analysed the ‘extent and seriousness of the interference’ with Article 8 of the ECHR on the basis of the SyRI legislation and the information about the algorithm provided by the State.83
To identify the scope of the interference, the Court focused mainly on the functioning of the algorithm. It concluded that the SyRI legislation did not provide sufficient information about the functioning of the system, particularly as regards the risk models consisting of risk indicators, the risk analysis methods applied in the system, and the generation of the decision trees. The Court therefore found a violation of Article 8 of the ECHR on the basis of the lack of information about the system in the SyRI legislation.84 The main problem, which led to a violation of the Convention, is that the functioning of SyRI remained opaque in the legislation. The lack of transparency in the legislation also illustrates the concentration of private power behind the system.85
The claimants also referred to Article 22 of the GDPR. They argued that ‘the submission of a risk report . . . can be considered a decision with legal effect, or at least a decision that affects the data subjects significantly in another way, and that this decision is taken on the basis of automated individual decision-making within the meaning of Article 22 GDPR, which is prohibited’.86 The Court agreed with the claimants that a risk report had a ‘“significant effect” on the private life of the person to whom the risk report pertains’.87 However, it noted that such a risk report did not have legal effect. The Court did not ‘give an opinion on whether the exact definition of automated individual decision-making in the GDPR and, insofar as this is the case, one or more of the exceptions to the prohibition in the GDPR have been met. That is irrelevant in the context of the review by the court whether the SyRI legislation meets the requirements of Article 8 ECHR.’88 To conclude, in its SyRI judgment the Dutch Court clarified that legislation on ADM systems should articulate the functioning of those systems in clear terms.
II The Buona Scuola case
Another significant judicial case on ADM systems has been observed in Italy. The ADM practice concerned the use of a teacher placement algorithm (known as the ‘algoritmo della buona scuola’, ‘good-school algorithm’ in English),89 which sparked extensive public debate and prompted decisions of the administrative courts in 2019.90 In this case, the Italian Ministry of Education used software to make efficient and swift decisions on the placement of newly selected teachers and to process the mobility requests of already employed teachers. In the 2016 mobility rankings, the algorithm made structural mistakes, assigning thousands of teachers91 to incorrect professional placements in practice.92 Furthermore, according to Algorithm Watch, the system automatically compelled some teachers with autistic children to relocate from the southern region of Calabria to Prato, in the northern region of Tuscany.93
Two critical judgments on this issue have offered significant legal interpretations, making concrete the principle of transparency and the human factor. First, in April 2019, the Italian Council of State (Consiglio di Stato) found that ‘the use of “robotic” procedures cannot justify circumventing the principles that shape our legal system and regulate the conduct of administrative activities’.94 In this regard, the algorithm, which has a legal value, must comply with the general principles of administrative activity, such as transparency, reasonableness, and proportionality.95 Furthermore, the Council of State interpreted the transparency principle as requiring ‘the full knowability of any rules expressed in a language other than the judicial one’.96 This ‘full knowability’ (‘piena conoscibilità’) encompasses the decision-making procedure and the relevant data of the system, so that it can be verified whether the outcomes of the ‘robotic procedure’ comply with the legal requirements.97 It is crucial to emphasize that the Council of State does not advocate the complete disclosure of the code of the system in question. Instead, it calls for a clear explanation of its ‘technical formula’ that both judges and citizens can comprehend.98
In September 2019, the second key judgment came on this issue from the Administrative Court of Lazio (Tribunale Amministrativo Regionale del Lazio). The Court focused on the human factor and pointed out that human judgment is irreplaceable, and automation may only play ‘a merely auxiliary and instrumental role’, rather than taking a ‘dominant or surrogate’ position within the administrative process:99
informatics procedures, even when they reach their highest level of precision and even perfection, they can never fully replace, truly supplant, the cognitive, inquisitive, and judgmental activities that only an inquiry entrusted to a physical person is capable of performing.100
According to the Lazio Court, this interpretation is in line with the Italian Constitution and Article 6 of the ECHR,101 which prevents a ‘deleterious Orwellian perspective’ where decision-making is entirely handed over to machines.102 While the Lazio Court did not make Article 6 of the ECHR concrete in the present case, the ‘principle of good governance’ is considered in the case law of the Strasbourg Court:
the principle of ‘good governance’ requires that where an issue in the general interest is at stake it is incumbent on the public authorities to act in good time, in an appropriate manner and with utmost consistency.103
To conclude, considering these arguments, the Lazio Court underlined that algorithms should serve as supporting tools in public decision-making rather than assuming a primary role.
III The Schufa case
The right not to be subject to automated decisions was considered for the first time in the Schufa case before the CJEU on 16 March 2023. Schufa is a private German credit information agency responsible for evaluating the trustworthiness of customers seeking any contractual relationship, including loans, mortgages, or house rentals, by profiling their financial behaviour.104 Based on that profiling, Schufa issues a certificate with a score and provides a positive or negative result about the applicant.105 However, the company offers no reasonable or understandable explanation of its evaluation. In other words, it does not disclose how the score is calculated.106
In 2018, an applicant who received a negative score requested Schufa to provide additional information about the negative result. Treating the underlying logic of its automated system as a commercial and industrial secret, Schufa disclosed only the basic functioning of that system. The applicant waited for information about Schufa’s profiling for two years despite filing a complaint with the German Data Protection Authority. Subsequently, the applicant appealed the decision before the Administrative Court of Wiesbaden (Verwaltungsgericht Wiesbaden). In October 2021, the Wiesbaden Administrative Court (‘referring court’) stayed the administrative proceedings and referred to the CJEU two questions regarding the interpretation of Article 22 of the GDPR, the right not to be subject to automated decision-making, which grants data subjects the right ‘not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’.107 Hence, the German Administrative Court initiated the first consideration of this kind before the CJEU.
The first matter focuses on clarifying the financial activity conducted by Schufa and asks whether credit scoring is an automated decision. Before analysing the matter, however, the Advocate General (AG) first emphasizes the distinctive character of Article 22(1) and states that the provision ‘establishes a general prohibition’ on decisions of the kind described rather than a right to be invoked by the data subject.108 In terms of interpreting Article 22 of the GDPR, the AG suggests that:
[t]he automated establishment of a probability value concerning the ability of a data subject to service a loan in the future constitutes a decision based solely on automated processing, including profiling, which produces legal effects concerning the data subject or similarly significantly affects him or her, where that value, determined by means of personal data of the data subject, is transmitted by the controller to a third-party controller and the latter, in accordance with consistent practice, draws strongly on that value for its decision on the establishment, implementation or termination of a contractual relationship with the data subject.109
In this regard, the AG argues that the scoring constitutes profiling within the meaning of Article 4(4) of the GDPR, since the procedure in question ‘uses personal data to evaluate certain aspects concerning their economic situation, reliability and probable behaviour’.110 Secondly, the AG argues that the refusal of credit has both legal and significant effects on the data subject, since the data subject can no longer benefit from a contractual relationship with the financial institution and is affected significantly from a financial point of view.111 This means that the action in question may have an impact that is not only legal but also economic and social.112
Thirdly, the AG asks what the relevant ‘decision’ in the case at issue is and underlines that the decision-making process involves multiple phases, such as profiling, the establishment of the score, and the actual decision on the grant of credit.113 He then highlights that the scoring by Schufa is a ‘decision’ within the meaning of Article 22(1) of the GDPR since it ‘tends to predetermine the financial institution’s decision to grant or refuse the credit to the data subject, such that this position must be considered only to have purely formal character in the process’.114 According to the AG, the crucial factor is the effect that the ‘decision’ has on the data subject.115 Considering that a negative score alone may have a negative impact on data subjects by restricting their freedoms and stigmatizing them in society, it makes sense to qualify that score as a ‘decision’ when a financial institution gives it paramount importance in the decision-making process.116
He concludes that in such circumstances ‘credit applicants are affected from the stage of the evaluation of their creditworthiness . . . not only at the final stage of the refusal to grant credit, where the financial institution is merely applying the result of that evaluation to the specific case’.117 It is also worth noting that the referring court makes a similar point: ‘experience from the data protection supervision carried out by the authorities shows that the score plays the decisive role in the granting of loans’.118
He further considers the purpose of the EU Legislator through Article 22, which is to protect the rights of data subjects, and states that a restrictive interpretation of that provision would create a gap in legal protection where data subjects cannot exercise their rights and freedoms, particularly described in Articles 15(1)(h), 16, and 17 of the GDPR.119
Furthermore, the AG clarifies the content of Article 15(1)(h) regarding the obligation to provide ‘meaningful information about the logic involved’. He states that this information covers the calculation method used by a credit information agency, provided that there are no conflicting interests worthy of protection, such as the right to protection of intellectual property under Article 17(2) of the CFR.120 In light of a joint reading of Recitals 58 and 63 and Article 12(1) of the GDPR, the AG concludes that ‘the obligation to provide “meaningful information about the logic involved” must be understood to include sufficiently detailed explanations of the method used to calculate the score and the reasons for a certain result’.121 Moreover, considering the complexity of algorithms, the AG emphasizes that the principle of transparent information and communication in Article 12 of the GDPR does not establish any obligation for the controller to disclose the algorithm, since there is no benefit in communicating a complex formula without providing the necessary explanation.122
Finally, in reply to the second question posed by the referring court, the AG explains that Article 6(1) and Article 22 do not preclude national legislation on profiling as long as it falls outside the scope of Article 22 of the GDPR. In that case, however, the national legislation must comply with the requirements outlined in Article 6 of the GDPR, which include relying on an appropriate legal basis. The AG Opinion holds notable importance as it marks the first judicial interpretation of the legal term ‘automated decision’, clarifying that if an algorithm predominantly influences decision-making, the activity of that algorithm qualifies as an ‘automated decision’ within the meaning of Article 22 of the GDPR.
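To illustrate why the AG treats the score itself as the decisive step, the following minimal sketch shows a stylized scoring pipeline in which a probability value produced from personal data is mechanically translated into a grant or refuse outcome by the downstream institution. The features, weights, and cut-off are hypothetical and are not based on Schufa’s actual (undisclosed) method.

```python
import math

# Hypothetical feature weights for illustration only; the real scoring method
# is not disclosed and is considerably more complex.
WEIGHTS = {"years_of_credit_history": 0.3, "missed_payments": -1.2, "income_thousands": 0.05}
BIAS = -0.5
CUTOFF = 0.6  # hypothetical threshold applied by the downstream bank

def probability_of_repayment(applicant: dict) -> float:
    """Logistic-regression-style score: a probability value between 0 and 1."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def bank_decision(score: float) -> str:
    """The financial institution 'draws strongly' on the score: a mechanical cut-off."""
    return "grant credit" if score >= CUTOFF else "refuse credit"

if __name__ == "__main__":
    applicant = {"years_of_credit_history": 4, "missed_payments": 2, "income_thousands": 30}
    score = probability_of_repayment(applicant)
    print(round(score, 3), bank_decision(score))  # 0.45 refuse credit with these toy inputs
```

In such a pipeline the downstream ‘decision’ adds nothing of substance: once the cut-off is fixed, the score fully predetermines the outcome, which is precisely why the AG regards the scoring stage itself as the ‘decision’ within the meaning of Article 22(1).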
IV The Uber case
On 4 April 2023, in three judgments, the Court of Appeal in Amsterdam (Gerechtshof Amsterdam) found that several automated processes, including assigning rides, calculating prices, rating drivers, calculating ‘fraud probability scores’, and deactivating drivers’ accounts in response to suspicions of fraud on the Uber and Ola platforms, constitute automated decisions.123 In particular, the Court’s judgment on the deactivation decisions taken against Uber drivers has been crucial with regard to ADM systems and human participation. In this case, the Dutch Court considered whether the deactivation decisions taken against Uber drivers, which mean that they can no longer work through Uber, are automated decisions.124
First, the Court considered the privacy statement of the company, which confirms that Uber makes an ‘automated decision’ when deactivating users ‘who are identified as having engaged in fraud’.125 Secondly, Uber explained that its Risk Team relies on software to automatically detect various fraudulent activities, such as when a driver repeatedly cancels rides within a short time period, which may suggest ‘cancellation fraud’.126 According to the Court of Appeal, this example showed that Uber ‘involves automated processing of personal data of drivers whereby certain personal aspects of them are evaluated on the basis of that data, with the intention of analysing or predicting their job performance, reliability and behaviour. As such, this processing meets the definition of profiling as contained in Article 4(4) of the GDPR.’127 Thirdly, the Court considered that the deactivation decisions addressed to drivers are worded in a very general manner, without mentioning any concrete conduct that forms the basis of the decisions.128 Furthermore, the Court found that the limited human intervention in Uber’s automated decisions to dismiss workers was not ‘much more than a purely symbolic act’, considering also the fact that the company’s Risk Team is based in Kraków, Poland.129 In other words, the Dutch Court clarified that human intervention should make a meaningful contribution to the decision-making process rather than amount to merely symbolic participation. Table 7.3 sketches all four cases examined in this chapter and illustrates the key legal provisions and their findings. (A minimal illustrative sketch of the kind of cancellation-rate flag mentioned above follows the table.)
Table 7.3 The four judicial cases, the ADM systems at issue, key legal provisions, and judicial interpretations

| Judicial Cases | Technical Framework of the ADM Systems | Key Legal Provisions | Judicial Interpretation |
|---|---|---|---|
| The SyRI case (the Netherlands), the Hague District Court | Fraud detection system: generating risk reports about legal and natural persons considered worthy of investigating with regard to possible fraud | Article 8 of the ECHR | Legislation should articulate the functioning of an algorithm in clear terms |
| The Buona Scuola case (Italy), the Council of State and the Administrative Court of Lazio | Teacher placement system: assigning thousands of teachers to incorrect professional placements | Article 6 of the ECHR | Algorithms should serve as supporting tools in public decision-making rather than assuming a primary role |
| The Schufa case (Germany), Advocate General Pikamäe | Credit scoring system: providing clients with information on the creditworthiness of consumers and producing a prediction, on the basis of a mathematical statistical method, of the probability of future behaviour, such as the repayment of credit | Article 22(1) and Article 15(1)(h) of the GDPR | If an algorithm plays a primary role in decision-making, the activity of that algorithm is considered an ‘automated decision’ within the meaning of Article 22 of the GDPR |
| The Uber case (the Netherlands), the Court of Appeal | Predicting job performance: deactivating drivers’ accounts in a generalized framework | Article 22(1) of the GDPR | Human intervention should make a meaningful contribution to the decision-making process rather than amount to merely symbolic participation |
E Concluding remarks
Like the activity of bricolage, judicial interpretation involves navigating a complex landscape of normativities that do not always appear seamlessly fused or unified.131 When judges interpret laws, regulations, and legal principles, they encounter a mosaic of norms and precedents, each with its own nuances and readings. Through this process, the relevant legal norms are made concrete and their rules are clarified.132 The integration of algorithms into public and private decision-making processes, together with the fact that Article 22 of the GDPR covers both public and private decision-makers, has compounded the complexity of this task, necessitating an intricate interplay between technological and legal components.
The judicial cases examined in this chapter have shown that judicial interpretation is crucial for understanding both the socio-technical and legal aspects of automation and the human factor in decision-making processes. Ultimately, the chapter has identified three aspects of judicial interpretation of ADM practices: (i) epistemic; (ii) substantial, encompassing socio-technical and legal aspects; and (iii) methodological.
From an epistemic point of view, the judges in all four cases grappled with describing the functioning and purpose of the algorithm at stake. Rather than engaging with the technical or computer-science features of a particular system, they sought to understand where, how, and for what purpose the ADM systems had been used by the relevant public or private actors. In this sense, their interpretation began by defining the digital system at stake, that is, the functioning of fraud detection, credit scoring, teacher placement, and worker dismissal in the cases examined in this chapter.
From a substantial point of view, the courts have clarified the socio-technical and legal aspects of automation and the human factor in decision-making processes. On the socio-technical side, the courts have demonstrated that clear legislation on the functioning of ADM practices is necessary to ensure that the system at stake is explainable to humans. In the SyRI case, the Dutch court found that the relevant legislation did not provide sufficient information about the functioning of the fraud detection system, particularly as regards the risk models consisting of risk indicators, risk analysis methods, and the generation of decision trees. It is also worth noting that understanding the basic functioning of the algorithm became highly significant for determining the ‘extent and seriousness’ of the interferences with individual rights caused by ADM systems.
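To make the transparency concern concrete, the following sketch shows what a simple risk-indicator model of the kind described in the SyRI judgment might look like. It is hypothetical in every respect: the actual SyRI indicators, weights, and thresholds were never disclosed, which was precisely the deficiency the Dutch court identified, so the names and numbers below are invented for illustration only.

```python
# Hypothetical illustration of a risk-indicator model of the kind at issue in SyRI.
# None of these indicators, weights, or thresholds come from the actual, undisclosed system.

RISK_INDICATORS = {
    "benefits_overlap": 0.4,   # receives overlapping social benefits
    "address_mismatch": 0.3,   # registered address differs from other government records
    "low_water_usage": 0.3,    # unusually low utility consumption at the registered address
}
RISK_THRESHOLD = 0.5

def risk_report(person: dict) -> dict:
    """Combine binary indicators into a score; above the threshold, a 'risk report'
    nominates the person as worth investigating for possible fraud."""
    triggered = [name for name in RISK_INDICATORS if person.get(name, False)]
    score = sum(RISK_INDICATORS[name] for name in triggered)
    return {
        "score": round(score, 2),
        "flagged_for_investigation": score >= RISK_THRESHOLD,
        "indicators_triggered": triggered,
    }

# Example: two indicators fire and the person is flagged; without access to the
# indicators and weights, the person cannot reconstruct why.
print(risk_report({"benefits_overlap": True, "low_water_usage": True}))
```

Without access to the indicators and weights, a person named in such a risk report cannot assess the extent and seriousness of the interference with their rights, which is why the court insisted that the legislation itself explain the system’s functioning.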
The Italian courts raised similar concerns in the Buona Scuola case regarding the teacher placement algorithm. The Italian Council of State underlined the need for a clear explanation of the ‘technical formula’ of a particular ADM system in order to safeguard the general principles of administrative activity, such as transparency, reasonableness, and proportionality. On the same issue, the Lazio Court focused on the human factor and underlined that automation can play only an auxiliary role in decision-making rather than a dominant one. In this sense, both the SyRI and the Buona Scuola cases clarified that (i) legislation should articulate the functioning of an algorithm in human-readable terms rather than in complex computer code, and (ii) algorithms should serve as supporting tools in public decision-making rather than assuming a primary role. These two arguments show that in both cases involving public uses of ADM systems, the courts largely emphasized public law principles, such as legality and transparency, when assessing the ADM system at stake.
From a legal perspective, the AG in the Schufa case and the Dutch Court in the Uber case focused on the meaning of an ‘automated decision’ and on the human intervention measure. In the Schufa case, the AG clarified that credit scoring is an ‘automated decision’ within the meaning of Article 22 of the GDPR when a financial institution gives it paramount importance in the decision-making process. In the Uber case, the Dutch Court underlined that the deactivation decisions addressed to drivers were worded in a general manner, without mentioning any concrete conduct forming the basis of the decisions, and that the limited human intervention in Uber’s automated decisions to dismiss workers was not ‘much more than a purely symbolic act’. In this sense, both cases clarified that (i) if an algorithm plays a primary role in decision-making, the activity of that algorithm is considered an ‘automated decision’ within the meaning of Article 22 of the GDPR, and (ii) human intervention should make a meaningful contribution to the decision-making process rather than amount to merely symbolic participation. Both arguments demonstrate that in instances involving private uses of ADM systems, the courts predominantly focused on elucidating how the ADM system in question was employed in decision-making by private entities.
From a methodological point of view, the judges’ inquiry into automation and meaningful human participation has created an interactional legal ground where the relevant normative actors interact with one another. The courts considered the normative aspects of automation, the relevant provisions of the ECHR and the GDPR, and the relevant domestic legislation in order to clarify the roles of automation and of humans in decision-making processes. In this sense, their reasoning was not limited to the scope of national legislation but also drew on supranational and international legal provisions.
Ultimately, the three aspects of judicial interpretation, namely the epistemic, the substantial, and the methodological, have demonstrated the pivotal role that judges play in comprehending automation and ensuring meaningful human participation in decision-making processes. In doing so, judges have narrowed not only the divide between machines and humans but also that between law and digital society.
Footnotes
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (2016) OJ L119/1.
Opinion of Advocate General Pikamäe, OQ v Land Hesse, Joint Party: Schufa Holding AG, Case C-634/21, 16 March 2023.
Amsterdam Court of Appeal (Gerechtshof Amsterdam), ECLI:NL:GHAMS:2023:796, Case No 200.295.747/01, 4 April 2023; ECLI:NL:GHAMS:2023:793, 200.295.742/01, 4 April 2023; ECLI:NL:GHAMS:2023:804, Case No 200.295.806/01, 4 April 2023.
Karen Yeung, ‘The New Public Analytics as an Emerging Paradigm in Public Sector Administration’ (2022) 27(2) Tilburg Law Review 1–32.
David Leslie and others, ‘Artificial Intelligence, Human Rights, Democracy, and the Rule of Law: A Primer’ (2021) Council of Europe and Alan Turing Institute, 36 https://edoc.coe.int/en/artificial-intelligence/10206-artificial-intelligence-human-rights-democracy-and-the-rule-of-law-a-primer.html accessed 10 October 2023.
ibid 36.
Consiglio di Stato, Sec IV, n 2270, 8 April 2019, para 8.3.
On the controversial analogies between states and social platforms see Kate Klonick ‘The New Governors: The People, Rules, and Processes Governing Online Speech’ (2017) 131 Harvard Law Review 1599–603.
The Hague District Court (Rechtbank Den Haag), ECLI:NL:RBDHA:2020:865, Case No C-09-550982/HA ZA 18-388, 5.2.2020, at para 6.46, quoting the opinion of the Advisory Division: ‘The term “self-learning” is confusing and misleading: an algorithm does not know and understand reality. There are predictive algorithms which are fairly accurate in predicting the outcome of a court case. However, they do not do so on the basis of the substantive merits of the case. They can therefore not substantiate their predictions in a legally sound manner, while that is required for all legal proceedings for each individual case. . . . The reverse also applies: the human user of such a self-learning system does not understand why the system concludes that there is a link. An administrative organ that partially bases its actions on such a system is unable to properly justify its actions and to properly substantiate its decisions.’
European Union Agency for Fundamental Rights (FRA), ‘Getting the Future Right: Artificial Intelligence and Fundamental Rights’ (2020) (Report I) https://fra.europa.eu/sites/default/files/fra_uploads/fra-2020-artificial-intelligence_en.pdf accessed 13 July 2023.
European Commission (EC), ‘AI Watch Artificial Intelligence in Public Services: Overview of the Use and Impact of AI in Public Services in the EU’ (2020) Science for Policy Report by the Joint Research Centre (Report II) https://publications.jrc.ec.europa.eu/repository/handle/JRC120399 accessed 13 July 2023. At the civil society level, see also EDRi, ‘Use Cases: Impermissible AI and Fundamental Rights Breaches’ (2020) https://edri.org/wp-content/uploads/2021/06/Case-studies-Impermissible-AI-biometrics-September-2020.pdf accessed 13 July 2023.
OECD, Glossary of Statistical Terms, Social Benefits Definitions, https://stats.oecd.org/glossary/detail.asp?ID=2480 accessed 13 September 2023.
ibid. It is worth noting that, according to the report, the organization using this AI system first applied image processing to the applications in order to decide on such social benefit applications.
ibid 27.
ibid.
Report II does not provide sufficient information about this system; it only refers to it as an ‘automated decision-making system’ in n 11. Therefore, this part of the research relies on the report of Algorithm Watch, which covered this issue in 2020. See Katarina Lind and Leo Wallentin, ‘Central Authorities Slow to React as Sweden’s Cities Embrace Automation of Welfare Management’ (2020) https://algorithmwatch.org/en/trelleborg-sweden-algorithm/ accessed 13 July 2023.
Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard UP 2015).
The Hague District Court, Case No C/09/550982/HA ZA 18-388, 5 February 2020, para 6.65.
Lind and Wallentin (n 21). It is worth noting that a journalist, Freddi Ramel, lodged an appeal with the Administrative Court of Appeal under Sweden’s freedom of information legislation to see the code of the system. Trelleborg argued that the code was a trade secret. However, the Court decided that the code of the system is a public document and therefore ‘the source code has to be made accessible to the public and is fully included in the principle of public access’. See Anne Kaun, ‘Suing the Algorithm: The Mundanization of Automated Decision-Making in Public Services through Litigation’ (2021) Information, Communication & Society 1–17.
The project of iBorderCtrl at https://perma.cc/L7KM-TPFK accessed 13 July 2023.
New technologies serving in this area are categorized within the area of ‘smart borders’ technologies. See Javier Sánchez-Monedero and Lina Dencik, ‘The Politics of Deceptive Borders: “Biomarkers of Deceit” and the Case of iBorderCtrl’ (2022) 25(3) Information, Communication & Society 414. See also the UK’s smart border technology to detect deception at https://post.parliament.uk/research-briefings/post-pn-375/ accessed 16 September 2023. See also the US version called ‘AVATAR’, an automated lie detector and a deception detection technology based on eye tracking at https://discernscience.com/avatar/ accessed 16 September 2023.
Derya Ozkul, ‘Automating Immigration and Asylum: The Uses of New Technologies in Migration and Asylum Governance in Europe’ (Refugee Studies Center, Oxford 2023) 5–6 https://www.rsc.ox.ac.uk/publications/automating-immigration-and-asylum-the-uses-of-new-technologies-in-migration-and-asylum-governance-in-europe accessed 27 October 2023.
EDRi, ‘Technological Testing Ground: Migration Management Experiments and Reflections from the Ground Up’ (2020) 16, https://edri.org/our-work/european-court-supports-transparency-in-risky-eu-border-tech-experiments/ accessed 18 September 2023.
The project’s website is https://www.iborderctrl.eu/The-project accessed 6 October 2023. See also European Commission, ‘Intelligent Portable Border Control System: Periodic Reporting for Period 2—iBorderCtrl (Intelligent Portable Border System) https://cordis.europa.eu/project/id/700626/reporting accessed 6 October 2023.
See the details of the project at https://cordis.europa.eu/project/id/700626/reporting (accessed on 3 July 2024).
European Commission, ‘Intelligent Portable Border Control System’ (2023) n 35.
Ryan Gallagher and Ludovica Jona, ‘We Tested Europe’s New Lie Detector for Travelers – and Immediately Triggered a False Positive’ (2019) The Intercept https://theintercept.com/2019/07/26/europe-border-control-ai-lie-detector/ accessed 6 October 2023 (quoting Ray Bull, professor of criminal investigation at the University of Derby: ‘The technology is based on a fundamental misunderstanding of what humans do when being truthful and deceptive.’).
Dimitri Van Den Meerssche, ‘Virtual Borders: International Law and the Elusive Inequalities of Algorithmic Association’ (2022) 33(1) European Journal of International Law 171–204; Niamh Kinchin and Davoud Mougouei, ‘What Can Artificial Intelligence Do for Refugee Status Determination? A Proposal for Removing Subjective Fear’ (2022) 34(3–4) International Journal of Refugee Law 373–97; Sánchez-Monedero and Dencik (n 32) 413–30.
Petra Molnar, ‘Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up’ (European Digital Rights Refugee Law Lab 2020) https://edri.org/our-work/regulating-migration-tech-how-the-eus-ai-act-can-better-protect-people-on-the-move/ accessed 18 September 2023.
Eleftherios Chelioudakis, Homo Digitalis, EDRi, ‘Greece: Clarifications Sought on Human Rights Impacts of iBorderCtrl’ https://edri.org/our-work/greece-clarifications-sought-on-human-rights-impacts-of-iborderctrl/ accessed 18 September 2023.
ibid.
CJEU, Breyer v REA, Case T-158/19, 15 December 2021.
ibid para 200, underlining the importance of democratic oversight of such technologies; verbatim: ‘[es besteht] ein Interesse der Öffentlichkeit daran . . . an einer informierten öffentlichen und demokratischen Diskussion über die Frage teilzunehmen, ob Kontrolltechnologien wie die in Rede stehenden wünschenswert sind und ob sie durch öffentliche Gelder finanziert werden sollen, und dass dieses Interesse gebührend gewahrt werden muss’. This may be rendered as: ‘[there is] a public interest . . . in taking part in an informed public and democratic discussion on whether control technologies such as those at issue are desirable and whether they should be financed with public funds, and this interest must be duly safeguarded’.
ibid paras 200–203. On 7 September 2023, the CJEU upheld this decision in Case C-135/22 P, paras 64–112.
Directive (EU) 2016/680 of the European Parliament and the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA, OJ L119/89, 4 May 2016.
Teresa Quintel, Data Protection, Migration and Border Control: The GDPR, The Law Enforcement Directive and Beyond (Hart 2022).
Nathalie Smuha and others, ‘How the EU can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act’ (2021) LEADS Law, University of Birmingham for a Legal, Ethical and Accountable Digital Society, 12 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3899991 accessed 13 July 2023 (stating that such systems cannot be considered solely as consumer or technical products).
See an interesting discussion arguing that emotion AI systems detect physical signals or muscle movements, not emotions, in Lisa Feldman Barrett, ‘Darwin Was Wrong: Your Facial Expressions Do Not Reveal Your Emotions’ (2022) Scientific American https://www.scientificamerican.com/article/darwin-was-wrong-your-facial-expressions-do-not-reveal-your-emotions/ accessed 4 October 2023.
Virginia Dignum, ‘Relational Artificial Intelligence’ (2022) arXiv:2202.07446, 2022, https://arxiv.org/abs/2202.07446 accessed 16 September 2023.
However, it is not an easy task as it requires multidisciplinary and multi-stakeholder participation.
Such a conclusion necessitates the full consideration of the existing formulations of the rule of law, as some approaches solely focus on protecting individual interests. See a relevant discussion on this issue in Anuj Puri, ‘Rule of Law, AI, and the ‘Individual’ (Verfassungsblog, 2022) https://verfassungsblog.de/roa-individual/ accessed 27 August 2023.
Sümeyye Elif Biber, ‘Machines Learning the Rule of Law: EU Proposes the World’s First Artificial Intelligence Act’ (Verfassungsblog, 13 July 2021) https://verfassungsblog.de/ai-rol/ accessed 27 August 2023.
It is impossible to cite here all European legal instruments surrounding digital technologies. However, the most prominent legal instruments are the GDPR (Regulation (EU) 2016/679, OJ L119/1), the Digital Services Act (DSA) Regulation (EU) 2022/2065, OJ L277/1, the draft AI Act (European Commission, COM/2021/206 final), and the Consolidated Working Draft of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (CAI(2023)18).
Anu Bradford, Digital Empires: The Global Battle to Regulate Technology (OUP 2023).
Anu Bradford, ‘The Race to Regulate Artificial Intelligence: Why Europe has an Edge over America and China’ Foreign Affairs (27 June 2023) https://www.foreignaffairs.com/united-states/race-regulate-artificial-intelligence?utm_medium=promo_email&utm_source=lo_flows&utm_campaign=registered_user_welcome&utm_term=email_1&utm_content=20231106 accessed 9 August 2023.
European Declaration on Digital Rights and Principles for the Digital Decade, Brussels, 26.1.2022 COM(2022) 28 final.
Art 22 of the 2016 GDPR is the main EU legal norm on the right not to be subject to automated decision-making. However, this right is not a recent development in the EU. It was first recognized in French law, in art 2 of the 1978 French Law on Data Processing, Data Files and Individual Liberties (Loi no 78-17 du 6 janvier 1978 relative à l’informatique, aux fichiers et aux libertés, at https://www.legifrance.gouv.fr/loda/id/LEGIARTI000006528060/1978-07-23/#LEGIARTI000006528060), and then reflected in arts 12 and 15 of the 1995 ‘Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data’ (OJ L281/31), in art 6 of the 1981 ‘Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data’ (ETS 108), and finally in art 22 of the GDPR (n 1).
Frederike Kaltheuner and Elettra Bietti, ‘Data is Power: Towards Additional Guidance on Profiling and Automated Decision-Making in the GDPR’ (2018) 2(2) Journal of Information Rights, Policy and Practice 10; Reuben Binns and Michael Veale, ‘Is That Your Final Decision? Multi-Stage Profiling, Selective Effects, and Article 22 of the GDPR’ (2021) 11(4) International Data Privacy Law 319–32.
Sebastião Barros Vale and Gabriela Zanfir-Fortuna, ‘Automated Decision-Making under the GDPR: Practical Cases from Courts and Data Protection Authorities’ (2022) Future of Privacy Forum, 28 https://fpf.org/blog/fpf-report-automated-decision-making-under-the-gdpr-a-comprehensive-case-law-analysis/ accessed 27 August 2023.
Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts COM(2021) 206 final.
The human oversight requirement is applied in a different way in Germany. The German approach excludes the use of automated systems for administrative acts that require the exercise of discretion, meaning that only humans can exercise discretion. See a comment on this issue in Paul Nemitz and Eike Grzäf, ‘Artificial Intelligence Must Be Used According to the Law, or Not at All’ (Verfassungsblog, 2022) https://verfassungsblog.de/roa-artificial-intelligence-must-be-used-according-to-the-law/ accessed 17 September 2023.
Manuel Alfonseca and others, ‘Superintelligence Cannot be Contained: Lessons from Computability Theory’ (2020) 70 Journal of Artificial Intelligence Research 1–7.
See an excellent empirical study on the inadequacy of human oversight in Ben Green, ‘The Flaws of Policies Requiring Human Oversight of Government Algorithms’ (2022) 45 Computer Law & Security Review 1–22 (demonstrating that people are unable to perform the desired oversight function, and proposing institutional oversight). See also Ben Green and Amba Kak, ‘The False Comfort of Human Oversight as an Antidote to AI Harm’ (2021) Future Tense, https://slate.com/technology/2021/06/human-oversight-artificial-intelligence-laws.html accessed 17 August 2023.
World Wide Web Foundation, ‘Policy Brief W20 Argentina, Artificial Intelligence: Open Questions about Gender Inclusion’ (2018) http://webfoundation.org/docs/2018/06/AI-Gender.pdf accessed 17 September 2023.
Art 52 of the EU Charter, which provides that ‘any limitation on the exercise of the rights and freedoms recognised by this Charter must be provided for by law and respect the essence of those rights and freedoms. Subject to the principle of proportionality, limitations may be made only if they are necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others’.
Saar Alon-Barkat and Madalina Busuioc, ‘Human–AI Interactions in Public Sector Decision-Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice’ (2022) Journal of Public Administration Research and Theory 7–8 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3794660 accessed 17 August 2023 (discussing that there are two reasons for this bias, namely ‘the perceived inherent superiority of automated systems by humans’ and ‘cognitive laziness’ referring to ‘human reluctance to engage in cognitively demanding mental process’). It is important to note that the ‘cognitive laziness’ of the human person who is examining the system cannot be an excuse for an unfair situation created by an ADM system. After all, the system must pass the necessity test, and cannot serve the purpose of exacerbating injustice in society.
ibid.
Indeed, over-reliance on the outputs of a high-risk AI system is extremely dangerous. The Spanish government, for instance, used an AI system called ‘VioGén’ to estimate the risk of recidivism in gender violence cases. According to AlgorithmWatch, however, the system failed in its predictions: fourteen of the fifteen women who were killed in domestic violence incidents in 2014, having reported their aggressor before, had been classified by the system as being at low or non-specific risk. See AlgorithmWatch, ‘Automating Society Report 2020’ (2020) 227, https://automatingsociety.algorithmwatch.org/wp-content/uploads/2020/12/Automating-Society-Report-2020.pdf accessed 13 July 2023. See also the news on this issue reported by El Mundo, ‘Las asesinadas que denunciaron se valoraron como “riesgo bajo o nulo”’ [The murdered women who had filed complaints were assessed as ‘low or no risk’] https://www.elmundo.es/espana/2014/12/09/54861553ca4741734b8b457e.html accessed 13 July 2023. See also an article on the functioning of the system in José Luis González Álvarez et al, ‘Integral Monitoring System in Cases of Gender Violence VioGén System’ (2018) 4(1) Behavior & Law Journal http://www.interior.gob.es/documents/642012/1626283/articulo+violencia+de+genero/fd0e7095-c821-472c-a9bd-5e6cbe816b3d accessed 13 July 2023.
Sofia Ranchordás, ‘Empathy in the Digital Administrative State’ (2022) 71 Duke Law Journal 1341–89 (discussing this issue within the concept of ‘empathy’ as a key value of administrative law).
The Hague District Court (Rechtbank Den Haag), ECLI:NL:RBDHA:2020:865, Case No C-09-550982/HA ZA 18-388, 5.2.2020, at paras 6.1–6.118.
UN Human Rights Council, ‘Visit to the United Kingdom of Great Britain and Northern Ireland: Report of the Special Rapporteur on Extreme Poverty and Human Rights’ (23 April 2019) A/HRC/41/39/Add.1, https://ap.ohchr.org/documents/dpage_e.aspx?si=A/HRC/41/39/Add.1 accessed 13 October 2023.
European Convention on Human Rights, 4 November 1950, 213 UNTS 221.
ibid para 3.2.
It is important to note that the Hague Court highlighted the special responsibility of the state when applying such technologies, given their extensive and increasing interference with the right to respect for private life, in light of the judgment of the European Court of Human Rights in S and Marper v the United Kingdom; see para 6.84.
Matteo Turilli and Luciano Floridi, ‘The Ethics of Information Transparency’ (2015) 11 Ethics and Information Technology 105 (defining transparency as a pro-ethical condition for enabling or impairing other ethical principles or practices); Jenna Burrell, ‘How the Machine Thinks: Understanding Opacity in Machine Learning’ (2016) 3(1) Big Data and Society, January–June 2016, 1 (according to the author, opacity might stem from three forms, namely state or corporate secrecy, technical illiteracy, and from the characteristics of machine learning algorithms and the scale required to apply them usefully). See also Mireille Hildebrandt, ‘The Dawn of a Critical Transparency Right for the Profiling Era’ in Jacques Bus and others (eds), Digital Enlightenment Yearbook (IOS Press 2012) 41.
ibid para 6.59.
ibid para 6.60.
Marcia de Angelis, ‘Algoritmi nei concorsi pubblici: il caso dei docenti che fa “scuola”’ Ius in Itinere (3 October 2019) <https://www.iusinitinere.it/algoritmi-nei-concorsi-pubblici-il-caso-dei-docenti-che-fa-scuola-23299> accessed 15 October 2023.
Stefano Civitarese Matteucci, ‘“Umano troppo umano”. Decisioni amministrative automatizzate e principio di legalità’ (2019) 1 Diritto Pubblico (Il Mulino, January–April) 4–41.
According to Repubblica, at least 10,000 teachers were affected: ‘Scuola, trasferimenti di 10 mila docenti lontano da casa. Il Tar: “L’algoritmo impazzito fu contro la Costituzione”’, https://www.repubblica.it/cronaca/2019/09/17/news/scuola_trasferimenti_di_10mila_docenti_lontano_da_casa_il_tar_l_algoritmo_impazzito_fu_contro_la_costituzione_-236215790/ accessed 15 October 2023.
Fabio Chiusi, ‘Italy/Contextualization: A Lauder Conversation, but mostly around “AI”’ in ‘Automating Society Report 2020’, Algorithm Watch, https://automatingsociety.algorithmwatch.org/report2020/italy/ accessed 15 October 2023.
ibid.
Consiglio di Stato, Sec IV, n 2270, 8 April 2019, para 8.2: ‘L’utilizzo di procedure “robotizzate” non può, tuttavia, essere motivo di elusione dei princìpi che conformano il nostro ordinamento e che regolano lo svolgersi dell’attività amministrativa’. This may be rendered as: ‘The use of “robotized” procedures cannot, however, be a reason for evading the principles that shape our legal order and that govern the conduct of administrative activity’.
ibid para 8.3: ‘il meccanismo attraverso il quale si concretizza la decisione robotizzata (ovvero l’algoritmo) deve essere “conoscibile”, secondo una declinazione rafforzata del principio di trasparenza, che implica anche quello della piena conoscibilità di una regola espressa in un linguaggio differente da quello giuridico’. This may be rendered as: ‘the mechanism through which the robotized decision takes concrete form (that is, the algorithm) must be “knowable”, according to a reinforced version of the principle of transparency, which also implies the full knowability of a rule expressed in a language different from the legal one’. Italic emphasis added by the author.
ibid.
TAR Lazio-Roma, Sec III-bis, n 10964, 10–13 September 2019.
ibid. It states verbatim: ‘le procedure informatiche, finanche ove pervengano al loro maggior grado di precisione e addirittura alla perfezione, non possano mai soppiantare, sostituendola davvero appieno, l’attività cognitiva, acquisitiva e di giudizio che solo un’istruttoria affidata ad un funzionario persona fisica è in grado di svolgere’. This may be rendered as: ‘computerized procedures, even where they reach their highest degree of precision and indeed perfection, can never supplant, by truly and fully replacing it, the cognitive, fact-gathering, and evaluative activity that only an inquiry entrusted to an official who is a natural person is able to carry out’.
ECtHR, Moskal v Poland, App No 10373/05, 15 September 2009, para 51.
Opinion of Advocate General Pikamäe, OQ v Land Hesse, Joint Party: Schufa Holding AG, Case C-634/21, 16 March 2023. Schufa describes its activities as follows: ‘Credit scoring is all about the question of how probable it is that a person will meet their payments. This is very important information for companies or banks. It provides a data basis to help decide whether to provide credit or purchases on account. Thus reducing the risk of a default.’ See at https://www.schufa.de/schufa-en/scores-data/scoring-at-schufa/#532026 accessed 23 October 2023.
Currently, Schufa provides five score classes, namely ‘insufficient’, ‘sufficient’, ‘acceptable’, ‘good’, and ‘excellent’. See https://www.schufa.de/scoring-daten/hilfe-ihrem-schufa-score/ accessed 23 October 2023.
Art 22 of the GDPR, n 1.
The Schufa case (AG) (2023) (n 104) para 31. This interpretation aligns with the 2018 opinion of the European Data Protection Board, which endorsed the views presented in the Article 29 Working Party Guidelines, stating that ‘[t]he term “right” in the provision does not mean that Article 22(1) applies only when actively invoked by the data subject. Article 22(1) establishes a general prohibition for decision-making based solely on automated processing. This prohibition applies whether or not the data subject takes an action regarding the processing of their personal data.’ In ‘Article 29 Data Protection Working Party: Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’, WP251REV.01, 6 February 2018, 19.
ibid para 33.
ibid para 35.
ibid para 38.
ibid para 40.
ibid para 47.
ibid para 43.
ibid para 43.
ibid para 43.
ibid para 46.
ibid paras 48–50, namely the right to information, the right to rectification and the right to erasure.
ibid para 54.
ibid para 58.
ibid para 57.
Amsterdam Court of Appeal (Gerechtshof Amsterdam), ECLI:NL:GHAMS:2023:796, Case No 200.295.747/01, 4.4.2023; ECLI:NL:GHAMS:2023:793, 200.295.742/01, 4.4.2023; ECLI:NL:GHAMS:2023:804, Case No 200.295.806/01, 4.4.2023.
Amsterdam Court of Appeal (Gerechtshof Amsterdam), ECLI:NL:GHAMS:2023:793, 4 April 2023.
ibid para 3.21.
ibid para 3.21.
ibid para 3.21.
ibid para 3.24.
The table was created by the author to summarize the key aspects of the judicial cases examined in this chapter.
For the use of ‘bricolage’ in a legal context, see Mark Tushnet, ‘The Possibilities of Comparative Constitutional Law’ (1999) 108 Yale Law Journal 1285–86. See also the definition of this activity in Claude Lévi-Strauss, The Savage Mind (University of Chicago Press 1966) 17–18. According to Lévi-Strauss, there is a distinction between engineering and bricolage. The engineer approaches a task with a predefined project in mind and works with the materials available to achieve it. The bricoleur, in contrast, makes do with ‘whatever is at hand’, with a given set of tools and materials. Tushnet uses this term to explain the work of interpreters as they find themselves in an intellectual world that ‘provides them with a bag of concepts “at hand”, not all of which are linked to each other in some coherent way’.
Friedrich Müller, ‘Arbeitsmethoden des Verfassungsrechts’ in Enzyklopädie der Geisteswissenschaftlichen Arbeitsmethoden (R. Oldenbourg Verlag 1971) 123–90 (discussing that the process of making legal norms concrete involves extensive engagement with legal materials, including doctrines, commentaries, case law, comparative legal documents, and numerous texts that are not identical to the respective legal norm text); Friedrich Müller and Ralph Christensen, Juristische Methodik—Band I—Grundlegung für die Arbeitsmethoden der Rechtspraxis (11th edn, Duncker & Humblot 2013) 263; Matthias Klatt, Making the Law Explicit: The Normativity of Legal Argumentation (Hart Publishing 2008) 54–56 (‘the text is only a “guideline”, as such it has no claim to normativity . . . the rule is not the beginning, but the product of the process of the application of the law’).