International migration management in the age of artificial intelligence

Artificial intelligence (AI) has the potential to revolutionise the way states and international organisations seek to manage international migration. AI is gradually going to be used to perform tasks, including identity checks, border security and control, and analysis of data about visa and asylum applicants. To an extent, this is already a reality in some countries such as Canada, which uses algorithmic decision-making in immigration and asylum determination, and Germany, which has piloted projects using technologies such as face and dialect recognition for decision-making in asylum determination processes. The article's central hypothesis is that AI technology can affect international migration management in three different dimensions: (1) by deepening the existing asymmetries between states on the international plane; (2) by modernising states' and international organisations' traditional practices; and (3) by reinforcing the contemporary calls for more evidence-based migration management and border security. The article examines each of these three hypotheses and reflects on the main challenges of using AI solutions for international migration management. It draws on legal, political and technology-facing academic literature, examining the current trends in technological developments and investigating the consequences that these can have for international migration. In particular, the article contributes to the current debate about the future of international migration management, informing policymakers in this area of growing importance and fast development.


Introduction
doi:10.1093/migration/mnaa003 © The Author(s) 2020. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.

Artificial intelligence (AI) technology is increasingly used in public and private domains to perform tasks usually associated with human intelligence, such as the ability to learn from data, the capacity to recognise images and speech, and to process natural language (Nilsson 2014; Ertel 2018). The focus on such technologies is not recent, though: Alan Turing investigated the potential for machines to think as early as 1950 (Turing 1950), and AI as a discipline was initiated in 1956 with the Dartmouth Summer Research Project on Artificial Intelligence (Moor 2006). Since then, the exponential increase in computational power combined with the availability of large quantities of data ignited the contemporary surge in interest in AI (Russell and Norvig 2010). To date, no machine has passed the Turing test, and it thus remains to be seen whether one day a computer will be able to think like a human being (Turing 1950). Still, references to AI, machine learning, and algorithms have progressively permeated the social sciences and humanities scholarship (Crawford and Calo 2016; Calo 2017; Kitchin 2017; Cath et al. 2018; McGregor, Murray and Ng 2019). Drawing on this growing body of academic literature, this article examines the implications of AI for international migration.
AI is understood here as 'a growing resource of interactive, autonomous, self-learning agency, which enables computational artifacts to perform tasks that otherwise would require human intelligence to be executed successfully' (Taddeo and Floridi 2018: 751). Simply put, AI is 'a set of techniques aimed at approximating some aspect of human or animal cognition using machines' (Calo 2017: 404). One of these techniques is machine learning, or 'the systematic study of algorithms and systems that improve their [algorithms'] knowledge or performance with experience' (Flach 2012: 3). AI thus refers to technologies that perform tasks usually associated with humans and act intelligently by learning from data with the aid of algorithms (sets of instructions used to solve problems). Algorithms have been used for millennia but have gained importance in our contemporary societies due to the power of computers to gather and analyse large quantities of data at a speed that is far superior to what a human being would be capable of doing.
AI algorithms draw on vast amounts of data, including big data, to learn and make inferences about patterns and future behaviour (Burrell 2016; Wachter, Mittelstadt and Russell 2018; McGregor, Murray and Ng 2019). Big data, or the 'high velocity, complex and variable data' (Tech America Foundation 2012), has great potential to be used in forecasting and managing migratory flows (Rango 2015; Pew Research Center 2017; Beduschi 2018; IOM 2018; Spyratos et al. 2018). Fuelled by big data, AI algorithms are said to increase efficiency by streamlining repetitive tasks, notably those that require the review of large amounts of paperwork (Chui et al. 2018).
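To make the idea of an algorithm learning a pattern from data concrete, consider a minimal sketch (all arrival figures are invented for the example and bear no relation to real migration data): the simplest form of such inference is fitting a trend to past observations and extrapolating it forward.

```python
# Illustrative only: fitting a linear trend to hypothetical monthly arrival
# counts and extrapolating it, the simplest instance of an algorithm
# "learning" a pattern from past data to infer future behaviour.

def fit_linear_trend(values):
    """Ordinary least squares fit of y = a + b*x for x = 0, 1, 2, ..."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

def forecast(values, steps_ahead):
    """Extrapolate the fitted trend a given number of periods ahead."""
    intercept, slope = fit_linear_trend(values)
    return intercept + slope * (len(values) - 1 + steps_ahead)

# Hypothetical monthly arrival counts at a border point.
arrivals = [1200, 1350, 1500, 1650, 1800, 1950]

print(round(forecast(arrivals, 1)))  # next month's projected arrivals
```

Real systems of the kind discussed in this article rely on far richer models and data sources, but the underlying logic is the same: past data constrains a model, and the model is then used to make inferences about what has not yet been observed.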
AI thus has the potential to revolutionise the way states and international organisations seek to manage international migration. AI is gradually going to be used to perform tasks including identity checks, border security and control, and analysis of data about visa and asylum applicants (Chui et al. 2018). To an extent, this is already a reality in some countries such as Canada, which uses algorithmic decision-making in immigration and asylum determination (Molnar and Gill 2018), and Switzerland, which is currently testing an algorithm to improve refugee integration (Bansak et al. 2018). In the European Union (EU), the revised Schengen Information System (SIS) will be using facial recognition, DNA, and biometric data to facilitate the return of migrants in an irregular situation (Regulation 2018/1860). These examples illustrate well the current trend of increasing reliance on new technologies, including AI, for international migration management and border security.
Against this background, the article evaluates the impact of AI technologies on international migration management. Migration management is understood as the different strategies, policies, processes, and procedures negotiated and adopted by relevant actors at the international level to provide a framework to manage migratory flows in an orderly and predictable manner. This notion thus directly relates to that of migration governance but has a narrower scope (Crépeau and Atak 2016). Migration management is a contested notion (Ansems de Vries and Guild 2019). As Castles notes, the political will and assumed capacity to manage migratory flows is often contradicted by reality, as migration is a complex phenomenon that cannot be easily 'managed' (2004b: 214). Still, states and international organisations have made it clear that they wish to 'manage large movements of people' through the 'implementation of planned and well-managed migration policies' (New York Declaration for Refugees and Migrants 2016: paras. 11 and 15). Therefore, the article focuses specifically on states' and international organisations' role in international migration management. Although the academic literature points out the contributions of other stakeholders such as private intermediaries and non-governmental organisations in international migration governance (Castles 2004a; De Haas 2010; Groutsis, Van den Broek and Harvey 2015), these remain outside the scope of this article. This choice is justified by the growing uses of AI technologies by states and international organisations in the field of international migration.
The article structures its analysis following a tripartite conception of international migration management, according to which a diversity of actors determine how migration management is exercised and understood through a variety of practices and discourses (Geiger and Pécoud 2010: 2). The article's central hypothesis is that AI technology can affect international migration management in these three different dimensions: (1) by deepening the existing asymmetries between states on the international plane; (2) by modernising states' and international organisations' practices; and (3) by intensifying the contemporary calls for more evidence-based migration management and border security.
Therefore, the article examines each of these three hypotheses consecutively (Sections 2-4), before reflecting on the main challenges brought by the potential over-reliance on AI solutions for international migration management (Section 5). It draws on legal, political, and technology-facing academic literature, examining the current trends in technological developments and investigating the consequences that these can have for international migration. In doing so, it contributes to the current debate about the future of international migration management, thus informing policymakers in this area of growing importance and fast development.

AI divide: deepening actor asymmetries
States, international organisations, and a multitude of regional, subnational, and local stakeholders (Panizzon and van Riemsdijk 2019) evolve in what has been described as a 'fragmented tapestry of global migration governance' (Betts 2011: 309). This section focuses specifically on the role of states and international organisations within this complex architecture, by reflecting on the potential impact brought about by the development of AI technologies in the field of international migration.
In the aftermath of the so-called migration crisis in 2015-16, states, as well as a variety of stakeholders, have engaged in efforts to strengthen the global mechanisms for migration governance. The United Nations (UN) General Assembly (2016) adopted the New York Declaration for Refugees and Migrants, followed by the negotiation and adoption of two global compacts (Global Compact on Refugees 2018; Global Compact for Safe, Orderly and Regular Migration 2018). Still, states interact in asymmetric ways at the global level. This is in part because, although a greater proportion of migratory movements are South-South (Abel and Sander 2014; UK Research and Innovation 2019), states in the Global North still set the global agenda for international migration.
The deployment of AI technologies can deepen such asymmetries in two main ways. Firstly, it can amplify the so-called digital divide (Norris 2001) between states with more advanced technological capabilities and those lacking such technologies. AI enthusiasts' main claim is that it can be used to cut costs and increase efficiency (Chui et al. 2018). AI technologies would, therefore, be advantageous for migration and asylum procedures, which are normally lengthy, primarily manual, and largely based on migrants' and asylum-seekers' claims. States are already considering using AI technologies in this field. For example, the German Federal Office for Migration and Refugees (Bundesamt für Migration und Flüchtlinge, BAMF) has piloted projects using technology such as automatic face or dialect recognition, name transliteration, and analysis of mobile data devices for identity verification to support decision-making in the asylum determination process (Tangermann 2017; Federal Office for Migration and Refugees 2018). The objective is to verify migrants' claims on identity and country of origin, which would otherwise require lengthy assessment by human expert linguists (Patrick, Schmid and Zwaan 2019). The EU has recently been adopting new legislation aimed at making use of AI and related technologies in migration and security areas (Regulation 2018/1860; Regulation 2019/816; Regulation 2019/818). States that are not members of the Organisation for Economic Co-operation and Development (OECD), such as Bangladesh, Nepal, and Malaysia, are also using technology to automate their migration management systems (Gelb and Krishnan 2018). Similarly, states are investigating the possibilities of using AI technologies to predict the next 'migration crisis'. For instance, Swedish authorities have used 'migration algorithms' based on techniques such as machine learning to forecast future migration flows (Carammia and Dumont 2018).
Accordingly, AI technologies could cement the leading position of AI-capable states, which would be placed at the forefront of the global efforts to manage migration in the years to come. Such a situation would create an AI divide. In this new paradigm, states with less advanced technological means could be further isolated. This is particularly important when one takes into consideration the development of AI technologies in countries such as China, due to the juxtaposition of the so-called liberal democracies with more authoritarian regimes on the same side of the divide. For instance, China has become a more prominent actor in global migration debates (Lu 2014; IOM 2019b). Although there is little evidence suggesting that AI is currently used in migration management, Chinese authorities certainly master data-driven AI technologies, including the contested social credit system (Backer 2018; Matsakis 2019). Besides, the AI divide could either reinforce or, conversely, represent a shift from the North-South paradigm (Chetail 2008). If AI capabilities concentrate in the Global North, the AI divide would rather reinforce the existing North-South paradigm. However, if states in the Global South take the opportunity to develop their AI capabilities, that could give them an additional means to exert influence in matters related to migration management as fully fledged AI-capable states. For example, Latin American states such as Brazil, which has become a more common destination country for migration and asylum in the region (Bertino Moreira 2017), could take this opportunity to further strengthen their position in international migration management. Accordingly, the AI divide could simultaneously contribute to deepening the already asymmetrical relationships between North and South, while shifting the focus slightly towards what could come to be an 'AI-capable states and the others' split in international migration management.
Secondly, the AI divide would also impact international organisations. They could embrace the opportunity to assist less AI-capable states in keeping up with technological advances. Capacity-building programmes and initiatives aiming at stepping up technical support could increase the use of AI technologies. Such a role would not represent a radical change to what many international organisations are already doing. For instance, the World Bank has been assisting less developed states with the implementation of digital identity solutions through its ID4D programme (World Bank 2018). However, there is a risk that international organisations would prioritise an agenda heavily influenced by AI-capable states in matters relating to AI technology. Once more, such a possibility is not too distant from present issues. For example, the role of international organisations as conveyors of the Global North's migration management views has been criticised in the literature (Geiger 2010; Ashutosh and Mountz 2011). More specifically, the International Organisation for Migration (IOM) and the Office of the United Nations High Commissioner for Refugees (UNHCR) have been criticised for becoming too involved in the implementation of the EU's global approach to migration via subcontracting and rule transmission (Lavenex 2016). By the same token, Pécoud has argued that the IOM promotes the interests of developed states while claiming to work in the interest of all (Pécoud 2018).
Accordingly, international organisations should maintain their global focus and include the needs and views of less AI-capable states in the future uses of AI technologies in migration management. Without such an approach, international organisations would further entrench the asymmetric relationships between developed and less developed states in the arena of migration governance. Conversely, by strengthening the technological capacity of the less AI-capable states, international organisations may be able to alleviate the negative effects of an impending AI divide.
AI technologies may thus have a profound impact on states' and international organisations' relationships at the international level. The AI divide may also affect their practices in the area of international migration management, which will be analysed in the next section.

AI tools: modernising traditional practices
The growing availability of AI tools in this field raises the question of whether these new tools would considerably change the current practices. This section examines the impact of AI technologies on the practices of states and international organisations.
AI-capable states may use AI algorithms to predict the next 'migration crisis' with greater precision (Nyoni 2017; Carammia and Dumont 2018), thus foreseeing incoming movements of people based on a variety of available data, including Wi-Fi positioning, big data, or Google Trends (Alessandini et al. 2017; Connor 2017; Dijstelbloem 2017). The consequences of deploying such technologies are at least twofold.
On the one hand, states could use AI technologies to foresee arrivals and prepare more efficiently for large influxes of people. For instance, decision-makers could use AI algorithms to analyse large amounts of data and identify potential gaps in their reception facilities. These gaps could relate, for example, to the lack of sufficient places for families with children or vulnerable unaccompanied children. Identifying such gaps and acting on them would allow state authorities to prepare and adapt their reception conditions, thus complying with their legal obligations under international human rights law (IHRL). States parties to international human rights treaties such as the International Covenant on Civil and Political Rights (ICCPR), the European Convention on Human Rights (ECHR), or the American Convention on Human Rights (ACHR) have agreed to respect, protect, and fulfil the legal rights set forth by these instruments. They owe these obligations also to foreigners under their jurisdiction (Human Rights Committee General Comment No. 15 1986; M.S.S. v. Belgium and Greece 2011; Advisory Opinion OC-21/14 2014). These obligations encompass, for example, the prohibition of torture and inhuman or degrading treatment (Article 7 of the ICCPR; Article 3 of the ECHR; Article 5 of the ACHR). In this regard, states breach their treaty obligations in case of inhuman reception conditions for migrants and asylum-seekers (M.S.S. v. Belgium and Greece 2011). AI technologies could be used to prevent such breaches.
On the other hand, AI-capable states may be inclined to put measures in place to prevent migrants' and asylum-seekers' arrivals. Regrettably, that would reinforce the existing non-entrée policies; in other words, the existing variety of measures aimed at obviating access by migrants and asylum-seekers to a state's territory (Hathaway 2005). Non-entrée policies encompass visa controls, carrier sanctions, the establishment of international zones, and maritime interceptions on the high seas (Gammeltoft-Hansen and Hathaway 2015). AI technologies could be instrumental for each of these policies, for example by streamlining visa controls and identity checks in offshore facilities. Besides, AI technologies could be used to reinforce unlawful refoulement practices (prohibited by Article 33 of the Refugee Convention). For example, such technologies could assist targeted maritime interventions aiming at returning migrants and asylum-seekers to places where they may fear for their lives or freedom. In this regard, AI is at risk of becoming another political tool, used to reinforce old state practices that aim to curb international migration and prevent asylum-seekers from reaching states' territories.
The development of AI technologies may also influence the practice of international organisations operating in the field of international migration. International organisations have already demonstrated a keen interest in adopting new technologies. Some already use machine learning and AI in conjunction with biometric technology (The Engine Room and OXFAM 2018). For instance, the IOM has launched the Big Data for Migration Alliance, which aims to explore the uses of technology in international migration (IOM and European Commission 2018). The UNHCR has been using technology for case management since 2002, when it developed the IT case management tool Profile Global Registration System (proGres). Since then, the UNHCR has massively deployed biometric technology (e.g. fingerprints and iris scans) for refugee registration and the distribution of aid in camps. It does so through a sophisticated array of IT tools complementing proGres, such as the Biometric Identity Management System (BIMS), CashAssist, which enables the distribution of cash assistance, and the Global Distribution Tool (GDT), which enables the distribution of in-kind assistance such as food. Lately, the UNHCR has been piloting a new tool, the Population Registration and Identity Management EcoSystem (PRIMES), which consolidates all UNHCR data in a single database accessible via the Internet (UNHCR 2019).
The uses of technology in international migration management and humanitarian action raise two main sets of concerns. Firstly, there are obvious issues about cybersecurity (Singer and Friedman 2014), notably as international organisations such as the UNHCR aggregate personal data of vulnerable people in centralised databases, possibly making them an attractive target for hackers. Secondly, there are growing reservations about the emergence of a form of 'surveillance humanitarianism' (Latonero 2019). The claim is that, by increasingly relying on technology to collect the personal data of vulnerable people such as migrants and refugees, these organisations create additional bureaucratic processes that could lead to exclusion from protection. International organisations should, therefore, strive to protect the data of the vulnerable people they intend to serve (Kuner and Marelli 2017). Such concerns echo much of the recent scholarship that has raised alarm about the negative effects of data-driven AI in society (O'Neil 2017; Noble 2018; Eubanks 2018; Zuboff 2019). In particular, research demonstrates that AI algorithms can reinforce stereotypes leading to social injustice (Noble 2018), and that the uses of AI technology may narrow the scope of the welfare state (Eubanks 2018). Moreover, due to their 'black box' nature and the fact that they may be protected under trade secrets, AI algorithms may contain undetectable inaccuracies and mistakes (Pasquale 2015), which can lead to unlawful discrimination (Citron and Pasquale 2014).
To summarise, while the argument that AI technology may bring innovation, reduce costs, and build more effective systems for international migration management (Chui et al. 2018) is an attractive one, it is equally important that such tools are developed and deployed within ethical (Floridi et al. 2018) and legal frameworks, in particular IHRL (McGregor, Murray and Ng 2019). As Crépeau and Atak have observed, there has been an 'insufficient focus on the human rights dimension in migration management' (Crépeau and Atak 2016: 113). It is the right time to address this gap, notably due to the potential influence of AI technologies in the area of migration management. IHRL should thus serve as a baseline for action in this field, with an emphasis on guiding policy and providing for the protection of migrants' and asylum-seekers' rights throughout the migratory processes. Policymakers should bear in mind that the use of AI technologies may lead to mistakes. For example, AI algorithms may accidentally misidentify a migrant as a terrorist or miscalculate the risk of ill-treatment upon deportation to their country of origin. Blind over-reliance on AI technologies could lead to serious breaches of human rights if, in these scenarios, migrants were deprived of liberty due to misidentification, or if they were subjected to torture or inhuman treatment upon deportation.
In addition, the intertwined relationships between private and public sectors in the technology area raise important concerns about who bears responsibility for harms caused by the deployment of AI solutions: the software developer, the company commercialising the AI solution, or the decision-maker who adopted a decision based on that AI tool. This issue, which is not exclusive to the field of migration management, remains largely unsettled as a matter of law (Cerka, Grigiene and Sirbikyte 2015; Sullivan and Schweikart 2019). At the UN level, the focus has been on the application of the UN Guiding Principles on Business and Human Rights, a non-legally binding international framework, to technology companies (Ruggie 2007; Human Rights Council 2008). Negotiations to adopt a legally binding instrument are ongoing at the UN level (Open-ended Intergovernmental Working Group on Transnational Corporations and Other Business Enterprises with Respect to Human Rights 2019). However, the existing framework, which is based on the idea that businesses should respect, protect, and remedy human rights, still largely relies on businesses' goodwill to comply with such requests (Addo 2014).
Therefore, a way forward is for states and international organisations to adopt a human rights-based approach (Crépeau and Atak 2016; UN Secretary-General's High-level Panel on Digital Cooperation 2019) and conduct human rights impact assessments (World Bank 2013) to verify that the uses of AI technologies in migration management are not detrimental to migrants' and asylum-seekers' rights. In doing so, they would be able to scrutinise their policies, programmes, and practices to identify and measure the potential harms to human rights. States and international organisations should also require that a due diligence assessment (Harrison 2013) be undertaken by businesses developing the AI solutions that they will later implement in the field of migration management. Such assessments should be included in the procurement procedures through which state authorities and international organisations acquire specific AI tools developed by private sector companies. Accordingly, AI technologies could become instrumental in modernising states' and international organisations' practices, without compromising the protection of migrants' and asylum-seekers' human rights.

AI data: supporting the evidence-based discourse
The deployment of AI technologies for international migration management could also contribute to the intensification of the contemporary discourse calling for more data-driven and evidence-based policymaking in this area. As AI algorithms are fuelled by data, the more pervasive their use in migration management, the more data they will simultaneously require and produce. Such a situation may strengthen the current trend of 'datafication of migration management' (Broeders and Dijstelbloem 2015: 242). This formulation refers to the over-reliance on different types of data, including satellite and big data, for migration management and border control. In particular, the datafication follows from the growing state investment in software and information management systems for border surveillance and migration management (Broeders and Dijstelbloem 2015). Yet, experts have argued that more data on migration is needed to inform policymaking. For instance, Singleton has argued that there is 'a pressing need for reliable, timely and comparable statistical data on migration and asylum, as well as on arrivals at national borders' to guide policymaking at the EU level (Singleton 2016: 1). The IOM has also emphasised the need for more reliable migration data to inform policymaking, as the available datasets are often limited to certain regions and set periods (IOM 2017; IOM and European Commission 2018).
Per se, evidence-based policymaking, or the belief that policy decisions should follow from rigorous and accurate scientific evidence (Cairney 2015; Parkhurst 2016), is certainly a laudable idea. It is good practice to adopt decisions supported by firm scientific evidence. However, policymaking is considerably broader than technical decision-making, which means that policymakers often compromise on critical issues and act on public perceptions or fears (Cairney 2015; Parkhurst 2016; Bianchi and Saab 2019; Khosrowi 2019). Moreover, evidence-based policymaking presents important difficulties. The definition of 'evidence' is often challenging, as policymakers tend to give precedence to quantitative evidence to the detriment of qualitative data (Baldwin-Edwards, Blitz and Crawley 2019). In addition, biases can permeate the creation, selection, and interpretation of evidence (Parkhurst 2016).
These difficulties are also present in the field of international migration. Quantitative evidence originating in statistical data, and more recently big data (Rango 2015; Alessandini et al. 2017; Beduschi 2018), is usually favoured to the detriment of qualitative data (Baldwin-Edwards, Blitz and Crawley 2019). For example, Crawley and Blitz (2019) suggest that there are important discrepancies between the qualitative evidence typically provided by academic research into migrants' decisions to migrate and the quantitative assumptions underpinning the EU's efforts to curb irregular migration. Besides, the potential for biases is quite significant in international migration. For example, scholars have highlighted differences between the concepts used by academics (e.g. 'drivers' of migration) and policymakers (e.g. 'root causes' of migration) (Carling and Collins 2018). Such a choice of concepts is not without significance, because it influences the way evidence is created. By framing the topic of research from the outset, the resulting evidence risks only reaffirming the already predetermined policy choice (e.g. that migration has root causes in development issues, rather than being driven by a vast array of considerations and choices).
Therefore, it is important that quantitative as well as qualitative datasets, such as those collected by independently funded academic research projects, are used to train AI algorithms for use in migration management. For example, algorithms could analyse datasets composed of extensive qualitative interviews with migrants to discover patterns and make predictions about people's intentions to migrate. They could then triangulate these findings with statistical and other quantitative data sources (such as the number of arrivals and the number of asylum applications in a set period and territory) to predict large movements of people. These algorithms could also be used to monitor and evaluate government policies and programmes.
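A minimal sketch can illustrate this triangulation idea. Everything in it is invented for the example (the interview snippets, the keyword list, the arrival counts, and the 0.5 cutoff): a crude qualitative signal derived from interview text is combined with a quantitative trend in arrival statistics, and a large movement is flagged only when the two sources point the same way.

```python
# Illustrative only: combining a keyword-derived signal from qualitative
# interviews with a quantitative arrival trend to flag a likely increase
# in movements. Keywords, data, and thresholds are invented assumptions.

INTENT_KEYWORDS = {"leave", "depart", "move", "relocate"}

def intent_score(interviews):
    """Share of interview transcripts mentioning an intention to migrate."""
    hits = sum(1 for text in interviews
               if INTENT_KEYWORDS & set(text.lower().split()))
    return hits / len(interviews)

def flag_large_movement(interviews, monthly_arrivals, score_cutoff=0.5):
    """Flag only when qualitative and quantitative signals agree."""
    rising = monthly_arrivals[-1] > monthly_arrivals[0]
    return intent_score(interviews) >= score_cutoff and rising

interviews = [
    "we plan to leave before winter",
    "my family hopes to relocate next year",
    "we intend to stay in the village",
]
arrivals = [900, 1100, 1400]

print(flag_large_movement(interviews, arrivals))
```

A production system would of course use far more sophisticated natural language processing than keyword matching, but the design choice is the same one argued for above: neither data source decides alone, and qualitative evidence enters the model on equal footing with the statistics.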
Importantly, such qualitative and quantitative datasets already exist. For instance, the Economic and Social Research Council-funded Mediterranean Migration Research Programme has been collecting qualitative data on different aspects of the migratory process (Baldwin-Edwards, Blitz and Crawley 2019). Research agencies and programmes, including the European Research Council and Horizon 2020, have similar projects (Baldwin-Edwards, Blitz and Crawley 2019). International organisations have adopted new initiatives aimed at harnessing data for migration. Projects such as the United Nations Global Pulse (UN Global Pulse 2018) and the Migration Data Portal (IOM 2019a) provide key resources in this area.
Accordingly, the increasing calls for more evidence-based policymaking in the field of international migration management go hand in hand with the proliferation of different resources and datasets in the field. Such an abundance of available data on migration can benefit the development of data-powered AI algorithms. The potential surge in the uses of such algorithms would certainly lead to the production of more data on migration, justifying in turn the need for more data-driven policymaking in this area. Still, the adoption of AI solutions for international migration management is not without its risks, which is the subject of the next section.

AI risks: addressing design and implementation challenges
As with any new venture, using AI technologies for international migration management can be challenging. This section investigates three sets of issues relating to the quality of the data underlying AI algorithms' design, migrants' data privacy, and algorithmic accountability and fairness.
First, there are concerns about the quality of the data used to train AI algorithms. It is generally accepted that poor-quality training data produces equally poor outcomes (Redman 2018; Richardson, Schultz and Crawford 2019). Some examples from other domains illustrate this risk. For instance, IBM's Watson failed at cancer identification because it could not interpret medical language, local acronyms, and consultation notes; in other words, the attributes needed for such identification were not found in the categorical structured data that Watson was using at the time (Schmidt 2017). More alarmingly, the use in the United States of COMPAS, an AI algorithm designed to inform judicial decision-making in sentencing by predicting the likelihood of reoffending, has allegedly led to racial discrimination (Courtland 2018). Predictive policing algorithms such as PredPol, which was used by police forces in the UK (Nilsson 2018), also present important issues (Ferguson 2017). One of the problems with predictive policing, as Richardson, Schultz and Crawford put it, is that 'actual crime data is often incomplete or distorted' (2019: 202). It is therefore important to assess the quality of the data used for training algorithms at an early stage in the algorithmic cycle. Failing to do so may lead to breaches of the human rights of those affected by the technology.
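Assessing data quality early in the algorithmic cycle, as argued above, can be as simple as an automated audit run before any training. The record fields, identifiers, and completeness threshold below are hypothetical illustrations, not a reference to any real system.

```python
from collections import Counter

# Hypothetical training records for a migration-related model; the field
# names and the 90% completeness threshold are illustrative assumptions.
records = [
    {"id": "a1", "age": 34, "origin": "X", "dialect": "d1"},
    {"id": "a2", "age": None, "origin": "X", "dialect": None},
    {"id": "a1", "age": 34, "origin": "X", "dialect": "d1"},  # duplicate record
]

def audit(records, required=("age", "origin", "dialect"), min_complete=0.9):
    """Flag duplicate identifiers and sparsely populated fields before training."""
    seen, duplicates, missing = set(), 0, Counter()
    for r in records:
        if r["id"] in seen:
            duplicates += 1
        seen.add(r["id"])
        for field in required:
            if r.get(field) is None:
                missing[field] += 1
    completeness = {f: 1 - missing[f] / len(records) for f in required}
    low = sorted(f for f, c in completeness.items() if c < min_complete)
    return {"duplicates": duplicates, "low_completeness": low}

print(audit(records))
```

A report of this kind would surface exactly the kinds of defect discussed above, such as one person appearing several times, before they can propagate into a trained model.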
Such problems can also permeate the design and implementation of AI in the migration context. As Singleton (2016) indicates, mistakes in the use of migration data are common. For instance, she points out that analysts may conflate administrative data on the numbers of migrants with estimates of migration, and border-crossing data may be incorrectly used to represent migrant numbers. The latter is particularly problematic because one individual may cross the same border several times and thus be counted once per crossing, as if each crossing were a different person. In addition, AI algorithms can reflect the biases of their creators, thus reinforcing discrimination (Crawford, Miltner and Gray 2014; UN Secretary-General's High-level Panel on Digital Cooperation 2019). For instance, AI algorithms using natural language processing techniques built on less diverse datasets could amplify the biases and stereotypical associations of their designers (Rudinger, May and Van Durme 2017). Such algorithms have an important place in migration management, as they can be used for dialect recognition, streamlining asylum determination processes (Tangermann 2017; Federal Office for Migration and Refugees 2018). Gaps in the data about the dialects of ethnic minorities used to train the algorithms could reinforce existing patterns of discrimination vis-à-vis these minorities. Similarly, if face recognition technologies were used in migration management, for identity verification, for example, they could fail to recognise the faces of a large proportion of migrants and asylum-seekers, based on their race or ethnic origin. This is because face recognition technologies have difficulty recognising people with darker skin types, in particular women, due to the lack of diversity in the data used to train the algorithms (Buolamwini and Gebru 2018). Accordingly, if AI becomes more widespread, the output of one AI algorithm will feed the subsequent ones, possibly cascading the original mistakes, gaps, and misuses of data. Therefore, software developers should address the quality of the data used for AI algorithms as a matter of priority.
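One concrete safeguard against the disparities described above is a disaggregated accuracy check, in the spirit of the audits conducted by Buolamwini and Gebru (2018). The demographic groups and match outcomes below are entirely invented; the sketch only shows the shape of such an audit.

```python
from collections import Counter

# Hypothetical evaluation results for an identity-verification model:
# (demographic group, whether the face was correctly matched). The groups
# and outcomes are invented for illustration.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def accuracy_by_group(results):
    """Accuracy disaggregated by demographic group."""
    totals, correct = Counter(), Counter()
    for group, ok in results:
        totals[group] += 1
        correct[group] += ok
    return {g: correct[g] / totals[g] for g in sorted(totals)}

acc = accuracy_by_group(results)
gap = max(acc.values()) - min(acc.values())
print(acc, round(gap, 2))  # a large gap signals an unbalanced training set
```

Reporting accuracy only in aggregate would hide exactly the group-level failures at issue here; the disaggregated view makes the disparity visible before deployment.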
Secondly, there are important concerns about data privacy in migration matters. It is well established that, in principle, individuals enjoy the same rights online as they do offline, including the right to privacy or the respect for one's private life (Human Rights Council 2012; UN General Assembly 2014; Human Rights Council 2016; Schmitt 2017). The right to the respect for one's private life encompasses telecommunications and electronic data (Uzun v. Germany 2010; S. and Marper v. United Kingdom 2008; Big Brother Watch and others v. United Kingdom 2018; Tristan Donoso v. Panama 2009; Escher et al. v. Brazil 2009). Non-nationals' data privacy must also be protected insofar as they fall within the jurisdiction of a state party to a human rights treaty recognising this right (UN General Assembly 1985).
Still, states may impose limitations on the exercise of this qualified right, including in connection with migration matters. As per Article 8 of the ECHR, these limitations must be adopted in accordance with the law, pursue a legitimate interest (e.g. national security, public safety, the prevention of disorder or crime, or the protection of the rights and freedoms of others), and comply with the requirements of necessity and proportionality. The EU's General Data Protection Regulation (GDPR) also allows for restrictions regarding, for example, national security and public security, in its Article 23(1). As the EU has firmly embedded migration within its security agenda, it is conceivable that member states would rely on this provision to impose restrictions on migrants' data protection. For instance, the UK has already imposed restrictions relating to 'the maintenance of effective immigration control, or the investigation or detection of activities that would undermine the maintenance of effective immigration control' (Data Protection Act 2018: Schedule 2 Part 1 Paragraph 4). Under the Data Protection Act 2018, non-nationals, including EU citizens within the UK's jurisdiction, do not benefit from important rights. These are the right to be informed about data collection (Articles 13 and 14 of the GDPR), the right of access (Article 15 of the GDPR), the right to erasure (Article 17 of the GDPR), and the rights to restrict processing (Article 18 of the GDPR) or to object to processing (Article 21 of the GDPR). The UK's High Court has recently ruled that this exemption under the Data Protection Act 2018 is lawful [R (Open Rights Group & the3million) v. Secretary of State for the Home Department 2019]. The outcome of this decision is thus quite indicative of the level of tolerance for restrictions on migrants' data privacy, at least in the UK.
Thirdly, there are critical issues with algorithmic accountability and fairness. AI algorithms may one day become instrumental in, for example, refusing a visa application or matching a migrant's identity to that of a suspected terrorist, and do so without a clear explanation of how the machine reached its decision. This situation may arise in part because algorithms can be trained using unsupervised learning, in which case the machine learns by itself, identifying patterns and making predictions that do not necessarily follow what a human would do (Graves and Clancy 2019). These 'thought processes' are not explainable by humans, not even by those who designed the algorithm at the very beginning of the training process (Pasquale 2015). Such a system thus creates considerable unpredictability and opacity, making it harder to understand how decisions that can have a crucial impact on one's human rights were made (McGregor, Murray and Ng 2019). Besides, decision-makers, like all human beings, generally tend to favour the results presented by machines, even when these are mistaken, a phenomenon known as automation bias (Wickens et al. 2015).
Consequently, individuals may face important difficulties in obtaining redress in case of violations of their rights. Fairness, which is a crucial component of the right to a fair trial, entails that a trial (including proceedings before administrative authorities) should progress in 'the absence of any direct or indirect influence, pressure or intimidation or intrusion from whatever side and for whatever motive' (Human Rights Committee General Comment No. 32 2007: 25). It is therefore possible to question whether reliance on opaque AI algorithms, combined with the persistence of automation bias, may excessively influence decision-making, thus compromising the fairness of the process. This is particularly concerning in the field of migration management, given the inherent power differentials between decision-makers and the migrants and asylum-seekers affected.
To overcome these issues, Citron has proposed the concept of 'technological due process', which encompasses accountability, fairness, and transparency guarantees (Citron 2008; Citron and Pasquale 2014). McGregor, Murray and Ng (2019) have proposed building upon an existing legal framework, IHRL, to address algorithmic accountability at all stages of the algorithmic life cycle, from design to implementation. According to this proposition, IHRL would be crucial to identify the potential harms that algorithmic decision-making could bring about. IHRL could be effective vis-à-vis states parties to international human rights treaties. However, this framework does not apply, as a matter of binding law, to non-state actors such as corporations (Clapham 2006). Wachter and Mittelstadt (2019) propose another possible solution: the establishment of a 'right to reasonable inferences', applicable to states and non-state actors alike. This right would require data controllers to justify the type of data collected, the inferences made on the basis of this data, and the accuracy and reliability of both the data and the methods used. Provided that there is enough political will to adopt such a solution, this new right could change the way algorithmic accountability is perceived, with implications for migration and asylum decision-making as well.

Conclusion
The development of AI technologies will likely impact international migration management in its three substantive dimensions (Geiger and Pécoud 2010): (1) the relationships between the key actors, (2) their practices, and (3) the discourses shaping international migration management. First, it is anticipated that the AI divide could deepen the asymmetries between states insofar as international migration management is concerned. Besides, the AI divide could either reinforce the existing North-South paradigm or, conversely, trigger a shift away from it. If AI-capable states concentrate in the Global North, the AI divide would reinforce the existing North-South paradigm. However, if states in the Global South take the opportunity to develop their AI capabilities, this could give them additional means to exert influence in matters related to migration management as fully fledged AI-capable states. International organisations will conceivably continue to play a crucial role in assisting less AI-capable states in keeping up with technological advances, thus contributing to bridging the AI divide.
Secondly, the development of AI technologies for international migration could reinforce old practices in the field of international migration management. For instance, it could provide new tools to strengthen existing non-entrée policies and state practices that breach the principle of non-refoulement. As noted earlier, AI technologies could be used to assist states in maritime interventions aiming to return migrants and asylum-seekers to unsafe countries and territories. AI would thus become another political tool for curbing international migration and preventing arrivals. However, if there is enough political will, AI technologies could also assist states and international organisations in preparing for large movements of people. For example, AI could do so by predicting a new 'migration crisis' and using this information to allocate resources better and improve reception conditions.
Thirdly, AI will possibly amplify calls for more data-driven, evidence-based policymaking in this field. Given that AI algorithms are powered by data from diverse sources, including big data, the more they are used in migration management, the more they will require and produce new datasets. While evidence-based policymaking is, in principle, good practice, it also involves important challenges (Cairney 2015; Parkhurst 2016). In particular, it is important to clarify what counts as 'evidence' in evidence-based policymaking. Due to the complex nature of migration, both quantitative and qualitative datasets, such as those collected by independently funded academic research projects, should be taken into consideration. Moreover, both types of datasets should be used to train AI algorithms for use in migration management. In this way, these new AI tools will be able to depict a fuller picture of migration and go beyond a purely numerical view of the migratory phenomenon.
Furthermore, applying AI to international migration management is not without risks. This article has highlighted and examined three main challenges, concerning the quality of the data used to train algorithms, migrants' data privacy, and algorithmic accountability and fairness. While these challenges are not entirely exclusive to international migration, they should be addressed before AI technologies become more widespread in international migration management. For instance, as pointed out by Molnar and Gill, Canadian decision-makers should carefully consider the effects of using AI algorithms on vulnerable and under-resourced communities, as their rights are less well-protected than those of Canadian citizens (2018: 4). Indeed, due to the black-box nature of AI algorithms, mistakes, inaccuracies, and biases may be difficult to detect. Consequently, individuals may not be able to obtain redress in case of violations of their rights. AI-capable states such as Canada and Germany should lead the way in implementing ethically sound and legally compliant AI solutions. That is also a valid consideration for international organisations, which increasingly use technology and collect data from the vulnerable populations they intend to protect (Kuner and Marelli 2017).
In a nutshell, while international migration management is likely to be influenced by developments in AI technologies, policymakers should not succumb to the hype surrounding AI without a comprehensive consideration of its implications. International migration is a complex and context-dependent phenomenon. AI alone is not a panacea, and it cannot provide a one-size-fits-all model for international migration management. Policymakers should thus consider all these aspects if they want to make sense of international migration management in the age of AI.