Doing Refugee Right(s) with Technologies? Humanitarian Crises and the Multiplication of “Exceptional” Legal States

Like borders, refugee protection settings beyond the EU often serve as testing grounds for technologies. This article takes a socio-legal perspective to show how humanitarian experimentation in these contexts is made possible through different, interacting challenges to sovereignty. It argues that the understanding that actors or their positions are "exceptional" allows for and justifies data practices that would otherwise not be legally permissible. Examples of data practices in refugee protection settings are connected to work in geopolitics, science and technology studies, and the sociology of law. The article shows how the position of the United Nations High Commissioner for Refugees (UNHCR) as negotiator on behalf of refugees, together with an emergency-driven techno-solutionism, not only interacts with the already precarious legal context most people seeking refuge find themselves in, but also coincides with the legal positioning of international organisations and with citizenship-oriented conceptions of privacy, further constituting people seeking refuge as (digital) rights optional. This is problematic not least because of concerns about adequate data protection and the implications of bias. Data flows and algorithms are generative of the politics of contemporary societies, implying that the structural undermining of the digital rights of people seeking refuge in the present can also hinder their access to rights in the future.


INTRODUCTION
"When refugees flee war, they become citizens of a country called UNHCR until they return to their country or are resettled. Does this country UNHCR not have the right to own the data of its citizens?" 1 With these words Imad Malhas, the founder of IrisGuard, seeks to legitimise the registration of "iris-scanning fraud-proof biometrics" by the United Nations High Commissioner for Refugees (UNHCR). 2 Back in 2013, the technology his company developed was key in trialling the capturing of biometric data upon registration for refugee protection with UNHCR Jordan. These days, obtaining biometric data by drawing on developments in Artificial Intelligence (AI) is standard UNHCR registration practice. The UN refugee agency has obtained the biometric information of at least 8 million people, which was supposed to be stored on a single database by the end of 2019. 3

Much of the reasoning behind UNHCR's biometric registration procedures is grounded in the conflation of a person's digital identity with their legal identity. The emphasis is put on Sustainable Development Goal 16.9 - universal access to a legal identity - and the desire that "no one should be left behind". 4 The problem is, however, that the registration of biometric information by UNHCR does not guarantee legal recognition or access to rights. Contrary to the assertion by Malhas in the above quote, UNHCR is neither a country nor a State. It is an international organisation (IO) with a legal mandate to provide protection and assistance to the world's refugees. 5 In this article, an IO is understood to be "an organization established by agreement under international law, with at least one organ with a will of its own (volonté distincte) and which possesses international legal personality". 6 As such, UNHCR is subject to international law and bears international human rights responsibilities.
7 But it has legal immunity regarding domestic and regional legislation. And it is well known that UNHCR and other organisations involved in refugee governance have problems with accountability. There are ample legal, political, and practical reasons why accountability towards the people it is mandated to protect tends to fall short. 8 UNHCR has positioned itself as negotiator of "protection space", especially in States that are geographically located in South-East Asia and the Middle East and are non-signatory to the 1951 Refugee Convention and its 1967 Protocol. 9 In this capacity, the UN refugee agency, along with other UN agencies, is deeply involved in governing practices that are generally associated with the responsibilities of sovereign States. The ways these organisations
operate to order information, classify populations, and provide benefits closely resembles developments in how State actors are deploying new business models and modes of public administration. 10 UNHCR and other IOs are also at the forefront of the datafication of procedures generally associated with the welfare state and border policing. 11 These practices do not necessarily reduce the power of State actors. As the intrinsically partial "Seeing like a State" practices of IOs are increasingly data-driven, 12 scholars have been asking urgent questions about their long-term consequences and normative implications. 13 Concerns about the digitisation of refugee governance go beyond concerns about the risk of failure.
Experimental technologies can also bring about harm if they work as intended. 14 There are some known examples of harms inflicted, but the extent of the potential consequences of humanitarian digital practices is largely speculative. 15 This is not the same as being unforeseen: speculation can serve as a critical compass that signals the need to be responsive. 16

This article contributes to discussions on the digital transformation of refugee protection by providing additional insights into how complex legal environments allow for treating people seeking refuge as (digital) rights optional. A socio-legal perspective is used to explore how multiple legal positions - often present in refugee protection settings - are established as "exceptional". This provides legitimation for the gathering of vast amounts of data in humanitarian settings and for technology use that otherwise - for instance, if it concerned EU citizens - would not be legally permissible and/or would require more thorough legal safeguards. As this study engages with what "exceptions" allow, the violent nature of sovereignty becomes clear. 17 Positioning different yet interacting conditions as "exceptional" allows for the legitimation and legalisation of governmental differentiation. It allows, as Ghassan Hage put it, the establishment of "another governmentality directed at subjects whose lives are constructed as less valuable in themselves, and against whom more repressive and violent forms of subjugation can be deployed with less difficulty." 18

Refugees - like any other citizens, non-citizens, and not-yet citizens - do engage in rights-claiming acts.
19 But the article points out how experimental data practices in refugee-hosting are enabled through different legal exceptions and rationalities, which further allow for constituting people seeking refuge as rights-optional non-citizens. And it seems that a figure - a sovereign - to whom calls for more rights could be directed is absent. The question therefore remains, following Agamben, whether it is only through claiming rights from sovereign power that a qualified life under biopolitical governance can be ascertained. It seems that the real or imagined consequences of data-driven governance serve to further entrench exclusionary, State-centred conceptions of citizenship and can foreclose alternative futures. 20

This article should not be misread as an argument against technologies. Such a position does not speak to the realities of most people, including refugees. Of course, technological innovations can also benefit refugees, and technology and infrastructure will also allow alternatives and contestations, which explains the importance of data activism. 21 Louise Amoore's perspective is useful in understanding resistance and rights claim-making, not in opposition to or outside of, but in relation to the iterative learning and attributive power of machines and humans. 22 She urges us - social scientists, scholars of ethics, and the humanities - to find ways to engage with the assumptions and arrangements of algorithms. 23

This brings me to a brief note on my methodology. Ethnographic studies on digital connectivity and refugee protection (Jordan, 2012-2020; Kurdish Region of Iraq (KRI), 2020-2023) have strongly informed my thinking and further explain the geographical focus of the examples given on digital transformations. Earlier, I have shown how digital connections are perhaps even more crucial in the lives of people who have been forcibly displaced than for sedentary populations, not least because they often need to navigate prolonged legal precarity.
24 Here, I draw on only a few empirical examples from these studies. Rather, I connect studies and reporting on data practices in refugee protection to work from science and technology studies (STS), the sociology of law, migration studies, political geography, and related fields. The datafication of humanitarian systems is very hard to study, given the security barriers, the absence of transparency, and other limitations to access. Of course, more work needs to be done to better understand how these systems work (or do not work) in practice and how they interact with the border work of State actors. The settings upon which I draw throughout this article are not exceptional; it is the positioning of people, times and circumstances, places, organisations, and States as "exceptional" that this study seeks to unravel.
The structure of this article is as follows: Section 2 clarifies the terminology used and points to some of the nuances required in debates on technology and data practices in refugee protection. In Section 3, I examine elements common to the digital transformations associated with refugee governance, particularly when it concerns precarious protection settings. This connects to topics frequently discussed in the academic literature on humanitarian interventions, such as the emphasis on crisis and emergency, 25 the marketisation of aid, 26 techno-solutionism, and 'neophilia' - the love of innovations and quick fixes. 27 Less academic attention has been given to the topic discussed in Section 4: the legal positioning of IOs and how this makes them much-desired partners for private bodies to team up with. Even if their data protection policies were legally enforceable, they set mediocre data protection standards compared to those of EU data legislation, for instance. Section 5 explores how persisting citizenship-oriented conceptions of privacy resonate with older othering practices, while newer characteristics of digital technologies can further restrict access to rights. It is likely that algorithms, in interaction with other technological and sociopolitical developments and human short-sightedness, will further restrict the space for claim-making for refuge and by people seeking refuge. This demonstrates the need for careful scholarly and practical engagement.

SITUATING NEWER TECHNOLOGIES
The capturing of body-centric information and other data for migration governance purposes has a long history which is closely interlinked with colonialism, the institutionalisation of racism, and policing. 28 Furthermore, trialling techniques and technologies in refugee governance is not new either. 29 What is newer is that technological developments - the increase in computational power coupled with vast quantities of data and algorithmic innovations - have enabled the capture of attributes that were previously imperceptible, 30 and that the extracted data is reconfigured into complex assemblages. 31 These can then be made more easily remotely available to other actors, who can use them for different purposes. The open-ended, shifting lifecycle of data means that it can have multiple and simultaneous meanings, purposes, and effects at different times and places. The following empirical example provides some insights into the ways in which data can travel and its influence on people's movement. I will then discuss key concepts and terminology used in this article, such as digitisation, different forms of data, and AI. This enables me to further question the beliefs behind "AI for social good" in subsection 2.3. 32

Data travels

I encountered the possible consequences of data practices long before I became more aware of the potential ways in which data can travel. In 2012, my citizenship allowed me the mobility to conduct research for my master's thesis on the experiences of Iraqi refugees in Jordan who had received a formal rejection for resettlement in the US. Because of a general reluctance to share refugee protection responsibilities, this durable solution to refugees' prolonged legal uncertainty is in short supply. The decision-making is opaque. Most receive little or no information on it.
33 This was somewhat different for those eligible for the Special Immigrant Visa (SIV), which was made available for Iraqi nationals who had been employed by the US government or US forces during the occupation of Iraq. 34 The selection procedures included security screening by the US Department of Homeland Security (DHS), on the premises of the Jordan office of the International Organization for Migration (IOM). If a person's request was denied, they received a letter stating the reason for their rejection. My research consisted of semi-structured interviews with people who had received such a letter. 35

One of the persons I spoke to was Sanad, a young Iraqi Assyrian man who had sought refuge in Jordan. 36 As he had worked as a translator for the US Army, he was eligible for an SIV. But Sanad had received a rejection letter citing "security reasons". He was convinced his rejection was due to his fingerprints. In 2006, on the streets of Baghdad, someone in his vicinity shot at US soldiers, who then retaliated. Sanad's friend was killed and Sanad, shot in the leg, was taken to prison. His fingerprints were taken, and he was interrogated for 11 days. It is impossible to say if his rejection for an SIV was directly or indirectly related to data gathered during this incident. But considering the persistence, transferability, and interoperability of personal data and the lack of a regulatory framework, his guess might be correct. I, like him, can only speculate. What this example shows is how the lack of transparency about how life-changing decisions are made, and the uncertainty it creates, have come to be (perceived as) connected to data practices.
Data gathered at one point in time for a particular purpose can later resurface elsewhere. Sanad's data, collected and stored by an external State actor (the US Government), might have been used later to identify him as "risky" in a third country (Jordan). It is beyond the scope of this article to discuss what safeguards can be taken against the misuse of refugees' and migrants' data by State actors and regional actors such as the EU, and/or under what circumstances such actors would be allowed to infringe on the digital rights of people on the move for security purposes. Instead, I look at the long relationship between security, aid, and technology, particularly regarding the capture and circulation of biometric data and other very personal information. 37

UNHCR's experimental usage of iris scans - not merely for registration but also for wider purposes such as the distribution of aid - is but one example of increasingly automated practices. These include and go beyond the digital capture of other biometric details such as voice and/or facial recognition; 38 the development of large platforms and databases to collect and share information with implementing partners, banks, etc.; 39 the use of information to distribute aid, including experimentation with blockchain for identification and cash distribution purposes; 40 statistics and algorithms geared to assess vulnerabilities and assist in decision-making; 41 the use of chatbots for communication with communities and psychosocial counselling; 42 and predictive modelling. 43 This interacts with the aid sector's difficulties with developing an integrated, transparent, and data-driven approach, not least since funding for localised cybersecurity systems and policies is severely lacking. 44

Digitisation, Datafication, and AI
AI refers to data-focused automated statistics, systems, and decision-making which draw upon advances in machine learning (ML) and natural language processing. Developments in AI have been key to enabling the use of biometrics for verification (one-to-one comparison), identification (one-to-many comparison), and categorisation (deducing whether an individual belongs to a group defined by selected attributes). AI is also used for chatbots, predictive modelling, automated decision-making, etc. The use of AI and Automated Decision-Making (ADM) is closely related to processes of digitisation and datafication. Digitisation is the "conversion and articulation of historically analogue information, processes, and actions through digital tools". 45 Datafication is the turning of numbers relating to our identity, societal positioning, and practices into data and datasets. 46 Often, the belief underlying datafication is that more data will result in more accurate and nuanced information about our behaviour in the present, but also concerning the future. ADM enhances the intensity of modern risk assessment as it differentiates between information and the possibilities that are available in the present to act upon the future. 47 Prior to this, decision-making and planning were equally predicated on assessments of how the future would and should likely unfold, but arguably in a manner that was more reflexive about their recursive impact.
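The distinction between one-to-one verification and one-to-many identification can be made concrete with a minimal sketch. Everything here is invented for illustration: the toy feature vectors, the threshold, and the two-entry "database" stand in for real biometric templates and matching algorithms, which are far more complex than a Euclidean distance check.

```python
# Illustrative sketch (not any real system): biometric *verification*
# (one-to-one) versus *identification* (one-to-many), using toy feature
# vectors in place of real iris/fingerprint templates.
from math import dist

THRESHOLD = 0.5  # hypothetical similarity cut-off

enrolled = {  # hypothetical enrolment database: person id -> template
    "A-001": [0.10, 0.90, 0.30],
    "A-002": [0.80, 0.20, 0.60],
}

def verify(claimed_id, probe):
    """One-to-one: does the probe match the template of the claimed identity?"""
    return dist(enrolled[claimed_id], probe) < THRESHOLD

def identify(probe):
    """One-to-many: search the whole database for the closest template."""
    best_id = min(enrolled, key=lambda pid: dist(enrolled[pid], probe))
    return best_id if dist(enrolled[best_id], probe) < THRESHOLD else None

probe = [0.12, 0.88, 0.31]    # a fresh capture from person A-001
print(verify("A-001", probe))  # True
print(identify(probe))         # A-001
```

The asymmetry matters for the argument above: verification only answers a question the person themselves poses ("am I who I claim to be?"), whereas identification searches everyone enrolled, which is why databases built for one purpose can so easily serve another.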
In the humanitarian sector, there is a lack of a consistent definition and shared terminology regarding data. 48 Distinctions need to be made between personal and non-personal data and between sensitive and non-sensitive data. Personal data is information that can be traced back to features specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of a natural person. Non-personal data does not relate to any specific person, either because it was never linked to such features or because it was made anonymous. But like personal data, non-personal data can be sensitive. The classification of sensitive data depends on how the likelihood and severity of harms are assessed.
An algorithm has been defined as "a recipe composed in programmable steps . . . organizing and acting on a body of data to quickly achieve a desired outcome". 49 Through techniques like ML, algorithms are "trained" to recognise patterns and to classify them. They operate by learning from data, and this learning changes the process. The common idea that AI operates like a black box is rather misleading, 50 for it gives the impression that the errors
of algorithmic decision-making can be traced back to the people - most often white men - behind the machine. 51 This would then imply that the solution simply lies in more inclusive design and/or more representative data. Yet even the designers of an algorithm would struggle to trace a decision back to its source. How weights are calculated and values attributed in ML systems is, by design, difficult to interpret. This does not mean that "algorithms are unaccountable as such". 52 They give accounts of themselves all the time. These accounts are partial and incomplete, but as feminist scholars have long argued, accounts of the social world have always been partial. 53 The expression "the human in the loop" seeks to identify the responsible human subject - such as the owner of the company that developed the technology for drone strikes or the surgeon deploying surgical robots - who could be held accountable, for instance in a court of law. It would be too easy to blame automation for mistakes and errors. Returning to Sanad's case, the presence of US army forces in Iraq, their ability to detain him for 11 days and to obtain his personal data, seemingly without clarifying its purpose, are clear examples of bad human decision-making and abuse of power. When it comes to gathering and using data, human decision-making around such procedures is often also a source of risk. This became clear in a more recent example concerning the misuse of data by the Dutch government to predict fraud among social welfare recipients. Although automated systems were used, unlawful forms of analysis based on racialised attributes, a lack of human oversight, and flawed legislation were the main reasons for the prolonged human suffering it caused. 54 The use of AI does not render human beings fully outside the "loop". Rather, there have always been contingent and fragile dynamics to human agency.
55 Human agency in relation to the use of ADM is a complex philosophical and sociological question that exceeds the scope of this article. Louise Amoore's Cloud Ethics gives important suggestions to which I will return in Section 5. 56

AI for (whose) good?
Proponents of AI in humanitarian settings speak of "AI for social good". 57 In themselves, technologies are neither good nor bad. This does not mean they are neutral, for they act upon and within social realities. Their operations, the data they engage with, and the outputs they produce are the result of pre-existing human relationships and material conditions. The workings of capitalism and the pervasive presence of (post)colonialism are equally present, and often already amplified, within humanitarian settings. 58 Humanitarian digitisation operates in and reworks these intersecting structures of exclusion, further explaining why, as I stated earlier, many scholars and civil society actors have expressed concern about the extractive nature and unforeseen consequences of digital experimentation in humanitarian settings.
At present, the actual intelligence of AI-driven technologies is still doubtful. 59 But their usage - in combination with the development of large platforms and databases - can have important consequences, particularly because they tend to perpetuate discrimination.
Numerous studies have shown how the short-sighted use of AI tends to reproduce racialised, gendered, and classed bias. 60 This interacts with the tendency of human beings to ascribe greater credibility and legitimacy to decisions made by computers, 61 further explaining the consensus among computer scientists that extreme caution is needed when using AI in judicial settings. 62 The dangers of amplifying human bias and discrimination became evident, for instance, in trials involving risk assessment engines. These were likely to overestimate the risk of black defendants reoffending, even when race was not an input. 63

The EU is a geographical and symbolic space which actively draws on and develops technologies for migration control purposes. 64 Refugees and migrants often lack adequate data protection, which enables the EU to use migration, asylum, and border control management issues as testing grounds for new technologies, under the cloak of security. 65 Discourses around securitisation - which present migration as a threat to national safety, the economy, healthcare, culture, etc. - are used to bolster this exceptionalism and to institutionalise legal precarity in and beyond the EU. 66 For people who are already legally marginalised, such as those (not yet) recognised as refugees, other migrants in precarious legal situations, and stateless persons, the risks are high. While digital technologies and connectivity enable forced migrants to move, resist, and have autonomy, they also, often simultaneously, increase the potential for control, exploitation, and surveillance.
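The earlier point about risk assessment engines - that scores can disadvantage a group even when the protected attribute is never an input - can be illustrated with a fully synthetic sketch. Nothing here models any real system: the groups, the "district" variable, and the skewed historical rates are all invented, purely to show how a correlated proxy carries bias into the score.

```python
# Synthetic sketch of "proxy bias": a scoring rule that never sees the
# protected attribute still produces disparate scores, because another
# input (a made-up "district") is correlated with group membership.
import random

random.seed(0)

# Synthetic population: group correlates with district, but the true
# reoffending rate is identical (30%) in both groups.
def make_person(group):
    in_north = random.random() < (0.9 if group == "g1" else 0.1)
    return {
        "group": group,
        "district": "north" if in_north else "south",
        "reoffends": random.random() < 0.3,  # same true rate for both groups
    }

people = [make_person("g1") for _ in range(5000)] + \
         [make_person("g2") for _ in range(5000)]

# A naive "risk engine" scored on district alone. The per-district rates
# come from invented, skewed historical records in which the north
# district was policed more heavily, inflating its recorded rate.
RECORDED_RATE = {"north": 0.45, "south": 0.30}

def risk_score(person):
    return RECORDED_RATE[person["district"]]

def mean_score(group):
    scores = [risk_score(p) for p in people if p["group"] == group]
    return sum(scores) / len(scores)

# g1 receives systematically higher scores than g2, even though group
# membership never entered the model and the true rates are equal.
print(round(mean_score("g1"), 3), round(mean_score("g2"), 3))
```

Dropping the sensitive attribute from the inputs therefore does not, on its own, remove the disparity; the correlation structure of the training data does the work.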
The efficacy of biometric and other data-driven systems should not be overstated: often they break down or do not work at all. 67 But whether or not technology works as intended, digital trialling can have serious consequences. For instance, "success stories" about the use of technologies in refugee settings can result in shifts in what is deemed acceptable for citizens elsewhere. 68 Migration control is often presented as the last bastion of State sovereignty, 69 with the figure of the refugee presented as the exception to the rule. This article takes a more complicated view. By taking a closer look at how digitisation efforts in refugee protection settings beyond the EU's borders interact with complex legal frameworks, as will be further explored in Section 4, it is argued that the differential recognition of digital rights interacts with other modes of inclusion or exclusion. This helps to explain how control is asserted beyond borders and formal sovereignty.

SEEKING REFUGE IN A DIGITALISED "PROTECTION SPACE"
In the next pages, I focus on some common elements in digital transformations of "protection space", a term used by UNHCR in its reporting since the early 2000s. This depiction of refugee protection as a negotiated and operationally focused activity has been critiqued for devaluing legal obligations towards refugees. 70 In combination with the humanitarian imperative and the marketisation of aid, refugee protection activities have become driven by humanitarian neophilia and techno-solutionism, as will be explored in subsection 3.2. 71 I subsequently examine an example of how digitisation interacts with UNHCR's legal categorisation procedures. I ask what this pragmatic, rather than normative, approach means for procedures grounded in international refugee law.

UNHCR's position in "Protection Space"
It was also in the early 2000s, around the same time that UNHCR started to use the term "protection space", that UNHCR began experimenting with biometrics. By taking the fingerprints of people seeking to return from Pakistan to Afghanistan, the UN refugee agency was not just copying the securitisation techniques that State actors had started to deploy after 9/11. It was the US Department of State that earmarked funding for UNHCR to trial this technology. 72 Ten years later, Jordan and Lebanon were among the prime locations for humanitarian digital experimentation. This mainly resulted from the need to respond adequately and urgently to the large-scale displacement caused by the war in Syria. The hopes that humanitarian actors projected onto biometric identification in Jordan can also be seen as a response to the difficulties that aid workers had encountered when counting Iraqi urban refugees between 2006 and 2010. 73 Jordan was also particularly suitable for rolling out the use of iris scans for cash-based assistance, as this technology had been part of the country's banking infrastructure since 2008. 74 In contrast, the Lebanese government did not express interest in this system for cash transfers. 75

The term "protection space" most often occurs in UNHCR's country operation plans for non-signatory States in South-East Asia and the Middle East. Many of these States are major refugee-hosting countries and have been reluctant or indifferent about adopting the Refugee Convention and the legal-normative institutionalisation of refugeehood often associated with it. 76 The international refugee regime tends to view States not party to the Refugee Convention as the exception. 77 But not being a signatory does not in itself make a State exceptional. There are 44 States that have not ratified the Convention. They actively engage with international refugee law through, for instance, other international laws, domestic and regional legislation, and administrative instructions.
78 Often, Memoranda of Understanding (MoU) between UNHCR and these governments clarify the responsibilities of UNHCR. 79

In Section 1, I referred to the founder of IrisGuard comparing UNHCR to a country when discussing its activities in Jordan. While this comparison is misguided, scholars have indeed compared UNHCR's position - especially concerning its role in Jordan - to that of a "surrogate State". 80 This term refers to the operational obligations UNHCR has taken on and the impossibility of providing full recognition of refugee rights. UNHCR Jordan took on refugee governance functions that are usually associated with the role of a State in response to the urgent needs of Iraqi refugees since 2006. These included registration, access to social services and cash assistance, as well as refugee status determination (RSD). RSD is the process by which a government or UNHCR determines whether a person seeking international protection is considered a refugee under international law. Often, formal refugee recognition is vital for realising other rights. But the ability of UNHCR Jordan to provide fuller access to rights is limited. It is constrained by its MoU with the Jordanian Government and continues to depend on the willingness of its host and (external) funding. 81

Notwithstanding important contextual differences, approaches to forced displacement in non-signatory States in the Middle Eastern region are characterised as hospitable yet temporary, with opaque policies and procedures that are often based on the prioritisation of certain national or ethno-religious affiliations.
82 The roles undertaken by UNHCR and third parties and the space provided for establishing refugee protection in these settings are the result of lengthy political negotiations. UNHCR's "protection space" approach has, however, been critiqued for the following reason: by establishing itself as a negotiator through appeals to humanitarian values, the normative obligations towards refugees - of States as of UNHCR itself - are undermined. By shifting the responsibility for refugee protection to UNHCR, and away from rights towards conditional needs, protection is rendered fluid and fragile. 83

Addressing scarcity through automation and humanitarian neophilia
Humanitarian approaches to refugee protection are often critiqued for their short-term focus. Their emergency imaginary prioritises immediate solutions over long-term structural concerns, and compassion over rights. 84 But since what was supposed to be temporary tends to become prolonged for years, humanitarian settings are often beset by scarcity - of funds, compassion, resettlement slots, rights, authority. Structural shortages make obtaining, ordering, classifying, and assessing information and data foundational for humanitarian programming. The importance of assuring donors, especially those who might be willing to receive resettlement referrals, necessitates careful accounting and reporting. 85

Automating mechanisms to determine who is eligible to receive aid by ranking vulnerability promises to make such decisions less arbitrary and the process more efficient. One such instrument is the Vulnerability Assessment Framework (VAF). UNHCR, the World Food Programme (WFP), and the United Nations International Children's Emergency Fund (UNICEF) introduced this proxy means testing instrument in 2014 to assess and standardise the vulnerability of Syrian non-camp refugees in Jordan. The algorithmic processing of quantifiable data by the VAF gives a vulnerability score from 1 to 4. A score of 3 (highly vulnerable) or 4 (severely vulnerable) is a requirement, but not a guarantee, for receiving cash assistance. 86 The VAF is based on a predicted expenditure welfare model, together with other factors such as coping strategies, level of education, and health. Under different names, similar instruments were rolled out in Egypt, Lebanon, and Iraq. In Jordan, the use of the VAF has been extended to Syrian camp refugees, other national cohorts, and Jordanian citizens. Many questions remain, such as: is a system trained on the data of one population suitable to assess other populations?
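The general mechanic of such a proxy means test can be sketched in a few lines. To be clear, the indicator names, weights, and cut-offs below are entirely invented; they do not reproduce the actual VAF model or its thresholds. The sketch only shows the shape of the procedure: observable proxies are combined into a predicted welfare figure, which is then banded into the 1-4 vulnerability scale described above.

```python
# Purely illustrative sketch of a proxy means test in the spirit of the
# VAF. All weights, cut-offs, and indicator names are hypothetical.
def predicted_welfare(household):
    """Toy 'predicted expenditure' proxy: lower = worse off."""
    base = 100.0
    base -= 12.0 * household["dependents"]
    base += 8.0 * household["years_education"]
    base -= 25.0 * household["negative_coping"]   # e.g. selling assets
    base -= 15.0 * household["chronic_illness"]
    return base

def vulnerability_score(household):
    """Map the proxy onto a 1-4 banding (4 = severe)."""
    w = predicted_welfare(household)
    if w < 20:
        return 4   # severely vulnerable
    if w < 50:
        return 3   # highly vulnerable
    if w < 80:
        return 2   # moderately vulnerable
    return 1       # low vulnerability

hh = {"dependents": 5, "years_education": 2,
      "negative_coping": 1, "chronic_illness": 1}
print(vulnerability_score(hh))  # 4
```

Even this toy version makes the article's question concrete: the weights are fitted to one population's relationship between proxies and expenditure, so applying the same function to a different population silently assumes that relationship carries over.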
87 Humanitarian legibility schemes might be crucial for assessing needs and determining when to provide assistance. They are equally a means of control. 88 A logic of audit coincides with a suspicious outlook toward potential recipients of aid. 89 Indeed, the initial use of biometrics in humanitarian settings was driven by the need to reduce so-called 'recyclers': people who would seek to receive individualised support more than once. The motivation of UNHCR for introducing fingerprint capturing in the early 2000s was to ensure that people returning to Afghanistan would only receive support once. By now, evidence that biometric identification reduces this form of low-level fraud continues to be scant. What is clear is that most fraud in humanitarian settings occurs earlier in the aid supply chain. 90 Yet the reasoning that biometrics and other means of automating aid serve to reduce fraud persists, allowing for such developments to continue.
Threatened donor fatigue incentivises UNHCR and its implementing partners to provide data that demonstrates their efficiency.91 The VAF is but one of the large-scale vulnerability assessments carried out in Jordan. There is an underlying presumption that more data yields more objective information and therefore allows for more certainty about whom to regulate and how. This in turn reinforces the idea that humanitarian operations are neutral. But aside from the failure to recognise that technologies generally draw on European knowledge systems, it means that the unequal power relations already intrinsic to most forms of humanitarianism are likely to be reproduced.92 Another response to the lack of substantial funding is that UN agencies and humanitarian organisations have increasingly been developing partnerships with private entities. This alignment of aid with business often goes hand in hand with humanitarian neophilia and techno-solutionism. Humanitarian neophilia refers to the Silicon Valley-inspired drive within the humanitarian sector for novelty, innovation, and disruption.93 It is often supported by big tech companies, including Microsoft, Google, and Facebook. Techno-solutionism refers to reliance on technological tools and the potential of technical expertise and technology to function as anti-politics machines.94 Sociopolitical issues get turned into technical problems with the promise of a quick fix. Proponents of this shift have been making arguments for innovation labs, which are described as 'safe havens for experimentations' to find solutions.95 In refugee situations, there is a perennial lack of solutions, which explains the hope projected onto technologies and innovations to solve problems that are fundamentally political. This interacts with a mode of thinking often present in emergency settings – "it is better to do anything than nothing" – and serves to justify immediate action involving elements that at another time, or in other circumstances, would be deemed problematic.96

What makes partnerships with private entities particularly troublesome is that many such companies are simultaneously involved in the development of technologies that are commonly understood as harmful. Most public scrutiny has been directed at the partnership between the WFP and Palantir, a data-mining firm known for its involvement in US Immigration and Customs Enforcement's controversial use of AI for border control purposes, and in Cambridge Analytica's election rigging.97 Many other private partners – including Malhas' IrisGuard and the later discussed company Accenture – have been or are similarly involved in the development of technologies for border control, predictive policing, and the like.
In the next subsection, I come back to the VAF as I question how the automation of assessments interacts with UNHCR's mandated obligations towards refugees. Whereas the example concerns protection in Middle Eastern States, the situation is far from particular to Middle Eastern non-signatory States, as lessons learned out "there" can easily be used elsewhere or for different purposes. This also becomes clear in Section 5, where I discuss how the engagement of some refugee law scholars with the potential of automation for refugee recognition procedures sounds familiar.

RSD, legal categorisation and automated assessments
The multiplication of legal categories, and the differing access to rights through them, is a well-documented forced migration management technique.98 In signatory States, prolonging refugee status determination and providing only temporary residence permits to recognised refugees are but two ways to circumvent obligations arguably set out in the Refugee Convention.99 But in non-signatory States known to host large numbers of refugees, like Lebanon and Jordan, bureaucratic differentiation can also have important consequences.100 This also relates to UNHCR's involvement in ascertaining refugees' protection claims.
UNHCR's involvement in RSD is extensive: by 2011, it held sole responsibility for RSD in 54 countries and shared responsibility with national governments in 23 countries, which shows that its involvement does not depend only on whether a State is a signatory.101 UNHCR's mandate states that it can declare prima facie status, but it has not declared Syrian nationals as such, therefore necessitating RSD to establish refugee status.102 Around 2015, in Lebanon, Egypt, Jordan, Turkey, and Iraq, the RSD/resettlement procedures for Syrian nationals were merged. "RSD proper" was only conducted for the few who were already likely to be considered for resettlement.103 The merged refugee recognition procedures were then conducted by UNHCR on behalf of States willing to receive people selected for resettlement. It seems that, at least in Jordan, similar approaches have been used for non-Syrian nationals. As a result, the majority of those who are referred to as "refugees" by UNHCR and humanitarian organisations in their reporting on the Middle East are not legally categorised as such. Most likely they are registered for refugee protection with the UNHCR country office yet have not undergone RSD. Instead, there is a wide array of bureaucratic labels in use – registered refugee, asylum-seeker, Person of Concern, displaced person, foreigner, labourer, etc.105 – which result from negotiations with State actors and efforts to appease them. UNHCR has its own selection criteria for who is deemed eligible for third country resettlement.106 In addition, it needs to follow criteria set by receiving States. But in Jordan, VAF scores are also increasingly taken into consideration when deciding who ends up being interviewed.107

The sorting work undertaken by digital technology has come to coexist with, or to replace, work that was previously carried out by bureaucratic means. Before automation started to impact them, UNHCR's RSD procedures and the selection process for consideration for resettlement were already known to lack accountability and to be bedevilled by politics, human bias, and other exclusionary mechanisms. Most likely, people were already being excluded from procedures set up to ascertain whether a person is a recognised refugee – a procedure grounded in international refugee law.108 How, then, does this automation of selection for RSD compare to the prior state of affairs?
The VAF measures poverty – the purpose for which it was designed – rather than whether people are "at risk". The use of this automated assessment of relative vulnerability is an example of function creep: the collection of data or use of technologies intended for one purpose (assessing one's need for cash assistance) for another purpose (assessing who qualifies as a "real" refugee). Automated decision-making might further solidify decisions made, not least because human beings tend to ascribe more credibility and legitimacy to decisions made by computers.109 One of the critiques of the limits of UNHCR's "protection space" – that who is deemed a legible refugee is determined on conditional needs rather than on rights – might have become stickier.110 Quantified measurements can appear more "neutral", "objective", or "fair". But classification is the result of, and conceals, politicisation.111 Classification mechanisms operate along racialised, gendered, and other reductive logics. For instance, Jordan's and Lebanon's protection spaces tend to overlook the gendered harms and vulnerabilities of Syrian refugee men.112 They are also state-centric, as becomes evident in UNHCR Iraq's vulnerability assessments: statelessness is not taken into account despite widespread recognition that it contributes to vulnerability.113 Analytical frameworks and experience derived from commercial platforms and epistemologies from the Global North can easily help strengthen the "power of the identifier, UNHCR, at the expense of the identified – the refugee".114 Who and what is being optimised here, and who gets to decide? In Section 4, I look more closely at how differential power is manifested in legislation regarding data protection.

LEGISLATING REFUGEE DATA GOVERNANCE?
All over the world, data governance is guided by models that involve large amounts of data being appropriated – often without meaningful consent – and used to generate value for different purposes and by different actors. For instance, States and large technology companies often have competing interests, power, and capacities. This influences how data governance operates and the extent to which data is linked to privacy and identity rights, treated as a surveillance tool under the cloak of national security, or exchanged as a commodity.115 In order to reduce the potential for abuse and excessive power, legislation has been developed to formalise requirements around the collection, storage, and processing of data; these rules regulate when various data practices are legally allowed.
IOs are subject to international law and bear international human rights responsibilities. UNHCR is also legally bound to its mandate, which is set out in the UNHCR Statute. The Refugee Convention includes no provision on privacy, unlike other international human rights instruments. Article 12 of the Universal Declaration of Human Rights (UDHR) frames privacy in terms of individual autonomy and identifies the right to the protection of the law against arbitrary interference.116 Article 17 of the International Covenant on Civil and Political Rights (ICCPR) formulates obligations to protect privacy against interference by governments or other actors, without any clause of limitation.117 In October 2018, the UN High-Level Committee on Management agreed on 10 Personal Data Protection and Privacy Principles, setting a common framework for the collection, processing, storage, and transfer of personal data in mandated activities by, or on behalf of, UN System Organisations.118 UNHCR also has its own data protection policy, as do other IOs such as WFP and IOM. It should be borne in mind that accountability mechanisms in instances of UNHCR human rights violations are limited and restricted to internal oversight, despite much "accountability talk".119 IOs have immunity from national legislation, the idea being that this helps ensure they can fulfil their mandate independently and comply with principles such as neutrality and humanity. Regional legislation, such as the EU's General Data Protection Regulation (GDPR), also does not formally apply, notwithstanding its potential normative impact.120

Below I refer to EU legislation, for this allows a comparison between what the EU deems necessary safeguards for EU citizens and what is available for refugees beyond the EU. But EU legal regimes contribute to 'exceptionalising' migration contexts, within and beyond the EU, as they systematically carve out exceptions to the data protection of refugees and other people on the move. This occurs through straightforwardly discriminatory legislation, as is the case in the currently proposed AI Act.121 The GDPR formally applies to non-EU citizens, but its restrictions – like "national security" or "public security" – or means to establish a legal basis, such as vital interests, including "humanitarian grounds", allow sovereign space for interpretation, especially in migration contexts. Indeed, the reasoning that migration control equals security and crime control – crimmigration – serves to systematically exclude people on the move through, for instance, the various EU-Schengen information systems.122 Plans to make these systems interoperable are being rolled out, despite awareness that this will probably brand almost all so-called third-country nationals as "risky by default".123 In the rest of this section, I take a closer look at UNHCR's data protection policy. I compare the Policy on the Protection of Personal Data of Persons of Concern to UNHCR to the EU's GDPR.124 Comparing UNHCR's policies to what the EU deems essential data protection for some allows additional insights into how UNHCR's policies would have fallen short, had they been legally enforceable. UNHCR published its data protection policy in 2015 – two years after it first started to obtain biometric information on people registering for protection in Jordan. The policy starts with recognition of the importance of people's consent, their right to be informed about the purpose of data collection, and their ability to refuse and to request corrections and deletions.125 It is then stated that if data is transferred to implementing partners or other third parties, the "data subject is informed of this fact".126 Further on, it is stipulated that all cases are "permanently retained", raising questions about the earlier mentioned possibilities for deletion.127 Throughout the policy, no mention is made of personal data that ought to be considered sensitive, as in the GDPR. Rather, along with biometric details, information about religion, ethnicity, and opinions is put under the header of personal data.128 And unlike the GDPR, there is no mention of the importance of data minimisation: obtaining only essential data.
Alongside concerns regarding the adequacy of UNHCR's data protection policies, there are many questions regarding feasibility and actual implementation. For instance, concerning informed consent and being informed about data-sharing, studies ranging from Jordan and Bangladesh to Uganda and Ethiopia have shown that usually very little information is provided before, during, or after registration. Reporting suggests that false information is sometimes provided, such as assurances that biometric registration is carried out for health purposes. Other people were told that biometric registration is required to receive aid.129 Perhaps this has become true by default: the system might no longer give the option not to obtain biometric information, or staff might have forgotten how to carry out these procedures manually. The question also remains whether there are real possibilities for meaningful consent in refugee protection settings, considering the difficult circumstances in which people find themselves. UNHCR's policy further states that data subjects can raise any additional concerns, for instance regarding data sharing, with a person who is established as a so-called data focal point. This function is usually held by the most senior UNHCR protection staff member of a country office.130 In reality, people holding such a position are rarely accessible to refugees.
Worries about registering excessive biometric information on refugees and other data-driven humanitarian actions relate to concerns about possible data breaches, leaks, and the sharing of data with third parties with different and potentially harmful agendas.131 In 2016, the UN's internal oversight body, the Office of Internal Oversight Services (OIOS), brought serious data breaches to light.132 Three of the five UNHCR missions investigated had shared refugees' personal data with host governments without assessing the data protection offered by these governments or establishing a transfer agreement.133 IOs' data policies are devoid of legal implications when they are breached.134 Third parties do not enjoy the same privileges and immunities as the IOs they work with, meaning they can be subjected to the jurisdiction of State parties that might be interested in gaining access.135 As mentioned in Section 3, there are many dubious public-private partnerships within the humanitarian realm. Private companies might be involved in refugee settings for altruistic motives or for "ethics bluewashing": trying to come across as more ethical by being involved in aid. But the economic models of big tech companies suggest otherwise. Humanitarian settings can provide additional opportunities for extractive data mining, which explains technology companies' interest in them. More data allows for further fine-tuning of prediction models, which might subsequently also be used to predict the behaviour of European citizens. Another less altruistic motive is so-called "ethics dumping": the exportation of unethical digital processes and products to countries with weaker frameworks or enforcement mechanisms, after which the outcomes are re-imported.136

Even when ample precautions are taken, private partnerships can be problematic. When WFP established its partnership with Palantir, a statement was issued saying that the company would not get access to WFP's personal data. But non-personal data can also be sensitive. And no mention was made of whether the company has access to WFP's metadata.137 Metadata (data about data) can be used to re-identify persons or groups, which explains why the ICRC and Privacy International have called for humanitarian metadata protection.138 Finally, knowledge that companies obtain by trialling technologies in humanitarian settings can be used elsewhere, for instance for border control purposes. One example is the company Accenture and how it oscillates between technological support in humanitarian settings and for border control purposes. In 2013, the company became involved in creating UNHCR's Biometric Identity Management System (BIMS) in Malawi. Around the same time, Accenture and its partners were awarded a contract for the maintenance of the EU's Visa Information System (VIS). By 2016, the company was invited to a workshop by EU-Lisa, the agency established to manage the EU's large-scale IT systems, which are known for operating as surveillance technologies for border control purposes. Accenture was asked to consider how UNHCR's biometric registration could be useful for EU-Lisa's technological infrastructure.139 The company is now a major recipient of contracts from EU-Lisa, while it also continues to work in humanitarian settings. For instance, it works closely together with the earlier mentioned IrisGuard and WFP to record consumption behaviour in Jordan's refugee camps through blockchain technology.140
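The claim that metadata alone can re-identify persons or groups is easy to demonstrate. In the hypothetical sketch below, a "de-identified" aid-distribution log (no names, only days and distribution sites) is joined against a register that a third party might hold; two coarse attributes are enough to single out each individual. All data and field names are invented for illustration.

```python
# Hypothetical illustration: re-identification from metadata alone.
# Invented data; no real system or dataset is represented.

# "Anonymised" distribution log: no names, only when and where.
transactions = [
    {"id": "A", "day": "Mon", "site": "north_gate"},
    {"id": "B", "day": "Mon", "site": "south_gate"},
    {"id": "C", "day": "Tue", "site": "north_gate"},
]

# A separate register a third party might hold (e.g. an attendance list).
register = [
    {"name": "Person X", "day": "Mon", "site": "north_gate"},
    {"name": "Person Y", "day": "Tue", "site": "north_gate"},
    {"name": "Person Z", "day": "Mon", "site": "south_gate"},
]

def reidentify(tx, register):
    """Return register names whose quasi-identifiers match a transaction."""
    return [r["name"] for r in register
            if (r["day"], r["site"]) == (tx["day"], tx["site"])]

# Each (day, site) pair is unique here, so every record is singled out:
matches = {tx["id"]: reidentify(tx, register) for tx in transactions}
# matches == {"A": ["Person X"], "B": ["Person Z"], "C": ["Person Y"]}
```

The vulnerability does not come from the content of any record but from how few people share each combination of attributes, which is why calls for humanitarian metadata protection extend beyond "personal data" in the narrow sense.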

ENGAGING WITH NEW TECHNOLOGIES AND OLDER OTHERING TECHNIQUES
I have just shown how data protection in refugee protection situations would fall short even if it were legally enforceable. A too narrow focus on regulations, laws, and corporate policies alone, however, would obscure what is really at stake: human dignity and autonomy.141 A legalistic approach weakens consensus on what privacy is – that is, the ability to control the boundaries between one's sense and knowledge of self and others, and those between private and public.142 Understandings of privacy have evolved over time, in response to technological changes and in interaction with organisational and social settings. Privacy is implicated in struggles over secrecy and power between people and their communities, governmental (and non-governmental) actors, scientists, civil society, and corporations. Trust is another factor: there are shifting boundaries between those who should and should not be trusted, what is and is not suspicious, and what should or should not be controlled. Privacy continues to be regarded as a bourgeois, western, or individual concept. For instance, in her otherwise pivotal study on data extraction and the surveillance economy, Shoshana Zuboff makes the bold claim that privacy is simply less of a concern for people in China.143 For many people all over the world, privacy is not always a possibility. But technology-mediated negotiations and struggles over information, private lives, and trust are certainly taking place. This is also the case when it comes to people navigating humanitarian emergencies or living in precarious protection settings. Approaches to privacy urgently need to become decolonised, not least because short-sighted views on privacy allow it to be seen as something that can be (more easily) bypassed, especially when it comes to non-western "others", and even more so when they are considered "at risk" (vulnerable) or "a risk" (dangerous).144 In short, it allows for exceptions.
Newer technological developments often interact with other, often older, othering techniques. When I discussed Sanad's experience with the capturing of his fingerprints, and how he associated this with the decision to reject him for US resettlement, I also explained that his experience was unusual. Most often, refugees living in prolonged legal uncertainty do not receive any information regarding such decision-making. Especially where UNHCR's resettlement programmes are concerned, seeking redress or holding an actor accountable is notoriously difficult. Different tasks are spread out over different implementing organisations and private actors, which further obfuscates responsibility and conceals where accountability might be sought. "Agency laundering", a term coined to describe how moral responsibility can be obfuscated by deploying technology,145 is therefore not just a recent phenomenon.
But whereas the ways in which newer technologies work and operate resonate with older othering techniques, there are also additional complicating factors. For instance, regarding the deployment of automated vulnerability assessment, a study in Lebanon shows how technological developments can move decision power and expert knowledge elsewhere – to headquarters, consultancies, algorithms that attribute weights and establish relations, etc. – away from people who are familiar with the local contexts.146 Meanwhile, as technologies tend to obfuscate how decisions are made, and therefore complicate seeking accountability from the institutions and human actors that have adopted a tool, the results are often given greater credibility.147 The human operating the machine tends to ascribe more legitimacy to decisions made by a computer than to themselves or to those impacted by them.148 Proving that a computer made a mistake is often difficult. When this is combined with the "trickster" stereotype that is almost always associated with the figure of the refugee, it is highly probable that the scope for refugees and other precarious migrants to address errors and bias has been reduced.149 The development and deployment of highly controversial and invasive technologies can therefore have profound consequences for legal and bureaucratic procedures around being and becoming a refugee. Data and statistical inferencing result in predictions that anticipate the future and close off courses of action.150 This may interact with the requirement for protection as formulated in the Refugee Convention: "a well-founded fear of being persecuted", which focuses on the likelihood of future persecution. It is a forward-looking test that provides room for uncertainty and doubt, even if uncertainty and lack of knowledge are unequally distributed.151 Datafication, however, focuses on reducing uncertainty. As seen with racialised policing and security algorithms, the futures they condense contain "within it the residue of all violence of past colonial histories, migrations, journeys and border crossings, a fulsome sediment of all the actions and transactions of past movements in the name of justice".152 The likelihood that AI will make it more difficult to make claims – not least claims for asylum, or claims when one is in a precarious legal position – is very real.
This means there is an urgent need for scholars and practitioners working on law, migration, and technology to carefully engage with how data flows and algorithms are transforming contemporary ethics and politics. This includes actively participating in discussions on the consequences of experimenting with algorithmic and data-driven forms of refugee and migration governance. How to go about this is not straightforward. For instance, there is general agreement that extreme caution is needed when deploying AI in judicial settings, not least because of the considerable risk that people who are already marginalised are disproportionately affected. Yet some of the pioneering scholarship on technology and refugee law has started to discuss its potential in migration court settings.
One explanation for this is that AI development for government decision-making is unlikely to slow down just because of ethical quandaries.153 Others argue that AI can be useful in providing valuable and visible insights toward fairer RSD procedures. It could, for instance, expose disparities in RSD outcomes between different States and divergences in national legal interpretation. Yet uncertainty is also intrinsic to algorithmic decision-making. The ways in which weights are calculated and predictions are derived – based on attributes – always carry with them degrees of uncertainty. Paying closer attention to this, and to the social and technical conditions under which algorithms can emerge and operate, could open up avenues for accountability, also for people on the move.
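The point that attribute-weighted predictions always carry uncertainty can be made concrete. The toy classifier below – a plain logistic score with invented weights, standing in for any algorithmic triage or decision-support tool rather than any real system – outputs not a verdict but a probability. How far that probability lies from 0 or 1 is exactly the information that disappears when the output is collapsed to a yes/no decision.

```python
import math

# Toy decision-support score; the attributes and weights are invented.
weights = {"attribute_a": 1.2, "attribute_b": -0.8}
bias = -0.1

def probability(x: dict) -> float:
    """Logistic score over weighted attributes: a probability, not a verdict."""
    z = bias + sum(weights[k] * x.get(k, 0.0) for k in weights)
    return 1 / (1 + math.exp(-z))

def decide(x: dict, threshold: float = 0.5) -> bool:
    # Collapsing the probability to a binary decision discards the
    # uncertainty that accountability mechanisms would need to see.
    return probability(x) >= threshold

borderline = {"attribute_a": 0.2, "attribute_b": 0.1}
p = probability(borderline)  # close to 0.5: the model is near-ignorant here
# decide(borderline) nevertheless returns a confident-looking True or False
```

Surfacing the probability (and how the weights produced it), rather than only the thresholded decision, is one concrete form the accountability suggested above could take.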

CONCLUSION
The ability to seek and enjoy protection under the Refugee Convention is closely interrelated with the recognition of other fundamental human rights, including the right to movement and digital rights. Across the globe, refugees and other migrants existing in a state of enduring legal uncertainty are perhaps the most monitored persons there are.161 This article has looked more closely at Middle Eastern protection contexts. In these settings, different forms of legal "exceptions" interact, which allow people seeking refuge, and the settings in which they live, to be treated as rights optional.

The distinction between citizens and non-citizens is increasingly enacted through digital techniques and data flows that are far from contained by the model of the juridical sovereign State. A more complex assemblage of people, organisations, and technologies interacts with a sovereign security discourse, humanitarian reason, and other reasons for exceptions, further influencing the enactment or deprivation of rights. And as algorithms are generative of the politics of contemporary societies, technologies deployed for refugee and migration governance purposes are likely to reinforce exclusionary, State-centred citizenship. This, combined with the structural undermining of the digital rights of people seeking refuge in the present, can easily further restrict the right to seek refuge in the future.
