The Future of Movement: The Implications of AI-Integrated Border Management 

Photo Credits: "Oslo Airport, Gardermoen - Oslo airport customs border" by Glentamara is licensed under CC BY-SA 4.0.


Author

Josh Bernstein

Editor

Sydney Wisener




The Future of Movement: The Implications of AI-Integrated Border Management examines how, as artificial intelligence is integrated into border management, the balance between efficiency and privacy comes under scrutiny. While AI can streamline processes, it risks compromising individual rights, especially for migrants and asylum seekers. From predictive analytics to emotion detection and facial recognition, these technologies highlight the tension between innovation and human dignity. Policymakers must ensure safeguards are in place to prevent misuse and protect vulnerable populations. Borders are evolving; how do we ensure rights evolve with them?


Introduction


The integration of artificial intelligence (AI) at border crossings remains a controversial topic in border governance and management. The use of AI at border crossings creates a tension between privacy rights and efficiency.

The use of AI at border crossings offers the potential to expedite the process for travelers. However, this new level of efficiency could come at the cost of the individual privacy of all people crossing international borders, including travelers, migrants, and refugees seeking asylum. AI-related border policies must therefore include safeguards to protect all travelers. As part of their mandate, national border agencies must move a high volume of people and goods across borders while undertaking significant efforts to prevent entries that threaten public safety. In a global landscape where high-income countries (HICs) experience increased levels of migration and trade, national border agencies are struggling to meet demand. AI appears to offer an attractive solution.

AI-integrated border management minimizes human decision-making by automating the gathering of information about travelers, making border management less labour-intensive and costly. However, AI-integrated models of border management also introduce new risks to the privacy, wellbeing, and security of individuals being processed through these checkpoints.

While AI-integrated border crossings optimize processing and may improve the accuracy of identifying border threats, AI tools have significant ramifications for the privacy and treatment of people crossing a border.


Three Technologies


There are three types of problematic AI technologies used at border crossings: predictive analytics, emotion detection, and facial recognition. These three technologies have the potential to incorrectly identify legal migrants and travelers as threats. This could hinder the human rights and individual security of people travelling. In addition, it could undermine immigration policies and their effectiveness in addressing issues such as an aging population or a skills shortage in an economy. By using a HIC case study to analyze each of these technology deployments, the concerns and potential risks of AI abuse become clear.


1. Predictive Analytics

In the case of New Zealand and its use of predictive behaviour analytics to deny entry, AI discriminated against individuals with specific ethnic and national identities. This case illustrates a major issue with using AI-assisted predictive analytics technology at border crossings: the technology relies on large datasets, and if those datasets reflect historical biases, the AI could perpetuate discriminatory patterns of denial in the future.

The use of predictive analytics by the New Zealand Customs Agency was sharply criticized by New Zealanders and progressive parties, who argued that the policy directly contradicts the New Zealand Human Rights Code. AI-integrated predictive analytics undermines the efficacy of a safe and equitable system by making assumptions about people at borders based on their identities. This perpetuates stereotypes in society and creates a culture of hostility within border enforcement agencies. Although predictive analytics were deployed only at borders, stereotypes transcend these spaces and could contribute to a less inclusive society.

This case offers two lessons for policymakers in HICs. First, the use of predictive analytics to profile individuals is an inherently risky endeavour that should be banned from AI-integrated border management technology: in New Zealand, it eroded public trust and drew scrutiny from elected officials. Second, policymakers must work to create inclusive legislation that does not discriminate against people when AI is used in public service delivery, in order to prevent AI-facilitated discrimination. The New Zealand case highlights how AI technology, though deployed to uphold public safety, infringed on individual and collective rights.


2. Emotion Detection Technology

The second type of risky technology that could be deployed at border crossings is emotion detection, used to assess interactions with border crossers. Although there are no active deployments of this type of technology, the EU has funded and piloted it. Trialed in 2018 by Hungary, Greece, and Latvia, this technology aimed to pick up on what some commentators have termed 'microexpressions' to determine whether migrants are being genuine in their answers to border security questions. However, some scientists criticized this system, arguing that microexpressions are not effective at determining if someone is lying. The system is flawed because the technology fails to account for the nerves of a person at a border, which can cause unusual microexpressions as a nervous response. All people react differently to crossing a border. While governments have used profiles and strategies to identify patterns of behaviour, each individual (especially someone who is neurodivergent or entering a country with a culture different from their own) reacts uniquely, making this practice discriminatory.

The EU AI Act is concerning: while it prohibits the use of emotion detection technology, it makes an exception for areas pertaining to the health and safety of the public. This exception could severely derail the life of an immigrant worker or legal migrant moving to the EU if a customs officer incorrectly detects a lie in their answers.

Under the AI Act, emotion detection technology is permitted in order to ensure public safety. Allowing security and border services in the EU to employ this technology could upend the lives of immigrants and citizens alike. Additionally, emotion detection technology violates consent rights by intruding into how people express themselves and using those expressions for future profiling without the informed consent of the individual, providing no alternatives. In the words of the office of the European Data Protection Supervisor, "Turning the human face into another object for measurement and categorisation by automated processes controlled by powerful companies and governments touches the right to human dignity."

3. Facial Recognition Software

The third type of controversial AI being deployed is facial recognition software, currently used at the US border for migrants and asylum seekers entering from Mexico. On August 14, 2024, MIT Technology Review reported that the US Department of Homeland Security (DHS) intends to use facial recognition to identify migrant children as they age. This strategy could be used to track migrants as they grow older after entering the US and to assist ICE in executing forced removals. By adding migrants to facial recognition databases, the law enforcement landscape in the US could shift toward a model reliant on technology. Such a shift could contribute to the over-policing of migrant neighbourhoods, decreasing the likelihood of these people exercising their rights while living in the USA.

Additionally, CBP One, the asylum app used by US immigration officials, provides no alternative to mobile collection of biometric information for people as young as fourteen. Since its deployment, it has collected biometric information from 1.5 million people. The use of AI-assisted facial recognition software by the US government demonstrates how AI can violate privacy rights and complicate rules by allowing minors to be documented without their free consent. Although facial recognition has many positive uses, the manipulation of minors' privacy rights for AI training and the potential for migrant tracking set the US up for the adoption of AI to surveil the public (AI creep) and begin to erode civil liberties.

Beyond these challenges to civil liberties, the collection of biometric and health information carries its own risks. If cybercriminals compromised the networks and servers where this sensitive data is stored, the information could be extracted for identity theft. Such a hack would also pose a threat to national security if cybercriminals or other states used the stolen information to impersonate people or to enter a country undetected for nefarious purposes.

To balance the tension between security and freedom, policymakers in the US must work to create alternatives for migrant processing while preserving efficiency. Although national security must remain a relevant part of this discussion, policymakers should give migrants under eighteen the option to use other forms of documentation or to check in with their municipalities at the local level.


Beware the AI Creep


This article has discussed three types of technologies used in border systems that harm the privacy and security of travellers and migrants. While the battle begins at the border, the commercialization and integration of AI into border management has allowed AI to creep into society at large. Policymakers need to be aware that using AI at border crossings contributes to a larger problem: the erosion of privacy and informed-consent rights in HICs, two major components of a democracy. People crossing international borders move through spaces where they have few rights to protect them; border agents in some jurisdictions may search and investigate travelers without their consent or the reasonable grounds a constitution would otherwise require. By integrating AI into the immigration clearing process, the rights of people are placed in jeopardy before they even arrive at a checkpoint. Although national security must remain an important component of this policy discussion, a balance must be struck between efficiency and security on one side and equity and privacy on the other. If policymakers are serious about avoiding this type of state surveillance, they should create regulations at the international level that aim to strike this balance and push back against border service agencies that adopt unfair practices, by banning predictive behavioural technology and placing limits on facial recognition use in contexts involving vulnerable people.

