DHS Facial Recognition App Details Revealed
29 Jan
Summary
- Department of Homeland Security revealed new details on Mobile Fortify app.
- NEC identified as the vendor behind the facial recognition technology.
- App deployed by CBP and ICE for identity verification in the field.

The Department of Homeland Security (DHS) recently published comprehensive details about its Mobile Fortify facial recognition application, which federal immigration agents use to identify individuals encountered during field operations. The disclosures, including the identity of the app's vendor, were part of DHS's mandated 2025 AI Use Case Inventory.
Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) have both deployed Mobile Fortify; CBP began operational use in May of the previous year, and ICE gained access on May 20, 2025. NEC has been publicly identified as the vendor, and its Reveal facial recognition product is marketed for searches against large databases. A sizable contract between NEC and DHS, running from 2020 to 2023, covered the use of NEC's biometric matching products.
Both CBP and ICE state that the app's primary function is rapid identity confirmation. ICE further notes its utility for field agents who are operating with limited information and need to access multiple systems. The app collects facial images, fingerprints, and photographs of identity documents, transmitting this data to CBP for processing through government biometric systems that use AI for matching. ICE indicates that it does not directly manage the AI models, which are overseen by CBP.
CBP has indicated that "Vetting/Border Crossing Information/Trusted Traveler Information" was used in the app's development. This has raised concerns, as highlighted by a recent case in which a woman reported that her Global Entry and TSA PreCheck privileges were revoked after an encounter involving facial recognition. While CBP asserts that sufficient monitoring protocols are in place, ICE is still developing its monitoring protocols and AI impact assessment, even though the app is classified as "high-impact" and is already deployed.
Concerns over incorrect matches are significant, with reports of individuals being detained after being misidentified by the app. ICE acknowledges that an appeals process and mechanisms for incorporating public feedback are still in development. This comes as Office of Management and Budget guidance directs federal agencies to complete AI impact assessments before deploying high-impact AI use cases.