ICE and CBP's Face-Recognition App Can't Actually Verify Who People Are

WIRED

ICE has used Mobile Fortify to identify immigrants and citizens alike over 100,000 times, by one estimate. It wasn't built to work like that--and only got approved after DHS abandoned its own privacy rules. The face-recognition app Mobile Fortify, now used by United States immigration agents in towns and cities across the US, is not designed to reliably identify people in the streets and was deployed without the scrutiny that has historically governed the rollout of technologies that affect people's privacy, according to records reviewed by WIRED. The Department of Homeland Security launched Mobile Fortify in the spring of 2025 to "determine or verify" the identities of individuals stopped or detained by DHS officers during federal operations, records show. DHS explicitly linked the rollout to an executive order, signed by President Donald Trump on his first day in office, which called for a "total and efficient" crackdown on undocumented immigrants through expedited removals, expanded detention, and funding pressure on states, among other tactics. Despite DHS repeatedly framing Mobile Fortify as a tool for identifying people through facial recognition, the app does not actually "verify" the identities of people stopped by federal immigration agents--a well-known limitation of the technology and a consequence of how Mobile Fortify is designed and used.


Fortify Your Foundations: Practical Privacy and Security for Foundation Model Deployments In The Cloud

Chrapek, Marcin, Vahldiek-Oberwagner, Anjo, Spoczynski, Marcin, Constable, Scott, Vij, Mona, Hoefler, Torsten

arXiv.org Artificial Intelligence

Foundation Models (FMs) display exceptional performance in tasks such as natural language processing and are being applied across a growing range of disciplines. Although typically trained on large public datasets, FMs are often fine-tuned or integrated into Retrieval-Augmented Generation (RAG) systems, which rely on private data. This access, along with their size and costly training, heightens the risk of intellectual property theft. Moreover, multimodal FMs may expose sensitive information. In this work, we examine the FM threat model and discuss the practicality and comprehensiveness of various approaches for securing FMs against these threats, such as ML-based methods and trusted execution environments (TEEs). We demonstrate that TEEs offer an effective balance between strong security properties, usability, and performance. Specifically, we present a solution achieving less than 10% overhead versus bare metal for the full Llama2 7B and 13B inference pipelines running inside Intel SGX and Intel TDX. We also share our configuration files and insights from our implementation. To our knowledge, our work is the first to show the practicality of TEEs for securing FMs.


Preventing Illegal Logging: Simultaneous Optimization of Resource Teams and Tactics for Security

McCarthy, Sara Marie (University of Southern California) | Tambe, Milind (University of Southern California) | Kiekintveld, Christopher (University of Texas at El Paso) | Gore, Meredith L. (Michigan State University) | Killion, Alex (Michigan State University)

AAAI Conferences

Green security, the protection of forests, fish, and wildlife, is a critical problem in environmental sustainability. We focus on the problem of optimizing the defense of forests against illegal logging, where we often face the challenge of teaming up many different groups, from national police to forest guards to NGOs, each with differing capabilities and costs. This paper introduces a new, yet fundamental problem: Simultaneous Optimization of Resource Teams and Tactics (SORT). SORT contrasts with most previous game-theoretic research for green security, in particular research based on security games, which has focused solely on optimizing patrolling tactics without considering team formation or coordination. We develop new models and scalable algorithms to apply SORT to illegal logging in large forest areas. We evaluate our methods on a variety of synthetic examples, as well as a real-world case study using data from our ongoing collaboration in Madagascar.
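The core SORT idea, jointly choosing which team to field and how that team patrols, can be illustrated with a small sketch. This is a hypothetical toy, not the paper's algorithm: the resource types, costs, target values, 50% coverage rule, and greedy tactics below are all illustrative assumptions, and the brute-force enumeration stands in for the paper's scalable methods.

```python
from itertools import product

# Toy SORT sketch: enumerate every affordable team, compute the best patrol
# tactics for each team in a simple Stackelberg-style security game where the
# attacker strikes the target with the highest expected payoff, and keep the
# team/tactics pair that minimizes defender loss. All numbers are made up.

RESOURCES = {"police": (5, 3), "forest_guard": (2, 1), "ngo_patrol": (1, 1)}
#            name: (cost, patrol units contributed per resource)
TARGETS = [10, 8, 6, 4]  # defender loss per forest area if left unprotected
BUDGET = 8

def best_tactics(units, targets):
    """Greedily spread patrol units over targets; each unit adds 50% coverage
    to one target. The attacker then best-responds, so the defender's loss is
    the attacker's best remaining expected payoff."""
    coverage = [0.0] * len(targets)
    for _ in range(units):
        i = max(range(len(targets)), key=lambda j: targets[j] * (1 - coverage[j]))
        coverage[i] = min(1.0, coverage[i] + 0.5)
    return max(t * (1 - c) for t, c in zip(targets, coverage))

def sort_optimize(resources, targets, budget, max_per_type=3):
    """Brute-force SORT: try every team composition within budget and
    evaluate its optimal tactics."""
    best_loss, best_team = float("inf"), None
    names = list(resources)
    for counts in product(range(max_per_type + 1), repeat=len(names)):
        cost = sum(c * resources[n][0] for c, n in zip(counts, names))
        if cost > budget:
            continue
        units = sum(c * resources[n][1] for c, n in zip(counts, names))
        loss = best_tactics(units, targets)
        if loss < best_loss:
            best_loss, best_team = loss, dict(zip(names, counts))
    return best_loss, best_team

loss, team = sort_optimize(RESOURCES, TARGETS, BUDGET)
```

The enumeration grows exponentially in the number of resource types, which is exactly why the paper develops scalable algorithms rather than brute force; the sketch only shows why team formation and tactics must be optimized together, since the best tactics depend on which team was bought.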