biometry
The theoretical limits of biometry
Biometry has proved its capability in terms of recognition accuracy. It is now widely used for automated border control with biometric passports, and for unlocking smartphones and computers with fingerprint or face recognition algorithms. While identity verification is widely deployed, pure identification with no additional clues is still a work in progress. The difficulty of identification depends on the population size: the larger the group, the higher the risk of confusion. To prevent collisions, biometric traits must be sufficiently distinguishable to scale to very large groups, and algorithms must be able to capture their differences accurately. Most biometric studies are purely experimental, so their results cannot be extrapolated to smaller or larger groups. In this work, we propose a theoretical analysis of the distinguishability problem, which governs the error rates of biometric systems. We demonstrate simple relationships between the population size and the number of independent bits necessary to prevent collisions in the presence of noise, providing a lower bound on memory requirements. The results are very encouraging: the biometry of the whole Earth population can fit on a regular disk, leaving some space for noise and redundancy.
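To make the scaling claim concrete, here is a back-of-the-envelope sketch in Python. It is not the paper's actual derivation: it only computes the standard log2(N) lower bound for assigning unique codes, the roughly 2·log2(N) birthday bound for codes drawn at random, and the total storage these imply; the 256 bits/person noise-and-redundancy margin is an illustrative assumption, not a figure from the abstract.

```python
import math

def bits_for_unique_ids(population: int) -> int:
    """Minimum bits so every individual can receive a distinct code:
    n >= log2(N)."""
    return math.ceil(math.log2(population))

def bits_for_random_codes(population: int, collision_prob: float = 0.5) -> int:
    """Bits needed when codes are drawn at random (birthday bound).
    P(collision) ~ N^2 / 2^(n+1), so n >= log2(N^2 / (2 * p))."""
    return math.ceil(math.log2(population**2 / (2 * collision_prob)))

N = 8_000_000_000  # roughly the Earth's population

n_min = bits_for_unique_ids(N)       # ~33 bits/person
n_rand = bits_for_random_codes(N)    # ~66 bits/person

# Generous margin for noise and redundancy (illustrative assumption).
bits_per_person = 256
total_bytes = N * bits_per_person // 8

print(f"unique-ID lower bound : {n_min} bits/person")
print(f"random-code (birthday): {n_rand} bits/person")
print(f"total at {bits_per_person} bits/person: {total_bytes / 1e9:.0f} GB")
```

Under these assumptions the total comes to about 256 GB, which is consistent with the abstract's claim that the biometry of the whole Earth population fits on a regular disk with room to spare.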
AI-driven biometry and the infrastructures of everyday life
Over the past years, we have witnessed the exponential proliferation of biometric technologies: facial recognition and fingerprint scanners in our phones, sleep-pattern detection on our wrists, and speech-recognition software that enables auto-dictation and captioning. What all these technologies have in common is that they measure and record some aspect of the human body or its function: facial recognition measures facial features, fingerprint scanners measure the distances between the ridges that make up a unique fingerprint, sleep-pattern detection measures movement during sleep as a proxy for wakefulness, and so on. AI is fundamentally a scaling technology. It walks in the footsteps of many earlier technologies that deployed classification and categorisation in the name of making bureaucratic processes more efficient, from ancient library systems to punch cards, to modern computer-vision systems that 'know' the difference between a house, a road, a vehicle and a human. The basic idea of these scaling technologies is to minimise situations in which individual judgement is required (see also Lorraine Daston's seminal work on rules).