BANGKOK - The launch of facial recognition technology at two Indian airports and plans to place it in police stations have stoked fears over privacy and increased surveillance among human rights groups in the country.

The "paperless biometric technology" launched at Bengaluru airport this week identifies passengers by their faces, doing away with the need to present boarding passes, passports and other identity documents, according to a statement from the airport in India's tech capital. Another airport, in the southern Indian city of Hyderabad, is also testing facial recognition technology this month.

While airlines, airports and the companies developing the software promise greater security and increased efficiency, some technology analysts and privacy experts say the benefits are not clear, and come at the cost of privacy and greater surveillance. This is particularly true of India, which has neither a data protection law nor an electronic surveillance framework, said Vidushi Marda, a lawyer and adviser at the human rights group Article 19.
The use of facial recognition in the United States public sector has received a great deal of press lately, and most of it isn't positive. There is widespread concern over how state and federal government agencies are using this technology and how the resulting biometric data will be used. Many fear that its use will lead to a Big Brother state. Unfortunately, these concerns are not without merit. We're already seeing damaging results where this technology is prevalent, in countries like China and Singapore, and even in the United Kingdom, where London authorities recently fined a man for disorderly conduct after he covered his face to avoid street surveillance cameras.
Artificial intelligence is a branch of computer science dealing with the simulation of intelligent behavior in computers, or the capability of a machine to imitate intelligent human behavior. Though still a nascent field, AI applications are already becoming ubiquitous and transforming everyday life for the better. Whether in smart assistants like Apple's Siri or Amazon's Alexa, applications for better customer service, or the use of big data insights to streamline and enhance operations, AI is quickly becoming an essential tool of modern life and business. In fact, according to statistics from Adobe, only 15 percent of enterprises are using AI today, but 31 percent are expected to add it over the coming 12 months, and the share of jobs requiring AI skills has increased by 450 percent since 2013. Leveraging clues from their environment, artificially intelligent systems are programmed by humans to solve problems, assess risks, make predictions and take actions based on input data.
Thanks to advances in artificial intelligence (AI), society is now facing a unique challenge: how do we regulate the usage of human faces and voices? Facial recognition is the ability of computer systems to identify us by our faces. Voice recognition is the ability of computer systems to do the same for our words. Both are powered by AI, and both create benefits for consumers and citizens. These technologies also raise difficult questions about privacy and personal rights.
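Under the hood, facial recognition systems typically reduce each face image to a numeric "embedding" vector and match it against a gallery of enrolled faces. The sketch below illustrates only that matching step in plain Python; the names, vectors and threshold are invented for illustration, and a real system would use a trained neural network to produce the embeddings from photographs.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, gallery, threshold=0.8):
    """Return the name of the most similar enrolled face,
    or None if nothing in the gallery clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy 4-dimensional "embeddings" stand in for the vectors a real
# face-recognition model would extract from images.
gallery = {
    "traveller_a": [0.9, 0.1, 0.2, 0.4],
    "traveller_b": [0.1, 0.8, 0.7, 0.2],
}
probe = [0.88, 0.12, 0.22, 0.38]  # geometrically close to traveller_a
print(identify(probe, gallery))  # → traveller_a
```

The threshold is the key policy knob in practice: set it too low and strangers are misidentified as enrolled travellers; set it too high and legitimate matches are rejected.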
As San Francisco moves to regulate the use of facial recognition systems, we reflect on some of the many 'faces' of the fast-growing technology.

Last week, San Francisco became the first city in the United States to ban the use of facial recognition technology, at least by law enforcement, local agencies, and the city's transport authority. My immediate reaction to the headlines was that this was great for individuals' privacy, a truly bold decision by the San Francisco board of supervisors. The ordinance actually covers more than just facial recognition, as it states the following: "'Surveillance Technology' means any software, electronic device, system utilizing an electronic device, or similar device used, designed, or primarily intended to collect, retain, process, or share audio, electronic, visual, location, thermal, biometric, olfactory or similar information specifically associated with, or capable of being associated with, any individual or group." The ban excludes San Francisco's airport and seaport, as these are operated by federal agencies. Nor does the ban prevent individuals, companies or other organizations from installing surveillance systems that include facial recognition, and the agencies barred from using the technology can still cooperate with those that are allowed to use it.