In the past year, thousands of people in the U.S. have tried to trick facial-verification systems to fraudulently claim unemployment benefits from state workforce agencies, according to identity-verification firm ID.me Inc. The company, which uses facial-recognition software to help verify individuals on behalf of 26 U.S. states, says that between June 2020 and January 2021 it found more than 80,000 attempts to fool the selfie step of its government-ID matching process among the agencies it worked with. That included people wearing special masks, using deepfakes--lifelike images generated by AI--or holding up images or videos of other people, says ID.me Chief Executive Blake Hall. Facial recognition for one-to-one identification has become one of the most widely used applications of artificial intelligence, allowing people to make payments via their phones, walk through passport-checking systems or verify themselves as workers.
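One-to-one identification of the kind described above typically reduces to comparing two face embeddings and accepting the match if their similarity clears a threshold. The sketch below is purely illustrative: the embedding vectors, the `verify` function, and the 0.8 threshold are all assumptions for demonstration; real systems derive embeddings from a trained face-recognition network and tune the threshold empirically.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(selfie_embedding, id_embedding, threshold=0.8):
    # One-to-one verification: accept the match only if the live selfie's
    # embedding is close enough to the embedding from the ID photo.
    return cosine_similarity(selfie_embedding, id_embedding) >= threshold
```

A spoofing attempt (a mask, a held-up photo of someone else) succeeds only if it pushes the impostor's embedding past the threshold, which is why liveness checks are layered on top of the similarity test in practice.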
After several high-profile cases, it's understandable that governments would want to start regulating artificial intelligence (AI), and biometric technology in particular. The Clearview AI scandal has shown that people are really 'not OK' with the knowledge that companies scraped the internet for private images in order to train a facial-recognition AI solution they then turned around and sold to law enforcement agencies. Additionally, a number of cases brought by civil rights groups have shown that when AI is employed to make decisions about providing credit, rendering a verdict, or simply verifying a person's identity, minorities are often discriminated against. At the end of April, the EU adopted a proposal for a regulation called the Artificial Intelligence Act (AIA), designed to regulate AI-based solutions. When the new rules fully take effect, the EU hopes to become a global trendsetter in AI regulation.
McDonald's is being sued for recording customers' biometric data at its new AI-powered drive-thru windows without getting their consent. In court filings, Shannon Carpenter, a customer at a McDonald's in Lombard, Illinois, claims the system violates Illinois' Biometric Information Privacy Act, or BIPA, by not getting his approval before using voice-recognition technology to take his order. BIPA requires companies to inform customers that their biometric information--including voiceprints, facial features, fingerprints and other unique physiological features--is being collected. Illinois is only one of a handful of states with biometric privacy laws, but its law is considered the most stringent. In 2020, the fast-food chain began testing voice-recognition software in lieu of human servers at 10 locations in and around Chicago.
The European Union has introduced a proposal to regulate the development of AI, with the goal of protecting the rights and well-being of its citizens. The Artificial Intelligence Act (AIA) is designed to address certain potentially risky, high-stakes use cases of AI, including biometric surveillance, bank lending, test scoring, criminal justice, and behavior-manipulation techniques, among others. The goal of the AIA is to regulate the development of these applications of AI in a way that fosters greater trust in its adoption. Like the EU's General Data Protection Regulation (GDPR), the AIA will apply to anyone selling or providing relevant services to EU citizens. GDPR spurred data-privacy regulations across the United States and around the world.
"The global AI agenda: Promise, reality, and a future of data sharing" is an MIT Technology Review Insights report produced in partnership with Genesys and Philips. It was developed through a global survey, conducted in January and February 2020, of over 1,000 executives across 11 sectors, and a series of interviews with experts with specific responsibility for, or knowledge of, AI. The article below is an extract from the full report. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.
Modern vehicles, including connected and autonomous vehicles, involve many electronic control units connected through intra-vehicle networks to implement various functions and perform actions. Modern vehicles are also connected to external networks through vehicle-to-everything technologies, enabling communication with other vehicles, infrastructure, and smart devices. However, the growing functionality and connectivity of modern vehicles also increase their vulnerability to cyber-attacks targeting both intra-vehicle and external networks, owing to the large attack surface. To secure vehicular networks, many researchers have focused on developing intrusion detection systems (IDSs) that capitalize on machine learning methods to detect malicious cyber-attacks. In this paper, the vulnerabilities of intra-vehicle and external networks are discussed, and a multi-tiered hybrid IDS that incorporates a signature-based IDS and an anomaly-based IDS is proposed to detect both known and unknown attacks on vehicular networks. Experimental results show that the proposed system detects various types of known attacks with 99.99% accuracy on the CAN-intrusion dataset, representing intra-vehicle network data, and 99.88% accuracy on the CICIDS2017 dataset, representing external vehicular network data. For zero-day attack detection, the proposed system achieves high F1-scores of 0.963 and 0.800 on the two datasets, respectively. The average processing time of each data packet on a vehicle-level machine is less than 0.6 ms, which demonstrates the feasibility of implementing the proposed IDS in real-time vehicle systems, emphasizing both its effectiveness and its efficiency.
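The multi-tiered structure described in the abstract can be sketched as follows. This is a minimal illustration of the *architecture* only, not the paper's system: the byte signatures, the z-score anomaly rule, and the packet features are invented stand-ins, whereas the actual IDS trains machine-learning classifiers on the CAN-intrusion and CICIDS2017 datasets.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Packet:
    payload: bytes
    inter_arrival_ms: float  # time since the previous packet on the bus

# Tier 1: signature-based detection, which matches known attack patterns.
# These signatures are placeholders for illustration.
KNOWN_SIGNATURES = {b"\xde\xad\xbe\xef": "spoofing", b"\x00\x00\x00\x00": "flooding"}

def signature_tier(pkt):
    for sig, label in KNOWN_SIGNATURES.items():
        if sig in pkt.payload:
            return label
    return None

# Tier 2: anomaly-based detection for unknown (zero-day) attacks,
# flagging packets whose timing deviates sharply from a benign baseline.
class AnomalyTier:
    def __init__(self, baseline_timings, z_threshold=3.0):
        self.mu = mean(baseline_timings)
        self.sigma = stdev(baseline_timings)
        self.z = z_threshold

    def is_anomalous(self, pkt):
        return abs(pkt.inter_arrival_ms - self.mu) > self.z * self.sigma

def hybrid_ids(pkt, anomaly_tier):
    # Known attacks are caught first by signatures; anything that slips
    # past is checked against the anomaly model.
    label = signature_tier(pkt)
    if label:
        return f"known attack: {label}"
    if anomaly_tier.is_anomalous(pkt):
        return "unknown attack (anomaly)"
    return "benign"
```

The tiering is the key design choice: the cheap signature tier handles the bulk of known traffic, and only its pass-throughs pay the cost of the anomaly model, which is how the sub-millisecond per-packet budget becomes plausible.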
"I have nothing to hide" was once the standard response to surveillance programs that rely on cameras, border checks, and casual questioning by law enforcement. Privacy used to be a concept generally respected in many countries, with the occasional change to rules and regulations made only in the name of the common good. Things have changed, and not for the better. China's Great Firewall, the UK's Snooper's Charter, the US' mass surveillance and bulk data collection -- exposed by Edward Snowden's whistleblowing on the National Security Agency (NSA) -- Russia's insidious election meddling, and countless censorship and communication blackout schemes across the Middle East are all contributing to a global surveillance state in which privacy is a luxury of the few and not a right of the many. As surveillance becomes a common feature of daily life, privacy is in danger of no longer being considered an intrinsic right. Everything from our web browsing to our mobile devices and the Internet of Things (IoT) products installed in our homes has the potential to erode our privacy and personal security, and you cannot depend on vendors or ever-changing surveillance rules to keep them intact. Having "nothing to hide" doesn't cut it anymore. We must all do whatever we can to safeguard our personal privacy. Taking the steps outlined below can not only give you some sanctuary from spreading surveillance tactics but also help keep you safe from cyberattackers, scam artists, and a new, emerging issue: misinformation. Data is a vague concept that can encompass such a wide range of information that it is worth briefly breaking down the different kinds before examining how each is relevant to your privacy and security.
Personally identifiable information, known as PII, can include your name, physical home address, email address, telephone numbers, date of birth, marital status, Social Security numbers (US)/National Insurance numbers (UK), and other information relating to your medical status, family members, employment, and education. All of this data, whether lost in different data breaches or stolen piecemeal through phishing campaigns, can provide attackers with enough information to conduct identity theft, take out loans in your name, and potentially compromise online accounts that rely on security questions being answered correctly. In the wrong hands, this information can also prove to be a gold mine for advertisers lacking a moral backbone.
In recent years, data and computing resources have increasingly been distributed across end users' devices and across different regions or organizations. Because of laws and regulations, these distributed data and computing resources cannot be directly shared among different regions or organizations for machine-learning tasks. Federated learning (FL) has emerged as an efficient approach to exploit distributed data and computing resources, collaboratively training machine learning models while obeying laws and regulations and ensuring data security and data privacy. In this paper, we provide a comprehensive survey of existing work on federated learning. We propose a functional architecture of federated learning systems and a taxonomy of related techniques. Furthermore, we present the distributed training, data communication, and security of FL systems. Finally, we analyze their limitations and propose future research directions.
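The collaborative training loop at the heart of federated learning can be illustrated with a minimal federated-averaging (FedAvg-style) sketch. The 1-D linear model, the learning rate, and the toy client datasets below are assumptions chosen to keep the example self-contained; real FL systems train neural networks and add the communication and security layers the survey discusses.

```python
def local_update(global_w, data, lr=0.1):
    # One step of gradient descent on a 1-D linear model y = w * x,
    # minimizing squared error on this client's local data.
    grad = sum(2 * (global_w * x - y) * x for x, y in data) / len(data)
    return global_w - lr * grad

def fed_avg(global_w, client_datasets, rounds=50):
    for _ in range(rounds):
        updates, sizes = [], []
        for data in client_datasets:
            # Each client trains on its own data; the raw data never
            # leaves the client -- only the updated weight is shared.
            updates.append(local_update(global_w, data))
            sizes.append(len(data))
        # The server aggregates by data-size-weighted averaging.
        total = sum(sizes)
        global_w = sum(w * n for w, n in zip(updates, sizes)) / total
    return global_w
```

For example, two clients whose local data both follow y = 2x will drive the global weight toward 2.0 even though neither client ever sees the other's examples, which is the privacy-preserving property the abstract describes.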