The saying "data is the new oil" was reportedly coined by British mathematician and marketing whiz Clive Humby in 2006. Data is the fuel powering modern AI models; without enough of it, the performance of these systems will sputter and fail. And like oil, the resource is scarce and controlled by big businesses. What do you do if you're a small computer vision company? You can turn to fake data to train your models, and if you're lucky it might just work.
The US national tax authority announced Monday that it will stop using facial recognition software to verify taxpayers' identities when they create online accounts, following a chorus of privacy concerns. Internal Revenue Service officials had put forth the authentication system as a security measure following years of growing fears over online scams and identity theft, but the program ended up prompting worries of its own. The initiative involved identity verification company ID.me, which won a nearly $90 million contract to make taxpayers' accounts more secure. The IRS said it will "transition away from using a third-party service for facial recognition to help authenticate people creating new online accounts." "The IRS will quickly develop and bring online an additional authentication process that does not involve facial recognition," it said, as the agency faces staffing shortages and significant backlogs.
Singapore has trialled patrol robots that blast warnings at people engaging in "undesirable social behaviour", adding to an arsenal of surveillance technology in the tightly controlled city-state that is fuelling privacy concerns. From vast numbers of CCTV cameras to trials of lampposts kitted out with facial recognition tech, Singapore is seeing an explosion of tools to track its inhabitants. That includes a three-week trial in September, in which two robots were deployed to patrol a housing estate and a shopping centre. Officials have long pushed a vision of a hyper-efficient, tech-driven "smart nation", but activists say privacy is being sacrificed and people have little control over what happens to their data. Singapore is frequently criticised for curbing civil liberties and people are accustomed to tight controls, but there is still growing unease at intrusive tech.
In its annual report, the AI Now Institute, an interdisciplinary research center studying the societal implications of artificial intelligence, called for a ban on technology designed to recognize people's emotions in certain cases. Specifically, the researchers said affect recognition technology, also called emotion recognition technology, should not be used in decisions that "impact people's lives and access to opportunities," such as hiring decisions or pain assessments, because it is not sufficiently accurate and can lead to biased decisions. What is this technology, which is already being used and marketed, and why is it raising concerns? Researchers have been actively working on computer vision algorithms that can determine the emotions and intent of humans, along with making other inferences, for at least a decade. Facial expression analysis has been around since at least 2003.
Police in London are moving ahead with deploying a facial recognition camera system despite privacy concerns and evidence that the technology is riddled with false positives. The Metropolitan Police, the U.K.'s biggest police department with jurisdiction over most of London, announced Friday it would begin rolling out new "live facial recognition" cameras in London, making the capital one of the largest cities in the West to adopt the controversial technology. The "Met," as the police department is known in London, said in a statement the facial recognition technology, which is meant to identify people on a watch list and alert police to their real-time location, would be "intelligence-led" and deployed only to specific locations. It's expected to be rolled out as soon as next month. However, privacy activists immediately raised concerns, noting that independent reviews of trials of the technology showed a failure rate of 81%.
A YOUNG MAN, let's call him Roger, arrives at the emergency department complaining of belly pain and nausea. A physical exam reveals that the pain is focused in the lower right portion of his abdomen. The doctor worries that it could be appendicitis. But by the time the imaging results come back, Roger is feeling better, and the scan shows that his appendix appears normal. The doctor turns to the computer to prescribe two medications, one for nausea and Tylenol for pain, before discharging him. This is one of the fictitious scenarios presented to 55 physicians around the country as part of a study to look at the usability of electronic health records (EHRs).
Until recently, artificial intelligence struggled to gain a foothold on Wall Street. In the last few years, large investment banks like Goldman Sachs and JP Morgan have hired artificial intelligence specialists away from academia and put them in charge of their internal AI divisions. Financial technology start-ups have begun using machine-learning algorithms to model credit ratings and detect fraud. And hedge funds and high-frequency traders are using AI to make investment decisions. Politicians are starting to take notice.
Taylor Swift raised eyebrows late last year when Rolling Stone magazine revealed her security team had deployed facial recognition technology during her Reputation tour to root out stalkers. But the company contracted for the efforts uses its technology to provide much more than just security. ISM Connect also uses its smart screens to capture metrics for promotion and marketing. Facial recognition, used for decades by law enforcement and militaries, is quickly becoming a commercial tool to help brands engage consumers. Swift's tour is just the latest example of the growing privacy concerns around the largely unregulated, billion-dollar industry.
It's now possible to check in automatically at Shanghai's Hongqiao airport using facial recognition technology, part of an ambitious rollout of facial recognition systems in China that has raised privacy concerns as Beijing pushes to become a global leader in the field. The airport unveiled self-service kiosks for flight and baggage check-in, security clearance and boarding powered by facial recognition technology, according to the Civil Aviation Administration of China.
The Indian government plans to decongest its airports by introducing facial recognition technology next year - a proposal that may once again raise privacy concerns in the South Asian country. India's ministry of civil aviation on Thursday said passengers on domestic flights will be able to choose to use the biometric authentication system and go paperless. "Security will benefit from the ability of the technology to verify the passenger at every checkpoint in a non-intrusive way," ministry secretary Rajiv Nayan Choubey said in a statement. The proposal says passengers would be verified by being photographed at every stage of the check-in process - from entering the airport to proceeding through security and boarding the plane. The Indian government statement said the biometric technology will be introduced first at Bengaluru and Hyderabad airports by February next year, followed by Kolkata, Varanasi, Pune and Vijayawada by April.