If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Facial recognition technology is being deployed in airports, security cameras and in our phones. Now, Tokyo is using facial recognition in an unexpected way: to serve up targeted advertisements to taxi passengers, based on their age and gender, as they're ferried to their destination. The unsettling practice was discovered by Google privacy engineer Rosa Golijan, who posted a photo of a tablet she encountered when hopping into a taxi in Japan.
Despite what I tell my son, I really don't have eyes in the back of my head. But I do have Wi-Fi security cameras with smartphone apps, which allow me to keep tabs on him, as well as my dog, my car, the front door, and the yard. Picking the right one (or two, or three) depends on what you want to do with it.
Few biometric technologies spark the imagination quite like facial recognition. Equally, its arrival has prompted profound concerns and reactions. Alongside artificial intelligence and blockchain, facial recognition represents a significant digital challenge for companies and organizations, and especially for governments. In this dossier, you'll discover the 7 facial recognition facts and trends that are set to shape the landscape in 2019. Let's jump right in.
The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. "Facial recognition is the plutonium of AI", it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read. The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera.
Despite concerns over facial recognition's impact on civil liberties, public agencies have continued to apply the tool liberally across the U.S., with one of the biggest deployments coming to an airport near you. The U.S. Department of Homeland Security (DHS) said that it plans to expand its application of facial recognition to 97 percent of all passengers departing the U.S. by 2023, according to the Verge. By comparison, facial recognition technology was deployed in just 15 airports as of the end of 2018. In what is being referred to as 'biometric exit,' the agency plans to use facial recognition to more thoroughly track passengers entering and leaving the country. The system functions by taking a picture of passengers before they depart and then cross-referencing the image with a database containing photos from passports and visas.
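The matching step described above (capture a live photo, then cross-reference it against stored passport and visa photos) is, at its core, a one-to-many face search. Here is a minimal sketch in Python, assuming face images have already been converted into embedding vectors by some recognition model; the gallery contents, identifiers, and threshold are all hypothetical, not details of the DHS system:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_passenger(probe_embedding, gallery, threshold=0.8):
    """One-to-many search: compare the live capture against every stored
    document photo and return the best match if it clears the threshold.

    gallery: dict mapping a document ID to its stored face embedding.
    Returns (document_id_or_None, best_score).
    """
    best_id, best_score = None, -1.0
    for doc_id, embedding in gallery.items():
        score = cosine_similarity(probe_embedding, embedding)
        if score > best_score:
            best_id, best_score = doc_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy gallery of random "embeddings"; a real system would use the output of a
# trained face-recognition model, not random vectors.
rng = np.random.default_rng(0)
gallery = {f"passport-{i}": rng.normal(size=128) for i in range(3)}

# Simulate a live capture of the person behind "passport-1" with sensor noise.
probe = gallery["passport-1"] + rng.normal(scale=0.05, size=128)
doc_id, score = match_passenger(probe, gallery)
```

In practice the threshold trades off false matches against missed matches, which is where the error-rate disparities discussed elsewhere in this section become operationally significant.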
Microsoft has said it turned down a request from law enforcement in California to use its facial recognition technology in police body cameras and cars, reports Reuters. Speaking at an event at Stanford University, Microsoft president Brad Smith said the company was concerned that the technology would disproportionately affect women and minorities. Past research has shown that because facial recognition technology is trained primarily on white and male faces, it has higher error rates for other individuals. "Anytime they pulled anyone over, they wanted to run a face scan," said Smith of the unnamed law enforcement agency. "We said this technology is not your answer."
The ACLU and other groups have urged Amazon to stop selling facial recognition technology to law enforcement departments. Lending tools charge higher interest rates to Hispanics and African Americans. Job-hunting tools favor men. Negative emotions are more likely to be assigned to black men's faces than to white men's. Computer vision systems for self-driving cars have a harder time spotting pedestrians with darker skin tones.
A prized attribute among law enforcement specialists, the expert ability to visually identify human faces can inform forensic investigations and help maintain safe border crossings, airports, and public spaces around the world. The field of forensic facial recognition depends on highly refined traits such as visual acuity, cognitive discrimination, memory recall, and elimination of bias. Humans, as well as computers running machine learning (ML) algorithms, possess these abilities. And it is the combination of the two, a human facial recognition expert teamed with a computer running ML analyses of facial image data, that provides the most accurate facial identification, according to a 2018 study in which Rama Chellappa, Distinguished University Professor and Minta Martin Professor of Engineering, and his team collaborated with researchers at the National Institute of Standards and Technology and the University of Texas at Dallas. Chellappa, who holds appointments in UMD's Departments of Electrical and Computer Engineering and Computer Science and Institute for Advanced Computer Studies, is not surprised by the study results.
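The study's headline result, that a human expert teamed with an algorithm outperforms either alone, is typically realized by fusing their judgments. A toy sketch of score-level fusion follows; the weights and scores are entirely hypothetical and are not taken from the study:

```python
import statistics

def fuse_scores(human_scores, algorithm_score, algo_weight=0.5):
    """Blend several human examiners' match ratings with an algorithm's
    similarity score. All inputs are assumed to lie in [0, 1], where higher
    means a more confident match. The 50/50 weighting is illustrative only.
    """
    human_average = statistics.mean(human_scores)
    return algo_weight * algorithm_score + (1 - algo_weight) * human_average

# Two examiners rate a candidate pair 0.7 and 0.8; the algorithm scores 0.9.
fused = fuse_scores([0.7, 0.8], 0.9)
```

Averaging independent judgments reduces the impact of any single rater's error, which is one intuition for why the human-plus-machine combination can beat its individual components.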
Ask a layman about artificial intelligence and they might point to sci-fi villains such as HAL from 2001: A Space Odyssey or the Terminator. But the co-founders of the AI Now Institute, Meredith Whittaker and Kate Crawford, want to change the conversation. Instead of talking about far-flung super-intelligent AI, they argued on the latest episode of Recode Decode, we should be talking about the ways AI is affecting people right now, in everything from education to policing to hiring. Rather than killer robots, you should be concerned about what happens to your résumé when it hits a program like the one Amazon tried to build. "They took two years to design, essentially, an AI automatic résumé scanner," Crawford said. "And they found that it was so biased against any female applicant that if you even had the word 'woman' on your résumé that it went to the bottom of the pile." That's a classic example of what Crawford calls "dirty data." Even though people think of algorithms as being ...
A dexterous robot arm that can automatically feed people forkfuls of food has been developed by researchers in the US. Experts studied how real people use forks to feed each other in order to teach the robot the best way to go about its task. The arm automatically adjusts both the force it uses and the angle at which it spears items to best pick up and deliver mouthfuls of food, regardless of size or texture. 'Being dependent on a caregiver to feed every bite, every day, takes away a person's sense of independence,' said roboticist Siddhartha Srinivasa.