Vision


Emotion AI Opens Up New Possibilities for Consumer Research

#artificialintelligence

The idea of using Artificial Intelligence to understand consumer behavior has been around for a while now. From speculating about its accuracy to debating its methods, researchers have long taken a keen interest in discussing AI's future in consumer research. Emotion AI is the most noticeable development in this regard. Emotion AI is the subset of Artificial Intelligence that tries to understand human expressions, both verbal and non-verbal. Also known as Affective Computing, Emotion AI is the science of recognizing, interpreting, processing, and simulating human expressions. The term "Affective Computing" was coined in 1995 in Rosalind Picard's paper of the same name, published by MIT Press.
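In practice, recognizing expressions from images is typically framed as a classification problem. The sketch below is a minimal, hypothetical illustration of that framing in PyTorch: a small convolutional network that maps 48x48 grayscale face crops to seven basic expression labels (the label set used by datasets such as FER2013). All names, shapes and layer sizes are illustrative assumptions, not a description of any particular Emotion AI product.

```python
# Minimal sketch: facial-expression recognition as image classification.
# Everything here (label set, input size, architecture) is illustrative.
import torch
import torch.nn as nn

EXPRESSIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

class ExpressionNet(nn.Module):
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # 48x48 grayscale face crop
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 12x12
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# A batch of 8 face crops -> a probability distribution over expressions.
logits = ExpressionNet()(torch.randn(8, 1, 48, 48))
probs = logits.softmax(dim=1)
```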


Microsoft Surface Laptop 4 review: Windows 10 as it is meant to be

The Guardian

Microsoft's sleek and stylish Surface Laptop is back for its fourth generation with faster performance and a greater variety of chips. The Surface Laptop 4 is available with either a 13.5in or a 15in screen and starts at £999 in the UK, $999 in the US or $1,599 in Australia, sitting above the Surface Laptop Go as Microsoft's mainstream premium notebook and competing with the similarly priced Dell XPS 13 and Apple MacBook Air, among others. Very little has changed on the outside: the dimensions, weight, port selection and design match those of 2020's Surface Laptop 3. Tested here with the 13.5in screen, it still looks and feels sleek with its aluminium lid, choice of Alcantara fabric or aluminium deck, and bright and crisp touch screen. The keyboard is excellent, while the large trackpad is smooth and precise. The speakers are loud and clear with reasonable bass for a laptop, and the 720p webcam and microphones are better than most for video calls.


Ed Markey, Elizabeth Warren, Ayanna Pressley reintroduce legislation to stop government use of facial recognition

Boston Herald

Facial recognition is facing a showdown in Congress. U.S. Sens. Edward Markey and Elizabeth Warren, joined by U.S. Rep. Ayanna Pressley, are reintroducing legislation to rein in the government's use of biometric technology, including facial recognition. ACLU Massachusetts Executive Director Kate Ruane said people shouldn't have to worry "that government agencies are keeping tabs on their every movement." The bill would make it illegal, under almost any circumstance, for any federal agent or official to "acquire, possess, access or use" any biometric surveillance system. This includes facial recognition, voice recognition, gait recognition and "other immutable characteristic(s)," according to the bill.


5 Computer Vision Trends for 2021

#artificialintelligence

Computer Vision is a fascinating field of Artificial Intelligence with tons of value in the real world. A huge wave of billion-dollar computer vision startups is coming, and Forbes expects the computer vision market to reach USD 49 billion by 2022. The main goal of computer vision is to give computers the ability to understand the world through sight and to make decisions based on that understanding. In application, this technology allows the automation and augmentation of human sight, creating a myriad of use cases. If AI enables computers to think, computer vision enables them to see, observe and understand.


Democratic lawmakers want to ban the federal government from using facial recognition

Engadget

Four Democratic lawmakers want to ban the federal government from using facial recognition technology. Led by Massachusetts Senator Edward J. Markey and Oregon Senator Jeff Merkley, the group plans to introduce the Facial Recognition and Biometric Technology Moratorium Act to Congress. If passed, the bill would prohibit federal authorities from using the technology along with several other biometric tools, such as voice recognition. Perhaps even more significantly, state and local entities, including law enforcement agencies, would need to pass their own moratoriums to secure funding from the federal government. In laying out the need for policy intervention, the group cites a report from the National Institute of Standards and Technology.


10 Papers in 2021

#artificialintelligence

Our research centre is dedicated to anticipating the challenges that European businesses face today. Computer vision is one of the most active research areas and applies to many use cases across sectors. In healthcare, AI tools can assist doctors in their decision-making: to help doctors detect tumours, we built a model combining 2D and 3D convolutional neural networks in collaboration with the Institute Carnot CALYM. Our alumna Cécile Pereira worked at Eura Nova Marseille on a pipeline that builds navigable pathways aligned with a researcher's needs.
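The excerpt gives no architectural details, but one common pattern for combining 2D and 3D convolutions over a medical volume is to run a 2D branch slice by slice for fine in-plane detail and a 3D branch over the whole volume for cross-slice context, then fuse the two feature vectors. The PyTorch sketch below illustrates that general pattern only; the layer sizes, fusion scheme and class count are hypothetical and are not taken from the model described above.

```python
# Hedged sketch of one way to fuse 2D (per-slice) and 3D (whole-volume)
# convolutional features, e.g. for a binary tumour/no-tumour decision.
import torch
import torch.nn as nn

class TwoDThreeDFusion(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # 2D branch: applied to each slice, captures fine in-plane detail.
        self.branch2d = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        # 3D branch: sees the full volume, captures cross-slice context.
        self.branch3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(8 + 8, num_classes)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, 1, depth, height, width)
        b, _, d, h, w = volume.shape
        slices = volume.permute(0, 2, 1, 3, 4).reshape(b * d, 1, h, w)
        feat2d = self.branch2d(slices).reshape(b, d, 8).mean(dim=1)  # pool over slices
        feat3d = self.branch3d(volume).flatten(1)
        return self.head(torch.cat([feat2d, feat3d], dim=1))

scores = TwoDThreeDFusion()(torch.randn(2, 1, 16, 64, 64))  # two 16-slice volumes
```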


The False Comfort of Human Oversight as an Antidote to A.I. Harm

Slate

In April, the European Commission released a wide-ranging proposed regulation to govern the design, development, and deployment of A.I. systems. The regulation stipulates that "high-risk A.I. systems" (such as facial recognition and algorithms that determine eligibility for public benefits) should be designed to allow for oversight by humans who will be tasked with preventing or minimizing risks. Often expressed as the "human-in-the-loop" solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the "loop" of A.I. seems reassuring, this approach is instead "loopy" in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems. A.I. is celebrated for its superior accuracy, efficiency, and objectivity in comparison to humans.


Council Post: How To Make Sure That Diversity In AI Works

#artificialintelligence

The author is Chief Technology Officer at Integrity Management Services, Inc., where she leads cutting-edge artificial intelligence (AI) solutions for clients. Artificial intelligence is ubiquitous today. Most of us do not know where AI is being used and are unaware of the biased decisions that some of these algorithms produce. There are AI tools that claim to infer "criminality" from face images, race from facial expressions and emotions from eye movements. Many of these technologies are increasingly used in applications that impact credit card checks, fraud detection, criminal justice decisions, hiring practices, healthcare outcomes, the spread of misinformation, education, lifestyle decisions and more.


AI is getting smarter every day, but it still can't match the human mind

#artificialintelligence

Recognizing an image used to be a task in which humans had a clear advantage over machines, until relatively recently. Initiatives such as the ImageNet project, formulated in 2006, have served to significantly narrow this gap. Led by Chinese American researcher Fei-Fei Li, a computer science professor at Stanford University who also served as director of the Stanford Artificial Intelligence Lab (SAIL), the ImageNet project consists of a database of nearly 15 million images that have been classified by humans. This repository is the raw material used to train computer vision algorithms and is available online free of charge. To boost development in computer image recognition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) was created in 2010; in it, systems developed by teams from around the world compete to classify the images correctly.
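ILSVRC entries are conventionally ranked by top-1 and top-5 accuracy, i.e. how often the correct class is a model's single best guess, or appears among its five highest-scoring guesses. The short sketch below computes both metrics over the 1,000 ImageNet classes; the random logits are stand-ins for the outputs of a real model.

```python
# Top-k accuracy, the metric used to rank ILSVRC image classifiers.
import torch

def topk_accuracy(logits: torch.Tensor, labels: torch.Tensor, k: int) -> float:
    """Fraction of examples whose true label is among the k highest scores."""
    topk = logits.topk(k, dim=1).indices              # (batch, k) predicted classes
    hits = (topk == labels.unsqueeze(1)).any(dim=1)   # true label in top k?
    return hits.float().mean().item()

logits = torch.randn(1000, 1000)          # 1,000 images x 1,000 ImageNet classes
labels = torch.randint(0, 1000, (1000,))
print(f"top-1 accuracy: {topk_accuracy(logits, labels, 1):.3f}")  # ~0.001 for random
print(f"top-5 accuracy: {topk_accuracy(logits, labels, 5):.3f}")  # ~0.005 for random
```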


TikTok Has Started Collecting Your 'Faceprints' and 'Voiceprints.' Here's What It Could Do With Them

TIME - Tech

Recently, TikTok changed its U.S. privacy policy, allowing the company to "automatically" collect new types of biometric data, including what it describes as "faceprints" and "voiceprints." TikTok's unclear intent, the permanence of the biometric data and the potential future uses for it have caused concern among experts, who say users' security and privacy could be at risk. On June 2, TikTok updated the "Information we collect automatically" portion of its privacy policy to include a new section called "Image and Audio Information," giving itself permission to gather certain physical and behavioral characteristics from its users' content. The increasingly popular video-sharing app may now collect biometric information such as "faceprints and voiceprints," but the update doesn't define these terms or say what the company plans to do with the data. "Generally speaking, these policy changes are very concerning," Douglas Cuthbertson, a partner in Lieff Cabraser's Privacy & Cybersecurity practice group, tells TIME.