Results


New iPhone brings face recognition (and fears) to masses

Daily Mail

Apple will let you unlock the iPhone X with your face - a move likely to bring facial recognition to the masses. But along with the rollout of the technology come concerns over how it could be used. Despite Apple's safeguards, privacy activists fear the widespread use of facial recognition would 'normalise' the technology. This could open the door to broader use by law enforcement, marketers or others of a largely unregulated tool, creating a 'surveillance technology that is abused', experts have warned.


Apple iPhone X's FaceID Technology: What It Could Mean For Civil Liberties

International Business Times

Apple's new facial recognition software for unlocking its new iPhone X has raised questions about privacy and the technology's susceptibility to hacking attacks. Apple's iPhone X is set to go on sale on Nov. 3. The world waits with bated breath as Apple plans on releasing a slew of new features, including a facial scan. The new device can be unlocked with face recognition software that lets a user unlock the phone simply by looking at it. This convenient new technology is set to replace numeric and pattern locks and comes with a number of privacy safeguards.


New iPhone brings face recognition -- and fears -- to the masses

The Japan Times

WASHINGTON – Apple will let you unlock the iPhone X with your face -- a move likely to bring facial recognition to the masses, along with concerns over how the technology may be used for nefarious purposes. Apple's newest device, set to go on sale on Friday, is designed to be unlocked with a facial scan with a number of privacy safeguards -- as the data will only be stored on the phone and not in any databases. Unlocking one's phone with a face scan may offer added convenience and security for iPhone users, according to Apple, which claims its "neural engine" for FaceID cannot be tricked by a photo or hacker. While other devices have offered facial recognition, Apple is the first to pack the technology allowing for a three-dimensional scan into a hand-held phone. But despite Apple's safeguards, privacy activists fear the widespread use of facial recognition would "normalize" the technology and open the door to broader use by law enforcement, marketers or others of a largely unregulated tool.


AIs that learn from photos become sexist

Daily Mail

In the fourth example, the person pictured is labeled 'woman' even though it is clearly a man, because of sexist biases in the set that associate kitchens with women. Researchers tested two of the largest collections of photos used to train image recognition AIs and discovered that sexism was rampant. They found the AIs associated men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment. 'For example, the activity cooking is over 33 percent more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68 percent at test time,' reads the paper, titled 'Men Also Like Shopping,' which was published as part of the 2017 Conference on Empirical Methods in Natural Language Processing. A user shared a photo depicting another scenario in which technology failed to detect darker skin, writing 'reminds me of this failed beta test'. Princeton University conducted a word association task with the algorithm GloVe, an unsupervised AI that uses online text to understand human language.
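The 33 and 68 percent figures quoted above describe how far an activity's labels skew toward one gender, first in the training data and then in the model's predictions. Below is a minimal sketch of how such a bias score and its amplification might be computed; the counts and the helper function are illustrative placeholders, not the paper's actual data or code.

```python
# Sketch: gender-bias score of an activity and its amplification at test time.
# Counts are hypothetical, chosen only to illustrate the calculation.

def female_bias(female_count, male_count):
    """Fraction of instances of an activity whose agent is labeled female."""
    return female_count / (female_count + male_count)

# Hypothetical counts for the activity "cooking".
train_bias = female_bias(female_count=660, male_count=340)  # skew in training labels
pred_bias = female_bias(female_count=840, male_count=160)   # skew in model predictions

print(f"training bias:  {train_bias:.2f}")
print(f"predicted bias: {pred_bias:.2f}")
print(f"amplification:  {pred_bias - train_bias:+.2f}")
```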


If you weren't raised in the Internet age, you may need to worry about workplace age discrimination

Los Angeles Times

Although people of both genders struggle with age discrimination, research has shown women begin to experience age discrimination in hiring practices before they reach 50, whereas men don't experience it until several years later. Just as technology is causing barriers inside the workplace for older employees, online applications and search engines could be hurting older workers looking for jobs. Many applications have required fields asking for date of birth and high school graduation, something many older employees choose to leave off their resumes. Furthermore, McCann said, some search engines allow people to filter their search based on high school graduation date, thereby allowing employers and employees to screen people and positions out of the running.


Princeton researchers discover why AI become racist and sexist

#artificialintelligence

Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. As an example, Caliskan made a video where she shows how the Google Translate AI actually mistranslates words into the English language based on stereotypes it has learned about gender. Though Caliskan and her colleagues found language was full of biases based on prejudice and stereotypes, it was also full of latent truths. "Language reflects facts about the world," Caliskan told Ars.


Artificial intelligence can be sexist and racist just like humans

#artificialintelligence

Researchers at Princeton University and Britain's University of Bath found that machine learning "absorbs stereotyped biases" when trained on words from the internet. Their findings, published in the journal Science on Thursday, showed that machines learn word associations from written texts that mirror those learned by humans. A psychological tool called the implicit association test (IAT)--an assessment of a person's unconscious associations between certain words--inspired the development of a similar test for machines called a word-embedding association test (WEAT). The WEAT found that male names were associated with work, math, and science, and female names with family and the arts, meaning these stereotypes were held by the computer.
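The WEAT statistic itself is a simple computation over word embeddings: each target word's association score is its mean cosine similarity to one attribute set minus its mean similarity to the other, and the effect size compares the two groups of target words. The sketch below illustrates this under the assumption that embeddings are available as a word-to-vector dict; the random vectors stand in for real trained embeddings such as GloVe.

```python
# Minimal WEAT-style sketch: differential association of two target sets
# (e.g. male vs. female names) with two attribute sets (career vs. family).
import numpy as np

rng = np.random.default_rng(0)
dim = 50
# Hypothetical embeddings; in practice these would be loaded from a trained model.
vocab = ["john", "paul", "amy", "joan",
         "career", "salary", "office",
         "home", "parents", "family"]
emb = {w: rng.normal(size=dim) for w in vocab}

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    """s(w, A, B): mean cosine similarity of w to set A minus its mean to set B."""
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B):
    """Effect size of the differential association of targets X vs. Y
    with attribute sets A vs. B."""
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

X = ["john", "paul"]              # target set 1: male names
Y = ["amy", "joan"]               # target set 2: female names
A = ["career", "salary", "office"]  # attribute set 1: work
B = ["home", "parents", "family"]   # attribute set 2: family
print(weat_effect_size(X, Y, A, B))
```

With real embeddings, a large positive effect size would indicate that male names sit closer to the work-related words and female names closer to the family-related words, which is the pattern the researchers reported.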


Why can artificial intelligence be racist and sexist?

#artificialintelligence

To do this, they turned to a rather unusual method: the Implicit Association Test (IAT), which is used to study social attitudes and stereotypes in people. Using IAT tests as a model, Kaliskan and her colleagues created the WEAT (Word-Embedding Association Test) algorithm, which analyzes whole fragments of text to find out which linguistic concepts are more closely associated than others. As an example, Kaliskan cites the way the Google Translate algorithm mistranslates words from other languages into English based on the gender stereotypes it has learned. In one of the tests, the researchers found a strong associative relationship between the concepts "woman" and "motherhood".


Princeton researchers discover why AI become racist and sexist

#artificialintelligence

Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus--created by millions of people typing away online--might contain biases that could be discovered by algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes. People taking the IAT are asked to put words into two categories.

