Apple will let you unlock the iPhone X with your face - a move likely to bring facial recognition to the masses. But along with the roll-out of the technology come concerns over how it could be used. Despite Apple's safeguards, privacy activists fear the widespread use of facial recognition would 'normalise' the technology. This could open the door to broader use by law enforcement, marketers or others of a largely unregulated tool, creating a 'surveillance technology that is abused', experts have warned.
WASHINGTON – Apple will let you unlock the iPhone X with your face -- a move likely to bring facial recognition to the masses, along with concerns over how the technology may be used for nefarious purposes. Apple's newest device, set to go on sale on Friday, is designed to be unlocked with a facial scan with a number of privacy safeguards -- as the data will only be stored on the phone and not in any databases. Unlocking one's phone with a face scan may offer added convenience and security for iPhone users, according to Apple, which claims its "neural engine" for FaceID cannot be tricked by a photo or hacker. While other devices have offered facial recognition, Apple is the first to pack the technology allowing for a three-dimensional scan into a hand-held phone. But despite Apple's safeguards, privacy activists fear the widespread use of facial recognition would "normalize" the technology and open the door to broader use by law enforcement, marketers or others of a largely unregulated tool.
A viral app that added Asian, Black, Caucasian and Indian filters to people's selfies has removed them after being accused of racism. The update, which launched yesterday, was met with backlash, with many people criticising it for propagating racial stereotypes. The filters drew comparisons with 'blackface' and 'yellowface' - when white people wear make-up to appear to be from a different ethnic group. The app uses artificial intelligence to transform faces.
FaceApp has removed a number of racially themed photo filters after being accused of racism. The app, which uses artificial intelligence to edit pictures, this week launched a number of "ethnicity change filters". They claimed to show users what they'd look like if they were Caucasian, Black, Asian or Indian. FaceApp attracted fierce criticism for launching the filters, with some users calling them racist and saying they encouraged users to "black up" digitally. Responding to the backlash, FaceApp founder and CEO Yaroslav Goncharov said: "The ethnicity change filters have been designed to be equal in all aspects."
The answer, according to some former NSA analysts, is that the agency routinely monitors many of its employees' computer activity. Employee monitoring is a $200 million-a-year industry, according to a study last year by 451 Research, a technology research firm, and is estimated to be worth $500 million by 2020. The practice recently came to light in a high-profile lawsuit involving Uber and Waymo, the self-driving car company owned by Google's parent firm, Alphabet. Privacy advocates have been pushing for years to have Congress review various communications privacy laws in light of updates to technology.
It's been another eventful week in Trump's America. The Ninth Circuit effectively administered a coup de grace to the president's Muslim travel ban after hearing from most of Silicon Valley about said ban's deleterious effects. Senate Majority Leader Mitch McConnell silenced Elizabeth Warren on the Senate floor for trying to read a letter critical of would-be Attorney General Jeff Sessions. And, despite all the work to be done forming a new government and horrific conflict-of-interest implications, the President found time to take Nordstrom to task on Twitter (via his personal and official POTUS accounts) for dropping his daughter's clothing line. As Ivanka's prospects as a fashion mogul were trending down, Cherlynn Low was reading the tea leaves of Snap Inc.'s IPO filing to see if the future is bright for the company's first foray into hardware, Spectacles.
If you leave your iPad untouched for a few days, you probably do not need to worry about much more than a flat battery and a backlog of emails. But with artificial intelligence, computers could soon have their own set of 'rights' that could let them sue you for neglect, according to a leading scientist. Professor Marcus du Sautoy, a mathematician at Oxford University, has suggested that as AI leads to our devices developing their own consciousness, they may need their own laws to protect them. Advances in artificial intelligence could lead to computers and smartphones developing consciousness, and they may need to be given 'human' rights. He claims that if technology is conscious, it could also then be deemed to be alive, and so could win the right to be governed by laws on human rights.