Surveillance isn't the only application of China's advanced facial recognition software. Conservationists are now using the technology too, as a tool to help protect wild panda populations. According to a report from Xinhua News, researchers at the China Conservation and Research Center for Giant Pandas in Chengdu have begun using facial recognition software to distinguish the often similar-looking faces and markings of wild pandas.
An office worker who believes his image was captured by facial recognition cameras when he popped out for a sandwich during his lunch break has launched a groundbreaking legal battle against the use of the technology. Supported by the campaign group Liberty, Ed Bridges, from Cardiff, raised money through crowdfunding to pursue the action, claiming the suspected use of the technology on him by South Wales police was an unlawful violation of privacy. Bridges, 36, claims he was distressed by the apparent use of the technology and is also arguing, during a three-day hearing at Cardiff civil justice and family centre, that it breaches data protection and equality laws. Facial recognition technology maps faces in a crowd and then compares them to a watchlist of images, which can include suspects, missing people and persons of interest to the police. The cameras scan faces in large crowds in public places such as streets, shopping centres, football matches and music events such as the Notting Hill carnival.
It's no secret that the health of our planet is declining. Deforestation, melting sea ice, rapidly disappearing species and more have weakened Earth's ecosystems, and climate change is arguably the most pressing issue of our time. We need to think outside the box – and move swiftly – to secure a sustainable future. Once considered the stuff of science fiction, artificial intelligence (AI) is not only playing a growing role in our everyday lives, but it could be a critical tool in helping save the planet. In fact, reversing what could soon be permanent damage is the impetus for Microsoft's AI for Earth program, which awards grants to researchers and innovators dedicated to solving environmental challenges.
In this Oct. 31, 2018, file photo, a man, who declined to be identified, has his face painted to represent efforts to defeat facial recognition during a protest at Amazon headquarters in Seattle over the company's facial recognition system, "Rekognition." San Francisco is on track to become the first U.S. city to ban the use of facial recognition by police and other city agencies. These days, with facial recognition technology, you've got a face that can launch a thousand applications, so to speak. Sure, you may love the ease of opening your phone just by facing it instead of tapping in a code. But how do you feel about having your face scanned to identify you as you drive across a bridge, board an airplane, or enter a Taylor Swift concert to confirm you're not a stalker?
The first legal battle in the UK over police use of face recognition technology will begin today. Ed Bridges has crowdfunded action against South Wales Police over claims that the use of the technology on him was an unlawful violation of privacy. He will also argue, during a three-day hearing at Cardiff Civil Justice and Family Centre, that it breaches data protection and equality laws. Face recognition technology maps faces in a crowd and then compares the results with a "watch list" of images, which can include suspects, missing people and persons of interest. Police forces that have trialled the technology hope it can help tackle crime, but campaigners argue it breaches privacy and civil liberties.
As San Francisco moves to regulate the use of facial recognition systems, we reflect on some of the many 'faces' of the fast-growing technology. Last week, San Francisco became the first city in the United States to ban the use of facial recognition technology, at least by law enforcement, local agencies, and the city's transport authority. My immediate reaction to the headlines was that this was great for individuals' privacy, a truly bold decision by the San Francisco board of supervisors. The ordinance actually covers more than just facial recognition, as it states the following: "'Surveillance Technology' means any software, electronic device, system utilizing an electronic device, or similar device used, designed, or primarily intended to collect, retain, process, or share audio, electronic, visual, location, thermal, biometric, olfactory or similar information specifically associated with, or capable of being associated with, any individual or group." The ban excludes San Francisco's airport and seaport, as these are operated by federal agencies. Nor does it prevent individuals, companies or other organizations from installing surveillance systems that include facial recognition, or stop the banned agencies from cooperating with those who are still permitted to use the technology.
ABBYY, a global leader in Content IQ technologies and solutions, today announced it has signed an agreement to acquire Philadelphia, Pennsylvania-based TimelinePI. TimelinePI provides a comprehensive process intelligence platform designed to empower users to understand, monitor and optimize any business process. The global process analytics market is expected to grow to USD 1,421.7 million by 2023, according to Research and Markets. The acquisition of TimelinePI is a strategic investment by ABBYY in the emerging process intelligence market, which is critical to truly understanding the impact and effectiveness of business processes and the opportunities for productivity gains from digital transformation investments. TimelinePI's vision of combining the most versatile process mining and operational monitoring with cutting-edge, process-centric AI and machine learning will serve as a critical cornerstone of ABBYY's Digital IQ strategy.
On Tuesday, in an 8-1 vote, the San Francisco Board of Supervisors moved to ban the use of facial recognition software by city departments, including the police. Supporters of the ban cited racial disparities found in audits of facial recognition software from companies like Amazon and Microsoft, as well as the dystopian surveillance already under way in China. At the core of the debate over regulating facial recognition software is the question of whether a temporary moratorium should be put in place until police and governments adopt policies and standards, or whether the technology should be banned permanently. Some believe facial recognition software can be used to exonerate the innocent and that more time is needed to gather information. Others, like San Francisco Supervisor Aaron Peskin, believe that even if AI systems achieve racial parity, facial recognition is a "uniquely dangerous and oppressive technology."
Ocado and Google DeepMind executives are among a cohort of experts who have been called to advise the Government on how to boost the use of artificial intelligence in Britain. Paul Clarke, chief technology officer of the e-commerce company, and DeepMind co-founder Mustafa Suleyman will join the new lineup of the Government's AI council, an advisory group set up as part of a push to boost investment in the technology. Mastercard vice-chairman Ann Cairns, Amazon machine learning director Neil Lawrence and Microsoft research lab director Chris Bishop are also among those who gained seats on the new council. The executives are expected to promote the use of AI by businesses in the UK and advise the Government on future public investments in the industry. As part of this effort, the Government last year set aside £3m for AI projects aimed at boosting productivity in financial and legal services.