Why cancer-spotting AI needs to be handled with care


These days, it might seem like algorithms are out-diagnosing doctors at every turn, identifying dangerous lesions and dodgy moles with the unerring consistency only a machine can muster. Just this month, Google generated a wave of headlines with a study showing that its AI systems can spot breast cancer in mammograms more accurately than doctors. But for many in health care, what studies like these demonstrate is not just the promise of AI, but also its potential threat. They say that for all of the obvious abilities of algorithms to crunch data, the subtle, judgment-based skills of nurses and doctors are not so easily digitized. And in some areas where tech companies are pushing medical AI, this technology could exacerbate existing problems.

Artificial Intelligence Needs Private Markets for Regulation: Here's Why


A regulatory market approach would enable the dynamism needed for AI to flourish in a way consistent with safety and public trust. It seems the White House wants to ramp up America's artificial intelligence (AI) dominance. Earlier this month, the U.S. Office of Management and Budget released its "Guidance for Regulation of Artificial Intelligence Applications" for federal agencies to oversee AI's development in a way that protects innovation without making the public wary. The noble aims of these principles respond to the need for a coherent American vision for AI development, complete with transparency, public participation and interagency coordination. But the government is missing something key.

Secure and Robust Machine Learning for Healthcare: A Survey


Medical ML/DL systems should facilitate a deep understanding of the underlying healthcare task, which in most cases can only be achieved by drawing on other forms of patient data. For example, radiology is not all about clinical imaging: radiologists also rely on other patient EMR data to reach precise conclusions for an imaging study. This calls for integration and data exchange between all healthcare systems. Despite extensive research on data-exchange standards for healthcare, these standards are widely ignored in healthcare IT systems, which broadly degrades the quality and efficacy of the healthcare data accumulated through them.

The battle for ethical AI at the world's biggest machine-learning conference


Diversity and inclusion took centre stage at one of the world's major artificial-intelligence (AI) conferences in 2018. But at last month's Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, a meeting that once had a controversial reputation, attention shifted to another big issue in the field: ethics. The focus comes as AI research increasingly deals with ethical controversies surrounding the application of its technologies, such as in predictive policing or facial recognition. Issues include tackling biases in algorithms that reflect existing patterns of discrimination in data, and avoiding harm to already vulnerable populations. "There is no such thing as a neutral tech platform," warned Celeste Kidd, a developmental psychologist at the University of California, Berkeley, during her NeurIPS keynote talk about how algorithms can influence human beliefs.

Amazon, Google personal assistants can handle more chores. Just ask them

USATODAY - Tech Top Stories

Already, about one in four U.S. consumers has a home personal assistant at their beck and call, thanks to the success of smart speakers like the Amazon Echo and Google Nest. But many users are just scratching the surface of what these gadgets can do. If you aren't familiar with the speakers (both starting at $35), you wake your artificial intelligence-driven helper with a keyword – "Alexa" for Amazon devices and "OK, Google" for a Google Nest or Google Home speaker – followed by a question or command. A human-like voice responds, whether you want to hear the weather or a specific song, set a timer for the oven, or control the smart devices in your home, such as lighting or a thermostat. One-fourth of U.S. consumers will use a smart speaker in 2020, up from 17% in 2018, according to research firm eMarketer.

London Police to Deploy Facial Recognition Cameras Despite Privacy Concerns and Evidence of High Failure Rate

TIME - Tech

Police in London are moving ahead with deploying a facial recognition camera system despite privacy concerns and evidence that the technology is riddled with false positives. The Metropolitan Police, the U.K.'s biggest police department with jurisdiction over most of London, announced Friday it would begin rolling out new "live facial recognition" cameras in London, making the capital one of the largest cities in the West to adopt the controversial technology. The "Met," as the police department is known in London, said in a statement that the facial recognition technology, which is meant to identify people on a watch list and alert police to their real-time location, would be "intelligence-led" and deployed only to specific locations. It's expected to be rolled out as soon as next month. However, privacy activists immediately raised concerns, noting that independent reviews of trials of the technology showed a failure rate of 81%.

What is facial recognition - and how do police use it?

The Guardian

This is a catch-all term for any technology that involves cataloguing and recognising human faces, typically by recording the unique ratios between an individual's facial features, such as eyes, nose and mouth. The technology can be applied to everything from emotion tracking to animation, but the most controversial applications use facial features as biometric identifiers, that is, to identify individuals based on just a photo or video of their face. After a trial of the technology, the Metropolitan police have said they will start to use it in London within a month. On Friday, the force said it would be used to find suspects on "watchlists" for serious and violent crime, as well as to help find children and vulnerable people. Scotland Yard said the public would be aware of the surveillance, with the cameras being placed in open locations and officers handing out explanatory leaflets.

Self-driving big-rig trucks coming soon? Waymo set to begin mapping interstates in Texas, New Mexico

USATODAY - Tech Top Stories

The Lone Star State may become a little lonelier, at least when it comes to big-rig trucking. Waymo, the self-driving vehicle division of Google parent Alphabet, is about to start mapping in Texas and New Mexico as a prelude to testing its self-driving big-rig trucks. The mapping minivans, to be followed by the large trucks, will run primarily along Interstates 10, 20 and 45 and through metropolitan areas like El Paso, Dallas and Houston, the company said. Waymo previously mapped and tested its big rigs in Arizona, California and Georgia. The latest move will add to that footprint as the company moves toward its vision of big rigs rolling down interstates with no one at the wheel, their sensors and computers making them safer than if a human were in control.

It's too late to ban face recognition – here's what we need instead

New Scientist

Calls for an outright ban on face recognition technology are growing louder, but it is already too late. Given its widespread use by tech companies and the police, permanently rolling back the technology is impossible. It was widely reported this week that the European Commission is considering a temporary ban on the use of face recognition in public spaces. The proposed hiatus of up to five years, according to a white paper obtained by news site Politico, would aim to give politicians in Europe time to develop measures to mitigate the potential risks associated with the technology. Several US cities, including San Francisco, have enacted similar bans or are considering them.

Philips CTO outlines ethical guidelines for AI in healthcare


The use of artificial intelligence and machine learning algorithms in healthcare is poised to expand significantly over the next few years, but beyond the investment strategies and technological foundations lie serious questions around the ethical and responsible use of AI. In an effort to clarify its own position and add to the debate, the executive vice president and chief technology officer of Royal Philips, Henk van Houten, has published a list of five guiding principles for the design and responsible use of AI in healthcare and personal health applications. The five principles – well-being, oversight, robustness, fairness, and transparency – all stem from the basic viewpoint that AI-enabled solutions should complement and benefit customers, patients, and society as a whole. First and foremost, well-being should be front of mind when developing healthcare AI solutions, van Houten argues: they should help alleviate overstretched healthcare systems but, more importantly, supply proactive care, informing and supporting healthy living over the course of a person's entire life. When it comes to oversight, van Houten called for proper validation and interpretation of AI-generated insights through the participation and collaboration of AI engineers, data scientists, and clinical experts.