These days, it might seem like algorithms are out-diagnosing doctors at every turn, identifying dangerous lesions and dodgy moles with the unerring consistency only a machine can muster. Just this month, Google generated a wave of headlines with a study showing that its AI systems can spot breast cancer in mammograms more accurately than doctors. But for many in health care, what studies like these demonstrate is not just the promise of AI, but also its potential threat. They say that for all of the obvious abilities of algorithms to crunch data, the subtle, judgment-based skills of nurses and doctors are not so easily digitized. And in some areas where tech companies are pushing medical AI, this technology could exacerbate existing problems.
Medical ML/DL systems must be built on a deep understanding of the underlying healthcare task, which in most cases can only be achieved by drawing on other forms of patient data. Radiology, for example, is not only about clinical imaging: other data from a patient's electronic medical record (EMR) is crucial for radiologists to reach a precise conclusion about an imaging study. This calls for integration and data exchange across all healthcare systems. Yet despite extensive research on healthcare data-exchange standards, those standards are widely ignored in healthcare IT systems, which broadly degrades the quality and usefulness of the data these systems accumulate.
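One widely adopted data-exchange standard of the kind referred to above is HL7 FHIR, which represents clinical records as JSON resources. The sketch below is a minimal illustration, not a complete implementation: the `Patient` fields follow the FHIR schema, but the values and the `summarize_patient` helper are invented for this example.

```python
import json

# A minimal HL7 FHIR "Patient" resource. The field names follow the FHIR
# schema; the values are invented for illustration.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example-123",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1984-07-02",
  "gender": "female"
}
"""

def summarize_patient(raw: str) -> str:
    """Extract a one-line summary that an imaging system might
    attach to a radiology study for context."""
    patient = json.loads(raw)
    name = patient["name"][0]
    full_name = " ".join(name["given"]) + " " + name["family"]
    return f'{full_name}, born {patient["birthDate"]} ({patient["gender"]})'

print(summarize_patient(patient_json))
```

Because every conforming system emits the same resource shapes, a radiology tool can consume EMR data from any FHIR-compliant source without custom adapters, which is exactly the interoperability the paragraph argues is missing in practice.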
Diversity and inclusion took centre stage at one of the world's major artificial-intelligence (AI) conferences in 2018. But at last month's Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, a meeting once known for its controversial reputation, attention shifted to another big issue in the field: ethics. The focus comes as AI research increasingly deals with ethical controversies surrounding the application of its technologies -- such as in predictive policing or facial recognition. Issues include tackling biases in algorithms that reflect existing patterns of discrimination in data, and avoiding harm to already vulnerable populations. "There is no such thing as a neutral tech platform," warned Celeste Kidd, a developmental psychologist at University of California, Berkeley, during her NeurIPS keynote talk about how algorithms can influence human beliefs.
Already, about one in four U.S. consumers has a home personal assistant at their beck and call, thanks to the success of smart speakers like Amazon Echo and Google Nest. But many users are just scratching the surface of what these gadgets can do. If you aren't familiar with the speakers (both starting at $35), you wake up your artificial intelligence-driven helper with a keyword -- "Alexa" for Amazon devices and "OK, Google" for a Google Nest or Google Home speaker -- followed by a question or command. A human-like voice will respond, whether you want to hear the weather or a specific song, set a timer for the oven, or control smart devices in your home, such as lighting or a thermostat. One-fourth of U.S. consumers (25%) will use a smart speaker in 2020, up from 17% in 2018, according to research firm eMarketer.
Police in London are moving ahead with deploying a facial-recognition camera system despite privacy concerns and evidence that the technology is riddled with false positives. The Metropolitan Police, the U.K.'s biggest police department with jurisdiction over most of London, announced Friday it would begin rolling out new "live facial recognition" cameras in London, making the capital one of the largest cities in the West to adopt the controversial technology. The "Met," as the police department is known in London, said in a statement the facial-recognition technology, which is meant to identify people on a watch list and alert police to their real-time location, would be "intelligence-led" and deployed only to specific locations. It is expected to be rolled out as soon as next month. However, privacy activists immediately raised concerns, noting that independent reviews of trials of the technology showed a failure rate of 81%.
This is a catch-all term for any technology that involves cataloguing and recognising human faces, typically by recording the unique ratios between an individual's facial features, such as eyes, nose and mouth. The technology can be applied to everything from emotion tracking to animation, but the most controversial applications use facial features as biometric identifiers, that is, to identify individuals based on just a photo or video of their face. After a trial of the technology, the Metropolitan police have said they will start to use it in London within a month. On Friday, the force said it would be used to find suspects on "watchlists" for serious and violent crime, as well as to help find children and vulnerable people. Scotland Yard said the public would be aware of the surveillance, with the cameras being placed in open locations and officers handing out explanatory leaflets.
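In practice, modern systems encode those facial measurements as a numeric vector (an "embedding") and match faces by distance between vectors. The toy sketch below illustrates only that matching step, with made-up 4-dimensional vectors and an arbitrary threshold; real systems use neural-network embeddings with hundreds of dimensions, and the watchlist identities here are fictional.

```python
import math

# Hypothetical "face embeddings" for a watchlist. Real systems derive
# much longer vectors from a photo; the matching logic is the same:
# a small distance suggests the same person.
watchlist = {
    "suspect_a": [0.12, 0.80, 0.33, 0.45],
    "suspect_b": [0.90, 0.10, 0.65, 0.20],
}

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe, threshold=0.25):
    """Return the closest watchlist identity, or None if no one is
    within the (arbitrarily chosen) distance threshold."""
    best_id, best_dist = min(
        ((pid, euclidean(probe, emb)) for pid, emb in watchlist.items()),
        key=lambda t: t[1],
    )
    return best_id if best_dist < threshold else None

camera_frame = [0.14, 0.78, 0.30, 0.47]  # deliberately close to suspect_a
print(match(camera_frame))  # -> "suspect_a"
```

The choice of threshold is exactly where the reported 81% failure rate comes into play: set it too loose and innocent passers-by are flagged; set it too tight and genuine matches are missed.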
The Lone Star State may become a little lonelier -- at least when it comes to big-rig trucking. Waymo, the self-driving vehicle division of Google parent Alphabet, is about to start mapping in Texas and New Mexico as a prelude to testing its self-driving big-rig trucks. The mapping minivans, to be followed by the large trucks, will run primarily along Interstates 10, 20 and 45 and through metropolitan areas like El Paso, Dallas and Houston, the company said. Waymo previously mapped and tested its big rigs in Arizona, California and Georgia. The latest move will add to that footprint as the company moves toward its vision of big rigs rolling down interstates with no one at the wheel, their sensors and computers making them safer than if they have a human in control.
Calls for an outright ban on face recognition technology are growing louder, but it is already too late. Given its widespread use by tech companies and the police, permanently rolling back the technology is impossible. It was widely reported this week that the European Commission is considering a temporary ban on the use of face recognition in public spaces. The proposed hiatus of up to five years, according to a white paper obtained by news site Politico, would aim to give politicians in Europe time to develop measures to mitigate the potential risks associated with the technology. Several US cities, including San Francisco, are mulling or have enacted similar bans.
Last month, researchers at OpenAI in San Francisco revealed an algorithm capable of learning, through trial and error, how to manipulate the pieces of a Rubik's Cube using a robotic hand. It was a remarkable research feat, but it required more than 1,000 desktop computers plus a dozen machines running specialized graphics chips crunching intensive calculations for several months. The effort may have consumed about 2.8 gigawatt-hours of electricity, estimates Evan Sparks, CEO of Determined AI, a startup that provides software to help companies manage AI projects. A spokesperson for OpenAI questioned the calculation, noting that it makes several assumptions. But OpenAI declined to disclose further details of the project or offer an estimate of the electricity it consumed.
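A back-of-envelope calculation shows how an estimate like 2.8 GWh might be assembled from the machine counts in the article. Every power and duration figure below is an assumption chosen for illustration; as the article notes, the actual assumptions behind the estimate were not disclosed, and the result is highly sensitive to them.

```python
# Machine counts from the article; power draws and duration are
# illustrative assumptions, not disclosed figures.
DESKTOPS = 1000
GPU_MACHINES = 12
DESKTOP_KW = 0.9        # assumed average draw per desktop, kW
GPU_MACHINE_KW = 5.0    # assumed draw per multi-GPU machine, kW
MONTHS = 4              # "several months"
HOURS = MONTHS * 30 * 24

total_kw = DESKTOPS * DESKTOP_KW + GPU_MACHINES * GPU_MACHINE_KW
total_kwh = total_kw * HOURS
print(f"{total_kwh / 1e6:.2f} GWh")  # -> 2.76 GWh
```

Halving the assumed desktop draw roughly halves the total, which is presumably why OpenAI's spokesperson pushed back on the calculation's assumptions.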
The version of Project Debater used in the live debates included the seeds of the latest system, such as the capability to search hundreds of millions of news articles. But in the months since, the team has extensively tweaked the neural networks it uses, improving the quality of the evidence the system can unearth. One important addition is BERT, a neural network Google built for natural-language processing, which can answer queries. The work will be presented at the Association for the Advancement of Artificial Intelligence conference in New York next month.