Driverless cars will be on Britain's roads by 2021 as a result of sweeping regulatory reforms that will put the UK at the forefront of a post-Brexit technological revolution, chancellor Philip Hammond will say this week. In his budget on Wednesday Hammond will allow driverless cars to be tested without any human operator inside or outside the car, and without the legal constraints and rules that apply in many other EU nations and much of the US. The move – welcomed by the UK motor industry – is part of an attempt by Hammond and the Treasury to project a more upbeat message about the prospects for the UK economy after Brexit, and to focus on opportunities as well as risks. Carmakers have warned that they may have to move at least some production abroad if there is no deal to keep Britain inside the EU single market and customs union, at least for a two-year transition period. But Mike Hawes, chief executive of the Society of Motor Manufacturers and Traders, said it was good news that the government was taking a lead by making the UK attractive to those seeking to develop, test and build an entirely new generation of cars.
MIT's Atlas robot, nicknamed Helios, completes the driving task at the June 2015 DARPA Robotics Challenge Finals. Helios is a second-generation Atlas, developed for DARPA by Boston Dynamics. Thursday, Nov. 16, 2017 was a day filled with news.
Stanford researchers have developed a machine-learning algorithm that can diagnose pneumonia from a chest X-ray better than a human radiologist can. And it learned how to do so in just about a month. The Machine Learning Group, led by Stanford adjunct professor Andrew Ng, was inspired by a data set released by the National Institutes of Health on 26 September. The data set contains 112,120 chest X-ray images labeled with 14 different possible diagnoses, along with some preliminary algorithms. The researchers asked four Stanford radiologists to annotate 420 of the images for possible indications of pneumonia.
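The task described above is multi-label image classification: each X-ray can carry any subset of the 14 diagnoses, so the output layer uses independent sigmoid units rather than a softmax. The sketch below is purely illustrative (the article does not specify the Stanford model's architecture, and random tensors stand in for the NIH images); it shows the general shape of such a classifier in TensorFlow.

```python
# Hypothetical sketch of a multi-label chest X-ray classifier of the kind the
# article describes. Not the Stanford group's actual model; the tiny CNN,
# 64x64 input size, and random stand-in data are assumptions for illustration.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 14  # the NIH data set labels each image with 14 possible diagnoses

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),          # grayscale "X-ray"
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    # Sigmoid (not softmax): diagnoses are not mutually exclusive
    tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stand-in batch of images and multi-hot label vectors
images = np.random.rand(16, 64, 64, 1).astype(np.float32)
labels = (np.random.rand(16, NUM_CLASSES) > 0.9).astype(np.float32)
model.fit(images, labels, epochs=1, verbose=0)

preds = model.predict(images, verbose=0)  # per-image probability for each diagnosis
```

Each row of `preds` is a vector of 14 independent probabilities, one per candidate diagnosis.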
Drones have become a common part of warfare -- but their use remains a subject of public contention. Lisa Parks, a professor in MIT's program in Comparative Media Studies/Writing and director of its Global Media Technologies and Cultures Lab, has spent extensive time analyzing this public debate. Now, she has co-edited a new volume examining the subject, while contributing a piece to it herself. The book, "Life in the Age of Drone Warfare," has just been published by Duke University Press. MIT News talked with Parks this week about the impact and public perception of drones.
If you're using Microsoft's word processor on a Windows computer, you have several voice-recognition options. This section will address three of them, mostly focusing on the Windows Speech Recognition program built into this operating system. The integrated voice-recognition service will work on any Windows application, including Microsoft Word. To launch it, type "windows speech recognition" into the search box on the taskbar, then click the app when it appears. The first time you run this software, you'll need to teach the utility to recognize your voice.
Google has a vision for a world full of cheap and tiny smart devices--and it hopes its software will power them all. A couple of years back, Google launched an open-source machine-learning software library called TensorFlow. It has since exploded in popularity, to the point where it's now used by the likes of Airbnb, eBay, Uber, Snapchat, and Dropbox to power their AI development. Its appeal is obvious: it allows relative beginners to build and train neural networks without needing a PhD in artificial intelligence. As a result, the library now forms a major component of Google's business plan.
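The claim that TensorFlow lets relative beginners build and train neural networks is easiest to see in its high-level Keras API. The minimal sketch below trains a tiny network on the classic XOR toy problem; the data and layer sizes are illustrative choices, not anything from Google's own examples.

```python
# A minimal sketch of building and training a neural network with
# TensorFlow's Keras API. The XOR toy data set and the 8-unit hidden
# layer are arbitrary illustrative choices.
import numpy as np
import tensorflow as tf

# XOR: output is 1 exactly when the two inputs differ
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([0, 1, 1, 0], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=200, verbose=0)

preds = model.predict(X, verbose=0)  # one probability per input pair
```

A few lines of layer declarations plus `compile` and `fit` is the whole training loop, which is the accessibility the article is pointing at.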
Last fall, Google Translate rolled out a new-and-improved artificial intelligence translation engine that it claimed was, at times, "nearly indistinguishable" from human translation. Jost Zetzsche could only roll his eyes. The German native had been working as a professional translator for 20 years, and he'd heard time and time again that his industry would be threatened by advances in automation. Every time, he'd found, the hype was overblown--and Google Translate's makeover was no exception. It certainly wasn't the key to translation, he thought.
The world's most valuable company crammed a lot into the tablespoon-sized volume of an Apple Watch. There's GPS, a heart-rate sensor, cellular connectivity, and computing resources that not long ago would have filled a desk-dwelling beige box. The wonder gadget doesn't have a sphygmomanometer for measuring blood pressure or polysomnographic equipment found in a sleep lab--but thanks to machine learning, it might be able to help with their work. Research presented at the American Heart Association meeting in Anaheim Monday claims that when paired with the right machine-learning algorithms, the Apple Watch's heart-rate sensor and step counter can make a fair prediction of whether a person has high blood pressure, or sleep apnea, in which breathing stops and starts repeatedly through the night. Both are common--and commonly undiagnosed--conditions associated with life-threatening problems, including stroke and heart attack.