Apple's Siri has fallen behind its virtual assistant competition. Google's Assistant expertly surfaces information, while Amazon's Alexa works with a staggering number of third-party apps for a broad variety of capabilities. Apple isn't admitting defeat, though: Siri played a prominent role at WWDC, the company's Worldwide Developers Conference, on Monday. Specifically, Apple is finally tackling the issue of customization with its personal digital assistant: making Siri do more, proactively, to make your day easier. Apple is primarily accomplishing this with a new tool called Siri Shortcuts, a sort of IFTTT built straight into iOS for personalizing and automating Siri commands and functions.
As much as fully autonomous vehicles are in the news, none of us will be commuting to work in a self-driving car for at least two decades. Meanwhile, Toyota says it will deploy vehicle-to-vehicle (V2V) communication technology in all its cars within a few years, claiming it will save thousands of lives each year as cars talk to each other on the highway.
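The safety case for V2V rests on cars broadcasting short status messages that nearby vehicles can act on before a driver could react. The sketch below is a hypothetical one-dimensional illustration of that idea, loosely modeled on the kind of position-and-speed "basic safety message" V2V systems exchange; it is not Toyota's implementation, and all names and numbers are invented.

```python
from dataclasses import dataclass

# Toy 1-D model of a V2V safety exchange: each car broadcasts its position
# and speed; a receiving car computes time-to-collision with the car ahead
# and decides whether to warn the driver.

@dataclass
class SafetyMessage:
    car_id: str
    position_m: float   # distance along the road, in metres
    speed_mps: float    # speed, in metres per second

def time_to_collision(follower: SafetyMessage, leader: SafetyMessage):
    """Seconds until the follower reaches the leader, or None if it never will."""
    gap = leader.position_m - follower.position_m
    closing = follower.speed_mps - leader.speed_mps
    if gap <= 0 or closing <= 0:
        return None
    return gap / closing

def should_warn(follower, leader, threshold_s=3.0):
    """Warn when a collision is projected within the threshold."""
    ttc = time_to_collision(follower, leader)
    return ttc is not None and ttc < threshold_s

lead = SafetyMessage("A", position_m=100.0, speed_mps=10.0)  # braking car ahead
tail = SafetyMessage("B", position_m=40.0, speed_mps=35.0)   # fast car behind
print(time_to_collision(tail, lead))  # gap 60 m, closing 25 m/s -> 2.4 s
print(should_warn(tail, lead))        # True: under the 3 s threshold
```

A real system would add heading, acceleration, and message authentication, but the core benefit is the same: the warning can arrive before the brake lights are even visible.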
With today's unveiling of iOS 12 at WWDC, Apple hinted at an upgraded Siri worthy of 2018. The news: On opening day of its annual developer conference, Apple announced plans to make its AI-powered digital assistant Siri more robust in iOS 12 with a new feature called "shortcuts." Users will be able to create voice commands that let Siri help more effectively with day-to-day tasks, from ordering coffee to finding lost keys. Catching up on AI: The launch of Siri in 2011 was a breakout moment for voice-activated smart assistants, but competitors like Google's Assistant and Amazon's Alexa have since caught up with, if not surpassed, Apple's technology. The news in April that the firm had poached Google's AI chief confirmed that it was doubling down to get back on pace.
Microsoft is building a tool to automatically identify bias in a range of different AI algorithms. It is the boldest effort yet to automate the detection of unfairness that may creep into machine learning--and it could help businesses make use of AI without inadvertently discriminating against certain people. Big tech companies are racing to sell off-the-shelf machine-learning technology that can be accessed via the cloud. As more customers make use of these algorithms to automate important judgements and decisions, the issue of bias will become crucial. And since bias can easily creep into machine-learning models, ways to automate the detection of unfairness could become a valuable part of the AI toolkit.
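Microsoft has not published the internals of its tool, but a minimal sketch of one widely used automated check, demographic parity measured via the "disparate impact" ratio, illustrates the kind of test such a system can run over a model's predictions. All names and numbers below are hypothetical.

```python
# Sketch of an automated fairness check: compare the rate of positive
# outcomes a model gives to a protected group versus a reference group.
# This illustrates the general idea, not Microsoft's actual tool.

def positive_rate(predictions, groups, group):
    """Fraction of `group` members who received a positive prediction."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(predictions, groups, protected, reference):
    """Ratio of positive rates; the common 'four-fifths rule' flags
    ratios below 0.8 as potential adverse impact."""
    return (positive_rate(predictions, groups, protected)
            / positive_rate(predictions, groups, reference))

# Toy loan-model output: 1 = approved, 0 = denied, across two groups.
preds  = [1, 0, 0, 1, 0,  1, 1, 1, 1, 0]
groups = ["a"] * 5 + ["b"] * 5
ratio = disparate_impact(preds, groups, protected="a", reference="b")
print(f"disparate impact: {ratio:.2f}")  # 0.4 / 0.8 = 0.50 -> flagged
```

Checks like this are cheap to automate precisely because they need only the model's inputs, outputs, and group labels, not its internals, which is what makes them plausible to run against off-the-shelf cloud models.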
A new technique developed by MIT physicists could someday provide a way to custom-design multilayered nanoparticles with desired properties, potentially for use in displays, cloaking systems, or biomedical devices. It may also help physicists tackle a variety of thorny research problems, in ways that could in some cases be orders of magnitude faster than existing methods. The innovation uses computational neural networks, a form of artificial intelligence, to "learn" how a nanoparticle's structure affects its behavior, in this case the way it scatters different colors of light, based on thousands of training examples. Then, having learned the relationship, the program can essentially be run backward to design a particle with a desired set of light-scattering properties -- a process called inverse design. The findings are being reported in the journal Science Advances, in a paper by MIT senior John Peurifoy, research affiliate Yichen Shen, graduate student Li Jing, professor of physics Marin Soljačić, and five others.
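The inverse-design loop described above can be sketched in miniature: train (or here, simply posit) a network that maps a particle's structure to its scattering behavior, then hold the weights fixed and gradient-descend on the *input* until the predicted spectrum matches a target. The toy model below is an illustration of that principle, not the paper's architecture; the random weights stand in for a trained model, and the dimensions are invented.

```python
import numpy as np

# Toy neural-network inverse design: 3 "layer thicknesses" in, a 4-value
# "scattering spectrum" out. The weights below stand in for a model
# trained on thousands of simulated examples.

rng = np.random.default_rng(0)
W1, b1 = 0.3 * rng.normal(size=(3, 16)), np.zeros(16)
W2, b2 = 0.3 * rng.normal(size=(16, 4)), np.zeros(4)

def forward(x):
    """Surrogate model: layer thicknesses -> predicted scattering spectrum."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def inverse_design(target, steps=2000, lr=0.1):
    """Run the network 'backward': gradient-descend on the input so the
    predicted spectrum matches `target`, keeping the best design seen."""
    x = np.zeros(3)
    best_x, best_err = x.copy(), np.inf
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)
        f = h @ W2 + b2
        err = np.sum((f - target) ** 2)
        if err < best_err:
            best_x, best_err = x.copy(), err
        # backpropagate the spectrum error through the network to the input
        g_h = (2 * (f - target) @ W2.T) * (1 - h ** 2)
        x -= lr * (g_h @ W1.T)
    return best_x

target = forward(np.array([0.3, -0.5, 0.8]))  # spectrum of a known particle
design = inverse_design(target)
print("recovered thicknesses:", design)
print("spectrum error:", np.sum((forward(design) - target) ** 2))
```

The speedup the researchers report comes from this substitution: once trained, each forward pass through the network is far cheaper than a full electromagnetic simulation, so the optimization loop runs orders of magnitude faster.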
University of Toronto researchers have designed an algorithm to disrupt facial recognition technology. The past few months have witnessed a mainstream groundswell around security and data privacy, embodied most notably in news of Cambridge Analytica's data-collection tactics and Facebook CEO Mark Zuckerberg's testimony before the U.S. Senate. One major form of data comes from facial recognition technology, which uses algorithms to identify us based on facial feature points. Every time you upload a photo to Facebook, Instagram, or another platform, you give these learning systems another data point about your face -- and about anybody else in the picture with you -- as well as metadata such as phone type and location. To address this problem, researchers at the University of Toronto, led by Professor Parham Aarabi and graduate student Avishek Bose, have developed an algorithm to dynamically disrupt this technology.
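The disruption relies on adversarial perturbations: pixel changes too small for a human to notice but aimed squarely at the detector's decision. The Toronto work trains a second network to generate these perturbations; the sketch below instead shows the simplest version of the idea, a single fast-gradient-sign (FGSM) step against a toy logistic "detector". Everything here is an invented illustration, not the team's method or a real face-recognition model.

```python
import numpy as np

# Toy adversarial "privacy filter": nudge every pixel of an 8x8 "image"
# by a small epsilon in the direction that lowers a detector's score.

rng = np.random.default_rng(1)
w = rng.normal(size=64)  # toy detector weights, one per pixel

def detects_face(image):
    """Toy detector: sigmoid of a linear score, thresholded at 0.5."""
    return 1 / (1 + np.exp(-(image @ w))) > 0.5

def evade(image, epsilon=0.5):
    """One FGSM step: for a logistic model the gradient of the score with
    respect to the input is just `w`, so move each pixel epsilon against it."""
    return image - epsilon * np.sign(w)

image = 0.3 * np.sign(w)  # a synthetic "face" the detector fires on
cloaked = evade(image)
print("detected before:", detects_face(image))    # True
print("detected after: ", detects_face(cloaked))  # False
print("max pixel change:", np.max(np.abs(cloaked - image)))
```

Against a deep network the gradient must be computed by backpropagation rather than read off the weights, but the principle is identical: bounded, targeted noise that flips the detector while leaving the photo visually unchanged.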
Spanish police are introducing an artificial-intelligence system to detect liars. If you live in southern Spain, last June was not a good time to lose your smartphone and, as a way of getting an insurance payout, falsely claim that you had been mugged. Ten police forces in Murcia and Malaga had some extra help in spotting your deceit: a computer tool that analysed statements given to officers about robberies and identified the telltale signs of a lie. According to results published in the journal Knowledge-Based Systems, the algorithm was so good at pointing officers towards false claimants that detections of such offences in a single week reached an impressive 31 and 49 in the respective regions, up from averages of 3 and 12 closed cases over an entire month. The government in Madrid is now rolling the system out across the country, and its developers are trying to apply its machine-learning methods to help detect other types of crime. In this case, the algorithm flagged up suspicious wording (based on a training set of statements known to be true and false), and left it up to the police to question suspects and get them to confess.
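A text classifier trained on labelled true and false statements is a standard supervised-learning setup, and a toy version makes the mechanism concrete. The sketch below uses a naive-Bayes word-count model; the training sentences are invented for illustration, and the real tool's features and training corpus were far richer.

```python
import math
from collections import Counter

# Toy "suspicious wording" classifier: learn which words are
# over-represented in statements known to be false, then score new text.

def train(statements):
    """statements: list of (text, label) pairs, label 'true' or 'false'."""
    counts = {"true": Counter(), "false": Counter()}
    for text, label in statements:
        counts[label].update(text.lower().split())
    return counts

def more_likely_false(text, counts, smoothing=1.0):
    """Naive Bayes with Laplace smoothing: a positive summed log-ratio
    means the wording looks more like the known-false statements."""
    totals = {c: sum(counts[c].values()) for c in counts}
    vocab = len(set(counts["true"]) | set(counts["false"]))
    score = 0.0
    for word in text.lower().split():
        p_false = (counts["false"][word] + smoothing) / (totals["false"] + smoothing * vocab)
        p_true = (counts["true"][word] + smoothing) / (totals["true"] + smoothing * vocab)
        score += math.log(p_false / p_true)
    return score > 0

reports = [
    ("two men grabbed my bag from behind i did not see them", "false"),
    ("they took my phone from behind and ran i saw nothing", "false"),
    ("a tall man in a red jacket pushed me near the bank and took my wallet", "true"),
    ("a man with a beard threatened me at the bus stop and grabbed my phone", "true"),
]
counts = train(reports)
print(more_likely_false("someone took my phone from behind", counts))                   # True
print(more_likely_false("a man in a red jacket took my wallet near the bank", counts))  # False
```

As in the deployed system, the model only flags wording for human follow-up; the decision to question a claimant stays with the officers.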
Today's store-bought drones are remarkably easy to fly, thanks to features like self-stabilization technology, obstacle avoidance sensors and so on. You could walk out of a shop, charge up your batteries and be airborne for the first time all within a single afternoon. But as the video compilation above shows, it's probably still a good idea to get some practice in before attempting any particularly tricky stunts. Even if drones have all sorts of high-tech features designed to keep them airborne, they aren't impervious to the constant pull of Earth's gravity, the branches of an unseen tree, or even the grasp of a curious animal. Watch the video above to see a selection of drone crashes from the aircraft's perspective.
Researchers have come up with all sorts of ways to propel tiny robots deep into the human body to perform tasks, such as delivering drugs and taking biopsies. Now, there's a nanorobot that can clean up infections in blood. Directed by ultrasound, the tiny robots, made of gold nanowires with a biological coating, dart around blood, attach to bacteria, and neutralize toxins produced by the bacteria. It's like injecting millions of miniature decoys into blood to distract an infection from attacking the real human cells. The invention, developed in the labs of Joseph Wang and Liangfang Zhang at the University of California San Diego (UCSD), was described today in Science Robotics.
Intuitive Surgical (ISRG) started with a very simple plan: to make surgery less invasive with surgical robots. For laypeople in 1995, the concept was science fiction. At the time, researchers at the Stanford Research Institute had been kicking around the idea for years. That's because the U.S. Army had hired them in the late '80s to make remote battlefield surgery feasible.