IBM Watson: Not So Elementary

#artificialintelligence

It's now a hired gun for thousands of companies in at least 20 industries. David Kenny took the helm of IBM's Watson Group in February, after Big Blue acquired The Weather Company, where Kenny had served as CEO. In the months since then, the Watson business has grown dramatically, with well over 100,000 developers worldwide now working with more than three dozen Watson application program interfaces (APIs). Fortune Deputy Editor Clifton Leaf caught up with Kenny in mid-October, when IBM Watson's general manager was in San Francisco, getting ready to open Watson West--the AI system's newest business outpost--and to launch the company's second World of Watson conference, a gathering of its burgeoning ecosystem of partners and users, in Las Vegas on Oct. 24. KENNY: Deep learning is a subset of machine learning, which essentially is a set of algorithms. Deep learning uses more advanced techniques such as convolutional neural networks, which basically means you can look at things more deeply, across more layers. Machine learning could work, for example, when it came to reading text.
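
Kenny's "more layers" point is easy to make concrete. Below is a minimal sketch (not Watson's internals) of a small convolutional network in which each layer re-represents the input at a deeper level of abstraction; the framework choice (PyTorch) and all layer sizes are illustrative assumptions.

```python
# Minimal sketch of the "more layers" idea: a tiny convolutional network.
# Not Watson's internals; all sizes here are illustrative assumptions.
import torch
import torch.nn as nn

tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: edges, colors
    nn.ReLU(),
    nn.MaxPool2d(2),                              # downsample 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: textures, parts
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # classify into 10 labels
)

x = torch.randn(1, 3, 32, 32)   # one fake 32x32 RGB image
print(tiny_cnn(x).shape)        # torch.Size([1, 10]): one score per label
```

Each convolution-plus-pooling stage sees a wider patch of the original image than the one before it, which is what "looking more deeply into more layers" amounts to in practice.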


'Explainable Artificial Intelligence': Cracking open the black box of AI

#artificialintelligence

At a demonstration of Amazon Web Services' new artificial intelligence image recognition tool last week, the deep-learning analysis calculated with near certainty that a photo of speaker Glenn Gore depicted a potted plant. "It is very clever, it can do some amazing things but it needs a lot of hand holding still. AI is almost like a toddler. They can do some pretty cool things, sometimes they can cause a fair bit of trouble," said AWS' chief architect in his day-two keynote at the company's summit in Sydney. Where the toddler analogy falls short, however, is that a parent can make a reasonable guess as to, say, what led to their child drawing all over the walls, and ask them why.
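
The tool being demonstrated is presumably Amazon Rekognition, AWS's image-recognition API. A hedged sketch of the kind of call behind such a demo: label detection returns each label with a confidence score, which is exactly where a high-confidence misfire like "potted plant" would surface. The file name and region below are illustrative assumptions.

```python
# Sketch of a Rekognition label-detection call. Assumes AWS credentials are
# configured and that 'speaker.jpg' exists; error handling omitted.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("speaker.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.detect_labels(
    Image={"Bytes": image_bytes},
    MaxLabels=5,
    MinConfidence=70.0,  # only report labels the model is fairly sure about
)

for label in response["Labels"]:
    # e.g. "Potted Plant: 98.7%" -- high confidence does not mean correctness
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```

Note that the API reports how confident the model is, not why it decided as it did; that gap is the "black box" the article's headline refers to.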


Dr. Robot Will See You Now: AI, Blockchain Technology & the Future of Healthcare

#artificialintelligence

Blockchain technology and artificial intelligence, two cutting-edge technologies, have the potential to change the face of healthcare as we know it, improving the quality of care and reducing costs through greater efficiency. Most of us are at least somewhat familiar with artificial intelligence, primarily through virtual assistants such as Siri and Alexa. Artificial intelligence automates repetitive learning and discovery through data after initially being set up by a human being. As many people also know, you have to be fairly specific when asking Siri or Alexa a question -- it must be posed in the right way -- to get the answer you are looking for. As an example, our interactions with Alexa, Siri, Google Search, and Google Photos are based on deep learning.


Kernel Approximation Methods for Speech Recognition

arXiv.org Machine Learning

We study large-scale kernel methods for acoustic modeling in speech recognition and compare their performance to deep neural networks (DNNs). We perform experiments on four speech recognition datasets, including the TIMIT and Broadcast News benchmark tasks, and compare these two types of models on frame-level performance metrics (accuracy, cross-entropy), as well as on recognition metrics (word/character error rate). In order to scale kernel methods to these large datasets, we use the random Fourier feature method of Rahimi and Recht (2007). We propose two novel techniques for improving the performance of kernel acoustic models. First, in order to reduce the number of random features required by kernel models, we propose a simple but effective method for feature selection. The method is able to explore a large number of non-linear features while maintaining a compact model more efficiently than existing approaches. Second, we present a number of frame-level metrics which correlate very strongly with recognition performance when computed on the held-out set; we take advantage of these correlations by monitoring these metrics during training in order to decide when to stop learning. This technique can noticeably improve the recognition performance of both DNN and kernel models, while narrowing the gap between them. Additionally, we show that the linear bottleneck method of Sainath et al. (2013) improves the performance of our kernel models significantly, in addition to speeding up training and making the models more compact. Together, these three methods dramatically improve the performance of kernel acoustic models, making their performance comparable to DNNs on the tasks we explored.
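
The random Fourier feature method the abstract builds on is compact enough to sketch. Below is a minimal NumPy illustration, not the paper's acoustic-modeling pipeline: random features z(x) whose inner products approximate a Gaussian (RBF) kernel, with the input dimension, feature count D, and bandwidth sigma chosen purely for illustration.

```python
# Minimal sketch of random Fourier features (Rahimi & Recht, 2007) for the
# Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)). Not the paper's
# acoustic-modeling pipeline; d, D, and sigma are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
d, D = 40, 4096                 # input dimension, number of random features
sigma = np.sqrt(d)              # bandwidth of this scale keeps k(x, y) away from 0

W = rng.normal(0.0, 1.0 / sigma, size=(D, d))   # frequencies ~ N(0, sigma^-2 I)
b = rng.uniform(0.0, 2 * np.pi, size=D)         # random phases

def z(x):
    """Map x to D random features; z(x) . z(y) approximates k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
approx = z(x) @ z(y)
print(f"exact kernel {exact:.4f} vs. random-feature approximation {approx:.4f}")
```

Once inputs are mapped through z, an ordinary linear model trained on the features behaves like a kernel machine, which is what makes the approach scale to datasets of this size; the paper's first contribution is a way to get by with fewer such features.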