IBM Watson steps into real-world cybersecurity

#artificialintelligence



Korean IBM Watson to launch in 2017 (ZDNet)

#artificialintelligence

IBM will launch a Korean version of its AI platform Watson next year in cooperation with local IT service vendor SK C&C, the companies have announced. SK said Monday that it signed a cooperation agreement with Big Blue on May 4 and that the two companies will build an integrated system to market Watson in South Korea. They will develop Korean data analysis solutions for Watson, based on machine learning and natural-language semantic analysis technology, within this year and commercialise them in the first half of 2017, SK said. IBM and SK will also build a "Watson Cloud Platform" at the Korean company's datacentre in Pangyo -- the local version of Silicon Valley -- that IT developers and managers can access to build their own applications. For example, an open-market business could apply the Watson solution to its product search features to create a personalized content recommendation service.


IBM is funding a new Watson AI lab at MIT with $240 million

#artificialintelligence

IBM said on Thursday it will spend $240 million over the next decade to fund a new artificial intelligence research lab at the Massachusetts Institute of Technology. The resulting MIT–IBM Watson AI Lab will focus on a handful of key AI areas, including the development of new "deep learning" algorithms. Deep learning is a subset of AI that aims to bring human-like learning capabilities to computers so they can operate more autonomously. The Cambridge, Mass.-based lab will be led by Dario Gil, vice president of AI for IBM Research, and Anantha Chandrakasan, dean of MIT's engineering school. It will draw upon about 100 researchers from IBM itself and the university.


Downsampling leads to Image Memorization in Convolutional Autoencoders

arXiv.org Machine Learning

Memorization of data in deep neural networks has become a subject of significant research interest. In this paper, we link memorization of images in deep convolutional autoencoders to downsampling through strided convolution. To analyze this mechanism in a simpler setting, we train linear convolutional autoencoders and show that, when downsampling is used, linear combinations of training data are stored as eigenvectors of the linear operator corresponding to the network. Networks without downsampling, on the other hand, do not memorize training data. We provide further evidence that the same effect occurs in nonlinear networks. Moreover, downsampling in nonlinear networks causes the model to memorize not only linear combinations of images but also individual training images. Since convolutional autoencoder components are building blocks of deep convolutional networks, we envision that our findings will shed light on the important phenomenon of memorization in over-parameterized deep networks.
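
The contrast the abstract draws can be made concrete with a small sketch: the two toy autoencoders below differ only in whether the encoder downsamples through strided convolutions. The layer widths, kernel sizes, and the MNIST-sized probe input are illustrative assumptions, not the architectures studied in the paper; with the ReLU activations these are nonlinear variants, and dropping the activations gives the linear case the authors analyze.

```python
# Minimal PyTorch sketch contrasting a convolutional autoencoder that
# downsamples via strided convolutions with one that keeps the spatial
# resolution fixed. Layer sizes are illustrative, not the paper's models.
import torch
import torch.nn as nn

class StridedAutoencoder(nn.Module):
    """Encoder downsamples with stride-2 convolutions; decoder upsamples back."""
    def __init__(self, channels=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=4, stride=2, padding=1),  # H -> H/2
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),        # H/2 -> H/4
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class NonStridedAutoencoder(nn.Module):
    """Same depth, but stride 1 everywhere, so no downsampling occurs."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, channels, kernel_size=3, stride=1, padding=1),
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    # Memorization probe: after training either model to reconstruct a small
    # set of images, feed an unrelated input and check whether the output
    # collapses toward (a combination of) the training images.
    model = StridedAutoencoder()
    probe = torch.randn(1, 1, 28, 28)   # arbitrary out-of-sample input
    with torch.no_grad():
        out = model(probe)
    print(out.shape)  # torch.Size([1, 1, 28, 28])
```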


Detecting Learning vs Memorization in Deep Neural Networks using Shared Structure Validation Sets

arXiv.org Machine Learning

The roles played by learning and memorization represent an important topic in deep learning research. Recent work on this subject has shown that the optimization behavior of DNNs trained on shuffled labels is qualitatively different from that of DNNs trained with real labels. Here, we propose a novel permutation approach that can differentiate memorization from learning in deep neural networks (DNNs) trained as usual, i.e., using the real labels to guide the learning rather than shuffled labels. The evaluation of whether the DNN has learned and/or memorized happens in a separate step, in which we compare the predictive performance of a shallow classifier trained on the features learned by the DNN against multiple instances of the same classifier trained on the same input but using shuffled labels as outputs. By evaluating these shallow classifiers on validation sets that share structure with the training set, we are able to tell learning apart from memorization. Application of our permutation approach to multi-layer perceptrons and convolutional neural networks trained on image data corroborated many findings from other groups. Most importantly, our illustrations also uncovered interesting dynamic patterns in how DNNs memorize over increasing numbers of training epochs, and support the surprising result that DNNs are still able to learn, rather than only memorize, when trained with pure Gaussian noise as input.
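
A minimal sketch of the comparison step described above, assuming the DNN's learned features have already been extracted into NumPy arrays and using logistic regression as the shallow classifier; both choices, like the function name, are illustrative assumptions rather than the paper's exact setup.

```python
# Compare a shallow classifier trained on real labels against the same
# classifier trained on shuffled labels, evaluated on a validation set that
# shares structure with the training set. Features are assumed to come from
# the DNN's penultimate layer and are passed in as arrays.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def permutation_check(train_feats, train_labels, val_feats, val_labels,
                      n_permutations=20, seed=0):
    rng = np.random.default_rng(seed)

    # Reference classifier: trained on the real labels.
    clf = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
    real_acc = accuracy_score(val_labels, clf.predict(val_feats))

    # Null distribution: the same classifier trained on shuffled labels.
    null_accs = []
    for _ in range(n_permutations):
        shuffled = rng.permutation(train_labels)
        clf_null = LogisticRegression(max_iter=1000).fit(train_feats, shuffled)
        null_accs.append(accuracy_score(val_labels, clf_null.predict(val_feats)))

    # If real_acc clearly exceeds the shuffled-label accuracies, the features
    # reflect learning; if not, the representation is largely memorized.
    return real_acc, float(np.mean(null_accs)), float(np.std(null_accs))
```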