

Deep Learning: CNNs for Visual Recognition - Udemy

@machinelearnbot

Welcome to this course: Deep Learning - Learn Convolutional Neural Networks. Convolutional neural networks have gained special status over the last few years as an especially promising form of deep learning. Rooted in image processing, convolutional layers have found their way into virtually all subfields of deep learning, where they have been highly successful. Convolutional neural networks (CNNs) enable very powerful deep learning based techniques for processing, generating, and making sense of visual information.
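As a concrete illustration of the convolutional layers the course covers, here is a minimal CNN sketch in PyTorch; the layer sizes and input shape are arbitrary assumptions for illustration, not the course's actual architecture.

```python
# Minimal CNN for image classification. Illustrative sketch only; the
# layer widths, kernel sizes, and 32x32 input are assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3-channel RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # for 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = SmallCNN()(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
```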


AI processors go mobile

ZDNet

At its iPhone X event last week, Apple devoted a lot of time to the A11 processor's new neural engine that powers facial recognition and other features. The week before, at IFA in Berlin, Huawei announced its latest flagship processor, the Kirin 970, equipped with a Neural Processing Unit capable of processing images 20 times faster than the CPU alone. Qualcomm, for its part, offers math libraries for neural networks, including QSML (Qualcomm Snapdragon Math Library) and nnlib for Hexagon DSP developers. The closest thing Qualcomm currently has to specialized hardware is the HVX modules added to the Hexagon DSP to accelerate 8-bit fixed-point operations for inferencing, but Qualcomm's Brotman said that eventually mobile SoCs will need specialized processors with tightly-coupled memory and an efficient dataflow (fabric interconnects) for neural networks.
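For readers unfamiliar with the 8-bit fixed-point operations mentioned above, here is a rough sketch of affine 8-bit quantization, the idea behind this kind of inference acceleration; the functions and values are illustrative, not Qualcomm's actual DSP kernels.

```python
# Sketch of affine 8-bit quantization: real values are mapped to uint8
# with a scale and zero point, so inference can run on cheap integer math.
import numpy as np

def quantize(x, num_bits=8):
    qmin, qmax = 0, 2**num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize(weights)
print(np.abs(weights - dequantize(q, s, z)).max())  # small quantization error
```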


TigerGraph, a graph database born to roar

ZDNet

One thing that may have helped TigerGraph land what it says is the largest transaction graph in production in the world, at Alipay, with more than 100 billion vertices, 600 billion edges, and 2 billion real-time updates daily, is TigerGraph's backing. According to TigerGraph's benchmark, TigerGraph runs queries from 4 to almost 500 times faster than the competition, loads data from 2 to 25 times faster, and uses about 80 percent less space to store that data. It achieves this by having a native C++ graph storage engine (GSE) work side by side with a graph processing engine (GPE) that handles data and algorithms, and by using parallelism and a distributed architecture. TigerGraph also offers a browser-based SDK called GraphStudio that lets users create graph models, map and load data sources, and build graph queries.
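To make the vertex/edge terminology concrete, here is a toy Python sketch of a directed transaction graph and a two-hop traversal, the kind of query a graph processing engine parallelizes at scale; all names and values are hypothetical, not Alipay's data.

```python
# Toy property-graph model: accounts are vertices, payments are edges.
from collections import defaultdict

edges = defaultdict(list)             # adjacency list: account -> payees

def add_payment(src, dst, amount):
    edges[src].append((dst, amount))  # a directed "payment" edge

add_payment("acct_a", "acct_b", 120.0)
add_payment("acct_b", "acct_c", 75.0)

# A 2-hop traversal from one vertex, done serially here; a GPE would
# distribute and parallelize this frontier expansion.
frontier = {"acct_a"}
for _ in range(2):
    frontier = {dst for v in frontier for dst, _ in edges[v]}
print(frontier)                       # {'acct_c'}
```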


Using Recurrent Neural Networks to Predict Player Performance

#artificialintelligence

For data prone to noise and anomalies (most data, if we're being honest), a Long Short-Term Memory network (LSTM) preserves the long-term memory capabilities of the RNN while filtering out irrelevant data points that are not part of the pattern. Mechanically speaking, the LSTM adds an extra operation to nodes in the network, the outcome of which determines whether a data point will be remembered as part of a potential pattern and used to update the weight matrix, or forgotten and cast aside as noise. For example, to train the HR (home run) network, the first input to the network is the number of homers the player hit in his first game, the second input is the number he hit in his second game, and so on. With a network to train and data to train it with, we can now look at a test case where the network attempted to learn Manny Machado's performance patterns and then made some predictions.
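The game-by-game setup described above can be sketched with a small LSTM in PyTorch; the home run counts below are fabricated for illustration and are not Machado's actual game log.

```python
# Predict the next game's home runs from the sequence so far.
import torch
import torch.nn as nn

homers = torch.tensor([0., 1., 0., 0., 2., 1., 0., 1.])  # fake per-game HRs
x = homers[:-1].view(1, -1, 1)   # inputs: games 1..n-1  (batch, seq, feature)
y = homers[1:].view(1, -1, 1)    # targets: games 2..n

lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=0.01)

for _ in range(200):             # tiny training loop
    out, _ = lstm(x)
    pred = head(out)
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(pred[0, -1].item())        # predicted HRs for the next game
```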


Artificial Intelligence and the Military

#artificialintelligence

ANNs with two or more hidden layers are capable of deep learning; such ANNs can process more complex data sets than ANNs having only one hidden layer. A clear advantage of AI is its ability to learn and evolve in ways that frozen software cannot. For example, an early chess program was developed using the great chess player Garry Kasparov as a subject matter expert.
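The "two or more hidden layers" criterion is easy to see in code; below is a minimal feedforward network with two hidden layers, where the layer widths are arbitrary assumptions for illustration.

```python
# A feedforward ANN with two hidden layers, i.e. "deep" by the
# definition above. Widths (64 -> 32 -> 16 -> 2) are illustrative.
import torch
import torch.nn as nn

deep_ann = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),   # hidden layer 1
    nn.Linear(32, 16), nn.ReLU(),   # hidden layer 2
    nn.Linear(16, 2),               # output layer
)
out = deep_ann(torch.randn(5, 64))  # batch of 5 inputs, 64 features each
```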


Challenges in Deep Learning – Hacker Noon

@machinelearnbot

Deep Learning algorithms mimic the human brain using artificial neural networks and progressively learn to accurately solve a given problem. Training a Deep Learning model requires a lot of data. Industry-scale Deep Learning systems require high-end data centers, while smart devices such as drones, robots, and other mobile devices require small but efficient processing units. Once trained, Deep Learning models can deliver tremendously efficient and accurate solutions to a specific problem.


Intel, Waymo team on self-driving car computers

Daily Mail

The chipmaker admitted it had worked with the company during the design of its compute platform to allow autonomous cars to process information in real time. The announcement marked the first time Waymo, formerly Google's autonomous program, has acknowledged a collaboration with a supplier. Intel began supplying chips for then-Google's autonomous program in 2009, but that relationship grew into a deeper collaboration when Google began working with Fiat Chrysler Automobiles (FCHA.MI) to develop and install the company's autonomous driving technology in the automaker's minivans.


Max Tegmark: 'Machines taking control doesn't have to be a bad thing'

The Guardian

We're in a situation where something truly dramatic might happen within decades – that's a good time to start preparing. With his friend the Skype co-founder Jaan Tallinn, and funding from the tech billionaire Elon Musk, he set up the Future of Life Institute, which researches the existential risks facing humanity. Life 2.0, or the cultural stage, is where humans are: able to learn, adapt to changing environments, and intentionally change those environments. But if trends continue apace, then it's not unreasonable to assume that at some point – in 30 years' time, 50 years, 200 years? – machines will surpass human intelligence. Yet if we're looking at creating an intelligence that we can't possibly understand, how much will preparation affect what takes place on the other side of the singularity?


The future of search engines: Researchers combine artificial intelligence, crowdsourcing and supercomputers

#artificialintelligence

This week, at the Annual Meeting of the Association for Computational Linguistics in Vancouver, Canada, Lease and collaborators from UT Austin and Northeastern University presented two papers describing their novel IR systems. They proposed a method for exploiting these existing linguistic resources via weight sharing to improve NLP models for automatic text classification. "This provides a general framework for codifying and exploiting domain knowledge in data-driven neural network models," says Byron Wallace, Lease's collaborator from Northeastern University. By improving core natural language processing technologies for automatic information extraction and the classification of texts, web search engines built on these technologies can continue to improve.
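One common form of the weight-sharing idea is a single embedding layer shared across tasks, so linguistic knowledge learned once benefits every classifier built on it; the sketch below is a hedged illustration of that pattern, not the paper's exact model.

```python
# Two text classifiers sharing one embedding layer: updating either task's
# loss updates the same embedding weights. Sizes are illustrative.
import torch
import torch.nn as nn

vocab_size, emb_dim = 10_000, 100
shared_embedding = nn.Embedding(vocab_size, emb_dim)   # shared parameters

class TextClassifier(nn.Module):
    def __init__(self, embedding, num_classes):
        super().__init__()
        self.embedding = embedding   # same object => weight sharing
        self.out = nn.Linear(emb_dim, num_classes)

    def forward(self, token_ids):
        # Mean-pool token embeddings into one document vector.
        return self.out(self.embedding(token_ids).mean(dim=1))

task_a = TextClassifier(shared_embedding, num_classes=2)
task_b = TextClassifier(shared_embedding, num_classes=5)
logits = task_a(torch.randint(0, vocab_size, (4, 12)))  # batch of 4 docs
```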


We are making on-device AI ubiquitous

#artificialintelligence

You may have heard this vision or may think that AI is really about big data and the cloud, and yet Qualcomm's solutions already have the power, thermal, and processing efficiency to run powerful AI algorithms on the actual device -- which brings several advantages. We've also had our own success at the ImageNet Challenge using deep learning techniques, placing as a top-3 performer in challenges for object localization, object detection, and scene classification. We have also expanded our own research into other promising areas and applications of machine learning, like recurrent neural networks, object tracking, natural language processing, and handwriting recognition, and collaborated with the external AI community on them. As an example, at this year's F8 conference, Facebook and Qualcomm Technologies announced a collaboration to support the optimization of Caffe2, Facebook's open source deep learning framework, and the NPE framework.