Neural Networks


Artificial Intelligence to Sort Through ISR Data Glut

#artificialintelligence

Inundated with more data than humans can analyze, the U.S. military and intelligence community are banking on machine learning and advanced computing technologies to separate the wheat from the chaff. The Defense Department operates more than 11,000 drones that collect hundreds of thousands of hours of video footage every year. "When it comes to intelligence, surveillance and reconnaissance, or ISR, we have more platforms and sensors than at any time in Department of Defense history," said Air Force Lt. Gen. John N.T. "Jack" Shanahan, director for defense intelligence (warfighter support) in the office of the undersecretary of defense for intelligence. "It's an avalanche of data that we are not capable of fully exploiting," he said at a technology conference in Washington, D.C., hosted by Nvidia, a Santa Clara, California-based artificial intelligence computing company. For example, the Pentagon has deployed a wide-area motion imagery sensor that can look at an entire city.


High Bandwidth Memory: The Great Awakening of AI

#artificialintelligence

Artificial intelligence (AI) is fast becoming one of the most important areas of digital expansion in history. The CEO of Applied Materials recently stated that "the war" for AI leadership will be the "biggest battle of our lifetime." [1] AI promises to transform almost every industry, including healthcare (diagnosis, treatments), automotive (autonomous driving), manufacturing (robot assembly), and retail (purchasing assistance). Although the field of AI has been around since the 1950s, it was not until very recently that computing power and the methods used in AI reached a tipping point for major disruption and rapid advancement. Both of these areas have a tremendous need for much higher memory bandwidth.


Trends in Artificial Intelligence to Consider for 2018

#artificialintelligence

In 2017, artificial intelligence was all the rage because of the potential advantages it holds for people in most fields. This is the year that humanity may experience the first tangible effects of the concept. New advances in machine learning and AI, courtesy of tech corporations and academic research, will be felt in business models. Some of the artificial intelligence trends to look out for in 2018 include the following. Capsule networks are a type of neural network advanced by Geoffrey Hinton, a researcher at Google. The method remedies issues posed by convolutional neural networks, which are currently the main standard in image recognition.
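
For readers who want a concrete hook, the sketch below shows the "squash" nonlinearity from the capsule-network paper by Sabour, Frosst, and Hinton (2017), which scales a capsule's output vector to a length between 0 and 1 while preserving its direction. This is an illustrative NumPy snippet added here, not code from the article.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity: v = (|s|^2 / (1 + |s|^2)) * (s / |s|).

    Short vectors shrink toward zero length, long vectors approach unit
    length, and the direction of s is preserved.
    """
    sq_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

# Example: squash a batch of two 4-dimensional capsule outputs.
capsules = np.array([[0.1, 0.0, 0.0, 0.0],
                     [3.0, 4.0, 0.0, 0.0]])
print(squash(capsules))  # lengths ~0.01 and ~0.96, directions unchanged
```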


Ambient AI and XAVIER, an AI Car Supercomputer

#artificialintelligence

NVIDIA's background is in gaming and in building supercomputers and GPUs for that purpose. Whilst that might not appeal to everyone, it has been the training ground for some incredibly complex computing and has enabled NVIDIA to participate in many additional markets, and to be the best-performing stock in the S&P 500. The four main areas of activity, and of this evening's announcements, are Gaming, VR/AR/MR (Virtual, Augmented and Mixed Reality), Data Centers and Self-Driving Cars. Huang started by suggesting we were enjoying the most exciting time in the computer industry ever, with machine learning and deep neural networks creating a big bang for AI. I won't cover the gaming announcements in this blog, but needless to say they were exciting for the community and included a partnership with Facebook and the launch of GeForce Now, an on-demand option for gamers who lack the required computing power on their own PC, leveraging cloud supercomputing.


Transforming financial services with AI technologies

#artificialintelligence

The financial services industry is undergoing one of the largest transformational shifts in decades, driven by the development of new digital products and services, the broadening availability of powerful computing solutions, and increased customer adoption of cloud, mobile, web-based and AI technologies. AI techniques like machine learning are permeating nearly every industry, and the financial industry is increasingly realizing the impact of faster analytical insights on overall business strategy. Deep learning, the fastest-growing field in machine learning, leverages many-layered deep neural networks (DNNs) to learn levels of representation and abstraction that make sense of data such as images, sound, and text. This technique shows great promise for automating a variety of operational processes and ushering in disruptive new business models for the industry. However, these newfound capabilities are quickly pushing conventional computing architectures to their limits.
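
As a concrete illustration of what "many-layered" means, here is a minimal forward pass through a small feed-forward network in NumPy. The fraud-scoring framing and the layer sizes are hypothetical and purely illustrative; a real system would learn its weights from data rather than drawing them at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    # One fully connected layer followed by a ReLU nonlinearity.
    return np.maximum(0.0, x @ w + b)

# Hypothetical input: one transaction described by 16 numeric features.
x = rng.normal(size=(1, 16))

# Two hidden layers build progressively more abstract representations;
# the final sigmoid readout turns them into a single fraud score in (0, 1).
h1 = dense(x,  rng.normal(size=(16, 32)), np.zeros(32))
h2 = dense(h1, rng.normal(size=(32, 16)), np.zeros(16))
score = 1.0 / (1.0 + np.exp(-(h2 @ rng.normal(size=(16, 1)))))
print(score.item())  # random weights here, so the score itself is meaningless
```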


Deep Learning Frameworks Hands-on Review – Knowm.org

@machinelearnbot

At Knowm, we are building a new and exciting type of computer processor to accelerate machine learning (ML) and artificial intelligence applications. The goal of Thermodynamic-RAM (kT-RAM) is to offload general ML operations, traditionally run on CPUs and GPUs, to a physically adaptive analog processor based on memristors that unites memory and processing. If you haven't heard yet, we call this new way of computing "AHaH Computing", which stands for Anti-Hebbian and Hebbian Computing, and it provides a universal computing framework for in-memory reconfigurable logic, memory, and ML. While we showed long ago that AHaH Computing is capable of solving problems across many domains of ML, we only recently figured out how to use the kT-RAM instruction set and low-precision/noisy memristors to build supervised and unsupervised compositional (deep) ML systems. Our method does not require the backpropagation-of-error algorithm (backprop) and is easy to realize with realistic analog hardware, including but not limited to memristors.


IBM targets AI workloads with POWER9 systems; claims to be faster than x86 - CIOL

#artificialintelligence

Speed to insight is going to emerge as the key competitive differentiator for businesses as they start stepping into the era of compute- and speed-hungry artificial intelligence (AI) and deep learning workloads. IBM recently announced a new line of accelerated IBM Power Systems servers, keeping this new requirement of businesses in mind. The systems are built on its new POWER9 processor, which reduces the training times of deep learning frameworks significantly, from days to hours, and allows more accurate AI applications to be built in considerably less time. "The era of AI demands a tremendous amount of processing power at unprecedented speed," said Monica Aggarwal, Vice President, IBM India Systems Development Lab. "To meet the demands of the cognitive workload, businesses need to change everything right from the start: the algorithms, the software, and the hardware as well. POWER9 systems bring an integrated AI platform designed to accelerate machine learning and deep learning with both software and hardware that are optimized to work together."


Must Read: Top 7 Technology Trends for 2018 - Engineering

#artificialintelligence

"The Computer Society's predictions, based on a deep-dive analysis by a team of leading technology experts, identify top-trending technologies that hold extensive disruptive potential for 2018," said Jean-Luc Gaudiot, IEEE Computer Society President. "The vast computing community depends on the Computer Society as the provider for relevant technology news and information, and our predictions directly align with our commitment to keeping our community well-informed and prepared for the changing technological landscape of the future." Dejan Milojicic, Hewlett Packard Enterprise Distinguished Technologist and IEEE Computer Society past president, said, "The following year we will witness some of the most intriguing dilemmas in the future of technology. Will deep learning and AI indeed expand deployment domains or remain within the realms of neural networks? Will cryptocurrency technologies keep their extraordinary evolution or experience a bubble burst?


Flipboard on Flipboard

#artificialintelligence

While the number of headlines about machine learning might lead one to think that we just discovered something profoundly new, the reality is that the technology is nearly as old as computing. It's no coincidence that Alan Turing, one of the most influential computer scientists of all time, started his 1950 treatise on computing with the question "Can machines think?" From our science fiction to our research labs, we have long questioned whether the creation of artificial versions of ourselves will somehow help us uncover the origin of our own consciousness, and more broadly, our role on earth. Unfortunately, the learning curve on AI is really damn steep. By tracing a bit of history, we should hopefully be able to get to the bottom of wtf machine learning really is.


Reservoir computing - Wikipedia

#artificialintelligence

Reservoir computing is a framework for computation that may be viewed as an extension of neural networks.[1] Typically, an input signal is fed into a fixed (random) dynamical system called a reservoir, and the dynamics of the reservoir map the input into a higher-dimensional space. A simple readout mechanism is then trained to read the state of the reservoir and map it to the desired output. The main benefit is that training is performed only at the readout stage, while the reservoir itself stays fixed. Liquid-state machines[2] and echo state networks[3] are two major types of reservoir computing.[4]
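
To make the idea concrete, below is a minimal echo state network sketch in NumPy. It trains only a linear ridge-regression readout on top of a fixed random reservoir, here applied to a toy next-step sine-wave prediction task; the reservoir size, input scaling, spectral radius, and ridge strength are arbitrary illustrative choices rather than values from the article.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random reservoir: neither W_in nor W_res is ever trained.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # keep spectral radius below 1

def run_reservoir(inputs):
    # Drive the reservoir with the input sequence and collect its states.
    x, states = np.zeros(n_res), []
    for u_t in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave from the current one.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)

# Train only the readout, here with ridge (regularized least-squares) regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```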