

IBM, Intel Rethink Processor Designs to Accommodate AI Workloads - The New Stack

#artificialintelligence

Artificial intelligence is bringing new demands to processors. The algorithmic data crunching differs from the earlier patterns of data processing captured by benchmarks like LINPACK. It is also changing computing architectures by de-emphasizing the CPU and harnessing the faster computing power of coprocessors. The CPU becomes a facilitator, while much of the deep-learning work is done on accelerator chips such as GPUs, FPGAs and Google's Tensor Processing Unit. Major hardware companies like IBM, Intel, Nvidia and AMD are embracing the change in architecture, tuning hardware to encourage the creation of artificial neural nets as envisioned by researchers in the 1960s.
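
As a concrete illustration of that division of labor, here is a minimal PyTorch sketch (the framework and layer sizes are my choice, not anything from the article) in which the CPU merely orchestrates while the matrix math runs on an accelerator when one is available:

```python
# Minimal sketch: the CPU orchestrates, the accelerator does the math.
# Assumes a CUDA-capable GPU; falls back to the CPU if none is present.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 10).to(device)   # weights live on the accelerator
x = torch.randn(32, 1024, device=device)       # batch created directly on-device

with torch.no_grad():
    logits = model(x)                          # forward pass runs on the GPU
print(logits.device)
```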


Transfer Learning using differential learning rates

@machinelearnbot

In this post, I will share how you can adapt popular deep learning models to your own specific task using transfer learning. We will cover concepts such as differential learning rates, which are not yet implemented in some deep learning libraries. I learned about these techniques from the fast.ai course, whose content will be made available to the general public as a MOOC in early 2018. Transfer learning is the process of taking the knowledge learned on one task and applying it to a different task.
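
The post's own code isn't reproduced here; as a rough sketch of the differential-learning-rates idea, the following uses plain PyTorch parameter groups, with the backbone, layer choices, and rates picked purely for illustration:

```python
# Differential learning rates: earlier, more generic layers take smaller
# steps than the freshly initialized head. torchvision's ResNet-34 is used
# purely as an example backbone; the rates are illustrative.
import torch
from torchvision import models

model = models.resnet34(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # new task-specific head

optimizer = torch.optim.SGD(
    [   # (remaining layers omitted from the optimizer for brevity)
        {"params": model.layer1.parameters(), "lr": 1e-4},  # early layers: tiny steps
        {"params": model.layer3.parameters(), "lr": 1e-3},  # middle layers: moderate
        {"params": model.fc.parameters(),     "lr": 1e-2},  # new head: full rate
    ],
    momentum=0.9,
)
```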


Data Science Bowl 2018: A Deep Learning Drive

@machinelearnbot

For the next 90 days, data scientists will have the chance to submit algorithms that can identify nuclei in cell samples without human intervention.
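
For a sense of the task, here is a classical baseline sketch (not a competitive entry) that thresholds a grayscale microscopy image and counts connected blobs as candidate nuclei; the filename is a placeholder, and serious entries relied on deep segmentation networks rather than this kind of heuristic:

```python
# Classical baseline for nucleus detection: Otsu threshold, then count
# connected components. "nuclei.png" is a placeholder input file.
from skimage import io, filters, measure

image = io.imread("nuclei.png", as_gray=True)
mask = image > filters.threshold_otsu(image)   # separate foreground from background
labels = measure.label(mask)                   # label each connected blob
print(f"candidate nuclei found: {labels.max()}")
```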


Faster R-CNN: Down the rabbit hole of modern object detection - Tryolabs Blog

@machinelearnbot

Previously, we talked about object detection, what it is and how it has recently been tackled using deep learning. If you haven't read our previous blog post, we suggest you take a look at it before continuing. Last year, we decided to get into Faster R-CNN, reading the original paper and all the referenced papers (and so on, and so on) until we got a clear understanding of how it works and how to implement it. We ended up implementing Faster R-CNN in Luminoth, a computer vision toolkit based on TensorFlow that makes it easy to train, monitor and use these types of models. So far, Luminoth has attracted an incredible amount of interest, and we even talked about it at both ODSC Europe and ODSC West.
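
Luminoth's own interface isn't shown in the post; as a stand-in illustration of running a Faster R-CNN detector off the shelf, here is a sketch using torchvision's implementation instead, with the image filename and score threshold chosen arbitrarily:

```python
# Run a pretrained Faster R-CNN on a single image and keep the confident
# detections. "street.jpg" is a placeholder filename.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()  # inference mode: the model returns boxes, labels, scores

image = transforms.ToTensor()(Image.open("street.jpg").convert("RGB"))
with torch.no_grad():
    (prediction,) = model([image])     # one result dict per input image

keep = prediction["scores"] > 0.8      # drop low-confidence detections
print(prediction["boxes"][keep], prediction["labels"][keep])
```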


AI Definitions: Machine Learning vs. Deep Learning vs. Cognitive Computing vs. Robotics vs. Strong AI….

#artificialintelligence

AI is the compelling topic of tech conversations du jour, yet within these conversations confusion often reigns, caused by loose use of AI terminology. The problem is that AI comes in a variety of forms, each with its own distinct range of capabilities and techniques, and each at its own stage of development. Some forms of AI that we frequently hear about, such as Artificial General Intelligence (the kind that might someday automate all work, and that we might lose control of), may never come to pass. Others are already doing useful work and driving growth in the high-performance sector of the technology industry. These definitions aren't meant to be the final word on AI terminology; the industry is growing and changing so fast that terms will change and new ones will be added.


Adobe's AI-powered Photoshop update is a time-saver

Engadget

Adobe has unveiled Photoshop 19.1 with a much-anticipated AI-based feature for photo retouchers and a fix for longstanding Windows display issues. The first feature is called "select subject," and it uses Adobe's Sensei deep-learning algorithms to make it much easier to isolate subjects from backgrounds. Adobe sent Engadget a preview copy of Photoshop to test, and I found that it's a big time-saver, though it doesn't always work, especially if your subject and what's behind it are too similar. The tool is certainly simple to use: you load up your photo and choose the "quick selection," "magic wand," or "select and mask" tool to bring up the "select subject" option at the top of the screen.


Google's self-training AI turns coders into machine-learning masters

#artificialintelligence

Google just made it a lot easier to build your very own custom AI system. A new service, called Cloud AutoML, uses several machine-learning tricks to automatically build and train a deep-learning algorithm that can recognize things in images. The technology is limited for now, but it could be the start of something big. Building and optimizing a deep neural network algorithm normally requires a detailed understanding of the underlying math and code, as well as extensive practice tweaking the parameters of algorithms to get things just right. The difficulty of developing AI systems has created a race to recruit talent, and it means that only big companies with deep pockets can usually afford to build their own bespoke AI algorithms.
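
To make concrete the kind of manual tweaking that a service like AutoML automates, here is a toy random hyperparameter search; the search space and scoring stub are hypothetical placeholders, not anything from Google's service:

```python
# Toy random search over two hyperparameters: the sort of trial-and-error
# tuning that automated services aim to take off developers' hands.
import random

def train_and_score(lr, hidden_units):
    # stand-in for training a network and returning validation accuracy
    return random.random()

best = None
for _ in range(20):                                # 20 random trials
    config = {
        "lr": 10 ** random.uniform(-5, -1),        # log-uniform learning rate
        "hidden_units": random.choice([64, 128, 256, 512]),
    }
    score = train_and_score(**config)
    if best is None or score > best[0]:
        best = (score, config)

print("best config:", best[1])
```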


Google's AI Chief on Why You Shouldn't Be Afraid of AI

#artificialintelligence

The field of A.I., using computers to perform complex tasks as well as a human can, is not new. People have been using deep learning with neural networks, a major subfield of A.I., for 40 years. A car used it to drive itself across the United States in 1995. But things started to change in 2010 and 2011, when a new set of academic papers identified the potential for machine learning models to become much better with scale: hundreds of times as many parameters for the algorithms and thousands of times as much data to train on. Computer systems working at a massively increased scale could produce not just quantitative growth but a qualitative improvement in what machine learning could accomplish.


Artificial synapse creation makes brain-on-a-chip tech closer to reality

ZDNet

Researchers have engineered an artificial synapse in an important step toward making brain-on-a-chip processing a reality. On Monday, a team of researchers in the emerging field of neuromorphic computing at the Massachusetts Institute of Technology (MIT) revealed the project, which aims to bring the power of the human brain and of supercomputers to mobile devices. The more we learn about the human brain, the less we seem to know.


fast.ai · Making neural nets uncool again

#artificialintelligence

From the time of our very first deep learning course at the USF Data Institute (which was recorded and formed the basis of our MOOC), we have allowed selected students who could not participate in person to attend via video and text chat through our International Fellowship. We want to get deep learning into the hands of as many people as possible, from as many diverse backgrounds as possible. People with different backgrounds have different problems they're interested in solving. We have seen and experienced some of the obstacles facing outsiders: inequality, discrimination, and lack of access. We've also observed that the field of artificial intelligence is missing out because of its lack of diversity.