Deep learning using CNN: Learn to remember it visually

#artificialintelligence

"Deep Learning Is Setting Records!!" There is tremendous growth in people searching or showing interests about deep learning & AI in last few years. Every day hundreds of new articles get published on it in social media & press media. Above chart broadly explains as why search trend is ever growing for deep learning & AI. Fundamentally deep learning is a subset of Machine Learning. The reason as why it is exciting is that more data you give to deep learning usually you get more accuracy out from the model.


Transfer Learning and Image Classification with ML.NET

#artificialintelligence

Historically, image classification is the problem that popularized deep neural networks, especially the visual kind – convolutional neural networks (CNNs). We will not go into the details of what CNNs are and how they work. However, we can say that CNNs rose to prominence after they broke a record in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) back in 2012. This competition evaluates algorithms for object detection and image classification at large scale. The dataset it provides contains 1,000 image categories and over 1.2 million images.
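The article itself works in ML.NET; as a purely illustrative sketch of the same transfer learning idea in Python, the snippet below reuses an ImageNet-pretrained CNN from torchvision and retrains only a new classification head. The choice of ResNet-18 and the number of target classes are assumptions for the example, not details from the article.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a CNN pretrained on ImageNet (the ILSVRC dataset described above).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained feature extractor; only the new head will be trained.
for param in model.parameters():
    param.requires_grad = False

# Swap the original 1,000-class ImageNet output layer for one sized to our task.
num_classes = 3  # hypothetical number of target categories
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the replacement layer's weights are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```

Freezing the backbone keeps the features learned from the 1.2 million ImageNet images and leaves only a small linear layer to fit, which is why transfer learning needs far less data than training from scratch.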


Watch an AI robot program itself to, er, pick things up and push them around

#artificialintelligence

Robots normally need to be programmed in order to get them to perform a particular task, but they can be coaxed into writing the instructions themselves with the help of machine learning, according to research published in Science. Engineers at Vicarious AI, a robotics startup based in California, USA, have built what they call a "visual cognitive computer" (VCC), a software platform connected to a camera system and a robot gripper. Given a set of visual clues, the VCC writes a short program of instructions to be followed by the robot so it knows how to move its gripper to do simple tasks. "Humans are good at inferring the concepts conveyed in a pair of images and then applying them in a completely different setting," the paper states. "The human-inferred concepts are at a sufficiently high level to be effortlessly applied in situations that look very different, a capacity so natural that it is used by IKEA and LEGO to make language-independent assembly instructions."


Google figured out how to turn pixelated images into high-res ones

Mashable

You see it all the time in movies and TV shows: a security camera records footage of an intruder, but the image is too blurry or pixelated to make out who it is. Some nerdy-looking "hacker" then clacks at his keyboard and -- boom -- seconds later, the pixelated image turns into a crisp one revealing the person's face in glorious detail. "Oh, come on!" we all say while rolling our eyes. Well, you might have to break that habit because Google has figured out a way to turn movie magic into reality (sort of). According to ArsTechnica, researchers at Google's deep learning research project, Google Brain, have created software that attempts to "sharpen" images made up of 8 x 8 pixels.


How many images do you need to train a neural network?

#artificialintelligence

Today I got an email with a question I've heard many times – "How many images do I need to train my classifier?". In the early days I would reply with the technically most correct, but also useless answer of "it depends", but over the last couple of years I've realized that just having a very approximate rule of thumb is useful, so here it is for posterity: You need 1,000 representative images for each class. Like all models, this rule is wrong but sometimes useful. In the rest of this post I'll cover where it came from, why it's wrong, and what it's still good for. The origin of the 1,000-image magic number comes from the original ImageNet classification challenge, where the dataset had 1,000 categories, each with a bit less than 1,000 images for each class (most I looked at had around seven or eight hundred).
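As a rough sanity check against that rule of thumb, a short Python sketch like the one below can count the images per class in a folder-per-class dataset. The directory layout, folder name, and image extensions are assumptions for illustration, not part of the original post.

```python
from pathlib import Path

# Hypothetical layout: dataset/<class_name>/*.jpg -- one sub-folder per class.
dataset_dir = Path("dataset")
RULE_OF_THUMB = 1000  # representative images per class, per the post
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png"}

for class_dir in sorted(p for p in dataset_dir.iterdir() if p.is_dir()):
    # Count only files whose extension looks like an image.
    count = sum(1 for f in class_dir.iterdir() if f.suffix.lower() in IMAGE_EXTENSIONS)
    status = "ok" if count >= RULE_OF_THUMB else f"short by {RULE_OF_THUMB - count}"
    print(f"{class_dir.name}: {count} images ({status})")
```

Classes that come up well short of the threshold are the ones most likely to benefit from more data collection or augmentation, in the spirit of the "wrong but sometimes useful" rule above.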