Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data


Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues, and recently I have been asked for them often enough that I decided to organize and share the entire collection. To make things more interesting and give context, I added descriptions and/or excerpts for each major topic.

What's Driving the Machine Learning Explosion?


But by a fortuitous coincidence, a related type of computer chip, called a graphics processing unit, or GPU, turns out to be very effective when applied to the types of calculations needed for neural nets. In fact, speedups of 10X are not uncommon when neural nets are moved from traditional central processing units to GPUs. GPUs were initially developed to rapidly display graphics for applications such as computer gaming, which provided economies of scale and drove down unit costs, but an increasing number of them are now used for neural nets. As neural net applications become even more common, several companies have developed specialized chips optimized for this application, including Google's Tensor Processing Unit, or TPU.

How AI Is Already Changing Business


Erik Brynjolfsson, MIT Sloan School professor, explains how rapid advances in machine learning are presenting new opportunities for businesses. There are lots of things that humans are pretty good at, like distinguishing different kinds of images, and for a long time machines were nowhere near as good. As recently as seven or eight years ago, machines had about a 30 percent error rate on ImageNet, the big database of over 10 million images that Fei-Fei Li created. SARAH GREEN CARMICHAEL: With photo recognition and facial recognition, I know that Facebook's facial recognition software can't tell the difference between me wearing makeup and me not wearing makeup, which is also sort of funny and horrifying, right?

A day in the life of a journalist in 2027: Reporting meets AI


Here's what we came up with by extrapolating technology and journalism trends highlighted in AP's report: The sensors send an alert to his vehicle's smart dashboard: "There has been a 10 percent decrease in air quality in Springfield." He downloads images from a series of robotic cameras posted throughout the region and uses computer vision (an algorithm able to view and comprehend a photo or video with enhanced accuracy) to compare photos of the area around the factory over time. The representative, the journalist suspects, may be hiding something; voice analysis technology declares the tone of the person on the phone is "tentative" and "nervous." Sitting in his car on the way back to the newsroom, the journalist runs a voice recording of the interview through his sentiment analysis system, which determines the mother's tone to be "genuine" and "analytical."

Intel Movidius Neural Compute Stick brings AI brains to USB port


Intel's $80 Movidius Neural Compute Stick lets you plug some computing brains into your laptop's USB port -- and with a USB hub, you can plug in several at once. That's the kind of thing that can be handy if you're trying to work out computer vision in your drone or help your cleaning robot tell the difference between a cat and a coffee table. Intel announced the device at the Conference on Computer Vision and Pattern Recognition on Thursday.

How AI can help to enhance user experience


The challenge is insight: online store managers find it much harder to see what's really going on in the shop, compared to their real-world counterparts. However, new analytical techniques, powered by AI technologies, are helping businesses optimise their UX and improve their bottom lines in new and important ways. Combining these with advanced user journey mapping can provide essential insight to inform marketers as to why people are dropping out of the site at certain points, while next-generation 'zoning' of key elements on a page can give employees a much more micro-level and detailed overview of page performance (such as revenue generated or hesitation rate per 'zone') at a glance. In the coming years businesses will find it progressively easier to eliminate intuition from the product and marketing development cycle through a powerful combination of UX analytics and AI-driven automated recommendations.

Q&A: Dashbot On Why Conversational Analytics are the Future


This month we're chatting with Arte Merritt, CEO and Cofounder of Dashbot, a bot analytics platform that enables publishers and developers to increase engagement, acquisition, and monetization. When we refer to "bots," we mean any conversational interface, whether more text based--like Facebook or Slack--or voice based--like Alexa or Google Home. One bot originally just provided the scores of the games, but the developers noticed in the analytics that users were asking about players, and added support for player info. They also saw in the analytics that users wanted to silence updates when their teams were losing, and added a "mute" functionality -- and thus they retained those users instead of losing them.

Siri-powered wireless charging dock could be the iPhone's ultimate companion


If a patent Apple was recently granted is any indication, a product with some of those features could be coming sometime in the future. The patent, which was spotted by The Verge, describes various designs for a device that could serve as a dock for "a portable electronic device" (an iPhone, obviously) outfitted with microphones, processors, and wireless energy transfer capabilities in some iterations. Siri integration is also outlined in the patent, although it's described as being "a voice recognition mode of the portable electronic device," rather than a built-in feature of the dock. A wireless-charging, Siri-connected smart dock could be more appealing to consumers than Apple's upcoming HomePod, the premium $349 Siri speaker dropping in December to take on Amazon Echo and Google Home.

Robots for Kids: Designing Social Machines That Support Children's Learning

IEEE Spectrum Robotics Channel

In this guest post, Jacqueline M. Kory Westlund, a researcher in the Personal Robots Group at the MIT Media Lab, describes her projects and explorations to understand children's relationships with social robots. What design features of the robots affect children's learning--like the expressivity of the robot's voice, the robot's social contingency, or whether it provides personalized feedback? When I tell people about the Media Lab's work with robots for children's education, a common question is: "Are you trying to replace teachers?" Despite all the research that seems to point to the conclusion that "robots can be like people," there are also studies showing that children learn more from human tutors than from robot tutors.

Deep learning with word embeddings improves biomedical named entity recognition (Bioinformatics, Oxford Academic)


Results: We show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. The most fundamental task in biomedical text mining is the recognition of named entities (NER), such as proteins, species, diseases, chemicals or mutations. Here, we show that an entirely generic NER method based on deep learning and distributional word semantics outperforms such specific high-quality NER methods across different entity types and across different evaluation corpora. We assessed the performance of LSTM-CRF by performing 33 evaluations on 24 different gold standard corpora (some with annotations for more than one entity type) covering five different entity types, namely chemical names, disease names, species names, genes/protein names, and names of cell lines.
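The LSTM-CRF architecture pairs a bidirectional LSTM, which scores each token against each tag, with a conditional random field layer that decodes the best tag sequence jointly, so that implausible label patterns (such as an "inside" tag with no preceding "begin" tag) are penalized. As a rough illustration of the CRF half only, here is a minimal Viterbi decoder in plain Python; the BIO tags, emission scores, and transition scores below are made-up toy values, not weights or labels from the paper.

```python
def viterbi_decode(emissions, transitions, tags):
    """Return the highest-scoring tag sequence for one sentence.

    emissions:   one {tag: score} dict per token (in LSTM-CRF these
                 scores would come from the bidirectional LSTM).
    transitions: {(prev_tag, tag): score}, the CRF's transition scores.
    tags:        the tag inventory, e.g. BIO labels.
    """
    # Best score of any path ending in each tag at the first token.
    best = {t: emissions[0][t] for t in tags}
    backpointers = []

    for emission in emissions[1:]:
        new_best, pointers = {}, {}
        for t in tags:
            # Best previous tag to arrive at tag t from.
            prev = max(tags, key=lambda p: best[p] + transitions[(p, t)])
            new_best[t] = best[prev] + transitions[(prev, t)] + emission[t]
            pointers[t] = prev
        backpointers.append(pointers)
        best = new_best

    # Backtrack from the best final tag.
    tag = max(best, key=best.get)
    path = [tag]
    for pointers in reversed(backpointers):
        tag = pointers[tag]
        path.append(tag)
    return path[::-1]


# Toy example: 3 tokens, BIO tags for a single hypothetical entity mention.
tags = ["O", "B", "I"]
transitions = {
    ("O", "O"): 0.0, ("O", "B"): 0.0, ("O", "I"): -10.0,  # O -> I is invalid
    ("B", "O"): 0.0, ("B", "B"): -1.0, ("B", "I"): 1.0,
    ("I", "O"): 0.0, ("I", "B"): -1.0, ("I", "I"): 1.0,
}
emissions = [
    {"O": 2.0, "B": 0.0, "I": 0.0},  # non-entity token
    {"O": 0.0, "B": 2.0, "I": 0.0},  # entity start
    {"O": 0.0, "B": 0.0, "I": 2.0},  # entity continuation
]
print(viterbi_decode(emissions, transitions, tags))  # ['O', 'B', 'I']
```

The heavily negative O-to-I transition is what lets the CRF veto tag sequences the per-token LSTM scores alone would allow; in the trained model these transition scores are learned rather than hand-set.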