Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues, and recently I have been asked for them so often that I decided to organize and share the entire collection. To make things more interesting and give context, I added descriptions and/or excerpts for each major topic.
But by a fortuitous coincidence, a related type of computer chip, called a graphics processing unit, or GPU, turns out to be very effective when applied to the types of calculations needed for neural nets. In fact, speedups of 10X are not uncommon when neural nets are moved from traditional central processing units to GPUs. GPUs were initially developed to rapidly display graphics for applications such as computer gaming, which provided economies of scale and drove down unit costs, but an increasing number of them are now used for neural nets. As neural net applications become even more common, several companies have developed specialized chips optimized for this workload, including Google's tensor processing unit, or TPU.
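Why GPUs help is visible in the arithmetic itself: a neural-net layer is dominated by matrix-vector products, and each output element depends only on its own row of weights, so all outputs can be computed at the same time. Here is a minimal pure-Python sketch of that structure with toy numbers (a real framework would dispatch each row to one of thousands of GPU cores rather than loop over them):

```python
def layer_forward(W, x):
    """One dense neural-net layer: y[i] = sum_j W[i][j] * x[j].

    Each output y[i] depends only on row i of W and on the input x,
    never on the other outputs -- so every row can be computed in
    parallel. That independence is exactly what GPUs exploit.
    """
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

W = [[1, 2], [3, 4], [5, 6]]    # 3 outputs, 2 inputs
x = [10, 1]
print(layer_forward(W, x))      # -> [12, 34, 56]
```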
Erik Brynjolfsson, MIT Sloan School professor, explains how rapid advances in machine learning are presenting new opportunities for businesses. And there are lots of things that humans are pretty good at, like distinguishing different kinds of images. And for a long time, machines were nowhere near as good. As recently as seven or eight years ago, machines had about a 30 percent error rate on ImageNet, this big database of over 10 million images that Fei-Fei Li created. SARAH GREEN CARMICHAEL: With photo recognition and facial recognition, I know that Facebook's facial recognition software can't tell the difference between me wearing makeup and me not wearing makeup, which is also sort of funny and horrifying, right?
Here's what we came up with by extrapolating technology and journalism trends highlighted in AP's report: The sensors send an alert to his vehicle's smart dashboard: "There has been a 10 percent decrease in air quality in Springfield." He downloads images from a series of robotic cameras posted throughout the region and uses computer vision (an algorithm able to view and comprehend a photo or video with enhanced accuracy) to compare photos of the area around the factory over time. The representative, the journalist suspects, may be hiding something; voice analysis technology declares the tone of the person on the phone is "tentative" and "nervous." Sitting in his car on the way back to the newsroom, the journalist runs a voice recording of the interview through his sentiment analysis system, which determines the mother's tone to be "genuine" and "analytical."
Intel's $80 Movidius Neural Compute Stick lets you plug some computing brains into your laptop's USB port; with a USB hub, you can plug in several at once. That's the kind of thing that can be handy if you're trying to work out computer vision in your drone or help your cleaning robot tell the difference between a cat and a coffee table. Intel announced the device at the Conference on Computer Vision and Pattern Recognition on Thursday.
The challenge is insight: online store managers find it much harder to see what's really going on in the shop, compared to their real-world counterparts. However, new analytical techniques, powered by AI technologies, are helping businesses optimise their UX and improve their bottom lines in new and important ways. Combining these with advanced user journey mapping can give marketers essential insight into why people drop out of the site at certain points, while next-generation 'zoning' of key elements on a page can give employees a much more granular overview of page performance (such as revenue generated or hesitation rate per 'zone') at a glance. In the coming years, businesses will find it progressively easier to eliminate intuition from the product and marketing development cycle through a powerful combination of UX analytics and AI-driven automated recommendations.
This month we're chatting with Arte Merritt, CEO and cofounder of Dashbot, a bot analytics platform that enables publishers and developers to increase engagement, acquisition, and monetization. When we refer to "bots," we mean any conversational interface, whether text-based -- like Facebook or Slack -- or voice-based -- like Alexa or Google Home. Originally they just provided game scores, but they noticed in the analytics that users were asking about players, and added support for player info. They also saw that users wanted to mute score updates when their teams were losing, so they added a "mute" functionality -- and thus retained those users instead of losing them.
The most fundamental task in biomedical text mining is the recognition of named entities (NER), such as proteins, species, diseases, chemicals or mutations. Results: We show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, often by a large margin. We assessed the performance of LSTM-CRF by performing 33 evaluations on 24 different gold-standard corpora (some with annotations for more than one entity type) covering five entity types: chemical names, disease names, species names, gene/protein names, and names of cell lines.
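To make the CRF half of LSTM-CRF concrete: the LSTM produces a per-token score for each candidate label (the emissions), the CRF adds learned label-to-label transition scores, and Viterbi decoding then finds the jointly best label sequence instead of picking each token's label independently. Below is a minimal sketch of that decoding step in pure Python with made-up toy scores (the actual model in the paper learns both score tables from the corpora):

```python
def viterbi_decode(emissions, transitions):
    """Best label sequence for a linear-chain CRF.

    emissions:   T lists of K scores, one list per token (from the LSTM)
    transitions: K x K matrix; transitions[i][j] scores label i -> label j
    """
    K = len(emissions[0])
    score = list(emissions[0])      # best path score ending in each label
    backpointers = []
    for emit in emissions[1:]:
        step_scores, step_bp = [], []
        for j in range(K):
            # pick the best previous label to transition into label j
            best_i = max(range(K), key=lambda i: score[i] + transitions[i][j])
            step_bp.append(best_i)
            step_scores.append(score[best_i] + transitions[best_i][j] + emit[j])
        score = step_scores
        backpointers.append(step_bp)
    # follow backpointers from the best final label
    best = max(range(K), key=lambda j: score[j])
    path = [best]
    for step_bp in reversed(backpointers):
        best = step_bp[best]
        path.append(best)
    return path[::-1]

# Toy example: label 0 = "O" (outside), label 1 = "ENT" (entity mention).
transitions = [[1, -1], [-1, 1]]               # switching labels is penalized
emissions = [[4, 0], [0, 6], [4, 0]]           # middle token looks like an entity
print(viterbi_decode(emissions, transitions))  # -> [0, 1, 0]
```

Note how the transition scores make the decision sequential: the middle token is tagged as an entity only because its emission score (6) outweighs the two label-switch penalties, which is the joint reasoning a per-token classifier lacks.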
A few years ago, I wrote about a fascinating Italian project to use mobile phone data to predict the onset of bipolar disorder. It isn't the only work utilizing AI to help those with bipolar disorder: a recent paper from the University of Cincinnati outlined an approach that uses AI to accurately predict treatment outcomes. The authors note that existing models predict the response to lithium treatment with an accuracy of no more than 75%. Elsewhere, an Australian research team used the kind of AI algorithms that underpin many modern dating sites to try to improve organ acceptance and ensure a more accurate match between organ donors and recipients.
"Frey found that human subjects exposed to 1310 MHz and 2982 MHz microwaves at average power densities of 0.4 to 2 mW/cm2 perceived auditory sensations described as buzzing or knocking sounds. Pulsed microwave voice-to-skull (or other-sound-to-skull) transmission was discovered during World War II by radar technicians who found they could hear the buzz of the train of pulses being transmitted by radar equipment they were working on. A spread spectrum signal received on a spectrum analyzer appears as just more "static" or noise. In 1975, researcher A. W. Guy stated that "one of the most widely observed and accepted biologic effects of low average power electromagnetic energy is the auditory sensation evoked in man when exposed to pulsed microwaves."