

Consumer-goods giant Unilever has been hiring employees using brain games and artificial intelligence -- and it's a huge success

#artificialintelligence

Candidates learn about the jobs online through outlets like Facebook or LinkedIn and submit their LinkedIn profiles -- no résumé required. One of the brain games, the "balloon game," measures a candidate's relationship to risk: users get three minutes to collect as much "money" as possible, where clicking "pump" inflates a balloon by 5 cents, at any point the user can click "collect money," and if the balloon pops, the user receives no money. Unilever had exceptional employees in different roles play the games and used their results as a benchmark against which to measure new candidates.
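The game's mechanics can be sketched as a short simulation. The 5-cents-per-pump payout comes from the article; the pop probability and the number of pumps per strategy below are illustrative assumptions, not Unilever's actual parameters:

```python
import random

def play_balloon_round(pumps, pop_probability=0.1, cents_per_pump=5):
    """Simulate one round of the balloon risk game.

    `pumps` is how many times the player chooses to pump before
    collecting. Each pump adds `cents_per_pump` to the pot but
    carries `pop_probability` of bursting the balloon, which
    forfeits the whole pot. (pop_probability is an assumed value.)
    """
    pot = 0
    for _ in range(pumps):
        if random.random() < pop_probability:
            return 0           # balloon popped: lose everything
        pot += cents_per_pump  # balloon held: pot grows
    return pot                 # player banks the pot

# A cautious player banks small, reliable sums; a risk-seeking
# player pumps more and sometimes walks away with nothing.
random.seed(42)
cautious_total = sum(play_balloon_round(3) for _ in range(1000))
risky_total = sum(play_balloon_round(20) for _ in range(1000))
```

Benchmarking against top performers then amounts to comparing a candidate's pump-vs-collect pattern with the patterns those employees produced.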


Hey Siri, an ancient algorithm may help you grasp metaphors: Study tracks the cognitive steps humans have taken over centuries to create and comprehend metaphoric language

#artificialintelligence

Mapping 1,100 years of metaphoric English language, researchers at UC Berkeley and Lehigh University in Pennsylvania have detected patterns in how English speakers have added figurative word meanings to their vocabulary. Using the Metaphor Map of English database, researchers examined more than 5,000 examples from the past millennium in which word meanings from one semantic domain, such as "water," were extended to another semantic domain, such as "mind." Researchers called the original semantic domain the "source domain" and the domain that the metaphorical meaning was extended to, the "target domain." More than 1,400 online participants were recruited to rate semantic domains such as "water" or "mind" according to the degree to which they were related to the external world (light, plants), animate things (humans, animals), or intense emotions (excitement, fear).


Now kids can easily program a complex robot to recognize a smile

Popular Science

But in a sea of coding-for-kids products, an update to Anki's holiday-shopping darling, Cozmo the robot, offers young would-be coders a chance to tap into a complex machine capable of advanced feats like facial recognition; 1.6 million lines of code run between the robot itself and its companion app. The adorable bot can recognize individual faces and expressions, and it can imitate human emotions. That changed on Monday, when Anki opened up Cozmo's brain to novice code-writers via a simple visual system based on Scratch, a graphical programming language developed at the MIT Media Lab.


10 Ugly Truths about Artificial Intelligence in 2017

#artificialintelligence

Most of the good bots out there rely on heavy back-end training before a human even begins to talk to them, shifting cost and implementation time to expensive dialog experts who attempt to hand-craft conversation flows. The bot of tomorrow will be able to call upon singleton controls (like forms, tables, grids, or text) depending on the machine-reasoned circumstance.
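The "singleton controls" idea can be sketched as a simple dispatch from the bot's reasoned result to whichever control fits it. The control names and result shapes below are hypothetical, not any particular bot framework's API:

```python
def choose_control(result):
    """Map a machine-reasoned result onto a renderable UI control.

    The control names ("form", "table", "text") and the result
    shapes are illustrative assumptions, chosen to show the idea.
    """
    if isinstance(result, dict) and result.get("missing_fields"):
        # The bot needs more information from the user: render a form.
        return ("form", result["missing_fields"])
    if isinstance(result, list) and result and isinstance(result[0], dict):
        # The bot produced a set of records: render a table/grid.
        return ("table", result)
    # Default: reply with plain text.
    return ("text", str(result))
```

The point is that the conversation flow is not hand-crafted; the bot's reasoning output alone determines which control appears.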


Google has created a neural network that can multitask

Daily Mail

"It can detect objects in images, provide captions, recognise speech, translate between four pairs of languages, and do grammatical constituency parsing at the same time," the researchers, led by Lukasz Kaiser, wrote in their blog. The inspiration for the MultiModel comes from how the brain transforms sensory input from modalities such as sound, vision and taste into a single shared representation, and back out as language or actions.
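The shared-representation idea can be sketched in miniature: modality-specific encoders map different inputs into one common vector space, and a task head consumes that shared vector no matter which modality produced it. The toy encoders below are stand-ins for illustration, not MultiModel's actual networks:

```python
def encode_text(tokens):
    # Stand-in text encoder: bag-of-words over a tiny fixed vocabulary,
    # so every text input lands in a 3-dimensional "shared" space.
    vocab = ["cat", "sat", "mat"]
    return [tokens.count(w) for w in vocab]

def encode_image(pixels):
    # Stand-in image encoder: a 3-bucket intensity histogram,
    # producing a vector in the same shared space as the text encoder.
    buckets = [0, 0, 0]
    for p in pixels:
        buckets[min(p * 3 // 256, 2)] += 1
    return buckets

def caption_head(shared):
    # Stand-in task head: it consumes the shared vector regardless
    # of which modality produced it.
    return "something is there" if sum(shared) > 0 else "empty input"
```

In the real system both the encoders and the heads are trained jointly, which is what lets learning on one task transfer to another.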


The history and potential of deep learning (Thomson Reuters)

@machinelearnbot

Most recently, AlphaGo's victory over Lee Sedol became another major milestone, this time driven by a fast-developing field known as "deep learning." Deep learning, a machine learning technique based on artificial neural networks, is growing in popularity due to a series of developments in the science and business of data mining. In the 1950s, Frank Rosenblatt developed the so-called "perceptron," which can learn from a set of input data similarly to how biological neurons learn from stimuli. Given a well-defined task, enough annotated data, and enough computing power, many complex tasks that once required human expertise and reasoning now seem ripe to be modeled by a neural network approach.
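Rosenblatt's learning rule can be sketched in a few lines: nudge the weights toward inputs that were misclassified until the data is separated. The task (logical AND), the learning rate, and the epoch count below are illustrative choices:

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Train a two-input perceptron with Rosenblatt's update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred       # +1, 0, or -1
            w[0] += lr * err * x[0]   # nudge weights toward the
            w[1] += lr * err * x[1]   # misclassified input
            b += lr * err
    return w, b

# Learn logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Because AND is linearly separable, the rule converges; the perceptron's famous limitation is that it cannot learn non-separable functions like XOR with a single layer.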


Artificial intelligence positioned to be a game-changer

#artificialintelligence

Five years ago, IBM built this system made up of 90 servers and 15 terabytes of memory – enough capacity to process all the books in the American Library of Congress. What happens when Charlie Rose attempts to interview a robot named "Sophia" for his 60 Minutes report on artificial intelligence? Charlie Rose: Tell me about Watson's intelligence. John Kelly: That's a big day-- Charlie Rose: The day that you realize that, "If we can do this"-- "the future is ours." They come up with possible treatment options for cancer patients who have already failed standard therapies. He wanted to see if Watson could find the same genetic mutations that his team identified when making treatment recommendations for cancer patients.


Open Source Toolkits for Speech Recognition

@machinelearnbot

Traditional speech recognition typically consists of n-gram language models combined with Hidden Markov models (HMMs). This article reviews the main options among free speech recognition toolkits that use traditional HMM and n-gram language models. Kaldi, notably, covers both the phonetic and deep learning approaches to speech recognition. We didn't dig as deeply into the other three packages, but they all come with at least simple models, or appear to be compatible with the format provided on the VoxForge site, a fairly active crowdsourced repository of speech recognition data and trained models.
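The n-gram component can be sketched as a toy bigram model: each word's probability is estimated from how often it follows the previous word in training text. Real toolkits add smoothing and backoff for unseen word pairs, which this sketch omits:

```python
from collections import defaultdict

def train_bigram(sentences):
    """Count word-pair frequencies, with <s>/</s> sentence markers."""
    counts = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        words = ["<s>"] + sent.split() + ["</s>"]
        for prev, cur in zip(words, words[1:]):
            counts[prev][cur] += 1
    return counts

def bigram_prob(counts, prev, cur):
    """Maximum-likelihood estimate P(cur | prev); no smoothing."""
    total = sum(counts[prev].values())
    return counts[prev][cur] / total if total else 0.0

lm = train_bigram(["the cat sat", "the dog sat"])
# "the" is followed once by "cat" and once by "dog" in this data,
# so P(cat | the) comes out to 1/2.
```

In a recognizer, these probabilities score candidate word sequences while the HMM acoustic model scores how well the audio matches each word.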


Google's neural network is a multi-tasking pro

Engadget

Trying to train a neural network to do an additional task usually makes it much worse at its first one. The company's multi-tasking machine learning system, called MultiModel, was able to learn to detect objects in images, provide captions, recognize speech, translate between four pairs of languages, and parse grammar and syntax. In a blog post the company said, "It is not only possible to achieve good performance while training jointly on multiple tasks, but on tasks with limited quantities of data, the performance actually improves. To our surprise, this happens even if the tasks come from different domains that would appear to have little in common, e.g., an image recognition task can improve performance on a language task."