Five AI Startup Predictions for 2017


With AI in a full-fledged mania, 2017 will be the year of reckoning. Pure hype trends will reveal themselves to have no fundamentals behind them. Paradoxically, 2017 will also be the year of breakout successes from a handful of vertically-oriented AI startups solving full-stack industry problems that require subject matter expertise, unique data, and a product that uses AI to deliver its core value proposition. Over the past year a mania has risen up around 'bots.' In the technical community, when we talk about bots, we usually mean software agents, which tend to be defined by "four key notions that distinguish agents from arbitrary programs: reaction to the environment, autonomy, goal-orientation and persistence." Enterprises have co-opted the term 'bot' to mean 'any form of business process automation' and coined the term 'RPA,' for robotic process automation.
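
The four notions quoted above can be made concrete with a toy sketch (a hypothetical thermostat agent, not drawn from the article):

```python
# Toy illustration of the four agent properties: reaction to the
# environment, autonomy, goal-orientation, and persistence.

class ThermostatAgent:
    """A minimal software agent that pursues a target temperature."""

    def __init__(self, goal_temp):
        self.goal_temp = goal_temp   # goal-orientation: a standing objective
        self.heater_on = False       # state that persists across cycles

    def step(self, sensed_temp):
        """React to the sensed environment without outside instruction."""
        # Reaction + autonomy: the agent decides on its own each cycle.
        self.heater_on = sensed_temp < self.goal_temp
        return self.heater_on

agent = ThermostatAgent(goal_temp=20.0)
readings = [18.5, 19.0, 20.5, 19.8]
decisions = [agent.step(t) for t in readings]  # persistence: one agent, many cycles
print(decisions)  # [True, True, False, True]
```

An arbitrary program would be invoked once with fixed inputs; the agent instead keeps running, sensing, and choosing, which is exactly the distinction the quoted definition draws.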

Digital Reasoning Goes Cognitive: CEO Tim Estes on Text, Knowledge, and Technology

AITopics Original Links

IBM is big on cognitive. The company's recent AlchemyAPI acquisition is only the latest of many moves in the space. This particular acquisition adds market-proven text and image processing, backed by deep learning, a form of machine learning that resolves features at varying scales, to the IBM Watson technology stack. But IBM is by no means the only company applying machine learning to natural language understanding, and it's not the only company operating under the cognitive computing banner.

How Zuck Built His Jarvis AI Bot from APIs - The New Stack


You may have heard by now that Facebook co-founder Mark Zuckerberg's personal project for 2016 was to build his own Artificial Intelligence (AI) bot, which he affectionately named Jarvis. Zuckerberg's AI is far from Iron Man's fully functional cognitive assistant, also called Jarvis, or even Rosie, the beleaguered maid of "The Jetsons." Still, for 100 hours' worth of work, it manages to accomplish a few basic tasks. It uses a combination of Python, PHP, and Objective-C, overlaid with natural language processing, speech recognition, face recognition, and reinforcement learning APIs. This lets Zuckerberg talk to Jarvis from his phone or computer and control connected appliances: turning lights and music on and off, launching a gray t-shirt from his t-shirt cannon, and even having warm toast ready in the morning. But just how does one build an AI? Iddo Gino, CEO of Rapid API, connected the dots.

Uncovering the Dynamics of Crowdlearning and the Value of Knowledge Machine Learning

Learning from the crowd has become increasingly popular on the Web and in social media. There is a wide variety of crowdlearning sites in which, on the one hand, users learn from the knowledge that other users contribute to the site, and, on the other hand, knowledge is reviewed and curated by the same users using assessment measures such as upvotes or likes. In this paper, we present a probabilistic modeling framework of crowdlearning, which uncovers the evolution of a user's expertise over time by leveraging other users' assessments of her contributions. The model allows for both off-site and on-site learning and captures forgetting of knowledge. We then develop a scalable estimation method to fit the model parameters from millions of recorded learning and contributing events. We show the effectiveness of our model by tracing the activity of ~25 thousand users in Stack Overflow over a 4.5-year period. We find that answers with high knowledge value are rare. Newbies and experts tend to acquire less knowledge than users in the middle range. Prolific learners also tend to be proficient contributors that post answers with high knowledge value.
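
The two ingredients the abstract names, assessment-driven learning and forgetting of knowledge, can be sketched with a deliberately simplified analogue (this is not the paper's model; the decay rate and log-scaled upvote credit are illustrative assumptions):

```python
import math

# Simplified analogue of a crowdlearning expertise trace: expertise
# grows with the assessed value (upvotes) of each contribution and
# decays exponentially between events, capturing forgetting.

def expertise_trace(events, decay=0.1):
    """events: time-sorted list of (time, upvotes) pairs.
    Returns the expertise level right after each event."""
    expertise, last_t, trace = 0.0, 0.0, []
    for t, upvotes in events:
        expertise *= math.exp(-decay * (t - last_t))  # forgetting between events
        expertise += math.log1p(upvotes)              # diminishing returns on assessments
        last_t = t
        trace.append(expertise)
    return trace

trace = expertise_trace([(0.0, 3), (5.0, 0), (10.0, 10)])
print([round(x, 3) for x in trace])
```

Even this crude version reproduces the qualitative point: an unassessed contribution at t=5 does not offset decay, while a highly upvoted one at t=10 does.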

Bridge and Tunnel Investor


Companies like IBM, Microsoft, Apple, Google, Facebook and Amazon are actively leveraging A.I. as part of their technology stacks. However, in order to dominate the market, these vendors will need to monetize A.I. at a large scale. Who can win the early race to monetize A.I.? The battle in the artificial intelligence (A.I.) market has been heating up. IBM, Microsoft, Amazon, Apple, Facebook, and Google are all continuously releasing impressive technologies in the space that are capturing the minds of developers and customers. From a market perspective, A.I. is positioned to become a pillar of the next generation of software technologies.

What Machine Learning Can and Can't Do - The New Stack


And while the latest batch of machine learning products across both these channels may reduce some pain points for data science in the business environment, experts warn that machine learning can't solve two issues, regardless of the predictive capacity of the new tools. Last year, new machine learning market entrants focused on speeding up processes around mapping the context that a machine learning algorithm would need to understand in order to predict needs in a given business situation. For example, if a voice translation machine learning product were listening in to a customer service call in order to help the call operator more quickly surface the appropriate solution-based content, the first job of the machine learning product would be to create an ontology that understands the customer call context: things like product codes, industry-specific language, brand items, and other niche vocabulary. Products like MindMeld and MonkeyLearn built automatic ontology creators so the resulting machine learning algorithm had a higher degree of accuracy without the end user first having to enter a whole heap of business-specific data into the product to make it work. Others, like Lingo24, created their own vertically-based machine learning engines for industries like banking and IT so that their machine learning translation service could apply the right phrase model to the right situation. The people developing those products recognized that, to be accurate, even off-the-shelf machine learning products require a lot of customization and data science legwork to be effective in any given business use case.
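
One simple way automatic ontology seeding of the kind described here can work is contrastive frequency: surface terms that are common in the client's domain text but rare in general language. The following is a hypothetical stdlib sketch, not the method MindMeld or MonkeyLearn actually uses:

```python
from collections import Counter

# Hypothetical sketch of automatic niche-vocabulary extraction:
# flag words whose relative frequency in domain text is much higher
# than in a general-language corpus (product codes, brand items, etc.).

def niche_vocabulary(domain_docs, general_docs, min_ratio=3.0):
    domain = Counter(w for d in domain_docs for w in d.lower().split())
    general = Counter(w for d in general_docs for w in d.lower().split())
    total_d = sum(domain.values()) or 1
    total_g = sum(general.values()) or 1
    vocab = []
    for word, count in domain.items():
        d_freq = count / total_d
        g_freq = (general.get(word, 0) + 1) / total_g  # add-one smoothing
        if d_freq / g_freq >= min_ratio:
            vocab.append(word)
    return sorted(vocab)

domain = ["reset the router model rx-300", "rx-300 firmware update"]
general = ["the update is ready", "reset the password"]
vocab = niche_vocabulary(domain, general, min_ratio=1.5)
print(vocab)  # ['rx-300']
```

The product code `rx-300` stands out because a general corpus never mentions it; common words like "the" and "update" are filtered away, which is the "business-specific data" the end user would otherwise have to enter by hand.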

Training and serving NLP models using Spark MLlib


Identifying critical information in a sea of unstructured data and customizing real-time human interaction are two examples of how clients use our technology at Idibon, a San Francisco startup focused on Natural Language Processing (NLP). The machine learning libraries in Spark ML and MLlib have enabled us to create an adaptive machine intelligence environment that analyzes text in any language, at a scale far surpassing the number of words per second in the Twitter firehose. Our engineering team has built a platform that trains and serves thousands of NLP models, which function in a distributed environment. This allows us to scale out quickly and provide thousands of predictions per second for many clients simultaneously. In this post, we'll explore the types of problems we're working to resolve, the processes we follow, and the technology stack we use.
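
The train-many-models, serve-from-a-registry pattern described above can be sketched without Spark at all. This is a stdlib-only stand-in (a tiny multinomial Naive Bayes in place of an MLlib pipeline; the client names and data are invented), just to show the shape of the platform:

```python
import math
from collections import Counter, defaultdict

# Stdlib sketch of the train/serve pattern: many small per-client NLP
# models trained independently, then served from an in-memory registry.
# Spark MLlib would distribute this; a multinomial Naive Bayes text
# classifier stands in for a real MLlib pipeline here.

def train(labeled_docs):
    """labeled_docs: list of (text, label) pairs. Returns a NB model."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in labeled_docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return {"words": word_counts, "labels": label_counts}

def predict(model, text):
    words = text.lower().split()
    vocab = {w for c in model["words"].values() for w in c}
    total_docs = sum(model["labels"].values())
    best, best_lp = None, -math.inf
    for label, n in model["labels"].items():
        lp = math.log(n / total_docs)  # class prior
        denom = sum(model["words"][label].values()) + len(vocab)
        for w in words:
            lp += math.log((model["words"][label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Serving: one model per client, looked up at prediction time.
registry = {
    "client_a": train([("great product", "pos"), ("terrible support", "neg")]),
}
print(predict(registry["client_a"], "great product"))  # pos
```

Training each client's model independently is what makes the workload embarrassingly parallel, which is why a distributed runtime like Spark fits it so well.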

How IBM, Google, Microsoft, and Amazon do machine learning in the cloud


For any cloud to be taken seriously, it has to meet an ever-rising bar of features. Machine learning seems to be on that list, as all the major cloud providers now feature it. But how they go about doing it is another story. Aside from the "curated API vs. open-ended algorithm marketplace" models, there are the "everything and then some vs. just enough" variants. Here's how the four big cloud providers -- IBM, Microsoft, Google, and Amazon -- stack up next to each other in machine learning.

Learning to Transduce with Unbounded Memory

Neural Information Processing Systems

Recently, strong results have been demonstrated by Deep Recurrent Neural Networks on natural language transduction problems. In this paper we explore the representational power of these models using synthetic grammars designed to exhibit phenomena similar to those found in real transduction problems such as machine translation. These experiments lead us to propose new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as Stacks, Queues, and DeQues. We show that these architectures exhibit superior generalisation performance to Deep RNNs and are often able to learn the underlying generating algorithms in our transduction experiments.
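
The core idea of a "continuously differentiable stack" can be illustrated in plain Python (after Grefenstette et al.; scalar values stand in for the vectors a network would actually push, and gradient computation is omitted):

```python
# Sketch of a continuous stack: each stored item carries a fractional
# "strength", and push/pop/read operate on strengths rather than whole
# items, so the operations become smooth functions of their inputs.

class NeuralStack:
    def __init__(self):
        self.values = []     # stored items, bottom to top
        self.strengths = []  # fractional presence of each item

    def step(self, push_value, push_strength, pop_strength):
        # Pop: remove up to pop_strength of total mass, top first.
        remaining = pop_strength
        for i in reversed(range(len(self.strengths))):
            take = min(self.strengths[i], remaining)
            self.strengths[i] -= take
            remaining -= take
        # Push: add the new value with its fractional strength.
        self.values.append(push_value)
        self.strengths.append(push_strength)

    def read(self):
        # Read: strength-weighted average of the top ~1.0 units of mass.
        result, budget = 0.0, 1.0
        for v, s in zip(reversed(self.values), reversed(self.strengths)):
            take = min(s, budget)
            result += v * take
            budget -= take
            if budget <= 0:
                break
        return result

stack = NeuralStack()
stack.step(push_value=1.0, push_strength=0.8, pop_strength=0.0)
stack.step(push_value=2.0, push_strength=0.5, pop_strength=0.1)
print(round(stack.read(), 2))  # 1.5
```

Because push and pop strengths are real numbers the controller network emits, the whole data structure is differentiable end to end, which is what lets the RNN learn to use it via backpropagation.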

A Connectionist Symbol Manipulator That Discovers the Structure of Context-Free Languages

Neural Information Processing Systems

We present a neural net architecture that can discover hierarchical and recursive structure in symbol strings. To detect structure at multiple levels, the architecture has the capability of reducing symbol substrings to single symbols, and makes use of an external stack memory. In terms of formal languages, the architecture can learn to parse strings in an LR(0) context-free grammar. Given training sets of positive and negative exemplars, the architecture has been trained to recognize many different grammars. The architecture has only one layer of modifiable weights, allowing for a straightforward interpretation of its behavior. Many cognitive domains involve complex sequences that contain hierarchical or recursive structure, e.g., music, natural language parsing, event perception. To illustrate, "the spider that ate the hairy fly" is a noun phrase containing the embedded noun phrase "the hairy fly." Understanding such multilevel structures requires forming reduced descriptions (Hinton, 1988) in which a string of symbols or states ("the hairy fly") is reduced to a single symbolic entity (a noun phrase). We present a neural net architecture that learns to encode the structure of symbol strings via such reduction transformations. The difficult problem of extracting multilevel structure from complex, extended sequences has been studied by Mozer (1992), Ring (1993), Rohwer (1990), and Schmidhuber (1992), among others.
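
The reduction idea, and the role of the external stack, can be shown with a symbolic toy (a hand-written shift-reduce parser with an invented mini-grammar, not the learned network itself):

```python
# Toy shift-reduce illustration of the reduction transformations the
# abstract describes: a stack-based parser repeatedly reduces substrings
# ("the hairy fly") to single symbols (NP).

RULES = {
    ("DET", "ADJ", "N"): "NP",    # "the hairy fly" -> NP
    ("DET", "N"): "NP",           # "the spider" -> NP
    ("THAT", "V", "NP"): "REL",   # "that ate NP" -> relative clause
    ("NP", "REL"): "NP",          # "NP that ate NP" -> NP (simplified)
}

def parse(tags):
    stack = []
    for tag in tags:
        stack.append(tag)  # shift
        changed = True
        while changed:     # reduce the top of the stack while any rule fits
            changed = False
            for lhs, rhs in RULES.items():
                n = len(lhs)
                if tuple(stack[-n:]) == lhs:
                    del stack[-n:]
                    stack.append(rhs)
                    changed = True
    return stack

# "the spider that ate the hairy fly"
tags = ["DET", "N", "THAT", "V", "DET", "ADJ", "N"]
print(parse(tags))  # ['NP']
```

The embedded phrase "the hairy fly" is collapsed to NP first, then the whole relative clause, and finally the outer noun phrase: exactly the multilevel reduced descriptions (Hinton, 1988) the paper's network learns to perform, with its external stack playing the role of the `stack` list above.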