How Silicon Valley is teaching language to machines

#artificialintelligence

The dream of building computers or robots that communicate like humans has been with us for decades. And if market trends and investment levels are any guide, it's something we really want. MarketsandMarkets says the natural language processing (NLP) industry will be worth $16.07 billion by 2021, growing at a CAGR of 16.1 percent, while the deep learning market is estimated to reach $1.7 billion by 2022, growing at a CAGR of 65.3 percent between 2016 and 2022. Of course, if you've played with any chatbots, you'll know that this promise has yet to be fulfilled. There's an "uncanny valley" where, at one end, we sense we're not talking to a real person and, at the other, the machine just doesn't "get" what we mean.



Improving Argument Mining in Student Essays by Learning and Exploiting Argument Indicators versus Essay Topics

AAAI Conferences

Argument mining systems for student essays need to identify argument components reliably, independently of the particular essay topic. Thus, in addition to features that model argumentation through topic-independent linguistic indicators such as discourse markers, features that abstract over the lexical signals of particular essay topics might also help improve performance. Prior argument mining studies have focused on persuasive essays and proposed a variety of largely lexicalized features. Our current study examines the utility of such features, proposes new features that abstract over the domain topics of essays, and conducts evaluations using both 10-fold cross-validation and cross-topic validation. Experimental results show that our proposed features significantly improve argument mining performance in both evaluation settings. Feature ablation studies further shed light on relative feature utility.
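The distinction between 10-fold cross-validation and cross-topic validation is the key evaluation idea here: in cross-topic validation, every essay on the held-out topic is kept out of training, so the classifier cannot lean on that topic's vocabulary. A minimal sketch of the split logic, using a hypothetical toy corpus with invented topic labels (not the paper's dataset or code):

```python
from collections import defaultdict

# Hypothetical toy corpus of (essay_id, topic) pairs; the topic labels
# are illustrative only, not taken from the paper's actual data.
essays = [
    ("e1", "school_uniforms"), ("e2", "school_uniforms"),
    ("e3", "smoking_ban"), ("e4", "smoking_ban"),
    ("e5", "zoos"), ("e6", "zoos"),
]

def cross_topic_folds(corpus):
    """One fold per topic: train on all other topics, test on the held-out
    topic, so the model never sees the test topic's lexical cues."""
    by_topic = defaultdict(list)
    for essay_id, topic in corpus:
        by_topic[topic].append(essay_id)
    folds = []
    for held_out, test_ids in by_topic.items():
        train_ids = [e for t, ids in by_topic.items()
                     if t != held_out for e in ids]
        folds.append((train_ids, test_ids))
    return folds

for train, test in cross_topic_folds(essays):
    print(train, "->", test)
```

Ordinary 10-fold cross-validation, by contrast, shuffles essays into folds regardless of topic, so topic-specific words can leak from training into test and inflate scores.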


Startup CEOs on how to keep the artificial intelligence ball rolling in Canada

#artificialintelligence

The next time you pull out your smartphone and ask Siri or Google for advice, or chat with a bot online, take pride in knowing that some of the theoretical foundation for that technology was brought to life here in Canada. Indeed, as far back as the early 1980s, key organizations such as the Canadian Institute for Advanced Research embarked on groundbreaking work in neural networks and machine learning. Academic pioneers such as Geoffrey Hinton (now a professor emeritus at the University of Toronto and an advisor to Google, among others), the University of Montreal's Yoshua Bengio and the University of Alberta's Rich Sutton produced critical research that helped fuel Canada's rise to prominence as a global leader in artificial intelligence (AI). Stephen Piron, co-CEO of Dessa, praises the federal government's efforts to cut immigration-processing times for highly skilled foreign workers. Canada now houses three major AI clusters – in Toronto, Montreal and Edmonton – that form the backbone of the country's machine-learning ecosystem and support homegrown AI startups.


Towards a Computational Model of Why Some Students Learn Faster than Others

AAAI Conferences

Learners with better metacognition acquire knowledge faster than those without. If we had better models of such learning, we could build better metacognitive educational systems. In this paper, we propose a computational model that uses a probabilistic context-free grammar induction algorithm to yield metacognitive learning by acquiring deep features that assist future learning. We discuss the challenges of integrating this model into a synthetic student, and possible future studies in using this model to better understand human learning. Preliminary results suggest that both stronger prior knowledge and a better learning strategy can speed up the learning process. Some model variations generate human-like error patterns.
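For readers unfamiliar with the probabilistic context-free grammars (PCFGs) the model builds on: a PCFG attaches a probability to each rewrite rule, and the probability of a derivation is the product of the probabilities of the rules it applies. The toy grammar and derivation below are general background only, not the paper's induced grammar:

```python
# Minimal toy PCFG (illustrative, not the paper's model). Each nonterminal
# maps to a list of (expansion, probability) rules; the probabilities for
# a given nonterminal sum to 1.
PCFG = {
    "Expr": [(("Expr", "+", "Term"), 0.4), (("Term",), 0.6)],
    "Term": [(("NUM",), 1.0)],
}

def derivation_prob(steps):
    """Probability of a derivation = product of its rule probabilities."""
    p = 1.0
    for lhs, expansion in steps:
        for rhs, prob in PCFG[lhs]:
            if rhs == expansion:
                p *= prob
                break
        else:
            raise ValueError(f"no rule {lhs} -> {expansion}")
    return p

# Derive "NUM + NUM":
# Expr -> Expr + Term -> Term + Term -> NUM + Term -> NUM + NUM
steps = [
    ("Expr", ("Expr", "+", "Term")),
    ("Expr", ("Term",)),
    ("Term", ("NUM",)),
    ("Term", ("NUM",)),
]
print(derivation_prob(steps))  # 0.4 * 0.6 * 1.0 * 1.0
```

Grammar induction runs this idea in reverse: from observed examples, it searches for rules (and probabilities) that explain the data well, and those learned rules are the kind of reusable "deep features" the abstract refers to.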