At the crossroads of language, technology, and empathy

#artificialintelligence

Rujul Gandhi's love of reading blossomed into a love of language at age 6, when she discovered a book at a garage sale called "What's Behind the Word?" With forays into history, etymology, and language genealogies, the book captivated Gandhi, who as an MIT senior remains fascinated with words and how we use them. Growing up partially in the U.S. and mostly in India, Gandhi was surrounded by a variety of languages and dialects. When she moved to India at age 8, she could already see how knowing the Marathi language allowed her to connect more easily to her classmates -- an early lesson in how language shapes our human experiences. Initially thinking she might want to study creative writing or theater, Gandhi first learned about linguistics as its own field of study through an online course in ninth grade.


Are you eating your relish with dogs? Testing, testing AI

#artificialintelligence

Testing, testing: DeepMind sits AI down for an IQ test. The results do not come close to matching human reasoning, but they are a start. AI researchers acknowledge that building systems that can reason about abstract concepts has proven difficult, so the DeepMind team proposed a dataset and challenge designed to probe abstract reasoning and to see how well current models perform. Can AI match our abilities for abstract reasoning?


How Computers Parse the Ambiguity of Everyday Language

#artificialintelligence

If you're one of the 2.4 million Twitter followers of the Hamilton impresario Lin-Manuel Miranda, you've come to expect a delightful stream of observations, including tweets capturing conversations with his son Sebastian, now 3 years old. Earlier this month, Miranda offered one such exchange under the title, "S'MORES. Me: So that's the marshmallow but you're going to eat it with this graham cracker and chocolate. Sebastian: No, I'm going to eat it with my MOUTH. A charming slice of life, to be sure. But in that brief interaction, young Sebastian Miranda also inadvertently hit upon a kind of ambiguity that reveals a great deal about how people learn and process language--and how we might teach computers to do the same. The misinterpretation on which the s'mores story hinges is hiding in the humble preposition with. I'm going to eat this marshmallow with ... If you're in the mood for s'mores, then "graham cracker and chocolate" is an appropriate object of the preposition with.



ReLISH: Reliable Label Inference via Smoothness Hypothesis

Gong, Chen (Shanghai Jiao Tong University and University of Technology Sydney) | Tao, Dacheng (University of Technology Sydney) | Fu, Keren (Shanghai Jiao Tong University) | Yang, Jie (Shanghai Jiao Tong University)

AAAI Conferences

The smoothness hypothesis is critical for graph-based semi-supervised learning. This paper defines local smoothness, based on which a new algorithm, Reliable Label Inference via Smoothness Hypothesis (ReLISH), is proposed. ReLISH has produced smoother labels than some existing methods for both labeled and unlabeled examples. Theoretical analyses demonstrate good stability and generalizability of ReLISH. Using real-world datasets, our empirical analyses reveal that ReLISH is promising for both transductive and inductive tasks, when compared with representative algorithms, including Harmonic Functions, Local and Global Consistency, Constraint Metric Learning, Linear Neighborhood Propagation, and Manifold Regularization.
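The paper does not include code, but the family of graph-based semi-supervised methods it compares against can be illustrated concretely. Below is a minimal sketch of label propagation in the style of the Local and Global Consistency baseline named in the abstract (iterating F = αSF + (1 − α)Y over a normalized affinity matrix S); it is not the ReLISH algorithm itself, and the toy graph and parameter values are illustrative assumptions.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.99, n_iter=100):
    """Label propagation in the Local and Global Consistency style.

    W: (n, n) symmetric affinity matrix over the graph's nodes.
    Y: (n, c) one-hot initial labels; all-zero rows are unlabeled nodes.
    Returns the predicted class index for every node.
    """
    # Symmetrically normalize the affinity matrix: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt

    # Iterate F <- alpha * S F + (1 - alpha) * Y: each node's label
    # is smoothed toward its neighbors while staying anchored to Y.
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Toy example: two disconnected 3-node chains, one labeled node each.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
Y = np.zeros((6, 2))
Y[0, 0] = 1.0  # node 0 labeled class 0
Y[3, 1] = 1.0  # node 3 labeled class 1
print(label_propagation(W, Y))  # labels spread along each chain: [0 0 0 1 1 1]
```

The smoothness hypothesis is visible in the update rule: connected nodes are pushed toward the same label, which is exactly the property ReLISH refines with its local smoothness definition.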