hum
What Can Robots Teach Us About Trust and Reliance? An interdisciplinary dialogue between Social Sciences and Social Robotics
Wacquez, Julien, Zibetti, Elisabetta, Becker, Joffrey, Aloe, Lorenzo, Amadio, Fabio, Anzalone, Salvatore, Cañamero, Lola, Ivaldi, Serena
-- As robots find their way into more and more aspects of everyday life, questions around trust are becoming increasingly important. What does it mean to trust a robot? And how should we think about trust in relationships that involve both humans and non-human agents? While the field of Human-Robot Interaction (HRI) has made trust a central topic, the concept is often approached in fragmented ways. At the same time, established work in sociology, where trust has long been a key theme, is rarely brought into conversation with developments in robotics. This article argues that we need a more interdisciplinary approach. By drawing on insights from both social sciences and social robotics, we explore how trust is shaped, tested and made visible. Our goal is to open up a dialogue between disciplines and help build a more grounded and adaptable framework for understanding trust in the evolving world of human-robot interaction.
- North America > United States > Colorado > Boulder County > Boulder (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- (6 more...)
Helen Phillips's "Hum," Reviewed
"Hum," Helen Phillips's third novel, begins with a needle being drawn, steadily and irreversibly, across a woman named May's face. She is participating in a paid experiment in "adversarial tech," undergoing a procedure that will ever so slightly alter her features, making her harder for surveillance cameras to identify. As the book opens, May is mid-op, the needle advancing its "slender and relentless line of penetration" across her temple, toward the skin of her eyelid. What lies on the other side of the surgery? "Some sort of transformation, undeniable but undetectable," Phillips writes.
Deep Natural Language Feature Learning for Interpretable Prediction
Urrutia, Felipe, Buc, Cristian, Barriere, Valentin
We propose a general method to break down a main complex task into a set of intermediary, easier sub-tasks, which are formulated in natural language as binary questions related to the final target task. Our method allows for representing each example by a vector consisting of the answers to these questions. We call this representation Natural Language Learned Features (NLLF). NLLF is generated by a small transformer language model (e.g., BERT) that has been trained in a Natural Language Inference (NLI) fashion, using weak labels automatically obtained from a Large Language Model (LLM). We show that while the LLM normally struggles with the main task using in-context learning, it can handle these easier sub-tasks and produce useful weak labels to train a BERT. The NLI-like training of the BERT allows for tackling zero-shot inference with any binary question, not only those seen during training. We show that this NLLF vector not only helps to achieve better performance by enhancing any classifier, but can also be used as input to an easy-to-interpret machine learning model like a decision tree. This decision tree is interpretable yet reaches high performance, surpassing that of a pre-trained transformer in some cases. We have successfully applied this method to two completely different tasks: detecting incoherence in students' answers to open-ended mathematics exam questions, and screening abstracts for a systematic literature review of scientific papers on climate change and agroecology.
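The downstream half of the pipeline can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sub-questions and the 0/1 answer vectors are made-up stand-ins for what an NLI-trained BERT would actually produce, and only the final interpretable-classifier step is shown.

```python
# Sketch of the NLLF idea: each example is represented by binary answers
# to natural-language sub-questions, and an interpretable decision tree
# is fit on those answer vectors. Answers here are hard-coded stand-ins
# for the output of a BERT trained on LLM weak labels.
from sklearn.tree import DecisionTreeClassifier, export_text

SUB_QUESTIONS = [  # hypothetical binary sub-questions
    "Does the answer restate the question?",
    "Does the answer contain a numeric result?",
    "Does the answer contradict itself?",
]

# NLLF vectors: one 0/1 answer per sub-question, plus a final-task label.
X = [
    [1, 0, 1],  # incoherent answer
    [0, 1, 0],  # coherent answer
    [1, 1, 1],
    [0, 0, 0],
]
y = [1, 0, 1, 0]  # 1 = incoherent

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# The learned rules are human-readable in terms of the sub-questions.
print(export_text(tree, feature_names=SUB_QUESTIONS))
print(list(tree.predict(X)))  # perfectly separates this toy data
```

Because the features are answers to natural-language questions, the printed tree reads as a short checklist, which is the interpretability argument the abstract makes.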
- Africa > Sub-Saharan Africa (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Overview (0.89)
- Food & Agriculture > Agriculture (1.00)
- Education (1.00)
- Energy (0.94)
Knowing-how & Knowing-that: A New Task for Machine Comprehension of User Manuals
Liang, Hongru, Liu, Jia, Du, Weihong, Jin, Dingnan, Lei, Wenqiang, Wen, Zujie, Lv, Jiancheng
The machine reading comprehension (MRC) of user manuals has huge potential in customer service. However, current methods have trouble answering complex questions. Therefore, we introduce the Knowing-how & Knowing-that task, which requires the model to answer factoid-style, procedure-style, and inconsistent questions about user manuals. We resolve this task by jointly representing the steps and facts in a graph, TARA, which supports unified inference over various questions. Towards a systematic benchmarking study, we design a heuristic method to automatically parse user manuals into TARAs and build an annotated dataset to test the model's ability to answer real-world questions. Empirical results demonstrate that representing user manuals as TARAs is a desirable solution for the MRC of user manuals. An in-depth investigation of TARA further sheds light on the issues and broader impacts of future representations of user manuals. We hope our work can move the MRC of user manuals to a more complex and realistic stage.
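To make the "steps and facts in a graph" idea concrete, here is a toy sketch. The node names, the `next`/`facts` schema, and both lookup functions are assumptions for illustration only, not the paper's TARA format: a procedure-style question walks the step chain, while a factoid-style question reads the facts attached to a step.

```python
# Toy TARA-like graph: "step" nodes chained by a "next" relation,
# "fact" nodes attached to the steps they describe.
graph = {
    "step:insert_battery": {"next": "step:power_on", "facts": ["fact:battery_type"]},
    "step:power_on":       {"next": None,            "facts": ["fact:led_green"]},
    "fact:battery_type":   {"text": "Uses two AA batteries."},
    "fact:led_green":      {"text": "The LED turns green when the device is on."},
}

def procedure(start):
    """Answer a procedure-style question: ordered steps from `start`."""
    steps, node = [], start
    while node is not None:
        steps.append(node)
        node = graph[node]["next"]
    return steps

def factoid(step):
    """Answer a factoid-style question: facts attached to one step."""
    return [graph[f]["text"] for f in graph[step]["facts"]]

print(procedure("step:insert_battery"))
print(factoid("step:power_on"))
```

Holding both relation types in one graph is what lets a single inference routine serve both question styles, which is the unification the abstract claims for TARA.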
- Asia > China (0.04)
- North America > Dominican Republic (0.04)
- Europe > Portugal > Lisbon > Lisbon (0.04)
- Education (0.48)
- Information Technology (0.46)
The Fewer Splits are Better: Deconstructing Readability in Sentence Splitting
In this work, we focus on sentence splitting, a subfield of text simplification, motivated largely by the unproven idea that dividing a sentence into pieces should make it easier to understand. Our primary goal in this paper is to find out whether this is true. In particular, we ask: does it matter whether we break a sentence into two pieces or three? We report on our findings based on Amazon Mechanical Turk. More specifically, we introduce a Bayesian modeling framework to investigate to what degree a particular way of splitting the complex sentence affects readability, along with a number of other parameters adopted from diverse perspectives, including clinical linguistics and cognitive linguistics. The Bayesian modeling experiment provides clear evidence that bisecting the sentence improves readability to a greater degree than trisection does.
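The flavor of such a Bayesian comparison can be sketched with a simple beta-binomial model. The rating counts below are invented for illustration, and this is not the paper's model: given how often raters judged each variant easier to read, Monte Carlo samples from the two posteriors estimate the probability that bisection is genuinely more readable.

```python
# Beta-binomial sketch with made-up counts: estimate
# P(theta_bisect > theta_trisect) under uniform Beta(1, 1) priors.
import random

random.seed(0)
bisect_yes, bisect_n = 78, 100    # hypothetical "judged easier" counts
trisect_yes, trisect_n = 61, 100

samples = 20000
wins = sum(
    random.betavariate(1 + bisect_yes, 1 + bisect_n - bisect_yes)
    > random.betavariate(1 + trisect_yes, 1 + trisect_n - trisect_yes)
    for _ in range(samples)
)
print(f"P(bisection more readable) ~= {wins / samples:.3f}")
```

Unlike a bare accuracy comparison, the posterior probability reflects both the size of the gap and how much data backs it, which is the kind of "clear evidence" statement the abstract makes.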
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Guernsey > Alderney (0.04)
- (11 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.68)
- Information Technology > Artificial Intelligence > Natural Language > Grammars & Parsing (0.47)
Exploring Document-Level Literary Machine Translation with Parallel Paragraphs from World Literature
Thai, Katherine, Karpinska, Marzena, Krishna, Kalpesh, Ray, Bill, Inghilleri, Moira, Wieting, John, Iyyer, Mohit
Literary translation is a culturally significant task, but it is bottlenecked by the small number of qualified literary translators relative to the many untranslated works published around the world. Machine translation (MT) holds potential to complement the work of human translators by improving both training procedures and their overall efficiency. Literary translation is less constrained than more traditional MT settings since translators must balance meaning equivalence, readability, and critical interpretability in the target language. This property, along with the complex discourse-level context present in literary texts, also makes literary MT more challenging to computationally model and evaluate. To explore this task, we collect a dataset (Par3) of non-English language novels in the public domain, each aligned at the paragraph level to both human and automatic English translations. Using Par3, we discover that expert literary translators prefer reference human translations over machine-translated paragraphs at a rate of 84%, while state-of-the-art automatic MT metrics do not correlate with those preferences. The experts note that MT outputs contain not only mistranslations, but also discourse-disrupting errors and stylistic inconsistencies. To address these problems, we train a post-editing model whose output is preferred over normal MT output at a rate of 69% by experts. We publicly release Par3 at https://github.com/katherinethai/par3/ to spur future research into literary MT.
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > France > Provence-Alpes-Côte d'Azur > Bouches-du-Rhône > Marseille (0.04)
- Europe > Belgium > Brussels-Capital Region > Brussels (0.04)
- (16 more...)
Hum to Search: The ML Behind Google's New Feature
Ever had a tune stuck in your head but couldn't name the song? We've all been there. It doesn't go away until we hear the song again, and the frustration of that faint memory drives people to all kinds of tricks, such as humming the tune to friends in the hope that they can supply the song's name.
Google's New "Hum to Search" AI-Powered Feature to Search Song
Everything we can imagine now seems possible with the power of Artificial Intelligence. Have you ever wished you could find a song you once heard somewhere, with only the melody running through your mind? You might have asked your best friend, "What's that song that goes 'hum hum hum hum hum'?" only for your friend (a human being) to fail to name it. With its new AI-powered "Hum to Search" feature, Google can now identify the song playing in your head where a fellow human cannot.
How Google Is Using AI & ML To Improve Search Experience
Recently, developers at Google detailed how they have been using artificial intelligence and machine learning to improve the search experience. The announcements were made during the Search On 2020 event, where the tech giant unveiled several AI enhancements that will improve search results in the coming years. In 2018, the company introduced BERT (Bidirectional Encoder Representations from Transformers), a neural network-based technique for natural language processing (NLP) pre-training. Last year, it showed how BERT-based language understanding systems help deliver more relevant results in Google Search. Since then, there have been improvements in many areas, including the engine's language understanding and its handling of search queries.