A generative framework to bridge data-driven models and scientific theories in language neuroscience

Antonello, Richard, Singh, Chandan, Jain, Shailee, Hsu, Aliyah, Gao, Jianfeng, Yu, Bin, Huth, Alexander

arXiv.org Artificial Intelligence

Data-driven deep learning models can be highly predictive, but they are not scientific theories that describe the world in natural language. Instead, they are implemented as vast neural networks with millions or billions of largely inscrutable parameters. One emblematic field is language neuroscience, where large language models (LLMs) are highly effective at predicting human brain responses to natural language but are virtually impossible to interpret or analyze by hand [4-10]. To overcome this challenge, we introduce the generative explanation-mediated validation (GEM-V) framework. GEM-V translates deep learning models of language selectivity in the brain into concise verbal explanations, and then designs follow-up experiments to verify that these explanations are causally related to brain activity.


PLUGH: A Benchmark for Spatial Understanding and Reasoning in Large Language Models

Tikhonov, Alexey

arXiv.org Artificial Intelligence

We present PLUGH (https://www.urbandictionary.com/define.php?term=plugh), a modern benchmark that currently consists of 5 tasks, each with 125 input texts extracted from 48 different games and representing 61 different (non-isomorphic) spatial graphs to assess the abilities of Large Language Models (LLMs) for spatial understanding and reasoning. Our evaluation of API-based and open-sourced LLMs shows that while some commercial LLMs exhibit strong reasoning abilities, open-sourced competitors can demonstrate almost the same level of quality; however, all models still have significant room for improvement. We identify typical reasons for LLM failures and discuss possible ways to deal with them. Datasets and evaluation code are released (https://github.com/altsoph/PLUGH).
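As a concrete illustration of what such an evaluation involves, here is a minimal sketch of scoring direction questions against an explicit spatial graph. The graph, the question format, and the scoring rule are invented for illustration and are not taken from the PLUGH tasks.

```python
# Toy spatial graph: rooms connected by directional moves (names invented).
SPATIAL_GRAPH = {
    "hall": {"north": "library", "east": "garden"},
    "library": {"south": "hall"},
    "garden": {"west": "hall"},
}

def gold_answer(start, direction):
    """Ground-truth answer to 'What lies <direction> of <start>?'"""
    return SPATIAL_GRAPH.get(start, {}).get(direction, "nowhere")

def score(model_answers, questions):
    """Fraction of (start, direction) questions a model answered correctly."""
    correct = sum(
        1 for q, a in zip(questions, model_answers)
        if a.strip().lower() == gold_answer(*q)
    )
    return correct / len(questions)
```

Because the gold answers are derived mechanically from the graph, any free-text model output can be checked without human grading, which is the property that makes a benchmark of this kind scalable.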


Narrative to Trajectory (N2T+): Extracting Routes of Life or Death from Human Trafficking Text Corpora

Karabatis, Saydeh N., Janeja, Vandana P.

arXiv.org Artificial Intelligence

Climate change and political unrest in certain regions of the world are imposing extreme hardship on many communities and forcing millions of vulnerable people to abandon their homelands and seek refuge in safer lands. As international laws are not fully equipped to deal with the migration crisis, people rely on networks of exploitative smugglers to escape the devastation and live in stability. During the smuggling journey, migrants can become victims of human trafficking if they fail to pay the smuggler, and they may be forced into coerced labor. Government agencies and anti-trafficking organizations try to identify trafficking routes based on the stories of survivors in order to gain knowledge and help prevent such crimes. In this paper, we propose a system called Narrative to Trajectory (N2T+), which extracts the trajectories of trafficking routes. N2T+ uses data science and natural language processing techniques to analyze trafficking narratives, automatically extract relevant location names, disambiguate possible name ambiguities, and plot the trafficking route on a map. In a comparative evaluation, we show that the proposed multi-dimensional approach achieves significantly higher geolocation detection accuracy than other state-of-the-art techniques.
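The extract-disambiguate-plot pipeline described above can be sketched roughly as follows. The gazetteer entries, the context-based disambiguation rule, and the whitespace tokenization are simplified stand-ins for illustration, not N2T+'s actual components.

```python
# Invented gazetteer: a place name may map to several candidate senses.
GAZETTEER = {
    "Tripoli": [("Tripoli, Libya", (32.89, 13.19)),
                ("Tripoli, Lebanon", (34.44, 35.85))],
    "Lampedusa": [("Lampedusa, Italy", (35.50, 12.61))],
}

def disambiguate(name, context):
    """Pick the gazetteer sense whose country word appears in the context."""
    candidates = GAZETTEER.get(name, [])
    if not candidates:
        return None
    for full_name, coords in candidates:
        country = full_name.split(", ")[-1]
        if country.lower() in context.lower():
            return full_name, coords
    return candidates[0]  # fall back to the first listed sense

def extract_route(narrative):
    """Return the ordered list of resolved locations mentioned in the text."""
    route = []
    for token in narrative.replace(",", " ").split():
        hit = disambiguate(token, narrative)
        if hit:
            route.append(hit[0])
    return route
```

The ordered output is what would then be plotted on a map; the real system additionally handles multi-word names, spelling variants, and narrative-level context that a simple lookup cannot.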


MANGO: A Benchmark for Evaluating Mapping and Navigation Abilities of Large Language Models

Ding, Peng, Fang, Jiading, Li, Peng, Wang, Kangrui, Zhou, Xiaochen, Yu, Mo, Li, Jing, Walter, Matthew R., Mei, Hongyuan

arXiv.org Artificial Intelligence

Large language models such as ChatGPT and GPT-4 have recently achieved astonishing performance on a variety of natural language processing tasks. In this paper, we propose MANGO, a benchmark to evaluate their capabilities to perform text-based mapping and navigation. Our benchmark includes 53 mazes taken from a suite of text games: each maze is paired with a walkthrough that visits every location but does not cover all possible paths. The task is question answering: for each maze, a large language model reads the walkthrough and answers hundreds of mapping and navigation questions such as "How should you go to Attic from West of House?" and "Where are we if we go north and east from Cellar?". Although these questions are easy for humans, it turns out that even GPT-4, the best language model to date, performs poorly at answering them. Further, our experiments suggest that a strong mapping and navigation ability would benefit large language models in performing relevant downstream tasks, such as playing text games. Our MANGO benchmark will facilitate future research on methods that improve the mapping and navigation capabilities of language models. We host our leaderboard, data, code, and evaluation program at https://mango.ttic.edu and https://github.com/oaklight/mango/.
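The walkthrough-to-map setup lends itself to a simple symbolic reference solver: build a graph from the walkthrough steps, then answer the first question type by breadth-first search and the second by direct lookup. The steps below loosely echo the room names quoted above but are invented for illustration; they do not reproduce MANGO's data.

```python
from collections import deque

# Invented walkthrough steps: (current room, move, resulting room).
WALKTHROUGH = [
    ("West of House", "north", "North of House"),
    ("North of House", "east", "Behind House"),
    ("Behind House", "west", "Kitchen"),
    ("Kitchen", "up", "Attic"),
]

def build_map(steps):
    """Turn walkthrough steps into an adjacency map: room -> {move: room}."""
    graph = {}
    for src, move, dst in steps:
        graph.setdefault(src, {})[move] = dst
    return graph

def answer_destination(graph, start, moves):
    """'Where are we if we go <moves> from <start>?'"""
    here = start
    for move in moves:
        here = graph.get(here, {}).get(move)
        if here is None:
            return "unknown"
    return here

def answer_route(graph, start, goal):
    """'How should you go to <goal> from <start>?' via breadth-first search."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        here, route = queue.popleft()
        if here == goal:
            return route
        for move, dst in graph.get(here, {}).items():
            if dst not in seen:
                seen.add(dst)
                queue.append((dst, route + [move]))
    return None
```

A solver like this only sees paths the walkthrough traversed; part of what makes the benchmark hard is that some questions require composing moves the walkthrough never took in sequence.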


Large Language Models Relearn Removed Concepts

Lo, Michelle, Cohen, Shay B., Barez, Fazl

arXiv.org Artificial Intelligence

Advances in model editing through neuron pruning hold promise for removing undesirable concepts from large language models. However, it remains unclear whether models have the capacity to reacquire pruned concepts after editing. To investigate this, we evaluate concept relearning in models by tracking concept saliency and similarity in pruned neurons during retraining. Our findings reveal that models can quickly regain performance post-pruning by relocating advanced concepts to earlier layers and reallocating pruned concepts to primed neurons with similar semantics. This demonstrates that models exhibit polysemantic capacities and can blend old and new concepts in individual neurons. While neuron pruning provides interpretability into model concepts, our results highlight the challenges of permanent concept removal for improved model safety. Monitoring concept reemergence and developing techniques to mitigate relearning of unsafe concepts will be important directions for more robust model editing. Overall, our work strongly demonstrates the resilience and fluidity of concept representations in LLMs post concept removal.
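The saliency-tracking idea can be shown in a toy form: score each neuron's weight vector against a "concept direction" by cosine similarity, zero out the most aligned neurons, and re-measure after retraining. The vectors and the pruning rule below are invented stand-ins; the paper works with real LLM neurons and actual retraining.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def concept_saliency(neurons, concept):
    """Per-neuron alignment of weight vectors with a concept direction."""
    return {name: cosine(w, concept) for name, w in neurons.items()}

def prune_top(neurons, concept, k=1):
    """Zero out the k neurons most aligned with the concept."""
    saliency = concept_saliency(neurons, concept)
    ranked = sorted(neurons, key=lambda n: -saliency[n])
    return {n: ([0.0] * len(w) if n in ranked[:k] else w)
            for n, w in neurons.items()}
```

Tracking `concept_saliency` across retraining checkpoints is the kind of measurement that would reveal a pruned concept reappearing in a different, previously low-saliency neuron.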


Improving the Quality of Neural Machine Translation Through Proper Translation of Name Entities

Sharma, Radhika, Katyayan, Pragya, Joshi, Nisheeth

arXiv.org Artificial Intelligence

In this paper, we present a method for improving the quality of neural machine translation by translating or transliterating named entities as a preprocessing step. Through experiments, we show the performance gain of our system. For evaluation, we considered three types of named entities: person names, location names, and organization names. The system was able to correctly translate almost all of the named entities. For person names the accuracy was 99.86%, for location names 99.63%, and for organization names 99.05%. Overall, the accuracy of the system was 99.52%.
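A minimal sketch of this kind of preprocessing step, assuming a bilingual named-entity table consulted before the text reaches the translation model. The table entries below are invented examples, not the paper's data, and the language pair is assumed for illustration.

```python
# Hypothetical bilingual named-entity table (source form -> target form).
NE_TABLE = {
    "New Delhi": "नई दिल्ली",
    "Radhika": "राधिका",
}

def preprocess(sentence):
    """Replace known named entities with their target-language forms,
    so the downstream MT system need not translate them itself."""
    for src, tgt in NE_TABLE.items():
        sentence = sentence.replace(src, tgt)
    return sentence
```

The design intuition is that named entities are a closed lookup problem where an MT model is most likely to hallucinate, so resolving them deterministically before decoding removes a whole error class.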


TopoBERT: Plug and Play Toponym Recognition Module Harnessing Fine-tuned BERT

Zhou, Bing, Zou, Lei, Hu, Yingjie, Qiang, Yi, Goldberg, Daniel

arXiv.org Artificial Intelligence

Extracting precise geographical information from textual content is crucial in a plethora of applications. For example, during hazardous events, a robust and unbiased toponym extraction framework can provide an avenue to tie the location concerned to the topic discussed by news media posts and to pinpoint humanitarian help requests or damage reports from social media. Early studies have leveraged rule-based, gazetteer-based, deep learning, and hybrid approaches to address this problem. However, the performance of existing tools is deficient in supporting operations like emergency rescue, which rely on fine-grained, accurate geographic information. Emerging pretrained language models can better capture the underlying characteristics of text, including place names, offering a promising pathway to optimize toponym recognition for practical applications. In this paper, TopoBERT, a toponym recognition module based on a one-dimensional Convolutional Neural Network (CNN1D) and Bidirectional Encoder Representations from Transformers (BERT), is proposed and fine-tuned. Three datasets (CoNLL2003-Train, Wikipedia3000, WNUT2017) are leveraged to tune the hyperparameters, discover the best training strategy, and train the model. Two additional datasets (CoNLL2003-Test and Harvey2017) are used to evaluate performance. Three distinct classifiers, linear, multi-layer perceptron, and CNN1D, are benchmarked to determine the optimal model architecture. TopoBERT achieves state-of-the-art performance (F1-score = 0.865) compared to the other five baseline models and can be applied to diverse toponym recognition tasks without additional training.
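To make the CNN1D-head idea concrete, here is a toy one-dimensional convolution over per-token feature scores followed by a threshold decision. TopoBERT applies its CNN1D to BERT token representations with learned kernels; the scalar features, single kernel, and threshold here are invented for illustration.

```python
def conv1d(sequence, kernel):
    """Valid-mode 1-D convolution over a list of scalar token features."""
    k = len(kernel)
    return [sum(sequence[i + j] * kernel[j] for j in range(k))
            for i in range(len(sequence) - k + 1)]

def tag_tokens(features, kernel, threshold=0.5):
    """Label each convolved position as toponym ('B-LOC') or not ('O')."""
    scores = conv1d(features, kernel)
    return ["B-LOC" if s > threshold else "O" for s in scores]
```

The point of a convolutional head over a linear one is that each decision can pool evidence from a window of neighboring tokens, which helps with multi-word place names.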


Hierarchical Bayesian Model for the Transfer of Knowledge on Spatial Concepts based on Multimodal Information

Hagiwara, Yoshinobu, Taguchi, Keishiro, Ishibushi, Satoshi, Taniguchi, Akira, Taniguchi, Tadahiro

arXiv.org Artificial Intelligence

This paper proposes a hierarchical Bayesian model based on spatial concepts that enables a robot to transfer knowledge of places from experienced environments to a new environment. The transfer of knowledge based on spatial concepts is modeled as the calculation of the posterior distribution from the observations obtained in each environment, with the parameters of spatial concepts generalized across environments serving as prior knowledge. We conducted experiments to evaluate the generalization performance of spatial knowledge for general places such as kitchens and the adaptive performance of spatial knowledge for unique places such as "Emma's room" in a new environment. In the experiments, the accuracies of the proposed method and conventional methods were compared on the task of predicting location names from an image and a position, and on the task of predicting positions from a location name. The experimental results demonstrated that the proposed method achieves higher prediction accuracy for location names and positions than the conventional methods, owing to the transfer of knowledge.
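Stripped to its essentials, the transfer idea is a Bayesian update: counts aggregated from experienced environments act as a prior over location names, and observations in the new environment update the posterior. This is a deliberately simplified stand-in for the paper's hierarchical multimodal model, with invented counts.

```python
from collections import Counter

def posterior(prior_counts, new_observations, smoothing=1.0):
    """Posterior over location names: prior pseudo-counts from old
    environments plus observed names in the new environment,
    with additive smoothing, normalized to a distribution."""
    counts = Counter(prior_counts)
    counts.update(new_observations)
    total = sum(counts.values()) + smoothing * len(counts)
    return {name: (c + smoothing) / total for name, c in counts.items()}
```

With this structure, common places ("kitchen") are recognized quickly in a new environment because the prior is strong, while unique places ("Emma's room") are acquired from the new observations alone.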


Facebook trains AI to generate worlds in a fantasy text adventure

#artificialintelligence

Tools like Promethean AI, which tap machine learning to generate scenes, promise to ease the design burden somewhat. That's why researchers at Facebook, the University of Lorraine, and University College London investigated an AI approach to creating game worlds in a preprint research paper. Using content from LIGHT, a fantasy text-based multiplayer adventure, they designed models that could compositionally arrange locations and characters and generate new content on the fly. "We show how [machine learning] algorithms can learn to assemble … different elements, arranging locations and populating them with characters and objects," wrote the study's coauthors. "[Furthermore, we] demonstrate that these … tools can aid humans interactively in designing new game environments."


Location reference identification from tweets during emergencies: A deep learning approach

Kumar, Abhinav, Singh, Jyoti Prakash

arXiv.org Machine Learning

Twitter has recently been used during crises to communicate with officials and to provide rescue and relief operations in real time. The geographical location of the event, as well as of the users, is vitally important in such scenarios. Identifying geographic locations is a challenging task, as location information fields such as the user location and the place name of tweets are not reliable. Extracting location information from tweet text is difficult because it contains a lot of nonstandard English, grammatical errors, spelling mistakes, nonstandard abbreviations, and so on. This research aims to extract the location words used in a tweet with a Convolutional Neural Network (CNN) based model. We achieved an exact matching score of 0.929 and a Hamming loss of 0.002. Our model was able to extract even three- to four-word-long location references, which is also evident from the exact matching score of over 92%. The findings of this paper can help in early event localization, emergency situations, real-time road traffic management, localized advertisement, and various location-based services.

Keywords: Location references, Tweets, Geo-locations, Named entity recognition, Gazetteer, Convolutional Neural Network

Preprint submitted to Elsevier, January 25, 2019.

1. Introduction. Tweets are very responsive to real-world events and are sometimes even more immediate than traditional news channels. It is therefore possible to keep track of the latest information by following tweets. News has on several occasions been reported first on Twitter, such as the airplane crash on the Hudson River in New York in 2009 (Sakaki et al., 2013) and the death of former British Prime Minister Margaret Thatcher in April 2013 (Sakaki et al., 2013; Singh et al., 2017; Yuan & Liu, 2018). In an American Red Cross survey, individuals were asked whom they contacted in an emergency.
The estimation and detection of the location information of events and users from tweets is a major concern for the above-mentioned tasks. Twitter provides three location information fields for sharing a user's location: (1) user location; (2) place name; and (3) geo-coordinates. The user location field offers 140 characters (previously limited to 30) in which users can enter their home location when creating a profile. This field is optional, and users can write any arbitrary words or leave it blank.
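The exact matching score reported above can be illustrated with a toy implementation: a tweet counts as correct only if the full set of (possibly multi-word) location references is recovered, with no partial credit. The example predictions and labels are invented.

```python
def exact_match_score(predictions, gold):
    """Fraction of tweets whose predicted set of location references
    exactly equals the gold-standard set (no partial credit)."""
    hits = sum(1 for p, g in zip(predictions, gold) if set(p) == set(g))
    return hits / len(gold)
```

This strictness is why the metric supports the claim about multi-word references: recovering only "Hudson" out of "Hudson River" would score zero for that tweet.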