Decoding individual words from non-invasive brain recordings across 723 participants
d'Ascoli, Stéphane, Bel, Corentin, Rapin, Jérémy, Banville, Hubert, Benchetrit, Yohann, Pallier, Christophe, King, Jean-Rémi
Deep learning has recently enabled the decoding of language from the neural activity of a few participants with electrodes implanted inside their brain. However, reliably decoding words from non-invasive recordings remains an open challenge. To tackle this issue, we introduce a novel deep learning pipeline to decode individual words from non-invasive electro- (EEG) and magneto-encephalography (MEG) signals. We train and evaluate our approach on an unprecedentedly large number of participants (723) exposed to five million words either written or spoken in English, French or Dutch. Our model outperforms existing methods consistently across participants, devices, languages, and tasks, and can decode words absent from the training set. Our analyses highlight the importance of the recording device and experimental protocol: MEG and reading are easier to decode than EEG and listening, respectively, and it is preferable to collect a large amount of data per participant rather than to repeat stimuli across a large number of participants. Furthermore, decoding performance consistently increases with the amount of (i) data used for training and (ii) data used for averaging during testing. Finally, single-word predictions show that our model effectively relies on word semantics but also captures syntactic and surface properties such as part-of-speech, word length and even individual letters, especially in the reading condition. Overall, our findings delineate the path and remaining challenges towards building non-invasive brain decoders for natural language.
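The benefit of averaging test data can be illustrated with a toy simulation (my own sketch with made-up numbers, not the paper's pipeline): decoding a noisy response pattern by nearest-template matching improves as more repetitions of the same stimulus are averaged, because averaging shrinks the noise while the signal stays put.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, noise = 20, 3.0
# One "true" neural response pattern per word (here, just two words).
templates = rng.normal(size=(2, dim))

def decode_accuracy(n_avg, n_trials=500):
    """Accuracy of nearest-template decoding after averaging n_avg trials."""
    correct = 0
    for _ in range(n_trials):
        label = rng.integers(2)
        # n_avg noisy repetitions of the same stimulus, then averaged.
        trials = templates[label] + noise * rng.normal(size=(n_avg, dim))
        avg = trials.mean(axis=0)
        pred = np.argmin(((templates - avg) ** 2).sum(axis=1))
        correct += pred == label
    return correct / n_trials

for n in (1, 4, 16):
    print(n, decode_accuracy(n))
```

Accuracy climbs with the number of averaged repetitions, mirroring the paper's observation that decoding performance increases with the amount of data used for averaging during testing.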
- Europe > United Kingdom > England (0.04)
- Europe > France > Auvergne-Rhône-Alpes > Puy-de-Dôme > Clermont-Ferrand (0.04)
- Europe > Belgium > Flanders > Flemish Brabant > Leuven (0.04)
Reviews: Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language
After rebuttal comments:
* Readability: I trust the authors to update the paper based on my suggestions (as they agreed to in their rebuttal).
* Experiments: For AttrGAN they did change the weight sweep, and for SISGAN they used the same hyperparameters as in their own method (which I would object to in general, but since the authors took most of their hyperparameters from DCGAN, this does not create an unfair advantage). I expect the additional details of the experimental results to be added to the paper (as supplementary material).
* Content preservation: content that is not relevant to the text should not change; to avoid changing too much of the image, the method uses local discriminators that learn the presence of individual visual attributes.
What should I say? -- Interacting with AI and Natural Language Interfaces
As Artificial Intelligence (AI) technology becomes more and more prevalent, it becomes increasingly important to explore how we as humans interact with AI. The Human-AI Interaction (HAI) sub-field has emerged from the Human-Computer Interaction (HCI) field and aims to examine this very notion. Many interaction patterns have been implemented without fully understanding the changes in required cognition, or the cognitive-science implications of using these alternative interfaces that aim to be more human-like in nature. Prior research suggests that theory of mind representations are crucial to successful and effortless communication; however, very little is understood about how theory of mind representations are established when interacting with AI.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.34)
- Personal > Interview (0.30)
The Architecture of a Biologically Plausible Language Organ
Mitropolsky, Daniel, Papadimitriou, Christos H.
We present a simulated biologically plausible language organ, made up of stylized but realistic neurons, synapses, brain areas, plasticity, and a simplified model of sensory perception. We show through experiments that this model succeeds in an important early step in language acquisition: the learning of nouns, verbs, and their meanings, from the grounded input of only a modest number of sentences. Learning in this system is achieved through Hebbian plasticity, and without backpropagation. Our model goes beyond a parser previously designed in a similar environment, with the critical addition of a biologically plausible account of how language can be acquired in the infant's brain, not just processed by a mature brain.
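The Hebbian rule the abstract refers to can be sketched in a few lines (a generic illustration of the rule, not the authors' simulator): synapses between co-active pre- and postsynaptic neurons are strengthened, with no error signal propagated backward.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 8, 4
# Small random initial synaptic weights, postsynaptic x presynaptic.
w = rng.uniform(0.0, 0.1, size=(n_post, n_pre))

def hebbian_step(w, pre, post, eta=0.1):
    # "Fire together, wire together": strengthen only the synapses whose
    # presynaptic AND postsynaptic neurons were both active.
    return w + eta * np.outer(post, pre)

pre = np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=float)   # active input neurons
post = np.array([1, 0, 0, 1], dtype=float)              # neurons that fired
for _ in range(5):
    w = hebbian_step(w, pre, post)

print(w[0, 0])  # co-active pair: weight has grown
print(w[0, 2])  # inactive presynaptic neuron: weight unchanged
```

Each update is purely local to the two neurons a synapse connects, which is what makes the rule biologically plausible in contrast to backpropagation.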
- North America > United States > New York > New York County > New York City (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Ireland > Connaught > County Galway > Galway (0.04)
- Europe > Hungary (0.04)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.94)
- Information Technology > Artificial Intelligence > Cognitive Science > Neuroscience (0.68)
Weaviate is an open-source search engine powered by ML, vectors, graphs, and GraphQL
Bob van Luijt's career in technology started at age 15, building websites to help people sell toothbrushes online. Not many 15-year-olds do that. Apparently, this gave van Luijt enough of a head start to arrive at the confluence of today's technology trends. Van Luijt went on to study the arts but ended up working full time in technology anyway. In 2015, when Google introduced its RankBrain algorithm, the quality of search results jumped.
Language: Dogs pick up on individual words in a similar way to human babies, study finds
Dogs are able to pick up on individual words in sentences spoken to them using similar computations and brain regions to those of human babies, a study has found. As infants, we learn to spot new words in a stream of speech before we learn what each individual word means. To tell where one word ends and another begins, babies use complex calculations that keep track of which syllables appear together, and thus likely form words. Using a combination of brain-imaging techniques, researchers at Hungary's Eötvös Loránd University have shown that dogs are capable of similar feats. This is the first time that the capacity for so-called statistical learning has been demonstrated in a non-human mammal.
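The "complex calculations" here are transitional probabilities between syllables, the classic statistical-learning computation. A toy sketch (made-up syllables, not the study's stimuli): syllables inside a word always follow one another, while syllables across a word boundary do so only sometimes, and that difference is enough to segment the stream.

```python
import random
from collections import Counter

random.seed(0)

# Two "words" made of syllables, concatenated into one continuous stream
# with no pauses, as in statistical-learning experiments.
words = [["ba", "bu"], ["go", "la", "tu"]]
stream = [syll for _ in range(200) for syll in random.choice(words)]

pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def transitional_prob(a, b):
    # Estimated probability that syllable b immediately follows syllable a.
    return pairs[(a, b)] / firsts[a]

print(transitional_prob("ba", "bu"))  # within a word: high (1.0 here)
print(transitional_prob("bu", "go"))  # across a word boundary: low
```

Dips in transitional probability mark likely word boundaries, which is how a learner can find words before knowing any meanings.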
- Europe > Hungary (0.25)
- North America > United States > New York > Suffolk County > Stony Brook (0.05)
Lexico-semantic and affective modelling of Spanish poetry: A semi-supervised learning approach
Barbado, Alberto, González, María Dolores, Carrera, Débora
Text classification has improved substantially in recent years through the use of transformers. However, most research focuses on prose, with poetry receiving less attention, especially for the Spanish language. In this paper, we propose a semi-supervised learning approach for inferring 21 psychological categories evoked by a corpus of 4572 sonnets, along with 10 affective and lexico-semantic multiclass ones. The subset of poems used for training and evaluation comprises 270 sonnets. With our approach, we achieve an AUC above 0.7 for 76% of the psychological categories, and an AUC above 0.65 for 60% of the multiclass ones. The sonnets are modelled with transformers, through sentence embeddings, combined with lexico-semantic and affective features obtained from external lexicons. This combination provides an AUC increase of up to 0.12 over using transformers alone.
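The general recipe, sentence embeddings concatenated with lexicon-derived features and scored by ROC AUC, can be sketched with toy stand-ins (random vectors in place of transformer embeddings, a single made-up lexicon score, and a hand-rolled AUC; none of this is the authors' actual data or model):

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 toy "sonnets": a 16-dim embedding each (standing in for transformer
# sentence embeddings) plus one lexicon-derived affective score.
n, d = 50, 16
labels = rng.integers(0, 2, size=n)                      # one binary category
emb = rng.normal(size=(n, d)) + 0.2 * labels[:, None]    # weak signal
lexicon = 0.8 * labels + rng.normal(scale=0.5, size=n)   # stronger signal

def auc(scores, y):
    # ROC AUC as the probability that a random positive outranks
    # a random negative.
    pos, neg = scores[y == 1], scores[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

score_emb = emb.mean(axis=1)      # crude score from embeddings alone
score_both = score_emb + lexicon  # embeddings plus lexicon features
print(auc(score_emb, labels), auc(score_both, labels))
```

On this synthetic data the combined score yields a higher AUC than the embedding score alone, mirroring the qualitative claim that adding lexico-semantic and affective features helps.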
- Europe > Spain > Community of Madrid > Madrid (0.04)
- North America > United States > Colorado (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
Projects to Learn Natural Language Processing - Analytics Vidhya
Machines understanding language fascinates me, and I often ponder which algorithms Aristotle would have used to build a rhetorical analysis machine if he had had the chance. If you're new to Data Science, getting into NLP can seem complicated, especially given the many recent advancements in the field. While a computer can be quite good at finding patterns and summarizing documents, it must transform words into numbers before it can make sense of them. This transformation is necessary because math doesn't work well on words, and machines "learn" through mathematics. Before words can be transformed into numbers, the data must be cleaned.
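A minimal illustration of that words-to-numbers transformation, using only the standard library (a toy bag-of-words count, standing in for real vectorizers such as scikit-learn's CountVectorizer):

```python
from collections import Counter

docs = ["the cat sat on the mat", "the dog sat"]

# Build a shared vocabulary, then represent each document as a vector
# of word counts over that vocabulary.
vocab = sorted({w for d in docs for w in d.split()})
vectors = [[Counter(d.split())[w] for w in vocab] for d in docs]

print(vocab)    # ['cat', 'dog', 'mat', 'on', 'sat', 'the']
print(vectors)  # [[1, 0, 1, 1, 1, 2], [0, 1, 0, 0, 1, 1]]
```

Once every document is a vector of numbers, the usual mathematical machinery (distances, dot products, classifiers) applies.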
Creating a Reverse Dictionary - DZone AI
In this article, we are going to see how to use Word2Vec to create a reverse dictionary. We will use Word2Vec, but the same results can be achieved with any word-embeddings model. Don't worry if you do not know what any of this means; we are going to explain it. A reverse dictionary is simply a dictionary in which you input a definition and get back the word that matches it. You can find the code in the companion repository. Natural Language Processing is a great field: we find it very interesting, and our clients need to use it in their applications. We wrote an explanatory article about it, Analyze and Understand Text: Guide to Natural Language Processing. Now we want to write more practical ones to help you use it in your projects.
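Here is a schematic of the idea, with tiny made-up vectors standing in for trained Word2Vec embeddings (all words and numbers below are invented for illustration): average the vectors of the words in the definition, then return the candidate word whose embedding is closest to that average.

```python
import numpy as np

# Tiny hand-made embeddings standing in for a trained Word2Vec model.
emb = {
    "feline": np.array([1.0, 0.1, 0.0]),
    "pet":    np.array([0.8, 0.3, 0.1]),
    "cat":    np.array([0.9, 0.2, 0.0]),
    "car":    np.array([0.0, 0.1, 1.0]),
    "drive":  np.array([0.1, 0.0, 0.9]),
}
candidates = ["cat", "car"]  # words the reverse dictionary may return

def reverse_lookup(definition):
    # Average the definition's word vectors, then return the candidate
    # whose embedding has the highest cosine similarity to that average.
    words = [w for w in definition.lower().split() if w in emb]
    query = np.mean([emb[w] for w in words], axis=0)
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(candidates, key=lambda w: cos(query, emb[w]))

print(reverse_lookup("a feline pet"))  # -> "cat"
```

With real embeddings you would load a pretrained model and search the whole vocabulary instead of a two-word candidate list, but the averaging-and-nearest-neighbor logic is the same.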