multiple senses
Analyzing Polysemy Evolution Using Semantic Cells
Ohsawa, Yukio, Xue, Dingming, Sekiguchi, Kaira
The senses of words evolve. The sense of the same word may change from today to tomorrow, and multiple senses of the same word may be the result of the evolution of each other; that is, they may be parents and children. If we view language as an evolving ecosystem, the paradigm of learning a correct answer that does not move with the sense of a word is no longer valid. This paper is a case study showing that word polysemy is an evolutionary consequence of modifying Semantic Cells, which the author has already presented, by introducing a small amount of diversity into their initial state, demonstrated by analyzing a set of short sentences. In particular, analyzing sequences of 1000 sentences, collected using ChatGPT, that cover the four senses of the word "Spring" in varying orders shows that the word acquires polysemy most monotonically when the senses are arranged in the order in which they evolved. In other words, we present a method for analyzing the dynamism of a word's acquisition of polysemy through evolution and, at the same time, a methodology for viewing polysemy through an evolutionary framework rather than a learning-based one. (A toy numerical sketch of this analysis follows the tags below.)
- Africa > South Sudan > Equatoria > Central Equatoria > Juba (0.24)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > Massachusetts > Middlesex County > Reading (0.04)
- Asia > Japan > Honshū > Tōhoku (0.04)
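To make the analysis above concrete, here is a toy sketch of how "acquired polysemy" might be tracked over a 1000-sentence stream covering four senses of "Spring". Everything specific to the code (the Gaussian stand-in senses, the 8-dimensional vectors, and the variance-based polysemy measure) is an assumption made for illustration, not the paper's procedure.

```python
# Hypothetical illustration of the experiment described above: feed sentence
# embeddings for each sense of "Spring" in a chosen order and track how much
# "polysemy" (here: total variance of the accumulated vectors) the word has
# acquired after each sentence. The Gaussian senses and the variance measure
# are assumptions; only the 1000-sentence, four-sense setup mirrors the entry.
import numpy as np

rng = np.random.default_rng(0)

def sense_sentences(center, n=250, dim=8, spread=0.1):
    """Toy stand-in for sentence embeddings drawn from one sense of a word."""
    return center + spread * rng.standard_normal((n, dim))

def polysemy_curve(sentence_blocks):
    """Acquired polysemy after each sentence, measured as the trace of the
    covariance of all context vectors seen so far (an assumed proxy)."""
    seen, curve = [], []
    for block in sentence_blocks:
        for vec in block:
            seen.append(vec)
            arr = np.asarray(seen)
            curve.append(float(np.trace(np.cov(arr.T))) if len(seen) > 1 else 0.0)
    return curve

# Four toy senses of "Spring" (season, water source, coil, to jump),
# presented in one fixed order; reordering the blocks changes the curve.
centers = [rng.standard_normal(8) for _ in range(4)]
blocks = [sense_sentences(c) for c in centers]

curve = polysemy_curve(blocks)
print(f"polysemy after 250/500/750/1000 sentences: "
      f"{curve[249]:.2f} / {curve[499]:.2f} / {curve[749]:.2f} / {curve[999]:.2f}")
```

Comparing the curves produced by different orderings of the four blocks is the toy analogue of the paper's claim that polysemy grows most monotonically under the evolutionary ordering.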
Semantic Cells: Evolutional Process to Acquire Sense Diversity of Items
Ohsawa, Yukio, Xue, Dingming, Sekiguchi, Kaira
Previous models for learning the semantic vectors of items and their groups, such as words, sentences, nodes, and graphs, using distributed representations have assumed that the basic sense of an item corresponds to one vector, whose dimensions correspond to hidden contexts in the target real world, and that multiple senses of the item are then obtained by conforming to lexical databases or adapting to the context. However, an item may have multiple senses that are hardly assimilated and that change or evolve dynamically with contextual shifts, even within a single document or a restricted period. This process resembles the evolution or adaptation of a living entity with/to environmental shifts. Setting the scope of disambiguation of items for sensemaking, the author presents a method in which a word or item in the data embraces multiple semantic vectors that evolve via interaction with others, similar to a cell embracing chromosomes that cross over with each other. Two preliminary results were obtained: (1) the role of a word that evolves to acquire the largest or lower-middle variance of semantic vectors tends to be explainable by the author of the text; and (2) the epicenters of earthquakes that acquire larger variance via crossover, corresponding to interaction with diverse areas of the land crust, are likely to correspond to the epicenters of forthcoming large earthquakes. (A minimal sketch of the semantic-cell idea follows the tags below.)
Keywords: evolutionary computing, disambiguation, items, words, earthquakes
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- North America > United States > Massachusetts > Middlesex County > Reading (0.04)
- Asia > Middle East > Qatar > Ad-Dawhah > Doha (0.04)
- Asia > Japan > Honshū > Tōhoku (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Semantic Networks (0.67)
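The abstract above describes the semantic-cell mechanics only at a high level; the following is a minimal sketch of one plausible reading, in which an item holds several vectors that evolve by crossover. The blend-style crossover rule and the variance proxy for sense diversity are illustrative assumptions, not the authors' algorithm.

```python
# A minimal, assumption-laden sketch of the "semantic cell" idea above: an
# item holds several semantic vectors that evolve by crossover, like a cell's
# chromosomes. The crossover rule (blend two parent vectors) and the variance
# measure are illustrative guesses, not the published method.
import numpy as np

rng = np.random.default_rng(1)

class SemanticCell:
    """An item (e.g., a word) embracing multiple semantic vectors."""

    def __init__(self, vectors):
        self.vectors = [np.asarray(v, dtype=float) for v in vectors]

    def crossover(self, other, rate=0.5):
        """Blend one of our vectors with one of the other cell's vectors and
        append the offspring; a stand-in for interaction between items."""
        a = self.vectors[rng.integers(len(self.vectors))]
        b = other.vectors[rng.integers(len(other.vectors))]
        self.vectors.append(rate * a + (1 - rate) * b)

    def variance(self):
        """Spread of the cell's vectors: a proxy for acquired sense diversity."""
        arr = np.stack(self.vectors)
        return float(np.trace(np.cov(arr.T))) if len(self.vectors) > 1 else 0.0

# Two items interact; the first accumulates sense diversity from the second.
word = SemanticCell([rng.standard_normal(8)])
context = SemanticCell([rng.standard_normal(8) for _ in range(3)])
for _ in range(20):
    word.crossover(context)
print(f"variance after interaction: {word.variance():.3f}")
```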
AI armed with multiple senses could gain more flexible intelligence
Humans can both sense the world and describe it in language; AI systems, on the other hand, are built to do only one of these things at a time. Computer-vision and audio-recognition algorithms can sense things but cannot use language to describe them. A natural-language model can manipulate words, but the words are detached from any sensory reality. If senses and language were combined to give an AI a more human-like way to gather and process new information, could it finally develop something like an understanding of the world? The hope is that these "multimodal" systems, with access to both the sensory and linguistic "modes" of human intelligence, should give rise to a more robust kind of AI that can adapt more easily to new situations or problems.
Good sense of smell may indicate lower risk of dementia in older adults: study
"Stop and smell the roses" may actually be important advice when it comes to detecting your risk for dementia and getting early treatment for the condition, according to a new study. The study, out of the University of California San Francisco, found that older Americans who can identify odors like roses, lemons, onions, paint thinner, and turpentine may have half the risk of developing dementia compared to those with significant sensory loss. "The olfactory bulb, which is critical for smell, is affected fairly early on in the course of the disease," said first author Willa Brenowitz, Ph.D., of the UCSF Department of Psychiatry and Behavioral Sciences and the Weill Institute for Neurosciences, in a statement.
- Health & Medicine > Therapeutic Area > Neurology > Dementia (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Alzheimer's Disease (0.94)
Kernelized Bayesian Softmax for Text Generation
Miao, Ning, Zhou, Hao, Zhao, Chengqi, Shi, Wenxian, Li, Lei
Neural models for text generation require a softmax layer with proper token embeddings during the decoding phase. Most existing approaches adopt a single point embedding for each token. However, a word may have multiple senses depending on its context, and some of those senses may be quite distinct. In this paper, we propose KerBS, a novel approach for learning better embeddings for text generation. KerBS has two advantages: (a) it employs a Bayesian composition of embeddings for words with multiple senses; (b) it adapts to the semantic variance of words and is robust to rare sentence contexts by imposing learned kernels that capture the closeness of words (senses) in the embedding space. Empirical studies show that KerBS significantly boosts the performance of several text generation tasks.
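Reading the abstract, the core mechanism can be pictured as a softmax whose per-token score aggregates evidence over several sense embeddings through a kernel. The sketch below uses a Gaussian kernel with a learned per-sense bandwidth as a stand-in; KerBS's actual kernel and its Bayesian sense-allocation scheme differ, so treat this only as an illustration of the multi-sense scoring idea.

```python
# Hedged PyTorch sketch of a multi-sense, kernel-based softmax in the spirit
# of KerBS: each token owns several sense embeddings, and the probability of
# a token sums evidence over its senses. The Gaussian kernel with a learned
# per-sense bandwidth is an assumption chosen for brevity, not the paper's
# kernel.
import torch
import torch.nn as nn

class MultiSenseKernelSoftmax(nn.Module):
    def __init__(self, vocab_size, n_senses, dim):
        super().__init__()
        # One embedding and one log-bandwidth per (token, sense) pair.
        self.senses = nn.Parameter(torch.randn(vocab_size, n_senses, dim) * 0.02)
        self.log_bw = nn.Parameter(torch.zeros(vocab_size, n_senses))

    def forward(self, h):
        # h: (batch, dim) decoder states -> (batch, vocab) log-probabilities.
        diff = h[:, None, None, :] - self.senses[None]    # (B, V, S, D)
        sq = diff.pow(2).sum(-1)                          # (B, V, S)
        scores = -sq * torch.exp(-self.log_bw)[None]      # kernel scores
        token_scores = torch.logsumexp(scores, dim=-1)    # sum over senses
        return torch.log_softmax(token_scores, dim=-1)

# Usage: 1000-token vocabulary, 3 senses per token, 64-dim decoder states.
layer = MultiSenseKernelSoftmax(1000, 3, 64)
logp = layer(torch.randn(8, 64))
print(logp.shape)  # torch.Size([8, 1000])
```

Summing sense scores inside a logsumexp before the softmax gives p(token) proportional to the sum of exp(score) over that token's senses, which is what makes this a mixture-style composition rather than a single-point embedding.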
Gartner's strategic tech trends for 2020: Part 1, augmenting skills
The first focuses on technology interacting with people; Part 2 highlights technology advancements that will make the world tick. In his 1979 novel "The Hitchhiker's Guide to the Galaxy," Douglas Adams introduced the world to a universal translator via human augmentation: a bright yellow "Babel fish" is slipped into the hero Arthur Dent's ear to offer real-time translation from any language. It's a concept popularized in science fiction, but advancements in technology are making similar capabilities possible in 2019, albeit less invasively.
Distributed representation of multi-sense words: A loss-driven approach
Manchanda, Saurav, Karypis, George
Word2Vec's Skip-gram model is the current state-of-the-art approach for estimating the distributed representation of words. However, it assumes a single vector per word, which is not well-suited for representing words that have multiple senses. This work presents LDMI, a new model for estimating distributed representations of words. LDMI relies on the idea that if a word carries multiple senses, then having a different representation for each sense should lead to a lower loss when predicting its co-occurring words than when a single vector representation is used for all the senses. After identifying the multi-sense words, LDMI clusters the occurrences of these words to assign a sense to each occurrence. Experiments on the contextual word similarity task show that LDMI outperforms competing approaches. (A toy sketch of this loss-driven test follows the tags below.)
- North America > United States > Minnesota (0.05)
- Asia > Middle East > Republic of Türkiye > Batman Province > Batman (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (0.94)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.67)
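Following up on the LDMI entry above, here is a toy version of its loss-driven intuition: a word is split into senses only if clustering its occurrence contexts lowers a loss proxy enough. The k-means clustering, squared-error loss, and 0.3 threshold are all stand-ins; LDMI's actual objective is the Skip-gram loss over co-occurring words.

```python
# A rough illustration of the loss-driven intuition in the LDMI entry above,
# not the authors' algorithm: a word is treated as multi-sense if clustering
# its occurrence contexts into two groups lowers a simple reconstruction loss
# (a proxy for the co-occurrence prediction loss) by more than a threshold.
import numpy as np

rng = np.random.default_rng(2)

def kmeans(X, k, iters=20):
    """Tiny k-means; returns (centers, labels)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def split_if_multisense(contexts, threshold=0.3):
    """Assign each occurrence a sense only when two clusters cut the loss
    proxy by more than `threshold` relative to a single shared vector."""
    one_center = contexts.mean(axis=0, keepdims=True)
    loss1 = ((contexts - one_center) ** 2).sum()
    centers, labels = kmeans(contexts, 2)
    loss2 = ((contexts - centers[labels]) ** 2).sum()
    if (loss1 - loss2) / loss1 > threshold:
        return labels                                  # two senses found
    return np.zeros(len(contexts), dtype=int)          # keep a single sense

# Toy contexts for "bank": half financial, half riverside.
contexts = np.vstack([rng.normal(-2, 0.5, (50, 16)),
                      rng.normal(2, 0.5, (50, 16))])
print(np.bincount(split_if_multisense(contexts)))  # e.g. [50 50] -> two senses
```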