hyperspace
HDC-X: Efficient Medical Data Classification for Embedded Devices
Wei, Jianglan, Zhang, Zhenyu, Wang, Pengcheng, Zeng, Mingjie, Zeng, Zhigang
Energy-efficient medical data classification is essential for modern disease screening, particularly in home and field healthcare where embedded devices are prevalent. While deep learning models achieve state-of-the-art accuracy, their substantial energy consumption and reliance on GPUs limit deployment on such platforms. We present HDC-X, a lightweight classification framework designed for low-power devices. HDC-X encodes data into high-dimensional hypervectors, aggregates them into multiple cluster-specific prototypes, and performs classification through similarity search in hyperspace. We evaluate HDC-X across three medical classification tasks; on heart sound classification, HDC-X is $350\times$ more energy-efficient than Bayesian ResNet with less than 1% accuracy difference. Moreover, HDC-X demonstrates exceptional robustness to noise, limited training data, and hardware errors, supported by both theoretical analysis and empirical results, highlighting its potential for reliable deployment in real-world settings. Code is available at https://github.com/jianglanwei/HDC-X.
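The pipeline the abstract describes — encode into hypervectors, bundle into prototypes, classify by similarity — can be sketched generically. Below is a minimal NumPy illustration, not the authors' HDC-X: the random-projection encoder, the toy two-class Gaussian data, and the single prototype per class (HDC-X maintains multiple cluster-specific prototypes) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000          # hypervector dimensionality
N_FEATURES = 16     # toy input dimensionality (hypothetical)

# Random-projection encoder: map a feature vector to a bipolar hypervector.
projection = rng.standard_normal((N_FEATURES, D))

def encode(x):
    return np.sign(projection.T @ x)            # in {-1, +1}^D

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Two toy "classes" drawn from shifted Gaussians.
X0 = rng.standard_normal((50, N_FEATURES)) - 1.0
X1 = rng.standard_normal((50, N_FEATURES)) + 1.0

# One prototype per class: bundle (sum) the training hypervectors.
proto0 = np.sum([encode(x) for x in X0], axis=0)
proto1 = np.sum([encode(x) for x in X1], axis=0)

def classify(x):
    h = encode(x)
    return 0 if cosine(h, proto0) > cosine(h, proto1) else 1
```

Inference is just an encode followed by a similarity search over prototypes — no multiply-accumulate-heavy forward pass — which is where the energy advantage over deep models comes from.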
Analogical Reasoning Within a Conceptual Hyperspace
Goldowsky, Howard, Sarathy, Vasanth
We propose an approach to analogical inference that marries the neuro-symbolic computational power of complex-sampled hyperdimensional computing (HDC) with Conceptual Spaces Theory (CST), a promising theory of semantic meaning. CST sketches, at an abstract level, approaches to analogical inference that go beyond the standard predicate-based structure mapping theories. But it does not describe how such an approach can be operationalized. We propose a concrete HDC-based architecture that computes several types of analogy classified by CST. We present preliminary proof-of-concept experimental results within a toy domain and describe how the architecture can perform category-based and property-based analogical reasoning.
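The complex-sampled HDC machinery behind this kind of analogy can be illustrated with FHRR-style unit phasors: binding is elementwise complex multiplication and unbinding multiplies by the conjugate. The sketch below is a generic toy, not the paper's architecture; the role and filler names (`color`, `red`, `yellow`, etc.) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4096

def hv():
    # Complex "phasor" hypervector: unit-magnitude entries with random phase.
    return np.exp(1j * rng.uniform(-np.pi, np.pi, D))

def bind(a, b):      # role-filler binding: elementwise complex multiplication
    return a * b

def unbind(a, b):    # inverse of bind: multiply by the complex conjugate
    return a * np.conj(b)

def sim(a, b):       # normalized similarity of two phasor hypervectors
    return np.abs(np.vdot(a, b)) / D

# Toy conceptual space: a "color" role bound to fillers.
color, red, yellow = hv(), hv(), hv()
cherry_rep = bind(color, red)      # cherry is (color: red)
banana_rep = bind(color, yellow)   # banana is (color: yellow)

# Property-based analogy: "red is to cherry as ? is to banana".
answer = unbind(banana_rep, color)
```

Because each entry has unit magnitude, `bind` followed by `unbind` with the same role recovers the filler exactly, while similarity to any unrelated hypervector stays near zero.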
Exploring Effects of Hyperdimensional Vectors for Tsetlin Machines
Halenka, Vojtech, Kadhim, Ahmed K., Clarke, Paul F. A., Bhattarai, Bimal, Saha, Rupsa, Granmo, Ole-Christoffer, Jiao, Lei, Andersen, Per-Arne
Tsetlin machines (TMs) have been successful in several application domains, operating with high efficiency on Boolean representations of the input data. However, Booleanizing complex data structures such as sequences, graphs, images, signal spectra, chemical compounds, and natural language is not trivial. In this paper, we propose a hypervector (HV) based method for expressing arbitrarily large sets of concepts associated with any input data. Using a hyperdimensional space to build vectors drastically expands the capacity and flexibility of the TM. We demonstrate how images, chemical compounds, and natural language text are encoded according to the proposed method, and how the resulting HV-powered TM can achieve significantly higher accuracy and faster learning on well-known benchmarks. Our results open up a new research direction for TMs, namely how to expand and exploit the benefits of operating in hyperspace, including new booleanization strategies, optimization of TM inference and learning, as well as new TM applications.
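The core idea — representing an arbitrary set of concepts as one Boolean vector a TM can consume — can be sketched with a random binary codebook and majority-vote bundling. This is a generic illustration under assumed choices (the vocabulary, dimensionality, and thresholding scheme are all hypothetical), not the paper's exact encoder.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 2048

# Random binary codebook: one hypervector per symbol (toy vocabulary).
VOCAB = ["aspirin", "benzene", "ring", "carbon"]
codebook = {tok: rng.integers(0, 2, D) for tok in VOCAB}

def booleanize(tokens):
    # Bundle the members' hypervectors by per-bit majority vote,
    # yielding a single Boolean vector a Tsetlin machine can consume.
    stacked = np.stack([codebook[t] for t in tokens])
    return (stacked.sum(axis=0) * 2 > len(tokens)).astype(np.uint8)

x = booleanize(["benzene", "ring", "carbon"])
```

The bundled vector keeps a measurably higher bitwise agreement with its members than with non-members, which is what lets downstream TM clauses pick up set membership.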
Molecular Classification Using Hyperdimensional Graph Classification
Verges, Pere, Nunes, Igor, Heddes, Mike, Givargis, Tony, Nicolau, Alexandru
Our work introduces an innovative approach to graph learning by leveraging Hyperdimensional Computing. Graphs serve as a widely embraced method for conveying information, and their utilization in learning has gained significant attention. This is notable in the field of chemoinformatics, where learning from graph representations plays a pivotal role. An important application within this domain involves the identification of cancerous cells across diverse molecular structures. We propose an HDC-based model that demonstrates comparable Area Under the Curve results when compared to state-of-the-art models like Graph Neural Networks (GNNs) or the Weisfeiler-Lehman graph kernel (WL). Moreover, it outperforms previously proposed hyperdimensional computing graph learning methods. Furthermore, it achieves noteworthy speed enhancements, boasting a 40x acceleration in the training phase and a 15x improvement in inference time compared to GNN and WL models. This not only underscores the efficacy of the HDC-based method, but also highlights its potential for expedited and resource-efficient graph learning.
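A common way to bring graphs into hyperspace is to bind the hypervectors of an edge's endpoints and bundle all edges into one graph hypervector. The sketch below uses that generic recipe on a toy labeled "molecule"; it is an assumed scheme for illustration, not the paper's exact encoder.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4096

def bipolar_hv():
    return rng.choice([-1, 1], D)

def sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy molecule as a labeled graph: atom C - atom O - atom N.
label_hv = {lab: bipolar_hv() for lab in ("C", "O", "N")}
edges = [("C", "O"), ("O", "N")]

# Encode each edge by binding (elementwise product) its endpoint label
# vectors, then bundle (sum) all edge vectors into one graph hypervector.
graph_hv = np.sum([label_hv[u] * label_hv[v] for u, v in edges], axis=0)

# A shared substructure (just the C-O bond) remains measurably similar
# to the whole graph's hypervector.
sub_hv = label_hv["C"] * label_hv["O"]
```

Classification then reduces to comparing a molecule's hypervector against per-class prototypes, which is why training and inference avoid the message-passing cost of a GNN.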
Thinking Darwinian
A few thinkers have transformed how others view and understand life with the ideas they introduced, and Darwin is undoubtedly one of them. What sets Darwin apart from other biologists and researchers is that he explains the evolutionary process algorithmically and grounds it in the laws of nature. Darwin's dangerous idea began in biology but has spread from engineering to sociology. The greatness of this idea lies in its ability to account for infinite beauty and complexity.
Visualising the loss landscape
When plotting and monitoring an architecture's loss function, we are looking at the loss landscape through a toilet paper tube. The loss is on the y-axis and the epochs on the x-axis. We get only a one-dimensional view of the loss function's space, and even that only for the small range of parameter gradients the training run actually traverses. What if we could see, say, the 175-bn-dimensional loss space for GPT across a range of gradients of those billions of parameters? Well, let's not kid ourselves.
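One widely used compromise between the toilet-paper-tube view and the impossible full picture is to slice the landscape along one or two random directions through the trained parameters. The sketch below does this for a toy least-squares model in NumPy; the model, data, and direction choice are all illustrative assumptions, not a recipe for a 175-bn-parameter network.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: linear regression over a handful of parameters.
n_params = 5
X = rng.standard_normal((100, n_params))
y = X @ rng.standard_normal(n_params)
theta_star, *_ = np.linalg.lstsq(X, y, rcond=None)   # "trained" parameters

def loss(theta):
    r = X @ theta - y
    return float(r @ r) / len(y)

# 1-D slice of the landscape: evaluate the loss along a random unit
# direction through theta_star instead of along the epoch axis.
d = rng.standard_normal(n_params)
d /= np.linalg.norm(d)
alphas = np.linspace(-1.0, 1.0, 21)
slice_losses = [loss(theta_star + a * d) for a in alphas]
```

Plotting `slice_losses` against `alphas` (or a 2-D grid over two directions) gives a cross-section of the landscape around the minimum, rather than just the training trajectory.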
On Dollar Slices, Pizza Vectors, Prosciutto Zones and Topping Hyperspace
At Topos, we are fascinated by exactly this type of variation and believe it provides a powerful view into the culture of a location. While data sources like the United States Census are useful for understanding broad demographic trends over decades, they give little insight into what defines the moment-to-moment culture of a city, a neighborhood, a street corner. Inspired by thinkers like Walter Benjamin, who, in his unfinished Arcades Project examined subjects as varied as fashion, construction materials, poetry, lighting, and mirrors in order to understand Paris in the 19th century, we are fascinated by the way seemingly simple, ubiquitous subjects like the coffee we drink or the concerts we go to define a place. However, unlike Benjamin, we are interested in constructing this understanding in a way that can dynamically scale across the globe, allowing us to understand how different locations relate to one another, and how locations evolve in real time. To achieve this, we use data from dozens of different sources and techniques from a wide range of technologies and disciplines including computer vision, natural language processing, statistics, machine learning, network science, topology, architecture and urbanism.
Beck teams up with NASA and AI for 'Hyperspace' visual album experience
Grammy award-winning artist Beck took an ethereal journey to the stars for his 2019 record "Hyperspace." Now, he has taken this cosmic journey a giant leap forward in a collaboration with NASA's Jet Propulsion Laboratory and artificial intelligence creatives OSK. The result: a visual album experience titled "Hyperspace: A.I. Exploration," unveiled today. To launch "Hyperspace: A.I. Explorations," Beck premiered a bonus track from "Hyperspace" titled "I Am The Cosmos (42420)" on Aug. 12 on YouTube, along with videos for the rest of the songs.