MetaSDF: Supplementary Material

Sitzmann, Vincent

Neural Information Processing Systems

These authors contributed equally to this work. We now analyze a single layer of a neural network with conditioning via concatenation. Here, we provide exact specifications of the 2D experiments to ensure reproducibility. [Results-table excerpt: NP, 4-layer set encoder: 101.7 / 5.1 / 154; NP, 9-layer set encoder: 92.5 / 2.0]
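As a minimal sketch of what "conditioning via concatenation" means for a single layer (PyTorch is assumed here, and the layer sizes are illustrative, not the paper's specification): the conditioning code z is appended to the input x before a single affine transform.

```python
# Minimal sketch (PyTorch assumed; not the paper's code) of one fully
# connected layer with conditioning via concatenation.
import torch
import torch.nn as nn

class ConcatConditionedLayer(nn.Module):
    def __init__(self, in_dim: int, code_dim: int, out_dim: int):
        super().__init__()
        # One affine layer acting on the concatenated [x, z] vector.
        self.linear = nn.Linear(in_dim + code_dim, out_dim)

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) coordinates; z: (batch, code_dim) latent code.
        return torch.relu(self.linear(torch.cat([x, z], dim=-1)))

layer = ConcatConditionedLayer(in_dim=2, code_dim=64, out_dim=128)
out = layer(torch.randn(8, 2), torch.randn(8, 64))  # -> shape (8, 128)
```

The latent code simply widens the input of the affine transform, so the same layer type works unchanged for any conditioning dimension.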


Multilingual Machine Translation with Quantum Encoder Decoder Attention-based Convolutional Variational Circuits

Dikshit, Subrit, Tiwari, Ritu, Jain, Priyank

arXiv.org Artificial Intelligence

In the 2000s, artificial intelligence and deep learning based systems became prevalent and took the world by storm. Many modern multilingual state-of-the-art [1] networks and cloud-based translation services, such as Google Translate, Microsoft Translator, ChatGPT [2], and DeepSeek [3], emerged and became available during this era. These multilingual large language networks are architected around Gated Recurrent Unit (GRU) networks [4], Long Short-Term Memory (LSTM) [5], Bidirectional Encoder Representations from Transformers (BERT) [6], Generative Pre-trained Transformer (GPT) [7], Text-to-Text Transfer Transformer (T5) [8], and similar attention-based transformer networks [9], with increasingly refined architectures. Most academics, researchers, and organisations have focused on these classical-computing approaches, while far less emphasis has been placed on multilingual machine translation in the quantum computing realm. Practitioners and scholars who have emphasised quantum computing for machine translation, and their associated works, are discussed in the Related Works section. However, that research under-utilizes simulation and execution on quantum computing hardware and under-exploits the novel concepts of quantum convolution [10], quantum pooling [11], quantum variational circuits [12], and quantum attention [13] as quantum software components; these shortcomings are studied, demonstrated, and addressed in the QEDACVC system.
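As a rough, hypothetical illustration of the quantum variational circuit primitive mentioned above (PennyLane is assumed; this is a generic sketch, not the QEDACVC architecture): classical inputs are angle-encoded onto qubits, trainable rotations and entangling gates form the variational layer, and Pauli-Z expectations are read out.

```python
# Hedged sketch (PennyLane assumed; generic, not the QEDACVC implementation)
# of a small quantum variational circuit.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def variational_circuit(inputs, weights):
    # Angle-encode classical inputs, one feature per qubit.
    for w in range(n_qubits):
        qml.RY(inputs[w], wires=w)
    # One variational layer: trainable rotations plus a ring of CNOTs.
    for w in range(n_qubits):
        qml.Rot(*weights[w], wires=w)
    for w in range(n_qubits):
        qml.CNOT(wires=[w, (w + 1) % n_qubits])
    # Read out one expectation value per qubit.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

inputs = np.random.uniform(0, np.pi, n_qubits)
weights = np.random.uniform(0, 2 * np.pi, (n_qubits, 3))
print(variational_circuit(inputs, weights))
```

The weights are ordinary trainable parameters, so such a circuit can be optimized with the same gradient-based loops used for classical layers.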


Explained: Generative AI's environmental impact

AIHub

In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI's carbon footprint and other impacts. The excitement surrounding potential benefits of generative AI, from improving worker productivity to advancing scientific research, is hard to ignore. While the explosive growth of this new technology has enabled rapid deployment of powerful models in many industries, the environmental consequences of this generative AI "gold rush" remain difficult to pin down, let alone mitigate.


Regulating AI Is Easier Than You Think

TIME - Tech

Artificial intelligence is poised to deliver tremendous benefits to society. But, as many have pointed out, it could also bring unprecedented new horrors. As a general-purpose technology, the same tools that will advance scientific discovery could also be used to develop cyber, chemical, or biological weapons. Governing AI will require widely sharing its benefits while keeping the most powerful AI out of the hands of bad actors. The good news is that there is already a template on how to do just that.


RB5 Low-Cost Explorer: Implementing Autonomous Long-Term Exploration on Low-Cost Robotic Hardware

Seewald, Adam, Chancán, Marvin, McCann, Connor M., Noh, Seonghoon, Fallahi, Omeed, Castillo, Hector, Abraham, Ian, Dollar, Aaron M.

arXiv.org Artificial Intelligence

This systems paper presents the implementation and design of RB5, a wheeled robot for autonomous long-term exploration with fewer and cheaper sensors. Requiring just an RGB-D camera and low-power computing hardware, the system consists of an experimental platform with rocker-bogie suspension. It operates in unknown and GPS-denied environments and on indoor and outdoor terrains. The exploration methodology extends frontier- and sampling-based exploration with a path-following vector field and a state-of-the-art SLAM algorithm, allowing the robot to explore its surroundings at lower update frequencies and thus use lower-performing, lower-cost hardware while retaining good autonomous performance. The approach also includes a methodology for interacting with a remotely located human operator, based on an inexpensive long-range, low-power communication technology from the internet-of-things domain (i.e., LoRa) and a customized communication protocol. The results and feasibility analysis show the possible applications and limitations of the approach.
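To make the frontier-based component concrete, here is a minimal sketch of frontier detection on a 2D occupancy grid (NumPy; the grid values and the helper name frontier_cells are illustrative, not from the RB5 codebase): a frontier cell is a free cell adjacent to unknown space, the candidate set from which exploration goals are sampled.

```python
# Illustrative frontier detection on an occupancy grid (not the RB5 code).
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontier_cells(grid: np.ndarray) -> list:
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 4-connected neighbors; a free cell touching unknown is a frontier.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.full((5, 5), UNKNOWN)
grid[2, 1:4] = FREE           # a small explored corridor
print(frontier_cells(grid))   # free cells bordering unknown space
```

Because frontier detection only needs the current map, it tolerates the lower update frequencies the paper targets.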


Energy-Aware Planning-Scheduling for Autonomous Aerial Robots

Seewald, Adam, de Marina, Héctor García, Midtiby, Henrik Skov, Schultz, Ulrik Pagh

arXiv.org Artificial Intelligence

In this paper, we present an online planning-scheduling approach for battery-powered autonomous aerial robots. The approach consists of simultaneously planning a coverage path and scheduling onboard computational tasks. We further derive a novel variable coverage motion robust to airborne constraints and an empirically motivated energy model. The model includes the energy contribution of the schedule based on an automatic computational energy modeling tool. Our experiments show how an initial flight plan is adjusted online as a function of the available battery, accounting for uncertainty. Our approach remedies possible in-flight failure in case of unexpected battery drops, e.g., due to adverse atmospheric conditions, and increases the overall fault tolerance.
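The paper's energy model is empirically motivated; as a toy illustration of the core planning-scheduling idea (all names and numbers below are hypothetical, not the paper's model), a coverage path can be truncated online against the remaining battery, where each waypoint costs flight energy plus the energy of the computation scheduled there.

```python
# Toy sketch (hypothetical values; not the paper's energy model) of online
# plan adjustment: keep the longest feasible prefix of the coverage path.
def trim_plan(waypoints, battery_j, flight_j=50.0):
    """waypoints: list of (name, compute_j) pairs; returns the feasible prefix."""
    feasible, used = [], 0.0
    for name, compute_j in waypoints:
        cost = flight_j + compute_j  # flight energy + scheduled-task energy
        if used + cost > battery_j:
            break                    # stop coverage before battery depletion
        used += cost
        feasible.append(name)
    return feasible

plan = [("wp1", 5.0), ("wp2", 20.0), ("wp3", 5.0), ("wp4", 40.0)]
print(trim_plan(plan, battery_j=150.0))  # -> ['wp1', 'wp2']
```

Re-running such a check whenever the battery estimate drops is one simple way an initial flight plan can be adjusted online under uncertainty.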


A neuromorphic computing architecture that can run some deep neural networks more efficiently

#artificialintelligence

As artificial intelligence and deep learning techniques become increasingly advanced, engineers will need to create hardware that can run their computations both reliably and efficiently. Neuromorphic computing hardware, which is inspired by the structure and biology of the human brain, could be particularly promising for supporting the operation of sophisticated deep neural networks (DNNs). Researchers at Graz University of Technology and Intel have recently demonstrated the huge potential of neuromorphic computing hardware for running DNNs in an experimental setting. Their paper, published in Nature Machine Intelligence and funded by the Human Brain Project (HBP), shows that neuromorphic computing hardware could run large DNNs 4 to 16 times more efficiently than conventional (i.e., non-brain inspired) computing hardware. "We have shown that a large class of DNNs, those that process temporally extended inputs such as for example sentences, can be implemented substantially more energy-efficiently if one solves the same problems on neuromorphic hardware with brain-inspired neurons and neural network architectures," Wolfgang Maass, one of the researchers who carried out the study, told TechXplore.


Accelerating AI at the speed of light

#artificialintelligence

Improved computing power and an exponential increase in data have helped fuel the rapid rise of artificial intelligence. But as AI systems become more sophisticated, they'll need even more computational power to address their needs, which traditional computing hardware most likely won't be able to keep up with. To solve the problem, MIT spinout Lightelligence is developing the next generation of computing hardware. The Lightelligence solution makes use of the silicon fabrication platform used for traditional semiconductor chips, but in a novel way. Rather than building chips that use electricity to carry out computations, Lightelligence develops components powered by light that are low energy and fast, and they might just be the hardware we need to power the AI revolution.


Enabling fairer data clusters for machine learning

#artificialintelligence

Research published recently by CSE investigators can make training machine learning (ML) models fairer and faster. With a tool called AlloX, Prof. Mosharaf Chowdhury and a team from Stony Brook University developed a new way to fairly schedule high volumes of ML jobs in data centers that make use of multiple different types of computing hardware, like CPUs, GPUs, and specialized accelerators. As these so-called heterogeneous clusters grow to be the norm, fair scheduling systems like AlloX will become essential to their efficient operation. This project is a new step for Chowdhury's lab, which has recently published a number of tools aimed at speeding up the process of training and testing ML models. Their past projects Tiresias and Salus sped up GPU resource sharing at multiple scales: both within a single GPU (Salus) and across many GPUs in a cluster (Tiresias).
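As a simplified, hypothetical illustration of the placement problem AlloX targets (SciPy is assumed, the runtimes are made up, and this is not AlloX's actual algorithm): when jobs run at different speeds on different device types, one round of assignment can be posed as minimum-cost bipartite matching over estimated runtimes.

```python
# Hedged sketch (illustration only, not AlloX's code): match jobs to
# heterogeneous devices by minimizing total estimated runtime.
import numpy as np
from scipy.optimize import linear_sum_assignment

# est_runtime[i][j]: estimated runtime of job i on device j
# (columns: CPU, GPU, accelerator); values are hypothetical.
est_runtime = np.array([
    [10.0, 2.0, 3.0],   # job 0: much faster on the GPU
    [ 4.0, 3.5, 1.0],   # job 1: best on the accelerator
    [ 5.0, 6.0, 7.0],   # job 2: CPU-friendly
])

jobs, devices = linear_sum_assignment(est_runtime)  # min-cost matching
for i, j in zip(jobs, devices):
    print(f"job {i} -> device {j} (est. runtime {est_runtime[i, j]})")
```

A real scheduler layers fairness constraints and repeated rounds on top of such a matching step, which is where systems like AlloX do their heavy lifting.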