nni


Application-oriented automatic hyperparameter optimization for spiking neural network prototyping

Fra, Vittorio

arXiv.org Artificial Intelligence

Hyperparameter optimization (HPO) is of paramount importance in the development of high-performance, specialized artificial intelligence (AI) models, ranging from well-established machine learning (ML) solutions to the deep learning (DL) domain and the field of spiking neural networks (SNNs). The latter introduce further complexity due to the neuronal computational units and their additional hyperparameters, whose inadequate setting can dramatically impact the final model performance. At the cost of possibly reduced generalization capabilities, the most suitable strategy to fully exploit the potential of SNNs is to adopt an application-oriented approach and perform extensive HPO experiments. To facilitate these operations, automatic pipelines are fundamental, and their configuration is crucial. In this document, the Neural Network Intelligence (NNI) toolkit is used as a reference framework to present one such solution, with a use case example providing evidence of the corresponding results. In addition, a summary of published works employing the presented pipeline is reported as a possible source of insights into application-oriented HPO experiments for SNN prototyping.
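To make the HPO loop concrete, the sketch below shows the shape of a minimal tuner: sample a configuration from a search space, evaluate an objective, keep the best. The hyperparameter names (membrane time constant, spiking threshold, learning rate) and the random-search strategy are illustrative assumptions, not the paper's actual pipeline; a real NNI experiment would delegate sampling to the toolkit's tuner and replace the toy objective with SNN training and validation.

```python
import random

# Hypothetical search space in the style of an HPO experiment
# configuration; names and ranges are illustrative only.
SEARCH_SPACE = {
    "tau_mem": (5.0, 50.0),    # membrane time constant (ms)
    "threshold": (0.5, 2.0),   # spiking threshold
    "lr": (1e-4, 1e-1),        # learning rate
}

def sample_trial(space, rng):
    """Draw one hyperparameter configuration uniformly at random."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}

def toy_objective(params):
    """Stand-in for SNN training plus validation accuracy.

    A real pipeline would train a spiking network here and report the
    resulting metric back to the tuner. This toy surface simply peaks
    at an arbitrarily chosen configuration.
    """
    score = 1.0
    score -= abs(params["tau_mem"] - 20.0) / 45.0
    score -= abs(params["threshold"] - 1.0) / 1.5
    score -= abs(params["lr"] - 1e-2) / 0.1
    return score

def random_search(space, n_trials=50, seed=0):
    """Minimal random-search tuner: run trials, keep the best one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = sample_trial(space, rng)
        score = toy_objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Swapping random search for a smarter tuner (Bayesian, evolutionary, and so on) only changes how `sample_trial` proposes configurations; the trial/report structure stays the same, which is what makes such pipelines easy to automate.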


Shirvani

AAAI Conferences

Learning strategies to address problems on graph and tree structures with no a priori size limitations, in cases where no known solution exists (and thus supervised data is hard to obtain), is a difficult problem with potential applications in a wide range of domains, from biological networks to protein folding and social network search. The main challenges arise from the variable-size representation that needs to be resolved in the context of Reinforcement Learning (RL) to address the problem. In this paper we consider a common, specific tree problem and show that it can be addressed using a combination of feature engineering and carefully designed learning processes. In particular, we consider the classical Nearest Neighbor Interchange (NNI) distance between unrooted labeled trees, which is defined as the minimum-cost sequence of operations that transform one tree into another. We introduce a representation and a reinforcement learning method that learns the transition dynamics and iteratively changes an arbitrary initial labeled tree into a goal configuration reachable through NNI. The differential tree representation and NNI actions permit the system to learn a strategy that is applicable to arbitrarily sized trees. To train the system, we introduce a training process that uses randomly sampled trajectories to incrementally train on increasingly complex problems, overcoming the difficulty of the overall strategy space. Experiments show that the system can successfully learn a strategy for effective NNI on complex trees.
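The NNI operation the abstract refers to is a standard tree rewrite: pick an internal edge of an unrooted tree and swap a subtree hanging off one endpoint with a subtree hanging off the other. The sketch below implements a single such move on an adjacency-set representation; the representation and function names are illustrative, and this does not reproduce the paper's differential representation or its learned policy.

```python
def nni_move(adj, u, v, a, b):
    """Apply one NNI move across the internal edge (u, v).

    adj: adjacency dict {node: set of neighbors} for an unrooted tree.
    a: a neighbor of u other than v; b: a neighbor of v other than u.
    The move detaches the subtree rooted at a from u and the subtree
    rooted at b from v, then swaps their attachment points.
    Representation is illustrative, not taken from the paper.
    """
    assert v in adj[u] and u in adj[v], "(u, v) must be an edge"
    assert a in adj[u] and a != v, "a must be a neighbor of u, not v"
    assert b in adj[v] and b != u, "b must be a neighbor of v, not u"
    # Detach a from u and b from v.
    adj[u].remove(a); adj[a].remove(u)
    adj[v].remove(b); adj[b].remove(v)
    # Reattach with the endpoints swapped: a to v, b to u.
    adj[v].add(a); adj[a].add(v)
    adj[u].add(b); adj[b].add(u)
    return adj
```

On the quartet tree with leaves A, B attached to internal node x and leaves C, D attached to internal node y, `nni_move(adj, "x", "y", "B", "C")` produces the alternative quartet grouping A with C and B with D; an RL agent in this setting would be choosing which edge and which subtree pair to swap at each step.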


A Big Bet on Nanotechnology Has Paid Off

#artificialintelligence

We're now more than two decades out from the initial announcement of the National Nanotechnology Initiative (NNI), a federal program founded in 2000 under President Bill Clinton to support nanotechnology research and development in universities, government agencies and industry laboratories across the United States. It was a significant financial bet on a field that was better known among the general public for science fiction than for scientific achievement. Today it's clear that the NNI did more than influence the direction of research in the U.S. It catalyzed a worldwide effort and spurred an explosion of creativity in the scientific community. And we're reaping the rewards not just in medicine, but also in clean energy, environmental remediation and beyond. Before the NNI, there were people who thought nanotechnology was a gimmick. I began my research career in chemistry, but it seemed to me that nanotechnology was a once-in-a-lifetime opportunity: the opening of a new field that crossed scientific disciplines.