Systems & Languages


r/MachineLearning - [R] On Network Design Spaces for Visual Recognition

#artificialintelligence

Abstract: Over the past several years progress in designing better neural network architectures for visual recognition has been substantial. To help sustain this rate of progress, in this work we propose to reexamine the methodology for comparing network architectures. In particular, we introduce a new comparison paradigm of distribution estimates, in which network design spaces are compared by applying statistical techniques to populations of sampled models, while controlling for confounding factors like network complexity. Compared to current methodologies of comparing point and curve estimates of model families, distribution estimates paint a more complete picture of the entire design landscape. As a case study, we examine design spaces used in neural architecture search (NAS).
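The distribution-estimate idea can be sketched in a few lines: sample many models from each design space, record their errors, and compare the resulting empirical distributions rather than single best points. The sampler functions and error values below are toy stand-ins (random draws), not the paper's actual design spaces or training runs:

```python
import random

def empirical_cdf(errors, x):
    """Fraction of sampled models with error <= x."""
    return sum(e <= x for e in errors) / len(errors)

def compare_design_spaces(sample_a, sample_b, n=500, seed=0):
    """Compare two design spaces via populations of sampled models.

    sample_a / sample_b: callables that draw one model from each space
    and return its (hypothetical) validation error. Returns the maximum
    gap between the two empirical CDFs, a Kolmogorov-Smirnov-style
    statistic: 0 means indistinguishable spaces, 1 means disjoint.
    """
    rng = random.Random(seed)
    errs_a = [sample_a(rng) for _ in range(n)]
    errs_b = [sample_b(rng) for _ in range(n)]
    grid = sorted(errs_a + errs_b)
    return max(abs(empirical_cdf(errs_a, x) - empirical_cdf(errs_b, x))
               for x in grid)

# Toy stand-ins for "train a sampled model and measure its error":
space_a = lambda rng: rng.gauss(0.30, 0.05)   # tighter, lower-error space
space_b = lambda rng: rng.gauss(0.40, 0.10)
gap = compare_design_spaces(space_a, space_b)
```

In the paper's setting the samplers would train real networks while controlling for complexity; here the point is only that whole error distributions, not single point estimates, are being compared.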


r/MachineLearning - [D] AutoML/Neural Architecture Search has a giant CO2 footprint

#artificialintelligence

Energy does mean something. You are never creating energy, you are only transforming it, meaning you are still taking it from somewhere. Having enough energy for everyone to light their house is actually a growing problem, since with less nuclear energy it gets harder to manage and distribute. And while that energy gets harder to distribute, we are wasting tons of it on ML.


Semantic Search using Spreading Activation based on Ontology

arXiv.org Artificial Intelligence

Current text document retrieval systems face many challenges in exploring the semantics of queries and documents. A query often implies information that does not appear in it explicitly, yet the user also expects documents related to that implied information. The disadvantage of previous spreading activation algorithms is that many irrelevant concepts can be added to the query. In this paper, the proposed novel algorithm activates and adds to the query only named entities that are related to the original entities and explicit relations in the query.


Understanding Neural Architecture Search Techniques

arXiv.org Machine Learning

Automatic methods for generating state-of-the-art neural network architectures without human experts have attracted significant attention recently. This is because of the potential to remove human experts from the design loop, which can reduce costs and decrease time to model deployment. Neural architecture search (NAS) techniques have improved significantly in their computational efficiency since the original NAS was proposed. This reduction in computation is enabled via weight sharing, such as in Efficient Neural Architecture Search (ENAS). However, a recent body of work confirms our discovery that ENAS does not do significantly better than random search with weight sharing, contradicting the initial claims of the authors. We provide an explanation for this phenomenon by investigating the interpretability of the ENAS controller's hidden state. We are interested in seeing whether the controller embeddings are predictive of any properties of the final architecture -- for example, graph properties like the number of connections, or validation performance. We find that models sampled from identical controller hidden states have no correlation in various graph similarity metrics. This failure mode implies the RNN controller does not condition on past architecture choices. Importantly, we may need to condition on past choices if certain connection patterns prevent vanishing or exploding gradients. Lastly, we propose a solution to this failure mode by forcing the controller's hidden state to encode past decisions, training it with a memory buffer of previously sampled architectures. Doing this improves hidden state interpretability by increasing the correlation between controller hidden states and graph similarity metrics.
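The diagnostic described above (do similar controller states produce similar architectures?) can be sketched with a toy controller. Everything here is hypothetical: the encoding, the "controller", and the similarity metric are minimal stand-ins. Note the toy controller is deliberately built to condition on its state, so the diagnostic shows a gap; the paper's finding is that for the real ENAS controller no such gap appears:

```python
import random

def graph_edit_similarity(arch_a, arch_b):
    """Toy similarity: fraction of positions where two architecture
    encodings (lists of per-layer op choices) agree."""
    return sum(a == b for a, b in zip(arch_a, arch_b)) / len(arch_a)

def sample_architecture(hidden, rng, n_layers=8, n_ops=4):
    """Stand-in controller: one op per layer, biased by a (hypothetical)
    scalar hidden state so similar states yield similar graphs."""
    return [(hidden + layer + rng.randint(0, 1)) % n_ops
            for layer in range(n_layers)]

def hidden_state_gap(n_pairs=200, seed=0):
    """Mean graph similarity of architecture pairs sampled from identical
    hidden states vs. from distant hidden states. A controller that
    conditions on its state shows near > far; ENAS reportedly does not."""
    rng = random.Random(seed)
    near, far = [], []
    for _ in range(n_pairs):
        h = rng.randint(0, 3)
        near.append(graph_edit_similarity(
            sample_architecture(h, rng), sample_architecture(h, rng)))
        far.append(graph_edit_similarity(
            sample_architecture(h, rng),
            sample_architecture((h + 2) % 4, rng)))
    return sum(near) / n_pairs, sum(far) / n_pairs

near_sim, far_sim = hidden_state_gap()
```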


Security Architecture for Smart Factories

#artificialintelligence

Building smart factories is a substantial endeavor for organizations. The initial steps involve understanding what makes them unique and what new advantages they offer. However, a realistic view of smart factories also involves acknowledging the risks and threats that may arise in its converged virtual and physical environment. As with many systems that integrate with the industrial internet of things (IIoT), the convergence of information technology (IT) and operational technology (OT) in smart factories allows for capabilities such as real-time monitoring, interoperability, and virtualization. But this also means an expanded attack surface.


Can machines have common sense? – Moral Robots – Medium

#artificialintelligence

The Cyc project (initially planned from 1984 to 1994) is the world's longest-lived AI project. The idea was to create a machine with "common sense," and it was predicted that about 10 years should suffice to see significant results. That didn't quite work out, and today, after 35 years, the project is still going on -- although by now very few experts still believe in the promises made by Cyc's developers. Common sense is more than just explaining the meaning of words. For example, we have already seen how "sibling" or "daughter" can be explained in Prolog with a dictionary-like definition.
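The "dictionary-like definition" the article alludes to can be illustrated with a Prolog-style rule base. This is a sketch in Python rather than Prolog, and the family facts are invented for illustration:

```python
# Hypothetical fact base, in the style of Prolog facts such as
# parent(tom, bob). parent(mary, bob).  Here: child -> set of parents.
parents = {"bob": {"tom", "mary"}, "liz": {"tom", "mary"}, "ann": {"bob"}}

def sibling(x, y):
    """Prolog-style rule: sibling(X, Y) :- parent(P, X), parent(P, Y), X \\= Y.
    Two distinct people are siblings if they share a parent."""
    return x != y and bool(parents.get(x, set()) & parents.get(y, set()))

def daughter(x, y, female):
    """daughter(X, Y) :- parent(Y, X), female(X)."""
    return y in parents.get(x, set()) and x in female

is_sib = sibling("bob", "liz")                     # share tom and mary
is_dau = daughter("ann", "bob", {"ann", "liz"})    # bob's female child
```

The article's point is precisely that common sense goes beyond such definitions: the rules above say nothing about what a sibling *is* in the world, only how the word relates to other words.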


Top IT predictions in APAC in 2019

#artificialintelligence

The growing use of AI will increase data usage exponentially. As part of Singapore's smart nation initiative, the government has planned to invest up to S$150m from the National Research Foundation on AI over five years through the AI Singapore programme. While first-generation AI architectures have historically been centralised, Equinix predicts that enterprises will enter the realm of distributed AI architectures, where AI model building and model inferencing will take place at the edge, physically closer to the source of the data. To access more external data sources for accurate predictions, enterprises will turn to secure data transaction marketplaces. They will also strive to leverage AI innovation in multiple public clouds without getting locked into a single cloud, further decentralising AI architectures.


r/MachineLearning - [R] Neural Architecture Optimization

#artificialintelligence

Abstract: Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, whether based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a new method based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) an encoder embeds/maps neural network architectures into a continuous space; (2) a predictor takes the continuous representation of a network as input and predicts its accuracy; (3) a decoder maps a continuous representation back to its architecture. The performance predictor and the encoder enable us to perform gradient based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy.
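The gradient-based step at the heart of NAO can be sketched with a toy surrogate. The quadratic predictor, the 2-D embedding, and the optimum location below are all invented stand-ins; NAO's real predictor is a learned network and the result is decoded back into a discrete architecture:

```python
# Hypothetical surrogate predictor: maps a 2-D architecture embedding z
# to predicted accuracy, peaked at z* = (0.6, -0.2).
def predict_accuracy(z):
    return 1.0 - (z[0] - 0.6) ** 2 - (z[1] + 0.2) ** 2

def grad_predict(z):
    """Analytic gradient of the toy predictor above."""
    return [-2 * (z[0] - 0.6), -2 * (z[1] + 0.2)]

def optimize_embedding(z, lr=0.1, steps=100):
    """NAO-style step: gradient ascent on predicted accuracy in the
    continuous space; the improved embedding would then be handed to
    the (omitted here) decoder to recover a discrete architecture."""
    for _ in range(steps):
        g = grad_predict(z)
        z = [zi + lr * gi for zi, gi in zip(z, g)]
    return z

z0 = [0.0, 0.0]                 # embedding of some sampled architecture
z_new = optimize_embedding(z0)  # moved toward higher predicted accuracy
```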


Macron Says Europe's Security Architecture Must Be Rethought

U.S. News

French President Emmanuel Macron says "we must rethink the European security architecture" as he pushes for a continent-wide effort to create "a strategic partnership, including in terms of defense, with our closest neighbors."


Discovering Latent Information By Spreading Activation Algorithm For Document Retrieval

arXiv.org Artificial Intelligence

Syntactic search relies on keywords contained in a query to find suitable documents, so documents that do not contain the keywords but do contain information related to the query are not retrieved. Spreading activation is an algorithm for finding latent information in a query by exploiting relations between nodes in an associative or semantic network. However, the classical spreading activation algorithm uses all relations of a node in the network, which adds unsuitable information to the query. In this paper, we propose a novel approach for semantic text search, called query-oriented, constrained spreading activation, that uses only relations relating to the content of the query to find truly relevant information. Experiments on a benchmark dataset show that, in terms of the MAP measure, our search engine is 18.9% better than syntactic search and 43.8% better than search using the classical constrained spreading activation.

KEYWORDS: Information Retrieval, Ontology, Semantic Search, Spreading Activation
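The constraint the abstract describes (spread activation only along query-relevant relations, not every edge) can be sketched on a toy semantic network. The triples, relation names, decay factor, and threshold below are all invented for illustration:

```python
# Toy semantic network as (node, relation, node) triples.
triples = [
    ("Paris", "capital_of", "France"),
    ("Paris", "located_in", "Europe"),
    ("France", "member_of", "EU"),
    ("Paris", "named_after", "Paris_of_Troy"),
]

def spread(seeds, allowed_relations, decay=0.5, threshold=0.2):
    """Constrained spreading activation: activation flows only along
    relations deemed relevant to the query (allowed_relations), instead
    of along every edge as in the classical algorithm. Activation decays
    per hop and stops below a threshold."""
    activation = {s: 1.0 for s in seeds}
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for src, rel, dst in triples:
            if src == node and rel in allowed_relations:
                a = activation[node] * decay
                if a > threshold and a > activation.get(dst, 0.0):
                    activation[dst] = a
                    frontier.append(dst)
    return activation

# A query about "the capital of France" constrains spreading to the
# relations it actually mentions or implies:
act = spread({"Paris"}, {"capital_of", "member_of"})
```

Here "France" and "EU" receive activation and would enrich the query, while "Paris_of_Troy" stays inactive because its relation is irrelevant to the query, which is exactly the unsuitable information the constraint filters out.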