What is Machine Learning Architecture? Do you have examples? • /r/MachineLearning

@machinelearnbot

What is Machine Learning Architecture? I'm looking for a new job in Data Science and have found positions titled "Machine Learning Architect". I'm not really sure what that is, but maybe I am still a newbie. What does a Machine Learning Architect do, and what skills are required? Can you give examples, please?


An Extensive Report on Cellular Automata Based Artificial Immune System for Strengthening Automated Protein Prediction

arXiv.org Artificial Intelligence

Artificial Immune System (AIS-MACA), a novel computational intelligence technique, can be used to strengthen an automated protein prediction system with more adaptability and more parallelism. Most existing approaches are sequential, classify the input into only four major classes, and are designed for similar sequences. AIS-MACA is designed to identify ten classes from sequences that share only twilight-zone similarity and identity with the training sequences, with mixed and hybrid variations. The method also predicts three secondary-structure states (helix, strand, and coil). Our design considers 10 feature selection methods and 4 classifiers to develop MACA (Multiple Attractor Cellular Automata) based classifiers, one built for each of the ten classes. Testing the proposed classifier on twilight-zone and high-similarity benchmark datasets against over three dozen modern competing predictors shows that AIS-MACA provides the best overall accuracy, ranging between 80% and 89.8% depending on the dataset.
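The abstract does not give implementation details, and the MACA-based classifiers themselves are not reproducible from it; a minimal sketch of the per-class ensemble idea it describes (one classifier built for each of the ten structural classes, each with its own feature selection), assuming scikit-learn-style components as stand-ins, might look like this:

```python
# Hypothetical sketch of the "one classifier per structural class" idea described
# above; logistic regression stands in for the paper's MACA-based classifiers.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

N_CLASSES = 10  # the ten structural classes mentioned in the abstract

def build_per_class_classifiers(X, y):
    """Build one binary (one-vs-rest) classifier per class, each with its own feature selection."""
    classifiers = []
    for c in range(N_CLASSES):
        k = min(20, X.shape[1])  # arbitrary feature budget for the sketch
        clf = make_pipeline(SelectKBest(f_classif, k=k),
                            LogisticRegression(max_iter=1000))
        clf.fit(X, (y == c).astype(int))
        classifiers.append(clf)
    return classifiers

def predict(classifiers, X):
    """Assign each sequence to the class whose classifier is most confident."""
    scores = np.column_stack([clf.predict_proba(X)[:, 1] for clf in classifiers])
    return scores.argmax(axis=1)
```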


Visualization and clustering by 3D cellular automata: Application to unstructured data

arXiv.org Artificial Intelligence

Given the limited performance of 2D cellular automata, both in terms of space when the number of documents increases and in terms of cluster visualization, our motivation was to experiment with these cellular automata in a higher dimension to see the impact of dimension on the quality of the results. The textual data are represented by a vector model whose components are derived from a global weighting of the corpus, Term Frequency-Inverse Document Frequency (TF-IDF). The WordNet thesaurus is used to address lemmatization of the words, since the representation used in this study is a bag of words. A second, language-independent representation of the textual records, based on n-grams, is also used. Several similarity measures have been tested. To validate the resulting clusters, we use two evaluation measures based on recall and precision (f-measure and entropy). The results are promising and support the idea of increasing the dimension to address the spatiality of the classes: in terms of class purity (i.e., the minimum entropy value), the more the number of documents grows, the better the results for 3D cellular automata, which was not the case in 2D. In terms of spatial navigation, 3D cellular automata provide much better visualization than 2D cellular automata.
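The cellular-automaton clustering itself is not described in the abstract; a minimal sketch of the two ingredients it does name, the TF-IDF representation and an entropy-based purity measure over clusters, assuming scikit-learn and integer class labels, could be:

```python
# Illustrative sketch of the TF-IDF representation and the entropy-based cluster
# evaluation named in the abstract; the 3D cellular automaton itself is not shown.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_vectors(documents):
    """Bag-of-words TF-IDF representation of the documents."""
    return TfidfVectorizer().fit_transform(documents)

def cluster_entropy(true_labels, cluster_labels):
    """Cluster-size-weighted entropy: lower values mean purer clusters."""
    true_labels = np.asarray(true_labels)       # integer class ids
    cluster_labels = np.asarray(cluster_labels) # integer cluster ids
    total = len(true_labels)
    entropy = 0.0
    for c in np.unique(cluster_labels):
        members = true_labels[cluster_labels == c]
        probs = np.bincount(members) / len(members)
        probs = probs[probs > 0]
        entropy += (len(members) / total) * -(probs * np.log2(probs)).sum()
    return entropy

print(cluster_entropy([0, 0, 1, 1], [0, 0, 0, 1]))  # mixed cluster -> nonzero entropy
```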


Modeling self-organizing traffic lights with elementary cellular automata

arXiv.org Artificial Intelligence

There have been several highway traffic models proposed based on cellular automata. The simplest one is elementary cellular automaton rule 184. We extend this model to city traffic with cellular automata coupled at intersections using only rules 184, 252, and 136. The simplicity of the model offers a clear understanding of the main properties of city traffic and its phase transitions. We use the proposed model to compare two methods for coordinating traffic lights: a green-wave method that tries to optimize phases according to expected flows and a self-organizing method that adapts to the current traffic conditions. The self-organizing method delivers considerable improvements over the green-wave method. For low densities, the self-organizing method promotes the formation and coordination of platoons that flow freely in four directions, i.e. with a maximum velocity and no stops. For medium densities, the method allows a constant usage of the intersections, exploiting their maximum flux capacity. For high densities, the method prevents gridlocks and promotes the formation and coordination of "free-spaces" that flow in the opposite direction of traffic.
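For readers unfamiliar with rule 184, a minimal sketch of one update step on a ring road, where a car advances one cell whenever the cell ahead is empty, can be written in a few lines of Python; the city-traffic coupling with rules 252 and 136 at intersections is not reproduced here.

```python
# Elementary cellular automaton rule 184 as a single-lane traffic model:
# a car (1) moves one cell to the right whenever the cell ahead is empty (0).
def rule184_step(road):
    """One synchronous update of a ring road given as a list of 0/1 cells."""
    n = len(road)
    nxt = [0] * n
    for i in range(n):
        left, here, right = road[(i - 1) % n], road[i], road[(i + 1) % n]
        # A cell is occupied next step if its car cannot move (cell ahead full),
        # or if it is empty and the car behind it moves in.
        nxt[i] = 1 if (here == 1 and right == 1) or (here == 0 and left == 1) else 0
    return nxt

road = [1, 1, 0, 1, 0, 0, 0, 1]
for _ in range(4):
    road = rule184_step(road)
    print(road)
```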


Evolving localizations in reaction-diffusion cellular automata

arXiv.org Artificial Intelligence

We consider hexagonal cellular automata with an immediate cell neighbourhood and three cell states. Every cell calculates its next state depending on the integral representation of states in its neighbourhood, i.e. how many neighbours are in each state. We employ evolutionary algorithms to breed local transition functions that support mobile localizations (gliders), and characterize the sets of selected functions in terms of quasi-chemical systems. Analysis of the evolved functions allows us to speculate that mobile localizations are likely to emerge in quasi-chemical systems with limited diffusion of one reagent, that a small number of molecules is required for amplification of travelling localizations, and that reactions leading to stationary localizations involve relatively equal amounts of the quasi-chemical species. The techniques developed can be applied to cascading signals in nature-inspired spatially extended computing devices, and to phenomenological studies and classification of non-linear discrete systems.
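The "integral representation" means the next state depends only on the cell's own state and the count of neighbours in each of the three states, so a transition function is just a lookup table over those counts. A minimal sketch of such a table, with a random rule standing in purely for illustration where the paper evolves rules with a genetic algorithm, might be:

```python
# Sketch of a totalistic ("integral representation") transition function for a
# 3-state CA with 6 hexagonal neighbours: the next state depends only on the
# cell's own state and how many neighbours are in each state.
# The random rule is a stand-in for one bred by an evolutionary algorithm.
import itertools
import random

STATES = (0, 1, 2)
NEIGHBOURS = 6  # immediate hexagonal neighbourhood

def random_rule(seed=0):
    """Map (own state, count of 0s, count of 1s, count of 2s) -> next state."""
    rng = random.Random(seed)
    rule = {}
    for own in STATES:
        for n0, n1 in itertools.product(range(NEIGHBOURS + 1), repeat=2):
            n2 = NEIGHBOURS - n0 - n1
            if n2 >= 0:
                rule[(own, n0, n1, n2)] = rng.choice(STATES)
    return rule

def next_state(rule, own, neighbour_states):
    counts = tuple(neighbour_states.count(s) for s in STATES)
    return rule[(own,) + counts]

rule = random_rule()
print(next_state(rule, 1, [0, 0, 2, 1, 0, 2]))
```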


Inference, Attention, and Decision in a Bayesian Neural Architecture

Neural Information Processing Systems

We study the synthesis of neural coding, selective attention and perceptual decision making. A hierarchical neural architecture is proposed, which implements Bayesian integration of noisy sensory input and top-down attentional priors, leading to sound perceptual discrimination. The model offers an explicit explanation for the experimentally observed modulation that prior information in one stimulus feature (location) can have on an independent feature (orientation). The network's intermediate levels of representation instantiate known physiological properties of visual cortical neurons. The model also illustrates a possible reconciliation of cortical and neuromodulatory representations of uncertainty.
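The abstract does not spell out the integration rule, but the core Bayesian step it relies on, combining a noisy sensory likelihood with a top-down prior, is standard when both are Gaussian and can be sketched as a precision-weighted average:

```python
# Standard Gaussian prior-likelihood combination: the kind of Bayesian
# integration of noisy sensory input and top-down priors the abstract describes.
# Posterior precision is the sum of precisions; the posterior mean is the
# precision-weighted average of the prior mean and the observation.
def gaussian_posterior(prior_mean, prior_var, obs, obs_var):
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# Example: a strong prior at 0 deg pulls a noisy 10 deg observation toward it.
print(gaussian_posterior(prior_mean=0.0, prior_var=1.0, obs=10.0, obs_var=4.0))
```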


A competitive modular connectionist architecture

Neural Information Processing Systems

We describe a multi-network, or modular, connectionist architecture that captures the fact that many tasks have structure at a level of granularity intermediate to that assumed by local and global function approximation schemes. The main innovation of the architecture is that it combines associative and competitive learning in order to learn task decompositions. A task decomposition is discovered by forcing the networks comprising the architecture to compete to learn the training patterns. As a result of the competition, different networks learn different training patterns and thus learn to partition the input space. The performance of the architecture on a "what" and "where" vision task and on a multi-payload robotics task is presented.
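A minimal sketch of the competitive idea (several expert networks plus a gate; the expert that currently explains a pattern best receives most of the weight update, so the experts come to partition the input space) is given below as a small NumPy mixture of linear experts, not the paper's exact networks or learning rules:

```python
# Illustrative mixture of linear experts with a softmax gate; the expert that
# best predicts a pattern gets the largest share of the update, so experts
# compete and end up partitioning the input space. Not the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)
n_experts, dim = 2, 2
experts = rng.normal(size=(n_experts, dim))   # one linear expert per row
gate = np.zeros((n_experts, dim))             # linear gating network
lr = 0.1

def train_step(x, y):
    global experts, gate
    preds = experts @ x                         # each expert's scalar prediction
    g = np.exp(gate @ x); g /= g.sum()          # softmax gating probabilities
    err = y - preds
    resp = g * np.exp(-0.5 * err ** 2)          # responsibility of each expert
    resp /= resp.sum() + 1e-12
    experts += lr * (resp * err)[:, None] * x   # winning expert updates most
    gate += lr * (resp - g)[:, None] * x        # gate learns the partition

# Example: a piecewise target, one sub-task per half of the input space.
for _ in range(2000):
    x = rng.normal(size=dim)
    y = x.sum() if x[0] > 0 else -x.sum()
    train_step(x, y)
print(experts)
```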