Randomized Smoothing for Stochastic Optimization

arXiv.org Machine Learning

We analyze convergence rates of stochastic optimization procedures for non-smooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for non-smooth optimization. We give several applications of our results to statistical estimation problems, and provide experimental results that demonstrate the effectiveness of the proposed algorithms. We also describe how a combination of our algorithm with recent work on decentralized optimization yields a distributed stochastic optimization algorithm that is order-optimal.
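
For intuition, the sketch below shows a Monte Carlo gradient estimator for the Gaussian-smoothed objective f_mu(x) = E[f(x + mu*Z)], which is the core idea behind randomized smoothing. The test function, step sizes, and sample counts are illustrative assumptions, and a plain stochastic (sub)gradient step is used for brevity where the paper pairs the estimator with accelerated methods.

    import numpy as np

    def smoothed_gradient(f, x, mu=0.1, n_samples=32, rng=np.random.default_rng(0)):
        # Monte Carlo estimate of the gradient of the smoothed function
        # f_mu(x) = E_Z[f(x + mu * Z)], Z ~ N(0, I).
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            z = rng.standard_normal(x.shape)
            # Difference-style estimator; unbiased for grad f_mu(x), with the
            # f(x) term acting only as a variance-reduction baseline.
            grad += (f(x + mu * z) - f(x)) / mu * z
        return grad / n_samples

    # Illustrative use on a non-smooth convex function (hinge-like loss + l1 term).
    f = lambda w: np.maximum(0.0, 1.0 - w @ np.array([1.0, -2.0])) + 0.1 * np.abs(w).sum()
    w = np.zeros(2)
    for t in range(200):
        w -= 0.05 / np.sqrt(t + 1) * smoothed_gradient(f, w)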


Clustering and Bayesian network for image of faces classification

arXiv.org Artificial Intelligence

In a content-based image retrieval (CBIR) system, target images are sorted by feature similarity with respect to the query. In this paper, we propose a new approach combining tangent distance, the k-means algorithm, and Bayesian networks for image classification. First, we use the tangent distance technique to compute several tangent spaces representing the same image; the objective is to reduce the error in the classification phase. Second, we partition the image into a set of blocks and compute a vector of descriptors for each block. Then, we use k-means to cluster the low-level features, including color and texture information, to build a vector of labels for each image. Finally, we apply five variants of Bayesian network classifiers (Naïve Bayes (NB), Global Tree Augmented Naïve Bayes (GTAN), Global Forest Augmented Naïve Bayes (GFAN), Tree Augmented Naïve Bayes for each class (TAN), and Forest Augmented Naïve Bayes for each class (FAN)) to classify the face images using the vector of labels. To validate the feasibility and effectiveness of the approach, we compare the results of GFAN to FAN and to the other classifiers (NB, GTAN, TAN). The results show that FAN outperforms GFAN, NB, GTAN, and TAN in overall classification accuracy.
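
As a rough illustration of the label-vector construction described above, the sketch below clusters per-block descriptors with k-means and feeds the resulting discrete label vectors to a Naive Bayes classifier. The block descriptors, cluster count, and scikit-learn classifier are placeholders; the tangent-distance and TAN/FAN stages of the paper are omitted.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.naive_bayes import CategoricalNB

    # Hypothetical data: n_images face images, each cut into n_blocks blocks,
    # each block described by a d-dimensional colour/texture descriptor.
    rng = np.random.default_rng(0)
    n_images, n_blocks, d = 200, 16, 8
    blocks = rng.normal(size=(n_images, n_blocks, d))
    y = rng.integers(0, 4, size=n_images)          # toy face-identity labels

    # Cluster all block descriptors; each image becomes a vector of block labels.
    kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
    kmeans.fit(blocks.reshape(-1, d))
    label_vectors = kmeans.predict(blocks.reshape(-1, d)).reshape(n_images, n_blocks)

    # A simple Naive Bayes classifier over the discrete label vectors.
    clf = CategoricalNB().fit(label_vectors, y)
    print(clf.score(label_vectors, y))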


Characterization of Dynamic Bayesian Network

arXiv.org Artificial Intelligence

In this report, we are interested in Dynamic Bayesian Networks (DBNs) as a model that incorporates the temporal dimension under uncertainty. We start with the basics of DBNs, focusing in particular on inference and learning concepts and algorithms. We then present different levels and methods of creating DBNs, as well as approaches for incorporating the temporal dimension into static Bayesian networks.


The threshold EM algorithm for parameter learning in bayesian network with incomplete data

arXiv.org Artificial Intelligence

Bayesian networks (BN) are used in a wide range of applications, but they have one issue concerning parameter learning: in real applications, training data are often incomplete or some nodes are hidden. To deal with this problem, several parameter learning algorithms have been suggested, notably EM, Gibbs sampling, and the RBE algorithm. In order to limit the search space and escape the local maxima produced by the EM algorithm, this paper presents a parameter learning algorithm that is a fusion of the EM and RBE algorithms. The algorithm incorporates the range of each parameter into the EM algorithm: this range is calculated by the first step of the RBE algorithm, allowing a regularization of each parameter of the Bayesian network after the maximization step of EM. The threshold EM algorithm is applied to brain tumor diagnosis and shows some advantages and disadvantages over the EM algorithm.
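
The sketch below illustrates the thresholding idea on a single conditional probability table: an EM-style estimate is clipped into per-parameter ranges (stand-ins for the bounds produced by the first step of RBE) and renormalized. The table, expected counts, and bounds are made up for illustration and are not the paper's values.

    import numpy as np

    def threshold_m_step(expected_counts, lower, upper):
        # M-step estimate of one CPT column, clipped into [lower, upper] per
        # parameter (the intervals playing the role of the RBE bounds), then
        # renormalized to sum to 1. The renormalization may move values slightly;
        # a full implementation would project onto the constrained simplex.
        theta = expected_counts / expected_counts.sum()   # usual EM M-step
        theta = np.clip(theta, lower, upper)              # threshold step
        return theta / theta.sum()

    # Toy example: a 3-state node with hypothetical expected counts and bounds.
    counts = np.array([8.0, 1.0, 1.0])
    lower  = np.array([0.2, 0.1, 0.1])
    upper  = np.array([0.7, 0.6, 0.6])
    print(threshold_m_step(counts, lower, upper))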


Machine Cognition Models: EPAM and GPS

arXiv.org Artificial Intelligence

Throughout history, human beings have tried to delegate their daily tasks to other creatures, which was one of the main drivers behind the rise of civilizations. It started with deploying animals to automate tasks in agriculture (bulls), transportation (e.g., horses and donkeys), and even communication (pigeons). Millennia later came the Golden Age, with "Al-Jazari" and other Muslim inventors who were pioneers of automation; this in turn gave birth to the Industrial Revolution in Europe centuries later. At the end of the nineteenth century a new era began, the computational era, the most advanced technological and scientific development driving mankind and underlying the evolution of the sciences, such as medicine, communication, education, and physics. At this edge of technology, engineers and scientists are trying to model machines that behave as they do, which pushed us to think about designing and implementing "things that think"; thus artificial intelligence was born. In this work we cover two of the major discoveries and studies in the field of machine cognition: the "Elementary Perceiver and Memorizer" (EPAM) and "The General Problem Solver" (GPS). The first focuses mainly on implementing human verbal-learning behavior, while the second tries to model an architecture that is able to solve problems in general (e.g., theorem proving, chess playing, and arithmetic). We cover the major goals and main ideas of each model, compare their strengths and weaknesses, and give their fields of application. Finally, we suggest a real-life implementation of a cognitive machine.


An Intelligent Location Management approaches in GSM Mobile Network

arXiv.org Artificial Intelligence

Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of the update costs of the location databases must be minimized. Previous work relying on fixed location databases is unable to fully exploit knowledge of user mobility patterns in the system to achieve this minimization. This study presents an intelligent location management approach that combines an intelligent information system with knowledge-base technologies, so that user patterns can be changed dynamically and the transitions between the VLR and HLR reduced. The study provides algorithms able to handle location registration and call delivery.
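
As a toy illustration of exploiting mobility patterns to cut VLR/HLR traffic, the sketch below keeps a per-user profile of frequently visited location areas and skips the costly HLR update when the user moves inside that profile. The data structures, profile size, and update rule are hypothetical, not the paper's algorithm.

    from collections import Counter

    class ProfileBasedLocationManager:
        # Toy location manager: the HLR stores a set of "profile" areas per user;
        # moves inside the profile update only the VLR, while moves outside it
        # trigger an HLR update and refresh the profile.

        def __init__(self, profile_size=3):
            self.profile_size = profile_size
            self.visits = {}      # user -> Counter of visited location areas
            self.hlr = {}         # user -> set of profile areas
            self.vlr = {}         # user -> current location area
            self.hlr_updates = 0

        def move(self, user, area):
            self.visits.setdefault(user, Counter())[area] += 1
            self.vlr[user] = area                      # cheap local (VLR) update
            if area not in self.hlr.get(user, set()):  # costly HLR update only
                top = self.visits[user].most_common(self.profile_size)
                self.hlr[user] = {a for a, _ in top} | {area}
                self.hlr_updates += 1

        def locate(self, user):
            # Call delivery: page the current VLR area first, then the profile.
            return self.vlr.get(user), self.hlr.get(user, set())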


Fast projections onto mixed-norm balls with applications

arXiv.org Machine Learning

Joint sparsity offers powerful structural cues for feature selection, especially for variables that are expected to demonstrate a "grouped" behavior. Such behavior is commonly modeled via group-lasso, multitask lasso, and related methods where feature selection is effected via mixed-norms. Several mixed-norm based sparse models have received substantial attention, and for some cases efficient algorithms are also available. Surprisingly, several constrained sparse models seem to be lacking scalable algorithms. We address this deficiency by presenting batch and online (stochastic-gradient) optimization methods, both of which rely on efficient projections onto mixed-norm balls. We illustrate our methods by applying them to the multitask lasso. We conclude by mentioning some open problems.
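
As a concrete example of the kind of projection these methods rely on, the sketch below projects a matrix onto an l1,2 mixed-norm ball (sum of row-group l2 norms bounded by tau) by projecting the vector of group norms onto an l1 ball and rescaling each group. This is a standard construction for the l1,2 case and only a sketch, not the paper's exact routines.

    import numpy as np

    def project_l1(v, tau):
        # Euclidean projection of a nonnegative vector v onto the l1 ball of radius tau.
        if v.sum() <= tau:
            return v.copy()
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        k = np.arange(1, len(u) + 1)
        rho = np.nonzero(u - (css - tau) / k > 0)[0][-1]
        theta = (css[rho] - tau) / (rho + 1.0)
        return np.maximum(v - theta, 0.0)

    def project_l12_ball(W, tau):
        # Project W onto {W : sum over rows g of ||W_g||_2 <= tau}: project the
        # vector of row norms onto the l1 ball, then rescale each row to its new norm.
        norms = np.linalg.norm(W, axis=1)
        new_norms = project_l1(norms, tau)
        scale = np.where(norms > 0, new_norms / np.maximum(norms, 1e-12), 0.0)
        return W * scale[:, None]

    # Toy multitask-style weight matrix (rows = feature groups, columns = tasks).
    W = np.random.default_rng(0).normal(size=(5, 3))
    P = project_l12_ball(W, tau=1.0)
    print(np.linalg.norm(P, axis=1).sum())   # <= 1.0 up to numerical error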


Development of knowledge Base Expert System for Natural treatment of Diabetes disease

arXiv.org Artificial Intelligence

The development of an expert system for the treatment of diabetes using natural methods is a new information technology derived from Artificial Intelligence research, built with the ESTA (Expert System Text Animation) system. The proposed expert system contains knowledge about various natural treatment methods (massage, herbal remedies/proper nutrition, acupuncture, gems) for diabetes in human beings. The system is developed in ESTA (Expert System shell for Text Animation), which is a Visual Prolog 7.3 application. The knowledge for the system is acquired from domain experts, texts, and other related sources.
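
Since the system's knowledge base is rule-based, the toy sketch below shows the flavour of such if-then consultation rules in Python (the original is written in ESTA/Visual Prolog). The rules and recommendations here are invented placeholders, not content from the paper and not medical advice.

    # Toy rule-based consultation in the spirit of an ESTA knowledge base.
    # All facts, rules, and suggestions below are invented placeholders.
    RULES = [
        (lambda facts: facts.get("prefers") == "herbal",  "Open the herbal/nutrition section."),
        (lambda facts: facts.get("prefers") == "massage", "Open the massage section."),
        (lambda facts: True,                              "Open the general consultation section."),
    ]

    def consult(facts):
        # Fire the first rule whose condition matches the user's answers.
        for condition, advice in RULES:
            if condition(facts):
                return advice

    print(consult({"prefers": "herbal"}))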


Tight Sample Complexity of Large-Margin Learning

arXiv.org Machine Learning

We obtain a tight distribution-specific characterization of the sample complexity of large-margin classification with L_2 regularization: We introduce the \gamma-adapted-dimension, which is a simple function of the spectrum of a distribution's covariance matrix, and show distribution-specific upper and lower bounds on the sample complexity, both governed by the \gamma-adapted-dimension of the source distribution. We conclude that this new quantity tightly characterizes the true sample complexity of large-margin classification. The bounds hold for a rich family of sub-Gaussian distributions.
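
As a quick illustration, the sketch below computes a gamma-adapted-dimension-style quantity from a covariance spectrum, under the assumption that it is the smallest k whose spectral tail satisfies sum_{i>k} lambda_i <= gamma^2 * k; this is our reading of the definition, so consult the paper for the precise statement. The spectrum used is synthetic.

    import numpy as np

    def gamma_adapted_dimension(eigvals, gamma):
        # Smallest k such that the spectral tail sum_{i>k} lambda_i <= gamma^2 * k
        # (assumed form of the gamma-adapted-dimension; see the paper).
        lam = np.sort(np.asarray(eigvals))[::-1]
        tails = np.concatenate([[lam.sum()], lam.sum() - np.cumsum(lam)])  # tails[k] = sum_{i>k}
        for k in range(len(lam) + 1):
            if tails[k] <= gamma ** 2 * k:
                return k
        return len(lam)

    # Synthetic power-law spectrum, purely illustrative.
    lam = 1.0 / np.arange(1, 51) ** 2
    print(gamma_adapted_dimension(lam, gamma=0.5))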


Mouse Simulation Using Two Coloured Tapes

arXiv.org Artificial Intelligence

In this paper, we present a novel approach to Human Computer Interaction (HCI) in which cursor movement is controlled using a real-time camera. Current methods involve changing mouse hardware, such as adding more buttons or changing the position of the tracking ball. Instead, our method uses a camera and computer vision techniques, such as image segmentation and gesture recognition, to control mouse tasks (left and right clicking, double-clicking, and scrolling), and we show how it can perform everything that current mouse devices can. The software will be developed in the Java language. Recognition and pose estimation in this system are user independent and robust, as we use coloured tapes on the fingers to perform actions. The software can be used as an intuitive input interface for applications that require multi-dimensional control, e.g., computer games.
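
To make the colour-tracking idea concrete, here is a minimal Python/OpenCV sketch (the paper's implementation is in Java) that segments one tape colour in HSV space and maps its centroid to the cursor position via pyautogui. The HSV bounds, single-tape tracking, and screen mapping are illustrative assumptions; clicking and scrolling gestures are omitted.

    import cv2
    import numpy as np
    import pyautogui

    LOWER, UPPER = np.array([40, 80, 80]), np.array([80, 255, 255])  # assumed green-tape HSV range
    screen_w, screen_h = pyautogui.size()

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)          # segment the tape colour
        m = cv2.moments(mask)
        if m["m00"] > 0:                               # centroid of the tape blob
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            h, w = mask.shape
            pyautogui.moveTo(cx / w * screen_w, cy / h * screen_h)  # map to screen coords
        cv2.imshow("mask", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()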