Artificial Intelligence in Reverse Supply Chain Management: The State of the Art
Xing, Bo, Gao, Wen-Jing, Battle, Kimberly, Marwala, Tshilidzi, Nelwamondo, Fulufhelo V.
Product take-back legislation forces manufacturers to bear the costs of collection and disposal of products that have reached the end of their useful lives. To reduce these costs, manufacturers can consider reuse, remanufacturing and/or recycling of components as an alternative to disposal. The implementation of such alternatives usually requires appropriate reverse supply chain management. As the concept of the reverse supply chain gains popularity in practice, the use of artificial intelligence approaches in this area is also becoming popular. The purpose of this paper is therefore to give an overview of recent publications on the application of artificial intelligence techniques to the reverse supply chain, with emphasis on certain types of product returns.
Ultra-high Dimensional Multiple Output Learning With Simultaneous Orthogonal Matching Pursuit: A Sure Screening Approach
We propose a novel application of the Simultaneous Orthogonal Matching Pursuit (S-OMP) procedure for sparsistent variable selection in ultra-high dimensional multi-task regression problems. Screening of variables, as introduced in \cite{fan08sis}, is an efficient and highly scalable way to remove many irrelevant variables from the set of all variables while retaining all the relevant ones. S-OMP can be applied to problems with hundreds of thousands of variables, and once the number of variables is reduced to a manageable size, a more computationally demanding procedure can be used to identify the relevant variables for each of the regression outputs. To our knowledge, this is the first attempt to utilize the relatedness of multiple outputs to perform fast screening of relevant variables. As our main theoretical contribution, we prove that, asymptotically, S-OMP is guaranteed to reduce an ultra-high number of variables to below the sample size without losing true relevant variables. We also provide formal evidence that a modified Bayesian information criterion (BIC) can be used to efficiently determine the number of iterations in S-OMP. We further provide empirical evidence of the benefit of variable selection using multiple regression outputs jointly, as opposed to performing variable selection for each output separately. The finite sample performance of S-OMP is demonstrated in extensive simulation studies and on a genetic association mapping problem. Keywords: Adaptive Lasso; Greedy forward regression; Orthogonal matching pursuit; Multi-output regression; Multi-task learning; Simultaneous orthogonal matching pursuit; Sure screening; Variable selection
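The joint-screening idea in this abstract can be sketched as follows: at each iteration, the variable that best explains the residuals of all outputs simultaneously is added to the active set, and a joint least-squares refit updates the residuals. This is only an illustrative sketch of S-OMP as described above, not the authors' implementation; the fixed iteration cap stands in for the paper's BIC-based stopping rule, and all names are ours.

```python
# Minimal sketch of S-OMP screening for multi-task regression, assuming the
# columns of X are standardized; variable and function names are illustrative.
import numpy as np

def somp_screen(X, Y, max_iter=50):
    """Greedily select columns of X (n x p) that jointly explain the
    multi-task responses Y (n x K); returns the ordered active set."""
    n, p = X.shape
    residual = Y.copy()
    active = []
    for _ in range(max_iter):
        # Score each variable by its joint correlation with all output residuals.
        scores = np.linalg.norm(X.T @ residual, axis=1)
        scores[active] = -np.inf          # never re-select an active variable
        active.append(int(np.argmax(scores)))
        # Refit least squares on the current active set and update residuals.
        B, *_ = np.linalg.lstsq(X[:, active], Y, rcond=None)
        residual = Y - X[:, active] @ B
    return active
```

Once the active set is below the sample size, a more expensive per-output selection procedure can be run on the retained variables only, as the abstract suggests.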
Fast Convergent Algorithms for Expectation Propagation Approximate Bayesian Inference
Seeger, Matthias W., Nickisch, Hannes
We propose a novel algorithm to solve the expectation propagation relaxation of Bayesian inference for continuous-variable graphical models. In contrast to most previous algorithms, our method is provably convergent. By marrying convergent EP ideas from Opper & Winther (2005) with covariance decoupling techniques (Wipf & Nagarajan, 2008; Nickisch & Seeger, 2009), it runs at least an order of magnitude faster than the most commonly used EP solver.
Descriptive-complexity based distance for fuzzy sets
A new distance function dist(A,B) for fuzzy sets A and B is introduced. It is based on the descriptive complexity, i.e., the number of bits (on average) that are needed to describe an element in the symmetric difference of the two sets. The distance gives the amount of additional information needed to describe any one of the two sets given the other. We prove its mathematical properties and perform pattern clustering on data based on this distance.
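Since the abstract does not give the exact formula, the sketch below only illustrates one plausible reading of the idea: take the element-wise symmetric difference of the two membership functions and measure the average number of bits needed to describe an element drawn from it, i.e., its entropy. Both the symmetric-difference choice and the entropy reading are assumptions, and the function name is ours.

```python
# Illustrative sketch only: assumes the symmetric difference of fuzzy sets A
# and B is measured element-wise as |mu_A - mu_B|, and "bits per element" is
# the entropy of the normalized symmetric-difference memberships.
import numpy as np

def fuzzy_sym_diff_bits(mu_a, mu_b):
    """Average number of bits to describe an element of the symmetric
    difference of two fuzzy sets given as membership arrays on one universe."""
    diff = np.abs(np.asarray(mu_a, float) - np.asarray(mu_b, float))
    total = diff.sum()
    if total == 0.0:                 # identical sets: nothing to describe
        return 0.0
    p = diff / total                 # normalized mass of the symmetric difference
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Example: two fuzzy sets over a universe of four elements.
print(fuzzy_sym_diff_bits([1.0, 0.8, 0.2, 0.0], [0.9, 0.3, 0.2, 0.4]))
```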
Dynamic Knowledge Capitalization through Annotation among Economic Intelligence Actors in a Collaborative Environment
Okunoye, Olusoji, Oladejo, Bolanle, Odumuyiwa, Victor
The shift from an industrial economy to a knowledge economy in today's world has revolutionized strategic planning in organizations as well as their problem-solving approaches. The point of focus today is knowledge and service production, with more emphasis being laid on knowledge capital. Many organizations are investing in tools that facilitate knowledge sharing among their employees, and they are also promoting and encouraging collaboration among their staff in order to build the organization's knowledge capital, with the ultimate goal of creating a lasting competitive advantage. One of the current leading approaches used for solving an organization's decision problems is the Economic Intelligence (EI) approach, which involves interactions among various actors called EI actors. These actors collaborate to ensure the overall success of the decision problem solving process. In the course of the collaboration, the actors express knowledge which could be capitalized for future reuse. In this paper, we first propose an annotation model for knowledge elicitation among EI actors. Because of the need to build a knowledge capital, we also propose a dynamic knowledge capitalization approach for managing knowledge produced by the actors. Finally, the need to manage the interactions and the interdependencies among collaborating EI actors leads to our third proposition, which constitutes an awareness mechanism for group work management.
A new Recommender system based on target tracking: a Kalman Filter approach
Nowakowski, Samuel, Bernier, Cédric, Boyer, Anne
In this paper, we propose a new approach for recommender systems based on target tracking by Kalman filtering. We assume that users and the resources they have seen are vectors in the multidimensional space of the categories of the resources. Knowing this space, we propose an algorithm based on a Kalman filter to track users and to predict their future position in the recommendation space.
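As a rough illustration of the tracking idea described above (not the authors' exact model), a user's position in the category space can be treated as the hidden state of a random-walk process, with each consumed resource's category vector acting as a noisy observation; a standard Kalman filter then yields the predicted next position. The dynamics, noise covariances, and names below are all illustrative assumptions.

```python
# Minimal Kalman-filter sketch: the user's position in the d-dimensional
# category space follows a random walk; each seen resource's category vector
# is a noisy observation of that position. Covariances q and r are assumed.
import numpy as np

def kalman_track_user(observations, d, q=0.01, r=0.1):
    """Track a user's position in category space from the category vectors of
    the resources they have seen; returns the predicted next position."""
    x = np.zeros(d)            # state estimate: user's position
    P = np.eye(d)              # state covariance
    Q = q * np.eye(d)          # process noise (how fast tastes drift)
    R = r * np.eye(d)          # observation noise
    for z in observations:
        P = P + Q                          # predict step (identity dynamics)
        K = P @ np.linalg.inv(P + R)       # Kalman gain
        x = x + K @ (np.asarray(z) - x)    # update with the new resource vector
        P = (np.eye(d) - K) @ P
    return x                               # predicted position in category space

# Example: a user drifting from "news" toward "sports" in a 3-category space.
history = [[0.9, 0.1, 0.0], [0.7, 0.3, 0.0], [0.4, 0.5, 0.1]]
print(kalman_track_user(history, d=3))
```

Resources closest to the predicted position would then be the natural recommendation candidates.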
On the Implementation of GNU Prolog
Diaz, Daniel, Abreu, Salvador, Codognet, Philippe
GNU Prolog is a general-purpose implementation of the Prolog language, which distinguishes itself from most other systems by being, above all else, a native-code compiler that produces standalone executables that do not rely on any byte-code emulator or meta-interpreter. Other aspects that stand out include the explicit organization of the Prolog system as a multipass compiler, in which intermediate representations are materialized, in the Unix compiler tradition. GNU Prolog also includes an extensible and high-performance finite domain constraint solver, integrated with the Prolog language but implemented using independent lower-level mechanisms. This article discusses the main issues involved in designing and implementing GNU Prolog: requirements, system organization, performance and portability issues, as well as its position with respect to other Prolog system implementations and the ISO standardization initiative.
Dynamic Capitalization and Visualization Strategy in Collaborative Knowledge Management System for EI Process
Oladejo, Bolanle, Odumuyiwa, Victor, David, Amos
Knowledge is attributed to humans, whose problem-solving behavior is subjective and complex. In today's knowledge economy, the need to manage knowledge produced by a community of actors cannot be overemphasized. This is because actors possess some level of tacit knowledge, which is generally difficult to articulate. Problem-solving requires searching and sharing of knowledge among a group of actors in a particular context. Knowledge expressed within the context of a problem resolution must be capitalized for future reuse. In this paper, an approach that permits dynamic capitalization of relevant and reliable actors' knowledge in solving decision problems following the Economic Intelligence process is proposed. A knowledge annotation method and temporal attributes are used for handling the complexity of the communication among actors and for contextualizing expressed knowledge. A prototype is built to demonstrate the functionalities of a collaborative Knowledge Management system based on this approach. It is tested with sample cases, and the results show that dynamic capitalization leads to knowledge validation, hence increasing the reliability of captured knowledge for reuse. The system can be adapted to various domains.
Translating biomarkers between multi-way time-series experiments
Huopaniemi, Ilkka, Suvitaival, Tommi, Orešič, Matej, Kaski, Samuel
Translating potential disease biomarkers between multi-species 'omics' experiments is a new direction in biomedical research. Existing methods are limited to simple experimental setups such as basic healthy-diseased comparisons. Most of these methods also require an a priori matching of the variables (e.g., genes or metabolites) between the species. However, many experiments have a complicated multi-way experimental design, often involving irregularly sampled time-series measurements, and for instance metabolites do not always have known matchings between organisms. We introduce a Bayesian modelling framework for translating between multiple species the results of 'omics' experiments that have a complex multi-way, time-series experimental design. The underlying assumption is that the unknown matching can be inferred from the response of the variables to multiple covariates, including time.
To study the phenomenon of Moravec's Paradox
"Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much powerful, though usually unconscious, sensor motor knowledge. We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it."- Hans Moravec Moravec's paradox is involved with the fact that it is the seemingly easier day to day problems that are harder to implement in a machine, than the seemingly complicated logic based problems of today. The results prove that most artificially intelligent machines are as adept if not more than us at under-taking long calculations or even play chess, but their logic brings them nowhere when it comes to carrying out everyday tasks like walking, facial gesture recognition or speech recognition.