Ergonomics Analysis for Vehicle Assembly Using Artificial Intelligence
In this article I discuss a deployed application at Ford Motor Company that uses AI to analyze potential ergonomic concerns at Ford's assembly plants. The manufacture of motor vehicles is a complex and dynamic process, and the costs of workplace injuries and of lost productivity caused by poor ergonomic design can be substantial. Ford has developed two separate ergonomic analysis systems that have been integrated into its process planning system for manufacturing, the Global Study and Process Allocation System (GSPAS). GSPAS has become the global repository of standardized engineering processes and data for assembling all Ford vehicles, including parts, tools, and standard labor times. One of the more significant benefits of GSPAS is its controlled language, known as Standard Language, which is used throughout Ford to write process assembly instructions. AI is already used within GSPAS for Standard Language validation and direct labor management. The work described here shows how Ford built on its previous success with AI to extend the technology into the new domain of ergonomics analysis.
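As an illustration of what controlled-language checking involves, here is a minimal sketch in Python. GSPAS's actual Standard Language validator is not public, so the approved-verb list and the verb-plus-object pattern below are invented assumptions, not Ford's rules.

```python
# Illustrative sketch only: the real Standard Language rules are proprietary.
# The verb list and the verb-object pattern below are assumptions chosen to
# show the general idea of controlled-language validation.

APPROVED_VERBS = {"OBTAIN", "POSITION", "SECURE", "INSTALL", "CONNECT"}  # hypothetical

def validate_instruction(instruction: str) -> list[str]:
    """Return a list of violations for one process assembly instruction."""
    violations = []
    words = instruction.strip().upper().split()
    if not words:
        return ["empty instruction"]
    if words[0] not in APPROVED_VERBS:
        violations.append(f"'{words[0]}' is not an approved action verb")
    if len(words) < 2:
        violations.append("instruction lacks an object")
    return violations

print(validate_instruction("Install bolt to bracket"))  # []
print(validate_instruction("Wiggle the connector"))     # ["'WIGGLE' is not an approved action verb"]
```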
AI in the News
"This summer, three local students will explore News" collection that can be found--complete Warburg school, will spend the Web pages. The only cost of the computers for five to six hours a day, the class is about $300 for a kit, which Gtech Gives Girls Blueprint for Success. May activities to break up the day." The space agency also provided 27, 2004 (www.westuexaminer.com). "The free kits for another 30 online students, best way to increase girls' interest in engineering Girls Build Robots at RoboCamp.
Tenth Anniversary of the Plastics Color Formulation Tool
Since 1994, GE Plastics has employed a case-based reasoning (CBR) tool that determines color formulas that match requested colors. This tool, called FormTool, has saved GE millions of dollars in productivity and material (that is, colorant) costs. The technology developed in FormTool has been used to create an online color-selection tool for our customers called ColorXpress Select. A customer innovation center has been developed around the FormTool software.
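To make the case-based reasoning idea concrete, the following is a minimal retrieval sketch. FormTool's real case representation, similarity measure, and adaptation step are proprietary; the CIELAB coordinates, colorant names, and loadings here are illustrative assumptions.

```python
import math

# A minimal CBR retrieval sketch for color matching. FormTool's actual cases
# and similarity function are not public; these values are illustrative.

cases = [
    # (target color in CIELAB, colorant formula that produced it)
    ((52.0, 41.0, 28.0), {"red_pigment": 0.8, "yellow_pigment": 0.3}),
    ((75.0, -2.0, 60.0), {"yellow_pigment": 1.1, "white_pigment": 0.5}),
    ((30.0, 5.0, -40.0), {"blue_pigment": 0.9, "white_pigment": 0.2}),
]

def delta_e(c1, c2):
    """Euclidean distance in CIELAB, a standard color-difference measure."""
    return math.dist(c1, c2)

def retrieve(requested_color):
    """Retrieve the stored case whose color is closest to the request."""
    return min(cases, key=lambda case: delta_e(case[0], requested_color))

color, formula = retrieve((50.0, 38.0, 30.0))
print(f"closest case {color} -> starting formula {formula}")
```

In a full CBR system the retrieved formula would then be adapted toward the requested color and the corrected result stored as a new case.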
Special Issue on Innovative Applications of AI: Guest Editor's Introduction
Hill, Randall W., Jr., Jacobstein, Neil
We are pleased to publish this special selection of articles from the Sixteenth Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-04), held July 27-29, 2004, in San Jose, California. IAAI is the premier venue for learning about AI's impact through deployed applications and emerging AI technologies. Case studies of deployed applications with measurable benefits arising from the use of AI technology provide clear evidence of the impact and value of AI to today's world. The emerging applications track features technologies that are rapidly maturing to the point of application. The seven articles selected for this special issue are extended versions of papers that appeared at the conference. Four of the articles describe deployed applications that are already in use in the field. The other three articles, from the emerging technology track, were selected because they are particularly innovative and show great potential for deployment.
Pure Nash Equilibria: Hard and Easy Games
Gottlob, G., Greco, G., Scarcello, F.
We investigate complexity issues related to pure Nash equilibria of strategic games. We show that, even in very restrictive settings, determining whether a game has a pure Nash equilibrium is NP-hard, while deciding whether a game has a strong Nash equilibrium is Σ₂ᵖ-complete. We then study practically relevant restrictions that lower the complexity. In particular, we are interested in quantitative and qualitative restrictions on the way each player's payoff depends on the moves of other players. We say that a game has small neighborhood if the utility function of each player depends only on (the actions of) a logarithmically small number of other players. The dependency structure of a game G can be expressed by a graph DG(G) or by a hypergraph H(G). By relating Nash equilibrium problems to constraint satisfaction problems (CSPs), we show that if G has small neighborhood and if H(G) has bounded hypertree width (or if DG(G) has bounded treewidth), then finding pure Nash and Pareto equilibria is feasible in polynomial time. If the game is graphical, then these problems are LOGCFL-complete and thus in the class NC² of highly parallelizable problems.
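For readers who want the decision problem in concrete form, here is a naive brute-force check for pure Nash equilibria in a two-player game. This enumeration is exponential in the number of players in general, consistent with the hardness results above; the payoff tables are the standard Prisoner's Dilemma, not an example from the paper.

```python
from itertools import product

# Brute-force pure Nash equilibrium check for a two-player strategic game.
# payoffs[player][(a0, a1)] = utility of `player` under action profile (a0, a1).
ACTIONS = [0, 1]
payoffs = [
    {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1},  # player 0 (row player)
    {(0, 0): 3, (0, 1): 5, (1, 0): 0, (1, 1): 1},  # player 1 (column player)
]

def is_pure_nash(profile):
    """True if no player can gain by unilaterally deviating from `profile`."""
    for player in range(2):
        for deviation in ACTIONS:
            alt = list(profile)
            alt[player] = deviation
            if payoffs[player][tuple(alt)] > payoffs[player][profile]:
                return False
    return True

equilibria = [p for p in product(ACTIONS, repeat=2) if is_pure_nash(p)]
print(equilibria)  # [(1, 1)]: mutual defection is the unique pure equilibrium
```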
Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms
Khardon, R., Roth, D., Servedio, R. A.
The paper studies machine learning problems where each example is described using a set of Boolean features and where hypotheses are represented by linear threshold elements. One method of increasing the expressiveness of learned hypotheses in this context is to expand the feature set to include conjunctions of basic features. This can be done explicitly or, where possible, by using a kernel function. Focusing on the well-known Perceptron and Winnow algorithms, the paper demonstrates a tradeoff between the computational efficiency with which the algorithm can be run over the expanded feature space and the generalization ability of the corresponding learning algorithm. We first describe several kernel functions that capture either limited forms of conjunctions or all conjunctions. We show that these kernels can be used to efficiently run the Perceptron algorithm over a feature space of exponentially many conjunctions; however, we also show that, using such kernels, the Perceptron algorithm can provably make an exponential number of mistakes even when learning simple functions. We then consider the question of whether kernel functions can analogously be used to run the multiplicative-update Winnow algorithm over an expanded feature space of exponentially many conjunctions. Known upper bounds imply that the Winnow algorithm can learn disjunctive normal form (DNF) formulae with a polynomial mistake bound in this setting. However, we prove that it is computationally hard to simulate Winnow's behavior for learning DNF over such a feature set. This implies that the kernel functions that correspond to running Winnow for this problem are not efficiently computable, and that there is no general construction that can run Winnow with kernels.
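The flavor of these kernels can be seen in a small sketch: for Boolean vectors x and y, the number of conjunctions of literals satisfied by both (including the empty, always-true conjunction) equals 2 raised to the number of positions where x and y agree, and this quantity serves as a kernel for the dual Perceptron. The training loop and toy target below follow the standard kernel Perceptron and are illustrative, not code from the paper.

```python
def conjunction_kernel(x, y):
    """Number of conjunctions of literals true on both x and y (incl. empty)."""
    return 2 ** sum(xi == yi for xi, yi in zip(x, y))

def kernel_perceptron(examples, max_epochs=50):
    """Dual-form Perceptron; examples is a list of (x, label), label in {-1, +1}."""
    support = []                                  # (coefficient, stored example)
    for _ in range(max_epochs):
        mistakes = 0
        for x, label in examples:
            score = sum(c * conjunction_kernel(sx, x) for c, sx in support)
            if label * score <= 0:                # mistake: store the example
                support.append((label, x))
                mistakes += 1
        if mistakes == 0:                         # clean pass: data separated
            break
    return support

def predict(support, x):
    return 1 if sum(c * conjunction_kernel(sx, x) for c, sx in support) > 0 else -1

# Toy target: x1 AND x2 over three Boolean features.
data = [((a, b, c), 1 if a and b else -1)
        for a in (0, 1) for b in (0, 1) for c in (0, 1)]
model = kernel_perceptron(data)
print(all(predict(model, x) == y for x, y in data))  # True
```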
MAP estimation via agreement on (hyper)trees: Message-passing and linear programming
Wainwright, Martin J., Jaakkola, Tommi S., Willsky, Alan S.
We develop and analyze methods for computing provably optimal maximum a posteriori (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is tight if and only if all the tree distributions share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original distribution. Next we develop two approaches to attempting to obtain tight upper bounds: (a) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds; and (b) a tree-reweighted max-product message-passing algorithm that is related to but distinct from the max-product algorithm. In this way, we establish a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.
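As a baseline reference point, the sketch below runs ordinary max-product on a three-node chain, where message passing is exact; the paper's tree-reweighted variant modifies this algorithm with edge appearance probabilities. The potentials are arbitrary illustrations.

```python
import numpy as np

# Ordinary max-product on a 3-node chain MRF (x0 - x1 - x2), where it is exact.
# Potentials below are arbitrary; the paper's variant reweights these messages.

K = 2  # binary variables
node_pot = [np.array([1.0, 2.0]), np.array([1.5, 1.0]), np.array([1.0, 3.0])]
edge_pot = [np.array([[2.0, 1.0], [1.0, 2.0]])] * 2  # attractive coupling

# Backward pass: msg[i] is the max-product message into node i from node i+1.
msg = [np.ones(K) for _ in range(3)]
for i in (1, 0):
    # max over x_{i+1} of edge(x_i, x_{i+1}) * node(x_{i+1}) * incoming message
    msg[i] = np.max(edge_pot[i] * (node_pot[i + 1] * msg[i + 1]), axis=1)

# Forward decoding of the MAP assignment.
x = np.zeros(3, dtype=int)
x[0] = np.argmax(node_pot[0] * msg[0])
for i in (1, 2):
    x[i] = np.argmax(edge_pot[i - 1][x[i - 1]] * node_pot[i] * msg[i])
print(x)  # MAP assignment: [1 1 1] for these potentials
```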
Lossy source encoding via message-passing and decimation over generalized codewords of LDGM codes
Wainwright, Martin J., Maneva, Elitza
We describe message-passing and decimation approaches for lossy source coding using low-density generator matrix (LDGM) codes. In particular, this paper addresses the problem of encoding a Bernoulli(0.5) source: for randomly generated LDGM codes with suitably irregular degree distributions, our methods yield performance very close to the rate-distortion limit over a range of rates. Our approach is inspired by the survey propagation (SP) algorithm, originally developed by Mézard et al. for solving random satisfiability problems. Previous work by Maneva et al. shows how SP can be understood as belief propagation (BP) for an alternative representation of satisfiability problems. In analogy to this connection, our approach is to define a family of Markov random fields over generalized codewords, from which local message-passing rules can be derived in the standard way. The overall source encoding method is based on message passing, setting a subset of bits to their preferred values (decimation), and reducing the code.
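The outer control flow of such an encoder can be sketched as a decimation loop. In the real algorithm the per-bit biases come from running BP or SP on the LDGM factor graph; here `estimate_bias` is a random placeholder so the loop is runnable, and every detail below is an illustrative assumption rather than the authors' implementation.

```python
import random

# Schematic decimation loop. A real encoder would compute biases by message
# passing on the LDGM factor graph and re-solve the reduced code each round;
# `estimate_bias` here is a stand-in so the control flow runs end to end.

def estimate_bias(free_bits):
    """Placeholder for message passing: a bias in [-1, 1] per unfixed bit."""
    return {i: random.uniform(-1.0, 1.0) for i in free_bits}

def decimate(num_bits, fraction=0.25):
    fixed = {}                          # bit index -> 0/1 decision
    free_bits = set(range(num_bits))
    while free_bits:
        bias = estimate_bias(free_bits)
        # Fix the most strongly biased fraction of bits to their preferred
        # values, shrinking the remaining problem.
        k = max(1, int(fraction * len(free_bits)))
        for i in sorted(free_bits, key=lambda j: -abs(bias[j]))[:k]:
            fixed[i] = 1 if bias[i] > 0 else 0
            free_bits.remove(i)
    return fixed

print(decimate(16))
```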
Learning Concept Hierarchies from Text Corpora using Formal Concept Analysis
Cimiano, P., Hotho, A., Staab, S.
We present a novel approach to the automatic acquisition of taxonomies or concept hierarchies from a text corpus. The approach is based on Formal Concept Analysis (FCA), a method mainly used for the analysis of data, i.e. for investigating and processing explicitly given information. We follow Harris' distributional hypothesis and model the context of a certain term as a vector representing syntactic dependencies which are automatically acquired from the text corpus with a linguistic parser. On the basis of this context information, FCA produces a lattice that we convert into a special kind of partial order constituting a concept hierarchy. The approach is evaluated by comparing the resulting concept hierarchies with hand-crafted taxonomies for two domains: tourism and finance. We also directly compare our approach with hierarchical agglomerative clustering as well as with Bi-Section-KMeans as an instance of a divisive clustering algorithm. Furthermore, we investigate the impact of using different measures weighting the contribution of each attribute as well as of applying a particular smoothing technique to cope with data sparseness.
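A toy version of the FCA step can be written in a few lines: enumerate the closed (extent, intent) pairs of a small object-attribute context. The objects and attributes below are made-up stand-ins for the terms and parser-derived syntactic dependencies the paper uses.

```python
from itertools import combinations

# Brute-force Formal Concept Analysis on a tiny object-attribute context.
# Objects and attributes are invented stand-ins for corpus-derived features.

context = {
    "hotel": {"bookable", "has_rooms"},
    "car":   {"bookable", "drivable"},
    "bike":  {"drivable"},
}
objects = set(context)
attributes = set().union(*context.values())

def common_attributes(objs):
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def objects_having(attrs):
    return {o for o in objects if attrs <= context[o]}

# A formal concept is a pair (A, B) with B = common_attributes(A) and
# A = objects_having(B); enumerate extents by closing every object subset.
concepts = set()
for r in range(len(objects) + 1):
    for subset in combinations(sorted(objects), r):
        intent = frozenset(common_attributes(set(subset)))
        extent = frozenset(objects_having(intent))
        concepts.add((extent, intent))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(set(extent) or "{}", "<->", set(intent) or "{}")
```

Ordering these concepts by extent inclusion yields the lattice that the approach then converts into a concept hierarchy.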
Perseus: Randomized Point-based Value Iteration for POMDPs
Partially observable Markov decision processes (POMDPs) form an attractive and principled framework for agent planning under uncertainty. Point-based approximate techniques for POMDPs compute a policy based on a finite set of points collected in advance from the agent's belief space. We present a randomized point-based value iteration algorithm called Perseus. The algorithm performs approximate value backup stages, ensuring that in each backup stage the value of each point in the belief set is improved; the key observation is that a single backup may improve the value of many belief points. In contrast to other point-based methods, Perseus backs up only a (randomly selected) subset of points in the belief set, sufficient for improving the value of each belief point in the set. We show how the same idea can be extended to deal with continuous action spaces. Experimental results show the potential of Perseus in large-scale POMDP problems.
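A compact sketch of the Perseus stage structure, using the standard two-state Tiger POMDP, is given below. The model parameters are the usual textbook ones; the fixed number of stages and the grid of belief points are simplifications (the paper collects beliefs by simulating random interactions), so this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np
rng = np.random.default_rng(0)

# Perseus-style randomized point-based value iteration on the Tiger POMDP.
S, A, O = 2, 3, 2   # states {left, right}; actions {listen, open-left, open-right}
gamma = 0.95
# T[a, s, s'], Z[a, s', o], R[a, s]
T = np.empty((A, S, S)); Z = np.empty((A, S, O)); R = np.empty((A, S))
T[0] = np.eye(S)                      # listen: state unchanged
T[1] = T[2] = np.full((S, S), 0.5)    # opening a door resets the problem
Z[0] = [[0.85, 0.15], [0.15, 0.85]]   # listening is 85% accurate
Z[1] = Z[2] = 0.5                     # no information after opening
R[0] = [-1, -1]
R[1] = [-100, 10]                     # open-left: bad if the tiger is left
R[2] = [10, -100]

def backup(b, V):
    """Point-based Bellman backup at belief b, given alpha-vector set V."""
    best = None
    for a in range(A):
        g = R[a].copy()
        for o in range(O):
            # vector for (a, o): expected next-step value under the best alpha
            cand = [gamma * T[a] @ (Z[a][:, o] * alpha) for alpha in V]
            g = g + cand[int(np.argmax([b @ c for c in cand]))]
        if best is None or b @ g > b @ best:
            best = g
    return best

# Beliefs "collected in advance" (here: a grid over the belief simplex).
B = [np.array([p, 1 - p]) for p in np.linspace(0, 1, 21)]
V = [np.full(S, R.min() / (1 - gamma))]   # trivial lower-bound vector

for _ in range(100):                      # value-update stages
    improve = list(B)
    newV = []
    while improve:
        b = improve[rng.integers(len(improve))]     # random belief to back up
        alpha = backup(b, V)
        if b @ alpha < max(b @ v for v in V):       # keep the old best instead
            alpha = max(V, key=lambda v: b @ v)
        newV.append(alpha)
        # only beliefs whose value has not yet improved still need a backup
        improve = [b2 for b2 in improve
                   if max(b2 @ v for v in newV) < max(b2 @ v for v in V)]
    V = newV

print(len(V), max(np.array([0.5, 0.5]) @ v for v in V))  # value at uniform belief
```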