error parameter
Geometric Robot Calibration Using a Calibration Plate
Rameder, Bernhard, Gattringer, Hubert, Mueller, Andreas
In this paper a new method for geometric robot calibration is introduced, which uses a calibration plate with precisely known distances between its measuring points. The relative measurement between two points on the calibration plate is used to determine predefined error parameters of the system. Compared to conventional measurement systems, such as laser trackers or motion capture systems, the calibration plate is a mechanically more robust and cheaper alternative, which is furthermore easier to transport due to its small size. The calibration method, the plate design, the mathematical description of the error system as well as the identification of the parameters are described in detail. The error parameters are identified using the least squares method and a constrained optimization problem. The functionality of this method was demonstrated in experiments that led to promising results, which correlate well with those of a laser tracker calibration. The modeling and identification of the error parameters are done for a gantry machine, but are not restricted to that type of robot.
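The identification step described above can be sketched as an ordinary least-squares fit. Assuming a linearized error model in which the discrepancy between measured and nominal point-to-point distances is linear in the error parameters (the Jacobian, parameter values, and noise level below are illustrative, not taken from the paper):

```python
import numpy as np

# Toy sketch: identify error parameters p from relative distance
# measurements, assuming the linearized model
#     delta_d = J @ p + noise,
# where delta_d is the difference between measured and nominal
# point-to-point distances and J is the identification Jacobian.
rng = np.random.default_rng(0)

n_meas, n_params = 50, 3
J = rng.normal(size=(n_meas, n_params))    # identification Jacobian (illustrative)
p_true = np.array([0.02, -0.01, 0.005])    # "true" error parameters (illustrative)
delta_d = J @ p_true + rng.normal(scale=1e-4, size=n_meas)

# Least-squares estimate of the error parameters
p_hat, *_ = np.linalg.lstsq(J, delta_d, rcond=None)
print(p_hat)  # close to p_true
```

A constrained variant, as used in the paper, would replace the unconstrained solve with a constrained optimizer while keeping the same residual model.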
A novel step-by-step procedure for the kinematic calibration of robots using a single draw-wire encoder
Boschetti, Giovanni, Sinico, Teresa
Robot positioning accuracy is a key factor when performing high-precision manufacturing tasks. To effectively improve the accuracy of a manipulator, often up to a value close to its repeatability, calibration plays a crucial role. In the literature, various approaches to robot calibration have been proposed, and they range considerably in the type of measurement system and identification algorithm used. Our aim was to develop a novel step-by-step kinematic calibration procedure - where the parameters are estimated one at a time - that only uses 1D distance measurement data obtained through a draw-wire encoder. To pursue this objective, we derived an analytical approach to find, for each unknown parameter, a set of calibration points where the discrepancy between the measured and predicted distances only depends on that unknown parameter. This reduces the computational burden of the identification process while potentially improving its accuracy. Simulations and experimental tests were carried out on a 6 degrees-of-freedom robot arm: the results confirmed the validity of the proposed strategy. As a result, the proposed step-by-step calibration approach represents a practical, cost-effective and computationally less demanding alternative to standard calibration approaches, making robot calibration more accessible and easier to perform.
- Europe > Italy (0.04)
- North America > United States (0.04)
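The one-parameter-at-a-time idea above reduces each identification step to a scalar estimate. A minimal sketch, assuming calibration points have been chosen so that the wire-length discrepancy depends on a single unknown offset (the geometry and values are illustrative, not the paper's robot model):

```python
import numpy as np

# At calibration points chosen so the wire-length discrepancy depends
# only on a single unknown offset `delta`, that offset has the
# closed-form least-squares estimate below.
rng = np.random.default_rng(1)

delta_true = 0.003                       # unknown offset to identify [m]
d_nominal = rng.uniform(0.5, 1.5, 20)    # predicted wire lengths [m]
# Measured lengths: at these points the discrepancy is just delta + noise
d_measured = d_nominal + delta_true + rng.normal(scale=1e-5, size=20)

# One-parameter-at-a-time estimate: the mean discrepancy
delta_hat = np.mean(d_measured - d_nominal)
print(delta_hat)
```

Each subsequent parameter would be estimated the same way at its own set of calibration points, after substituting the parameters already identified into the kinematic model.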
Counterfactual Prediction Under Outcome Measurement Error
Guerdan, Luke, Coston, Amanda, Holstein, Kenneth, Wu, Zhiwei Steven
Across domains such as medicine, employment, and criminal justice, predictive models often target labels that imperfectly reflect the outcomes of interest to experts and policymakers. For example, clinical risk assessments deployed to inform physician decision-making often predict measures of healthcare utilization (e.g., costs, hospitalization) as a proxy for patient medical need. These proxies can be subject to outcome measurement error when they systematically differ from the target outcome they are intended to measure. However, prior modeling efforts to characterize and mitigate outcome measurement error overlook the fact that the decision being informed by a model often serves as a risk-mitigating intervention that impacts the target outcome of interest and its recorded proxy. Thus, in these settings, addressing measurement error requires counterfactual modeling of treatment effects on outcomes. In this work, we study intersectional threats to model reliability introduced by outcome measurement error, treatment effects, and selection bias from historical decision-making policies. We develop an unbiased risk minimization method which, given knowledge of proxy measurement error properties, corrects for the combined effects of these challenges. We also develop a method for estimating treatment-dependent measurement error parameters when these are unknown in advance. We demonstrate the utility of our approach theoretically and via experiments on real-world data from randomized controlled trials conducted in healthcare and employment domains. As importantly, we demonstrate that models correcting for outcome measurement error or treatment effects alone suffer from considerable reliability limitations. Our work underscores the importance of considering intersectional threats to model validity during the design and evaluation of predictive models for decision support.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- North America > United States > Oregon (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Research Report > Strength High (1.00)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Modeling & Simulation (0.95)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
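The measurement-error piece of the setup above can be illustrated with the classical correction for a noisily measured binary outcome. This sketch covers only that piece, not the paper's treatment-effect or selection-bias corrections, and assumes the flip rates are known:

```python
import numpy as np

# If the proxy label flips 0->1 with rate `fpr` and 1->0 with rate
# `fnr`, the observed positive rate relates to the true rate by
#     p_obs = p_true * (1 - fnr) + (1 - p_true) * fpr.
# Inverting this gives an unbiased estimate of p_true.
rng = np.random.default_rng(2)

p_true, fpr, fnr = 0.30, 0.10, 0.20               # illustrative values
y = rng.random(200_000) < p_true                  # true outcomes
flip = np.where(y, rng.random(y.size) < fnr,      # 1 -> 0 flips
                   rng.random(y.size) < fpr)      # 0 -> 1 flips
y_proxy = y ^ flip                                # observed proxy labels

p_obs = y_proxy.mean()
p_hat = (p_obs - fpr) / (1.0 - fpr - fnr)         # corrected estimate
print(p_hat)  # close to 0.30
```

The paper's unbiased risk minimization applies the same logic inside the training loss rather than to a single marginal rate.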
Let's go to Space. But this time, through Artificial Intelligence
There's no denying the fact that we live in a period where technology has inevitably become less counterfeit but rather more intelligent. Regardless of whether we talk about AI applications or the uses of its subsets specifically machine learning and deep learning, the scope is huge on what people could have or can envision. Given that, would it be bizarre to realize that AI applications have outperformed our customary lives and are currently taking control over space (Indian moon mission – Chandrayaan-2, for example)? Expanding the levels of automation and autonomy utilizing strategies from artificial intelligence takes into account a more extensive variety of space missions and furthermore frees people to zero in on tasks for which they are more qualified. At times, autonomy and automation are crucial to the success of the mission. For instance, deep space exploration may require more autonomy in the rocket, as communication with ground operators is adequately inconsistent to block persistent human monitoring for conceivably hazardous situations.
- North America > United States (0.16)
- Asia > India (0.05)
Dynamically Switching between Synergistic Workflows for Crowdsourcing
Lin, Christopher H. (University of Washington) | Mausam (University of Washington) | Weld, Daniel S. (University of Washington)
To ensure quality results from unreliable crowdsourced workers, task designers often construct complex workflows and aggregate worker responses from redundant runs. Frequently, they create several alternative workflows to accomplish the task, and choose a single workflow to deploy (perhaps the one that achieves the best performance during early experiments). However, this seemingly natural design paradigm does not achieve the full potential of crowdsourcing. In particular, using a single workflow (even the best) to accomplish a task is suboptimal. We show that alternative workflows can compose synergistically to yield a much higher quality output. We formalize the insight with a novel probabilistic graphical model, design and implement AgentHunt, a POMDP-based controller that dynamically switches between these workflows to achieve higher returns on investment, and design offline and online methods for learning model parameters. Live experiments on Amazon Mechanical Turk demonstrate the superiority of AgentHunt for the practical task of generating NLP training data, yielding up to 50% error reduction and greater net utility compared to previous methods.
- North America > United States > Washington > King County > Seattle (0.14)
- North America > Canada > British Columbia (0.14)
- Pacific Ocean (0.04)
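The controller idea above can be sketched greedily: maintain a Bayesian belief over the true answer and, after each worker response, pick the workflow with the larger expected belief shift per unit cost. This is a simplified stand-in for the paper's POMDP controller; the accuracies, costs, and stopping threshold are illustrative assumptions:

```python
import random

WORKFLOWS = {"wf1": (0.75, 1.0), "wf2": (0.90, 3.0)}  # (accuracy, cost), assumed

def update(belief, acc, says_a):
    """Bayes update of P(truth = A) after one worker response."""
    like_a = acc if says_a else 1 - acc
    like_b = (1 - acc) if says_a else acc
    num = belief * like_a
    return num / (num + (1 - belief) * like_b)

def run(truth_is_a, threshold=0.95, seed=0):
    rng = random.Random(seed)
    belief, spent = 0.5, 0.0
    while threshold > belief > 1 - threshold:
        # Greedy workflow selection: expected belief gain per unit cost
        name, (acc, cost) = max(
            WORKFLOWS.items(),
            key=lambda kv: (update(belief, kv[1][0], True) - belief) / kv[1][1],
        )
        correct = rng.random() < acc                  # simulate a worker
        says_a = truth_is_a if correct else not truth_is_a
        belief = update(belief, acc, says_a)
        spent += cost
    return belief > 0.5, spent

print(run(truth_is_a=True))
```

A true POMDP controller would plan multiple steps ahead over both workflows instead of choosing greedily, which is what yields the higher returns on investment reported above.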
Modeling Bounded Rationality of Agents During Interactions
Guo, Qing (University of Illinois at Chicago) | Gmytrasiewicz, Piotr (University of Illinois at Chicago)
Frequently, it is advantageous for an agent to model other agents in order to predict their behavior during an interaction. Modeling others as rational has a long tradition in AI and game theory, but modeling other agents’ departures from rationality is difficult and controversial. This paper proposes that bounded rationality be modeled as errors the agent being modeled is making while deciding on its action. We are motivated by the work on quantal response equilibria in behavioral game theory, which uses Nash equilibria as the solution concept. In contrast, we use decision-theoretic maximization of expected utility. Quantal response assumes that a decision maker is rational, i.e., is maximizing his expected utility, but only approximately so, with an error rate characterized by a single error parameter. Another agent’s error rate may be unknown and needs to be estimated during an interaction. We show that the error rate of the quantal response can be estimated using Bayesian update of a suitable conjugate prior, and that it has a finite-dimensional sufficient statistic under strong simplifying assumptions. However, if the simplifying assumptions are relaxed, the quantal response does not admit a finite sufficient statistic and a more complex update is needed. This confirms the difficulty of using simple models of bounded rationality in general settings.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
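The single-error-parameter model above is the standard logit quantal response: action probabilities proportional to exp(lambda * U(a)), where lambda -> infinity recovers exact expected-utility maximization and lambda = 0 gives uniform random play. The utilities and lambda below are illustrative:

```python
import math

def quantal_response(utilities, lam):
    """Choice probabilities under the logit quantal response model."""
    m = max(utilities)                       # stabilize the exponentials
    w = [math.exp(lam * (u - m)) for u in utilities]
    z = sum(w)
    return [x / z for x in w]

probs = quantal_response([1.0, 2.0, 0.5], lam=2.0)
print(probs)  # best action most likely, but errors have positive probability
```

Estimating lambda from observed actions, as the paper does via a conjugate prior, amounts to Bayesian inference over this likelihood.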
Artificial Intelligence for Artificial Artificial Intelligence
Dai, Peng (University of Washington) | Mausam (University of Washington) | Weld, Daniel Sabby (University of Washington)
Crowdsourcing platforms such as Amazon Mechanical Turk have become popular for a wide variety of human intelligence tasks; however, quality control continues to be a significant challenge. Recently, TurKontrol, a theoretical model based on POMDPs, was proposed to optimize iterative, crowd-sourced workflows. However, that work neither describes how to learn the model parameters, nor shows the model's effectiveness in a real crowd-sourced setting. Learning is challenging due to the scale of the model and noisy data: there are hundreds of thousands of workers with high-variance abilities. This paper presents an end-to-end system that first learns TurKontrol's POMDP parameters from real Mechanical Turk data, and then applies the model to dynamically optimize live tasks. We validate the model and use it to control a successive-improvement process on Mechanical Turk. By modeling worker accuracy and voting patterns, our system produces significantly superior artifacts compared to those generated through nonadaptive workflows using the same amount of money.
- North America > United States > Washington > King County > Seattle (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Research Report > New Finding (0.94)
- Workflow (0.91)
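The worker-accuracy modeling mentioned above feeds into vote aggregation. A minimal sketch under a naive-Bayes worker model, where each ballot is weighted by the log-odds of its worker's estimated accuracy (the accuracies here are assumed already learned and are illustrative):

```python
import math

def aggregate(votes):
    """votes: list of (says_yes: bool, worker_accuracy: float).
    Returns P(answer is yes) under a naive-Bayes worker model."""
    log_odds = 0.0                      # uniform prior on yes/no
    for says_yes, acc in votes:
        w = math.log(acc / (1 - acc))   # log-odds weight of this worker
        log_odds += w if says_yes else -w
    return 1 / (1 + math.exp(-log_odds))

# Two mediocre "no" votes vs one strong "yes" vote
p_yes = aggregate([(True, 0.9), (False, 0.6), (False, 0.6)])
print(p_yes)
```

Under this weighting, a single reliable worker can outvote several unreliable ones, which is why learned accuracies rather than raw majority counts drive the adaptive control described above.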