An In-Depth Look at Information Fusion Rules & the Unification of Fusion Theories

arXiv.org Artificial Intelligence

This paper may look like a glossary of fusion rules, and we also introduce new ones, presenting their formulas and examples: the Conjunctive, Disjunctive, Exclusive Disjunctive, and Mixed Conjunctive-Disjunctive rules, the Conditional rule, Dempster's, Yager's, Smets' TBM rule, Dubois-Prade's, the Dezert-Smarandache classical and hybrid rules, Murphy's average rule, the Inagaki-Lefevre-Colot-Vannoorenberghe Unified Combination rules [and, as particular cases: Inagaki's parameterized rule, the Weighting Average Operator, minC (M. Daniel), and the new Proportional Conflict Redistribution rules (Smarandache-Dezert), among which PCR5 is the most exact way of redistributing the conflicting mass to non-empty sets along the path of the conjunctive rule], Zhang's Center Combination rule, Convolutive x-Averaging, the Consensus Operator (Josang), the Cautious Rule (Smets), the α-junction rules (Smets), etc., and three new T-norm & T-conorm rules adapted from fuzzy and neutrosophic sets to information fusion (Tchamova-Smarandache). By introducing the degree of union and the degree of inclusion with respect to the cardinality of sets (not from the fuzzy-set point of view), besides that of intersection, many fusion rules can be improved. There are corner cases in which each rule may run into difficulties or may not produce the expected result.
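As a minimal sketch of the two rules the abstract treats as the baseline, the following Python illustrates the conjunctive rule (mass of each intersection is a product of masses) and Dempster's rule (conjunctive combination followed by normalization of the conflicting mass). The function names and the toy mass functions are ours, not from the paper; PCR5's finer-grained redistribution is not shown.

```python
# Sketch, assuming basic belief assignments (bbas) represented as dicts
# mapping frozensets (subsets of the frame of discernment) to masses.
from itertools import product

def combine_conjunctive(m1, m2):
    """Conjunctive rule: the mass of each pairwise intersection is the
    product of the two input masses; frozenset() collects the conflict."""
    out = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        c = a & b  # set intersection; empty set = conflicting mass
        out[c] = out.get(c, 0.0) + ma * mb
    return out

def dempster(m1, m2):
    """Dempster's rule: conjunctive rule, then redistribute the conflicting
    mass proportionally over the non-empty sets via normalization."""
    conj = combine_conjunctive(m1, m2)
    k = conj.pop(frozenset(), 0.0)  # total conflict
    if k >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: v / (1.0 - k) for a, v in conj.items()}

# Toy frame {A, B} with two sources (values chosen for illustration only).
A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, A | B: 0.4}
m2 = {B: 0.7, A | B: 0.3}
fused = dempster(m1, m2)  # conflict k = 0.6 * 0.7 = 0.42 is normalized away
```

The interesting design point the abstract is getting at: Dempster's rule spreads the conflict 0.42 over all non-empty sets uniformly via the 1/(1-k) factor, whereas PCR5 would return it only to the sets that generated it, in proportion to their masses.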


Responsibility and Blame: A Structural-Model Approach

Journal of Artificial Intelligence Research

Causality is typically treated as an all-or-nothing concept; either A is a cause of B or it is not. We extend the definition of causality introduced by Halpern and Pearl (2004a) to take into account the degree of responsibility of A for B. For example, if someone wins an election 11-0, then each person who votes for him is less responsible for the victory than if he had won 6-5. We then define a notion of degree of blame, which takes into account an agent's epistemic state. Roughly speaking, the degree of blame of A for B is the expected degree of responsibility of A for B, taken over the epistemic state of an agent.
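The voting example can be made concrete with a small sketch of the 1/(k+1) degree-of-responsibility measure, where k is the minimal number of other votes that would have to change before this voter's own vote becomes critical to the outcome. The helper name is ours; this is an illustration of the idea, not the paper's general structural-model definition.

```python
# Sketch, assuming a simple two-candidate majority vote.
def responsibility(votes_for, votes_against):
    """Degree of responsibility (1 / (k + 1)) of one 'for' voter for the
    victory: k counts the minimal flips of *other* votes needed before
    flipping this voter's own vote would change the outcome."""
    assert votes_for > votes_against, "the candidate must actually win"
    k = 0
    vf, va = votes_for, votes_against
    # Flip other 'for' votes one at a time until this voter is pivotal,
    # i.e. until flipping their own vote as well would flip the result.
    while vf - 1 > va:
        vf -= 1
        va += 1
        k += 1
    return 1.0 / (k + 1)

r_close = responsibility(6, 5)      # pivotal already: k = 0, responsibility 1
r_landslide = responsibility(11, 0) # k = 5 other flips needed: responsibility 1/6
```

This matches the abstract's intuition: in a 6-5 win each supporter is fully responsible, while in an 11-0 landslide each supporter carries only a fraction of the responsibility.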


Explicit Learning Curves for Transduction and Application to Clustering and Compression Algorithms

Journal of Artificial Intelligence Research

Inductive learning is based on inferring a general rule from a finite data set and using it to label new data. In transduction one attempts to solve the problem of using a labeled training set to label a set of unlabeled points, which are given to the learner prior to learning. Although transduction seems at the outset to be an easier task than induction, there have not been many provably useful algorithms for transduction. Moreover, the precise relation between induction and transduction has not yet been determined. The main theoretical developments related to transduction were presented by Vapnik more than twenty years ago. One of Vapnik's basic results is a rather tight error bound for transductive classification based on an exact computation of the hypergeometric tail. While tight, this bound is given implicitly via a computational routine. Our first contribution is a somewhat looser but explicit characterization of a slightly extended PAC-Bayesian version of Vapnik's transductive bound. This characterization is obtained using concentration inequalities for the tail of sums of random variables obtained by sampling without replacement. We then derive error bounds for compression schemes such as (transductive) support vector machines and for transduction algorithms based on clustering. The main observation used for deriving these new error bounds and algorithms is that the unlabeled test points, which in the transductive setting are known in advance, can be used in order to construct useful data dependent prior distributions over the hypothesis space.
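The "exact computation of the hypergeometric tail" mentioned above can be illustrated directly: for a fixed hypothesis that misclassifies R of the N = m + u available points, the number of those errors landing in a uniformly drawn training set of size m is hypergeometric, and the tail gives the exact probability of an atypically low training error. The function names and example numbers are ours, a sketch of the ingredient rather than Vapnik's full bound.

```python
# Sketch of the hypergeometric tail underlying the implicit transductive bound.
from math import comb

def hypergeom_pmf(k, N, R, m):
    """P(exactly k of the R misclassified points fall in a uniformly
    random training subset of size m drawn from all N points)."""
    return comb(R, k) * comb(N - R, m - k) / comb(N, m)

def hypergeom_tail(k, N, R, m):
    """P(at most k of the errors fall in the training set): the chance of
    a 'lucky' draw that understates the hypothesis's true error."""
    return sum(hypergeom_pmf(j, N, R, m) for j in range(0, k + 1))

# Example (numbers ours): 20 points total, 8 misclassified overall,
# training set of size 10; probability of observing <= 2 training errors.
p = hypergeom_tail(2, N=20, R=8, m=10)
```

Note that this is computed by a routine rather than given in closed form, which is exactly why the paper's explicit (if looser) characterization via concentration inequalities for sampling without replacement is useful.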


A Cellular Telephone-Based Application for Skin-Grading to Support Cosmetic Sales

AI Magazine

We have developed a sales-support system for door-to-door sales of cosmetics based on a system called Skin-Expert, a skin-image grading service that includes analysis and diagnosis. Skin-Expert analyzes a customer's current skin quality from a picture of the skin. Several parameters are extracted by image processing, and the skin grading is done by rules generated by data mining from a baseline of grades given by human skin-care experts. Communication with the Skin-Expert is through a cellular telephone with a camera, using e-mail software and a Web browser. Salespeople photograph the customer's skin using the camera in a standard cellular telephone and then send an e-mail message that includes the picture as an attachment to our analysis system. Other parameters associated with the customer (for example, age and gender) are included in the body of the message. The picture is analyzed by our skin-grading system, and the results are made available as a page in HTML format on a customer-accessible Web site. An e-mail is sent when the results are available, usually within minutes. Salespeople check the results by using a Web browser on their cellular telephones. The output not only provides a grading result but also gives recommendations for the care and cosmetics that are most suitable for the customer. Our system integrates cellular communication, Web technology, computer analysis, data mining, and an expert system. Though salespeople use only a cellular telephone with very little computing power as the front end, they can take advantage of intelligent services such as computer grading and data mining. The salespeople do not need to think about what is running in the background, and there is no requirement that end users have any special hardware.


Guest Editor's Introduction

AI Magazine

We are pleased to publish this special selection of papers from the 2003 Innovative Applications of Artificial Intelligence Conference (IAAI-03). IAAI seeks out applications of artificial intelligence that either demonstrate new technology or use previously known technology in innovative ways. IAAI particularly seeks out examples of deployments of AI technology that tackle the problems of demonstrating value and planning for long-term deployment. The five articles we have selected for this special issue are extended versions of papers that appeared in the conference. Two of the articles are deployed applications that have already demonstrated practical value. The remaining three articles are particularly innovative emerging applications. We will briefly outline each of them.


Building Agents to Serve Customers

AI Magazine

AI agents combining natural language interaction, task planning, and business ontologies can help companies provide better-quality and more cost-effective customer service. Our customer-service agents use natural language to interact with customers, enabling customers to state their intentions directly instead of searching for the places on the Web site that may address their concern. We use planning methods to search systematically for the solution to the customer's problem, ensuring that a resolution satisfactory for both the customer and the company is found, if one exists. Our agents converse with customers, guaranteeing that needed information is acquired from customers and that relevant information is provided to them in order for both parties to make the right decision. The net effect is a more frictionless interaction process that improves the customer experience and makes businesses more competitive on the service front.


Calendar of Events

AI Magazine

Trends in Intelligent Information and Knowledge-Based Computer Systems. The 18th International FLAIRS Conference seeks high-quality, original, unpublished submissions in all areas of AI. The FLAIRS conference offers a set of special tracks, and authors are encouraged to submit papers to a relevant track. Contact: Larry Holder, University of Texas at Arlington, holder@cse.uta.edu.


Automated Essay Evaluation: The Criterion Online Writing Service

AI Magazine

The best way to improve one's writing is to write, get feedback from an instructor, revise based on the feedback, and then repeat the whole process as often as possible. Unfortunately, this puts an enormous load on the classroom teacher, who is faced with reading and providing feedback on many essays. As a result, teachers are not able to give writing assignments as often as they would wish. Critique is an application that comprises a suite of programs that evaluate and provide feedback for errors in grammar, usage, and mechanics, that identify the essay's discourse structure, and that recognize potentially undesirable stylistic features. The companion scoring application, e-rater version 2.0, extracts linguistically based features from an essay and uses a statistical model of how these features relate to overall writing quality. For example, the singular indefinite determiner a is labeled with the part-of-speech symbol AT, the adjective good is tagged JJ, and the singular common noun job gets the label NN. After the corpus is tagged, frequencies are collected for each tag and for each function word (determiners, prepositions, etc.), and also for each adjacent pair of tags and function words. The individual tags and words are called unigrams, and the adjacent pairs are the bigrams. To illustrate, the word sequence "a good job" contributes to the counts of three bigrams: a-JJ, AT-JJ, and JJ-NN, which represent, respectively, the fact that the function word a was followed by an adjective, an indefinite singular determiner was followed by an adjective, and an adjective was followed by a noun.
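The unigram/bigram counting described in the abstract can be sketched in a few lines. The token list, the toy function-word set, and the helper names are ours; only the tag set (AT, JJ, NN) and the "a good job" example come from the abstract.

```python
# Sketch of counting tag/function-word unigrams and bigrams over a
# tagged corpus, assuming (word, tag) pairs as input.
from collections import Counter

tagged = [("a", "AT"), ("good", "JJ"), ("job", "NN")]

FUNCTION_WORDS = {"a"}  # toy set; really determiners, prepositions, etc.

def items(word, tag):
    """Each position contributes its tag, plus the word itself
    when the word is a function word."""
    return [word, tag] if word in FUNCTION_WORDS else [tag]

unigrams, bigrams = Counter(), Counter()
for w, t in tagged:
    for it in items(w, t):
        unigrams[it] += 1
for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
    for left in items(w1, t1):
        for right in items(w2, t2):
            bigrams[(left, right)] += 1

# "a good job" yields exactly the three bigrams from the abstract:
# a-JJ, AT-JJ, and JJ-NN.
```

Running this on the three-token sequence produces one count each for (a, JJ), (AT, JJ), and (JJ, NN), matching the abstract's enumeration.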



Say Cheese! Experiences with a Robot Photographer

AI Magazine

We introduced a sensor abstraction layer to separate the task layer from concerns about physical sensing devices. We process the sensor information (from the laser rangefinder in this application) into distance measurements from the center of the robot, thus allowing consideration of sensor error models and performance. This model makes system debugging significantly easier, because we know exactly what each sensor reading is at every point in the computation, something that would not be the case if we were reading from the sensors every time a reading was used in a calculation. This model also allows us to inject modified sensor readings into the system, as described in the next section.
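The snapshot model described above can be sketched as a small abstraction layer: readings are cached once per control cycle, so every computation within a cycle sees the same values, and overrides can be injected for debugging. The class, method names, sensor name, and the offset used as stand-in processing are all ours, not the paper's implementation.

```python
# Sketch of a per-cycle sensor snapshot layer with injectable readings.
class SensorLayer:
    """Caches one snapshot of processed sensor readings per update cycle."""

    def __init__(self, read_raw):
        self._read_raw = read_raw  # callable returning raw device readings
        self._snapshot = {}
        self._injected = {}        # debugging overrides

    def update(self):
        """Take one snapshot; called once at the top of each control cycle."""
        raw = self._read_raw()
        # Stand-in processing: convert raw rangefinder values into distances
        # from the robot center (hypothetical fixed sensor offset of 0.2 m).
        self._snapshot = {name: value + 0.2 for name, value in raw.items()}
        self._snapshot.update(self._injected)

    def inject(self, name, value):
        """Override a reading for testing; applied at the next update()."""
        self._injected[name] = value

    def get(self, name):
        """Every caller within a cycle sees the same cached value."""
        return self._snapshot[name]

# Usage: all task-layer code in one cycle sees identical readings.
sensors = SensorLayer(lambda: {"laser_front": 1.5})
sensors.update()
d1 = sensors.get("laser_front")
d2 = sensors.get("laser_front")  # same cached value, no re-read
```

The payoff is exactly the debugging property the abstract names: a reading used twice in one computation cannot silently change between uses, and a modified reading can be injected without touching the hardware.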