If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The patient appeared to be dying. She had chronic lung disease, had been told she had little reserve left, and had barely survived on home oxygen for the past few years. Each time she picked up a lung infection, the buzzards circled closer. Now she had tripped, fallen, broken a bone, had surgery, and her subsequent infection seemed to have pushed her past the point of no return. Still, I held off the palliative care/comfort care team for as long as I could, and she rallied.
Since the 1950s, researchers have documented the many types of predictions in which algorithms outperform humans. Algorithms beat doctors and pathologists in predicting the survival of cancer patients, the occurrence of heart attacks, and the severity of diseases. Algorithms predict recidivism of parolees better than parole boards do. And they predict whether a business will go bankrupt better than loan officers do. According to anecdotes in a classic book on the accuracy of algorithms, many of these earliest findings were met with skepticism.
Cognitive and ethical biases: Humans exhibit a variety of biases that interfere with reasoning, including cognitive biases and ethical biases such as in-group bias. In general, we expect direct answers to questions to reflect primarily Type 1 thinking (fast, heuristic judgment), whereas we would like to elicit a combination of Type 1 and Type 2 thinking (slow, deliberative judgment).

Lack of domain knowledge: We may be interested in questions that require domain knowledge unavailable to the people answering them. For example, correctly answering whether a particular injury constitutes medical malpractice may require detailed knowledge of both medicine and law. In some cases, a question might require so many areas of specialized expertise that no one person suffices, or (if AI is sufficiently advanced) deeper expertise than any human possesses.
Much discussion and debate surround the topic of physicians and the use of artificial intelligence. The notion that AI could ever fully replace a doctor is not a completely absurd one -- many jobs, including white-collar professions, will eventually be replaced by automation and various levels of machine-learning technology. Certainly, from a pragmatic perspective, it is interesting to consider the possibility of a physician who never needs to eat, never tires, can read thousands of pages of new research every day, can record and remember every experience, and can even communicate in multiple languages. But can a machine provide better patient care? In a recent Harvard Business Review article, Richard Susskind, chairman of the advisory board of the Oxford Internet Institute, and his son Daniel, an economics fellow at the University of Oxford's Balliol College, argue that AI will not only support physicians in their work but will ultimately replace them.
We propose a new method for analyzing a set of parameters in a multiple criteria ranking method. Unlike existing techniques, we do not use any optimization technique; instead, we incorporate and extend a Segmenting Description approach. Considering a value-based preference disaggregation method, we demonstrate the usefulness of the introduced algorithm for multi-purpose decision analysis that exploits a system of inequalities modeling the Decision Maker's preferences. Specifically, we discuss how it can be applied to verify the consistency between the revealed and estimated preferences and to identify the sources of potential incoherence. Moreover, we employ the method to conduct robustness analysis, i.e., to discover the set of all compatible parameter values and to verify the stability of the suggested recommendation in view of the multiplicity of feasible solutions. In addition, we demonstrate its suitability for generating arguments about the validity of outcomes and the role of particular criteria. We discuss the favorable characteristics of the Segmenting Description approach that enhance its suitability for Multiple Criteria Decision Aiding: it retains in memory the entire process of transforming a system of inequalities, and it avoids re-processing the inequalities contained in the basic system when that system is subsequently enriched with a hypothesis to be verified. The applicability of the proposed method is illustrated through a numerical study.
In abstract argumentation, multiple argumentation semantics have been proposed that allow one to select sets of jointly acceptable arguments from a given argumentation framework, i.e., based only on the attack relation between arguments. The existence of multiple argumentation semantics raises the question of which of these semantics best predicts how humans evaluate arguments. Previous empirical cognitive studies that tested how humans evaluate sets of arguments depending on the attack relation between them were limited to a small set of very simple argumentation frameworks, so some semantics studied in the literature could not be meaningfully distinguished by these studies. In this paper we report on an empirical cognitive study that overcomes these limitations by considering twelve argumentation frameworks of three to eight arguments each, mostly more complex than the frameworks considered in previous studies. All twelve argumentation frameworks were systematically instantiated with natural language arguments based on a fictional scenario, and participants were shown both the natural language arguments and a graphical depiction of the attack relation between them. Our data show that grounded and CF2 semantics were the best predictors of human argument evaluation. A detailed analysis revealed that some participants chose a cognitively simpler strategy that is predicted very well by grounded semantics, while others chose a cognitively more demanding strategy that is mostly predicted well by CF2 semantics.
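The grounded semantics mentioned above has a simple operational reading: starting from the empty set, repeatedly add every argument all of whose attackers are themselves attacked by the current set, until nothing changes. A minimal sketch (illustrative, not taken from the study; the framework and argument names are made up):

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation framework.

    arguments: iterable of argument labels.
    attacks: set of (attacker, target) pairs.
    The grounded extension is the least fixed point of the characteristic
    function F(S) = {a | every attacker of a is attacked by some b in S}.
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s):
        # a is defended by s if each of a's attackers is attacked by some member of s
        return {a for a in arguments
                if all(any((b, c) in attacks for b in s) for c in attackers[a])}

    s = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# Example: a attacks b, b attacks c. Argument a is unattacked, so it is in;
# a defends c against b, so c is in; b is defeated.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # {'a', 'c'}
```

Note that on a mutual-attack cycle (a attacks b, b attacks a) the iteration stops at the empty set, which is exactly why grounded semantics corresponds to the cautious strategy the study describes.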
The ancient philosopher Confucius has been credited with saying "study your past to know your future." This wisdom applies not only to life but also to machine learning. Specifically, the availability and application of labeled data (things past) for the labeling of previously unseen data (things future) is fundamental to supervised machine learning. Without labels (diagnoses, classes, known outcomes) in past data, how would we make progress in labeling (explaining) future data? This would be a problem.
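The "past labels predict future labels" idea can be sketched in a few lines with a toy nearest-neighbour classifier (purely illustrative; the data and names are made up, not from the text):

```python
def nearest_label(labeled_past, unseen_point):
    """Label an unseen point with the label of the closest past example.

    labeled_past: list of (feature_vector, label) pairs -- the "things past".
    unseen_point: feature vector to be labeled -- a "thing future".
    Uses squared Euclidean distance.
    """
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    _, label = min(labeled_past, key=lambda pair: dist2(pair[0], unseen_point))
    return label

# Hypothetical labeled past data: two diagnosed cases in a 2-D feature space.
past = [((1.0, 1.0), "benign"), ((8.0, 9.0), "malignant")]
print(nearest_label(past, (2.0, 1.5)))  # closer to (1, 1) -> benign
```

Strip the labels from `past` and the function has nothing to return, which is the point of the paragraph above: without labeled outcomes in past data, supervised learning has no way to explain future data.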
A spy movie, with its paraphernalia of cool gadgets and technologies, has always enticed audiences. In these movies, we have seen the polygraph used to detect whether somebody is being truthful. Needless to say, the polygraph is a multi-billion-dollar industry and plays a crucial role in crime adjudication. Polygraphs do not have any "intelligence" built into them. They are simple machines that do what they were designed to do: measure vital signs like blood pressure and pulse to reach a conclusion.
For visual-effects artists, time is always a struggle. When the call comes in to create something spectacular, artists and supervisors have to calculate how much runway they have to get from the point of the idea for the vfx to the deadline. On "Avengers: Infinity War," the vfx crew found that an innovation -- machine learning -- made it possible to create the character Thanos in a way that would have simply been impossible without it. The filmmakers envisioned a version of Thanos -- played by Josh Brolin -- that would be CG but would also incorporate all the subtle facial expressions and delicate hallmarks of a physical performance that could only be done by an actor. They knew that the facial tracking tech was there, but asking vfx artists to manually adjust every inch of the CG version of Thanos's face once they had all the tracking and scanning information would have been a disaster.
This paper shows that fuzzy temporal logic can model figures of thought to describe decision-making behaviors. To exemplify, some experimentally observed economic behaviors were modeled from choice problems involving time, uncertainty, and fuzziness. Regarding time preference, it is noted that subadditive discounting is mandatory in positive-reward situations and consequently yields the magnitude effect and the time effect, where the latter entails stronger discounting for shorter delay periods (one hour, one day) but weaker discounting for longer delay periods (six months, one year, ten years). In addition, it is possible to explain preference reversal (a change of preference when two rewards proposed on different dates are both shifted in time). Regarding Prospect Theory, it is shown that risk seeking and risk aversion are magnitude dependent, and that risk seeking may disappear when the values to be lost are very high.
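The preference reversal described above can be illustrated with the standard hyperbolic discount function v = A / (1 + kD), which is steeper at short delays than at long ones. This is a sketch under that assumption, not the paper's own fuzzy temporal logic model; the amounts, delays, and the discount rate k are made up:

```python
def hyperbolic_value(amount, delay_years, k=1.5):
    """Hyperbolic discounting: present value of `amount` received after a delay."""
    return amount / (1.0 + k * delay_years)

def prefer(ss, ll, shift=0.0):
    """Compare a smaller-sooner and a larger-later reward.

    ss, ll: (amount, delay_in_years) pairs; `shift` delays both rewards
    by the same amount, as in the preference-reversal setup.
    """
    v_ss = hyperbolic_value(ss[0], ss[1] + shift)
    v_ll = hyperbolic_value(ll[0], ll[1] + shift)
    return "smaller-sooner" if v_ss > v_ll else "larger-later"

smaller_sooner = (50.0, 0.0)   # $50 now
larger_later = (100.0, 1.0)    # $100 in one year

# Immediate choice: v_ss = 50.0 vs v_ll = 100/2.5 = 40.0
print(prefer(smaller_sooner, larger_later, shift=0.0))   # smaller-sooner
# Shift both rewards ten years out: v_ss = 50/16 ~ 3.1 vs v_ll = 100/17.5 ~ 5.7
print(prefer(smaller_sooner, larger_later, shift=10.0))  # larger-later
```

Because the hyperbolic curve discounts the near future much more heavily than the distant future, pushing both rewards further out shrinks the sooner reward's advantage until the choice flips.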