VeriMinder: Mitigating Analytical Vulnerabilities in NL2SQL
Mohole, Shubham, Galhotra, Sainyam
Application systems using natural language interfaces to databases (NLIDBs) have democratized data analysis. This positive development has also brought an urgent challenge: helping users who may use these systems without a background in statistical analysis to formulate bias-free analytical questions. Although significant research has focused on text-to-SQL generation accuracy, addressing cognitive biases in analytical questions remains underexplored. We present VeriMinder, https://veriminder.ai, an interactive system for detecting and mitigating such analytical vulnerabilities. Our approach introduces three key innovations: (1) a contextual semantic mapping framework for biases relevant to specific analysis contexts, (2) an analytical framework that operationalizes the Hard-to-Vary principle and guides users in systematic data analysis, and (3) an optimized LLM-powered system that generates high-quality, task-specific prompts using a structured process involving multiple candidates, critic feedback, and self-reflection. User testing confirms the merits of our approach. In a direct user-experience evaluation, 82.5% of participants reported that the system positively impacted the quality of their analysis. In a comparative evaluation, VeriMinder scored significantly higher than alternative approaches, at least 20% better on metrics of the analysis's concreteness, comprehensiveness, and accuracy. Our system, implemented as a web application, is set to help users avoid the "wrong question" vulnerability during data analysis. The VeriMinder code base with prompts, https://reproducibility.link/veriminder, is available as MIT-licensed open-source software to facilitate further research and adoption within the community.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- (4 more...)
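The abstract's third innovation (multiple candidates, critic feedback, self-reflection) can be sketched as a generate-critique-refine loop. This is a minimal illustration, not VeriMinder's actual implementation: the function name `best_of_n_with_critic` and the `llm` callable are hypothetical stand-ins for any chat-completion API.

```python
# Minimal sketch of a generate-critique-refine prompt loop, as described in the
# abstract. `llm` is a hypothetical stand-in for any chat-completion API; all
# names and prompt wordings here are illustrative, not VeriMinder's.

def best_of_n_with_critic(llm, task: str, n: int = 3, rounds: int = 2) -> str:
    """Generate n candidate prompts, gather critic feedback, then self-refine."""
    candidates = [llm(f"Write an analysis prompt for this task:\n{task}")
                  for _ in range(n)]
    for _ in range(rounds):
        # Critic feedback: ask the model to rank candidates and list weaknesses.
        critique = llm("Rank these prompts and list their weaknesses:\n"
                       + "\n---\n".join(candidates))
        # Self-reflection: revise each candidate in light of the critique.
        candidates = [llm(f"Revise this prompt using the critique.\n"
                          f"Prompt:\n{c}\nCritique:\n{critique}")
                      for c in candidates]
    return candidates[0]

# Usage with a trivial echo "model" (returns the prompt's last line), just to
# show the call shape without a real API:
fake_llm = lambda prompt: prompt.splitlines()[-1]
print(best_of_n_with_critic(fake_llm, "Why did Q3 revenue drop?"))
```

In a real system each `llm` call would hit a model endpoint; the loop structure (candidates, critic pass, revision pass) is the part the abstract describes.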
A decision-theoretic approach to dealing with uncertainty in quantum mechanics
De Vos, Keano, de Cooman, Gert, Erreygers, Alexander, De Bock, Jasper
We provide a decision-theoretic framework for dealing with uncertainty in quantum mechanics. This uncertainty is two-fold: on the one hand there may be uncertainty about the state the quantum system is in, and on the other hand, as is essential to quantum mechanical uncertainty, even if the quantum state is known, measurements may still produce an uncertain outcome. In our framework, measurements therefore play the role of acts with an uncertain outcome and our simple decision-theoretic postulates ensure that Born's rule is encapsulated in the utility functions associated with such acts. This approach allows us to uncouple (precise) probability theory from quantum mechanics, in the sense that it leaves room for a more general, so-called imprecise probabilities approach. We discuss the mathematical implications of our findings, which allow us to give a decision-theoretic foundation to recent seminal work by Benavoli, Facchini and Zaffalon, and we compare our approach to earlier and different approaches by Deutsch and Wallace.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- (2 more...)
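The Born rule the abstract says is "encapsulated in the utility functions" has a simple computational form. A small illustrative sketch (the function name `born_probabilities` is ours, not the paper's):

```python
import numpy as np

# Born's rule: for a quantum state |psi> and a measurement with orthonormal
# outcome states |e_k>, the probability of outcome k is |<e_k|psi>|^2.
def born_probabilities(psi: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Outcome probabilities for measuring `psi` in an orthonormal basis,
    with one outcome vector per row of `basis`."""
    amplitudes = basis.conj() @ psi          # <e_k|psi> for each outcome k
    return np.abs(amplitudes) ** 2

# Equal superposition (|0> + |1>)/sqrt(2) measured in the computational basis:
psi = np.array([1, 1]) / np.sqrt(2)
probs = born_probabilities(psi, np.eye(2))
print(probs)  # [0.5, 0.5]
```

Even with the state fully known, the outcome is uncertain; this is the second kind of uncertainty the paper's decision-theoretic framework addresses.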
Problems in AI, their roots in philosophy, and implications for science and society
Artificial Intelligence (AI) is one of today's most relevant emergent technologies. In view thereof, this paper proposes that more attention should be paid to the philosophical aspects of AI technology and its use. It is argued that this deficit is generally combined with philosophical misconceptions about the growth of knowledge. To identify these misconceptions, reference is made to the ideas of the philosopher of science Karl Popper and the physicist David Deutsch. The works of both thinkers argue against mistaken theories of knowledge, such as inductivism, empiricism, and instrumentalism. This paper shows that these theories bear similarities to how current AI technology operates. It also shows that these theories, often referred to collectively as Bayesianism, are very much alive in the (public) discourse on AI. In line with Popper and Deutsch, it is proposed that all these theories rest on mistaken philosophies of knowledge. This includes an analysis of the implications of these mistaken philosophies for the use of AI in science and society, including some of the likely problem situations that will arise. The paper finally provides a realistic outlook on Artificial General Intelligence (AGI) and three propositions on A(G)I and philosophy (i.e., epistemology).
- Europe > Netherlands > North Holland > Amsterdam (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Law (1.00)
- Health & Medicine (1.00)
- Government (0.69)
A Rational Analysis of the Speech-to-Song Illusion
Marjieh, Raja, van Rijn, Pol, Sucholutsky, Ilia, Lee, Harin, Griffiths, Thomas L., Jacoby, Nori
The speech-to-song illusion is a robust psychological phenomenon whereby a spoken sentence sounds increasingly more musical as it is repeated. Despite decades of research, a complete formal account of this transformation is still lacking, and some of its nuanced characteristics, namely that certain phrases appear to transform while others do not, are not well understood. Here we provide a formal account of this phenomenon by recasting it as a statistical inference whereby a rational agent attempts to decide whether a sequence of utterances is more likely to have been produced in a song or in speech. Using this approach and analyzing song and speech corpora, we further introduce a novel prose-to-lyrics illusion that is purely text-based. In this illusion, simply duplicating written sentences makes them appear more like song lyrics. We provide robust evidence for this new illusion in both human participants and large language models.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Switzerland > Geneva > Geneva (0.04)
- Research Report > New Finding (0.93)
- Research Report > Experimental Study (0.69)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.48)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
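The rational-agent framing in the abstract (deciding whether utterances were produced in song or speech) can be illustrated with iterated Bayesian updating. This is a toy sketch, not the paper's model: the two likelihood values are made-up numbers whose only job is to encode the qualitative assumption that verbatim repetition is more probable under the "song" hypothesis.

```python
# Toy rational-agent sketch: the listener holds a prior over {speech, song}
# and updates it with each verbatim repetition of the utterance. The numbers
# below are illustrative assumptions, not estimates from the paper's corpora.
P_REPEAT_GIVEN_SONG = 0.6    # assumed: songs repeat phrases often
P_REPEAT_GIVEN_SPEECH = 0.1  # assumed: speech rarely repeats verbatim

def posterior_song(n_repetitions: int, prior_song: float = 0.2) -> float:
    """P(song | n verbatim repetitions), by iterated Bayes' rule."""
    p = prior_song
    for _ in range(n_repetitions):
        num = P_REPEAT_GIVEN_SONG * p
        p = num / (num + P_REPEAT_GIVEN_SPEECH * (1 - p))
    return p

for n in [0, 1, 3, 6]:
    print(n, round(posterior_song(n), 3))
```

The posterior drifts toward "song" with each repetition, mirroring how the illusion strengthens as the phrase repeats; phrases whose features are unlikely under the song hypothesis would resist the shift.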
Inductive Models for Artificial Intelligence Systems are Insufficient without Good Explanations
Instead of providing an explanation of a phenomenon, models trained this way present us with yet another phenomenon that needs an explanation [Wiegreffe and Pinter, 2019; Jain and Wallace, 2019]. Thus, despite the recent surge in the field of 'explainable AI' [Doshi-Velez and Kim, 2017], which attempts to provide some insight into the generalizations made by trained models, it may be the case that the underlying problem of induction and a lack of good explanations will remain so long as we use machine induction as the primary path in AI.
The paper examines artificial neural networks (ANNs), which are effective at approximating complex functions but often lack transparency and explanatory power. It highlights the 'problem of induction', the philosophical issue that past observations may not necessarily predict future events, a challenge that ML models face when encountering new, unseen data. The paper argues for the importance of not just making predictions but also providing good explanations, a feature that current models often fail to deliver.
- North America > United States > Virginia (0.04)
- Europe > United Kingdom (0.04)
- Europe > Russia (0.04)
- (2 more...)
Unified Information Dynamic Analysis of Quantum Decision-Making and Search Algorithms: Computational Intelligence Measure
Ulyanov, Sergey V., Ghisi, Fabio, Kurawaki, Ichiro, Ulyanov, Viktor S.
There are important algorithms built upon a mixture of basic techniques; for example, the Fast Fourier Transform (FFT) employs both Divide-and-Conquer and Transform-and-Conquer techniques. In this article, the evolution of a quantum algorithm (QA) is examined from an information-theory viewpoint. The complex vector entering the quantum algorithmic gate (QAG) is considered as an information source at both the classical and the quantum level. The analysis of the classical and quantum information flow in the Deutsch-Jozsa, Shor, and Grover algorithms is used. It is shown that the QAG, based on superposition of states, quantum entanglement, and interference, when acting on the input vector, stores information in the system state, minimizing the gap between the classical Shannon entropy and the quantum von Neumann entropy. Minimizing this gap between the Shannon and von Neumann entropies is taken as a termination criterion of the QA computational intelligence measure. Let us discuss the main properties of classical and quantum information that are used in the dynamic analysis of quantum algorithms. A more detailed description of the general properties of information measures is given in Appendix 1 to this article. Any computation (both classical and quantum) is formally identical to a communication in time. By considering quantum computation as a communication process, it is possible to relate its efficiency to its classical communication capacity. At an initial time, the programmer sets the computer to accomplish any one of several possible tasks. Each of these tasks can be regarded as embodying a different message. Another programmer can obtain this message by looking at the output of the computer when the computation is finished at a later time. Computation based on quantum principles allows for more efficient algorithms for solving certain problems than algorithms based on purely classical principles [1].
The sender conveys the maximum information when all the message states have equal a priori probability (which also maximizes the channel capacity); in that case the mutual information (channel capacity) at the end of the computation is maximal. Let us consider the peculiarities of the information axioms and the information capability of quantum computing as the dynamic evolution of QAs. If one breaks down the general unitary transformation of a QA into a number of successive unitary blocks, then the maximum capacity may be achieved only after a certain number of applications of the blocks. When its total value reaches the maximum possible value consistent with a given initial state of the quantum computation, the computation is regarded as being complete (see details in [2,3]). The classical capacity of a quantum communication channel is connected with the efficiency of quantum computing via entropic arguments [1-9]. This formalism allows us to derive lower bounds on the computational complexity of QAs in the most general context.
- North America > United States > California > Alameda County > Berkeley (0.04)
- Europe > Italy (0.04)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
- (5 more...)
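The Shannon/von Neumann entropy gap that the abstract uses as a termination criterion is easy to compute for small states. A minimal numerical sketch (function names are ours, not the authors'); the example uses a single-qubit pure state, where the gap is largest in a basis where the state is an equal superposition.

```python
import numpy as np

def shannon_entropy(probs: np.ndarray) -> float:
    """Classical Shannon entropy (bits) of measurement-outcome probabilities."""
    p = probs[probs > 0]
    return float(-np.sum(p * np.log2(p)))

def von_neumann_entropy(rho: np.ndarray) -> float:
    """Quantum von Neumann entropy S(rho) = -Tr(rho log2 rho), via eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]   # drop zero eigenvalues (0 log 0 = 0)
    return float(-np.sum(evals * np.log2(evals)))

# A pure state has zero von Neumann entropy, but measuring it in a basis where
# it is a superposition yields nonzero Shannon entropy -- the "gap" the article
# tracks. Here: |+> = (|0> + |1>)/sqrt(2), measured in the computational basis.
plus = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
H = shannon_entropy(np.abs(plus) ** 2)   # 1 bit
S = von_neumann_entropy(rho)             # 0 for a pure state
print(H - S)  # gap = 1.0
```

A QAG step that rotates the state toward a basis vector of the measurement basis shrinks this gap toward zero, which is the qualitative content of the termination criterion described above.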
New AI 'cancer chatbot' provides patients and families with 24/7 support: 'Empathetic approach'
Cancer patients looking for quick answers or support between their appointments can now turn to "Dave," an artificial intelligence chatbot trained to discuss all things related to oncology. Launched earlier this month by Belong.Life, a New York-based health technology company, Dave is described as the world's first conversational AI oncology mentor for cancer patients. "Dave has aided patients in understanding their situations and equipping them with valuable information to engage in informed discussions with their physicians," said Irad Deutsch, co-founder and CTO of Belong, in an interview with Fox News Digital. Some of the most common questions include potential treatments for diagnoses and what to expect in terms of side effects, he said.
- North America > United States > New York (0.25)
- North America > United States > Texas (0.25)
Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience -- an initial exploration
Artificial intelligence has made great strides since the deep learning revolution, but AI systems still struggle to extrapolate outside of their training data and adapt to new situations. For inspiration we look to the domain of science, where scientists have been able to develop theories which show remarkable ability to extrapolate and sometimes predict the existence of phenomena which have never been observed before. According to David Deutsch, this type of extrapolation, which he calls "reach", is due to scientific theories being hard to vary. In this work we investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning such as the bias-variance trade-off and Occam's razor. We distinguish internal variability, how much a model/theory can be varied internally while still yielding the same predictions, from external variability, which is how much a model must be varied to accurately predict new, out-of-distribution data. We discuss how to measure internal variability using the size of the Rashomon set and how to measure external variability using Kolmogorov complexity. We explore what role hard-to-vary explanations play in intelligence by looking at the human brain, distinguishing two learning systems. The first system operates similarly to deep learning and likely underlies most of perception and motor control, while the second is a more creative system capable of generating hard-to-vary explanations of the world. We argue that figuring out how to replicate this second system is a key challenge that must be solved in order to realize artificial general intelligence. We make contact with the framework of Popperian epistemology, which rejects induction and asserts that knowledge generation is an evolutionary process proceeding through conjecture and refutation.
- North America > United States > New York > New York County > New York City (0.14)
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > Maryland > Montgomery County > Bethesda (0.04)
- (5 more...)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Government > Regional Government > North America Government > United States Government (0.93)
- Leisure & Entertainment > Games (0.93)
- Health & Medicine > Therapeutic Area > Neurology (0.82)
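The Rashomon set mentioned in the abstract, the set of models whose loss is within some tolerance of the best achievable, has a very concrete form for simple model families. A toy sketch (data, thresholds, and variable names are made up for illustration, not drawn from the paper):

```python
import numpy as np

# Toy Rashomon set: among a 1-D family of threshold classifiers, count how
# many achieve loss within `eps` of the best. A large set means the model can
# be varied internally without changing its predictions' quality -- the
# paper's measure of internal variability. Data below are invented.
X = np.array([0.1, 0.4, 0.45, 0.55, 0.6, 0.9])
y = np.array([0, 0, 0, 1, 1, 1])
thresholds = np.linspace(0, 1, 101)

def zero_one_loss(t: float) -> float:
    """Misclassification rate of the rule 'predict 1 iff x > t'."""
    return float(np.mean((X > t).astype(int) != y))

losses = np.array([zero_one_loss(t) for t in thresholds])
best = losses.min()
eps = 0.0  # exact ties with the best
rashomon_set = thresholds[losses <= best + eps]
print(len(rashomon_set), best)
```

Here many distinct thresholds separate the classes perfectly, so the Rashomon set has several members; a theory in Deutsch's hard-to-vary sense would correspond to a set with essentially one member.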
Sharpening The AI Problem
In 2017, the cognitive scientist and entrepreneur, Gary Marcus, argued that AGI needs a moonshot. In an interview with Alice Lloyd George, he said, "Let's have an international consortium kind of like we had for CERN, the large hadron collider. What if you had $7 billion dollars that was carefully orchestrated towards a common goal." Marcus felt that the political climate of the time made such a collective effort unlikely. But the moonshot analogy for AGI has taken hold in the private sector and captured the public imagination. In a 2017 talk, the CEO and co-founder of DeepMind, Demis Hassabis, evoked the moonshot analogy to describe his company as "a kind of Apollo program effort for artificial intelligence." Hassabis unpacks his vision with pitch deck efficiency: First they'll understand human intelligence, then they'll recreate it artificially.
Should our machines sound human?
Yesterday, Google announced an AI product called Duplex, which is capable of having human-sounding conversations. I am genuinely bothered and disturbed by how morally wrong it is for the Google Assistant voice to act like a human and deceive other humans on the other end of a phone call, using upspeak and other quirks of language. "Hi um, do you have anything available on uh May 3?" If Google created a way for a machine to sound so much like a human that now we can't tell what is real and what is fake, we need to have a talk about ethics and about when a human has the right to know they are speaking to a robot. In this age of disinformation, where people don't know what's fake news… how do you know what to believe if you can't even trust your ears, now that Google Assistant is calling businesses and posing as a human? That means any dialogue can be spoofed by a machine and you can't tell.