To Post or Not to Post: AI Ethics in the Age of Big Tech
What is the role of an ethicist? Is it to be an impartial observer? A guide to what is good or bad? Here, I will explore these different roles in the context of AI ethics through the terms descriptive, normative, and action AI ethics. AI ethics is a specific field of applied ethics nested within technology ethics and computer ethics.
What is vibe coding? A computer scientist explains what it means to have AI write computer code and what risks that can entail
Whether you're streaming a show, paying bills online or sending an email, each of these actions relies on computer programs that run behind the scenes. The process of writing computer programs is known as coding. Until recently, most computer code was written, at least originally, by human beings. But with the advent of generative artificial intelligence, that has begun to change. Now, just as you can ask ChatGPT to spin up a recipe for a favorite dish or write a sonnet in the style of Lord Byron, you can ask generative AI tools to write computer code for you.
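As a concrete taste of what the article describes, the sketch below asks a generative model to write a small function through the OpenAI Python SDK. The model name, prompt, and task are my own illustrative choices, not anything prescribed by the article.

```python
# A minimal sketch of asking a generative AI tool to write code.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name and prompt are illustrative choices.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat-capable model works
    messages=[{
        "role": "user",
        "content": "Write a Python function that returns the n-th Fibonacci number.",
    }],
)

# The generated program comes back as plain text. A human should still review
# it before running it, which is part of the risk the article alludes to.
print(response.choices[0].message.content)
```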
Forgetting in short and heterogeneous sequences of belief revisions
Forgetting a specific belief revision episode may not erase its information, because the other revisions may provide or entail the same information. Deciding whether it does was proved coNP-hard for sequences of two arbitrary lexicographic revisions or for arbitrarily long sequences of lexicographic Horn revisions. A polynomial algorithm is presented for the case of two lexicographic Horn revisions. Heterogeneous sequences, which include revisions other than lexicographic ones, were proved to belong to Delta2. Their previously proved coNP-hardness is strengthened to Dp-hardness.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
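The abstract's opening claim, that forgetting one revision episode may not erase its information because other revisions entail the same thing, can be checked mechanically on toy propositional examples. The sketch below is such a toy check, with formulas of my own choosing; it brute-forces truth assignments and does not implement the paper's algorithms.

```python
# Toy illustration (not the paper's algorithm): a sequence containing revisions
# by "a and b" and by "a" still carries the information that a holds even if
# the revision by "a" is forgotten, because "a and b" entails it.
from itertools import product

def entails(premise, conclusion, atoms=("a", "b")):
    """Brute-force propositional entailment over the given atoms."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if premise(env) and not conclusion(env):
            return False
    return True

a_and_b = lambda env: env["a"] and env["b"]
just_a = lambda env: env["a"]

print(entails(a_and_b, just_a))  # True: the surviving revision entails the forgotten one
```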
A Logic of Uncertain Interpretation
We do not always know how to interpret the statements that we hear, the observations that we make, or the evidence that we gather. Traditional frameworks for reasoning about uncertainty and belief revision typically suppose that new information is presented definitively: there is no question about what was learned. The paradigm of Bayesian conditioning exemplifies this assumption: "evidence" takes the simple form of an event E, and belief revision proceeds by updating probabilities accordingly: π ↦ π(· | E). In order to capture the kind of uncertainty about interpretation we wish to reason about, we change the fundamental representation of events so that the sets they correspond to are themselves variable: the "true meaning" of a statement thus becomes itself an object of uncertainty. This approach follows in the spirit of other recent work [1, 2], expanding on it along two key dimensions.
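For contrast with the uncertainty the paper introduces, the classical update it takes as its starting point, π ↦ π(· | E), is easy to state concretely. The sketch below conditions a finite distribution on an event; the outcome space and numbers are toy values of my own choosing.

```python
# Classical Bayesian conditioning pi -> pi(. | E) on a finite outcome space.
# The distribution and the event are toy values chosen for illustration.

def condition(pi, event):
    """Restrict pi to the event E and renormalize."""
    total = sum(p for outcome, p in pi.items() if outcome in event)
    if total == 0:
        raise ValueError("cannot condition on a probability-zero event")
    return {outcome: p / total for outcome, p in pi.items() if outcome in event}

pi = {"rain": 0.2, "snow": 0.1, "sun": 0.7}
E = {"rain", "snow"}  # the evidence: some precipitation occurred

print(condition(pi, E))  # {'rain': 0.666..., 'snow': 0.333...}
```

Note that here there is no doubt about which event E was learned; the paper's move is precisely to make the set behind E an object of uncertainty.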
Rulebreakers Challenge: Revealing a Blind Spot in Large Language Models' Reasoning with Formal Logic
Chan, Jason, Gaizauskas, Robert, Zhao, Zhixue
Formal logic has long been applied to natural language reasoning, but this approach can sometimes lead to conclusions that, while logically entailed, are factually inconsistent with the premises or are not typically inferred by humans. This study introduces the concept of "rulebreakers", which refers to instances where logical entailment diverges from factually acceptable inference. We present RULEBREAKERS, a novel dataset for evaluating Large Language Models' (LLMs) ability to distinguish between rulebreakers and non-rulebreakers. Focusing on modus tollens and disjunctive syllogism, we assess six state-of-the-art LLMs using RULEBREAKERS, measuring their performance in terms of token-level exact accuracy and model confidence. Our findings reveal that while most models perform poorly to moderately at recognizing rulebreakers, they demonstrate a latent ability to distinguish rulebreakers when assessed by their confidence levels. Further analysis suggests that the failure to recognize rulebreakers is potentially associated with the models' world knowledge and their attention distribution patterns. This research highlights a limitation of LLMs' reasoning capabilities, and contributes to the ongoing discussion of reasoning in LLMs.
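The dataset itself is not reproduced here, but the kind of item the abstract describes is easy to state. The sketch below first verifies by truth table that modus tollens is logically valid, then spells out one rulebreaker of my own devising (not an item from RULEBREAKERS) where the valid rule yields a factually unacceptable conclusion.

```python
# Part 1: modus tollens is logically valid: from (P -> Q) and (not Q), infer (not P).
from itertools import product

valid = all(
    not p  # the conclusion, not P
    for p, q in product([False, True], repeat=2)
    if ((not p) or q) and (not q)  # rows satisfying the premises P -> Q and not Q
)
print(valid)  # True

# Part 2: an illustrative "rulebreaker" (my own example, not a dataset item).
premises = [
    "If an animal is a bird, then it can fly.",
    "A penguin cannot fly.",
]
entailed_conclusion = "A penguin is not a bird."
# The conclusion is logically entailed by modus tollens, yet world knowledge
# (penguins are birds) makes it factually unacceptable. This is exactly the
# divergence between entailment and acceptable inference the paper probes.
print(premises, "=>", entailed_conclusion)
```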
Rethinking Semantic Parsing for Large Language Models: Enhancing LLM Performance with Semantic Hints
An, Kaikai, Si, Shuzheng, Hu, Helan, Zhao, Haozhe, Wang, Yuchi, Guo, Qingyan, Chang, Baobao
Semantic Parsing aims to capture the meaning of a sentence and convert it into a logical, structured form. Previous studies show that semantic parsing enhances the performance of smaller models (e.g., BERT) on downstream tasks. However, it remains unclear whether the improvements extend similarly to LLMs. In this paper, our empirical findings reveal that, unlike with smaller models, directly adding semantic parsing results to the input reduces LLM performance. To overcome this, we propose SENSE, a novel prompting approach that embeds semantic hints within the prompt. Experiments show that SENSE consistently improves LLMs' performance across various tasks, highlighting the potential of integrating semantic information to improve LLM capabilities.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > Singapore (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Grammars & Parsing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.98)
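The abstract does not spell out SENSE's prompt format, so the sketch below is only a guess at the general idea: rather than pasting a full semantic parse into the input, it embeds a short semantic hint next to the task instruction. The task, strings, and hint format are all hypothetical.

```python
# A guess at the general shape of hint-augmented prompting in the spirit of
# SENSE; the paper's actual prompt format is not given in the abstract, so
# every string here is a hypothetical illustration.

def build_prompt(sentence: str, semantic_hint: str) -> str:
    """Embed a lightweight semantic hint instead of a full parse."""
    return (
        "Task: classify the sentiment of the sentence below.\n"
        f"Sentence: {sentence}\n"
        f"Semantic hint: {semantic_hint}\n"
        "Answer with 'positive' or 'negative'."
    )

print(build_prompt(
    "The film drags, but the ending redeems it.",
    "main predicate 'redeems', agent 'the ending', patient 'it' (the film)",
))
```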
MetaSumPerceiver: Multimodal Multi-Document Evidence Summarization for Fact-Checking
Chen, Ting-Chih, Tang, Chia-Wei, Thomas, Chris
Fact-checking real-world claims often requires reviewing multiple multimodal documents to assess a claim's truthfulness, which is a highly laborious and time-consuming task. In this paper, we present a summarization model designed to generate claim-specific summaries useful for fact-checking from multimodal, multi-document datasets. The model takes inputs in the form of documents, images, and a claim, with the objective of assisting in fact-checking tasks. We introduce a dynamic perceiver-based model that can handle inputs from multiple modalities of arbitrary lengths. To train our model, we leverage a novel reinforcement learning-based entailment objective to generate summaries that provide evidence distinguishing between different truthfulness labels. To assess the efficacy of our approach, we conduct experiments on both an existing benchmark and a new dataset of multi-document claims that we contribute. Our approach outperforms the SOTA approach by 4.6% in the claim verification task on the MOCHEG dataset and demonstrates strong performance on our new Multi-News-Fact-Checking dataset.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > Virginia (0.04)
- North America > United States > Colorado > Denver County > Denver (0.04)
- (11 more...)
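The abstract does not detail its reinforcement learning entailment objective, but one common way to realize such an objective is to score a generated summary against the claim with an off-the-shelf NLI model and treat the entailment probability as a reward. The sketch below does that with a public MNLI checkpoint; reading this as the paper's exact reward is an assumption on my part.

```python
# One plausible realization of an entailment-based reward signal (an
# assumption, not necessarily the paper's exact objective): how strongly
# does the generated summary entail the claim, according to an NLI model?
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # public MNLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_reward(summary: str, claim: str) -> float:
    """Probability that the summary (premise) entails the claim (hypothesis)."""
    inputs = tokenizer(summary, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    # Look up the entailment index from the config rather than hardcoding it.
    label2id = {label.lower(): i for i, label in model.config.id2label.items()}
    return probs[label2id["entailment"]].item()

print(entailment_reward("The mayor opened the new bridge on Monday.",
                        "The bridge has been opened."))
```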
Entangled Relations: Leveraging NLI and Meta-analysis to Enhance Biomedical Relation Extraction
Recent research efforts have explored the potential of leveraging natural language inference (NLI) techniques to enhance relation extraction (RE). In this vein, we introduce MetaEntail-RE, a novel adaptation method that harnesses NLI principles to enhance RE performance. Our approach follows past work in verbalizing relation classes into class-indicative hypotheses, recasting a traditionally multi-class classification task as one of textual entailment. We introduce three key enhancements: (1) instead of labeling non-entailed premise-hypothesis pairs with the uninformative "neutral" entailment label, we introduce meta-class analysis, which provides additional context by analyzing overarching meta-relationships between classes when assigning entailment labels; (2) feasible hypothesis filtering, which removes unlikely hypotheses from consideration based on pairs of entity types; and (3) group-based prediction selection, which further improves performance by selecting highly confident predictions. MetaEntail-RE is conceptually simple and empirically powerful, yielding significant improvements over conventional relation extraction techniques and other NLI formulations. Our experimental results underscore the versatility of MetaEntail-RE, demonstrating performance gains across both biomedical and general domains.
- Europe > Portugal > Lisbon > Lisbon (0.04)
- North America > United States > Texas (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- (6 more...)
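The core move the abstract describes, verbalizing relation classes into hypotheses and scoring them with NLI, is close to what the Hugging Face zero-shot classification pipeline implements, so that pipeline can serve as a stand-in sketch. The sentence, relation labels, and template below are illustrative, and the paper's three enhancements (meta-class labels, type-based hypothesis filtering, group-based selection) are not reproduced.

```python
# Verbalizing relation classes into NLI hypotheses, sketched with the generic
# zero-shot pipeline. Labels and template are illustrative; the paper's three
# enhancements are not reproduced here.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentence = "Mutations in BRCA1 increase the risk of breast cancer."
relations = ["increases the risk of", "treats", "has no relation to"]

result = classifier(
    sentence,
    candidate_labels=relations,
    hypothesis_template="BRCA1 {} breast cancer.",  # one hypothesis per relation class
)
# Hypotheses are ranked by entailment strength against the sentence.
print(result["labels"][0], round(result["scores"][0], 3))
```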
Can we forget how we learned? Doxastic redundancy in iterated belief revision
How information was acquired may become irrelevant. An obvious case is when something is confirmed many times. In terms of iterated belief revision, a specific revision may become irrelevant in the presence of others. Simple repetitions are an example, but not the only case where this happens. Sometimes a revision becomes redundant even when no other revision equals it, or even when none implies it. A necessary and sufficient condition is given for the redundancy of the first revision in a sequence of lexicographic revisions. The problem is coNP-complete even with only two propositional revisions. Complexity is the same in the Horn case, but only with an unbounded number of revisions: it becomes polynomial with two. Lexicographic revisions matter not only in themselves but also because sequences of them are the most compact of the common mechanisms for representing the state of an iterated revision process. Shortening sequences of lexicographic revisions therefore means shortening the most compact representations of iterated belief revision states.
- South America > Suriname (0.04)
- Oceania > Australia (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Asia > Uzbekistan (0.04)
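As a concrete instance of the "simple repetitions" case mentioned in the abstract above, the sketch below implements lexicographic revision over the four interpretations of two atoms and checks that revising twice by the same formula leaves the plausibility order unchanged, making the second revision redundant. This is a toy illustration, not the paper's construction.

```python
# Toy lexicographic revision over interpretations of two atoms (a, b).
# A plausibility order is a list of levels, most plausible first.
from itertools import product

def lex_revise(levels, phi):
    """Models of phi move ahead of non-models; ties keep the old order."""
    sat = [[m for m in level if phi(m)] for level in levels]
    unsat = [[m for m in level if not phi(m)] for level in levels]
    return [level for level in sat + unsat if level]

models = list(product([False, True], repeat=2))  # all interpretations of (a, b)
initial = [models]                               # start from a flat order

phi = lambda m: m[0]  # the formula "a"

once = lex_revise(initial, phi)
twice = lex_revise(once, phi)

print(once == twice)  # True: repeating the same revision changes nothing
```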
On Formally Undecidable Traits of Intelligent Machines
Building on work by Alfonseca et al. (2021), we study the conditions necessary for it to be logically possible to prove that an arbitrary artificially intelligent machine will exhibit certain behavior. To do this, we develop a formalism similar to, but mathematically distinct from, the theory of formal languages and their properties. Our formalism affords a precise means not only for talking about the traits we desire of machines (such as their being intelligent, contained, moral, and so forth), but also for detailing the conditions necessary for it to be logically possible to decide whether a given arbitrary machine possesses such a trait. Contrary to Alfonseca et al.'s (2021) results, we find that Rice's theorem from computability theory cannot in general be used to determine whether an arbitrary machine possesses a given trait. Therefore, it is not necessarily the case that deciding whether an arbitrary machine is intelligent, contained, moral, and so forth is logically impossible.
- North America > United States > Colorado > Boulder County > Boulder (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (2 more...)
- Health & Medicine (0.67)
- Government > Regional Government > North America Government > United States Government (0.46)
- Transportation > Ground > Road (0.45)
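Because the abstract's conclusion turns on when Rice's theorem applies, it may help to recall the theorem's textbook statement; the formulation below is the standard one and makes no claim about the paper's own formalism.

```latex
% Standard statement of Rice's theorem (textbook form, independent of the
% paper's formalism).
\begin{theorem}[Rice]
Let $\mathcal{P}$ be a non-trivial property of partial computable functions,
i.e.\ some computable function has $\mathcal{P}$ and some lacks it. Then the
index set
\[
  I_{\mathcal{P}} = \{\, e \mid \varphi_e \text{ has property } \mathcal{P} \,\}
\]
is undecidable, where $\varphi_e$ is the partial function computed by the
$e$-th Turing machine.
\end{theorem}
```

A plausible reading of the abstract's claim, though an inference on my part, is that the traits it formalizes need not be properties of the computed function alone, in which case the theorem's hypotheses would not apply.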