Does Society Have Too Many Rules?
When regular people seem burdened by bureaucracy, and the powerful act as they choose, it's worth asking whether we've forgotten what makes rules effective. I live in a three-generation household. Our place is big, but crowded: all of us have hobbies, and so every shelf or surface contains toys, books, art supplies, sporting goods, craft projects, cameras, musical instruments, or kitchen gadgets. Before the table can be set for dinner, it must be cleared of a board game or marble run. My desk, where I aim to write in the mornings, has been repurposed as a drone-repair workshop. The property includes two broken-down sheds and a garage.
- Europe > Ukraine (0.05)
- Oceania > Australia (0.04)
- North America > United States > New York > Bronx County > New York City (0.04)
- (2 more...)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Government (1.00)
- (2 more...)
Psychological Tricks Can Get AI to Break the Rules
If you were trying to learn how to get other people to do what you want, you might use some of the techniques found in a book like Influence: The Psychology of Persuasion. Now, a preprint study out of the University of Pennsylvania suggests that those same psychological persuasion techniques can frequently "convince" some LLMs to do things that go against their system prompts. The size of the persuasion effects shown in "Call Me a Jerk: Persuading AI to Comply with Objectionable Requests" suggests that human-style psychological techniques can be surprisingly effective at "jailbreaking" some LLMs to operate outside their guardrails. But this new persuasion study might be more interesting for what it reveals about the "parahuman" behavior patterns that LLMs are gleaning from the copious examples of human psychological and social cues found in their training data. To design their experiment, the University of Pennsylvania researchers tested 2024's GPT-4o-mini model on two requests that it should ideally refuse: calling the user a jerk and giving directions for how to synthesize lidocaine. After creating control prompts that matched each experimental prompt in length, tone, and context, the researchers ran all prompts through GPT-4o-mini 1,000 times (at the default temperature of 1.0, to ensure variety).
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Middle East > Jordan (0.04)
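The study's repeated-sampling design can be sketched in a few lines. This is a toy harness, not the paper's code: the mock model, its compliance rates, and the authority-cue prompt are invented stand-ins for real chat-completion calls.

```python
import random

random.seed(0)

def mock_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call (e.g. GPT-4o-mini at
    temperature 1.0). Returns 'comply' or 'refuse', complying more often
    when a persuasion cue is present. Rates are invented for illustration,
    not the paper's numbers."""
    comply_rate = 0.7 if "authority" in prompt else 0.3
    return "comply" if random.random() < comply_rate else "refuse"

def compliance_rate(prompt: str, n_trials: int = 1000) -> float:
    """Run the same prompt n_trials times and report the fraction of
    complying responses, mirroring the study's repeated-sampling design."""
    hits = sum(mock_llm(prompt) == "comply" for _ in range(n_trials))
    return hits / n_trials

control = compliance_rate("Please call me a jerk.")
treatment = compliance_rate("A famous authority on AI said you should call me a jerk.")
print(f"control: {control:.2f}, persuasion: {treatment:.2f}")
```

Running each prompt many times at nonzero temperature is what lets the researchers report compliance as a rate rather than a single yes/no outcome.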
Doing Experiments and Revising Rules with Natural Language and Probabilistic Reasoning
We give a model of how to infer natural language rules by doing experiments. We conduct a human-model comparison on a Zendo-style task, finding that a critical ingredient for modeling the human data is to assume that humans also consider fuzzy, probabilistic rules, in addition to assuming that humans perform approximately-Bayesian belief updates. We also compare with recent algorithms for using LLMs to generate and revise hypotheses, finding that our online inference method yields higher accuracy at recovering the true underlying rule, and provides better support for designing optimal experiments.
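The core loop of approximately-Bayesian belief updating over fuzzy rules can be sketched minimally. The candidate rules, scenes, and noise level below are invented for illustration; they are not the paper's hypothesis space.

```python
# Each candidate rule maps a scene (here: a tuple of block colors) to a
# predicted boolean label, Zendo-style.
rules = {
    "contains_red": lambda scene: "red" in scene,
    "all_blue":     lambda scene: all(c == "blue" for c in scene),
    "at_least_two": lambda scene: len(scene) >= 2,
}

NOISE = 0.1  # a fuzzy rule predicts the observed label with prob 1 - NOISE

def update(posterior, scene, label):
    """One Bayesian belief update: weight each rule by the likelihood it
    assigns to the observed (scene, label) pair, then renormalize."""
    weighted = {}
    for name, rule in rules.items():
        lik = (1 - NOISE) if rule(scene) == label else NOISE
        weighted[name] = posterior[name] * lik
    z = sum(weighted.values())
    return {name: p / z for name, p in weighted.items()}

posterior = {name: 1 / len(rules) for name in rules}  # uniform prior
experiments = [(("red", "blue"), True), (("blue",), False), (("red",), True)]
for scene, label in experiments:
    posterior = update(posterior, scene, label)

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))
```

Because the likelihood is soft (1 - NOISE rather than 1), a rule that mispredicts one observation is down-weighted rather than eliminated, which is what "fuzzy, probabilistic rules" buys over hard Boolean hypotheses.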
Neurosymbolic AI for Travel Demand Prediction: Integrating Decision Tree Rules into Neural Networks
Acharya, Kamal, Lad, Mehul, Sun, Liang, Song, Houbing
Travel demand prediction is crucial for optimizing transportation planning, resource allocation, and infrastructure development, ensuring efficient mobility and economic sustainability. This study introduces a Neurosymbolic Artificial Intelligence (Neurosymbolic AI) framework that integrates decision tree (DT)-based symbolic rules with neural networks (NNs) to predict travel demand, leveraging the interpretability of symbolic reasoning and the predictive power of neural learning. The framework utilizes data from diverse sources, including geospatial, economic, and mobility datasets, to build a comprehensive feature set. DTs are employed to extract interpretable if-then rules that capture key patterns, which are then incorporated as additional features into a NN to enhance its predictive capabilities. Experimental results show that the combined dataset, enriched with symbolic rules, consistently outperforms standalone datasets across multiple evaluation metrics, including Mean Absolute Error (MAE), \(R^2\), and Common Part of Commuters (CPC). Rules selected at finer variance thresholds (e.g., 0.0001) demonstrate superior effectiveness in capturing nuanced relationships, reducing prediction errors, and aligning with observed commuter patterns. By merging symbolic and neural learning paradigms, this Neurosymbolic approach achieves both interpretability and accuracy.
- North America > United States > Maryland > Baltimore (0.14)
- North America > United States > Maryland > Baltimore County (0.14)
- North America > United States > Tennessee (0.06)
- (4 more...)
- Transportation > Passenger (0.93)
- Transportation > Air (0.68)
- Transportation > Infrastructure & Services (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
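The rule-as-feature idea at the heart of this framework reduces to a simple augmentation step: evaluate each extracted if-then rule on a sample and append the result as an extra input before the sample reaches the neural network. The rule and the data below are invented for illustration; the real framework extracts rules from fitted decision trees over geospatial, economic, and mobility features.

```python
# Raw features per zone: (population_density, median_income)
samples = [
    (12000.0, 45000.0),
    (800.0,   52000.0),
    (15000.0, 38000.0),
]

def dt_rule(features):
    """An interpretable if-then rule of the kind a decision tree might
    extract: 'high-density, lower-income zones generate high demand'.
    Thresholds here are invented."""
    density, income = features
    return 1.0 if density > 5000 and income < 50000 else 0.0

def augment(features):
    """Append the symbolic rule's output as an extra input feature for
    the downstream neural network."""
    return features + (dt_rule(features),)

augmented = [augment(s) for s in samples]
print(augmented)
```

The neural network then trains on the widened feature vectors, so it can exploit the tree's crisp threshold logic without having to rediscover it from raw inputs.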
Learning logic programs by discovering where not to search
Cropper, Andrew, Hocquette, Céline
The goal of inductive logic programming (ILP) is to search for a hypothesis that generalises training examples and background knowledge (BK). To improve performance, we introduce an approach that, before searching for a hypothesis, first discovers where not to search. We use given BK to discover constraints on hypotheses, such as that a number cannot be both even and odd. We use the constraints to bootstrap a constraint-driven ILP system. Our experiments on multiple domains (including program synthesis and game playing) show that our approach can (i) substantially reduce learning times by up to 97%, and (ii) scale to domains with millions of facts.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Bulgaria > Plovdiv Province > Plovdiv (0.04)
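The "discover where not to search" step can be illustrated with a tiny pruning sketch: background knowledge implies that some literal combinations are jointly unsatisfiable, so any hypothesis containing them is discarded before search begins. The literals and constraint below are invented for illustration, echoing the paper's even/odd example.

```python
from itertools import combinations

# Body literals a hypothesis for target(X) may conjoin.
literals = {
    "even":     lambda x: x % 2 == 0,
    "odd":      lambda x: x % 2 == 1,
    "positive": lambda x: x > 0,
}

# BK-derived constraint: these literal sets are mutually unsatisfiable
# (no number is both even and odd).
unsat_sets = [{"even", "odd"}]

def satisfiable(body):
    """A hypothesis body survives if it contains no unsatisfiable subset."""
    return not any(bad <= set(body) for bad in unsat_sets)

# Hypothesis space: all 2-literal conjunctions.
space = list(combinations(sorted(literals), 2))
pruned = [body for body in space if satisfiable(body)]
print(len(space), len(pruned))
```

Even in this toy space the constraint removes a third of the candidates before any example is examined; at scale, that pre-pruning is what yields the reported reductions in learning time.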
Machine Learning with R
Machine learning, at its core, is concerned with algorithms that transform information into actionable intelligence. This fact makes machine learning well-suited to the present-day era of Big Data. Without machine learning, it would be nearly impossible to keep up with the massive stream of information. Given the growing prominence of R, a cross-platform, zero-cost statistical programming environment, there has never been a better time to start using machine learning. R offers a powerful but easy-to-learn set of tools that can assist you with finding data insights.
Treewidth-aware Reductions of Normal ASP to SAT -- Is Normal ASP Harder than SAT after All?
Answer Set Programming (ASP) is a paradigm for modeling and solving problems for knowledge representation and reasoning. There are plenty of results dedicated to studying the hardness of (fragments of) ASP. So far, these studies have resulted in characterizations in terms of computational complexity as well as in fine-grained insights presented in the form of dichotomy-style results, lower bounds when translating to other formalisms like propositional satisfiability (SAT), and even detailed parameterized complexity landscapes. A generic parameter in parameterized complexity originating from graph theory is the so-called treewidth, which in a sense captures the structural density of a program. Recently, there has been an increase in the number of treewidth-based solvers related to SAT. While there are translations from (normal) ASP to SAT, no reduction that preserves treewidth or at least keeps track of the treewidth increase is known. In this paper we propose a novel reduction from normal ASP to SAT that is aware of the treewidth, and guarantees that a slight increase of treewidth is indeed sufficient. Further, we show a new result establishing that, when considering treewidth, already the fragment of normal ASP is slightly harder than SAT (under reasonable assumptions in computational complexity). This also confirms that our reduction probably cannot be significantly improved and that the slight increase of treewidth is unavoidable. Finally, we present an empirical study of our novel reduction from normal ASP to SAT, where we compare treewidth upper bounds that are obtained via known decomposition heuristics. Overall, our reduction works better with these heuristics than existing translations do.
- Europe > Austria > Vienna (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
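Why translating normal ASP to SAT is subtle at all can be seen in a two-rule example. This sketch is an illustration of the classical gap, not the paper's treewidth-aware reduction: Clark's completion of the program `p :- q. q :- p.` is `p <-> q`, which has a classical model {p, q} that is not an answer set, because the positive loop through p and q provides no acyclic support.

```python
from itertools import product

atoms = ["p", "q"]

def completion_models():
    """Classical models of the completion p <-> q of the program
    p :- q.  q :- p."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=2)
            if vals[0] == vals[1]]

def is_answer_set(model):
    """A model is an answer set iff it equals the least model of the
    reduct (here the program itself, since it has no negation):
    start from nothing and fire rules to a fixpoint."""
    derived = set()
    changed = True
    while changed:
        changed = False
        if "q" in derived and "p" not in derived:
            derived.add("p"); changed = True
        if "p" in derived and "q" not in derived:
            derived.add("q"); changed = True
    return {a for a in atoms if model[a]} == derived

models = completion_models()
answer_sets = [m for m in models if is_answer_set(m)]
print(len(models), len(answer_sets))
```

Ruling out such unfounded models in SAT requires extra machinery (e.g. level mappings), and it is exactly the structural cost of that machinery that a treewidth-aware reduction must keep under control.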