
Collaborating Authors

Montague


When Should Neural Data Inform Welfare? A Critical Framework for Policy Uses of Neuroeconomics

Zhu, Yiven

arXiv.org Artificial Intelligence

Neuroeconomics promises to ground welfare analysis in neural and computational evidence about how people value outcomes, learn from experience and exercise self-control. At the same time, policy and commercial actors increasingly invoke neural data to justify paternalistic regulation, "brain-based" interventions and new welfare measures. This paper asks under what conditions neural data can legitimately inform welfare judgements for policy rather than merely describing behaviour. I develop a non-empirical, model-based framework that links three levels: neural signals, computational decision models and normative welfare criteria. Within an actor-critic reinforcement-learning model, I formalise the inference path from neural activity to latent values and prediction errors and then to welfare claims. I show that neural evidence constrains welfare judgements only when the neural-computational mapping is well validated, the decision model identifies "true" interests versus context-dependent mistakes, and the welfare criterion is explicitly specified and defended. Applying the framework to addiction, neuromarketing and environmental policy, I derive a Neuroeconomic Welfare Inference Checklist for regulators and for designers of NeuroAI systems. The analysis treats brains and artificial agents as value-learning systems while showing that internal reward signals, whether biological or artificial, are computational quantities and cannot be treated as welfare measures without an explicit normative model.
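To make the central distinction concrete, here is a minimal actor-critic sketch in Python; the toy environment, learning rates, and variable names are illustrative assumptions, not the paper's implementation. It shows that the critic's values V and the prediction error delta are ordinary computational quantities produced by an update rule, which is exactly why they cannot be read off as welfare measures without a separately defended normative criterion.

```python
# Minimal actor-critic sketch (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
V = np.zeros(n_states)                    # critic: latent state values
prefs = np.zeros((n_states, n_actions))   # actor: action preferences
alpha_v, alpha_p, gamma = 0.1, 0.1, 0.95

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(s, a):
    # toy environment: action 1 taken in the last state pays off
    s_next = min(s + a, n_states - 1)
    r = 1.0 if (s == n_states - 1 and a == 1) else 0.0
    return s_next, r

s = 0
for _ in range(2000):
    a = rng.choice(n_actions, p=softmax(prefs[s]))
    s_next, r = step(s, a)
    delta = r + gamma * V[s_next] - V[s]   # reward prediction error
    V[s] += alpha_v * delta                # critic update
    prefs[s, a] += alpha_p * delta         # actor update
    s = 0 if r > 0 else s_next

# V and delta describe what the agent has learned to expect. Treating either
# as a welfare measure is the inferential step the framework says must be
# argued for, not assumed.
```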


The Algebra of Meaning: Why Machines Need Montague More Than Moore's Law

Jeong, Cheonkam, Kim, Sungdo, Park, Jewoo

arXiv.org Artificial Intelligence

Contemporary language models are fluent yet routinely mishandle the types of meaning their outputs entail. We argue that hallucination, brittle moderation, and opaque compliance outcomes are symptoms of missing type-theoretic semantics rather than data or scale limitations. Building on Montague's view of language as a typed, compositional algebra, we recast alignment as a parsing problem: natural-language inputs must be compiled into structures that make explicit their descriptive, normative, and legal dimensions under context. We present Savassan, a neuro-symbolic architecture that compiles utterances into Montague-style logical forms and maps them to typed ontologies extended with deontic operators and jurisdictional contexts. Neural components extract candidate structures from unstructured inputs; symbolic components perform type checking, constraint reasoning, and cross-jurisdiction mapping to produce compliance-aware guidance rather than binary censorship. In cross-border scenarios, the system "parses once" (e.g., defect_claim(product_x, company_y)) and projects the result into multiple legal ontologies (e.g., defamation risk in KR/JP, protected opinion in the US, GDPR checks in the EU), composing the outcomes into a single, explainable decision. This paper contributes: (i) a diagnosis of hallucination as a type error; (ii) a formal Montague-ontology bridge for business/legal reasoning; and (iii) a production-oriented design that embeds typed interfaces across the pipeline. We outline an evaluation plan using legal reasoning benchmarks and synthetic multi-jurisdiction suites. Our position is that trustworthy autonomy requires compositional typing of meaning, enabling systems to reason about what is described, what is prescribed, and what incurs liability within a unified algebra of meaning.
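A minimal sketch of the "parse once, project many" idea follows. The predicate name, argument tuple, and jurisdiction rules are invented for illustration; Savassan's actual ontologies and interfaces are not reproduced here.

```python
# Toy "parse once, project into many legal ontologies" sketch (assumed names).
from dataclasses import dataclass

@dataclass(frozen=True)
class LogicalForm:
    predicate: str   # e.g. "defect_claim"
    args: tuple      # typed arguments, e.g. ("product_x", "company_y")
    modality: str    # "descriptive" or a deontic operator like "forbidden"

# toy jurisdictional ontologies: predicate -> compliance outcome
JURISDICTIONS = {
    "KR": {"defect_claim": "defamation_risk"},
    "JP": {"defect_claim": "defamation_risk"},
    "US": {"defect_claim": "protected_opinion"},
    "EU": {"defect_claim": "gdpr_check_required"},
}

def project(lf: LogicalForm) -> dict:
    """Parse once, then project the same logical form into every ontology."""
    return {j: rules.get(lf.predicate, "no_rule")
            for j, rules in JURISDICTIONS.items()}

lf = LogicalForm("defect_claim", ("product_x", "company_y"), "descriptive")
print(project(lf))
# {'KR': 'defamation_risk', 'JP': 'defamation_risk',
#  'US': 'protected_opinion', 'EU': 'gdpr_check_required'}
```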


We're Finding Out More About What Using A.I. for Writing Does to Your Thinking. The Timing Couldn't Be Worse.

Slate

This spring's hot topic of conversation for my colleagues in higher ed was that "Everyone Is Cheating Their Way Through College" article in New York magazine. Most of the professors I spoke with about it were horrified by how often students now can and do let A.I. write their papers. Others are joining their students in asking, Why not? A surprising coalition -- William Shakespeare and 17th-century scribes, as well as 21st-century elementary school teachers, anti-fascist scholars, and epidemiologists -- would tell you why not. A key principle for 17th-century scholars transcribing or translating classical or biblical texts was lectio difficilior potior: The reading that is stranger is stronger.


Intensional FOL: Many-Sorted Extension

Majkic, Zoran

arXiv.org Artificial Intelligence

Each concept used in IFOL (Intensional FOL) has an associated list of sorted attributes, and the sorts are themselves intensional concepts. The requirement to extend unsorted IFOL to a many-sorted IFOL rests mainly on the fact that natural language is implicitly many-sorted and that we intend to use IFOL to support applications that process natural language. The proposed many-sorted version of IFOL is thus the completion of this conceptual feature of IFOL.
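As a toy illustration of this feature, here is a sketch with invented sorts and predicates (IFOL's actual intensional machinery is far richer): an atom is well-formed only when the sorts of its arguments match the concept's sorted attribute list.

```python
# Toy many-sorted signature and sort checking (illustrative names only).
SORTS = {"alice": "Person", "acme": "Company", "y2024": "Year"}

# concept -> its list of sorted attributes
SIGNATURE = {
    "employs": ("Company", "Person"),
    "founded": ("Company", "Year"),
}

def well_sorted(pred, args):
    """An atom is well-formed only if argument sorts match the signature."""
    expected = SIGNATURE[pred]
    return (len(args) == len(expected)
            and all(SORTS[a] == s for a, s in zip(args, expected)))

print(well_sorted("employs", ("acme", "alice")))   # True
print(well_sorted("employs", ("alice", "acme")))   # False: sorts swapped
```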


Benchmarking Compositionality with Formal Languages

Valvoda, Josef, Saphra, Naomi, Rawski, Jonathan, Williams, Adina, Cotterell, Ryan

arXiv.org Artificial Intelligence

Recombining known primitive concepts into larger novel combinations is a quintessentially human cognitive capability. Whether large neural models in NLP can acquire this ability while learning from data is an open question. In this paper, we investigate this problem from the perspective of formal languages. We use deterministic finite-state transducers to make an unbounded number of datasets with controllable properties governing compositionality. By randomly sampling over many transducers, we explore which of their properties contribute to learnability of a compositional relation by a neural network. We find that the models either learn the relations completely or not at all. The key is transition coverage, setting a soft learnability limit at 400 examples per transition.
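To illustrate the recipe, here is a minimal sketch under assumed sizes (3 states, binary alphabets) rather than the paper's settings: sample a random deterministic finite-state transducer, then generate (input, output) pairs by running it. Transition coverage is then the fraction of transitions the sampled training strings actually exercise.

```python
# Sample a random deterministic FST and generate a toy dataset from it.
import random

random.seed(0)
n_states, in_alpha, out_alpha = 3, "ab", "xy"

# delta[state][symbol] = (next_state, output_symbol); one entry per
# (state, symbol) pair makes the transducer deterministic by construction
delta = {q: {c: (random.randrange(n_states), random.choice(out_alpha))
             for c in in_alpha}
         for q in range(n_states)}

def transduce(s):
    q, out = 0, []
    for c in s:
        q, o = delta[q][c]
        out.append(o)
    return "".join(out)

# a dataset is just sampled strings paired with their transductions;
# coverage counts how many of the n_states * len(in_alpha) transitions
# the sampled inputs exercise
inputs = ["".join(random.choice(in_alpha) for _ in range(8)) for _ in range(100)]
data = [(s, transduce(s)) for s in inputs]
print(data[:2])
```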


AI in climate change: Machine learning helps predict methane well leaks

#artificialintelligence

AI could play a key role in tackling climate change after scientists used the technology to identify greenhouse gas leaks in oil and gas wells. Research conducted at the University of Vermont used machine learning algorithms to predict whether wells would emit significant amounts of methane – one of the most harmful gases contributing to global warming. The study tested 38,391 wells in Alberta, Canada, and was able to determine which wells leaked – and which didn't – with up to 87% accuracy. Professor George Pinder, who conducted the research alongside former doctoral student James Montague, said: "The big picture is that we can now have a tool that could help us much more efficiently identify leaking wells. Given that methane is such a significant contributor to global warming, this is powerful information that should be put to use." The analysis yielded a cluster of 16 traits that predicted whether a well would fail and leak. For 4,000 wells, researchers had access to more complete information, including the fluid properties of the oil or natural gas being extracted; for these wells, the machine learning algorithm identified leaks with 87% accuracy. For a larger sample of about 28,500 wells, where the fluid properties were not known and could not be taken into account, accuracy fell to 62%. Companies in Alberta are required to test wells when they begin operating to determine whether they have failed and are leaking methane, and they must keep careful records of each well's construction characteristics. Professor Anthony R. Ingraffea of Cornell University's School of Civil and Environmental Engineering in Ithaca, New York, is an expert in oil and natural gas well design and construction but was not involved in the study. He said: "Provincial and state regulatory agencies never have enough inspectors or financial resources to locate, let alone repair, leaking wells."
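For readers curious what such a model looks like in outline, here is a hedged sketch: a random-forest classifier trained on synthetic well traits. The feature count mirrors the 16 traits mentioned above, but the choice of algorithm, the features, and the data are assumptions, not the study's.

```python
# Illustrative leak classifier on synthetic well traits (not the study's model).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_wells, n_traits = 4000, 16            # mirrors the 4,000-well subset
X = rng.normal(size=(n_wells, n_traits))
# synthetic labels: leak risk driven by a couple of traits
leak_prob = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
y = rng.random(n_wells) < leak_prob

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # held-out accuracy
```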


Car voice commands won't suck with Nuance's assistant - Roadshow

#artificialintelligence

Prompted by an activation phrase, Dragon Drive recognizes a driver named Lior by his voice. Voice command in cars shows so much potential to help drivers keep their eyes on the road, but since its implementation the technology has largely resulted in frustration. Sure, placing a call to a specific contact usually works, but just try finding a destination in the navigation system. It becomes worse when the car doesn't show which commands it understands. Nuance, the company behind the majority of voice systems in cars, thinks it has the problem licked through the use of machine learning and the cloud, essentially equipping cars with a virtual assistant.


Analytics, OR, data science and machine learning: what's in a name?

#artificialintelligence

Analytics, statistics, operations research, data science and machine learning - with which term do you prefer to associate? Are you from the House of Capulet or Montague, or do you even care? "That which we call a rose / By any other name would smell as sweet." Romeo was from the house of Montague and Juliet from the house of Capulet, a distinction that meant their families were sworn enemies. The play is a tragedy because, by the end, the two lovers end up dead as a result of this long-running feud. Statistics, data science and machine learning are but a few of the "houses" that feud today over names, and while to my knowledge no deaths have resulted from this debate, the competing camps have nearly come to blows. "How has the emerging field of Analytics impacted the Operations Research Profession? Is Analytics part of OR or the other way around? Is it good, bad, relevant, a nuisance or an opportunity for the OR profession? Is OR just Prescriptive or is it something more? In this panel discussion, we will explore these topics in a session with some of the leading thinkers in both OR and Analytics. Be sure to attend to have your questions answered on these highly complementary and valuable fields."


Foraging in an Uncertain Environment Using Predictive Hebbian Learning

Montague, P. Read, Dayan, Peter, Sejnowski, Terrence J.

Neural Information Processing Systems

Survival is enhanced by an ability to predict the availability of food, the likelihood of predators, and the presence of mates. We present a concrete model that uses diffuse neurotransmitter systems to implement a predictive version of a Hebb learning rule embedded in a neural architecture based on anatomical and physiological studies on bees. The model captured the strategies seen in the behavior of bees and a number of other animals when foraging in an uncertain environment. The predictive model suggests a unified way in which neuromodulatory influences can be used to bias actions and control synaptic plasticity. Successful predictions enhance adaptive behavior by allowing organisms to prepare for future actions, rewards, or punishments. Moreover, it is possible to improve upon behavioral choices if the consequences of executing different actions can be reliably predicted. Although classical and instrumental conditioning results from the psychological literature [1] demonstrate that the vertebrate brain is capable of reliable prediction, how these predictions are computed in brains is not yet known. The brains of vertebrates and invertebrates possess small nuclei which project axons throughout large expanses of target tissue and deliver various neurotransmitters such as dopamine, norepinephrine, and acetylcholine [4]. The activity in these systems may report on reinforcing stimuli in the world or may reflect an expectation of future reward [5, 6, 7, 8].
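To make the rule concrete, here is a minimal Python sketch of a predictive Hebbian update reduced to a two-timestep trial; the stimuli, learning rate, and reward schedule are illustrative choices, not the paper's parameters. A diffuse scalar error signal delta, combining the current reward with the change in prediction, gates the Hebbian weight change.

```python
# Minimal predictive Hebbian rule on a two-timestep trial (illustrative).
import numpy as np

w = np.zeros(2)    # weights from two sensory inputs, e.g. two flower colors
lr = 0.1

def trial(x, r):
    """One trial: stimulus x at time t, outcome r at t+1."""
    global w
    p = float(w @ x)       # predicted reward while the stimulus is present
    delta = r + 0.0 - p    # diffuse error: reward plus the next prediction
                           # (0 at trial end) minus the current prediction
    w += lr * delta * x    # Hebbian change gated by the scalar delta signal
    return delta

for _ in range(50):
    trial(np.array([1.0, 0.0]), r=1.0)   # "blue" flowers rewarded
    trial(np.array([0.0, 1.0]), r=0.0)   # "yellow" flowers unrewarded

print(w)   # the rewarded stimulus's weight converges toward the reward value
```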

