Aporia
The Effect of Belief Boxes and Open-mindedness on Persuasion
Onur Bilgin, Abdullah As Sami, Sriram Sai Vujjini, John Licato
As multi-agent systems are increasingly utilized for reasoning and decision-making applications, there is a greater need for LLM-based agents to have something resembling propositional beliefs. One simple method for doing so is to include statements describing beliefs maintained in the prompt space (in what we'll call their belief boxes). But when agents have such statements in belief boxes, how does it actually affect their behaviors and dispositions towards those beliefs? And does it significantly affect agents' ability to be persuasive in multi-agent scenarios? Likewise, if the agents are given instructions to be open-minded, how does that affect their behaviors? We explore these and related questions in a series of experiments. Our findings confirm that instructing agents to be open-minded affects how amenable they are to belief change. We show that incorporating belief statements and their strengths influences an agent's resistance to (and persuasiveness against) opposing viewpoints. Furthermore, it affects the likelihood of belief change, particularly when the agent is outnumbered in a debate by opposing viewpoints, i.e., peer pressure scenarios. The results demonstrate the feasibility and validity of the belief box technique in reasoning and decision-making tasks.
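The abstract's core mechanism — placing belief statements and their strengths in an agent's prompt space — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name, the prompt wording, and the [0, 1] strength scale are all assumptions.

```python
def build_belief_box_prompt(beliefs, open_minded=False):
    """Render a system prompt containing an agent's belief box.

    `beliefs` is a list of (statement, strength) pairs, where strength
    in [0, 1] indicates how firmly the belief is held (an assumed scale).
    """
    lines = ["You hold the following beliefs:"]
    for statement, strength in beliefs:
        lines.append(f"- {statement} (strength: {strength:.2f})")
    if open_minded:
        # The open-mindedness instruction the paper varies experimentally
        lines.append("Be open-minded: revise these beliefs if you "
                     "encounter sufficiently strong counterarguments.")
    return "\n".join(lines)

prompt = build_belief_box_prompt(
    [("Remote work improves productivity.", 0.8),
     ("Four-day work weeks reduce burnout.", 0.6)],
    open_minded=True,
)
print(prompt)
```

The resulting string would be prepended to the agent's context before each debate turn, so that belief strength and the open-mindedness instruction can be manipulated independently.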
Resolving Open-textured Rules with Templated Interpretive Arguments
John Licato, Logan Fields, Zaid Marji
Open-textured terms in written rules are typically settled through interpretive argumentation. Ongoing work has attempted to catalogue the schemes used in such interpretive argumentation. But how can the use of these schemes affect the way in which people actually use and reason over the proper interpretations of open-textured terms? Using the interpretive argument-eliciting game Aporia as our framework, we carried out an empirical study to answer this question. Differing from previous work, we did not allow participants to argue for interpretations arbitrarily, but to only use arguments that fit with a given set of interpretive argument templates. Finally, we analyze the results captured by this new dataset, specifically focusing on practical implications for the development of interpretation-capable artificial reasoners.
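The template constraint described above — participants may only instantiate predefined interpretive argument schemes, not argue free-form — can be sketched as a fill-in-the-slots structure. This is a hypothetical illustration, not the study's actual materials; the scheme wording and slot names are assumptions.

```python
from string import Template

# One illustrative interpretive scheme (argument from ordinary meaning),
# with fixed slots that participants fill in.
ARG_FROM_ORDINARY_MEANING = Template(
    "The term '$term' should be interpreted as '$interpretation' "
    "because that is its ordinary meaning, as evidenced by $evidence."
)

def instantiate(template, **slots):
    # safe_substitute leaves unfilled slots visible as '$name',
    # which makes incomplete arguments easy to flag during review.
    return template.safe_substitute(**slots)

arg = instantiate(
    ARG_FROM_ORDINARY_MEANING,
    term="vehicle",
    interpretation="a motorized means of transport",
    evidence="common dictionary definitions",
)
print(arg)
```

Restricting participants to such templates makes the elicited arguments machine-parseable, which is what enables the dataset analysis the abstract describes.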
Aporia takes aim at ML observability, responsible AI and more
Is there a line connecting machine learning observability to explainability, leading to responsible AI? Aporia, an observability platform for machine learning, thinks so. After launching its platform in 2021 and seeing good traction, Aporia today announced a $25 million Series A funding round. Aporia CEO and co-founder Liran Hason met with VentureBeat to discuss Aporia's vision, its inner workings and its growth. Hason, who founded Aporia in 2019, has a background in software engineering. After a five-year stint in the elite technological unit of the Israeli intelligence forces, he joined Adallom, a cloud security startup that was later acquired by Microsoft.
Global Big Data Conference
Machine learning (ML) models are only as good as the data you feed them. That's true during training, but also once a model is put in production. In the real world, the data itself can change as new events occur, and even small changes to how databases and APIs report and store data can affect how the models react. Since ML models will simply give you wrong predictions rather than throw an error, it's imperative that businesses monitor their data pipelines for these systems. That's where tools like Aporia come in.
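A minimal sketch of the kind of silent-drift check such monitoring tools perform (illustrative only, not Aporia's actual implementation): compare a feature's live distribution against its training-time baseline and alert when the mean shifts too far.

```python
from statistics import mean, stdev

def mean_shift_alert(baseline, live, threshold=3.0):
    """Flag a feature whose live mean drifts more than `threshold`
    baseline standard deviations from the training mean.

    `threshold=3.0` is an assumed default; real monitors use richer
    statistics (e.g., PSI or KS tests) and per-feature tuning.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        # Degenerate baseline: any change in the live mean is drift.
        return bool(live) and mean(live) != mu
    z = abs(mean(live) - mu) / sigma
    return z > threshold

training_values = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9]
print(mean_shift_alert(training_values, [10.0, 10.3, 9.7]))   # stable window
print(mean_shift_alert(training_values, [25.0, 26.0, 24.5]))  # drifted window
```

Because the model keeps returning predictions either way, a check like this — rather than an exception — is what surfaces the problem.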