

AI in Money Matters

Tchatchoua, Nadine Sandjo, Harper, Richard

arXiv.org Artificial Intelligence

In November 2022, Europe and the world at large were stunned by the arrival of a new large language model: ChatGPT. Ever since, both academic and popular discussions have taken place in public spheres such as LinkedIn and X (formerly known as Twitter) with a view to understanding the tool and its benefits for society. The views of real actors in professional spaces, especially in regulated industries such as finance and law, have been largely missing. We aim to begin closing this gap by presenting results from an empirical investigation conducted through interviews with professional actors in the Fintech industry. The paper asks: how and to what extent are large language models in general, and ChatGPT in particular, being adopted and used in the Fintech industry? The results show that while the Fintech experts we spoke with see potential in using large language models in the future, many question marks remain concerning how such models are policed and therefore how they might be adopted in a regulated industry such as Fintech. This paper adds to the existing academic discussion around large language models by contributing to our understanding of professional viewpoints.


New Jersey woman accused of hiring Tinder date to kill her ex and his teen daughter: court docs

FOX News

'The Big Weekend Show' co-hosts discuss Tinder user traffic peaking during 'Dating Sunday.' A New Jersey woman is accused of hiring a man she met on Tinder to kill her police officer ex-boyfriend and his daughter, according to authorities. Camden County Prosecutor Grace C. MacAulay charged Jaclyn Diiorio, 26, with two counts of attempted first-degree murder, one count of conspiracy to commit murder and one count of third-degree possession of a controlled dangerous substance in connection with the alleged crime. Diiorio, of Runnemede, allegedly told a confidential informant she met on Tinder that she wanted her ex, a 53-year-old Philadelphia Police Department officer, and his 19-year-old daughter killed, Gloucester Township police in New Jersey said in a news release. The informant and Diiorio allegedly exchanged several phone calls and text messages after meeting on the dating app, and later met in person at a Wawa, according to court documents obtained by Fox News Digital.


Learning Phonotactics from Linguistic Informants

Breiss, Canaan, Ross, Alexis, Maina-Kilaas, Amani, Levy, Roger, Andreas, Jacob

arXiv.org Artificial Intelligence

We propose an interactive approach to language learning that utilizes linguistic acceptability judgments from an informant (a competent language user) to learn a grammar. Given a grammar formalism and a framework for synthesizing data, our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies, asks the informant for a binary judgment, and updates its own parameters in preparation for the next query. We demonstrate the effectiveness of our model in the domain of phonotactics, the rules governing what kinds of sound-sequences are acceptable in a language, and carry out two experiments, one with typologically-natural linguistic data and another with a range of procedurally-generated languages. We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, and sometimes greater than, fully supervised approaches.
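The query loop described above (select an informative item, ask the informant for a binary judgment, update the model) can be illustrated with a toy version-space sketch. All names here are invented for illustration: the "grammar" family (each hypothesis bans one consonant bigram) and the maximum-disagreement policy are stand-ins for the paper's actual formalisms and information-theoretic policies, not a reproduction of them.

```python
import itertools

# Hypothetical toy setup: each candidate "grammar" bans exactly one bigram.
# The learner does not know the true grammar; a simulated informant answers
# binary acceptability queries.
ALPHABET = "abk"
CANDIDATE_BANS = [a + b for a in ALPHABET for b in ALPHABET]  # 9 hypotheses

def acceptable(word, banned_bigram):
    # A word is acceptable under a grammar iff it avoids the banned bigram.
    return banned_bigram not in word

def informant(word, true_ban="kk"):
    # Binary acceptability judgment from a simulated competent speaker.
    return acceptable(word, true_ban)

def most_informative_query(hypotheses, candidates):
    # Uncertainty-style policy: pick the word whose judgment splits the
    # surviving hypotheses as evenly as possible (maximum disagreement).
    def split_score(word):
        yes = sum(acceptable(word, h) for h in hypotheses)
        return min(yes, len(hypotheses) - yes)
    return max(candidates, key=split_score)

hypotheses = list(CANDIDATE_BANS)
candidates = ["".join(p) for p in itertools.product(ALPHABET, repeat=3)]

for step in range(10):
    if len(hypotheses) == 1:
        break
    query = most_informative_query(hypotheses, candidates)
    judgment = informant(query)
    # Keep only hypotheses consistent with the informant's answer.
    hypotheses = [h for h in hypotheses if acceptable(query, h) == judgment]

print(hypotheses)  # the surviving grammar, here the true ban "kk"
```

In this sketch each well-chosen query eliminates several hypotheses at once, which is the intuition behind the sample-efficiency claim: informative queries can shrink the hypothesis space faster than passively observed labeled data.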


Kids can learn from robots--with a lot of help from humans

#artificialintelligence

Could robots be part of the answer to alleviating teacher shortages (and other staffing issues) in the future? Lots of folks think so, and new research indicates kids might already be primed to accept a non-human information source. A group of researchers from Concordia University in Montréal, Canada, ran two experiments with groups of three- and five-year-old children, all recruited from a database of existing research participants. Families received gift cards and the children received certificates of merit for participating. Approximately half the sample was white, a quarter of the sample was mixed race, and the remainder consisted of various other ethnic groups (such as African, Asian, and South American).


Lensing Machines: Representing Perspective in Latent Variable Models

Dinakar, Karthik, Lieberman, Henry

arXiv.org Artificial Intelligence

Many datasets represent a combination of several viewpoints - different ways of looking at the same data that lead to different generalizations. For example, a corpus with examples generated by different people may be mixtures of many perspectives and can be viewed with different perspectives by others. It isn't always possible to represent the viewpoints by a clean separation, in advance, of examples representing each viewpoint and train a separate model for each viewpoint. We introduce lensing, a mixed-initiative technique to (1) extract 'lenses' or mappings between machine-learned representations and perspectives of human experts, and to (2) generate 'lensed' models that afford multiple perspectives of the same dataset. We apply lensing for two classes of latent variable models (a) a mixed-membership model and (b) a matrix factorization model in the context of two mental health applications, and we capture and imbue the perspectives of clinical psychologists into these models. Our work shows the benefits of the machine learning practitioner formally incorporating the perspective of a knowledgeable domain expert into their models rather than estimating unlensed models themselves in isolation.


Would a robot trust you? Developmental robotics model of trust and theory of mind

#artificialintelligence

The technological revolution taking place in the fields of robotics and artificial intelligence seems to indicate a future shift in our human-centred social paradigm towards a greater inclusion of artificial cognitive agents in our everyday environments. This means that collaborative scenarios between humans and robots will become more frequent and will have a deeper impact on everyday life. In this setting, research regarding trust in human–robot interactions (HRI) assumes a major importance in order to ensure the highest quality of the interaction itself, as trust directly affects the willingness of people to accept information produced by a robot and to cooperate with it. Many studies have already explored the trust that humans give to robots and how it can be enhanced by tuning both the design and the behaviour of the machine, but much less research has focused on the opposite scenario: the trust that artificial agents can assign to people. Despite this, the latter is a critical factor in joint tasks where humans and robots depend on each other's effort to achieve a shared goal: just as a robot can fail, so can a person. For an artificial agent, knowing when to trust or distrust somebody and adapting its plans to this prediction can make all the difference in the success or failure of the task. Our work is centred on the design and development of an artificial cognitive architecture for a humanoid autonomous robot that incorporates trust, theory of mind (ToM) and episodic memory, as we believe these are the three key factors for the purpose of estimating the trustworthiness of others. We have tested our architecture on an established developmental psychology experiment [1] and the results we obtained confirm that our approach successfully models trust mechanisms and dynamics in cognitive robots.
Trust is a fundamental, unavoidable component of social interactions that can be defined as the willingness of a party (the trustor) to rely on the actions of another party (the trustee), with the former having no control over the latter [2].


Amazon's Ring is the largest civilian surveillance network the US has ever seen Lauren Bridges

The Guardian

In a 2020 letter to management, Max Eliaser, an Amazon software engineer, said Ring is "simply not compatible with a free society". We should take his claim seriously. Ring video doorbells, Amazon's signature home security product, pose a serious threat to a free and democratic society. Not only is Ring's surveillance network spreading rapidly, it is extending the reach of law enforcement into private property and expanding the surveillance of everyday life. What's more, once Ring users agree to release video content to law enforcement, there is no way to revoke access and few limitations on how that content can be used, stored, and with whom it can be shared.


Learning Half-Spaces and other Concept Classes in the Limit with Iterative Learners

Khazraei, Ardalan, Kötzing, Timo, Seidel, Karen

arXiv.org Machine Learning

In order to model an efficient learning paradigm, iterative learning algorithms access data one by one, updating the current hypothesis without recourse to past data. Past research on iterative learning has analyzed, for example, many important additional requirements and their impact on iterative learners. In this paper, our results are twofold. First, we analyze the relative learning power of various settings of iterative learning, including learning from text and from informant, as well as various further restrictions; for example, we show that strongly non-U-shaped learning is restrictive for iterative learning from informant. Second, we investigate the learnability of the concept class of half-spaces and provide a constructive iterative algorithm to learn the set of half-spaces from informant.
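The iterative paradigm the abstract describes, updating the current hypothesis from one labeled example at a time with no memory of past data, has a familiar concrete instance for half-spaces: the classic perceptron. The sketch below is only that generic textbook algorithm under an invented target, not the paper's constructive algorithm; the target weights and sampling scheme are assumptions for illustration.

```python
import random

random.seed(0)

# Hypothetical target half-space through the origin: 2*x - y >= 0.
TRUE_W = (2.0, -1.0)

def label(point):
    # The informant supplies both positive (+1) and negative (-1) data.
    x, y = point
    return 1 if TRUE_W[0] * x + TRUE_W[1] * y >= 0 else -1

w = [0.0, 0.0]  # current hypothesis, updated in place (no stored past data)

for _ in range(2000):
    p = (random.uniform(-1, 1), random.uniform(-1, 1))
    y = label(p)  # single informant-labeled example
    if y * (w[0] * p[0] + w[1] * p[1]) <= 0:  # misclassified: update
        w[0] += y * p[0]
        w[1] += y * p[1]

# Check agreement of the learned hypothesis with the target on fresh points.
test_points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
agree = sum(
    label(p) == (1 if w[0] * p[0] + w[1] * p[1] >= 0 else -1)
    for p in test_points
)
print(agree / 500)
```

The point of the sketch is the update discipline, not the accuracy figure: each step consults only the current weight vector and the single new example, which is exactly the iterative-learner constraint the paper studies (with informant data, i.e., both positive and negative examples).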


Questioning the AI: Informing Design Practices for Explainable AI User Experiences

Liao, Q. Vera, Gruen, Daniel, Miller, Sarah

arXiv.org Artificial Intelligence

A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic. While many recognize the necessity to incorporate explainability features in AI systems, how to address real-world user needs for understanding AI remains an open question. By interviewing 20 UX and design practitioners working on various AI products, we seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products. To do so, we develop an algorithm-informed XAI question bank in which user needs for explainability are represented as prototypical questions users might ask about the AI, and use it as a study probe. Our work contributes insights into the design space of XAI, informs efforts to support design practices in this space, and identifies opportunities for future XAI work. We also provide an extended XAI question bank and discuss how it can be used for creating user-centered XAI.


How Data Scientists Work Together With Domain Experts in Scientific Collaborations: To Find The Right Answer Or To Ask The Right Question?

Mao, Yaoli, Wang, Dakuo, Muller, Michael, Varshney, Kush R., Baldini, Ioana, Dugan, Casey, Mojsilović, Aleksandra

arXiv.org Artificial Intelligence

In recent years there has been an increasing trend in which data scientists and domain experts work together to tackle complex scientific questions. However, such collaborations often face challenges. In this paper, we aim to decipher this collaboration complexity through a semi-structured interview study with 22 interviewees from teams of bio-medical scientists collaborating with data scientists. In the analysis, we adopt the Olsons' four-dimensions framework proposed in Distance Matters to code interview transcripts. Our findings suggest that besides glitches in the collaboration readiness, technology readiness, and coupling of work dimensions, the tensions that exist in the common ground building process influence the collaboration outcomes and then persist in the actual collaboration process. In contrast to prior works' general account of building a high level of common ground, we find that breakdowns of content common ground, together with the strengthening of process common ground, are more beneficial for scientific discovery. We discuss why this is, offer design suggestions, and conclude the paper with future directions and limitations.