trustee
Where did I put it? Loss of vital crypto key voids election
Feedback is entertained by the commotion at the International Association for Cryptologic Research's recent elections, where results could not be decrypted after an honest but unfortunate human mistake. The phrase "you couldn't make it up", Feedback feels, is often misunderstood. It doesn't mean there are limits to the imagination, but rather that there are some developments you can't include in a fictional story because people would say "oh come on, that would never happen". The trouble is, those people are wrong, because real life is frequently ridiculous. In the world of codes and ciphers, one of the more important organisations is the International Association for Cryptologic Research, described as "a non-profit organization devoted to supporting the promotion of the science of cryptology". The IACR recently held elections to choose new officers and directors and to tweak its bylaws.
- North America > United States > California (0.05)
- North America > Mexico (0.05)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > Pennsylvania (0.04)
- North America > United States > Michigan (0.04)
- (7 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine (1.00)
- Government (0.67)
- Leisure & Entertainment > Games (0.46)
- (2 more...)
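The failure mode in the IACR story, a lost trustee key rendering election results undecryptable, is inherent to key-sharing schemes in which every share is required for decryption. A minimal sketch of Shamir secret sharing illustrates it; the prime, toy key, and parameters below are invented for illustration and are not the IACR's actual setup.

```python
# Toy Shamir secret sharing: a key is split among trustees as points on a
# random polynomial; with threshold == n_shares, losing any one share
# makes the key unrecoverable.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a toy key

def split_secret(secret, threshold, n_shares):
    """Split `secret` into points on a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789
shares = split_secret(key, threshold=3, n_shares=3)  # all three shares required
print(reconstruct(shares) == key)       # all trustees present: key recovered
print(reconstruct(shares[:2]) == key)   # one share lost: reconstruction fails
```

With a threshold lower than the number of trustees (say 2-of-3), any two shares would suffice, which is exactly the redundancy that guards against the mishap described above.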
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.72)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents > Agent Societies (0.45)
Can LLMs Reason About Trust?: A Pilot Study
Debnath, Anushka, Cranefield, Stephen, Lorini, Emiliano, Savarimuthu, Bastin Tony Roy
In human society, trust is an essential component of social attitudes: it helps build and maintain long-term, healthy relationships, creating a strong foundation for cooperation and enabling individuals to work together effectively to achieve shared goals. As many human interactions occur through electronic means such as mobile apps, the potential arises for AI systems to assist users in understanding the social state of their relationships. In this paper we investigate the ability of Large Language Models (LLMs) to reason about trust between two individuals in an environment which requires fostering trust relationships. We also assess whether LLMs are capable of inducing trust by role-playing one party in a trust-based interaction and planning actions which can instil trust.
- Europe > Spain > Aragón (0.04)
- Oceania > New Zealand (0.04)
- North America > Canada (0.04)
- (4 more...)
- Education (0.68)
- Information Technology > Security & Privacy (0.46)
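Studies of trust between two individuals commonly build on the investment ("trust") game from experimental economics. The pilot study above does not state its exact environment, so the payoff rules below are a hypothetical illustration of the kind of interaction an LLM might be asked to reason about, not the authors' setup.

```python
# One round of a hypothetical investment (trust) game: the trustor sends
# some of an endowment, it is multiplied in transit, and the trustee
# chooses what fraction of the pot to return.
def trust_game(endowment, sent, multiplier, returned_fraction):
    """Return (trustor_payoff, trustee_payoff) for one round."""
    assert 0 <= sent <= endowment
    pot = sent * multiplier
    returned = pot * returned_fraction
    trustor_payoff = endowment - sent + returned
    trustee_payoff = pot - returned
    return trustor_payoff, trustee_payoff

# A trustworthy trustee who returns half the pot rewards the trustor's risk:
print(trust_game(endowment=10, sent=10, multiplier=3, returned_fraction=0.5))
# An untrustworthy trustee who returns nothing leaves the trustor with zero:
print(trust_game(endowment=10, sent=10, multiplier=3, returned_fraction=0.0))
```

A model that can predict how each choice shifts the other party's willingness to send money in later rounds is, in effect, reasoning about trust dynamics.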
New research centre to explore how AI can help humans 'speak' with pets
If your cat's sulking, your dog's whining or your rabbit's doing that strange thing with its paws again, you will recognise that familiar pang of guilt shared by most other pet owners. But for those who wish they knew just what was going on in the minds of their loyal companions, help may soon be at hand – thanks to the establishment of the first scientific institution dedicated to empirically investigating the consciousness of animals. The Jeremy Coller Centre for Animal Sentience, based at the London School of Economics and Political Science (LSE), will begin its work on 30 September, researching non-human animals, including those as evolutionarily distant from us as insects, crabs and cuttlefish. One of its most eye-catching projects will be to explore how AI can help humans "speak" with their pets, the dangers of it going wrong – and what we need to do to prevent that happening. "We like our pets to display human characteristics and with the advent of AI, the ways in which your pet will be able to speak to you is going to be taken to a whole new level," said Prof Jonathan Birch, the inaugural director of the centre.
Head of State Bar of California to step down after exam fiasco
The State Bar of California announced Friday that its embattled leader, who has faced growing pressure to resign over the botched February rollout of a new bar exam, will step down in July. Leah T. Wilson, the agency's executive director, informed the Board of Trustees she will not seek another term in the position she has held on and off since 2017. She also apologized for her role in the February bar exam chaos. "Accountability is a bedrock principle for any leader," Wilson said in a statement. "At the end of the day, I am responsible for everything that occurs within the organization. Despite our best intentions, the experiences of applicants for the February Bar Exam simply were unacceptable, and I fully recognize the frustration and stress this experience caused. While there are no words to assuage those emotions, I do sincerely apologize."
- Law > Government & the Courts (0.54)
- Government > Regional Government > North America Government > United States Government (0.32)
Fostering Trust and Quantifying Value of AI and ML
Artificial Intelligence (AI) and Machine Learning (ML) providers have a responsibility to develop valid and reliable systems. Much has been discussed about trusting AI and ML inferences (the process of running live data through a trained AI model to make a prediction or solve a task), but little has been done to define what that means. Those in the space of ML-based products are familiar with topics such as transparency, explainability, safety, and bias, yet there are no frameworks to quantify and measure them. Producing ever more trustworthy machine learning inferences is a path to increasing the value of products (i.e., increased trust in the results) and to engaging in conversations with users to gather feedback to improve products. In this paper, we begin by examining the dynamic of trust between a provider (Trustor) and users (Trustees). Trustors are required to be trusting and trustworthy, whereas trustees need not be trusting nor trustworthy. The challenge for trustors is to provide results that are good enough to make a trustee raise their level of trust above a minimum threshold for (1) doing business together and (2) continuation of service. We conclude by defining and proposing a framework, and a set of viable metrics, for computing a trust score and objectively understanding how trustworthy a machine learning system can claim to be, as well as how that trustworthiness evolves over time.
- Europe > Switzerland > Geneva > Geneva (0.04)
- Oceania > Australia > Queensland (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- North America > United States > Colorado > Boulder County > Boulder (0.04)
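As a minimal illustration of what such a trust score might look like, here is a weighted-metric sketch with two thresholds for the two relationships the abstract names. The metric names, weights, and threshold values are hypothetical and are not taken from the paper.

```python
# Hypothetical trust score: a weighted average of per-dimension scores in
# [0, 1], compared against minimum thresholds for doing business and for
# continuing the service. All numbers are invented for illustration.
def trust_score(metrics, weights):
    """Weighted average of the scores in `metrics`, keyed by dimension."""
    total = sum(weights.values())
    return sum(metrics[k] * w for k, w in weights.items()) / total

metrics = {"transparency": 0.9, "explainability": 0.7, "safety": 0.95, "bias": 0.8}
weights = {"transparency": 2.0, "explainability": 1.0, "safety": 3.0, "bias": 2.0}
score = trust_score(metrics, weights)

THRESHOLD_DO_BUSINESS = 0.60  # minimum trust to start doing business
THRESHOLD_CONTINUE = 0.75     # minimum trust to continue the service
print(round(score, 3), score >= THRESHOLD_DO_BUSINESS, score >= THRESHOLD_CONTINUE)
```

Tracking this score per release would give the "behavior over time" view the abstract calls for: a downward trend across versions is a warning even while both thresholds are still met.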
Sam Bankman-Fried funded a group with racist ties. FTX wants its $5m back
Multiple events hosted at a historic former hotel in Berkeley, California, have brought together people from intellectual movements popular at the highest levels in Silicon Valley while platforming prominent people linked to scientific racism, the Guardian reveals. But because of alleged financial ties between the non-profit that owns the building – Lightcone Infrastructure (Lightcone) – and jailed crypto mogul Sam Bankman-Fried, the administrators of FTX, Bankman-Fried's failed crypto exchange, are demanding the return of almost $5m that new court filings allege were used to bankroll the purchase of the property. During the last year, Lightcone and its director, Oliver Habryka, have made the $20m Lighthaven Campus available for conferences and workshops associated with the "longtermism", "rationalism" and "effective altruism" (EA) communities, all of which often see empowering the tech sector, its elites and its beliefs as crucial to human survival in the far future. At these events, movement influencers rub shoulders with startup founders and tech-funded San Francisco politicians – as well as people linked to eugenics and scientific racism. Since acquiring the Lighthaven property – formerly the Rose Garden Inn – in late 2022, Lightcone has transformed it into a walled, surveilled compound without attracting much notice outside the subculture it exists to promote.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- North America > United States > California > Alameda County > Berkeley (0.24)
- Europe > Estonia > Harju County > Tallinn (0.06)
- (5 more...)
An Empirical Exploration of Trust Dynamics in LLM Supply Chains
Balayn, Agathe, Yurrita, Mireia, Rancourt, Fanny, Casati, Fabio, Gadiraju, Ujwal
With the widespread proliferation of AI systems, trust in AI is an important and timely topic to navigate. Researchers so far have largely employed a myopic view of this relationship. In particular, a limited number of relevant trustors (e.g., end-users) and trustees (i.e., AI systems) have been considered, and empirical explorations have remained in laboratory settings, potentially overlooking factors that impact human-AI relationships in the real world. In this paper, we argue for broadening the scope of studies addressing "trust in AI" by accounting for the complex and dynamic supply chains that AI systems result from. AI supply chains entail various technical artifacts that diverse individuals, organizations, and stakeholders interact with, in a variety of ways. We present insights from an in-situ, empirical study of LLM supply chains. Our work reveals additional types of trustors and trustees and new factors impacting their trust relationships. These relationships were found to be central to the development and adoption of LLMs, but they can also be the terrain for uncalibrated trust and reliance on untrustworthy LLMs. Based on these findings, we discuss the implications for research on "trust in AI". We highlight new research opportunities and challenges concerning the appropriate study of inter-actor relationships across the supply chain and the development of calibrated trust and meaningful reliance behaviors. We also question the meaning of building trust in the LLM supply chain.
- North America > United States > Hawaii (0.05)
- Europe > Netherlands > South Holland > Delft (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (3 more...)
- Law (1.00)
- Social Sector (0.93)
Using Deep Q-Learning to Dynamically Toggle between Push/Pull Actions in Computational Trust Mechanisms
Lygizou, Zoi, Kalles, Dimitris
Recent work on decentralized computational trust models for open Multi-Agent Systems has resulted in the development of CA, a biologically inspired model which focuses on the trustee's perspective. This new model addresses a serious unresolved problem in existing trust and reputation models, namely the inability to handle constantly changing behaviors and agents' continuous entry into and exit from the system. In previous work, we compared CA to FIRE, a well-known trust and reputation model, and found that CA is superior when the trustor population changes, whereas FIRE is more resilient to changes in the trustee population. Thus, in this paper, we investigate how trustors can detect the presence of several dynamic factors in their environment and then decide which trust model to employ in order to maximize utility. We frame this as a machine learning problem in a partially observable environment, where the presence of these dynamic factors is not known to the trustor. We describe how an adaptable trustor can rely on a few measurable features to assess the current state of the environment and then use Deep Q-Learning (DQN), in a single-agent Reinforcement Learning setting, to learn how to adapt to a changing environment. We ran a series of simulation experiments comparing the performance of the adaptable trustor with that of trustors using only one model (FIRE or CA), and we show that an adaptable agent is indeed capable of learning when to use each model and can thus perform consistently in dynamic environments.
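The toggling idea can be sketched with a tabular, bandit-style simplification: the agent learns a Q-value per (environment state, trust model) pair and picks the model with the higher value. The states, utilities, and hyperparameters below are invented for illustration; the paper itself uses Deep Q-Learning over measurable environment features, not this toy table.

```python
# Toy Q-learner that toggles between two trust models (FIRE and CA)
# depending on which population is changing. Utilities are invented to
# echo the paper's finding: CA wins when the trustor population changes,
# FIRE is more resilient to trustee-population changes.
import random
random.seed(0)

STATES = ["trustors_changing", "trustees_changing"]  # discretized dynamics
ACTIONS = ["FIRE", "CA"]
UTILITY = {("trustors_changing", "CA"): 1.0, ("trustors_changing", "FIRE"): 0.4,
           ("trustees_changing", "FIRE"): 1.0, ("trustees_changing", "CA"): 0.4}

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

for _ in range(2000):
    state = random.choice(STATES)                     # environment drifts
    if random.random() < epsilon:                     # explore
        action = random.choice(ACTIONS)
    else:                                             # exploit current estimate
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward = UTILITY[(state, action)] + random.gauss(0, 0.05)  # noisy utility
    Q[(state, action)] += alpha * (reward - Q[(state, action)])

best = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(best)  # the learned policy toggles between the two models
```

The paper's harder version of the problem is that the state is not observable: the agent must infer which regime it is in from a few measurable features, which is what motivates a neural Q-function rather than this lookup table.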