Resonance: Drawing from Memories to Imagine Positive Futures through AI-Augmented Journaling
Zulfikar, Wazeer, Chiaravalloti, Treyden, Shen, Jocelyn, Picard, Rosalind, Maes, Pattie
People inherently draw on past experiences when imagining their future, a capability that plays a crucial role in mental health. Resonance is an AI-powered journaling tool designed to augment this ability by offering AI-generated, action-oriented suggestions for future activities based on the user's own past memories. Suggestions are offered when a new memory is logged and are followed by a prompt for the user to imagine carrying out the suggestion. In a two-week randomized controlled study (N=55), we found that using Resonance significantly improved mental health outcomes, reducing users' PHQ-8 scores, a measure of current depression, and increasing their daily positive affect, particularly when they reported being likely to act on a suggestion. Notably, suggestions were more effective when they were personal, novel, and referenced the user's logged memories. Finally, through open-ended feedback, we discuss the factors that encouraged or hindered use of the tool.
Natural revision is contingently-conditionalized revision
Natural revision seems so natural: it changes beliefs as little as possible to incorporate new information. Yet counterexamples show it can go wrong. It is so conservative that it never believes anything outright; it only believes under the current conditions. This is right in some cases and wrong in others. Which is which? Answering requires extending natural revision from simple formulae expressing universal truths (something holds) to conditionals expressing conditional truths (something holds under certain conditions). The extension is based on the basic principles natural revision follows, identified here as minimal change, indifference, and naivety: change beliefs as little as possible; equate the likeliness of scenarios by default; believe everything until contradicted. The extension says that natural revision restricts changes to the current conditions. A comparison with an unrestricted revision shows what exactly the current conditions are: not what is currently considered true if that contradicts the new information, but something that includes more and more unlikely scenarios until the new information is at least possible.
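As a rough illustration of the minimal-change behavior the abstract describes, the sketch below implements natural revision in the standard ranked-model setting (worlds ordered by plausibility, rank 0 = most plausible): the most plausible worlds satisfying the new information are promoted to rank 0, and everything else keeps its relative order. The representation (worlds as dictionary keys, formulae as predicates) is our own assumption for illustration, not the paper's formalism.

```python
# Hedged sketch: natural revision over a plausibility ranking of worlds.
# ranks maps each world to a non-negative integer (0 = most plausible);
# new_info is a predicate saying whether a world satisfies the new formula.

def natural_revision(ranks, new_info):
    """Promote the minimally ranked worlds satisfying new_info to rank 0,
    leaving the relative order of all other worlds unchanged."""
    satisfying = [w for w in ranks if new_info(w)]
    if not satisfying:
        return dict(ranks)  # new information impossible: nothing changes
    best = min(ranks[w] for w in satisfying)
    promoted = {w for w in satisfying if ranks[w] == best}
    # Shift every non-promoted world down one level so the promoted
    # worlds become strictly most plausible; relative order is preserved.
    return {w: 0 if w in promoted else r + 1 for w, r in ranks.items()}

# Example: three worlds; 'b' and 'c' satisfy the new information.
ranks = {"a": 0, "b": 1, "c": 2}
revised = natural_revision(ranks, lambda w: w in {"b", "c"})
# 'b' is the most plausible satisfying world, so it is promoted;
# 'a' and 'c' keep their relative order below it.
```

The conservatism the abstract criticizes is visible here: only the single most plausible layer of satisfying worlds moves, so the agent comes to believe the new information merely "in the current conditions" rather than across all scenarios.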
Naive probability
Historically, the theory of probability emerged from the efforts of Pascal and Fermat in the 1650s to solve problems posed by a gambler, the Chevalier de Méré (Rényi, 1972; Devlin, 2008), and reached its current form with Kolmogorov (1933). Remarkably, not even highly experienced gamblers can extract high-precision probability estimates from observed data: one of de Méré's questions concerned comparing the probabilities of getting at least one 6 in four rolls of one die (p ≈ 0.5177) and getting at least one double-6 in 24 throws of a pair of dice (p ≈ 0.4914). Four decades later, Samuel Pepys asked Newton to discern the difference between getting at least two 6s when 12 dice are rolled (p ≈ 0.6187) and at least three 6s when 18 dice are rolled (p ≈ 0.5973). In this paper we make this phenomenon, the very limited ability of people to deal with probabilities, the focal point of our inquiry. These limitations, we will argue, go beyond the well-understood limits of numerosity (Dehaene, 1997) and touch upon areas such as the cognitive limits of deduction (Kracht, 2011) and default inheritance (Etherington, 1987). We offer a model of the naive/commonsensical theory of probability. In Section 2 we discuss likeliness, which we take to be a valuation of propositions on a discrete (seven-point) scale. In Section 3 we turn to the inference mechanism supported by the naive theory, akin to Jeffreys-style probability updates. In Section 4 we briefly sketch the background theory and discuss what we take to be the central concern, learnability.
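The four dice probabilities quoted above follow from elementary combinatorics: the de Méré questions use the complement rule P(at least one success) = 1 − (1 − p)^n, and the Pepys-Newton questions use the binomial distribution for "at least k successes in n trials". A short check:

```python
from math import comb

def at_least_one(p_single, n):
    """P(at least one success in n independent trials)."""
    return 1 - (1 - p_single) ** n

def at_least_k(k, n, p):
    """P(at least k successes in n independent Bernoulli(p) trials)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

p1 = at_least_one(1 / 6, 4)    # one 6 in 4 rolls of a die      -> ~0.5177
p2 = at_least_one(1 / 36, 24)  # one double-6 in 24 pair throws -> ~0.4914
p3 = at_least_k(2, 12, 1 / 6)  # at least two 6s in 12 dice     -> ~0.6187
p4 = at_least_k(3, 18, 1 / 6)  # at least three 6s in 18 dice   -> ~0.5973
```

The differences (about 0.026 and 0.021 respectively) are far too small to detect from casual play, which is exactly why these questions defeated even experienced gamblers.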
All that is English may be Hindi: Enhancing language identification through automatic ranking of likeliness of word borrowing in social media
Patro, Jasabanta, Samanta, Bidisha, Singh, Saurabh, Basu, Abhipsa, Mukherjee, Prithwish, Choudhury, Monojit, Mukherjee, Animesh
In this paper, we present a set of computational methods to identify the likeliness of a word being borrowed, based on signals from social media. In terms of Spearman correlation coefficient values, our methods perform more than two times better (nearly 0.62) at predicting borrowing likeliness than the best-performing baseline (nearly 0.26) reported in the literature. Based on this likeliness estimate, we asked annotators to re-annotate the language tags of foreign words in predominantly native contexts. In 88 percent of cases the annotators felt that the foreign-language tag should be replaced by the native-language tag, indicating a large scope for improvement of automatic language identification systems.