aspiration
Eggie, Neo, Isaac and Memo are domestic robots. But would you let them load your dishwasher? The idea of having a friendly robot butler that can do all the dull duties of running a home has existed for decades. But now, thanks to AI, it's genuinely happening, and this year the first truly multi-purpose domestic bots will start to enter homes. In Silicon Valley, they're being trained at speed to fold laundry, load the dishwasher, and clean up after us.
- North America > Central America (0.14)
- Oceania > Australia (0.05)
- Europe > United Kingdom > Wales (0.05)
- (14 more...)
- Information Technology (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.48)
- Leisure & Entertainment > Sports (0.42)
Hope, Aspirations, and the Impact of LLMs on Female Programming Learners in Afghanistan
Behmanush, Hamayoon, Akhtari, Freshta, Nooripour, Roghieh, Weber, Ingmar, Cannanure, Vikram Kamath
Designing impactful educational technologies in contexts of socio-political instability requires a nuanced understanding of educational aspirations. Currently, scalable metrics for measuring aspirations are limited. This study adapts, translates, and evaluates Snyder's Hope Scale as a metric for measuring aspirations among 136 women learning programming online during a period of systemic educational restrictions in Afghanistan. The adapted scale demonstrated good reliability (Cronbach's α = 0.78) and participants rated it as understandable and relevant. While overall aspiration-related scores did not differ significantly by access to Large Language Models (LLMs), those with access reported marginally higher scores on the Avenues subscale (p = .056), suggesting broader perceived pathways to achieving educational aspirations. These findings support the use of the adapted scale as a metric for aspirations in contexts of socio-political instability. More broadly, the adapted scale can be used to evaluate the impact of aspiration-driven design of educational technologies.
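The reliability figure above (Cronbach's α = 0.78) measures the internal consistency of a multi-item scale. As a minimal sketch, assuming the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals), on hypothetical toy responses:

```python
# Cronbach's alpha: internal-consistency reliability of a k-item scale.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
# The response matrix below is hypothetical toy data, not study data.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)  # population variance

def cronbach_alpha(rows):
    k = len(rows[0])                       # number of scale items
    items = list(zip(*rows))               # column-wise (per-item) view
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

responses = [            # rows = respondents, columns = items (1-5 Likert)
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [3, 3, 2, 3],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(responses), 3))
```

Values around 0.7 and above are conventionally read as acceptable reliability, which is how the paper's 0.78 supports the adapted scale.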
- Asia > Japan > Honshū > Chūbu > Toyama Prefecture > Toyama (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > Germany > Saarland > Saarbrücken (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Instructional Material (1.00)
- Education > Educational Setting (1.00)
- Education > Curriculum > Subject-Specific Education (0.49)
What Causes Postoperative Aspiration?
Nagesh, Supriya, Covarrubias, Karina, El-Kareh, Robert, Kasiviswanathan, Shiva Prasad, Mishra, Nina
Background: Aspiration, the inhalation of foreign material into the lungs, significantly impacts surgical patient morbidity and mortality. This study develops a machine learning (ML) model to predict postoperative aspiration, enabling timely preventative interventions. Methods: From the MIMIC-IV database of over 400,000 hospital admissions, we identified 826 surgical patients (mean age: 62, 55.7% male) who experienced aspiration within seven days post-surgery, along with a matched non-aspiration cohort. Three ML models (XGBoost, Multilayer Perceptron, and Random Forest) were trained using pre-surgical hospitalization data to predict postoperative aspiration. To investigate causation, we estimated Average Treatment Effects (ATE) using Augmented Inverse Probability Weighting. Results: Our ML model achieved an AUROC of 0.86 and 77.3% sensitivity on a held-out test set. Maximum daily opioid dose, length of stay, and patient age emerged as the most important predictors. ATE analysis identified significant causative factors: opioids (0.25 ± 0.06) and operative site (neck: 0.20 ± 0.13, head: 0.19 ± 0.13). Despite equal surgery rates across genders, men were 1.5 times more likely to aspirate and received 27% higher maximum daily opioid dosages compared to women. Conclusion: ML models can effectively predict postoperative aspiration risk, enabling targeted preventative measures. Maximum daily opioid dosage and operative site significantly influence aspiration risk. The gender disparity in both opioid administration and aspiration rates warrants further investigation. These findings have important implications for improving postoperative care protocols and aspiration prevention strategies.
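The headline AUROC metric can be read as the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one, with ties counting half. A minimal sketch on hypothetical scores, not real MIMIC-IV data:

```python
# AUROC via its Mann-Whitney interpretation: the probability that a
# randomly chosen positive case is scored above a randomly chosen
# negative case (ties count as 1/2). Toy risk scores for illustration.

def auroc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]                    # 1 = aspirated, 0 = did not
risk = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]     # hypothetical model outputs
print(auroc(y, risk))
```

An AUROC of 0.86, as reported above, means a randomly chosen aspirating patient outranks a randomly chosen non-aspirating patient 86% of the time.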
- North America > United States > New York (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Asia > Middle East > Israel (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Between Fear and Desire, the Monster Artificial Intelligence (AI): Analysis through the Lenses of Monster Theory
With the increasing adoption of Artificial Intelligence (AI) in all fields and daily activities, a heated debate has emerged about the advantages and challenges of AI and the need to navigate the concerns associated with it. To contribute to this literature and the ongoing debate, this study draws on Monster Theory to explain the conflicting representations of AI. It suggests that studying monsters in popular culture can provide an in-depth understanding of AI and its monstrous effects. Specifically, this study discusses AI perception and development through the seven theses of Monster Theory. The results reveal that, just like monsters, AI is complex in nature and should not be studied as a separate entity but rather within a given society or culture. Similarly, readers may perceive and interpret AI differently, just as they may interpret monsters differently. The relationship between AI and monsters, as depicted in this study, is not as odd as it might seem at first.
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > Minnesota (0.04)
- Asia > Middle East > Saudi Arabia > Northern Borders Province > Arar (0.04)
- Law (1.00)
- Health & Medicine > Therapeutic Area (0.46)
Fostering Self-Directed Growth with Generative AI: Toward a New Learning Analytics Framework
In an era increasingly shaped by decentralized knowledge ecosystems and pervasive AI technologies, fostering sustainable learner agency has become a critical educational imperative. This paper introduces a novel conceptual framework integrating Generative Artificial Intelligence (GAI) and Learning Analytics (LA) to cultivate Self-Directed Growth -- a dynamic competency enabling learners to iteratively drive their own developmental pathways across diverse contexts. Building upon critical gaps in current Self-Directed Learning (SDL) and AI-mediated educational research, the proposed Aspire to Potentials for Learners (A2PL) model reconceptualizes the interplay of learner aspirations, complex thinking, and summative self-assessment within GAI-supported environments. Methodological implications for future intervention designs and data analytics are discussed, positioning Self-Directed Growth as a pivotal axis for designing equitable, adaptive, and sustainable learning systems in the digital era.
1. Introduction
The educational realm faces two increasingly prominent challenges that threaten to reshape the landscape of learning and development. Firstly, the traditional teacher-dominated, institution-centered environment is being eclipsed by a decentralized, ever-evolving, and technologically advanced online landscape. In this new paradigm, knowledge and skills are not poised and delivered by a single expositor, but are constantly renewed, reproduced, and reiterated through sharing and co-creation, rendering existing models of education insufficient. Secondly, the overreliance on EdTech tools, as well as information search and synthesis tools such as Generative Artificial Intelligence (GAI), among students poses a significant challenge in the contemporary educational landscape, while there is a concerning lack of research examining whether these tools genuinely foster the development of learner agency.
The integration of AI into educational practices offers a transformative opportunity to enhance learning outcomes and promote equity. According to the United Nations Educational, Scientific and Cultural Organization (UNESCO), AI has the potential to accelerate the achievement of Sustainable Development Goal 4 (SDG 4) by improving access to quality education for all learners, regardless of their socioeconomic background (UNESCO, 2019; UNESCO, 2021). As some noted, AI facilitates access to information and online education, helping to bridge the information, skill, and educational gaps faced by disadvantaged individuals who encounter barriers to traditional learning opportunities due to time constraints, financial limitations, geographic distance, or physical challenges (Thakkar et al., 2020; Sanabria-Z et al., 2023).
- Asia > Singapore (0.04)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- North America > United States > New Jersey > Bergen County > Mahwah (0.04)
- (7 more...)
- Research Report (1.00)
- Instructional Material (1.00)
- Education > Educational Setting > Online (1.00)
- Education > Educational Technology > Educational Software > Computer Based Training (0.48)
Non-maximizing policies that fulfill multi-criterion aspirations in expectation
Dima, Simon, Fischer, Simon, Heitzig, Jobst, Oliver, Joss
In dynamic programming and reinforcement learning, the policy for the sequential decision making of an agent in a stochastic environment is usually determined by expressing the goal as a scalar reward function and seeking a policy that maximizes the expected total reward. However, many goals that humans care about naturally concern multiple aspects of the world, and it may not be obvious how to condense those into a single reward function. Furthermore, maximization suffers from specification gaming, where the obtained policy achieves a high expected total reward in an unintended way, often taking extreme or nonsensical actions. Here we consider finite acyclic Markov Decision Processes with multiple distinct evaluation metrics, which do not necessarily represent quantities that the user wants to be maximized. We assume the task of the agent is to ensure that the vector of expected totals of the evaluation metrics falls into some given convex set, called the aspiration set. Our algorithm guarantees that this task is fulfilled by using simplices to approximate feasibility sets and propagate aspirations forward while ensuring they remain feasible. It has complexity linear in the number of possible state-action-successor triples and polynomial in the number of evaluation metrics. Moreover, the explicitly non-maximizing nature of the chosen policy and goals yields additional degrees of freedom, which can be used to apply heuristic safety criteria to the choice of actions. We discuss several such safety criteria that aim to steer the agent towards more conservative behavior.
- Europe > Germany > Brandenburg > Potsdam (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Cologne (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.87)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.34)
Using Large Language Models for Qualitative Analysis can Introduce Serious Bias
Ashwin, Julian, Chhabra, Aditya, Rao, Vijayendra
Large Language Models (LLMs) are quickly becoming ubiquitous, but the implications for social science research are not yet well understood. This paper asks whether LLMs can help us analyse large-N qualitative data from open-ended interviews, with an application to transcripts of interviews with Rohingya refugees in Cox's Bazaar, Bangladesh. We find that a great deal of caution is needed in using LLMs to annotate text, as there is a risk of introducing biases that can lead to misleading inferences. We mean bias in the technical sense: the errors that LLMs make in annotating interview transcripts are not random with respect to the characteristics of the interview subjects. Training simpler supervised models on high-quality human annotations with flexible coding leads to less measurement error and bias than LLM annotations. Therefore, given that some high-quality annotations are necessary in order to assess whether an LLM introduces bias, we argue that it is probably preferable to train a bespoke model on these annotations than to use an LLM for annotation.
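Bias in this technical sense, errors that are not random with respect to subject characteristics, can be surfaced by comparing an annotator's error rate against gold labels across subject groups. A minimal sketch on hypothetical labels and groups:

```python
# Check whether annotation errors are concentrated in one subject group.
# gold = human reference labels, predicted = LLM annotations, groups =
# a subject characteristic. All data below is hypothetical.

def error_rate_by_group(gold, predicted, groups):
    stats = {}
    for y, p, g in zip(gold, predicted, groups):
        err, n = stats.get(g, (0, 0))
        stats[g] = (err + (y != p), n + 1)
    return {g: err / n for g, (err, n) in stats.items()}

gold      = [1, 0, 1, 1, 0, 1, 0, 0]
llm_label = [1, 0, 0, 0, 0, 1, 1, 0]    # hypothetical LLM annotations
group     = ["A", "A", "B", "B", "A", "A", "B", "B"]
print(error_rate_by_group(gold, llm_label, group))
```

If the rates diverge sharply between groups (as they do in this toy data), downstream comparisons between the groups are confounded by the annotator, which is the paper's central caution.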
- Asia > Bangladesh (0.24)
- North America > United States > California > Santa Clara County > Stanford (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Netherlands > Limburg > Maastricht (0.04)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (1.00)
- Research Report > New Finding (0.93)
- Government (0.85)
- Education > Educational Setting > Religious School (0.46)
- Education > Educational Setting > Higher Education (0.46)
Do Performance Aspirations Matter for Guiding Software Configuration Tuning?
Configurable software systems can be tuned for better performance. Leveraging Pareto optimizers, recent work has shifted from tuning for a single, time-related performance objective to two intrinsically different objectives that assess distinct performance aspects of the system, each with varying aspirations. Before we design better optimizers, a crucial engineering decision is how to handle performance requirements with clear aspirations in the tuning process. For this, the community takes two alternative optimization models: either quantifying and incorporating the aspirations into the search objectives that guide the tuning, or not considering the aspirations during the search and using them purely in the later decision-making process. However, despite being a crucial decision that determines how an optimizer can be designed and tailored, there is a rather limited understanding of which optimization model should be chosen under what particular circumstance, and why. In this paper, we seek to close this gap. Firstly, we do so through a review of over 426 papers in the literature and 14 real-world requirements datasets. Drawing on these, we then conduct a comprehensive empirical study that covers 15 combinations of state-of-the-art performance requirement patterns, four types of aspiration space, three Pareto optimizers, and eight real-world systems/environments, leading to 1,296 cases of investigation. We found that (1) the realism of aspirations is the key factor that determines whether they should be used to guide the tuning; (2) the given patterns and the position of realistic aspirations in the objective landscape are less important for the choice, but they do matter to the extent of improvement; (3) the available tuning budget can also influence the choice for unrealistic aspirations, but it is insignificant under realistic ones.
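The two optimization models contrasted above can be sketched on a toy set of measured configurations with two minimization objectives: one model lets the aspiration point guide the search directly, the other searches for the Pareto front and applies aspirations only at decision time. All names and numbers below are hypothetical:

```python
# Model 1: aspiration-guided search (a Chebyshev-style distance to the
# aspiration point steers the choice). Model 2: search without the
# aspirations (keep the Pareto front), then filter by them afterwards.
# Both objectives are minimized; all measurements are hypothetical.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def aspiration_guided(points, asp):
    # Model 1: smallest worst-case overshoot past the aspiration point.
    return min(points, key=lambda p: max(x - a for x, a in zip(p, asp)))

def decide_after(points, asp):
    # Model 2: Pareto front first, then prefer points meeting all aspirations.
    front = pareto_front(points)
    ok = [p for p in front if all(x <= a for x, a in zip(p, asp))]
    return ok or front

configs = [(10, 80), (20, 40), (40, 20), (80, 10), (50, 50)]
asp = (30, 45)   # hypothetical aspirations, e.g. latency <= 30, memory <= 45
print(aspiration_guided(configs, asp))
print(decide_after(configs, asp))
```

In this toy case both models agree; the paper's point is that with unrealistic aspirations (no configuration satisfies them) the two models diverge, and the choice between them starts to matter.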
- Europe > United Kingdom > England > West Midlands > Birmingham (0.14)
- Europe > United Kingdom > England > Leicestershire > Loughborough (0.04)
- Oceania > New Zealand > North Island > Wellington Region > Wellington (0.04)
- (29 more...)
Investors have big expectations for generative AI startups
The company then spent the next decade, and billions of dollars, trying to use Watson's artificial intelligence capabilities to solve a broad set of healthcare challenges, from helping doctors diagnose diseases based on symptoms to recommending clinical trials. In January, IBM announced it was selling Watson for parts to PE firm Francisco Partners. AI technologies have come a long way since that game show triumph. Some AI systems, such as those recommending ads on Google or detecting cancer on medical scans, have become part of everyday life. While improvements for these types of AI have been mostly incremental, over the last year machines suddenly became good at generating images and writing text.
Part Two: Hope, Hype, and Disappointment - Forward to the Future
The first major wave of AI was based on the premise that knowledge could be "represented" as a set of rules that computers could process with logic. If you could add enough rules, you could eventually produce commonsense knowledge of the world and general intelligence. In its day, it generated great excitement and funding. But its focus was on a process to produce knowledge (logic), not on knowledge itself. The assumption that knowledge consists merely of a set of assertions that could be represented in symbols was flawed. It did not scale; knowledge was never achieved.