vigilance
Many-Eyes and Sentinels in Selfish and Cooperative Groups
Pilgrim, Charlie, Bate, Andrew M, Sigalou, Anna, Aellen, Mélisande, Morford, Joe, Warren, Elizabeth, Krupenye, Christopher, Biro, Dora, Mann, Richard P
Collective vigilance describes how animals in groups benefit from the predator detection efforts of others. Empirical observations typically find either a many-eyes strategy with all (or many) group members maintaining a low level of individual vigilance, or a sentinel strategy with one (or a few) individuals maintaining a high level of individual vigilance while others do not. With a general analytical treatment that makes minimal assumptions, we show that these two strategies are alternate solutions to the same adaptive problem of balancing the costs of predation and vigilance. Which strategy is preferred depends on how costs scale with the level of individual vigilance: many-eyes strategies are preferred where costs of vigilance rise gently at low levels but become steeper at higher levels (convex; e.g. an open field); sentinel strategies are preferred where costs of vigilance rise steeply at low levels and then flatten out (concave; e.g. environments with vantage points). This same dichotomy emerges whether individuals act selfishly to optimise their own fitness or cooperatively to optimise group fitness. The model is extended to explain discrete behavioural switching between strategies and differential levels of vigilance such as edge effects.
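The convexity argument above can be illustrated with a toy allocation problem. This is our own minimal sketch, not the authors' model: given a fixed total amount of vigilance to allocate across a group, Jensen's inequality makes a convex per-individual cost favour spreading vigilance thinly (many eyes), while a concave cost favours concentrating it in one individual (a sentinel). The group size, total vigilance, and cost functions below are arbitrary illustrative choices.

```python
# Toy sketch of the cost-convexity dichotomy (illustrative assumptions,
# not the paper's actual model).

def total_cost(allocation, cost):
    """Summed vigilance cost across all group members."""
    return sum(cost(v) for v in allocation)

n, V = 10, 2.0
many_eyes = [V / n] * n            # everyone a little vigilant
sentinel = [V] + [0.0] * (n - 1)   # one highly vigilant individual

convex = lambda v: v ** 2    # cost rises gently at low vigilance, steeply at high
concave = lambda v: v ** 0.5  # cost rises steeply at low vigilance, then flattens

print(total_cost(many_eyes, convex), total_cost(sentinel, convex))
print(total_cost(many_eyes, concave), total_cost(sentinel, concave))
```

With the convex cost, sharing the same total vigilance is cheaper for the group; with the concave cost, the sentinel allocation wins, matching the dichotomy described in the abstract.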
- North America > United States (0.14)
- Europe > United Kingdom > England > West Yorkshire > Leeds (0.04)
- Europe > Spain (0.04)
- Information Technology > Game Theory (0.69)
- Information Technology > Artificial Intelligence (0.46)
- Information Technology > Communications (0.46)
Are Large Language Models Sensitive to the Motives Behind Communication?
Wu, Addison J., Liu, Ryan, Oktar, Kerem, Sumers, Theodore R., Griffiths, Thomas L.
Human communication is motivated: people speak, write, and create content with a particular communicative intent in mind. As a result, information that large language models (LLMs) and AI agents process is inherently framed by humans' intentions and incentives. People are adept at navigating such nuanced information: we routinely identify benevolent or self-serving motives in order to decide what statements to trust. For LLMs to be effective in the real world, they too must critically evaluate content by factoring in the motivations of the source -- for instance, weighing the credibility of claims made in a sales pitch. In this paper, we undertake a comprehensive study of whether LLMs have this capacity for motivational vigilance. We first employ controlled experiments from cognitive science to verify that LLMs' behavior is consistent with rational models of learning from motivated testimony, and find they successfully discount information from biased sources in a human-like manner. We then extend our evaluation to sponsored online adverts, a more naturalistic reflection of LLM agents' information ecosystems. In these settings, we find that LLMs' inferences do not track the rational models' predictions nearly as closely -- partly due to additional information that distracts them from vigilance-relevant considerations. However, a simple steering intervention that boosts the salience of intentions and incentives substantially increases the correspondence between LLMs and the rational model. These results suggest that LLMs possess a basic sensitivity to the motivations of others, but generalizing to novel real-world settings will require further improvements to these models.
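The rational-model benchmark the study compares LLMs against can be sketched with a toy Bayesian example (our own illustration, not the paper's model): a listener hears "this product is good" from either a disinterested source, who is mostly truthful, or a salesperson, who praises the product regardless of its quality. The priors and likelihoods below are arbitrary illustrative numbers.

```python
# Toy Bayesian model of discounting motivated testimony (illustrative
# numbers, not the paper's experimental setup).
prior_good = 0.5

# P(says "good" | product good), P(says "good" | product bad) per source.
sources = {
    "disinterested": (0.9, 0.1),  # mostly truthful either way
    "salesperson":   (1.0, 1.0),  # incentivised to praise regardless
}

posteriors = {}
for name, (p_if_good, p_if_bad) in sources.items():
    num = p_if_good * prior_good
    posteriors[name] = num / (num + p_if_bad * (1 - prior_good))
    print(f"{name}: P(good | 'good') = {posteriors[name]:.2f}")
```

Because the salesperson's praise is equally likely whether the product is good or bad, a rational listener leaves the belief at the prior; the disinterested source's praise shifts it substantially. The paper's finding is that LLMs track this kind of discounting well in controlled settings but less well amid the distractions of real adverts.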
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Banking & Finance (1.00)
- Health & Medicine > Therapeutic Area (0.67)
- Education (0.67)
- Leisure & Entertainment > Games (0.45)
Human Control: Definitions and Algorithms
How can humans stay in control of advanced artificial intelligence systems? One proposal is corrigibility, which requires the agent to follow the instructions of a human overseer, without inappropriately influencing them. In this paper, we formally define a variant of corrigibility called shutdown instructability, and show that it implies appropriate shutdown behavior, retention of human autonomy, and avoidance of user harm. We also analyse the related concepts of non-obstruction and shutdown alignment, three previously proposed algorithms for human control, and one new algorithm.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
Cybercrime: be careful what you tell your chatbot helper…
Concerns about the growing abilities of chatbots trained on large language models, such as OpenAI's GPT-4, Google's Bard and Microsoft's Bing Chat, are making headlines. Experts warn of their ability to spread misinformation on a monumental scale, as well as the existential risk their development may pose to humanity. As if this isn't worrying enough, a third area of concern has opened up – illustrated by Italy's recent ban of ChatGPT on privacy grounds. The Italian data regulator has voiced concerns over the model used by ChatGPT owner OpenAI and announced it would investigate whether the firm had broken strict European data protection laws. Chatbots can be useful for work and personal tasks, but they collect vast amounts of data.
- Europe > Italy (0.25)
- North America > United States (0.05)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.57)
VigiFlood: evaluating the impact of a change of perspective on flood vigilance
Emergency managers receive communication training about the importance of being 'first, right and credible', and about taking into account the psychology of their audience and their particular reasoning under stress and risk. But we believe that citizens should be similarly trained in how to deal with risk communication. In particular, such messages necessarily carry a degree of uncertainty, since most natural risks are difficult to forecast accurately ahead of time. Yet citizens should keep trusting emergency communicators even after they have made forecasting errors in the past. We have designed a serious game called VigiFlood, based on a real case study of flash floods that hit the southwest of France in October 2018. In this game, the user changes perspective by taking the role of an emergency communicator who must set the level of vigilance used to alert the population, based on uncertain clues. Our hypothesis is that this change of perspective can improve the player's awareness of, and response to, future flood vigilance announcements. We evaluated the game through an online survey in which people answered a questionnaire about flood risk awareness and behavioural intentions before and after playing, in order to assess its impact.
- Europe > France > Auvergne-Rhône-Alpes > Isère > Grenoble (0.04)
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.04)
- Oceania > New Zealand (0.04)
- (6 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report (0.82)
- Health & Medicine (1.00)
- Education (1.00)
- Leisure & Entertainment > Games (0.68)
- Government > Regional Government > North America Government > United States Government (0.68)
ProMedica Health System to Deploy PeriGen Artificial Intelligence Solution Focused on Improving Outcomes in Childbirth (Markets Insider)
PeriGen, an innovator of perinatal early warning systems, today announced that ProMedica, a not-for-profit integrated health care organization serving 30 states, plans to deploy the company's PeriWatch Vigilance, an artificial intelligence-based maternal-fetal early warning system (EWS), in all of its labor and delivery hospitals. Vigilance is designed to help clinicians identify troubling trends earlier and more consistently than manual assessments and creates a common language for nurses and physicians to assess cases. The artificial intelligence-driven technology, developed by PeriGen, is the latest chapter in ProMedica's commitment to lead improvement in Ohio and Michigan's infant and maternal mortality and morbidity rates, which currently rank near the bottom of the nation. The software is designed to be implemented in a matter of weeks and brings an unprecedented level of monitoring to the labor and delivery floor. It does not require replacing any current systems already in place.
- North America > United States > Michigan (0.26)
- North America > United States > Ohio > Lucas County > Toledo (0.08)
- North America > United States > North Carolina > Wake County > Cary (0.06)
Most Stressful Job on the Road: Not Driving an Autonomous Car
"The computer is fallible, so it's the human who is supposed to be perfect," one former Uber test driver said. "It's kind of the reverse of what you think about computers." The fatal crash last week in Tempe, Ariz., involving an Uber autonomous vehicle is bringing new scrutiny to both the quality of Uber's collision-avoidance technology and the efficacy of its backup system of so-called safety drivers. The accident, in which a woman was struck and killed as she walked a bicycle across a road at night, is believed to be the first involving a death from a self-driving car. In much of the autonomous-vehicle testing done on public roads, there are two safety drivers: one in the driver's seat, and one in the front passenger seat who is assigned the task of logging incidents on a computer but, drivers say, also helps by keeping a second set of eyes on the road.
- North America > United States > Arizona > Maricopa County > Tempe (0.25)
- North America > United States > California (0.05)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.85)
How autonomous vehicles could save over 350K lives in the US and millions worldwide (ZDNet)
In 2016, 37,461 people died in traffic accidents in the US, a 5.6 percent increase over 2015, according to the US Department of Transportation (DoT). This is down from 1970, when around 60,000 people died in traffic accidents in the US. The addition of safety features such as seat belts and air bags has reduced the number of deaths, and new technology from autonomous vehicles could help even more as driver error is eliminated. This ebook, based on a special feature from ZDNet and TechRepublic, looks at emerging autonomous transport technologies and how they will affect society and the future of business. DoT researchers estimate that fully autonomous vehicles, also known as self-driving cars, could reduce traffic fatalities by up to 94 percent by eliminating those accidents that are due to human error.
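The headline figure can be reconstructed from the numbers in the summary, under one assumption of ours that the article does not state explicitly: the savings are accumulated over roughly a decade.

```python
# Back-of-the-envelope reading of the "over 350K lives" headline
# (the ten-year horizon is our assumption, not stated in the article).
us_traffic_deaths_2016 = 37_461
human_error_share = 0.94  # DoT estimate cited in the summary

saved_per_year = us_traffic_deaths_2016 * human_error_share
print(round(saved_per_year))       # lives saved per year at 2016 rates
print(round(saved_per_year * 10))  # over a decade, i.e. "over 350K"
```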
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
- Transportation > Electric Vehicle (0.97)
- (2 more...)
Drones, AI To Guard Against Shark Attacks On Australian Beaches
With shark attacks increasing every year, more vigilance is needed, especially at beaches popular for surfing. Since it has become evident that human vigilance alone is not enough, technology is coming to the fore: starting next month, the Australian government will deploy artificially intelligent 'Little Ripper' drones on the country's beaches for enhanced surveillance. Little Ripper drones cost $250,000 and can stay in the air for two and a half hours at a time. In the case of an imminent attack, a drone will carry inflatable rafts and GPS beacons to aid rescuers. The drones will monitor the beaches using on-board cameras.
- Oceania > Australia (0.39)
- North America > United States > California (0.06)
Neural Network Analysis of Event Related Potentials and Electroencephalogram Predicts Vigilance
Venturini, Rita, Lytton, William W., Sejnowski, Terrence J.
Automated monitoring of vigilance in attention intensive tasks such as air traffic control or sonar operation is highly desirable. As the operator monitors the instrument, the instrument would monitor the operator, insuring against lapses. We have taken a first step toward this goal by using feedforward neural networks trained with backpropagation to interpret event related potentials (ERPs) and electroencephalogram (EEG) associated with periods of high and low vigilance. The accuracy of our system on an ERP data set averaged over 28 minutes was 96%, better than the 83% accuracy obtained using linear discriminant analysis. Practical vigilance monitoring will require prediction over shorter time periods. We were able to average the ERP over as little as 2 minutes and still get 90% correct prediction of a vigilance measure. Additionally, we achieved similarly good performance using segments of EEG power spectrum as short as 56 sec.
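The core technique described above, a feedforward network trained with backpropagation to label physiological feature vectors as high- or low-vigilance, can be sketched in a few lines. This is a minimal NumPy illustration on synthetic band-power-like features, not the authors' ERP/EEG data or architecture; all sizes and hyperparameters are arbitrary.

```python
# Minimal feedforward net trained with backpropagation on synthetic
# "EEG power spectrum" features (illustrative, not the authors' data).
import numpy as np

rng = np.random.default_rng(0)

# Two classes of feature vectors with shifted means standing in for
# high- vs low-vigilance spectra.
n, d = 200, 8
X = np.vstack([rng.normal(0.5, 1.0, (n, d)),
               rng.normal(-0.5, 1.0, (n, d))])
y = np.concatenate([np.ones(n), np.zeros(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One tanh hidden layer, sigmoid output, full-batch gradient descent
# on mean cross-entropy.
W1 = rng.normal(0, 0.1, (d, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16);      b2 = 0.0
lr = 1.0
for _ in range(1000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    p = sigmoid(h @ W2 + b2)
    g = (p - y) / len(y)                # d(loss)/d(logit)
    gW2 = h.T @ g; gb2 = g.sum()
    gh = np.outer(g, W2) * (1 - h**2)   # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On this well-separated synthetic data the network reaches high training accuracy; the paper's contribution is showing the same kind of classifier works on real ERP and EEG segments as short as a couple of minutes.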
- North America > United States > California > San Diego County > San Diego (0.05)
- North America > United States > New York (0.04)
- Europe > Italy (0.04)
- Transportation > Infrastructure & Services (0.54)
- Transportation > Air (0.54)
- Health & Medicine > Therapeutic Area (0.49)