Risk-graded Safety for Handling Medical Queries in Conversational AI

Abercrombie, Gavin, Rieser, Verena

arXiv.org Artificial Intelligence

Conversational AI systems can engage in unsafe behaviour when handling users' medical queries, with consequences that can be severe and could even lead to deaths. Systems therefore need to be capable of both recognising the seriousness of medical inputs and producing responses with appropriate levels of risk. We create a corpus of human-written, English-language medical queries and the responses of different types of systems, and label these with both crowdsourced and expert annotations. While individual crowdworkers may be unreliable at grading the seriousness of the prompts, their aggregated labels tend to agree with professional opinion to a greater extent when it comes to identifying the medical queries and recognising the risk types posed by the responses. Results of classification experiments suggest that, while these tasks can be automated, caution should be exercised, as errors can potentially be very serious.


Top 7 Countries Using Artificial Intelligence to Address Climate Concerns

#artificialintelligence

At present, slowing down climate change should be everyone's priority: if the matter is ignored any further, people might face crises more extensive than those experienced during the global COVID-19 pandemic. Climate change is undoubtedly the biggest challenge the world faces right now, and it will need every possible solution, including technology like artificial intelligence, which is capable of finding solutions faster and could effectively power climate change strategy. As per the climate update issued by the World Meteorological Organization (WMO), there is a 40% chance of the annual average global temperature temporarily reaching 1.5°C above the pre-industrial level in the next couple of years.


How To Measure ML Model Accuracy

#artificialintelligence

Machine learning (ML) is about making predictions about new data based on old data. The quality of any machine-learning algorithm is ultimately determined by the quality of those predictions. However, there is no one universal way to measure that quality across all ML applications, and that has broad implications for the value and usefulness of machine learning. "Every industry, every domain, every application has different care-abouts," said Nick Ni, director of product marketing, AI and software at Xilinx. "And you have to measure that care-about." Classification is the most familiar application, and "accuracy" is the measure used for it. But even so, there remain disagreements about exactly how accuracy should be measured or what it should mean. With other applications, it's much less clear how to measure the quality of results.
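The disagreement the article describes is easy to demonstrate. The sketch below (a hypothetical illustration, not taken from the article) shows why plain accuracy can be a misleading "care-about" on imbalanced data: a classifier that never predicts the rare class scores 95% accuracy while missing every positive case.

```python
# Hypothetical illustration: on imbalanced data, "accuracy" alone can
# hide poor performance on the rare class.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if (tp + fn) else 0.0

# 95 negatives, 5 positives; a degenerate classifier that always predicts 0
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy(y_true, y_pred))  # 0.95 -- looks excellent
print(recall(y_true, y_pred))    # 0.0  -- misses every positive case
```

This is why domains with rare but costly events tend to report recall, precision, or cost-weighted measures rather than accuracy alone.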


Local gov'ts plan to use AI to rate seriousness of bullying cases

#artificialintelligence

Nearly 30 local governments across Japan are planning to or interested in introducing an artificial intelligence system designed to assess the seriousness of school bullying cases in hopes of better responding to them, a source close to the matter said Thursday. The Otsu city government, which came under fire for the way it handled a high-profile bullying case in 2011, has teamed up with information technology services provider Hitachi Systems Ltd. to develop the AI system, which predicts how serious a case of bullying has the potential to become based on an analysis of past cases. School bullying has long been a concern in Japan, with education ministry data showing that elementary, junior and senior high schools as well as special-needs schools nationwide reported 612,496 cases in the year through March, up 68,563 from a year earlier. When a new case of bullying is reported, information on the incident, such as time, place and perpetrator, is fed into the system, which then searches its database to come up with an estimate of how serious the case is, expressed as a percentage. In all, about 50 pieces of data are used for analysis.
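The article does not describe Hitachi's actual model, but a "search past cases, output a percentage" workflow can be sketched as nearest-neighbour lookup: encode each case as a feature vector, find the most similar past cases, and report the share of those that later became serious. Everything below (feature names, distance metric, `k`) is an assumption for illustration only.

```python
import math

# Hypothetical sketch (not Hitachi's actual system): estimate how likely
# a new bullying report is to become serious by comparing its features
# with past cases and averaging the outcomes of the k nearest ones.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def severity_estimate(new_case, past_cases, k=3):
    # past_cases: list of (feature_vector, escalated) pairs, where
    # escalated is 1 if the case later became serious, else 0
    ranked = sorted(past_cases, key=lambda c: euclidean(c[0], new_case))
    nearest = ranked[:k]
    # Percentage of the k most similar past cases that escalated
    return 100.0 * sum(outcome for _, outcome in nearest) / k

# Toy features: (hour_of_day, location_code, repeat_incident)
past = [((9, 1, 0), 0), ((15, 2, 1), 1), ((16, 2, 1), 1), ((10, 1, 0), 0)]
print(severity_estimate((15, 2, 1), past, k=3))
```

A real deployment over ~50 attributes would need normalised features and a distance measure that handles categorical data, but the shape of the computation is the same.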


Mei uses AI to improve relationships by analyzing text messages

#artificialintelligence

All it takes is one misinterpreted text to land you in a heap of trouble with a friend, significant other, or colleague. Even serial texters aren't immune -- studies show that most recipients fail to tell the difference between sarcasm and seriousness about 44 percent of the time. That's why Es Lee, a Harvard graduate with a degree in computer science, founded Mei, a mobile messaging startup that leverages machine learning to suss out the subtext of conversations. "One of the difficulties of maintaining relationships through text is that it's [possible] to come across as crass or rude -- even when that was never the intention," Lee told VentureBeat in a phone interview. "Emotion is lost in text messages." Mei, which launched in beta earlier this year, is built on the back of "millions" of messages sourced from the app's more than 100,000 users, data from two universities, and the dev team's own exchanges. Lee claims it's one of the largest datasets of its kind. Using natural language processing and sophisticated algorithms that take into account response time, terseness, word choice, and other factors, Mei builds a psychological profile of your texting partners. It's more nuanced than you might expect; Lee said that it's able to determine the gender and age of a person from nothing more than the types of emoji they use. Add messages to the picture, and Mei can tease out the type of relationship between two people -- and the strength of that relationship. "When you're a 25-year-old woman texting a 40-year-old man, you might think that from the one-word messages he's sending, he's not into you," Lee said. "But our data shows otherwise." In practice, Mei calculates a compatibility percentage, scoring people across five key traits -- openness, emotional control, extraversion, agreeableness, and conscientiousness -- and breaking each into subscores (e.g., "self-focused," "contrary," "respectful").
It also highlights the top characteristics they share, like "proudness" and "seriousness." It's much more personalized than the feedback most relationship apps are able to provide, Lee said. AI chatbots like NTT Resonant's Oshi-el are trained on common questions and answers, but Mei promises to take each of your interactions into account. "Our idea is to use aggregated data to improve relationships with people."
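Mei's real model is proprietary, but the pipeline the article describes -- turn message statistics like terseness and punctuation into a per-person profile, then compare two profiles as a percentage -- can be sketched in a few lines. The feature choices and scoring formula here are illustrative assumptions, not Mei's actual algorithm.

```python
# Hypothetical sketch (Mei's real model is proprietary): derive crude
# texting-style features and compare two people's profiles as a
# compatibility percentage.
def style_features(messages):
    words = [w for m in messages for w in m.split()]
    avg_len = len(words) / len(messages)                     # terseness proxy
    exclaims = sum(m.count("!") for m in messages) / len(messages)
    questions = sum(m.count("?") for m in messages) / len(messages)
    return [avg_len, exclaims, questions]

def compatibility(feats_a, feats_b):
    # 100% when the profiles match exactly; decreases with distance
    diff = sum(abs(a - b) for a, b in zip(feats_a, feats_b))
    total = sum(abs(a) + abs(b) for a, b in zip(feats_a, feats_b)) or 1.0
    return 100.0 * (1.0 - diff / total)

alice = style_features(["hey!! how are you?", "ok", "see you soon!"])
bob = style_features(["hello", "fine thanks", "later"])
print(round(compatibility(alice, bob), 1))
```

A production system would replace these surface counts with learned representations of word choice and response timing, but the compare-two-profiles step works the same way.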


Algorithm can detect when you're distracted while driving

Daily Mail - Science & tech

While texting and driving is illegal, many people are still tempted to send a quick message while behind the wheel. But a new AI has been designed that could stop you from doing so. The incredible system can accurately determine when drivers are distracted at the wheel, and could one day be used to develop protective measures. The researchers trained an algorithm using machine learning to recognise actions such as texting, talking on the phone or reaching into the backseat to get something.