Artificial Intelligence in the Pharma Industry: Clinical Trials

#artificialintelligence

Artificial intelligence has played an increasingly important role in the pharmaceutical space, especially with the recent restrictions due to COVID-19. The drug development process can be lengthy and costly, so many companies have begun implementing AI in their clinical trials to speed up on-site patient visits, test efficacy, and bring more drugs to market. As we discussed previously, AI has played an important role in the discovery process. Now let's take a look at AI in clinical trials. According to Grand View Research, Inc. (as reported by PRNewswire), the global virtual clinical trials market is expected to reach 11.5 billion USD by 2028, growing at a compound annual growth rate (CAGR) of 5.7% from 2021 to 2028. The growth of the virtual clinical trial space is directly related to the need for greater patient diversity and the increase in decentralized/virtual trials driven by COVID-19.
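As a quick sanity check on those figures, a CAGR projection is plain compound growth. The sketch below, assuming the 5.7% rate compounds over the seven years from 2021 to 2028, recovers the implied 2021 base from the reported 2028 value; the numbers are only the ones quoted above.

```python
# Sanity-check the reported projection: a 5.7% compound annual growth
# rate (CAGR) from 2021 to 2028 spans 7 compounding years.
CAGR = 0.057
YEARS = 7          # 2021 -> 2028
value_2028 = 11.5  # billion USD, as reported

# Implied 2021 base: V_2021 = V_2028 / (1 + CAGR)^YEARS
value_2021 = value_2028 / (1 + CAGR) ** YEARS
print(f"Implied 2021 market size: {value_2021:.1f}B USD")  # ~7.8B
```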


Conversational AI startup Cognigy nabs $44M

#artificialintelligence

Conversational AI startup Cognigy today announced that it has closed a $44 million series B funding round led by Insight Partners, bringing the company's total raised to over $50 million to date. Cofounder and CEO Philipp Heltewig says that the proceeds will be put toward accelerating customer growth, creating new partnerships, and continuing to enhance Cognigy's AI platform. The ubiquity of smartphones and messaging apps, along with the pandemic, has contributed to the increased adoption of conversational technologies. Fifty-six percent of companies told Accenture in a survey that conversational bots and other experiences are driving disruption in their industry, and a Twilio study showed that 9 out of 10 consumers would like the option to use messaging to contact a business.


Meta-evaluation of Conversational Search Evaluation Metrics

arXiv.org Artificial Intelligence

Conversational search systems, such as Google Assistant and Microsoft Cortana, enable users to interact with search systems over multiple rounds of natural language dialogue. Evaluating such systems is very challenging given that any natural language response could be generated, and users commonly interact for multiple semantically coherent rounds to accomplish a search task. Although prior studies have proposed many evaluation metrics, the extent to which those measures capture user preference remains to be investigated. In this paper, we systematically meta-evaluate a variety of conversational search metrics. We specifically study three perspectives on those metrics: (1) reliability: the ability to detect "actual" performance differences as opposed to those observed by chance; (2) fidelity: the ability to agree with ultimate user preference; and (3) intuitiveness: the ability to capture any property deemed important: adequacy, informativeness, and fluency in the context of conversational search. By conducting experiments on two test collections, we find that the performance of different metrics varies significantly across scenarios; consistent with prior studies, existing metrics achieve only a weak correlation with ultimate user preference and satisfaction. METEOR is, comparatively speaking, the best existing single-turn metric across all three perspectives. We also demonstrate that adapted session-based evaluation metrics can be used to measure multi-turn conversational search, achieving moderate concordance with user satisfaction. To our knowledge, this work establishes the most comprehensive meta-evaluation of conversational search metrics to date.
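For readers unfamiliar with the single-turn metric the paper singles out, the snippet below shows how a METEOR score can be computed for one system response using NLTK. It is a minimal illustration only: the reference answer, the response, and the scoring setup are assumptions, not the paper's actual evaluation pipeline or test collections.

```python
# Minimal sketch: scoring one conversational-search response with METEOR
# via NLTK. Illustrative only; not the paper's evaluation pipeline.
import nltk
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)   # METEOR uses WordNet for synonym matching
nltk.download("omw-1.4", quiet=True)

# Hypothetical reference answer and system response (pre-tokenized,
# as required by recent NLTK versions).
reference = "the train to boston leaves at 6 pm from south station".split()
response = "the boston train departs south station at 6 pm".split()

# meteor_score takes a list of tokenized references and one tokenized hypothesis.
score = meteor_score([reference], response)
print(f"METEOR: {score:.3f}")
```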


Knowledge Triggering, Extraction and Storage via Human-Robot Verbal Interaction

arXiv.org Artificial Intelligence

This article describes a novel approach to expanding the knowledge base of an artificial conversational agent at run time. A technique for automatically extracting knowledge from the user's sentence, together with four methods for inserting the newly acquired concepts into the knowledge base, has been developed and integrated into a system that has already been tested for knowledge-based conversation between a social humanoid robot and care-home residents. Adding new knowledge at run time helps overcome a limitation that affects most robots and chatbots: the inability to engage the user for long because of a restricted number of conversation topics. Inserting concepts recognized in the user's sentence into the knowledge base is expected to widen the range of topics that can be covered during an interaction, making the conversation less repetitive. Two experiments are presented to assess the performance of the knowledge extraction technique and the efficiency of the developed insertion methods when adding several concepts to the Ontology.
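The abstract does not spell out the extraction grammar or the four insertion methods, so the sketch below only illustrates the general idea: mine candidate concepts from a user utterance (here with spaCy noun chunks, an assumed stand-in for the paper's technique) and insert any new ones into a toy dict-based taxonomy standing in for the Ontology.

```python
# Minimal sketch of run-time knowledge extraction and insertion.
# spaCy and the toy taxonomy are illustrative assumptions, not the
# paper's actual extraction technique or Ontology.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

# Toy knowledge base: concept -> set of child concepts.
knowledge_base = {"food": {"pasta"}, "hobby": {"gardening"}}

def extract_concepts(sentence: str) -> list[str]:
    """Return candidate concepts (noun-chunk head lemmas) from the user's sentence."""
    doc = nlp(sentence)
    return [chunk.root.lemma_.lower() for chunk in doc.noun_chunks]

def insert_concept(parent: str, concept: str) -> bool:
    """Insert a newly acquired concept under an existing parent; skip duplicates."""
    children = knowledge_base.setdefault(parent, set())
    if concept in children:
        return False
    children.add(concept)
    return True

# Example: the agent hears about a new food and hobby and stores them
# so later conversation can cover these topics.
for concept in extract_concepts("I really love lasagna and old movies"):
    if concept != "i":  # drop pronoun chunks
        insert_concept("food" if concept == "lasagna" else "hobby", concept)

print(knowledge_base)
```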


GupShup: An Annotated Corpus for Abstractive Summarization of Open-Domain Code-Switched Conversations

arXiv.org Artificial Intelligence

Code-switching is the communication phenomenon in which speakers switch between different languages during a conversation. With the widespread adoption of conversational agents and chat platforms, code-switching has become an integral part of written conversations in many multilingual communities worldwide. This makes it essential to develop techniques for summarizing and understanding these conversations. Toward this objective, we introduce abstractive summarization of Hindi-English code-switched conversations and develop the first code-switched conversation summarization dataset, GupShup, which contains 6,831 conversations in Hindi-English and their corresponding human-annotated summaries in English and Hindi-English. We present a detailed account of the entire data collection and annotation process. We analyze the dataset using various code-switching statistics. We train state-of-the-art abstractive summarization models and report their performance using both automated metrics and human evaluation. Our results show that multilingual mBART and multi-view seq2seq models obtain the best performance on the new dataset.
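The training setup is not detailed in this summary, so the sketch below only illustrates the general recipe: one supervised fine-tuning step of an mBART checkpoint on a conversation-summary pair via the Hugging Face transformers library. The checkpoint name, example dialogue, and language-code choices are assumptions, not GupShup's actual configuration.

```python
# Minimal sketch of one fine-tuning step for mBART on a code-switched
# conversation -> English summary pair. Hyperparameters, checkpoint, and
# the example pair are illustrative, not the paper's setup.
from transformers import MBartForConditionalGeneration, MBartTokenizer

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
# Romanized Hindi-English has no dedicated mBART language code; en_XX is
# used here as a pragmatic assumption.
tokenizer = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-cc25", src_lang="en_XX", tgt_lang="en_XX"
)

# One hypothetical Hindi-English chat and its English reference summary.
conversation = "Rohan: movie dekhne chalein tonight? | Asha: sure, 8 baje works for me"
summary = "Rohan and Asha agree to watch a movie at 8 tonight."

inputs = tokenizer(conversation, return_tensors="pt", truncation=True)
labels = tokenizer(text_target=summary, return_tensors="pt", truncation=True).input_ids

# Seq2seq cross-entropy loss for one step; in practice this runs inside
# a Trainer loop over the full dataset.
loss = model(**inputs, labels=labels).loss
loss.backward()
print(f"loss: {loss.item():.3f}")
```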


Why is AI mostly presented as female in pop culture and demos?

#artificialintelligence

With the proliferation of female robots such as Sophia and the popularity of female virtual assistants such as Siri (Apple), Alexa (Amazon), and Cortana (Microsoft), artificial intelligence seems to have a gender issue. This gender imbalance in AI is a pervasive trend that has drawn sharp criticism in the media (even UNESCO has warned against the dangers of this practice) because it risks reinforcing stereotypes that objectify women. But why is femininity injected into artificially intelligent objects? If we want to curb the widespread use of female gendering in AI, we need to better understand the deep roots of this phenomenon. In an article published in the journal Psychology & Marketing, we argue that research on what makes people human offers a new perspective on why feminization is systematically used in AI.


New Apple TV box will be able to watch you too thanks to built-in camera, reports say

The Independent - Tech

Apple is working on a new product that combines its HomePod smart speaker with the Apple TV. The upcoming device would include a camera for video conferencing and control over smart home equipment, Bloomberg reported, citing people familiar with Apple's internal developments. It would also offer the same functionality as Apple's existing products: streaming video and audio, gaming capabilities, and Siri support. Apple is also said to be working on another product that would combine an iPad with a HomePod speaker. Amazon and Google have recently released similar products, such as the Echo Show 10 and the Google Nest Hub.


UW scientists turn Amazon's Alexa into heart monitoring device using sound waves

University of Washington Computer Science

Researchers at the University of Washington have figured out a way to use machine-learning algorithms to turn smart speakers into sensitive medical devices that can detect irregular heartbeats. The scientists use smart speakers like the Amazon Echo or Google Home to send out an inaudible sound that bounces off a person's chest and returns to the device, reshaped in a way that reveals the heartbeat. An uneven cardiac rhythm can be associated with ailments including stroke and sleep apnea. The researchers employed a machine-learning algorithm to tease out the heartbeats from other sounds and signals such as breathing, which is easier to detect because it involves a much larger motion. The algorithm was also needed to zero in on erratic heart rhythms, which from a health perspective are generally more important to identify than a steady "lub-dub."
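The article does not disclose the team's actual algorithm, but the core signal-processing intuition can be sketched: once the reflected sound has been demodulated into a chest-displacement trace, breathing and cardiac motion occupy different frequency bands, so a bandpass filter can suppress the large breathing motion and expose the heartbeat. Everything below (sampling rate, band edges, simulated signal) is an illustrative assumption, not the UW system.

```python
# Minimal sketch: separating heartbeat from breathing in a simulated
# chest-displacement signal. Breathing sits around 0.1-0.5 Hz, heartbeats
# around 0.8-2.5 Hz, so a bandpass filter can isolate cardiac motion.
# Illustrative only; not the UW team's actual algorithm.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

np.random.seed(0)
FS = 100  # Hz, assumed sampling rate of the demodulated displacement trace
t = np.arange(0, 30, 1 / FS)

# Simulated chest motion: large slow breathing + tiny faster heartbeat + noise.
breathing = 5.0 * np.sin(2 * np.pi * 0.25 * t)   # 15 breaths/min
heartbeat = 0.2 * np.sin(2 * np.pi * 1.2 * t)    # 72 beats/min
displacement = breathing + heartbeat + 0.05 * np.random.randn(t.size)

# Bandpass 0.8-2.5 Hz to suppress breathing and keep cardiac motion.
b, a = butter(4, [0.8, 2.5], btype="bandpass", fs=FS)
cardiac = filtfilt(b, a, displacement)

# Estimate heart rate from inter-beat intervals between filtered peaks.
peaks, _ = find_peaks(cardiac, distance=FS * 0.4)  # peaks >= 0.4 s apart
bpm = 60 / np.mean(np.diff(peaks) / FS)
print(f"Estimated heart rate: {bpm:.0f} bpm")      # ~72
```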


Researchers develop system for smart speakers like Amazon Echo to monitor heartbeats

USATODAY - Tech Top Stories

You might soon have a new use for your Amazon Echo, Google Home, or other smart speaker: checking your heart for irregular rhythms. Researchers at the University of Washington have developed an artificial intelligence system that uses smart speakers to monitor your heartbeat without requiring physical contact. Their findings were published in the peer-reviewed journal Communications Biology. In the study, people sat 1 to 2 feet from a smart speaker, which played an inaudible continuous sound. The sound bounced off the person and back to the speaker, where the AI system detected individual heartbeats.


AI system can measure heart rhythms using smart speakers

#artificialintelligence

Amazon's Echo and other smart speakers like the Google Home could be used to monitor the rhythm of a person's heart. Academics created an AI-powered system that monitors regular and irregular heartbeats using the same hardware found in smart speakers. The prototype, which was built in a lab but could be incorporated into consumer speakers in the future, was found to be almost as accurate as medical devices in hospitals. The search for heartbeats begins when a person sits within one to two feet of the smart speaker. The system then plays an inaudible continuous sound, which bounces off the person and returns to the speaker.