
Can You Code Empathy? with Pascale Fung

#artificialintelligence

ANJA KASPERSEN: Today I am very pleased to be joined by Pascale Fung. Pascale is a professor in the Department of Electronic and Computer Engineering and the Department of Computer Science and Engineering at The Hong Kong University of Science and Technology. She is known globally for her pioneering work on conversational artificial intelligence (AI) and computational linguistics, and she was one of the earliest proponents of statistical and machine-learning approaches to natural language processing (NLP). She is now leading groundbreaking research on how to build intelligent systems that can understand and empathize with humans. I have really been looking forward to this conversation with you. Your professional accolades are many, most of which we will touch on during our conversation. However, for our listeners to get to know you a bit better, I would like us to go back to your upbringing during what I understand to be a very tenuous political period in China. I was born, spent my childhood, ...


Conversational Agents: Theory and Applications

arXiv.org Artificial Intelligence

In this chapter, we provide a review of conversational agents (CAs), discussing chatbots, intended for casual conversation with a user, as well as task-oriented agents that generally engage in discussions intended to reach one or several specific goals, often (but not always) within a specific domain. We also consider the concept of embodied conversational agents, briefly reviewing aspects such as character animation and speech processing. The many different approaches for representing dialogue in CAs are discussed in some detail, along with methods for evaluating such agents, emphasizing the important topics of accountability and interpretability. A brief historical overview is given, followed by an extensive overview of various applications, especially in the fields of health and education. We end the chapter by discussing benefits and potential risks regarding the societal impact of current and future CA technology.


Geometry- and Accuracy-Preserving Random Forest Proximities

arXiv.org Machine Learning

Abstract: Random forests are considered one of the best out-of-the-box classification and regression algorithms due to their high level of predictive performance with relatively little tuning. Pairwise proximities can be computed from a trained random forest, measuring the similarity between data points relative to the supervised task. Random forest proximities have been used in many applications, including the identification of variable importance, data imputation, outlier detection, and data visualization. However, existing definitions of random forest proximities do not accurately reflect the data geometry learned by the random forest. In this paper, we introduce a novel definition of random forest proximities called Random Forest-Geometry- and Accuracy-Preserving proximities (RF-GAP). We prove that the proximity-weighted sum (regression) or majority vote (classification) using RF-GAP exactly matches the out-of-bag random forest prediction, thus capturing the data geometry learned by the random forest. We empirically show that this improved geometric representation outperforms traditional random forest proximities in tasks such as data imputation and provides outlier detection and visualization results consistent with the learned data geometry. Random forests [1] are well-known, powerful predictors comprised of an ensemble of binary recursive decision trees; they are easily adapted for both classification and regression and are trivially parallelizable. The random forest proximity between two observations was first defined by Leo Breiman as the proportion of trees in which the observations reside in the same terminal node [16].
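As a point of reference, the traditional Breiman-style proximity described above (the proportion of trees in which two observations share a terminal node) can be computed directly from a fitted scikit-learn forest. The sketch below illustrates only that classic definition, not the RF-GAP proximities introduced in the paper; the iris dataset and forest settings are arbitrary assumptions.

```python
# Minimal sketch of the traditional Breiman-style proximity:
# the proportion of trees in which two observations share a terminal node.
# This is NOT the RF-GAP definition from the paper; the dataset and
# hyperparameters are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

leaves = rf.apply(X)  # (n_samples, n_trees) terminal-node indices
# prox[i, j] = fraction of trees in which samples i and j land in the same leaf
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

print(prox.shape)   # (150, 150)
print(prox[0, :5])  # similarity of sample 0 to the first five samples
```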


Meet Sri Lankan Researcher -- Jayakody Kankanamalage Chamani Shiranthika

#artificialintelligence

What are you currently working on, or what have you worked on before? I have worked on international research projects in artificial intelligence. My main research area is reinforcement learning. Apart from that, I have engaged in machine-learning research projects on personalized recommendations, cancer chemotherapy treatment, frailty analysis, analysis of cancer patients' survival rates, etc. Other areas I have worked in include the travel industry, the Internet, the Internet of Things, air pollution, behavioral sciences computing, convolutional neural nets, environmental factors, health care, human-computer interaction, recommender systems, recurrent neural nets, sentiment analysis, social networking (online), time series, unsupervised learning, etc. I am seeking research collaboration opportunities, academic positions, and industrial AI events worldwide, and would love to work on collaborative projects.


AGMI: Attention-Guided Multi-omics Integration for Drug Response Prediction with Graph Neural Networks

arXiv.org Artificial Intelligence

Accurate drug response prediction (DRP) is a crucial yet challenging task in precision medicine. This paper presents a novel Attention-Guided Multi-omics Integration (AGMI) approach for DRP, which first constructs a Multi-edge Graph (MeG) for each cell line and then aggregates multi-omics features to predict drug response using a novel structure called the Graph edge-aware Network (GeNet). For the first time, our AGMI approach explores gene-constraint-based multi-omics integration for DRP over the whole genome using GNNs. Empirical experiments on the CCLE and GDSC datasets show that AGMI largely outperforms state-of-the-art DRP methods by 8.3%–34.2% on four metrics. Our data and code are available at https://github.com/yivan-WYYGDSG/AGMI.
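To make the multi-edge idea more concrete, here is a minimal, hedged sketch of message passing over a graph whose nodes (e.g., genes) are connected by several edge types at once. It is not the authors' GeNet architecture and omits the attention mechanism; the feature dimensions, relation count, and dense placeholder adjacency matrices are assumptions made purely for illustration.

```python
# Illustrative multi-edge message passing in plain PyTorch.
# NOT the AGMI/GeNet implementation; shapes and the simple sum
# aggregation over edge types are assumptions for this sketch.
import torch
import torch.nn as nn

class MultiEdgeConv(nn.Module):
    def __init__(self, in_dim, out_dim, num_edge_types):
        super().__init__()
        # one linear transform per edge type, plus a self transform
        self.rel_lins = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(num_edge_types)
        )
        self.self_lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adjs):
        # x: (num_nodes, in_dim) node features (e.g., multi-omics per gene)
        # adjs: one (num_nodes, num_nodes) adjacency matrix per edge type
        out = self.self_lin(x)
        for adj, lin in zip(adjs, self.rel_lins):
            out = out + adj @ lin(x)  # aggregate neighbours for this edge type
        return torch.relu(out)

# Toy usage: 5 genes, 8-dim features, 3 edge types (placeholder adjacencies).
x = torch.randn(5, 8)
adjs = [torch.eye(5) for _ in range(3)]
layer = MultiEdgeConv(8, 16, num_edge_types=3)
print(layer(x, adjs).shape)  # torch.Size([5, 16])
```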


Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and of our lives. It is considered the new electricity that is revolutionizing the world. AI attracts heavy investment in both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the industry has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI, its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common AI representations, methods, and machine-learning techniques are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes exposure to the results and applications of our own research. Emotions are central to human intelligence, but little use has been made of them in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, a summary is made of the current state of AI and what to do in the future. In the appendix, we look at the development of AI education, especially from the perspective of the contents at our own university.


Explain It To Me: Confusion Matrix

#artificialintelligence

You can refer to the documentation if you want to learn more. Through this article, I hope you have gained a basic understanding of the confusion matrix and the key metrics for classification tasks. Remember, never stop learning and stay awesome!
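For readers who want a concrete starting point, a confusion matrix and the usual derived metrics can be produced with scikit-learn in a few lines; the labels below are invented purely for illustration.

```python
# Hedged example: confusion matrix and derived metrics with scikit-learn.
# y_true / y_pred are made-up toy labels for illustration only.
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

cm = confusion_matrix(y_true, y_pred)  # rows = true class, columns = predicted class
tn, fp, fn, tp = cm.ravel()            # binary case: TN, FP, FN, TP

print(cm)
print("precision:", precision_score(y_true, y_pred))  # tp / (tp + fp)
print("recall:   ", recall_score(y_true, y_pred))     # tp / (tp + fn)
print("f1:       ", f1_score(y_true, y_pred))
```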


Artificial Intelligence -- Application in Life Sciences and Beyond. The Upper Rhine Artificial Intelligence Symposium UR-AI 2021

arXiv.org Artificial Intelligence

The TriRhenaTech alliance presents the accepted papers of the 'Upper-Rhine Artificial Intelligence Symposium' held on October 27th, 2021 in Kaiserslautern, Germany. Topics of the conference are applications of artificial intelligence in life sciences, intelligent systems, Industry 4.0, mobility, and others. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, Offenburg, and Trier, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprising 14 'grandes écoles' in the fields of engineering, architecture, and management), and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.


The company working to build a cancer drug with AI is opening a lab in Israel

#artificialintelligence

The U.S. startup company DeepCure, which works to develop medications with the help of artificial intelligence, said this week that it will be opening a lab and offices in Israel for the first time. DeepCure is part of an emerging wave of companies seeking to improve and accelerate the drug-development process with tools like machine learning and AI. It was formed in 2018 by CEO Kfir Schreiber, alongside Joseph Jacobson and Thrasyvoulos (Thras) Karydis, who today serve as chief science officer and chief technology officer, respectively. The three met as students at the Massachusetts Institute of Technology. The company is developing small-molecule drugs, in other words, medicines generally sold in capsule form, as opposed to antibody-based biological therapies given as an infusion, for example. DeepCure currently has five development programs underway for therapies against cancer, inflammatory diseases, and nervous system diseases.


Use of machine learning in geriatric clinical care for chronic diseases: a systematic literature review

arXiv.org Artificial Intelligence

Objectives: Geriatric clinical care is a multidisciplinary assessment designed to evaluate older patients' (age 65 years and above) functional ability, physical health, and cognitive wellbeing. The majority of these patients suffer from multiple chronic conditions and require special attention. Recently, hospitals have been utilizing various artificial intelligence (AI) systems to improve care for elderly patients. The purpose of this systematic literature review is to understand the current use of AI systems, particularly machine learning (ML), in geriatric clinical care for chronic diseases. Materials and Methods: We restricted our search to eight databases, namely PubMed, WorldCat, MEDLINE, ProQuest, ScienceDirect, SpringerLink, Wiley, and ERIC, to analyze research articles published in English between January 2010 and June 2019. We focused on studies that used ML algorithms in the care of geriatric patients with chronic conditions. Results: We identified 35 eligible studies and classified them into three groups: psychological disorders (n=22), eye diseases (n=6), and others (n=7). This review identified a lack of standardized ML evaluation metrics and a need for data governance specific to health care applications. Conclusion: More studies and ML standardization tailored to health care applications are required to confirm whether ML can aid in improving geriatric clinical care.