The University of Helsinki is an international scientific community of 40,000 students and researchers. It is one of the leading multidisciplinary research universities in Europe and ranks among the top 100 international universities in the world. We are an equal opportunity employer and offer an attractive and diverse workplace in an inspiring environment with a variety of development opportunities and benefits. As a part of the Faculty of Science, the Department of Computer Science (https://www.helsinki.fi/en/computer-science) is a leading unit in Finland in its area and responsible for the teaching and research in computer science at the University of Helsinki. The number of professors at the Department has grown in recent years and there are now 29 professorships.
Prof. Hima Lakkaraju and Prof. Marinka Zitnik invite applications for a Postdoctoral Research Fellowship position at Harvard University starting in the Summer or Fall of 2020. The selected candidate will be expected to lead research on novel machine learning methods to combat COVID-19. More specifically, this fellowship will focus on leveraging recent advances in explainable and interpretable AI/ML to help with the diagnosis and treatment of COVID-19. For instance, the candidate will develop explainable methods that not only facilitate early detection of COVID-19 and of its spread across various communities, but also provide interpretable insights into these phenomena. In addition, the candidate will devise novel explainable algorithms that can detect and filter out misinformation about COVID-19.
Explainable Recommendation refers to personalized recommendation algorithms that address the problem of why -- they not only provide the user with recommendations, but also make the user aware of why such items are recommended by generating recommendation explanations, which help to improve the effectiveness, efficiency, persuasiveness, and user satisfaction of recommender systems. In recent years, a large number of explainable recommendation approaches -- especially model-based explainable recommendation algorithms -- have been proposed and adopted in real-world systems. In this survey, we review the work on explainable recommendation that was published in or before 2018. We first highlight the position of explainable recommendation in recommender system research by categorizing recommendation problems into the 5W, i.e., what, when, who, where, and why. We then conduct a comprehensive survey of explainable recommendation itself in terms of three aspects: 1) We provide a chronological research line of explanations in recommender systems, including the user study approaches of the early years, as well as the more recent model-based approaches. 2) We provide a taxonomy for explainable recommendation algorithms, including user-based, item-based, model-based, and post-model explanations. 3) We summarize the application of explainable recommendation to different recommendation tasks, including product recommendation, social recommendation, POI recommendation, etc. We devote a chapter to discussing the explanation perspectives in the broader IR and machine learning settings, as well as their relationship with explainable recommendation research. We end the survey by discussing potential future research directions to promote the explainable recommendation research area.
We are the Department of Data Science and Knowledge Engineering (DKE) at Maastricht University, the Netherlands: an international community of 50 researchers at various stages of their careers, embedded in the Faculty of Science and Engineering (FSE). Our department has nearly 30 years' experience with research and teaching in the fields of Artificial Intelligence, Computer Science and Mathematics, which we pursue in a highly collaborative and cross-disciplinary manner. To strengthen our team, we are looking for a full professor who will work on AI systems that are able to explain the decisions and actions they recommend or take in a human-understandable way. Our department is growing rapidly. This position is one of multiple job openings: you are more than welcome to browse through our other vacancies.
As we face a future in which important decisions affecting the course of our lives may be made by artificial intelligence (AI), the notion that we should understand how AIs make decisions is gaining increasing currency. Which hill to position a 20-year-old soldier on, who gets (or does not get) a home mortgage, which treatment a cancer patient receives … such decisions, and many more, already are being made based on an often unverifiable technology. "The problem is that not all AI approaches are created equal," says Jeff Nicholson, a vice president at Pega Systems Inc., makers of AI-based Customer Relationship Management (CRM) software. "Certain 'black box' approaches to AI are opaque and simply cannot be explained."