Communications: Overviews


AI remains priority for CEOs, according to new Gartner survey

#artificialintelligence

For the third year running, AI is the top priority for CEOs, according to a survey of CEOs and senior executives released by Gartner on Wednesday. The findings also revealed that the metaverse, which has received a lot of hype in the last year, especially since Facebook's rebranding to Meta, is not as relevant to business leaders: 63% say they do not see the metaverse as a key technology for their organization. It's not a big surprise that AI continues to be on the minds of top business leaders. As TechRepublic reported in June 2021, 97% of senior executives planned to invest heavily in AI. Jobs in AI, which are often high-paying, are also in demand, according to the jobs board Indeed.com.


65 Competencies

Communications of the ACM

Analyzing data is now essential to success in education, employment, and other areas of activity in the knowledge society. Even though several frameworks describe the competencies and skills needed to meet current and future challenges, no data analytics competency framework exists to describe the importance of specific skills to succeed in data analytics assignments.


Dynamic Object Comprehension: A Framework For Evaluating Artificial Visual Perception

arXiv.org Artificial Intelligence

Augmented and Mixed Reality are emerging as likely successors to the mobile internet. However, many technical challenges remain. One of the key requirements of these systems is the ability to create a continuity between physical and virtual worlds, with the user's visual perception as the primary interface medium. Building this continuity requires the system to develop a visual understanding of the physical world. While there has been significant recent progress in computer vision and AI techniques such as image classification and object detection, success in these areas has not yet led to the visual perception required for these critical MR and AR applications. A significant issue is that current evaluation criteria are insufficient for these applications. To motivate and evaluate progress in this emerging area, there is a need for new metrics. In this paper we outline limitations of current evaluation criteria and propose new criteria.


LaMDA: Language Models for Dialog Applications

arXiv.org Artificial Intelligence

We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text. While model scaling alone can improve quality, it yields smaller improvements in safety and factual grounding. We demonstrate that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements on the two key challenges of safety and factual grounding. The first challenge, safety, involves ensuring that the model's responses are consistent with a set of human values, such as preventing harmful suggestions and unfair bias. We quantify safety using a metric based on an illustrative set of human values, and we find that filtering candidate responses using a LaMDA classifier fine-tuned with a small amount of crowdworker-annotated data offers a promising approach to improving model safety. The second challenge, factual grounding, involves enabling the model to consult external knowledge sources, such as an information retrieval system, a language translator, and a calculator. We quantify factuality using a groundedness metric, and we find that our approach enables the model to generate responses grounded in known sources, rather than responses that merely sound plausible. Finally, we explore the use of LaMDA in the domains of education and content recommendations, and analyze its helpfulness and role consistency in these domains.
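
The candidate-filtering step described above can be sketched in a few lines. The snippet below is a hypothetical illustration only, not LaMDA's implementation: the `safety_score` and `quality_score` callables stand in for classifiers fine-tuned on crowdworker-annotated data, and the threshold and fallback response are invented.

```python
from typing import Callable, List

def filter_candidates(
    candidates: List[str],
    safety_score: Callable[[str], float],
    threshold: float = 0.8,
) -> List[str]:
    """Keep only candidate responses whose safety score clears the threshold.

    `safety_score` stands in for a classifier fine-tuned on
    crowdworker-annotated safety labels (hypothetical interface).
    """
    return [c for c in candidates if safety_score(c) >= threshold]

def pick_response(
    candidates: List[str],
    safety_score: Callable[[str], float],
    quality_score: Callable[[str], float],
) -> str:
    """Filter unsafe candidates, then rank the survivors by quality."""
    safe = filter_candidates(candidates, safety_score)
    if not safe:
        return "I'm not able to help with that."  # fallback when nothing passes
    return max(safe, key=quality_score)

if __name__ == "__main__":
    # Toy stand-ins for the generator and the two classifiers.
    candidates = ["Sure, here is some general advice...", "You should just give up."]
    safety = lambda text: 0.1 if "give up" in text else 0.95
    quality = lambda text: len(text) / 100.0
    print(pick_response(candidates, safety, quality))
```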


Conversational Agents: Theory and Applications

arXiv.org Artificial Intelligence

In this chapter, we provide a review of conversational agents (CAs), discussing chatbots, intended for casual conversation with a user, as well as task-oriented agents that generally engage in discussions intended to reach one or several specific goals, often (but not always) within a specific domain. We also consider the concept of embodied conversational agents, briefly reviewing aspects such as character animation and speech processing. The many different approaches for representing dialogue in CAs are discussed in some detail, along with methods for evaluating such agents, emphasizing the important topics of accountability and interpretability. A brief historical overview is given, followed by an extensive overview of various applications, especially in the fields of health and education. We end the chapter by discussing benefits and potential risks regarding the societal impact of current and future CA technology.
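
To make the distinction between chit-chat and task-oriented agents concrete, here is a minimal frame-based (slot-filling) sketch of a task-oriented agent. It is purely illustrative: the booking task, slot names, and keyword-based extraction are invented for this example and are far simpler than the dialogue representations reviewed in the chapter.

```python
from typing import Dict, Optional

# Minimal frame-based (slot-filling) agent for a toy restaurant-booking task.
SLOTS = ("cuisine", "party_size", "time")

def extract_slot(utterance: str) -> Optional[Dict[str, str]]:
    """Crude keyword-based slot extraction standing in for real NLU."""
    words = utterance.lower().split()
    for cuisine in ("italian", "thai", "indian"):
        if cuisine in words:
            return {"cuisine": cuisine}
    for w in words:
        if w.isdigit():
            return {"party_size": w}
        if ":" in w:
            return {"time": w}
    return None

def next_prompt(frame: Dict[str, str]) -> str:
    """Policy: ask for the first missing slot, then confirm the booking."""
    for slot in SLOTS:
        if slot not in frame:
            return f"What {slot.replace('_', ' ')} would you like?"
    return (f"Booking a table for {frame['party_size']} at {frame['time']} "
            f"({frame['cuisine']}). Confirm?")

if __name__ == "__main__":
    frame: Dict[str, str] = {}
    for utterance in ["I'd like Italian food", "4 people", "at 19:30"]:
        update = extract_slot(utterance)
        if update:
            frame.update(update)
        print(next_prompt(frame))
```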


Mental Stress Detection using Data from Wearable and Non-wearable Sensors: A Review

arXiv.org Artificial Intelligence

This paper presents a comprehensive review of significant subjective and objective human stress detection techniques available in the literature. Methods for measuring human stress responses include subjective questionnaires (developed by psychologists) and objective markers derived from wearable and non-wearable sensors. Wearable sensor-based methods commonly use data from electroencephalography, electrocardiogram, galvanic skin response, electromyography, electrodermal activity, heart rate, heart rate variability, and photoplethysmography, both individually and in multimodal fusion strategies. Methods based on non-wearable sensors, in contrast, include strategies such as analyzing pupil dilation and speech, smartphone data, eye movement, body posture, and thermal imaging. Whenever an individual encounters a stressful situation, physiological, physical, or behavioral changes are induced that help in coping with the challenge at hand. A wide range of studies has attempted to establish a relationship between these stressful situations and human responses using different kinds of psychological, physiological, physical, and behavioral measures. Motivated by the absence of a definitive verdict on the relationship between human stress and these different kinds of markers, this paper conducts a detailed survey of human stress detection methods. In particular, we explore how stress detection methods can benefit from artificial intelligence applied to relevant data from various sources. This review serves as a reference document that provides guidelines for future research enabling effective detection of human stress conditions.
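
As a rough illustration of the wearable-sensor, multimodal-fusion strategies mentioned above, the sketch below fuses simple heart-rate-variability and electrodermal-activity features at the feature level and trains a classifier on synthetic data. All signal models, features, and numbers are invented for illustration (numpy and scikit-learn are assumed available) and do not reproduce any method from the surveyed literature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def hrv_features(rr_intervals: np.ndarray) -> np.ndarray:
    """Simple time-domain HRV features: mean RR, SDNN, RMSSD."""
    diffs = np.diff(rr_intervals)
    return np.array([rr_intervals.mean(), rr_intervals.std(),
                     np.sqrt((diffs ** 2).mean())])

def eda_features(eda: np.ndarray) -> np.ndarray:
    """Simple EDA features: mean level and a crude count of local peaks."""
    peaks = np.sum((eda[1:-1] > eda[:-2]) & (eda[1:-1] > eda[2:]))
    return np.array([eda.mean(), float(peaks)])

# Synthetic "recordings": stressed segments get shorter, less variable RR
# intervals and a more oscillatory EDA trace. Purely illustrative data.
X, y = [], []
for label in (0, 1):  # 0 = relaxed, 1 = stressed
    for _ in range(100):
        rr = rng.normal(0.9 - 0.15 * label, 0.05 - 0.02 * label, size=60)
        eda = (np.cumsum(rng.normal(0.0, 0.01, size=300))
               + label * np.sin(np.linspace(0, 20, 300)))
        # Feature-level fusion: concatenate features from both modalities.
        X.append(np.concatenate([hrv_features(rr), eda_features(eda)]))
        y.append(label)

X, y = np.array(X), np.array(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```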


An Empirical Analysis of AI Contributions to Sustainable Cities (SDG11)

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) presents opportunities to develop tools and techniques for addressing some of the major global challenges and to deliver solutions with significant social and economic impact. The application of AI has far-reaching implications for the 17 Sustainable Development Goals (SDGs) in general and for sustainable urban development in particular. However, the opportunities offered by AI for SDG 11 have been explored only sparsely, and empirical evidence about the practical application of AI remains scarce. In this chapter, we analyze the contribution of AI to the progress of SDG 11 (Sustainable Cities and Communities). We address this knowledge gap by empirically analyzing AI systems (N = 29) from the AI SDG database and the Community Research and Development Information Service (CORDIS) database. Our analysis reveals that AI systems have indeed contributed to advancing sustainable cities in several ways (e.g., waste management, air quality monitoring, disaster response management, transportation management), but many projects still work for citizens rather than with them. This snapshot of AI's impact on SDG 11 is inherently partial yet useful for advancing our understanding as we move towards more mature systems and research on the impact of AI systems for the social good.

Introduction: Artificial intelligence (AI) has the potential to mitigate several issues facing cities, such as road safety, waste management, air pollution, and disaster risk reduction (Gupta et al., 2021). Examples of recent AI systems for improved well-being in cities include a tool for semi-automatic digitization of sketch maps to support the inclusion of indigenous communities through the documentation of their land rights (Degbelo et al., 2021; Chipofya et al., 2020), a system for traffic monitoring based on wireless signals (Gupta et al., 2018), approaches for efficient waste management (Barns, 2019), air quality modelling (Gupta et al., 2018), and urban health monitoring systems (Allam and Jones, 2020).


Stop Oversampling for Class Imbalance Learning: A Critical Review

arXiv.org Artificial Intelligence

For the last two decades, oversampling has been employed to overcome the challenge of learning from imbalanced datasets, and many approaches to this challenge have been offered in the literature. Oversampling, however, raises a concern: models trained on fictitious data may fail spectacularly when applied to real-world problems. The fundamental difficulty with oversampling approaches is that, given a real-life population, the synthesized samples may not truly belong to the minority class. As a result, training a classifier on these samples while pretending they represent the minority class may produce incorrect predictions when the model is used in the real world. In this paper, we analyzed a large number of oversampling methods and devised a new evaluation system for oversampling based on hiding a number of majority examples and comparing them to the samples generated by the oversampling process. Using this evaluation system, we ranked these methods according to the number of incorrectly generated examples. Our experiments with more than 70 oversampling methods and three imbalanced real-world datasets reveal that all of the oversampling methods studied generate minority samples that most likely belong to the majority class. Given the data and methods at hand, we argue that oversampling in its current forms and methodologies is unreliable for learning from class-imbalanced data and should be avoided in real-world applications.
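
A minimal sketch of this evaluation idea (hiding majority examples and comparing them to what an oversampler generates) is shown below. It is an assumption-laden illustration rather than the authors' exact protocol: it uses SMOTE from imbalanced-learn as a stand-in oversampler, toy Gaussian data, and a simple nearest-neighbor check of whether synthetic minority samples land closer to hidden majority points than to real minority points.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy imbalanced data: two overlapping Gaussian blobs.
X_maj = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))
X_min = rng.normal(loc=[1.5, 1.5], scale=1.0, size=(50, 2))

# Hide a fraction of the majority class before oversampling.
hidden, visible = X_maj[:200], X_maj[200:]
X = np.vstack([visible, X_min])
y = np.array([0] * len(visible) + [1] * len(X_min))

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
synthetic = X_res[len(X):]  # imblearn appends synthetic minority samples at the end

# For each synthetic sample, is its nearest *real* point a hidden majority
# example or a genuine minority example?
real = np.vstack([hidden, X_min])
labels = np.array([0] * len(hidden) + [1] * len(X_min))
nn = NearestNeighbors(n_neighbors=1).fit(real)
_, idx = nn.kneighbors(synthetic)
frac_near_majority = np.mean(labels[idx.ravel()] == 0)
print(f"synthetic samples closest to hidden majority points: {frac_near_majority:.1%}")
```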


Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge Intelligence

arXiv.org Artificial Intelligence

Mobile edge computing (MEC) is considered a novel paradigm for computation-intensive and delay-sensitive tasks in fifth-generation (5G) networks and beyond. However, its uncertainty, arising from the dynamics and randomness of the mobile device, the wireless channel, and the edge network, results in high-dimensional, nonconvex, nonlinear, and NP-hard optimization problems. Thanks to the evolution of reinforcement learning (RL), an agent trained by iteratively interacting with this dynamic and random environment can intelligently obtain the optimal policy in MEC. Furthermore, evolved versions such as deep RL (DRL) can achieve higher convergence speed and learning accuracy by using parametric approximation for the large-scale state-action space. This paper provides a comprehensive review of RL-enabled MEC and offers insight for development in this area. More importantly, we identify the MEC challenges, associated with free mobility, dynamic channels, and distributed services, that can be solved by different kinds of RL algorithms, and show how RL solutions address them in diverse mobile applications. Finally, the open challenges are discussed to provide helpful guidance for future research on RL training and learning for MEC.
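
To give a flavor of how RL is applied to MEC decisions, the following tabular Q-learning sketch learns a toy offloading policy (execute locally vs. offload to the edge) under a randomly varying channel. The environment, costs, and state space are invented for illustration and are far simpler than the DRL formulations covered in the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MEC offloading environment (hypothetical, for illustration only):
# state  = channel quality in {0: poor, 1: fair, 2: good}
# action = 0: execute locally, 1: offload to the edge server
N_STATES, N_ACTIONS = 3, 2

def step(state, action):
    """Return (reward, next_state). Offloading pays off only on good channels."""
    local_cost = 1.0                      # fixed local execution delay
    offload_cost = 2.0 - 0.7 * state      # transmission delay shrinks as channel improves
    reward = -local_cost if action == 0 else -offload_cost
    next_state = rng.integers(N_STATES)   # channel varies randomly over time
    return reward, next_state

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1         # learning rate, discount, exploration
state = rng.integers(N_STATES)
for _ in range(20_000):
    # Epsilon-greedy action selection.
    action = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[state].argmax())
    reward, next_state = step(state, action)
    # Standard Q-learning update.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

for s, name in enumerate(["poor", "fair", "good"]):
    print(f"channel {name:>4}: best action = {'offload' if Q[s].argmax() == 1 else 'local'}")
```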


Technology Ethics in Action: Critical and Interdisciplinary Perspectives

arXiv.org Artificial Intelligence

This special issue interrogates the meaning and impacts of "tech ethics": the embedding of ethics into digital technology research, development, use, and governance. In response to concerns about the social harms associated with digital technologies, many individuals and institutions have articulated the need for a greater emphasis on ethics in digital technology. Yet as more groups embrace the concept of ethics, critical discourses have emerged questioning whose ethics are being centered, whether "ethics" is the appropriate frame for improving technology, and what it means to develop "ethical" technology in practice. This interdisciplinary issue takes up these questions, interrogating the relationships among ethics, technology, and society in action. This special issue engages with the normative and contested notions of ethics itself, how ethics has been integrated with technology across domains, and potential paths forward to support more just and egalitarian technology. Rather than starting from philosophical theories, the authors in this issue orient their articles around the real-world discourses and impacts of tech ethics--i.e., tech ethics in action.