
Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we don't end up in technology-induced dystopias. As strongly argued by Green in his book The Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on what kinds of cities are worth living in and how to design them. There are philosophical and ethical questions involved, along with various challenges that relate to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally, there are calls for technology to be made more humane and human-compatible. For example, Stuart Russell has written a book titled Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving them. The paper also discusses the current limitations, pitfalls, and future directions of research in these domains, and how future work can fill current gaps and lead to better solutions.


Advancements of federated learning towards privacy preservation: from federated learning to split learning

arXiv.org Artificial Intelligence

In the distributed collaborative machine learning (DCML) paradigm, federated learning (FL) has recently attracted much attention due to its applications in health, finance, and the latest innovations such as Industry 4.0 and smart vehicles. FL provides privacy by design: it trains a machine learning model collaboratively over several distributed clients (ranging from two to millions), such as mobile phones, without sharing their raw data with any other participant. In practical scenarios, however, not all clients have sufficient computing resources (e.g., Internet of Things devices), the machine learning model may have millions of parameters, and model privacy between the server and the clients during training/testing is a prime concern (e.g., among rival parties). In this regard, FL is not sufficient, so split learning (SL) was introduced. SL is reliable in these scenarios as it splits a model into multiple portions, distributes them among the clients and the server, and has each party train/test its respective model portion to accomplish the full model training/testing. In SL, participants share neither their data nor their model portions with any other party, and usually a smaller network portion is assigned to the clients, where the data resides. Recently, a hybrid of FL and SL, called splitfed learning, was introduced to combine the benefits of both FL (faster training/testing time) and SL (model splitting and training). Following the developments from FL to SL, and considering the importance of SL, this chapter is designed to provide extensive coverage of SL and its variants. The coverage includes fundamentals, existing findings, integration with privacy measures such as differential privacy, open problems, and code implementation.
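
To make the split-learning flow described above concrete, here is a minimal sketch in PyTorch (our own illustration, not the chapter's reference implementation): the client holds the lower model portion, the server holds the rest, and only cut-layer activations and their gradients cross the boundary. The layer sizes, optimizer, and data are assumptions for the example.

```python
# Minimal split-learning sketch: client and server each train their own
# model portion; raw data never leaves the client.
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Linear(20, 64), nn.ReLU())                    # client-side portion
server_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))  # server-side portion

opt_c = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(16, 20), torch.randint(0, 2, (16,))  # client's private data

# Client forward pass: only the cut-layer activations ("smashed data") are sent.
smashed = client_net(x)
to_server = smashed.detach().requires_grad_()            # simulated network boundary

# Server completes the forward/backward pass and updates its portion.
loss = loss_fn(server_net(to_server), y)
opt_s.zero_grad()
loss.backward()                                          # also fills to_server.grad
opt_s.step()

# Client finishes backpropagation from the gradient returned at the cut layer.
opt_c.zero_grad()
smashed.backward(to_server.grad)
opt_c.step()
```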


A Survey on the Explainability of Supervised Machine Learning

arXiv.org Machine Learning

Predictions obtained by, e.g., artificial neural networks are often highly accurate, but humans frequently perceive the models as black boxes: insights into the decision making are mostly opaque to humans. Understanding the decision making, particularly in highly sensitive areas such as healthcare or finance, is of paramount importance. The decision making behind these black boxes needs to be more transparent, accountable, and understandable for humans. This survey paper provides essential definitions and an overview of the different principles and methodologies of explainable supervised machine learning (SML). We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions. Finally, we illustrate the principles by means of an explanatory case study and discuss important future directions.
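
As a concrete taste of the model-agnostic, post-hoc explanation methods such a survey covers, the sketch below implements permutation feature importance: shuffle one feature at a time and measure how much the model's accuracy drops. The dataset and model are placeholders of our choosing, not taken from the paper.

```python
# Permutation feature importance: a simple post-hoc explanation of a
# black-box classifier (scored on training data for brevity).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    X_perm = X.copy()
    rng.shuffle(X_perm[:, j])                 # break the link between feature j and the target
    drop = baseline - model.score(X_perm, y)  # accuracy lost without feature j
    print(f"feature {j}: importance {drop:.4f}")
```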


Domain-specific Knowledge Graphs: A survey

arXiv.org Artificial Intelligence

Knowledge Graphs (KGs) have made a qualitative leap and effected a real revolution in knowledge representation. This is driven by the underlying structure of the KG, which underpins better comprehension, reasoning, and interpretation of knowledge for both humans and machines. Therefore, KGs continue to be used as a main driver for tackling a plethora of real-life problems in diverse domains. However, there is no consensus on a plausible and inclusive definition of a domain KG. Further, existing domain KG construction approaches are far from perfect, exhibiting several limitations and deficiencies. This survey is the first to provide an inclusive definition of the notion of a domain KG. It also provides a comprehensive review of state-of-the-art approaches drawn from academic works relevant to seven different domains of knowledge. Scrutiny of the current approaches reveals a correlated array of limitations and deficiencies. A set of improvements to address these limitations is introduced, followed by recommendations and opportunities for future research directions.
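
For readers new to the area, the following toy sketch illustrates the subject-predicate-object triple structure that underpins any (domain) knowledge graph; the entities and relations are invented for illustration and do not come from the survey.

```python
# A tiny knowledge graph as a set of (subject, predicate, object) triples,
# with a basic lookup that supports simple reasoning over the graph.
triples = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("headache", "is_a", "symptom"),
}

def neighbors(entity):
    """Return (predicate, object) pairs for a given subject."""
    return [(p, o) for (s, p, o) in triples if s == entity]

print(neighbors("aspirin"))  # -> [('treats', 'headache'), ('interacts_with', 'warfarin')]
```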


Privacy-Aware Recommender Systems Challenge on Twitter's Home Timeline

arXiv.org Machine Learning

Recommender systems constitute the core engine of most social network platforms nowadays, aiming to maximize user satisfaction along with other key business objectives. Twitter is no exception. Although Twitter data has been extensively used to understand socioeconomic and political phenomena and user behaviour, the implicit feedback provided by users on Tweets through their engagements on the Home Timeline has only been explored to a limited extent. At the same time, there is a lack of large-scale public social network datasets that would enable the scientific community to both benchmark and build more powerful and comprehensive models that tailor content to user interests. By releasing an original dataset of 160 million Tweets along with engagement information, Twitter aims to address exactly that. During this release, special attention is paid to maintaining compliance with existing privacy laws. Apart from user privacy, this paper touches on the key challenges faced by researchers and professionals striving to predict user engagements. It further describes the key aspects of the RecSys 2020 Challenge that was organized by ACM RecSys in partnership with Twitter using this dataset.
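
As a hedged illustration of the engagement-prediction task at the heart of the challenge, the sketch below fits a logistic-regression baseline over a few invented Tweet/user features; the feature schema and values are our own assumptions, not the challenge's actual data format.

```python
# A toy engagement-prediction baseline: predict whether a user engages
# with a Tweet from a handful of hand-crafted features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: follower count (log-scaled), tweet length, has_media flag
X = np.array([[8.1, 120, 1], [3.2, 40, 0], [6.7, 280, 1], [4.0, 15, 0]])
y = np.array([1, 0, 1, 0])  # 1 = engaged (e.g., liked), 0 = no engagement

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[5.5, 90, 1]])[:, 1])  # predicted engagement probability
```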


Artificial Intelligence: Research Impact on Key Industries; the Upper-Rhine Artificial Intelligence Symposium (UR-AI 2020)

arXiv.org Artificial Intelligence

The TriRhenaTech alliance presents a collection of accepted papers of the cancelled tri-national 'Upper-Rhine Artificial Inteeligence Symposium' planned for 13th May 2020 in Karlsruhe. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising of the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, and Offenburg, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprised of 14 'grandes \'ecoles' in the fields of engineering, architecture and management) and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.


A Unifying Review of Deep and Shallow Anomaly Detection

arXiv.org Artificial Intelligence

Deep learning approaches to anomaly detection have recently improved the state of the art in detection performance on complex datasets such as large collections of images or text. These results have sparked a renewed interest in the anomaly detection problem and led to the introduction of a great variety of new methods. With the emergence of numerous such methods, including approaches based on generative models, one-class classification, and reconstruction, there is a growing need to bring methods of this field into a systematic and unified perspective. In this review we aim to identify the common underlying principles as well as the assumptions that are often made implicitly by various methods. In particular, we draw connections between classic 'shallow' and novel deep approaches and show how this relation might cross-fertilize or extend both directions. We further provide an empirical assessment of major existing methods that is enriched by the use of recent explainability techniques, and present specific worked-through examples together with practical advice. Finally, we outline critical open challenges and identify specific paths for future research in anomaly detection.
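
To illustrate the one-class and reconstruction families the review connects, here is a small sketch contrasting a classic 'shallow' One-Class SVM with a reconstruction-error score from PCA on synthetic data; the data, dimensions, and hyperparameters are illustrative assumptions.

```python
# Two shallow anomaly detectors on synthetic 2-D data: a one-class
# boundary and a reconstruction-error score.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, (200, 2))            # nominal data
X_test = np.vstack([rng.normal(0, 1, (5, 2)),   # nominal points
                    [[6.0, 6.0]]])              # one obvious anomaly

# One-class classification: learn a boundary around the nominal data.
ocsvm = OneClassSVM(nu=0.05).fit(X_train)
print(ocsvm.predict(X_test))                    # +1 = inlier, -1 = anomaly

# Reconstruction-based scoring: anomalies reconstruct poorly from a
# low-dimensional (here 1-D PCA) model of the nominal data.
pca = PCA(n_components=1).fit(X_train)
err = ((X_test - pca.inverse_transform(pca.transform(X_test))) ** 2).sum(axis=1)
print(err)                                      # larger error = more anomalous
```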


Future of AI Part 2

#artificialintelligence

This part of the series looks at the future of AI, with much of the focus on the period after 2025. The leading AI researcher Geoff Hinton has stated that it is very hard to predict what advances AI will bring beyond five years, noting that exponential progress makes the uncertainty too great. This article will therefore consider both the opportunities and the challenges that we will face along the way across different sectors of the economy. It is not intended to be exhaustive. AI deals with developing computing systems that are capable of performing tasks humans are very good at, for example recognising objects, recognising and making sense of speech, and decision making in a constrained environment. Some of the classical approaches to AI include (a non-exhaustive list) search algorithms such as Breadth-First Search, Depth-First Search, and Iterative Deepening Search, the A* algorithm, and the field of logic, including predicate calculus and propositional calculus. Local search approaches were also developed, for example Simulated Annealing, Hill Climbing (see also Greedy), Beam Search, and Genetic Algorithms (see below). Machine Learning is defined as the field of AI that applies statistical methods to enable computer systems to learn from data towards an end goal. The term was introduced by Arthur Samuel in 1959. A non-exhaustive list of example techniques includes Linear Regression, Logistic Regression, K-Means, k-Nearest Neighbour (kNN), Naive Bayes, Support Vector Machines (SVM), Decision Trees, Random Forests, XGBoost, Light Gradient Boosting Machine (LightGBM), and CatBoost. Deep Learning refers to the field of neural networks with several hidden layers; such a neural network is often referred to as a deep neural network. Neural networks are biologically inspired networks that extract abstract features from the data in a hierarchical fashion.
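
As a concrete example of one technique from the list above, the sketch below implements k-Nearest Neighbour (kNN) classification from scratch: a point is classified by majority vote among its k closest training points. The toy data is our own assumption.

```python
# k-Nearest Neighbour classification in a few lines.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]  # majority vote

X = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [7.5, 8.2]])
y = np.array(["cat", "cat", "dog", "dog"])
print(knn_predict(X, y, np.array([7.9, 7.8])))    # -> "dog"
```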


Precision Health Data: Requirements, Challenges and Existing Techniques for Data Security and Privacy

arXiv.org Artificial Intelligence

Precision health leverages information from various sources, including omics, lifestyle, environment, social media, medical records, and medical insurance claims, to enable personalized care, prevent and predict illness, and deliver precise treatments. It extensively uses sensing technologies (e.g., electronic health monitoring devices), computation (e.g., machine learning), and communication (e.g., interaction between health data centers). As health data contain sensitive private information, including the identities of the patient and carer and the patient's medical conditions, proper care is required at all times. Leakage of this private information can affect a person's life, leading to bullying, higher insurance premiums, and loss of employment due to medical history. Thus, the security and privacy of, and trust in, this information are of utmost importance. Moreover, government legislation and ethics committees demand the security and privacy of healthcare data. Hence, in light of the security, privacy, ethical, and regulatory requirements on precision health data, finding the best methods and techniques for utilizing health data, and thus enabling precision health, is essential. In this regard, this paper first explores the regulations and ethical guidelines around the world, along with domain-specific needs, and then presents the requirements and investigates the associated challenges. Second, the paper investigates secure and privacy-preserving machine learning methods suitable for the computation of precision health data, along with their usage in relevant health projects. Finally, it illustrates the best available techniques for precision health data security and privacy with a conceptual system model that enables compliance, ethics clearance, consent management, medical innovation, and development in the health domain.
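
As one minimal example of the privacy-preserving techniques surveyed in this space, the sketch below applies the Laplace mechanism from differential privacy to a simple counting query over a hypothetical patient cohort; the epsilon value, cohort, and query are illustrative assumptions, not taken from the paper.

```python
# Laplace mechanism: release a count with calibrated noise so that any
# single patient's record has only a bounded influence on the output.
import numpy as np

def dp_count(values, epsilon=1.0):
    true_count = len(values)  # sensitivity of a counting query is 1
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

records = ["patient_a", "patient_b", "patient_c"]  # hypothetical cohort
print(dp_count(records, epsilon=0.5))              # noisy, privacy-preserving count
```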


Intelligent Radio Signal Processing: A Contemporary Survey

arXiv.org Artificial Intelligence

Intelligent signal processing for wireless communications is a vital task in modern wireless systems, but it faces new challenges because of network heterogeneity, diverse service requirements, a massive number of connections, and varied radio characteristics. Owing to recent advancements in big data and computing technologies, artificial intelligence (AI) has become a useful tool for radio signal processing and has enabled the realization of intelligent radio signal processing. This survey covers four intelligent signal processing topics for the wireless physical layer, namely modulation classification, signal detection, beamforming, and channel estimation. Each theme is presented in a dedicated section, starting with the most fundamental principles, followed by a review of up-to-date studies and a summary. To provide the necessary background, we first present a brief overview of AI techniques such as machine learning, deep learning, and federated learning. Finally, we highlight a number of research challenges and future directions in the area of intelligent radio signal processing. We expect this survey to be a good source of information for anyone interested in intelligent radio signal processing, and the perspectives we provide therein will stimulate many more novel ideas and contributions in the future.
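
To ground the channel-estimation theme, here is a minimal sketch of the classical least-squares estimator that learning-based methods in this area are typically compared against: with known pilot symbols x and received samples y = h·x + n, the estimate is the average of y/x over the pilots. The channel value, pilot constellation, and noise level are illustrative assumptions.

```python
# Least-squares channel estimation from known pilot symbols over a
# flat-fading channel y = h * x + n.
import numpy as np

rng = np.random.default_rng(0)
h = 0.8 + 0.3j                                             # unknown channel coefficient
pilots = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))   # known QPSK pilot symbols
noise = 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
received = h * pilots + noise

h_ls = np.mean(received / pilots)  # least-squares estimate of h
print(h_ls)                        # close to the true channel 0.8+0.3j
```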