Reface grabs $5.5M seed led by A16z to stoke its viral face-swap video app – TechCrunch

#artificialintelligence

Buzzy face-swap video app Reface, which lends users celebrity 'superpowers' by turning their selfies into "eerily realistic" famous video clips at the tap of a button, has caught the attention of Andreessen Horowitz. The Silicon Valley venture firm leads a $5.5 million seed round in the deep tech entertainment startup, announced today. Reface tells us its apps (iOS and Android) have been downloaded some 70 million times since it launched in January 2020 -- up from 20M when we spoke to one of its (seven) co-founders back in August. It has also attained 'top five' app status in around 100 countries, the US included -- as well as bagging a 'top app' award in the annual Google Play best of. That kind of viral growth clip has been turning heads all over the place.


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we do not end up in technology-induced dystopias. As strongly argued by Green in his book The Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and how to design them. Philosophical and ethical questions are involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally, there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, has written a book on the subject titled Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethics, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of them may lead to others or help in solving them. The paper also discusses current limitations, pitfalls, and future directions of research in these domains, and how future work can fill current gaps and lead to better solutions.


GAEA: Graph Augmentation for Equitable Access via Reinforcement Learning

arXiv.org Artificial Intelligence

Disparate access to resources by different subpopulations is a prevalent issue in societal and sociotechnical networks. For example, urban infrastructure networks may enable certain racial groups to more easily access resources such as high-quality schools, grocery stores, and polling places. Similarly, social networks within universities and organizations may enable certain groups to more easily access people with valuable information or influence. Here we introduce a new class of problems, Graph Augmentation for Equitable Access (GAEA), to enhance equity in networked systems by editing graph edges under budget constraints. We prove such problems are NP-hard and cannot be approximated within a factor of $(1-\tfrac{1}{3e})$. We develop a principled, sample- and time-efficient Markov Reward Process (MRP)-based mechanism design framework for GAEA. Our algorithm outperforms baselines on a diverse set of synthetic graphs. We further demonstrate the method on real-world networks, by merging public census, school, and transportation datasets for the city of Chicago and applying our algorithm to find human-interpretable edits to the bus network that enhance equitable access to high-quality schools across racial groups. Further experiments on Facebook networks of universities yield sets of new social connections that would increase equitable access to certain attributed nodes across gender groups.
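The paper's actual mechanism is MRP/RL-based; as a rough illustration of the problem it formalizes, here is a hedged greedy baseline (the toy graph, groups, and all names are illustrative assumptions, not from the paper) that spends an edge budget to raise the worst-off group's access to resource nodes:

```python
# Hypothetical greedy baseline for the GAEA setting: add edges under a budget
# to raise the worst-off group's access to resource nodes. The paper's method
# is an MRP/RL mechanism; this sketch only illustrates the objective.
import itertools
import networkx as nx

def group_access(G, group, resources, radius=2):
    """Fraction of group nodes within `radius` hops of any resource node."""
    reached = 0
    for v in group:
        lengths = nx.single_source_shortest_path_length(G, v, cutoff=radius)
        if any(r in lengths for r in resources):
            reached += 1
    return reached / len(group)

def greedy_augment(G, groups, resources, budget):
    """Greedily add the non-edge that most improves the minimum group access.

    Brute-forces all node pairs at every step, so it is only viable on toy graphs.
    """
    G = G.copy()
    for _ in range(budget):
        best_edge = None
        best_score = min(group_access(G, g, resources) for g in groups)
        for u, v in itertools.combinations(G.nodes, 2):
            if G.has_edge(u, v):
                continue
            G.add_edge(u, v)
            score = min(group_access(G, g, resources) for g in groups)
            G.remove_edge(u, v)
            if score > best_score:
                best_edge, best_score = (u, v), score
        if best_edge is None:
            break  # no single edge improves the worst-off group
        G.add_edge(*best_edge)
    return G

# Toy example: a path graph where the second group sits far from the resource.
G = nx.path_graph(8)
groups = [[0, 1, 2], [5, 6, 7]]   # two subpopulations
resources = {0}                    # e.g., a high-quality school
G_aug = greedy_augment(G, groups, resources, budget=2)
print(sorted(set(G_aug.edges) - set(nx.path_graph(8).edges)))
```

Since the paper proves the problem is NP-hard and hard to approximate, a greedy heuristic like this can fail badly in general; it is shown only to make the budgeted edge-edit objective concrete.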


Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions

arXiv.org Artificial Intelligence

In recent years, AI safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice using concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two paradigms as artificial stupidity (AS) and eternal creativity (EC), respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap on many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling the relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.


Translation Is Trickier For Business, And Artificial Intelligence Can Help

#artificialintelligence

Artificial intelligence (AI) for translation is something Google and other companies have provided for individuals; it can be accessed on your phone. However, translation is a much larger and more complex issue than many people realize. The business community has many complex and unique needs that add to the challenge of accurate and reliable translation, and AI is showing increasing capability here. One of the keys to business translation is the simple reality that each business sector has its own terms, phrases, and even idioms.


Intrusion Detection Systems for IoT: opportunities and challenges offered by Edge Computing

arXiv.org Artificial Intelligence

Intrusion Detection Systems (IDSs), in which different techniques and architectures are applied to detect intrusions, are key components of current cybersecurity methods. IDSs can be based either on cross-checking monitored events against a database of known intrusion experiences, known as signature-based detection, or on learning the normal behavior of the system and reporting whether anomalous events occur, known as anomaly-based detection. This work is dedicated to their application in Internet of Things (IoT) networks, where edge computing is used to support the IDS implementation. We identify new challenges that arise when deploying an IDS in an edge scenario and propose remedies. We focus on anomaly-based IDSs, showing the main techniques that can be leveraged to detect anomalies, and we present machine learning techniques and their application in the context of an IDS, describing the expected advantages and disadvantages of each.
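To make the anomaly-based setting concrete, here is a minimal sketch (not from the paper; the flow features, numbers, and thresholds are illustrative assumptions) of training a detector only on benign traffic and flagging deviations, using scikit-learn's IsolationForest:

```python
# Minimal sketch of an anomaly-based IDS component: model "normal" traffic
# (e.g., at an edge node) and flag flows that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic flow features: [packets/s, mean packet size, distinct dest ports]
normal_traffic = rng.normal(loc=[100, 500, 3], scale=[10, 50, 1], size=(1000, 3))

# Train only on benign traffic: the anomaly-based setting needs no attack labels
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A port-scan-like flow: many distinct destination ports, small packets
suspicious = np.array([[400, 60, 150]])
print(detector.predict(suspicious))          # -1 => flagged as anomalous
print(detector.predict(normal_traffic[:5]))  # mostly +1 => normal
```

This mirrors the trade-off the abstract describes: no signature database is needed and novel attacks can be caught, but anything unusual yet benign risks becoming a false positive.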


2021 Healthcare Cybersecurity Priorities: Experts Weigh In

#artificialintelligence

Healthcare cybersecurity is in triage mode. As systems are stretched to the limits by COVID-19 and technology becomes an essential part of everyday patient interactions, hospital and healthcare IT departments have been left to figure out how to make it all work together, safely and securely. Most notably, the connectivity of everything from thermometers to defibrillators is exponentially increasing the attack surface, presenting vulnerabilities IT professionals might not even know are on their networks. The result has been newfound attention from ransomware and other malicious actors circling and waiting for the right time to strike. Rather than feeling overwhelmed in the current cybersecurity environment, it's important for healthcare and hospital IT teams to look at securing their networks as a constant work in progress, rather than a single project with a start and end point, according to experts Jeff Horne from Ordr and G. Anthony Reina, who participated in Threatpost's November webinar on Healthcare Cybersecurity. "This is a proactive space," Reina said. "This is something where you can't just be reactive. You actually have to be going out there, searching for those sorts of things, and so even on the technologies that we have, we're proactive about saying that security is an evolving kind of technology. It's not something where we're going to be finished." Healthcare IT pros, and security professionals more generally, also need to get a firm handle on what lives on their networks and its potential level of exposure. The fine-tuned expertise of healthcare connected machines, along with the enormous cost to upgrade hardware in many instances, leaves holes on a network that simply cannot be patched. "Because, from an IT perspective, you cannot manage what you can't see, and from a security perspective, you can't control and protect what you don't know," Horne said. Threatpost's experts explained how healthcare organizations can get out of triage mode and ahead of the next attack. The webinar covers everything from bread-and-butter patching to a brand-new secure data model that applies federated learning to functions as critical as diagnosing a brain tumor. Alternatively, a lightly edited transcript of the event follows below. Thank you so much for joining. We have an excellent conversation planned on a critically important topic: healthcare cybersecurity. My name is Becky Bracken, and I'll be your host for today's discussion. Before we get started, I want to remind you there's a widget in the upper right-hand corner of your screen where you can submit questions to our panelists at any time. We encourage you to do that; we'll have time to answer questions, and we want to make sure we're covering the topics most interesting to you. OK, let's introduce our panelists for today. First we have Jeff Horne. Jeff is currently the CSO at Ordr, and his priors include SpaceX.


Discriminatory Expressions to Produce Interpretable Models in Microblogging Context

arXiv.org Artificial Intelligence

Social Networking Sites (SNS) are one of the most important means of communication. In particular, microblogging sites are being used as analysis avenues due to their peculiarities (promptness, short texts...). Countless studies use SNS in novel manners, but machine learning (ML) has focused mainly on classification performance rather than interpretability and/or other goodness metrics. Thus, state-of-the-art models are black boxes that should not be used to solve problems that may have a social impact. When the problem requires transparency, it is necessary to build interpretable pipelines. Arguably, the most decisive component in the pipeline is the classifier, but it is not the only thing we need to consider. Even when the classifier is interpretable, the resulting models may be too complex to be considered comprehensible, making it impossible for humans to understand the actual decisions. The purpose of this paper is to present a feature selection mechanism (the first step in the pipeline) that is able to improve comprehensibility by using fewer but more meaningful features while achieving good performance in microblogging contexts where interpretability is mandatory. Moreover, we present a ranking method to evaluate features in terms of statistical relevance and bias. We conducted exhaustive tests with five different datasets in order to evaluate classification performance, generalisation capacity, and actual interpretability of the model. Our results show that our proposal performs better and is, by far, the most stable in terms of accuracy, generalisation, and comprehensibility.
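As a rough illustration of the pipeline the abstract describes (the paper proposes its own discriminatory-expressions selector; the chi-squared ranking below is a stand-in assumption, and the toy texts are invented), here is a minimal sketch of keeping only a few statistically relevant terms and fitting a transparent classifier on them:

```python
# Sketch: statistical feature selection so a linear classifier stays small
# enough to inspect coefficient by coefficient. Chi-squared is a stand-in for
# the paper's own ranking method.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

texts = ["great match today", "awful referee decision",
         "loved the concert", "terrible sound quality"]
labels = [1, 0, 1, 0]  # toy sentiment labels

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

# Keep only the k terms most associated with the class labels
selector = SelectKBest(chi2, k=4).fit(X, labels)
kept_terms = vectorizer.get_feature_names_out()[selector.get_support()]

clf = LogisticRegression().fit(selector.transform(X), labels)

# With 4 features, the whole decision rule fits in one line of output
print(dict(zip(kept_terms, clf.coef_[0].round(2))))
```

The point of the sketch is the trade the paper studies: a handful of meaningful features makes every prediction auditable by a human, at some possible cost in raw accuracy.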


A Survey on Data Pricing: from Economics to Data Science

arXiv.org Artificial Intelligence

How can we assess the value of data objectively, systematically, and quantitatively? Pricing data, or information goods in general, has been studied and practiced across dispersed areas and under varied principles, such as economics, marketing, electronic commerce, data management, data mining, and machine learning. In this article, we present a unified, interdisciplinary, and comprehensive overview of this important direction. We examine various motivations behind data pricing, understand the economics of data pricing, and review the development and evolution of pricing models according to a series of fundamental principles. We discuss both digital products and data products. We also consider a series of challenges and directions for future work.
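One recurring principle in this literature is pricing each contributor's data by its marginal contribution to the value of the whole. A minimal, hedged sketch of exact Shapley-value pricing with a toy utility function (the utility, owner names, and numbers are assumptions for illustration, not drawn from the survey) might look like:

```python
# Sketch: price each data owner by their Shapley value, i.e., their average
# marginal contribution over all coalitions. Exponential in the number of
# owners, so exact computation is only viable for tiny examples.
from itertools import combinations
from math import factorial

def shapley_prices(owners, utility):
    """Exact Shapley value of each owner under a coalition utility function."""
    n = len(owners)
    prices = {}
    for o in owners:
        others = [x for x in owners if x != o]
        value = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                value += weight * (utility(set(S) | {o}) - utility(set(S)))
        prices[o] = value
    return prices

# Toy utility: revenue grows with data coverage, with diminishing returns
coverage = {"A": 0.5, "B": 0.3, "C": 0.3}
utility = lambda S: min(1.0, sum(coverage[o] for o in S)) * 100
print(shapley_prices(["A", "B", "C"], utility))  # payments sum to utility(all)
```

The attraction of this family of models is fairness by construction (payments sum exactly to the total value created); the practical obstacle, as surveys in this area discuss, is the cost of computing or approximating these values at scale.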


Emotional Semantics-Preserved and Feature-Aligned CycleGAN for Visual Emotion Adaptation

arXiv.org Artificial Intelligence

Thanks to large-scale labeled training data, deep neural networks (DNNs) have achieved remarkable success in many vision and multimedia tasks. However, because of domain shift, the knowledge learned by well-trained DNNs does not generalize well to new domains or datasets that have few labels. Unsupervised domain adaptation (UDA) studies the problem of transferring models trained on one labeled source domain to another unlabeled target domain. In this paper, we focus on UDA in visual emotion analysis for both emotion distribution learning and dominant emotion classification. Specifically, we design a novel end-to-end cycle-consistent adversarial model, termed CycleEmotionGAN++. First, we generate an adapted domain to align the source and target domains at the pixel level by improving CycleGAN with a multi-scale structured cycle-consistency loss. During the image translation, we propose a dynamic emotional semantic consistency loss to preserve the emotion labels of the source images. Second, we train a transferable task classifier on the adapted domain with feature-level alignment between the adapted and target domains. We conduct extensive UDA experiments on the Flickr-LDL & Twitter-LDL datasets for distribution learning and the ArtPhoto & FI datasets for emotion classification. The results demonstrate the significant improvements yielded by the proposed CycleEmotionGAN++ as compared to state-of-the-art UDA approaches.
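As a rough, hedged sketch of the two loss terms the abstract highlights: the plain L1 cycle term and cross-entropy emotion term below are simplifications (the paper uses multi-scale structured and dynamic variants), and the generators, classifier, and weights are placeholders, not the paper's architecture:

```python
# Sketch of a cycle-consistency loss plus an emotional-semantic-consistency
# loss: translating source -> target -> source should reproduce the input, and
# the translated image should keep its source emotion label.
import torch
import torch.nn.functional as F

def cycle_emotion_losses(G_st, G_ts, classifier, x_src, y_src,
                         lambda_cyc=10.0, lambda_emo=1.0):
    """x_src: source images; y_src: emotion labels as class indices."""
    x_adapted = G_st(x_src)        # source -> target style
    x_recovered = G_ts(x_adapted)  # back to source style

    # Cycle consistency: there-and-back translation should match the input
    loss_cyc = F.l1_loss(x_recovered, x_src)

    # Emotional semantic consistency: adapted image keeps its emotion label
    loss_emo = F.cross_entropy(classifier(x_adapted), y_src)

    return lambda_cyc * loss_cyc + lambda_emo * loss_emo

# Toy shapes only; any image-to-image generators and emotion classifier fit here.
G_st = G_ts = torch.nn.Conv2d(3, 3, 1)
classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 8))
x = torch.randn(4, 3, 8, 8)
y = torch.randint(0, 8, (4,))
print(cycle_emotion_losses(G_st, G_ts, classifier, x, y).item())
```

In the full model these terms would be combined with the adversarial losses of both generators and the feature-level alignment of the task classifier; the sketch isolates only the consistency terms to show why the adapted images keep their emotional semantics.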