

Debugging and Explaining Metric Learning Approaches: An Influence Function Based Perspective

Neural Information Processing Systems

Deep metric learning (DML) learns a generalizable embedding space in which the representations of semantically similar samples are closer together. Despite achieving good performance, state-of-the-art models still suffer from generalization errors, such as similar samples lying farther apart and dissimilar samples lying closer together in the space.
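The embedding objective the abstract describes can be sketched with a standard triplet loss: pull an anchor toward a semantically similar (positive) sample and push it away from a dissimilar (negative) one. This is a minimal illustration, not the paper's method; the vectors and margin below are made up.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on d(anchor, positive) - d(anchor, negative) + margin."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to similar sample
    d_neg = np.linalg.norm(anchor - negative)  # distance to dissimilar sample
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])  # semantically similar: should be near
n = np.array([2.0, 0.0])  # dissimilar: should be far
print(triplet_loss(a, p, n))  # 0.0: margin already satisfied
```

A "generalization error" in the abstract's sense is exactly the case where, on unseen data, the positive ends up farther from the anchor than the negative, making this hinge positive.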


Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power

Neural Information Processing Systems

It is well-known that modern neural networks are vulnerable to adversarial examples. To mitigate this problem, a series of robust learning algorithms have been proposed. However, although the robust training error can be near zero via some methods, all existing algorithms lead to a high robust generalization error. In this paper, we provide a theoretical understanding of this puzzling phenomenon from the perspective of expressive power for deep neural networks. Specifically, for binary classification problems with well-separated data, we show that, for ReLU networks, while mild over-parameterization is sufficient for high robust training accuracy, there exists a constant robust generalization gap unless the size of the neural network is exponential in the data dimension $d$. This result holds even if the data is linearly separable (which means achieving standard generalization is easy), and more generally for any parameterized function classes as long as their VC dimension is at most polynomial in the number of parameters. Moreover, we establish an improved upper bound of $\exp({\mathcal{O}}(k))$ for the network size to achieve low robust generalization error when the data lies on a manifold with intrinsic dimension $k$ ($k \ll d$). Nonetheless, we also have a lower bound that grows exponentially with respect to $k$ --- the curse of dimensionality is inevitable. By demonstrating an exponential separation between the network size for achieving low robust training and generalization error, our results reveal that the hardness of robust generalization may stem from the expressive power of practical models.
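The gap between standard and robust error that the abstract analyzes can be made concrete on a toy linearly separable dataset: a classifier can have zero standard error yet misclassify every point that sits within the perturbation radius of its decision boundary. The data, weights, and radius below are hypothetical, chosen only to illustrate the distinction.

```python
import numpy as np

def standard_error(w, X, y):
    """Fraction of points misclassified with no perturbation."""
    return np.mean(np.sign(X @ w) != y)

def robust_error(w, X, y, eps):
    """Fraction misclassifiable under a worst-case l_inf perturbation
    of radius eps; for a linear model the margin y*(x@w) shrinks by
    exactly eps * ||w||_1."""
    margins = y * (X @ w) - eps * np.sum(np.abs(w))
    return np.mean(margins <= 0)

X = np.array([[0.3], [0.6], [-0.3], [-0.6]])
y = np.array([1, 1, -1, -1])
w = np.array([1.0])

print(standard_error(w, X, y))     # 0.0: data is linearly separable
print(robust_error(w, X, y, 0.5))  # 0.5: points within eps of the boundary flip
```

The paper's point is far stronger: even with networks expressive enough to drive robust *training* error to zero, closing this gap at test time requires size exponential in the (intrinsic) dimension.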


Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory

Neural Information Processing Systems

Counterfactual explanations provide ways of achieving a favorable model outcome with minimum input perturbation. However, counterfactual explanations can also be leveraged to reconstruct the model by strategically training a surrogate model to give similar predictions as the original (target) model. In this work, we analyze how model reconstruction using counterfactuals can be improved by further leveraging the fact that the counterfactuals also lie quite close to the decision boundary. Our main contribution is to derive novel theoretical relationships between the error in model reconstruction and the number of counterfactual queries required using polytope theory. Our theoretical analysis leads us to propose a strategy for model reconstruction that we call Counterfactual Clamping Attack (CCA) which trains a surrogate model using a unique loss function that treats counterfactuals differently than ordinary instances.
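One plausible reading of "treats counterfactuals differently" is a two-term surrogate loss: ordinary queried instances are fit to the target model's labels, while counterfactuals, known to sit near the decision boundary, are pulled toward probability 0.5. This is a hedged sketch of that idea, not the loss from the paper; the function and values are illustrative.

```python
import numpy as np

def surrogate_loss(p_ordinary, y_ordinary, p_counterfactual):
    """Sketch: cross-entropy on ordinary points plus a 'clamping' term
    pushing counterfactual predictions toward the boundary (p = 0.5)."""
    bce = -np.mean(y_ordinary * np.log(p_ordinary)
                   + (1 - y_ordinary) * np.log(1 - p_ordinary))
    boundary = np.mean((p_counterfactual - 0.5) ** 2)
    return bce + boundary

p_ord = np.array([0.9, 0.1])   # surrogate predictions on ordinary queries
y_ord = np.array([1, 0])       # target model's labels for those queries
p_cf = np.array([0.5, 0.6])    # surrogate predictions on counterfactuals
print(surrogate_loss(p_ord, y_ord, p_cf))
```

The design intuition is that hard 0/1 labels on counterfactuals would push the surrogate's boundary away from them, whereas clamping exploits the fact that they are boundary points.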


Popular book app's AI is deemed 'bigoted' and 'racist' after calling one user a 'diversity devotee' and telling another to 'surface for the occasional white author'

Daily Mail - Science & tech

A popular book app's AI has been scrapped after being deemed 'bigoted and racist'. Fable, a social media app for book enthusiasts, used an AI to create a Spotify-like 'wrapped' experience, summarising users' reading habits throughout the year. However, outraged readers soon complained that the feature, designed to offer a 'playful roast', was lashing out with racist putdowns. One user was shocked when the app told them to 'surface for the occasional white author' after spending the year reading 'Black narratives and transformative tales'. Another was slammed by their AI summary as a 'diversity devotee', with the app questioning whether they were 'ever in the mood for a straight, cis white man's perspective'.


Here's My Perspective on Artificial Intelligence.

#artificialintelligence

I have been interested in Artificial Intelligence (AI) for a long time, ever since I started learning about technology, design, media, and gaming. In a previous post, I introduced Smart Home Technologies. Some subscribers asked my opinion about AI, so I decided to create this short post to respond to them. AI refers to the ability of computers or machines to perform tasks that would typically require human intelligence, such as learning, problem-solving, decision-making, and language understanding. Recently, I have come across different types of AI, including narrow AI and general AI.


Slowly but surely, gains from AI innovation are coming

#artificialintelligence

Each day we read about amazing technology breakthroughs, particularly when it comes to artificial intelligence (AI). But if AI is so great, why are these breathtaking technological achievements not matched with soaring productivity and economic growth? Or, to paraphrase an old jibe: If the economy is so smart, why aren't we all rich? After all, we live among astonishing examples of potentially transformative new technologies that could greatly increase productivity and economic welfare. As noted in the 2014 book "The Second Machine Age," leaps in AI, machine learning, and, more recently, areas such as image recognition abound.


Qualitative Reasoning about Physical Systems with Multiple Perspectives

AI Magazine

It was motivated by two observations regarding modeling in general and work in qualitative physics in particular. First, all model-based reasoning is only as good as the model used (Davis and Hamscher 1988). Second, no single model is adequate or appropriate for a wide range of tasks (Weld 1989). A model of a real-world system is but an abstraction of some aspects of the system. To formulate a model of a physical system for a given task, we inevitably take certain perspectives on the system to capture the proper scenarios, deciding what to describe and what to ignore (Hobbs 1985).


Object-Oriented Programming: Themes and Variations

AI Magazine

The first substantial interactive, display-based implementation was the SMALLTALK language (Goldberg & Robson, 1983). The object-oriented style has often been advocated for simulation programs, systems programming, graphics, and AI programming. The history of ideas has some additional threads including work on message passing as in ACTORS (Lieberman, 1981), and multiple inheritance as in FLAVORS (Weinreb & Moon, 1981). It is also related to a line of work in AI on the theory of frames (Minsky, 1975) and their implementation in knowledge representation languages such as KRL (Bobrow & Winograd, 1977), KEE (Fikes & Kehler, 1985), FRL (Goldstein & Roberts, 1977) and UNITS (Stefik, 1979). One might expect from this long history that by now there would be agreement on the fundamental principles of object-oriented programming.
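The multiple inheritance pioneered in FLAVORS, mentioned above, survives directly in modern languages. A minimal Python sketch (the class names are illustrative, not from any of the cited systems) shows the mixin style FLAVORS popularized: a class composes independent behaviors by inheriting from several parents.

```python
class Displayable:
    """One 'flavor' of behavior: rendering."""
    def describe(self):
        return f"displayable {self.name}"

class Persistent:
    """Another independent 'flavor': storage."""
    def save(self):
        return f"saved {self.name}"

class Document(Displayable, Persistent):  # mix in both behaviors
    def __init__(self, name):
        self.name = name

doc = Document("report")
print(doc.describe())  # "displayable report"
print(doc.save())      # "saved report"
```

Calling `doc.describe()` is the message-passing idea from ACTORS and SMALLTALK in modern dress: the receiver, not the caller, determines which method runs.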


Approaches to Cognitive Science

AI Magazine

Regardless of training, most people who come in contact with the field of AI are at least partially motivated by the glimmer of hope that they will get a better understanding of the mind. This quest, of course, is a rich and complex one. It is easy to get mired in minutiae along the way, be they the optimization of an algorithm, the details of a mental model, or the intricacies of a logical argument. Thagard's book attempts to call us back to the larger picture and to draw in new devotees--and, in general, he succeeds. This book begins, "Cognitive science is the interdisciplinary study of mind and intelligence..." (p.


Differing Methodological Perspectives in Artificial Intelligence Research

AI Magazine

A variety of proposals for preferred methodological approaches has been advanced in the recent artificial intelligence (AI) literature. Rather than advocating a particular approach, this article attempts to explain the apparent confusion of efforts in the field in terms of differences among underlying methodological perspectives held by practicing researchers. The article presents a review of such perspectives discussed in the existing literature and then considers a descriptive and relatively specific typology of these differing research perspectives. Studies are reported in a wide range of publications. While some focus on the field (e.g., Artificial Intelligence), others are concerned with different research areas (e.g., Behavioral and Brain Sciences). Perhaps, as others have pointed out, "there are undoubtedly some views AI simply adds to the prevailing sense of confusion. AI research, which have been previously reported in .