Debugging and Explaining Metric Learning Approaches: An Influence Function Based Perspective

Neural Information Processing Systems

Deep metric learning (DML) learns a generalizable embedding space in which the representations of semantically similar samples lie closer together. Despite achieving good performance, state-of-the-art models still suffer from generalization errors such as similar samples that end up farther apart and dissimilar samples that end up closer together in the space.
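Influence-function analyses of this kind typically build on the classical first-order approximation of how upweighting a training point $z$ changes the loss at a test point (stated here in its standard form; the paper's metric-learning adaptation may differ):

```latex
\mathcal{I}(z, z_{\mathrm{test}})
  = -\nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta})^{\top}
    H_{\hat{\theta}}^{-1}
    \nabla_{\theta} L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i, \hat{\theta})
```

A large negative value indicates that $z$ is helpful for $z_{\mathrm{test}}$, while a large positive value flags $z$ as a harmful training sample worth inspecting.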


Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power

Neural Information Processing Systems

It is well-known that modern neural networks are vulnerable to adversarial examples. To mitigate this problem, a series of robust learning algorithms have been proposed. However, although the robust training error can be made near zero by some methods, all existing algorithms lead to a high robust generalization error. In this paper, we provide a theoretical understanding of this puzzling phenomenon from the perspective of expressive power for deep neural networks. Specifically, for binary classification problems with well-separated data, we show that, for ReLU networks, while mild over-parameterization is sufficient for high robust training accuracy, there exists a constant robust generalization gap unless the size of the neural network is exponential in the data dimension $d$. This result holds even if the data is linearly separable (which means achieving standard generalization is easy), and more generally for any parameterized function classes as long as their VC dimension is at most polynomial in the number of parameters. Moreover, we establish an improved upper bound of $\exp({\mathcal{O}}(k))$ for the network size to achieve low robust generalization error when the data lies on a manifold with intrinsic dimension $k$ ($k \ll d$). Nonetheless, we also have a lower bound that grows exponentially with respect to $k$ --- the curse of dimensionality is inevitable. By demonstrating an exponential separation between the network size for achieving low robust training and generalization error, our results reveal that the hardness of robust generalization may stem from the expressive power of practical models.


Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory

Neural Information Processing Systems

Counterfactual explanations provide ways of achieving a favorable model outcome with minimum input perturbation. However, counterfactual explanations can also be leveraged to reconstruct the model by strategically training a surrogate model to give predictions similar to those of the original (target) model. In this work, we analyze how model reconstruction using counterfactuals can be improved by further leveraging the fact that the counterfactuals also lie quite close to the decision boundary. Our main contribution is to derive novel theoretical relationships between the error in model reconstruction and the number of counterfactual queries required using polytope theory. Our theoretical analysis leads us to propose a strategy for model reconstruction that we call Counterfactual Clamping Attack (CCA), which trains a surrogate model using a unique loss function that treats counterfactuals differently than ordinary instances.
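The idea of treating counterfactuals differently can be sketched as follows. This is a minimal illustration, not the paper's exact loss: we assume ordinary queries get standard binary cross-entropy, while counterfactual queries, which sit just on the favorable side of the target model's boundary, are only penalised when the surrogate predicts them below a probability threshold `tau` (the function name, the one-sided hinge form, and `tau` are all illustrative assumptions).

```python
import math

def cca_style_loss(p_ordinary, y_ordinary, p_counterfactual, tau=0.5, eps=1e-9):
    """Sketch of a surrogate-training loss in the spirit of CCA.

    p_ordinary / p_counterfactual: surrogate-predicted probabilities of the
    favorable class; y_ordinary: observed target-model labels (0 or 1).
    """
    # Standard binary cross-entropy on ordinary (non-counterfactual) queries.
    bce = 0.0
    for p, y in zip(p_ordinary, y_ordinary):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        bce += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    bce /= len(p_ordinary)

    # One-sided hinge on counterfactuals: no penalty once the surrogate
    # places them at or above tau, i.e. on the favorable side of its boundary.
    cf = sum(max(0.0, tau - p) for p in p_counterfactual) / len(p_counterfactual)
    return bce + cf

# Counterfactuals near the boundary: the one below tau=0.5 adds a penalty.
loss = cca_style_loss(
    p_ordinary=[0.9, 0.2],
    y_ordinary=[1.0, 0.0],
    p_counterfactual=[0.6, 0.4],
)
```

The asymmetry is the point: clamping counterfactuals to one side of the surrogate's boundary, rather than fitting them as hard-labelled points, exploits their known position near the target model's decision boundary.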


Junior Software Developers' Perspectives on Adopting LLMs for Software Engineering: a Systematic Literature Review

Ferino, Samuel, Hoda, Rashina, Grundy, John, Treude, Christoph

arXiv.org Artificial Intelligence

Many studies exploring the adoption of Large Language Model-based tools for software development by junior developers have emerged in recent years. These studies have sought to understand developers' perspectives about using those tools, a fundamental pillar for successfully adopting LLM-based tools in Software Engineering. The aim of this paper is to provide an overview of junior software developers' perspectives and use of LLM-based tools for software engineering (LLM4SE). We conducted a systematic literature review (SLR) following guidelines by Kitchenham et al. on 56 primary studies, defining junior software developers as those with five or fewer years of experience, including Computer Science/Software Engineering students. We found that the majority of the studies focused on comprehending the different aspects of integrating AI tools in SE. Only 8.9% of the studies provide a clear definition for junior software developers, and there is no uniformity. Searching for relevant information is the most common task using LLM tools. ChatGPT was the most common LLM tool present in the studies (and experiments). A majority of the studies (83.9%) report both positive and negative perceptions about the impact of adopting LLM tools. We also found and categorised advantages, challenges, and recommendations regarding LLM adoption. Our results indicate that developers are using LLMs not just for code generation, but also to improve their development skills. Critically, they are not just experiencing the benefits of adopting LLM tools, but they are also aware of at least a few LLM limitations, such as the generation of wrong suggestions, potential data leaking, and AI hallucination. Our findings offer implications for software engineering researchers, educators, and developers.


Popular book app's AI is deemed 'bigoted' and 'racist' after calling one user a 'diversity devotee' and telling another to 'surface for the occasional white author'

Daily Mail - Science & tech

A popular book app's AI has been scrapped after being deemed 'bigoted and racist'. Fable, a social media app for book enthusiasts, used an AI to create a Spotify-like 'wrapped' experience, summarising users' reading habits throughout the year. However, outraged readers soon complained that the feature, designed to offer a 'playful roast', was lashing out with racist putdowns. One user was shocked when the app told them to 'surface for the occasional white author' after spending the year reading 'Black narratives and transformative tales'. Another was slammed by their AI summary as a 'diversity devotee', with the app questioning whether they were 'ever in the mood for a straight, cis white man's perspective'.


Here's My Perspective on Artificial Intelligence.

#artificialintelligence

I have been interested in Artificial Intelligence (AI) for a long time, ever since I started learning about technology, design, media, and gaming. In a previous post, I introduced Smart Home Technologies. Some subscribers asked my opinion about AI, so I decided to create this short post to respond to them. AI refers to the ability of computers or machines to perform tasks that would typically require human intelligence, such as learning, problem-solving, decision-making, and language understanding. Recently, I have come across different types of AI, including narrow AI and general AI.


Perspective: The Metaverse Is Ushering in the Next Era of Computing

#artificialintelligence

The term meta, by its most modern definition, can be described as self-referencing or self-reflective. In contemporary nomenclature, meta is often used as a standalone adjective: a "meta" name for a dog would be Dog, and a meta movie would be a movie about movies. And so, we have the metaverse: another world for people and businesses to inhabit, where they can conduct transactions and interact without the necessity of being fully, physically present.


Citizen Debate: Artificial Intelligence & Law, Perspectives From Europe And Canada (15) - AI Summary

#artificialintelligence

Professors Mireille Hildebrandt (VUB, Brussels) and Catherine Régis (Université de Montréal – Mila, Canada) will present some of the major current questions around Law and Artificial Intelligence. How to bring AI applications under the rule of law, and what fundamental rights assessments must be put in place? Does the GDPR set the right tone and how can AI development be aligned with individual rights and freedoms, including rights to non-discrimination, privacy, due process and the presumption of innocence? Mireille Hildebrandt will focus on the concepts of robust AI (in terms of reliability and resilience) and robust law (in terms of the rule of law), and discuss how robust AI could support the rule of law and vice versa. She will also give an overview of future developments in the legal regulation of AI, and explore the role of ethical guidelines and charters, through their formalization process and their potential for legal developments.


Women are better at finding and remembering words than men, study shows

Daily Mail - Science & tech

That's because a new study has found that women are better at finding and remembering words than men. Researchers from the University of Bergen in Norway have analysed the results of 168 studies on gender differences in 'verbal fluency' and 'verbal-episodic memory'. Verbal fluency is a measure of one's vocabulary, while verbal-episodic memory is the ability to recall words one has come across in the past. 'The female advantage is consistent across time and life span, but it is also relatively small,' said Professor Marco Hirnstein. A study by a team from the University of Pennsylvania scanned the brains of 900 men, women and children aged eight to 22. From the scans they were able to create a complete road map of the connections in each of their brains, called their 'connectome'.


Tesla AI Day: An Investor's Perspective

#artificialintelligence

Tesla has been in the financial news for a variety of reasons over the last few months. There was a Tesla stock split, the company announced a recall, and CEO Elon Musk is a constant source of buzz, a substantive Kardashian of science capable of space travel. The celebrity CEO enjoys putting on a show and garnering interest for the company. Unfortunately, Tesla stock often feels the impact of Musk's actions. However, Tesla just made some announcements that could change the landscape of artificial intelligence forever.