Systems & Languages


Why Do Interviewers Ask Linked List Questions? • Hillel Wayne

#artificialintelligence

A couple of years back I gave a talk on researching software history, using "linked list interview questions" as an example topic. Since referring people to a video is less accessible than just writing a blog post, I've reproduced the question here. So why do interviewers like to ask linked list questions? The standard answers are contradictory: if you want to know whether someone knows CS fundamentals, you don't want to give them a problem they can trick their way through, and if you want to test reasoning ability, you don't want to give a problem they've already seen in a CS course. Two contradictory answers tell me there's some history involved. My guess is that originally people asked LL questions for a very good reason, and then over time forgot the reason and came up with post-hoc justifications.
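
For readers who have never met the genre, the archetypal linked list question is "reverse a singly linked list in place." A minimal Python sketch (my own illustration, not from the talk):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse a singly linked list iteratively; returns the new head."""
    prev = None
    while head is not None:
        # Re-point the current node backwards, then advance.
        head.next, prev, head = prev, head, head.next
    return prev

# Build 1 -> 2 -> 3, reverse it, and read the values back out.
head = reverse(Node(1, Node(2, Node(3))))
values = []
while head:
    values.append(head.value)
    head = head.next
assert values == [3, 2, 1]
```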


Watch artificial intelligence grow a walking caterpillar in Minecraft

#artificialintelligence

The video above will be familiar to anyone who's played the 3D world-building game Minecraft. The algorithm takes its cue from the "Game of Life," a so-called cellular automaton. There, squares in a grid turn black or white over a series of timesteps based on how many of their neighbors are black or white. The program mimics biological development, in which cells in an embryo behave according to cues in their local environment. Some researchers have replaced the simple rules (e.g., any white square with three black neighbors turns black) with more complex ones decided by neural networks, machine-learning algorithms that roughly mimic the brain's wiring.
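
For reference, the full rule the article is simplifying is easy to state in code. A minimal NumPy sketch of one Game of Life step on a wrap-around grid:

```python
import numpy as np

def life_step(grid):
    """One synchronous Game of Life update on a toroidal grid."""
    # Count the 8 neighbors of every cell via wrapped shifts.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth: a dead cell with exactly 3 live neighbors comes alive.
    # Survival: a live cell with 2 or 3 live neighbors stays alive.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

# A "glider" that walks diagonally across the grid, step by step.
grid = np.zeros((8, 8), dtype=np.uint8)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
```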


Regenerating Soft Robots through Neural Cellular Automata

#artificialintelligence

A neural cellular automaton (neural CA) is a kind of cellular automaton (Figure 1). While classical cellular automata have hand-crafted state-transition rules, a neural CA learns its transition rule by training a neural network. Recently, neural CA have been shown to be a powerful tool in morphogenesis [1]. Mordvintsev et al. trained a neural CA to grow complex two-dimensional images starting from a few initial cells. Furthermore, the authors also successfully trained it to regenerate a target pattern even when part of it is removed.
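
A minimal sketch of the core idea, loosely following the recipe popularized by Mordvintsev et al. (the exact architecture and training setup in the paper differ): each cell perceives its neighborhood through fixed filters, and a small learned network emits a residual update to the cell's state vector.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralCA(nn.Module):
    """Each cell carries a state vector; fixed depthwise filters provide
    the 'perception' of the neighborhood, and a learned 1x1-conv rule
    turns the percept into an incremental state update."""
    def __init__(self, channels=16, hidden=128):
        super().__init__()
        ident = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        kernels = torch.stack([ident, sobel_x, sobel_x.T])      # (3, 3, 3)
        # One copy of each filter per state channel (depthwise perception).
        self.register_buffer(
            "perceive", kernels.repeat(channels, 1, 1).unsqueeze(1))
        self.rule = nn.Sequential(               # the learned transition rule
            nn.Conv2d(channels * 3, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 1))
        self.channels = channels

    def forward(self, state):
        percept = F.conv2d(state, self.perceive, padding=1,
                           groups=self.channels)
        return state + self.rule(percept)        # residual update per step

ca = NeuralCA()
state = torch.zeros(1, 16, 32, 32)
state[:, :, 16, 16] = 1.0                        # a single "seed" cell
for _ in range(8):                               # let the pattern grow
    state = ca(state)
```

Training would optimize `rule` so that repeatedly applying the update grows, and after damage regrows, the target pattern from the seed.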


Tree-based Node Aggregation in Sparse Graphical Models

arXiv.org Machine Learning

High-dimensional graphical models are often estimated using regularization that is aimed at reducing the number of edges in a network. In this work, we show how even simpler networks can be produced by aggregating the nodes of the graphical model. We develop a new convex regularized method, called the tree-aggregated graphical lasso or tag-lasso, that estimates graphical models that are both edge-sparse and node-aggregated. The aggregation is performed in a data-driven fashion by leveraging side information in the form of a tree that encodes node similarity and facilitates the interpretation of the resulting aggregated nodes. We provide an efficient implementation of the tag-lasso by using the locally adaptive alternating direction method of multipliers and illustrate our proposal's practical advantages in simulation and in applications in finance and biology.
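
The tag-lasso itself is the paper's contribution, but the edge-sparsity half of the story is the familiar graphical lasso. A hedged baseline sketch with scikit-learn (tag-lasso additionally merges nodes using the tree-encoded side information, which this baseline does not attempt):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# The l1 penalty on off-diagonal precision entries zeroes out edges,
# giving an edge-sparse (but not node-aggregated) graphical model.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
X[:, 1] += 0.8 * X[:, 0]                 # induce one strong dependency

model = GraphicalLasso(alpha=0.2).fit(X)
precision = model.precision_
edges = np.abs(precision) > 1e-4
np.fill_diagonal(edges, False)
print("estimated edges:", np.argwhere(edges))
```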


Why AI Can't Properly Translate Proust--Yet

Oxford Comp Sci

This observation--that understanding Proust's text requires knowledge of various kinds--is not a new one. We came across it before, in the context of the Cyc project. Remember that Cyc was supposed to be given knowledge corresponding to the whole of consensus reality, and the Cyc hypothesis was that this would yield human-level general intelligence. Researchers in knowledge-based AI would be keen for me to point out to you that, decades ago, they anticipated exactly this issue. But it is not obvious that just continuing to refine deep learning techniques will address this problem.


GIID-Net: Generalizable Image Inpainting Detection via Neural Architecture Search and Attention

arXiv.org Artificial Intelligence

Deep learning (DL) has demonstrated its powerful capabilities in the field of image inpainting, which can produce visually plausible results. Meanwhile, the malicious use of advanced image inpainting tools (e.g. removing key objects to report fake news) has led to increasing threats to the reliability of image data. To fight against inpainting forgeries, in this work, we propose a novel end-to-end Generalizable Image Inpainting Detection Network (GIID-Net) to detect inpainted regions with pixel-level accuracy. The proposed GIID-Net consists of three sub-blocks: the enhancement block, the extraction block and the decision block. Specifically, the enhancement block aims to enhance the inpainting traces by using hierarchically combined special layers. The extraction block, automatically designed by a Neural Architecture Search (NAS) algorithm, is tasked with extracting features for the actual inpainting detection task. In order to further optimize the extracted latent features, we integrate global and local attention modules in the decision block, where the global attention reduces the intra-class differences by measuring the similarity of global features, while the local attention strengthens the consistency of local features. Furthermore, we thoroughly study the generalizability of our GIID-Net, and find that different training data can result in vastly different generalization capability. Extensive experimental results are presented to validate the superiority of the proposed GIID-Net over state-of-the-art competitors. Our results suggest that common artifacts are shared across diverse image inpainting methods. Finally, we build a public inpainting dataset of 10K image pairs for future research in this area.
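
The paper's exact modules aren't reproduced here, but a "global attention [that] reduces the intra-class differences by measuring the similarity of global features" is in the spirit of a standard non-local self-attention block. A hedged sketch of that generic form:

```python
import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    """Generic non-local attention: every spatial position attends to
    every other one via feature similarity, so globally consistent
    (pristine) regions reinforce each other and inpainted regions
    stand out. Not necessarily GIID-Net's exact module."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw) similarity
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x
```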


Open Problems in Cooperative AI

arXiv.org Artificial Intelligence

Problems of cooperation--in which agents seek ways to jointly improve their welfare--are ubiquitous and important. They can be found at scales ranging from our daily routines--such as driving on highways, scheduling meetings, and working collaboratively--to our global challenges--such as peace, commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate. Since machines powered by artificial intelligence are playing an ever greater role in our lives, it will be important to equip them with the capabilities necessary to cooperate and to foster cooperation. We see an opportunity for the field of artificial intelligence to explicitly focus effort on this class of problems, which we term Cooperative AI. The objective of this research would be to study the many aspects of the problems of cooperation and to innovate in AI to contribute to solving these problems. Central goals include building machine agents with the capabilities needed for cooperation, building tools to foster cooperation in populations of (machine and/or human) agents, and otherwise conducting AI research for insight relevant to problems of cooperation. This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms. However, Cooperative AI is not the union of these existing areas, but rather an independent bet about the productivity of specific kinds of conversations that involve these and other areas. We see opportunity to more explicitly focus on the problem of cooperation, to construct unified theory and vocabulary, and to build bridges with adjacent communities working on cooperation, including in the natural, social, and behavioural sciences.
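
The canonical formalization of a cooperation problem is the prisoner's dilemma, which underlies much of the game-theory work the paper draws on. A small illustrative simulation (my own example, not from the paper) of why myopically rational agents can fail to cooperate:

```python
# Payoffs for (my move, their move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one exploitation, then
                                         # both stall at mutual defection
```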


Council Post: The Importance Of Security Architecture And Attack Surface Analysis

#artificialintelligence

Automation, cloud-based systems, internet-enabled devices, API-centric environments -- all of these things within software application development have paved the way for greater enterprise efficiency, productivity and innovation. But they have also opened up new avenues for cybercriminals to target private, sensitive information and compromise the systems that process it. Security pros and hackers tend to stay neck and neck in a race against each other. As new security innovations emerge, hackers crop up almost immediately, finding new ways to get around them. The only way for the good guys to pull ahead in the race is to shift their security and risk management approach from reactive to proactive.


Cellular Automata in Stream Learning - KDnuggets

#artificialintelligence

This post is dedicated to John Horton Conway and Tom Fawcett, who recently passed away, for their noted contributions to the fields of cellular automata and machine learning. With the advent of fast data streams, real-time machine learning has become a challenging task. Such streams can be affected by concept drift, so stream learning methods have to detect changes and adapt to evolving conditions. Several emerging paradigms such as the so-called "Smart Dust", "Utility Fog", "TinyML" or "Swarm Robotics" are in need of efficient and scalable solutions in real-time scenarios. Cellular Automata (CA), as low-bias and noise-robust pattern recognition methods with competitive classification performance, meet the requirements imposed by the aforementioned paradigms, mainly due to their simplicity and parallel nature.
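
The post's specific algorithm isn't reproduced here, but the general flavor of a CA-based classifier is easy to sketch: discretize the feature space into a grid of cells, seed cells with the labels of incoming samples, and let a local neighborhood rule spread labels to empty cells. A hedged toy version for two features:

```python
import numpy as np

def fit_ca_grid(X, y, bins=10, steps=5):
    """Seed a 2-D label grid from (X, y), then run a majority-vote
    CA rule so unlabeled cells inherit labels from their neighbors."""
    edges = [np.linspace(X[:, d].min(), X[:, d].max(), bins + 1)[1:-1]
             for d in range(2)]
    grid = -np.ones((bins, bins), dtype=int)        # -1 = no label yet
    for x, label in zip(X, y):
        i, j = (np.digitize(x[d], edges[d]) for d in range(2))
        grid[i, j] = label                          # newest sample wins
    for _ in range(steps):                          # the CA update rule
        new = grid.copy()
        for i in range(bins):
            for j in range(bins):
                if grid[i, j] == -1:
                    neigh = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                    labels = neigh[neigh >= 0]
                    if labels.size:
                        new[i, j] = np.bincount(labels).argmax()
        grid = new
    return grid, edges

def predict(grid, edges, x):
    i, j = (np.digitize(x[d], edges[d]) for d in range(2))
    return grid[i, j]
```

In this toy version, drift adaptation comes for free: newer samples simply overwrite stale cell labels as the stream evolves.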


Adversarial Turing Patterns from Cellular Automata

arXiv.org Artificial Intelligence

State-of-the-art deep classifiers are intriguingly vulnerable to universal adversarial perturbations: single disturbances of small magnitude that lead to misclassification of most inputs. This phenomenon may result in serious security problems. Despite the extensive research in this area, there is a lack of theoretical understanding of the structure of these perturbations. In the image domain, there is a certain visual similarity between the patterns that represent these perturbations and classical Turing patterns, which appear as solutions of non-linear partial differential equations and underlie many processes in nature. In this paper, we provide a theoretical bridge between these two different theories by mapping a simplified algorithm for crafting universal perturbations to (inhomogeneous) cellular automata, which are known to generate Turing patterns. Furthermore, we propose to use Turing patterns generated by cellular automata as universal perturbations, and experimentally show that they significantly degrade the performance of deep learning models. We find this method to be a fast and efficient way to create a data-agnostic, quasi-imperceptible perturbation in the black-box scenario.
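
The paper crafts its patterns with cellular automata; for intuition, the classical route to Turing patterns is a reaction-diffusion system. A minimal Gray-Scott sketch (standard demo parameters, not the paper's method):

```python
import numpy as np

def laplacian(Z):
    """Discrete Laplacian on a wrap-around grid."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
U[60:68, 60:68], V[60:68, 60:68] = 0.5, 0.25    # seed a local disturbance
Du, Dv, f, k = 0.16, 0.08, 0.035, 0.065         # spot-forming regime

for _ in range(5000):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + f * (1 - U)
    V += Dv * laplacian(V) + uvv - (f + k) * V
# V now holds a quasi-periodic Turing pattern; scaled down to a small
# magnitude, such a pattern is the kind of data-agnostic perturbation
# the paper evaluates.
```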