
Collaborating Authors

 Li, Tzu-Mao


HotSpot: Screened Poisson Equation for Signed Distance Function Optimization

arXiv.org Artificial Intelligence

A standard regularization loss is based on the eikonal equation: it constrains the norm of the gradient of an implicit function to be 1 almost everywhere, with the aim of ensuring that the implicit function indeed outputs the signed distance. If the implicit function is a signed distance function, then it satisfies the eikonal equation; however, the converse is not true. Figure 1 shows an example: on the left, we optimize an implicit function to satisfy the eikonal equation, and while it successfully does so, it converges to a solution that is far from the actual distance [5, 6]. Existing losses such as the eikonal loss therefore cannot guarantee the recovered implicit function to be a distance function, even when the implicit function satisfies the eikonal equation almost everywhere. Furthermore, the eikonal loss suffers from stability issues in optimization, and the remedies that introduce area or divergence minimization can lead to oversmoothing. We address these challenges by designing a loss function that, when minimized, can converge to the true distance function, is stable, and naturally penalizes large surface area. We provide theoretical analysis supporting these claims.
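
For reference, the eikonal constraint and a standard form of the eikonal regularizer can be written as follows (generic notation, not taken from the paper), with f the implicit function on a domain \Omega:

    \|\nabla f(x)\| = 1 \quad \text{for almost every } x \in \Omega,
    \qquad
    \mathcal{L}_{\mathrm{eik}}(f) = \int_{\Omega} \bigl(\|\nabla f(x)\| - 1\bigr)^{2}\, dx.

A true signed distance function satisfies the constraint on the left, but a minimizer of the loss on the right need not be a distance function, which is exactly the failure mode described above.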


WatChat: Explaining perplexing programs by debugging mental models

arXiv.org Artificial Intelligence

Often, a good explanation for a program's unexpected behavior is a bug in the programmer's code. But sometimes, an even better explanation is a bug in the programmer's mental model of the language they are using. Instead of merely debugging our current code ("giving the programmer a fish"), what if our tools could directly debug our mental models ("teaching the programmer to fish")? In this paper, we apply ideas from computational cognitive science to do exactly that. Given a perplexing program, we use program synthesis techniques to automatically infer potential misconceptions that might cause the user to be surprised by the program's behavior. By analyzing these misconceptions, we provide succinct, useful explanations of the program's behavior. Our methods can even be inverted to synthesize pedagogical example programs for diagnosing and correcting misconceptions in students.
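
As a purely illustrative analogue in Python (my example, not one from the paper), a "perplexing program" can arise from a mental-model bug rather than a code bug, such as the common misconception that a default argument is re-created on every call:

    def append_item(item, items=[]):
        # The default list is created once, when the function is defined,
        # and is shared across calls; a common mental-model bug is to
        # assume a fresh list is created on every call.
        items.append(item)
        return items

    print(append_item(1))   # [1]
    print(append_item(2))   # [1, 2] -- surprising under the "fresh list" model

An explanation pitched at the misconception ("default arguments are evaluated once, at definition time") is more useful here than a line-by-line trace of the code.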


Differentiable Visual Computing for Inverse Problems and Machine Learning

arXiv.org Artificial Intelligence

Originally designed for applications in computer graphics, visual computing (VC) methods synthesize information about physical and virtual worlds, using prescribed algorithms optimized for spatial computing. VC is used to analyze geometry, physically simulate solids, fluids, and other media, and render the world via optical techniques. These fine-tuned computations, which operate explicitly on a given input, solve so-called forward problems, at which VC excels. By contrast, deep learning (DL) allows for the construction of general algorithmic models, sidestepping the need for a purely first-principles-based approach to problem solving. DL is powered by highly parameterized neural network architectures -- universal function approximators -- and gradient-based search algorithms which can efficiently search that large parameter space for optimal models. This approach is predicated on neural network differentiability: the requirement that analytic derivatives of a given problem's task metric can be computed with respect to the neural network's parameters. Neural networks excel when an explicit model is not known, and neural network training solves an inverse problem in which a model is computed from data. While VC provides a strong inductive bias about the dynamics of real-world phenomena -- one that would otherwise have to be learned from scratch in a pure DL context -- its inability to adapt its mathematical models based on observations of real-world phenomena precludes its direct integration into larger DL-based systems.
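
The differentiability requirement can be made concrete with a toy sketch of my own (not from the paper): for a linear model and a squared-error task metric, the analytic gradient with respect to the parameters is available in closed form and can be followed directly.

    import numpy as np

    # Toy "task metric": squared error of a linear model y_hat = x @ w.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 3))
    y = x @ np.array([1.0, -2.0, 0.5])

    w = np.zeros(3)
    for _ in range(200):
        residual = x @ w - y                  # forward evaluation
        grad = 2.0 * x.T @ residual / len(x)  # analytic d(metric)/dw
        w -= 0.1 * grad                       # gradient-based search step
    print(w)                                  # close to [1.0, -2.0, 0.5]

Differentiable visual computing asks for the same property from geometry, simulation, and rendering operators, so that they can sit inside such a gradient-based loop.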


Inferring the Future by Imagining the Past

arXiv.org Artificial Intelligence

A single panel of a comic book can say a lot: it can depict not only where the characters currently are, but also their motions, their motivations, their emotions, and what they might do next. More generally, humans routinely infer complex sequences of past and future events from a *static snapshot* of a *dynamic scene*, even in situations they have never seen before. In this paper, we model how humans make such rapid and flexible inferences. Building on a long line of work in cognitive science, we offer a Monte Carlo algorithm whose inferences correlate well with human intuitions in a wide variety of domains, while only using a small, cognitively-plausible number of samples. Our key technical insight is a surprising connection between our inference problem and Monte Carlo path tracing, which allows us to apply decades of ideas from the computer graphics community to this seemingly-unrelated theory of mind task.
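
One generic way to write the inference problem described here (my notation, not necessarily the paper's factorization) is as a posterior over event sequences given the observed snapshot:

    p(\text{past}, \text{future} \mid \text{snapshot}) \;\propto\;
    p(\text{snapshot} \mid \text{past})\; p(\text{future} \mid \text{past})\; p(\text{past}),

with the posterior approximated by a Monte Carlo estimate built from a small number of sampled event sequences.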


Acting as Inverse Inverse Planning

arXiv.org Artificial Intelligence

Great storytellers know how to take us on a journey. They direct characters to act -- not necessarily in the most rational way -- but rather in a way that leads to interesting situations, and ultimately creates an impactful experience for audience members looking on. If audience experience is what matters most, then can we help artists and animators *directly* craft such experiences, independent of the concrete character actions needed to evoke those experiences? In this paper, we offer a novel computational framework for such tools. Our key idea is to optimize animations with respect to *simulated* audience members' experiences. To simulate the audience, we borrow an established principle from cognitive science: that human social intuition can be modeled as "inverse planning," the task of inferring an agent's (hidden) goals from its (observed) actions. Building on this model, we treat storytelling as "*inverse* inverse planning," the task of choosing actions to manipulate an inverse planner's inferences. Our framework is grounded in literary theory, naturally capturing many storytelling elements from first principles. We give a series of examples to demonstrate this, with supporting evidence from human subject studies.
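
In schematic form (my notation, not the paper's): inverse planning infers a hidden goal g from observed actions a_{1:T}, and inverse inverse planning chooses the actions so as to shape that inference:

    p(g \mid a_{1:T}) \;\propto\; p(a_{1:T} \mid g)\, p(g)
    \qquad\text{(inverse planning)}

    a^{*}_{1:T} \;=\; \operatorname*{arg\,max}_{a_{1:T}} \; U\!\bigl(p(g \mid a_{1:T})\bigr)
    \qquad\text{(inverse inverse planning)}

where U is some objective over the audience's inferred beliefs as the story unfolds; the paper grounds such objectives in literary theory.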


Adversarial examples within the training distribution: A widespread challenge

arXiv.org Artificial Intelligence

Despite a plethora of proposed theories, understanding why deep neural networks are susceptible to adversarial attacks remains an open question. A promising recent strand of research investigates adversarial attacks within the training data distribution, providing a more stringent and worrisome definition for these attacks. These theories posit that the key issue is that in high-dimensional datasets, most data points are close to the ground-truth class boundaries. This has been shown in theory for some simple data distributions, but it is unclear if this theory is relevant in practice. Here, we demonstrate the existence of in-distribution adversarial examples for object recognition. This result provides evidence supporting theories attributing adversarial examples to the proximity of data to ground-truth class boundaries, and calls into question other theories which do not account for this more stringent definition of adversarial attacks. These experiments are enabled by our novel gradient-free, evolutionary-strategies (ES) based approach for finding in-distribution adversarial examples in 3D rendered objects, which we call CMA-Search.
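
The sketch below shows the black-box search pattern such an approach relies on; it is a deliberately simplified isotropic evolutionary strategy of my own (no covariance adaptation, unlike CMA), and the rendering-plus-classifier objective is stood in for by a toy function rather than the authors' pipeline.

    import numpy as np

    def es_search(objective, start, sigma=0.05, pop=16, iters=200, seed=0):
        # Gradient-free evolutionary search: sample Gaussian perturbations
        # of the current parameters and keep the best candidate seen so far.
        rng = np.random.default_rng(seed)
        params = np.asarray(start, dtype=float)
        best = objective(params)
        for _ in range(iters):
            candidates = params + rng.normal(scale=sigma, size=(pop, params.size))
            scores = np.array([objective(c) for c in candidates])
            if scores.min() < best:
                best = float(scores.min())
                params = candidates[np.argmin(scores)]
        return params, best

    # In the paper's setting the objective would render the 3D object with the
    # candidate camera/lighting parameters and return the classifier's
    # confidence in the true class; here a toy quadratic stands in for it.
    toy_objective = lambda p: float(np.sum((p - 1.0) ** 2))
    print(es_search(toy_objective, start=np.zeros(4)))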


DiffTaichi: Differentiable Programming for Physical Simulation

arXiv.org Machine Learning

We study the problem of learning and optimizing through physical simulations via differentiable programming. We present DiffTaichi, a new differentiable programming language tailored for building high-performance differentiable physical simulations. We demonstrate the performance and productivity of our language in gradient-based learning and optimization tasks on 10 different physical simulators. For example, a differentiable elastic object simulator written in our language is 4.2x shorter than the hand-engineered CUDA version yet runs as fast, and is 188x faster than the TensorFlow implementation. Using our differentiable programs, neural network controllers are typically optimized within only tens of iterations. Finally, we share the lessons learned from developing these simulators; in particular, differentiating physical simulators does not always yield useful gradients of the physical system being simulated. We systematically study the underlying reasons and propose solutions to improve gradient quality.
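
To make "differentiating through a simulator" concrete, here is a toy hand-written example of mine (plain Python, not DiffTaichi code): a point mass is advanced with explicit Euler steps, and the gradient of a terminal loss with respect to a control force is obtained by running the chain rule backwards through those steps (a reverse-mode / adjoint pass).

    # Toy differentiable simulation: a point mass pushed by a constant force u.
    dt, steps, target = 0.01, 100, 1.0

    def simulate(u):
        x, v = 0.0, 0.0
        for _ in range(steps):
            x, v = x + dt * v, v + dt * u    # explicit Euler step
        return x

    def grad_simulate(u):
        # Hand-written adjoint pass for loss = (x_T - target)**2.
        ax, av, au = 2.0 * (simulate(u) - target), 0.0, 0.0
        for _ in range(steps):
            au += dt * av                     # dL/du accumulated per step
            av += dt * ax                     # adjoint of the velocity
        return au

    u = 0.0
    for _ in range(50):
        u -= 1.0 * grad_simulate(u)           # gradient descent on the force
    print(u, simulate(u))                     # simulate(u) approaches the target

A language like DiffTaichi generates such reverse passes automatically for much larger simulators; the abstract's caveat is that the resulting gradients, while correct, are not always useful for optimization.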