
Appendix

Neural Information Processing Systems

According to Alg. 2, in each exploration, at least one leaf node will be expanded. Moreover, the overall size of the belief tree is O((|A| min(P^δ_max, N_max))^D), where N_max is the maximum sample size given by KLD-Sampling, P^δ_max = sup_{b,a} P^δ(Y_{b,a}), and Y_{b,a} is the set of reachable beliefs after executing action a at belief b. The tree size is limited since N_max is finite. The weights are normalized. There exist bounded functions α and α′ such that V(b) = ∫ α(s) b(s) ds and V(b′) = ∫ α′(s) b′(s) ds. We can bound the first and third terms, respectively, by λ in light of the assumptions.
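The belief-tree size bound O((|A| min(P^δ_max, N_max))^D) can be evaluated directly once its quantities are fixed. A minimal sketch, using purely hypothetical values for the action-space size, P^δ_max, N_max, and the depth D:

```python
# Sketch of the belief-tree size bound O((|A| * min(P_max, N_max))^D).
# All numeric values below are hypothetical, chosen only for illustration.

def tree_size_bound(num_actions: int, p_max: int, n_max: int, depth: int) -> int:
    """Upper bound on the number of belief nodes in a tree of the given depth.

    Each node branches over the actions, and per action the number of child
    beliefs is capped by both the reachable-belief count and the KLD-Sampling
    maximum sample size, whichever is smaller.
    """
    branching = num_actions * min(p_max, n_max)
    return branching ** depth

# Hypothetical setting: 4 actions, at most 10 reachable belief sets per
# action, N_max = 6 samples from KLD-Sampling, planning depth 3.
print(tree_size_bound(4, 10, 6, 3))  # branching = 4 * min(10, 6) = 24; 24**3 = 13824
```

This makes the finiteness argument concrete: once N_max is finite, the per-node branching factor, and hence the whole tree, is bounded.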




323746f0ae2fbd8b6f500dc2d5c5f898-Paper-Conference.pdf

Neural Information Processing Systems

Hence, in this infinite-width limit, it suffices that the smallest eigenvalue of the NTK is bounded away from 0 for gradient descent to reach zero loss.
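The mechanism behind this claim can be illustrated with linearized gradient-descent dynamics on a fixed kernel K: the residual evolves as r_{t+1} = (I - η K) r_t, so each eigencomponent contracts by (1 - η λ), and a smallest eigenvalue bounded away from 0 forces the loss ||r_t||^2 to 0. A minimal numerical sketch with a hypothetical positive-definite kernel (not the paper's setup):

```python
import numpy as np

# Linearized GD on residuals: r_{t+1} = (I - eta*K) r_t for a PSD kernel K.
# Because K's smallest eigenvalue is bounded away from 0 (the +0.5*I term),
# every eigencomponent of r contracts and the loss ||r||^2 goes to zero.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
K = A @ A.T + 0.5 * np.eye(5)        # strictly positive definite: lambda_min >= 0.5

eta = 1.0 / np.linalg.eigvalsh(K).max()  # step size small enough to contract
r = rng.normal(size=5)                   # initial residual (predictions - targets)
for _ in range(1000):
    r = r - eta * (K @ r)
print(float(r @ r))                      # driven toward zero
```

If instead K were singular (smallest eigenvalue 0), the component of r in the null space would never shrink and the loss would plateau above zero.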


Model Inversion Networks for Model-Based Optimization

Neural Information Processing Systems

This work addresses data-driven optimization problems, where the goal is to find an input that maximizes an unknown score or reward function given access to a dataset of inputs with corresponding scores. When the inputs are high-dimensional and valid inputs constitute a small subset of this space (e.g., valid protein sequences or valid natural images), such model-based optimization problems become exceptionally difficult, since the optimizer must avoid out-of-distribution and invalid inputs. We propose to address such problems with model inversion networks (MINs), which learn an inverse mapping from scores to inputs. MINs can scale to high-dimensional input spaces and leverage offline logged data for both contextual and non-contextual optimization problems. MINs can also handle both purely offline data sources and active data collection. We evaluate MINs on high-dimensional model-based optimization problems over images, protein designs, and neural network controller parameters, and bandit optimization from logged data.
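The core idea of inverting the score-to-input direction can be shown in miniature. The sketch below fits a linear least-squares inverse map on logged (input, score) pairs and queries it at a high target score; this toy linear model is purely illustrative, since MINs use deep generative models over high-dimensional inputs:

```python
import numpy as np

# Toy sketch of model inversion: instead of fitting a forward model f(x) ~ y
# and optimizing over x, fit an *inverse* map g(y) -> x from logged data and
# query it at a score beyond the best observed one. Linear/noiseless setting
# chosen only so the idea is visible in a few lines.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))             # logged inputs
w = rng.normal(size=8)                    # hidden "true" scoring weights
y = X @ w                                 # unknown score function (here: linear)

# Fit inverse map x ~ G @ [y, 1] by least squares on the logged pairs.
Y = np.stack([y, np.ones_like(y)], axis=1)
G, *_ = np.linalg.lstsq(Y, X, rcond=None)

y_target = y.max() * 1.1                  # ask for a score beyond the best seen
x_star = np.array([y_target, 1.0]) @ G    # proposed input from the inverse map
print(float(x_star @ w))                  # achieved score, close to y_target
```

Even this linear toy shows the appeal: the inverse map proposes an input directly, without an explicit search over the (possibly mostly invalid) input space.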


A computational system to handle the orthographic layer of tajwid in contemporary Quranic Orthography

Martínez, Alicia González

arXiv.org Artificial Intelligence

Contemporary Quranic Orthography (CQO) relies on a precise system of phonetic notation that can be traced back to the early stages of Islam, when the Quran was mainly oral in nature and the first written renderings of it served as memory aids for this oral tradition. The early systems of diacritical marks created on top of the Quranic Consonantal Text (QCT) motivated the creation and further development of a fine-grained system of phonetic notation that represented tajwid, the rules of recitation. We explored the systematicity of the rules of tajwid, as they are encountered in the Cairo Quran, using a fully and accurately encoded digital edition of the Quranic text. For this purpose, we developed a Python module that can remove or add the orthographic layer of tajwid from a Quranic text in CQO. The interesting characteristic of these two sets of rules is that they address the complete Quranic text of the Cairo Quran, so they can be used as precise witnesses to study its phonetic and prosodic processes. From a computational point of view, the text of the Cairo Quran can be used as a linchpin to align and compare Quranic manuscripts, due to its richness and completeness. This will let us create a very powerful framework to work with the Arabic script, not just within an isolated text, but automatically exploring a specific textual phenomenon in other connected manuscripts. Having all the texts mapped among each other can serve as a powerful tool to study the nature of the notation systems of diacritics added to the consonantal skeleton.
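The "remove the orthographic layer" direction of such a module can be sketched with Unicode ranges. The mark set below (standard harakat, superscript alef, and the Quranic annotation signs block) is an assumption for illustration; the module described above works with a far more fine-grained inventory of CQO tajwid marks:

```python
import re

# Hedged sketch: strip a diacritic layer from Arabic text, leaving the
# consonantal skeleton. The ranges used here -- harakat and tanwin
# (U+064B-U+065F), superscript alef (U+0670), and Quranic annotation signs
# (U+06D6-U+06ED) -- are a plausible approximation, not the actual CQO set.
TAJWID_MARKS = re.compile(r'[\u064B-\u065F\u0670\u06D6-\u06ED]')

def strip_marks(text: str) -> str:
    """Remove vocalization/tajwid marks from Arabic text."""
    return TAJWID_MARKS.sub('', text)

print(strip_marks('بِسْمِ'))  # -> 'بسم' (kasra and sukun removed)
```

Re-adding the layer is the genuinely hard direction, since it requires aligning a fully vocalized witness (such as the Cairo Quran text) against the bare skeleton.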


Eurekaverse: Environment Curriculum Generation via Large Language Models

Liang, William, Wang, Sam, Wang, Hung-Ju, Bastani, Osbert, Jayaraman, Dinesh, Ma, Yecheng Jason

arXiv.org Artificial Intelligence

Recent work has demonstrated that a promising strategy for teaching robots a wide range of complex skills is by training them on a curriculum of progressively more challenging environments. However, developing an effective curriculum of environment distributions currently requires significant expertise, which must be repeated for every new domain. Our key insight is that environments are often naturally represented as code. Thus, we probe whether effective environment curriculum design can be achieved and automated via code generation by large language models (LLMs). In this paper, we introduce Eurekaverse, an unsupervised environment design algorithm that uses LLMs to sample progressively more challenging, diverse, and learnable environments for skill training. We validate Eurekaverse's effectiveness in the domain of quadrupedal parkour learning, in which a quadruped robot must traverse a variety of obstacle courses. The automatic curriculum designed by Eurekaverse enables gradual learning of complex parkour skills in simulation and can successfully transfer to the real world, outperforming manual training courses designed by humans.
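The generate-train-select loop described above can be sketched schematically. Here `llm_generate_env_code` and `train_and_score` are stubs standing in for a real LLM call and a real policy-training run; this illustrates the shape of an LLM-driven environment curriculum, not Eurekaverse's actual implementation:

```python
import random
from typing import Optional

# Schematic LLM-driven environment curriculum loop. Both functions below are
# stubs: a real system would prompt an LLM for environment *code* and train a
# policy in the generated environment.
def llm_generate_env_code(parent: Optional[str], difficulty: int) -> str:
    """Stub for an LLM call that evolves an environment from a parent."""
    return f"env(difficulty={difficulty}, parent={parent!r})"

def train_and_score(policy: float, env: str) -> float:
    """Stub for training: a learnable environment improves the policy a bit."""
    return policy + random.uniform(0.0, 0.1)

policy, best_env = 0.0, None
for stage in range(5):                     # progressively harder stages
    # Sample several candidate environments evolved from the current best.
    candidates = [llm_generate_env_code(best_env, stage) for _ in range(4)]
    # Keep the environment on which training improves the policy the most.
    scored = [(train_and_score(policy, env), env) for env in candidates]
    policy, best_env = max(scored)
print(policy, best_env)
```

The key structural choices mirrored here are that environments are plain code (so an LLM can write and mutate them) and that the curriculum advances by selecting for environments that are both harder and still learnable.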


Revisiting Multi-Modal LLM Evaluation

Lu, Jian, Srivastava, Shikhar, Chen, Junyu, Shrestha, Robik, Acharya, Manoj, Kafle, Kushal, Kanan, Christopher

arXiv.org Artificial Intelligence

With the advent of multi-modal large language models (MLLMs), datasets used for visual question answering (VQA) and referring expression comprehension have seen a resurgence. However, the most popular datasets used to evaluate MLLMs are some of the earliest ones created, and they have many known problems, including extreme bias, spurious correlations, and an inability to permit fine-grained analysis. In this paper, we pioneer evaluating recent MLLMs (LLaVA 1.5, LLaVA-NeXT, BLIP2, InstructBLIP, GPT-4V, and GPT-4o) on datasets designed to address weaknesses in earlier ones. We assess three VQA datasets: 1) TDIUC, which permits fine-grained analysis on 12 question types; 2) TallyQA, which has simple and complex counting questions; and 3) DVQA, which requires optical character recognition for chart understanding. We also study VQDv1, a dataset that requires identifying all image regions that satisfy a given query. Our experiments reveal the weaknesses of many MLLMs that have not previously been reported. Our code is integrated into the widely used LAVIS framework for MLLM evaluation, enabling the rapid assessment of future MLLMs.