Plotting


Neural Information Processing Systems

Thank you to all the reviewers for their detailed reviews. We address specific concerns below.

This amounts to saying, "my classifier should be equally good on all classes, except the extremely rare ones which ..."

Reviewer 2 [results from a single dataset]: As you point out, dataset availability is a challenge. In short, we found that EGAL's performance degrades to that of the standard approaches as the ...

Reviewer 3: Thank you for picking up those typos; we will correct them in the final draft. We will make this clear in the camera-ready.


OpenAI Can Stop Pretending

The Atlantic - Technology

OpenAI is a strange company for strange times. Valued at $300 billion--roughly the same as seven Fords or one and a half PepsiCos--the AI start-up has an era-defining product in ChatGPT and is racing to be the first to build superintelligent machines. The company is also, to the apparent frustration of its CEO Sam Altman, beholden to its nonprofit status. When OpenAI was founded in 2015, it was meant to be a research lab that would work toward the goal of AI that is "safe" and "benefits all of humanity." There wasn't supposed to be any pressure--or desire, really--to make money.





On the Worst Prompt Performance of Large Language Models

Neural Information Processing Systems

The performance of large language models (LLMs) is acutely sensitive to the phrasing of prompts, which raises significant concerns about their reliability in real-world scenarios. Existing studies often divide prompts into task-level instructions and case-level inputs and primarily focus on evaluating and improving robustness against variations in task-level instructions. However, this setup fails to fully address the diversity of real-world user queries and assumes the existence of task-specific datasets.
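The evaluation protocol the abstract alludes to can be sketched concretely: score a model on several paraphrases of each case-level query and report the average of the per-case *worst* scores, since a deployed model is only as reliable as its worst phrasing. This is an illustrative sketch, not the paper's benchmark; `worst_prompt_accuracy` and the `model` callable are hypothetical names.

```python
def worst_prompt_accuracy(model, cases):
    """Worst-case accuracy over prompt paraphrases (illustrative sketch).

    model: callable, model(prompt) -> predicted answer.
    cases: list of (paraphrases, answer) pairs, where paraphrases are
           rewordings of the same case-level query.

    For each case, every paraphrase is scored and only the minimum is
    kept; the function returns the mean of these per-case minima.
    """
    worst = []
    for paraphrases, answer in cases:
        scores = [1.0 if model(p) == answer else 0.0 for p in paraphrases]
        worst.append(min(scores))          # worst phrasing of this query
    return sum(worst) / len(worst)
```

Under this metric a model that answers one phrasing correctly but fails a paraphrase of the same query gets no credit for that case, which is exactly the sensitivity the abstract highlights.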


Appendix for Rethinking Variational Inference for Probabilistic Programs with Stochastic Support Tim Reichelt 1 Luke Ong 1,2 Tom Rainforth

Neural Information Processing Systems

B.1 Background on Successive Halving

Successive Halving (SH) divides a total budget of T iterations into L = ⌈log₂ K⌉ rounds, where K is the number of candidates. This results in an exponential distribution of resources allocated to the different candidates, with more resources allocated to those that are more promising after intermediate evaluation. Adapting it to our setting of treating the problem as a top-m identification is done by simply using L = ⌈log₂(K/m)⌉ rounds instead.

The online variant of the algorithm is useful if a user is unsure about the total iteration budget that they want to spend on the input program. We therefore need to adapt Algo. 1 so that it can be 'restarted' after it has terminated. A naive approach to this would be to simply run Algo. 1 again but re-use the q's for the SLPs that have already been discovered and only initialize the q's for the newly discovered SLPs. However, this scheme is limited as it disproportionately favours SLPs which were discovered in the previous run. This is because for those SLPs the local ELBOs will already be relatively large compared to the newly added SLPs.
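The top-m variant of Successive Halving described above can be sketched as follows. This is a minimal sketch, assuming L = ⌈log₂(K/m)⌉ halving rounds with the budget split evenly across rounds; the function and parameter names are ours, not the paper's.

```python
import math

def successive_halving_top_m(candidates, evaluate, total_budget, m=1):
    """Successive Halving adapted to top-m identification (sketch).

    candidates:   list of K arms (e.g. candidate SLPs).
    evaluate:     evaluate(arm, n_iters) -> score, higher is better.
    total_budget: total number of iterations T to spend.
    m:            number of candidates to return.

    Runs L = ceil(log2(K / m)) rounds; each round spends an equal share
    of the budget on the surviving pool, then discards the worse half
    (never dropping below m survivors).
    """
    pool = list(candidates)
    K = len(pool)
    L = max(1, math.ceil(math.log2(K / m)))
    per_round = total_budget // L
    for _ in range(L):
        if len(pool) <= m:
            break
        iters = max(1, per_round // len(pool))   # budget per survivor
        scores = {arm: evaluate(arm, iters) for arm in pool}
        pool.sort(key=lambda a: scores[a], reverse=True)
        pool = pool[:max(m, len(pool) // 2)]     # keep the better half
    return pool[:m]
```

Because the pool halves each round, early rounds spend few iterations on many candidates while later rounds concentrate the budget on the promising ones, matching the exponential allocation described above.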


Rethinking Variational Inference for Probabilistic Programs with Stochastic Support Tim Reichelt 1 Luke Ong 1,2 Tom Rainforth

Neural Information Processing Systems

We introduce Support Decomposition Variational Inference (SDVI), a new variational inference (VI) approach for probabilistic programs with stochastic support. Existing approaches to this problem rely on designing a single global variational guide on a variable-by-variable basis, while maintaining the stochastic control flow of the original program. SDVI instead breaks the program down into sub-programs with static support, before automatically building separate sub-guides for each. This decomposition significantly aids in the construction of suitable variational families, enabling, in turn, substantial improvements in inference performance.
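The decomposition idea can be illustrated on a toy program with one stochastic branch: each branch is a sub-program (SLP) with static support, each gets its own sub-guide, and the sub-guides are recombined by weighting each SLP with its local ELBO. This is a hand-rolled sketch of that recombination step only, with guide parameters fixed by hand rather than optimised; all names here (`log_joint`, `local_elbo`) are ours, not SDVI's API.

```python
import math
import random

# Toy model with stochastic support:
#   z ~ Bernoulli(0.5)
#   if z == 0: x ~ Normal(-2, 1)   # SLP 0: support {z=0} x R
#   else:      x ~ Normal(+3, 1)   # SLP 1: support {z=1} x R
#   obs ~ Normal(x, 1)

def log_joint(z, x, obs=1.0):
    mu = -2.0 if z == 0 else 3.0
    lp = math.log(0.5)                                         # branch prior
    lp += -0.5 * (x - mu) ** 2 - 0.5 * math.log(2 * math.pi)   # prior on x
    lp += -0.5 * (obs - x) ** 2 - 0.5 * math.log(2 * math.pi)  # likelihood
    return lp

def local_elbo(slp_z, guide_mu, guide_sigma, n=2000, obs=1.0, seed=0):
    """Monte Carlo ELBO for one sub-program (branch fixed to slp_z),
    using a Gaussian sub-guide q(x) = Normal(guide_mu, guide_sigma)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(guide_mu, guide_sigma)
        log_q = (-0.5 * ((x - guide_mu) / guide_sigma) ** 2
                 - math.log(guide_sigma) - 0.5 * math.log(2 * math.pi))
        total += log_joint(slp_z, x, obs) - log_q
    return total / n

# One sub-guide per SLP (parameters set by hand; SDVI would optimise
# each with standard static-support VI). The local ELBOs are then
# normalised to give the posterior mass assigned to each SLP.
elbos = [local_elbo(0, -0.5, 0.7), local_elbo(1, 2.0, 0.7)]
norm = math.log(sum(math.exp(e) for e in elbos))
weights = [math.exp(e - norm) for e in elbos]
```

With the observation at 1.0, the z = 1 branch (prior mean +3) explains the data better, so its local ELBO, and hence its mixture weight, dominates.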


SpeAr: A Spectral Approach for Zero-Shot Node Classification

Neural Information Processing Systems

Zero-shot node classification is a vital task in the field of graph data processing, aiming to identify nodes of classes unseen during the training process. Prediction bias is one of the primary challenges in zero-shot node classification, referring to the model's propensity to misclassify nodes of unseen classes as seen classes. However, most methods mitigate this bias by introducing external knowledge, and thus fail to adequately leverage the inherent cluster information within the unlabeled nodes. To address this issue, we employ spectral analysis coupled with learnable class prototypes to discover the implicit cluster structures within the graph, providing a more comprehensive understanding of classes. In this paper, we propose a Spectral Approach for zero-shot node classification (SpeAr). Specifically, we establish an approximate relationship between minimizing the spectral contrastive loss and performing spectral decomposition on the graph, thereby enabling effective node characterization through loss minimization. Subsequently, the class prototypes are iteratively refined based on the learned node representations, initialized with the semantic vectors. Finally, extensive experiments verify the effectiveness of SpeAr, which can further alleviate the bias problem.
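The spectral contrastive loss the abstract refers to has a compact empirical form: pull embeddings of connected (positive) node pairs together via their inner product, and push embeddings of random pairs apart via squared inner products. A minimal numpy sketch of that loss, assuming positive pairs come from graph neighbours (the function name and batch layout are our assumptions, not SpeAr's code):

```python
import numpy as np

def spectral_contrastive_loss(z_pos_a, z_pos_b, z_neg):
    """Empirical spectral contrastive loss (sketch).

    z_pos_a, z_pos_b: (n, d) embeddings of positive pairs, e.g. a node
                      and one of its graph neighbours.
    z_neg:            (m, d) embeddings of independently drawn nodes.

    Minimising  -2 E[f(x)^T f(x+)] + E[(f(x)^T f(x'))^2]  approximates a
    spectral decomposition of the (augmentation/adjacency) graph, which
    is the connection the abstract exploits.
    """
    pull = -2.0 * np.mean(np.sum(z_pos_a * z_pos_b, axis=1))  # attract positives
    gram = z_neg @ z_neg.T                 # pairwise inner products
    push = np.mean(gram ** 2)              # repel random pairs
    return pull + push
```

In a SpeAr-style pipeline the representations learned by minimising this loss would then initialise and iteratively refine the class prototypes, as the abstract describes.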


Language-Conditioned Imitation Learning for Robot Manipulation Tasks Simon Stepputtis 1 Joseph Campbell 1 Mariano Phielipp 2 Stefan Lee

Neural Information Processing Systems

Imitation learning is a popular approach for teaching motor skills to robots. However, most approaches focus on extracting policy parameters from execution traces alone (i.e., motion trajectories and perceptual data). No adequate communication channel exists between the human expert and the robot to describe critical aspects of the task, such as the properties of the target object or the intended shape of the motion. Motivated by insights into the human teaching process, we introduce a method for incorporating unstructured natural language into imitation learning. At training time, the expert can provide demonstrations along with verbal descriptions in order to describe the underlying intent (e.g., "go to the large green bowl").
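The core architectural idea, conditioning the learned policy on an embedding of the verbal description alongside the perceptual state, can be sketched in a few lines. This is a deliberately minimal stand-in (a single linear layer), not the network the paper trains; all names and shapes here are illustrative assumptions.

```python
import numpy as np

def language_conditioned_policy(state, instruction_emb, W, b):
    """One step of a language-conditioned policy (illustrative sketch).

    state:           (s,) perceptual features of the current scene.
    instruction_emb: (e,) embedding of the verbal description,
                     e.g. "go to the large green bowl".
    W, b:            policy parameters, W of shape (a, s + e), b of (a,).

    The language embedding is concatenated with the state so the same
    policy can produce different motions for different instructions.
    """
    features = np.concatenate([state, instruction_emb])
    return np.tanh(W @ features + b)   # bounded motor command
```

At training time the parameters would be fit from demonstration trajectories paired with their descriptions; at test time a new instruction embedding steers the same policy toward a different target.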