
Neural Information Processing Systems

Our algorithm is based on an abstract (and simple) reduction to online convex optimization, which efficiently converts an arbitrary online convex optimizer to a boosting algorithm. Moreover, this reduction extends to the statistical as well as the online realizable settings, thus unifying the four cases of statistical/online and agnostic/realizable boosting.
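The reduction above takes any online convex optimizer as a black box. As an illustrative sketch (not the paper's construction), the following shows online gradient descent, the standard example of such an optimizer; the function name, the decaying step size, and the optional projection are assumptions for this example:

```python
import numpy as np

def online_gradient_descent(grads, x0, eta=0.1, proj=None):
    """Run online gradient descent, a standard online convex optimizer.

    grads: iterable of gradient callables g_t(x), one per round;
    x0: starting point; eta: base step size;
    proj: optional projection back onto the feasible set.
    Returns the list of iterates x_1, ..., x_T played by the learner.
    """
    x = np.asarray(x0, dtype=float)
    iterates = []
    for t, g in enumerate(grads, start=1):
        iterates.append(x.copy())           # play x_t, then observe gradient
        x = x - (eta / np.sqrt(t)) * g(x)   # decaying step size eta / sqrt(t)
        if proj is not None:
            x = proj(x)
    return iterates
```

Any optimizer with this play-then-update interface could in principle be plugged into such a reduction; the boosting algorithm itself supplies the loss sequence.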


Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning

Wang, Jinyuan, Li, Junlong, Zhao, Hai

arXiv.org Artificial Intelligence

In open-domain question-answering (ODQA), most existing questions require single-hop reasoning on commonsense. To further extend this task, we officially introduce open-domain multi-hop reasoning (ODMR) by answering multi-hop questions with explicit reasoning steps in an open-domain setting. Recently, large language models (LLMs) have found significant utility in facilitating ODQA without an external corpus. Furthermore, chain-of-thought (CoT) prompting boosts the reasoning capability of LLMs to a greater extent with manual or automated paradigms. However, existing automated methods lack quality assurance, while manual approaches suffer from limited scalability and poor diversity, hindering the capabilities of LLMs. In this paper, we propose Self-prompted Chain-of-Thought (SP-CoT), an automated framework to mass-produce high-quality CoTs of LLMs, by LLMs and for LLMs. SP-CoT introduces an automated generation pipeline of high-quality ODMR datasets, an adaptive sampler for in-context CoT selection, and self-prompted inference via in-context learning. Extensive experiments on four multi-hop question-answering benchmarks show that our proposed SP-CoT not only significantly surpasses the previous SOTA methods on large-scale (175B) LLMs, but also nearly doubles the zero-shot performance of small-scale (13B) LLMs. Further analysis reveals the remarkable capability of SP-CoT to elicit direct and concise intermediate reasoning steps by recalling $\sim$50\% of intermediate answers on the MuSiQue-Ans dataset.
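The in-context CoT selection step can be sketched as follows. This is a minimal illustration only: the function names, the bag-of-words cosine similarity, and the prompt layout are stand-ins assumed for this example, not the paper's adaptive sampler or prompt format.

```python
from collections import Counter
import math

def similarity(a, b):
    """Cosine similarity over bag-of-words counts -- a simple stand-in
    for an embedding-based similarity measure."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_cot_prompt(question, demos, k=2):
    """Pick the k demos most similar to the new question and assemble
    an in-context prompt: each demo contributes its question, its
    chain-of-thought, and its answer; the new question goes last."""
    ranked = sorted(demos, key=lambda d: similarity(question, d["question"]),
                    reverse=True)
    parts = [
        f"Q: {d['question']}\nA: {d['cot']} So the answer is {d['answer']}."
        for d in ranked[:k]
    ]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)
```

The assembled string would then be sent to an LLM, which continues the pattern by producing reasoning steps before its final answer.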


Wirelessly-Controlled Untethered Piezoelectric Planar Soft Robot Capable of Bidirectional Crawling and Rotation

Zheng, Zhiwu, Cheng, Hsin, Kumar, Prakhar, Wagner, Sigurd, Chen, Minjie, Verma, Naveen, Sturm, James C.

arXiv.org Artificial Intelligence

Electrostatic actuators provide a promising approach to creating soft robotic sheets, due to their flexible form factor, modular integration, and fast response speed. However, their control requires kilovolt signals and an understanding of the complex dynamics that result from on-board and environmental force interactions. In this work, we demonstrate an untethered planar five-actuator piezoelectric robot powered by batteries and on-board high-voltage circuitry, and controlled through a wireless link. The scalable fabrication approach is based on bonding different functional layers on top of each other (steel foil substrate, actuators, flexible electronics). The robot exhibits a range of controllable motions, including bidirectional crawling (up to ~0.6 cm/s), turning, and in-place rotation (at ~1 degree/s). High-speed videos and control experiments show that the richness of the motion results from the interaction of an asymmetric mass distribution in the robot and the associated dependence of the dynamics on the driving frequency of the piezoelectrics. The robot's speed can reach 6 cm/s with a specific payload distribution.



'Learning to see and learning to read': Artificial intelligence enters a new era

#artificialintelligence

For artificial intelligence to realize its potential -- to relieve humans from mundane tasks, make life easier, and eventually invent entirely new solutions to our problems -- computers will need to surpass us at two things that we humans do pretty well: see the world around us and understand our language. "Learning to see and learning to read are the two main things we need for the computer to do to gain knowledge," said Jen Rexford, chair of Princeton's computer science department and the Gordon Y.S. Wu Professor in Engineering. "We call these fields computer vision and natural language processing. These two fields have evolved independently but our faculty are bringing them together in interesting ways." In recent years, researchers at Princeton and beyond have made major strides in these two fields, opening up rapid progress across a variety of applications.