Inside Google's Two-Year Frenzy to Catch Up With OpenAI
That was how long Google was giving Sissie Hsiao: a hundred days to build a ChatGPT rival. By the time Hsiao took on the project in December 2022, she had spent more than 16 years at the company and led thousands of employees. Hsiao had seen her share of corporate crises, but nothing like the code red that had been brewing in the days since OpenAI, a small research lab, released its public experiment in artificial intelligence.
InterDreamer: Zero-Shot Text-to-3D Dynamic Human-Object Interaction
Ziyin Wang
Text-conditioned human motion generation has experienced significant advancements with diffusion models trained on extensive motion capture data and corresponding textual annotations. However, extending such success to 3D dynamic human-object interaction (HOI) generation faces notable challenges, primarily due to the lack of large-scale interaction data and comprehensive descriptions that align with these interactions. This paper takes the initiative to showcase the potential of generating human-object interactions without direct training on text-interaction pair data. Our key insight is that interaction semantics and dynamics can be decoupled. Since interaction semantics cannot be learned here through supervised training, we instead leverage pre-trained large models, synergizing knowledge from a large language model and a text-to-motion model. While such knowledge offers high-level control over interaction semantics, it cannot grasp the intricacies of low-level interaction dynamics. To overcome this issue, we introduce a world model designed to comprehend simple physics, modeling how human actions influence object motion. By integrating these components, our novel framework, InterDreamer, is able to generate text-aligned 3D HOI sequences without relying on paired text-interaction data. We apply InterDreamer to the BEHAVE, OMOMO, and CHAIRS datasets, and our comprehensive experimental analysis demonstrates its capability to generate realistic and coherent interaction sequences that seamlessly align with the text directives.
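The decoupling described in the abstract — high-level semantics from pre-trained models, low-level dynamics from a world model — can be illustrated with a minimal sketch. All function names here (`plan_interaction`, `generate_human_motion`, `world_model_step`) are hypothetical placeholders standing in for the LLM, the text-to-motion model, and the world model; they are not the authors' actual API.

```python
# Hypothetical sketch of InterDreamer's decoupled pipeline.
# Each stage is stubbed with trivial logic for illustration only.

def plan_interaction(text):
    """High-level semantics: an LLM would infer which body part contacts
    the object. Stubbed with a fixed plan."""
    return {"contact_part": "right_hand"}

def generate_human_motion(text, num_frames):
    """A pre-trained text-to-motion model would produce the human motion;
    stubbed as a hand rising 0.1 units per frame."""
    return [{"right_hand": (0.0, 0.0, 0.1 * t)} for t in range(num_frames)]

def world_model_step(obj_pos, pose, plan):
    """Low-level dynamics: the world model predicts how the human action
    moves the object; here the object simply tracks the contact part."""
    return pose[plan["contact_part"]]

def generate_hoi(text, num_frames=5):
    plan = plan_interaction(text)                      # semantics (LLM)
    motion = generate_human_motion(text, num_frames)   # human motion prior
    obj_pos, obj_traj = (0.0, 0.0, 0.0), []
    for pose in motion:                                # dynamics roll-out
        obj_pos = world_model_step(obj_pos, pose, plan)
        obj_traj.append(obj_pos)
    return motion, obj_traj
```

The point of the sketch is structural: no stage is trained on paired text-interaction data; the text only conditions the first two stages, while object motion is produced by rolling out the world model against the generated human motion.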
Appendix
The following lemma demonstrates the convergence of the SGD framework when the gradient estimator v(x) is unbiased and has bounded variance. In contrast, these papers assumed a bound on the norm of the gradient estimator v(x) itself; one can also refer to the recent survey [6] for more general results on SGD. When the variance of v(x) is of order O(ε), stepsizes independent of ε suffice to guarantee ε-optimality or ε-stationarity, and the algorithm behaves similarly to gradient descent. We prove the case where F(x) is convex; suppose the claim holds for iteration t. This section analyzes the bias, variance, and per-iteration cost of the L-SGD and MLMC-based gradient estimators.
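The setting above — SGD driven by an unbiased estimator with bounded variance — can be demonstrated concretely. The sketch below uses a simple quadratic F(x) = x²/2, whose true gradient is x, and an assumed Gaussian-noise estimator with variance σ²; it is an illustration of the lemma's assumptions, not of the paper's specific estimators.

```python
import random

def sgd(grad_estimator, x0, stepsize, iters):
    """Plain SGD: x_{t+1} = x_t - eta * v(x_t)."""
    x = x0
    for _ in range(iters):
        x -= stepsize * grad_estimator(x)
    return x

# F(x) = 0.5 * x**2, so the true gradient is x.
# v(x) = x + N(0, sigma^2) is unbiased with variance sigma^2 (bounded).
random.seed(0)
sigma = 0.1
noisy_grad = lambda x: x + random.gauss(0.0, sigma)

x_final = sgd(noisy_grad, x0=5.0, stepsize=0.1, iters=500)
```

With a small constant stepsize, the iterate contracts toward the minimizer x* = 0 and then fluctuates in a noise ball whose radius scales with the stepsize and σ, which is the behavior the lemma quantifies.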
HYDRA: Pruning Adversarially Robust Neural Networks
In safety-critical but computationally resource-constrained applications, deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size (often millions of parameters). While the research community has extensively explored the use of robust training and network pruning independently to address one of these challenges, only a few recent works have studied them jointly. However, these works inherit a heuristic pruning strategy that was developed for benign training, which performs poorly when integrated with robust training techniques, including adversarial training and verifiable robust training. To overcome this challenge, we propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune. We realize this insight by formulating the pruning objective as an empirical risk minimization problem which is solved efficiently using SGD.
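The core idea — ranking connections by a learned importance score rather than by a heuristic such as weight magnitude — can be sketched as a top-k selection step. The helper below is a hypothetical illustration: in HYDRA the scores themselves are optimized with SGD on the robust training loss, which is the part this sketch omits.

```python
import numpy as np

def prune_mask(scores, sparsity):
    """Return a binary mask keeping the top-(1 - sparsity) fraction of
    connections, ranked by |score|. A sketch of the selection step only;
    not the authors' implementation."""
    k = int(round(scores.size * (1.0 - sparsity)))
    if k == 0:
        return np.zeros_like(scores, dtype=float)
    thresh = np.sort(np.abs(scores).ravel())[-k]      # k-th largest |score|
    return (np.abs(scores) >= thresh).astype(float)   # ties may keep extras
```

Because the mask is a deterministic function of the scores, gradients of the robust loss can guide which connections survive, replacing the benign-training heuristic the abstract criticizes.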