
Online Structure Learning for Feed-Forward and Recurrent Sum-Product Networks

Neural Information Processing Systems

Sum-product networks have recently emerged as an attractive representation due to their dual view as a special type of deep neural network with clear semantics and a special type of probabilistic graphical model for which marginal inference is always tractable. These properties follow from the conditions of completeness and decomposability, which must be respected by the structure of the network. As a result, it is not easy to specify a valid sum-product network by hand and therefore structure learning techniques are typically used in practice. This paper describes a new online structure learning technique for feed-forward and recurrent SPNs. The algorithm is demonstrated on real-world datasets with continuous features and sequence datasets of varying length for which the best network architecture is not obvious.
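To make the completeness and decomposability conditions concrete, here is a minimal, hypothetical SPN sketch in Python (our own illustration, not the paper's code). Sum nodes mix children that share the same scope (completeness); product nodes combine children with disjoint scopes (decomposability). Under these two conditions, any marginal query is answered in a single bottom-up pass by letting marginalized-out leaves evaluate to 1.

```python
# Minimal SPN sketch (illustrative). For simplicity, the evidence dict maps a
# variable index directly to its leaf likelihood; in a real SPN each leaf has
# its own distribution. A variable absent from the dict is marginalized out.

class Leaf:
    def __init__(self, var):
        self.var = var                      # variable this leaf models

    def value(self, evidence):
        p = evidence.get(self.var)
        return 1.0 if p is None else p      # marginalized-out leaf -> 1.0

class Sum:                                   # children share one scope (completeness)
    def __init__(self, children, weights):
        assert abs(sum(weights) - 1.0) < 1e-9   # mixture weights
        self.children, self.weights = children, weights

    def value(self, evidence):
        return sum(w * c.value(evidence)
                   for c, w in zip(self.children, self.weights))

class Product:                               # children have disjoint scopes (decomposability)
    def __init__(self, children):
        self.children = children

    def value(self, evidence):
        out = 1.0
        for c in self.children:
            out *= c.value(evidence)
        return out

spn = Sum([Product([Leaf(0), Leaf(1)]),
           Product([Leaf(0), Leaf(1)])], [0.3, 0.7])
print(spn.value({0: 0.8}))   # X1 omitted -> marginalized out; prints 0.8
```

Because marginalization is just a leaf substitution, the same single pass that computes a joint likelihood also computes any marginal, which is the tractability property the abstract refers to.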


Building Open-Ended Embodied Agents with Internet-Scale Knowledge

Neural Information Processing Systems

Autonomous agents have made great strides in specialist domains like Atari games and Go. However, they typically learn tabula rasa in isolated environments with limited and manually conceived objectives, thus failing to generalize across a wide spectrum of tasks and capabilities. Inspired by how humans continually learn and adapt in the open world, we advocate a trinity of ingredients for building generalist agents: 1) an environment that supports a multitude of tasks and goals, 2) a large-scale database of multimodal knowledge, and 3) a flexible and scalable agent architecture.




Physics-based Deep Learning

arXiv.org Artificial Intelligence

Rather than just theory, we emphasize practical application: every concept is paired with interactive Jupyter notebooks to get you up and running quickly. Beyond traditional supervised learning, we dive into physical loss-constraints, differentiable simulations, diffusion-based approaches for probabilistic generative AI, as well as reinforcement learning and advanced neural network architectures. These foundations are paving the way for the next generation of scientific foundation models. We are living in an era of rapid transformation, and these methods have the potential to redefine what's possible in computational science. Note: What's new in v0.3? This latest edition adds a major new chapter on generative modeling, covering cutting-edge techniques like denoising, flow matching, autoregressive learning, physics-integrated constraints, and diffusion-based graph networks. We've also introduced a dedicated section on neural architectures designed specifically for physics simulations. All code examples have been updated to use the latest frameworks.
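As a concrete illustration of the physical loss-constraints mentioned above, the sketch below is our own minimal PyTorch example, not one of the book's notebooks: a network u(x, t) is penalized for violating the 1D heat equation u_t = alpha * u_xx at unlabeled collocation points, with the network shape, the value of alpha, and the sampling all assumed for illustration.

```python
# Physics-informed loss sketch: autograd derivatives of the network output
# are constrained to satisfy a PDE, so no labels are needed at these points.

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
alpha = 0.1  # diffusion coefficient (assumed)

def pde_residual(x, t):
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = net(torch.stack([x, t], dim=-1))
    # first derivatives via autograd, kept on the graph for the second pass
    u_x, u_t = torch.autograd.grad(u.sum(), (x, t), create_graph=True)
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - alpha * u_xx          # zero wherever the PDE holds

# collocation points: physics is enforced here without any observed data
xc, tc = torch.rand(256), torch.rand(256)
loss_physics = pde_residual(xc, tc).pow(2).mean()
# a full training loss would combine this with a data-fitting term:
# loss = loss_data + lam * loss_physics
```

The design point is that the physics term acts as a soft constraint alongside ordinary supervised losses, which is what distinguishes this family of methods from purely data-driven training.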


Most Japanese high school textbooks to include QR codes

The Japan Times

Almost all textbooks to be used by first- and second-year high school students in Japan from fiscal 2026 will include quick response (QR) codes that link to websites with video and audio learning aid materials, sources said Tuesday. The education ministry said the same day that a total of 253 textbooks in 13 subjects have passed the second screenings under the current curriculum guidelines. In response to the rapid progress of digitalization, many of the textbooks include descriptions of information ethics and generative artificial intelligence. The average number of pages per textbook in 11 commonly taught subjects came to 321, slightly up from the previous screenings in 2021. All geography-history and civics textbooks cover the Northern Territories, which are effectively controlled by Russia; Takeshima, the Sea of Japan islets controlled by South Korea; and the Japanese-administered Senkaku Islands, which are also claimed by China.



An Overview of Low-Rank Structures in the Training and Adaptation of Large Models

arXiv.org Machine Learning

The rise of deep learning has revolutionized data processing and prediction in signal processing and machine learning, yet the substantial computational demands of training and deploying modern large-scale deep models present significant challenges, including high computational costs and energy consumption. Recent research has uncovered a widespread phenomenon in deep networks: the emergence of low-rank structures in weight matrices and learned representations during training. These implicit low-dimensional patterns provide valuable insights for improving the efficiency of training and fine-tuning large-scale models. Practical techniques inspired by this phenomenon, such as low-rank adaptation (LoRA) and low-rank training, enable significant reductions in computational cost while preserving model performance. In this paper, we present a comprehensive review of recent advances in exploiting low-rank structures for deep learning and shed light on their mathematical foundations. Mathematically, we present two complementary perspectives on understanding low-rankness in deep networks: (i) the emergence of low-rank structures throughout the optimization dynamics of gradient descent, and (ii) the implicit regularization effects that induce such low-rank structures at convergence. From a practical standpoint, studying the low-rank learning dynamics of gradient descent offers a mathematical foundation for understanding the effectiveness of LoRA in fine-tuning large-scale models and inspires parameter-efficient low-rank training strategies. Furthermore, the implicit low-rank regularization effect helps explain the success of various masked training approaches in deep neural networks, ranging from dropout to masked self-supervised learning. In summary, this tutorial provides researchers and practitioners with a deeper understanding of low-rank structures in the training and adaptation of large-scale deep learning models, highlighting both the theoretical foundations and practical applications of low-rank methods, and outlining promising directions for future research.
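To make the LoRA idea concrete: a frozen pretrained weight W is augmented with a trainable low-rank update B A, so fine-tuning touches only r(d_in + d_out) parameters instead of d_in * d_out. The sketch below is a minimal, illustrative PyTorch implementation, not any particular library's API; the class name, scaling convention, and initializations are our assumptions.

```python
# Minimal LoRA-style adapter sketch (illustrative).

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # freeze pretrained weight
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))        # up-projection, zero init
        self.scale = alpha / r                      # common scaling convention

    def forward(self, x):
        # full-rank frozen path plus trainable low-rank correction:
        # y = (W + scale * B A) x + b
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)   # 8 * (512 + 512) = 8192, vs. 262144 for the full matrix
```

Zero-initializing B makes the adapted layer exactly match the frozen one at the start of fine-tuning, so training begins from the pretrained model's behavior, which is one reason this style of low-rank update preserves performance.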


Traveling abroad soon? Learn a language quickly with these 4 apps

FOX News

These apps let you choose from over a hundred different languages. Traveling to another country is an exciting experience, but learning a new language in order to do so can be a challenge. Fitting lessons into your schedule is difficult, and getting the pronunciation right is always a struggle. With language-learning apps like Babbel, Rosetta Stone, Beelinguapp and uTalk, you can learn a language at your own pace. Each app takes a different approach and teaches languages in its own way.


IKEA Manuals at Work: 4D Grounding of Assembly Instructions on Internet Videos

Neural Information Processing Systems

Shape assembly is a ubiquitous task in daily life, integral for constructing complex 3D structures like IKEA furniture. While significant progress has been made in developing autonomous agents for shape assembly, existing datasets have not yet tackled the 4D grounding of assembly instructions in videos, essential for a holistic understanding of assembly in 3D space over time. We introduce IKEA Video Manuals, a dataset that features 3D models of furniture parts, instructional manuals, assembly videos from the Internet, and most importantly, annotations of dense spatio-temporal alignments between these data modalities. To demonstrate the utility of IKEA Video Manuals, we present five applications essential for shape assembly: assembly plan generation, part-conditioned segmentation, part-conditioned pose estimation, video object segmentation, and furniture assembly based on instructional video manuals. For each application, we provide evaluation metrics and baseline methods. Through experiments on our annotated data, we highlight many challenges in grounding assembly instructions in videos to improve shape assembly, including handling occlusions, varying viewpoints, and extended assembly sequences.
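To suggest what a dense spatio-temporal alignment might look like in practice, here is a hypothetical record layout in Python; the field names and structure are illustrative guesses, not the dataset's actual schema. Each entry ties a manual step to a video segment and, per frame, grounds a 3D furniture part with a 2D mask and a 6-DoF pose.

```python
# Hypothetical annotation schema for one aligned assembly step (illustrative).

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FrameGrounding:
    frame_index: int
    part_id: str                       # which 3D model part is grounded
    mask_rle: str                      # run-length-encoded 2D segmentation mask
    rotation: Tuple[float, ...]        # 3x3 rotation, flattened, camera frame
    translation: Tuple[float, float, float]  # (x, y, z) in camera frame

@dataclass
class AlignedStep:
    manual_step: int                   # step number in the instruction manual
    video_url: str
    start_frame: int
    end_frame: int
    groundings: List[FrameGrounding]   # dense over the video segment
```

A structure along these lines would support the listed applications directly: plan generation reads the step ordering, segmentation and pose estimation consume the per-frame masks and poses, and the video segments scope object tracking.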