Communications: Instructional Materials
You can try Microsoft's free AI skills training for two more weeks, and I recommend you do
I know you've heard of gamification, but have you ever heard of festification? That's what Microsoft has been doing since last month, and it continues until May 28 with the Microsoft AI Skills Fest. It's a little odd, but it also looks like it might be a heck of a lot of fun. And you still have three full weeks to participate. Microsoft's AI Skills Fest offers courses that are open to all skill levels.
New Google Labs experiments help you learn new languages in 'bite-sized' lessons
My wife and I like to travel to other countries, but we always face a familiar obstacle -- how to learn the language well enough to converse with people. We've tried taking language lessons, yet we invariably run into situations where we can't find the right words to express ourselves. Now, Google has launched a trio of translation tools that could help overcome this obstacle. Launched on Tuesday as Google Labs experiments, the "Little Language Lessons" are designed to assist you in specific situations, especially when you're traveling in a foreign country.
Microsoft is offering free AI skills training for all - and it's not too late to sign up
I know you've heard of gamification, but have you ever heard of festification? That's what Microsoft will be doing in April and May with the Microsoft AI Skills Fest. It's a little odd, but it also looks like it might be a heck of a lot of fun. Microsoft's AI Skills Fest offers courses that are open to all skill levels. You can start with the early-stage lessons if you're new to AI, or work on deeper topics if you're more familiar with AI concepts.
ProG: A Graph Prompt Learning Benchmark
Artificial general intelligence on graphs has shown significant advancements across various applications, yet the traditional 'Pre-train & Fine-tune' paradigm faces inefficiencies and negative transfer issues, particularly in complex and few-shot settings. Graph prompt learning emerges as a promising alternative, leveraging lightweight prompts to manipulate data and fill the task gap by reformulating downstream tasks to the pretext task. However, several critical challenges remain: how to unify diverse graph prompt models, how to evaluate the quality of graph prompts, and how to improve their usability for practical comparisons and selection. In response to these challenges, we introduce the first comprehensive benchmark for graph prompt learning. Our benchmark integrates SIX pre-training methods and FIVE state-of-the-art graph prompt techniques, evaluated across FIFTEEN diverse datasets to assess performance, flexibility, and efficiency. We also present 'ProG', an easy-to-use open-source library that streamlines the execution of various graph prompt models, facilitating objective evaluations. Additionally, we propose a unified framework that categorizes existing graph prompt methods into two main approaches: prompts as graphs and prompts as tokens. This framework enhances the applicability and comparison of graph prompt techniques.
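To make the "prompts as tokens" idea concrete, here is a minimal PyTorch sketch. The names and design are hypothetical, not ProG's actual API: a handful of learnable prompt vectors are attention-weighted per node and added to the input features, so only the prompt (plus a light task head) is trained while the pre-trained GNN encoder stays frozen.

```python
import torch
import torch.nn as nn

class TokenPrompt(nn.Module):
    """Illustrative 'prompts as tokens' module (hypothetical, not ProG's
    API): a small set of learnable prompt vectors is weighted per node and
    added to the node features before a frozen pre-trained GNN."""

    def __init__(self, feat_dim: int, num_tokens: int = 4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, feat_dim) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, feat_dim]. Score each prompt token per node, then
        # add the attention-weighted prompt back onto the features.
        weights = torch.softmax(x @ self.tokens.t(), dim=-1)  # [N, num_tokens]
        return x + weights @ self.tokens                      # [N, feat_dim]

# Usage sketch: the prompt is the only new trainable component.
prompt = TokenPrompt(feat_dim=16)
x = torch.randn(10, 16)   # toy node features
prompted = prompt(x)      # same shape, prompt-conditioned; feed to frozen GNN
```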
Synatra: Turning Indirect Knowledge into Direct Demonstrations for Digital Agents at Scale
LLMs can now act as autonomous agents that interact with digital environments and complete specific objectives (e.g., arranging an online meeting). However, accuracy is still far from satisfactory, partly due to a lack of large-scale, direct demonstrations for digital tasks. Obtaining supervised data from humans is costly, and automatic data collection through exploration or reinforcement learning relies on complex environmental and content setup, resulting in datasets that lack comprehensive coverage of various scenarios. On the other hand, there is abundant knowledge that may indirectly assist task completion, such as online tutorials that were created for human consumption. In this work, we present Synatra, an approach that effectively transforms this indirect knowledge into direct supervision at scale. We define different types of indirect knowledge and carefully study the available sources for obtaining it, methods for encoding the structure of direct demonstrations, and finally methods for transforming indirect knowledge into direct demonstrations. We use 100k such synthetically created demonstrations to finetune a 7B CodeLlama, and demonstrate that the resulting agent surpasses all comparably sized models on three web-based task benchmarks: Mind2Web, MiniWoB++, and WebArena, as well as surpassing GPT-3.5 on WebArena and Mind2Web. In addition, while synthetic demonstrations cost only 3% as much as human demonstrations (at $0.031 each), we show that they can be more effective than an identical number of human demonstrations collected from limited domains.
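As a rough illustration of the indirect-to-direct transformation (a sketch of the general shape, not Synatra's actual pipeline or schema), one can imagine rewriting each human-oriented tutorial step into an observation/action record suitable for agent fine-tuning. All names below, including the `llm` callable, are assumptions for illustration.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DemoStep:
    """One step of a direct demonstration: the state an agent observes and
    the action it should take. Schema is illustrative, not Synatra's."""
    observation: str  # e.g., a serialized accessibility-tree snippet
    action: str       # e.g., "click(element_id=42)"

def tutorial_step_to_demo(tutorial_step: str, llm) -> DemoStep:
    """Hypothetical transformation: ask an LLM to ground a human-written
    tutorial instruction in a concrete observation/action pair."""
    prompt = (
        "Rewrite this tutorial step as an agent demonstration, giving a "
        "plausible page observation and one executable action:\n"
        + tutorial_step
    )
    observation, action = llm(prompt)  # assumed to return an (obs, act) pair
    return DemoStep(observation=observation, action=action)

# Each synthesized step then becomes one fine-tuning example, e.g.:
# json.dumps(asdict(tutorial_step_to_demo("Click 'New meeting'.", my_llm)))
```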
Forgetting, Ignorance or Myopia: Revisiting Key Challenges in Online Continual Learning
Online continual learning (OCL) requires models to learn from constant, endless streams of data. While significant efforts have been made in this field, most have focused on mitigating catastrophic forgetting to achieve better classification ability, at the cost of a much heavier training workload. They overlook that in real-world scenarios, e.g., high-speed data stream environments, data do not pause to accommodate slow models. In this paper, we emphasize that model throughput, defined as the maximum number of training samples a model can process within a unit of time, is equally important. It directly limits how much data a model can utilize and presents a challenging dilemma for current methods. With this understanding, we revisit key challenges in OCL from both empirical and theoretical perspectives, highlighting two critical issues beyond the well-documented catastrophic forgetting: (i) Model's ignorance: the single-pass nature of OCL challenges models to learn effective features within constrained training time and storage capacity, leading to a trade-off between effective learning and model throughput; (ii) Model's myopia: the local learning nature of OCL on the current task leads the model to adopt overly simplified, task-specific features and an excessively sparse classifier, resulting in a gap between the optimal solution for the current task and the global objective. To tackle these issues, we propose the Non-sparse Classifier Evolution framework (NsCE) to facilitate effective global discriminative feature learning with minimal time cost. NsCE integrates non-sparse maximum separation regularization and targeted experience replay techniques with the help of pre-trained models, enabling rapid acquisition of new globally discriminative features. Extensive experiments demonstrate the substantial improvements of our framework in performance, throughput, and real-world practicality.
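To give a feel for what a "non-sparse maximum separation" term might look like in code, here is a minimal PyTorch sketch in the spirit of NsCE. This is one plausible reading, not the paper's exact formulation: one term pushes class weight vectors apart, another discourages near-zero classifier weights; all names and thresholds are assumptions.

```python
import torch
import torch.nn.functional as F

def nsce_style_regularizer(W: torch.Tensor, tau: float = 0.01,
                           lam_sep: float = 1.0,
                           lam_ns: float = 0.1) -> torch.Tensor:
    """Illustrative regularizer in the spirit of NsCE (not the paper's
    exact loss). W: classifier weight matrix [num_classes, feat_dim].
    Term 1 (separation): penalize positive pairwise cosine similarity
    between class weight vectors. Term 2 (non-sparsity): penalize weights
    whose magnitude falls below a small threshold tau."""
    Wn = F.normalize(W, dim=1)
    cos = Wn @ Wn.t()                                    # pairwise cosines
    off_diag = cos - torch.eye(W.size(0), device=W.device)
    sep = off_diag.clamp(min=0).sum()                    # similar classes cost
    non_sparse = F.relu(tau - W.abs()).mean()            # near-zero weights cost
    return lam_sep * sep + lam_ns * non_sparse

# During training or replay, it would be added to the task loss, e.g.:
# loss = F.cross_entropy(logits, y) + nsce_style_regularizer(classifier.weight)
```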