Generative AI: Instructional Materials
The AI Industry is Funding A Massive AI Training Initiative for Teachers
AI tools have become deeply embedded in how many students learn and complete schoolwork--and that usage is only poised to increase. On Tuesday, the American Federation of Teachers announced an AI training hub for educators, backed by $23 million from Microsoft, OpenAI, and Anthropic. The AFT is the second-largest teachers' union in the United States, representing 1.8 million teachers and educational staffers across the country. The training hub will open in New York City this fall, featuring workshops that teach educators to use AI tools for tasks like generating lesson plans and quizzes, or writing emails to parents. Microsoft is providing $12.5 million of the funding for AI teacher training over the next five years.
Microsoft, OpenAI, Anthropic announce free AI academy with national teachers union
The nation's second-largest teachers' union -- representing 1.8 million staff within America's education system -- has joined forces with some of the world's top players in AI to ready another generation of tech-savvy educators. Announced Tuesday, July 8, by the American Federation of Teachers (AFT) and its New York City affiliate, the United Federation of Teachers, along with tech giants Microsoft, OpenAI, and Anthropic, the new National Academy for AI Instruction will funnel $23 million toward free AI training and curriculum for all 1.8 million union members. The goal of the program and its brick-and-mortar Manhattan facility -- the brainchild of venture capitalist Roy Bahat and modeled after other high-tech training centers -- is to create a "national model for AI-integrated curriculum," according to the coalition, with a focus on skills-based workshops, online courses, and hands-on training. Microsoft will invest $12.5 million in the training program, with an additional $8 million in funding from OpenAI and $500,000 from Anthropic, the New York Times reports. OpenAI will also provide $2 million in technical resources.
Dealing with Synthetic Data Contamination in Online Continual Learning
Maorong Wang, Nicolas Michel, Jiafeng Mao
Image generation has shown remarkable results in producing high-fidelity, realistic images, particularly with the advancement of diffusion-based models. However, the prevalence of AI-generated images may have side effects for the machine learning community that are not yet clearly identified. The success of deep learning in computer vision has been driven by massive datasets collected from the Internet, and the growing quantity of synthetic data added to the Internet could become an obstacle for future researchers trying to collect "clean" datasets free of AI-generated content. Prior research has shown that training on datasets contaminated by synthetic images can degrade performance. In this paper, we investigate the potential impact of contaminated datasets on Online Continual Learning (CL) research. We show experimentally that contaminated datasets can hinder the training of existing online CL methods. We also propose Entropy Selection with Real-synthetic similarity Maximization (ESRM), a method that alleviates the performance deterioration caused by synthetic images when training online CL models. Experiments show that ESRM significantly reduces this deterioration, especially when the contamination is severe.
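The entropy-selection component can be pictured with a minimal, generic sketch. This is an illustration of entropy-based sample filtering in general, not the authors' ESRM implementation; the function names and the confidence-based selection rule are assumptions for demonstration. The idea is that a classifier's softmax output carries an uncertainty signal, and a training pipeline can rank incoming samples by prediction entropy and retain only the most confidently predicted fraction:

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy (natural log) of softmax distributions, per sample."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def select_low_entropy(batch_probs, keep_ratio=0.5):
    """Return indices of the keep_ratio fraction of samples with lowest entropy."""
    ent = prediction_entropy(batch_probs)
    k = max(1, int(len(ent) * keep_ratio))
    return np.argsort(ent)[:k]

# Toy example: 4 samples over 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident -> low entropy
    [0.34, 0.33, 0.33],  # uncertain -> high entropy
    [0.90, 0.05, 0.05],
    [0.50, 0.25, 0.25],
])
idx = select_low_entropy(probs, keep_ratio=0.5)  # keeps samples 0 and 2
```

In a contaminated-stream setting, a filter like this would sit between the incoming data stream and the replay buffer; ESRM additionally uses a real-synthetic similarity term, which this sketch omits.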
HEMM: Holistic Evaluation of Multimodal Foundation Models
Multimodal foundation models that can holistically process text alongside images, video, audio, and other sensory modalities are increasingly used in a variety of real-world applications. However, it is challenging to characterize and study progress in multimodal foundation models, given the range of possible modeling decisions, tasks, and domains. In this paper, we introduce Holistic Evaluation of Multimodal Models (HEMM) to systematically evaluate the capabilities of multimodal foundation models across three dimensions: basic skills, information flow, and real-world use cases. Basic multimodal skills are internal abilities required to solve problems, such as learning interactions across modalities, fine-grained alignment, multi-step reasoning, and the ability to handle external knowledge.
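The dimension-based evaluation can be pictured with a toy aggregation harness. The task names, scores, and record layout below are invented for illustration and are not the HEMM codebase; the sketch only shows the structural idea of scoring many tasks and then averaging within each evaluation dimension:

```python
from statistics import mean

# Hypothetical task records: each benchmark task is tagged with
# the HEMM dimension it probes, plus a model's score on that task.
results = [
    {"task": "vqa",         "dimension": "basic skills",     "score": 0.71},
    {"task": "captioning",  "dimension": "basic skills",     "score": 0.64},
    {"task": "retrieval",   "dimension": "information flow", "score": 0.58},
    {"task": "medical-vqa", "dimension": "use cases",        "score": 0.49},
]

def aggregate_by_dimension(records):
    """Average per-task scores within each evaluation dimension."""
    dims = {}
    for r in records:
        dims.setdefault(r["dimension"], []).append(r["score"])
    return {d: mean(scores) for d, scores in dims.items()}

summary = aggregate_by_dimension(results)  # one score per dimension
```

A per-dimension summary like this is what lets a benchmark report where a model's strengths lie (e.g. strong basic skills but weak domain use cases) rather than a single aggregate number.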
Google offers AI certification for business leaders now - and the training is free
As AI becomes an increasingly common tool for organizations across all industries, studies show that expectations for employees to be knowledgeable about AI are only increasing. Now, Google is presenting business leaders with a new AI literacy opportunity. On Wednesday, Google Cloud announced a "first-of-its-kind" generative AI certification geared toward non-technical learners, such as managers and business leaders, who want to learn about AI's impacts beyond coding. According to Google, the course focuses on how to strategically adopt, discuss, and lead generative AI efforts. The Google Cloud Generative AI Leader certification exam, which costs $99 and lasts 90 minutes, is available starting May 14.
Interview with Joseph Marvin Imperial: aligning generative AI with technical standards
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the latest interview, we hear from Joseph Marvin Imperial, who is focused on aligning generative AI with technical standards for regulatory and operational compliance. Standards are documents created by industry and/or academic experts that are recognized as ensuring the quality, accuracy, and interoperability of systems and processes (aka "the best way of doing things"). You'll see standards in almost all sectors and domains, including the sciences, healthcare, education, finance, journalism, law, and engineering.
Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI
Nnamdi, Micky C., Tamo, J. Ben, Shi, Wenqi, Wang, May D.
Problem-Based Learning (PBL) has significantly impacted biomedical engineering (BME) education since its introduction in the early 2000s, effectively enhancing critical thinking and real-world knowledge application among students. With biomedical engineering rapidly converging with artificial intelligence (AI), integrating effective AI education into established curricula has become challenging yet increasingly necessary. Recent advancements, including the recognition of AI research in the 2024 Nobel Prizes, have highlighted the importance of training students comprehensively in biomedical AI. However, effective biomedical AI education faces substantial obstacles, such as diverse student backgrounds, limited personalized mentoring, constrained computational resources, and difficulties in safely scaling hands-on practical experiments due to privacy and ethical concerns associated with biomedical data. To overcome these issues, we conducted a three-year (2021-2023) case study implementing an advanced PBL framework tailored specifically for biomedical AI education, involving 92 undergraduate and 156 graduate students from the joint Biomedical Engineering program of Georgia Institute of Technology and Emory University. Our approach emphasizes collaborative, interdisciplinary problem-solving through authentic biomedical AI challenges. The implementation led to measurable improvements in learning outcomes, evidenced by high research productivity (16 student-authored publications), consistently positive peer evaluations, and the successful development of innovative computational methods addressing real biomedical challenges. Additionally, we examined the role of generative AI both as a teaching subject and as an educational support tool within the PBL framework. Our study presents a practical and scalable roadmap for biomedical engineering departments aiming to integrate robust AI education into their curricula.