Communications: Instructional Materials
Manipulation and Peer Mechanisms: A Survey
In peer mechanisms, the competitors for a prize also determine who wins. Each competitor may be asked to rank, grade, or nominate peers for the prize. Since the prize can be valuable, such as financial aid, course grades, or an award at a conference, competitors may be tempted to manipulate the mechanism. We survey approaches to prevent or discourage the manipulation of peer mechanisms. We conclude our survey by identifying several important research challenges.
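One well-known way to discourage such manipulation (a generic idea, not necessarily this survey's framing) is to make each competitor's reports irrelevant to their own chance of winning. A minimal Python sketch of a partition-style impartial selection rule follows; the function name and the grades-matrix interface are hypothetical, chosen for illustration:

```python
import random

def partition_peer_selection(scores, k=2, seed=0):
    """Select a winner impartially from peer grades (illustrative sketch).

    scores[i][j] is competitor i's grade for competitor j; diagonal
    entries are ignored. Competitors are split at random into k groups.
    Each group's winner is the member with the highest total grade from
    voters *outside* the group, so no competitor's own grades influence
    whether they win their group. The final winner is drawn uniformly
    from the group winners, so no competitor can raise their own
    winning probability by misreporting.
    """
    n = len(scores)
    rng = random.Random(seed)
    ids = list(range(n))
    rng.shuffle(ids)
    groups = [ids[g::k] for g in range(k)]

    group_winners = []
    for grp in groups:
        outside = [v for v in range(n) if v not in grp]
        best = max(grp, key=lambda j: sum(scores[v][j] for v in outside))
        group_winners.append(best)
    return rng.choice(group_winners)
```

The price of impartiality here is efficiency: the highest-graded competitor overall may lose to another group's winner, a trade-off that mechanisms of this kind generally have to make.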
EIT: Earnest Insight Toolkit for Evaluating Students' Earnestness in Interactive Lecture Participation Exercises
Miroyan, Mihran, Weng, Shiny, Shah, Rahul, Yan, Lisa, Norouzi, Narges
In today's rapidly evolving educational landscape, traditional modes of passive information delivery are giving way to transformative pedagogical approaches that prioritize active student engagement. Within the context of large-scale hybrid classrooms, the challenge lies in fostering meaningful and active interaction between students and course content. This study delves into the significance of measuring students' earnestness during interactive lecture participation exercises. By analyzing students' responses to interactive lecture poll questions, establishing a clear rubric for evaluating earnestness, and conducting a comprehensive assessment, we introduce EIT (Earnest Insight Toolkit), a tool designed to assess students' engagement within interactive lecture participation exercises, particularly in the context of large-scale hybrid classrooms. With EIT, our objective is to equip educators with a valuable means of identifying at-risk students, enhancing intervention and support strategies, and measuring students' levels of engagement with course content.

Distributed Variational Inference for Online Supervised Learning
Paritosh, Parth, Atanasov, Nikolay, Martinez, Sonia
Developing efficient solutions for inference problems in intelligent sensor networks is crucial for the next generation of location, tracking, and mapping services. This paper develops a scalable distributed probabilistic inference algorithm that applies to continuous variables, intractable posteriors and large-scale real-time data in sensor networks. In a centralized setting, variational inference is a fundamental technique for performing approximate Bayesian estimation, in which an intractable posterior density is approximated with a parametric density. Our key contribution lies in the derivation of a separable lower bound on the centralized estimation objective, which enables distributed variational inference with one-hop communication in a sensor network. Our distributed evidence lower bound (DELBO) consists of a weighted sum of observation likelihood and divergence to prior densities, and its gap to the measurement evidence is due to consensus and modeling errors. To solve binary classification and regression problems while handling streaming data, we design an online distributed algorithm that maximizes DELBO, and specialize it to Gaussian variational densities with non-linear likelihoods. The resulting distributed Gaussian variational inference (DGVI) efficiently inverts a rank-$1$ correction to the covariance matrix. Finally, we derive a diagonalized version for online distributed inference in high-dimensional models, and apply it to multi-robot probabilistic mapping using indoor LiDAR data.
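The abstract notes that DGVI "efficiently inverts" a rank-one correction to the covariance matrix. The standard identity for doing this is Sherman-Morrison (an assumption here; the paper's exact update rule is not shown in the abstract), which turns an O(d^3) re-inversion into an O(d^2) update. A minimal NumPy sketch:

```python
import numpy as np

def rank1_inverse_update(A_inv, u, v):
    """Sherman-Morrison: compute (A + u v^T)^{-1} given A^{-1}.

    A_inv is the d x d inverse of A; u and v are d-vectors defining the
    rank-1 correction. Cost is O(d^2) matrix-vector work instead of the
    O(d^3) cost of inverting A + u v^T from scratch -- the kind of
    saving a per-measurement covariance update relies on.
    """
    Au = A_inv @ u                 # d-vector: A^{-1} u
    vA = v @ A_inv                 # d-vector: v^T A^{-1}
    denom = 1.0 + v @ Au           # scalar; must be nonzero
    return A_inv - np.outer(Au, vA) / denom
```

For a Gaussian variational density this kind of update lets each new observation's rank-one contribution be folded into the precision or covariance without a full factorization.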
Watch Mark Zuckerberg learn how to braid his daughter's hair from AI
In a valiant effort to promote Meta's new Smart Glasses Collection with Ray-Ban, Mark Zuckerberg has done the unthinkable: He has learned to braid his daughter's hair with the help of AI. In a clip posted to Instagram, Zuckerberg films the back of his daughter's head using the video recording feature embedded within the smart glasses he's wearing. He says "Hey Meta, how do you make a braid?" and a little voice walks him through three steps: brush the hair, separate it into three parts, cross the right section over the middle, then the left, and continue. He ties the end of the braid with such difficulty that it's clear he's never done his girls' hair before (Zuckerberg has three daughters). As a final step, he asks the glasses to take a photo of his handiwork and send it to Priscilla, his wife, who is presumably busy co-running the couple's charitable organization, the Chan Zuckerberg Initiative.
Artificial Intelligence Index Report 2023
Maslej, Nestor, Fattorini, Loredana, Brynjolfsson, Erik, Etchemendy, John, Ligett, Katrina, Lyons, Terah, Manyika, James, Ngo, Helen, Niebles, Juan Carlos, Parli, Vanessa, Shoham, Yoav, Wald, Russell, Clark, Jack, Perrault, Raymond
Welcome to the sixth edition of the AI Index Report! This year, the report introduces more original data than any previous edition, including a new chapter on AI public opinion, a more thorough technical performance chapter, original analysis about large language and multimodal models, detailed trends in global AI legislation records, a study of the environmental impact of AI systems, and more. The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI. The report aims to be the world's most credible and authoritative source for data and insights about AI.
The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice
Delgado, Fernando, Yang, Stephen, Madaio, Michael, Yang, Qian
Despite the growing consensus that stakeholders affected by AI systems should participate in their design, enormous variation and implicit disagreements exist among current approaches. For researchers and practitioners who are interested in taking a participatory approach to AI design and development, it remains challenging to assess the extent to which any participatory approach grants substantive agency to stakeholders. This article thus aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation and through empirical investigation and critique of its current practices. Specifically, we derive a conceptual framework through synthesis of literature across technology design, political theory, and the social sciences that researchers and practitioners can leverage to evaluate approaches to participation in AI design. Additionally, we articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners. We use these empirical findings to understand the current state of participatory practice and subsequently provide guidance to better align participatory goals and methods in a way that accounts for practical constraints.
FairComp: Workshop on Fairness and Robustness in Machine Learning for Ubiquitous Computing
Yfantidou, Sofia, Spathis, Dimitris, Constantinides, Marios, Xia, Tong, van Berkel, Niels
How can we ensure that Ubiquitous Computing (UbiComp) research outcomes are both ethical and fair? While fairness in machine learning (ML) has gained traction in recent years, fairness in UbiComp remains unexplored. This workshop aims to discuss fairness in UbiComp research and its social, technical, and legal implications. From a social perspective, we will examine the relationship between fairness and UbiComp research and identify pathways to ensure that ubiquitous technologies do not cause harm or infringe on individual rights. From a technical perspective, we will initiate a discussion on data practices to develop bias mitigation approaches tailored to UbiComp research. From a legal perspective, we will examine how new policies shape our community's work and future research. We aim to foster a vibrant community centered around the topic of responsible UbiComp, while also charting a clear path for future research endeavours in this field.
A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
Zhang, Yi, Zhao, Yuying, Li, Zhaoqing, Cheng, Xueqi, Wang, Yu, Kotevska, Olivera, Yu, Philip S., Derr, Tyler
Privacy attacks are a popular and well-developed topic in fields such as social network analysis, healthcare, finance, and systems [88], [89], [90]. In recent years, the surge of machine learning has provided powerful tools to solve many practical problems. However, data-driven approaches also threaten users' privacy due to the associated risks of data leakage and inference [85]. Consequently, a substantial amount of work has been devoted to investigating the vulnerabilities of ML models and the risks of privacy leakage [47]. One branch of privacy research develops privacy attack models, which have received much attention during the past few years. However, attack models targeting GNNs have been explored only recently, because GNN techniques are relatively new compared with CNNs and Transformers in the image and natural language processing (NLP) domains, and the irregular graph structure poses unique challenges for transferring attack techniques that are well established in other domains. In this section, we summarize papers that have developed attack models specifically targeting GNNs. We classify the privacy attack models on GNNs into four categories (which are visualized in Figure 4): a) model extraction attacks (MEA); b) graph structure reconstruction (GSR); c) attribute inference attacks (AIA); and d) membership inference attacks (MIA).
Figure 1: Illustrations of the four categories of privacy attack models on graphs.
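Of the four categories, membership inference is perhaps the simplest to illustrate. A toy, model-agnostic sketch (not any specific attack from the survey): the attacker flags a query node as a training-set member whenever the target model's confidence in the node's true class exceeds a threshold, exploiting the tendency of models to be more confident on examples they were trained on. The function name and interface are hypothetical:

```python
import numpy as np

def confidence_mia(probs, labels, threshold=0.9):
    """Toy membership inference attack via confidence thresholding.

    probs: (n, c) array of posterior class probabilities the target
    model assigns to n query nodes; labels: their true class indices.
    Returns a boolean membership guess per node: True means the
    attacker predicts the node was in the training set.
    """
    conf = probs[np.arange(len(labels)), labels]  # confidence in true class
    return conf > threshold
```

Real MIAs on GNNs are more elaborate (e.g., shadow models trained to mimic the target), but they share this core signal: the gap between a model's behavior on members versus non-members.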
Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges
Dou, Fei, Ye, Jin, Yuan, Geng, Lu, Qin, Niu, Wei, Sun, Haijian, Guan, Le, Lu, Guoyu, Mai, Gengchen, Liu, Ninghao, Lu, Jin, Liu, Zhengliang, Wu, Zihao, Tan, Chenjiao, Xu, Shaochen, Wang, Xianqiao, Li, Guoming, Chai, Lilong, Li, Sheng, Sun, Jin, Sun, Hongyue, Shao, Yunli, Li, Changying, Liu, Tianming, Song, Wenzhan
Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas. This fascination extends particularly to the Internet of Things (IoT), a landscape characterized by the interconnection of countless devices, sensors, and systems, collectively gathering and sharing data to enable intelligent decision-making and automation. This research embarks on an exploration of the opportunities and challenges towards achieving AGI in the context of the IoT. Specifically, it starts by outlining the fundamental principles of IoT and the critical role of Artificial Intelligence (AI) in IoT systems. Subsequently, it delves into AGI fundamentals, culminating in the formulation of a conceptual framework for AGI's seamless integration within IoT. The application spectrum for AGI-infused IoT is broad, encompassing domains ranging from smart grids, residential environments, manufacturing, and transportation to environmental monitoring, agriculture, healthcare, and education. However, adapting AGI to resource-constrained IoT settings necessitates dedicated research efforts. Furthermore, the paper addresses constraints imposed by limited computing resources, intricacies associated with large-scale IoT communication, as well as the critical concerns pertaining to security and privacy.
"I'm Not Confident in Debiasing AI Systems Since I Know Too Little": Teaching AI Creators About Gender Bias Through Hands-on Tutorials
Zhou, Kyrie Zhixuan, Cao, Jiaxun, Yuan, Xiaowen, Weissglass, Daniel E., Kilhoffer, Zachary, Sanfilippo, Madelyn Rose, Tong, Xin
Gender bias is rampant in AI systems, causing poor user experiences, injustice, and psychological harm to women. School curricula fail to educate AI creators on this topic, leaving them unprepared to mitigate gender bias in AI. In this paper, we designed hands-on tutorials to raise AI creators' awareness of gender bias in AI and enhance their knowledge of the sources of gender bias and debiasing techniques. The tutorials were evaluated with 18 AI creators, including AI researchers, AI industry practitioners (i.e., developers and product managers), and students who had studied AI. Their improved awareness and knowledge demonstrated the effectiveness of our tutorials, which have the potential to complement the currently insufficient coverage of AI gender bias in CS/AI courses. Based on the findings, we synthesize design implications and a rubric to guide future research, education, and design efforts.