
Collaborating Authors: dml



HyperPrism: An Adaptive Non-linear Aggregation Framework for Distributed Machine Learning over Non-IID Data and Time-varying Communication Links

Neural Information Processing Systems

While Distributed Machine Learning (DML) has been widely used to achieve decent performance, it remains challenging to take full advantage of data and devices distributed across multiple vantage points. In particular, it is non-trivial for the standard linear aggregation framework to address two challenges: (1) heterogeneous training data across devices (i.e., non-IID data), which causes model divergence, and (2) time-varying communication links, which limit the devices' ability to reconcile that divergence. In this paper, we contribute HyperPrism, a non-linear aggregation framework that leverages distributed mirror descent, performing the averaging in the mirror-descent dual space and adapting the degree of the Weighted Power Mean (WPM) used in each round. Moreover, HyperPrism can adaptively choose a different mapping for each layer of the local model via a dedicated hypernetwork per device, automatically optimizing DML in high-divergence settings. We perform rigorous analysis and experimental evaluations to demonstrate the effectiveness of adaptive, mirror-mapping DML. In particular, we extend the generalizability of existing related work and position it as special cases within HyperPrism. Our experimental results show that HyperPrism improves convergence speed by up to 98.63% and scales well to more devices compared with the state-of-the-art, all with little additional computation overhead relative to traditional linear aggregation.
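The core aggregation idea in the abstract can be illustrated compactly: averaging in a dual space under the mapping x ↦ x^p yields the Weighted Power Mean, which reduces to ordinary linear (weighted-average) aggregation at p = 1. The sketch below is a minimal stand-in, not the paper's implementation; the per-round, per-layer choice of p via hypernetworks is omitted, and the positivity assumption on parameters is ours.

```python
import numpy as np

def weighted_power_mean(models, weights, p):
    """Aggregate local model parameters with a Weighted Power Mean (WPM).

    For p = 1 this reduces to standard linear (weighted-average)
    aggregation; p != 1 performs the averaging in a non-linear dual
    space, analogous to mirror descent with the mapping x -> x**p.
    Assumes all parameter values are positive, as the power mean
    requires.
    """
    models = np.asarray(models, dtype=float)   # shape: (n_devices, n_params)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize device weights
    dual = models ** p                         # map to the dual space
    avg_dual = weights @ dual                  # linear averaging there
    return avg_dual ** (1.0 / p)               # map back to primal space
```

Varying p interpolates between mean-like behavior (p near 1) and max-like behavior (large p), which is what lets a non-linear aggregator react differently to divergent local models.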






The Effects of Flipped Classrooms in Higher Education: A Causal Machine Learning Analysis

Czarnowske, Daniel, Heiss, Florian, Schmitz, Theresa M. A., Stammann, Amrei

arXiv.org Machine Learning

This study uses double/debiased machine learning (DML) to evaluate the impact of transitioning from lecture-based blended teaching to a flipped classroom concept. Our findings indicate effects on students' self-conception, procrastination, and enjoyment. We do not find significant positive effects on exam scores, passing rates, or knowledge retention. This can be explained by students' insufficient use of the instructional approach, which we identify using uniquely detailed usage data, and it highlights the need for additional teaching strategies. Methodologically, we propose a powerful DML approach that acknowledges the latent structure inherent in Likert scale variables and, hence, aligns with psychometric principles.
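For readers unfamiliar with double/debiased ML, the estimator's core is cross-fitted partialling-out: residualize both outcome and treatment on the controls with nuisance learners fit out-of-fold, then regress residual on residual. The sketch below is a generic illustration, not the authors' specification; in particular, plain least squares stands in for the flexible ML nuisance learners a real application would use, and all variable names are hypothetical.

```python
import numpy as np

def dml_ate(y, d, X, n_folds=2, seed=0):
    """Cross-fitted partialling-out estimate of the effect of treatment
    d on outcome y, controlling for X (a minimal sketch).

    Nuisance functions E[y|X] and E[d|X] are fit on the training folds
    and predicted on the held-out fold; the effect is the OLS slope of
    the outcome residuals on the treatment residuals.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    y_res = np.empty_like(y, dtype=float)
    d_res = np.empty_like(d, dtype=float)
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        by, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        bd, *_ = np.linalg.lstsq(X[train], d[train], rcond=None)
        y_res[test] = y[test] - X[test] @ by      # outcome residuals
        d_res[test] = d[test] - X[test] @ bd      # treatment residuals
    return float(d_res @ y_res / (d_res @ d_res))
```

Cross-fitting is what makes the estimator robust to regularization bias in the nuisance learners: each observation's residuals come from models that never saw it during training.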


FrameEOL: Semantic Frame Induction using Causal Language Models

Yano, Chihiro, Yamada, Kosuke, Tsukagoshi, Hayato, Sasano, Ryohei, Takeda, Koichi

arXiv.org Artificial Intelligence

Semantic frame induction is the task of clustering frame-evoking words according to the semantic frames they evoke. In recent years, leveraging embeddings of frame-evoking words obtained from masked language models (MLMs) such as BERT has led to high-performance semantic frame induction. Although causal language models (CLMs) such as the GPT and Llama series succeed in a wide range of language comprehension tasks and can engage in dialogue as if they understood frames, they have not yet been applied to semantic frame induction. We propose a new method for semantic frame induction based on CLMs. Specifically, we introduce FrameEOL, a prompt-based method for obtaining Frame Embeddings that outputs One frame-name as a Label representing the given situation. To obtain embeddings more suitable for frame induction, we leverage in-context learning (ICL) and deep metric learning (DML). Frame induction is then performed by clustering the resulting embeddings. Experimental results on the English and Japanese FrameNet datasets demonstrate that the proposed methods outperform existing frame induction methods. In particular, for Japanese, which lacks extensive frame resources, the CLM-based method using only 5 ICL examples achieved performance comparable to the MLM-based method fine-tuned with DML.
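The final step described in the abstract, clustering the frame embeddings so that each cluster corresponds to one induced frame, can be sketched independently of how the embeddings are produced. Below is a minimal k-means stand-in over mock embeddings; the choice of clustering algorithm is an assumption (the paper does not specify it here), and in practice the inputs would be CLM-derived frame embeddings, not random vectors.

```python
import numpy as np

def cluster_frames(embeddings, n_frames, n_iter=50, seed=0):
    """Group frame-evoking word instances by k-means over their
    embeddings (a minimal stand-in for the clustering stage)."""
    rng = np.random.default_rng(seed)
    E = np.asarray(embeddings, dtype=float)
    # Initialize centers from random instances.
    centers = E[rng.choice(len(E), n_frames, replace=False)].copy()
    for _ in range(n_iter):
        # Assign each instance to its nearest center.
        labels = np.argmin(((E[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its members.
        for k in range(n_frames):
            if np.any(labels == k):
                centers[k] = E[labels == k].mean(0)
    return labels
```

Each resulting cluster plays the role of one induced semantic frame; evaluation then compares the induced clusters against gold FrameNet frame labels.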