modularization
On Strong and Weak Admissibility in Non-Flat Assumption-Based Argumentation
Berthold, Matti, Blümel, Lydia, Rapberger, Anna
In this work, we broaden the investigation of admissibility notions in the context of assumption-based argumentation (ABA). More specifically, we study two prominent alternatives to the standard notion of admissibility from abstract argumentation, namely strong and weak admissibility, and introduce the respective preferred, complete, and grounded semantics for general (sometimes called non-flat) ABA. To do so, we use abstract bipolar set-based argumentation frameworks (BSAFs) as a formal playground, since they concisely capture the relations between assumptions and, as recently shown, are expressive enough to represent general non-flat ABA frameworks. While weak admissibility has recently been investigated for a restricted fragment of ABA in which assumptions cannot be derived (flat ABA), strong admissibility has not been investigated for ABA so far. We introduce strong admissibility for ABA and investigate its desirable properties. We furthermore extend the recent investigations of weak admissibility in the flat ABA fragment to the non-flat case. We show that the central modularization property is maintained under classical, strong, and weak admissibility. We also show that strongly and weakly admissible semantics in non-flat ABA share some of the shortcomings of standard admissible semantics, and we discuss ways to address these.
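For readers less familiar with the baseline the paper varies, the classical notion of admissibility from abstract argumentation can be sketched in a few lines: a set of arguments is admissible if it is conflict-free and defends each of its members against every attacker. The following brute-force enumerator is a toy illustration of that standard notion only, not of the paper's strong/weak variants or its ABA/BSAF machinery.

```python
from itertools import combinations

def admissible_sets(arguments, attacks):
    """Enumerate the classically admissible sets of an abstract
    argumentation framework by brute force (fine for toy examples).
    attacks is a set of (attacker, target) pairs."""
    def conflict_free(s):
        # no attack between two members of s
        return not any((a, b) in attacks for a in s for b in s)

    def defends(s, arg):
        # every attacker of arg is counter-attacked by some member of s
        return all(any((d, atk) in attacks for d in s)
                   for (atk, tgt) in attacks if tgt == arg)

    result = []
    for r in range(len(arguments) + 1):
        for combo in combinations(sorted(arguments), r):
            s = set(combo)
            if conflict_free(s) and all(defends(s, a) for a in s):
                result.append(s)
    return result

# Toy framework: a attacks b, b attacks c.
afs = admissible_sets({"a", "b", "c"}, {("a", "b"), ("b", "c")})
# Admissible sets: {}, {a}, {a, c} -- note {c} alone cannot
# defend itself against b, but together with a it can.
```

The example shows why admissibility is a *collective* notion: c is only acceptable in the company of its defender a, which is exactly the kind of interaction the strong and weak variants refine.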
NeMo: A Neuron-Level Modularizing-While-Training Approach for Decomposing DNN Models
Bi, Xiaohan, Qi, Binhang, Sun, Hailong, Gao, Xiang, Yu, Yue, Liang, Xiaojun
With the growing incorporation of deep neural network (DNN) models into modern software systems, the prohibitive construction costs have become a significant challenge. Model reuse has been widely applied to reduce training costs, but indiscriminately reusing entire models may incur significant inference overhead. Consequently, DNN modularization has gained attention, enabling module reuse by decomposing DNN models. The emerging modularizing-while-training (MwT) paradigm, which incorporates modularization into training, outperforms modularizing-after-training approaches. However, existing MwT methods focus on small-scale CNN models at the convolutional kernel level and struggle with diverse DNNs and large-scale models, particularly Transformer-based models. To address these limitations, we propose NeMo, a scalable and generalizable MwT approach. NeMo operates at the neuron level, the fundamental component common to all DNNs, ensuring applicability to Transformers and various architectures. We design a contrastive learning-based modular training method with an effective composite loss function, enabling scalability to large-scale models. Comprehensive experiments on two Transformer-based models and four CNN models across two classification datasets demonstrate NeMo's superiority over state-of-the-art MwT methods. Results show average gains of 1.72% in module classification accuracy and a 58.10% reduction in module size, demonstrating efficacy across both CNN and large-scale Transformer-based models. A case study on open-source projects shows NeMo's potential benefits in practical scenarios, offering a promising approach for scalable and generalizable DNN modularization.
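To make the idea of a contrastive composite loss concrete, here is a minimal sketch in the spirit of such objectives: a task loss plus a term that pulls the neuron-activation masks of same-class samples together (cohesion) and pushes masks of different-class samples apart (coupling). The function names, the cosine-similarity formulation, and the weighting are assumptions for illustration, not NeMo's actual loss.

```python
import math

def cosine(u, v):
    # cosine similarity of two activation-mask vectors
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def composite_modular_loss(task_loss, masks, labels,
                           w_cohesion=1.0, w_coupling=1.0):
    """Illustrative composite objective: task loss plus averaged
    cohesion (same-class masks should be similar) and coupling
    (different-class masks should be dissimilar) penalties."""
    cohesion = coupling = 0.0
    n_same = n_diff = 0
    for i in range(len(masks)):
        for j in range(i + 1, len(masks)):
            sim = cosine(masks[i], masks[j])
            if labels[i] == labels[j]:
                cohesion += 1.0 - sim       # want sim -> 1
                n_same += 1
            else:
                coupling += max(sim, 0.0)   # want sim -> 0
                n_diff += 1
    cohesion = cohesion / n_same if n_same else 0.0
    coupling = coupling / n_diff if n_diff else 0.0
    return task_loss + w_cohesion * cohesion + w_coupling * coupling

# Two samples per class with disjoint neuron masks: both penalty
# terms vanish, so the loss reduces to the task loss alone.
masks = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
loss = composite_modular_loss(0.5, masks, [0, 0, 1, 1])
```

Training with such a term drives each class toward its own compact neuron subset, which is what later allows a small module to be sliced out per class.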
Reviews: Efficient state-space modularization for planning: theory, behavioral and neural signatures
The paper is very ambitious and develops a computational model of how the state space can be carved up (aggregated?) for planning. The model is applied to some intriguing data on human and rodent spatial navigation and seems to nicely pull together disparate threads from the literature. Unfortunately, the exposition is abstract enough that following the thread from the model to the results (simulations) was challenging, leaving unclear exactly how the model explains the behaviour and making evaluation difficult. It is not entirely clear how the different ideas introduced in the paper (modularity, centrality, description length) fit together into a single model of behaviour and the brain. From the text, it was also not clear to me how the simulations and predictions for the different behavioural tasks were generated.
Revisiting Vacuous Reduct Semantics for Abstract Argumentation (Extended Version)
Blümel, Lydia, Thimm, Matthias
We consider the notion of a vacuous reduct semantics for abstract argumentation frameworks, which, given two abstract argumentation semantics σ and τ, refines σ (base condition) by accepting only those σ-extensions that have no non-empty τ-extension in their reduct (vacuity condition). We give a systematic overview of vacuous reduct semantics resulting from combining different admissibility-based and conflict-free semantics, and we present a principle-based analysis of vacuous reduct semantics in general. We provide criteria for the inheritance of principle satisfaction by a vacuous reduct semantics from its base and vacuity conditions, for established as well as recently introduced principles in the context of weak argumentation semantics. We also conduct a principle-based analysis for the special case of undisputed semantics.
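The construction in the abstract is generic enough to sketch directly: take the σ-extensions, and keep only those whose reduct admits no non-empty τ-extension. The sketch below uses the standard reduct (remove the extension and everything it attacks) and, purely for illustration, instantiates both σ and τ with conflict-free sets; the paper's actual semantics combinations may differ.

```python
from itertools import combinations

def conflict_free_sets(af):
    """All conflict-free argument sets of af = (arguments, attacks)."""
    args, attacks = af
    out = []
    for r in range(len(args) + 1):
        for combo in combinations(sorted(args), r):
            s = set(combo)
            if not any((x, y) in attacks for x in s for y in s):
                out.append(s)
    return out

def reduct(af, ext):
    """Standard reduct: remove ext and everything it attacks,
    keeping only the attacks among surviving arguments."""
    args, attacks = af
    removed = ext | {t for (a, t) in attacks if a in ext}
    kept = args - removed
    return kept, {(a, t) for (a, t) in attacks if a in kept and t in kept}

def vacuous_reduct_extensions(af, sigma, tau):
    """sigma-extensions whose reduct has no non-empty tau-extension
    (the vacuity condition from the abstract)."""
    return [e for e in sigma(af)
            if all(not f for f in tau(reduct(af, e)))]

# Mutual attack between a and b; sigma = tau = conflict-free sets.
af = ({"a", "b"}, {("a", "b"), ("b", "a")})
exts = vacuous_reduct_extensions(af, conflict_free_sets, conflict_free_sets)
# The empty set is rejected: its reduct is the whole framework,
# which still has non-empty conflict-free sets. {a} and {b} survive.
```

The vacuity condition thus acts as a maximality-like filter: an extension is accepted only when nothing acceptable is left over after taking its reduct.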
Machine learning's own Industrial Revolution
Luo, Yuan, Han, Song, Liu, Jingjing
Machine learning is expected to enable the next Industrial Revolution. However, lacking standardized and automated assembly networks, ML faces significant challenges in meeting ever-growing enterprise demands and empowering broad industries. In this Perspective, we argue that ML needs to first complete its own Industrial Revolution, elaborate on how to best achieve its goals, and discuss new opportunities to enable rapid translation from ML's innovation frontier to mass production and utilization.
Modularizing while Training: A New Paradigm for Modularizing DNN Models
Qi, Binhang, Sun, Hailong, Zhang, Hongyu, Zhao, Ruobing, Gao, Xiang
Deep neural network (DNN) models have become increasingly crucial components in intelligent software systems. However, training a DNN model is typically expensive in terms of both time and money. To address this issue, researchers have recently focused on reusing existing DNN models, borrowing the idea of code reuse in software engineering. However, reusing an entire model could cause extra overhead or inherit weaknesses from its undesired functionalities. Hence, existing work proposes to decompose an already trained model into modules, i.e., modularizing-after-training, and enable module reuse. Since trained models are not built for modularization, modularizing-after-training incurs huge overhead and model accuracy loss. In this paper, we propose a novel approach that incorporates modularization into the model training process, i.e., modularizing-while-training (MwT). We train a model to be structurally modular through two loss functions that optimize intra-module cohesion and inter-module coupling. We have implemented the proposed approach for modularizing Convolutional Neural Network (CNN) models in this work. The evaluation results on representative models demonstrate that MwT outperforms the state-of-the-art approach. Specifically, the accuracy loss caused by MwT is only 1.13 percentage points, which is 1.76 percentage points less than that of the baseline. The kernel retention rate of the modules generated by MwT is only 14.58%, a reduction of 74.31% over the state-of-the-art approach. Furthermore, the total time cost required for training and modularizing is only 108 minutes, half that of the baseline.
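The kernel retention rate quoted above can be made concrete with a small sketch: treat each module as the set of convolutional kernels it keeps from the full model, and average the kept fraction over modules. This is an illustrative reading of the metric; the paper's exact measurement protocol may differ.

```python
def kernel_retention_rate(module_masks, total_kernels):
    """Average fraction of the full model's convolutional kernels
    retained per module. module_masks maps each module (e.g. a
    target class) to the set of kernel indices it keeps."""
    if not module_masks:
        return 0.0
    kept = sum(len(m) for m in module_masks.values())
    return kept / (len(module_masks) * total_kernels)

# Hypothetical example: two modules carved out of a 10-kernel model,
# keeping 3 and 2 kernels respectively -> (0.3 + 0.2) / 2 = 0.25.
rate = kernel_retention_rate({"cat": {0, 1, 2}, "dog": {2, 3}}, 10)
```

A low retention rate is what makes module reuse cheaper than whole-model reuse: each module carries only the kernels relevant to its own functionality.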