
Logical Credal Networks

Neural Information Processing Systems

Many (if not all) real-world applications require efficient handling of uncertainty and a compact representation of a wide variety of knowledge. Indeed, complex concepts and relationships that typically comprise expert knowledge may be difficult to express in graphical models but can be represented compactly using classical logic.



Reconfiguring Digital Accountability: AI-Powered Innovations and Transnational Governance in a Postnational Accounting Context

Li, Claire, Freeborn, David

arXiv.org Artificial Intelligence

This study explores how AI-powered digital innovations are reshaping organisational accountability in a transnational governance context. As AI systems increasingly mediate decision-making in domains such as auditing and financial reporting, traditional mechanisms of accountability, based on control, transparency, and auditability, are being destabilised. We integrate the Technology Acceptance Model (TAM), Actor-Network Theory (ANT), and institutional theory to examine how organisations adopt AI technologies in response to regulatory, ethical, and cultural pressures that transcend national boundaries. We argue that accountability is co-constructed within global socio-technical networks, shaped not only by user perceptions but also by governance logics and normative expectations. Extending TAM, we incorporate compliance and legitimacy as key factors in perceived usefulness and usability. Drawing on ANT, we reconceptualise accountability as a relational and emergent property of networked assemblages. We propose two organisational strategies, internal governance reconfiguration and external actor-network engagement, to foster responsible, legitimate, and globally accepted AI adoption in the accounting domain.


Motion-enhancement to Echocardiography Segmentation via Inserting a Temporal Attention Module: An Efficient, Adaptable, and Scalable Approach

Hasan, Md. Kamrul, Yang, Guang, Yap, Choon Hwai

arXiv.org Artificial Intelligence

Cardiac anatomy segmentation is essential for clinical assessment of cardiac function and disease diagnosis to inform treatment and intervention. Deep learning (DL) algorithms have significantly improved segmentation accuracy compared to traditional image processing approaches. More recently, studies have shown that enhancing DL segmentation with motion information can improve it further. A range of methods for injecting motion information has been proposed, but many either increase the dimensionality of input images (which is computationally expensive) or insert motion through suboptimal mechanisms, such as non-DL registration, non-attention-based networks, or single-headed attention. Here, we present a computation-efficient alternative: a novel, scalable temporal attention module (TAM) with a multi-headed, KQV-projection cross-attention architecture that extracts temporal feature interactions multiple times. The module can be seamlessly integrated into a wide range of existing CNN- or Transformer-based networks, providing flexibility for inclusion in future implementations. Extensive evaluations on different cardiac datasets, 2D echocardiography (CAMUS) and 3D echocardiography (MITEA), demonstrate the model's effectiveness when integrated into well-established backbone networks such as UNet, FCN8s, UNetR, SwinUNetR, and the recent I2UNet. We further find that the optimized TAM-enhanced FCN8s network performs well compared to contemporary alternatives. Our results confirm TAM's robustness, scalability, and generalizability across diverse datasets and backbones.
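The core mechanism the abstract describes, multi-headed KQV cross-attention between features of neighbouring frames, can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the weight matrices, head count, and residual wiring are stand-ins for learned components.

```python
import numpy as np

def temporal_attention(curr, prev, Wq, Wk, Wv, n_heads=2):
    """Sketch of a multi-headed KQV cross-attention temporal module.

    Queries come from the current frame's features, keys/values from a
    neighbouring frame, so the output mixes motion cues into the current
    representation. Wq, Wk, Wv are stand-ins for learned projections.
    """
    T, d = curr.shape                      # tokens x feature dim
    q, k, v = curr @ Wq, prev @ Wk, prev @ Wv
    dh = d // n_heads                      # per-head feature dim
    out = np.empty_like(q)
    for h in range(n_heads):
        sl = slice(h * dh, (h + 1) * dh)
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(dh)
        # row-wise softmax over the previous frame's tokens
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        out[:, sl] = w @ v[:, sl]
    return curr + out                      # residual: module is drop-in
```

The residual connection is what makes such a module insertable into an existing CNN or Transformer backbone without disturbing its original feature pathway.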


Torque-Aware Momentum

Malviya, Pranshu, Mordido, Goncalo, Baratin, Aristide, Harikandeh, Reza Babanezhad, Dziugaite, Gintare Karolina, Pascanu, Razvan, Chandar, Sarath

arXiv.org Artificial Intelligence

Efficiently exploring complex loss landscapes is key to the performance of deep neural networks. While momentum-based optimizers are widely used in state-of-the-art setups, classical momentum can still struggle with large, misaligned gradients, leading to oscillations. To address this, we propose Torque-Aware Momentum (TAM), which introduces a damping factor based on the angle between the new gradients and the previous momentum, stabilizing the update direction during training. Empirical results show that TAM, which can be combined with both SGD and Adam, enhances exploration, handles distribution shifts more effectively, and improves generalization performance across various tasks, including image classification and large language model fine-tuning, when compared to classical momentum-based optimizers. Despite the wide range of optimization methods available in the literature, stochastic gradient descent (SGD), typically augmented with momentum (Kingma & Ba, 2015; Nesterov, 1983; Qian, 1999), remains the go-to approach for practitioners. Momentum accelerates convergence, particularly in the presence of high curvature (Cutkosky & Mehta, 2020b), small but consistent gradients, or noisy gradients. It also helps the optimizer navigate the loss landscape and escape local minima or saddle points by maintaining consistent update directions (Jin et al., 2018). While SGD with momentum (SGDM) has shown remarkable success in various scenarios, particularly in computer vision (Sutskever et al., 2013), it remains vulnerable to oscillations caused by large, misaligned gradients. In this work, we propose that minimizing the influence of misaligned gradients during momentum updates can preserve valuable information and improve the exploration capabilities of momentum-based methods.

Figure 1: Comparing momentum updates obtained using SGDM and TAM for a given SGD trajectory.
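The idea of damping momentum updates by the gradient/momentum angle can be sketched as below. The specific damping schedule here, tau = (1 + cos theta) / 2, is an assumption for illustration; the paper's exact formula may differ.

```python
import numpy as np

def tam_momentum_step(param, grad, m, lr=0.1, beta=0.9):
    """One torque-aware momentum step (illustrative sketch).

    tau is 1 when the new gradient aligns with the current momentum and
    0 when it directly opposes it, so conflicting gradients contribute
    little to the accumulated momentum instead of causing oscillation.
    """
    denom = np.linalg.norm(m) * np.linalg.norm(grad)
    cos = float(m @ grad / denom) if denom > 0 else 1.0
    tau = (1.0 + cos) / 2.0        # damping from the gradient/momentum angle
    m = beta * m + tau * grad      # damped momentum accumulation
    return param - lr * m, m
```

With a gradient opposed to the momentum, tau is 0 and the momentum simply decays by beta, whereas classical momentum would add the full conflicting gradient.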


Cognitively Inspired Energy-Based World Models

Gladstone, Alexi, Nanduru, Ganesh, Islam, Md Mofijul, Chadha, Aman, Li, Jundong, Iqbal, Tariq

arXiv.org Artificial Intelligence

One of the predominant methods for training world models is autoregressive prediction in the output space of the next element of a sequence. In Natural Language Processing (NLP), this takes the form of Large Language Models (LLMs) predicting the next token; in Computer Vision (CV), this takes the form of autoregressive models predicting the next frame/token/pixel. However, this approach differs from human cognition in several respects. First, human predictions about the future actively influence internal cognitive processes. Second, humans naturally evaluate the plausibility of predictions regarding future states. Third, building on this capability, humans allocate a dynamic amount of time to a prediction by assessing when it is sufficient. This adaptive process is analogous to System 2 thinking in psychology. All these capabilities are fundamental to the success of humans at high-level reasoning and planning. Therefore, to address the limitations of traditional autoregressive models lacking these human-like capabilities, we introduce Energy-Based World Models (EBWM). EBWM involves training an Energy-Based Model (EBM) to predict the compatibility of a given context and a predicted future state. In doing so, EBWM enables models to achieve all three facets of human cognition described above. Moreover, we developed a variant of the traditional autoregressive transformer tailored for Energy-Based models, termed the Energy-Based Transformer (EBT). Our results demonstrate that EBWM scales better with data and GPU hours than traditional autoregressive transformers in CV, and that EBWM offers promising early scaling in NLP. Consequently, this approach offers an exciting path toward training future models capable of System 2 thinking and intelligently searching across state spaces.
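The energy-based prediction loop the abstract describes, scoring the compatibility of a context and a candidate future state and refining until the prediction is judged sufficient, can be sketched in toy form. The quadratic energy and the matrix W are hypothetical stand-ins, not the paper's EBT architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(context, candidate, W):
    """Toy energy: low when the candidate is compatible with the context.
    W is a stand-in for learned parameters (hypothetical)."""
    return float(np.sum((candidate - W @ context) ** 2))

def predict(context, W, threshold=1e-3, lr=0.1, max_steps=100):
    """Refine a candidate future state by descending the energy,
    stopping early once the prediction is judged 'good enough' -- a
    rough analogue of allocating a dynamic amount of compute."""
    y = rng.standard_normal(context.shape[0])   # initial guess
    for step in range(max_steps):
        if energy(context, y, W) < threshold:   # prediction sufficient
            break
        # gradient of sum((y - Wx)^2) w.r.t. y is 2 * (y - Wx)
        y = y - lr * 2.0 * (y - W @ context)
    return y, step
```

Easy contexts terminate in few steps while hard ones consume the full budget, which is the System-2-style dynamic computation the abstract attributes to EBWM.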


Digital Health and Indoor Air Quality: An IoT-Driven Human-Centred Visualisation Platform for Behavioural Change and Technology Acceptance

Kureshi, Rameez Raja, Mishra, Bhupesh Kumar, Thakker, Dhavalkumar, Mazumdar, Suvodeep, Li, Xiao

arXiv.org Artificial Intelligence

The detrimental effects of air pollutants on human health have prompted increasing concerns regarding indoor air quality (IAQ). The emergence of digital health interventions and citizen science initiatives has provided new avenues for raising awareness, improving IAQ, and promoting behavioural changes. The Technology Acceptance Model (TAM) offers a theoretical framework to understand user acceptance and adoption of IAQ technology. This paper presents a case study using the COM-B model and Internet of Things (IoT) technology to design a human-centred digital visualisation platform, leading to behavioural changes and improved IAQ. The study also investigates users' acceptance and adoption of the technology, focusing on their experiences, expectations, and the impact on IAQ. Integrating IAQ sensing, digital health-related interventions, citizen science, and the TAM model offers opportunities to address IAQ challenges, enhance public health, and foster sustainable indoor environments. The analytical results show that factors such as human behaviour, indoor activities, and awareness play crucial roles in shaping IAQ.


UniGen: Universal Domain Generalization for Sentiment Classification via Zero-shot Dataset Generation

Choi, Juhwan, Kim, Yeonghwa, Yu, Seunguk, Yun, JungMin, Kim, YoungBin

arXiv.org Artificial Intelligence

Although pre-trained language models (PLMs) have exhibited great flexibility and versatility with prompt-based few-shot learning, they suffer from their extensive parameter size and limited applicability for inference. Recent studies have suggested using PLMs as dataset generators and training a tiny task-specific model for efficient inference. However, their applicability to various domains is limited because they tend to generate domain-specific datasets. In this work, we propose a novel approach to universal domain generalization that generates a dataset regardless of the target domain. This allows the tiny task model to generalize to any domain that shares the label space, thus enhancing the real-world applicability of the dataset generation paradigm. Our experiments indicate that the proposed method achieves generalizability across various domains while using a parameter set that is orders of magnitude smaller than that of PLMs.


Generative AI Adoption in Classroom in Context of Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT)

Ghimire, Aashish, Edwards, John

arXiv.org Artificial Intelligence

The burgeoning development of generative artificial intelligence (GenAI) and the widespread adoption of large language models (LLMs) in educational settings have sparked considerable debate regarding their efficacy and acceptability. Despite the potential benefits, the assimilation of these cutting-edge technologies among educators exhibits a broad spectrum of attitudes, from enthusiastic advocacy to profound skepticism. This study aims to dissect the underlying factors influencing educators' perceptions and acceptance of GenAI and LLMs. We conducted a survey among educators and analyzed the data through the frameworks of the Technology Acceptance Model (TAM) and Innovation Diffusion Theory (IDT). Our investigation reveals a strong positive correlation between the perceived usefulness of GenAI tools and their acceptance, underscoring the importance of demonstrating tangible benefits to educators. Additionally, the perceived ease of use emerged as a significant factor, though to a lesser extent, influencing acceptance. Our findings also show that the knowledge and acceptance of these tools are not uniform, suggesting that targeted strategies are required to address the specific needs and concerns of each adopter category to facilitate broader integration of AI tools in education.


Coffee producers worldwide grapple with new environmental laws aimed at protecting forests

FOX News

Le Van Tam is no stranger to how the vagaries of global trade can determine the fortunes of small coffee farmers like him. He first planted coffee in a patch of land outside Buon Ma Thuot city in Vietnam's Central Highland region in 1995. For years, his focus was on quantity, not quality. Tam used ample amounts of fertilizer and pesticides to boost his yields, and global prices determined how well he did.