"I would love this to be like an assistant, not the teacher": a voice of the customer perspective of what distance learning students want from an Artificial Intelligence Digital Assistant

Rienties, Bart, Domingue, John, Duttaroy, Subby, Herodotou, Christothea, Tessarolo, Felipe, Whitelock, Denise

arXiv.org Artificial Intelligence

With the release of Generative AI systems such as ChatGPT, an increasing interest in using Artificial Intelligence (AI) has been observed across domains, including higher education. While emerging statistics show the popularity of using AI amongst undergraduate students, little is yet known about students' perceptions regarding AI, including self-reported benefits and concerns from their actual usage, particularly in distance learning contexts. Using a two-step, mixed-methods approach, we examined the perceptions of ten online and distance learning students from diverse disciplines regarding the design of a hypothetical AI Digital Assistant (AIDA). In the first step, we captured students' perceptions via interviews, while the second step supported the triangulation of data by enabling students to share, compare, and contrast perceptions with those of peers. All participants agreed on the usefulness of such an AI tool while studying and reported benefits from using it for real-time assistance and query resolution, support for academic tasks, personalisation and accessibility, together with emotional and social support. Students' concerns related to the ethical and social implications of implementing AIDA, data privacy and data use, operational challenges, academic integrity and misuse, and the future of education. Implications for the design of AI-tailored systems are also discussed.


Learn to Not Link: Exploring NIL Prediction in Entity Linking

Zhu, Fangwei, Yu, Jifan, Jin, Hailong, Li, Juanzi, Hou, Lei, Sui, Zhifang

arXiv.org Artificial Intelligence

Entity linking models have achieved significant success via utilizing pretrained language models to capture semantic features. However, the NIL prediction problem, which aims to identify mentions without a corresponding entity in the knowledge base, has received insufficient attention. We categorize mentions linking to NIL into Missing Entity and Non-Entity Phrase, and propose an entity linking dataset NEL that focuses on the NIL prediction problem. NEL takes ambiguous entities as seeds, collects relevant mention context in the Wikipedia corpus, and ensures the presence of mentions linking to NIL by human annotation and entity masking. We conduct a series of experiments with the widely used bi-encoder and cross-encoder entity linking models; the results show that both types of NIL mentions in the training data have a significant influence on the accuracy of NIL prediction. Our code and dataset can be accessed at https://github.com/solitaryzero/NIL_EL
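To make the NIL decision concrete: one common baseline is to fall back to NIL whenever no candidate entity scores high enough. The sketch below is an illustration of that idea only; the `link_with_nil` helper and the `threshold` value are assumptions for demonstration, not the paper's method.

```python
def link_with_nil(scores, threshold=0.5):
    """Pick the best-scoring candidate entity, or NIL if none clears the bar.

    scores: dict mapping candidate entity name -> model score (e.g. from a
    bi-encoder or cross-encoder). An empty candidate set also yields NIL,
    covering the Missing Entity case where retrieval finds nothing.
    """
    if not scores:
        return "NIL"
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "NIL"
```

A threshold like this is the simplest "learn to not link" mechanism; the paper's experiments suggest that whether NIL mentions appear in training data matters more than the exact cutoff.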


Part 2: Canada's evolving artificial intelligence and privacy regime

#artificialintelligence

The publication of this series was inspired by the release of ChatGPT, a generative artificial intelligence (AI) chatbot developed by OpenAI. ChatGPT uses machine learning and natural language processing to provide relatively sophisticated and human-like responses to almost any question. Unlike traditional AI systems, ChatGPT is a generative AI platform, which means that the content it creates is "new," rather than a reiteration of something that already exists. As ChatGPT demonstrates, content can be produced through generative AI in a matter of seconds and may be composed of images, videos, audio, text or even code. The reality is that generative AI is well on the way to becoming not just faster and cheaper, but in some cases better than what humans create by hand.


An active inference model of car following: Advantages and applications

Wei, Ran, McDonald, Anthony D., Garcia, Alfredo, Markkula, Gustav, Engstrom, Johan, O'Kelly, Matthew

arXiv.org Artificial Intelligence

Driver process models play a central role in the testing, verification, and development of automated and autonomous vehicle technologies. Prior models developed from control theory and physics-based rules are limited in automated vehicle applications due to their restricted behavioral repertoire. Data-driven machine learning models are more capable than rule-based models but are limited by the need for large training datasets and their lack of interpretability, i.e., an understandable link between input data and output behaviors. We propose a novel car following modeling approach using active inference, which has comparable behavioral flexibility to data-driven models while maintaining interpretability. We assessed the proposed model, the Active Inference Driving Agent (AIDA), through a benchmark analysis against the rule-based Intelligent Driver Model, and two neural network Behavior Cloning models. The models were trained and tested on a real-world driving dataset using a consistent process. The testing results showed that the AIDA predicted driving controls significantly better than the rule-based Intelligent Driver Model and had similar accuracy to the data-driven neural network models in three out of four evaluations. Subsequent interpretability analyses illustrated that the AIDA's learned distributions were consistent with driver behavior theory and that visualizations of the distributions could be used to directly comprehend the model's decision making process and correct model errors attributable to limited training data. The results indicate that the AIDA is a promising alternative to black-box data-driven models and suggest a need for further research focused on modeling driving style and model training with more diverse datasets.
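A core idea in active inference is that the agent selects actions expected to bring observations in line with its prior preferences (here, a preferred headway). The toy one-step lookahead below gestures at that action-selection loop; the kinematics, action set, and all numbers are illustrative assumptions, not the AIDA model itself.

```python
def choose_acceleration(headway, speed_rel, desired_headway=30.0,
                        actions=(-2.0, 0.0, 2.0), dt=0.1):
    """Pick the acceleration whose predicted next gap is closest to the
    preferred headway -- a crude stand-in for minimizing expected free
    energy under a prior preference over headway (illustrative only).

    headway: current gap to the lead vehicle (m)
    speed_rel: lead speed minus ego speed (m/s); ego acceleration closes the gap
    """
    def predicted_gap(a):
        # One-step constant-acceleration kinematics for the gap.
        return headway + speed_rel * dt - 0.5 * a * dt ** 2
    return min(actions, key=lambda a: abs(predicted_gap(a) - desired_headway))
```

The actual AIDA learns full probability distributions over beliefs and preferences from data, which is what makes its behavior both flexible and inspectable.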


The Artificial Intelligence and Data Act (AIDA) – Companion document

#artificialintelligence

Artificial intelligence (AI) systems are poised to have a significant impact on the lives of Canadians and the operations of Canadian businesses. The AIDA represents an important milestone in implementing the Digital Charter and ensuring that Canadians can trust the digital technologies that they use every day. The design, development, and use of AI systems must be safe, and must respect the values of Canadians. The framework proposed in the AIDA is the first step towards a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses. The Government intends to build on this framework through an open and transparent regulatory development process. Consultations would be organized to gather input from a variety of stakeholders across Canada to ensure that the regulations achieve outcomes aligned with Canadian values. The global interconnectedness of the digital economy requires that the regulation of AI systems in the marketplace be coordinated internationally. Canada has drawn from and will work together with international partners – such as the European Union (EU), the United Kingdom, and the United States (US) – to align approaches, in order to ensure that Canadians are protected globally and that Canadian firms can be recognized internationally as meeting robust standards.


AIDA: Legal Judgment Predictions for Non-Professional Fact Descriptions via Partial-and-Imbalanced Domain Adaptation

Xiao, Guangyi, Liu, Xinlong, Chen, Hao, Guo, Jingzhi, Gong, Zhiguo

arXiv.org Artificial Intelligence

In this paper, we study the problem of legal domain adaptation from an imbalanced source domain to a partial target domain. The task aims to improve legal judgment predictions for non-professional fact descriptions. We formulate this task as a partial-and-imbalanced domain adaptation problem. Though deep domain adaptation has achieved cutting-edge performance in many unsupervised domain adaptation tasks, the negative transfer of samples in non-shared classes makes it hard for current domain adaptation models to solve the partial-and-imbalanced transfer problem. In this work, we explore large-scale non-shared but related class data in the source domain with a hierarchy weighting adaptation to tackle this limitation. We propose to embed a novel pArtial Imbalanced Domain Adaptation technique (AIDA) in the deep learning model, which can jointly borrow sibling knowledge from non-shared classes to shared classes in the source domain and further transfer the shared-class knowledge from the source domain to the target domain. Experimental results show that our model outperforms the state-of-the-art algorithms.
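The imbalanced-source half of the problem is often attacked by reweighting classes so that rare charges are not drowned out by frequent ones. The inverse-frequency sketch below illustrates that general idea only; AIDA's hierarchy weighting over sibling classes is more involved, and `class_weights` is a hypothetical helper, not the paper's algorithm.

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights for an imbalanced label set.

    Returns a dict mapping class -> weight, normalized so that a uniformly
    distributed label set would give every class weight 1.0. Rare classes
    get weights > 1, frequent classes < 1 (illustrative baseline only).
    """
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}
```

Such weights would typically scale a per-sample cross-entropy loss, so gradient updates from minority-class examples count for more.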


A DNN Optimizer that Improves over AdaBelief by Suppression of the Adaptive Stepsize Range

Zhang, Guoqiang, Niwa, Kenta, Kleijn, W. Bastiaan

arXiv.org Artificial Intelligence

We make contributions towards improving adaptive-optimizer performance. Our improvements are based on suppression of the range of adaptive stepsizes in the AdaBelief optimizer. Firstly, we show that the particular placement of the parameter epsilon within the update expressions of AdaBelief reduces the range of the adaptive stepsizes, making AdaBelief closer to SGD with momentum. Secondly, we extend AdaBelief by further suppressing the range of the adaptive stepsizes. To achieve the above goal, we perform mutual layerwise vector projections between the gradient g_t and its first momentum m_t before using them to estimate the second momentum. The new optimization method is referred to as Aida. Thirdly, extensive experimental results show that Aida outperforms nine optimizers when training transformers and LSTMs for NLP, and VGG and ResNet for image classification on CIFAR10 and CIFAR100, while matching the best performance of the nine methods when training WGAN-GP models for image generation tasks. Furthermore, Aida produces higher validation accuracies than AdaBelief for training ResNet18 over ImageNet. Code is available at this URL
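The mechanism named in the abstract, mutual vector projection between the gradient g_t and its first momentum m_t, might look roughly like the following. This is a sketch inferred from the abstract alone: the function name, iteration count `k`, and `eps` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def mutual_projection(g, m, k=2, eps=1e-20):
    """Repeatedly project g and m onto each other (illustrative sketch).

    Each round replaces m with its projection onto g and g with its
    projection onto m. This pulls the two vectors toward a common
    direction, shrinking the residual (g - m) that an AdaBelief-style
    second moment is built from, e.g. s_t = EMA of (g - m)**2, which in
    turn narrows the spread of the adaptive stepsizes.
    """
    for _ in range(k):
        m_new = (np.dot(m, g) / (np.dot(g, g) + eps)) * g  # project m onto g
        g_new = (np.dot(g, m) / (np.dot(m, m) + eps)) * m  # project g onto m
        g, m = g_new, m_new
    return g, m
```

Intuitively, a smaller (g - m) residual means less variation in the per-coordinate second moment, which is exactly the "suppression of the adaptive stepsize range" the abstract describes.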


In Hong Kong, designers try out new assistant: AI fashion maven AiDA

#artificialintelligence

Dec 27 (Reuters) - At the Fashion X AI show in Hong Kong, attendees noticed a certain "alien" quality about the new clothes modelled on the event's narrow catwalk - and the designs were, in fact, not entirely human. The show put more than 80 outfits from 14 designers in the spotlight, all of which were created with the help of the artificial intelligence software AiDA, short for "AI-based Interactive Design Assistant". The software was developed by PhD students and academics at the Hong Kong-based AiDLab. Masked in monochrome blue, wearing outfits that ranged from down jackets to translucent skirts, models strutted past rows of critics and fashion designers. Attendee Cynthia Tse said it felt like she was witnessing the future of fashion at the show on Dec. 19.


Liability of AI applications under scrutiny in UK, Canada

#artificialintelligence

Artificial intelligence (AI) applications, particularly those focused on biometric data gathering, have recently come under another round of scrutiny both in Europe and Canada. The European Commission proposed the AI Liability Directive last week, a set of rules designed to aid redress for people whose privacy was harmed by AI-powered and digital devices like self-driving cars, voice assistants and drones. According to BBC reporting, the Directive may operate alongside the EU's proposed AI Act if successfully turned into law, introducing a "presumption of causality" for those claiming injuries by AI-enabled products. In other words, individuals harmed by these systems would not have to provide technical explanations for how AI systems work but merely show how they have harmed them in practical terms. "The objective of this proposal is to promote the rollout of trustworthy AI to harvest its full benefits for the internal market. It does so by ensuring victims of damage caused by AI obtain equivalent protection to victims of damage caused by products in general," reads the text of the Directive.


Regulating AI: What marketers need to know

#artificialintelligence

In June, the Canadian government proposed new legislation to regulate artificial intelligence (AI). The proposed Artificial Intelligence and Data Act (AIDA) is part of Bill C-27, which also proposes a new privacy framework, the Consumer Privacy Protection Act (for more on federal privacy reform, see our recent blog). If passed, AIDA would be the first comprehensive Canadian law regulating AI. AIDA is intended to promote the responsible use of AI and to ensure that high-impact AI systems are developed in a way that mitigates the risk of harm and bias.
