Unesco adopts global standards on 'wild west' field of neurotechnology
The Unesco standards define a new category of data, 'neural data', and suggest guidelines governing its protection. The UN body's recommendations are driven by AI advances and the proliferation of consumer-oriented neurotech devices. It is the latest move in a growing international effort to put guardrails around a burgeoning frontier - technologies that harness data from the brain and nervous system. Unesco has adopted a set of global standards on the ethics of neurotechnology, a field that has been described as "a bit of a wild west". "There is no control," said Unesco's chief of bioethics, Dafna Feinholz.
- North America > United States > Virginia > Virginia County (0.61)
- North America > United States > Texas > Yoakum County (0.61)
- North America > United States > Texas > Wichita County (0.61)
- (4 more...)
- Law (0.91)
- Leisure & Entertainment > Sports (0.72)
- Health & Medicine > Therapeutic Area > Neurology (0.71)
- Government > Regional Government > North America Government > United States Government (0.71)
- North America > Canada > Quebec > Montreal (0.04)
- Asia > China > Guangdong Province (0.04)
Partitioned Memory Storage Inspired Few-Shot Class-Incremental Learning
Zhang, Renye, Yin, Yimin, Zhang, Jinghua
Current mainstream deep learning techniques exhibit an over-reliance on extensive training data and a lack of adaptability to the dynamic world, marking a considerable disparity from human intelligence. To bridge this gap, Few-Shot Class-Incremental Learning (FSCIL) has emerged, focusing on continuous learning of new categories from limited samples without forgetting old knowledge. Existing FSCIL studies typically use a single model to learn knowledge across all sessions, inevitably leading to the stability-plasticity dilemma. Unlike machines, humans store varied knowledge in different cerebral cortices. Inspired by this characteristic, our paper develops a method that learns an independent model for each session, which inherently prevents catastrophic forgetting. During the testing stage, our method integrates Uncertainty Quantification (UQ) for model deployment. Our method provides a fresh viewpoint on FSCIL and demonstrates state-of-the-art performance on the CIFAR-100 and mini-ImageNet datasets.
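The test-time idea above can be sketched in a few lines: keep one independent classifier per incremental session, score each model's prediction with a simple uncertainty measure (entropy here), and trust the most certain model. This is a minimal illustration, assuming entropy-based UQ and a per-session list of probability-outputting models; the paper's actual architecture and UQ mechanism may differ.

```python
import math
from typing import Callable, List, Sequence, Tuple

def entropy(probs: Sequence[float]) -> float:
    """Shannon entropy of a probability vector; lower means more certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def predict_with_uq(session_models: List[Callable], x) -> Tuple[int, int]:
    """Query every per-session model and keep the most certain prediction.

    Returns (session id, predicted class index). `session_models` is a
    hypothetical list of callables mapping an input to class probabilities.
    """
    best = None
    for sid, model in enumerate(session_models):
        probs = model(x)
        h = entropy(probs)
        if best is None or h < best[0]:
            label = max(range(len(probs)), key=probs.__getitem__)
            best = (h, sid, label)
    return best[1], best[2]
```

Because each session's model is trained and frozen independently, old knowledge is never overwritten; the uncertainty score only decides which frozen expert answers.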
- Europe > Finland > Northern Ostrobothnia > Oulu (0.04)
- Asia > China > Hunan Province > Changsha (0.04)
Stack Trace Deduplication: Faster, More Accurately, and in More Realistic Scenarios
Shibaev, Egor, Sushentsev, Denis, Golubev, Yaroslav, Khvorov, Aleksandr
In large-scale software systems, there are often no fully-fledged bug reports with human-written descriptions when an error occurs. In this case, developers rely on stack traces, i.e., series of function calls that led to the error. Since there can be tens or even hundreds of thousands of them describing the same issue from different users, automatic deduplication into categories is necessary to allow for processing. Recent works have proposed powerful deep learning-based approaches for this, but they are evaluated and compared in isolation from real-life workflows, and it is not clear whether they will actually work well at scale. To overcome this gap, this work presents three main contributions: a novel model, an industry-based dataset, and a multi-faceted evaluation. Our model consists of two parts - (1) an embedding model with byte-pair encoding and approximate nearest neighbor search to quickly find the stack traces most relevant to the incoming one, and (2) a reranker that re-ranks the most fitting stack traces, taking into account the repeated frames between them. To complement the existing datasets collected from open-source projects, we share with the community SlowOps - a dataset of stack traces from IntelliJ-based products developed by JetBrains, which has an order of magnitude more stack traces per category. Finally, we carry out an evaluation that strives to be realistic: measuring not only the accuracy of categorization, but also the operation time and the ability to create new categories. The evaluation shows that our model strikes a good balance - it outperforms other models on both the open-source datasets and SlowOps, while also being faster than most. We release all of our code and data, and hope that our work can pave the way to further practice-oriented research in the area.
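The two-stage pipeline described above can be illustrated with a deliberately simplified stand-in: stage one retrieves candidate categories by a cheap similarity over frames (plain Jaccard here, in place of the learned embedding with ANN search), and stage two reranks candidates by how many frames they share with the query. All names and the similarity choices are illustrative assumptions, not the paper's actual model.

```python
from typing import Dict, List

def jaccard(a: List[str], b: List[str]) -> float:
    """Cheap set similarity between two lists of frame names."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve(query: List[str], index: Dict[int, List[str]], k: int = 3) -> List[int]:
    """Stage 1: top-k candidate categories by frame similarity."""
    ranked = sorted(index, key=lambda cid: jaccard(query, index[cid]), reverse=True)
    return ranked[:k]

def rerank(query: List[str], index: Dict[int, List[str]], candidates: List[int]) -> int:
    """Stage 2: prefer the candidate sharing the most frames with the query."""
    return max(candidates, key=lambda cid: len(set(query) & set(index[cid])))
```

The split matters for the paper's realism argument: the first stage must be fast enough to scan an industrial-scale index, while the more expensive reranker only sees a handful of candidates.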
- Research Report > Promising Solution (0.66)
- Research Report > New Finding (0.46)
- Information Technology > Software (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Information Retrieval (0.88)
NC-NCD: Novel Class Discovery for Node Classification
Hou, Yue, Chen, Xueyuan, Zhu, He, Liu, Romei, Shi, Bowen, Liu, Jiaheng, Wu, Junran, Xu, Ke
Novel Class Discovery (NCD) involves identifying new categories within unlabeled data by utilizing knowledge acquired from previously established categories. However, existing NCD methods often struggle to maintain a balance between the performance of old and new categories. Discovering unlabeled new categories in a class-incremental way is more practical but also more challenging, as it is frequently hindered by either catastrophic forgetting of old categories or an inability to learn new ones. Furthermore, the implementation of NCD on continuously scalable graph-structured data remains an under-explored area. In response to these challenges, we introduce for the first time a more practical NCD scenario for node classification (i.e., NC-NCD), and propose a novel self-training framework with prototype replay and distillation called SWORD, adapted to our NC-NCD setting. Our approach enables the model to cluster unlabeled new-category nodes after learning labeled nodes while preserving performance on old categories without reliance on old-category nodes. SWORD achieves this by employing a self-training strategy to learn new categories and preventing the forgetting of old categories through the joint use of feature prototypes and knowledge distillation. Extensive experiments on four common benchmarks demonstrate the superiority of SWORD over other state-of-the-art methods.
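The three training signals named above (self-training on pseudo-labels, prototype replay, and distillation from the previous model) can be sketched as one combined loss. This is a hedged, minimal illustration: the loss weights, function names, and the plain KL distillation term are assumptions, not the actual SWORD formulation.

```python
import math
from typing import Sequence

def cross_entropy(probs: Sequence[float], label: int) -> float:
    """Negative log-likelihood of the given label."""
    return -math.log(max(probs[label], 1e-12))

def kl_div(p: Sequence[float], q: Sequence[float]) -> float:
    """KL(p || q): keeps the new model's outputs close to the old model's."""
    return sum(pi * math.log(pi / max(qi, 1e-12)) for pi, qi in zip(p, q) if pi > 0)

def sword_style_loss(new_probs, pseudo_label, proto_probs, proto_label,
                     old_probs, w_replay: float = 1.0, w_distill: float = 1.0) -> float:
    """Self-training CE on a pseudo-labeled new-category node,
    plus replay CE on a stored old-class prototype,
    plus distillation KL against the previous model's output."""
    return (cross_entropy(new_probs, pseudo_label)
            + w_replay * cross_entropy(proto_probs, proto_label)
            + w_distill * kl_div(old_probs, new_probs))
```

The key property the abstract claims is captured here: old-category knowledge enters the loss only through stored prototypes and the frozen old model's outputs, never through old-category nodes themselves.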
- Europe > Austria > Vienna (0.14)
- North America > United States > Idaho > Ada County > Boise (0.05)
- Asia > China > Beijing > Beijing (0.05)
- (2 more...)
- Information Technology (0.67)
- Education > Educational Setting (0.46)
Class-Incremental Learning with CLIP: Adaptive Representation Adjustment and Parameter Fusion
Huang, Linlan, Cao, Xusheng, Lu, Haori, Liu, Xialei
Class-incremental learning is a challenging problem, where the goal is to train a model that can classify data from an increasing number of classes over time. Vision-language pre-trained models such as CLIP demonstrate a generalization ability that allows them to excel in class-incremental learning with completely frozen parameters. However, further adaptation to downstream tasks by simply fine-tuning the model leads to severe forgetting. Most existing works with pre-trained models assume that the forgetting of old classes is uniform when the model acquires new knowledge. In this paper, we propose a method named Adaptive Representation Adjustment and Parameter Fusion (RAPF). During training on new data, we measure the influence of new classes on old ones and adjust the representations using textual features. After training, we employ a decomposed parameter fusion to further mitigate forgetting during adapter module fine-tuning. Experiments on several conventional benchmarks show that our method achieves state-of-the-art results. Our code is available at \url{https://github.com/linlany/RAPF}.
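The parameter-fusion step above can be illustrated in its simplest form: after fine-tuning an adapter on new classes, blend its weights with the pre-fine-tuning weights so that old knowledge is not fully overwritten. The plain convex combination below is only an assumption-laden sketch; RAPF's actual fusion is decomposed (factorization-based), not elementwise.

```python
from typing import List

def fuse_params(old_w: List[float], new_w: List[float], alpha: float = 0.5) -> List[float]:
    """Convex combination of adapter weights.

    alpha close to 1 preserves old-task behavior; alpha close to 0
    adopts the newly fine-tuned weights. A hypothetical stand-in for
    RAPF's decomposed fusion.
    """
    assert len(old_w) == len(new_w)
    return [alpha * o + (1 - alpha) * n for o, n in zip(old_w, new_w)]
```

Interpolating between checkpoints like this is a common forgetting-mitigation heuristic; the paper's contribution is choosing *how* to decompose the parameters before fusing rather than blending them uniformly.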
- Asia > China > Tianjin Province > Tianjin (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
UK spy agencies want to relax 'burdensome' laws on AI data use
The UK intelligence agencies are lobbying the government to weaken surveillance laws they argue place a "burdensome" limit on their ability to train artificial intelligence models with large amounts of personal data. The proposals would make it easier for GCHQ, MI6 and MI5 to use certain types of data, by relaxing safeguards designed to protect people's privacy and prevent the misuse of sensitive information. Privacy experts and civil liberties groups have expressed alarm at the move, which would unwind some of the legal protection introduced in 2016 after disclosures by Edward Snowden about intrusive state surveillance. The UK's spy agencies are increasingly using AI-based systems to help analyse the vast and growing quantities of data they hold. Privacy campaigners argue rapidly advancing AI capabilities require stronger rather than weaker regulation.