Frame-Level Multi-Label Playing Technique Detection Using Multi-Scale Network and Self-Attention Mechanism
Li, Dichucheng, Che, Mingjin, Meng, Wenwu, Wu, Yulun, Yu, Yi, Xia, Fan, Li, Wei
Instrument playing technique (IPT) is a key element in enhancing the vividness of musical performance. As shown by the Guzheng numbered musical notation (a musical notation system widely used in China) in Fig.1, a complete automatic music transcription (AMT) system should contain IPT information in addition to pitch and onset information. IPT detection aims to classify the types of IPTs and locate the associated IPT boundaries in audio. IPT detection and modeling can be utilized in many applications of music information retrieval (MIR), like performance analysis [1] and AMT [2]. The research on IPT detection is still in its early stage. With the advancements in deep learning, deep neural networks have been increasingly used in more recent work [8, 9]. In [10], a convolutional recurrent neural network (CRNN) based model was proposed to classify IPTs in audio sequences concatenated by cello notes from 5 sound banks. To alleviate the computational redundancy caused by the sliding window in [10], Wang et al. [11] proposed a fully convolutional network (FCN) based end-to-end method to detect IPTs in segments concatenated by isolated Erhu notes. In [12], an additional onset detector was used, and its output was fused with IPT prediction in a post-processing step to improve the accuracy of IPT detection from monophonic audio sequences concatenated by …
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
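Frame-level multi-label IPT detection, as framed above, ultimately reduces to thresholding per-frame class probabilities and reading off contiguous runs as IPT segments with boundaries. A minimal, generic post-processing sketch (the 0.5 threshold and the segment format are illustrative assumptions, not the paper's exact decoder):

```python
import numpy as np

def detect_ipt_segments(frame_probs, threshold=0.5):
    """Turn frame-level multi-label IPT probabilities into labelled segments.

    frame_probs: (n_frames, n_ipt_classes) array of per-frame sigmoid outputs.
    Returns a list of (class_index, start_frame, end_frame) tuples with an
    exclusive end_frame.
    """
    active = frame_probs >= threshold            # per-frame activation map
    n_frames, n_classes = active.shape
    segments = []
    for c in range(n_classes):
        start = None
        for t in range(n_frames):
            if active[t, c] and start is None:
                start = t                        # a segment opens
            elif not active[t, c] and start is not None:
                segments.append((c, start, t))   # the segment closes
                start = None
        if start is not None:                    # segment runs to the last frame
            segments.append((c, start, n_frames))
    return segments
```

On a (4, 2) probability matrix with class 0 active in the first two frames and class 1 in the last two, this yields `[(0, 0, 2), (1, 2, 4)]`.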
How Does In-Context Learning Help Prompt Tuning?
Sun, Simeng, Liu, Yang, Iter, Dan, Zhu, Chenguang, Iyyer, Mohit
Fine-tuning large language models is becoming ever more impractical due to their rapidly-growing scale. This motivates the use of parameter-efficient adaptation methods such as prompt tuning (PT), which adds a small number of tunable embeddings to an otherwise frozen model, and in-context learning (ICL), in which demonstrations of the task are provided to the model in natural language without any additional training. Recently, Singhal et al. (2022) propose ``instruction prompt tuning'' (IPT), which combines PT with ICL by concatenating a natural language demonstration with learned prompt embeddings. While all of these methods have proven effective on different tasks, how they interact with each other remains unexplored. In this paper, we empirically study when and how in-context examples improve prompt tuning by measuring the effectiveness of ICL, PT, and IPT on five text generation tasks with multiple base language models. We observe that (1) IPT does \emph{not} always outperform PT, and in fact requires the in-context demonstration to be semantically similar to the test input to yield improvements; (2) PT is unstable and exhibits high variance, but combining PT and ICL (into IPT) consistently reduces variance across all five tasks; and (3) prompts learned for a specific source task via PT exhibit positive transfer when paired with in-context examples of a different target task. Our results offer actionable insights on choosing a suitable parameter-efficient adaptation method for a given task.
- Europe > Ireland > Leinster > County Dublin > Dublin (0.05)
- North America > United States > South Carolina (0.05)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- (7 more...)
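Mechanically, the IPT variant studied above is input construction: frozen demonstration embeddings are concatenated with a small set of tunable prompt embeddings and the embedded test input. A numpy sketch under stated assumptions (the toy vocabulary, dimensions, and the demonstration-prompt-input ordering are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_prompt = 16, 4

# Frozen embedding table standing in for the base LM's (hypothetical 100-token vocab).
embed = rng.normal(size=(100, d_model))

# Learned soft prompt: the only tunable parameters in PT and IPT.
soft_prompt = rng.normal(size=(n_prompt, d_model))

def build_ipt_input(demo_ids, input_ids):
    """Instruction prompt tuning input: [demonstration ; soft prompt ; input].

    demo_ids / input_ids: token-id lists for the in-context demonstration
    and the test input.
    """
    demo = embed[demo_ids]    # frozen demonstration embeddings (ICL part)
    inp = embed[input_ids]    # frozen input embeddings
    return np.concatenate([demo, soft_prompt, inp], axis=0)

seq = build_ipt_input([1, 2, 3], [7, 8])
# Sequence length = len(demo) + n_prompt + len(input) = 3 + 4 + 2
```

Only `soft_prompt` would receive gradients during tuning; the embedding table stands in for the frozen base model.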
Exploring Low-dimensional Intrinsic Task Subspace via Prompt Tuning
Qin, Yujia, Wang, Xiaozhi, Su, Yusheng, Lin, Yankai, Ding, Ning, Liu, Zhiyuan, Li, Juanzi, Hou, Lei, Li, Peng, Sun, Maosong, Zhou, Jie
How can pre-trained language models (PLMs) learn universal representations and effectively adapt to broad NLP tasks that differ greatly on the surface? In this work, we empirically find evidence indicating that the adaptations of PLMs to various tasks can be reparameterized as optimizing only a few free parameters in a common low-dimensional intrinsic task subspace, which may help explain why PLMs can easily adapt to various NLP tasks with small-scale data. Specifically, to find such a subspace and examine its universality, we resort to the recent success of prompt tuning and decompose the soft prompts of multiple NLP tasks into the same low-dimensional nonlinear subspace; we then adapt the PLM to unseen tasks or data by tuning only the parameters in this subspace. We dub this pipeline intrinsic prompt tuning (IPT). In experiments, we study diverse few-shot NLP tasks and, surprisingly, find that in a 5-dimensional subspace found with 100 random tasks, tuning only 5 free parameters recovers 87% and 65% of the full prompt tuning performance for 100 seen tasks (using different training data) and 20 unseen tasks, respectively, demonstrating the strong generalization ability of the found intrinsic task subspace. Besides serving as an analysis tool, IPT brings practical benefits, such as improving prompt tuning stability.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- (4 more...)
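The pipeline described above — decomposing soft prompts into a shared low-dimensional nonlinear subspace and tuning only the few subspace coordinates — can be sketched with a fixed decoder network. Here the decoder weights are random for illustration; in IPT they are learned from the prompts of many source tasks:

```python
import numpy as np

rng = np.random.default_rng(0)
d_intrinsic, n_prompt, d_model = 5, 4, 16

# Frozen nonlinear "decoder" from the intrinsic subspace to full soft prompts.
W1 = rng.normal(size=(d_intrinsic, 32))
W2 = rng.normal(size=(32, n_prompt * d_model))

def decode_prompt(z):
    """Map a d_intrinsic-dimensional vector to a (n_prompt, d_model) soft prompt."""
    h = np.tanh(z @ W1)                       # nonlinearity in the subspace mapping
    return (h @ W2).reshape(n_prompt, d_model)

# Adapting to a new task = tuning only the 5 intrinsic parameters in z.
z = np.zeros(d_intrinsic)
prompt = decode_prompt(z)
```

Adapting to a new task then means optimizing the 5 entries of `z` rather than all `n_prompt * d_model` prompt parameters.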
Computationally Efficient High-Dimensional Bayesian Optimization via Variable Selection
Bayesian Optimization (BO) is a method for globally optimizing black-box functions. While BO has been successfully applied to many scenarios, developing BO algorithms that scale to functions with high-dimensional domains remains a challenge: optimizing such functions with vanilla BO is extremely time-consuming. Alternative strategies for high-dimensional BO based on embedding the high-dimensional space into a lower-dimensional one are sensitive to the choice of the embedding dimension, which must be pre-specified. We develop a new computationally efficient high-dimensional BO method that exploits variable selection. Our method automatically learns axis-aligned sub-spaces, i.e., spaces spanned by the selected variables, without requiring any pre-specified hyperparameters. We theoretically analyze the computational complexity of our algorithm and derive a regret bound. We empirically show the efficacy of our method on several synthetic and real problems.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
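The variable-selection idea — learning an axis-aligned subspace containing only the variables the black-box objective actually depends on — can be illustrated with a cheap correlation screen. This is a stand-in for the paper's selection procedure; the toy objective, sample budget, and the choice to keep two axes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # 20-dimensional objective that truly depends on only axes 0 and 3.
    return -(x[0] - 0.5) ** 2 - (x[3] + 0.2) ** 2

dim, n_obs = 20, 200
X = rng.uniform(-1, 1, size=(n_obs, dim))
y = np.array([f(x) for x in X])

# Screen: rank axes by |correlation| between each coordinate and the objective,
# then keep the most informative ones as the axis-aligned subspace.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(dim)])
selected = np.argsort(scores)[-2:]   # the two most informative axes
```

BO would then run only over the selected axes, fixing or marginalizing the remaining coordinates.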
Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks
Gupta, Kishor Datta, Akhtar, Zahid, Dasgupta, Dipankar
Developing machine learning models that are secure against adversarial examples is challenging, as new methods for generating adversarial attacks are continually being developed. In this work, we propose an evolutionary approach to automatically determine an Image Processing Technique Sequence (IPTS) for detecting malicious inputs. We first use a diverse set of attack methods, including attacks adaptive to our defense, to generate adversarial samples from a clean dataset. A detection framework based on a genetic algorithm (GA) is developed to find the optimal IPTS, where optimality is estimated by fitness measures such as Euclidean distance, entropy loss, average histogram, local binary pattern, and loss functions. The "image difference" between the original and processed images is used to extract features, which are then fed to a classification scheme to determine whether an input sample is adversarial or clean. We describe our methodology and report experiments on multiple datasets tested with several adversarial attacks. For each attack type and dataset, the approach generates a unique IPTS; a set of IPTS is then selected dynamically at test time and acts as a filter against adversarial attacks. Our empirical experiments show promising results, indicating that the approach can be used efficiently as a pre-processing stage for any AI model.
- North America > United States > Tennessee > Shelby County > Memphis (0.04)
- Asia > Middle East > Jordan (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
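The detection signal described above is the "image difference" between an input and the same input passed through an ordered sequence of image processing techniques. A toy sketch with two invented techniques (real IPTS candidates would be operations like blurring, re-compression, or bit-depth reduction; the GA searches over their order and parameters):

```python
import numpy as np

# Toy image-processing techniques standing in for real IPTS candidates.
def quantize(img, levels=8):
    """Reduce the image to a coarse set of intensity levels (bit-depth reduction)."""
    return np.round(img * (levels - 1)) / (levels - 1)

def smooth(img):
    """3-point moving average along each row (crude denoising stand-in)."""
    out = img.copy()
    out[:, 1:-1] = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0
    return out

def apply_ipts(img, sequence):
    """Apply an ordered Image Processing Technique Sequence (IPTS)."""
    for technique in sequence:
        img = technique(img)
    return img

def image_difference_feature(img, sequence):
    """Mean absolute 'image difference' between the original and processed image,
    the kind of scalar feature fed to the adversarial-vs-clean classifier."""
    return float(np.abs(img - apply_ipts(img, sequence)).mean())
```

The working hypothesis is that an adversarial input, whose perturbation the sequence tends to remove, yields a different difference feature than a clean one.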
Anytime Behavior of Inexact TSP Solvers and Perspectives for Automated Algorithm Selection
Bossek, Jakob, Kerschke, Pascal, Trautmann, Heike
The Traveling-Salesperson-Problem (TSP) is arguably one of the best-known NP-hard combinatorial optimization problems. The two sophisticated heuristic solvers LKH and EAX, together with their respective (restart) variants, manage to compute close-to-optimal or even optimal solutions in reasonable time, even for large instances with several thousand nodes. In this work we extend existing benchmarking studies by addressing the anytime behaviour of inexact TSP solvers based on empirical runtime distributions, leading to an increased understanding of solver behaviour and its relation to problem hardness. It turns out that the performance ranking of solvers depends strongly on the targeted approximation quality. Insights into the intersection points of solver performances offer great potential for constructing hybridized solvers conditioned on instance features. Moreover, instance features tailored to anytime performance and corresponding performance indicators will substantially improve automated algorithm selection models by incorporating comprehensive information on solver quality.
- Europe > Germany > North Rhine-Westphalia > Münster Region > Münster (0.04)
- Oceania > Australia > South Australia > Adelaide (0.04)
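The empirical runtime distributions underlying the anytime analysis above are simple to compute: for a fixed target approximation quality, take the fraction of runs that hit the target within a given time budget. A sketch with invented hitting times (not measured LKH/EAX data) showing how the solver ranking can flip with the budget:

```python
import numpy as np

def ecdf_success(runtimes, budget):
    """Empirical runtime distribution value: fraction of runs that reached the
    target approximation quality within `budget` seconds (failed runs = inf)."""
    runtimes = np.asarray(runtimes, dtype=float)
    return float(np.mean(runtimes <= budget))

# Hypothetical times (s) at which each restart first hit a 1% gap to the optimum.
solver_a = [2.0, 3.5, np.inf, 1.2, 4.0]   # fast but sometimes fails outright
solver_b = [5.0, 4.8, 5.2, 4.9, 5.1]      # slower but fully reliable

early = (ecdf_success(solver_a, 4.0), ecdf_success(solver_b, 4.0))
late = (ecdf_success(solver_a, 6.0), ecdf_success(solver_b, 6.0))
```

Under a 4-second budget the first solver dominates (0.8 vs 0.0); under 6 seconds the second overtakes it (0.8 vs 1.0) — exactly the kind of intersection point the study exploits for hybridization and algorithm selection.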
Intelligent Processing in Vehicular Ad hoc Networks: a Survey
Intelligent processing techniques are increasingly attractive to researchers due to their ability to address key problems in Vehicular Ad hoc Networks (VANETs). However, several problems in applying intelligent processing technologies in VANETs remain open. In this paper, existing applications are comprehensively reviewed, discussed, and classified into different categories; their strategies, advantages/disadvantages, and performance are elaborated. By generalizing the tactics used across applications in different VANET scenarios and evaluating their performance, several promising directions for future research are suggested.
- North America > United States > New York > New York County > New York City (0.05)
- Europe > United Kingdom > England (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- (4 more...)
- Workflow (0.46)
- Research Report (0.40)
- Transportation > Ground > Road (1.00)
- Telecommunications (1.00)
- Information Technology > Security & Privacy (1.00)
- (2 more...)