Residual Transformer Fusion Network for Salt and Pepper Image Denoising
Putra, Bintang Pradana Erlangga, Prasetyo, Heri, Suryani, Esti
Convolutional Neural Networks (CNNs) have been widely applied to unstructured data, and image denoising is one such application. Image denoising reconstructs a noisy image, aiming to remove the added noise through various strategies. A drawback of many denoising methods is that they require prior knowledge about the noise. To overcome this problem, a combined architecture of the Convolutional Vision Transformer (CvT) and Residual Networks (ResNet) is used, called the Residual Transformer Fusion Network (RTF-Net). The architecture consists of two parts: a Noise Suppression Network (NSN) and a Structure Enhancement Network (SEN). The NSN uses residual blocks to learn the noise map in the image, while the SEN uses the CvT to learn the details that must be added back to the image produced by the NSN. The model was trained on the DIV2K training set and validated on the DIV2K validation set. After training, the model was tested on the Lena, Bridge, Pepper, and BSD300 images with noise levels of 30%, 50%, and 70%, and the PSNR results were compared with the DBA, NASNLM, PARIGI, NLSF, NLSF-MLP, and NLSF-CNN methods. The proposed method is superior in all cases except the Pepper image at a 30% noise level, where NLSF-CNN achieves a PSNR of 32.99 dB versus 31.70 dB for the proposed method.
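The abstract specifies the two-branch layout (a residual-block NSN that learns a noise map, a CvT-based SEN that adds detail back) but not the configuration. The following is a minimal PyTorch sketch of that layout only: the block counts, channel widths, and the use of plain multi-head attention over conv-embedded tokens as a stand-in for the full CvT are all assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain convolutional residual block (channel width is an assumption)."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return x + self.body(x)

class RTFNetSketch(nn.Module):
    """Two-branch layout: NSN learns a noise map, SEN adds detail back.
    The CvT branch is approximated here by conv token embedding + MHSA."""
    def __init__(self, ch=64, n_res=4):
        super().__init__()
        # Noise Suppression Network: residual blocks estimating the noise map
        self.nsn_head = nn.Conv2d(3, ch, 3, padding=1)
        self.nsn_body = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_res)])
        self.nsn_tail = nn.Conv2d(ch, 3, 3, padding=1)
        # Structure Enhancement Network: attention over conv-embedded tokens
        self.sen_embed = nn.Conv2d(3, ch, 3, stride=2, padding=1)
        self.attn = nn.MultiheadAttention(ch, num_heads=4, batch_first=True)
        self.sen_tail = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, noisy):
        # NSN: subtract the predicted noise map from the noisy input
        denoised = noisy - self.nsn_tail(self.nsn_body(self.nsn_head(noisy)))
        # SEN: predict a detail residual and add it back
        t = self.sen_embed(denoised)            # (B, C, H/2, W/2)
        b, c, h, w = t.shape
        seq = t.flatten(2).transpose(1, 2)      # (B, HW/4, C) token sequence
        seq, _ = self.attn(seq, seq, seq)
        t = seq.transpose(1, 2).reshape(b, c, h, w)
        return denoised + self.sen_tail(t)

x = torch.rand(1, 3, 64, 64)    # toy noisy image
print(RTFNetSketch()(x).shape)  # torch.Size([1, 3, 64, 64])
```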
Capturing the Denoising Effect of PCA via Compression Ratio
Mukherjee, Chandra Sekhar, Doerkar, Nikhil, Zhang, Jiapeng
Principal component analysis (PCA) is one of the most fundamental tools in machine learning, with broad use for dimensionality reduction and denoising. In the latter setting, while PCA is known to be effective at subspace recovery and is proven to aid clustering algorithms in some specific settings, its improvement of noisy data is still not well quantified in general. In this paper, we propose a novel metric called \emph{compression ratio} to capture the effect of PCA on high-dimensional noisy data. We show that, for data with an \emph{underlying community structure}, PCA significantly reduces the distance between data points belonging to the same community while reducing inter-community distances only mildly. We explain this phenomenon through both theoretical proofs and experiments on real-world data. Building on this metric, we design a straightforward outlier detection algorithm: roughly speaking, points with a \emph{lower variance of compression ratio} do not share a \emph{common signal} with the others and can therefore be considered outliers. We provide theoretical justification for this simple algorithm and use simulations to demonstrate that it is competitive with popular outlier detection tools. Finally, we run experiments on real-world high-dimensional noisy data (single-cell RNA-seq) to show that removing points flagged by our method improves the accuracy of clustering algorithms; here too our method is very competitive with popular outlier detection tools.
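The abstract does not give a formula, so the sketch below implements one plausible reading of the metric: a pair's distance before PCA divided by its distance after projecting onto the top-k principal components. The rank k, the data-generating process, and all names are illustrative assumptions; on two-community toy data, same-community ratios come out larger, matching the phenomenon described above.

```python
import numpy as np

def compression_ratios(X, k=2):
    """Pairwise distance before PCA divided by distance after projecting
    onto the top-k principal components (one plausible reading of the metric)."""
    Xc = X - X.mean(axis=0)
    # top-k right singular vectors span the principal subspace
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                                     # PCA projection
    d_before = np.linalg.norm(Xc[:, None] - Xc[None, :], axis=-1)
    d_after = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
    mask = ~np.eye(len(X), dtype=bool)                    # ignore self-distances
    return np.where(mask, d_before / np.maximum(d_after, 1e-12), np.nan)

# toy two-community data: shared community mean + isotropic noise
rng = np.random.default_rng(0)
means = 3.0 * rng.normal(size=(2, 50))
X = np.vstack([m + rng.normal(size=(30, 50)) for m in means])
R = compression_ratios(X, k=2)
same, cross = R[:30, :30], R[:30, 30:]
print(np.nanmean(same), np.nanmean(cross))  # same-community ratios are larger
```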
Undersmoothing Causal Estimators with Generative Trees
Machlanski, Damian, Samothrakis, Spyros, Clarke, Paul
Inferring individualised treatment effects from observational data can unlock the potential for targeted interventions; it is, however, hard to infer these effects from such data. One major problem is covariate shift, where the conditional distribution of the outcome given the covariates remains the same but the covariate (input) distribution differs between the training and test sets. In an observational setting, this problem materialises as control and treated units coming from different distributions. A common solution is to augment learning methods with reweighting schemes (e.g. propensity scores). These are needed because of model misspecification, but may hurt performance in the individual case. In this paper, we explore a novel generative tree-based approach that tackles model misspecification directly, helping downstream estimators achieve better robustness. We show empirically that the choice of model class can indeed significantly affect final performance and that reweighting methods can struggle with individualised effect estimation. Our proposed approach is competitive with reweighting methods on average treatment effects while performing significantly better on individualised treatment effects.
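The paper's generative-tree approach is not specified in the abstract, but the propensity-score reweighting baseline it compares against is standard. Below is a minimal inverse-propensity-weighting sketch on synthetic data where treated and control units come from different covariate distributions; the data-generating process and all names are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 1))
# treated and control units come from different covariate distributions:
# treatment probability depends on x (confounding / covariate shift)
p_treat = 1 / (1 + np.exp(-2 * x[:, 0]))
t = rng.binomial(1, p_treat)
y = 1.0 * t + 0.5 * x[:, 0] + rng.normal(scale=0.1, size=n)  # true ATE = 1.0

naive = y[t == 1].mean() - y[t == 0].mean()  # biased by the shift

# inverse propensity weighting: reweight each arm toward the full population
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

print(f"naive {naive:.3f}  IPW {ipw:.3f}  (true ATE = 1.0)")
```

The naive difference in means overstates the effect because treated units have systematically larger x; reweighting corrects the average effect, which is exactly the population-level setting where the abstract says these schemes do well.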
NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM
Holmes, Connor, Zhang, Minjia, He, Yuxiong, Wu, Bo
Natural Language Processing (NLP) has recently achieved success by using huge pre-trained Transformer networks. However, these models often contain hundreds of millions or even billions of parameters, bringing challenges to online deployment due to latency constraints. Recently, hardware manufacturers have introduced dedicated hardware for NxM sparsity to provide the flexibility of unstructured pruning with the runtime efficiency of structured approaches. NxM sparsity permits arbitrarily selecting M parameters to retain from each contiguous group of N in the dense representation. However, due to the extremely high complexity of pre-trained models, standard sparse fine-tuning techniques often fail to generalize well on downstream tasks, which have limited data resources. To address this issue in a principled manner, we introduce a new learning framework, called NxMTransformer, that induces NxM semi-structured sparsity in pre-trained language models for natural language understanding. In particular, we formulate NxM sparsification as a constrained optimization problem and use the Alternating Direction Method of Multipliers (ADMM) to optimize for the downstream tasks while taking the underlying hardware constraints into consideration. ADMM decomposes the NxM sparsification problem into two sub-problems that can be solved sequentially, generating sparsified Transformer networks that achieve high accuracy while executing efficiently on the newly released hardware. We apply our approach to a wide range of NLP tasks, and our method achieves a GLUE score 1.7 points higher than current practices. Moreover, we perform a detailed analysis of our approach and shed light on how ADMM affects fine-tuning accuracy on downstream tasks. Finally, we illustrate how NxMTransformer achieves additional performance improvements with knowledge distillation.
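The NxM constraint itself is stated precisely above: keep M parameters from each contiguous group of N. The sketch below implements the projection onto that constraint set, the kind of step an ADMM-style scheme would alternate with gradient updates on the task loss; the magnitude-based selection and function names are assumptions, not the paper's exact implementation.

```python
import torch

def project_nxm(weight, n=4, m=2):
    """Project a weight matrix onto the NxM constraint set: in every
    contiguous group of n entries along the input dimension, keep the m
    largest-magnitude entries and zero the rest (e.g. 2:4 sparsity)."""
    out_f, in_f = weight.shape
    assert in_f % n == 0, "input dim must be divisible by the group size"
    groups = weight.reshape(out_f, in_f // n, n)
    idx = groups.abs().topk(m, dim=-1).indices            # m survivors per group
    mask = torch.zeros_like(groups).scatter_(-1, idx, 1.0)
    return (groups * mask).reshape(out_f, in_f)

w = torch.randn(8, 16)
w_sparse = project_nxm(w, n=4, m=2)
# exactly 2 nonzeros survive in every contiguous group of 4
print((w_sparse.reshape(8, 4, 4) != 0).sum(-1))
```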
Relational VAE: A Continuous Latent Variable Model for Graph Structured Data
Mylonas, Charilaos, Abdallah, Imad, Chatzi, Eleni
Graph Networks (GNs) enable the fusion of prior knowledge and relational reasoning with flexible function approximations. In this work, a general GN-based model is proposed which takes full advantage of the relational modeling capabilities of GNs and extends these to probabilistic modeling with Variational Bayes (VB). To that end, we combine complementary existing approaches to VB for graph data and propose an approach that relies on graph-structured latent and conditioning variables. It is demonstrated that Neural Processes can also be viewed through the lens of the proposed model. We show applications on the problem of structured probability density modeling for simulated and real wind farm monitoring data, as well as on the meta-learning of simulated Gaussian Process data. We release the source code, along with the simulated datasets.
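To make the core idea of graph-structured latent variables concrete, here is a toy PyTorch sketch with per-node Gaussian latents, using a single dense-adjacency message-passing layer in place of a full GN; the layer design, dimensions, and loss weighting are assumptions, and the conditioning variables of the actual model are omitted.

```python
import torch
import torch.nn as nn

class GraphLayer(nn.Module):
    """One message-passing step with a dense adjacency matrix A."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(2 * d_in, d_out)
    def forward(self, h, A):
        deg = A.sum(-1, keepdim=True).clamp(min=1)
        msg = (A @ h) / deg                       # mean over neighbours
        return torch.relu(self.lin(torch.cat([h, msg], -1)))

class RelationalVAESketch(nn.Module):
    """Graph-structured latents: q(z_i | graph) and p(x_i | z, graph)."""
    def __init__(self, d_x=4, d_h=32, d_z=8):
        super().__init__()
        self.enc = GraphLayer(d_x, d_h)
        self.mu, self.logvar = nn.Linear(d_h, d_z), nn.Linear(d_h, d_z)
        self.dec = GraphLayer(d_z, d_h)
        self.out = nn.Linear(d_h, d_x)

    def forward(self, x, A):
        h = self.enc(x, A)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        x_hat = self.out(self.dec(z, A))
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum()
        return ((x_hat - x) ** 2).sum() + kl      # negative ELBO up to constants

A = (torch.rand(6, 6) > 0.5).float()              # toy graph with 6 nodes
x = torch.randn(6, 4)
print(RelationalVAESketch()(x, A))
```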
A Systematic Evaluation of Domain Adaptation in Facial Expression Recognition
Kong, Yan San, Suresh, Varsha, Soh, Jonathan, Ong, Desmond C.
Facial Expression Recognition (FER) is a commercially important application, but one common limitation is that applications often have to make predictions on out-of-sample distributions, where target images may have very different properties from the images the model was trained on. How well, or how badly, do these models perform on unseen target domains? In this paper, we provide a systematic evaluation of domain adaptation in facial expression recognition. Using state-of-the-art transfer learning techniques and six commonly used facial expression datasets (three collected in the lab and three "in-the-wild"), we conduct extensive round-robin experiments to examine the classification accuracy of a state-of-the-art CNN model. We also perform multi-source experiments examining a model's ability to transfer from multiple source datasets, including (i) within-setting (e.g., lab to lab), (ii) cross-setting (e.g., in-the-wild to lab), and (iii) mixed-setting (e.g., lab and in-the-wild to lab) transfer learning. We find the sobering result that transfer accuracy is not high and varies idiosyncratically with the target dataset, and to a lesser extent with the source dataset. Generally, the best transfer settings include fine-tuning the weights of a pre-trained model, and we find that training with more datasets, regardless of setting, improves transfer performance. We end with a discussion of the need for more, and regular, systematic investigations into the generalizability of FER models, especially for deployed applications.
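The round-robin protocol described above (fine-tune on each source, evaluate on every other target) is easy to sketch as an evaluation harness. In the sketch below the dataset loaders are placeholders, and the ResNet-18 backbone, hyperparameters, and seven-class label space are assumptions standing in for the paper's actual setup.

```python
import itertools
import torch
import torch.nn as nn
from torchvision import models

def finetune(model, loader, epochs=3, lr=1e-4):
    """Fine-tune all weights on the source dataset (the setting the
    paper reports as generally among the best)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for imgs, labels in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(imgs), labels)
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def accuracy(model, loader):
    model.eval()
    hits = total = 0
    for imgs, labels in loader:
        hits += (model(imgs).argmax(1) == labels).sum().item()
        total += len(labels)
    return hits / total

# `datasets` maps a name to (train_loader, test_loader); the loaders are
# placeholders for the six FER datasets used in the paper.
def round_robin(datasets, n_classes=7):
    results = {}
    for src, tgt in itertools.permutations(datasets, 2):
        model = models.resnet18(weights="IMAGENET1K_V1")  # backbone is an assumption
        model.fc = nn.Linear(model.fc.in_features, n_classes)
        finetune(model, datasets[src][0])
        results[(src, tgt)] = accuracy(model, datasets[tgt][1])
    return results
```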
Top 15 Python Packages You Must Try
Why do we all love Python? One reason: it comes with batteries included, meaning Python ships with many excellent libraries by default. But in my opinion, it's the 230,000 user-contributed packages that make Python truly powerful and popular. In this article, I've handpicked the 15 packages I found most useful during my 10-year career as a Pythonista.