Collaborating Authors: Jang, Jun-Gi


Tensor Convolutional Network for Higher-Order Interaction Prediction in Sparse Tensors

arXiv.org Artificial Intelligence

Many real-world datasets, such as recommendation data and temporal graphs, can be represented as incomplete sparse tensors where most entries are unobserved. For such sparse tensors, identifying the top-k higher-order interactions that are most likely to occur among the unobserved ones is crucial. Tensor factorization (TF) has gained significant attention in various tensor-based applications, serving as an effective method for finding these top-k potential interactions. However, existing TF methods focus primarily on fusing latent vectors of entities, which limits their expressiveness: since most entities in sparse tensors have only a few interactions, their latent representations are often insufficiently trained. In this paper, we propose TCN, an accurate and compatible tensor convolutional network that integrates seamlessly with existing TF methods for predicting higher-order interactions. We design a highly effective encoder to generate expressive latent vectors of entities. To achieve this, we (1) construct a graph structure derived from a sparse tensor and (2) develop a relation-aware encoder, TCN, that learns latent representations of entities by leveraging the graph structure. Since TCN complements traditional TF methods, it integrates with them directly, improving the prediction of top-k interactions. Extensive experiments show that TCN integrated with a TF method outperforms competitors, including TF methods and a hyperedge prediction method. Moreover, TCN is broadly compatible with various TF methods and Graph Neural Networks (GNNs), making it a versatile solution.
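
As a rough illustration of the two steps above, the following hypothetical numpy sketch builds a graph from a sparse COO tensor (entities that co-occur in an observed entry become neighbors), runs one round of mean aggregation over that graph, and scores candidate interactions CP-style. The shapes, names, and aggregation rule are all assumptions for illustration; the paper's actual TCN architecture is not reproduced here.

```python
import numpy as np

# Hypothetical sketch: graph construction from a sparse COO tensor plus one
# round of neighbor aggregation, then a CP-style interaction score.
# Not the paper's actual TCN architecture.

rng = np.random.default_rng(0)
dims, rank = (4, 5, 6), 8                          # 3-mode tensor, embedding size
idx = np.array([[0, 1, 2], [1, 4, 0], [3, 2, 5]])  # observed (i, j, k) entries

# One embedding table per mode; offsets give each entity a global node id.
emb = [rng.normal(size=(d, rank)) for d in dims]
offset = np.cumsum([0] + list(dims))[:-1]

# Step 1: each observed entry links all pairs of its participating entities.
edges = set()
for entry in idx:
    nodes = [offset[m] + e for m, e in enumerate(entry)]
    for a in nodes:
        for b in nodes:
            if a != b:
                edges.add((a, b))

# Step 2: mean-aggregate neighbor embeddings (one message-passing round).
X = np.vstack(emb)                                 # (num_nodes, rank)
H = X.copy()
for v in range(X.shape[0]):
    nbrs = [b for (a, b) in edges if a == v]
    if nbrs:
        H[v] = 0.5 * X[v] + 0.5 * X[nbrs].mean(axis=0)

# CP-style score for a candidate interaction (i, j, k).
def score(i, j, k):
    return float(np.sum(H[offset[0] + i] * H[offset[1] + j] * H[offset[2] + k]))

print(score(0, 1, 2))
```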


Fast and Accurate Dual-Way Streaming PARAFAC2 for Irregular Tensors -- Algorithm and Application

arXiv.org Artificial Intelligence

How can we efficiently and accurately analyze an irregular tensor in a dual-way streaming setting, where the sizes of two dimensions of the tensor increase over time? What types of anomalies arise in this setting? An irregular tensor is a collection of matrices whose column lengths are the same while their row lengths differ. In a dual-way streaming setting, both new rows of existing matrices and entirely new matrices arrive over time. PARAFAC2 decomposition is a crucial tool for analyzing irregular tensors. Although real-time analysis is necessary in the dual-way streaming setting, static PARAFAC2 decomposition methods fail to work efficiently here since they re-decompose the accumulated tensor whenever new data arrive. Existing streaming PARAFAC2 decomposition methods work only in a limited setting and fail to handle new rows of matrices efficiently. In this paper, we propose Dash, an efficient and accurate PARAFAC2 decomposition method for the dual-way streaming setting. When new data arrive, Dash performs PARAFAC2 decomposition efficiently by carefully separating the terms related to old and new data and avoiding naive computations involving old data. Furthermore, a forgetting factor lets Dash track recent movements. Extensive experiments show that Dash runs up to 14.0x faster than existing PARAFAC2 decomposition methods on newly arrived data. We also present discoveries of anomalies in real-world datasets, including ones related to the Subprime Mortgage Crisis and COVID-19.
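
The core efficiency idea, separating terms for old and new data and applying a forgetting factor, can be illustrated with a generic streaming least-squares update of the kind that appears inside an ALS step. This is a minimal sketch of the forgetting-factor recursion, not Dash itself; all names and shapes are assumptions.

```python
import numpy as np

# Illustrative sketch (not Dash itself): a forgetting-factor recursion that
# lets a streaming least-squares/ALS step touch only newly arrived rows.
# Solves min_W ||X W - Y||^2 over a stream, down-weighting old data by lam.

rng = np.random.default_rng(1)
rank, lam = 4, 0.95                  # factor rank, forgetting factor
G = np.zeros((rank, rank))           # accumulated Gram matrix  X^T X
C = np.zeros((rank, 3))              # accumulated cross term   X^T Y

def update(X_new, Y_new):
    """Fold new rows into the sufficient statistics; old rows are never revisited."""
    global G, C
    G = lam * G + X_new.T @ X_new
    C = lam * C + X_new.T @ Y_new
    # A small ridge term keeps the solve stable early in the stream.
    return np.linalg.solve(G + 1e-6 * np.eye(rank), C)

for t in range(5):                   # five batches of newly arrived rows
    X_new = rng.normal(size=(10, rank))
    Y_new = rng.normal(size=(10, 3))
    W = update(X_new, Y_new)

print(W.shape)                       # (rank, 3)
```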


Accurate Open-set Recognition for Memory Workload

arXiv.org Artificial Intelligence

The global DRAM (Dynamic Random Access Memory) market is worth tens of billions of USD and keeps growing due to rising demand for DRAM in mobile devices, modern computers, self-driving cars, etc. Testing DRAM with various workloads is crucial for verifying and guaranteeing DRAM quality. DRAM manufacturers use their known workloads for verification; however, this does not guarantee that DRAM works well for new workloads not known in advance. Therefore, it is necessary to detect new workloads to improve the quality of DRAM verification. We formulate the problem of detecting new workloads as an open-set recognition [19] task, which classifies a test sample into one of the known classes or the unknown class, identifying its class if it belongs to the known classes. A workload sequence contains a series of tuples with the command and address information of memory accesses. To detect new workloads via open-set recognition, we exploit a subsequence, a part of the entire sequence of a workload. Given a subsequence, we classify it into one of the known workload classes or identify it as the unknown class corresponding to new workloads.
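
A minimal sketch of such an open-set decision rule, assuming features have already been extracted from workload subsequences: assign the nearest known-class centroid, and reject to the unknown class when even the nearest centroid is too far away. The centroids, threshold, and distance below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Minimal open-set decision rule (hypothetical): assign a subsequence feature
# to the nearest known-class centroid, or to "unknown" if all are too far.

rng = np.random.default_rng(2)
num_classes, dim, tau = 3, 16, 4.0          # tau: rejection threshold (assumed)

# Pretend these centroids were learned from known workload subsequences.
centroids = rng.normal(size=(num_classes, dim))

def classify(feature):
    dists = np.linalg.norm(centroids - feature, axis=1)
    c = int(np.argmin(dists))
    return c if dists[c] <= tau else "unknown"  # reject -> new workload

print(classify(centroids[1] + 0.1))           # near a known centroid -> class 1
print(classify(rng.normal(size=dim) * 10))    # far from all -> "unknown"
```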


Accurate Bundle Matching and Generation via Multitask Learning with Partially Shared Parameters

arXiv.org Artificial Intelligence

How can we accurately recommend existing bundles to users? How can we generate new bundles tailored to users? Recommending a bundle, or a group of various items, has attracted widespread attention in e-commerce owing to the increased satisfaction of both users and providers. Bundle matching and bundle generation are two representative tasks in bundle recommendation. The bundle matching task is to correctly match existing bundles to users, while bundle generation is to generate new bundles that users would prefer. Although many recent works have developed bundle recommendation models, they fail to achieve high accuracy since they neither handle heterogeneous data effectively nor learn a method for customized bundle generation. In this paper, we propose BundleMage, an accurate approach for bundle matching and generation. BundleMage effectively mixes user preferences of items and bundles using an adaptive gate technique to achieve high accuracy in bundle matching. BundleMage also generates a personalized bundle by learning a generation module that exploits a user's preferences and the characteristics of a given incomplete bundle to be completed. BundleMage further improves its performance using multi-task learning with partially shared parameters. Through extensive experiments, we show that BundleMage achieves up to 6.6% higher nDCG in bundle matching and 6.3x higher nDCG in bundle generation than the best competitors. We also provide a qualitative analysis showing that BundleMage effectively generates bundles that consider both the tastes of users and the characteristics of target bundles.
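
The adaptive-gate mixing can be sketched as a learned, per-dimension interpolation between a user's item-level and bundle-level preference vectors. The weights and shapes below are assumptions for illustration, not BundleMage's actual parameters.

```python
import numpy as np

# Sketch of an adaptive gate mixing item-level and bundle-level preference
# vectors (illustrative; weights and shapes are assumptions).

rng = np.random.default_rng(3)
d = 8
W = rng.normal(size=(d, 2 * d))      # gate weights (would be learned)
b = np.zeros(d)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mix(u_item, u_bundle):
    """Gate g in (0,1)^d decides, per dimension, which preference dominates."""
    g = sigmoid(W @ np.concatenate([u_item, u_bundle]) + b)
    return g * u_item + (1.0 - g) * u_bundle

u = mix(rng.normal(size=d), rng.normal(size=d))
print(u.shape)                        # (8,)
```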


Time-Aware Tensor Decomposition for Missing Entry Prediction

arXiv.org Machine Learning

Given a time-evolving tensor with missing entries, how can we effectively factorize it to precisely predict the missing entries? Tensor factorization has been extensively utilized for analyzing various multi-dimensional real-world data. However, existing tensor factorization models disregard the temporal property even though most real-world data are closely related to time. Moreover, they do not address the accuracy degradation caused by the sparsity of time slices. The essential problems of how to exploit the temporal property and how to handle the sparsity of time slices remain unresolved. In this paper, we propose TATD (Time-Aware Tensor Decomposition), a novel tensor decomposition method for real-world temporal tensors. TATD is designed to exploit the temporal dependency and time-varying sparsity of real-world temporal tensors. We propose a new smoothing regularization with a Gaussian kernel for modeling time dependency, and further improve the performance of TATD by accounting for time-varying sparsity. We design an alternating optimization scheme suitable for temporal tensor factorization with our smoothing regularization. Extensive experiments show that TATD provides state-of-the-art accuracy for decomposing temporal tensors.
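
The smoothing regularization can be sketched as a penalty that pulls each row of the time-factor matrix toward a Gaussian-kernel-weighted average of its temporal neighbors. The window, bandwidth, and normalization below are assumptions; TATD's exact formulation may differ.

```python
import numpy as np

# Sketch of a Gaussian-kernel smoothing penalty on the time-factor matrix T
# (rows indexed by time). Each row is pulled toward a kernel-weighted average
# of nearby rows; the exact weighting in TATD may differ.

rng = np.random.default_rng(4)
num_times, rank, sigma, window = 20, 5, 2.0, 3
T = rng.normal(size=(num_times, rank))

def smoothing_penalty(T):
    total = 0.0
    for t in range(len(T)):
        lo, hi = max(0, t - window), min(len(T), t + window + 1)
        s = np.arange(lo, hi)
        w = np.exp(-((s - t) ** 2) / (2 * sigma ** 2))
        w /= w.sum()
        target = w @ T[lo:hi]          # kernel-weighted neighbor average
        total += np.sum((T[t] - target) ** 2)
    return total

print(smoothing_penalty(T))
```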


Fast Partial Fourier Transform

arXiv.org Machine Learning

Given a time series vector, how can we efficiently compute a specified part of its Fourier coefficients? The fast Fourier transform (FFT) is a widely used algorithm that computes the discrete Fourier transform in many machine learning applications. Despite its pervasive use, no known FFT algorithm offers a fine-tuning option for the user's demand: the output size (the number of Fourier coefficients to be computed) is determined algorithmically by the input size. This matters because not every application of the FFT requires the whole spectrum of the frequency domain, so extra computation is wasted. In this paper, we propose the fast Partial Fourier Transform (PFT), a careful modification of the Cooley-Tukey algorithm that enables one to specify an arbitrary consecutive range of coefficients to be computed. We derive the asymptotic time complexity of PFT with respect to the input and output sizes, as well as its numerical accuracy. Experimental results show that our algorithm outperforms state-of-the-art FFT algorithms, with an order of magnitude speedup for sufficiently small output sizes without sacrificing accuracy.
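
For context, a naive baseline makes the notion of a partial output concrete: directly evaluating a consecutive range of p DFT coefficients costs O(np), and the result can be checked against a full FFT. This is not the PFT algorithm itself, only a reference for what PFT computes faster.

```python
import numpy as np

# Baseline (not the paper's PFT algorithm): directly evaluate a consecutive
# range of DFT coefficients [m0, m0 + p) and check against a full FFT.

rng = np.random.default_rng(5)
n, m0, p = 64, 10, 5                 # input size, range start, range length
x = rng.normal(size=n)

k = np.arange(m0, m0 + p)[:, None]   # requested coefficient indices
t = np.arange(n)[None, :]
partial = np.exp(-2j * np.pi * k * t / n) @ x   # O(n*p) direct evaluation

full = np.fft.fft(x)
print(np.allclose(partial, full[m0:m0 + p]))    # True
```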