Interaction Decompositions for Tensor Network Regression
Ian Convy, K. Birgitta Whaley
arXiv.org Artificial Intelligence
Tensor network regression has emerged as a promising and active area of machine learning research, having achieved impressive results on common benchmark tasks such as the MovieLens 100K [1], MNIST [2][3][4][5], and Fashion MNIST [3][4][5] datasets. The effectiveness of these models can be attributed to the tensor-product transformation that is applied to the data features, which maps the original feature vector into an exponentially large vector space. By performing linear operations on this expanded feature space, tensor network models are able to generate regression outputs that are highly non-linear functions of the original features. In most tensor network models, the tensor-product transformation is constructed from a set of vector-valued functions that each act on only a single data feature. The form of these functions is important to the operation of the model, as it determines how regression on the transformed space is related to regression on the original feature space. Conventional wisdom regarding the choice of these functions can be traced back to the parallel works of Stoudenmire and Schwab [2] and Novikov et al. [1], who each proposed a different transformation scheme.
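The tensor-product transformation described above can be sketched in a few lines of NumPy. In this illustrative (not paper-specific) example, each feature is mapped to a length-2 vector by a local function, and the full transformed vector is the Kronecker product of these per-feature vectors, giving a 2^N-dimensional feature space for N features. The two local maps below are hypothetical stand-ins in the spirit of the two cited schemes: a trigonometric map resembling that of Stoudenmire and Schwab, and a polynomial map [1, x] resembling that of Novikov et al.

```python
import numpy as np

def local_map_trig(x):
    # Trigonometric local map in the style of Stoudenmire and Schwab:
    # each scalar feature x in [0, 1] becomes a length-2 vector.
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def local_map_poly(x):
    # Polynomial local map in the style of Novikov et al.: [1, x].
    return np.array([1.0, x])

def tensor_product_features(features, local_map):
    # Kronecker product of all per-feature vectors: for N features with
    # length-2 local maps, the result is a 2**N-dimensional vector whose
    # entries are products of the local map components across features.
    out = np.array([1.0])
    for x in features:
        out = np.kron(out, local_map(x))
    return out

x = np.array([0.2, 0.7, 0.5])
phi = tensor_product_features(x, local_map_poly)
print(phi.shape)   # (8,) for 3 features
print(phi[-1])     # 0.2 * 0.7 * 0.5 = 0.07
```

A linear model on `phi` then realizes a highly non-linear function of the original features, since each entry of `phi` is a product of local map components; note that the explicit Kronecker product is only tractable for small N, which is why practical models contract tensor networks instead of forming the full vector.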
Jan-25-2023