TensorKrowch: Smooth integration of tensor networks in machine learning
José Ramón Pareja Monturiol, David Pérez-García, Alejandro Pozas-Kerstjens
– arXiv.org Artificial Intelligence
Tensor networks are factorizations of high-dimensional tensors into network-like structures composed of smaller tensors. Originating from condensed matter physics and acclaimed for their efficient representation of quantum many-body systems [1-10], these structures have allowed researchers to comprehend the intricate properties of such systems and, additionally, to simulate them on classical computers [11-13]. Notably, tensor networks are the most successful method for simulating the results of quantum advantage experiments [14-16]. Furthermore, tensor networks were rediscovered within the numerical linear algebra community [17-19], where the techniques have been adapted to other high-dimensional problems such as numerical integration [20], signal processing [21], and epidemic modelling [22]. With the advent of machine learning and the quest for expressive yet easy-to-train models, tensor networks have been suggested as promising candidates, owing to their ability to parameterize regions of a complex vector space whose dimension is exponential in the number of input features. Since the pioneering works [23, 24] that used simple, one-dimensional networks, known as Matrix Product States (MPS) in the physics literature [4, 25] and as Tensor Trains in the numerical linear algebra literature [18], such models have been applied in both supervised and unsupervised learning settings [26-28].
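As an illustration of the idea behind MPS-based learning models, below is a minimal NumPy sketch, not the TensorKrowch API; the function names, the bond/physical dimensions, and the sine/cosine feature map are illustrative assumptions. Each input feature is embedded into a small vector and contracted with one MPS core, so a chain of cores whose parameter count grows only linearly with the number of features effectively parameterizes a tensor with exponentially many entries.

```python
import numpy as np

# Illustrative sketch (not the TensorKrowch API): an MPS / Tensor Train with
# n cores of bond dimension D parameterizes a tensor with d**n entries while
# storing only O(n * d * D**2) parameters.

n, d, D = 6, 2, 4          # number of features, physical dim, bond dim
rng = np.random.default_rng(0)

# One 3-index core per feature: (left bond, physical index, right bond).
# Boundary cores have bond dimension 1 so the full contraction is a scalar.
cores = [rng.normal(size=(1 if i == 0 else D, d, 1 if i == n - 1 else D))
         for i in range(n)]

def feature_map(x):
    """Embed a scalar feature into a d=2 vector (a commonly used embedding)."""
    return np.array([np.cos(np.pi / 2 * x), np.sin(np.pi / 2 * x)])

def mps_model(sample):
    """Contract the MPS with the embedded features to obtain a scalar output."""
    result = np.ones((1, 1))
    for core, x in zip(cores, sample):
        # Contract the physical leg of each core with its feature vector,
        # then absorb the resulting bond-space matrix into the running product.
        matrix = np.einsum('lpr,p->lr', core, feature_map(x))
        result = result @ matrix
    return result.item()

print(mps_model(rng.uniform(size=n)))   # scalar model output for one sample
```

In a trainable version of this sketch, the cores would be the learnable parameters, which is the role played by the tensors of the network in tensor-network machine learning models.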
Jun-14-2023