Fast Factorized Learning: Powered by In-Memory Database Systems
Bernhard Stöckl, Maximilian E. Schüle
arXiv.org Artificial Intelligence
Learning models over factorized joins avoids redundant computation by identifying and pre-computing shared cofactors. Previous work investigated the performance gain of computing cofactors on traditional disk-based database systems, but since no code was published, the experiments could not be reproduced on in-memory database systems. This work describes an implementation that uses cofactors for in-database factorized learning. We benchmark our open-source implementation for learning linear regression on factorized joins with PostgreSQL, a disk-based database system, and HyPer, an in-memory engine. The evaluation shows that factorized learning on in-memory database systems outperforms non-factorized learning by 70% and disk-based database systems by a factor of 100. Thus, modern database engines can contribute to the machine learning pipeline by pre-computing aggregates prior to data extraction, accelerating training.
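The cofactor idea described in the abstract can be illustrated with a minimal sketch: pre-aggregate one join side per key, combine the partial sums into the cofactor aggregates (the Gram matrix X^T X and vector X^T y), and solve the normal equations of a linear regression from those aggregates alone, without materializing the joined tuples. The schema R(k, x1), S(k, x2, y), the toy data, and all identifiers are illustrative assumptions, not the paper's actual code or benchmark.

```python
import sqlite3

# Illustrative schema (an assumption, not the paper's data):
# R(k, x1) joins S(k, x2, y); we fit y ~ a*x1 + b*x2 without intercept.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE R(k INTEGER, x1 REAL);
CREATE TABLE S(k INTEGER, x2 REAL, y REAL);
INSERT INTO R VALUES (1, 1.0), (2, 2.0);
INSERT INTO S VALUES (1, 3.0, 5.0), (1, 4.0, 6.0), (2, 5.0, 9.0);
""")

# Factorized cofactor computation: pre-aggregate S per join key,
# then combine the partial sums with R instead of scanning the full join.
s11, s12, s1y, s22, s2y = con.execute("""
    SELECT SUM(c * x1 * x1),   -- sum over the join of x1*x1
           SUM(x1 * sx2),      -- sum of x1*x2
           SUM(x1 * sy),       -- sum of x1*y
           SUM(sx2x2),         -- sum of x2*x2
           SUM(sx2y)           -- sum of x2*y
    FROM R JOIN (SELECT k, COUNT(*) AS c, SUM(x2) AS sx2, SUM(y) AS sy,
                        SUM(x2 * x2) AS sx2x2, SUM(x2 * y) AS sx2y
                 FROM S GROUP BY k) AS agg USING (k)
""").fetchone()

# Solve the 2x2 normal equations (X^T X) w = X^T y by Cramer's rule.
det = s11 * s22 - s12 * s12
a = (s1y * s22 - s2y * s12) / det
b = (s11 * s2y - s12 * s1y) / det
print(a, b)  # the toy data admits an exact fit: y = 2*x1 + 1*x2
```

Only a handful of aggregate values leave the database, which is the point of pushing cofactor computation below the join before data extraction.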
Dec-11-2025