From Complexity to Clarity: Analytical Expressions of Deep Neural Network Weights via Clifford's Geometric Algebra and Convexity
While substantial progress has been made in developing deep neural networks (DNNs) to solve practical machine learning problems [1-3], the inner workings of neural networks are not well understood. A foundational theory for understanding how neural networks work is still lacking despite extensive research over several decades. In this paper, we provide a novel analysis of neural networks based on geometric algebra and convex optimization. We show that the weights of deep ReLU neural networks, when trained by minimizing standard regularized loss functions, learn the wedge product of a subset of training samples. Furthermore, the training problem reduces to convex optimization over wedge product features, which encode the geometric structure of the training dataset. This structure is given in terms of signed volumes of triangles and parallelotopes generated by data vectors. Our analysis provides a novel perspective on the inner workings of deep neural networks and sheds light on the role of the hidden layers.
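To make the geometric objects in the abstract concrete, here is a minimal sketch (not the paper's code) of wedge-product features: the wedge product of k vectors corresponds, up to orientation, to the signed volume of the parallelotope they span, computable as a determinant. The function names and the NumPy-based setup below are our own illustrative assumptions, not the authors' feature map.

```python
# Illustrative sketch: signed volumes of parallelotopes spanned by subsets
# of data vectors, the kind of wedge-product feature the abstract refers to.
# This is an assumption-laden toy, not the paper's actual construction.
import numpy as np
from itertools import combinations

def signed_volume(vectors: np.ndarray) -> float:
    """Signed volume of the parallelotope spanned by k vectors in R^k,
    i.e. the determinant of the square matrix with those vectors as rows."""
    return float(np.linalg.det(vectors))

def wedge_features(X: np.ndarray, k: int) -> dict:
    """Signed volumes for every k-subset of the rows of an (n x k) data
    matrix X. These play the role of the geometric features over which,
    per the abstract, the training problem becomes convex."""
    n = X.shape[0]
    return {idx: signed_volume(X[list(idx)]) for idx in combinations(range(n), k)}

# Example: signed areas of parallelograms (twice the signed triangle areas)
# spanned by pairs of 2-D data points.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(wedge_features(X, k=2))  # {(0, 1): 1.0, (0, 2): 1.0, (1, 2): -1.0}
```

The sign of each determinant records orientation, which is why the abstract speaks of *signed* volumes rather than plain volumes.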
Dec-11-2023