Saab transform
Green Learning: Introduction, Examples and Outlook
Kuo, C. -C. Jay, Madni, Azad M.
Rapid advances in artificial intelligence (AI) in the last decade have largely been built upon the wide application of deep learning (DL). However, the high carbon footprint of ever-larger DL networks has become a sustainability concern. Furthermore, the DL decision mechanism is somewhat opaque and can only be verified with test data. Green learning (GL) has been proposed as an alternative paradigm to address these concerns. GL is characterized by low carbon footprints, small model sizes, low computational complexity, and logical transparency. It offers energy-efficient solutions in cloud centers as well as on mobile/edge devices. GL also provides a clear and logical decision-making process to gain people's trust. Several statistical tools have been developed toward this goal in recent years, including subspace approximation, unsupervised and supervised representation learning, supervised discriminant feature selection, and feature space partitioning. A few successful GL examples have achieved performance comparable with state-of-the-art DL solutions. This paper offers an introduction to GL, its demonstrated applications, and a future outlook.
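The subspace approximation tool mentioned above is realized in this line of work by the Saab transform: a PCA-like affine transform whose first kernel is the DC (mean) direction, whose remaining kernels come from PCA of the DC-removed residual, and whose bias is chosen large enough to keep all responses non-negative. The sketch below is a minimal illustration under those assumptions, not the authors' implementation; the function names and the bias choice (maximum input norm) are simplifications for clarity:

```python
import numpy as np

def saab_fit(X, num_kernels):
    """Learn Saab kernels from flattened patches X of shape (n_samples, dim)."""
    n, dim = X.shape
    dc_kernel = np.ones(dim) / np.sqrt(dim)        # DC kernel: normalized mean direction
    dc = X @ dc_kernel
    residual = X - np.outer(dc, dc_kernel)         # remove the DC component before PCA
    cov = np.cov(residual, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # AC kernels: top principal directions
    ac_kernels = eigvecs[:, order[:num_kernels - 1]].T
    kernels = np.vstack([dc_kernel, ac_kernels])
    # Bias bound: |x . k| <= ||x|| for unit kernels, so this keeps responses >= 0.
    bias = np.max(np.linalg.norm(X, axis=1))
    return kernels, bias

def saab_transform(X, kernels, bias):
    """Apply the learned affine Saab transform."""
    return X @ kernels.T + bias
```

Because each kernel has unit norm, adding the maximum training-sample norm as bias guarantees non-negative outputs on the training data, which is what lets successive Saab stages be cascaded without a sign ambiguity.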
PAGER: Progressive Attribute-Guided Extendable Robust Image Generation
Azizi, Zohreh, Kuo, C. -C. Jay
This work presents a generative modeling approach based on successive subspace learning (SSL). Unlike most generative models in the literature, our method does not use neural networks to analyze the underlying source distribution and synthesize images. The resulting method, called the progressive attribute-guided extendable robust image generative (PAGER) model, has advantages in mathematical transparency, progressive content generation, lower training time, robust performance with fewer training samples, and extendibility to conditional image generation. PAGER consists of three modules: a core generator, a resolution enhancer, and a quality booster. The core generator learns the distribution of low-resolution images and performs unconditional image generation. The resolution enhancer increases image resolution via conditional generation. Finally, the quality booster adds finer details to generated images. Extensive experiments on the MNIST, Fashion-MNIST, and CelebA datasets are conducted to demonstrate the generative performance of PAGER.
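The three-module design described above is a coarse-to-fine pipeline: each stage conditions on the previous stage's output. The skeleton below is only a structural sketch with placeholder stages (Gaussian sampling and noise injection stand in for the learned SSL models), not PAGER itself; all function names and resolutions are illustrative assumptions:

```python
import numpy as np

def core_generator(n, rng, res=4):
    # Placeholder for the learned low-resolution source model:
    # draws n unconditional low-resolution samples.
    return rng.normal(size=(n, res, res))

def resolution_enhancer(imgs, rng):
    # Conditional stage: upsample 2x, then add detail predicted
    # from the low-resolution input (noise as a stand-in here).
    up = imgs.repeat(2, axis=1).repeat(2, axis=2)
    return up + 0.1 * rng.normal(size=up.shape)

def quality_booster(imgs, rng):
    # Final stage: add fine-grained texture on top of the enhanced images.
    return imgs + 0.01 * rng.normal(size=imgs.shape)

def generate(n, rng):
    x = core_generator(n, rng)       # unconditional low-resolution generation
    x = resolution_enhancer(x, rng)  # conditional super-resolution
    return quality_booster(x, rng)   # detail refinement
```

The value of this decomposition is modularity: each stage can be trained and swapped independently, and extending to conditional generation only requires conditioning the core generator.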
How to Spot a DeepFake in 2021
I explain Artificial Intelligence terms and news to non-experts. Wondering about the best ways to spot a deepfake? In this video, learn about a breakthrough US Army technology that uses artificial intelligence to find deepfakes. Read the full article: https://www.louisbouchard.ai/spot-deepfakes While deepfakes seem like they have always been around, realistic ones are a recent development, and the technology has advanced rapidly from the first crude automatically generated fake images to today's convincing results. How can we tell what's real from what isn't?
PixelHop++: A Small Successive-Subspace-Learning-Based (SSL-based) Model for Image Classification
Chen, Yueru, Rouhsedaghat, Mozhdeh, You, Suya, Rao, Raghuveer, Kuo, C. -C. Jay
The successive subspace learning (SSL) principle was developed and used to design an interpretable learning model, known as the PixelHop method, for image classification in our prior work. Here, we propose an improved PixelHop method and call it PixelHop++. First, to reduce the PixelHop model size, we decouple a joint spatial-spectral input tensor into multiple spatial tensors (one for each spectral component) under the spatial-spectral separability assumption and perform the Saab transform in a channel-wise manner, called the channel-wise (c/w) Saab transform. Second, by performing this operation successively from one hop to the next, we construct a channel-decomposed feature tree whose leaf nodes contain one-dimensional (1D) features. Third, these 1D features are ranked according to their cross-entropy values, which allows us to select a subset of discriminant features for image classification. In PixelHop++, one can control the learning model size at fine granularity, offering a flexible tradeoff between model size and classification performance. We demonstrate the flexibility of PixelHop++ on three datasets: MNIST, Fashion-MNIST, and CIFAR-10.
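The feature-ranking step described above scores each 1D leaf-node feature by a histogram-based cross-entropy against the class labels and keeps the lowest-scoring (most discriminant) ones. The sketch below is a minimal illustration under assumed details (quantile binning, bin count, and function names are my choices, not the paper's specification):

```python
import numpy as np

def feature_cross_entropy(f, y, num_bins=8, num_classes=2):
    """Score one 1D feature f against integer labels y (lower = more discriminant)."""
    # Quantile bin edges so each bin gets roughly equal mass.
    edges = np.quantile(f, np.linspace(0, 1, num_bins + 1)[1:-1])
    idx = np.digitize(f, edges)
    ce = 0.0
    for b in range(num_bins):
        mask = idx == b
        if not mask.any():
            continue
        # Empirical class distribution within this bin.
        p = np.bincount(y[mask], minlength=num_classes) / mask.sum()
        p = np.clip(p, 1e-12, 1.0)
        # Bin-weighted entropy: near zero when a bin is class-pure.
        ce -= mask.mean() * np.sum(p * np.log(p))
    return ce

def select_features(X, y, k):
    """Return indices of the k lowest cross-entropy (most discriminant) features."""
    scores = np.array([feature_cross_entropy(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[:k]
```

A feature whose bins are class-pure yields near-zero cross-entropy and ranks first; a feature independent of the labels approaches the entropy of the class prior and ranks last.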