hog
A Comparison of Selected Image Transformation Techniques for Malware Classification
Agrawal, Rishit, Bhatnagar, Kunal, Do, Andrew, Rana, Ronnit, Stamp, Mark
Recently, a considerable amount of malware research has focused on the use of powerful image-based machine learning techniques, which generally yield impressive results. However, before image-based techniques can be applied to malware, the samples must be converted to images, and there is no generally-accepted approach for doing so. The malware-to-image conversion strategies found in the literature often appear to be ad hoc, with little or no effort made to take into account properties of executable files. In this paper, we experiment with eight distinct malware-to-image conversion techniques, and for each, we test a variety of learning models. We find that several of these image conversion techniques perform similarly across a range of learning models, in spite of the image conversion processes being quite different. These results suggest that the effectiveness of image-based malware classification may depend more on the inherent strengths of image analysis techniques than on the precise details of the image conversion strategy.
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
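The abstract above notes there is no generally-accepted malware-to-image conversion. As a rough illustration only, here is a minimal sketch of the simplest family of strategies seen in the literature: map each raw byte of the executable to one grayscale pixel in a fixed-width image. The width of 64 and the zero-padding are illustrative choices, not the paper's.

```python
import numpy as np

def bytes_to_grayscale(data: bytes, width: int = 64) -> np.ndarray:
    """Map raw executable bytes to a 2-D grayscale image.

    Each byte (0-255) becomes one pixel; rows are fixed-width slices
    of the byte stream. Trailing bytes that do not fill a full row
    are zero-padded.
    """
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = -(-len(buf) // width)          # ceiling division
    img = np.zeros(rows * width, dtype=np.uint8)
    img[: len(buf)] = buf
    return img.reshape(rows, width)

# Example: a fake 300-byte "sample" becomes a 5x64 image (last row padded).
sample = bytes(range(256)) + b"\x00" * 44
image = bytes_to_grayscale(sample, width=64)
print(image.shape)  # (5, 64)
```

The resulting array can be fed to any image-based learning model; the paper's other seven conversion techniques would replace only this mapping step, leaving the downstream models unchanged.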
Lightweight Deepfake Detection Based on Multi-Feature Fusion
Yasir, Siddiqui Muhammad, Kim, Hyun
Deepfake technology utilizes deep-learning-based face manipulation techniques to seamlessly replace faces in videos, creating highly realistic but artificially generated content. Although this technology has beneficial applications in media and entertainment, misuse of its capabilities may lead to serious risks, including identity theft, cyberbullying, and false information. The integration of DL with visual cognition has resulted in important technological improvements, particularly in addressing privacy risks caused by artificially generated deepfake images on digital media platforms. In this study, we propose an efficient and lightweight method for detecting deepfake images and videos, making it suitable for devices with limited computational resources. In order to reduce the computational burden usually associated with DL models, our method integrates machine learning classifiers in combination with keyframing approaches and texture analysis. Moreover, features extracted with a histogram of oriented gradients (HOG), local binary pattern (LBP), and KAZE bands were integrated and evaluated using random forest, extreme gradient boosting, extra trees, and support vector classifier algorithms. Our findings show that a feature-level fusion of HOG, LBP, and KAZE features improves accuracy to 92% and 96% on FaceForensics++ and Celeb-DFv2, respectively.
- Europe > Austria > Vienna (0.14)
- Asia > South Korea > Seoul > Seoul (0.05)
- North America > United States > Washington > King County > Seattle (0.04)
- (17 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (0.46)
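As a rough illustration of the feature-level fusion described in the abstract above, the sketch below computes simplified, image-global versions of the HOG and LBP descriptors in plain NumPy and concatenates them into one feature vector. This is a minimal sketch, not the authors' pipeline: real HOG uses cell/block normalization, real LBP typically uses uniform patterns, KAZE (available in OpenCV) is omitted, and the classifiers named in the abstract would consume the fused vector downstream. All parameters (9 orientation bins, 8 neighbours) are illustrative.

```python
import numpy as np

def hog_histogram(img: np.ndarray, bins: int = 9) -> np.ndarray:
    """Simplified global histogram of oriented gradients (no cells/blocks)."""
    gx = np.diff(img.astype(float), axis=1)[:-1, :]   # horizontal gradient
    gy = np.diff(img.astype(float), axis=0)[:, :-1]   # vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0      # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-9)

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour local binary pattern histogram."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit      # one bit per neighbour
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / (hist.sum() + 1e-9)

# Feature-level fusion: concatenate the descriptors into one vector.
img = (np.arange(64 * 64).reshape(64, 64) % 255).astype(np.uint8)
features = np.concatenate([hog_histogram(img), lbp_histogram(img)])
print(features.shape)  # (265,)
```

The fused 265-dimensional vector would then be passed to a classifier such as random forest or a support vector classifier.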
Leveraging Pre-trained CNNs for Efficient Feature Extraction in Rice Leaf Disease Classification
Sobuj, Md. Shohanur Islam, Hossen, Md. Imran, Mahmud, Md. Foysal, Khan, Mahbub Ul Islam
Rice disease classification is a critical task in agricultural research, and in this study, we rigorously evaluate the impact of integrating feature extraction methodologies within pre-trained convolutional neural networks (CNNs). Initial investigations into baseline models, devoid of feature extraction, revealed commendable performance with ResNet-50 and ResNet-101 achieving accuracies of 91% and 92%, respectively. Subsequent integration of Histogram of Oriented Gradients (HOG) yielded substantial improvements across architectures, notably propelling the accuracy of EfficientNet-B7 from 92% to an impressive 97%. Conversely, the application of Local Binary Patterns (LBP) demonstrated more conservative performance enhancements. Moreover, employing Gradient-weighted Class Activation Mapping (Grad-CAM) unveiled that HOG integration resulted in heightened attention to disease-specific features, corroborating the performance enhancements observed. Visual representations further validated HOG's notable influence, showcasing a discernible surge in accuracy across epochs due to focused attention on disease-affected regions. These results underscore the pivotal role of feature extraction, particularly HOG, in refining representations and bolstering classification accuracy. The study's significant highlight was the achievement of 97% accuracy with EfficientNet-B7 employing HOG and Grad-CAM, a noteworthy advancement in optimizing pre-trained CNN-based rice disease identification systems. The findings advocate for the strategic integration of advanced feature extraction techniques with cutting-edge pre-trained CNN architectures, presenting a promising avenue for substantially augmenting the precision and effectiveness of image-based disease classification systems in agricultural contexts.
- North America > United States > Texas > Ellis County (0.04)
- Asia > Bangladesh > Dhaka Division > Dhaka District > Dhaka (0.04)
And You Thought Poisoning Feral Pigs Would Be Easy?
This story was originally published by Undark and is reproduced here as part of the Climate Desk collaboration. Early one winter morning in 2020, Kurt VerCauteren discovered a cluster of dead birds in a barren field in northwest Texas. They were small birds, mostly dark-eyed juncos, but also a smattering of white-crowned sparrows. VerCauteren's team had poisoned them, inadvertently. The clues were clear, the death uncomplicated: The birds had flown in before dawn to scavenge deadly morsels of a contaminated peanut paste, left behind after a sounder of wild hogs had torn through the area in a feeding frenzy. The birds likely died within minutes of eating. "I couldn't even see the crumbs," says VerCauteren, a wildlife biologist at the US Department of Agriculture in Fort Collins, Colorado, who has spent years developing and testing pig poisons. The birds were the unintended victims of a field experiment to test a toxicant--one intended for feral pigs, but no other animals--that had been developed in Australia.
- North America > United States > Texas (0.39)
- North America > United States > Colorado > Larimer County > Fort Collins (0.24)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Food & Agriculture > Agriculture (1.00)
Deep Tensor CCA for Multi-view Learning
Wong, Hok Shing, Wang, Li, Chan, Raymond, Zeng, Tieyong
We present Deep Tensor Canonical Correlation Analysis (DTCCA), a method to learn complex nonlinear transformations of multiple views (more than two) of data such that the resulting representations are linearly correlated in high order. The high-order correlation of the given multiple views is modeled by a covariance tensor, which is different from most CCA formulations relying solely on pairwise correlations. Parameters of the transformations of each view are jointly learned by maximizing the high-order canonical correlation. To solve the resulting problem, we reformulate it as the best sum of rank-1 approximations, which can be efficiently solved by existing tensor decomposition methods. DTCCA is a nonlinear extension of tensor CCA (TCCA) via deep networks. The transformations of DTCCA are parametric functions, which are very different from the implicit mappings in the form of kernel functions. Compared with kernel TCCA, DTCCA not only can deal with arbitrary dimensions of the input data, but also does not need to retain the training data for computing representations of any given data point. Hence, DTCCA as a unified model can efficiently overcome the scalability issue of TCCA for either high-dimensional multi-view data or a large number of views, and it also naturally extends TCCA for learning nonlinear representations. Extensive experiments on three multi-view data sets demonstrate the effectiveness of the proposed method.
- Asia > China > Hong Kong (0.04)
- North America > United States > Texas > Tarrant County > Arlington (0.04)
- Asia > Singapore (0.04)
- Asia > Middle East > Jordan (0.04)
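The covariance tensor at the heart of TCCA/DTCCA generalizes the pairwise cross-covariance matrix to more than two views: for three views, entry (i, j, k) is the sample average of the product of the i-th, j-th, and k-th centered coordinates across the views. A minimal NumPy sketch for three views (the deep transformations and the rank-1 decomposition step of the actual method are omitted; the dimensions are placeholders):

```python
import numpy as np

def covariance_tensor(views):
    """Third-order covariance tensor across three views.

    C[i, j, k] = E[ x1_i * x2_j * x3_k ] over the n samples,
    with each view centered column-wise first.
    """
    X1, X2, X3 = [V - V.mean(axis=0, keepdims=True) for V in views]
    n = X1.shape[0]
    return np.einsum('ni,nj,nk->ijk', X1, X2, X3) / n

rng = np.random.default_rng(0)
n = 100
views = [rng.standard_normal((n, d)) for d in (4, 5, 6)]
C = covariance_tensor(views)
print(C.shape)  # (4, 5, 6)
```

TCCA maximizes the high-order canonical correlation encoded in this tensor; DTCCA first passes each view through its own deep network, then applies the same construction to the network outputs.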
CIFAR-10 Image Classification Using Feature Ensembles
Giuste, Felipe O., Vizcarra, Juan C.
Image classification requires the generation of features capable of detecting image patterns informative of group identity. The objective of this study was to classify images from the public CIFAR-10 image dataset by leveraging combinations of disparate image feature sources from both manual and deep learning approaches. Histogram of oriented gradients (HOG) and pixel intensities successfully inform classification (53% and 59% classification accuracy, respectively), yet there is much room for improvement. VGG16 with ImageNet-trained weights and a CIFAR-10 optimized model (CIFAR-VGG) further improve upon image classification (60% and 93.43% accuracy, respectively). We further improved classification by utilizing transfer learning to re-establish optimal network weights for VGG16 (TL-VGG) and Inception ResNet v2 (TL-Inception), resulting in significant performance increases (85% and 90.74%, respectively), yet these failed to surpass CIFAR-VGG. We hypothesized that if each generated feature set obtained some unique insight into the classification problem, then combining these features would result in greater classification accuracy, surpassing that of CIFAR-VGG. Upon selection of the top 1000 principal components from TL-VGG, TL-Inception, HOG, pixel intensities, and CIFAR-VGG, we achieved testing accuracy of 94.6%, lending support to our hypothesis.
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision > Image Understanding (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.72)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.58)
Artificial Intelligence Will Hog All the Best Menial Tasks
When machines start picking off all the easy work for themselves, many white-collar jobs are going to get a lot harder. About five years ago, I was the vice president of data for Kickstarter. People would come to the crowdfunding platform with wild ideas--before I got hired there, I used Kickstarter to raise funds for a translation of Moby-Dick into emoji--and company staff got to decide which projects could solicit money from the public. For those working on the Kickstarter projects team, being able to see an obviously worthy idea, and approve it five seconds later, was a total blast. In the early days, it felt like everyone at the company was competing to see what kind of fun and interesting projects we could recruit to our platform.
Alibaba applies cloud and big data in animal husbandry, forestry, fisheries - The Nation
Most livestock and field crops rely heavily on the weather for their comfort and for the provision of water and energy. But China's more than 1.3 billion residents, a growing number of whom are becoming mid-income earners, are building up such an appetite that farmers are having to change the way they grow and sell food. In order to transform an ancient business that was largely run on intuition, the modern answer is technology. Artificial intelligence has come to the farmyard, helping to ensure the country's increasing numbers of pigs remain active and crop yields grow ever larger. This is the case for Wang Degen and his company Tequ Group, a major hog farm in Southwest China's Sichuan province.
- North America > United States (0.31)
- Asia > China > Sichuan Province (0.25)
- Asia > Singapore (0.05)
- Asia > China > Shaanxi Province (0.05)
- Food & Agriculture > Agriculture (1.00)
- Health & Medicine (0.99)
Matching Cars with Siamese Networks – Gab41
Lab41 just finished Pelops, a vehicle re-identification project using data from fixed video cameras. Last time I talked about "chipping", that is extracting an image of a vehicle from a frame of video automatically. We found that background subtraction worked OK based on the small amount of labeled data we had. In this post I'll go over the rest of the pipeline: feature extraction and vehicle matching. Machine learning algorithms operate on a vector of numbers.
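The post's point that machine learning algorithms operate on a vector of numbers is the crux of re-identification: once each vehicle chip is reduced to a feature vector, matching becomes nearest-neighbour search under some similarity measure. A toy sketch with made-up 4-dimensional vectors (real Pelops features would come from a trained network, and cosine similarity is just one common choice):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_vehicle(query: np.ndarray, gallery: list) -> int:
    """Return the index of the gallery vector most similar to the query."""
    scores = [cosine_similarity(query, g) for g in gallery]
    return int(np.argmax(scores))

# Hypothetical feature vectors for three gallery chips and one query chip.
gallery = [np.array([1.0, 0.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0, 0.0]),
           np.array([0.0, 0.0, 1.0, 0.0])]
query = np.array([0.9, 0.1, 0.0, 0.0])
print(match_vehicle(query, gallery))  # 0
```

A Siamese network improves on this by learning the feature extractor so that chips of the same vehicle land close together and chips of different vehicles land far apart.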
Teaching computers to see -- by learning to see like computers
They comb through databases of previously labeled images and look for combinations of visual features that seem to correlate with particular objects. Then, when presented with a new image, they try to determine whether it contains one of the previously identified combinations of features. Even the best object-recognition systems, however, succeed only around 30 or 40 percent of the time -- and their failures can be totally mystifying. Researchers are divided in their explanations: Are the learning algorithms themselves to blame? Or are they being applied to the wrong types of features?
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.40)
- North America > United States > California (0.05)