Tactile Based Fabric Classification via Robotic Sliding


Tactile sensing enables robots to perceive certain physical properties of an object in contact, properties that are not directly accessible to visual or acoustic sensors. Robots with tactile perception can identify different textures of a touched object. Interestingly, fine textures with micro-geometry beyond the nominal resolution of the tactile sensor can also be identified through exploratory robotic movements such as sliding and rubbing. To study fine texture classification via robotic sliding, we design a robotic sliding experiment using everyday fabrics, as fabrics are among the most common materials with fine textures. We propose a feature extraction process that encodes the acquired tactile signals (in the form of time series) into a low-dimensional (<= 7D) feature vector. The vector captures the frequency signature of a fabric texture, so that distinctive fabrics can be classified by their corresponding feature vectors. The experiment includes multiple combinations of sliding parameters, i.e., speed and pressure, to investigate the correlation between the sliding parameters and the generated feature space. Results show that changing the contact pressure can greatly affect the significance of the extracted feature vectors. For the specific sensor used in our experiments, there exists a sweet spot of pressure for the fabric classification task. Conversely, varying the sliding speed shows no apparent impact on the performance of the feature extraction...
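As an illustrative sketch only (not the authors' exact pipeline), a frequency-signature vector of this kind can be obtained by binning the spectrum of the tactile time series into a small, fixed number of band energies; the function name, band scheme, and synthetic signals below are all hypothetical:

```python
import numpy as np

def frequency_signature(signal, n_features=7):
    """Sketch: compress a 1-D tactile time series into a low-dimensional
    (<= 7D) feature vector of normalized spectral band energies."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    # Split the magnitude spectrum into n_features bands and sum the
    # energy in each; normalize so the vector is amplitude-invariant.
    bands = np.array_split(spectrum, n_features)
    energies = np.array([np.sum(b ** 2) for b in bands])
    total = np.sum(energies)
    return energies / total if total > 0 else energies

# Two synthetic "fabrics" sensed at 200 Hz: coarse vs. fine ripple.
t = np.linspace(0, 1, 200, endpoint=False)
f1 = frequency_signature(np.sin(2 * np.pi * 5 * t))   # coarse texture
f2 = frequency_signature(np.sin(2 * np.pi * 40 * t))  # fine texture
```

Distinct textures concentrate energy in different bands, so a simple distance-based classifier can separate the resulting vectors.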

Feature Extraction & Data E


As of 2021, retail companies have ever more data at their disposal: data about their business processes, customers, and products; virtually every aspect of modern business is backed by large amounts of data. This data can be used to make rational, informed operational and strategic decisions, and, for example, to optimize the sales process or to provide customers with better service. In short, data is invaluable to companies that want to keep pace with a constantly shifting digital paradigm.

Selecting and combining complementary feature representations and classifiers for hate speech detection Artificial Intelligence

Hate speech is a major issue in social networks due to the high volume of data generated daily. Recent works demonstrate the usefulness of machine learning (ML) in handling the nuances required to distinguish hateful posts from mere sarcasm or offensive language. Many ML solutions for hate speech detection have been proposed, changing either how features are extracted from the text or the classification algorithm employed. However, most works consider only one type of feature extraction and classification algorithm. This work argues that a combination of multiple feature extraction techniques and different classification models is needed. We propose a framework to analyze the relationship between multiple feature extraction and classification techniques in order to understand how they complement each other. The framework is used to select a subset of complementary techniques to compose a robust multiple classifiers system (MCS) for hate speech detection. An experimental study on four hate speech classification datasets demonstrates that the proposed framework is a promising methodology for analyzing and designing high-performing MCS for this task. The MCS obtained using the proposed framework significantly outperforms the combination of all models and the homogeneous and heterogeneous selection heuristics, demonstrating the importance of a proper selection scheme. Source code, figures, and dataset splits can be found in the GitHub repository:
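The basic idea of fusing classifiers trained on different feature representations can be sketched with a plain soft-voting rule; this is a generic MCS baseline, not the paper's selection scheme, and the probabilities below are made up:

```python
import numpy as np

def combine_predictions(prob_matrix, weights=None):
    """Soft-voting fusion for a multiple classifiers system (MCS):
    average the per-model hate-speech probabilities (optionally
    weighted) and threshold the result at 0.5."""
    probs = np.asarray(prob_matrix, dtype=float)  # shape: (models, samples)
    if weights is None:
        weights = np.ones(probs.shape[0]) / probs.shape[0]
    fused = np.average(probs, axis=0, weights=weights)
    return (fused >= 0.5).astype(int)             # 1 = hateful

# Hypothetical scores from three models built on different features
# (e.g. TF-IDF, embeddings, character n-grams) for three posts; the
# models disagree on the third post and the ensemble resolves it.
preds = combine_predictions([[0.9, 0.2, 0.6],
                             [0.8, 0.1, 0.4],
                             [0.7, 0.3, 0.7]])
```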

Feature Extraction Framework based on Contrastive Learning with Adaptive Positive and Negative Samples Artificial Intelligence

Currently, high-dimensional data is widely used in pattern recognition and data mining, which leads to high storage overhead, heavy computation, and excessive time consumption, apart from causing the problem known as the "curse of dimensionality". A significant way to address these issues is feature extraction, which transforms the original high-dimensional data into a low-dimensional subspace via a projection matrix. Although feature extraction often performs worse than deep learning, it has remained a research hotspot because of its strong interpretability and because it runs well on any type of hardware (CPU, GPU, DSP). There is therefore an urgent need for traditional feature extraction to better extract discriminative features for downstream tasks. In the field of deep learning, contrastive learning has attracted extensive scholarly attention as the primary method of self-supervised learning. Contrastive learning uses the data to supervise itself by constructing positive and negative samples, striving to learn more discriminative features. The InfoNCE loss based on contrastive learning was proposed in contrastive predictive coding (CPC) (van den Oord et al., 2018). CPC proves that minimizing the InfoNCE loss maximizes a lower bound on mutual information, which provides theoretical support for its advantage in extracting more discriminative features. Consequently, a large number of studies based on contrastive learning have been proposed.
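A minimal sketch of the InfoNCE loss mentioned above: the loss is the cross-entropy of picking the positive sample among the negatives by similarity, so it is small when the anchor is much closer to its positive than to any negative. The vectors and temperature here are illustrative, not taken from the paper:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE (CPC-style) sketch: softmax cross-entropy over cosine
    similarities, with the positive sample at index 0."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # -log softmax probability assigned to the positive.
    return -(logits[0] - np.log(np.sum(np.exp(logits))))

rng = np.random.default_rng(0)
x = rng.normal(size=8)
good = x + 0.01 * rng.normal(size=8)          # positive: near the anchor
bad = [rng.normal(size=8) for _ in range(5)]  # negatives: unrelated
loss_aligned = info_nce(x, good, bad)
loss_shuffled = info_nce(x, bad[0], [good] + bad[1:])
```

Minimizing this quantity pushes the representation to make positives distinguishable from negatives, which is exactly the "more discriminative features" property the abstract refers to.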

Self-aligned Spatial Feature Extraction Network for UAV Vehicle Re-identification Artificial Intelligence

Compared with existing vehicle re-identification (ReID) tasks conducted on datasets collected by fixed surveillance cameras, vehicle ReID for unmanned aerial vehicles (UAVs) is still under-explored and can be more challenging. Vehicles of the same color and type show extremely similar appearance from the UAV's perspective, so mining fine-grained characteristics becomes necessary. Recent works tend to extract distinguishing information via regional features and component features. The former requires input images to be aligned, and the latter entails detailed annotations, both of which are difficult to meet in UAV applications. In order to extract efficient fine-grained features and avoid tedious annotation work, this letter develops an unsupervised self-aligned network consisting of three branches. The network introduces a self-alignment module to convert input images with variable orientations to a uniform orientation, implemented under the constraint of a triplet loss function designed with spatial features. On this basis, spatial features, obtained by vertical and horizontal segmentation methods, are integrated with global features to improve the representation ability in the embedding space. Extensive experiments are conducted on the UAV-VeID dataset, and our method achieves the best performance compared with recent ReID works.
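For reference, the triplet loss that constrains the alignment module has a standard form: pull features of the same identity together and push different identities at least a margin apart. This is the generic formulation, not the paper's spatial-feature variant, and the margin and vectors below are illustrative:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet loss on embedding vectors: penalize the case
    where the same-identity distance is not smaller than the
    different-identity distance by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # same-vehicle distance
    d_neg = np.linalg.norm(anchor - negative)  # different-vehicle distance
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
easy = triplet_loss(a, np.array([0.1, 0.0]), np.array([1.0, 0.0]))
hard = triplet_loss(a, np.array([1.0, 0.0]), np.array([0.1, 0.0]))
```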

What is Feature Extraction in Image Processing?


The data we collect in real-world settings is enormous, and manual processing is not feasible; a systematic process is required to make sense of it. Feature extraction is part of the dimensionality reduction process, in which an initial set of raw data is reduced to more manageable groups that are easier to handle.
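A classic concrete instance of this idea is principal component analysis (PCA), which projects raw data onto the few directions of highest variance. The sketch below, via NumPy's SVD, is one common way to implement it; the data is synthetic:

```python
import numpy as np

def pca_extract(X, n_components=2):
    """Feature extraction as dimensionality reduction: project
    centered data onto its top principal components (PCA via SVD)."""
    Xc = X - X.mean(axis=0)                    # center each raw feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T            # low-dimensional features

rng = np.random.default_rng(1)
raw = rng.normal(size=(100, 10))               # 100 samples, 10 raw features
features = pca_extract(raw, n_components=3)    # reduced to 3 features
```

The extracted columns are ordered by decreasing variance, so the first few retain most of the structure of the original 10-dimensional data.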

AMSER: Adaptive Multi-modal Sensing for Energy Efficient and Resilient eHealth Systems Artificial Intelligence

eHealth systems deliver critical digital healthcare and wellness services by continuously monitoring users' physiological and contextual data. eHealth applications use multi-modal machine learning kernels to analyze data from different sensor modalities and automate decision-making. Noisy inputs and motion artifacts during sensory data acquisition affect i) the prediction accuracy and resilience of eHealth services and ii) energy efficiency, through the processing of garbage data. Monitoring raw sensory inputs to identify and drop data and features from noisy modalities can improve prediction accuracy and energy efficiency. We propose a closed-loop monitoring and control framework for multi-modal eHealth applications, AMSER, that can mitigate garbage-in garbage-out by i) monitoring input modalities, ii) analyzing raw input to selectively drop noisy data and features, and iii) choosing appropriate machine learning models that fit the configured data and feature vector, to improve prediction accuracy and energy efficiency. We evaluate AMSER on the multi-modal eHealth applications of pain assessment and stress monitoring over different levels and types of noisy components incurred via different sensor modalities. Our approach achieves up to a 22% improvement in prediction accuracy and a 5.6x reduction in sensing-phase energy consumption against the state-of-the-art multi-modal monitoring application.
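The "selectively drop noisy modalities" step can be sketched with a crude per-modality quality gate. This is a hypothetical illustration of the idea, not AMSER's actual monitor: it uses a simple mean-power-over-variance SNR estimate, and the modality names, threshold, and signals are invented:

```python
import numpy as np

def select_modalities(windows, snr_threshold_db=5.0):
    """Hypothetical input monitor: keep only sensor modalities whose
    crude SNR estimate clears a threshold, so downstream models and
    the sensing pipeline never spend energy on garbage data."""
    kept = {}
    for name, x in windows.items():
        x = np.asarray(x, dtype=float)
        # Rough SNR proxy: DC signal power relative to fluctuation power.
        snr_db = 10 * np.log10(np.mean(x) ** 2 / (np.var(x) + 1e-12) + 1e-12)
        if snr_db >= snr_threshold_db:
            kept[name] = x
    return kept

rng = np.random.default_rng(0)
windows = {
    "ppg": 1.0 + 0.01 * rng.normal(size=100),  # clean: strong steady signal
    "eda": rng.normal(size=100),               # noisy: zero-mean artifacts
}
kept = select_modalities(windows)              # only "ppg" survives
```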

Gait Identification under Surveillance Environment based on Human Skeleton Artificial Intelligence

As an emerging biological identification technology, vision-based gait identification is an important research topic in biometrics. Most existing gait identification methods extract features from gait videos and identify a probe sample by querying the gallery. However, video data contains redundant information and is easily influenced by bagging (BG) and clothing (CL). Since human body skeletons convey the essential information about human gait, a skeleton-based gait identification network is proposed in our project. First, skeleton sequences are extracted from the video and mapped into a gait graph. Then a feature extraction network based on the Spatio-Temporal Graph Convolutional Network (ST-GCN) is constructed to learn gait representations. Finally, the probe sample is identified by matching it with the most similar sample in the gallery. We tested our method on the CASIA-B dataset. The results show that our approach is highly adaptive and achieves advanced results under the BG and CL conditions and on average.

DaRE: A Cross-Domain Recommender System with Domain-aware Feature Extraction and Review Encoder Artificial Intelligence

Recent advances in recommender systems, especially text-aided methods and CDR (Cross-Domain Recommendation), lead to promising results in solving the data-sparsity and cold-start problems. Despite such progress, prior algorithms either require user overlap or ignore domain-aware feature extraction. In addition, text-aided methods overly emphasize aggregated documents and fail to capture the specifics embedded in individual reviews. To overcome these limitations, we propose a novel method named DaRE (Domain-aware Feature Extraction and Review Encoder), a comprehensive solution that consists of three key components: text-based representation learning, domain-aware feature extraction, and a review encoder. DaRE attenuates noise by separating domain-invariant features from domain-specific features through selective adversarial training. DaRE extracts features from aggregated documents, and the review encoder fine-tunes the representations by aligning them with the features extracted from individual reviews. Experiments on four real-world datasets show the superiority of DaRE over state-of-the-art single-domain and cross-domain methodologies, achieving 9.2% and 3.6% improvements, respectively. We upload our implementations for reproducibility.

GANG-MAM: GAN based enGine for Modifying Android Malware Artificial Intelligence

Malware detectors based on machine learning are vulnerable to adversarial attacks. Generative Adversarial Networks (GANs) are neural-network architectures that can produce successful adversarial samples, and interest in this technology is growing quickly. In this paper, we propose a system that produces a feature vector for making an Android malware strongly evasive and then modifies the malicious program accordingly. Such a system could have a twofold contribution: it could be used to generate datasets for validating systems that detect GAN-based malware, and to enlarge training and testing datasets so as to build more robust malware classifiers.