Data Science Quiz

#artificialintelligence

Interviews are the most challenging part of getting any job, especially for Data Scientist and Machine Learning Engineer roles, where you are tested on Machine Learning and Deep Learning concepts. Given below is a short quiz of 25 questions, consisting of MCQs (one or more correct), true-false, and integer-type questions, to check your knowledge. Explanation: The derivative of the Leaky ReLU activation function h(z) is 1 for z > 0, while for z < 0 it is a very small constant (the leak slope). Explanation: Residuals are vertical offsets, and the sum of residuals is always zero. Explanation: For deciding class w1, the conditional risk for w1 is smaller than that for w2.
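The Leaky ReLU explanation above can be checked with a minimal NumPy sketch (the leak slope of 0.01 is a common but illustrative choice):

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    """Leaky ReLU: z for z > 0, alpha * z otherwise."""
    return np.where(z > 0, z, alpha * z)

def leaky_relu_grad(z, alpha=0.01):
    """Derivative: 1 for z > 0, the small slope alpha for z < 0."""
    return np.where(z > 0, 1.0, alpha)

print(leaky_relu_grad(np.array([-2.0, 3.0])))  # [0.01 1.  ]
```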


Evaluating Post-Training Compression in GANs using Locality-Sensitive Hashing

arXiv.org Artificial Intelligence

The analysis of the compression effects in generative adversarial networks (GANs) after training, i.e. without any fine-tuning, remains an unstudied, albeit important, topic with the increasing trend of their computation and memory requirements. While existing works discuss the difficulty of compressing GANs during training, requiring novel methods designed with the instability of GANs training in mind, we show that existing compression methods (namely clipping and quantization) may be directly applied to compress GANs post-training, without any additional changes. High compression levels may distort the generated set, likely leading to an increase of outliers that may negatively affect the overall assessment of existing k-nearest neighbor (KNN) based metrics. We propose two new precision and recall metrics based on locality-sensitive hashing (LSH), which, on top of increasing the outlier robustness, decrease the complexity of assessing an evaluation sample against $n$ reference samples from $O(n)$ to $O(\log(n))$, if using LSH and KNN, and to $O(1)$, if only applying LSH. We show that low-bit compression of several pre-trained GANs on multiple datasets induces a trade-off between precision and recall, retaining sample quality while sacrificing sample diversity.
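To illustrate why LSH brings per-sample assessment down to O(1), here is a minimal random-hyperplane LSH sketch; the plane count, toy data, and bucket-membership test are illustrative assumptions, not the paper's actual metric:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_hash(X, planes):
    """Random-hyperplane LSH: each sample maps to a bit-string bucket key."""
    bits = (X @ planes.T) > 0
    return [tuple(row) for row in bits]

# Reference set of n samples, hashed once up front.
d, n_bits = 8, 6
planes = rng.normal(size=(n_bits, d))
reference = rng.normal(size=(1000, d))
buckets = set(lsh_hash(reference, planes))

# Assessing a sample is O(1): hash it and test bucket membership,
# instead of scanning all n reference samples as KNN would.
key = lsh_hash(reference[:1], planes)[0]
print(key in buckets)  # True: the sample hashes to its own bucket
```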


Feature Selection for Imbalanced Data with Deep Sparse Autoencoders Ensemble

arXiv.org Machine Learning

Class imbalance is a common issue in many application domains of learning algorithms, and oftentimes in these same domains it is much more important to correctly classify and profile minority class observations. This need can be addressed by Feature Selection (FS), which offers several further advantages, such as decreasing computational costs and aiding inference and interpretability. However, traditional FS techniques may become sub-optimal in the presence of strongly imbalanced data. To achieve the advantages of FS in this setting, we propose a filtering FS algorithm that ranks feature importance on the basis of the Reconstruction Error of a Deep Sparse AutoEncoders Ensemble (DSAEE). Each DSAE is trained only on the majority class and is then used to reconstruct both classes. From the analysis of the aggregated Reconstruction Error, we determine the features for which the minority class presents a different distribution of values w.r.t. the overrepresented one, thus identifying the most relevant features for discriminating between the two. We empirically demonstrate the efficacy of our algorithm in several experiments on high-dimensional datasets of varying sample size, showcasing its capability to select relevant and generalizable features for profiling and classifying the minority class, outperforming other benchmark FS methods. We also briefly present a real application in radiogenomics, where the methodology was applied successfully.
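A rough sense of the reconstruction-error idea can be given with a linear sketch: a PCA reconstruction fitted only on the majority class stands in for the paper's deep sparse autoencoder ensemble, and the toy data and class shift are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def make(n):
    x0 = rng.normal(size=n)
    x1 = rng.normal(size=n)
    x2 = 2 * x0 + 0.1 * rng.normal(size=n)  # x2 is explained by x0
    return np.column_stack([x0, x1, x2])

maj = make(500)
mino = make(60)
mino[:, 0] += 4.0  # minority breaks the x2 = 2*x0 relation along feature 0

# Linear "autoencoder": top-2 principal directions of the majority class.
mu = maj.mean(axis=0)
_, _, Vt = np.linalg.svd(maj - mu, full_matrices=False)
W = Vt[:2]

def per_feature_error(X):
    """Mean squared reconstruction error, broken out per feature."""
    recon = (X - mu) @ W.T @ W + mu
    return ((X - recon) ** 2).mean(axis=0)

# Features where the minority class reconstructs worse are the discriminative ones.
gap = per_feature_error(mino) - per_feature_error(maj)
print(int(np.argmax(gap)))  # 0: feature 0 best separates the classes
```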


SSD: A Unified Framework for Self-Supervised Outlier Detection

arXiv.org Artificial Intelligence

We ask the following question: what training information is required to design an effective outlier/out-of-distribution (OOD) detector, i.e., detecting samples that lie far away from the training distribution? Since unlabeled data is easily accessible for many applications, the most compelling approach is to develop detectors based on only unlabeled in-distribution data. However, we observe that most existing detectors based on unlabeled data perform poorly, often equivalent to a random prediction. In contrast, existing state-of-the-art OOD detectors achieve impressive performance but require access to fine-grained data labels for supervised training. We propose SSD, an outlier detector based on only unlabeled in-distribution data. We use self-supervised representation learning followed by a Mahalanobis distance based detection in the feature space. We demonstrate that SSD outperforms most existing detectors based on unlabeled data by a large margin. Additionally, SSD even achieves performance on par, and sometimes even better, with supervised training based detectors. Finally, we expand our detection framework with two key extensions. First, we formulate few-shot OOD detection, in which the detector has access to only one to five samples from each class of the targeted OOD dataset. Second, we extend our framework to incorporate training data labels, if available. We find that our novel detection framework based on SSD displays enhanced performance with these extensions, and achieves state-of-the-art performance. Our code is publicly available at https://github.com/inspire-group/SSD.
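The detection step can be sketched in a few lines: Gaussian toy features stand in for SSD's self-supervised representations, and the outlier score is the squared Mahalanobis distance to the in-distribution statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# In-distribution "features" (in SSD these come from a self-supervised
# encoder; here plain Gaussian samples stand in for learned representations).
train = rng.normal(size=(1000, 4))

mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False)
prec = np.linalg.inv(cov + 1e-6 * np.eye(4))  # regularized precision matrix

def mahalanobis_score(x):
    """Outlier score: squared Mahalanobis distance to the training distribution."""
    d = x - mu
    return float(d @ prec @ d)

inlier = rng.normal(size=4)
outlier = rng.normal(size=4) + 8.0
print(mahalanobis_score(outlier) > mahalanobis_score(inlier))  # True
```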


Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges

arXiv.org Machine Learning

Interpretability in machine learning (ML) is crucial for high stakes decisions and troubleshooting. In this work, we provide fundamental principles for interpretable ML, and dispel common misunderstandings that dilute the importance of this crucial topic. We also identify 10 technical challenge areas in interpretable machine learning and provide history and background on each problem. Some of these problems are classically important, and some are recent problems that have arisen in the last few years. These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and better interpretability; (4) Modern case-based reasoning, including neural networks and matching for causal inference; (5) Complete supervised disentanglement of neural networks; (6) Complete or even partial unsupervised disentanglement of neural networks; (7) Dimensionality reduction for data visualization; (8) Machine learning models that can incorporate physics and other generative or causal constraints; (9) Characterization of the "Rashomon set" of good models; and (10) Interpretable reinforcement learning. This survey is suitable as a starting point for statisticians and computer scientists interested in working in interpretable machine learning.


Balancing Biases and Preserving Privacy on Balanced Faces in the Wild

arXiv.org Artificial Intelligence

There are demographic biases in the SOTA CNNs used for face recognition (FR). Our BFW dataset serves as a proxy to measure bias across ethnicity and gender subgroups, allowing us to characterize FR performance per subgroup. We show that performance is non-optimal when a single score threshold is used to determine whether sample pairs are genuine or imposter. Furthermore, actual performance ratings vary greatly from those reported across subgroups. Thus, claims of specific error rates only hold true for populations matching those of the validation data. We mitigate the imbalanced performance using a novel domain adaptation learning scheme on the facial encodings extracted with SOTA deep nets. Not only does this technique balance performance, but it also boosts overall performance. A benefit of the proposed method is that it preserves identity information in facial features while removing demographic knowledge from the lower-dimensional features. The removal of demographic knowledge prevents potential future biases from being injected into decision-making, and also addresses privacy concerns. We explore qualitatively, with hard samples, why this works, and we show quantitatively that subgroup classifiers can no longer learn from the encodings produced by the proposed mapping.


Feature selection for medical diagnosis: Evaluation for using a hybrid Stacked-Genetic approach in the diagnosis of heart disease

arXiv.org Artificial Intelligence

Background and purpose: Heart disease has been one of the leading causes of death over the last 10 years, so the use of classification methods to diagnose and predict heart disease is very important. If this disease is predicted before its onset, it is possible to prevent the high mortality of the disease and provide more accurate and efficient treatment. Materials and Methods: Because of the number of input features, the use of basic algorithms can be very time-consuming. Reducing dimensionality, or choosing a good subset of features without sacrificing accuracy, is of great importance for basic algorithms to be used successfully in this domain. In this paper, we propose an ensemble-genetic learning method using wrapper feature reduction to select features for disease classification. Findings: The development of a medical diagnosis system based on ensemble learning to predict heart disease provides a more accurate diagnosis than the traditional method and reduces the cost of treatment. Conclusion: The results showed that the Thallium scan and vascular occlusion were the most important features in the diagnosis of heart disease and can distinguish between sick and healthy people with 97.57% accuracy.


Hybrid stacked ensemble combined with genetic algorithms for Prediction of Diabetes

arXiv.org Artificial Intelligence

Diabetes is currently one of the most common, dangerous, and costly diseases in the world; it is caused by an increase in blood sugar or a decrease in insulin in the body. Diabetes can have detrimental effects on people's health if diagnosed late. Today, diabetes has become one of the challenges for health and government officials. Prevention is a priority, and taking care of people's health without compromising their comfort is an essential need. In this study, an ensemble training methodology based on genetic algorithms is used to accurately diagnose and predict the outcomes of diabetes mellitus. We use real experimental data on Indian diabetics from the University of California repository. Current developments in ICT, such as the Internet of Things, machine learning, and data mining, allow us to provide health strategies with more intelligent capabilities to accurately predict the outcomes of the disease in daily life and in the hospital, and to prevent the progression of this disease and its many complications. The results show the high performance of the proposed method in diagnosing the disease, reaching 98.8% and 99% accuracy in this study.


DynACPD Embedding Algorithm for Prediction Tasks in Dynamic Networks

arXiv.org Artificial Intelligence

Classical network embeddings create a low dimensional representation of the learned relationships between features across nodes. Such embeddings are important for tasks such as link prediction and node classification. In the current paper, we consider low dimensional embeddings of dynamic networks, that is a family of time varying networks where there exist both temporal and spatial link relationships between nodes. We present novel embedding methods for a dynamic network based on higher order tensor decompositions for tensorial representations of the dynamic network. In one sense, our embeddings are analogous to spectral embedding methods for static networks. We provide a rationale for our algorithms via a mathematical analysis of some potential reasons for their effectiveness. Finally, we demonstrate the power and efficiency of our approach by comparing our algorithms' performance on the link prediction task against an array of current baseline methods across three distinct real-world dynamic networks.


Modern Dimension Reduction

arXiv.org Machine Learning

Data are not only ubiquitous in society, but are increasingly complex both in size and dimensionality. Dimension reduction offers researchers and scholars the ability to make such complex, high dimensional data spaces simpler and more manageable. This Element offers readers a suite of modern unsupervised dimension reduction techniques, along with hundreds of lines of R code, to efficiently represent the original high dimensional data space in a simplified, lower dimensional subspace. Launching from the earliest dimension reduction technique, principal components analysis, and using real social science data, I introduce and walk readers through application of the following techniques: locally linear embedding, t-distributed stochastic neighbor embedding (t-SNE), uniform manifold approximation and projection, self-organizing maps, and deep autoencoders. The result is a well-stocked toolbox of unsupervised algorithms for tackling the complexities of high dimensional data so common in modern society. All code is publicly accessible on GitHub.
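As a taste of the launching technique, here is a minimal principal components analysis sketch (in Python rather than the Element's R; the toy data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy high-dimensional data: 3 latent factors expressed across 10 features.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 10))

# Principal components analysis via SVD of the centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / (S**2).sum()  # variance fraction per component

# Project into the simplified, lower dimensional subspace.
embedding = Xc @ Vt[:3].T
print(embedding.shape)  # (200, 3)
```

The top three components recover nearly all the variance here, because the ten features are driven by only three latent factors.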