Graphical Model-Based Learning in High Dimensional Feature Spaces

AAAI Conferences

Digital media tend to combine text and images to express richer information, especially on image hosting and online shopping websites. This trend makes it challenging to understand content that combines different forms of information. Features representing visual information are usually sparse in a high dimensional space, which makes the learning process intractable. To understand text together with its related visual information, we present a new graphical model-based approach that discovers more meaningful information in rich media. We extend the standard Latent Dirichlet Allocation (LDA) framework to learn in high dimensional feature spaces.
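The authors' extended model is not described in the abstract, but the standard LDA baseline they build on can be sketched on a sparse bag-of-words matrix with scikit-learn; the toy documents below are illustrative only.

```python
# A minimal sketch of standard LDA on a sparse, high-dimensional
# feature matrix using scikit-learn. This shows the baseline the
# abstract extends, not the authors' extended model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "red dress summer fashion photo",
    "blue jeans denim casual photo",
    "camera lens photo tutorial",
    "summer fashion blue dress",
]

# Sparse bag-of-words features (high-dimensional in real corpora).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_weights = lda.fit_transform(X)  # per-document topic mixture

print(topic_weights.shape)  # (4, 2): one topic distribution per document
```

Each row of `topic_weights` is a probability distribution over topics, which is what makes LDA attractive for linking text with other sparse feature modalities.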

Feature Selection for Machine Learning


Having irrelevant features in your data can decrease model accuracy and cause your model to learn from noise rather than signal. This is the most comprehensive, yet easy to follow, course for feature selection available online. Throughout this course you will learn a variety of techniques used worldwide for variable selection, gathered from data competition websites, white papers, blogs and forums, and from the instructor's experience as a Data Scientist. You will have at your fingertips, all in one place, multiple methods that you can apply to select features from your data set.
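As a taste of what variable selection looks like in practice, here is a hedged sketch of one common filter-style method, univariate selection with scikit-learn's `SelectKBest`; the synthetic data and the choice of `k=5` are illustrative assumptions, and a course like this covers many more techniques.

```python
# Filter-style feature selection: keep the k features with the
# strongest univariate association with the target.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic data: 20 features, only 5 of them informative.
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

selector = SelectKBest(score_func=f_classif, k=5)
X_selected = selector.fit_transform(X, y)

print(X_selected.shape)        # (200, 5)
print(selector.get_support())  # boolean mask of the kept features
```

Filter methods like this are fast and model-agnostic; wrapper and embedded methods trade speed for selections tailored to a specific model.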

Generating custom photo-realistic faces using AI – Insight Data


All the code and an online demo are available at the project page. Describing an image is easy for humans, and we are able to do it from a very young age. In machine learning, this task is a discriminative classification/regression problem, i.e. predicting feature labels from input images. Recent advances in ML/AI techniques, especially deep learning models, are beginning to excel at these tasks, sometimes matching or exceeding human performance, as demonstrated in scenarios like visual object recognition (e.g. from AlexNet to ResNet on ImageNet classification) and object detection/segmentation (e.g. from RCNN to YOLO on the COCO dataset). However, the other direction, generating realistic images from descriptions, is much harder, and takes humans years of graphic design training.
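The discriminative direction described above, predicting labels from images, can be sketched with a small classifier; the digits dataset and logistic regression here are deliberately modest stand-ins for the ImageNet-scale deep models the post mentions.

```python
# A minimal discriminative image classifier: predict digit labels
# from flattened 8x8 grayscale images. Illustrative only; modern
# systems use deep convolutional networks on far larger datasets.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 digit images, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

clf = LogisticRegression(max_iter=2000)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out images
```

The generative direction, producing a realistic image from a description, has no comparably simple sketch, which is exactly the asymmetry the post is about.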

Understanding how LIME explains predictions – Towards Data Science


An example of this procedure for images is shown in the image below. Note that, in addition to a black box model (classifier or regressor) f and an instance to explain y (with its interpretable representation y'), the procedure requires setting in advance the number of samples N, the kernel width σ, and the length of the explanation K. In future posts I will show how to explain predictions of ML models with LIME using R and Python.
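The procedure and its parameters f, N, σ, and K can be sketched directly, without the `lime` package: sample N perturbations around the instance, weight them by a kernel of width σ, and fit a sparse local linear model limited to K features. The black-box model, the synthetic data, and the parameter values below are all illustrative assumptions.

```python
# A simplified, from-scratch sketch of the LIME-style procedure:
# perturb, weight by proximity, fit a sparse local surrogate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Black-box model f, trained on synthetic data where only
# features 0 and 3 matter.
X = rng.normal(size=(500, 6))
target = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=500)
f = RandomForestRegressor(random_state=0).fit(X, target)

y_instance = X[0]           # instance to explain
N, sigma, K = 1000, 1.0, 2  # samples, kernel width, explanation length

# Sample N perturbations around the instance; weight by proximity.
Z = y_instance + rng.normal(scale=0.5, size=(N, 6))
weights = np.exp(-np.sum((Z - y_instance) ** 2, axis=1) / sigma ** 2)

# Weighted sparse linear surrogate approximating f locally.
surrogate = Lasso(alpha=0.01).fit(Z, f.predict(Z), sample_weight=weights)
top_k = np.argsort(np.abs(surrogate.coef_))[::-1][:K]
print(top_k)  # indices of the K locally most influential features
```

The surrogate's largest coefficients identify which features drive f's prediction near y, which is the explanation LIME returns.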

Can you explain this?


It's the year 2030; we are living in an age of increasing automation, and artificially intelligent bots are powering it. All the trivial decisions are driven by machines, and they are redesigning our ways of life. Lorem Ipsum has been frustrated with work lately and has a splitting headache; the automated recommendations from his Google health don't help, so he gets an appointment with his doctor. It takes him just 30 minutes to get through all the arduous stages of his brain test before he finally sees the doctor. "The scan results look negative for any abnormality; the brain health score is on the positive side." The doctor concludes it is just a headache and prescribes medication.