Text Classification


r/MachineLearning - [D] Text classification on a small dataset

@machinelearnbot

I am trying to perform multiclass text classification (for 24 classes) on a set of documents, but I currently have a very small dataset (1,200 total examples). The data collection process is a bit tedious in my case, hence the small dataset size. The best result I have achieved so far is 58% accuracy, with both an SVM model and a single-layer CNN model. Is there any other approach I can try other than collecting more data? I have tried oversampling the training set, but it didn't seem to improve performance.
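For small datasets like the one described above, a common baseline is TF-IDF features with a linear SVM, evaluated with cross-validation so every example contributes to both training and validation. The sketch below assumes scikit-learn; the texts and labels are illustrative placeholders, not the poster's data.

```python
# Hypothetical baseline for a small multiclass text dataset:
# TF-IDF features + linear SVM, scored with stratified k-fold
# cross-validation. Texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts = [
    "invoice payment is overdue", "quarterly revenue report attached",
    "expense reimbursement request", "budget approval for new invoice",
    "server outage in the east region", "database latency alert fired",
    "deploy rolled back after errors", "disk usage warning on server",
    "new hire onboarding checklist", "benefits enrollment deadline",
    "interview feedback for candidate", "annual performance review cycle",
]
labels = ["finance"] * 4 + ["ops"] * 4 + ["hr"] * 4

# Character n-grams often help when the dataset is too small to learn
# reliable word-level statistics.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), sublinear_tf=True),
    LinearSVC(C=1.0),
)

scores = cross_val_score(model, texts, labels, cv=3)
print(scores.mean())
```

Tuning `C` and the n-gram range per fold, rather than on the full set, avoids overfitting the validation signal on 1,200 examples.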


Text Classification with TensorFlow Estimators

@machinelearnbot

Note: This post was written together with the awesome Julian Eisenschlos and was originally published on the TensorFlow blog. Throughout this post we will show you how to classify text using Estimators in TensorFlow. Welcome to Part 4 of a blog series that introduces TensorFlow Datasets and Estimators. You don't need to read all of the previous material, but take a look if you want to refresh any of the following concepts. Part 1 focused on pre-made Estimators, Part 2 discussed feature columns, and Part 3 covered how to create custom Estimators.


Machine Learning Helps Humans Perform Text Analysis

#artificialintelligence

To augment that approach, we've found that we can use machine learning to improve the semantic data models as the data set evolves. Our specific use case is text data in millions of documents. We've found that machine learning facilitates the storage and exploration of data that would otherwise be too vast to support valuable insights. Machine learning (ML) allows a model to improve over time given new training data, without requiring more human effort. For example, a common text-classification benchmark task is to train a model on messages from multiple discussion-board threads and then use it to predict the topic of discussion (space, computers, religion, etc.).
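The benchmark task described above is usually run on the 20 Newsgroups corpus; the sketch below illustrates the same idea on a few placeholder messages, using a bag-of-words Naive Bayes classifier from scikit-learn.

```python
# Minimal sketch of the benchmark above: learn topics from
# discussion-board messages, then predict the topic of an unseen post.
# Messages and topics are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_messages = [
    "the space shuttle launch was delayed by weather",
    "orbital mechanics of the new satellite",
    "my gpu drivers crash after the update",
    "best cpu for a budget gaming build",
    "the sermon discussed faith and forgiveness",
    "scripture reading group meets on sunday",
]
train_topics = ["space", "space", "computers", "computers",
                "religion", "religion"]

# Bag-of-words counts + multinomial Naive Bayes, a classic text baseline.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_messages, train_topics)

print(clf.predict(["telescope photos of the space shuttle"]))
```

As new labeled messages arrive, refitting the same pipeline improves the model without any change to the human workflow, which is the point the article is making.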


Introduction to ML Classification Models using scikit-learn

@machinelearnbot

This course will give you a fundamental understanding of machine learning overall, with a focus on building classification models. Basic ML concepts are explained, including supervised and unsupervised learning; regression and classification; and overfitting. There are three lab sections which focus on building classification models using Support Vector Machines, Decision Trees, and Random Forests on real data sets. The implementation is performed using the scikit-learn library for Python. The Intro to ML Classification Models course is meant for developers or data scientists (or anybody else) who know basic Python programming and wish to learn about machine learning, with a focus on solving the problem of classification.
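The three models covered in the course's labs can be compared in a few lines of scikit-learn. The sketch below uses the library's bundled iris dataset as a stand-in for the "real data sets" used in the course.

```python
# Fit the three lab classifiers (SVM, Decision Tree, Random Forest)
# on one dataset and compare held-out accuracy. Hyperparameters here
# are illustrative defaults, not the course's settings.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

models = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "Decision Tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

accuracies = {name: m.fit(X_train, y_train).score(X_test, y_test)
              for name, m in models.items()}
for name, acc in accuracies.items():
    print(f"{name}: {acc:.3f}")
```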


Data Version Control Tutorial – dataversioncontrol

@machinelearnbot

Today the data science community still lacks good practices for organizing projects and collaborating effectively. ML algorithms and methods are no longer mere "tribal knowledge" but are still difficult to implement, manage, and reuse. To address reproducibility, we have built Data Version Control, or DVC. This example shows you how to solve a text classification problem using the DVC tool. Git branches can beautifully reflect the non-linear structure common to the ML process, where each hypothesis can be represented as a Git branch. However, the inability to store data in a repository and the discrepancy between code and data make it extremely difficult to manage a data science project with Git alone.
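The branch-per-hypothesis workflow described above might look like the following sketch. File names and the training script are placeholders, and the command syntax follows the early DVC releases contemporary with this tutorial; consult the DVC documentation for the current stage syntax.

```shell
# Hypothetical DVC workflow: Git tracks code and branches per
# hypothesis, while DVC tracks the data and model artifacts that
# Git cannot store. Paths and scripts are placeholders.
git init && dvc init
dvc add data/posts.tsv                      # track the raw dataset with DVC
git add data/posts.tsv.dvc .gitignore
git commit -m "Track raw data with DVC"

git checkout -b tfidf-svm                   # one hypothesis = one branch
dvc run -d data/posts.tsv -d train.py \
        -o model.pkl \
        python train.py                     # record the pipeline stage
dvc repro                                   # re-run only what changed
```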


Regression vs. Classification Algorithms

#artificialintelligence

Machine learning generates a lot of buzz because it's applicable across such a wide variety of use cases. That's because machine learning is actually a set of many different methods that are each uniquely suited to answering diverse questions about a business. To better understand machine learning algorithms, it's helpful to separate them into groups based on how they work.
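The most basic split the article goes on to make is regression versus classification: predicting a continuous quantity versus predicting a discrete label. A toy illustration with synthetic data (scikit-learn models, made-up study-hours/exam-score numbers):

```python
# Regression predicts a continuous quantity; classification predicts
# a discrete label. Same input feature, two different questions.
# The data below is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
hours_studied = rng.uniform(0, 10, size=(100, 1))

# Regression: predict a continuous exam score from hours studied.
scores = 50 + 4 * hours_studied.ravel() + rng.normal(0, 3, size=100)
reg = LinearRegression().fit(hours_studied, scores)

# Classification: predict a discrete pass/fail label instead.
passed = (scores >= 70).astype(int)
clf = LogisticRegression().fit(hours_studied, passed)

print(reg.predict([[8.0]]))   # a continuous score estimate
print(clf.predict([[8.0]]))   # a discrete label: 0 or 1
```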


New machine-assisted text classification on Content Moderator now in public preview

#artificialintelligence

Content Moderator is part of Microsoft Cognitive Services, allowing businesses to use machine-assisted moderation of text, images, and videos to augment human review. The text moderation capability now includes a new machine-learning-based text classification feature, which uses a trained model to identify possibly abusive, derogatory, or discriminatory language, including slang, abbreviations, and offensive or intentionally misspelled words, for review. In contrast to the existing text moderation service, which flags profanity terms, the text classification feature helps detect potentially undesired content that may be deemed inappropriate depending on context. In addition to conveying the likelihood of each category, it may recommend a human review of the content. The text classification feature is in preview and supports the English language.


Multi-Class Text Classification with PySpark – Towards Data Science

#artificialintelligence

Apache Spark is quickly gaining steam both in the headlines and in real-world adoption, mainly because of its ability to process streaming data. With so much data being processed on a daily basis, it has become essential for us to be able to stream and analyze it in real time. In addition, Apache Spark is fast enough to perform exploratory queries without sampling. Many industry experts have provided all the reasons why you should use Spark for machine learning. So here we are, using the Spark machine learning library, in particular PySpark, to solve a multi-class text classification problem.


Health Research is Time-Consuming and Expensive, but Machine Learning Could Change That

#artificialintelligence

From climate change to opioid addiction, we are facing serious public health crises that put our research and data management experts to the test. When it comes to scientific evidence, systematic literature reviews, painstaking assessments of all the literature ever produced on a given subject, are often regarded as the gold standard. Though no research method is foolproof, says Vox health correspondent Julia Belluz, "these studies represent the best available syntheses of global evidence about the likely effects of different decisions, therapies and policies." That comprehensiveness comes at a high price, though, in terms of time and money. It involves sifting through enormous volumes of literature, sometimes hundreds of thousands of scientific abstracts, stored in academic databases.