5 Essential Papers on Sentiment Analysis Lionbridge AI

#artificialintelligence

From virtual assistants to content moderation, sentiment analysis has a wide range of use cases. AI models that can recognize emotion and opinion have applications across numerous industries. As a result, interest in creating emotionally intelligent machines is large and growing, as is the body of natural language processing (NLP) research behind them. To highlight some of the work being done in the field, below are five essential papers on sentiment analysis and sentiment classification.


How to Train StyleGAN to Generate Realistic Faces

#artificialintelligence

A generative adversarial network (GAN) is an architecture introduced by Ian Goodfellow and his colleagues in 2014 for generative modeling, that is, using a model to generate new samples that imitate an existing dataset. It is composed of two networks: a generator that generates new samples, and a discriminator that detects fake samples. The generator tries to fool the discriminator, while the discriminator tries to detect samples synthesized by the generator. Once trained, the generator can be used to create new samples on demand. GANs have quickly become popular thanks to their many interesting applications, such as style transfer, image-to-image translation, and video generation.
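
To make the two-network setup concrete, here is a minimal sketch of the adversarial training loop in PyTorch. The layer sizes and data dimensions are toy values, and this is a generic GAN, not the StyleGAN architecture the article covers.

```python
# Toy GAN training loop (illustrative sketch, not StyleGAN).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator: learn to separate real samples from generated ones.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()  # don't update G on this pass
    loss_d = (bce(discriminator(real_batch), real_labels)
              + bce(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: try to fool the discriminator into labeling fakes as real.
    noise = torch.randn(batch_size, latent_dim)
    loss_g = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

Once training converges, sampling is just a forward pass of the generator on fresh noise, which is what "create new samples on demand" amounts to.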


tensorflow/models

#artificialintelligence

This is the implementation of semantic target-driven navigation training and evaluation on the Active Vision Dataset. We used the Active Vision Dataset (AVD), which can be downloaded from here. To make our code faster and reduce its memory footprint, we created the AVD Minimal dataset, which consists of low-resolution images from the original AVD. In addition, we added annotations for target views, predicted object detections from an object detector pre-trained on the MS-COCO dataset, and predicted semantic segmentations from a model pre-trained on the NYU-v2 dataset.
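
As a rough illustration of how a dataset like this might be consumed, here is a hypothetical loading sketch. The directory layout, file names, and JSON keys below are invented for illustration and are not the repository's actual schema; consult the tensorflow/models README for the real format.

```python
# Hypothetical sketch of iterating AVD Minimal-style data; paths and keys
# are illustrative guesses, not the repository's actual on-disk schema.
import json
from pathlib import Path

root = Path("avd_minimal/Home_001_1")  # placeholder scene directory
annotations = json.loads((root / "annotations.json").read_text())

for image_name, record in annotations.items():
    detections = record.get("detections", [])  # MS-COCO-style boxes
    target_view = record.get("target_view")    # annotated goal image
    print(image_name, len(detections), target_view)
```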


pulp-platform/pulp-dronet

#artificialintelligence

PULP Platform YouTube channel (subscribe!): PULP-DroNet is a deep learning-powered visual navigation engine that enables autonomous navigation of a pocket-size quadrotor in a previously unseen environment. Thanks to PULP-DroNet, the nano-drone can explore the environment, avoiding collisions even with dynamic obstacles, in complete autonomy -- no human operator, no ad-hoc external signals, and no remote laptop! This means that all of the complex computations are done directly aboard the vehicle, and very quickly. The visual navigation engine is composed of both a software part and a hardware part.


How to apply machine learning and deep learning methods to audio analysis

#artificialintelligence

To view the code, training visualizations, and more information about the Python example at the end of this post, visit the Comet project page. While much of the writing and literature on deep learning concerns computer vision and natural language processing (NLP), audio analysis -- a field that includes automatic speech recognition (ASR), digital signal processing, and music classification, tagging, and generation -- is a growing subdomain of deep learning applications. Some of the most popular and widespread machine learning systems, the virtual assistants Alexa, Siri, and Google Home, are largely built atop models that can extract information from audio signals. Many of our users at Comet work on audio-related machine learning tasks such as audio classification, speech recognition, and speech synthesis, so we built them tools to analyze, explore, and understand audio data using Comet's meta machine learning platform. This post focuses on showing how data scientists and AI practitioners can use Comet to apply machine learning and deep learning methods in the domain of audio analysis.
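
As a small illustration of the preprocessing such models rely on, here is a sketch that turns a waveform into a mel spectrogram with librosa, a common Python audio library. The file path is a placeholder, and the Comet experiment logging from the article is omitted.

```python
import librosa
import numpy as np

# Load a waveform and its sample rate; "sample.wav" is a placeholder path.
y, sr = librosa.load("sample.wav", sr=22050)

# Mel spectrogram: a time-frequency representation on a perceptual
# frequency scale, a standard input for deep audio models.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
mel_db = librosa.power_to_db(mel, ref=np.max)  # convert power to decibels

print(mel_db.shape)  # (64, n_frames) -- ready to feed to a CNN
```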


How Machine Learning Can Help Unlock the World of Ancient Japan

#artificialintelligence

Humanity's rich history has left behind an enormous number of historical documents and artifacts. However, virtually none of these documents, which contain stories and recorded experiences essential to our cultural heritage, can be understood by non-experts, because languages and writing systems change over time. For instance, archaeologists have unearthed tens of thousands of clay tablets from ancient Babylon [1], yet only a few hundred specially trained scholars can translate them. The vast majority of these documents have never been read, even those uncovered in the 1800s. To further illustrate the challenge posed by this scale, a tablet from the Tale of Gilgamesh was collected in an expedition in 1851, but its significance was not brought to light until 1872.


Get started with machine learning in this Amazon SageMaker tutorial

#artificialintelligence

Amazon SageMaker makes machine learning accessible. Developers and data scientists can use it to build and deploy machine learning models on AWS without additional infrastructure management tasks. Amazon SageMaker provides pre-built algorithms and support for open-source Jupyter notebook instances to make it easier to get a machine learning model running in applications. In this Amazon SageMaker tutorial, we'll break down how to get a notebook instance up and running, and how to train and validate your machine learning model. To get started, set up the necessary AWS Identity and Access Management (IAM) roles and permissions, then create a Jupyter notebook that will run Python code.
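
Once the IAM role and notebook are in place, the training flow looks roughly like the sketch below, which uses the SageMaker Python SDK with the built-in XGBoost algorithm chosen here purely as an example. The S3 bucket and data paths are placeholders.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # the IAM role configured earlier

# Container image for a built-in algorithm (XGBoost, as one example).
image_uri = sagemaker.image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.5-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.large",
    output_path="s3://your-bucket/output",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

# Train on labeled CSV data staged in S3, then deploy an endpoint to
# validate the model against live requests.
train_input = TrainingInput("s3://your-bucket/train.csv",
                            content_type="text/csv")
estimator.fit({"train": train_input})
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m5.large")
```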


Classify Spoken Digits: New in Wolfram Language 12

#artificialintelligence

The neural net framework in the Wolfram Language provides powerful, user-friendly network training tools for Audio objects. This example trains a net to classify spoken digits. The dataset comprises recordings of the digits 0 through 9; it is essentially an audio equivalent of the MNIST digit dataset. Start by deciding how a recording will be transformed into something that a neural network can use. Here the "AudioMFCC" net encoder is used: it splits the signal into overlapping partitions and processes each one to reduce its dimensionality while preserving the information that is important for understanding the signal.
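
For readers outside the Wolfram ecosystem, an analogous pipeline can be sketched in Python: MFCC features play the role of the "AudioMFCC" net encoder, feeding a small classifier. This is a substitute in librosa and scikit-learn, not the Wolfram Language code from the article, and the recording paths below are placeholders (e.g., files from a free spoken-digit dataset).

```python
import librosa
import numpy as np
from sklearn.neural_network import MLPClassifier

def mfcc_features(path, n_mfcc=13):
    # Frame the signal and summarize MFCCs over time, loosely mirroring
    # what the "AudioMFCC" encoder does with overlapping partitions.
    y, sr = librosa.load(path, sr=8000)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

# Placeholder (path, label) pairs for recordings of spoken digits 0-9.
dataset = [
    ("recordings/0_jackson_0.wav", 0),
    ("recordings/1_jackson_0.wav", 1),
    # ... one entry per recording
]

X = np.stack([mfcc_features(path) for path, _ in dataset])
y = np.array([label for _, label in dataset])

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(clf.predict(X[:1]))  # sanity check on a training example
```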


r/MachineLearning - [N] Pre-trained knowledge graph embedding models are available in GraphVite!

#artificialintelligence

In the recent update of GraphVite, we released a new large-scale knowledge graph dataset, along with new benchmarks of knowledge graph embedding methods. The dataset, Wikidata5m, contains 5 million entities and 21 million facts constructed from Wikidata and Wikipedia. Most of the entities come from the general domain or the scientific domain, such as celebrities, events, concepts and things. To facilitate the usage of knowledge graph representations in semantic tasks, we provide a bunch of pre-trained embeddings from popular models, including TransE, DistMult, ComplEx, SimplE and RotatE. You can directly access these embeddings by natural language index, such as "machine learning", "united states" or even abbreviations like "m.i.t.".
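
To illustrate what these pre-trained embeddings enable, here is a generic sketch of scoring a triple with TransE, where the score is -||h + r - t||. The dictionaries below stand in for the real embedding matrices, and this is not GraphVite's actual loading API; see the GraphVite documentation for that.

```python
# Generic TransE triple scoring; embedding containers are hypothetical
# stand-ins (random vectors), not GraphVite's pre-trained weights.
import numpy as np

entity_emb = {"machine learning": np.random.randn(512),
              "artificial intelligence": np.random.randn(512)}
relation_emb = {"subclass of": np.random.randn(512)}

def transe_score(head, relation, tail):
    # Higher (less negative) means the triple is more plausible.
    h, r, t = entity_emb[head], relation_emb[relation], entity_emb[tail]
    return -np.linalg.norm(h + r - t)

print(transe_score("machine learning", "subclass of",
                   "artificial intelligence"))
```

With real pre-trained vectors in place of the random ones, the same scoring function supports tasks like link prediction and triple plausibility ranking.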


What Is Deep Transfer Learning and Why Is It Becoming So Popular?

#artificialintelligence

As we already know, large and effective deep learning models are data-hungry. They require training on thousands or even millions of data points before they can make plausible predictions. This training is very expensive, in both time and resources. For example, BERT, the popular language representation model developed by Google, was trained on 16 Cloud TPUs (64 TPU chips in total) for 4 days. To put that in perspective, it is the equivalent of 60 desktop computers running non-stop for 4 days.
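
Transfer learning sidesteps that cost by reusing pre-trained weights and fine-tuning them briefly on a small labeled dataset. Here is a minimal fine-tuning sketch using the Hugging Face transformers library (a toolkit chosen for this example, not prescribed by the article), with two toy sentences standing in for real training data.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # new task head; body is pre-trained
)

# Toy labeled examples for a binary sentiment task.
texts = ["a delightful film", "a complete waste of time"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# One fine-tuning step: only a fraction of the original pre-training
# compute is needed because the representations are already learned.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # loss against our task labels
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```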