I would guess not, unless you have many small one-hot layers that are correlated with each other. What does work is obtaining something similar to 'word embeddings', e.g. by training a compressive autoencoder on as much unlabeled data as you can get; either on each of the attributes separately or on a merged representation, depending on what data you have.
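A minimal sketch of the compressive-autoencoder idea, using scikit-learn's `MLPRegressor` trained to reconstruct its own input through a small bottleneck (the toy sparse attribute data and the 5-dimensional bottleneck size are illustrative assumptions, not anything from the question):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# toy "attribute" data: 500 samples, 30 sparse binary features (hypothetical)
X = (rng.random((500, 30)) < 0.1).astype(float)

# compressive autoencoder: reconstruct the input through a small bottleneck
ae = MLPRegressor(hidden_layer_sizes=(5,), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(X, X)

# dense 5-d embedding = activations of the bottleneck layer
def embed(X):
    return np.tanh(X @ ae.coefs_[0] + ae.intercepts_[0])

emb = embed(X)
print(emb.shape)  # (500, 5)
```

The `embed` function reuses the first weight layer of the trained network, so the learned compression can be applied to new samples as a dense feature vector.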
In this era of digitization, knowing a user's sociolect aspects has become an essential feature for building user-specific recommendation systems. These sociolect aspects can be found by mining the language users share as text on social media and in reviews. This paper describes the experiment performed in the PAN Author Profiling 2017 shared task. The objective of the task is to find the sociolect aspects of users from their tweets. The sociolect aspects considered in this experiment are the user's gender and native language. Here, users' tweets written in a language different from their native language are represented as a Document-Term Matrix with document frequency as a constraint. Classification is then done using a Support Vector Machine, taking gender and native language as the target classes. The experiment attains an average accuracy of 73.42% in gender prediction and 76.26% in native language identification.
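The described pipeline — a Document-Term Matrix with a document-frequency constraint feeding an SVM — can be sketched with scikit-learn. The toy tweets, labels, and the `min_df` value below are purely illustrative assumptions, not the paper's actual data or settings:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# hypothetical toy tweets with made-up gender labels (illustrative only)
tweets = ["love this new phone", "great match last night",
          "shopping with friends today", "watching the game now"]
labels = ["female", "male", "female", "male"]

# document-term matrix with a document-frequency constraint (min_df),
# followed by a linear SVM taking gender as the target class
pipe = make_pipeline(CountVectorizer(min_df=1), LinearSVC())
pipe.fit(tweets, labels)
print(pipe.predict(["this phone is great"]))
```

The same pipeline would be retrained with native-language labels for the second subtask.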
This post aims to explain the concept of Word2vec and the mathematics behind it in an intuitive way, while implementing Word2vec embeddings using Gensim in Python. The basic idea of Word2vec is that instead of representing words as one-hot encodings (as CountVectorizer / TfidfVectorizer do) in a high-dimensional space, we represent words in a dense low-dimensional space in such a way that similar words get similar word vectors, so they are mapped to nearby points. Word2vec is not a deep neural network; it turns text into a numeric form that deep neural networks can process as input. For example, we can use "artificial" to predict "intelligence". However, the prediction itself is not our goal.
Generative Adversarial Networks (GANs) shook up the deep learning world. When they first appeared in 2014, they proposed a new and fresh approach to generative modeling and opened the door for new neural network architectures to emerge. Since the standard GAN architecture is composed of two neural networks, we can mix and match different approaches for those networks and thus create new and shiny architectures. The idea is to build a model appropriate for your problem and generate data that can be used in a real-world business scenario. So far, we have had a chance to see how to implement the standard GAN and the Deep Convolutional GAN (combining CNN concepts with GAN concepts), but the zoo of GAN architectures grows on a daily basis.
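To make the two-network structure concrete, here is a deliberately tiny sketch of the adversarial game on 1-D data: a linear "generator" G(z) = a·z + b tries to imitate samples from N(3, 1), while a logistic-regression "discriminator" D(x) = sigmoid(w·x + c) tries to tell real from fake, with hand-derived gradient updates. All parameters and the toy data distribution are assumptions for illustration; real GANs use deep networks and a framework's autograd:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator parameters: G(z) = a*z + b
w, c = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    z = rng.normal(0, 1, 64)        # generator input noise
    x_real = rng.normal(3, 1, 64)   # "real" data from N(3, 1)
    x_fake = a * z + b

    # discriminator ascent step: maximize log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # generator ascent step: maximize log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * x_fake + c)
    gx = (1 - d_fake) * w           # dL/dx_fake per sample
    a += lr * np.mean(gx * z)
    b += lr * np.mean(gx)

print(round(b, 2))  # b should drift toward the real mean of 3
```

Swapping in convolutional networks for G and D is exactly how the Deep Convolutional GAN mentioned above arises from this same two-player template.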