Shalaby, Walid
Hierarchical Multi-Task Learning Framework for Session-based Recommendations
Oh, Sejoon, Shalaby, Walid, Afsharinejad, Amir, Cui, Xiquan
While session-based recommender systems (SBRSs) have shown superior recommendation performance, multi-task learning (MTL) has been adopted by SBRSs to further enhance their prediction accuracy and generalizability. Hierarchical MTL (H-MTL) imposes a hierarchical structure on prediction tasks and feeds the outputs of auxiliary tasks into the main tasks. Compared with existing MTL frameworks, this hierarchy provides richer input features for the main tasks and higher interpretability of predictions. However, the H-MTL framework has not yet been investigated in SBRSs. In this paper, we propose HierSRec, which incorporates the H-MTL architecture into SBRSs. HierSRec encodes a given session with a metadata-aware Transformer and performs next-category prediction (the auxiliary task) from the session encoding. HierSRec then conducts next-item prediction (the main task) using the category prediction result together with the session encoding. For scalable inference, HierSRec creates a compact set of candidate items (e.g., 4% of all items) per test example using the category prediction. Experiments show that HierSRec outperforms existing SBRSs in next-item prediction accuracy on two session-based recommendation datasets. The accuracy of HierSRec measured on the carefully curated candidate items matches its accuracy computed over all items, which validates the usefulness of our candidate generation scheme via H-MTL.
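To make the hierarchical multi-task setup concrete, the PyTorch sketch below shows one plausible reading of the architecture described above: a Transformer encoder over item and category (metadata) embeddings, an auxiliary next-category head, a main next-item head that consumes the session encoding concatenated with the category distribution, and a simple candidate filter that keeps only items from the top predicted categories. All class names, dimensions, and the filtering heuristic are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a hierarchical multi-task session recommender: the
# auxiliary (next-category) prediction is fed into the main (next-item) head.
# Names, sizes, and the candidate filter are illustrative assumptions.
import torch
import torch.nn as nn


class HierSessionRecommender(nn.Module):
    def __init__(self, num_items, num_categories, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, d_model, padding_idx=0)
        self.cat_emb = nn.Embedding(num_categories, d_model, padding_idx=0)  # item metadata
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.category_head = nn.Linear(d_model, num_categories)          # auxiliary task
        self.item_head = nn.Linear(d_model + num_categories, num_items)  # main task

    def forward(self, item_ids, cat_ids):
        # Metadata-aware input: sum of item and category embeddings per position.
        x = self.item_emb(item_ids) + self.cat_emb(cat_ids)
        h = self.encoder(x)                      # (batch, seq_len, d_model)
        session_repr = h[:, -1, :]               # last position as session encoding
        cat_logits = self.category_head(session_repr)
        cat_probs = torch.softmax(cat_logits, dim=-1)
        # Feed the auxiliary prediction into the main task (H-MTL).
        item_logits = self.item_head(torch.cat([session_repr, cat_probs], dim=-1))
        return cat_logits, item_logits


def candidate_mask(cat_probs, item_to_category, top_c=2):
    """Keep only items whose category is among the top predicted categories."""
    top_cats = cat_probs.topk(top_c, dim=-1).indices                     # (batch, top_c)
    mask = (item_to_category.view(1, 1, -1) == top_cats.unsqueeze(-1)).any(dim=1)
    return mask                                                          # (batch, num_items)


# Toy usage: 2 sessions of length 5 over 100 items and 10 categories.
model = HierSessionRecommender(num_items=100, num_categories=10)
items = torch.randint(1, 100, (2, 5))
cats = torch.randint(1, 10, (2, 5))
cat_logits, item_logits = model(items, cats)
mask = candidate_mask(torch.softmax(cat_logits, dim=-1),
                      item_to_category=torch.randint(0, 10, (100,)))
scores = item_logits.masked_fill(~mask, float("-inf"))  # rank only candidate items
```

In this toy version all item logits are still computed and then masked; a scalable deployment would score only the candidate subset, which is the point of the category-based candidate generation described in the abstract.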
Beyond Word Embeddings: Learning Entity and Concept Representations from Large Scale Knowledge Bases
Shalaby, Walid, Zadrozny, Wlodek, Jin, Hongxia
Text representations using neural word embeddings have proven effective in many NLP applications. Recent research adapts traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating knowledge about concepts from two large-scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and the Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve state-of-the-art performance of 91% on semantic analogies, and (2) concept categorization, where we achieve state-of-the-art performance on two benchmark datasets, with categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study evaluating our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to generalize better to out-of-vocabulary entity mentions than tedious and error-prone methods that depend on gazetteers and regular expressions.
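As a rough illustration of adapting skip-gram to learn from both sources, the snippet below (assuming gensim 4.x) trains a single Word2Vec model on Wikipedia-style sentences whose concept mentions are collapsed into single tokens, plus short pseudo-sentences derived from Probase-style (instance, concept) edges, so that text and graph contexts share one embedding space. The corpus handling and the conversion of graph edges into training contexts are simplifications, not the paper's actual training pipeline.

```python
# Sketch: joint skip-gram training over text and a concept graph (gensim 4.x).
# Concept mentions are pre-linked to single underscore-joined tokens, and graph
# edges become two-token "sentences". Data below is toy/illustrative.
from gensim.models import Word2Vec

# Wikipedia-like sentences with concept mentions collapsed into single tokens.
text_sentences = [
    ["barack_obama", "served", "as", "president", "of", "the", "united_states"],
    ["microsoft", "was", "founded", "by", "bill_gates", "and", "paul_allen"],
]

# Probase-like concept graph edges (instance -> hypernym concept).
graph_edges = [
    ("barack_obama", "politician"),
    ("bill_gates", "entrepreneur"),
    ("microsoft", "software_company"),
]
graph_sentences = [[instance, concept] for instance, concept in graph_edges]

# Train one skip-gram model on the mixed corpus so concepts from text and
# graph land in the same vector space.
model = Word2Vec(
    sentences=text_sentences + graph_sentences,
    vector_size=100,   # embedding dimensionality
    window=5,
    sg=1,              # skip-gram
    min_count=1,
    epochs=50,
)

print(model.wv.most_similar("barack_obama", topn=3))
```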
A Visual Semantic Framework for Innovation Analytics
Shalaby, Walid (University of North Carolina, Charlotte) | Rajshekhar, Kripa (Metonymy Labs) | Zadrozny, Wlodek (University of North Carolina, Charlotte)
In this demo we present a semantic framework for innovation and patent analytics powered by Mined Semantic Analysis (MSA). Our framework provides cognitive assistance to its users through a Web-based visual and interactive interface. First, we describe building a conceptual knowledge graph by mining a user-generated encyclopedic textual corpus for semantic associations. Then, we demonstrate applying the acquired knowledge to support many cognition- and knowledge-based use cases for innovation analysis, including technology exploration and landscaping, competitive analysis, literature and prior-art search, and others.
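As a loose, hypothetical illustration of mining semantic associations from an encyclopedic corpus, the snippet below links concepts that co-occur in the same article, weights the links by co-occurrence counts, and retrieves the strongest associations for a query concept (e.g., during prior-art exploration). The data, graph construction, and ranking are made up and far simpler than the MSA-based knowledge graph used in the actual framework.

```python
# Hypothetical concept-association mining: concepts mentioned in the same
# article are linked, weighted by co-occurrence counts. Toy data only.
from collections import Counter, defaultdict
from itertools import combinations

# Each "article" is reduced to the set of concepts it mentions or links to.
articles = [
    {"patent", "prior_art", "invention", "uspto"},
    {"machine_learning", "patent", "invention", "neural_network"},
    {"prior_art", "patent_search", "uspto", "invention"},
]

associations = defaultdict(Counter)
for concepts in articles:
    for a, b in combinations(sorted(concepts), 2):
        associations[a][b] += 1
        associations[b][a] += 1

def related_concepts(concept, top_n=5):
    """Return the strongest associated concepts for a query concept."""
    return associations[concept].most_common(top_n)

print(related_concepts("invention"))
```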