Keynotes – BNAIC/BENELEARN 2018


Information-rich representations of text often decrease sample complexity when a natural language processing (NLP) system is trained on a task. One effective way of producing such representations is the traditional NLP pipeline: tokenization, tagging, parsing, etc. An alternative is so-called embeddings, which represent text in a high-dimensional real-valued space that is smooth and thereby supports generalization. Most commonly, words are represented as embeddings, but more recently contextualized embeddings like ELMo have been proposed. In this talk, I will address two challenges for embeddings.
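The smoothness argument above can be illustrated with a toy sketch (not from the talk): words are mapped to dense real-valued vectors, and semantically related words end up close together under cosine similarity. The words, vectors, and dimensionality here are invented for illustration; real embedding models learn vectors with hundreds of dimensions from large corpora.

```python
import math

# Hypothetical 4-dimensional word embeddings (illustrative values only;
# trained models such as word2vec or GloVe learn these from data).
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.0],
    "queen": [0.9, 0.7, 0.1, 0.1],
    "apple": [0.0, 0.1, 0.9, 0.8],
}

def cosine(u, v):
    """Cosine similarity: vectors pointing in similar directions score near 1."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words lie close together in the embedding space,
# which is what lets a downstream model generalize across them.
sim_royal = cosine(embeddings["king"], embeddings["queen"])
sim_fruit = cosine(embeddings["king"], embeddings["apple"])
```

Because "king" and "queen" share most coordinates, `sim_royal` is far higher than `sim_fruit`; a classifier trained on sentences containing "king" can therefore transfer some of what it learned to sentences containing "queen".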
