Keynotes – BNAIC/BENELEARN 2018
Information-rich representations of text often decrease sample complexity when a natural language processing (NLP) system is trained on a task. One effective way of producing such representations is the traditional NLP pipeline: tokenization, tagging, parsing, etc. An alternative is so-called embeddings, which represent text in a smooth, high-dimensional real-valued space and thereby support generalization. Most commonly, words are represented as embeddings, but more recently contextualized embeddings such as ELMo have been proposed. In this talk, I will address two challenges for embeddings.
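As a rough illustration of the smooth-space intuition described above (not part of the original abstract), the following NumPy sketch represents words as dense real-valued vectors and measures closeness with cosine similarity; the vocabulary, dimensionality, and random vectors are hypothetical stand-ins for embeddings that would in practice be learned (e.g., by word2vec, GloVe, or ELMo).

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["cat", "dog", "car", "truck"]
dim = 8  # embedding dimensionality; real systems use hundreds of dimensions

# Random stand-ins for learned word vectors.
embeddings = {w: rng.normal(size=dim) for w in vocab}

def cosine(u, v):
    """Cosine similarity, the standard closeness measure in embedding spaces."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A simple baseline: represent a sentence as the mean of its word vectors,
# yielding a fixed-size input for a downstream classifier.
sentence = ["cat", "dog"]
sentence_vec = np.mean([embeddings[w] for w in sentence], axis=0)

print(cosine(embeddings["cat"], embeddings["dog"]))
```

With learned (rather than random) vectors, semantically related words such as "cat" and "dog" end up close in this space, which is what lets a model trained on one word generalize to its neighbors.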