Szymczak, Adrian
This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish
Augustyniak, Łukasz, Tagowski, Kamil, Sawczyn, Albert, Janiak, Denis, Bartusiak, Roman, Szymczak, Adrian, Wątroba, Marcin, Janz, Arkadiusz, Szymański, Piotr, Morzy, Mikołaj, Kajdanowicz, Tomasz, Piasecki, Maciej
The availability of compute and data to train ever-larger language models increases the demand for robust methods of benchmarking the true progress of LM training. Recent years have witnessed significant progress in standardized benchmarking for English: benchmarks such as GLUE, SuperGLUE, and KILT have become de facto standard tools for comparing large language models. Following the trend of replicating GLUE for other languages, the KLEJ benchmark was released for Polish. In this paper, we evaluate the progress in benchmarking for low-resourced languages. We note that only a handful of languages have such comprehensive benchmarks, and we point out the gap in the number of tasks evaluated by benchmarks for resource-rich English and Chinese versus the rest of the world. We introduce LEPISZCZE (the Polish word for glew, the Middle English predecessor of glue), a new, comprehensive benchmark for Polish NLP with a large variety of tasks and high-quality operationalization. We design LEPISZCZE with flexibility in mind: adding new models, datasets, and tasks is as simple as possible, while still offering data versioning and model tracking. In the first run of the benchmark, we run 13 experiments (task and dataset pairs) based on the five most recent LMs for Polish, using five datasets from the existing Polish benchmark and adding eight novel datasets. As the paper's main contribution, apart from LEPISZCZE itself, we provide insights and lessons learned while creating the benchmark for Polish, as a blueprint for designing similar benchmarks for other low-resourced languages.
Avaya Conversational Intelligence: A Real-Time System for Spoken Language Understanding in Human-Human Call Center Conversations
Mizgajski, Jan, Szymczak, Adrian, Głowski, Robert, Szymański, Piotr, Żelasko, Piotr, Augustyniak, Łukasz, Morzy, Mikołaj, Carmiel, Yishay, Hodson, Jeff, Wójciak, Łukasz, Smoczyk, Daniel, Wróbel, Adam, Borowik, Bartosz, Artajew, Adam, Baran, Marcin, Kwiatkowski, Cezary, Żyła-Hoppe, Marzena
Avaya Conversational Intelligence (ACI) is an end-to-end, cloud-based solution for real-time Spoken Language Understanding in call centers. It combines large-vocabulary, real-time speech recognition, transcript refinement, and entity and intent recognition to convert live audio into a rich, actionable stream of structured events. These events can be further leveraged by a business rules engine, serving as a foundation for real-time supervision and assistance applications. After ingestion, calls are enriched with unsupervised keyword extraction, abstractive summarization, and business-defined attributes, enabling offline use cases such as business intelligence, topic mining, full-text search, quality assurance, and agent training. ACI comes with a pretrained, configurable library of hundreds of intents and a robust intent training environment that allows for efficient, cost-effective creation and customization of customer-specific intents.