Polish to English Statistical Machine Translation

arXiv.org Machine Learning

This research explores the effects of various training settings on a Polish to English Statistical Machine Translation system for spoken language. Various elements of the TED, Europarl, and OPUS parallel text corpora were used as the basis for training language models and for development, tuning, and testing of the translation system. The BLEU, NIST, METEOR, and TER metrics were used to evaluate the effects of data preparation on translation results.
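
The abstract evaluates systems with BLEU, NIST, METEOR, and TER. As a minimal illustration of how such automatic scoring is typically computed (not the paper's own evaluation pipeline), the sketch below uses the sacrebleu package on invented example sentences; NIST and METEOR would come from other toolkits.

```python
# Minimal sketch of automatic MT evaluation, assuming the sacrebleu
# package (pip install sacrebleu). sacrebleu covers BLEU (and TER/chrF);
# NIST and METEOR are computed with other toolkits.
import sacrebleu

# Hypothetical system outputs and reference translations (Polish -> English).
hypotheses = [
    "The committee approved the proposal yesterday.",
    "We will discuss this issue at the next session.",
]
references = [
    "The committee approved the proposal yesterday.",
    "We shall discuss this matter at the next session.",
]

# corpus_bleu takes a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```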


Translation Technology Is Getting Better. What Does That Mean For The Future?

#artificialintelligence

Tools and apps like Google Translate are getting better and better at translating one language into another. Alexander Waibel, professor of computer science at Carnegie Mellon University's Language Technologies Institute (@LTIatCMU), tells Here & Now's Jeremy Hobson how translation technology works, where there's still room to improve and what could be in store in the decades to come. "Over the years I think there's been a big trend on translation to go increasingly from rule-based, knowledge-based methods to learning methods. Systems have now really achieved a phenomenally good accuracy, and so I think, within our lifetime I'm fairly sure that we'll reach -- if we haven't already done so -- human-level performance, and/or exceeding it. "The current technology that really has taken the community by storm is of course neural machine translation.


Polish - English Speech Statistical Machine Translation Systems for the IWSLT 2013

arXiv.org Machine Learning

This research explores the effects of various training settings on a Polish to English Statistical Machine Translation system for spoken language. Various elements of the TED parallel text corpora for the IWSLT 2013 evaluation campaign were used as the basis for training language models and for development, tuning, and testing of the translation system. The BLEU, NIST, METEOR, and TER metrics were used to evaluate the effects of data preparation on translation results. Our experiments included systems that use stems and morphological information for Polish words. We also conducted a deep analysis of the provided Polish data as preparatory work for an automatic data correction and cleaning phase.
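
The abstract mentions systems built on stems and morphological information for Polish words. As a rough illustration of that idea only (not the paper's actual pipeline), the sketch below strips a few common Polish inflectional suffixes; the suffix list and longest-match heuristic are invented, and a real system would use a proper morphological analyser such as Morfeusz.

```python
# Illustrative, hypothetical stem-based preprocessing for Polish, as a
# stand-in for the morphological processing the paper describes. A real
# system would use a morphological analyser rather than this heuristic.

# A few common Polish inflectional endings, longest first.
SUFFIXES = ["ami", "ach", "owi", "em", "ie", "om", "ów", "ę", "y", "e", "a", "u"]

def crude_stem(word: str) -> str:
    """Strip one inflectional suffix if the remaining stem stays long enough."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def preprocess(sentence: str) -> str:
    """Lowercase and stem every token, e.g. to build a stemmed training corpus."""
    return " ".join(crude_stem(tok) for tok in sentence.lower().split())

print(preprocess("Komitet zatwierdził propozycję"))  # -> "komitet zatwierdził propozycj"
```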


Using Mechanical Turk to Build Machine Translation Evaluation Sets

arXiv.org Machine Learning

Building machine translation (MT) test sets is a relatively expensive task. As MT becomes desirable for more and more language pairs and domains, test sets must be built for each case. In this paper, we investigate using Amazon's Mechanical Turk (MTurk) to build MT test sets cheaply. We find that MTurk can produce test sets at much lower cost than professional translation. More importantly, in experiments with multiple MT systems, we find that the MTurk-produced test sets yield essentially the same conclusions about system performance as the professionally produced ones.
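
To make the approach concrete, here is a sketch of posting a single translation HIT with boto3's MTurk client against the requester sandbox. The HIT design, payment, and strings below are invented placeholders, not the paper's actual setup; a production HIT would also read the worker's assignmentId from the URL via JavaScript.

```python
# Sketch of posting a translation HIT with boto3's MTurk client.
# All titles, rewards, and sentences are illustrative placeholders.
import boto3

# Sandbox endpoint so experiments cost nothing; drop endpoint_url for production.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

source_sentence = "Komitet zatwierdził propozycję."  # example source sentence

# Minimal HTMLQuestion form; a real HIT fills assignmentId via JavaScript.
question = f"""
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html><body>
      <form action="https://workersandbox.mturk.com/mturk/externalSubmit" method="post">
        <p>Translate into English: {source_sentence}</p>
        <input type="hidden" name="assignmentId" value="ASSIGNMENT_ID_NOT_AVAILABLE"/>
        <textarea name="translation" rows="3" cols="60"></textarea>
        <input type="submit"/>
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

response = mturk.create_hit(
    Title="Translate one sentence into English",
    Description="Provide a fluent English translation of the given sentence.",
    Keywords="translation, english",
    Reward="0.10",                     # USD per assignment
    MaxAssignments=3,                  # several workers -> multiple references
    LifetimeInSeconds=86400,
    AssignmentDurationInSeconds=600,
    Question=question,
)
print("HIT created:", response["HIT"]["HITId"])
```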


Mix and match networks: encoder-decoder alignment for zero-pair image translation

arXiv.org Machine Learning

We address the problem of image translation between domains or modalities for which no direct paired data is available (i.e., zero-pair translation). We propose mix and match networks, based on multiple encoders and decoders aligned in such a way that other encoder-decoder pairs can be composed at test time to perform unseen image translation tasks between domains or modalities for which explicit paired samples were not seen during training. We study the impact of autoencoders, side information, and losses in improving the alignment and transferability of trained pairwise translation models to unseen translations. We show that our approach is scalable and can perform colorization and style transfer between unseen combinations of domains. We evaluate our system in a challenging cross-modal setting where semantic segmentation is estimated from depth images, without explicit access to any depth-semantic segmentation training pairs. Our model outperforms baselines based on pix2pix and CycleGAN models.
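
The core idea is that per-domain encoders and decoders share one aligned latent space, so an unseen pair can be composed at test time. The PyTorch sketch below illustrates just that composition step; the module definitions, channel counts, and shapes are invented for illustration and are not the paper's architecture.

```python
# Sketch of the mix-and-match idea: per-domain encoders and decoders that
# share one latent space, so unseen pairs compose at test time.
import torch
import torch.nn as nn

LATENT = 128

def encoder(in_ch: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(64, LATENT, 4, stride=2, padding=1),
    )

def decoder(out_ch: int) -> nn.Module:
    return nn.Sequential(
        nn.ConvTranspose2d(LATENT, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
    )

# One encoder/decoder per modality: RGB (3 ch), depth (1 ch),
# segmentation (here 13 classes, an arbitrary choice).
enc = {"rgb": encoder(3), "depth": encoder(1)}
dec = {"rgb": decoder(3), "seg": decoder(13)}

# Training would align the shared latent space through the seen pairs
# (rgb<->depth, rgb<->seg), autoencoders, side information, and losses.

# Zero-pair inference: depth -> segmentation, a pairing never seen in training.
depth_image = torch.randn(1, 1, 64, 64)
seg_logits = dec["seg"](enc["depth"](depth_image))
print(seg_logits.shape)  # torch.Size([1, 13, 64, 64])
```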