Machine Translation


Zoom will add real-time translation for 12 languages next year

Mashable

Zoom says its real-time translation function will be up and running next year for 12 languages. The company demoed the feature Monday during its annual Zoomtopia event and argued that it will help break down language barriers during video calls. In the demo, Zoom employees spoke English, and the video-conferencing app translated their words in real time into subtitles written in Japanese and then Chinese. Another employee then spoke German, and the app translated the words into English. The technology uses AI-powered algorithms to transcribe what a speaker is saying into text, which is then translated into the target language.


Association Mining for Machine Learning

#artificialintelligence

Association rule mining is one of the most important concepts in machine learning, widely used in market basket analysis. This course covers the working principle of association mining and its key concepts, such as Support, Confidence, and Lift, in a very simplified manner. All of the algorithms are explained through worked examples. Parteek Bhatia is Professor in the Department of Computer Science and Engineering and Former Associate Dean of Student Affairs at Thapar Institute of Engineering and Technology, Patiala. At present he is on sabbatical at Tel Aviv University, Israel, acting as Visiting Professor at the LAMBDA Lab, TAU.
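The course content itself isn't excerpted here, but the three metrics named above have standard definitions. As a hedged illustration, here is a minimal Python sketch (the toy transactions and item names are invented for the example) computing Support, Confidence, and Lift for a single candidate rule:

```python
# Minimal sketch (not from the course): computing Support, Confidence,
# and Lift for one candidate association rule on a toy basket dataset.

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk", "butter"},
    {"bread", "milk", "eggs"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) = support(A union B) / support(A)."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

def lift(antecedent, consequent):
    """How much more often A and B co-occur than if they were independent."""
    return confidence(antecedent, consequent) / support(consequent)

# Example rule: {bread} -> {milk}
print(support({"bread", "milk"}))       # 0.6
print(confidence({"bread"}, {"milk"}))  # 0.75
print(lift({"bread"}, {"milk"}))        # 0.9375
```

A lift below 1, as in this toy run, means the two items co-occur slightly less often than independence would predict; a lift above 1 signals a genuinely interesting rule.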


Take a Deep Dive into NLP at ODSC APAC 2021

#artificialintelligence

ODSC APAC 2021 is right around the corner this September 15–16th, and while there's something for everyone, NLP is sticking out as one of the focal points of this conference. Natural language processing is indeed special in the APAC region, largely because the number of different languages spoken there creates a greater need for diverse datasets. This has led researchers to develop novel and exciting techniques to address these concerns. At ODSC APAC in a few weeks, you'll be able to hear about NLP from these data scientists, as well as from research institutions that focus on it. Natural language processing (NLP) has made truly impressive progress in recent years and is being deployed in an ever-increasing range of user-facing settings. Accompanying this progress has been a growing realization of inequities in the performance of naively trained NLP models for users of different demographics, with minorities typically experiencing lower performance levels.


Amazon Machine Learning

#artificialintelligence

What is Amazon Web Services? Amazon Web Services, or AWS, is the world's most broadly adopted cloud platform. AWS provides a number of useful cloud computing services that are, as the company says, reliable, scalable, and cost-efficient. These include storage, networking, remote computing, servers, email, mobile development, and security. Amazon machine learning, then, essentially means leveraging ML algorithms on a cloud platform like AWS.
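To make "leveraging ML on AWS" concrete, here is a minimal sketch using boto3, AWS's official Python SDK, to call a model that has already been deployed to a SageMaker endpoint; the endpoint name and input row are hypothetical placeholders, and this is an illustration rather than anything from the article:

```python
# Minimal sketch: invoking a machine learning model hosted on AWS SageMaker.
# Assumes AWS credentials are configured and a model is already deployed;
# "my-demo-endpoint" and the CSV payload are hypothetical placeholders.
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-demo-endpoint",  # hypothetical endpoint name
    ContentType="text/csv",           # format the model was deployed to accept
    Body="5.1,3.5,1.4,0.2",           # one example row of input features
)

prediction = response["Body"].read().decode("utf-8")
print(prediction)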


The Vauquois triangle: Mystery solved

#artificialintelligence

The Vauquois triangle is a classical hierarchical model for visualizing various machine translation approaches. Before we dive into the Vauquois triangle, let's look at what machine translation is. Machine translation is the process of using computer software to translate text or speech from one natural language into another. The definition may look simple, but the process is extremely difficult: languages differ in so many ways, grammatically, syntactically (sentence structure), semantically (meaning), and more.


DeepLobe - Machine Learning API as a Service Platform

#artificialintelligence

Day by day, the number of machine learning models is increasing at a rapid pace. With this growth, it is hard for beginners to choose an effective model for Natural Language Understanding (NLU) and Natural Language Generation (NLG). Researchers across the globe are working around the clock to push artificial intelligence forward and build agile, intuitive sequence-to-sequence learning models. In recent times, the transformer is one such model that has gained prominence in machine learning, for example in speech-to-text tasks. The wide availability of other sequence-to-sequence learning models, such as RNNs, LSTMs, and GRUs, always raises a challenge for beginners when they weigh those options against transformers.
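To ground that comparison, the following minimal PyTorch sketch (illustrative only, with arbitrary dimensions) runs the same toy sequence through a recurrent (LSTM) encoder and a transformer encoder layer, the two model families beginners are typically choosing between:

```python
# Minimal sketch: the recurrent and transformer encoder families mentioned
# above, applied to the same toy embedded input sequence.
import torch
import torch.nn as nn

batch, seq_len, d_model = 2, 10, 32
x = torch.randn(batch, seq_len, d_model)  # toy embedded input sequence

# Recurrent encoder: processes tokens one step at a time.
lstm = nn.LSTM(input_size=d_model, hidden_size=d_model, batch_first=True)
lstm_out, _ = lstm(x)

# Transformer encoder layer: attends to all positions in parallel.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=4, batch_first=True
)
transformer_out = encoder_layer(x)

print(lstm_out.shape)         # torch.Size([2, 10, 32])
print(transformer_out.shape)  # torch.Size([2, 10, 32])
```

Both produce one representation per position, but the transformer attends to all positions at once instead of stepping through the sequence, which is a large part of why it has displaced RNN-family models for sequence-to-sequence work.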


CushLEPOR: Customised hLEPOR Metric Using LABSE Distilled Knowledge Model to Improve Agreement with Human Judgements

arXiv.org Artificial Intelligence

Human evaluation has always been expensive, and researchers struggle to trust the automatic metrics. To address this, we propose to customise traditional metrics by taking advantage of pre-trained language models (PLMs) and the limited available human-labelled scores. We first re-introduce the hLEPOR metric factors, followed by the portable Python version we developed, which achieves automatic tuning of the weighting parameters in the hLEPOR metric. Then we present customised hLEPOR (cushLEPOR), which uses the LABSE distilled knowledge model to improve the metric's agreement with human judgements by automatically optimising the factor weights for the exact MT language pairs to which cushLEPOR is deployed. We also optimise cushLEPOR towards human evaluation data based on the MQM and pSQM frameworks on English-German and Chinese-English language pairs. The experimental investigations show that cushLEPOR boosts hLEPOR's agreement with PLMs such as LABSE at much lower cost, improves agreement with human evaluations including MQM and pSQM scores, and yields much better performance than BLEU (data available at https://github.com/poethan/cushLEPOR).
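The abstract doesn't reproduce the metric, but the hLEPOR literature describes it as a weighted harmonic mean of scoring factors (an enhanced length penalty, an n-gram position difference penalty, and harmonic precision-recall). The sketch below shows only that combination step, with placeholder factor values and weights; the real factor computations follow the hLEPOR papers, and cushLEPOR's contribution is optimising such weights automatically per language pair:

```python
# Simplified sketch of hLEPOR's combination step: a weighted harmonic mean
# of its scoring factors. Factor values and weights here are placeholders,
# not outputs of the real metric.

def weighted_harmonic_mean(factors, weights):
    """Weighted harmonic mean: sum(w) / sum(w / f)."""
    assert len(factors) == len(weights)
    return sum(weights) / sum(w / f for w, f in zip(weights, factors))

# Hypothetical factor scores for one hypothesis/reference pair, each in (0, 1]:
length_penalty = 0.95     # enhanced length penalty
position_penalty = 0.80   # n-gram position difference penalty
precision_recall = 0.70   # harmonic mean of precision and recall

# Hypothetical weights; cushLEPOR would tune these against LABSE/human scores.
weights = [1.0, 1.0, 1.0]

score = weighted_harmonic_mean(
    [length_penalty, position_penalty, precision_recall], weights
)
print(round(score, 4))  # 0.804 with these placeholder values
```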


How will GPT-3 change the face of business?

#artificialintelligence

Last year, OpenAI released the third version of its Generative Pretrained Transformer model (GPT-3), to much excitement amongst the tech and business communities -- so much, in fact, that OpenAI's CEO tweeted "the hype is way too much." GPT-3 has astonished observers with groundbreaking examples of code, news articles, translations and even poetry which evaluators have difficulty distinguishing from human-written output. Fundamentally, it simply autocompletes: give it a prompt, and it'll predict what comes next. But the enormous dataset it was trained on, along with the sheer complexity of its architecture, has enabled it to achieve the best results yet. So, how exactly does this technology work, and where could it take us?
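GPT-3 itself is reachable only through OpenAI's API, but the autocomplete behaviour the article describes can be sketched with GPT-2, its openly available predecessor, via the Hugging Face transformers library; this is a minimal illustration, not the article's code:

```python
# Minimal sketch of "give it a prompt and it predicts what comes next",
# using GPT-2 through the Hugging Face transformers library.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The biggest change GPT-3 brings to business is"
inputs = tokenizer(prompt, return_tensors="pt")

# Autoregressive generation: the model repeatedly predicts the next token.
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.9,           # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```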


Trustworthy AI: A Computational Perspective

arXiv.org Artificial Intelligence

In the past few decades, artificial intelligence (AI) technology has experienced swift developments, changing everyone's daily life and profoundly altering the course of human society. The intention of developing AI is to benefit humans, by reducing human labor, bringing everyday convenience to human lives, and promoting social good. However, recent research and AI applications show that AI can cause unintentional harm to humans, such as making unreliable decisions in safety-critical scenarios or undermining fairness by inadvertently discriminating against one group. Thus, trustworthy AI has attracted immense attention recently, which requires careful consideration to avoid the adverse effects that AI may bring to humans, so that humans can fully trust and live in harmony with AI technologies. Recent years have witnessed a tremendous amount of research on trustworthy AI. In this survey, we present a comprehensive overview of trustworthy AI from a computational perspective, to help readers understand the latest technologies for achieving trustworthy AI. Trustworthy AI is a large and complex area, involving various dimensions. In this work, we focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being. For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems. We also discuss the complementary and conflicting interactions among different dimensions and discuss promising directions for future research on trustworthy AI.


Natural language processing (NLP) and its use in machine translation

#artificialintelligence

Neural machine translation (NMT) is a popular and widely used translation approach that takes an end-to-end path to automatic translation, overcoming the weaknesses of rule-based (RBMT) and statistical (SMT) methods. NMT uses the most recent deep learning methods to produce better translation output than other traditional machine translation solutions. It is the most recent type of machine translation, employing artificial neural networks, loosely inspired by the neurons of the human brain, that allow it to organise data into various groups and layers. NMT tries to incorporate the context of whole sentences or paragraphs rather than translating individual words in isolation. An NMT system draws on large multilingual datasets and automated learning mechanisms that contribute to continuous improvement.
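As a hedged illustration of how little code an end-to-end NMT system needs today, here is a minimal sketch using the Hugging Face transformers library with Helsinki-NLP's publicly available English-to-German MarianMT checkpoint (the model choice is our example, not the article's):

```python
# Minimal sketch: translating with a pretrained end-to-end NMT model.
# Uses the publicly available MarianMT English->German checkpoint from
# Helsinki-NLP; other language pairs work the same way.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = "Machine translation has come a long way."
batch = tokenizer([text], return_tensors="pt")

# The encoder reads the whole source sentence; the decoder generates the
# translation token by token, conditioning on that full-sentence context.
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```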