What if you didn't need English to translate? Meta's new and improved open-source AI model, 'NLLB-200', is capable of translating 200 languages without English! "Communicating across languages is one superpower that AI provides, but as we keep advancing our AI work it's improving everything we do--from showing the most interesting content on Facebook and Instagram, to recommending more relevant ads, to keeping our services safe for everyone", says Meta CEO Mark Zuckerberg. Accessibility through language ensures that the benefits of technological advancement reach everyone, no matter what language they speak. Tech companies are assuming a proactive role in attempting to bridge this gap.
Tech giant Meta has created a single artificial intelligence (AI)-based model capable of translating across 200 different languages, including many not supported by current commercial tools. According to The Verge, the company is open-sourcing the project in the hopes that others will build on its work. The AI model is part of an ambitious R&D project by Meta to create a so-called "universal speech translator," which the company sees as important for growth across its many platforms -- from Facebook and Instagram to developing domains like VR and AR. Machine translation not only allows Meta to better understand its users (and so improve the advertising systems that generate 97 per cent of its revenue) but could also be the foundation of a killer app for future projects like its augmented reality glasses. Experts in machine translation told the website that Meta's latest research was ambitious and thorough, but noted that the quality of some of the model's translations would likely be well below that of better-supported languages like Italian or German.
"Broadly accessible machine translation systems support around 130 languages; our goal is to bring this number up to 200," the authors write as their mission statement. Meta Platforms, owner of Facebook, Instagram and WhatsApp, on Wednesday unveiled its latest effort in machine translation: a 190-page opus describing how it has used deep learning forms of neural nets to double state-of-the-art translation quality, extending coverage to 202 languages, many of them so-called "low-resource" languages such as West Central Oromo, a language of the Oromia state of Ethiopia; Tamasheq, spoken in Algeria and several other parts of Northern Africa; and Waray, the language of the Waray people of the Philippines. The report by a team of researchers at Meta, along with scholars at UC Berkeley and Johns Hopkins, "No Language Left Behind: Scaling Human-Centered Machine Translation," is posted on Facebook's AI research Web site, along with a companion blog post, and both should be required reading for their rich detail on the matter. As Stephanie relates, Meta is open-sourcing its data sets and neural network model code on GitHub, and also offering $200,000 in awards to outside uses of the technology.
Lilt, a provider of AI-powered business translation software, today announced that it raised $55 million in a Series C round led by Four Rivers, joined by new investors Sorenson Capital, CLEAR Ventures and Wipro Ventures. The company says that it plans to use the capital to expand its R&D efforts as well as its customer footprint and engineering teams. "Lilt [aims to] build a solution that [will] combine the best of human ingenuity with machine efficiency," CEO Spence Green told TechCrunch via email. "We are in three regions -- the U.S., Europe, the Middle East and Africa (EMEA) and Asia -- and look to have both sales and production teams in each of these regions." San Francisco, California-based Lilt was co-founded by Green and John DeNero in 2015. Green is a former Northrop Grumman software engineer who later worked as a research intern on the Google Translate team, developing an AI language system for improving English-to-Arabic translations. DeNero was previously a senior research scientist at Google, mostly on the Google Translate side, and a teaching professor at the University of California, Berkeley. "15 years ago, I was living in the Middle East, where you make less money if you speak anything other than English.
Does the recent flurry of headlines about Facebook and the negative outcomes produced by its algorithms have you worried about the future and the implications of widespread AI usage? It's a rational response to have during an alarming news cycle. However, this situation shouldn't be interpreted as a death knell for the use of AI in human communications. It's more of a cautionary example of the disastrous consequences that can occur as a result of not using AI in a responsible way. Read on to learn more about ethical technology, data quality, and the significance of human-in-the-loop AI.
The Linked Open Data practice has led to a significant growth of structured data on the Web in the last decade. Such structured data describe real-world entities in a machine-readable way, and have created an unprecedented opportunity for research in the field of Natural Language Processing. However, there is a lack of studies on how such data can be used, for what kind of tasks, and to what extent they can be useful for these tasks. This work focuses on the e-commerce domain to explore methods of utilising such structured data to create language resources that may be used for product classification and linking. We process billions of structured data points in the form of RDF n-quads to create multi-million-word product-related corpora that are later used in three different ways to create language resources: training word embedding models, continued pre-training of BERT-like language models, and training Machine Translation models that are used as a proxy to generate product-related keywords. Our evaluation on an extensive set of benchmarks shows word embeddings to be the most reliable and consistent method to improve the accuracy on both tasks (with up to 6.9 percentage points in macro-average F1 on some datasets). The other two methods, however, are not as useful. Our analysis shows that this could be due to a number of reasons, including the biased domain representation in the structured data and lack of vocabulary coverage. We share our datasets and discuss how our lessons learned could be taken forward to inform future research in this direction.
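The word-embedding route described above, embedding product text and using the vectors for product classification, can be illustrated with a minimal, self-contained sketch. It is only an assumption-laden toy: crude co-occurrence counts stand in for trained word embedding models, and the four hypothetical product titles stand in for the multi-million-word corpora; a real pipeline would train word2vec-style vectors on the full corpus.

```python
import math
from collections import Counter, defaultdict

# Toy product "corpus" of (title, category) pairs -- purely illustrative,
# standing in for the large product-related corpora described above.
train = [
    ("usb charging cable phone", "electronics"),
    ("wireless bluetooth phone headset", "electronics"),
    ("cotton t shirt men", "apparel"),
    ("denim jeans men slim", "apparel"),
]

# 1. Crude "embeddings": within-title word co-occurrence counts.
cooc = defaultdict(Counter)
for title, _ in train:
    words = title.split()
    for w in words:
        for c in words:
            if c != w:
                cooc[w][c] += 1

vocab = sorted({w for title, _ in train for w in title.split()})

def embed(word):
    # A word vector is its co-occurrence counts over the vocabulary.
    return [cooc[word][v] for v in vocab]

def embed_text(text):
    # Average the vectors of known words; zero vector if none are known.
    vecs = [embed(w) for w in text.split() if w in cooc]
    if not vecs:
        return [0.0] * len(vocab)
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# 2. One centroid vector per category, averaged over its training titles.
by_cat = defaultdict(list)
for title, cat in train:
    by_cat[cat].append(embed_text(title))
centroids = {cat: [sum(col) / len(vecs) for col in zip(*vecs)]
             for cat, vecs in by_cat.items()}

def classify(title):
    # Nearest-centroid classification by cosine similarity.
    return max(centroids, key=lambda c: cosine(embed_text(title), centroids[c]))

print(classify("bluetooth cable"))  # electronics
print(classify("slim shirt"))       # apparel
```

Even unseen title combinations are classified correctly here because their words co-occur with category-typical vocabulary, which is the intuition behind using embeddings trained on product corpora for classification.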
In the past few decades, artificial intelligence (AI) technology has experienced swift development, changing everyone's daily life and profoundly altering the course of human society. The intention of developing AI is to benefit humans by reducing human labor, bringing everyday convenience to human lives, and promoting social good. However, recent research and AI applications show that AI can cause unintentional harm to humans, such as making unreliable decisions in safety-critical scenarios or undermining fairness by inadvertently discriminating against one group. Trustworthy AI has therefore attracted immense attention recently; it requires careful consideration of how to avoid the adverse effects that AI may bring, so that humans can fully trust and live in harmony with AI technologies. Recent years have witnessed a tremendous amount of research on trustworthy AI. In this work, we present a comprehensive survey of trustworthy AI from a computational perspective, to help readers understand the latest technologies for achieving it. Trustworthy AI is a large and complex area involving various dimensions. In this work, we focus on six of the most crucial dimensions: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being. For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems. We also discuss the accordant and conflicting interactions among different dimensions and potential aspects of trustworthy AI to investigate in the future.
Since the five biggest tech companies (Google, Apple, Microsoft, Amazon, and Facebook) don't really care where their employees learn their skills, there's no reason to take out heavy loans or even take time away from your current position to break into a well-paid career in the tech industry. And if you aren't sure exactly which field to pursue, you're in luck. The 2021 Google Software Engineering Manager Prep Bundle offers train-at-your-own-pace courses across a wide variety of topics. Budding web developers can benefit from the "UI Design" and "JavaFX: Build Beautiful User Interfaces" courses.
For the past few years, Google has been dominating the field of artificial intelligence. Its search engine has revolutionized the internet, giving everyone, from large-scale organizations to kids, easier access to information. The company claims that its advancements in technology and enhanced customer service would not have been possible had it not invested in disruptive technologies like artificial intelligence, machine learning, and deep learning. This article lists the top 10 Google products powered by artificial intelligence.