Machine Translation


New voices in AI: David Adelani

AIHub

Welcome to the first episode of New voices in AI! You can find David on Twitter @davlanade and find out more about Masakhane here. The music used is 'Wholesome' by Kevin MacLeod, licensed under Creative Commons.

Daly: Hello and welcome to New voices in AI, a new series from AIhub where we celebrate the voices of PhD students, early career researchers, and those with a new perspective on AI. And without further ado, let's begin. First up, a big welcome to our very first guest on "New voices in AI". If you could introduce yourself: who are you?

Adelani: Thank you very much for having me. So, Masakhane is this grassroots organization whose mission is to strengthen and spur NLP research in African languages, by Africans, for Africans. Currently the organization operates mainly on Slack, and we already have over 1,000 members. Of course, not everyone is active, but we have more than 100, or close to 100, active members as well, yeah.

Daly: So how did, how did you get into AI?


Facebook and the Importance of Responsible AI

#artificialintelligence

Does the recent flurry of headlines about Facebook and the negative outcomes produced by its algorithms have you worried about the future and the implications of widespread AI usage? It's a rational response to have during an alarming news cycle. However, this situation shouldn't be interpreted as a death knell for the use of AI in human communications. It's more of a cautionary example of the disastrous consequences that can occur as a result of not using AI in a responsible way. Read on to learn more about ethical technology, data quality, and the significance of human-in-the-loop AI.


So retrieval is what we needed?

#artificialintelligence

Originally published on Towards AI, the world's leading AI and technology news and media company. Last month DeepMind published their new NLP model called RETRO (Retrieval-Enhanced TRansfOrmer), which, according to the paper, is a leap forward in the NLP world in multiple respects.
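
The core idea behind RETRO is to retrieve nearest-neighbour text chunks from a very large database (roughly two trillion tokens, indexed with frozen BERT embeddings) and let the transformer attend to those neighbours while it generates. The sketch below illustrates only the retrieval step; the toy corpus, the embed function and the retrieve helper are illustrative stand-ins and not DeepMind's implementation.

```python
import numpy as np

# Toy "database" of text chunks; RETRO retrieves from ~2 trillion tokens.
chunks = [
    "The Eiffel Tower is located in Paris.",
    "RETRO augments a transformer with retrieved neighbours.",
    "Back-translation creates synthetic parallel sentence pairs.",
]

def embed(text):
    """Stand-in embedding: a normalised bag-of-character-bigrams vector.
    RETRO uses frozen BERT embeddings; this is purely illustrative."""
    vec = np.zeros(26 * 26)
    letters = [c for c in text.lower() if "a" <= c <= "z"]
    for a, b in zip(letters, letters[1:]):
        vec[(ord(a) - 97) * 26 + (ord(b) - 97)] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

chunk_vecs = np.stack([embed(c) for c in chunks])

def retrieve(query, k=2):
    """Return the k chunks most similar to the query (cosine similarity)."""
    scores = chunk_vecs @ embed(query)
    return [chunks[i] for i in np.argsort(-scores)[:k]]

# The transformer would then attend to these neighbours while decoding.
print(retrieve("Which model uses retrieved neighbours?"))
```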


"Artificial Intelligence" Science-Research, January 2022, Week 3 -- summary from Europe PMC

#artificialintelligence

Background: The liver is one of the most common metastatic sites of colon cancer, and liver metastasis (LM) determines subsequent therapy and prognosis, particularly in T1 patients; there is still no effective model to predict the risk of LM in T1 colorectal cancer (CRC) patients. Objectives: Chest radiographs are commonly performed in emergency units, yet their interpretation requires radiology experience. High-quality English-Chinese parallel corpora are currently in short supply. A multilingual dictionary derived from the translation model is combined with the language model to initialize an unsupervised translation model, and the unsupervised English-Chinese neural machine translation model is then optimized with the back-translation technique.
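
The back-translation step mentioned above works by letting each translation direction train on synthetic parallel pairs produced by the opposite direction from monolingual text. A minimal sketch of one such round is below; the `en_to_zh` / `zh_to_en` objects and their `.translate()` / `.train_step()` methods are placeholders standing in for the actual unsupervised NMT systems, not the authors' code.

```python
# Minimal sketch of one round of iterative back-translation for unsupervised NMT.
# `en_to_zh` and `zh_to_en` are placeholder seq2seq models exposing
# .translate(batch) and .train_step(src, tgt).

def back_translation_round(en_to_zh, zh_to_en, mono_en, mono_zh):
    # 1. Use the current zh->en model to turn monolingual Chinese sentences
    #    into (noisy) English, giving synthetic (en, zh) training pairs.
    synthetic_en = zh_to_en.translate(mono_zh)
    en_to_zh.train_step(src=synthetic_en, tgt=mono_zh)

    # 2. Symmetrically, back-translate monolingual English to train zh->en.
    synthetic_zh = en_to_zh.translate(mono_en)
    zh_to_en.train_step(src=synthetic_zh, tgt=mono_en)

# Repeating this round lets both directions improve from monolingual data
# alone, after the dictionary-based initialization described above.
```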


Natural Language Processing - How different NLP Algorithms work

#artificialintelligence

Natural Language Processing (NLP) is an area of computer science that studies the interactions between computers and human languages. It is the technology behind search engines such as Google. The analysis of language can be done manually, and has been done for centuries, but technology continues to evolve, and this is especially true in NLP. The machine learning and deep learning communities have been actively pursuing NLP through a variety of techniques.


Pixel Recursive Super Resolution. Paper @Google Brain. Ryan Dahl, Mohammad Norouzi & Jonathon Shlens

#artificialintelligence

Research ... today we bring to this space another Google paper ... here is the abstract: We present a pixel recursive super resolution model that synthesizes realistic details into images while enhancing their resolution. A low resolution image may correspond to multiple plausible high resolution images, thus modeling the super resolution process with a pixel-independent conditional model often results in averaging different details, hence blurry edges. By contrast, our model is able to represent a multimodal conditional distribution by properly modeling the statistical dependencies among the high resolution image pixels, conditioned on a low resolution input. We employ a PixelCNN architecture to define a strong prior over natural images and jointly optimize this prior with a deep conditioning convolutional network. Human evaluations indicate that samples from our proposed model look...
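
The central mechanism of the paper is that the logits for each high-resolution pixel are the sum of two terms: a conditioning network applied to the low-resolution input and a PixelCNN-style autoregressive prior over the pixels generated so far, sampled pixel by pixel. A rough PyTorch sketch of that sampling loop is below; `cond_net`, `prior_net` and all shapes are illustrative placeholders rather than the authors' architecture.

```python
import torch

def sample_high_res(cond_net, prior_net, low_res, hr_size=32, n_vals=256):
    """Pixel-by-pixel sampling: per-pixel logits are the sum of a conditioning
    network applied to the low-res input and an autoregressive (PixelCNN-style)
    prior over the high-res pixels generated so far. Both networks are
    placeholders supplied by the caller; greyscale images for brevity."""
    batch = low_res.shape[0]
    high_res = torch.zeros(batch, 1, hr_size, hr_size)
    cond_logits = cond_net(low_res)             # expected shape (batch, n_vals, hr, hr)
    for i in range(hr_size):
        for j in range(hr_size):
            prior_logits = prior_net(high_res)  # (batch, n_vals, hr, hr)
            logits = cond_logits[:, :, i, j] + prior_logits[:, :, i, j]
            pixel = torch.distributions.Categorical(logits=logits).sample()
            high_res[:, 0, i, j] = pixel.float() / (n_vals - 1)
    return high_res
```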


New model improves accuracy of machine learning in COVID-19 diagnosis while preserving privacy

#artificialintelligence

Researchers in the UK and China have developed an artificial intelligence (AI) model that can diagnose COVID-19 as well as a panel of professional radiologists, while preserving the privacy of patient data. The international team, led by the University of Cambridge and the Huazhong University of Science and Technology, used a technique called federated learning to build their model. Using federated learning, an AI model in one hospital or country can be independently trained and verified using a dataset from another hospital or country, without data sharing. The researchers based their model on more than 9,000 CT scans from approximately 3,300 patients in 23 hospitals in the UK and China. Their results, reported in the journal Nature Machine Intelligence, provide a framework where AI techniques can be made more trustworthy and accurate, especially in areas such as medical diagnosis where privacy is vital.
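
The article does not spell out the training procedure, but federated learning of this kind typically follows a federated-averaging pattern: each hospital updates the shared model on its own scans, and only the updated weights, never the underlying patient data, are sent back and combined. A toy sketch under that assumption, with a simple least-squares "model" standing in for the CT-scan classifier and synthetic data standing in for the hospital datasets:

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1, steps=5):
    """Each hospital refines the shared weights on its own private data.
    Here the 'training' is a toy least-squares gradient step."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, hospital_datasets):
    """One round of federated averaging: raw data never leaves a hospital,
    only the locally updated weights, which the server averages."""
    updates = [local_update(global_weights, d) for d in hospital_datasets]
    return np.mean(updates, axis=0)

# Toy setup: three 'hospitals', each with private (features, labels) data.
rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, hospitals)
print(w)
```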


AI 50 2021: America's Most Promising Artificial Intelligence Companies

#artificialintelligence

The Covid-19 pandemic was devastating for many industries, but it only accelerated the use of artificial intelligence across the U.S. economy. Amid the crisis, companies scrambled to create new services for remote workers and students, beef up online shopping and dining options, make customer call centers more efficient and speed development of important new drugs.

Even as applications of machine learning and perception platforms become commonplace, a thick layer of hype and fuzzy jargon clings to AI-enabled software. That makes it tough to identify the most compelling companies in the space, especially those finding new ways to use AI that create value by making humans more efficient, not redundant.

With this in mind, Forbes has partnered with venture firms Sequoia Capital and Meritech Capital to create our third annual AI 50, a list of private, promising North American companies that are using artificial intelligence in ways that are fundamental to their operations. To be considered, businesses must be privately held and utilizing machine learning (where systems learn from data to improve on tasks), natural language processing (which enables programs to "understand" written or spoken language) or computer vision (which relates to how machines "see"). AI companies incubated at, largely funded through or acquired by large tech, manufacturing or industrial firms aren't eligible for consideration.

Our list was compiled through a submission process open to any AI company in the U.S. and Canada. The application asked companies to provide details on their technology, business model, customers and financials like funding, valuation and revenue history (companies had the option to submit information confidentially, to encourage greater transparency). Forbes received several hundred entries, of which nearly 400 qualified for consideration. From there, our data partners applied an algorithm to identify the 100 companies with the highest quantitative scores, and that also made diversity a priority. Next, a panel of expert AI judges evaluated the finalists to find the 50 most compelling companies (they were precluded from judging companies in which they have a vested interest).

Among the trends this year are what Sequoia Capital's Konstantine Buhler calls AI workbench companies, which build platforms tailored to different enterprises, including Dataiku, DataRobot, Domino Data and Databricks.


Chinese TV introducing AI sign language presenter at the next Olympics

#artificialintelligence

Chinese TV will introduce the first AI sign language presenter in time for the 2022 Winter Olympics in Beijing. China Central Television (CCTV) and Baidu AI Cloud said the launch of the AI sign language presenter represents a huge leap forward in 'overcoming the barrier of sound with technology'. Nearly 28 million people in China are hearing impaired, and about 430 million people around the world also suffer from hearing loss. The launch of the AI presenter will allow the state broadcaster to offer sign language services for viewers around the clock, starting with updates on the Winter Olympics in Beijing early next year. The presenter achieves high-level sign language expression thanks to Baidu's natural action engine and its sign language translation engine.


Azure AI empowers organizations to serve users in more than 100 languages

#artificialintelligence

Microsoft announced today that 12 new languages and dialects have been added to Translator. These additions mean that the service can now translate between more than 100 languages and dialects, making information in text and documents accessible to 5.66 billion people worldwide. "One hundred languages is a good milestone for us to achieve our ambition for everyone to be able to communicate regardless of the language they speak," said Xuedong Huang, Microsoft technical fellow and Azure AI chief technology officer. Translator today covers the world's most spoken languages including English, Chinese, Hindi, Arabic and Spanish. In recent years, advances in AI technology have allowed the company to grow its language library with low-resource and endangered languages, such as Inuktitut, a dialect of Inuktut that is spoken by about 40,000 Inuit in Canada.
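
For developers, text translation through this service is exposed via the Translator REST API. A minimal sketch using Python's requests library is below, assuming the public v3.0 endpoint and a valid Translator resource key and region (both placeholders); the target language codes, including "iu" for Inuktitut, are only examples.

```python
import requests

# Placeholders: a real Translator resource key and its Azure region are required.
SUBSCRIPTION_KEY = "<your-translator-key>"
REGION = "<your-resource-region>"

def translate(text, to_langs=("fr", "zh-Hans", "iu")):
    """Send one string to the Translator Text API (v3.0) and return the
    list of translations, one per requested target language."""
    resp = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "to": list(to_langs)},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
        },
        json=[{"text": text}],
    )
    resp.raise_for_status()
    return resp.json()[0]["translations"]

# Example usage: print(translate("Hello, world"))
```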