One goal of AI work in natural language is to enable communication between people and computers without resorting to memorization of complex commands and procedures. Automatic translation – enabling scientists, business people and just plain folks to interact easily with people around the world – is another goal. Both are just part of the broad field of AI and natural language, along with the cognitive science aspect of using computers to study how humans understand language.
Amazon's Alexa might soon replicate the voice of family members - even if they're dead. The capability, unveiled at Amazon's Re:Mars conference in Las Vegas, is in development and would allow the virtual assistant to mimic the voice of a specific person based on less than a minute of provided recording. Rohit Prasad, senior vice president and head scientist for Alexa, said at the event Wednesday that the desire behind the feature was to build greater trust in the interactions users have with Alexa by putting more "human attributes of empathy and affect" into them. "These attributes have become even more important during the ongoing pandemic when so many of us have lost ones that we love," Prasad said. "While AI can't eliminate that pain of loss, it can definitely make their memories last."
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. The purpose of this study was to investigate whether tailoring a transformer-based language model to radiology is beneficial for radiology natural language processing (NLP) applications. This retrospective study presents RadBERT, a family of bidirectional encoder representations from transformers (BERT)-based language models adapted for radiology.
Artificial intelligence (AI), for all of its futuristic elements, is not a new category – not by a long shot. The roots of the technology go all the way back to the late 1950s, when computers started to become much more powerful. But the proliferation of AI stocks hasn't come until much more recently, as artificial intelligence became commercially viable over the past decade or so. Artificial intelligence uses algorithms to detect patterns, which can help businesses create predictions that ultimately lower costs, improve productivity and increase revenues. As technological breakthroughs arise, AI models continue to scale.
The first computer algorithm is said to have been written in the early 1840s, for the prototype of the Analytical Engine, by Ada Lovelace, a mathematician dubbed a "female genius" in her posthumous biography. As the field of computing developed over the next century following Lovelace's death, the typing work involved in creating computer programs was seen as "women's work," a role viewed as akin to switchboard operator or secretary. Women wrote the software, while men made the hardware -- the latter seen, at the time, as the more prestigious of the two tasks. And, during the Space Race of the 1950s and '60s, three Black women, known as "human computers," broke gender and racial barriers to help NASA send the first men into orbit.
AI models don't have a memory: When you converse with a chatbot one day, it won't remember what you said the next day. Chatbots (and language models) typically work by looking at "context", which, for you, basically means a few sentences in the past. The limit varies from model to model, but it's typically up to 1000 words or so (I'm not sure what it is these days with super-huge models, but there's always a limit). Even if a chatbot uses an RNN, it's still very limited (usually even more so), as RNNs struggle with long-range dependencies beyond a few hundred words. The point is that AI models have no idea what you said more than a few sentences back. Also, don't be confused by models like the Neural Turing Machine, which have a "working memory" (like RAM) but still no permanent memory (like a hard disk).
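To make the "context" idea concrete, here's a minimal sketch of how a chat history might be trimmed to fit a fixed limit. The 20-word budget and the `build_context` helper are illustrative assumptions, not any real chatbot's code; real models count tokens, not words, and allow far larger limits, but the effect is the same: older messages silently fall off the edge.

```python
def build_context(history, limit_words=20):
    """Keep only the most recent messages that fit in the word budget."""
    context = []
    budget = limit_words
    for message in reversed(history):  # walk from newest to oldest
        words = len(message.split())
        if words > budget:
            break  # this and all older messages no longer fit: "forgotten"
        context.append(message)
        budget -= words
    return list(reversed(context))  # restore chronological order

history = [
    "My name is Ada and I love chess.",  # 8 words, oldest
    "What should I cook tonight?",       # 5 words
    "Something vegetarian, please.",     # 3 words
    "Can you suggest a dessert too?",    # 6 words, newest
]
# With a 20-word budget, the oldest message is dropped, so the "model"
# no longer knows the user's name.
print(build_context(history))
```

Note that the truncation happens at the input side: the model itself is unchanged, it simply never sees the dropped text.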
If you want to begin using machine learning in your applications, Microsoft offers several different ways to jumpstart development. One key technology, Microsoft's Azure Cognitive Services, offers a set of managed machine learning services with pretrained models and REST API endpoints. These models offer most of the common use cases, from working with text and language, to recognizing speech and images. Machine learning is still evolving, with new models being released and new hardware to help speed up inferencing, and so Microsoft regularly updates its Cognitive Services. The latest major update, announced at Build 2022, features a lot of changes to its tools for working with text, bringing three different services under one umbrella.
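As a rough illustration of what "pretrained models behind REST API endpoints" means in practice, the sketch below assembles a request for the Cognitive Services Text Analytics sentiment endpoint. The resource name `my-resource` and the key are placeholders (an actual call requires your own Azure resource and subscription key), so the code only builds the request rather than sending it; the API version and header name reflect the v3.1 Text Analytics REST API.

```python
import json

# Placeholder endpoint: each Cognitive Services resource gets its own hostname.
AZURE_ENDPOINT = "https://my-resource.cognitiveservices.azure.com"
API_PATH = "/text/analytics/v3.1/sentiment"

def build_sentiment_request(texts, key="YOUR-KEY"):
    """Assemble the URL, headers, and JSON body for a sentiment call."""
    headers = {
        # Cognitive Services authenticates with this subscription-key header.
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }
    body = {
        "documents": [
            {"id": str(i), "language": "en", "text": t}
            for i, t in enumerate(texts, start=1)
        ]
    }
    return AZURE_ENDPOINT + API_PATH, headers, json.dumps(body)

url, headers, body = build_sentiment_request(["The update looks great."])
print(url)
print(body)
```

Sending this payload with any HTTP client (e.g. `requests.post(url, headers=headers, data=body)`) would return per-document sentiment labels and confidence scores; Microsoft also ships language-specific SDKs that wrap the same endpoints.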
The European Association for Machine Translation (EAMT) conference is a venue where MT researchers, users and translators gather to discuss the latest advances in the industry. It is really interesting to go there and see what is going on in Europe in terms of MT development and adoption. In this article, I want to share some ideas from this year's Best Paper Award winner, "Searching for COMETINHO: The Little Metric That Could", from the research lab of Unbabel, a company based in Lisbon, Portugal, that offers translation services using MT and human translators. You can find the online version of the paper in the ACL Anthology.