Collaborating Authors

Natural Language

The Uncanny Valley -- Chatbot & CRM


A chatbot is software that simulates human-like conversation with users via text messages in chat. Its key task is to help users by providing answers to their questions. Diving deeper, chatbots are pieces of conversational software, powered by artificial intelligence, that can engage in one-to-one chat with customers on their preferred chat platform, such as Facebook Messenger, WhatsApp, Instagram, Telegram, Slack and many more conversational platforms. Chatbots are driven by pre-programmed algorithms, natural language processing and/or machine learning, and converse in ways that mimic human communication. Unlike other automated customer service solutions, such as IVRS systems that were widely disliked for their robotic nature, chatbots come closer to passing the Turing test, simulating a human conversational partner so convincingly that it can be difficult to sense one is chatting with a machine. British AI pioneer Alan Turing proposed a test in 1950 to determine whether machines could think. According to the Turing test, a computer demonstrates intelligence if a human interviewer, conversing with an unseen human and an unseen computer, cannot tell which is which. Although much work has been done in many of the subfields that fall under the AI umbrella, critics believe that no computer can truly pass the Turing test.

19th Extended Semantic Web Conference (ESWC), Heraklion 2022


The ESWC is a major venue for discussing the latest scientific results and technology innovations around semantic technologies, including knowledge graphs, web data, linked data and the semantic web. The goal of the Semantic Web is to create a Web of knowledge and services in which the semantics of content is made explicit and content is linked to both other content and services, allowing novel applications to combine content from heterogeneous sites in unforeseen ways and supporting enhanced matching between users' needs and content. This network of knowledge-based functionality weaves together a large network of human knowledge and makes this knowledge machine-processable, supporting intelligent behaviour by machines. Creating such an interlinked Web of knowledge, spanning unstructured text, structured data, multimedia content and services, requires the collaboration of many disciplines, including but not limited to: Artificial Intelligence, Natural Language Processing, Databases and Information Systems, Information Retrieval, Machine Learning, Multimedia, Distributed Systems, Social Networks, Web Engineering, and Web Science. For more information about the event, please visit the ESWC 2022 website.

La veille de la cybersécurité


An international team of around 1,000 largely academic volunteers has tried to break big tech's stranglehold on natural-language processing and reduce its harms. Trained with US$7-million-worth of publicly funded computing time, the BLOOM language model will rival in scale those made by firms such as Google and OpenAI, but will be open source. BLOOM will also be the first model of its scale to be multilingual. The collaboration, called BigScience, launched an early version of the model on 17 June, and hopes that it will ultimately help to reduce harmful outputs of artificial intelligence (AI) language systems. Models that recognize and generate language are increasingly used by big tech firms in applications from chatbots to translators, and can sound so eerily human that a Google engineer this month claimed that the firm's AI model was sentient (Google strongly denies that the AI possesses sentience).

Amazon's Alexa being tested to replicate voice of dead relatives


Amazon's Alexa might soon replicate the voice of family members - even if they're dead. The capability, unveiled at Amazon's Re:Mars conference in Las Vegas, is in development and would allow the virtual assistant to mimic the voice of a specific person based on less than a minute of provided recording. Rohit Prasad, senior vice president and head scientist for Alexa, said at the event Wednesday that the aim of the feature was to build greater trust in users' interactions with Alexa by adding more "human attributes of empathy and affect." "These attributes have become even more important during the ongoing pandemic when so many of us have lost ones that we love," Prasad said. "While AI can't eliminate that pain of loss, it can definitely make their memories last."

RadBERT: Adapting Transformer-based Language Models to Radiology


"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. The purpose of this study was to investigate whether tailoring a transformer-based language model to radiology is beneficial for radiology natural language processing (NLP) applications. This retrospective study presents RadBERT, a family of bidirectional encoder representations from transformers (BERT)-based language models adapted for radiology.

10 Smart Artificial Intelligence (AI) Stocks to Buy


Artificial intelligence (AI), for all of its futuristic elements, is not a new category – not by a long shot. The roots of the technology go all the way back to the late 1950s, when computers started to become much more powerful. But the proliferation of AI stocks hasn't come until much more recently, as artificial intelligence became commercially viable over the past decade or so. Artificial intelligence uses algorithms to detect patterns, which can help businesses create predictions that ultimately lower costs, improve productivity and increase revenues. As technological breakthroughs arise, AI models continue to scale.

Closing the Gender Data Gap in AI


The first computer algorithm is said to have been written in the early 1840s, for the prototype of the Analytical Engine, by Ada Lovelace, a mathematician dubbed a "female genius" in her posthumous biography. As the field of computing developed over the century following Lovelace's death, the typing work involved in creating computer programs was seen as "women's work," a role viewed as akin to switchboard operator or secretary. Women wrote the software, while men made the hardware -- the latter seen, at the time, as the more prestigious of the two tasks. And, during the Space Race of the 1950s and '60s, three Black women, known as "human computers," broke gender and racial barriers to help NASA send the first men into orbit.

My 2 cents on Google's LaMDA being sentient


AI models don't have a memory: when you converse with a chatbot one day, it won't remember what you said the next day. Chatbots (and language models) typically work by looking at "context", which, for you, basically means a few sentences in the past. The limit varies from model to model, but it's typically up to 1,000 words or so (I'm not sure what it is these days with super huge models, but there's always a limit). Even if a chatbot uses an RNN, it's still very limited (usually even more so), as RNNs struggle with long-term dependencies beyond a few hundred words. The point is that AI models have no idea what you said more than a few sentences back. Also, don't be confused by models like the Neural Turing Machine, which has a "working memory" (like RAM) but still no permanent memory (like a hard disk).
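The sliding context window described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular model's behavior: real models count tokens rather than words, and the 50-word limit and drop-oldest-turns strategy here are illustrative assumptions.

```python
# Minimal sketch of a fixed-size chat context window.
# The 50-word limit stands in for a real model's token limit.
CONTEXT_LIMIT_WORDS = 50

def build_context(turns, limit=CONTEXT_LIMIT_WORDS):
    """Keep only the most recent turns that fit within the word limit.

    Anything older is simply dropped -- the model never sees it,
    which is why a chatbot 'forgets' what you said earlier.
    """
    kept = []
    total = 0
    for turn in reversed(turns):
        words = len(turn.split())
        if total + words > limit:
            break
        kept.append(turn)
        total += words
    return list(reversed(kept))

conversation = [
    "User: My name is Ada and I live in London.",
    "Bot: Nice to meet you, Ada!",
    "User: " + "blah " * 60,   # one long rambling message
    "User: What is my name?",
]
context = build_context(conversation)
# The long message pushed the earlier turns out of the window,
# so the model has no access to the user's name when answering.
```

Once the long message overflows the budget, everything before it is gone; this is the mechanical reason the model "has no idea" what was said earlier.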

Text Generation using GPT-J with Hugging Face 🤗 and Segmind


Text generation is the task of automatically generating text using a machine learning system. A good text generation system can make it really hard to distinguish between human and machine-written text pieces.
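As a rough sketch of how such a system might be invoked with the Hugging Face `transformers` library: this assumes the public `EleutherAI/gpt-j-6B` checkpoint, whose weights are roughly 24 GB, and the sampling parameters shown are arbitrary illustrations, not recommended settings.

```python
def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Continue `prompt` with GPT-J via the Hugging Face pipeline API.

    Loading the model downloads ~24 GB of weights, so the import and
    model construction are kept inside the function.
    """
    from transformers import pipeline  # pip install transformers

    generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")
    # do_sample=True gives varied, human-like continuations;
    # temperature controls how adventurous the sampling is.
    result = generator(prompt, max_new_tokens=max_new_tokens,
                       do_sample=True, temperature=0.9)
    return result[0]["generated_text"]

# Example (requires the model to be downloaded first):
#   print(generate("Text generation is the task of"))
```

With sampling enabled, repeated calls on the same prompt yield different continuations, which is part of what makes machine-written text hard to distinguish from human text.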

What's new in Microsoft Azure's NLP AI services


If you want to begin using machine learning in your applications, Microsoft offers several different ways to jumpstart development. One key technology, Microsoft's Azure Cognitive Services, offers a set of managed machine learning services with pretrained models and REST API endpoints. These models cover most of the common use cases, from working with text and language to recognizing speech and images. Machine learning is still evolving, with new models being released and new hardware to help speed up inferencing, so Microsoft regularly updates its Cognitive Services. The latest major update, announced at Build 2022, features a lot of changes to its tools for working with text, bringing three different services under one umbrella.