architecture


Understanding new developments in LSTM framework, Part 1 (Deep Learning)

#artificialintelligence

Abstract: The use of sensors available through smart devices has pervaded everyday life in several applications, including human activity monitoring, healthcare, and social networks. In this study, we focus on the use of smartwatch accelerometer sensors to recognize eating activity. More specifically, we collected sensor data from 10 participants while they consumed pizza.

Abstract: Dynamic wireless charging (DWC) is an emerging technology that allows electric vehicles (EVs) to be wirelessly charged while in motion. It is gaining significant momentum as it can potentially address the range limitation issue for EVs.


Why does successful AI require the right data architecture?

#artificialintelligence

Artificial intelligence promises cost savings, a competitive edge, and a foothold into the future for businesses. While AI adoption is on the rise, the level of investment is often not matched by the monetary returns. The right data architecture is essential to AI success, and this article will show you how to build it. Only 26% of AI projects are currently implemented in widespread production within an organization.


Building the Vision Transformer From Scratch

#artificialintelligence

Though originally developed for NLP, the transformer architecture is gradually making its way into many different areas of deep learning, including image classification and labeling and even reinforcement learning. It's an amazingly versatile architecture and very powerful at representing whatever it's being used to model. As part of my effort to understand fundamental architectures and their applications better, I decided to implement the vision transformer (ViT) from the paper¹ directly, without referencing the official codebase. In this post, I'll explain how it works (and how my version is implemented). I'll start with a brief review of how transformers work, but I won't get too deep into the weeds here since there are many other excellent guides to transformers (see The Illustrated Transformer for my favorite one).
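To make the core ViT idea concrete, here is a minimal PyTorch sketch of my own (an illustration, not the post's code or the official implementation): split the image into patches, linearly embed them, prepend a class token, add position embeddings, and run a standard transformer encoder. Layer sizes follow the ViT-Base defaults but are otherwise arbitrary.

```python
# Minimal ViT sketch: patch embedding + [CLS] token + transformer encoder.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=768, depth=12, heads=12, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding as a strided convolution: one linear projection per patch.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, dim_feedforward=4 * dim,
                                           activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                      # x: (B, 3, H, W)
        x = self.patch_embed(x)                # (B, dim, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)       # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])              # classify from the [CLS] token

logits = TinyViT()(torch.randn(2, 3, 224, 224))  # -> shape (2, 1000)
```

Note that the built-in encoder layer here uses post-layer-norm, whereas the paper's ViT uses pre-norm; the sketch is only meant to show the data flow from pixels to class logits.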


A new generative adversarial network for medical images super resolution - Scientific Reports

#artificialintelligence

For medical image analysis, there is always an immense need for rich detail in an image. Typically, diagnosis is served best if the fine details in the image are retained and the image is available in high resolution. In medical imaging, acquiring high-resolution images is challenging and costly, as it requires sophisticated and expensive instruments and trained human resources, and often causes operational delays. Deep learning based super-resolution techniques can help us extract rich details from a low-resolution image acquired using existing devices. In this paper, we propose a new Generative Adversarial Network (GAN) based architecture for medical images, which maps low-resolution medical images to high-resolution images. The proposed architecture works in three steps. In the first step, we use a multi-path architecture to extract shallow features at multiple scales instead of a single scale. In the second step, we use a ResNet34 architecture to extract deep features and upscale the feature map by a factor of two. In the third step, we extract features of the upscaled version of the image using a residual connection-based mini-CNN and again upscale the feature map by a factor of two. This progressive upscaling overcomes the limitation of previous methods in generating true colors. Finally, we use a reconstruction convolutional layer to map the upscaled features back to a high-resolution image. Our addition of an extra loss term helps in overcoming large errors, thus generating more realistic and smooth images. We evaluate the proposed architecture on four different medical image modalities: (1) the DRIVE and STARE datasets of retinal fundoscopy images, (2) the BraTS dataset of brain MRI, (3) the ISIC skin cancer dataset of dermoscopy images, and (4) the CAMUS dataset of cardiac ultrasound images. The proposed architecture achieves superior accuracy compared to other state-of-the-art super-resolution architectures.
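The key structural idea in the abstract is progressive upscaling: two ×2 stages rather than one ×4 jump, with a residual deep-feature path in between. The following PyTorch sketch illustrates that pattern only; it is my own simplification, not the authors' architecture, and the layer sizes and block counts are arbitrary.

```python
# Rough sketch of a progressive (x2 then x2) super-resolution generator.
import torch
import torch.nn as nn

def upscale_block(channels):
    # Sub-pixel convolution: expand to 4x channels, then PixelShuffle
    # rearranges them into a feature map with twice the spatial size.
    return nn.Sequential(
        nn.Conv2d(channels, channels * 4, kernel_size=3, padding=1),
        nn.PixelShuffle(2),
        nn.PReLU(),
    )

class ProgressiveSRGenerator(nn.Module):
    def __init__(self, in_channels=3, features=64):
        super().__init__()
        self.shallow = nn.Conv2d(in_channels, features, kernel_size=3, padding=1)
        self.deep = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(features, features, 3, padding=1), nn.PReLU())
            for _ in range(4)
        ])
        self.up1 = upscale_block(features)   # first x2 stage
        self.up2 = upscale_block(features)   # second x2 stage
        self.reconstruct = nn.Conv2d(features, in_channels, kernel_size=3, padding=1)

    def forward(self, x):
        feats = self.shallow(x)
        feats = feats + self.deep(feats)     # residual deep-feature path
        feats = self.up2(self.up1(feats))    # progressive 4x upscaling
        return self.reconstruct(feats)       # map features back to an image

sr = ProgressiveSRGenerator()(torch.randn(1, 3, 64, 64))  # -> (1, 3, 256, 256)
```

In a GAN setting this generator would be trained against a discriminator plus the reconstruction and extra loss terms the paper mentions; those pieces are omitted here for brevity.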


Deep attentive variational inference

AIHub

Figure 1: Overview of a local variational layer (left) and an attentive variational layer (right) proposed in this post. Attention blocks in the variational layer are responsible for capturing long-range statistical dependencies in the latent space of the hierarchy. Generative models are a class of machine learning models that can generate novel data samples such as fictional celebrity faces, digital artwork, and scenic images. Currently, the most powerful generative models are deep probabilistic models. This class of models uses deep neural networks to express statistical hypotheses about the data-generation process and combines them with latent variable models, which augment the observed data with latent (unobserved) variables in order to better characterize the procedure that generates the data of interest.
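As standard background (not a result specific to this post), such latent variable models are typically trained by maximizing the evidence lower bound (ELBO) on the log-likelihood of the observed data x, with an approximate posterior q over the latent variables z:

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
\;-\; D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\|\,p_\theta(z)\right)
```

In a hierarchical model, z is split into layers z_1, ..., z_L; the attention blocks shown in Figure 1 are what let a given variational layer condition on statistically relevant layers far away in that hierarchy, rather than only on its immediate neighbors.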


Language Models

Communications of the ACM

A transformer has strong language-representation ability; a very large corpus contains rich language expressions (and such unlabeled data can be obtained easily); and training large-scale deep learning models has become more efficient. Therefore, pre-trained language models can effectively represent a language's lexical, syntactic, and semantic features. Pre-trained language models, such as BERT and the GPTs (GPT-1, GPT-2, and GPT-3), have become the core technologies of current NLP. Applications of pre-trained language models have brought great success to NLP. "Fine-tuned" BERT has outperformed humans in accuracy on language-understanding tasks, such as reading comprehension.8,17 "Fine-tuned" GPT-3 has also reached an astonishing level of fluency in text-generation tasks.3
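For a sense of what "fine-tuning" a pre-trained model involves in practice, here is a minimal sketch assuming the Hugging Face transformers and datasets libraries and a generic sentiment-classification dataset; it is an illustration, not code from the article, and the dataset and hyperparameters are placeholders.

```python
# Minimal sketch: fine-tune a pre-trained BERT checkpoint on a labeled text task.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

dataset = load_dataset("glue", "sst2")  # any labeled text dataset works

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="bert-finetuned", per_device_train_batch_size=16,
                         num_train_epochs=3, learning_rate=2e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"], eval_dataset=encoded["validation"])
trainer.train()
```

The pre-trained weights supply the general lexical, syntactic, and semantic knowledge; fine-tuning only adapts a small classification head and the existing layers to the downstream task.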


You.com is taking on Google with AI, apps, privacy, and personalization

ZDNet

Richard Socher: "We'll never be as bad as Google. We'll never sell your data." Are you happy with Google search? Regardless of how you answer this question, chances are you still use it. With the notable exceptions of China and Russia, where Baidu and Yandex lead, respectively, Google's market share in search is over 90% worldwide.


How artificial intelligence (AI) will help Autodesk expand in the metaverse

#artificialintelligence

For the 40-year-old Autodesk -- known for its design and creation software (including AutoCAD) used by professionals in industries including architecture, engineering, construction, manufacturing, and entertainment -- artificial intelligence (AI) has become a must to help boost creativity and collaboration. "A common theme is helping the designer," said Tonya Custis, director of artificial intelligence research at Autodesk, whose team includes 15 AI research scientists based in San Francisco, Toronto, and London. But AI will also help Autodesk expand in the metaverse. According to Custis, Autodesk's use of AI is also helping to tackle challenges around "geometry understanding" -- helping to contextualize the geometric world around us -- which will be "super-important" as the metaverse expands, in terms of speeding up animation and CGI processes, as well as in architecture and engineering.


What Is Google LaMDA & Why Did Someone Believe It's Sentient?

#artificialintelligence

LaMDA has been in the news after a Google engineer claimed it was sentient because its answers allegedly hint that it understands what it is. The engineer also suggested that LaMDA communicates that it has fears, much like a human does. What is LaMDA, and why are some under the impression that it can achieve consciousness? LaMDA is a language model. Fundamentally, it is a mathematical function (or a statistical tool) that assigns probabilities to the possible next words in a sequence.
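A toy sketch (not LaMDA itself, whose internals are vastly larger) of what that mathematical function looks like: the words so far are mapped to scores over a vocabulary, and a softmax turns those scores into a probability distribution over the next word. The vocabulary and scores below are invented purely for illustration.

```python
# Toy next-word prediction: context -> probability distribution over a vocabulary.
import math

vocabulary = ["the", "cat", "sat", "mat", "dog"]

def next_word_distribution(context):
    # Stand-in for a trained model: returns unnormalized scores (logits).
    logits = [1.0, 0.5, 2.5, 0.2, 0.1] if context[-1] == "cat" else [2.0, 1.0, 0.3, 0.1, 0.4]
    exp_scores = [math.exp(s) for s in logits]
    total = sum(exp_scores)
    return {w: e / total for w, e in zip(vocabulary, exp_scores)}  # softmax

probs = next_word_distribution(["the", "cat"])
print(max(probs, key=probs.get), probs)  # the model's most likely next word
```

However fluent the output, the mechanism is the same: choosing statistically likely continuations, which is why fluency alone is not evidence of sentience.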


Charting a New Course of Neural Networks with Transformers - RTInsights

#artificialintelligence

A "transformer model" is a neural network architecture consisting of transformer layers capable of modeling long-range sequential dependencies that are suited to leveraging modern computing hardware to reduce the time to train models. State-of-the-art machine learning and artificial intelligence (AI) systems have achieved significant technological advancements in recent years alongside the technology's growing interest and widespread demand. We've seen the general hype around AI fluctuate with media cycles and new product developments, with the buzz of implementing AI for the sake of implementing it wearing off as companies strive to demonstrate its positive impact on business--emphasizing AI's ability to augment, not replace. Emerging now is the concept of transformer-based models. There is speculation surrounding whether transformers, which have gained considerable traction in natural language processing (NLP), will be positioned to "take over" AI, leaving many to wonder what this approach can achieve and how it could transform the pace and direction of technology.