Overview


Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review-Part I: Evolution and Recent Trends

#artificialintelligence

Deep learning (DL) has had a great influence on large parts of science and has increasingly established itself as an adaptive method for new challenges in the field of Earth observation (EO). Nevertheless, the entry barriers for EO researchers are high because the field is dense and rapidly developing, driven mainly by advances in computer vision (CV). To lower these barriers, this review gives an overview of the evolution of DL with a focus on image segmentation and object detection with convolutional neural networks (CNNs). The survey starts in 2012, when a CNN set new standards in image recognition, and extends to late 2019. Along the way, we highlight the connections between the most important CNN architectures and cornerstone ideas from CV in order to make the evaluation of modern DL models easier.
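For readers approaching this area for the first time, the sketch below illustrates the kind of model the review surveys: a pretrained semantic-segmentation CNN from torchvision applied to a single image. It is a minimal sketch, not code from the review; the input file name and the use of default pretrained weights are assumptions for the example.

```python
# Minimal sketch: per-pixel classification with a pretrained segmentation CNN.
# The image path "scene_tile.png" is a hypothetical placeholder.
import torch
from torchvision.io import read_image
from torchvision.models.segmentation import deeplabv3_resnet50
from torchvision.transforms.functional import convert_image_dtype

model = deeplabv3_resnet50(weights="DEFAULT")  # weights pretrained on natural images
model.eval()

image = convert_image_dtype(read_image("scene_tile.png"), torch.float)  # [C, H, W] in [0, 1]
with torch.no_grad():
    logits = model(image.unsqueeze(0))["out"]   # [1, num_classes, H, W]

class_map = logits.argmax(dim=1).squeeze(0)     # per-pixel class indices
print(class_map.shape, class_map.unique())
```

In practice, EO imagery typically calls for fine-tuning such a model on domain-specific data rather than relying on weights trained on natural images.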


First workshop on Resources for African Indigenous Languages (RAIL)

VideoLectures.NET

The South African Centre for Digital Language Resources (SADiLaR) is organizing a workshop (originally expected to be held at the LREC 2020 conference in Marseille, France) in the field of African Indigenous Language Resources. This workshop aims to bring together researchers who are interested in showcasing their research and thereby boosting the field of African indigenous languages. It provides an overview of the current state of the art and emphasizes the availability of African indigenous language resources, including both data and tools. Additionally, it allows for information sharing among researchers interested in African indigenous languages as well as for starting discussions on improving the quality and availability of the resources. Many African indigenous languages currently have no or very limited resources available and, additionally, they are often structurally quite different from better-resourced languages, requiring the development and use of specialized techniques.


Speech Analytics Market Future Aspect Analysis and Current Trends by 2017 to 2025 – Distinct Analysis & Reports

#artificialintelligence

Speech analytics technologies are used to extract information at customer contact points across various channels such as voice, chat, email, social channels, and surveys. Across the world, voice and phone interaction is the most common mode of communication used by consumers. Therefore, speech analytics is used in Voice User Interfaces (VUI) to derive insights at different contact points. Currently, organizations across various industry sectors are undertaking programs for transcribing and analyzing customer and organizational media, mainly to make informed decisions about customer and business management with the help of speech and text intelligence.


Tracking Progress in Natural Language Processing

#artificialintelligence

This document aims to track the progress in Natural Language Processing (NLP) and give an overview of the state-of-the-art (SOTA) across the most common NLP tasks and their corresponding datasets. It aims to cover both traditional and core NLP tasks such as dependency parsing and part-of-speech tagging as well as more recent ones such as reading comprehension and natural language inference. The main objective is to provide the reader with a quick overview of benchmark datasets and the state-of-the-art for their task of interest, which serves as a stepping stone for further research. To this end, if there is a place where results for a task are already published and regularly maintained, such as a public leaderboard, the reader will be pointed there. If you want to find this document again in the future, just go to nlpprogress.com


Artificial Intelligence in Cancer: How Is It Used in Practice? - Cancer Therapy Advisor

#artificialintelligence

Artificial intelligence (AI) is a branch of computer science that develops entities, such as software programs, that can intelligently perform tasks or make decisions.1 The development and use of AI in health care is not new; the first ideas that created the foundation of AI were documented in 1956, and automated clinical tools developed between the 1970s and 1990s are now in routine use. These tools, such as the automated interpretation of electrocardiograms, may seem simple, but are considered AI. Today, AI is being harnessed to help with "big" problems in medicine, such as processing and interpreting large amounts of data in research and in clinical settings, including reading imaging or results from broad genetic-testing panels.1 In oncology, AI is not yet being used broadly, but its use is being studied in several areas.


Mask R-CNN - Practical Deep Learning Segmentation in 1 hour

#artificialintelligence

The Udemy course "Mask R-CNN – Practical Deep Learning Segmentation in 1 hour" by Augmented Startups and Geeky Bee AI Private Limited includes 6 hours of on-demand video, 5 articles, 80 downloadable resources, full lifetime access, access on mobile and TV, assignments, and a certificate. What you'll learn: what instance segmentation is, how to take object segmentation further using Mask R-CNN, how to multiply your data using data augmentation, how to use AI to label your dataset for you, and how to train your own custom Mask R-CNN from scratch. This is a practice-focused course: it provides an overview of Mask R-CNN theory, but concentrates on helping you get Mask R-CNN working step by step.
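As a rough, course-independent illustration of what instance segmentation with Mask R-CNN involves, the sketch below runs torchvision's pretrained Mask R-CNN on one image; the file name and confidence threshold are assumptions for the example, not material from the course.

```python
# Minimal sketch: instance segmentation with torchvision's pretrained Mask R-CNN.
# "example.jpg" and the 0.7 score threshold are illustrative choices.
import torch
from torchvision.io import read_image
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained weights
model.eval()

image = convert_image_dtype(read_image("example.jpg"), torch.float)
with torch.no_grad():
    output = model([image])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

confident = output["scores"] > 0.7                      # keep high-confidence instances
masks = (output["masks"][confident] > 0.5).squeeze(1)   # boolean mask per instance
print(masks.shape, output["labels"][confident])
```

Training a custom Mask R-CNN, as the course promises, additionally requires an annotated dataset and a fine-tuning loop on top of a setup like this.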


2020 VizWiz Grand Challenge Workshop – VizWiz

#artificialintelligence

Our goal for this workshop is to educate researchers about the technological needs of people with vision impairments while empowering researchers to improve algorithms to meet these needs. A key component of this event will be to track progress on a new dataset challenge, where the task is to caption images taken by people who are blind. Winners of this challenge will receive awards sponsored by Microsoft. The second key component of this event will be a discussion of current research and application issues, with invited speakers from both academia and industry sharing their experiences in building today's state-of-the-art assistive technologies as well as designing next-generation tools. We invite submissions of results from algorithms for the image captioning challenge task.
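For context on what the challenge task asks of participants (not an official baseline), a pretrained image-captioning model can be run on a single photo as sketched below; the model checkpoint and file path are assumptions for the example.

```python
# Minimal sketch: generating a caption for one image with a pretrained BLIP model.
# "photo.jpg" is a hypothetical input; the checkpoint is a publicly available one.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

Images taken by blind photographers often differ from curated web photos in blur, framing, and lighting, so an off-the-shelf captioner like this usually needs adaptation to perform well on such data.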


Artificial Intelligence in Medicine: An Overview - SaveDelete

#artificialintelligence

Artificial intelligence (AI) is the term used to describe the use of computers and technology to simulate intelligent behavior and critical thinking comparable to a human being. John McCarthy first defined the term AI in 1956 as the science and engineering of making intelligent machines. The following article gives a broad overview of AI in medicine, dealing with the terms and concepts as well as the current and future applications of AI. It aims to develop knowledge and familiarity with AI among primary care physicians. AI promises to change the practice of medicine in hitherto unknown ways.


Electronics

#artificialintelligence

Various artificial intelligence (AI) technologies have pervaded daily life. For instance, speech recognition has enabled users to interact with a system using their voice, and recent advances in computer vision have made self-driving cars commercially available. However, if these AI-based approaches are not carefully designed, people with different abilities (e.g., vision loss, a weak technical background) may not receive their full benefits. This Special Issue focuses on bridging or closing the information gap for people with disabilities and special needs. Manuscripts should be submitted online at www.mdpi.com


Data Science in Manufacturing: An Overview

#artificialintelligence

In the last couple of years, data science has seen immense uptake in industrial applications across the board. Today, we can see data science applied in health care, customer service, government, cybersecurity, mechanical and aerospace engineering, and other industrial settings. Among these, manufacturing has gained particular prominence in pursuit of the simple goal of Just-in-Time (JIT) production. In the last 100 years, manufacturing has gone through four major industrial revolutions. Currently, we are going through the fourth Industrial Revolution, in which data from machines, the environment, and products are harvested to get closer to that simple goal of Just-in-Time: "Making the right products in the right quantities at the right time."