Vaccinations alone won't have major impact on fourth wave of virus in Tokyo, study shows

The Japan Times

It's hoped that COVID-19 vaccines will be the silver bullet that eventually allows society to return to normal. But even an accelerated inoculation campaign is unlikely to have a major impact on what appears to be a growing fourth wave of infections in Tokyo, according to research by a Tsukuba University professor. Setsuya Kurahashi, a professor of systems management, conducted a simulation using artificial intelligence that looked at how the vaccine rollout would help prevent the spread of the coronavirus in Tokyo if new infections rise at the same pace as during the second wave last summer. Even if 70,000 vaccinations per day (0.5% of the capital's 14 million people) were administered, with priority given to people aged 60 and over, the capital would still see a fourth wave of infections peaking at 1,610 new cases on May 14, the study showed. The study also showed a fifth wave is expected to peak at 640 cases on Aug. 31.

Training the Untrained Eye with AI to Classify Fine Art


Beauty, it is said, resides in the eye of the beholder. What if that beholder is a machine learning model being trained to describe and classify fine works of art? That's what AI researchers at Zhejiang University of Technology in China are attempting to find out by comparing the ability of different models, trained on a growing list of image datasets, to classify artwork by genre and style. Whether these models can be trained to respond emotionally remains to be seen. Preliminary results from one study published earlier this month in a Public Library of Science journal highlighted the utility of convolutional neural networks (CNNs) for demanding tasks like art classification.

US leading race in artificial intelligence, China rising, EU lagging: survey


The United States is leading its rivals in the development and use of artificial intelligence, while China is rising quickly and the European Union is lagging, a research report showed Monday (25 January). The study by the Information Technology and Innovation Foundation assessed AI using 30 separate metrics, including human talent, research activity, commercial development, and investment in hardware and software. The United States leads with an overall score of 44.6 points on a 100-point scale, followed by China with 32 and the European Union with 23.3, the report, based on 2020 data, found. The researchers found the US leading in key areas such as investment in startups and research and development funding. But China has made strides in several areas and last year had more of the world's 500 most powerful supercomputers than any other nation -- 214, compared with 113 for the US and 91 for the EU.

An evidential classifier based on Dempster-Shafer theory and deep learning

We propose a new classifier based on Dempster-Shafer (DS) theory and a convolutional neural network (CNN) architecture for set-valued classification. In this classifier, called the evidential deep-learning classifier, convolutional and pooling layers first extract high-dimensional features from input data. The features are then converted into mass functions and aggregated by Dempster's rule in a DS layer. Finally, an expected utility layer performs set-valued classification based on mass functions. We propose an end-to-end learning strategy for jointly updating the network parameters. Additionally, an approach for selecting partial multi-class acts is proposed. Experiments on image recognition, signal processing, and semantic-relationship classification tasks demonstrate that the proposed combination of deep CNN, DS layer, and expected utility layer makes it possible to improve classification accuracy and to make cautious decisions by assigning confusing patterns to multi-class sets.
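The aggregation step the abstract names is Dempster's rule of combination. The sketch below is a minimal, toy illustration of that rule on two hand-picked mass functions over a two-class frame; the mass values and class names are illustrative, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule, normalizing out the conflict mass."""
    combined = {}
    conflict = 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:                      # compatible evidence: mass goes to the intersection
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:                          # disjoint focal sets: mass is conflict
            conflict += va * vb
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

# Toy example over two classes {cat, dog}:
A, B, AB = frozenset({"cat"}), frozenset({"dog"}), frozenset({"cat", "dog"})
m1 = {A: 0.6, AB: 0.4}   # evidence extracted from one feature
m2 = {B: 0.5, AB: 0.5}   # evidence extracted from another feature
m = dempster_combine(m1, m2)
```

Because some mass remains on the whole set {cat, dog}, a downstream expected-utility layer can assign a confusing input to the multi-class set rather than forcing a single label, which is the "cautious decision" behavior the abstract describes.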

An Experimental Review on Deep Learning Architectures for Time Series Forecasting

In recent years, deep learning techniques have outperformed traditional models in many machine learning tasks. Deep neural networks have been successfully applied to time series forecasting problems, a very important topic in data mining. They have proved to be an effective solution given their capacity to automatically learn the temporal dependencies present in time series. However, selecting the most suitable type of deep neural network and its parametrization is a complex task that requires considerable expertise. Therefore, there is a need for deeper studies of the suitability of existing architectures for different forecasting tasks. In this work, we address two main challenges: a comprehensive review of the latest work using deep learning for time series forecasting, and an experimental study comparing the performance of the most popular architectures. The comparison involves a thorough analysis of seven types of deep learning models in terms of accuracy and efficiency. We evaluate the rankings and distribution of results obtained with the proposed models under many different architecture configurations and training hyperparameters. The datasets used comprise more than 50,000 time series divided into 12 different forecasting problems. By training more than 38,000 models on these data, we provide the most extensive deep learning study for time series forecasting to date. Among all studied models, the results show that long short-term memory (LSTM) networks and convolutional neural networks (CNNs) are the best alternatives, with LSTMs obtaining the most accurate forecasts. CNNs achieve comparable performance with less variability of results under different parameter configurations, while also being more efficient.
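Whatever architecture is chosen (LSTM, CNN, or otherwise), deep forecasting models are typically trained on a series reframed as supervised (input window, target) pairs. A minimal sketch of that standard preprocessing step, with an illustrative toy series:

```python
def make_windows(series, input_len, horizon=1):
    """Frame a univariate series into supervised (window, target) pairs,
    the usual input format for deep time series forecasting models."""
    X, y = [], []
    for i in range(len(series) - input_len - horizon + 1):
        X.append(series[i:i + input_len])                     # input window
        y.append(series[i + input_len:i + input_len + horizon])  # forecast target
    return X, y

series = [1, 2, 3, 4, 5, 6]
X, y = make_windows(series, input_len=3, horizon=1)
# X: [[1, 2, 3], [2, 3, 4], [3, 4, 5]]   y: [[4], [5], [6]]
```

The `input_len` and `horizon` parameters here correspond to the past-window and forecast-horizon choices that, per the review, must be tuned alongside the architecture itself.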

Compacting Deep Neural Networks for Internet of Things: Methods and Applications

Deep Neural Networks (DNNs) have shown great success in completing complex tasks. However, DNNs inevitably incur high computational cost and storage consumption due to their complex hierarchical structures, hindering their wide deployment on Internet-of-Things (IoT) devices, which have limited computational capability and storage capacity. It is therefore necessary to investigate techniques for compacting DNNs. Despite tremendous advances in compacting DNNs, few surveys summarize these techniques, especially for IoT applications. Hence, this paper presents a comprehensive study of DNN-compaction techniques. We categorize them into three major types: 1) network model compression, 2) Knowledge Distillation (KD), and 3) modification of network structures. We also elaborate on the diversity of these approaches and make side-by-side comparisons. Moreover, we discuss the applications of compacted DNNs in various IoT scenarios and outline future directions.
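Of the three categories, Knowledge Distillation is the easiest to illustrate compactly: a small student network is trained to match a large teacher's temperature-softened output distribution. A minimal sketch of the classic distillation loss term; the logits and temperature below are illustrative, not from the survey.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax; T > 1 smooths the distribution,
    exposing the teacher's 'dark knowledge' about non-target classes."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs, the core
    term of knowledge distillation (scaled by T^2, as is conventional)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

loss_same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])  # student matches teacher
loss_diff = distillation_loss([0.0, 0.0, 0.0], [2.0, 0.5, -1.0])   # student still uniform
```

In practice this term is combined with the ordinary cross-entropy on hard labels; minimizing it transfers the teacher's behavior into a much smaller, IoT-deployable student.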

Bio-inspired algorithm detects early signs of breast cancer


A computer algorithm based on a biological process could be used to detect breast cancer more effectively, according to new research published in the International Journal of Innovative Computing and Applications. A team from India has improved on earlier bio-inspired algorithms to develop a particle swarm optimisation and firefly algorithm that boosts detection accuracy by up to 2 percent, taking it to as much as 97 percent. Moolchand Sharma and Shubbham Gupta of the Maharaja Agrasen Institute of Technology in New Delhi and Suman Deswal of the Deenbandhu Chhotu Ram University of Science and Technology in Murthal, Haryana, explain that breast cancer is common among women the world over, and that its mortality rate is the second-highest and rising year by year. Early detection is critical to timely intervention that can improve prognosis and reduce the number of women who die prematurely from this disease. The team points out that many different types of computer algorithms have been investigated in recent years with a view to automating the detection process from mammograms, improving true-positive rates, and lowering false-positive results in screening programs.
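The article doesn't detail the hybrid, but its particle-swarm component follows a standard recipe: each particle is pulled toward its personal best and the swarm's global best. A minimal 1-D sketch on a toy objective (all parameters and the objective are illustrative, not from the paper):

```python
import random

def pso_minimize(f, n_particles=10, iters=50, lo=-10.0, hi=10.0, seed=0):
    """Minimal 1-D particle swarm optimisation: particles track their
    personal best and are attracted toward the swarm's global best."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    v = [0.0] * n_particles                                # velocities
    pbest = x[:]                                           # personal bests
    gbest = min(x, key=f)                                  # global best
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] += v[i]
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
            if f(x[i]) < f(gbest):
                gbest = x[i]
    return gbest

best = pso_minimize(lambda x: (x - 3.0) ** 2)  # converges near x = 3
```

In a detection pipeline like the one described, the objective would instead score a candidate feature subset or classifier configuration by its accuracy on mammogram data.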

Evaluation of soccer team defense based on prediction models of ball recovery and being attacked

With the development of measurement technology, movement data from actual games in various sports have become available and are expected to be used for planning and evaluating tactics and strategy. Defense in team sports, in particular, is generally difficult to evaluate because of the lack of statistical data. Conventional evaluation methods based on score prediction are considered unreliable because they predict rare events over the entire game, and they struggle to evaluate the various plays leading up to a score. On the other hand, evaluation methods based on specific plays that lead to scoring, or on dominant regions, are sometimes unsuitable for evaluating the performance (e.g., goals scored) of players and teams. In this study, we propose a method to evaluate team defense from a comprehensive perspective related to team performance, based on the prediction of ball recovery and of being attacked, which occur more frequently than goals, using player actions and positional data of all players and the ball. Using data from 45 soccer matches, we examined the relationship between the proposed index and team performance in actual matches and throughout a season. Results show that the proposed classifiers predicted the true events more accurately than existing classifiers based on rare events (i.e., goals). The proposed index also had a moderate correlation with the long-term outcomes of the season. These results suggest that the proposed index may be a more reliable indicator than win/loss outcomes, which include accidental factors.

DAFAR: Defending against Adversaries by Feedback-Autoencoder Reconstruction

Deep learning has shown impressive performance on challenging perceptual tasks and has been widely used in software to provide intelligent services. However, researchers have found deep neural networks vulnerable to adversarial examples. Since then, many methods have been proposed to defend against adversarial inputs, but they are either attack-dependent or shown to be ineffective against new attacks. Moreover, most existing techniques have complicated structures or mechanisms that cause prohibitively high overhead or latency, making them impractical for real software. We propose DAFAR, a feedback framework that allows deep learning models to detect and purify adversarial examples with high effectiveness and universality and low space and time overhead. DAFAR has a simple structure, containing a victim model, a plug-in feedback network, and a detector. The key idea is to feed the high-level features from the victim model's feature-extraction layers into the feedback network to reconstruct the input; this data stream forms a feedback autoencoder. For strong attacks, it transforms an imperceptible attack on the victim model directly into an obvious reconstruction-error attack on the feedback autoencoder, which is much easier to detect; for weak attacks, the reformation process destroys the structure of adversarial examples. Experiments on the MNIST and CIFAR-10 datasets show that DAFAR is effective against popular and arguably the most advanced attacks without losing performance on legitimate samples, with high effectiveness and universality across attack methods and parameters.
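The detector side of this idea reduces to thresholding reconstruction error. The sketch below shows only that decision criterion; the `reconstruct` function is a hypothetical stand-in for the victim-features-to-feedback-network path (here, a trivial snap-to-binary reconstructor chosen so clean binary inputs reconstruct exactly), and the threshold is illustrative.

```python
def detect_adversarial(x, reconstruct, threshold):
    """DAFAR-style detection criterion (a sketch): flag the input as
    adversarial when the feedback autoencoder's reconstruction error
    exceeds a threshold calibrated on legitimate data."""
    x_hat = reconstruct(x)
    error = sum((a - b) ** 2 for a, b in zip(x, x_hat))  # squared L2 error
    return error > threshold, error

# Hypothetical stand-in reconstructor: snaps values to {0, 1}, so clean
# binary inputs reconstruct perfectly while perturbed ones do not.
reconstruct = lambda x: [float(round(v)) for v in x]

clean = [0.0, 1.0, 1.0, 0.0]
perturbed = [0.3, 0.7, 1.0, 0.0]   # small adversarial-style perturbation
flag_clean, _ = detect_adversarial(clean, reconstruct, threshold=0.05)
flag_adv, _ = detect_adversarial(perturbed, reconstruct, threshold=0.05)
```

This captures the abstract's claim in miniature: a perturbation that is small in input space becomes a large, easily thresholded reconstruction error once pushed through the feedback path.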

Code Completion by Modeling Flattened Abstract Syntax Trees as Graphs

Code completion has become an essential component of integrated development environments. Contemporary code completion methods rely on the abstract syntax tree (AST) to generate syntactically correct code. However, they cannot fully capture the sequential and repetitive patterns of writing code and the structural information of the AST. To alleviate these problems, we propose a new code completion approach named CCAG, which models the flattened sequence of a partial AST as an AST graph. CCAG uses our proposed AST Graph Attention Block to capture different dependencies in the AST graph for representation learning in code completion. The sub-tasks of code completion are optimized via multi-task learning in CCAG, and the task balance is achieved automatically using uncertainty, without the need to tune task weights. The experimental results show that CCAG outperforms state-of-the-art approaches and is able to provide intelligent code completion.
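The abstract says task balance is achieved "using uncertainty" rather than hand-tuned weights. Assuming this refers to the common homoscedastic-uncertainty weighting scheme (an assumption on my part, not stated in the abstract), the combined loss can be sketched as follows; the task losses and log-variances below are illustrative.

```python
import math

def multitask_loss(task_losses, log_vars):
    """Uncertainty-based multi-task weighting (a common scheme, assumed
    here): each task loss L_i is scaled by exp(-s_i) and regularised by
    s_i, where s_i is a learned log-variance, so the effective task
    weights adapt during training instead of being tuned by hand."""
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

# Two hypothetical code-completion sub-tasks (e.g. predicting the next
# AST node's type vs. its value); a larger log-variance down-weights
# the noisier second task:
total = multitask_loss([1.2, 3.0], [0.0, 1.0])
```

The `+ s` term keeps the model from trivially inflating all log-variances to zero out every loss, which is why no manual weight search is needed.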