Artificial intelligence on the edge


Many of us may not even understand exactly where or what the Cloud is. Yet much of the data and many of the programs that control our lives reside on this Cloud of distant computer servers, with the instructions that run our devices arriving over the Internet. As the prevalence of artificial intelligence (AI)-driven devices grows, researchers would like to bring some of that decision-making back to our own devices. WSU researchers have developed a novel framework to run AI algorithms more efficiently on mobile platforms and other portable devices. They presented their most recent work at the 2020 Design Automation Conference and the 2020 International Conference on Computer Aided Design.

How AI is revolutionizing healthcare


AI applications in healthcare can literally change patients' lives, improving diagnostics and treatment and helping patients and healthcare providers make informed decisions quickly. The global healthcare AI market (the total value of products and services sold) was valued at $2.4 billion in 2019 and is projected to reach $31.02 billion by 2025. During the COVID-19 pandemic, AI is being leveraged to identify and remove virus-related misinformation on social media. AI is also helping scientists expedite vaccine development, track the virus, and understand individual and population risk, among other applications. Companies such as Microsoft, which recently stated it will dedicate $20 million to advancing the use of artificial intelligence in COVID-19 research, recognize the need for and extraordinary potential of AI in healthcare.

DFTerNet: Towards 2-bit Dynamic Fusion Networks for Accurate Human Activity Recognition

Deep Convolutional Neural Networks (DCNNs) are currently popular in human activity recognition applications. However, many research achievements cannot be applied in practice on portable devices, such as modern AI-driven, sensor-based games. DCNNs are typically resource-intensive and too large to deploy on portable devices, which limits the practical application of complex activity detection. In addition, since portable devices do not possess high-performance Graphics Processing Units (GPUs), the Action Game (ACT) experience sees hardly any improvement. Moreover, to handle multi-sensor collaboration, previous human activity recognition models have typically treated the representations from different sensor signal sources equally, yet distinct types of activities call for different fusion strategies. In this paper, a novel scheme is proposed for training 2-bit Convolutional Neural Networks with weights and activations constrained to {-0.5, 0, 0.5}, taking into account the correlation between the different sensor signal sources and the activity types. This model, which we refer to as DFTerNet, aims at producing more reliable inference and better trade-offs for practical applications. The basic idea is to apply quantization of weights and activations directly to pre-trained filter banks and to adopt dynamic fusion strategies for different activity types. Experiments demonstrate that the dynamic fusion strategy can exceed the baseline model's performance by up to ~5% on activity recognition benchmarks such as the OPPORTUNITY and PAMAP2 datasets. Using the proposed quantization method, we achieve performance close to that of the full-precision counterpart; these results were also verified on the UniMiB-SHAR dataset. In addition, the proposed method achieves ~9x acceleration on CPUs and ~11x memory savings.
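To make the two ingredients concrete, here is a minimal NumPy sketch, an illustration under assumed defaults rather than the authors' exact quantizer or fusion rule: `ternarize` projects float weights onto the ternary set {-0.5, 0, 0.5} using a hypothetical magnitude threshold, and `dynamic_fusion` combines per-sensor feature vectors with softmax weights that, in the paper's setting, would depend on the activity type.

```python
import numpy as np

def ternarize(w, threshold=0.25):
    """Project a float weight array onto {-0.5, 0, 0.5}.

    Values with magnitude below `threshold` are zeroed; the rest are
    mapped to +/-0.5 by sign. The threshold is a made-up hyperparameter
    for illustration, not the paper's quantization rule.
    """
    return np.where(np.abs(w) < threshold, 0.0, 0.5 * np.sign(w))

def dynamic_fusion(features, scores):
    """Fuse per-sensor feature vectors with softmax weights.

    `features` has shape (num_sensors, feature_dim); `scores` is one
    scalar per sensor (in DFTerNet's spirit, these would vary with the
    activity type instead of being fixed across all activities).
    """
    weights = np.exp(scores - scores.max())   # stable softmax
    weights /= weights.sum()
    return (weights[:, None] * features).sum(axis=0)
```

With equal scores the fusion reduces to a plain average of the sensor features; skewing the scores toward one sensor lets a given activity type rely more heavily on the most informative signal source.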