Goto

Collaborating Authors

 Doermann, David


A Review of Recent Advances of Binary Neural Networks for Edge Computing

arXiv.org Artificial Intelligence

Abstract--Edge computing is promising to become one of the next hottest topics in artificial intelligence because it benefits various evolving domains such as real-time unmanned aerial systems, industrial applications, and the demand for privacy protection. This paper reviews recent advances in binary neural network (BNN) and 1-bit CNN technologies that are well suited to front-end, edge-based computing. We introduce and summarize existing work and classify it based on gradient approximation, quantization, architecture, loss functions, optimization methods, and binary neural architecture search. We also introduce applications in the areas of computer vision and speech recognition and discuss future applications for edge computing. With the rapid development of information technology, cloud computing with centralized data processing can neither meet the needs of applications that require the processing of massive amounts of data nor be used effectively when privacy requires the data to remain at the source. Edge computing has therefore become an alternative for handling data from front-end or embedded devices. To better review methods for improving the performance of binary neural networks, we consider six aspects: gradient approximation, quantization, structural design, loss design, optimization, and binary neural architecture search. Finally, we also review object detection, object tracking, and audio analysis applications.
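
To make the binarization and gradient-approximation ideas surveyed above concrete, the following minimal PyTorch sketch binarizes a linear layer's weights to {-1, +1} with a sign function and backpropagates through it with a straight-through estimator (STE). The names BinarizeSTE and BinaryLinear are illustrative placeholders and are not taken from any specific paper covered by the review.

import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE) gradient."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # values in {-1, 0, +1}; exact zeros are rare in practice

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped identity: pass the gradient through only where |x| <= 1.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

class BinaryLinear(torch.nn.Linear):
    """Linear layer whose weights are binarized on the forward pass."""

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return torch.nn.functional.linear(x, w_bin, self.bias)

# Usage: full-precision weights are kept for the optimizer; only the
# forward pass sees the 1-bit copy.
layer = BinaryLinear(16, 8)
out = layer(torch.randn(4, 16))

Keeping a latent full-precision weight tensor and binarizing it only on the forward pass is a common design choice that the gradient-approximation methods discussed in the survey refine in various ways.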


A random forest system combination approach for error detection in digital dictionaries

arXiv.org Machine Learning

When digitizing a print bilingual dictionary, whether via optical character recognition or manual entry, it is inevitable that errors are introduced into the electronic version that is created. We investigate automating the process of detecting errors in an XML representation of a digitized print dictionary using a hybrid approach that combines rule-based, feature-based, and language model-based methods. We investigate combining these methods and show that using random forests is a promising approach. We find that, in isolation, unsupervised methods rival the performance of supervised methods. Random forests typically require training data, so we investigate how we can apply them to combine individual base methods that are themselves unsupervised without requiring large amounts of training data. Experiments reveal empirically that a relatively small amount of data is sufficient and can potentially be further reduced through specific selection criteria.
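
As a hedged sketch of this system-combination idea, the snippet below feeds the scores of several unsupervised base detectors into a scikit-learn random forest fitted on a small labeled subset. The data and the three score columns are synthetic stand-ins, not the paper's actual features or detectors.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-ins for unsupervised base methods: each detector emits one score per
# dictionary entry (e.g. rule violations, feature-based flags, language-model
# scores). A real system would compute these from the XML entries.
n_entries = 500
scores = rng.normal(size=(n_entries, 3))           # one column per base method
labels = (scores.mean(axis=1) > 0.5).astype(int)   # synthetic "is-error" labels

# Fit the combiner on a small labeled subset, then score the remaining entries.
train, test = slice(0, 100), slice(100, None)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(scores[train], labels[train])
predicted_errors = forest.predict(scores[test])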