CICO is a global communication agency.

#artificialintelligence

CICO, a next-generation organization in the MarCom industry, is a leading global business communication agency that partners with many of the world's largest and emerging businesses and organizations, helping them evolve, promote, and protect their brands and reputations. We deliver innovative cognitive communication solutions across industries, with tangible outcomes that contribute to a greater world. CICO designs and implements creative, inspiring, and effective advertising, communication platforms, and conversation models for your particular business, entire organization, or government. Shaping your communication correctly today is essential in an environment of proliferating media and a faster world where confusion and uncertainty are facts. Today, more than ever, the world needs the right voice. Follow us to attain your communication outcomes and leadership through our astute methodologies. CICO works to the Highest Strategic Standards (HSS) at the most affordable price on the market.


RPN: A Residual Pooling Network for Efficient Federated Learning

arXiv.org Machine Learning

Federated learning is a new machine learning framework that enables different parties to collaboratively train a model while protecting data privacy and security. Due to model complexity, network unreliability, and connection instability, communication cost has become a major bottleneck for applying federated learning to real-world applications. Existing strategies either require manual tuning of hyper-parameters or break the original process into multiple steps, which makes end-to-end implementation difficult. In this paper, we propose a novel compression strategy called Residual Pooling Network (RPN). Our experiments show that RPN not only reduces data transmission effectively but also achieves almost the same performance as standard federated learning. Our new approach runs as an end-to-end procedure, so it can be readily applied to any CNN-based model training scenario to improve communication efficiency, making it easy to deploy in real-world applications without human intervention.
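
The abstract only sketches the idea at a high level; below is a minimal, hypothetical illustration of residual pooling for one convolutional layer, assuming a PyTorch client that uploads an average-pooled weight residual which the server upsamples and applies. The function names, pooling kernel, and nearest-neighbor restoration are illustrative assumptions, not details from the paper.

```python
# Sketch of residual pooling for a federated client (assumption: RPN pools
# the residual between local and global CNN weights before upload; the exact
# pooling/restoration rules are not given in the abstract).
import torch
import torch.nn.functional as F

def compress_residual(local_w: torch.Tensor, global_w: torch.Tensor, k: int = 2):
    """Average-pool the weight residual to cut upload size by ~k**2."""
    residual = local_w - global_w               # what actually changed locally
    c_out, c_in, h, w = residual.shape          # conv kernel layout
    flat = residual.view(1, c_out * c_in, h, w)
    pooled = F.avg_pool2d(flat, kernel_size=k)  # lossy, much smaller tensor
    return pooled, residual.shape

def decompress_residual(pooled: torch.Tensor, shape, k: int = 2):
    """Server side: upsample the pooled residual back to the original shape."""
    c_out, c_in, h, w = shape
    up = F.interpolate(pooled, size=(h, w), mode="nearest")
    return up.view(c_out, c_in, h, w)

# Usage: the server applies the restored residual to the global weights.
global_w = torch.randn(16, 8, 4, 4)
local_w = global_w + 0.01 * torch.randn_like(global_w)
pooled, shape = compress_residual(local_w, global_w)
global_w += decompress_residual(pooled, shape)
```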


Semi-supervised Variational Temporal Convolutional Network for IoT Communication Multi-anomaly Detection

arXiv.org Artificial Intelligence

The consumer Internet of Things (IoT) has developed rapidly in recent years. IoT devices are deployed en masse to build huge communication networks, but in reality these devices are insecure, which means the communication network is exposed to attackers. Moreover, the IoT communication network also faces a variety of sudden errors, leaving it vulnerable both to attacker threats and to system failures. This severe situation motivates the development of new techniques to automatically detect multiple kinds of anomalies. In this paper, we propose SS-VTCN, a semi-supervised network for IoT multi-anomaly detection that works effectively on IoT communication networks. SS-VTCN is designed to capture the normal patterns of IoT traffic data, whether labeled or not, by learning their representations with key techniques such as Variational Autoencoders and Temporal Convolutional Networks. The network uses the encoded data to predict a preliminary result and reconstructs the input data to determine anomalies from the learned representations. Extensive evaluation experiments on a benchmark dataset and a real consumer smart-home dataset demonstrate that SS-VTCN is better suited to this task than supervised and unsupervised methods and outperforms other state-of-the-art semi-supervised methods.
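
As a rough illustration of the reconstruction-based detection the abstract describes, the sketch below combines a small dilated-convolution encoder (standing in for the TCN) with a VAE reparameterization and scores traffic windows by reconstruction error. The layer sizes, loss terms, and the semi-supervised coupling of labeled and unlabeled data are assumptions; the paper's actual architecture is not given in the abstract.

```python
# Sketch of the VAE + temporal-convolution idea (assumption: only shows
# reconstruction-based scoring; SS-VTCN's exact design is not in the abstract).
import torch
import torch.nn as nn

class TinyVTC(nn.Module):
    def __init__(self, n_features: int, latent: int = 8):
        super().__init__()
        # Dilated conv stack stands in for the TCN encoder.
        self.enc = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(),
        )
        self.to_mu = nn.Linear(16, latent)
        self.to_logvar = nn.Linear(16, latent)
        self.dec = nn.Linear(latent, n_features)

    def forward(self, x):                       # x: (batch, features, time)
        h = self.enc(x).mean(dim=2)             # pool over the time axis
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar          # reconstruct feature summary

def anomaly_score(model, x):
    """Higher reconstruction error => more anomalous window."""
    recon, _, _ = model(x)
    return ((recon - x.mean(dim=2)) ** 2).mean(dim=1)

model = TinyVTC(n_features=6)
window = torch.randn(32, 6, 64)                 # 32 traffic windows
print(anomaly_score(model, window).shape)       # torch.Size([32])
```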


End-to-End Autoencoder Communications with Optimized Interference Suppression

arXiv.org Artificial Intelligence

An end-to-end communications system based on Orthogonal Frequency Division Multiplexing (OFDM) is modeled as an autoencoder (AE) in which the transmitter (coding and modulation) and receiver (demodulation and decoding) are represented as the encoder and decoder deep neural networks (DNNs), respectively. This AE communications approach is shown to outperform conventional communications in terms of bit error rate (BER) under practical scenarios regarding channel and interference effects as well as training data and embedded implementation constraints. A generative adversarial network (GAN) is trained to augment the training data when not enough training data is available. The performance is also evaluated in terms of DNN model quantization and the corresponding memory requirements for embedded implementation. Then, interference training and randomized smoothing are introduced to train the AE communications system to operate under unknown and dynamic interference (jamming) effects on potentially multiple OFDM symbols. Relative to conventional communications, up to 36 dB of interference suppression for a channel reuse of four can be achieved by AE communications with interference training and randomized smoothing. AE communications is also extended to the multiple-input multiple-output (MIMO) case, and its BER performance gain with and without interference effects is demonstrated in comparison to conventional MIMO communications.
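
For readers unfamiliar with the AE-communications pattern, here is a minimal sketch of the basic encoder -> noisy channel -> decoder pipeline the paper builds on, with a plain AWGN layer standing in for the OFDM channel and with the interference training, GAN augmentation, and quantization steps omitted. All dimensions, names, and the power constraint are illustrative assumptions, not the paper's implementation.

```python
# Sketch of an end-to-end autoencoder link (assumption: AWGN only; the
# paper's OFDM, MIMO, and interference-suppression details are omitted).
import torch
import torch.nn as nn

K, N = 4, 8                                     # bits per message, channel uses

class AEComm(nn.Module):
    def __init__(self):
        super().__init__()
        self.tx = nn.Sequential(nn.Linear(2 ** K, 32), nn.ReLU(),
                                nn.Linear(32, 2 * N))       # I/Q symbols
        self.rx = nn.Sequential(nn.Linear(2 * N, 32), nn.ReLU(),
                                nn.Linear(32, 2 ** K))      # message logits

    def forward(self, one_hot, snr_db: float = 10.0):
        x = self.tx(one_hot)
        x = x / x.norm(dim=1, keepdim=True)     # unit-power constraint
        noise_std = (10 ** (-snr_db / 10) / 2) ** 0.5  # rough AWGN scaling
        y = x + noise_std * torch.randn_like(x)          # channel layer
        return self.rx(y)

# Train with cross-entropy over the 2**K messages; BER is then measured by
# comparing decoded and transmitted bit patterns.
model = AEComm()
msgs = torch.eye(2 ** K)[torch.randint(0, 2 ** K, (64,))]
logits = model(msgs)
loss = nn.functional.cross_entropy(logits, msgs.argmax(dim=1))
```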


Communication-efficient Distributed SGD with Sketching

Neural Information Processing Systems

Large-scale distributed training of neural networks is often limited by network bandwidth, wherein the communication time overwhelms the local computation time. Motivated by the success of sketching methods in sub-linear/streaming algorithms, we introduce Sketched-SGD, an algorithm for carrying out distributed SGD by communicating sketches instead of full gradients. We show that Sketched-SGD has favorable convergence rates on several classes of functions. When considering all communication -- both of gradients and of updated model weights -- Sketched-SGD reduces the amount of communication required compared to other gradient compression methods from $\mathcal{O}(d)$ or $\mathcal{O}(W)$ to $\mathcal{O}(\log d)$, where $d$ is the number of model parameters and $W$ is the number of workers participating in training. We run experiments on a transformer model, an LSTM, and a residual network, demonstrating up to a 40x reduction in total communication cost with no loss in final model performance.
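
A minimal illustration of the sketching primitive behind this approach: a Count Sketch compresses a $d$-dimensional gradient into a small table from which individual coordinates can be recovered by a sign-corrected median. The error-feedback, heavy-hitter recovery, and convergence machinery of the paper are omitted, and the table sizes and names below are assumptions for illustration.

```python
# Sketch of gradient sketching (assumption: Sketched-SGD's full pipeline is
# reduced here to a single Count Sketch of the flattened gradient).
import numpy as np

class CountSketch:
    def __init__(self, dim: int, rows: int = 5, cols: int = 256, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.buckets = rng.integers(0, cols, size=(rows, dim))  # hash h_r(i)
        self.signs = rng.choice([-1.0, 1.0], size=(rows, dim))  # sign s_r(i)
        self.rows, self.cols = rows, cols

    def sketch(self, g: np.ndarray) -> np.ndarray:
        """Compress a d-dim gradient into a small rows x cols table."""
        table = np.zeros((self.rows, self.cols))
        for r in range(self.rows):
            np.add.at(table[r], self.buckets[r], self.signs[r] * g)
        return table

    def estimate(self, table: np.ndarray, i: int) -> float:
        """Estimate coordinate g[i] as a sign-corrected median over rows."""
        vals = [self.signs[r, i] * table[r, self.buckets[r, i]]
                for r in range(self.rows)]
        return float(np.median(vals))

# Workers send sketches; because sketching is linear, the server can sum the
# tables and then recover the large (heavy-hitter) coordinates for the update.
cs = CountSketch(dim=10_000)
g = np.zeros(10_000); g[42] = 5.0
print(cs.estimate(cs.sketch(g), 42))            # approximately 5.0
```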