

Active Learning Framework to Automate Network Traffic Classification

Pešek, Jaroslav, Soukup, Dominik, Čejka, Tomáš

arXiv.org Artificial Intelligence

Recent network traffic classification methods benefit from machine learning (ML) technology. However, the use of ML brings many challenges, such as a lack of high-quality annotated datasets, data drifts and other effects causing aging of datasets and ML models, high volumes of network traffic, etc. This paper argues that it is necessary to augment traditional workflows of ML training and deployment and to adapt the Active Learning concept to network traffic analysis. The paper presents a novel Active Learning Framework (ALF) to address this topic. ALF provides prepared software components that can be used to deploy an active learning loop and maintain an ALF instance that continuously and automatically evolves a dataset and ML model. The resulting solution is deployable for IP flow-based analysis of high-speed (100 Gb/s) networks, and it also supports research experiments on different strategies and methods for annotation, evaluation, dataset optimization, etc. Finally, the paper lists some research challenges that emerge from the first experiments with ALF in practice.
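The active learning loop the abstract describes can be illustrated with a minimal sketch: a model is trained on a small labeled seed set, the most uncertain unlabeled sample is queried for annotation, and the dataset grows iteratively. The toy 1-D pool, the centroid classifier, and the `oracle` annotator below are illustrative assumptions, not ALF's actual components or API.

```python
# Toy 1-D pool; the "oracle" stands in for the annotator in the loop,
# labeling points < 5.0 as class 0 and the rest as class 1.
unlabeled = [i / 10.0 for i in range(100)]
oracle = lambda x: 0 if x < 5.0 else 1

# Small labeled seed set, two samples per class.
labeled = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
for x, _ in labeled:
    unlabeled.remove(x)

def centroid(pts):
    return sum(pts) / len(pts)

# Active-learning iterations: query the most uncertain sample
# (uncertainty sampling), annotate it, and grow the dataset.
for _ in range(10):
    c0 = centroid([x for x, y in labeled if y == 0])
    c1 = centroid([x for x, y in labeled if y == 1])
    # Small distance margin between the two centroids = high uncertainty.
    query = min(unlabeled, key=lambda x: abs(abs(x - c0) - abs(x - c1)))
    unlabeled.remove(query)
    labeled.append((query, oracle(query)))

print(len(labeled))  # 14 labeled samples after 10 queries
```

Note how the queried points cluster around the decision boundary near 5.0: uncertainty sampling spends the annotation budget where the model is least confident, which is the core economy of an active learning loop.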


Title Sequence from ALF

#artificialintelligence

We are a community dedicated to art produced with the help of artificial neural networks, which are themselves inspired by the human brain. Advances in the machine learning subfield of artificial intelligence, brought on by the information age, have made it possible for machines to create art that rivals what a human being can do. We here at /r/DeepDream mainly focus on applications of deep learning, which is itself a subfield of machine learning. As the largest online AI art community, we routinely push the bounds of technology in the pursuit of better-looking artwork. The DeepDream wiki is available here.


ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks

Frickenstein, Alexander, Vemparala, Manoj-Rohit, Fasfous, Nael, Hauenschild, Laura, Nagaraja, Naveen-Shankar, Unger, Christian, Stechele, Walter

arXiv.org Machine Learning

Closing the gap between the hardware requirements of state-of-the-art convolutional neural networks and the limited resources constraining embedded applications is the next big challenge in deep learning research. The computational complexity and memory footprint of such neural networks are typically daunting for deployment in resource-constrained environments. Model compression techniques, such as pruning, are emphasized among other optimization methods for solving this problem. Most existing techniques require domain expertise or result in irregular sparse representations, which increase the burden of deploying deep learning applications on embedded hardware accelerators. In this paper, we propose the autoencoder-based low-rank filter-sharing technique (ALF). When applied to various networks, ALF is compared to state-of-the-art pruning methods, demonstrating its efficient compression capabilities on theoretical metrics as well as on an accurate, deterministic hardware model. In our experiments, ALF showed a reduction of 70% in network parameters, 61% in operations, and 41% in execution time, with minimal loss in accuracy.
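The parameter savings behind low-rank filter-sharing can be sketched with simple arithmetic: instead of storing all n filters of a conv layer, keep r shared basis filters and express each original filter as a linear combination of them. This is an illustrative calculation under that generic low-rank assumption, not the paper's exact ALF formulation; the layer sizes are made up for the example.

```python
# Parameter count of a standard conv layer: n filters of shape (c, k, k).
def conv_params(n, c, k):
    return n * c * k * k

# Generic low-rank filter-sharing: r shared basis filters of shape
# (c, k, k), plus n*r mixing coefficients to reconstruct the n filters.
def low_rank_params(n, c, k, r):
    return r * c * k * k + n * r

# Example: a 256-filter 3x3 layer over 128 input channels, rank 64.
full = conv_params(256, 128, 3)            # 294,912 parameters
shared = low_rank_params(256, 128, 3, 64)  # 73,728 + 16,384 = 90,112
reduction = 1 - shared / full
print(f"{reduction:.0%} fewer parameters")  # prints "69% fewer parameters"
```

With rank 64, the hypothetical layer drops roughly 69% of its parameters, in the same ballpark as the 70% reduction the abstract reports, though the paper's numbers come from its full autoencoder-based method, not this simplified count.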


"Friendly" Artificial Intelligence Would Kill Us

#artificialintelligence

What if we could create "god" in our image? The founder of psychoanalysis, Sigmund Freud, hypothesized that man invented the concept of god. Ever the killjoy in his quest for joy, the philosopher Nietzsche famously went so far as to say that modern man "killed god." Since their time, modern science and technology have sought to resurrect god. But if god does not really exist, then it is necessary to create god.