Anderson, Michael
Implementing Neural Network-Based Equalizers in a Coherent Optical Transmission System Using Field-Programmable Gate Arrays
Freire, Pedro J., Srivallapanondh, Sasipim, Anderson, Michael, Spinnler, Bernhard, Bex, Thomas, Eriksson, Tobias A., Napoli, Antonio, Schairer, Wolfgang, Costa, Nelson, Blott, Michaela, Turitsyn, Sergei K., Prilepsky, Jaroslaw E.
In this work, we demonstrate the offline FPGA realization of both recurrent and feedforward neural network (NN)-based equalizers for nonlinearity compensation in coherent optical transmission systems. First, we present a realization pipeline showing the conversion of the models from Python libraries to FPGA chip synthesis and implementation. Then, we review the main alternatives for the hardware implementation of nonlinear activation functions. The main results are divided into three parts: a performance comparison, an analysis of how activation functions are implemented, and a report on hardware complexity. The Q-factor performance is presented for a bidirectional long short-term memory coupled with a convolutional NN (biLSTM + CNN) equalizer, a CNN equalizer, and standard 1-StpS digital back-propagation (DBP), for both simulated and experimental propagation of a single-channel dual-polarization (SC-DP) 16QAM signal at 34 GBd over 17x70 km of LEAF. The biLSTM + CNN equalizer performs similarly to DBP and provides a 1.7 dB Q-factor gain over the chromatic dispersion compensation baseline on the experimental dataset. After that, we assess the Q-factor and the impact on hardware utilization when approximating the NN activation functions using Taylor-series, piecewise-linear, and look-up table (LUT) approximations. We also show how to mitigate the approximation errors with extra training and provide some insights into possible gradient problems in the LUT approximation. Finally, to evaluate the complexity of a hardware implementation achieving 200G and 400G throughput, fixed-point NN-based equalizers with approximated activation functions are developed and implemented on an FPGA.
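To make the approximation strategies above concrete, the following is a minimal NumPy sketch of piecewise-linear and look-up-table approximations of tanh, the activation typically targeted in LSTM-based equalizers. The breakpoint placement and table size are illustrative choices, not the values used in the paper, which also considers Taylor-series expansions and fixed-point arithmetic.

```python
import numpy as np

def pwl_tanh(x, breakpoints=None):
    """Piecewise-linear tanh: exact at the knots, linear in between,
    saturated at the edge values outside the breakpoint range.
    (Breakpoint placement here is an illustrative choice.)"""
    if breakpoints is None:
        breakpoints = np.linspace(-3.0, 3.0, 9)   # 8 linear segments
    return np.interp(x, breakpoints, np.tanh(breakpoints))

def lut_tanh(x, n_entries=64, x_max=4.0):
    """Uniform look-up-table tanh: round the input to the nearest table
    index and return the stored value (table size is illustrative)."""
    grid = np.linspace(-x_max, x_max, n_entries)
    table = np.tanh(grid)
    idx = np.round((np.asarray(x) + x_max) / (2 * x_max) * (n_entries - 1))
    return table[np.clip(idx, 0, n_entries - 1).astype(int)]

# quick look at the approximation error on a dense test grid
x = np.linspace(-4.0, 4.0, 1001)
print("max |tanh - pwl_tanh| =", np.max(np.abs(np.tanh(x) - pwl_tanh(x))))
print("max |tanh - lut_tanh| =", np.max(np.abs(np.tanh(x) - lut_tanh(x))))
```

Either approximation trades a small, quantifiable Q-factor penalty for much cheaper hardware than an exact tanh; the paper further shows that retraining can absorb part of this penalty.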
Reducing Computational Complexity of Neural Networks in Optical Channel Equalization: From Concepts to Implementation
Freire, Pedro J., Napoli, Antonio, Ron, Diego Arguello, Spinnler, Bernhard, Anderson, Michael, Schairer, Wolfgang, Bex, Thomas, Costa, Nelson, Turitsyn, Sergei K., Prilepsky, Jaroslaw E.
In this paper, a new methodology is proposed that allows for the low-complexity development of neural network (NN)-based equalizers for the mitigation of impairments in high-speed coherent optical transmission systems. We provide a comprehensive description and comparison of various deep model compression approaches applied to feed-forward and recurrent NN designs, and we evaluate the influence these strategies have on the performance of each NN equalizer. Quantization, weight clustering, pruning, and other cutting-edge compression strategies are taken into consideration. We also propose and evaluate a Bayesian-optimization-assisted compression, in which the compression hyperparameters are chosen to simultaneously reduce complexity and improve performance. Finally, the trade-off between the complexity and the performance of each compression approach is evaluated using both simulated and experimental data. By utilizing optimal compression approaches, we show that it is possible to design an NN-based equalizer that is simpler to implement and performs better than the conventional digital back-propagation (DBP) equalizer with only one step per span; this is accomplished by reducing the number of multipliers used in the NN equalizer after applying the weight clustering and pruning algorithms. Furthermore, we demonstrate that an NN-based equalizer can also achieve superior performance while maintaining the same degree of complexity as the full electronic chromatic dispersion compensation block. We conclude our analysis by highlighting open questions and existing challenges, as well as possible future research directions.
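As a rough illustration of two of the compression steps named above, the sketch below applies magnitude pruning followed by 1-D k-means weight clustering to a weight matrix. The sparsity level, cluster count, and centroid initialization are illustrative placeholders; the paper additionally selects such hyperparameters via Bayesian optimization and works on full equalizer models rather than a single matrix.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    thr = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thr, 0.0, w)

def cluster_weights(w, n_clusters=16, n_iter=20):
    """Weight clustering: replace each nonzero weight by its nearest centroid,
    using a plain 1-D k-means (hyperparameters are illustrative)."""
    flat = w[w != 0].ravel()
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iter):
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = flat[assign == k].mean()
    nearest = centroids[np.argmin(np.abs(w[..., None] - centroids), axis=-1)]
    return np.where(w == 0, 0.0, nearest), centroids

# toy example: prune 60% of a random weight matrix, then cluster to 8 values
w = np.random.randn(64, 64)
w_pc, cents = cluster_weights(magnitude_prune(w, sparsity=0.6), n_clusters=8)
print("unique nonzero values after clustering:", np.unique(w_pc[w_pc != 0]).size)
```

After clustering, each multiplier only ever sees one of a handful of distinct coefficient values, which is what makes multiplier sharing (and hence the complexity reduction reported above) possible.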
High-Performance Deep Learning via a Single Building Block
Georganas, Evangelos, Banerjee, Kunal, Kalamkar, Dhiraj, Avancha, Sasikanth, Venkat, Anand, Anderson, Michael, Henry, Greg, Pabst, Hans, Heinecke, Alexander
Deep learning (DL) is one of the most prominent branches of machine learning. Due to the immense computational cost of DL workloads, industry and academia have developed DL libraries with highly specialized kernels for each workload/architecture, leading to numerous, complex codebases that strive for performance yet are hard to maintain and do not generalize. In this work, we introduce the batch-reduce GEMM kernel and show how the most popular DL algorithms can be formulated with this kernel as the basic building block. Consequently, DL library development reduces to mere (potentially automatic) tuning of loops around this single optimized kernel. By exploiting our new kernel we implement Recurrent Neural Network, Convolutional Neural Network, and Multilayer Perceptron training and inference primitives in just 3K lines of high-level code. Our primitives outperform vendor-optimized libraries on multi-node CPU clusters, and we also provide proof-of-concept CNN kernels targeting GPUs. Finally, we demonstrate that the batch-reduce GEMM kernel within a tensor compiler yields high-performance CNN primitives, further amplifying the viability of our approach.
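For readers unfamiliar with the operation, here is a minimal reference sketch of the batch-reduce GEMM semantics: an output tile accumulates a sum of small matrix products. The block shapes are illustrative, and the hand-optimized kernel described in the paper is of course implemented at a far lower level than this plain-Python version.

```python
import numpy as np

def batch_reduce_gemm(C, A_blocks, B_blocks, beta=1.0):
    """Reference semantics of batch-reduce GEMM:
    C = beta * C + sum_i A_blocks[i] @ B_blocks[i]
    (an illustration of the operation, not the optimized kernel)."""
    C *= beta
    for A, B in zip(A_blocks, B_blocks):
        C += A @ B
    return C

# example: accumulate three small GEMMs into one 32x32 output tile
rng = np.random.default_rng(0)
A_blocks = [rng.standard_normal((32, 16)) for _ in range(3)]
B_blocks = [rng.standard_normal((16, 32)) for _ in range(3)]
C = np.zeros((32, 32))
batch_reduce_gemm(C, A_blocks, B_blocks)

ref = sum(A @ B for A, B in zip(A_blocks, B_blocks))
print("max abs diff vs direct sum:", np.max(np.abs(C - ref)))
```

Convolutions, LSTM cells, and fully connected layers can all be lowered to loops over such block products, which is why tuning the surrounding loops is enough once this one kernel is fast.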
Responses to a Critique of Artificial Moral Agents
Poulsen, Adam, Anderson, Michael, Anderson, Susan L., Byford, Ben, Fossa, Fabio, Neely, Erica L., Rosas, Alejandro, Winfield, Alan
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper Critiquing the Reasons for Making Artificial Moral Agents critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: it is inevitable that they will be developed; the prevention of harm; the necessity for public trust; the prevention of immoral use; that such machines are better moral reasoners than humans; and that building these machines would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, this paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.
Representation, Justification and Explanation in a Value Driven Agent: An Argumentation-Based Approach
Liao, Beishui, Anderson, Michael, Anderson, Susan Leigh
For an autonomous system, the ability to justify and explain its decision making is crucial to improving its transparency and trustworthiness. This paper proposes an argumentation-based approach to represent, justify, and explain the decision making of a value-driven agent (VDA). Using a newly defined formal language, some implicit knowledge of a VDA is made explicit. The selection of an action in each situation is justified by constructing and comparing arguments supporting different actions. On the basis of the constructed argumentation framework and its extensions, the reasons explaining an action are defined in terms of the arguments for or against it, exploiting their defeat relation as well as their premises and conclusions.
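As a loose illustration of the argumentation machinery referred to above (a generic Dung-style framework, not the paper's VDA-specific formal language), the sketch below computes the grounded extension of a small argumentation framework, showing how an action-supporting argument can be accepted because its only attacker is itself defeated.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework.
    arguments: set of labels; attacks: set of (attacker, target) pairs.
    (A generic illustration, not the formalism defined in the paper.)"""
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments - accepted - rejected:
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= rejected:      # every attacker already defeated
                accepted.add(a); changed = True
            elif attackers & accepted:     # attacked by an accepted argument
                rejected.add(a); changed = True
    return accepted

# toy example: 'act' is defended because its attacker 'objection'
# is itself defeated by 'rebuttal'
args = {"act", "objection", "rebuttal"}
atts = {("objection", "act"), ("rebuttal", "objection")}
print(grounded_extension(args, atts))   # {'rebuttal', 'act'}
```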
A Value Driven Agent: Instantiation of a Case-Supported Principle-Based Behavior Paradigm
Anderson, Michael (University of Hartford) | Anderson, Susan Leigh (University of Connecticut) | Berenz, Vincent (Max Planck Institute)
We have implemented a simulation of a robot functioning in the domain of eldercare whose behavior is completely determined by an ethical principle. Using a subset of the perceptions and duties that will be required of such a robot, this simulation demonstrates selection of ethically preferable actions in real time using a case-supported principle-based paradigm. We believe that this work could serve as the basis for ensuring that the behavior of all eldercare robots that are created in the future will be ethically justifiable. Further, we believe that the methods used in this project can be employed in other domains as well, to ensure that the robots that humans interact with in these domains will behave ethically.
Toward Ensuring Ethical Behavior from Autonomous Systems: A Case-Supported Principle-Based Paradigm
Anderson, Michael (University of Hartford) | Anderson, Susan Leigh (University of Connecticut)
A paradigm of case-supported principle-based behavior (CPB) is proposed to help ensure ethical behavior of autonomous machines. We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles needed for ethical guidance of the behavior of autonomous systems. Such principles help ensure the ethical behavior of complex and dynamic systems and further serve as a basis for justification of their actions as well as a control abstraction for managing unanticipated behavior. The requirements, methods, implementation, and evaluation components of the CPB paradigm are detailed.
GenEth: A General Ethical Dilemma Analyzer
Anderson, Michael (University of Hartford) | Anderson, Susan Leigh (University of Connecticut)
We contend that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. To provide assistance in developing these ethical principles, we have developed GenEth, a general ethical dilemma analyzer that, through a dialog with ethicists, codifies ethical principles in any given domain. GenEth has been used to codify principles in a number of domains pertinent to the behavior of autonomous systems and these principles have been verified using an Ethical Turing Test.