Social and environmental impact of recent developments in machine learning on biology and chemistry research

Probst, Daniel

arXiv.org Artificial Intelligence 

The hardware and software that catalysed rapid developments in machine learning

In late 2002 and early 2003, the release of the Radeon 9700 and GeForce FX video cards introduced a fully programmable graphics pipeline, extending and later replacing the existing fixed-function pipelines. Unlike the fixed-function pipeline, which allowed the user to only supply input matrices and parameters to built-in operations, the programmable pipeline introduced the execution of user-written shader programs on the GPU [Contributors, 2015]. This fundamental change allowed programmers and researchers to exploit the intrinsic parallelism of GPUs two years before Intel would introduce its first dual-core CPU. Within months of the availability of this new hardware and the accompanying APIs, researchers implemented linear algebra methods on GPUs and introduced programming frameworks to use GPUs for general-purpose computations [Thompson et al., 2002, Krüger and Westermann, 2003]. This rapid development marked the dawn of general-purpose computing on graphics processing units (GPGPU). In a presentation at ICS '08, Harris presented the successes of GPGPU by highlighting speed-ups in molecular docking, N-body simulations, HD video stream transcoding, and image processing; applications in machine learning were not discussed. However, just one year later, Raina et al. showed that GPUs outperform CPUs by an order of magnitude in large-scale deep unsupervised learning tasks [Raina et al., 2009]. The introduction of GPUs as general-purpose processors thus catalysed the deep learning explosion of the early 2010s by allowing deep learning algorithms, pioneered by Alexey Ivakhnenko in 1971 [Ivakhnenko, 1971], to be run in practical time on widely available consumer hardware.

Hardware and energy requirements increase in machine learning research

In 2010, Ciresan et al. [2010] introduced a multi-layer perceptron (MLP) with up to 12.11 million free parameters, whose forward and backward propagation were implemented on a GPU using NVIDIA's proprietary CUDA API introduced by Harris at ICS '08 two
