
Reviews: Backpropagation with Callbacks: Foundations for Efficient and Expressive Differentiable Programming

Neural Information Processing Systems

The paper describes Lantern, a framework for automatic differentiation in Scala, based on callbacks and continuation-passing style. It compares against PyTorch and TensorFlow on several benchmark tasks. There are two main aspects to the paper: reverse-mode automatic differentiation with continuations, and code generation via multi-stage programming. The submission does not provide code for the proposed framework, which I don't find acceptable for a paper on a software package. It's also unclear to me how the first aspect differs from any other implementation of automatic differentiation via operator overloading.
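The callback-based idea the review refers to can be illustrated with a minimal Python sketch (this is an illustrative reconstruction of the general technique, not Lantern's actual Scala implementation): each operator runs the rest of the forward pass as a continuation, then updates adjoints as the call stack unwinds, which is exactly the reverse pass.

```python
class Num:
    """A number carrying a primal value and an accumulated adjoint."""
    def __init__(self, x):
        self.x = x      # primal value
        self.d = 0.0    # gradient accumulator

def mul(a, b, k):
    """Multiply, run the continuation k (the rest of the forward pass),
    then propagate adjoints on return (the backward pass)."""
    y = Num(a.x * b.x)
    k(y)
    a.d += b.x * y.d
    b.d += a.x * y.d

def add(a, b, k):
    y = Num(a.x + b.x)
    k(y)
    a.d += y.d
    b.d += y.d

def grad(f, x0):
    """Gradient of a scalar function written in continuation-passing style."""
    x = Num(x0)
    def top(y):
        y.d = 1.0       # seed the output adjoint
    f(x, top)
    return x.d

# f(x) = x*x + x, so f'(x) = 2x + 1; at x = 3 the gradient is 7.
g = grad(lambda x, k: mul(x, x, lambda t: add(t, x, k)), 3.0)
# g == 7.0
```

The point of the continuation is that no explicit tape is needed: the host language's call stack records the order of operations, and the backward sweep happens for free as each callback returns.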


An Empirical Study on Bugs Inside PyTorch: A Replication Study

Ho, Sharon Chee Yin, Majdinasab, Vahid, Islam, Mohayeminul, Costa, Diego Elias, Shihab, Emad, Khomh, Foutse, Nadi, Sarah, Raza, Muhammad

arXiv.org Artificial Intelligence

Software systems are increasingly relying on deep learning components, due to their remarkable capability of identifying complex data patterns and powering intelligent behaviour. A core enabler of this change in software development is the availability of easy-to-use deep learning libraries. Libraries like PyTorch and TensorFlow empower a large variety of intelligent systems, offering a multitude of algorithms and configuration options applicable to numerous domains. However, bugs in these popular deep learning libraries may also have dire consequences for the quality of the systems they enable; thus, it is important to understand how bugs are identified and fixed in these libraries. Inspired by a study of Jia et al., which investigates the bug identification and fixing process in TensorFlow, we characterize bugs in the PyTorch library, a very popular deep learning framework. We investigate the causes and symptoms of bugs identified during PyTorch's development, assess their locality within the project, and extract patterns of bug fixes. Our results highlight that PyTorch bugs resemble those of traditional software projects more than they relate to deep-learning-specific characteristics. Finally, we compare our results with the study on TensorFlow, highlighting similarities and differences across the bug identification and fixing process.


Why TensorFlow for Python is dying a slow death

#artificialintelligence

Religious wars have been a cornerstone of tech. Whether it's debating the pros and cons of different operating systems, cloud providers, or deep learning frameworks -- a few beers in, the facts slide aside and people start fighting for their technology like it's the holy grail. Just think about the endless talk about IDEs. Some people prefer Visual Studio, others use IntelliJ, and still others use plain old editors like Vim. There's a never-ending debate, half-ironic of course, about what your favorite text editor might say about your personality.


Not Just PyTorch and TensorFlow: 4 Other Deep Learning Libraries You Should Know

#artificialintelligence

A quick introduction to JAX, MXNet, MATLAB, and Flux. Machine learning libraries accelerated the deep learning revolution. They lowered the barrier to entry for practitioners by abstracting away difficult things such as GPU speedup, matrix algebra, and automatic differentiation. In both industry and academia, two deep learning libraries reign supreme: PyTorch and TensorFlow. In this article, I will introduce you to some other deep learning libraries that see considerable usage, either because they achieve speedups in some way or because they are used by very specific groups. Let's begin!

JAX. What is it? An open-source, actively developed numerical framework originally created by Google (think NumPy, but for GPU). Who uses it? Many teams within Google, such as DeepMind. Why should you know about it? JAX was developed to accelerate numerical computing on GPUs and Google's own TPU hardware. Using ideas such as accelerated linear algebra, just-in-time (JIT) compilation, and automatic vectorization, JAX achieves great speedup and scale. Even though its syntax is similar to NumPy's, to minimize the learning curve, JAX has a different design philosophy: it encourages functional programming through functions such as vmap and pmap (vectorize and parallelize). Many high-level APIs have been built on top of JAX; notable ones are Haiku and Flax.

Apache MXNet. What is it? An open-source veteran machine learning framework with front-end bindings for multiple languages, including Python, C++, R, Java, and Perl. Who uses it? Amazon AWS. Why should you know about it? MXNet's most powerful features are its support for many programming languages and its scalability. Benchmark tests by NVIDIA show that MXNet is faster than PyTorch and TensorFlow on some deep learning tasks. MXNet comes with Gluon, a high-level API for building neural networks, along with an ecosystem for image classification (GluonCV) and NLP (GluonNLP).

MATLAB Deep Learning Toolbox. What is it? An add-on toolbox for MATLAB users that can create and train neural networks for a variety of tasks. Who uses it? Academia and industries such as aerospace and mechanical engineering; for example, Airbus used it to detect defects inside airplanes. Why should you know about it? Whatever you feel about MATLAB, it is still a popular programming ecosystem among academics and engineers. It has great user support and, in my opinion, the best documentation of all the deep learning libraries on this list. The toolbox is geared toward people who want to build systems with minimal programming, and Simulink, a graphical programming interface within MATLAB, offers ways to create easy-to-understand deep learning pipelines.

Julia Flux. What is it? An open-source machine learning library built for the Julia programming language. Who uses it? Computing-intensive fields such as pharmaceuticals and finance; for example, AstraZeneca used it to predict drug toxicity. Why should you know about it? The Julia language has gained momentum over the years among data scientists, quants, and bioinformatics researchers. It is comparable to C/C++ in speed, yet it was designed to be beginner-friendly like Python. An implementation of Julia deep learning on Google TPUs showed a more than 200x speedup compared to CPU. If you are already coding in Julia, Flux is a great library to look into.

Conclusion: I hope this short article introduced you to some other deep learning libraries. They all support efficient speedups, GPU scaling, and deployment into production. There are excellent learning resources for all of them on the internet. Happy coding!
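The vmap-plus-JIT combination that makes JAX attractive can be sketched in a few lines. This is a minimal illustration; `predict` and its weight parameter are made up for the example, not taken from the article.

```python
import jax
import jax.numpy as jnp

def predict(w, x):
    # A toy scalar model, purely illustrative.
    return jnp.tanh(w * x)

# vmap vectorizes predict over a batch of inputs (axis 0 of x),
# sharing the weight w, with no explicit Python loop.
batched = jax.vmap(predict, in_axes=(None, 0))

# jit compiles the vectorized function through XLA for speed.
fast = jax.jit(batched)

xs = jnp.linspace(0.0, 1.0, 4)
ys = fast(0.5, xs)   # one fused, compiled call over the whole batch
```

The functional style matters here: because `predict` is a pure function, transformations like `vmap`, `jit`, and `grad` compose freely, which is the design philosophy the article contrasts with NumPy.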


Overview of Some Deep Learning Libraries

#artificialintelligence

Machine learning is a broad topic. Deep learning, in particular, is a way of using neural networks for machine learning. The neural network is arguably an older concept than machine learning itself, dating back to the 1950s. Unsurprisingly, many libraries have been created for it. In the following, we give an overview of some of the famous libraries for neural networks and deep learning.


Introduction to Deep Reinforcement Learning

#artificialintelligence

This is a must-read for any practitioner of RL. The book is divided into three parts, and I would strongly recommend reading through Parts I and II. The sections marked with (*) can be skipped on a first reading. And if you click on this, you will see links to Python and MATLAB implementations of the examples and exercises contained in the book.


Machine Learning in 2022: TensorFlow or PyTorch?

#artificialintelligence

Data scientists or AI researchers working on deep learning will probably turn to PyTorch or TensorFlow, two popular open-source frameworks designed for AI. But how exactly do they differ, and which is the correct choice for building new ML models in 2022? To help beginner data scientists, or those looking to get into AI, Ryan O'Connor of AssemblyAI wrote a 5,000-word deep dive on this rapidly evolving subject. He undertook a multi-faceted comparison, evaluating considerations such as model availability, deployment ease, and the strengths of both ecosystems. "Outdated or incomplete information [about PyTorch and TensorFlow] is abundant, and further obfuscates the complex discussion of which framework has the upper hand in a given domain," O'Connor explained.


PyTorch vs TensorFlow 2022: Which Deep Learning Framework Should You Use?

#artificialintelligence

Originally published on Towards AI, the World's Leading AI and Technology News and Media Company. Today, the two most popular deep learning frameworks are PyTorch and TensorFlow.