Collaborating Authors

Deep Learning

Borderless tables detection with deep learning and OpenCV


Adrian Rosebrock, a well-known computer-vision researcher, states in his "Gentle guide to deep learning object detection" that "object detection, regardless of whether performed via deep learning or other computer vision techniques, builds on image classification and seeks to localize precisely an area where an object appears". One approach to building a custom object detector, as he suggests, is to choose any classifier and precede it with an algorithm that selects and provides regions of an image that may contain an object. Within this method, you are free to decide whether to use a traditional ML algorithm for image classification (with or without a CNN as a feature extractor) or to train a simple neural network to handle arbitrarily large datasets. Despite its proven effectiveness, this two-stage object detection paradigm, known as R-CNN, still relies on heavy computation and is not suitable for real-time applications. The post goes on to say that "another approach is to treat a pre-trained classification network as a base (backbone) network in a multi-component deep learning object detection framework (such as Faster R-CNN, SSD, or YOLO)".
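The two-stage idea described above can be sketched in a few lines: one component proposes candidate regions, and a separate classifier labels each region. This is a minimal illustration in plain Python; `classify_region` is a hypothetical stub standing in for any real classifier (a CNN, or an SVM on CNN features).

```python
# Two-stage detection sketch: region proposal + per-region classification.
# classify_region is a hypothetical stand-in for a trained classifier.

def sliding_windows(img_w, img_h, win=64, stride=32):
    """Yield (x, y, w, h) candidate regions over an img_w x img_h image."""
    for y in range(0, img_h - win + 1, stride):
        for x in range(0, img_w - win + 1, stride):
            yield (x, y, win, win)

def classify_region(region):
    # Stand-in for a real classifier; here it "detects" diagonal windows.
    x, y, w, h = region
    return "table" if x == y else "background"

def detect(img_w, img_h):
    """Keep only the regions the classifier labels as an object of interest."""
    return [r for r in sliding_windows(img_w, img_h)
            if classify_region(r) == "table"]

detections = detect(128, 128)
```

A real pipeline would replace the fixed sliding window with a smarter proposal algorithm (e.g. selective search, as in the original R-CNN) precisely because classifying every window is what makes this approach too slow for real time.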

An introduction to object detection with deep learning


This article is part of "Deconstructing artificial intelligence," a series of posts that explore the details of how AI applications work (In partnership with Paperspace). Deep neural networks have gained fame for their capability to process visual information. And in the past few years, they have become a key component of many computer vision applications. Among the key problems neural networks can solve is detecting and localizing objects in images. Object detection is used in many different domains, including autonomous driving, video surveillance, and healthcare.

Deep Learning in the Cloud


As massive amounts of data are stored every second, there is an opportunity to create meaningful and revolutionary models. This data comes in several forms, including text, images and videos, all of which allow advanced models to be created using techniques such as Deep Learning. Further, drawing on this extensive data, applications built on technologies such as computer vision are being used in products such as self-driving cars and facial recognition in phones. When creating a Deep Learning application, one of the first decisions to be made is where the model will be trained: locally on a machine or through a third-party cloud provider. This is an important decision, as it can significantly impact the training time of a model.

Microscopy deep learning predicts viral infections


When viruses infect a cell, changes occur in the cell nucleus, and these can be observed through fluorescence microscopy. Using fluorescence images made in live cells, researchers at the University of Zurich have trained an artificial neural network to reliably recognize cells that are infected by adenoviruses or herpes viruses. The procedure also identifies severe acute infections at an early stage. In most cases, infection does not lead to the production of new virus particles, as the viruses are suppressed by the immune system. However, adenoviruses and herpes viruses can cause persistent infections that the immune system is unable to keep completely in check and that produce viral particles for years. These same viruses can also cause sudden, violent infections in which affected cells release large amounts of viruses, such that the infection spreads rapidly.

The Transformer


I've started to go through classic papers in machine learning, inventions that shifted the state of the art or created an entirely new application. These are my notes on the Transformer introduced in Attention is All You Need. The transformer addressed problems with recurrent sequence modeling in natural language processing (NLP) but has since been applied to vision, reinforcement learning, audio, and other sequence tasks. Recurrent models built from RNNs, LSTMs, or GRUs were developed to deal with sequence modeling in neural networks because they can include information from adjacent inputs as well as the current input. This has obvious relevance for data like language where the meaning of a word is partially or entirely defined in relation to surrounding words.
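The core operation the paper introduces, and the reason the Transformer can drop recurrence, is scaled dot-product attention: each position attends to all others in a single step rather than passing state along the sequence. A minimal NumPy sketch, following the formula Attention(Q, K, V) = softmax(QKᵀ/√d_k)V from "Attention is All You Need" (the shapes below are arbitrary illustration values):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V, weights          # output: weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 query positions, d_k = 8
K = rng.standard_normal((6, 8))   # 6 key positions
V = rng.standard_normal((6, 8))   # one value vector per key
out, w = scaled_dot_product_attention(Q, K, V)
```

Because every query looks at every key directly, a word's representation can depend on any surrounding word in one layer, which is exactly the property the recurrent models above had to build up step by step.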

Deep Learning for AI

Communications of the ACM

Yoshua Bengio, Yann LeCun, and Geoffrey Hinton are recipients of the 2018 ACM A.M. Turing Award for breakthroughs that have made deep neural networks a critical component of computing. Research on artificial neural networks was motivated by the observation that human intelligence emerges from highly parallel networks of relatively simple, non-linear neurons that learn by adjusting the strengths of their connections. This observation leads to a central computational question: How is it possible for networks of this general kind to learn the complicated internal representations that are required for difficult tasks such as recognizing objects or understanding language? Deep learning seeks to answer this question by using many layers of activity vectors as representations and learning the connection strengths that give rise to these vectors by following the stochastic gradient of an objective function that measures how well the network is performing. It is very surprising that such a conceptually simple approach has proved so effective when applied to large training sets using huge amounts of computation, and it appears that a key ingredient is depth: shallow networks simply do not work as well.

We reviewed the basic concepts and some of the breakthrough achievements of deep learning several years ago [63]. Here we briefly describe the origins of deep learning, describe a few of the more recent advances, and discuss some of the future challenges. These challenges include learning with little or no external supervision, coping with test examples that come from a different distribution than the training examples, and using the deep learning approach for tasks that humans solve by using a deliberate sequence of steps to which we attend consciously (tasks that Kahneman [56] calls system 2 tasks, as opposed to system 1 tasks like object recognition or immediate natural language understanding, which generally feel effortless).

There are two quite different paradigms for AI.
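The learning rule the passage describes, adjusting connection strengths by following the stochastic gradient of an objective, can be shown in its simplest form. This is a toy sketch, not the authors' method: a single weight fitted to the relation y = 3x by stochastic gradient descent on a squared-error objective, with an arbitrary learning rate.

```python
import random

random.seed(0)
data = [(x, 3.0 * x) for x in range(1, 11)]  # toy dataset; true weight is 3

w = 0.0            # initial "connection strength"
lr = 0.001         # learning rate (step size)
for step in range(1000):
    x, y = random.choice(data)      # one random example -> "stochastic"
    grad = 2 * (w * x - y) * x      # d/dw of the objective (w*x - y)^2
    w -= lr * grad                  # step against the gradient
```

The same loop, repeated over millions of weights and examples and stacked through many layers, is what lets deep networks learn their internal representations; depth changes what can be represented, not the update rule itself.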

Reading CSV(), Excel(), JSON() and HTML() File Formats in Pandas


Pandas is a Python library containing a collection of functions and specialized data structures designed to help Python developers perform data analysis tasks in an organized manner. Importing data is the most fundamental and very first step in any data-related work. The ability to import data correctly is a must-have skill for every data scientist. Data exists in many different forms, and not only should we know how to import various data formats but also how to analyze and manipulate the data to infer insights. Most of what pandas does can be accomplished with basic Python, but the collected set of pandas functions and data structures makes data analysis tasks more consistent in terms of syntax and therefore helps readability.
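A minimal sketch of importing the formats named in the title. CSV and JSON are read here from in-memory strings via `io.StringIO` so the example is self-contained; `read_excel` and `read_html` work the same way on file paths or URLs but need optional engines (such as openpyxl and lxml), so they are shown only as commented examples with hypothetical inputs.

```python
import io
import pandas as pd

# CSV: the most common case.
csv_text = "name,score\nAda,91\nLin,88\n"
df_csv = pd.read_csv(io.StringIO(csv_text))

# JSON: a list of records becomes one row per record.
json_text = '[{"name": "Ada", "score": 91}, {"name": "Lin", "score": 88}]'
df_json = pd.read_json(io.StringIO(json_text))

# Excel and HTML (hypothetical inputs; read_html returns a list of DataFrames):
# df_excel = pd.read_excel("scores.xlsx")
# tables = pd.read_html("https://example.com/page-with-tables")
```

Whichever reader is used, the result is a DataFrame, so the downstream analysis and manipulation code stays the same regardless of the source format.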

Ping An Makes Breakthrough in Artificial Intelligence-Driven Drug Research


Research by Ping An Healthcare Technology Research Institute and Tsinghua University has led to a promising deep learning framework for drug discovery, announced Ping An Insurance (Group) Company of China, Ltd. (hereafter "Ping An" or the "Group", HKEX: 2318; SSE: 601318). The findings were published in "An effective self-supervised framework for learning expressive molecular global representations to drug discovery" in Briefings in Bioinformatics, a peer-reviewed bioinformatics journal. It marks a major technology breakthrough for the Group in the field of AI-driven pharmaceutical research. Drug discovery can take 10 to 15 years from invention to market. It can require a large number of experiments, with significant costs and high failure rates.

Using FastAI to Classify Malware using Deep Learning


This is one of my first projects trying to implement a predictive model using what I've learned watching Jeremy Howard's fastai course. First of all, I started reading this paper. Secondly, I started to look for any dataset that already contained the images from malware binary hexadecimal files and found this dropbox. All of the heavy lifting was already done, and I could focus all my efforts on the model creation part. I started by saving those images in my Google Drive so that later on I could easily access them from a Google Colab instance.
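The preprocessing the dataset above relies on, turning a binary into a picture, can be sketched briefly. This is not the author's exact pipeline, just a minimal illustration of the idea: interpret a file's raw bytes as pixels of a grayscale image, which can then be fed to an image classifier such as a fastai CNN. The width of 16 is an arbitrary choice for illustration.

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 16) -> np.ndarray:
    """Reshape raw bytes into a (height, width) uint8 grayscale image."""
    arr = np.frombuffer(data, dtype=np.uint8)
    height = len(arr) // width            # drop any trailing partial row
    return arr[: height * width].reshape(height, width)

sample = bytes(range(256))                # stand-in for a malware binary
img = bytes_to_image(sample)              # a 16x16 image with values 0..255
```

Because malware from the same family tends to share byte-level structure, these images show recognizable textures, which is what makes an off-the-shelf image classifier a reasonable fit for the task.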

Deep Learning: The Beginnings


You may have noticed that I mentioned Artificial Intelligence, Machine Learning, and Deep Learning, and if you are new to these subjects, as I am, you may be a little bit confused. What I have learned from Andrew Ng's extraordinary AI For Everyone is that Artificial Intelligence is a huge set of tools for making computers behave smartly. Machine Learning is the biggest subset of these AI tools. And lastly, Deep Learning is a Machine Learning tool. AI and ML are such broad topics that there are even more tools within them.