
#006 TF 2.0 An implementation of a Shallow Neural Network in tf.keras - Moons dataset - Master Data Science

#artificialintelligence

In this post we will learn how to classify the Moons dataset with a shallow neural network. We will implement the network in TensorFlow 2.0 using the Keras API. With the following code we are going to import all the libraries that we will need. First, we will generate a random dataset, then we will split it into train and test sets. We will also print the dimensions of these datasets.
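
The post's own code is not reproduced here, but a minimal sketch of the described workflow might look like the following, assuming scikit-learn's make_moons for data generation and tf.keras for the model; the layer sizes and hyperparameters are illustrative, not the article's.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
import tensorflow as tf

# Generate a random two-class "moons" dataset
X, y = make_moons(n_samples=1000, noise=0.2, random_state=42)

# Split into train and test sets and print their dimensions
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)

# A shallow network: a single hidden layer followed by a sigmoid output
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))
```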


Building Your First Network in PyTorch

#artificialintelligence

Does starting a deep learning project sound scary and difficult? I have read articles, taken lessons, and watched videos about neural networks, but how do I begin programming one? We have all been through that stage, and this is why I am writing this article to tell you everything (or at least most of what I know) to begin your PyTorch model training project. The guide is presented in a bottom-up way. I will first describe the individual components that are important for training a deep network, then provide examples of how to combine all the components for training and testing.
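
As a taste of that bottom-up approach, here is a minimal sketch of the components such a guide typically assembles: a model, a loss function, an optimizer, and a training loop. The toy data and layer sizes are invented for illustration.

```python
import torch
import torch.nn as nn

# Toy data: 100 samples with 10 features each, binary labels (invented)
X = torch.randn(100, 10)
y = torch.randint(0, 2, (100,)).float()

# The basic components: model, loss function, optimizer
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# The training loop ties the components together
for epoch in range(20):
    optimizer.zero_grad()          # reset accumulated gradients
    logits = model(X).squeeze(1)   # forward pass
    loss = criterion(logits, y)    # compute the loss
    loss.backward()                # backpropagate
    optimizer.step()               # update the parameters
```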


Object Detection Explained: YOLO v3.

#artificialintelligence

I am finally writing this article on YOLO v3. There were only a few improvements over YOLO v2, but they are important ones. Overall, the network is somewhat bigger and more accurate. However, it is still fast: at 320 x 320 it runs in 22 ms at 28.2 mAP, making it as accurate as SSD yet three times faster. Here I am focusing only on the differences and improvements made upon YOLO v2.


Illustrated Differences between MLP and Transformers for Tensor Reshaping

#artificialintelligence

When designing neural networks, we are often faced with the need for tensor reshaping: the spatial shape of a tensor has to be altered by a certain layer so that it can fit the downstream layers. Like the special wedge-shaped Lego blocks with differently shaped top and bottom surfaces, we also need adaptor blocks in neural networks. The most common way to change the shape of a tensor is through pooling or strided convolution (convolution with a non-unit stride). For example, in computer vision, we can use pooling or strided convolution to change the spatial dimensions of an input from H x W to H/2 x W/2, or even to an asymmetric H/4 x W/8.
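
A quick sketch of the two reshaping ops just mentioned, written in PyTorch for illustration (the article's own examples may differ); both halve the H and W of an input tensor.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)  # (batch, channels, H, W)

pool = nn.MaxPool2d(kernel_size=2)  # pooling halves H and W
strided = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)  # strided convolution

print(pool(x).shape)     # torch.Size([1, 16, 16, 16])
print(strided(x).shape)  # torch.Size([1, 16, 16, 16])
```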


PyTorch 1.10.0 Now Available

#artificialintelligence

PyTorch is a widely used, open-source deep learning platform for easily writing neural network layers in Python, enabling a seamless workflow from research to production. Based on Torch, PyTorch has become a powerful machine learning framework favored by esteemed researchers around the world, and is now fully adopted by Facebook. The new PyTorch 1.10.0 release is composed of over 3,400 commits since 1.9, made by 426 contributors. The PyTorch 1.10 updates focus on improving the training and performance of PyTorch, as well as developer usability. You can check the blog post that shows the new features here.


Pixel 6 and 6 Pro hands-on: Google's return to premium phones

Engadget

The Pixel 6 and 6 Pro are finally here, and they're the most promising phones from Google in years. We've already seen plenty of pictures and videos of the Pixel 6, but now we actually have devices to play with and detailed specs to share. One of the highlights of the Pixel 6 line is the cameras, which received not only a processing boost thanks to Tensor but also a serious hardware upgrade. Additionally, these handsets bring faster-refreshing screens, Android 12-exclusive features, and significant voice recognition enhancements. But the best thing about the Pixel 6 and 6 Pro is the reasonable price.


The Ultimate Guide To PyTorch

#artificialintelligence

With the rise in technological advancements in the field of artificial neural networks, several libraries have emerged for solving and computing modern deep learning tasks. In my previous articles, I have covered some other deep learning frameworks, such as TensorFlow and Keras, in detail. Viewers who are new to this topic are encouraged to check out the following link for TensorFlow and this particular link for Keras. In this article, we will cover another spectacular deep learning framework, PyTorch, which is also widely used for performing a variety of complex tasks. Since its release in September 2016, PyTorch has offered stiff competition to TensorFlow thanks to its Pythonic coding style and, in some cases, comparatively simpler coding methodologies. For starters, we will get accustomed to PyTorch with a basic introduction.
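
Since the article starts with a basic introduction, here is a minimal, illustrative sketch of PyTorch's two core ideas, tensors and autograd; the values are arbitrary and not taken from the guide.

```python
import torch

# Tensors are the core data structure
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
b = torch.ones(2, 2)

# Autograd tracks operations and computes gradients automatically
c = (a * b).sum()
c.backward()
print(a.grad)  # d(c)/d(a): a 2x2 tensor of ones
```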


What's so great about Google Tensor? The new Pixel 6 chip, explained

Mashable

To be more specific, Google is developing something called the Tensor chip for the Pixel 6 phones. There are still plenty of important details about Tensor that Google probably won't reveal until the Oct. 19 Pixel 6 launch event -- such as which companies are providing which exact components for it -- but we can use what little we have to paint a picture of what this means for the future of Pixel. In technical terms, Tensor is a new system on a chip (or SoC) that will power the Pixel 6 and Pixel 6 Pro phones. You're probably wondering what the heck an SoC is. This is actually the rare tech term that's somewhat self-explanatory, as an SoC is a group of the essential components that make up a computing system (like CPU, GPU, and RAM) packed together into a silicon chip.


Squeeze and Excitation Networks -- Idiot Developer

#artificialintelligence

Convolutional Neural Networks (CNNs) have been widely used in the field of computer vision and visual perception to solve multiple tasks such as image classification, semantic segmentation, and many more. However, there is a need for approaches that can further improve their performance. One such approach is to add an attention mechanism to an existing CNN architecture. The Squeeze and Excitation Network (SENet) is one such attention mechanism, widely used for performance improvements. In this article, we are going to learn more about Squeeze and Excitation Networks, how they work, and how they help to improve performance. The squeeze-and-excitation attention mechanism was introduced in 2018 by Hu et al. in their paper "Squeeze-and-Excitation Networks" at CVPR 2018, with a journal version in TPAMI.
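
A minimal sketch of a squeeze-and-excitation block as described: squeeze via global average pooling, excitation via a small bottleneck MLP, then channel-wise rescaling. The reduction ratio of 16 is the paper's common default; this is not the authors' reference code.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))           # squeeze: global average pooling
        w = self.fc(s).view(b, c, 1, 1)  # excitation: per-channel weights
        return x * w                     # rescale the input feature maps

x = torch.randn(2, 64, 8, 8)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 8, 8])
```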


ResNeXt Explained, Part 1

#artificialintelligence

In this two-part series, we are going to review ResNeXt, a network best explained as a marriage of VGG, ResNet, and Inception: it is composed of a repeated block (as in VGG) that aggregates a set of transforms (like Inception) while keeping residual connections (from ResNet). It is the backbone of many state-of-the-art networks such as NFNet and has proven to be a fast yet accurate option for many vision tasks, from object detection to segmentation. Audience: beyond elementary CNN concepts, not much is needed, and helpful articles will be linked in case you need a refresher or are unfamiliar with some notions. Without further ado, let's get coding! Before we delve into the details of a ResNeXt block, we should look at how such blocks are orchestrated to form ResNeXt.
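
For orientation, here is a rough sketch of a single ResNeXt block in its grouped-convolution form, which the ResNeXt paper shows to be equivalent to aggregating parallel transforms; cardinality 32 and bottleneck width 4 follow the common ResNeXt-50 configuration, and this is not the article's own code.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    def __init__(self, channels, cardinality=32, bottleneck_width=4):
        super().__init__()
        width = cardinality * bottleneck_width  # 32 * 4 = 128
        self.branch = nn.Sequential(
            nn.Conv2d(channels, width, 1, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            # the grouped 3x3 conv realizes the 32 aggregated transforms
            nn.Conv2d(width, width, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.branch(x))  # residual connection

x = torch.randn(1, 256, 14, 14)
print(ResNeXtBlock(256)(x).shape)  # torch.Size([1, 256, 14, 14])
```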