Deep Learning


Learn Python machine learning with these essential books and online courses

#artificialintelligence

Teaching yourself Python machine learning can be a daunting task if you don't know where to start. Fortunately, there are plenty of good introductory books and online courses that teach you the basics. It is the advanced books, however, that teach you the skills you need to decide which algorithm best solves a given problem and which direction to take when tuning hyperparameters. A while ago, I was introduced to Machine Learning Algorithms, Second Edition by Giuseppe Bonaccorso, a book that almost falls into the latter category. While the title sounds like another introductory book on machine learning algorithms, the content is anything but.


Tensorflow 2.0: Deep Learning and Artificial Intelligence

#artificialintelligence

It's been nearly four years since TensorFlow was released, and the library has evolved to its official second version. TensorFlow is Google's library for deep learning and artificial intelligence, and the world's most popular deep learning library; Google's parent Alphabet recently became the most cash-rich company in the world (just a few days before I wrote this). It is the library of choice for many companies doing AI and machine learning. In other words, if you want to do deep learning, you've got to know TensorFlow.
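To give a flavor of what TensorFlow 2.0 code looks like, here is a minimal sketch of the Keras workflow the library centers on; the dataset choice and hyperparameters are illustrative assumptions, not taken from the course:

```python
import tensorflow as tf

# Minimal illustrative sketch of the TF 2.0 Keras workflow; the dataset
# (MNIST) and all hyperparameters are assumptions, not from the article.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small fully connected classifier, defined in TF 2.0's eager style.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```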


How to use human and artificial intelligence with digital twins

#artificialintelligence

Human intelligence has been creating and maintaining complex systems since the beginnings of civilization. In modern times, digital twins have emerged to aid the operation of complex systems, as well as to improve design and production. Artificial intelligence (AI) and extended reality (XR), including augmented reality (AR) and virtual reality (VR), have emerged as tools that can help manage operations for complex systems. Digital twins can be enhanced with AI, and emerging user interface (UI) technologies like XR can improve people's ability to manage complex systems via digital twins.

Digital twins can marry human and artificial intelligence to produce something far greater by creating a usable representation of complex systems. End users do not need to worry about the formulas that go into machine learning (ML), predictive modeling and artificially intelligent systems, but can still capitalize on their power as an extension of their own knowledge and abilities. Digital twins combined with AR, VR and related technologies provide a framework to overlay intelligent decision making onto day-to-day operations, as shown in Figure 1.

Figure 1: A digital twin can be enhanced with artificial intelligence (AI) and intelligent-realities user interfaces, such as extended reality (XR), which includes augmented reality (AR) and virtual reality (VR).

The operations of a physical twin can be digitized by sensors, cameras and other such devices, but those digital streams are not the only sources of data that can feed the digital twin. In addition to streaming data, accumulated historical data can inform a digital twin. Relevant data could include data not generated by the asset itself, such as weather and business-cycle data. Also, computer-aided design (CAD) drawings and other documentation can help the digital twin provide context.
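As a rough illustration of that data-fusion idea, here is a hedged Python sketch of a digital twin that merges live sensor readings with historical and external context before handing them to a predictive model. Every class, field and function name here is hypothetical, invented only to mirror the article's list of data sources:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a digital twin's data fusion; none of these names
# come from the article -- they only mirror its description of data sources.
@dataclass
class DigitalTwin:
    asset_id: str
    history: list = field(default_factory=list)   # accumulated past readings
    context: dict = field(default_factory=dict)   # e.g. weather, CAD metadata
    predictor: Callable = lambda features: None   # plug-in ML model

    def ingest(self, sensor_reading: dict) -> None:
        """Digitize one reading from the physical twin's sensors/cameras."""
        self.history.append(sensor_reading)

    def advise(self) -> object:
        """Combine streaming, historical, and external data for the model."""
        features = {
            "latest": self.history[-1] if self.history else {},
            "history_len": len(self.history),
            **self.context,
        }
        return self.predictor(features)

# Usage: feed a stream plus external context, then ask for a recommendation.
twin = DigitalTwin("pump-17", context={"weather": "hot"},
                   predictor=lambda f: "inspect" if f["history_len"] > 2 else "ok")
twin.ingest({"temp_c": 71.0})
print(twin.advise())
```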


Deep Learning 101 -- Role of Deep Learning in Artificial Intelligence

#artificialintelligence

Over the last decade or so, developments in information technology have been propelled by advances in artificial intelligence and machine learning. Recently, there has been a healthy debate about the potential advantages and disadvantages of these technologies between two powerhouses -- Elon Musk of Tesla and Mark Zuckerberg. While the media jumps on the bandwagon, it is important to understand some basic concepts of AI, ML and Deep Learning to get a better sense of what they do and how they can be useful. Refer to the picture below to get a better sense of the relationship between AI, ML and Deep Learning, and of how artificial neural networks work. How does Deep Learning work?
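To make "how does Deep Learning work" slightly more concrete, here is a hedged NumPy sketch of a single forward pass through a tiny artificial neural network; the layer sizes and random weights are arbitrary illustrations, not from the article:

```python
import numpy as np

# A toy artificial neural network: input -> hidden layer -> output.
# All sizes and the random weights are illustrative assumptions.
rng = np.random.default_rng(0)

x = rng.normal(size=3)            # one input example with 3 features
W1 = rng.normal(size=(4, 3))      # hidden layer: 4 neurons, 3 inputs each
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))      # output layer: 1 neuron
b2 = np.zeros(1)

hidden = np.maximum(0, W1 @ x + b1)   # weighted sum + ReLU nonlinearity
output = W2 @ hidden + b2             # final prediction

print(output)
```

Training ("deep learning") then consists of nudging W1, b1, W2 and b2 to reduce a loss over many such examples.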


Artificial Intelligence- All you need to know in layman terms

#artificialintelligence

We as humans may have often wondered whether human intelligence can be copied and machines can work the same way as us. While it is still a distant dream, we are not very far away. On the path to artificial intelligence, let's have an overview of what it really means and how data science is helping us achieve it. A) Artificial Intelligence: It is an important science that actually helps in daily activities nowadays. The end goal of any machine learning or deep learning algorithm is achieving artificial intelligence.


Why Deep Learning Works Even Though It Shouldn't

#artificialintelligence

This is a big question, and I'm not a particularly big person. As such, these are all likely to be obvious observations to someone deep in the literature and theory. What I find, however, is that there is a base of unspoken intuitions underlying expert understanding of a field that are never directly stated in the literature, because they can't be easily proved with the rigor the literature demands. As a result, the insights exist only in conversation and subtext, which makes them inaccessible to the casual reader. Because I have no need of rigor to post on the internet (or even a need to be correct), I'm going to post some of those intuitions here as I understand them.


Is Artificial Intelligence Closer to Common Sense?

#artificialintelligence

Artificial intelligence researchers have not been successful in giving intelligent agents the common-sense knowledge they need to reason about the world. Without this knowledge, it is impossible for intelligent agents to truly interact with the world. Traditionally, there have been two unsuccessful approaches to getting computers to reason about the world--symbolic logic and deep learning. A new project, called COMET, tries to bring these two approaches together. Although it has not yet succeeded, it offers the possibility of progress.


With deep learning algorithms, standard CT technology produces spectral images

#artificialintelligence

In research published today in Patterns, a team of engineers led by Wang demonstrated how a deep learning algorithm can be applied to a conventional computed tomography (CT) scan to produce images that would typically require a higher level of imaging technology known as dual-energy CT. Wenxiang Cong, a research scientist at Rensselaer, is first author on the paper. Wang and Cong were also joined by coauthors from Shanghai First-Imaging Tech and by researchers from GE Research. "We hope that this technique will help extract more information from a regular single-spectrum X-ray CT scan, make it more quantitative, and improve diagnosis," said Wang, who is also the director of the Biomedical Imaging Center within the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer. Conventional CT scans produce images that show the shape of tissues within the body, but they don't give doctors sufficient information about the composition of those tissues.
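The article doesn't describe the paper's actual architecture, but the general idea of an image-to-image deep network that maps a single-spectrum CT image to two spectral channels (as dual-energy CT would provide) can be sketched roughly like this. This is a hypothetical stand-in, not the authors' model:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the kind of image-to-image network the article
# alludes to: it maps one single-spectrum CT slice to two spectral channels.
# This is NOT the authors' architecture, just the general shape of the idea.
class SpectralMapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, kernel_size=3, padding=1),  # low/high-energy maps
        )

    def forward(self, ct_slice):
        return self.net(ct_slice)

# One fake 512x512 single-spectrum slice -> two predicted spectral images.
model = SpectralMapper()
fake_slice = torch.randn(1, 1, 512, 512)
low_e, high_e = model(fake_slice).split(1, dim=1)
print(low_e.shape, high_e.shape)  # torch.Size([1, 1, 512, 512]) each
```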


Janggu makes deep learning a breeze

#artificialintelligence

Imagine that before you could make dinner, you first had to rebuild the kitchen, specifically designed for each recipe. You'd spend far more time on preparation than on actually cooking. For computational biologists, analyzing genomics data has been a similarly time-consuming process. Before they can even begin their analysis, they spend a lot of valuable time formatting and preparing huge data sets to feed into deep learning models. To streamline this process, researchers from the Max Delbrueck Center for Molecular Medicine in the Helmholtz Association (MDC) developed a universal programming tool that converts a wide variety of genomics data into the format deep learning models require.
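To see what that formatting burden looks like, here is a plain-NumPy sketch of the kind of conversion such tools automate: turning raw DNA sequences into the one-hot numeric arrays a deep learning model expects. This is not Janggu's actual API, only the underlying idea:

```python
import numpy as np

# Plain-NumPy illustration of the formatting step tools like Janggu
# automate; this is not Janggu's API, just the underlying idea.
BASES = "ACGT"

def one_hot_encode(seq: str) -> np.ndarray:
    """Convert a DNA string into a (length, 4) one-hot matrix."""
    encoding = np.zeros((len(seq), len(BASES)), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in BASES:                      # unknown bases stay all-zero
            encoding[i, BASES.index(base)] = 1.0
    return encoding

batch = np.stack([one_hot_encode(s) for s in ["ACGTAC", "TTGACA"]])
print(batch.shape)  # (2, 6, 4): ready to feed a deep learning model
```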


RStudio AI Blog: Classifying images with torch

#artificialintelligence

In recent posts, we've been exploring essential torch functionality: tensors, the sine qua non of every deep learning framework; autograd, torch's implementation of reverse-mode automatic differentiation; modules, composable building blocks of neural networks; and optimizers, the – well – optimization algorithms that torch provides. But we haven't really had our "hello world" moment yet, at least not if by "hello world" you mean the inevitable deep learning experience of classifying pets. We'll distinguish ourselves by asking a (slightly) different question: what kind of bird? Along the way, we'll see how to apply transforms, both for image preprocessing and data augmentation, and how to use ResNet (He et al. 2015), a pre-trained model that comes with torchvision, for transfer learning.
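The post itself uses torch for R; as a rough Python analogue of the same recipe, here is a hedged transfer-learning sketch with torchvision. The dataset path, folder layout and hyperparameters are placeholder assumptions:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Rough Python analogue of the post's R torch recipe; dataset path and
# hyperparameters are placeholder assumptions, not from the post.
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224),          # data augmentation
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])

# Assumes an ImageFolder layout: bird_images/train/<class_name>/<image>.jpg
train_ds = datasets.ImageFolder("bird_images/train", transform=train_tfms)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Transfer learning: freeze the pre-trained ResNet, replace its classifier.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:               # one epoch, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Only the new final layer is trained here, which is what makes transfer learning on a small bird dataset feasible.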