deep learning


Top 10 Books on Artificial Intelligence You Cannot Afford to Miss - Analytics Insight

#artificialintelligence

Artificial Intelligence is the need of the hour. The technology is neither elementary school math nor a rocket-science application. Understanding AI not only allows business decision makers and enthusiasts to advance their technology but also lets them improve their processes. Another term doing the rounds is artificial general intelligence (AGI), which refers to human-level cognitive ability that lets automated systems think and work like a human mind. So how do you benefit from AI and the latest advancements surrounding it?


Get a grip on neural networks, R, Python, TensorFlow, deployment of AI, and much more, at our MCubed workshops

#artificialintelligence

Event You know that you could achieve great things if only you had time to get to grips with TensorFlow, or mine a vast pile of text, or simply introduce machine learning into your existing workflow. That's why at our artificial-intelligence conference MCubed, which runs from September 30 to October 2, we have a quartet of all-day workshops that will take you deep into key technologies and show you how to apply them in your own organisation. Prof Mark Whitehorn and Kate Kilgour will dive deep into machine learning and neural networks, from perceptrons through convolutional neural networks (CNNs) and autoencoders to generative adversarial networks. If you want to get more specific, Oliver Zeigermann returns to MCubed with his workshop on Deep Learning with TensorFlow 2. This session will cover neural networks, CNNs and recurrent neural networks, using TensorFlow 2 and Python to show you how to develop and train your own neural networks. One problem many of us face is making sense of a mountain of text.
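For a flavour of one workshop topic, here is a minimal autoencoder written against TensorFlow 2's Keras API; the dataset (MNIST) and the layer sizes are illustrative assumptions, not the workshop's material.

import tensorflow as tf

# Load MNIST and flatten the 28x28 images into 784-dimensional vectors in [0, 1].
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# A deliberately small autoencoder: compress each image to 64 values, then reconstruct it.
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(64, activation="relu"),       # encoder
    tf.keras.layers.Dense(784, activation="sigmoid"),   # decoder
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# The network is trained to reproduce its own input.
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))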


Advanced Topics in Deep Convolutional Neural Networks

#artificialintelligence

Throughout this article, I will discuss some of the more complex aspects of convolutional neural networks and how they relate to specific tasks such as object detection and facial recognition. This article is a natural extension of my article titled 'Simple Introductions to Neural Networks'. I recommend reading that first if you are not well versed in the idea and function of convolutional neural networks. Due to the excessive length of the original article, I have decided to leave out several topics related to object detection and facial recognition systems, as well as some of the more esoteric network architectures and practices currently being trialed in the research literature. I will likely discuss these in a future article focused more specifically on applying deep learning to computer vision.


r/MachineLearning - [P] Clickstream based user intent prediction with LSTMs and CNNs

#artificialintelligence

I also did some experimentation with GRUs and LSTMs in an NLP context, where I saw LSTMs performing better than GRUs, although they need more training time. Honestly, I never tried fully variable-length sequences because of the restriction that each batch must have the same length, and some layers are not usable if you have variable sequences. I don't think the difference would be huge, at least on my data. I experimented with different sequence lengths (100, 200, 250, 400, 500), and 400 and 500 did not perform better than 250. I did achieve a noticeable performance improvement with embeddings instead of one-hot encoding, as sketched below.
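As a rough sketch of that setup (not the poster's actual code), a fixed-length sequence model with an embedding layer in Keras might look like the following; the vocabulary size, embedding dimension and number of intent classes are assumed values.

import tensorflow as tf

NUM_EVENT_TYPES = 5000   # assumed clickstream event vocabulary size (index 0 reserved for padding)
SEQ_LEN = 250            # the sequence length that worked best in the post
NUM_INTENTS = 4          # assumed number of user-intent classes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(NUM_EVENT_TYPES, 64, mask_zero=True),  # learned embeddings instead of one-hot vectors
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(NUM_INTENTS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Shorter clickstreams are zero-padded so every batch has the same length:
# padded = tf.keras.utils.pad_sequences(sequences, maxlen=SEQ_LEN)
# model.fit(padded, labels, epochs=3, batch_size=64)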


Deep learning model from Lockheed Martin tackles satellite image analysis

#artificialintelligence

The model, Global Automated Target Recognition (GATR), runs in the cloud, using Maxar Technologies' Geospatial Big Data platform (GBDX) to access Maxar's 100-petabyte satellite imagery library and millions of curated data labels across dozens of categories that expedite the training of deep learning algorithms. Fast GPUs enable GATR to scan a large area very quickly, while deep learning methods automate object recognition and reduce the need for extensive algorithm training. The tool teaches itself what the identifying characteristics of an object or target are, for example learning how to distinguish between a cargo plane and a military transport jet. The system then scales quickly to scan large areas, such as entire countries. GATR uses common deep learning techniques found in the commercial sector and can identify airplanes, ships, buildings, seaports, and more. "There's more commercial satellite data than ever available today, and up until now, identifying objects has been a largely manual process," says Maria Demaree, vice president and general manager of Lockheed Martin Space Mission Solutions.
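GATR itself is proprietary, so the following is only a rough sketch of the general approach the article describes: fine-tuning a pretrained network on labeled satellite image chips so it learns the identifying characteristics of each object class. The directory path, image size, backbone choice and class layout are assumptions, not details of GATR.

import tensorflow as tf

# Hypothetical dataset: one sub-directory of image chips per class,
# e.g. cargo_plane/, military_transport/, ship/, building/ ...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chips/train", image_size=(224, 224), batch_size=32)

# Reuse generic visual features from a network pretrained on ImageNet and
# train only a small classification head on the satellite imagery.
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)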


The Future Of AI And Analytics Lies In Helping Small Businesses And Verticals Leverage The Cloud

#artificialintelligence

AI and analytics have transformed nearly every corner of industry, helping businesses innovate, become more efficient, and pioneer entirely new application areas and product lines. At the same time, the greatest beneficiaries of these advances have often been larger companies that can afford to hire the specialized expertise needed to fully harness them. In contrast, small and medium-sized businesses, and those in non-traditional industries, have struggled to integrate these technologies, with their overtaxed technical staff focused on mundane IT issues such as desktop upgrades and on higher-priority tasks like shoring up cybersecurity. Cloud companies are moving rapidly to help these businesses through a wealth of new APIs and tools that don't require any deep learning or advanced analytics experience. The future of the cloud lies in analytics.


Toward artificial intelligence that learns to write code

#artificialintelligence

Learning to code involves recognizing how to structure a program, and how to fill in every last detail correctly. No wonder it can be so frustrating. A new program-writing AI, SketchAdapt, offers a way out. Trained on tens of thousands of program examples, SketchAdapt learns how to compose short, high-level programs, while letting a second set of algorithms find the right sub-programs to fill in the details. Unlike similar approaches for automated program-writing, SketchAdapt knows when to switch from statistical pattern-matching to a less efficient, but more versatile, symbolic reasoning mode to fill in the gaps.
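As a toy illustration of that two-stage idea (and not the SketchAdapt system itself), the snippet below fixes a high-level sketch with two holes and uses a brute-force enumerative search, standing in for the symbolic reasoning step, to fill the holes so the program matches the given input/output examples. The candidate primitives and examples are invented for illustration.

from itertools import product

# Candidate sub-programs the search may plug into each hole.
CANDIDATES = {
    "f": [("double", lambda x: 2 * x), ("square", lambda x: x * x), ("negate", lambda x: -x)],
    "agg": [("sum", sum), ("max", max), ("min", min)],
}

def sketch(f, agg, xs):
    # The high-level structure is fixed in advance: aggregate a per-element transform.
    return agg(f(x) for x in xs)

def fill_holes(examples):
    # Enumerate every combination of hole fillers and return the first program
    # text that satisfies all input/output examples.
    for (f_name, f), (agg_name, agg) in product(CANDIDATES["f"], CANDIDATES["agg"]):
        if all(sketch(f, agg, xs) == y for xs, y in examples):
            return f"{agg_name}({f_name}(x) for x in xs)"
    return None

# The examples below are consistent with summing the doubled elements.
print(fill_holes([([1, 2, 3], 12), ([4], 8)]))  # -> sum(double(x) for x in xs)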


Adobe unveils new AI that can detect if an image has been 'deepfaked'

Daily Mail - Science & tech

Adobe researchers have developed an AI tool that could make spotting 'deepfakes' a whole lot easier. The tool is able to detect edits to images, including those that might go unnoticed by the naked eye, especially in doctored deepfake videos. It comes as deepfake videos, which use deep learning to digitally splice fake audio onto the mouth of someone talking, continue to be on the rise. Deepfakes are so named because they utilise deep learning, a form of artificial intelligence, to create fake videos.


How a Japanese cucumber farmer is using deep learning and TensorFlow - Google Cloud Blog

#artificialintelligence

Using deep learning for image recognition allows a computer to learn from a training data set what the important "features" of the images are. By using a hierarchy of numerous artificial neurons, deep learning can automatically classify images with a high degree of accuracy. Thus, neural networks can recognize different species of cats, or models of cars or airplanes from images. Sometimes neural networks can exceed the performance of the human eye for certain applications.
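As a minimal sketch of that approach (not the farmer's actual sorting system), a small convolutional network in TensorFlow/Keras can learn such features from a labeled training set; the hypothetical directory of cucumber images sorted into one folder per grade, the image size and the layer sizes are all assumptions.

import tensorflow as tf

# Hypothetical dataset: cucumbers/train/<grade_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cucumbers/train", image_size=(64, 64), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # low-level features: edges, texture
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # higher-level features: shape, curvature
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)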


IBC 2018: Convergence and deep learning - postPerspective

#artificialintelligence

In the 20 years I've been traveling to IBC, I've tried to seek out new technology, work practices and trends that could benefit my clients and help them be more competitive. One thing that is perennially exciting about this industry is the rapid pace of change. Certainly, from a post-production point of view, there is a mini revolution every three years or so. In the past, those revolutions have increased image quality or the efficiency of making those images. The current revolution is to leverage the power and flexibility of cloud computing.