Machine Learning


Roku leaves rivals in dust - claiming machine learning breakthrough - Rethink

#artificialintelligence

Roku blew past expectations as the US streaming company surpassed 30 million active users in Q2 2019 – comprehensively extending its dominance in the connected TV space. But while Roku's second-quarter results are a milestone for the company, they also point to strong tailwinds in a broader field – advertising. Revenue growth of 59% year on year to $250.1 million was driven primarily by advertising, as Roku more than doubled its monetized video ad impressions.


The School of the Tomorrow: How AI in Education Changes How We Learn

#artificialintelligence

We live in exponential times, and merely having a digital strategy focused on continuous innovation is no longer enough to thrive in a constantly changing world. To transform an organisation and contribute to building a secure and rewarding networked society, collaboration among employees, customers, business units and even things is increasingly key. Especially with the availability of new technologies such as artificial intelligence, organisations now, more than ever, need to focus on bringing the different stakeholders together to co-create the future. Big data empowers customers and employees, the Internet of Things creates vast amounts of data and connects devices, and artificial intelligence creates new human-machine interactions. In today's world, every organisation is a data organisation, and AI is required to make sense of it all.


Talk to Me: Nvidia Claims NLP Inference, Training Records

#artificialintelligence

Nvidia says it has achieved significant advances in conversational natural language processing (NLP) training and inference, enabling more complex, immediate-response interchanges between customers and chatbots. The company also says it has a new language training model in the works that dwarfs existing ones. Nvidia said its DGX-2 AI platform trained the BERT-Large AI language model in less than an hour and performed AI inference in 2 milliseconds, making "it possible for developers to use state-of-the-art language understanding for large-scale applications…." On training: running the largest version of the Bidirectional Encoder Representations from Transformers (BERT-Large) language model, an Nvidia DGX SuperPOD with 92 Nvidia DGX-2H systems running 1,472 V100 GPUs cut training from several days to 53 minutes; a single DGX-2 system trained BERT-Large in 2.8 days.
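For a rough sense of what single-query BERT inference looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library and the standard public BERT-Large checkpoint. This is not Nvidia's benchmark harness: the 2 ms figure quoted above comes from Nvidia's own optimized setup.

```python
# Minimal sketch: timing one BERT-Large inference pass with the Hugging Face
# transformers library. Not Nvidia's benchmark setup; expect far more than
# 2 ms on commodity hardware.
import time
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForSequenceClassification.from_pretrained("bert-large-uncased")
model.eval()

inputs = tokenizer("Where is the nearest store?", return_tensors="pt")
with torch.no_grad():
    start = time.perf_counter()
    model(**inputs)
    print(f"Inference took {(time.perf_counter() - start) * 1000:.1f} ms")
```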


Michigan Medicine makes AI, machine learning a top tech priority

#artificialintelligence

The academic medical center of the University of Michigan is leveraging investments in artificial intelligence, machine learning and advanced analytics to unlock the value of its health data. According to Andrew Rosenberg, MD, chief information officer for Michigan Medicine, the organization currently has 34 ongoing AI and machine learning projects, 28 of which have principal investigators. "There's a lot of collaboration around these projects--as there should be for the diversity of thought and background needed to deal with complex problems--working with at least seven other U of M schools," Rosenberg told the Machine Learning for Health Care conference on Friday in Ann Arbor, Mich. "That's one of the powers that we enjoy." One of the machine learning projects cited by Rosenberg leverages a combination of electronic health records, monitor data and analytics to predict acute hemodynamic instability--when blood flow drops and deprives the body of oxygen--which is one of the most common causes of death for critically ill or injured patients.
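Michigan Medicine's actual models are not described in detail, but as a purely hypothetical sketch of the general technique, predicting an instability label from monitor-style vital signs, here is a toy classifier on synthetic data:

```python
# Hypothetical sketch only: a toy instability classifier on synthetic
# vital-sign features (heart rate, mean arterial pressure, SpO2).
# This is not Michigan Medicine's model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(loc=[80, 85, 97], scale=[15, 12, 2], size=(5000, 3))
# Toy rule standing in for a chart-review label: low pressure combined
# with a high heart rate is flagged as unstable.
y = ((X[:, 1] < 75) & (X[:, 0] > 90)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
# Risk scores, not hard predictions, are what a bedside alert would use.
print(clf.predict_proba(X_test[:5])[:, 1])
```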


How to Implement Progressive Growing GAN Models in Keras

#artificialintelligence

The progressive growing generative adversarial network is an approach for training a deep convolutional neural network model for generating synthetic images. It is an extension of the more traditional GAN architecture that involves incrementally growing the size of the generated image during training, starting with a very small image, such as 4×4 pixels. This allows the stable training and growth of GAN models capable of generating very large high-quality images, such as images of synthetic celebrity faces with a size of 1024×1024 pixels. In this tutorial, you will discover how to develop progressive growing generative adversarial network models from scratch with Keras. Discover how to develop DCGANs, conditional GANs, Pix2Pix, CycleGANs, and more with Keras in my new GANs book, with 29 step-by-step tutorials and full source code.
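To make the idea concrete, here is a minimal sketch, with illustrative layer sizes of my own choosing rather than the tutorial's exact configuration, of growing a Keras generator from 4×4 to 8×8 output by stacking a new upsampling block onto the existing feature path:

```python
# Minimal sketch of progressive growing in Keras: start with a 4x4
# generator, then add an upsampling block to double the output size.
# Layer widths here are illustrative, not the tutorial's exact values.
from tensorflow.keras import Model
from tensorflow.keras.layers import (Conv2D, Dense, Input, LeakyReLU,
                                     Reshape, UpSampling2D)

def base_generator(latent_dim=100):
    z = Input(shape=(latent_dim,))
    x = Dense(128 * 4 * 4)(z)
    x = LeakyReLU(0.2)(Reshape((4, 4, 128))(x))
    rgb = Conv2D(3, 1, padding="same")(x)  # "to-RGB" output at 4x4
    return Model(z, rgb), Model(z, x)      # image model, feature model

def grow_generator(features):
    # Double the resolution, convolve, and attach a fresh to-RGB layer.
    x = UpSampling2D()(features.output)
    x = LeakyReLU(0.2)(Conv2D(128, 3, padding="same")(x))
    rgb = Conv2D(3, 1, padding="same")(x)
    return Model(features.input, rgb), Model(features.input, x)

gen_4x4, feats = base_generator()
gen_8x8, feats = grow_generator(feats)
print(gen_8x8.output_shape)  # (None, 8, 8, 3)
```

In a full implementation each new block is faded in gradually rather than switched on at once; a sketch of that blending step appears under the next item.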


A Gentle Introduction to the Progressive Growing GAN

#artificialintelligence

Progressive Growing GAN is an extension to the GAN training process that allows for the stable training of generator models that can output large high-quality images. It involves starting with a very small image and incrementally adding blocks of layers that increase the output size of the generator model and the input size of the discriminator model until the desired image size is achieved. This approach has proven effective at generating high-quality synthetic faces that are startlingly realistic. In this post, you will discover the progressive growing generative adversarial network for generating large images. Discover how to develop DCGANs, conditional GANs, Pix2Pix, CycleGANs, and more with Keras in my new GANs book, with 29 step-by-step tutorials and full source code.
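The key trick when a new block is added is to fade it in: the output is a weighted average of the old, upsampled image and the new block's image, with a weight alpha that ramps from 0 to 1 over training. Below is a minimal sketch of this blending as a custom Keras layer, following a pattern commonly used in Keras implementations of the paper; names and details here are illustrative.

```python
# Minimal sketch of the progressive-growing fade-in as a Keras layer:
# output = (1 - alpha) * old_branch + alpha * new_branch, where alpha
# is ramped from 0 to 1 while the new block trains.
from tensorflow.keras import backend
from tensorflow.keras.layers import Add

class WeightedSum(Add):
    def __init__(self, alpha=0.0, **kwargs):
        super().__init__(**kwargs)
        # Non-trainable blend weight, updated manually during training.
        self.alpha = backend.variable(alpha, name="ws_alpha")

    def _merge_function(self, inputs):
        assert len(inputs) == 2  # expects [old_branch, new_branch]
        return (1.0 - self.alpha) * inputs[0] + self.alpha * inputs[1]

# During training, ramp the weight toward 1.0, e.g.:
# backend.set_value(layer.alpha, step / float(total_steps))
```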


A 2019 Guide to Object Detection

#artificialintelligence

Object detection is a computer vision technique whose aim is to detect objects such as cars, buildings, and human beings, just to mention a few. The objects can generally be identified from either pictures or video feeds. Object detection has been applied widely in video surveillance, self-driving cars, and object/people tracking. In this piece, we'll look at the basics of object detection and review some of the most commonly-used algorithms and a few brand new approaches, as well. Object detection locates the presence of an object in an image and draws a bounding box around that object.
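Since everything in object detection revolves around bounding boxes, one small, standard computation is worth showing: intersection over union (IoU), which scores how well a predicted box matches a ground-truth box. A minimal, detector-agnostic sketch:

```python
# Intersection over union (IoU) for two boxes given as
# (x_min, y_min, x_max, y_max); 0.0 means no overlap, 1.0 a perfect match.
def iou(box_a, box_b):
    # Corners of the intersection rectangle.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.14
```

Detectors typically count a prediction as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.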


DeepMind's Losses and the Future of Artificial Intelligence

#artificialintelligence

Alphabet's DeepMind lost $572 million last year. DeepMind, likely the world's largest research-focused artificial intelligence operation, is losing a lot of money fast, more than $1 billion in the past three years. DeepMind also has more than $1 billion in debt due in the next 12 months. Does this mean that AI is falling apart? Gary Marcus is founder and CEO of Robust.AI and a professor of psychology and neural science at NYU.


The top 25 cities for recruiting skilled artificial intelligence talent

#artificialintelligence

The rapid rise of artificial intelligence adoption is disrupting many industries, most notably the banking and financial services sector. The many individual technologies that are included under the umbrella title of artificial intelligence are promising great opportunities for business efficiencies, but at the same time, creating new challenges. One of these new challenges is determining the best location to relocate or expand offices and service centers of banks, insurance companies, investment houses and other financial services companies that collectively are leading the national charge of incorporating AI technologies into their day-to-day operations. Not surprisingly, with the growing need for the latest in AI skillsets and the academic resources for recruiting and retraining of displaced workers, relocation firms such as Princeton, NJ-based The Boyd Co. are targeting those North American cities that offer superior academic programs in artificial intelligence. Indeed, the ability to provide an artificial intelligence workforce is emerging as a new site selection driver – and one that Boyd expects to soon extend well beyond the financial services sector.


Robotic Process Automation (RPA) vs. AI, explained

#artificialintelligence

The expanding universe of artificial intelligence includes many terms and technologies. That naturally leads to overlap and confusion. AI and machine learning are mentioned together so often that some people – non-technical folks especially – might think they're one and the same. They're related but not actually interchangeable terms: Machine learning is a subset, or a specific discipline, of AI. Start adding other terms and technologies into the mix – deep learning is yet another subset of machine learning, for instance – and the opportunities abound for further misconceptions.