Deep Learning


DeepMind co-founder Mustafa Suleyman departs Google

#artificialintelligence

DeepMind co-founder Mustafa Suleyman has departed Google after an eight-year stint at the company. Suleyman co-founded AI giant DeepMind alongside Demis Hassabis and Shane Legg in 2010, before it was acquired by Google in 2014 for $500 million. DeepMind has become something of an AI darling and has repeatedly made headlines for creating neural networks that have beaten human players in a range of games; its AlphaGo even beat Go world champion Lee Sedol in a five-game match. Suleyman moved from DeepMind to Google in 2019 and was most recently the company's vice president of AI product management and policy.


Meta's new learning algorithm can teach AI to multi-task

#artificialintelligence

If you can recognize a dog by sight, then you can probably recognize a dog when it is described to you in words. Deep neural networks have become very good at identifying objects in photos and conversing in natural language, but not at the same time: there are AI models that excel at one or the other, but not both. Part of the problem is that these models learn different skills using different techniques. This is a major obstacle for the development of more general-purpose AI, machines that can multi-task and adapt. It also means that advances in deep learning for one skill often do not transfer to others.


Top 5 Machine learning models 2021

#artificialintelligence

This year has been full of great models. In this article, my hope is to highlight 10 of the most noteworthy ones. I have been regularly reviewing papers and explaining them over this year, and I think I have quite a few good mentions. Disclaimer: there might be other good models not mentioned here, and I am not claiming to be the ultimate expert when it comes to evaluating the quality of machine learning models! Also, note that this list isn't ordered!


PyTorch quick reference -- Tensors

#artificialintelligence

This blog is part of the Torch Thursdays series. The plan is to share some tidbits on PyTorch usage through this series. Today's post in particular shares some quick notes based on PyTorch's video tutorial on tensors. This blog assumes familiarity with the PyTorch framework and NumPy. PyTorch provides torch.Tensor to represent a multi-dimensional array containing elements of a single data type.
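
A minimal sketch of the tensor basics the tutorial covers (the specific values are only illustrative):

import numpy as np
import torch

# Create tensors from Python data and from factory functions.
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])   # 2x2 tensor, inferred dtype float32
zeros = torch.zeros(2, 3)                    # 2x3 tensor filled with 0.0
rand = torch.rand(2, 2)                      # 2x2 tensor of uniform random values

# Every tensor has a single dtype, a shape, and a device.
print(x.dtype, x.shape, x.device)            # torch.float32 torch.Size([2, 2]) cpu

# Element-wise and matrix operations.
y = x + rand      # element-wise addition
z = x @ x         # matrix multiplication

# Convert to and from NumPy (on CPU the two share memory).
a = np.ones((2, 2), dtype=np.float32)
t = torch.from_numpy(a)
back = t.numpy()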


Top Emerging Computer Vision Trends For 2022

#artificialintelligence

The purpose of Computer Vision (CV) is to allow machines to obtain valuable information from their surroundings by analyzing visual data provided by different sources, such as digital images and videos. The nature of that information depends on the final goal of the machine. Think, for example, of self-driving cars. A CV module capable of detecting, in real time, objects that appear in front of the car is essential to avoid accidents. On the other hand, a robot that has to give directions to people inside a railway station can change the way it speaks based on whether the listener is a child or an adult.
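
As a rough illustration of the object-detection use case, here is a minimal sketch using a COCO-pretrained detector from torchvision; the image path is a placeholder, and the weights argument assumes torchvision 0.13 or newer:

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Load a detector pre-trained on COCO and switch to inference mode.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "street_scene.jpg" is a hypothetical frame from the car's front camera.
img = convert_image_dtype(read_image("street_scene.jpg"), dtype=torch.float32)

with torch.no_grad():
    predictions = model([img])   # list with one dict per input image

# Each prediction holds bounding boxes, COCO class labels, and confidence scores.
boxes = predictions[0]["boxes"]
labels = predictions[0]["labels"]
scores = predictions[0]["scores"]
print(boxes[scores > 0.8], labels[scores > 0.8])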


Crowdsourcing -- A Step Towards Advanced Deep Learning

#artificialintelligence

Deep learning technology has become more prevalent in our daily lives in recent years, with architectures that are growing increasingly complex and require large-scale GPU clusters for model training. To sidestep this requirement, many academics are asking whether it is possible to crowdsource the training of big models by harnessing the computing capacity of the huge number of individual graphics cards sitting idle across the Internet.
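
The core idea can be sketched in a few lines of PyTorch. This toy example only simulates volunteer workers inside one process (no real networking, and not any particular crowdsourcing system): each worker computes gradients on its own local batch, and a coordinator averages them before taking a single optimizer step on the shared model.

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
num_workers = 4

# Each "worker" stands in for a volunteer GPU somewhere on the Internet.
worker_grads = []
for _ in range(num_workers):
    x = torch.randn(32, 10)   # the worker's local data shard (synthetic here)
    y = torch.randn(32, 1)
    model.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    worker_grads.append([p.grad.clone() for p in model.parameters()])

# The coordinator averages the gradients and applies one update,
# so every volunteer's contribution feeds the same shared model.
for i, p in enumerate(model.parameters()):
    p.grad = torch.stack([g[i] for g in worker_grads]).mean(dim=0)
optimizer.step()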


New Computer Chips Could Process More Like Your Brain Does

#artificialintelligence

A new generation of smartphones and other gadgets could be powered by chips designed to act like your brain. BrainChip recently announced its Akida neural networking processor. The processor uses chips inspired by the spiking nature of the human brain. It's part of a growing effort to commercialize chips based on human neural structures. The new generation of chips could mean "more deep neural network processing capability in the future on portable devices, e.g., smartphones, digital companions, smartwatches, health monitoring, autonomous vehicles and drones," Vishal Saxena, a professor of electrical and computer engineering at the University of Delaware, told Lifewire in an email interview.
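
To make "spiking" concrete, here is a minimal leaky integrate-and-fire neuron in plain Python. It is only an illustration of the general spiking-neuron idea, not BrainChip's Akida design: the membrane potential leaks over time, accumulates input, and emits a binary spike when it crosses a threshold, so information travels as sparse spike events rather than dense activations.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    # inputs: sequence of input currents, one per time step
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current   # leak the membrane potential, then integrate
        if v >= threshold:       # fire a spike and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))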


DeepMind co-founder Mustafa Suleyman leaves Google

Engadget

Mustafa Suleyman, a co-founder of artificial intelligence research company DeepMind, has left Google to join venture capital firm Greylock Partners. Suleyman's departure brings to an end an eight-year run at Google, where he was most recently the company's vice president of AI product management and policy. He joined Google when it bought DeepMind in 2014 and became head of applied AI at DeepMind. Suleyman was reportedly placed on administrative leave in 2019 following allegations that he bullied employees. Suleyman, who moved to Google at the end of that year, said on a podcast with Greylock partner Reid Hoffman this week that he "really screwed up," adding, "I remain very sorry about the impact that that caused people and the hurt that people felt there."


Deep Learning Prerequisites: Logistic Regression in Python

#artificialintelligence

This course is a lead-in to deep learning and neural networks. It covers a popular and fundamental technique used in machine learning, data science, and statistics: logistic regression. We cover the theory from the ground up: derivation of the solution, and applications to real-world problems. We show you how to code your own logistic regression module in Python. This course does not require any external materials. Everything needed (Python, and some Python libraries) can be obtained for free.
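
A minimal sketch of the kind of from-scratch logistic regression such a course walks through, using synthetic data and plain NumPy (not the course's actual code):

import numpy as np

# Synthetic binary classification data: two Gaussian blobs (illustrative only).
rng = np.random.default_rng(0)
N, D = 200, 2
X = np.vstack([rng.normal(-1, 1, (N // 2, D)), rng.normal(1, 1, (N // 2, D))])
y = np.array([0] * (N // 2) + [1] * (N // 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the mean cross-entropy loss.
w = np.zeros(D)
b = 0.0
lr = 0.1
for _ in range(1000):
    p = sigmoid(X @ w + b)        # predicted probability of class 1
    grad_w = X.T @ (p - y) / N    # gradient of the loss w.r.t. the weights
    grad_b = np.mean(p - y)       # gradient w.r.t. the bias
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print("train accuracy:", accuracy)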