Udacity, the Silicon Valley-based lifelong learning platform, announced its newest initiative to expand students' artificial intelligence skills: the Intel Edge AI Scholarship Program. Announced at the Intel AI Summit and the Future of Education and Workforce Summit in San Francisco, the program will empower professional developers interested in advanced learning to accelerate the development and deployment of high-performance deep learning and computer vision solutions. Computer vision and AI at the edge are becoming instrumental in powering everything from factory assembly lines and retail inventory management to hospital urgent care medical imaging equipment such as X-ray and CAT scanners. The program teaches fluency in some of the most cutting-edge technologies in the field. Students who successfully complete the first phase will also have the opportunity to earn a full scholarship to the Intel Edge AI for IoT Developers Nanodegree program, a brand-new Udacity Nanodegree program built in partnership with Intel.
The following interview is one of many included in the report. Oriol Vinyals is a research scientist at Google working on the DeepMind team, following earlier work with the Google Brain team. He holds a Ph.D. in EECS from the University of California, Berkeley, and a master's degree from the University of California, San Diego. Oriol Vinyals: I'm originally from Barcelona, Spain, where I completed my undergraduate studies in both mathematics and telecommunication engineering. Early on, I knew I wanted to study AI in the U.S. I spent nine months at Carnegie Mellon, where I finished my undergraduate thesis.
Andrew Yan-Tak Ng (Chinese: 吳恩達; born 1976) is a Chinese-American computer scientist and statistician focusing on machine learning and AI. Also a business executive and investor in Silicon Valley, Ng co-founded and led Google Brain and served as Vice President and Chief Scientist at Baidu, building the company's Artificial Intelligence Group into a team of several thousand people. Ng is an adjunct professor at Stanford University (formerly associate professor and Director of its AI Lab). A pioneer in online education as well, Ng co-founded Coursera and deeplearning.ai. With his online courses, he has successfully spearheaded many efforts to "democratize deep learning."
NVIDIA CEO Jen-Hsun Huang earlier this year delivered our NVIDIA DGX-1 AI supercomputer in a box to the University of California, Berkeley's Berkeley AI Research Lab (BAIR). BAIR's more than two dozen faculty and more than 100 graduate students are at the cutting edge of multi-modal deep learning, human-compatible AI, and connecting AI with other scientific disciplines and the humanities. "I'm delighted to deliver one of the first ones to you," Jen-Hsun told a group of researchers at BAIR celebrating the arrival of their DGX-1. The team at BAIR is working on a dazzling array of AI problems across a wide range of fields -- and they're eager to experiment with as many different approaches as possible. To do that, they need speed, explains Pieter Abbeel, an associate professor in UC Berkeley's Department of Electrical Engineering and Computer Sciences.
Applications are invited for a PhD studentship, to be undertaken at Imperial College London (Electrical and Electronic Engineering Department). The studentship will form part of the newly established International Centre for Spatial Computational Learning (http://spatialml.net), and a supervisory team will be allocated based on the student's interests from the Imperial College supervisors participating in the Centre. This is an exciting cutting-edge project involving close collaboration between Imperial College (UK), the University of California, Los Angeles (USA), the University of Toronto (Canada), and the University of Southampton (UK). The successful candidate will be based at Imperial but will have the opportunity to travel frequently to North America to attend research meetings and for a placement period at either UCLA or Toronto. Traditional deep learning has been based on the idea of large-scale linear arithmetic units, effectively computing matrix-matrix multiplication, combined with nonlinear activation functions.
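The final sentence above describes the basic computational pattern of deep learning. A minimal sketch of that pattern, using NumPy (all names, shapes, and values here are illustrative, not drawn from the Centre's work):

```python
import numpy as np

def relu(x):
    # Nonlinear activation, applied elementwise.
    return np.maximum(x, 0.0)

def dense_layer(X, W, b):
    # A fully connected layer: a large linear arithmetic unit
    # (matrix-matrix multiplication for a batch of inputs),
    # followed by a nonlinear activation.
    return relu(X @ W + b)

# Illustrative shapes: a batch of 2 inputs with 4 features each,
# mapped to 3 output units.
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 4))   # batch of input vectors
W = rng.standard_normal((4, 3))   # weight matrix
b = np.zeros(3)                   # bias vector

Y = dense_layer(X, W, b)          # shape (2, 3), all entries >= 0
```

Stacking several such layers, each a matrix multiply followed by a nonlinearity, yields the deep networks the paragraph refers to.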