Complete Machine Learning & Data Science Bootcamp 2022

#artificialintelligence

This is a brand new Machine Learning and Data Science course, just launched and updated this month with the latest trends and skills for 2021! Become a complete Data Scientist and Machine Learning engineer! Join a live online community of 400,000 engineers and a course taught by industry experts who have actually worked for large companies in places like Silicon Valley and Toronto. Graduates of Andrei's courses are now working at Google, Tesla, Amazon, Apple, IBM, JP Morgan, Facebook, and other top tech companies. You will go from zero to mastery!


The Data Analyst Course: Complete Data Analyst Bootcamp 2022

#artificialintelligence



Manipulating the future

#artificialintelligence

As robots evolve, society's collective imagination forever ponders what else robots can do, with recent fascinations coming to life as self-driving cars or robots that can walk and interact with objects as humans do. These sophisticated systems are powered by advances in deep learning that triggered breakthroughs in robotic perception, so that robots today have greater potential for better decision-making and improved functioning in real-world environments. But tomorrow's roboticists need to understand how to combine deep learning with dynamics, controls, and long-term planning. To keep this momentum in robotic manipulation going, engineers today must learn to hover above the whole field, connecting an increasingly diverse set of ideas with the interdisciplinary focus needed to design increasingly complex robotic systems. Last fall, MIT's Department of Electrical Engineering and Computer Science launched a new course, 6.800 (Robotic Manipulation), to help engineering students broadly survey the latest advancements in robotics while troubleshooting real industry problems.


Deep Learning Code Generation from Simulink Applications - MATLAB & Simulink

#artificialintelligence

You can accelerate the simulation of your algorithms in Simulink by using different execution environments. By using support packages, you can also generate and deploy C/C++ and CUDA code on target hardware. Simulate and generate code for deep learning models in Simulink using MATLAB Function blocks. Simulate and generate code for deep learning models in Simulink using library blocks. This example shows how to develop a CUDA application from a Simulink model that performs lane and vehicle detection using convolutional neural networks (CNNs).


My 2-year journey into deep learning as a medical student -- Part II: Courses

#artificialintelligence

These are the deep learning and machine learning courses I've taken along my journey into deep learning. It's time to introduce the courses that helped me get started and grow in the field. Keep in mind that there are probably many more and newer courses out there, as the community keeps producing interesting educational material every day, so keep searching too. That aside, I believe the following list introduces high-quality courses across many areas that most of you can comfortably start with and learn plenty of new things from.


Molecular Deep Learning using DeepChem

#artificialintelligence

I vividly remember my high school Chemistry teacher teaching us about covalent bonds using a 3D model of a water molecule. I also remember enjoying my time in the Chemistry lab trying to determine whether a given salt was more acidic or alkaline by performing many tests. How would this setup change if we needed to replace the human performing these experiments with a machine? Recently, my curiosity about applying deep learning architectures in the life sciences led to an interesting learning opportunity. I stumbled onto libraries like RDKit and DeepChem that help with developing and training deep learning models for use in drug discovery.
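
To make the role of these libraries concrete, here is a minimal, hedged sketch (not taken from the article) of how RDKit and DeepChem are commonly combined: RDKit parses a molecule from a SMILES string, and a DeepChem featurizer turns it into a fixed-length vector that a deep learning model can consume. The caffeine SMILES string and the fingerprint size are illustrative choices, not details from the original post.

```python
# Hedged sketch, not from the article: combine RDKit and DeepChem to turn a
# molecule into features a deep learning model could consume.
from rdkit import Chem          # molecule parsing and manipulation
import deepchem as dc           # featurizers, datasets, and models for chemistry

# Caffeine written as a SMILES string -- an illustrative choice, not from the post.
smiles = "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"

# RDKit parses the string into a molecule object we can inspect.
mol = Chem.MolFromSmiles(smiles)
print("heavy atoms:", mol.GetNumAtoms())

# A DeepChem featurizer converts the molecule into a fixed-length fingerprint
# vector, the kind of input a downstream model would train on.
featurizer = dc.feat.CircularFingerprint(size=1024)
features = featurizer.featurize([smiles])
print("feature matrix shape:", features.shape)  # expected: (1, 1024)
```

In a fuller drug discovery workflow, such feature vectors would feed a DeepChem model trained on a property-prediction dataset; the snippet above only shows the featurization step.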


AI language processing startup Cohere raises US$125 million: The Globe and Mail

#artificialintelligence

Cohere Inc., an AI startup founded by University of Toronto alumni that uses natural language processing to improve human-machine interactions, has raised US$125 million as it looks to open a new office in Silicon Valley, the Globe and Mail reports. The latest financing round, led by New York-based Tiger Global Management, comes only five months after Cohere secured US$40 million in venture capital financing, according to the Globe. Cohere's software platform helps companies infuse natural language processing capabilities into their business using tools like chatbots, without requiring AI expertise of their own. The company originated in a 2017 paper co-authored by CEO Aidan Gomez, who interned at the Google Brain lab of deep learning pioneer and University Professor Emeritus Geoffrey Hinton, a Cohere investor. Cohere's other co-founders are alumnus Nick Frosst, who also worked with Hinton at Google, and Ivan Zhang, a former U of T computer science student.


A minimum viable learning framework for self-learning AI (machine learning and deep learning) - DataScienceCentral.com

#artificialintelligence

Because it is concise and minimal, it does not include topics like GANs, reinforcement learning, etc. It also does not cover Bayesian approaches in detail. What is the difference between machine learning and deep learning?
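
As a rough, hedged illustration of that last question (not part of the original framework), the sketch below contrasts a classical machine learning model, which works directly on the features it is given, with a small neural network, which learns intermediate representations through hidden layers. The dataset and model settings are arbitrary choices for demonstration.

```python
# Hedged illustration, not part of the original framework: the same task solved
# with a classical machine learning model and with a small neural network.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Machine learning": a linear model that works directly on the given pixel features.
linear_model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# "Deep learning" in miniature: hidden layers learn intermediate representations
# of the input before the final classification.
neural_net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                           random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", linear_model.score(X_test, y_test))
print("neural network accuracy:", neural_net.score(X_test, y_test))
```

The point of the contrast is conceptual: deep learning models learn their own feature hierarchies from raw or lightly processed data, whereas classical machine learning typically relies on features supplied up front.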


State of AI Ethics Report (Volume 6, February 2022)

arXiv.org Artificial Intelligence

This report from the Montreal AI Ethics Institute (MAIEI) covers the most salient progress in research and reporting over the second half of 2021 in the field of AI ethics. Particular emphasis is placed on an "Analysis of the AI Ecosystem", "Privacy", "Bias", "Social Media and Problematic Information", "AI Design and Governance", "Laws and Regulations", "Trends", and other areas covered in the "Outside the Boxes" section. The two AI spotlights feature application pieces on "Constructing and Deconstructing Gender with AI-Generated Art" as well as "Will an Artificial Intellichef be Cooking Your Next Meal at a Michelin Star Restaurant?". Given MAIEI's mission to democratize AI, submissions from external collaborators have been featured, such as pieces on the "Challenges of AI Development in Vietnam: Funding, Talent and Ethics" and using "Representation and Imagination for Preventing AI Harms". The report is a comprehensive overview of what the key issues in the field of AI ethics were in 2021, what trends are emergent, what gaps exist, and a peek into what to expect from the field of AI ethics in 2022. It is a resource for researchers and practitioners alike to set their research and development agendas and make contributions to the field of AI ethics.


Latent gaze information in highly dynamic decision-tasks

arXiv.org Artificial Intelligence

Digitization is penetrating more and more areas of life. Tasks are increasingly completed digitally, and are therefore fulfilled not only faster and more efficiently but also more purposefully and successfully. The rapid developments in the field of artificial intelligence in recent years have played a major role in this, as they have provided many helpful approaches to build on. At the same time, the eyes, their movements, and the meaning of these movements are being progressively researched. The combination of these developments has led to exciting approaches. In this dissertation, I present some of these approaches, which I worked on during my Ph.D. First, I provide insight into the development of models that use artificial intelligence to connect eye movements with visual expertise. This is demonstrated for two domains, or rather two groups of people: athletes in decision-making actions and surgeons in arthroscopic procedures. The resulting models can be considered digital diagnostic models for automatic expertise recognition. Furthermore, I show approaches that investigate the transferability of eye movement patterns to different expertise domains and, subsequently, important aspects of techniques for generalization. Finally, I address the temporal detection of confusion based on eye movement data. The results suggest the use of the resulting model as a clock signal for possible digital assistance options in the training of young professionals. An interesting aspect of my research is that I was able to draw on very valuable data from DFB youth elite athletes as well as from long-standing experts in arthroscopy. In particular, the work with the DFB data attracted the interest of radio and print media, namely DeutschlandFunk Nova and SWR DasDing. All resulting articles presented here have been published in internationally renowned journals or at conferences.