A.I. can say when neurosurgeons are ready to operate - Futurity


Machine learning algorithms can accurately assess the capabilities of neurosurgeons during virtual surgery before they step into an actual operating room, a new study shows. Researchers recruited fifty participants from four stages of neurosurgical training: neurosurgeons, fellows and senior residents, junior residents, and medical students. The participants performed 250 complex tumor resections using NeuroVR, a virtual reality surgical simulator developed by the National Research Council of Canada and distributed by CAE; the simulator recorded all instrument movements at 20-millisecond intervals.

Machine learning-guided virtual reality simulators can be powerful tools in surgeon training


Machine learning-guided virtual reality simulators can help neurosurgeons develop the skills they need before they step into the operating room, according to a new study. Research from the Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute and Hospital) and McGill University shows that machine learning algorithms can accurately assess the capabilities of neurosurgeons during virtual surgery, demonstrating that virtual reality simulators using artificial intelligence can be powerful tools in surgeon training. Fifty participants were recruited from four stages of neurosurgical training: neurosurgeons, fellows and senior residents, junior residents, and medical students. They performed 250 complex tumor resections using NeuroVR, a virtual reality surgical simulator developed by the National Research Council of Canada and distributed by CAE, which recorded all instrument movements at 20-millisecond intervals. From this raw data, a machine learning algorithm derived performance measures such as instrument position and force applied, as well as outcomes such as amount of tumor removed and blood loss, which could predict the level of expertise of each participant with 90 percent accuracy.
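As a rough illustration of the pipeline described above (summary performance measures derived from simulator telemetry, then fed to a classifier over the four training stages), here is a minimal sketch in Python. The feature names, the synthetic data, and the nearest-centroid classifier are all illustrative assumptions; the study's actual measures and algorithm are not detailed in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative performance measures (NOT the study's real feature set):
# [mean instrument force, path length, tumor removed %, blood loss]
STAGES = ["medical student", "junior resident", "senior resident", "neurosurgeon"]

def synthetic_trials(stage_idx, n=50):
    """Fake trials whose feature means shift gradually with expertise."""
    center = (np.array([2.0, 120.0, 60.0, 30.0])
              + stage_idx * np.array([-0.4, -20.0, 10.0, -6.0]))
    return center + rng.normal(scale=[0.2, 8.0, 4.0, 3.0], size=(n, 4))

X = np.vstack([synthetic_trials(i) for i in range(4)])
y = np.repeat(np.arange(4), 50)

# Standardize features, then classify by nearest class centroid.
mu, sd = X.mean(0), X.std(0)
Xz = (X - mu) / sd
centroids = np.array([Xz[y == k].mean(0) for k in range(4)])

def predict(x):
    xz = (x - mu) / sd
    return int(np.argmin(((centroids - xz) ** 2).sum(1)))

acc = np.mean([predict(X[i]) == y[i] for i in range(len(X))])
print(f"training-set accuracy: {acc:.2f}")
```

On well-separated synthetic data a classifier this simple already scores highly; the point is only to show the shape of the task (per-trial feature vectors in, training stage out), not to reproduce the reported 90 percent figure.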

Skills evaluation, tailored feedback: McGill AI project could change the way brain surgeons are trained


Alexander Winkler-Schwartz, a neurosurgery resident and PhD candidate, poses in the lab with a NeuroVR neurosurgical simulator at McGill University, on July 31, 2019. Alexander Winkler-Schwartz focuses on the computer-generated brain on the screen while, below, his hands gently remove the virtual brain tumour inside the mannequin's head. An artificial intelligence algorithm tracks the neurosurgery resident's every movement, ready to classify his performance as part of a research project at McGill University, where intelligent machines are learning to rank people based on how deftly they take away the tumour. It's part of a wider effort to harness the power of technology to improve medicine. Artificial intelligence is already helping monitor the vital signs of babies in intensive care, and robots are a fixture in operating rooms.

Might artificial intelligence be able to tell you why your baby is crying?


American researchers have developed an artificial intelligence tool capable of detecting whether a baby's cries mean it is hungry, needs changing, is tired or uncomfortable, or simply wants a cuddle. Over the long term, if exposed to a greater amount of more varied data, the algorithm could become a practical tool for interpreting a baby's cries. Parents who rack their brains trying to understand what their baby's cries might mean could one day be able to rely on artificial intelligence for help. Researchers from Northern Illinois University and The College of New Jersey in the United States have developed an algorithm which, they say, is able to identify the reason behind a young child's cries, reports the Quebec version of The Huffington Post. Of course, every infant is different, including how it cries.

C2RO Raises $2.25M in Financing to Commercialize Portfolio of Enterprise-Grade Cloud A.I. Services - C2RO


Montreal-based C2RO today announced that it has secured CAD$2.25 million in new financing in a round led by Fonds Innovexport, with participation from GCI Capital Inc., Harbor Street Ventures, TandemLaunch, the Ministère de l'Économie et de l'Innovation, and several angel investors in Canada, the U.S. and Europe. The funds will be used to accelerate the commercialization of C2RO's enterprise-grade cloud A.I. services. "We led the investment in C2RO because it has an excellent execution team, a significantly expanding Tier-1 customer base, and a formidable technology position in the field of real-time machine vision A.I.," said Richard Bordeleau, President at Fonds Innovexport. "C2RO will have a tremendous impact on the industry and we want to support them through this journey." In June of 2018, the company introduced C2RO Engage, the world's first real-time cloud-based facial recognition platform.

Canada's AI Corridor is Maturing: The Canadian AI Ecosystem in 2018 - jfgagne


Welcome to the now "annual" Canadian AI Ecosystem Map. What a year it's been. The report also feeds the excellent (and searchable!) directory at The point of creating this map was to emphasize that the strength lies in the Canadian AI ecosystem as a whole, as opposed to just one city's. This year, we've seen ties strengthen, but also some weaknesses exposed.

Montreal-based VirtualMED bringing AI to virtual healthcare - BetaKit


Montreal-based VirtualMED is partnering with HealthTap, an American healthtech company, to offer AI-powered virtual care to Canadians. VirtualMED's licensed physicians will now be available through an app, which provides personalized diagnoses and treatment plans through artificial intelligence. "HealthTap is excited to partner with VirtualMED to overcome the issues that affect Canadian healthcare." HealthTap provides access to primary healthcare through an AI-powered platform, which personalizes users' care and enables an instant connection between the members and doctors. VirtualMED said its members can also receive affordable virtual care while travelling in the US.

Deep Recurrent Adversarial Learning for Privacy-Preserving Smart Meter Data Release - Machine Learning

Smart Meters (SMs) are an important component of smart electrical grids, but they have also generated serious concerns about the privacy of consumers' data. In this paper, we present a general formulation of the privacy-preserving problem in SMs from an information-theoretic perspective. In order to capture the causal time series structure of the power measurements, we employ Directed Information (DI) as an adequate measure of privacy. On the other hand, to cope with a variety of potential applications of SMs data, we study different distortion measures along with the standard squared-error distortion. This formulation leads to a quite general training objective (or loss) which is optimized under a deep learning adversarial framework where two Recurrent Neural Networks (RNNs), referred to as the releaser and the attacker, are trained with opposite goals. An exhaustive empirical study is then performed to validate the proposed approach for different privacy problems on three actual data sets. Finally, we study the impact of the data mismatch problem, which occurs when the releaser and the attacker have different training data sets, and show that privacy may not require a large level of distortion in real-world scenarios.
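The releaser/attacker game the abstract describes can be sketched in a heavily simplified form: here a noise-injecting linear releaser and a closed-form least-squares attacker stand in for the paper's two RNNs, and squared error stands in for Directed Information. Every name, constant, and modeling choice below is an illustrative assumption, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam, lr = 400, 5, 5.0, 0.05

X = rng.normal(size=(n, d))             # toy "meter readings" (static stand-in for time series)
S = X[:, 0] + 0.1 * rng.normal(size=n)  # private variable leaking through feature 0
E = rng.normal(size=(n, d))             # fixed noise source available to the releaser

def fit_attacker(Z):
    """Best linear attacker: least-squares estimate of S from the release Z."""
    return np.linalg.solve(Z.T @ Z + 1e-6 * np.eye(d), Z.T @ S)

mse_raw = np.mean((X @ fit_attacker(X) - S) ** 2)  # attacker error with no protection

s = 0.1 * np.ones(d)                    # releaser's learned per-feature noise scales
for _ in range(500):
    Z = X + E * s                       # released (distorted) data
    a = fit_attacker(Z)                 # attacker adapts to the current release
    r = Z @ a - S                       # attacker residual
    # Releaser objective: distortion - lam * attacker MSE, minimized in s
    dZ = 2 * (Z - X) / n - lam * 2 * r[:, None] * a[None, :] / n
    s -= lr * (dZ * E).sum(axis=0)

Z = X + E * s
mse_att = np.mean((Z @ fit_attacker(Z) - S) ** 2)
distortion = np.mean(np.sum((Z - X) ** 2, axis=1))
print(f"attacker MSE raw: {mse_raw:.3f}  protected: {mse_att:.3f}  distortion: {distortion:.3f}")
```

The releaser learns to add noise mainly along the feature that carries the private variable, so the attacker's error rises sharply while the overall distortion stays moderate, which is the privacy-distortion tradeoff the paper's objective formalizes.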



Yoshua Bengio is a Professor at the University of Montreal, and the Scientific Director of both Mila (Quebec's Artificial Intelligence Institute) and IVADO (the Institute for Data Valorization). He is Co-director (with Yann LeCun) of CIFAR's Learning in Machines and Brains program. Bengio received a Bachelor's degree in electrical engineering, a Master's degree in computer science and a Doctoral degree in computer science from McGill University. His honors include being named an Officer of the Order of Canada and a Fellow of the Royal Society of Canada, and receiving the Marie-Victorin Prize. His founding of Mila, and his ongoing leadership of it as Scientific Director, is also recognized as a major contribution to the field.

AI Could Predict Cognitive Decline Leading to Alzheimer's Disease in the Next 5 Years


A team of scientists has successfully trained a new artificial intelligence (AI) algorithm to make accurate predictions regarding cognitive decline leading to Alzheimer's disease. Dr. Mallar Chakravarty, a computational neuroscientist at the Douglas Mental Health University Institute, and his colleagues from the University of Toronto and the Centre for Addiction and Mental Health, designed an algorithm that learns signatures from magnetic resonance imaging (MRI), genetics, and clinical data. This algorithm can help predict whether an individual's cognitive faculties are likely to deteriorate towards Alzheimer's in the next five years. "At the moment, there are limited ways to treat Alzheimer's and the best evidence we have is for prevention. Our AI methodology could have significant implications as a 'doctor's assistant' that would help stream people onto the right pathway for treatment. For example, one could even initiate lifestyle changes that may delay the beginning stages of Alzheimer's or even prevent it altogether," says Chakravarty, an Assistant Professor in McGill University's Department of Psychiatry.
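The core idea reported here, fusing MRI-derived, genetic, and clinical features into one vector and training a classifier to predict decline within five years, can be sketched as follows. The modalities, feature dimensions, synthetic labels, and hand-rolled logistic regression are all illustrative assumptions; the team's actual model and data are not described in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 600

# Synthetic stand-ins for the three modalities (NOT real patient data):
mri = rng.normal(size=(n, 3))        # e.g. regional volume measures
genetics = rng.binomial(1, 0.3, size=(n, 1)).astype(float)  # e.g. a risk-allele flag
clinical = rng.normal(size=(n, 2))   # e.g. cognitive test scores

X = np.hstack([mri, genetics, clinical])   # simple feature-level fusion
X = np.hstack([np.ones((n, 1)), X])        # bias column

# Synthetic ground truth: risk driven by a few of the fused features
true_w = np.array([-1.0, 1.5, 0.0, 0.0, 2.0, -1.2, 0.0])
p = 1 / (1 + np.exp(-(X @ true_w)))
y = rng.binomial(1, p).astype(float)       # 1 = declines within five years

# Logistic regression fit by plain gradient descent
w = np.zeros(X.shape[1])
for _ in range(2000):
    pred = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (pred - y) / n

acc = np.mean(((1 / (1 + np.exp(-(X @ w)))) > 0.5) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

The fused vector per individual is what "learns signatures" across modalities amounts to at its simplest; a real system would use far richer imaging features and held-out evaluation rather than training accuracy.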