Hacene, Ghouthi Boukli, Leduc-Primeau, François, Soussia, Amal Ben, Gripon, Vincent, Gagnon, François

Abstract--Because deep neural networks (DNNs) rely on a large number of parameters and computations, their implementation in energy-constrained systems is challenging. In this paper, we investigate the solution of reducing the supply voltage of the memories used in the system, which results in bit-cell faults. We explore the robustness of state-of-the-art DNN architectures towards such defects and propose a regularizer meant to mitigate their effects on accuracy.

Deep Neural Networks [1] (DNNs) are the gold standard for many challenges in machine learning. Thanks to the large number of trainable parameters that they provide, DNNs can capture the complexity of large training datasets and generalize to previously unseen examples.
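The fault model described in the abstract can be made concrete with a small simulation. The code below is not the authors' implementation; it is a minimal, hypothetical NumPy sketch that quantizes weights to a fixed-point representation and flips each stored bit independently with some probability, which is a common way to emulate low-voltage SRAM bit-cell faults when testing DNN robustness.

```python
import numpy as np

def inject_bitcell_faults(weights, fault_rate, rng, n_bits=8):
    """Flip random bits in an n_bits fixed-point representation of each
    weight, emulating low-voltage memory bit-cell faults (illustrative)."""
    scale = (2 ** (n_bits - 1) - 1) / np.max(np.abs(weights))
    # quantize to signed n_bits integers, keep the two's-complement bits
    q = np.round(weights * scale).astype(np.int32) & (2 ** n_bits - 1)
    # each stored bit fails independently with probability fault_rate
    fault_mask = rng.random((weights.size, n_bits)) < fault_rate
    flip = (fault_mask * (1 << np.arange(n_bits))).sum(axis=1)
    q_faulty = (q.ravel() ^ flip).reshape(weights.shape)
    # reinterpret as signed integers and dequantize
    q_faulty = np.where(q_faulty >= 2 ** (n_bits - 1),
                        q_faulty - 2 ** n_bits, q_faulty)
    return q_faulty / scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_faulty = inject_bitcell_faults(w, fault_rate=0.01, rng=rng)
```

Evaluating a network's accuracy with such perturbed weights, swept over `fault_rate`, gives the kind of robustness curve the abstract alludes to.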

Huaulmé, Arnaud, Voros, Sandrine, Reche, Fabian, Faucheron, Jean-Luc, Moreau-Gaudry, Alexandre, Jannin, Pierre

Objective: A median of 14.4% of patients undergo at least one adverse event during surgery, and a third of these events are preventable. The occurrence of adverse events forces surgeons to implement corrective strategies and, thus, deviate from the standard surgical process. Therefore, it is clear that the automatic identification of adverse events is a major challenge for patient safety. In this paper, we have proposed a method enabling us to identify such deviations. We have focused on identifying surgeons' deviations from standard surgical processes due to surgical events rather than anatomic specificities. This is particularly challenging, given the high variability in typical surgical procedure workflows. Methods: We have introduced a new approach designed to automatically detect and distinguish surgical process deviations based on multi-dimensional non-linear temporal scaling with a hidden semi-Markov model, using manual annotation of surgical processes. The approach was then evaluated using cross-validation. Results: The best results have over 90% accuracy. Recall and precision were above 70%. We have provided a detailed analysis of the incorrectly detected observations. Conclusion: Multi-dimensional non-linear temporal scaling with a hidden semi-Markov model provides promising results for detecting deviations. Our error analysis of the incorrectly detected observations offers different leads to further improve our method. Significance: Our method demonstrated the feasibility of automatically detecting surgical deviations, which could be implemented both for skill analysis and for developing situation awareness-based computer-assisted surgical systems.
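The paper's pipeline combines multi-dimensional non-linear temporal scaling with a hidden semi-Markov model, neither of which is reproduced here. As a loose, self-contained illustration of the underlying idea of non-linearly aligning two activity sequences of different lengths, here is classic dynamic time warping; treating it as a stand-in for the paper's temporal scaling step is my assumption, not the authors' exact algorithm.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping cost between two multi-dimensional sequences
    a (n, d) and b (m, d), using Euclidean distance between frames."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignment moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# two annotated activity traces of different lengths align at zero cost
# when one is just a time-stretched version of the other
standard = np.array([[0.0], [1.0], [2.0]])
observed = np.array([[0.0], [1.0], [1.0], [2.0]])
deviation_score = dtw_distance(standard, observed)
```

A high alignment cost against the standard process model would then flag a candidate deviation for closer inspection.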

Neufcourt, Léo, Cao, Yuchen, Nazarewicz, Witold, Viens, Frederi

The mass, or binding energy, is the basic property of the atomic nucleus. It determines its stability, and reaction and decay rates. Quantifying the nuclear binding is important for understanding the origin of elements in the universe. The astrophysical processes responsible for the nucleosynthesis in stars often take place far from the valley of stability, where experimental masses are not known. In such cases, missing nuclear information must be provided by theoretical predictions using extreme extrapolations. Bayesian machine learning techniques can be applied to improve predictions by taking full advantage of the information contained in the deviations between experimental and calculated masses. We consider 10 global models based on nuclear Density Functional Theory as well as two more phenomenological mass models. The emulators of two-neutron separation energy (S2n) residuals and credibility intervals defining theoretical error bars are constructed using Bayesian Gaussian processes and Bayesian neural networks. We consider a large training dataset pertaining to nuclei whose masses were measured before 2003. For the testing datasets, we considered those exotic nuclei whose masses have been determined after 2003. We then carried out extrapolations towards the two-neutron drip line. While both Gaussian processes and Bayesian neural networks reduce the rms deviation from experiment significantly, GP offers a better and much more stable performance. The increase in the predictive power is remarkable: the resulting rms deviations from experiment on the testing dataset are similar to those of more phenomenological models. The empirical coverage probability curves we obtain match the reference values very well, which is highly desirable to ensure honesty of uncertainty quantification, and the estimated credibility intervals on predictions make it possible to evaluate the predictive power of individual models.
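The residual-emulation step can be sketched as follows. This is not the authors' implementation: it is a minimal zero-mean Gaussian-process regression in NumPy with a squared-exponential kernel, fit to hypothetical model-minus-experiment residuals indexed by a single coordinate (e.g., neutron number); the kernel hyperparameters and the data values are illustrative assumptions.

```python
import numpy as np

def rbf(x1, x2, length=4.0, var=1.0):
    # squared-exponential kernel; hyperparameters chosen for illustration
    return var * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and std of a zero-mean GP fit to residuals."""
    K = rbf(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# hypothetical residuals (MeV) at a few neutron numbers
x_tr = np.array([10.0, 12.0, 14.0, 16.0])
y_tr = np.array([0.5, 0.3, -0.1, -0.4])
mean, std = gp_predict(x_tr, y_tr, np.array([18.0, 20.0]))
```

The corrected prediction is the raw model value plus the GP mean, and the posterior std supplies the credibility interval; far from the training data the std reverts to the prior, which is what makes extrapolation uncertainty grow honestly.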

Also, can someone help me understand why (n-1) is used instead of n in the denominator when computing the standard deviation? I did go through the reasoning posted on a few forums (that it has to do with degrees of freedom and depends on whether the data is a sample or the whole population), but did not completely understand it. Most statistics books conveniently give a brief, incomplete explanation and say that it is beyond scope!
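The short intuition: the sample mean is, by construction, the point that minimizes the sum of squared deviations of your data, so deviations measured around the sample mean come out systematically smaller than deviations around the true mean. Dividing by n-1 instead of n (Bessel's correction) exactly compensates for this on average. A quick simulation makes it concrete, drawing many small samples from a distribution with known variance 1 and averaging both estimators:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5  # small sample size, where the bias is most visible

# 100,000 independent samples of size n from N(0, 1), true variance = 1
samples = rng.normal(0.0, 1.0, size=(100_000, n))
means = samples.mean(axis=1, keepdims=True)
ss = ((samples - means) ** 2).sum(axis=1)  # sum of squared deviations

var_n = (ss / n).mean()          # divide by n: biased low, approx (n-1)/n
var_n1 = (ss / (n - 1)).mean()   # divide by n-1: unbiased, approx 1

print(var_n, var_n1)  # roughly 0.8 and 1.0
```

Dividing by n lands near (n-1)/n = 0.8 times the true variance, while dividing by n-1 recovers the true variance of 1 on average; that factor of (n-1)/n is precisely the degrees-of-freedom argument in symbols. Note this makes the variance estimator unbiased; the standard deviation (its square root) is still slightly biased, which is one reason textbook treatments stay brief.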