How to help humans understand robots

Robohub

Researchers from MIT and Harvard suggest that applying theories from cognitive science and educational psychology to the area of human-robot interaction can help humans build more accurate mental models of their robot collaborators, which could boost performance and improve safety in cooperative workspaces. Scientists who study human-robot interaction often focus on understanding human intentions from a robot's perspective, so the robot learns to cooperate with people more effectively. But human-robot interaction is a two-way street, and the human also needs to learn how the robot behaves. Thanks to decades of cognitive science and educational psychology research, scientists have a pretty good handle on how humans learn new concepts. So, researchers at MIT and Harvard University collaborated to apply well-established theories of human concept learning to challenges in human-robot interaction.


Critical Learning Periods in Federated Learning

arXiv.org Machine Learning

Federated learning (FL) is a popular technique for training machine learning (ML) models with decentralized data. Extensive work has studied the performance of the global model; however, it is still unclear how the training process affects the final test accuracy. Exacerbating this problem is the fact that FL executions differ significantly from traditional ML: data characteristics are heterogeneous across clients, and more hyperparameters are involved. In this work, we show that the final test accuracy of FL is dramatically affected by the early phase of the training process; that is, FL exhibits critical learning periods, in which small gradient errors can have an irrecoverable impact on the final test accuracy. To further explain this phenomenon, we generalize the trace of the Fisher Information Matrix (FIM) to FL and define a new notion called FedFIM, a quantity reflecting the local curvature of each client from the beginning of training in FL. Our findings suggest that the initial learning phase plays a critical role in understanding FL performance. This is in contrast to many existing works, which generally do not connect the final accuracy of FL to early-phase training. Finally, identifying critical learning periods in FL is of independent interest and could be useful for other problems, such as the choice of hyperparameters (the number of clients selected per round, the batch size, and more), so as to improve the performance of FL training and testing.
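
The abstract does not spell out how FedFIM is computed, but a common way to estimate the trace of the empirical Fisher Information Matrix is to average the squared gradient norm of the per-sample log-likelihood over a client's local data. The PyTorch sketch below illustrates that idea under those assumptions; `model` and `data_loader` are hypothetical stand-ins, and this is not the authors' exact definition.

```python
import torch
import torch.nn.functional as F

def fisher_trace(model, data_loader, device="cpu"):
    """Estimate tr(FIM) as the average squared gradient norm of the
    per-sample log-likelihood (assumes the loader yields batch_size=1)."""
    model.to(device).eval()
    trace, n = 0.0, 0
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        log_probs = F.log_softmax(model(x), dim=1)
        # Empirical Fisher: gradient of the observed label's log-likelihood.
        F.nll_loss(log_probs, y, reduction="sum").backward()
        trace += sum((p.grad ** 2).sum().item()
                     for p in model.parameters() if p.grad is not None)
        n += x.size(0)
    return trace / max(n, 1)
```

Logged per client over the first few training rounds, such a curvature proxy could serve as a rough indicator of whether a critical learning period is underway.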


How AI is transforming education and skills development - The Official Microsoft Blog

#artificialintelligence

Artificial intelligence can help us to solve some of society's most difficult challenges and create a safer, healthier and more prosperous world for all. I've already shared the exciting possibilities in the fields of healthcare and agriculture in previous posts. But there may be no area where the possibilities are more interesting – or more important – than education and skills. From personalized learning that takes advantage of AI to adapt teaching methods and materials to the needs of individual students, to automated grading that frees teachers from the drudgery of assessing tests so they have more time to work with students, to intelligent systems that are transforming how learners find and interact with information, the opportunities to improve education outcomes and accessibility will be truly transformational. There are many classrooms around the world where educators teach very diverse groups of students from different cultures, who speak multiple languages.


What does artificial intelligence mean for values and ethics? - OECD Education and Skills Today

#artificialintelligence

Every year, the OECD Forum brings together experts, academics and thought leaders from the private and public sectors to discuss key economic and social challenges on the international agenda. The theme of this year's Forum was "World in EMotion" – a theme that reflects the profound changes brought about by globalisation, shifting politics and digitalisation, and the challenges and opportunities that they present. Nowhere are these changes more rapid – and perhaps far-reaching – than in the field of artificial intelligence (AI), and its implications for values and ethics. I attended a very interesting panel on this subject, alongside Peter Gluckman, Chair of the International Network for Government Science Advice in New Zealand; Geoff Mulgan, Chief Executive of NESTA in the UK; Eric Salobir, head of Optic; Pallaw Sharma, Senior Vice President at Johnson & Johnson; and Jess Whittlestone, Research Associate at the Centre for the Future of Intelligence at Cambridge University. As Pallaw explained, technology and AI are not magic powers; they are just extraordinary amplifiers and accelerators that add speed and accuracy.


Improved GQ-CNN: Deep Learning Model for Planning Robust Grasps

arXiv.org Machine Learning

Recent developments in the field of robot grasping have shown great improvements in grasp success rates when dealing with unknown objects. In this work, we improve on one of the most promising approaches, the Grasp Quality Convolutional Neural Network (GQ-CNN), trained on the Dex-Net 2.0 dataset. We propose a new architecture for the GQ-CNN and describe practical improvements that increase the model's validation accuracy from 92.2% to 95.8% on the image-wise split and from 85.9% to 88.0% on the object-wise training and validation split.
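
The abstract does not detail the proposed architecture, so the following PyTorch sketch is only a generic GQ-CNN-style model, assuming the usual Dex-Net 2.0 inputs: a cropped depth-image patch of a grasp candidate plus the gripper depth, mapped to a grasp-success probability. The layer sizes and the 32x32 patch size are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class GraspQualityCNN(nn.Module):
    """Minimal GQ-CNN-style network: a depth-image patch of a grasp candidate,
    fused with the gripper depth, is mapped to a grasp-success probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.image_fc = nn.Linear(64 * 8 * 8, 128)  # for 32x32 input patches
        self.depth_fc = nn.Linear(1, 16)            # gripper depth feature
        self.head = nn.Sequential(
            nn.Linear(128 + 16, 64), nn.ReLU(),
            nn.Linear(64, 1),                       # logit of grasp success
        )

    def forward(self, patch, gripper_depth):
        h = torch.relu(self.image_fc(self.features(patch).flatten(1)))
        z = torch.relu(self.depth_fc(gripper_depth))
        return torch.sigmoid(self.head(torch.cat([h, z], dim=1)))
```

At planning time, one would score many sampled grasp candidates with such a network and execute the highest-ranked one.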


Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

arXiv.org Machine Learning

Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far, it has been unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications, because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class probabilities (score-based attacks), neither of which is available in most real-world scenarios. In many such cases one currently needs to resort to transfer-based attacks, which rely on cumbersome substitute models, need access to the training data, and can be defended against. Here we emphasise the importance of attacks that rely solely on the final model decision. Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks, and (3) are more robust to simple defences than gradient- or score-based attacks. Previous attacks in this category were limited to simple models or simple datasets. Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models, and is competitive with the best gradient-based attacks on standard computer vision tasks like ImageNet. We apply the attack to two black-box algorithms from Clarifai.com. The Boundary Attack in particular and the class of decision-based attacks in general open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems. An implementation of the attack is available as part of Foolbox at https://github.com/bethgelab/foolbox.
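
As a rough illustration of the idea (not the authors' implementation, which is available in Foolbox at the link above), the sketch below alternates the two moves the abstract implies: a random step roughly along the decision boundary, followed by a small contraction toward the original image, accepting a move only when the model's bare decision remains adversarial. The `predict` callable, inputs scaled to [0, 1], and the fixed step sizes `delta` and `eps` are all simplifying assumptions; the published attack adapts its step sizes dynamically.

```python
import numpy as np

def boundary_attack(predict, x_orig, x_adv, steps=1000, delta=0.1, eps=0.1, seed=0):
    """Decision-based random walk: shrink an adversarial perturbation while
    the model's label (the only information we query) stays wrong."""
    rng = np.random.default_rng(seed)
    orig_label = predict(x_orig)
    assert predict(x_adv) != orig_label, "starting point must be adversarial"
    for _ in range(steps):
        # 1) Orthogonal step: random noise projected to be orthogonal to the
        #    direction toward the original, keeping the distance roughly fixed.
        direction = x_orig - x_adv
        dist = np.linalg.norm(direction)
        noise = rng.standard_normal(x_adv.shape)
        noise -= direction * (noise.ravel() @ direction.ravel()) / dist**2
        candidate = x_adv + delta * dist * noise / (np.linalg.norm(noise) + 1e-12)
        # 2) Contraction step: move a small fraction toward the original image.
        candidate += eps * (x_orig - candidate)
        candidate = np.clip(candidate, 0.0, 1.0)
        # 3) Accept the move only if the decision is still adversarial.
        if predict(candidate) != orig_label:
            x_adv = candidate
    return x_adv
```

Because only the final label is queried at each step, this procedure needs no gradients, no confidence scores, and no substitute model.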


How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models

arXiv.org Machine Learning

Machine learning models are vulnerable to adversarial examples: minor perturbations of input samples intended to deliberately cause misclassification. Current defenses against adversarial examples, especially for Deep Neural Networks (DNNs), are primarily derived from empirical developments, and their security guarantees are often only justified retroactively. Many defenses therefore rely on hidden assumptions that are subsequently subverted by increasingly elaborate attacks. This is not surprising: deep learning notoriously lacks a comprehensive mathematical framework that could provide meaningful guarantees. In this paper, we leverage Gaussian Processes to investigate adversarial examples in the framework of Bayesian inference. Across different models and datasets, we find that deviating levels of uncertainty reflect the perturbation introduced into benign samples by state-of-the-art attacks, including novel white-box attacks on Gaussian Processes. Our experiments demonstrate that even unoptimized uncertainty thresholds already reject adversarial examples in many scenarios.
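
As a hedged illustration of such an unoptimized uncertainty threshold (using scikit-learn rather than the paper's models or attacks), the sketch below fits a Gaussian process classifier on a small two-class dataset and refuses to classify inputs whose predictive confidence falls below a cutoff; the 0.9 threshold is an arbitrary assumption of the kind the abstract describes.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

# Fit a GP classifier on a small two-class problem.
X, y = load_digits(n_class=2, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X_train, y_train)

def classify_or_reject(x, threshold=0.9):
    """Return the predicted class, or None when the GP is too uncertain
    (a crude filter that can already reject many adversarial inputs)."""
    proba = gpc.predict_proba(x.reshape(1, -1))[0]
    return int(np.argmax(proba)) if proba.max() >= threshold else None
```

An attacker must then craft perturbations that both flip the label and keep the predictive uncertainty low, which is a strictly harder problem.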


Visibility and Monitoring for Machine Learning Models [Video] - DZone AI

#artificialintelligence

Josh Wills, an engineer at Slack, spoke at our January MeetUp about testing machine learning models in production.


Gartner: Here are 4 critical lessons we've learned from early AI projects

#artificialintelligence

While the value of artificial intelligence (AI) is just beginning to emerge in the enterprise, some 46% of CIOs have plans to implement the technology in the future, according to a survey from research firm Gartner.