Exams 2018: The 'myth' of the visual learner

BBC News

If you're a visual person, do you always need pictures in order to learn best, even if the thing you're learning is a musical instrument? And what about aural learners who like to hear their information in order to remember it - do they need to listen to learn? What about if they're learning to drive a car? It's a popular belief that people have different styles of learning - visual, aural, reading and writing or kinaesthetic (carrying out physical activities). But as hundreds of thousands of pupils around the UK revise for exams, is that really how learning works?


Girls really are just as good at maths! Myths about gender differences in education don't stand up

Daily Mail - Science & tech

Boys and girls perform equally at maths, according to a study looking to dispel gender myths in education. Analysis of over 20,000 students from primary and secondary schools across the UK suggested that differences in maths attainment between girls and boys are almost negligible. It also indicated that regular and high-quality maths practice improves outcomes across the board and that primary pupils achieved better attainment scores than secondary students. The study, carried out by Professor Keith Topping at the University of Dundee and the education assessment company Renaissance, has led to calls for a cultural change in schools. Professor Topping believes his findings challenge many prevailing stereotypes around gender and the study of maths.


Learner driver gives Kingston police a lift to 999 call

BBC News

A learner driver in south-west London stopped to give a lift to two police officers who were running to an arrest. The officers from the Chessington Safer Neighbourhood team were sprinting to the aid of colleagues who were pursuing a suspect. They are now trying to trace the female learner to thank her for her help. At around 20:00 BST on Thursday, two officers were on foot patrol in Merrett Gardens when they spotted a man acting suspiciously. As they approached, he decided to run and a lengthy foot chase began.


Optimizing Wrapper-Based Feature Selection for Use on Bioinformatics Data

AAAI Conferences

High dimensionality (having a large number of independent attributes) is a major problem for bioinformatics datasets such as gene microarray datasets. Feature selection algorithms are necessary to remove the irrelevant (not useful) and redundant (containing duplicate information) features. One approach to handling this problem is wrapper-based subset evaluation, which builds classification models on different feature subsets to discover which performs best. Although the computational complexity of this technique has led to it being rarely used for bioinformatics, its ability to find the features which give the best model makes it important in this domain. However, when using wrapper-based feature selection, it is not obvious whether the learner used within the wrapper should match the learner used for building the final classification model. Furthermore, the answer may depend on other properties of the dataset, such as difficulty of learning (general performance without feature selection) and dataset balance (ratio of minority to majority instances). To study this, we use nine datasets with varying levels of difficulty and balance. We find that across all datasets, the best strategy is to use one learner (Naïve Bayes) inside the wrapper regardless of the learner which will be used outside. However, when broken down by difficulty and balance levels, our results show that the more balanced and less difficult datasets work best when the learners inside and outside the wrapper match. Thus, the answer to this question depends on properties of the dataset.
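
To make the wrapper idea concrete, here is a minimal Python sketch using scikit-learn in which the learner inside the wrapper (Naïve Bayes) differs from the learner that builds the final model (a random forest). The synthetic dataset, feature counts, subset size, and classifier choices are illustrative assumptions, not the paper's actual experimental setup.

# Wrapper-based feature selection sketch: Naive Bayes inside the wrapper,
# a different learner outside. Assumptions: synthetic data standing in for a
# high-dimensional, imbalanced microarray dataset; illustrative parameters.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic high-dimensional, imbalanced data (many features, few informative).
X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)

# Wrapper: greedily add features, scoring each candidate subset by how well
# Naive Bayes (the learner *inside* the wrapper) classifies with it.
selector = SequentialFeatureSelector(GaussianNB(), n_features_to_select=10,
                                     direction="forward", cv=5)
selector.fit(X, y)
X_selected = selector.transform(X)

# Final model: a different learner *outside* the wrapper, trained only on the
# selected feature subset.
final_model = RandomForestClassifier(random_state=0)
print(cross_val_score(final_model, X_selected, y, cv=5).mean())

Swapping GaussianNB() for RandomForestClassifier() inside the wrapper would give the matched-learner strategy that the abstract reports works better on the more balanced, less difficult datasets.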


Few-shot Learning with Meta Metric Learners

arXiv.org Machine Learning

Few-shot learning aims to learn classifiers for new classes with only a few training examples per class. Existing meta-learning or metric-learning based few-shot learning approaches are limited in handling diverse domains with varying numbers of labels. The meta-learning approaches train a meta learner to predict the weights of homogeneous-structured task-specific networks, requiring a uniform number of classes across tasks. The metric-learning approaches learn one task-invariant metric for all tasks, and they fail if the tasks diverge. We propose to address these limitations with meta metric learning. Our approach consists of task-specific learners, which exploit metric learning to handle flexible label sets, and a meta learner, which discovers good parameters and gradient descent updates to specify the metrics in the task-specific learners. The proposed model is thus able to handle unbalanced classes as well as to generate task-specific metrics. We test our approach in the '$k$-shot $N$-way' few-shot learning setting used in previous work and in a new, realistic few-shot setting with diverse multi-domain tasks and flexible label numbers. Experiments show that our approach attains superior performance in both settings.
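
For readers unfamiliar with metric-based few-shot learning, the sketch below shows one training episode in PyTorch where class prototypes are built from a small support set and queries are classified by distance to them, so the number of classes can differ per episode. This is a generic prototypical-style simplification, not the paper's meta metric learner; the Embedder architecture, layer sizes, and episode shapes are all assumptions for illustration.

# Generic metric-based few-shot episode (prototypical-style sketch).
# Not the paper's method; layer sizes and data are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedder(nn.Module):
    """Maps raw inputs into an embedding space shared across tasks."""
    def __init__(self, in_dim=32, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))

    def forward(self, x):
        return self.net(x)

def episode_loss(embedder, support_x, support_y, query_x, query_y):
    """One k-shot N-way episode: build a prototype per class from the support
    set, then classify queries by negative distance to each prototype.
    The number of classes N may vary from episode to episode."""
    z_support = embedder(support_x)
    z_query = embedder(query_x)
    classes = torch.unique(support_y)
    # Prototype = mean embedding of each class's support examples.
    prototypes = torch.stack([z_support[support_y == c].mean(0) for c in classes])
    # Distance of every query to every prototype; negated distances act as logits.
    dists = torch.cdist(z_query, prototypes)
    # Map original labels to positions 0..N-1 for cross-entropy.
    targets = torch.stack([(classes == y).nonzero().squeeze() for y in query_y])
    return F.cross_entropy(-dists, targets)

# Illustrative episode: 2-way, 5-shot support set plus 10 query points.
emb = Embedder()
sx, sy = torch.randn(10, 32), torch.tensor([0] * 5 + [1] * 5)
qx, qy = torch.randn(10, 32), torch.randint(0, 2, (10,))
loss = episode_loss(emb, sx, sy, qx, qy)
loss.backward()  # gradients would feed an outer meta-training loop

In the paper's setting, a meta learner would additionally adapt the metric (here implicitly fixed by the shared embedder and Euclidean distance) to each task; the sketch only shows the episodic, flexible-label structure that such methods build on.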