Afghanistan: Jalalabad cricket match bomb attack kills eight

Al Jazeera

At least eight people were killed in a series of bomb explosions at a cricket match in Afghanistan's eastern Nangarhar Province, a provincial official said. The attack took place as hundreds of people gathered at Spinghar cricket stadium in the provincial capital, Jalalabad, to watch a Ramadan night-time cricket tournament on Friday. Three bombs exploded in quick succession. Attahullah Khogyani, spokesman for the provincial governor, said at least 45 others were wounded in the blasts, adding that the organiser of the cricket match, Hedayatullah Zahir, was also among the dead. Afghan President Ashraf Ghani strongly condemned the attack.


Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation

arXiv.org Machine Learning

It is widely believed that learning good representations is one of the main reasons for the success of deep neural networks. Although this belief is highly intuitive, there is a lack of theory and of systematic approaches that quantitatively characterize what representations deep neural networks learn. In this work, we move a small step towards a theory and a better understanding of these representations. Specifically, we study a simpler problem: how similar are the representations learned by two networks with identical architecture but trained from different initializations? We develop a rigorous theory based on the neuron activation subspace match model. The theory gives a complete characterization of the structure of neuron activation subspace matches, where the core concepts are the maximum match and the simple match, which describe the overall and the finest similarity, respectively, between sets of neurons in two networks. We also propose efficient algorithms to find the maximum match and the simple matches. Finally, we conduct extensive experiments using our algorithms. Experimental results suggest that, surprisingly, representations learned by the same convolutional layers of networks trained from different initializations are not as similar as commonly expected, at least in terms of subspace match.
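The abstract does not spell out the matching algorithms, but the underlying idea of comparing the subspaces spanned by two layers' neuron activation vectors can be illustrated with principal angles. The sketch below is an assumption-laden simplification, not the paper's method: it treats each network's layer as an activation matrix over a shared batch of inputs and reports the cosines of the principal angles between the two column spaces (all near 1 means the subspaces nearly coincide).

```python
import numpy as np

def subspace_similarity(A, B):
    """Cosines of the principal angles between the column spaces of A and B.

    A, B: (n_samples, n_neurons) activation matrices collected from two
    networks on the same batch of inputs. Values close to 1 indicate the
    neuron activation subspaces nearly coincide; smaller values indicate
    directions present in one subspace but not the other.
    """
    # Orthonormal bases for each activation subspace
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)

# Toy check with synthetic "activations" (hypothetical data, not the
# paper's experiments): an invertible remixing of the same neurons spans
# the identical subspace, so all cosines should be ~1.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
R = rng.normal(size=(5, 5))              # almost surely invertible
same = subspace_similarity(X, X @ R)      # cosines ~ 1
other = subspace_similarity(X, rng.normal(size=(100, 5)))  # generally < 1
```

The paper's finding can be read in these terms: for independently initialized networks, the analogue of `other` is closer to what is observed than the analogue of `same`.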


Adobe's AI will automatically color-match shots in Premiere

Engadget

At NAB 2018, Adobe announced that the Sensei AI used in Photoshop and Lightroom has come to its Premiere Pro CC editing app. The first tool, Color Match, takes a lot of the tedium out of an edit. Even when filmmakers are careful, hues and tones can vary from shot to shot, so editors usually have to do laborious color correction. All you have to do is tweak one shot just the way you want it, and Color Match will apply those settings to your other shots as editable color adjustments. That way, if a shot still isn't quite perfect, you can do a final tweak to get it right.


Nadal Should Follow Federer Example To Match Swiss Ace's Record, Rosewall Says

International Business Times

Australian tennis legend Ken Rosewall has backed Rafael Nadal to catch long-time rival Roger Federer's Grand Slam record but admits that the Spaniard will have to make a few changes in order to achieve it. The Swiss ace currently holds the record with 20 men's singles Grand Slam titles, and Nadal is four behind with 16 major titles. The Spaniard can move one step closer if he wins the French Open, which begins on Sunday. Federer won his 17th Grand Slam title in 2012 and then failed to win a single major for four years. He missed the second half of the 2016 campaign due to a knee injury, and it was unclear if he would ever return to his best, but he did, and in some style.


Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation

Neural Information Processing Systems

It is widely believed that learning good representations is one of the main reasons for the success of deep neural networks. Although this belief is highly intuitive, there is a lack of theory and of systematic approaches that quantitatively characterize what representations deep neural networks learn. In this work, we move a small step towards a theory and a better understanding of these representations. Specifically, we study a simpler problem: how similar are the representations learned by two networks with identical architecture but trained from different initializations? We develop a rigorous theory based on the neuron activation subspace match model. The theory gives a complete characterization of the structure of neuron activation subspace matches, where the core concepts are the maximum match and the simple match, which describe the overall and the finest similarity, respectively, between sets of neurons in two networks. We also propose efficient algorithms to find the maximum match and the simple matches. Finally, we conduct extensive experiments using our algorithms. Experimental results suggest that, surprisingly, representations learned by the same convolutional layers of networks trained from different initializations are not as similar as commonly expected, at least in terms of subspace match.