Copa America U.S.-Argentina match draws over 8 million viewers

Los Angeles Times

The Copa America Centenario match between the U.S. and Argentina drew more than 8 million viewers to the Univision and Fox networks Tuesday. Univision's coverage averaged 4.8 million viewers, the Spanish-language network said, bringing its total Centenario audience to 44.4 million viewers. That figure does not include Wednesday's second semifinal between Chile and Colombia. The U.S.-Argentina game, won by Argentina 4-0, was watched by 3.29 million viewers on Fox cable outlet FS1, making it the most-viewed English-language cable program in prime time Tuesday and the most-watched men's soccer match in FS1 history. The tournament's live gate has been even more impressive.


Hinge's newest feature claims to use machine learning to find your best match

#artificialintelligence

Hinge's newest feature, Most Compatible, attempts to use all your cumulative data to find the perfect match for you. The company has been testing the feature, which occasionally recommends a possible match to users, for at least a month now. Those recommendations were offered only once a week during testing but will now come every day. Justin McLeod, Hinge's CEO, says the company spent the testing period honing its backend algorithm and getting Most Compatible to a point where it feels confident rolling the feature out fully. Most Compatible, he says, uses machine learning to figure out each user's taste.



Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation

Neural Information Processing Systems

It is widely believed that learning good representations is one of the main reasons for the success of deep neural networks. Although this is highly intuitive, there is a lack of theory and systematic approaches that quantitatively characterize what representations deep neural networks learn. In this work, we move a small step towards a theory and a better understanding of these representations. Specifically, we study a simpler problem: how similar are the representations learned by two networks with identical architecture but trained from different initializations? We develop a rigorous theory based on the neuron activation subspace match model. The theory gives a complete characterization of the structure of neuron activation subspace matches, where the core concepts are the maximum match and simple matches, which describe the overall and the finest similarity, respectively, between sets of neurons in two networks. We also propose efficient algorithms to find the maximum match and the simple matches. Finally, we conduct extensive experiments using our algorithms. The experimental results suggest that, surprisingly, representations learned by the same convolutional layers of networks trained from different initializations are not as similar as commonly expected, at least in terms of subspace match.
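The core idea of comparing activation subspaces can be illustrated with a minimal sketch (an assumption-laden simplification, not the authors' maximum-match or simple-match algorithms): record each network's layer activations on the same inputs as a matrix, then compare the column spaces of the two matrices via the cosines of their principal angles.

```python
import numpy as np

def subspace_match(X, Y):
    """Rough similarity between the activation subspaces of two layers.

    X, Y: (n_samples, n_neurons) activation matrices from two networks
    evaluated on the same inputs. Returns the mean cosine of the
    principal angles between their column spaces: 1.0 means identical
    subspaces, values near 0.0 mean nearly orthogonal ones.
    """
    # Orthonormal bases for the column spaces of each activation matrix.
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Singular values of Qx^T Qy are the cosines of the principal angles.
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(np.mean(np.clip(s, 0.0, 1.0)))
```

Under this measure, two layers whose neurons span the same subspace in different bases (e.g. related by an invertible linear map) score near 1.0, while unrelated random activations score much lower; the paper's finding is that real networks trained from different initializations land closer to the latter than prevailing intuition suggests.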

