Ignorance of history is a badge of honour in Silicon Valley. "The only thing that matters is the future," self-driving-car engineer Anthony Levandowski told The New Yorker in 2018¹. Levandowski, formerly of Google, Uber and Google's autonomous-vehicle subsidiary Waymo (and recently sentenced to 18 months in prison for stealing trade secrets), is no outlier. The gospel of 'disruptive innovation' depends on the abnegation of history². 'Move fast and break things' was Facebook's motto. Another word for this is heedlessness. And here are a few more: negligence, foolishness and blindness.
Avik Ray, Joe Neeman, Sujay Sanghavi, Sanjay Shakkottai
We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve it with computational and sample complexity lower than that of solving the original problem, in which one learns the parameters of all components. Our main contributions are a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and lower computational complexity than existing moment-based mixture-model algorithms (e.g. tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
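As a rough illustration of the idea in the abstract above (a hypothetical sketch under assumed data, not the authors' matrix-based algorithm), the Python snippet below builds a toy two-component Gaussian mixture, attaches a noisy per-sample score that plays the role of side information about one target component, and estimates only that component's mean from score-weighted and score-selected moments. All variable names and the toy setup are assumptions introduced for the demonstration.

# Hypothetical sketch of the "search problem": use side information correlated
# with ONE target component of a Gaussian mixture to estimate that component's
# mean, without fitting the whole mixture. Toy data and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-component mixture in R^5; component 0 is the one we want to recover.
d, n = 5, 4000
mu_target, mu_other = np.full(d, 2.0), np.full(d, -2.0)
labels = rng.integers(0, 2, size=n)
X = np.where(labels[:, None] == 0, mu_target, mu_other) + rng.standard_normal((n, d))

# Assumed side information: a noisy per-sample score that tends to be larger
# for samples from the target component (e.g. metadata or a weak classifier).
# "Informative" here means the score actually separates the components.
scores = rng.uniform(0.6, 1.0, n) * (labels == 0) + rng.uniform(0.0, 0.4, n) * (labels == 1)

# Compare two simple estimators: the plain sample mean (ignores side
# information) and a score-weighted mean, which is pulled toward the target
# component and improves as the side information becomes more discriminative.
w = scores / scores.sum()
plain_mean = X.mean(axis=0)   # near the midpoint of the two components
weighted_mean = w @ X         # skewed toward the target component

# A sharper variant: average only the top 25% of samples by score, which in
# this toy setup come almost entirely from the target component.
top = np.argsort(scores)[-n // 4:]
selected_mean = X[top].mean(axis=0)

print("target mean        :", mu_target)
print("plain sample mean  :", np.round(plain_mean, 2))
print("side-info weighted :", np.round(weighted_mean, 2))
print("top-score mean     :", np.round(selected_mean, 2))

The point of the comparison is the abstract's "if (and only if) informative" caveat: with uninformative (near-uniform) scores the weighted and selected means collapse back to the plain sample mean, while discriminative scores let a trivial estimator zero in on the single component of interest.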
With Donald Trump, everything old is new again, it seems. His latest effort to grapple with the school shooting in Parkland, Florida, sees him joining his fellow Republicans, such as the Kentucky governor, Matt Bevin, in resuscitating a long-dormant culture war, blaming video games for mass shootings.
The proposed changes to the ACM Code of Ethics and Professional Conduct, as discussed by Don Gotterbarn et al. in "ACM Code of Ethics: A Guide for Positive Action"¹ (Digital Edition, Jan. 2018), are generally misguided and should be rejected by the ACM membership. ACM is a computing society, not a society of activists for social justice, community organizers, lawyers, police officers, or MBAs. The proposed changes add nothing related specifically to computing and far too much related to these other fields, and also fail to address, in any significant new way, probably the greatest ethical hole in computing today--security and hacking. If the proposed revised Code is ever submitted to a vote by the membership, I will be voting against it and urge other members to do so as well. ACM promotes ethical and social responsibility as key components of professionalism.
When we talk about the dangers posed by artificial intelligence, the emphasis is usually on the unintended side effects. We worry that we might accidentally create a super-intelligent AI and forget to program it with a conscience; or that we'll deploy criminal sentencing algorithms that have soaked up the racist biases of their training data.
As America looks for answers in the wake of the shooting massacre of 17 students at Marjory Stoneman Douglas High School in Parkland, Florida, some politicians have returned to the 1990s tactic of blaming video games for violence. Kentucky governor Matt Bevin started the show a couple of days after the shooting, and on Wednesday, Rhode Island state representative Bobby Nardolillo took it a step further.
AI could reboot industries and make the economy more productive; it's already infusing many of the products we use daily. But a new report by more than 20 researchers from the Universities of Oxford and Cambridge, OpenAI, and the Electronic Frontier Foundation warns that the same technology creates new opportunities for criminals, political operatives, and oppressive governments--so much so that some AI research may need to be kept secret.