PL and HCI

Communications of the ACM

Each subfield has its own culture and design goals. Both communities contribute features that matter to users, but often different sets of features. The PL community has deep expertise in developing modular, reusable abstractions. The HCI community has deep expertise in developing abstractions that are easy to learn or that match the existing mental models of their target users. With rich histories of abstraction design across both fields, a union of these forms of expertise holds the promise of delivering useful, usable, and powerful abstractions.


The 84 biggest flops, fails, and dead dreams of the decade in tech

#artificialintelligence

The world never changes quite the way you expect. But at The Verge, we've had a front-row seat while technology has permeated every aspect of our lives over the past decade. Some of the resulting moments -- and gadgets -- arguably defined the decade and the world we live in now. But others we ate up with popcorn in hand, marveling at just how incredibly hard they flopped. This is the decade we learned that crowdfunded gadgets can be utter disasters, even if they don't outright steal your hard-earned cash. It's the decade of wearables, tablets, drones and burning batteries, and of ridiculous valuations for companies that were really good at hiding how little they actually had to offer. Here are 84 things that died hard, often hilariously, to bring us where we are today. Everyone was confused by Google's Nexus Q when it debuted in 2012, including The Verge -- which is probably why the bowling ball of a media streamer crashed and burned before it even came to market.


The UCR Time Series Archive

arXiv.org Machine Learning

The UCR Time Series Archive, introduced in 2002, has become an important resource in the time series data mining community, with at least one thousand published papers making use of at least one dataset from the archive. The original incarnation of the archive had sixteen datasets, but it has since gone through periodic expansions. The last expansion took place in the summer of 2015, when the archive grew from 45 to 85 datasets. This paper introduces, and focuses on, the new expansion from 85 to 128 datasets. Beyond expanding this valuable resource, the paper offers pragmatic advice to anyone who may wish to evaluate a new algorithm on the archive. Finally, it makes a novel yet actionable claim: of the hundreds of papers that show an improvement over the standard baseline (1-Nearest Neighbor classification), a large fraction may be misattributing the reasons for their improvement. Moreover, they may have been able to achieve the same improvement with a much simpler modification, requiring just a single line of code.
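For context, the 1-Nearest Neighbor baseline the abstract refers to assigns each test series the label of its closest training series under some distance measure, Euclidean distance in the simplest form (Dynamic Time Warping is the other common choice). Below is a minimal sketch of that baseline, assuming equal-length series stored as NumPy arrays; the function names and the z-normalization step are illustrative conventions, not code from the paper.

```python
import numpy as np

def znorm(x):
    # Z-normalize one series; a common preprocessing step for UCR datasets.
    return (x - x.mean()) / (x.std() + 1e-8)

def one_nn_predict(train_X, train_y, test_X):
    """1-Nearest Neighbor classification under Euclidean distance.

    train_X: (n_train, length) array of equal-length training series
    train_y: (n_train,) array of class labels
    test_X:  (n_test, length) array of series to classify
    """
    train_X = np.apply_along_axis(znorm, 1, train_X)
    preds = []
    for x in test_X:
        x = znorm(x)
        # Squared Euclidean distance from x to every training series.
        dists = np.sum((train_X - x) ** 2, axis=1)
        preds.append(train_y[np.argmin(dists)])
    return np.array(preds)
```

The whole baseline fits in a few lines with no training phase, which is why it serves as the standard reference point on the archive; the paper's point is that changes of comparably small scope can account for much of the improvement that more elaborate methods report.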