Compressed Linear Algebra for Declarative Large-Scale Machine Learning

Communications of the ACM

Large-scale Machine Learning (ML) algorithms are often iterative, using repeated read-only data access and I/O-bound matrix-vector multiplications. Hence, it is crucial for performance to fit the data into single-node or distributed main memory to enable fast matrix-vector operations. General-purpose compression struggles to achieve both good compression ratios and fast decompression for block-wise uncompressed operations. Therefore, we introduce Compressed Linear Algebra (CLA) for lossless matrix compression. CLA encodes matrices with lightweight, value-based compression techniques and executes linear algebra operations directly on the compressed representations. We contribute effective column compression schemes, cache-conscious operations, and an efficient sampling-based compression algorithm. Our experiments show good compression ratios and operations performance close to the uncompressed case, which enables fitting larger datasets into available memory. We thereby obtain significant end-to-end performance improvements.

Large-scale ML leverages large data collections to find interesting patterns or build robust predictive models.7 Applications range from traditional regression, classification, and clustering to user recommendations and deep learning for unstructured data. The labeled data required to train these ML models is now abundant, thanks to feedback loops in data products and weak supervision techniques. Many ML systems exploit data-parallel frameworks such as Spark20 or Flink2 for parallel model training and scoring on commodity hardware. It remains challenging, however, to train ML models on massive labeled data sets in a cost-effective manner.
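The core idea of executing linear algebra directly on value-compressed columns can be sketched in a few lines. This is an illustrative toy, not SystemML's actual CLA implementation: the names `CompressedColumn` and `matvec_compressed` are my own, and the encoding shown is a simple offset-list scheme (each column stored as a map from distinct value to the rows where it occurs), under the assumption of columns with few distinct values.

```python
import numpy as np

class CompressedColumn:
    """One matrix column stored as {distinct value -> list of row offsets}.
    Zeros are implicit, so sparse columns compress well."""
    def __init__(self, col):
        self.n = len(col)
        self.groups = {}
        for i, v in enumerate(col):
            if v != 0:
                self.groups.setdefault(v, []).append(i)

def matvec_compressed(cols, x):
    """Compute y = X @ x directly on the compressed columns:
    one scaled scatter per distinct value group, no decompression."""
    y = np.zeros(cols[0].n)
    for j, c in enumerate(cols):
        for v, rows in c.groups.items():
            y[rows] += v * x[j]
    return y

X = np.array([[1.0, 0.0],
              [1.0, 2.0],
              [0.0, 2.0]])
cols = [CompressedColumn(X[:, j]) for j in range(X.shape[1])]
x = np.array([3.0, 0.5])
print(matvec_compressed(cols, x))  # same result as X @ x
```

The point of the sketch is that the work per column is proportional to the number of distinct values and their offsets, not to the dense column size, which is why such operations can approach or beat uncompressed performance on repetitive data.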


Implementing Guidelines for Governance, Oversight of AI, and Automation

Communications of the ACM

Governance and independent oversight of the design and implementation of all forms of artificial intelligence (AI) and automation are a cresting wave about to break comprehensively on the field of information technology and computing. If this is a surprise to you, then you may have missed the forest for the trees amid the myriad of news stories over the past three to five years. Privacy failures, cybersecurity breaches, unethical choices in decision engines, and biased datasets have repeatedly sprung up as corporations around the world deploy increasing numbers of AIs throughout their organizations. The public, along with legislative bodies, regulators, and a dedicated body of academics operating in the field of AI Safety, has been pressing the issue. Now guidelines are taking hold in a practical format.


Code Talkers

Communications of the ACM

When Tavis Rudd decided to build a system that would allow him to write computer code using his voice, he was driven by necessity. In 2010, he tore his rotator cuff while rock climbing, forcing him to quit climbing while the injury healed. Rather than sitting idle, he poured more of his energy into his work as a self-employed computer programmer. "I'd get in the zone and just go for hours," he says. Whether it was the increased time pounding away at a keyboard or the lack of other exercise, Rudd eventually developed a repetitive strain injury (RSI) that caused his outer fingers to go numb and cold, leaving him unable to type or code without pain.


Questioning Quantum

Communications of the ACM

At the beginning of December last year, a committee set up by the U.S. National Academies of Sciences, Engineering, and Medicine said it had come to the conclusion that a viable quantum computer with the ability to break ciphers based on today's encryption algorithms is a decade or more away, but that such machines are coming. Committee chair Mark Horowitz said he and his colleagues could see no fundamental reason, in principle, why a functional quantum computer could not eventually be built. When they do finally arrive, quantum computers will pose a number of problems for computer scientists when it comes to determining whether they work as expected. Quantum computers can make use of the property of superposition, in which the bits in a register in the machine do not exist in a single known state, but in a combination of states. Each state has a finite probability of being the one recorded when the register is read and the superposition collapses.
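The measurement behavior described above can be sketched as a short simulation. This is a minimal illustration of the general principle, not anything from the committee's report: a two-qubit register in an equal superposition assigns each basis state a probability (the squared magnitude of its amplitude), and reading the register collapses it to a single outcome drawn from that distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Equal superposition over the four basis states |00>, |01>, |10>, |11>:
# each has amplitude 1/2, so each is read with probability 1/4.
amplitudes = np.full(4, 0.5 + 0j)

# Born rule: probability = |amplitude|^2; probabilities must sum to 1.
probs = np.abs(amplitudes) ** 2

# Reading the register collapses the superposition to one basis state.
outcome = rng.choice(4, p=probs)
print(f"measured |{outcome:02b}>")
```

This probabilistic readout is exactly what makes verification hard: a single run tells you almost nothing, so testing whether a quantum computer "works as expected" means checking that the statistics of many runs match the predicted distribution.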


Technical Perspective: Compressing Matrices for Large-Scale Machine Learning

Communications of the ACM

Demand for more powerful big data analytics solutions has spurred the development of novel programming models, abstractions, and platforms for next-generation systems. For these problems, a complete solution would address data wrangling and processing, and it would support analytics over data of any modality or scale. It would support a wide array of machine learning algorithms, but also provide primitives for building new ones. It would be customizable, scale to vast volumes of data, and map to modern multicore, GPU, coprocessor, and compute cluster hardware. In pursuit of these goals, novel techniques and solutions are being developed by machine learning researchers,4,6,7 in the database and distributed systems research communities,2,5,8 and by major players in industry.1,3


Countering the Negative Image of Women in Computing

Communications of the ACM

Despite increased knowledge about gender (in)equality,7,27,38 women in STEM disciplines are still portrayed in stereotypical ways in the popular media. We have reviewed academic research, along with mainstream media quotes and images, for depictions of women in STEM and women in computing/IT. We found their personality and identity formation continues to be influenced by the personas and stereotypes associated with role images seen in the media. This, in turn, can affect women's underrepresentation and career participation, as well as prospects for advancement in computing fields. Computer Science Degree Hub15 in 2014 published its list of the 30 most influential living computer scientists, weighing leadership, applicability, awards, and recognition as selection criteria. The list included only one woman, Sophie Wilson, a British computer scientist best known for designing the Acorn Micro-Computer, the first computer sold by Acorn Computers Ltd. in 1978. Elected a fellow of the prestigious Royal Society, Wilson is today the Director of IC Design at Broadcom Inc. in Cambridge, U.K., and was ranked number 30 of 30 on the list.


Robots threaten middle-aged workers the most (that's anyone over 21)

ZDNet

Gallows humor, they call it. "A robot will do my job soon," they say, as they toil in their factory or lawyer's office. For many, it's hard to imagine what they might do next, except, in a fanciful thought, train as a mechanic fixing robots. Yet the more optimistic sorts claim that automation will bring new jobs, new ways of working, and new opportunities to improve life for all. I was a little downcast, therefore, to read three recent papers by two influential economists, Daron Acemoglu of MIT and Pascual Restrepo of Boston University.


Lenovo taps IBM's cognitive and blockchain tools to improve customer service

ZDNet

Lenovo is turning to its longstanding partnership with IBM to improve its customer service with blockchain and AI-powered tools. Under the new, multi-year agreement, Lenovo will use IBM's tools to assist customer service agents and technicians within the Data Center Group -- supporting Lenovo's ThinkSystem and ThinkAgile platforms. Lenovo's relationship with IBM dates back to 2005, when the Chinese tech firm acquired IBM's PC business. In 2014, Lenovo purchased IBM's x86 server business. Last year, Lenovo signed a $240 million deal with IBM to improve the customer service experience within its PC division.


MIT finally gives a name to the sum of all AI fears

ZDNet

Now we know what to call it, that vast, disturbing collection of worries about artificial intelligence and the myriad of threats we imagine, from machine bias to lost jobs to Terminator-like robots: "Machine behaviour." That's the term that researchers at the Massachusetts Institute of Technology's Media Lab have proposed for a new kind of interdisciplinary field of study to figure out how AI evolves, and what it means for humans. The stakes are high because there is lots of potential for human ability to be amplified by algorithms, but also lots of peril. Commentators and scholars, they write, "are raising the alarm about the broad, unintended consequences of AI agents that can exhibit behaviours and produce downstream societal effects -- both positive and negative -- that are unanticipated by their creators." There is "a fear of the potential loss of human oversight over intelligent machines," and the development of "autonomous weapons" means that "machines could determine who lives and who dies in armed conflicts."


25 Mother's Day gifts she actually wants

USATODAY

We'll help you find the best gifts for mom. If you make a purchase by clicking one of our links, we may earn a small share of the revenue. However, our picks and opinions are independent from USA TODAY's newsroom and any business incentives. Mother's Day is just around the corner and it's time to treat your mom, friend who is a mom, or self-proclaimed mom friend to a special something. We're here to save you from stress shopping and impulse buying with this curated guide of some of this year's tried-and-true products.