Nurse-Arrest Audit Proposes Faster Officer Investigations

U.S. News

The audit released Tuesday says the department followed its policies, but the two-month investigation process nevertheless raised public concern after video of the July arrest drew widespread attention online amid a national conversation about police use of force.


Faster Convolutions with Sparse FFT • /r/MachineLearning

@machinelearnbot

I disagree. Ryan Adams proposed "pooling" in frequency space, which amounted to truncating the number of DFT components. As I understand sparse FFT, it does the same, sometimes even with an adaptive rank. So while it wouldn't give convolutions exactly equivalent to regular convolutions, it could be an appealing way to speed up and regularize the model.
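
A minimal NumPy sketch (not from the thread) of what frequency-space truncation looks like for a 1-D convolution: multiply the zero-padded spectra, then zero out everything above a cutoff. The function name truncated_fft_conv and the keep parameter are illustrative choices, not part of any sparse-FFT library, and the result only approximates a full convolution.

import numpy as np

def truncated_fft_conv(signal, kernel, keep):
    # Approximate linear convolution: multiply in the frequency domain,
    # then keep only the `keep` lowest-frequency DFT components.
    n = len(signal) + len(kernel) - 1      # full convolution length
    S = np.fft.rfft(signal, n)             # zero-padded real FFTs
    K = np.fft.rfft(kernel, n)
    prod = S * K                           # convolution theorem
    prod[keep:] = 0.0                      # drop high-frequency components
    return np.fft.irfft(prod, n)

x = np.random.randn(256)
w = np.random.randn(9)
exact = np.convolve(x, w)                  # reference full convolution
approx = truncated_fft_conv(x, w, keep=64) # truncated-spectrum version
print(np.max(np.abs(exact - approx)))      # approximation / smoothing error

The zeroed-out tail is what a sparse or pruned FFT would skip or never compute; it also acts as a low-pass regularizer, which is where both the speed-up and the regularization effect come from.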


Quiet: Faster Belief Propagation for Images and Related Applications

AAAI Conferences

Belief propagation over Markov random fields has been successfully used in many AI applications because it yields accurate inference results by iteratively updating messages between nodes. However, its high computation cost is a barrier to practical use. This paper presents an efficient approach to belief propagation. Our approach, Quiet, dynamically detects converged messages and skips their unnecessary updates in each iteration, while theoretically guaranteeing the same output as standard belief propagation. Experiments show that our approach is significantly faster than existing approaches without sacrificing inference quality.
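
The abstract gives no implementation details, but a rough sketch of the skipping idea on a pairwise MRF might look like the following: a message is only recomputed when one of its incoming messages changed in the previous sweep, so converged parts of the graph stop generating work. The function name, the tol threshold, and the symmetric pairwise potential are assumptions for illustration, not the paper's actual algorithm.

import numpy as np

def loopy_bp_with_skipping(unary, edges, pairwise, n_iters=50, tol=1e-6):
    # Sum-product loopy BP on a pairwise MRF (pairwise assumed symmetric).
    # A message i->j is recomputed only if some incoming message k->i
    # (k != j) changed by more than tol in the previous sweep.
    n_nodes, n_states = unary.shape
    nbrs = {i: [] for i in range(n_nodes)}
    for (a, b) in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    msg = {(i, j): np.ones(n_states) / n_states
           for (a, b) in edges for (i, j) in ((a, b), (b, a))}
    changed = set(msg)                     # compute everything once at the start
    for it in range(n_iters):
        new_changed = set()
        for (i, j) in msg:
            if it > 0 and not any((k, i) in changed for k in nbrs[i] if k != j):
                continue                   # inputs unchanged: skip this update
            belief_i = unary[i].copy()
            for k in nbrs[i]:
                if k != j:
                    belief_i *= msg[(k, i)]
            m_new = pairwise @ belief_i    # marginalize over the states of node i
            m_new /= m_new.sum()
            if np.max(np.abs(m_new - msg[(i, j)])) > tol:
                new_changed.add((i, j))
            msg[(i, j)] = m_new
        if not new_changed:                # every message has converged
            break
        changed = new_changed
    beliefs = unary.copy()                 # node beliefs from unaries and messages
    for i in range(n_nodes):
        for k in nbrs[i]:
            beliefs[i] *= msg[(k, i)]
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# toy example: 4-node cycle with a smoothness-favoring pairwise potential
unary = np.random.rand(4, 3)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
pairwise = np.full((3, 3), 0.1) + 0.9 * np.eye(3)
print(loopy_bp_with_skipping(unary, edges, pairwise))

Skipped messages are simply reused from the previous sweep, so converged regions cost nothing; note that this sketch only skips updates whose inputs moved less than tol, whereas the paper proves exact equivalence to the standard schedule.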



Microchip Enters Memory Infrastructure Market with Serial Memory Controller for High-performance Data Center Computing

#artificialintelligence

A CPU or SoC with OMI support can use a broad set of media types with different cost, power and performance characteristics without having to integrate a unique memory controller for each type. In contrast, CPU and SoC memory interfaces today are typically locked to specific DDR interface protocols, such as DDR4, at specific interface rates. The SMC 1000 8x25G is the first memory infrastructure product in Microchip's portfolio to enable the media-independent OMI interface. Data center application workloads require OMI-based DDIMM memory products to deliver the same high bandwidth and low latency as today's parallel DDR-based memory products. Microchip's SMC 1000 8x25G features an innovative low-latency design that adds less than 4 ns of incremental latency over a traditional integrated DDR controller with LRDIMM.