North Korean MacOS Malware Adopts In-Memory Execution

#artificialintelligence

A new piece of macOS malware linked to the North Korean hacking group Lazarus employs in-memory execution of payloads, researchers revealed this week.


On GPUs, ranges, latency, and superoptimisers

#artificialintelligence

In addition to that, the PTX documentation clearly states that bar.sync, the instruction to which __syncthreads() is usually (if not always) compiled, takes only non-exited threads into account. In other words, according to the official specification, it should be perfectly safe to use bar.sync with exited threads, but not necessarily __syncthreads(). I am going to keep using __syncthreads() in this article, though. The second issue, excessive L1TEX stalls, is simpler to solve. Since we know precisely what our access pattern is going to look like, we can use shared memory and effectively manage the cached data manually. If we assume a block size of 1024 threads and T being an integer, we can create a shared memory array and use it to store the intermediate values. The way we have already arranged the threads and the values they are responsible for ensures there won't be any bank conflicts. Unfortunately, this won't work if the size of T makes the array larger than the available shared memory, or even big enough to reduce the occupancy of the Streaming Multiprocessor. In such a case, the options are either to reduce the block size and use dynamic shared memory or C++ template metaprogramming to decide on the size of the array, or to stop using shared memory and rely on the cache entirely.
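To make the shared-memory idea concrete, here is a minimal CUDA sketch, not taken from the article: it assumes a 1024-thread block and T = int, and the kernel name, the neighbour-sum access pattern, and the use of managed memory are placeholders for whatever the article's kernel actually computes.

```cuda
// Minimal sketch (not the article's code): stage per-thread intermediate
// values in shared memory instead of re-reading them through L1TEX.
#include <cstdio>

template <typename T, int BLOCK_SIZE = 1024>
__global__ void stage_in_shared(const T* __restrict__ in, T* __restrict__ out, int n)
{
    // One intermediate value per thread; with T = int and 1024 threads this
    // is 4 KiB, comfortably within a block's shared-memory budget.
    __shared__ T staged[BLOCK_SIZE];

    int gid = blockIdx.x * BLOCK_SIZE + threadIdx.x;
    T v = (gid < n) ? in[gid] : T(0);

    // Consecutive threads write consecutive slots, so each thread hits a
    // different shared-memory bank and there are no bank conflicts.
    staged[threadIdx.x] = v;
    __syncthreads();

    // Re-read a neighbour's value from shared memory rather than from global
    // memory (a stand-in for the article's actual access pattern).
    int neighbour = (threadIdx.x + 1) % BLOCK_SIZE;
    if (gid < n)
        out[gid] = staged[threadIdx.x] + staged[neighbour];
}

int main()
{
    const int n = 1 << 20;
    int *in, *out;
    cudaMallocManaged(&in, n * sizeof(int));
    cudaMallocManaged(&out, n * sizeof(int));
    for (int i = 0; i < n; ++i) in[i] = i;

    stage_in_shared<int><<<n / 1024, 1024>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %d\n", out[0]);   // expect in[0] + in[1] = 1
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

If T were larger, the staged array in this sketch is exactly what would have to move to dynamic shared memory or shrink along with the block size, as the article describes.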


A history of NVidia Stream Multiprocessor

#artificialintelligence

No longer SIMD capable, the "shader units" are now "cores" capable of one integer or one float32 instruction per clock. The SM receives threads in groups of 32 called warps. Ideally, all threads in a warp execute the same instruction at the same time, only on different data (hence the name SIMT). The Multi-threaded Instruction Unit (MT) takes care of enabling/disabling threads in a warp when their Instruction Pointers (IP) converge/diverge. Two SFU units are there to help with complex mathematical calculations such as inverse square root, sin, cos, exp, and rcp.
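As a side note not in the article, the divergence handling described above can be shown with a tiny CUDA sketch: when the two halves of a warp take different branches, the hardware runs each path with the other half masked off and the warp reconverges afterwards.

```cuda
// Minimal sketch (not from the article): branch divergence within one warp.
#include <cstdio>

__global__ void divergent_kernel(int* out)
{
    int lane = threadIdx.x % 32;       // position within the 32-thread warp

    if (lane < 16) {
        // Lanes 0..15 are active here; lanes 16..31 are masked off.
        out[threadIdx.x] = lane * 2;
    } else {
        // Then lanes 16..31 run this path while lanes 0..15 are masked off.
        out[threadIdx.x] = lane + 100;
    }
    // From here on the whole warp is active again: the IPs have reconverged.
}

int main()
{
    int* out;
    cudaMallocManaged(&out, 32 * sizeof(int));
    divergent_kernel<<<1, 32>>>(out);
    cudaDeviceSynchronize();
    for (int i = 0; i < 32; ++i) printf("%d ", out[i]);
    printf("\n");
    cudaFree(out);
    return 0;
}
```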


Model Matching and SFMD Computation

Neural Information Processing Systems

In systems that process sensory data there is frequently a model matching stage where class hypotheses are combined to recognize a complex entity. We introduce a new model of parallelism, the Single Function Multiple Data (SFMD) model, appropriate to this stage. SFMD functionality can be added with small hardware expense to certain existing SIMD architectures, and as an incremental addition to the programming model. Adding SFMD to an SIMD machine will not only allow faster model matching, but also increase its flexibility as a general purpose machine and its scope in performing the initial stages of sensory processing.

1 INTRODUCTION

In systems that process sensory data there is frequently a post-classification stage where several independent class hypotheses are combined into the recognition of a more complex entity. Examples include matching word models with a string of observation probabilities, and matching visual object models with collections of edges or other features. Current parallel computer architectures for processing sensory data focus on the classification and pre-classification stages (Hammerstrom 1990). This is reasonable, as those stages likely have the largest potential for speedup through parallel execution. Nonetheless, the model-matching stage is also suitable for parallelism, as each model may be matched independently of the others. We introduce a new style of parallelism, Single Function Multiple Data (SFMD), that is suitable for the model-matching stage.
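To make the SFMD idea concrete, here is a toy sketch, not from the paper: every thread runs the same scoring function, but over a model of its own length, so the control flow is data-dependent per thread. The names, the squared-distance score, and the use of CUDA (whose SIMT model absorbs this divergence through masking) are all my own illustration, not the paper's proposed hardware.

```cuda
// Toy sketch (not the paper's system): single function, multiple data, where
// each thread matches one model of a different length against one observation.
#include <cstdio>

#define MAX_MODELS 4
#define OBS_LEN 8

__global__ void match_models(const float* obs,        // shared observation vector
                             const float* models,     // MAX_MODELS x OBS_LEN features
                             const int* model_len,    // actual length of each model
                             float* score)
{
    int m = threadIdx.x;
    if (m >= MAX_MODELS) return;

    // Same function in every thread, but the loop bound depends on the
    // thread's own model: the data-dependent control flow SFMD is about.
    float s = 0.0f;
    for (int i = 0; i < model_len[m]; ++i) {
        float d = obs[i] - models[m * OBS_LEN + i];
        s += d * d;                     // toy distance-based score
    }
    score[m] = s;
}

int main()
{
    float *obs, *models, *score;
    int *len;
    cudaMallocManaged(&obs, OBS_LEN * sizeof(float));
    cudaMallocManaged(&models, MAX_MODELS * OBS_LEN * sizeof(float));
    cudaMallocManaged(&len, MAX_MODELS * sizeof(int));
    cudaMallocManaged(&score, MAX_MODELS * sizeof(float));

    for (int i = 0; i < OBS_LEN; ++i) obs[i] = 1.0f;
    for (int m = 0; m < MAX_MODELS; ++m) {
        len[m] = 2 + m;                 // models of different lengths
        for (int i = 0; i < OBS_LEN; ++i) models[m * OBS_LEN + i] = (float)m;
    }

    match_models<<<1, MAX_MODELS>>>(obs, models, len, score);
    cudaDeviceSynchronize();
    for (int m = 0; m < MAX_MODELS; ++m) printf("model %d: %f\n", m, score[m]);
    return 0;
}
```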


Fix for Intel's massive CPU security flaw might slow down your PC

Engadget

Intel is grappling with another major security flaw in its processors... and this time, the cost of fixing it may be very steep. Researchers have discovered a design vulnerability in Intel CPUs made over the past decade that gives ordinary programs the ability to determine the content or layout of protected kernel memory. While the details appear to be under embargo for now, the fix is to completely separate the kernel memory from those ordinary processes. That could carry a significant speed hit, since it requires switching between two memory address spaces every time there's a system call or a hardware interrupt request.