Yellowbrick shows that its data warehouse is not smoke and mirrors

ZDNet

Formed in 2014, with its first product released three years later, Yellowbrick has specialized in the high-performance end of the data warehousing spectrum, including analytics of real-time data. It cites benchmarks showing the ability to ingest up to 10 terabytes of data per hour. Until now, it has done so through a proprietary hardware architecture, including specialized FPGA chips, that made it sound to us like the second coming of Netezza appliances. While impressed with its performance stats, we were initially leery about how Yellowbrick's market success would scale when defined by custom hardware. But over the past year, the company has started ramping up its Yellowbrick Cloud Data Warehouse services, which instead leverage the specialized hardware offered by providers like AWS, whose buying power far exceeds that of a niche provider like Yellowbrick alone.


Cerebras Systems Unveils the Industry's First Trillion Transistor Chip

#artificialintelligence

Cerebras Systems, a startup dedicated to accelerating artificial intelligence (AI) compute, today unveiled the largest chip ever built. Optimized for AI work, the Cerebras Wafer Scale Engine (WSE) is a single chip that contains more than 1.2 trillion transistors and measures 46,225 square millimeters. The WSE is 56.7 times larger than the largest graphics processing unit, which measures 815 square millimeters and contains 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. In AI, chip size is profoundly important.
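
As a quick sanity check, the size ratios follow directly from the quoted measurements. Below is a minimal sketch in Python; the figures come straight from the announcement, and the script itself is purely illustrative:

```python
# Verify the chip-size ratios quoted above (figures from the announcement).
wse_area_mm2 = 46_225         # Cerebras WSE die area
gpu_area_mm2 = 815            # largest GPU die area cited
wse_transistors = 1.2e12      # 1.2 trillion
gpu_transistors = 21.1e9      # 21.1 billion

print(f"Area ratio: {wse_area_mm2 / gpu_area_mm2:.1f}x")              # ~56.7x
print(f"Transistor ratio: {wse_transistors / gpu_transistors:.1f}x")  # ~56.9x
```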


GeForce Now picks up the OnLive mantle, allowing you to stream games direct to your PC

PCWorld

Considering how much money Nvidia has spent on its Grid service (which was renamed GeForce Now in October), Wednesday's announcement at CES felt almost predictable: The company's officially bringing GeForce Now to the PC (and Mac!). Until this point, Nvidia's game-streaming service was confined to the Shield--whether the earlier mobile version, the tablet, or the current Android TV incarnation. Now you can access the full library of games as if they were running right on your PC. The service routes your mouse-and-keyboard inputs to a remote PC running the game, then shoots the video back to you with as little latency as possible. If you have a solid internet connection, you can play all of the hottest current titles on your six-year-old laptop or whatever.
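
To make that round-trip architecture concrete, here is a hypothetical, heavily simplified sketch in Python of the thin-client loop described above. In-process queues stand in for the network, and none of the names reflect Nvidia's actual protocol:

```python
import queue
import threading
import time

# Hypothetical sketch of a game-streaming loop: the client forwards input
# events to a remote game server and receives rendered video frames back.
inputs_to_server = queue.Queue()   # client -> server (mouse/keyboard events)
frames_to_client = queue.Queue()   # server -> client (video frames)

def remote_game_server():
    """Stand-in for the remote PC: apply each input, render, return a frame."""
    while True:
        event = inputs_to_server.get()
        if event is None:                 # shutdown signal
            break
        frame = f"frame after {event}"    # stand-in for an encoded video frame
        frames_to_client.put(frame)

threading.Thread(target=remote_game_server, daemon=True).start()

for event in ("key W down", "mouse move (10, -3)", "key W up"):
    sent = time.perf_counter()
    inputs_to_server.put(event)           # forward the input
    frame = frames_to_client.get()        # receive the rendered frame
    latency_ms = (time.perf_counter() - sent) * 1e3
    print(f"{frame!r} round trip: {latency_ms:.2f} ms")

inputs_to_server.put(None)  # stop the server thread
```

Over a real network the same loop holds, but the round trip is dominated by encoding, transmission, and decoding time--which is why the solid internet connection matters.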


Analysis: Data Center Networking Performance – New Apps Bring New

#artificialintelligence

With machine learning, big data, cloud, and network functions virtualization (NFV) initiatives invading the data center, there are implications for data center networking performance. Large cloud services providers such as Amazon, Google, Baidu, and Tencent have reinvented the way in which IT services can be delivered, with capabilities that go beyond scale in terms of sheer size to also include scale as it pertains to speed and agility. That's put traditional carriers on notice: John Donovan, chief strategy officer and group president of AT&T Technology and Operations, for instance, said last year that AT&T wants to be the "most aggressive IT company in the world." He noted that in a world where over-the-top (OTT) offerings have become commonplace, application and services development can no longer be defined by legacy processes. "People that were suppliers are now competitors," he said.


Intel Stretches Deep Learning on Scalable System Framework

#artificialintelligence

The strong interest in deep learning lies in the ability of neural networks to solve complex pattern recognition tasks – sometimes better than humans. Once trained, these machine learning solutions can run very quickly – even in real time – and very efficiently on low-power mobile devices and in the datacenter. However, training a machine learning model to accurately solve complex problems requires large amounts of data, which greatly increases the computational workload. Scalable distributed parallel computing using a high-performance communications fabric is an essential part of what makes the training of deep learning on large, complex datasets tractable in both the data center and the cloud. Put simply, the single-node TF/s parallelism delivered by Intel Xeon processors and Intel Xeon Phi devices, described in the previous article in this series, is not enough for many complex machine learning training sets.
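
To illustrate why that communications fabric matters, here is a minimal sketch of synchronous data-parallel SGD in Python with NumPy. It simulates the allreduce step in a single process; a real deployment would spread the shards across nodes and average gradients over the fabric (e.g., with MPI), so treat this as a conceptual illustration rather than Intel's actual stack:

```python
import numpy as np

# Synchronous data-parallel SGD on a linear least-squares problem:
# each "node" computes a gradient on its own data shard, and an
# allreduce-style average combines them before every weight update.
rng = np.random.default_rng(0)
n_nodes, n_features = 4, 8
w_true = rng.normal(size=n_features)

# Each simulated node holds its own shard of the training data.
shards = []
for _ in range(n_nodes):
    X = rng.normal(size=(256, n_features))
    y = X @ w_true + 0.01 * rng.normal(size=256)
    shards.append((X, y))

w = np.zeros(n_features)
lr = 0.1
for step in range(200):
    # Local step: each node computes the gradient of its shard's MSE loss.
    local_grads = [2.0 * X.T @ (X @ w - y) / len(y) for X, y in shards]
    # "Allreduce": average the per-node gradients (the fabric's job).
    grad = np.mean(np.stack(local_grads), axis=0)
    w -= lr * grad

print("max weight error:", np.abs(w - w_true).max())  # should be tiny
```

The averaging step repeats every iteration, so its communication cost is exactly what the high-performance fabric has to keep from becoming the bottleneck as node counts grow.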