Hardware


These Fitbit trackers are on sale for up to 31% off amid rumors of a new model

Mashable

Apple isn't the only brand rumored to be working on a new wearable this spring, apparently: The German tech site WinFuture (via 9to5Google) just published images of the so-called Fitbit Luxe, which looks to be a high-end fitness tracker with a stainless steel case and an upgraded OLED screen. Fitbit itself has yet to speak on this goss, much less announce a release date for the Luxe, but "the depth of the leak suggests it's coming soon," says Engadget. Fitbit came out with four new fitness trackers last year, so we have high hopes for its 2021 lineup. Speaking of: If you don't feel like waiting around for "soon," now's not a bad time to grab one of Fitbit's (still very good) 2019 or 2020 releases. Walmart had a few of them on sale at the time of writing, taking as much as 31% off their retail prices.


Beyond CUDA: GPU Accelerated Python for Machine Learning on Cross-Vendor Graphics Cards Made Simple

#artificialintelligence

Machine learning algorithms -- together with many other advanced data processing paradigms -- fit incredibly well with the parallel architecture that GPU computing offers. This has driven massive growth in the advancement and adoption of graphics cards for accelerated computing in recent years. It has also driven exciting research around techniques that optimize for concurrency, such as model parallelism and data parallelism. In this article you'll learn how to write your own GPU-accelerated algorithms in Python, which you will be able to run on virtually any GPU hardware, including non-NVIDIA GPUs. We'll introduce core concepts and show how you can get started with the Kompute Python framework in only a handful of lines of code. First, we'll build a simple GPU-accelerated Python script that multiplies two arrays in parallel, which will introduce the fundamentals of GPU processing; a minimal sketch of that script follows.
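
Below is a minimal sketch of that kind of array-multiplication script, loosely based on the Kompute project's Python examples. Kompute's Python API has changed across releases, so the exact names used here (kp.Manager, kp.Shader.compile_source, and the OpTensorSync* operations) are assumptions tied to one version of the library, not a definitive recipe.

    import kp           # Kompute's Python bindings
    import numpy as np

    # GLSL compute shader: each invocation multiplies one pair of elements.
    SHADER = """
    #version 450
    layout (local_size_x = 1) in;
    layout (set = 0, binding = 0) buffer bufA { float a[]; };
    layout (set = 0, binding = 1) buffer bufB { float b[]; };
    layout (set = 0, binding = 2) buffer bufOut { float o[]; };
    void main() {
        uint i = gl_GlobalInvocationID.x;
        o[i] = a[i] * b[i];
    }
    """

    mgr = kp.Manager()  # grabs the first available Vulkan device, any vendor

    t_a = mgr.tensor(np.array([2.0, 4.0, 6.0], dtype=np.float32))
    t_b = mgr.tensor(np.array([1.0, 2.0, 3.0], dtype=np.float32))
    t_out = mgr.tensor(np.zeros(3, dtype=np.float32))
    params = [t_a, t_b, t_out]

    # compile_source assumes a build with runtime shader compilation enabled;
    # otherwise, pass precompiled SPIR-V bytes instead.
    algo = mgr.algorithm(params, kp.Shader.compile_source(SHADER))

    (mgr.sequence()
        .record(kp.OpTensorSyncDevice(params))   # copy inputs host -> GPU
        .record(kp.OpAlgoDispatch(algo))         # run the shader
        .record(kp.OpTensorSyncLocal(params))    # copy results GPU -> host
        .eval())

    print(t_out.data())  # expected: [ 2.  8. 18.]

Because Kompute sits on Vulkan rather than CUDA, the same script should run on AMD, Intel and NVIDIA GPUs alike.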


Quantum algorithms for data analysis

#artificialintelligence

In these lecture notes, we explore how we can leverage quantum computers and quantum algorithms for information processing. It has long been known that quantum computation can offer computational advantages over classical computation, and here we explore the consequences of this intuition in current domains of computer science. Why are we studying quantum algorithms? Studying how to use quantum mechanical systems is already fascinating in itself, but we argue that faster algorithms are not the only reason for studying quantum computing. Studying quantum computation might also reveal profound insights into new ways to process information.


A new era of innovation: Moore's Law is not dead and AI is ready to explode - SiliconANGLE

#artificialintelligence

Moore's Law is dead, right? Although the historical annual improvement of about 40% in central processing unit performance is slowing, the combination of CPUs packaged with alternative processors is improving at a rate of more than 100% per annum. These unprecedented and massive improvements in processing power combined with data and artificial intelligence will completely change the way we think about designing hardware, writing software and applying technology to businesses. Every industry will be disrupted. You hear that all the time. Well, it's absolutely true and we're going to explain why and what it all means. In this Breaking Analysis, we're going to unveil some data that suggests we're entering a new era of innovation where inexpensive processing capabilities will power an explosion of machine intelligence applications.
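
To make that gap concrete, here's a quick back-of-the-envelope compounding comparison. The roughly 40% and 100% annual rates are the article's figures; the five-year horizon is an arbitrary choice for illustration:

    # Cumulative performance multiple after compounding annually.
    def growth(rate_per_year: float, years: int) -> float:
        return (1 + rate_per_year) ** years

    years = 5
    cpu_only = growth(0.40, years)   # traditional CPU pace cited above
    combined = growth(1.00, years)   # CPU packaged with alternative processors

    print(f"CPU-only after {years} years: ~{cpu_only:.1f}x")   # ~5.4x
    print(f"Combined after {years} years: ~{combined:.1f}x")   # ~32.0x

At those rates, five years of combined CPU-plus-accelerator improvement yields roughly six times the gain of CPUs alone.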


Canonical's mini-Kubernetes, MicroK8s, has been optimized for Raspberry Pi

ZDNet

To say Kubernetes, everyone's top container orchestration pick, is hard to master is an understatement. Kubernetes doesn't have a learning curve so much as a learning cliff. But, Canonical's MicroK8s lets you learn to climb it in your home. And, with its latest release, it's easier than ever to set up a baby Kubernetes cluster using inexpensive Raspberry Pi or NVIDIA Jetson single-board computers (SBCs). MicroK8s is a tiny Kubernetes cluster platform.


The Morning After: Our verdict on the Sonos Roam

Engadget

The world of smart home audio has definitely benefited from all this time we're spending indoors. Sonos has an eye on the future, though. We've just finished reviewing the $170 Roam, which Sonos pitches as a hybrid speaker for beach trips and vacations, and which also integrates with your at-home sound system. It also doesn't look like a giant kettlebell, as Sonos' last attempt, the $400 (!) Move, did. According to Deputy Managing Editor Nathan Ingraham, it sounds good (and even better in a stereo pair) and is as portable as the competition.


Sony's LED Bravia TVs with 'cognitive intelligence' start at $1,299

Engadget

Sony's latest trick for Bravia TVs is something called "cognitive intelligence" that can enhance parts of an image depending on what's going on. The first 4K TVs it released with the tech were high-end OLED models, but those started at $2,999 and went up from there. Now, Sony has unveiled its first LED models with the Cognitive Processor XR, and they're a lot more reasonably priced. The X90J is the top-end LED model, and it's available in 50-, 55- and 65-inch sizes at $1,299, $1,499 and $1,799, respectively. These offer the Cognitive Processor XR, though cinephiles will likely want to turn that off to see the content as the creators intended.


Cyberpunk 2077 proves AMD's DLSS rival can't come soon enough

PCWorld

This week, a Cyberpunk 2077 patch added ray tracing support for AMD's Radeon RX 6000-series graphics cards, loosening Nvidia's (worryingly) exclusive grip on the cutting-edge lighting features. But it came with a bit of bad news too, as Cyberpunk 2077 drove home that AMD's missing DLSS rival, dubbed FidelityFX Super Resolution, can't come soon enough. AMD introduced real-time ray tracing support in the Radeon RX 6000-series GPUs, after Nvidia's GeForce RTX 20-series graphics cards brought the technology to PCs in 2018. Nvidia didn't just focus on ray tracing, however; it also rolled out a complementary Deep Learning Super Sampling (DLSS) feature. DLSS leverages machine learning and dedicated AI cores in RTX graphics cards to render games at a lower resolution internally, then upscale the final image to your chosen resolution.
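
That internal-resolution trick is where the performance win comes from, and a rough pixel count makes the point. The resolutions below are illustrative choices, not figures from the article, and real DLSS uses a neural network rather than simple ratio math:

    # How much shading work a lower internal render resolution saves.
    def pixels(width: int, height: int) -> int:
        return width * height

    target = pixels(3840, 2160)     # 4K output the player actually sees
    internal = pixels(2560, 1440)   # hypothetical internal render resolution

    print(f"Pixels shaded per frame: {internal:,} instead of {target:,}")
    print(f"Roughly {target / internal:.2f}x fewer pixels to ray trace and shade")
    # ~2.25x fewer pixels rendered before upscaling to 4K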


'Discovery Accelerator,' a new Cleveland Clinic-IBM partnership, will use quantum computer, artificial intelligence to speed up medical innovations

#artificialintelligence

The Cleveland Clinic and IBM have entered a 10-year partnership that will install a quantum computer -- which can handle large amounts of data at lightning speed -- at the Clinic next year to speed up medical innovations. The Discovery Accelerator, a joint Clinic-IBM center, will feature artificial intelligence, hybrid cloud data storage and quantum computing technologies. A hybrid cloud is a storage technology that allows for faster storage and analysis of large amounts of data. The partnership will allow Clinic researchers to use the advanced tech in its new Global Center for Pathogen Research and Human Health for research into genomics, population health, clinical applications, and chemical and drug discovery. That center studies emerging pathogens -- such as Zika and COVID-19 -- and seeks to develop treatments and vaccines to fight the next public health threat.


Armv9 is Arm's first major architectural update in a decade

#artificialintelligence

Arm, the leader in chips used in everything from mobile devices to supercomputers, has unveiled Armv9, the company's first major architectural change in a decade. The new designs should result in 30% faster performance over the next two chip generations. Arm is a chip architecture company that licenses its designs to others, and its customers have shipped more than 100 billion chips in the past five years. Nvidia is in the midst of acquiring Cambridge, United Kingdom-based Arm for $40 billion, but the deal is waiting on regulatory approvals. In a press briefing, Arm CEO Simon Segars said Armv9 will be the base for the next 300 billion Arm-based chips.