The Raspberry Pi Foundation has announced it's bringing the OpenVX 1.3 API to Raspberry Pi devices to improve computer vision on the popular single-board computers. The new open and royalty-free API comes from the Khronos Group, which has backed standards like Vulkan and OpenCL. Khronos members include most big-name software and hardware vendors – AMD, Apple, Arm, Epic Games, Google, Samsung, Intel, Nvidia and so on – as well as companies with a stake in its standards, like Boeing and IKEA. "The Khronos Group and Raspberry Pi have come together to work on an open-source implementation of OpenVX 1.3, which passes the conformance on Raspberry Pi," explained Kiriti Nagesh Gowda, AMD's MTS software development engineer. "The open-source implementation passes the Vision, Enhanced Vision, & Neural Net conformance profiles specified in OpenVX 1.3 on Raspberry Pi."
The Internet of Things (IoT) has sparked a proliferation of connected devices. These devices, which house sensors that collect data for day-to-day activities or monitoring purposes, are built around microcontroller and microprocessor chips. The chips are chosen to match the sensor data the device must process to complete its assigned task, so there is no one-processor-fits-all architecture. For example, some devices perform only a limited amount of processing on readings such as temperature, humidity, pressure, or gravity; more complicated systems, however, must handle multiple high-resolution sound or video streams.
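The "limited amount of processing" on a low-end device can be as simple as smoothing noisy sensor readings before sending them upstream. A minimal sketch of one common approach, an exponential moving average (the function name and smoothing factor are illustrative assumptions, not taken from any particular device):

```python
def smooth_readings(readings, alpha=0.3):
    """Exponential moving average: a cheap filter a microcontroller
    might apply to noisy temperature or humidity samples before
    transmitting them (alpha=0.3 is an illustrative choice)."""
    if not readings:
        return []
    smoothed = [readings[0]]
    for r in readings[1:]:
        # Blend the new sample with the running estimate
        smoothed.append(alpha * r + (1 - alpha) * smoothed[-1])
    return smoothed

# Noisy temperature samples in degrees Celsius
samples = [21.0, 23.5, 20.8, 22.1, 21.7]
print(smooth_readings(samples))
```

A filter like this needs only a few additions and multiplications per sample, which is why such workloads fit comfortably on small microcontrollers.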
The touchscreens we use at supermarkets and ATMs were accidentally invented by a group of atomic physicists back in 1970, though the concept of touchscreens can be traced back to the 1940s, even before science fiction writers warmed to the idea. Today, the use of touchscreens is bounded only by the creativity of users: you can pinch, zoom, type and literally move the world with your fingers. Still, most users have experienced typos, unwanted clicks and other mishits that couldn't be undone.
This week brought good sales on Apple and Amazon devices, as well as some intriguing gaming deals. The Apple Watch Series 5 dropped to $299 again after WWDC kicked off earlier this week and Amazon still has some of its Echo speakers on sale (including the handy Echo Dot with clock). You can grab some extra storage for your Nintendo Switch for less at Newegg and Steam's Summer Sale has just begun. Here are the best deals we found this week that you can still get today. The latest Apple Watch has dropped to its lowest price ever again at Amazon and Walmart.
Honeywell, a company best known for making control systems for homes, businesses and planes, claims to have built the most powerful quantum computer ever. Other researchers are sceptical about its power, but for the company it is a step toward integrating quantum computing into its everyday operations. Honeywell measured its computer's capabilities using a metric invented by IBM called quantum volume. It takes into account the number of quantum bits – or qubits – the computer has, their error rate, how long the system can spend calculating before the qubits stop working and a few other key properties. Measuring quantum volume involves running about 220 different algorithms on the computer, says Tony Uttley, the president of Honeywell Quantum Solutions.
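The quantum volume metric described above is often summarized as follows: log2 of the quantum volume is the largest n for which the machine can reliably run "square" circuits that are n qubits wide and n layers deep. A toy sketch of that summary (the achievable-depth data is invented for illustration; real benchmarking, as the article notes, involves running hundreds of randomized circuits):

```python
def quantum_volume(achievable_depth):
    """achievable_depth[m] = deepest circuit the machine runs reliably
    at width m qubits. Quantum volume is 2**n for the largest n with a
    reliable depth of at least n at width n (a simplified reading of
    IBM's metric, for illustration only)."""
    best = 0
    for width, depth in achievable_depth.items():
        # The usable "square" size at this width is limited by
        # whichever is smaller: the width or the reliable depth.
        best = max(best, min(width, depth))
    return 2 ** best

# Invented example: reliable depth falls off as width grows
machine = {2: 40, 4: 12, 6: 6, 8: 3}
print(quantum_volume(machine))  # 2**6 = 64
```

The exponential in the definition is why small differences in qubit count and error rate translate into large differences in reported quantum volume.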
Apple's Worldwide Developers Conference, which kicked off with a keynote by CEO Tim Cook on Monday, had a different vibe. The keynote, which typically serves as a venue for Apple to highlight its latest iOS operating system, was done online only, due to crowd restrictions to combat the spread of the coronavirus. Cook & Co. unveiled new software updates for iPhones, iPads, Apple Watches and Mac computers – public betas for those will begin in July with final software versions available this fall, Cook said. The Apple CEO also announced that Apple would begin making its own processors for Macs. The move from Intel chips will make for "a huge leap forward" for Mac computers, he said.
From the simple embedded processor in your washing machine to powerful processors in data center servers, most computing today takes place on general-purpose programmable processors or CPUs. CPUs are attractive because they are easy to program and because large code bases exist for them. The programmability of CPUs stems from their execution of sequences of simple instructions, such as ADD or BRANCH; however, the energy required to fetch and interpret an instruction is 10x to 4000x more than that required to perform a simple operation such as ADD. This high overhead was acceptable when processor performance and efficiency were scaling according to Moore's Law [32]. One could simply wait and an existing application would run faster and more efficiently. Our economy has become dependent on these increases in computing performance and efficiency to enable new features and new applications. Today, Moore's Law has largely ended [12], and we must look to alternative architectures with lower overhead, such as domain-specific accelerators, to continue scaling of performance and efficiency. There are several ways to realize domain-specific accelerators, as discussed in the sidebar on accelerator options. A domain-specific accelerator is a hardware computing engine that is specialized for a particular domain of applications. Accelerators have been designed for graphics [26], deep learning [16], simulation [2], bioinformatics [49], image processing [38], and many other tasks. Accelerators can offer orders of magnitude improvements in performance/cost and performance/W compared to general-purpose computers. For example, our bioinformatics accelerator, Darwin [49], is up to 15,000x faster than a CPU at reference-based, long-read assembly. The performance and efficiency of accelerators are due to a combination of specialized operations, parallelism, efficient memory systems, and reduction of overhead.
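The instruction-overhead argument above can be made concrete with a back-of-the-envelope calculation. The energy figures below are illustrative assumptions in the 10x-4000x range the text cites, not measured values; the point is simply that an accelerator amortizes fetch/decode overhead across many useful operations:

```python
def effective_energy_per_op(op_energy_pj, overhead_ratio, ops_per_instruction=1):
    """Energy per useful operation when each instruction carries
    fetch/decode overhead. Amortizing that overhead over many
    operations per instruction models how an accelerator (or a wide
    vector unit) recovers efficiency."""
    overhead_pj = op_energy_pj * overhead_ratio
    return op_energy_pj + overhead_pj / ops_per_instruction

# Illustrative: a 1 pJ ADD with 100x fetch/decode overhead
cpu = effective_energy_per_op(1.0, overhead_ratio=100, ops_per_instruction=1)
accel = effective_energy_per_op(1.0, overhead_ratio=100, ops_per_instruction=1000)
print(cpu, accel, cpu / accel)  # 101.0 pJ vs 1.1 pJ: ~92x more efficient
```

Even under these toy numbers, nearly two orders of magnitude of energy efficiency come purely from eliminating per-operation instruction overhead, before counting specialized datapaths or better memory systems.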
Domain-specific accelerators [7] are becoming more pervasive and more visible, because they are one of the few remaining ways to continue to improve performance and efficiency now that Moore's Law has ended [22]. Most applications require modifications to achieve high speedup on domain-specific accelerators, because they are highly tuned to balance the performance of conventional processors with their memory systems.
For years, LG was the only TV manufacturer making OLED models. While Sony has since jumped into the fray, LG continues to lead the pack when it comes to producing the widest array of OLED options. The company's CX model is its second-most affordable option in 2020, but despite being much more affordable than a lot of the competition, you're still getting everything that makes OLED great: perfect black levels, vivid emissive colors, and excellent response time for movies and video games. If you're looking for a new TV in 2020 that'll put you right at the front of the pack where the most premium TV tech is concerned, the CX is a fantastic investment. Its HDR performance is stellar, it's a great pickup for avid gamers, and it features a thin, sleek design that's sure to turn heads.
"One moment you have the best technology at hand; the next moment it is obsolete." Technology advances every single day, whether we want to believe it or not: at any given moment some new technology is emerging while another becomes obsolete. It all depends on the moment or situation in which we want technology to play a part, and there is no doubt that technology is making things a little better than yesterday. Alongside AI, AR, VR and IoT, wearable technology trends are helping people monitor and manage their lives more closely. Tech giant Apple's Apple Watch now competes with other brands' smartwatches, measuring heart rate, monitoring blood pressure and counting steps, though a smartwatch is more than that. Beyond smartwatches, other wearable tech is already playing a leading role in keeping people's lives tracked and monitored. Wearable technology is becoming part of everyday life, with shipments expected to reach 614.31 million units in 2025.
Nvidia, Intel and AMD have announced their support for Microsoft's new effort to bring graphics processor support to the Windows 10 Windows Subsystem for Linux to enhance machine-learning training. GPU support for WSL arrived on Wednesday in the Dev Channel preview of Windows 10 build 20150 under Microsoft's reorganized testing structure, which lets it test Windows 10 builds that aren't tied to a specific future feature release. Microsoft announced upcoming GPU support for WSL a few weeks ago at Build 2020, along with support for running Linux GUI apps. The move on GPU access for WSL is intended to bring the performance of applications running in WSL2 up to par with those running on Windows. GPU compute support is the feature most requested by WSL users, according to Microsoft. The 20150 update includes support for Nvidia's CUDA parallel computing platform and GPUs, as well as GPUs from AMD and Intel.
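Before attempting GPU compute from inside WSL, a tool might first check whether it is running under WSL at all. A widely used heuristic is to look for "microsoft" in the kernel version string reported by /proc/version; the sketch below and its sample strings are illustrative, not a Microsoft-documented API:

```python
def looks_like_wsl(version_string):
    """Heuristic: WSL kernels include 'microsoft' in their version
    string (capitalization varies between WSL1 and WSL2)."""
    return "microsoft" in version_string.lower()

def running_under_wsl(path="/proc/version"):
    """Check the live kernel version; returns False on hosts where
    the file does not exist (e.g. Windows or macOS)."""
    try:
        with open(path) as f:
            return looks_like_wsl(f.read())
    except OSError:
        return False

# Example kernel version strings
wsl2 = "Linux version 5.4.72-microsoft-standard-WSL2 (gcc ...)"
bare = "Linux version 5.8.0-generic (buildd@lcy01) (gcc ...)"
print(looks_like_wsl(wsl2), looks_like_wsl(bare))  # True False
```

A check like this lets a script fall back to CPU execution, or print a helpful message, when the expected WSL GPU stack is not present.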