Today saw the release of the second round (version 0.7) of MLPerf Inference benchmark results. Like the latest training results, announced in July, the new inference numbers show growth in the number of submitting companies and in the platforms and workloads supported. The MLPerf inference results are segmented into four categories: Data Center, Edge, Mobile, and Notebook. The number of submissions grew from 43 to 327, and the number of submitting companies rose from just nine to 21, including semiconductor companies, device OEMs, and several test labs.
AMD is acquiring chip designer Xilinx for $35 billion in stock to "significantly" expand the range of products it makes and the customers it reaches, particularly in high-performance computing. As the Wall Street Journal noted, Xilinx's easily customizable FPGA (field-programmable gate array) chips are used in a variety of places AMD wouldn't have even considered before, from 5G systems to the F-35 to self-driving cars. The newly acquired company also specializes in adaptive systems-on-chip, accelerators, and smart networking devices found in data centers, edge computing, and end devices. AMD expects the Xilinx deal to take a while to wrap up: it should close by the end of 2021, the company said.
The global Deep Learning market is expected to rise at an impressive CAGR and generate its highest revenue by 2026, according to a new report from Zion Market Research titled "Global Deep Learning Market 2020 With Top Countries Data, Revenue, Key Developments, SWOT Study, COVID-19 impact Analysis, Growth and Outlook To 2026". The report offers insight into revenues, market share, strategies, growth rates, and product pricing by region and country for all major companies. It also provides a 360-degree overview of the market, covering the factors propelling and restraining it over the forecast period, along with key industry developments, detailed market segmentation, a list of prominent players, and other Deep Learning market trends.
Anil Mankar, head of product development at AI chip startup BrainChip, presented details of the company's technology Tuesday at the prestigious Linley Fall Processor Conference, one of the main meet-and-greet events each year for promising young chip companies, held as a virtual event this week and next. The semiconductor industry is in the midst of a renaissance in chip design and performance improvement, but as the conference made clear, challengers will need a lot more software to catch up with graphics chip titan Nvidia. In his Tuesday-morning keynote, conference host Linley Gwennap, a semiconductor analyst for two decades, argued that software remains the stumbling block for every company that wants to challenge Nvidia's lead in processing artificial intelligence.
The next generation of high-performance, low-power computer systems might be inspired by the brain. However, as designers move away from conventional computer technology towards brain-inspired (neuromorphic) systems, they must also move away from the established formal hierarchy that underpins conventional machines -- that is, the abstract framework that broadly defines how software is processed by a digital computer and converted into operations that run on the machine's hardware. This hierarchy has helped enable the rapid growth in computer performance. Writing in Nature, Zhang et al.1 define a new hierarchy that formalizes the requirements of algorithms and their implementation on a range of neuromorphic systems, thereby laying the foundations for a structured approach to research in which algorithms and hardware for brain-inspired computers can be designed separately. The performance of conventional digital computers has improved over the past 50 years in accordance with Moore's law, which states that technical advances will enable integrated circuits (microchips) to double their resources approximately every 18–24 months.
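As a back-of-the-envelope illustration of that doubling rate (my own arithmetic, not a figure from the article), a chip's resource budget after a given span of years can be computed directly from the doubling period:

```python
# Moore's law as stated above: integrated circuits double their
# resources roughly every 18-24 months. Over a decade, that compounds
# to a growth factor of 2**(120/24) up to 2**(120/18).

def growth_factor(years, doubling_months):
    """Return the resource multiplier after `years` at the given doubling period."""
    return 2 ** (years * 12 / doubling_months)

print(round(growth_factor(10, 24)))     # ~32x over 10 years at a 24-month doubling
print(round(growth_factor(10, 18), 1))  # ~101.6x at an 18-month doubling
```

The spread between those two numbers is why the stated doubling period matters so much over long horizons.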
Managing heat is a critical part of any video game console. Sony has already offered a video teardown of the PlayStation 5, revealing a large fan -- 120mm in diameter and 45mm thick, to be precise -- that can direct air to both sides of the motherboard. What we didn't know is that Sony plans to optimize the component based on its performance during individual games. Sony's Yasuhiro Ootori explained that the console monitors temperature through a sensor inside the APU and three more attached to the main board; the highest reading is then used to set the fan speed.
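The control scheme Ootori describes, taking the hottest of several sensor readings and mapping it to a fan speed, can be sketched roughly as follows. This is a hypothetical illustration only; the sensor names and the speed curve are invented, not Sony's actual firmware:

```python
# Hypothetical sketch of max-of-sensors fan control: read every
# temperature sensor, take the highest value, and look up a fan speed
# from a simple threshold curve. All numbers here are made up.

def fan_speed_rpm(sensor_temps_c, curve=((50, 1000), (70, 2000), (85, 3000))):
    """Return a fan speed (RPM) driven by the hottest sensor reading."""
    hottest = max(sensor_temps_c)      # the highest value determines the speed
    speed = curve[0][1]                # baseline speed below all thresholds
    for threshold, rpm in curve:
        if hottest >= threshold:
            speed = rpm
    return speed

# One APU sensor plus three board sensors, per the teardown description.
readings = {"apu": 72.5, "board_1": 55.0, "board_2": 58.3, "board_3": 61.0}
print(fan_speed_rpm(readings.values()))  # hottest is 72.5 -> 2000
```

Using the maximum rather than an average is the conservative choice: no single hotspot can exceed its limit even if the rest of the board runs cool.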
Machine learning is everywhere you look, affecting many technologies and products that we use on a daily basis. But who are the product managers leading these products? Who is ensuring that the success metrics are set correctly and ethically? Who is responsible for accurate messaging around such products? Let's go back several years and look at the Product Manager's role.
Volumetric 3D displays are neither easy to produce nor common, as holographic imagery generally requires a mix of stereoscopic screen technology and unique optics, sometimes backed by high-speed eye tracking. Today, the display experts at Sony are throwing their hat into the ring with a new option called the ELF-SR1 -- also known as the Spatial Reality Display -- which is initially being targeted at professional users in content creation businesses, but with an eye towards future use in consumer-facing applications. Resembling a traditional computer monitor fixed on a 45-degree recline with a triangular frame, the Spatial Reality Display combines a 15.6-inch screen with a micro optical lens coating and an eye-tracking camera. While the display packs a conventional 4K resolution, the pixels are effectively split into twin 2K arrays for your left and right eyes, using live pupil tracking data and precision alignment of the micro-lenses atop pixels to deliver sharp, realistic 3D imagery. The results are digital 3D objects that appear to be floating right in front of the screen, and switch perspectives smoothly as your head and eyes move.
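The idea of splitting one 4K panel into twin 2K images, one per eye, steered by head tracking, can be illustrated with a toy model. This is purely illustrative geometry I've invented, not Sony's actual micro-lens design:

```python
# Toy model of a lenticular-style split: the panel's columns are
# divided between the viewer's two eyes, and the interleave pattern
# shifts with the tracked head position so each eye keeps seeing its
# own image as the viewer moves. The geometry is invented.

WIDTH = 3840  # 4K panel width; each eye effectively sees a ~2K image

def column_eye(col, head_offset_px):
    """Return which eye a pixel column is steered toward."""
    # Shifting by the tracked head offset re-aims the columns.
    return "left" if (col + head_offset_px) % 2 == 0 else "right"

left_cols = sum(1 for c in range(WIDTH) if column_eye(c, 0) == "left")
print(left_cols)  # half the columns -> 1920, i.e. one 2K image per eye
```

The real display steers light optically with micro-lenses rather than by reassigning columns in software, but the halving of horizontal resolution per eye falls out the same way.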
After years of trying to make Bixby happen, Samsung is coming around and enabling support for better AI helpers like Alexa and Google Assistant. The company announced at CES this year that you can choose to use these assistants on its QLED 8K TVs, but Google Assistant only just became available. With Google Assistant on these devices, you can not only control other smart devices, but also pull up your favorite show or get onscreen answers to your questions. You'll need to connect your TV to the Google Assistant app on your phone by heading to Settings on your big screen and selecting Google Assistant under Voice. You'll also need to give Samsung and Google permission to share information with each other, as well as access to your voice.
But that doesn't mean the technology is entirely useless. Sony's new Spatial Reality Display (or SR Display), for example, uses eye-tracking technology to render believable 3D objects, without the need to wear 3D glasses or put on a VR headset. It's something CG and VR artists could use to preview their work easily. And no, it's not meant for consumers -- not at its $5,000 price, anyway. Sony first previewed the SR Display at CES this year, where it was called its "Eye-Sensing Light Field Display."