Semiconductors & Electronics


How to Pick the Right Pixel 4 and Where to Preorder It

#artificialintelligence

Google's Pixel 4 phones are here. There are two new models to choose from: the Pixel 4 and the larger Pixel 4 XL. If you're trying to decide which one to get and where to buy it, look no further. We've broken down all the preordering options and found the best places to snag a new Pixel 4 before it ships on October 24. If you'd like to see what else Google announced, including other new devices like the Pixel Buds earphones, Pixelbook Go laptop, and Nest Mini speaker with Google Assistant, check out our full coverage of Google's fall hardware event.


ExaNoDe Builds Groundbreaking 3D prototype of Compute Element for Exascale - insideHPC

#artificialintelligence

Today the European ExaNoDe project announced it has built a groundbreaking compute node prototype paving the way to exascale, combining 3DIC and multi-chip-module integration technologies, heterogeneous compute elements with Arm cores and FPGA acceleration, and the UNIMEM memory system, all powered by a high-performance, high-productivity software stack. "Affordability and power consumption are the main hurdles for an exascale-class compute node," said Denis Dutoit, research engineer at CEA-Leti and the coordinator of ExaNoDe. "In the ExaNoDe project, we have built a complete prototype that integrates multiple core technologies: a 3D active interposer with chiplets, Arm cores with FPGA acceleration, a global address space, and a high-performance, productive programming environment, which will enable European technology to satisfy the requirements of exascale HPC." The ExaNoDe prototype is part of the disruptive change required to provide the necessary compute density and power efficiency for an operational exascale machine. Taking as its basis an innovative interposer developed by CEA, ExaNoDe allows the combination of multiple system-on-chip (SoC) chiplets, forming a three-dimensional integrated circuit (3DIC).


Qualcomm Paints Strategic Contrasts in Cloud, AI, Edge - SDxCentral

#artificialintelligence

Qualcomm executives this week attempted to draw contrasts and explain how their approach to cloud computing, artificial intelligence (AI), and mobile edge computing differs from that of other technology companies and operators. "Our focus is different than the rest of the industry," Jim Thompson, CTO and executive vice president at Qualcomm, said during a media gathering at the company's headquarters. AI, an area of technology that Qualcomm has been working on for the better part of a decade, plays a big role in Qualcomm's vision for what it calls the "edge cloud," Thompson explained. "Our focus has been deep neural networks, deep learning for low power, for devices that you would have a battery in and have limited thermal capability," he said, adding that Qualcomm's interest is not essentially in the cloud. "AI is very good at consuming large amounts of data. It comes from the edge of the network, it comes from what people do, it comes from sensors, and all of that is at the edge of the network," he said.
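
As an illustrative aside (not a description of Qualcomm's own stack), one common technique for running deep learning on battery-powered, thermally limited devices is 8-bit weight quantization, which cuts memory traffic and arithmetic cost. A minimal NumPy sketch, with hypothetical helper names:

import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map float weights to [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights, to check quantization error."""
    return q.astype(np.float32) * scale

# Toy layer: a 256x256 float32 weight matrix (~256 KB) shrinks to ~64 KB as int8.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

print("float32 bytes:", w.nbytes, "int8 bytes:", q.nbytes)
print("max abs error:", np.max(np.abs(w - dequantize(q, scale))))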


How to train your Robot's AI - Personal page of Massimiliano Versace

#artificialintelligence

I am the co-founder and CEO of Neurala Inc., a Boston-based company building Artificial Intelligence emulating brain function in software. Neurala's deep learning tech makes robots, drones, cars, consumer electronics, toys and smart devices more useful, engaging and autonomous. Neurala stems out of 10 years of research at the Boston University Neuromorphics Lab, where as AI Professor I have pioneered the research and fielding of brain-inspired (also called Deep Learning, or Artificial Neural Network) algorithms that allow robots and drones to perceive, navigate, interact and learn in real time in complex environments. Over my academic and industrial career, I have lectured and spoken at dozens of events and venues, including TEDx, a keynote at the Mobile World Congress Drone Summit, NASA, the Pentagon, GTC, InterDrone, Los Alamos National Lab, GE, Air Force Research Labs, HP, iRobot, Samsung, LG, Qualcomm, Huawei, Ericsson, BAE Systems, AI World, Mitsubishi, ABB and Accenture, among many others. My work has been featured in TIME, IEEE Spectrum, Fortune, CNBC, The Boston Globe, Xconomy, The Chicago Tribune, TechCrunch, VentureBeat, Nasdaq, the Associated Press and many other media outlets.


How Ray Kurzweil predicted the birth of Google, Deep Blue, and the rise of AI decades in advance - Technology News, Firstpost

#artificialintelligence

"It is difficult to make predictions", goes the old joke, "especially about the future." However, there is one way in which we can predict the rate of change. It is called Moore's Law, named after Gordon Moore the founder of the computer chip company Intel. More than 50 years ago, he observed that the computing power that was available at a fixed price doubled every 18 months or so. This was based on the number of transistors that could be fixed on a chip.


Survival Of The Cheapest?

#artificialintelligence

We all want the best solution to win, but that rarely happens. History is littered with products that were superior to the alternatives and yet lost out to a lesser rival. I am sure several examples are going through your mind without me having to list them. It is normally the first to volume that wins, often accelerated by copious amounts of marketing dollars to help push it against headwinds. The same has been true in many cases within the semiconductor industry.


TechSparks 2019: How India's deep tech ecosystem impacts every sector, from dairy to defence

#artificialintelligence

Deep tech is the newest catchphrase in the Indian startup ecosystem. A bunch of homegrown companies are using new-age technologies like artificial intelligence, machine learning, data analytics, cloud, and the internet of things (IoT) to solve real-world problems and, essentially, alter the way humans lead their daily lives. On Day One of TechSparks 2019, YourStory's flagship annual conference, a panel of founders, investors, and technical heads gathered to take stock of the evolution of the local deep tech startup ecosystem. Swapan Rajdev, Co-Founder and CTO of Haptik (maker of AI chatbots, recently acquired by Reliance Jio), elaborated on how the growth of AI has spurred new jobs and roles. Gone are the days when Indian companies failed to make a mark in hardware.


DFT for AI chips draws a crowd at ITC India tutorial

#artificialintelligence

At the recently concluded ITC India conference, Mentor experts presented the two highest-attended tutorials. One tutorial was AI Chip Technologies and Its DFT Methodologies, presented by Mentor's Yu Huang, Rahul Singhal, and Lee Harrison. Hardware acceleration for Artificial Intelligence (AI) is now a very competitive and rapidly evolving market. There are more than 50 startups and 25 established semiconductor companies all racing to capture a portion of the business. The ITC India tutorial covered the basics of deep learning and gave an overview of how AI chips accelerate deep learning computations. The presenters also covered the critical and special characteristics and the architecture of the most popular AI chips.
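
For readers outside the tutorial, the computation these chips accelerate largely boils down to dense multiply-accumulate (MAC) operations; the sketch below (an illustration added here, not tutorial material) shows the reference triple loop whose innermost MAC step accelerator arrays parallelize in hardware:

import numpy as np

def matmul_mac(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Reference triple loop: one scalar multiply-accumulate per innermost iteration."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(m):
        for j in range(n):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]  # the MAC an AI accelerator parallelizes
    return out

a = np.random.rand(8, 16).astype(np.float32)
b = np.random.rand(16, 4).astype(np.float32)
assert np.allclose(matmul_mac(a, b), a @ b, atol=1e-4)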


Samsung AI Makes the Mona Lisa 'Speak'

#artificialintelligence

Imagine the lips forming the Mona Lisa's famous smile were to part, and she began "speaking" to you. This is not some sci-fi fantasy or a 3D face animation; it's an effect achieved by researchers from the Samsung AI lab and the Skolkovo Institute of Science and Technology, who used adversarial learning to generate a photorealistic talking head model. AI techniques have already been used to generate realistic video of people like former US President Barack Obama and movie star Scarlett Johansson, enabled in large part by the abundance of available visual data on these individuals. The new research, however, shows it is also possible to generate realistic content when source images are scarce. The researchers applied their Few-Shot Adversarial Learning technique to one of the most widely recognized faces in history, known through a single image: Lisa Gherardini, the subject of Leonardo da Vinci's classic 16th-century portrait.
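
The few-shot talking-head system itself is elaborate, but the adversarial principle behind it can be sketched in a toy PyTorch training loop in which a generator learns to fool a discriminator on 1-D data; this is only an illustration of that principle, not the paper's model:

import torch
import torch.nn as nn

# Toy adversarial training: the generator maps noise to samples that should look
# like draws from N(3, 0.5); the discriminator learns to tell real from fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data
    fake = G(torch.randn(64, 8))            # generator output from noise

    # Discriminator update: label real as 1, fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean().item())  # drifts toward 3.0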


ON Semiconductor's digital image sensor enables AI vision systems -- Softei.com

#artificialintelligence

Intelligent vision systems for viewing and artificial intelligence (AI) can be implemented using the low-power 0.3Mpixel image sensor announced by ON Semiconductor. The ARX3A0 digital image sensor has 0.3Mpixel resolution in a 1:1 aspect ratio. It can perform like a global shutter in many conditions, with a capture rate of up to 360 frames per second (fps), yet with the size, performance, and response levels of a back-side illuminated (BSI) rolling shutter sensor, explains ON Semiconductor. Its small size, square format, and high frame rate make it particularly suitable for emerging machine vision, AI, and augmented reality/virtual reality (AR/VR) applications, as well as small supplemental security cameras. To meet the demands of applications that provide still or streaming images, the ARX3A0 is designed to deliver flexible, high-performance image capture with minimal power.