Collaborating Authors


Autonomous Driving, AI System on a Chip, Drug Discovery Firms Among Top Funded - AI Trends


The top-funded companies on the recently released list of the 100 most promising AI companies to watch from CB Insights, a market intelligence company based in New York, include firms offering autonomous driving software, an AI system on a chip, AI-powered endpoint security, and drug discovery. The list, selected from a base of 6,000 companies, is based on business relations, investor profile, news sentiment analysis, R&D activity, a proprietary scoring system, market potential, competitive landscape, team strength, and tech novelty, according to an account in TechRepublic. "This year's cohort spans 18 industries, and is working on everything from climate risk to accelerating drug R&D," stated CB Insights CEO Anand Sanwal. Companies on last year's list went on to raise $5.2 billion in additional financing, including 16 rounds of over $100 million each. Some companies exited via mergers and acquisitions, IPOs, or SPACs.

Podcast 12: Real world tech: Edge AI drives car-making, healthcare and retail - VanillaPlus - The global voice of Telecoms IT


Artificial intelligence (AI) at the edge is changing healthcare, retail and Audi cars, as Intel's IoT Group vice president, John Healy, tells Jeremy Cowan and George Malim. Plus we learn how chipmakers globally are tackling supply problems that have halted vehicle production. The semiconductor industry is facing an "awakening", says Healy, as it shape-shifts to meet "insatiable demand" for silicon. Finally, we hear which African country is a leader in satellite cartography, and how Amazon is playing games with its warehouse staff. Hi, and welcome to the latest Trending Tech Podcast, brought to you by The Evolving Enterprise, IoT Now, and VanillaPlus. This is Jeremy Cowan, and I want to thank you for joining the latest, sometimes serious, sometimes light-hearted look at enterprise digital transformation. I am delighted to welcome two guests today. The first is John Healy from California-based international technology company Intel, known, among other things, for the processors that power so many of our devices. John is vice president of the IoT Group. John, thank you very much for making the time to be here. Good to have you on again, George. Okay, today we'll be looking at some key tech news stories that deserve a bit of a deeper dive.


Communications of the ACM

As a new era in computing emerges, so too must our fundamental thinking patterns.

The microchip shortage, explained: How it's impacting car prices and the tech industry

USATODAY - Tech Top Stories

As the U.S. economy rebounds from its pandemic slump, a vital cog is in short supply: the computer chips that power a wide range of products that connect, transport and entertain us in a world increasingly dependent on technology. The shortage has already been rippling through various markets since last summer. It has made it difficult for schools to buy enough laptops for students forced to learn from home, delayed the release of popular products such as the iPhone 12 and created mad scrambles to find the latest video game consoles such as the PlayStation 5. But things have been getting even worse in recent weeks, particularly in the auto industry, where factories are shutting down because there aren't enough chips to finish building vehicles that are starting to look like computers on wheels. The problem was recently compounded by a grounded container ship that blocked the Suez Canal for nearly a week, choking off chips headed from Asia to Europe.

Why Chip Companies Are Important to the Future of Tech


Over the past 12 months, the Philadelphia Semiconductor Index rallied about 90% as global demand for chips surged through the pandemic. Stay-at-home trends boosted sales of new PCs, data centers installed more chips to keep pace with the surging usage of cloud and AI services, and new technologies -- including driverless cars, automated factories, and 5G networks and devices -- gobbled up more chips. That demand propelled the price of many leading chip stocks, including Qualcomm, Advanced Micro Devices (NASDAQ:AMD), and NVIDIA (NASDAQ:NVDA), to historic highs. Taiwan Semiconductor Manufacturing (NYSE:TSM), the world's largest contract chipmaker, also benefited from those surging orders. The global semiconductor market could still expand at a compound annual growth rate of 10% from 2021 to 2026, according to research firm EMR, as companies across a wide range of industries purchase more chips.
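As a quick sanity check on what a 10% compound annual growth rate implies over that span, the arithmetic can be sketched as follows (the calculation is illustrative and not EMR's own figures):

```python
# Illustrative only: total market growth implied by a 10% CAGR from 2021 to 2026
start_year, end_year = 2021, 2026
cagr = 0.10

total_growth = (1 + cagr) ** (end_year - start_year)
print(f"{total_growth:.3f}x")  # → 1.611x, i.e. roughly 61% larger by 2026
```

In other words, five years of 10% compounding grows the market by about 61%, not 50%, because each year's growth builds on the last.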

Comparing the Value of Labeled and Unlabeled Data in Method-of-Moments Latent Variable Estimation Machine Learning

Labeling data for modern machine learning is expensive and time-consuming. Latent variable models can be used to infer labels from weaker, easier-to-acquire sources operating on unlabeled data. Such models can also be trained using labeled data, presenting a key question: should a user invest in few labeled or many unlabeled points? We answer this via a framework centered on model misspecification in method-of-moments latent variable estimation. Our core result is a bias-variance decomposition of the generalization error, which shows that the unlabeled-only approach incurs additional bias under misspecification. We then introduce a correction that provably removes this bias in certain cases. We apply our decomposition framework to three scenarios -- well-specified, misspecified, and corrected models -- to 1) choose between labeled and unlabeled data and 2) learn from their combination. We observe theoretically and with synthetic experiments that for well-specified models, labeled points are worth a constant factor more than unlabeled points. With misspecification, however, their relative value is higher due to the additional bias but can be reduced with correction. We also apply our approach to study real-world weak supervision techniques for dataset construction.
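As a toy illustration of the misspecification bias the abstract describes (this sketch is not the paper's estimator; the class prior, flip rates, and sample size are invented), consider estimating a class prior either directly from labeled points or by inverting the first moment of a noisy weak source whose assumed error rate is wrong:

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.3          # true positive rate we want to estimate
flip_true = 0.2       # the weak source's actual label-flip probability
flip_assumed = 0.1    # the model's (misspecified) assumption about that rate

n = 200_000
y = rng.random(n) < p_true            # latent true labels
flips = rng.random(n) < flip_true
weak = np.where(flips, ~y, y)         # weak labels: flipped with prob flip_true

# Labeled estimator: average the true labels directly (unbiased)
p_labeled = y.mean()

# Unlabeled method-of-moments estimator: invert E[weak] = p(1-q) + (1-p)q
q = flip_assumed
p_unlabeled = (weak.mean() - q) / (1 - 2 * q)

print(round(p_labeled, 3))    # close to the true 0.3
print(round(p_unlabeled, 3))  # biased (near 0.35) because q is misspecified
```

With the assumed flip rate wrong, the moment inversion converges to the wrong value no matter how many unlabeled points are used, which is exactly the extra bias term that makes labeled points worth more under misspecification.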

A Case for 3D Integrated System Design for Neuromorphic Computing & AI Applications Artificial Intelligence

Over the last decade, artificial intelligence has found many application areas in society. As AI solutions have become more sophisticated and their use cases have grown, they have highlighted the need to address the performance and energy efficiency challenges faced during implementation. To address these challenges, there has been growing interest in neuromorphic chips. Neuromorphic computing relies on non-von Neumann architectures as well as novel devices, circuits, and manufacturing technologies to mimic the human brain. Among such technologies, 3D integration is an important enabler for AI hardware and the continuation of the scaling laws. In this paper, we overview the unique opportunities 3D integration provides in neuromorphic chip design, discuss the emerging opportunities in next-generation neuromorphic architectures, and review the obstacles. Neuromorphic architectures, which rely on the brain for inspiration and emulation, face grand challenges due to our limited understanding of the functionality and architecture of the human brain. Yet, high levels of investment are dedicated to developing neuromorphic chips. We argue that 3D integration not only provides strategic advantages for the cost-effective and flexible design of neuromorphic chips, but may also provide design flexibility in incorporating advanced capabilities that further benefit future designs.

Deep Learning-based Compressive Beam Alignment in mmWave Vehicular Systems Artificial Intelligence

Millimeter wave vehicular channels exhibit structure that can be exploited for beam alignment with fewer channel measurements than exhaustive beam search. With fixed layouts of roadside buildings and regular vehicle trajectories, the dominant path directions of channels will likely fall within a subset of beam directions rather than being distributed randomly over the whole beamspace. In this paper, we propose a deep learning-based technique to design a structured compressed sensing (CS) matrix that is well suited to the underlying channel distribution for mmWave vehicular beam alignment. The proposed approach leverages both sparsity and the particular spatial structure that appears in vehicular channels. We model the compressive channel acquisition with a two-dimensional (2D) convolutional layer followed by dropout. We incorporate the low-resolution phase shifter constraint during neural network training by using projected gradient descent for weight updates. Furthermore, we exploit channel spectral structure to optimize the power allocated to different subcarriers. Simulations indicate that our deep learning-based approach achieves better beam alignment than standard CS techniques that use random phase shift-based designs. Numerical experiments also show that a single subcarrier is sufficient to provide the necessary information for beam alignment. Millimeter-wave (mmWave) vehicular communication enables massive sensor data sharing and various emerging applications related to safety, traffic efficiency and infotainment [2]-[4]. Yuyang Wang is with Apple Inc., One Apple Park Way, Cupertino, CA, 95014, USA, email: Nitin Jonathan Myers is with Samsung Semiconductor Inc., 5465 Morehouse Dr, San Diego, CA 92121 USA, email: Nuria González-Prelcic, and Robert W. Heath Jr.
are with the Department of Electrical and Computer Engineering, North Carolina State University, 890 Oval Dr, Raleigh, NC 27606 USA, email: {ngprelcic, rwheathjr} Part of this work has been presented at IEEE ICASSP 2020 [1]. This material is based upon work supported in part by the National Science Foundation under Grant No. ECCS-1711702, and by a Qualcomm Faculty Award.
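The low-resolution phase shifter constraint mentioned in the abstract amounts to projecting each complex weight onto a unit-modulus value with a phase drawn from a coarse grid after every unconstrained gradient step. A minimal sketch of that projection step (the 4x4 matrix size and 2-bit resolution are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def project_to_phase_shifters(W, bits=2):
    """Project complex weights onto unit-modulus entries whose phases lie on
    a uniform 2**bits-point grid, modeling low-resolution phase shifters."""
    step = 2 * np.pi / (2 ** bits)
    quantized_phase = np.round(np.angle(W) / step) * step
    return np.exp(1j * quantized_phase)

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
W_proj = project_to_phase_shifters(W, bits=2)
# After projection every entry has |w| = 1 and a phase in {0, pi/2, pi, 3pi/2}
```

In projected gradient descent, such a projection would be applied to the measurement-matrix weights after each update, keeping the learned sensing matrix implementable with coarse analog phase shifters.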

Senior Compiler Software Engineer - Machine Learning in San Jose, CA - Xilinx


At Xilinx, we are leading the industry transformation to build an adaptable, intelligent world. ARE YOU bold, collaborative, and creative? We develop leaders and innovators who want to revolutionize the world of technology. We believe that by embracing diverse ideas, pushing boundaries, and working together as ONEXILINX, anything is possible. Our culture of innovation began with the invention of the Field Programmable Gate Array (FPGA) and, with the 2018 introduction of our Adaptive Compute Acceleration Platform (ACAP), took a quantum leap in capability, solidifying our role as the adaptable platform supplier of choice.

The Decline of Computers as a General Purpose Technology

Communications of the ACM

Perhaps in no other technology have there been so many decades of large year-over-year improvements as in computing. It is estimated that a third of all productivity increases in the U.S. since 1974 have come from information technology [4], making it one of the largest contributors to national prosperity. The rise of computers is due to technical successes, but also to the economic forces that financed them. Bresnahan and Trajtenberg [3] coined the term general purpose technology (GPT) for products, like computers, that have broad technical applicability and where product improvement and market growth can fuel each other for many decades. But they also predicted that GPTs could run into challenges at the end of their life cycle: as progress slows, other technologies can displace the GPT in particular niches and undermine this economically reinforcing cycle. We are observing such a transition today as improvements in central processing units (CPUs) slow, and applications move to specialized processors, for example, graphics processing units (GPUs), which can do fewer things than traditional universal processors but perform those functions better. Many high-profile applications are already following this trend, including deep learning (a form of machine learning) and Bitcoin mining. With this background, we can now be more precise about our thesis: "The Decline of Computers as a General Purpose Technology." We do not mean that computers, taken together, will lose technical abilities and thus 'forget' how to do some calculations.