Machine Learning: AI-Alerts


Tech Companies Are Training AI to Read Your Lips

#artificialintelligence

The task is incredibly challenging--even expert human lip readers are actually pretty poor at word-for-word interpretation. In 2018, Google subsidiary DeepMind published research unveiling its latest full-sentence lip-reading system. The AI achieved a word error rate (the percent of words it got wrong) of 41 percent on videos containing full sentences. Human lip readers viewing a similar sample of video-only clips had word error rates of 93 percent when given no context about the subject matter and 86 percent when given the video's title, subject category, and several words in the sentence. That study was conducted using a large, custom-curated dataset.
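The excerpt's gloss of word error rate is a simplification: in practice WER is the word-level edit distance (substitutions, insertions, and deletions) between the system's transcript and the reference, divided by the number of reference words, so it can even exceed 100 percent. A minimal sketch of that calculation, using made-up sentences rather than anything from the DeepMind study:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical sentences, purely for illustration:
print(word_error_rate("place the blue block on the red block",
                      "place the blue block under a red block"))  # 0.25
```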


This High Schooler Created a Drug Discovery Search Engine

#artificialintelligence

Between his mom's place in Manhattan, his dad in Queens, and his high school in the Bronx, Noah Getz is on the subway a lot. It gives him time to read and to think. Our first coronavirus summer was waning, and he'd been wrestling with a weighty science problem: using machine learning to hunt down tiny molecules that may help treat Alzheimer's. Thus far, his AI had been spitting out results that were "almost comically bad." The problem was that the algorithms Getz was using performed best when they had massive amounts of data to sift through for patterns. Getz's data set was far smaller; he was working with one lab at Mount Sinai, not a multinational pharmaceutical company with a galaxy-sized drug library.


Global Artificial Intelligence and Machine Learning Market Analysis, Size, Share, Growth, Trends and Forecast to 2026

#artificialintelligence

The Artificial Intelligence and Machine Learning Market research report is an in-depth analysis of the latest developments, market size, status, upcoming technologies, industry drivers, challenges, and regulatory policies, with key company profiles and strategies of players. The research study provides a market overview, a definition of the Artificial Intelligence and Machine Learning market, regional market opportunity, sales and revenue by region, manufacturing cost analysis, industrial chain analysis, market effect factors, a market size forecast, and market data graphs, statistics, tables, and bar and pie charts for business intelligence. The up-to-date report presents an in-depth evaluation of the crucial factors, such as key growth drivers, impediments, and opportunities, needed to understand the industry's behavior. It also considers the competitive landscape with regard to the top firms, emerging contenders, and new entrants. Finally, the document sheds light on the effects of the COVID-19 pandemic on this marketplace and puts forth strategies for effective risk management and strong profits in the coming years.


Machine Learning at the Edge: TinyML Is Getting Big

#artificialintelligence

Is it $61 billion and a 38.4% compound annual growth rate (CAGR) by 2028, or $43 billion and a 37.4% CAGR by 2027? That depends on which report on the growth of edge computing you go by, but in the end the difference is not that large. What matters is that edge computing is booming. There is growing interest from vendors, and ample coverage, for good reason. Although the definition of what constitutes edge computing is a bit fuzzy, the idea is simple.
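For readers who want to sanity-check such projections, compound annual growth is just repeated multiplication: a market of size V growing at rate r for n years reaches V(1 + r)^n. A tiny sketch with a purely hypothetical base-year value, not a figure from either report:

```python
def project(base_value_billion: float, cagr: float, years: int) -> float:
    """Project a market size forward under compound annual growth (CAGR)."""
    return base_value_billion * (1.0 + cagr) ** years

# Hypothetical $5B base in 2020, purely for illustration.
print(round(project(5.0, 0.384, 8), 1))  # implied 2028 size at 38.4% CAGR
print(round(project(5.0, 0.374, 7), 1))  # implied 2027 size at 37.4% CAGR
```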


AI system outperforms humans in designing floorplans for microchips

#artificialintelligence

Success or failure in designing microchips depends heavily on steps known as floorplanning and placement. These steps determine where memory and logic elements are located on a chip. The locations, in turn, strongly affect whether the completed chip design can satisfy operational requirements such as processing speed and power efficiency. So far, the floorplanning task, in particular, has defied all attempts at automation. It is therefore performed iteratively and painstakingly, over weeks or months, by expert human engineers.
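The article does not detail the automated method, but one way to see why placement is an optimization problem is through a standard proxy objective such as half-perimeter wirelength (HPWL): shorter wires tend to mean faster, lower-power chips. A toy sketch with hypothetical block names and coordinates, not the system described here:

```python
# Score a toy chip placement by half-perimeter wirelength (HPWL), a common
# proxy objective in placement tools. Lower totals roughly track better
# timing and power.
from typing import Dict, List, Tuple

def hpwl(placement: Dict[str, Tuple[float, float]], nets: List[List[str]]) -> float:
    """Sum, over all nets, of the half perimeter of the bounding box
    around the blocks the net connects."""
    total = 0.0
    for net in nets:
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

placement = {"cpu": (0.0, 0.0), "sram": (3.0, 1.0), "dma": (1.0, 4.0)}
nets = [["cpu", "sram"], ["cpu", "dma"], ["sram", "dma"]]
print(hpwl(placement, nets))  # 4.0 + 5.0 + 5.0 = 14.0
```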


Machine learning speeds up simulations in material science

#artificialintelligence

Research, development, and production of novel materials depend heavily on the availability of fast yet accurate simulation methods. Machine learning, in which artificial intelligence (AI) autonomously acquires and applies new knowledge, will soon enable researchers to develop complex material systems in a purely virtual environment. How does this work, and which applications will benefit? In an article published in Nature Materials, a researcher from Karlsruhe Institute of Technology (KIT) and his colleagues from Göttingen and Toronto explain it all. Digitization and virtualization are becoming increasingly important in a wide range of scientific disciplines.
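The general recipe behind such speedups is the surrogate model: run the expensive, accurate simulation a limited number of times, fit a machine-learning model to its outputs, and then query the cheap model in place of the simulation. A minimal, hypothetical sketch of that idea (not the KIT group's method), using scikit-learn:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Stand-in for a slow, accurate physics code (e.g. an energy calculation)."""
    return np.sin(3.0 * x) + 0.5 * x**2

# A handful of costly reference calculations ...
x_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_simulation(x_train).ravel()

# ... fit a surrogate once ...
surrogate = GaussianProcessRegressor().fit(x_train, y_train)

# ... and query it cheaply for many new configurations.
x_new = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
y_fast = surrogate.predict(x_new)
print(y_fast[:3])
```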


Ex-Google boss slams transparency rules in Europe's AI bill

#artificialintelligence

Eric Schmidt, who leads a U.S. government initiative to integrate AI into national security, warned Monday that the EU's AI transparency requirements would be "very harmful to Europe." Speaking at POLITICO's AI Summit, Schmidt criticized the provisions of the EU's AI bill that require algorithms to be transparent. "It's just a proposal, but if you would adopt it without modification, it would be a very big setback for Europe," said Schmidt, who chairs the National Security Commission on Artificial Intelligence (NSCAI) and is a former CEO of Google. The EU's proposal "requires that the system would be able to explain itself. But machine learning systems cannot fully explain how they make their decisions," Schmidt said.


Technical Perspective: A Chiplet Prototype System for Deep Learning Inference

Communications of the ACM

The following paper, "Simba: Scaling Deep-Learning Inference with Chiplet-Based Architecture," by Shao et al. presents a scalable deep learning accelerator architecture that tackles issues ranging from chip integration technology to workload partitioning and non-uniform latency effects on deep neural network performance. Through a hardware prototype, they present a timely study of cross-layer issues that will inform next-generation deep learning hardware, software, and neural network architectures. Chip vendors face significant challenges with the continued slowing of Moore's Law causing the time between new technology nodes to increase, sky-rocketing manufacturing costs for silicon, and the end of Dennard scaling. In the absence of device scaling, domain specialization provides an opportunity for architects to deliver more performance and greater energy efficiency. However, domain specialization is an expensive proposition for chip manufacturers.
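To make the workload-partitioning and latency point concrete, consider a toy model (not Simba's actual scheduler): work from one layer is spread across chiplets, some of which pay extra inter-chiplet communication latency, and the layer finishes only when the slowest chiplet does. Names and numbers here are hypothetical.

```python
from typing import List

def completion_time(rows: int, chiplet_latency_per_row: List[float]) -> float:
    """Greedily assign each row of work to the chiplet that would finish it
    soonest, then return the time at which the slowest chiplet finishes."""
    finish = [0.0] * len(chiplet_latency_per_row)
    for _ in range(rows):
        # Pick the chiplet whose finish time after taking this row is smallest.
        i = min(range(len(finish)),
                key=lambda k: finish[k] + chiplet_latency_per_row[k])
        finish[i] += chiplet_latency_per_row[i]
    return max(finish)

# Four chiplets; the two "far" chiplets pay extra communication latency per row.
print(completion_time(64, [1.0, 1.0, 1.3, 1.3]))
```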


A Vision to Compute like Nature

Communications of the ACM

Classical computing using digital symbols--equivalent to a Turing Machine--is reaching its limits. It is undeniable that computing's historic exponential performance increases have improved the human condition. Yet such increases are a thing of the past, due in large part to the constraints of physics and how today's systems are constructed. Hardware device designers struggle to eliminate the effects of nanometer-scale thermodynamic fluctuations, and the soaring cost of fabrication plants has eliminated all but a few companies as a source of future chips. Software developers' ability to imagine and program effective computational abstractions and implementations is clearly challenged in complex domains like economic systems, ecological systems, medicine, social systems, warfare, and autonomous vehicles.


Technical Perspective: Race Logic Presents a Novel Form of Encoding

Communications of the ACM

Moore's Law and Dennard scaling are waning. Yet the demand for computer systems with ever-increasing computational capability and power/energy efficiency continues unabated, fueled by advances in big data and machine learning. The future of fields as disparate as data analytics, robotics, vision, and natural language processing rests on the continued scaling of system performance per watt, even as traditional CMOS scaling ends. The following paper proposes a surprising, novel, and creative approach to post-Moore's Law computing by rethinking the digital/analog boundary. The central idea is to revisit data representation and show that it is a critical design choice that cuts across hardware and software layers.
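Race logic, the representation the paper builds on, encodes a value in when a signal edge arrives rather than in a binary word, which makes some operations almost free: the earlier of two edges (an OR gate) yields a minimum, the later (an AND gate) a maximum, and a fixed delay element adds a constant. A toy software model of that encoding, offered as a conceptual sketch rather than the paper's circuits:

```python
def min_gate(a: float, b: float) -> float:
    """First arrival wins: an OR gate in race logic computes min(a, b)."""
    return min(a, b)

def max_gate(a: float, b: float) -> float:
    """Last arrival wins: an AND gate in race logic computes max(a, b)."""
    return max(a, b)

def add_const(a: float, delay: float) -> float:
    """A fixed delay element adds a constant to the encoded value."""
    return a + delay

# Example: the values 2 and 5 encoded as signal arrival times.
print(min_gate(2.0, 5.0))   # 2.0
print(max_gate(2.0, 5.0))   # 5.0
print(add_const(2.0, 3.0))  # 5.0
```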