
Collaborating Authors: Lattner


A Former Apple Luminary Sets Out to Create the Ultimate GPU Software

WIRED

Demand for AI chips is booming, and so is the need for software to run them. Chris Lattner's startup Modular just raised $250 million to build the best developer tools for AI hardware. At a certain point between building Apple's developer tools, leading a core part of Google's AI infrastructure team, and clashing with Elon Musk during a stint as Tesla's Autopilot chief, Chris Lattner's vision for his life's work started to come into focus. AI was taking over the world, and demand was growing for the chips that powered it. But the software stack for those chips was dominated by just a few big companies.


Estimating Musical Surprisal from Audio in Autoregressive Diffusion Model Noise Spaces

Bjare, Mathias Rose, Lattner, Stefan, Widmer, Gerhard

arXiv.org Artificial Intelligence

Recently, the information content (IC) of predictions from a Generative Infinite-Vocabulary Transformer (GIVT) has been used to model musical expectancy and surprisal in audio. We investigate the effectiveness of such modelling using IC calculated with autoregressive diffusion models (ADMs). We empirically show that IC estimates of models based on two different diffusion ordinary differential equations (ODEs) describe diverse data better, in terms of negative log-likelihood, than a GIVT. We evaluate the effectiveness of diffusion-model IC in capturing aspects of surprisal by examining two tasks: (1) capturing monophonic pitch surprisal, and (2) detecting segment boundaries in multi-track audio. In both tasks, the diffusion models match or exceed the performance of a GIVT. We hypothesize that the surprisal estimated at different diffusion process noise levels corresponds to the surprisal of music and audio features present at different audio granularities. Testing our hypothesis, we find that, for appropriate noise levels, results on the studied musical surprisal tasks improve. Code is provided on github.com/SonyCSLParis/audioic.
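The information content (surprisal) at the core of both the GIVT and ADM approaches is simply the negative log-probability a model assigns to each observed event. A minimal numpy sketch with a toy categorical predictive model (all names and values here are illustrative, not the paper's setup):

```python
import numpy as np

def information_content(probs, events):
    """IC (surprisal) of observed events under a model's predictive
    distributions: IC(x_t) = -log2 p(x_t | context), in bits."""
    p = probs[np.arange(len(events)), events]
    return -np.log2(p)

# Toy predictive model: at each step it outputs a categorical
# distribution over a 4-symbol vocabulary.
probs = np.array([
    [0.7, 0.1, 0.1, 0.1],      # step 0: symbol 0 is expected
    [0.1, 0.7, 0.1, 0.1],      # step 1: symbol 1 is expected
    [0.25, 0.25, 0.25, 0.25],  # step 2: maximally uncertain
])
events = np.array([0, 3, 2])   # step 1 observes a surprising symbol

ic = information_content(probs, events)
# The improbable event (p = 0.1) yields higher surprisal than the
# probable one (p = 0.7); the uniform step gives exactly 2 bits.
```

In the diffusion setting, the same quantity is estimated from the model's likelihood at a chosen noise level rather than from explicit categorical outputs.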


Revamping Python for an AI World

Communications of the ACM

Python is one of the most popular programming languages in existence. Easy to learn and easy to use, it has been around for years, so there is a large community of Python developers to support each other, and it has built up an ecosystem of libraries that allow users to drop in the functionalities they need. It does, however, come with downsides: its programs tend to run slowly, and because it is inefficient at running processes in parallel, it is not well suited to some of the latest artificial intelligence (AI) programming. Hoping to overcome those difficulties, computer scientist Chris Lattner set out to create a new language, Mojo, which offers the ease of use of Python, but the performance of more complex languages such as C or Rust. He teamed up with Tim Davis, whom he had met when they both worked for Google, to form Modular in January 2022.


SampleMatch: A model that automatically retrieves matching drum samples for musical tracks

#artificialintelligence

Machine learning-based computational models have been successfully applied to a broad range of complex information processing tasks, including those that involve retrieving specific data items from large archives. Researchers at the Sony Computer Science Laboratories (CSL) in France have been trying to develop machine learning techniques that could help music producers to easily identify and retrieve specific audio samples from a database. To this end, Stefan Lattner, a researcher at Sony CSL, recently introduced SampleMatch, a machine learning-based model that can automatically retrieve drum samples that match a specific music track from large archives. His model is set to be presented in December at the ISMIR 2022 conference, a leading event that focuses on music information retrieval. "Our music team at Sony CSL is working on AI that could make the life of music producers easier," Stefan Lattner, one of the researchers who carried out the study, told TechXplore.


Modular closes $30 million seed round to simplify the process of developing AI systems – TechCrunch

#artificialintelligence

But if you ask the co-founders of Modular, a startup emerging from stealth today, the software used to develop AI is "monolithic," fractured into silos piled with layers of complexity. Big Tech companies have made helpful contributions, like TensorFlow and PyTorch -- AI development frameworks maintained by Google and Facebook, respectively. Modular aims to change that. Founded by former Apple and Google engineers and execs, the company today closed a large ($30 million) seed round led by GV (formerly Google Ventures), with participation from Greylock, The Factory and SV Angel, to realize its vision of a streamlined, platform-agnostic AI development platform. "The industry is struggling to maintain and scale fragmented, custom toolchains that differ across research and production, training and deployment, server and edge," Modular CEO Chris Lattner told TechCrunch in an email interview.


Lattner

AAAI Conferences

An important aspect of music perception in humans is the ability to segment streams of musical events into structural units such as motifs and phrases. A promising approach to the computational modeling of music segmentation employs the statistical and information-theoretic properties of musical data, based on the hypothesis that these properties can (at least partly) account for music segmentation in humans. Prior work has shown that in particular the information content of music events, as estimated from a generative probabilistic model of those events, is a good indicator for segment boundaries. In this paper we demonstrate that, remarkably, a substantial increase in segmentation accuracy can be obtained by not using information content estimates directly, but rather in a bootstrapping fashion. More specifically, we use information content estimates computed from a generative model of the data as a target for a feed-forward neural network that is trained to estimate the information content directly from the data. We hypothesize that the improved segmentation accuracy of this bootstrapping approach may be evidence that the generative model provides noisy estimates of the information content, which are smoothed by the feed-forward neural network, yielding more accurate information content estimates.
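The bootstrapping recipe above can be sketched in a few lines: use a simple generative model to produce per-event IC targets, then train a small feed-forward network to predict those targets directly from the data. In this toy numpy sketch a bigram model stands in for the paper's generative model and a hand-rolled one-hidden-layer network stands in for the feed-forward estimator; all names, sizes, and hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy "generative model": smoothed bigram counts over a symbol
# sequence. Its per-event surprisal values serve as (noisy) IC targets.
seq = rng.integers(0, 8, size=2000)
counts = np.ones((8, 8))                 # Laplace smoothing
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1
bigram = counts / counts.sum(axis=1, keepdims=True)
ic_target = -np.log2(bigram[seq[:-1], seq[1:]])

# --- Feed-forward "student": one hidden layer, trained to map the raw
# data (one-hot previous symbol) to the IC estimate directly.
X = np.eye(8)[seq[:-1]]                  # features: previous symbol
y = ic_target
W1 = rng.normal(0, 0.1, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));  b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

losses = []
for _ in range(300):                     # plain gradient descent on MSE
    h, pred = forward(X)
    err = pred - y
    losses.append(float((err ** 2).mean()))
    gp = 2 * err[:, None] / len(y)       # dLoss/dpred
    gW2 = h.T @ gp; gb2 = gp.sum(0)
    gh = gp @ W2.T * (1 - h ** 2)        # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    lr = 0.5
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

_, ic_smooth = forward(X)
# Segment boundaries would then be picked at peaks of ic_smooth.
```

The network's output is a smoothed IC curve over the sequence; in the paper's framing, peaks of that curve indicate candidate segment boundaries.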


Tesla's Autopilot Hit With More Turmoil as Leader Departs for Intel

WSJ.com: WSJD - Technology

Mr. Keller joined Tesla in 2016 from chip maker Advanced Micro Devices Inc. to serve as vice president of Autopilot hardware. He assumed control of Autopilot software as well last June, following the departure of Chris Lattner, who left only six months after Tesla hired him away from Apple Inc. "Prior to joining Tesla, Jim's core passion was microprocessor engineering and he's now joining a company where he'll be able to once again focus on this exclusively," Tesla said. Electrek, a blog that closely follows Tesla, earlier reported Mr. Keller's departure. Tesla has had several departures from its senior ranks since the start of last year, including its top sales executive and chief financial officer. The company is struggling to ramp up production of the Model 3, a sedan that is supposed to help make the electric-car producer more mainstream.


Google hires a legendary Apple engineer to tackle AI

Engadget

Legendary programmer Chris Lattner has had a roller coaster of a year. He left Apple (where he developed the Swift programming language) to help build Tesla's Autopilot technology, only to leave months later after realizing that he wasn't a good fit. However, Lattner might be settling down. He just announced that he's joining Google (namely, the Brain team) to make AI "accessible to everyone." While Lattner doesn't specify exactly what he'll be doing, Bloomberg sources say he'll be working on TensorFlow, the framework Google uses to simplify AI programming.


Google's self-driving car unit nabs senior Tesla engineer

USATODAY - Tech Top Stories

A Chrysler Pacifica hybrid minivan, decked out in Waymo's colors and self-driving technology. SAN FRANCISCO -- Google's autonomous car company, Waymo, has hired Tesla engineer Satish Jeyachandran to lead its hardware team. Jeyachandran had been the director of hardware engineering at Tesla for seven years. At Waymo, he'll work with Google's proprietary LiDAR (light detection and ranging) technology, radar, and camera vision -- hardware that helps self-driving cars to see the road. "I wanted to join Waymo because it has a talented, mission-driven team that has made impressive advancements in self-driving hardware. By bringing both hardware and software development under one roof, the team is laser-focused on bringing its technology to more people," Jeyachandran said in a statement on his LinkedIn page.


Tesla hires AI expert to help lead team in charge of self-driving software

#artificialintelligence

Tesla Inc. has hired a Stanford University computer scientist specializing in artificial intelligence and deep learning to lead its efforts around driverless cars. Andrej Karpathy, previously a research scientist at OpenAI, was named director of AI and Autopilot Vision, reporting directly to Chief Executive Elon Musk, a Tesla spokesperson said. Karpathy is "one of the world's leading experts in computer vision and deep learning," the spokesperson said. He will work closely with Jim Keller, who is responsible for Autopilot hardware and software. Autopilot is Tesla's suite of advanced driver assistance systems, which relies on an onboard Nvidia Corp. "supercomputer" to make sense of data from numerous sensors in and around Tesla vehicles and the company's software.