Deep Learning


No 10 acknowledges 'existential' risk of AI for first time

The Guardian

The "existential" risk of artificial intelligence has been acknowledged by No 10 for the first time, after the prime minister met the heads of the world's leading AI research groups to discuss safety and regulation. Rishi Sunak and Chloe Smith, the secretary of state for science, innovation and technology, met the chief executives of Google DeepMind, OpenAI and Anthropic AI on Wednesday evening and discussed how best to moderate the development of the technology to limit the risks of catastrophe. "They discussed safety measures, voluntary actions that labs are considering to manage the risks, and the possible avenues for international collaboration on AI safety and regulation," the participants said in a joint statement. "The lab leaders agreed to work with the UK government to ensure our approach responds to the speed of innovations in this technology both in the UK and around the globe. "The PM and CEOs discussed the risks of the technology, ranging from disinformation and national security, to existential threats … The PM set out how the approach to AI regulation will need to keep pace with the fast-moving advances in this technology." It is the first time the prime minister has acknowledged the potential "existential" threat of developing a "superintelligent" AI without appropriate safeguards, a risk that contrasts with the UK government's generally positive approach to AI development.


To PiM or Not to PiM

Communications of the ACM

The article examines a 20nm, 6GB function-in-memory DRAM based on HBM2, featuring a 1.2-TFLOPS programmable computing unit that exploits bank-level parallelism for machine learning applications.


Better Algorithms through Faster Math

Communications of the ACM

Developing faster algorithms is an important but elusive goal for data scientists. The ability to accelerate complex computing tasks and reduce latency has far-reaching ramifications in areas such as natural language processing, video streaming, autonomous robotics, gaming, and extended reality. Yet for all the hype surrounding computer algorithms and the increasingly sophisticated ways they operate, a basic fact stands out: these algorithms are typically built atop matrix multiplication, a fundamental operation of linear algebra. The underlying mathematical framework has not changed a great deal since the inception of computing, and more efficient multiplication formulas have proved hard to find. It is an issue attracting growing attention, particularly as machine learning (ML), deep learning (DL), artificial intelligence (AI), and machine automation advance into the mainstream.
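
For context, the "more efficient formulas" the article alludes to are algorithms such as Strassen's, which replaces the eight block multiplications of the schoolbook method with seven. The sketch below is not from the article; it is a minimal illustration in Python/NumPy that assumes square matrices whose size is a power of two.

```python
# Minimal sketch of Strassen's algorithm: 7 recursive multiplications per
# block step instead of the naive 8, giving roughly O(n^2.807) vs O(n^3).
# Assumes square matrices with power-of-two dimensions.
import numpy as np

def strassen(A, B, leaf=64):
    n = A.shape[0]
    if n <= leaf:                      # small blocks: fall back to ordinary matmul
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Seven products replace the eight of the schoolbook block formula.
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)

    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((256, 256))
    B = rng.standard_normal((256, 256))
    assert np.allclose(strassen(A, B), A @ B)   # matches ordinary multiplication
```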


On the Implicit Bias in Deep-Learning Algorithms

Communications of the ACM

Deep learning has been highly successful in recent years and has led to dramatic improvements in multiple domains. Deep-learning algorithms often generalize quite well in practice, namely, given access to labeled training data, they return neural networks that correctly label unobserved test data. However, despite much research, our theoretical understanding of generalization in deep learning is still limited. Neural networks used in practice often have far more learnable parameters than training examples. In such overparameterized settings, one might expect overfitting to occur, that is, the learned network might perform well on the training dataset and perform poorly on test data. Indeed, in overparameterized settings, there are many solutions that perform well on the training data, but most of them do not generalize well.
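
To make the overparameterized regime concrete, here is a minimal, self-contained sketch (not from the paper): a small PyTorch MLP with roughly 10,000 parameters is driven to near-zero training loss on only 20 examples. Many different networks can interpolate such a dataset; which one gradient-based training actually selects, and why it often generalizes, is the implicit-bias question the survey addresses.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# 20 noisy training examples from a simple 1-D target function.
x_train = torch.linspace(-1, 1, 20).unsqueeze(1)
y_train = torch.sin(3 * x_train) + 0.1 * torch.randn_like(x_train)

# Roughly 10,000 learnable parameters for only 20 examples: heavily overparameterized.
model = nn.Sequential(nn.Linear(1, 100), nn.ReLU(),
                      nn.Linear(100, 100), nn.ReLU(),
                      nn.Linear(100, 1))
print(sum(p.numel() for p in model.parameters()), "parameters for 20 examples")

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(5000):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()

# The network (nearly) interpolates the training data. Many parameter settings
# could do the same; implicit-bias research asks which ones training finds.
print("final training loss:", loss.item())
```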


Meta unveils its first custom AI chip

ZDNet

Meta on Thursday unveiled its first chip, the MTIA, which it said was optimized to run recommendation engines and benefits from close collaboration with the company's PyTorch developers. Meta Platforms, owner of Facebook, WhatsApp and Instagram, on Thursday unveiled its first custom-designed computer chip tailored especially for processing artificial intelligence programs, called the Meta Training and Inference Accelerator, or "MTIA." The chip, consisting of a mesh of blocks of circuits that operate in parallel, runs software that optimizes programs using Meta's PyTorch open-source developer framework. Meta describes the chip as being tuned for one particular type of AI program: deep learning recommendation models. These are programs that can look at a pattern of activity, such as clicking on posts on a social network, and predict related, possibly relevant material to recommend to the user.
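
As a rough illustration of what such a recommendation model looks like, here is a hypothetical PyTorch sketch (not Meta's code and not specific to MTIA): sparse user and item IDs are mapped to embeddings, a small MLP combines them, and the output is a click probability used to rank candidate posts.

```python
import torch
import torch.nn as nn

class TinyRecModel(nn.Module):
    def __init__(self, n_users=1000, n_items=5000, dim=16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # sparse lookup tables dominate memory
        self.item_emb = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(                    # dense compute after the lookups
            nn.Linear(2 * dim, 32), nn.ReLU(),
            nn.Linear(32, 1))

    def forward(self, user_ids, item_ids):
        u = self.user_emb(user_ids)
        i = self.item_emb(item_ids)
        logits = self.mlp(torch.cat([u, i], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)     # predicted click probability

model = TinyRecModel()
users = torch.tensor([3, 3, 3])          # one user scored against three candidate items
items = torch.tensor([10, 42, 999])
print(model(users, items))               # higher score = more likely to be recommended
```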


#ICLR2023 invited talk: Data, history and equality with Elaine Nsoesie

AIHub

Figure from Use of Deep Learning to Examine the Association of the Built Environment With Prevalence of Neighborhood Adult Obesity, by Adyasha Maharana and Elaine Okanyene Nsoesie. The image on the right represents actual obesity prevalence; the one on the left shows cross-validated estimates of obesity prevalence based on features of the built environment extracted from satellite images. Figure reproduced under CC-BY licence. The 11th International Conference on Learning Representations (ICLR) took place last week in Kigali, Rwanda, the first time a major AI conference has been held in person in Africa. The program included workshops, contributed talks, affinity group events, and socials.


Google unveils its multilingual, code-generating PaLM 2 language model

Engadget

Google has stood at the forefront of many of the tech industry's AI breakthroughs in recent years, Zoubin Ghahramani, Vice President of Google DeepMind, declared in a blog post, asserting that the company's foundation models are "the bedrock for the industry and the AI-powered products that billions of people use daily." On Wednesday, Ghahramani and other Google executives took the Shoreline Amphitheater stage to show off the company's latest and greatest large language model, PaLM 2, which now comes in four sizes able to run locally on everything from mobile devices to server farms. PaLM 2, obviously, is the successor to Google's existing PaLM model that, until recently, powered its experimental Bard AI. "Think of PaLM as a general model that then can be fine-tuned to achieve particular tasks," he explained during a call with reporters earlier in the week. "For example: health research teams have fine-tuned PaLM with medical knowledge to help answer questions and summarize insights from a variety of dense medical texts." Ghahramani also noted that PaLM was "the first large language model to perform at an expert level on the US medical licensing exam."
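
The "general model, then fine-tune" workflow Ghahramani describes can be pictured with a generic sketch like the one below. This is hypothetical PyTorch code, not Google's PaLM API: a stand-in pretrained backbone is frozen and only a small task-specific head is trained on domain data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained general-purpose language model; in practice these
# weights would be loaded from a checkpoint, not randomly initialized.
embedding = nn.Embedding(30_000, 256)
encoder = nn.LSTM(256, 256, batch_first=True)
task_head = nn.Linear(256, 2)             # e.g. a yes/no medical-question classifier

# Freeze the "general model" and train only the small task-specific head.
for p in list(embedding.parameters()) + list(encoder.parameters()):
    p.requires_grad = False
opt = torch.optim.AdamW(task_head.parameters(), lr=1e-4)

def classify(token_ids):
    hidden, _ = encoder(embedding(token_ids))
    return task_head(hidden[:, -1, :])    # classify from the final hidden state

# Tiny fake "domain" batch: token ids and binary labels.
tokens = torch.randint(0, 30_000, (8, 32))
labels = torch.randint(0, 2, (8,))
for _ in range(20):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(classify(tokens), labels)
    loss.backward()
    opt.step()
print("fine-tuning loss:", loss.item())
```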


Why I'm Not Worried About A.I. Killing Everyone and Taking Over the World

Slate

This article was co-published with Understanding AI, a newsletter that explores how A.I. works and how it's changing our world. Geoffrey Hinton is a legendary computer scientist whose work laid the foundation for today's artificial intelligence technology. He was a co-author of two of the most influential A.I. papers: a 1986 paper describing a foundational technique (called backpropagation) that is still used to train deep neural networks and a 2012 paper demonstrating that deep neural networks could be shockingly good at recognizing images. That 2012 paper helped to spark the deep learning boom of the last decade. Google hired the paper's authors in 2013 and Hinton has been helping Google develop its A.I. technology ever since then. But last week Hinton quit Google so he could speak freely about his fears that A.I. systems would soon become smarter than us and gain the power to enslave or kill us. "There are very few examples of a more intelligent thing being controlled by a less intelligent thing," Hinton said in an interview on CNN last week.


FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?

Journal of Artificial Intelligence Research

The existence of a plethora of language models makes the problem of selecting the best one for a custom task challenging. Most state-of-the-art methods leverage transformer-based models (e.g., BERT) or their variants. However, training such models and exploring their hyperparameter space is computationally expensive. Prior work proposes several neural architecture search (NAS) methods that employ performance predictors (e.g., surrogate models) to address this issue; however, such works limit analysis to homogeneous models that use fixed dimensionality throughout the network. This leads to sub-optimal architectures. To address this limitation, we propose a suite of heterogeneous and flexible models, namely FlexiBERT, that have varied encoder layers with a diverse set of possible operations and different hidden dimensions. For better-posed surrogate modeling in this expanded design space, we propose a new graph-similarity-based embedding scheme. We also propose a novel NAS policy, called BOSHNAS, that leverages this new scheme, Bayesian modeling, and second-order optimization, to quickly train and use a neural surrogate model to converge to the optimal architecture. A comprehensive set of experiments shows that the proposed policy, when applied to the FlexiBERT design space, pushes the performance frontier upwards compared to traditional models. FlexiBERT-Mini, one of our proposed models, has 3% fewer parameters than BERT-Mini and achieves an 8.9% higher GLUE score. A FlexiBERT model with performance equivalent to that of the best homogeneous model is 2.6× smaller. FlexiBERT-Large, another proposed model, attains state-of-the-art results, outperforming the baseline models by at least 5.7% on the GLUE benchmark.
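
To make the search procedure concrete, here is an illustrative sketch (not the authors' BOSHNAS code) of surrogate-based NAS over a heterogeneous space: each candidate lets every layer pick its own hidden size and operation, a Gaussian-process surrogate predicts the score from a simple feature vector (a stand-in for the paper's graph-similarity embedding), and an upper-confidence-bound rule chooses the next architecture to evaluate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
HIDDEN = [128, 256]          # per-layer hidden sizes allowed in the toy search space
OPS = [0, 1, 2]              # 0=self-attention, 1=convolution, 2=linear (toy encoding)

def sample_arch(n_layers=4):
    # A heterogeneous architecture: each layer picks its own width and operation.
    return [(rng.choice(HIDDEN), rng.choice(OPS)) for _ in range(n_layers)]

def featurize(arch):
    # Flatten the architecture into a numeric vector the surrogate can consume.
    return np.array([v for layer in arch for v in layer], dtype=float)

def evaluate(arch):
    # Placeholder for "train the candidate and measure its GLUE score": here a
    # toy score that mildly prefers wider layers, plus evaluation noise.
    return sum(h for h, _ in arch) / 1000.0 + rng.normal(0, 0.02)

# Seed the surrogate with a few evaluated architectures, then search.
archs = [sample_arch() for _ in range(5)]
X = np.stack([featurize(a) for a in archs])
y = np.array([evaluate(a) for a in archs])

for step in range(10):
    gp = GaussianProcessRegressor().fit(X, y)
    candidates = [sample_arch() for _ in range(50)]
    Xc = np.stack([featurize(a) for a in candidates])
    mean, std = gp.predict(Xc, return_std=True)
    best = int(np.argmax(mean + 0.5 * std))          # explore/exploit trade-off
    archs.append(candidates[best])
    X = np.vstack([X, Xc[best]])
    y = np.append(y, evaluate(candidates[best]))

print("best score found:", y.max())
```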


Video: Geoffrey Hinton talks about the "existential threat" of AI

MIT Technology Review

Deep learning pioneer Geoffrey Hinton announced on Monday that he was stepping down from his role as a Google AI researcher after a decade with the company. He says he wants to speak freely as he grows increasingly worried about the potential harms of artificial intelligence. Prior to the announcement, Will Douglas Heaven, MIT Technology Review's senior editor for AI, interviewed Hinton about his concerns. Soon after, the two spoke at EmTech Digital, MIT Technology Review's signature AI event. "I think it's quite conceivable that humanity is just a passing phase in the evolution of intelligence," Hinton said.