Yale University
The power of sound in a virtual world
In the digital age, sound is proving to be the greatest connector of all, say Erik Vaveris, vice president of product management and CMO at Shure, and Brian Scholl, director of the Perception and Cognition Laboratory at Yale University. In an era when business, education, and even casual conversations take place via screens, sound has become a differentiating factor. We obsess over lighting, camera angles, and virtual backgrounds, but how we sound can be just as critical to credibility, trust, and connection. Both see audio as more than a technical layer: it's a human factor shaping how people perceive intelligence, trustworthiness, and authority in virtual settings. "If you're willing to take a little bit of time with your audio setup, you can really get across the full power of your message and the full power of who you are to your peers, to your employees, your boss, your suppliers, and of course, your customers," says Vaveris. Scholl's research shows that poor audio quality can make a speaker seem less persuasive, less hireable, and even less credible. "We know that [poor] sound doesn't reflect the people themselves, but we really just can't stop ourselves from having those impressions," says Scholl. "We all understand intuitively that if we're having difficulty being understood while we're talking, then that's bad. But we sort of think that as long as you can make out the words I'm saying, then that's probably all fine. And this research showed, in a somewhat surprising way and to a surprising degree, that this is not so." For organizations navigating hybrid work, training, and marketing, the stakes have become high. Vaveris points out that the pandemic was a watershed moment for audio technology. As classrooms, boardrooms, and conferences shifted online almost overnight, demand accelerated for advanced noise suppression, echo cancellation, and AI-driven processing tools that make meetings more seamless.
Today, machine learning algorithms can strip away keyboard clicks or reverberation and isolate a speaker's voice in noisy environments. That clarity underpins the accuracy of AI meeting assistants that can step in to transcribe, summarize, and analyze discussions. The implications are rippling across industries: this technology empowers executives and creators alike to produce broadcast-quality content from the comfort of a home office, and it offers companies new ways to build credibility with customers and employees without the costly overhead of traditional production.
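To make the idea of "stripping away" background noise concrete, here is a toy spectral-gating sketch: estimate a noise floor from the opening frames, then attenuate any frequency bin that falls below it. This is an illustration of the general principle only, not any vendor's algorithm; the function name, frame sizes, and gating rule are all assumptions chosen for clarity.

```python
import numpy as np

def spectral_gate(signal, frame_len=512, hop=256, noise_frames=10, reduction=0.1):
    """Very simplified spectral-gating noise suppressor (illustrative only).

    Estimates a per-bin noise floor from the first `noise_frames` frames,
    then scales down any bin whose magnitude stays below that floor.
    """
    window = np.hanning(frame_len)
    starts = range(0, len(signal) - frame_len + 1, hop)
    # Short-time spectrum: one FFT per windowed frame
    spec = np.array([np.fft.rfft(signal[s:s + frame_len] * window) for s in starts])
    noise_floor = np.abs(spec[:noise_frames]).mean(axis=0)
    mag, phase = np.abs(spec), np.angle(spec)
    gain = np.where(mag > noise_floor, 1.0, reduction)   # gate each bin
    cleaned = gain * mag * np.exp(1j * phase)
    # Overlap-add resynthesis back to the time domain
    out = np.zeros(len(signal))
    for i, s in enumerate(starts):
        out[s:s + frame_len] += np.fft.irfft(cleaned[i], n=frame_len) * window
    return out
```

Production systems replace the fixed gate with learned masks from a neural network, but the pipeline shape (analyze frames, estimate noise, attenuate, resynthesize) is the same.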
A Conjecture on a Fundamental Trade-Off between Certainty and Scope in Symbolic and Generative AI
This article introduces a conjecture that formalises a fundamental trade-off between provable correctness and broad data-mapping capacity in Artificial Intelligence (AI) systems. When an AI system is engineered for deductively watertight guarantees (demonstrable certainty about the error-free nature of its outputs) -- as in classical symbolic AI -- its operational domain must be narrowly circumscribed and pre-structured. Conversely, a system that can map high-dimensional inputs to rich information outputs -- as in contemporary generative models -- necessarily relinquishes the possibility of zero-error performance, incurring an irreducible risk of error or misclassification. By making this previously implicit trade-off explicit and open to rigorous verification, the conjecture significantly reframes both engineering ambitions and philosophical expectations for AI. After reviewing the historical motivations for this tension, the article states the conjecture in information-theoretic form and contextualises it within broader debates in epistemology, formal verification, and the philosophy of technology. It then offers an analysis of its implications and consequences, drawing on notions of underdetermination, prudent epistemic risk, and moral responsibility. The discussion clarifies how, if correct, the conjecture would help reshape evaluation standards, governance frameworks, and hybrid system design. The conclusion underscores the importance of eventually proving or refuting the inequality for the future of trustworthy AI.
Towards Non-Euclidean Foundation Models: Advancing AI Beyond Euclidean Frameworks
Yang, Menglin, Zhang, Yifei, Chen, Jialin, Weber, Melanie, Ying, Rex
In the era of foundation models and Large Language Models (LLMs), Euclidean space is the de facto geometric setting of our machine learning architectures. However, recent literature has demonstrated that this choice comes with fundamental limitations. To that end, non-Euclidean learning is quickly gaining traction, particularly in web-related applications where complex relationships and structures are prevalent. Non-Euclidean spaces, such as hyperbolic, spherical, and mixed-curvature spaces, have been shown to provide more efficient and effective representations for data with intrinsic geometric properties, including web-related data like social network topology, query-document relationships, and user-item interactions. Integrating foundation models with non-Euclidean geometries has great potential to enhance their ability to capture and model the underlying structures, leading to better performance in search, recommendations, and content understanding. This workshop focuses on the intersection of Non-Euclidean Foundation Models and Geometric Learning (NEGEL), exploring its potential benefits for advancing web-related technologies, as well as its challenges and future directions. Workshop page: [https://hyperboliclearning.github.io/events/www2025workshop](https://hyperboliclearning.github.io/events/www2025workshop)
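A small sketch of what "non-Euclidean" means in practice: the Poincaré ball model of hyperbolic space replaces straight-line distance with a geodesic distance that grows rapidly near the ball's boundary, which is why hierarchies and tree-like web data embed so efficiently there. The formula below is the standard Poincaré distance; the function name is our own.

```python
import math

def poincare_distance(u, v):
    """Geodesic distance in the Poincare ball model of hyperbolic space.

    d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    Points must lie strictly inside the unit ball (||x|| < 1).
    """
    sq = lambda x: sum(xi * xi for xi in x)
    diff = sq([ui - vi for ui, vi in zip(u, v)])
    denom = (1.0 - sq(u)) * (1.0 - sq(v))
    return math.acosh(1.0 + 2.0 * diff / denom)
```

Because distances explode near the boundary, a hyperbolic embedding can place exponentially many leaf nodes far apart while keeping their common ancestor close to the origin, something a Euclidean embedding of the same dimension cannot do.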
Introduction to AI Safety, Ethics, and Society
Artificial Intelligence is rapidly embedding itself within militaries, economies, and societies, reshaping their very foundations. Given the depth and breadth of its consequences, it has never been more pressing to understand how to ensure that AI systems are safe, ethical, and have a positive societal impact. This book aims to provide a comprehensive approach to understanding AI risk. Our primary goals include consolidating fragmented knowledge on AI risk, increasing the precision of core ideas, and reducing barriers to entry by making content simpler and more comprehensible. The book has been designed to be accessible to readers from diverse backgrounds. You do not need to have studied AI, philosophy, or other such topics. The content is skimmable and somewhat modular, so that you can choose which chapters to read. We introduce mathematical formulas in a few places to specify claims more precisely, but readers should be able to understand the main points without these.
The Download: how Yale University has prepared for ChatGPT, and schools' AI reckoning
Back-to-school season always feels like a reset moment. However, the big topic this time around seems to be the same thing that defined the end of last year: ChatGPT and other large language models. Last winter and spring brought a wave of headlines about AI in the classroom, with some panicked schools going as far as to ban ChatGPT altogether. Now, with the summer months having offered a bit of time for reflection, some schools seem to be reconsidering their approach. Tate Ryan-Mosley, our senior tech policy reporter, spoke to the associate provost at Yale University to find out why the prestigious school never considered banning ChatGPT, and instead wants to work with it.
Interview with Doug Duhaime, contributor to Google's Dev Library
Introducing the Dev Library Contributor Spotlights - a blog series highlighting developers who support the thriving development ecosystem by contributing their resources and tools to Google Dev Library. We met with Doug Duhaime, a full-stack developer in Yale University's Digital Humanities Lab, to discuss his passion for machine learning, his process, and what inspired him to release his PixPlot project as open source. I was an English major in undergrad and in graduate school. I have a PhD in English literature. To answer this question, I had to mine an enormous collection of data - half a million books, published before 1800 - to look at different patterns.
Yale University and IBM Researchers Introduce Kernel Graph Neural Networks (KerGNNs)
Graph kernel approaches have typically been the most popular strategy for graph classification tasks. Graph kernels can be thought of as functions that measure the similarity of two graphs. They allow kernelized learning algorithms like support vector machines to work directly on graphs rather than converting them to fixed-length, real-valued feature vectors through feature extraction. In recent years, the use of Graph Neural Networks (GNNs) based on high-performance message-passing neural networks (MPNNs) has exploded. As a result, they've grown increasingly popular for graph classification.
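To illustrate what "a function that measures the similarity of two graphs" looks like, here is a deliberately simple degree-histogram kernel. It is a stand-in for the richer kernels used in practice (Weisfeiler-Lehman, random-walk, and the like), not the KerGNN method itself; the function name and the degree cap are our own choices.

```python
import numpy as np

def degree_histogram_kernel(adj_a, adj_b, max_degree=10):
    """Toy graph kernel: similarity = dot product of node-degree histograms.

    Any positive-semidefinite function k(G1, G2) of this general shape can
    be plugged into a kernelized learner such as an SVM, with no need to
    hand-craft fixed-length feature vectors first.
    """
    def hist(adj):
        degrees = np.asarray(adj).sum(axis=1).astype(int)
        return np.bincount(degrees, minlength=max_degree + 1)[: max_degree + 1]
    return float(hist(adj_a) @ hist(adj_b))
```

A triangle scores higher against itself than against a 3-node path, because their degree distributions differ; real kernels refine this by comparing local neighborhood structure rather than bare degrees.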
The State of AI Ethics Report (January 2021)
Gupta, Abhishek, Royer, Alexandrine, Wright, Connor, Khan, Falaah Arif, Heath, Victoria, Galinkin, Erick, Khurana, Ryan, Ganapini, Marianna Bergamaschi, Fancy, Muriam, Sweidan, Masa, Akif, Mo, Butalid, Renjie
The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field's ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is "The Abuse and Misogynoir Playbook," written by Dr. Katlyn Tuner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood (Assistant Professor, Program in Media Arts and Sciences; Assistant Professor, Aeronautics and Astronautics; Lead, Space Enabled Research Group, MIT) and Dr. Catherine D'Ignazio (Assistant Professor, Urban Science and Planning; Director, Data + Feminism Lab, MIT). The piece (and accompanying infographic) is a deep dive into the historical and systematic silencing, erasure, and revision of Black women's contributions to knowledge and scholarship in the United States, and globally. Exposing and countering this Playbook has become increasingly important following the firing of AI Ethics expert Dr. Timnit Gebru (and several of her supporters) at Google. This report should serve not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
Robotic fabric stiffens and relaxes in response to changes in temperature
Scientists have created a robotic fabric that stiffens and relaxes in response to changes in temperature, which could be used in emergency situations. The material, developed at Yale University in the US, is equipped with a system of heat sensors and threads that stiffen to change the fabric's shape. Under heat changes, it can bend and twist to transform itself into adaptable clothing, shape-changing machinery and self-erecting shelters. Video footage shows the material going from a flat, ordinary fabric to a load-bearing structure supporting a weight, a model airplane with flexible wings and a wearable robotic tourniquet that activates in response to damage. 'We believe this technology can be leveraged to create self-deploying tents, robotic parachutes, and assistive clothing,' said Professor Rebecca Kramer-Bottiglio at Yale University.
The Spectral Underpinning of word2vec
Jaffe, Ariel, Kluger, Yuval, Lindenbaum, Ofir, Patsenker, Jonathan, Peterfreund, Erez, Steinerberger, Stefan
Word2vec due to Mikolov et al. (2013) is a word embedding method that is widely used in natural language processing. Despite its great success and frequent use, theoretical justification is still lacking. The main contribution of our paper is to propose a rigorous analysis of the highly nonlinear functional of word2vec. Our results suggest that word2vec may be primarily driven by an underlying spectral method. This insight may open the door to obtaining provable guarantees for word2vec. We support these findings by numerical simulations. One fascinating open question is whether the nonlinear properties of word2vec that are not captured by the spectral method are beneficial and, if so, by what mechanism.