Yann LeCun's new venture is a contrarian bet against large language models

MIT Technology Review

Yann LeCun's new venture is a contrarian bet against large language models In an exclusive interview, the AI pioneer shares his plans for his new Paris-based company, AMI Labs. Yann LeCun is a Turing Award recipient and a top AI researcher, but he has long been a contrarian figure in the tech world. He believes that the industry's current obsession with large language models is wrong-headed and will ultimately fail to solve many pressing problems. Instead, he thinks we should be betting on world models--a different type of AI that accurately reflects the dynamics of the real world. He is also a staunch advocate for open-source AI and criticizes the closed approach of frontier labs like OpenAI and Anthropic. Perhaps it's no surprise, then, that he recently left Meta, where he had served as chief scientist for FAIR (Fundamental AI Research), the company's influential research lab that he founded. Meta has struggled to gain much traction with its open-source AI model Llama and has seen internal shake-ups, including the controversial acquisition of ScaleAI. LeCun sat down for an exclusive online interview from his Paris apartment to discuss his new venture, life after Meta, the future of artificial intelligence, and why he thinks the industry is chasing the wrong ideas.


From University Research to Global Impact

Communications of the ACM

Membership in ACM includes a subscription to Communications of the ACM (CACM), the computing industry's most trusted source for staying connected to the world of advanced computing. In an era defined by rapid technological advancement, particularly in fields such as artificial intelligence (AI), there is a growing discourse surrounding the pivotal role of academia and the impact of federal funding on innovation. The following conversation sheds light on an often-underdiscussed facet of this relationship: the profound influence of academic research on the formation and continued success of large technology companies such as Google. The participants include Magda Balazińska (MB) and three senior Google engineers--Urs Hölzle (UH), Jeff Dean (JD), and Parthasarathy Ranganathan (PR)--who collectively have more than a century of experience spanning both academia and industry, and between them represent different disciplines across the computing stack (distributed systems, AI, hardware). The discussion delves into the foundational role of academia in Google's inception, the long-term impact of federally funded research, the stories behind key innovations, and the grand challenges that lie ahead for academic research.


Levers of Power in the Field of AI

Mackenzie, Tammy, Punj, Sukriti, Perez, Natalie, Bhaduri, Sreyoshi, Radeljic, Branislav

arXiv.org Artificial Intelligence

This paper examines how decision makers in academia, government, business, and civil society navigate questions of power in implementations of artificial intelligence (AI). The study explores how individuals experience and exercise "levers of power," which are presented as social mechanisms that shape institutional responses to technological change. The study reports on responses to personalized questionnaires designed to gather insight into a decision maker's institutional purview, based on an institutional governance framework developed from the work of neo-institutionalist scholars. Findings present the anonymized, real responses and circumstances of respondents in the form of twelve fictional personas of high-level decision makers from North America and Europe. These personas illustrate how personal agency, organizational logics, and institutional infrastructures may intersect in the governance of AI. The decision makers' responses to the questionnaires then inform a discussion of the field-level personal power of decision makers, methods of fostering institutional stability in times of change, and methods of influencing institutional change in the field of AI. The final section of the discussion presents a table of the dynamics of the levers of power in the field of AI for change makers, along with five testable hypotheses for institutional and social movement researchers. In summary, this study provides insight into the means for policymakers within institutions, and their counterparts in civil society, to personally engage with AI governance.


He Was Laughed Out of Academia for This Take About Technology. Turns Out He Was Right.

Slate

The most accurate description of being online that was ever articulated comes to us from a Canadian professor. "The light and the message go right through us," he said during a television appearance. "At this moment, we are on the air, and on the air we do not have any physical body. When you're on the telephone or on radio or on TV, you don't have a physical body."


Generative Knowledge Production Pipeline Driven by Academic Influencers

Feher, Katalin, Demeter, Marton

arXiv.org Artificial Intelligence

ABSTRACT Generative AI transforms knowledge production, validation, and dissemination, raising academic integrity and credibility concerns. This study examines 53 academic influencer videos that reached 5.3 million viewers to identify an emerging, structured, implementation-ready pipeline balancing originality, ethical compliance, and human-AI collaboration despite the disruptive impacts. Findings highlight generative AI's potential to automate publication workflows and democratize participation in knowledge production while challenging traditional scientific norms. Academic influencers emerge as key intermediaries in this paradigm shift, connecting bottom-up practices with institutional policies to improve adaptability. Accordingly, the study proposes a generative publication production pipeline and a policy framework for co-intelligence adaptation and reinforcing credibility-centered standards in AI-powered research. These insights support scholars, educators, and policymakers in understanding AI's transformative impact by advocating responsible and innovation-driven knowledge production. Additionally, they reveal pathways for automating best practices, optimizing scholarly workflows, and fostering creativity in academic research and publication.

Keywords: generative AI, ChatGPT, academic integrity, influencers, knowledge production, social media, policy implications, academic policy

1. INTRODUCTION The advent of generative AI (GenAI) transforms knowledge production, increasingly supporting and partially automating the academic workflow (Bolanos et al. 2024). This trend suggests a paradigm shift in which researchers utilize generative AI tools effectively and productively, potentially leading to more automated scientific workflows. However, we have also identified a human component in this process: the impact of academic influencers who promote hands-on knowledge about GenAI in academic projects via social media.


Big Tech, You Need Academia. Speak Up!

Communications of the ACM

The current U.S. administration has launched a war on academia. Indirect costs, or, more accurately, facility and administration expenses, cover costs that support research but cannot be directly attributed to a specific project, such as lab infrastructure, utilities, and administrative support. These are real costs; the cap on them, which has since been suspended by courts, is a severe blow to biomedical research in the U.S. Beyond expanding this cap to other agencies, such as the National Science Foundation (NSF), the administration is also reportedly considering slashing NSF's annual budget from approximately US$9 billion down to about US$3 billion to US$4 billion. This would deal a devastating blow to academic U.S. research, especially computing research. As stated by the Computing Research Association (CRA), "NSF budget cuts would put the future of U.S. innovation and security at risk."


Revisiting gender bias research in bibliometrics: Standardizing methodological variability using Scholarly Data Analysis (SoDA) Cards

Lee, HaeJin, Mishra, Shubhanshu, Mishra, Apratim, You, Zhiwen, Kim, Jinseok, Diesner, Jana

arXiv.org Artificial Intelligence

Gender biases in scholarly metrics remain a persistent concern, despite numerous bibliometric studies exploring their presence and absence across productivity, impact, acknowledgment, and self-citations. However, methodological inconsistencies, particularly in author name disambiguation and gender identification, limit the reliability and comparability of these studies, potentially perpetuating misperceptions and hindering effective interventions. A review of 70 relevant publications over the past 12 years reveals a wide range of approaches, from name-based and manual searches to more algorithmic and gold-standard methods, with no clear consensus on best practices. This variability, compounded by challenges such as accurately disambiguating Asian names and managing unassigned gender labels, underscores the urgent need for standardized and robust methodologies. To address this critical gap, we propose the development and implementation of "Scholarly Data Analysis (SoDA) Cards." These cards will provide a structured framework for documenting and reporting key methodological choices in scholarly data analysis, including author name disambiguation and gender identification procedures. By promoting transparency and reproducibility, SoDA Cards will facilitate more accurate comparisons and aggregations of research findings, ultimately supporting evidence-informed policymaking and enabling the longitudinal tracking of analytical approaches in the study of gender and other social biases in academia.


A Whimsical Odyssey Through the Maze of Scholarly Reviews

Communications of the ACM

This feature promises a seamless transition from despair to solace, as authors can immediately seek professional help to mend their battered egos and decipher cryptic comments. Second, leveraging the prowess of Large Language Models, rational reviewers' critiques will be automatically rewritten into supportive, constructive, and possibly even uplifting feedback, ensuring every review is a warm hug for the soul, regardless of content. This LLM will be combined with a BCI (brain-computer interface) to incept this new review into the Rational Reviewer's brain. Third, to address the cutthroat competition, conference acceptance rates will skyrocket, transforming prestigious gatherings into academic block parties where everyone's invited, and the word 'rejection' is but a whisper from a bygone era. Lastly, in an effort to expedite the gladiatorial arena of publish or perish, the submission-review-decision cycle will be accelerated to warp speed, allowing authors to roll the dice more frequently in the grand casino of academia.


AI cheating is overwhelming the education system – but teachers shouldn't despair John Naughton

The Guardian

Parents are starting to fret about lunch packs, school uniforms and schoolbooks. School leavers who have university places are wondering what freshers' week will be like. And some university professors, especially in the humanities, will be apprehensively pondering how to deal with students who are already more adept users of large language models (LLMs) than they are. They're right to be concerned. As Ian Bogost, a professor of film and media and computer science at Washington University in St Louis, puts it: "If the first year of AI college ended in a feeling of dismay, the situation has now devolved into absurdism. Teachers struggle to continue teaching even as they wonder whether they are grading students or computers; in the meantime, an endless AI cheating and detection arms race plays out in the background."


The global landscape of academic guidelines for generative AI and Large Language Models

Jiao, Junfeng, Afroogh, Saleh, Chen, Kevin, Atkinson, David, Dhurandhar, Amit

arXiv.org Artificial Intelligence

The integration of Generative Artificial Intelligence (GAI) and Large Language Models (LLMs) in academia has spurred a global discourse on their potential pedagogical benefits and ethical considerations. Positive reactions highlight some potential, such as collaborative creativity, increased access to education, and empowerment of trainers and trainees. However, negative reactions raise concerns about ethical complexities, balancing innovation and academic integrity, unequal access, and misinformation risks. Through a systematic survey and text-mining-based analysis of global and national directives, insights from independent research, and eighty university-level guidelines, this study provides a nuanced understanding of the opportunities and challenges posed by GAI and LLMs in education. It emphasizes the importance of balanced approaches that harness the benefits of these technologies while addressing ethical considerations and ensuring equitable access and educational outcomes. The paper concludes with recommendations for fostering responsible innovation and ethical practices to guide the integration of GAI and LLMs in academia.