democratization
Opening Musical Creativity? Embedded Ideologies in Generative-AI Music Systems
AI systems for music generation are increasingly common and easy to use, granting people without any musical background the ability to create music. Because of this, generative AI has been marketed and celebrated as a means of democratizing music making. However, inclusivity often functions as marketable rhetoric rather than a genuine guiding principle in these industry settings. In this paper, we look at four generative-AI music-making systems available to the public as of mid-2025 (AIVA, Stable Audio, Suno, and Udio) and track how they are rhetoricized by their developers and received by users. Our aim is to investigate the ideologies driving the early-stage development and adoption of generative AI in music making, with a particular focus on democratization. We combine autoethnography and digital ethnography to examine patterns and incongruities in rhetoric when positioned against product functionality, then collate the results into a nuanced, contextual discussion. The shared ideology we map between producers and consumers is individualist, globalist, techno-liberal, and ethically evasive. It is a 'total ideology' which obfuscates individual responsibility, and through which the nature of music and musical practice is transfigured to suit generative outcomes.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.04)
- North America > United States > Virginia (0.04)
- (7 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
AI Safety Should Prioritize the Future of Work
Hazra, Sanchaita, Majumder, Bodhisattwa Prasad, Chakrabarty, Tuhin
Current efforts in AI safety prioritize filtering harmful content, preventing manipulation of human behavior, and eliminating existential risks in cybersecurity or biosecurity. While pressing, this narrow focus overlooks critical human-centric considerations that shape the long-term trajectory of a society. In this position paper, we identify the risks of overlooking the impact of AI on the future of work and recommend comprehensive transition support towards the evolution of meaningful labor with human agency. Through the lens of economic theories, we highlight the intertemporal impacts of AI on human livelihood and the structural changes in labor markets that exacerbate income inequality. Additionally, the closed-source approach of major stakeholders in AI development resembles rent-seeking behavior through exploiting resources, breeding mediocrity in creative labor, and monopolizing innovation. To address this, we argue in favor of a robust international copyright anatomy supported by implementing collective licensing that ensures fair compensation mechanisms for using data to train AI models. We strongly recommend a pro-worker framework of global AI governance to enhance shared prosperity and economic justice while reducing technical debt.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > Utah (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (7 more...)
Smart Learning in the 21st Century: Advancing Constructionism Across Three Digital Epochs
Levin, Ilya, Semenov, Alexei L., Gorsky, Mikael
This article explores the evolution of constructionism as an educational framework, tracing its relevance and transformation across three pivotal eras: the advent of personal computing, the networked society, and the current era of generative AI. Rooted in Seymour Papert's constructionist philosophy, this study examines how constructionist principles align with the expanding role of digital technology in personal and collective learning. We discuss the transformation of educational environments from hierarchical instructionism to constructionist models that emphasize learner autonomy and interactive, creative engagement. Central to this analysis is the concept of an expanded personality, wherein digital tools and AI integration fundamentally reshape individual self-perception and social interactions. By integrating constructionism into the paradigm of smart education, we propose it as a foundational approach to personalized and democratized learning. Our findings underscore constructionism's enduring relevance in navigating the complexities of technology-driven education, providing insights for educators and policymakers seeking to harness digital innovations to foster adaptive, student-centered learning experiences.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > Russia (0.04)
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.04)
- (7 more...)
- Information Technology (1.00)
- Education > Educational Technology > Educational Software > Computer Based Training (0.67)
- Education > Educational Setting > K-12 Education (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.50)
Leveraging Large Language Models to Democratize Access to Costly Financial Datasets for Academic Research
Wang, Julian Junyan, Wang, Victor Xiaoqi
Unequal access to costly datasets essential for empirical research has long hindered researchers from disadvantaged institutions, limiting their ability to contribute to their fields and advance their careers. Recent breakthroughs in Large Language Models (LLMs) have the potential to democratize data access by automating data collection from unstructured sources. We develop and evaluate a novel methodology using GPT-4o-mini within a Retrieval-Augmented Generation (RAG) framework to collect data from corporate disclosures. Our approach achieves human-level accuracy in collecting CEO pay ratios from approximately 10,000 proxy statements and Critical Audit Matters (CAMs) from more than 12,000 10-K filings, with LLM processing times of 9 and 40 minutes respectively, each at a cost under $10. This stands in stark contrast to the hundreds of hours needed for manual collection or the thousands of dollars required for commercial database subscriptions. To foster a more inclusive research community by empowering researchers with limited resources to explore new avenues of inquiry, we share our methodology and the resulting datasets.
- North America > United States > California (0.04)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (8 more...)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Research Report > Promising Solution (0.67)
- Law > Business Law (1.00)
- Government > Regional Government > North America Government > United States Government (0.93)
- Banking & Finance > Trading (0.67)
- Education (0.67)
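The RAG-based collection pipeline described in the abstract above can be sketched in miniature. This is an illustrative toy, not the authors' code: the keyword-overlap retriever stands in for embedding-based retrieval, and the regex "extractor" stands in for the GPT-4o-mini call that would read the retrieved proxy-statement passages; all names and the sample filing text are assumptions.

```python
import re

def chunk(text, size=400, overlap=100):
    """Split a long filing into overlapping chunks for retrieval."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def retrieve(chunks, query_terms, k=2):
    """Rank chunks by naive term overlap (stand-in for embedding search)."""
    scored = sorted(chunks, key=lambda c: -sum(c.lower().count(t) for t in query_terms))
    return scored[:k]

def extract_pay_ratio(context):
    """Stand-in for the LLM extraction step: pull an 'N to 1' ratio from text."""
    m = re.search(r"(\d+(?:\.\d+)?)\s*(?:to|:)\s*1", context)
    return float(m.group(1)) if m else None

# Hypothetical proxy-statement excerpt.
filing = (
    "Item 11. Executive Compensation. ... "
    "The ratio of the annual total compensation of our CEO to the median "
    "employee was estimated to be 254 to 1 for fiscal 2023. ..."
)
top = retrieve(chunk(filing), ["ratio", "ceo", "median"])
ratio = extract_pay_ratio(" ".join(top))
print(ratio)  # 254.0
```

In the paper's actual setting, retrieval narrows each ~100-page filing to a few relevant passages so the LLM reads only a small context, which is what keeps the per-filing cost low.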
Understanding "Democratization" in NLP and ML Research
Subramonian, Arjun, Gautam, Vagrant, Klakow, Dietrich, Talat, Zeerak
Recent improvements in natural language processing (NLP) and machine learning (ML) and increased mainstream adoption have led to researchers frequently discussing the "democratization" of artificial intelligence. In this paper, we seek to clarify how democratization is understood in NLP and ML publications, through large-scale mixed-methods analyses of papers using the keyword "democra*" published in NLP and adjacent venues. We find that democratization is most frequently used to convey (ease of) access to or use of technologies, without meaningfully engaging with theories of democratization, while research using other invocations of "democra*" tends to be grounded in theories of deliberation and debate. Based on our findings, we call for researchers to enrich their use of the term democratization with appropriate theory, towards democratic technologies beyond superficial access.
- Asia > Japan (0.28)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Europe > United Kingdom > England > Greater London > London (0.04)
- (12 more...)
- Government > Voting & Elections (1.00)
- Media (0.94)
- Law (0.68)
- (2 more...)
Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives
Seger, Elizabeth, Dreksler, Noemi, Moulange, Richard, Dardaman, Emily, Schuett, Jonas, Wei, K., Winter, Christoph, Arnold, Mackenzie, Ó hÉigeartaigh, Seán, Korinek, Anton, Anderljung, Markus, Bucknall, Ben, Chan, Alan, Stafford, Eoghan, Koessler, Leonie, Ovadya, Aviv, Garfinkel, Ben, Bluemke, Emma, Aird, Michael, Levermore, Patrick, Hazell, Julian, Gupta, Abhishek
Recent decisions by leading AI labs either to open-source their models or to restrict access to them have sparked debate about whether, and how, increasingly capable AI models should be shared. Open-sourcing in AI typically refers to making model architecture and weights freely and publicly accessible for anyone to modify, study, build on, and use. This offers advantages such as enabling external oversight, accelerating progress, and decentralizing control over AI development and use. However, it also presents a growing potential for misuse and unintended consequences. This paper offers an examination of the risks and benefits of open-sourcing highly capable foundation models. While open-sourcing has historically provided substantial net benefits for most software and AI development processes, we argue that for some highly capable foundation models likely to be developed in the near future, open-sourcing may pose sufficiently extreme risks to outweigh the benefits. In such a case, highly capable foundation models should not be open-sourced, at least not initially. Alternative strategies, including non-open-source model sharing options, are explored. The paper concludes with recommendations for developers, standard-setting bodies, and governments for establishing safe and responsible model sharing practices and preserving open-source benefits where safe.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- (18 more...)
I love AI because it will add decades to our lives
Fox News contributor Dr. Marc Siegel weighs in on how artificial intelligence can change the patient-doctor relationship on 'America's Newsroom.' AI, ChatGPT and the like are coming for our jobs and will destroy our way of life, the doomsayers tell us. The mood is utterly different in health care, where cutting-edge physicians recognize the potential of AI to add decades to our lives and to fix the catastrophic "sick care" system, not just in the United States, but around the world. My life expectancy – and yours – is only going up, thanks to AI. Here's how and why. With the democratization of precision medicine, society will shift from a mentality that says, "I'm sick and I need treatment" to "I'm healthy and I want to stay that way." We don't really have a health care system.
- Media > News (0.73)
- Health & Medicine > Therapeutic Area (0.51)
Training Is Everything: Artificial Intelligence, Copyright, and Fair Training
Torrance, Andrew W., Tomlinson, Bill
To learn how to behave, the current revolutionary generation of AIs must be trained on vast quantities of published images, written works, and sounds, many of which fall within the core subject matter of copyright law. To some, the use of copyrighted works as training sets for AI is merely a transitory and non-consumptive use that does not materially interfere with owners' content or copyrights protecting it. Companies that use such content to train their AI engine often believe such usage should be considered "fair use" under United States law (sometimes known as "fair dealing" in other countries). By contrast, many copyright owners, as well as their supporters, consider the incorporation of copyrighted works into training sets for AI to constitute misappropriation of owners' intellectual property, and, thus, decidedly not fair use under the law. This debate is vital to the future trajectory of AI and its applications. In this article, we analyze the arguments in favor of, and against, viewing the use of copyrighted works in training sets for AI as fair use. We call this form of fair use "fair training". We identify both strong and spurious arguments on both sides of this debate. In addition, we attempt to take a broader perspective, weighing the societal costs (e.g., replacement of certain forms of human employment) and benefits (e.g., the possibility of novel AI-based approaches to global issues such as environmental disruption) of allowing AI to make easy use of copyrighted works as training sets to facilitate the development, improvement, adoption, and diffusion of AI. Finally, we suggest that the debate over AI and copyrighted works may be a tempest in a teapot when placed in the wider context of massive societal challenges such as poverty, equality, climate change, and loss of biodiversity, to which AI may be part of the solution.
- Europe > United Kingdom (0.46)
- North America > Canada (0.29)
- Oceania > Australia (0.04)
- (6 more...)
- Research Report (0.50)
- Overview (0.46)
- Instructional Material (0.34)
Democratization of AI creates benefits and challenges
AI is no longer confined to small circles of developers and enthusiasts. Data analysis and machine learning services, such as Google Colab and the models offered through Microsoft's Azure OpenAI Service, make it easier than ever to include a larger circle of employees in AI development by enabling anyone to write and share code for projects. To use the technology effectively, enterprises must appropriately train business users on what AI is and how it can apply to everyday tasks. Arpit Mehra, practice director at analyst firm Everest Group, recommends that enterprises use decentralized governance models to enable data and technology learning strategies. Arun Chandrasekaran, distinguished vice president and analyst at Gartner, also recommends that companies prioritize investments in specialized, domain-specific intelligent applications that focus on training in areas like customer engagement, customer service and talent acquisition.
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.40)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.31)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.31)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.31)
AI art and its impact on the art world: is AI art stealing?
Artificial intelligence (AI) has emerged as a revolutionary technology with the potential to transform virtually every industry, and the art world is no exception. AI has opened up new possibilities for artists to create unique and innovative works of art that were previously impossible. With the help of AI algorithms, artists can generate music, images, and even entire pieces of art, opening the door to a new era of creativity. This has given rise to the field of AI art, where artists are using this technology to push the boundaries of traditional art forms and create new ones altogether. In this context, it is essential to analyze the impact that AI art is having on the art world, both in terms of how it is being created and how it is being consumed.