Efficiency Without Cognitive Change: Evidence from Human Interaction with Narrow AI Systems

Benítez, María Angélica, Ceballos, Rocío Candela, Molina, Karina Del Valle, Araujo, Sofía Mundo, Villaroel, Sofía Evangelina Victorio, Justel, Nadia

arXiv.org Artificial Intelligence

The growing integration of artificial intelligence (AI) into human cognition raises a fundamental question: does AI merely improve efficiency, or does it alter how we think? This study experimentally tested whether short-term exposure to narrow AI tools enhances core cognitive abilities or simply optimizes task performance. Thirty young adults completed standardized neuropsychological assessments embedded in a seven-week protocol with a four-week online intervention involving problem-solving and verbal comprehension tasks, either with or without AI support (ChatGPT). While AI-assisted participants completed several tasks faster and more accurately, no significant pre-post differences emerged in standardized measures of problem solving or verbal comprehension. These results demonstrate efficiency gains without cognitive change, suggesting that current narrow AI systems serve as cognitive scaffolds extending performance without transforming underlying mental capacities. The findings highlight the need for ethical and educational frameworks that promote critical and autonomous thinking in an increasingly AI-augmented cognitive ecology.


Training Data Attribution via Approximate Unrolling

Neural Information Processing Systems

Many training data attribution (TDA) methods aim to estimate how a model's behavior would change if one or more data points were removed from the training set. Methods based on implicit differentiation, such as influence functions, can be made computationally efficient, but fail to account for underspecification, the implicit bias of the optimization algorithm, or multi-stage training pipelines. By contrast, methods based on unrolling address these issues but face scalability challenges.
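
To make the unrolling idea concrete, here is a minimal toy sketch (not the paper's method): a linear-regression model is trained by plain SGD on per-example weighted losses, and the test loss is then differentiated through the entire optimization trajectory with respect to those weights. The data, step count, and learning rate are illustrative assumptions, and PyTorch is used only for its autodiff.

```python
import torch

torch.manual_seed(0)

# Toy linear-regression data: n training points, one test point.
n, d = 20, 3
w_true = torch.tensor([1.0, -2.0, 0.5])
X = torch.randn(n, d)
y = X @ w_true + 0.1 * torch.randn(n)
x_test = torch.randn(d)
y_test = x_test @ w_true

# eps[i] re-weights example i's loss; d(test loss)/d(eps[i]) at eps = 0
# estimates how up- or down-weighting example i would change the test loss.
eps = torch.zeros(n, requires_grad=True)

w = torch.zeros(d, requires_grad=True)   # model parameters
lr, steps = 0.1, 50
for _ in range(steps):
    per_example = 0.5 * (X @ w - y) ** 2
    train_loss = ((1.0 + eps) * per_example).mean()
    (g,) = torch.autograd.grad(train_loss, w, create_graph=True)
    w = w - lr * g                       # unrolled SGD step, kept in the graph

test_loss = 0.5 * (x_test @ w - y_test) ** 2
scores = torch.autograd.grad(test_loss, eps)[0]
print(scores)                            # one attribution score per training example
```

Each score approximates how re-weighting one training example would change the test loss, which is the same quantity implicit-differentiation methods estimate without replaying the optimization trajectory.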


Supervised Autoencoder MLP for Financial Time Series Forecasting

Bieganowski, Bartosz, Slepaczuk, Robert

arXiv.org Machine Learning

This paper investigates the enhancement of financial time series forecasting with neural networks through supervised autoencoders, aiming to improve investment strategy performance. It specifically examines the impact of noise augmentation and triple barrier labeling on risk-adjusted returns, using the Sharpe and Information Ratios. The study focuses on the S&P 500 index, EUR/USD, and BTC/USD as the traded assets from January 1, 2010, to April 30, 2022. Findings indicate that supervised autoencoders, with balanced noise augmentation and bottleneck size, significantly boost strategy effectiveness. However, excessive noise and large bottleneck sizes can impair performance, highlighting the importance of precise parameter tuning. This paper also presents a derivation of a novel optimization metric that can be used with triple barrier labeling. The results of this study have substantial policy implications, suggesting that financial institutions and regulators could leverage the techniques presented to enhance market stability and investor protection, while also encouraging more informed and strategic investment approaches in various financial sectors.
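
A minimal sketch of what such an architecture might look like is given below: an MLP autoencoder with a tunable bottleneck, a supervised head for the (triple-barrier-style) class labels, and Gaussian noise augmentation applied to the inputs during training. The layer sizes, noise level, and loss weighting alpha are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class SupervisedAutoencoder(nn.Module):
    """MLP autoencoder with a classification head attached to the bottleneck."""
    def __init__(self, n_features, bottleneck=8, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, bottleneck), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )
        # Triple barrier labels {-1, 0, 1} would be mapped to classes {0, 1, 2}.
        self.head = nn.Linear(bottleneck, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.head(z)

def training_loss(model, x, y, noise_std=0.05, alpha=0.5):
    """One loss evaluation with Gaussian noise augmentation on the inputs."""
    x_noisy = x + noise_std * torch.randn_like(x)
    recon, logits = model(x_noisy)
    return alpha * nn.functional.mse_loss(recon, x) \
         + (1 - alpha) * nn.functional.cross_entropy(logits, y)
```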


Deepfakes, and why we should be worried that every day is becoming April Fool's Day

#artificialintelligence

Deepfakes are becoming so common that we may not even realise that some of the images and videos we encounter have been artificially created. We briefly discuss what a deepfake is and some of the ways it has been permeating our lives and the content we consume. Generally, April Fool's Day is perhaps the only day of the year when we have permission to share practical jokes and hoaxes, in the hope that some of the more gullible among us would believe them, but ultimately, we can all have a good laugh about it and move on. However, what happens when it is not April Fool's Day, and hoaxes abound? That situation is increasingly emerging, and it is of particular concern. A CNN article published late last week highlighted some of the recent hoaxes that went viral: "Pope Francis wearing a massive, white puffer coat.


Diffusion Models in AI: Everything You Need to Know

#artificialintelligence

In the AI ecosystem, diffusion models are setting the direction and pace of technological advancement. They are revolutionizing the way we approach complex generative AI tasks. These models are based on the mathematics of Gaussian distributions, variance, differential equations, and generative sequences. Modern AI-centric products and solutions developed by Nvidia, Google, Adobe, and OpenAI have put diffusion models in the limelight. DALL-E 2, Stable Diffusion, and Midjourney are prominent examples of diffusion models that have been making the rounds on the internet recently.
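
As a concrete illustration of the Gaussian mathematics involved, the sketch below implements only the forward (noising) half of a DDPM-style diffusion process in NumPy: data is progressively corrupted according to a variance schedule, and a learned network (omitted here) would be trained to reverse that corruption. The schedule values and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear variance schedule beta_1..beta_T and cumulative products of alpha_t = 1 - beta_t.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

def q_sample(x0, t):
    """Forward diffusion: x_t ~ N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    abar = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * noise, noise

x0 = rng.standard_normal(16)          # a toy data vector standing in for an image
x_noisy, eps = q_sample(x0, t=500)
# A denoising network would be trained to predict eps (or x0) from (x_noisy, t);
# generation then runs this corruption process in reverse, step by step.
```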


How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources

#artificialintelligence

Recently, the field has been greatly impressed and inspired by OpenAI's ChatGPT. It is undoubtedly clever, capable, and very fun to talk to. Its multi-faceted abilities are significantly beyond many NLP researchers' and practitioners' expectations based on their impression of the (not-that-strong) original GPT-3. The natural question is how ChatGPT got there, and where these fantastic abilities come from. In this post, we try to dissect the emergent abilities and trace them to their sources, hoping to give a comprehensive roadmap of how the GPT-3.5 model family, along with related large language models, evolved to their current forms.


A Comprehensive Guide for Interview Questions on Classical NLP

#artificialintelligence

This article was published as a part of the Data Science Blogathon. It is common knowledge that natural language processing is one of the most popular and competitive fields in the current global IT sector. All of the top organizations and budding startups are on the lookout for candidates with strong NLP-related skills. Natural Language Processing (NLP) is the field at the intersection of Linguistics, Computer Science, and Artificial Intelligence. It is the technology that allows machines to understand, analyze, manipulate, and interpret human languages.
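
As a small taste of the classical techniques such a guide usually covers, the sketch below turns a few toy documents into TF-IDF bag-of-words vectors and compares them with cosine similarity using scikit-learn; the example documents and settings are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Natural language processing lets machines interpret human language.",
    "Machines analyze text with classical NLP techniques such as TF-IDF.",
    "Deep learning is a different branch of artificial intelligence.",
]

# Bag-of-words representation weighted by term frequency-inverse document frequency.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)

# Pairwise cosine similarity between the documents (higher = more similar vocabulary).
print(cosine_similarity(tfidf))
```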


Gigantic database of building blocks will help artificial intelligence uncover new organocatalysts

#artificialintelligence

Researchers have constructed a public database of 4000 experimentally derived organocatalysts. The database also contains several thousand molecular fragments and combinatorially enriched structures based on the experimentally derived entries. It 'represents the first steps towards an extensive mapping of organocatalyst space with large chemical diversity,' says database co-creator Clémence Corminboeuf from the Swiss Federal Institute of Technology (EPFL). Researchers will be able to use the Organic structures for catalysis repository database, known as Oscar, 'to train machine learning models and predict the properties of new catalysts,' comments EPFL team member Simone Gallarati. The team also hope the database will function as a starting point for organic chemists designing new catalysts.


Everything You Need to Know about LIME - Analytics Vidhya

#artificialintelligence

This article was published as a part of the Data Science Blogathon. In this article, I will walk you through one technique that makes any machine learning model interpretable. There is a common misconception that only linear machine learning models are interpretable. Model explainability helps in decision making and gives clients confidence in the results. Explainability in AI describes how much each feature contributes, or how important each feature is, to a given output.
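
A minimal sketch of how LIME is typically applied to a tabular model follows, using the lime package's LimeTabularExplainer with a scikit-learn classifier; the dataset, model, and parameter choices are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: LIME fits a local linear surrogate around this instance.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())   # (feature condition, weight) pairs for the explained label
```

Because LIME fits a simple surrogate only in the neighbourhood of the chosen instance, the returned weights describe that local behaviour, not the model as a whole.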


Unlocking the Colonial Archive

#artificialintelligence

The Spanish empire controlled the majority of the Western Hemisphere's lands and peoples for more than three centuries. Its vast administration in the Americas depended on the work of royal notaries, Indigenous artists, and printers, who produced prodigious amounts of written and printed documents. Despite the extensive documentation, present-day understanding of the Spanish colonial enterprise is fragmentary due to the archive's intellectual inaccessibility: scholars and interested audiences must decipher archaic penmanship, obscure writing conventions, and unfamiliar Indigenous imagery to read these historical sources, a task that requires trained eyes. This project seeks to use artificial intelligence (AI) technologies to automatically convert this "unreadable" archive into accessible data. We seek to develop interdisciplinary data science methods for the study of early-modern Indigenous- and Spanish-language materials, sources that have been mostly neglected in the computer science field.