
Collaborating Authors: schneider



Why Experts Can't Agree on Whether AI Has a Mind

TIME - Tech

Pillay is an editorial fellow at TIME. "I'm not used to getting nasty emails from a holy man," says Professor Michael Levin, a developmental biologist at Tufts University. Levin was presenting his research to a group of engineers interested in spiritual matters in India, arguing that properties like "mind" and intelligence can be observed even in cellular systems, and that they exist on a spectrum. But when he pushed further, arguing that the same properties emerge everywhere, including in computers, the reception shifted.


Re-envisioning Euclid Galaxy Morphology: Identifying and Interpreting Features with Sparse Autoencoders

Wu, John F., Walmsley, Michael

arXiv.org Artificial Intelligence

Sparse Autoencoders (SAEs) can efficiently identify candidate monosemantic features from pretrained neural networks for galaxy morphology. We demonstrate this on Euclid Q1 images using both supervised (Zoobot) and new self-supervised (MAE) models. Our publicly released MAE achieves superhuman image reconstruction performance. While a Principal Component Analysis (PCA) on the supervised model primarily identifies features already aligned with the Galaxy Zoo decision tree, SAEs can identify interpretable features outside of this framework. SAE features also show stronger alignment than PCA with Galaxy Zoo labels. Although challenges in interpretability remain, SAEs provide a powerful engine for discovering astrophysical phenomena beyond the confines of human-defined classification.
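The core mechanism, an overcomplete sparse encoding of a pretrained model's embeddings, can be sketched in a few lines. This is a minimal illustration only: the weights here are random and untrained, the embeddings are synthetic stand-ins (the paper uses Zoobot and MAE representations of Euclid images), the dimensions are hypothetical, and training (reconstruction loss plus a sparsity penalty) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_feat = 16, 64          # hypothetical sizes; d_feat >> d_model (overcomplete)
W_enc = rng.normal(0.0, 0.1, (d_model, d_feat))
b_enc = -0.05 * np.ones(d_feat)   # small negative bias nudges activations toward zero
W_dec = rng.normal(0.0, 0.1, (d_feat, d_model))

def sae_encode(x):
    # ReLU encoder: nonnegative feature activations, many exactly zero
    return np.maximum(0.0, x @ W_enc + b_enc)

def sae_decode(f):
    # linear decoder reconstructs the original embedding from the sparse features
    return f @ W_dec

x = rng.normal(size=(4, d_model))  # stand-in for a batch of galaxy embeddings
f = sae_encode(x)                  # candidate "monosemantic" feature activations
x_hat = sae_decode(f)              # reconstruction
sparsity = float((f > 0).mean())   # fraction of active features per input
```

After training, individual columns of the feature space can be inspected by finding the inputs that maximally activate them, which is how candidate interpretable features are surfaced.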



DistRAG: Towards Distance-Based Spatial Reasoning in LLMs

Schneider, Nicole R, Ramachandran, Nandini, O'Sullivan, Kent, Samet, Hanan

arXiv.org Artificial Intelligence

Many real-world tasks where Large Language Models (LLMs) can be used require spatial reasoning, like Point of Interest (POI) recommendation and itinerary planning. However, on their own, LLMs lack reliable spatial reasoning capabilities, especially about distances. To address this problem, we develop a novel approach, DistRAG, that enables an LLM to retrieve relevant spatial information not explicitly learned during training. Our method encodes the geodesic distances between cities and towns in a graph and retrieves a context subgraph relevant to the question. Using this technique, our method enables an LLM to answer distance-based reasoning questions that it otherwise cannot answer. Given the vast array of possible places an LLM could be asked about, DistRAG offers a flexible first step towards providing a rudimentary 'world model' to complement the linguistic knowledge held in LLMs.
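The retrieval step described above can be sketched as follows. This is an illustrative sketch under stated assumptions: a tiny hand-coded gazetteer, simple substring matching to find city mentions, and the haversine formula as a stand-in for geodesic distance. The paper's actual graph construction, entity matching, and subgraph retrieval may differ.

```python
import math

def geodesic_km(a, b):
    # great-circle distance via the haversine formula;
    # a and b are (latitude, longitude) pairs in degrees
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))  # Earth radius ~6371 km

COORDS = {  # hypothetical miniature gazetteer
    "Paris": (48.8566, 2.3522),
    "London": (51.5074, -0.1278),
    "Berlin": (52.5200, 13.4050),
}

def build_context(question, coords=COORDS):
    # retrieve the subgraph of pairwise distances among cities
    # mentioned in the question, formatted as text for the LLM prompt
    cities = [c for c in coords if c in question]
    lines = []
    for i, a in enumerate(cities):
        for b in cities[i + 1:]:
            d = geodesic_km(coords[a], coords[b])
            lines.append(f"distance({a}, {b}) = {d:.0f} km")
    return "\n".join(lines)

ctx = build_context("Which is closer to Paris: London or Berlin?")
```

The resulting context lines would be prepended to the question, letting the LLM answer by reading off distances rather than recalling them from its weights.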


Using Phonemes in cascaded S2S translation pipeline

Pilz, Rene, Schneider, Johannes

arXiv.org Artificial Intelligence

This paper explores the idea of using phonemes as a textual representation within a conventional multilingual simultaneous speech-to-speech translation pipeline, as opposed to the traditional reliance on text-based language representations. To investigate this, we trained an open-source sequence-to-sequence model on the WMT17 dataset in two formats: one using standard textual representation and the other employing phonemic representation. The performance of both approaches was assessed using the BLEU metric. Our findings show that the phonemic approach provides comparable quality while offering several advantages, including lower resource requirements and better suitability for low-resource languages.


Improving Next Tokens via Second-Last Predictions with Generate and Refine

Schneider, Johannes

arXiv.org Artificial Intelligence

Autoregressive language models like GPT aim at predicting next tokens, while autoencoding models such as BERT are trained on tasks such as predicting masked tokens. We train a decoder-only architecture for predicting the second-last token in a sequence of tokens. Our approach yields higher computational training efficiency than BERT-style models by employing a structured deterministic approach to masking tokens. We use our model to improve the next-token predictions of a standard GPT by combining both predictions in a "generate-then-refine" approach. We show on different variants of GPT-2 and different datasets that (not unexpectedly) second-last token predictions are much more accurate, i.e., more than 15% higher accuracy than ordinary next-token predictors. The "generate-then-refine" approach also demonstrates notable improvements in next-token predictions, yielding smaller yet consistent and significant gains.
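One way the two predictors might be combined can be sketched as a toy reranker: the base model proposes its top-k next-token candidates, and a refiner score reweights them. Here `refine_score` is a hypothetical callable standing in for the second-last-token model (plausibly, its probability of the now-known preceding token once a candidate is appended); this is a sketch of the general idea, not the authors' exact scheme, and the vocabulary and probabilities below are toy values.

```python
import numpy as np

def generate_then_refine(base_probs, refine_score, k=3):
    # take the base model's top-k next-token candidates...
    topk = np.argsort(base_probs)[::-1][:k]
    # ...and rerank them by base probability times the refiner's score
    scores = {int(t): float(base_probs[t]) * refine_score(int(t)) for t in topk}
    return max(scores, key=scores.get)

# toy distribution over a 4-token vocabulary from the "generate" step
base = np.array([0.5, 0.3, 0.15, 0.05])
# hypothetical refiner scores for each candidate token
refine = lambda t: [0.1, 0.9, 0.5, 0.2][t]

best = generate_then_refine(base, refine, k=3)
```

With these toy numbers the refiner overturns the base model's argmax (token 0) in favor of token 1, which both models jointly prefer; with a uniform refiner the base argmax is kept.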


'The Dukes of Hazzard' star John Schneider says AI cannot simulate 'heart' and 'soul'

FOX News

John Schneider tells Fox News Digital that he isn't afraid of artificial intelligence because it can't replicate the "heart" or the "soul." "What AI does not have and what AI cannot simulate is a heart, is a soul. So, I'm not afraid of AI," he told Fox News Digital. Schneider gave an analogy, comparing the technology to artificial dairy coffee creamer, to explain why he's not concerned. "A lot of people are talking about AI like it's this terrible, terrible thing that's coming in. I think it's powdered cream at best," he said.


How China's New AI Rules Could Affect U.S. Companies

TIME - Tech

Soon after China's artificial intelligence rules came into effect last month, a series of new AI chatbots began trickling onto the market, with government approval. The rules have already been watered down from what was initially proposed, and so far, China hasn't enforced them as strictly as it could, experts say. China's regulatory approach will likely have huge implications for the technological competition between the country and its AI superpower rival, the U.S. The Cyberspace Administration of China's (CAC) Generative AI Measures, which came into effect on Aug. 15, are some of the strictest in the world. They state that generative AI services should not generate content "inciting subversion of national sovereignty or the overturn of the socialist system," or "advocating terrorism or extremism, promoting ethnic hatred and ethnic discrimination, violence and obscenity, as well as fake and harmful information." Preventing AI chatbots from spewing out unwanted or even toxic content has been a challenge for AI developers around the world.


AI will be the political left's 'single greatest weapon' against religious faith and truth, says expert

FOX News

Angie Wisdom and Dr. Chirag Shah discuss how artificial intelligence could play a role in online and professional relationships. As national conversations around artificial intelligence (AI) intensify, faith leaders and scholars are examining the potential ramifications these emerging technologies will have on worship – both its practice and its role in modern life. Some experts and faith leaders are also concerned about whether religion will have any place in AI programming – or if the intellectual will eventually take precedence over the spiritual in society. It's possible and even probable, say experts. Dan Schneider, Media Research Center and Free Speech America vice president, is both blunt and emphatic in his assessment of AI. "The [political] left controls AI, and the left is going to [do] what the left wants to do," Schneider, whose organization is headquartered in Reston, Virginia, told Fox News Digital in a recent phone interview.