Weirdness


'A lump of metal? Fascinating': I get interviewed by the AI Michael Parkinson

The Guardian

Ask anyone who regularly interviews people and they'll tell you that few things are stranger than when the tables turn and you're the one being interviewed. This is especially true when the person interviewing you has been dead for a year and a half. Virtually Parkinson is a new podcast in which celebrities are interviewed by an AI model trained to speak and act like the late Michael Parkinson. The announcement of the podcast last year prompted a flurry of vaguely apocalyptic reactions. It was sacrilegious, some said, tantamount to digging up and reanimating a national treasure against his will. It was pointless, others said – of all the transformative ways to use AI, you're blowing it on a podcast?


'Starfield' Will Be the Meme Game for Years to Come

WIRED

For the past five years, the YouTuber Bacon_ has been uploading funny video game clips, nearly all of which come from titles made by Bethesda Game Studios. With the release of Starfield this week, Bacon_ has new fodder. "Just trying to get through my shift," which was posted four days ago, shows a Starfield NPC pounding a mining laser into his colleague's crotch. "So Starfield is out, and it's definitely a Bethesda game," Bacon_ commented. For video games, technical difficulties come with the territory.


Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images

Bitton-Guetta, Nitzan, Bitton, Yonatan, Hessel, Jack, Schmidt, Ludwig, Elovici, Yuval, Stanovsky, Gabriel, Schwartz, Roy

arXiv.org Artificial Intelligence

Weird, unusual, and uncanny images pique the curiosity of observers because they challenge common sense. For example, an image released during the 2022 World Cup depicts the famous soccer stars Lionel Messi and Cristiano Ronaldo playing chess, which playfully violates our expectation that their competition should occur on the football field. Humans can easily recognize and interpret these unconventional images, but can AI models do the same? We introduce WHOOPS!, a new dataset and benchmark for visual commonsense. The dataset comprises purposefully commonsense-defying images created by designers using publicly available image generation tools like Midjourney. We consider several tasks posed over the dataset. In addition to image captioning, cross-modal matching, and visual question answering, we introduce a difficult explanation generation task, where models must identify and explain why a given image is unusual. Our results show that state-of-the-art models such as GPT3 and BLIP2 still lag behind human performance on WHOOPS!. We hope our dataset will inspire the development of AI models with stronger visual commonsense reasoning abilities. Data, models and code are available at the project website: whoops-benchmark.github.io
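To picture what the explanation-generation task involves, here is a minimal sketch using an off-the-shelf BLIP-2 checkpoint from Hugging Face transformers. The image path and prompt wording are illustrative placeholders (the WHOOPS! images themselves are distributed via the project website), and this is not the benchmark's official evaluation code.

# Sketch: ask a BLIP-2 model why a commonsense-defying image is unusual.
# Image path and prompt are placeholder assumptions, not benchmark code.
from PIL import Image
import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=dtype
).to(device)

image = Image.open("messi_ronaldo_chess.jpg")  # placeholder commonsense-defying image
prompt = "Question: What is unusual about this image? Answer:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)
inputs["pixel_values"] = inputs["pixel_values"].to(dtype)  # match the model's dtype

generated_ids = model.generate(**inputs, max_new_tokens=60)
explanation = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(explanation)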


EA & LW Forum Weekly Summary (6th - 19th Feb 2023) - EA Forum

#artificialintelligence

Supported by Rethink Priorities • This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on pur…


Boosting classification reliability of NLP transformer models in the long run

Kmetty, Zoltán, Kollányi, Bence, Boros, Krisztián

arXiv.org Artificial Intelligence

A key goal of machine learning projects is some form of classification of the input data. Typically, both the training data and the data to be classified come from the same period and the same dataset. In practice, however, a classifier is often applied to a different dataset and/or a different period. The need for, and feasibility of, extending classification over time is reinforced by increasing digitization, as updated datasets become available more frequently for industrial and scientific research. But how long does a classification remain reliable, and when is it worth retraining the model? Since the much-quoted Google Flu case, every researcher using machine learning knows that models should not be blindly trusted and that it is crucial to revise them in time (Lazer et al., 2014). The need for retraining may be particularly relevant where the domain under study changes rapidly. This problem is common in natural language processing (NLP) projects, as language and language use on a given topic can change within a short period (Kulkarni et al., 2015). Furthermore, neural-network-based black-box models complicate matters: they offer little insight into how a particular classification model works, making it harder to identify which changes in content or context might cause the model to break down. This black-box character affects the transformer-based NLP models currently used for classification tasks in data-mining projects. Transformer-based machine learning models have become an important tool for many NLP tasks since the introduction of the method (Vaswani et al., 2017).
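As a rough illustration of the monitoring problem the paper motivates, the sketch below scores an already fine-tuned transformer classifier on labelled data from successive months and flags when performance drops enough to justify retraining. The model path, the column names, and the 0.8 macro-F1 threshold are assumptions for illustration, not the authors' actual protocol.

# Sketch: track a classifier's reliability over time and flag when to retrain.
import pandas as pd
from sklearn.metrics import f1_score
from transformers import pipeline

# Previously fine-tuned classifier (path is a placeholder assumption).
classifier = pipeline("text-classification", model="./fine_tuned_classifier")

# Labelled data with timestamp, text and gold-label columns (assumed schema);
# gold labels are assumed to use the same label names the model predicts.
data = pd.read_csv("labelled_posts.csv", parse_dates=["created_at"])
data["month"] = data["created_at"].dt.to_period("M")

RETRAIN_THRESHOLD = 0.8  # illustrative macro-F1 cut-off, not from the paper

for month, chunk in data.groupby("month"):
    preds = [p["label"] for p in classifier(chunk["text"].tolist(), truncation=True)]
    score = f1_score(chunk["label"], preds, average="macro")
    flag = "  <- consider retraining" if score < RETRAIN_THRESHOLD else ""
    print(f"{month}: macro F1 = {score:.3f}{flag}")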


Weird Dreams Train Our Brains to Be Better Learners - Facts So Romantic

Nautilus

Neural networks need to “dream” of weird, senseless examples to learn well. Maybe we do, too.


So … What If Aliens' Quantum Computers Explain Dark Energy?

WIRED

When I lived in the Bay Area, I used to get together with my friend Jaron Lanier to explore the implications of spectacularly weird thought experiments. Outlandish thought experiments have been essential in the intellectual history of science, but the point isn't the weirdness itself. The payoff of thinking about strange things like Schrödinger's cat, the infamous cat that is alive and dead at the same time, is not necessarily that we should then "believe" in the existence of such a cat. Instead, we can hope that uncommon ideas will shed light on the murky margins of our thoughts; in the case of Schrödinger's cat, on the question of superposition. The point is not to confuse or bamboozle people, but to eventually find a way of thinking that makes more sense and is a little less murky.


Why are dreams so strange? A theory based on artificial intelligence offers an explanation

#artificialintelligence

For decades, countless explanations have been proposed for this phenomenon, but the scientific community has yet to reach a consensus. Recently, Erik Hoel, a research assistant professor of neuroscience at Tufts University (USA), added his own theory to the list. Published in the scientific journal Patterns, Hoel's hypothesis is inspired by the techniques used to train deep neural networks and suggests that the weirdness of our dreams helps the brain better adapt to our everyday experiences. The theory, Hoel says, holds that it is the very strangeness of the dream experience that gives dreams their function. To support the argument, Hoel draws on how AI models are trained.
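The machine-learning idea the article gestures at can be made concrete: feeding a network noisy, corrupted variants of its training data, and randomly silencing units with dropout, is a standard way to keep it from overfitting its everyday data. The PyTorch sketch below is only that analogy made runnable, with made-up data; it is not Hoel's model of the brain.

# Sketch: noise injection and dropout as anti-overfitting "dreaming" (analogy only).
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly silence units: a noise-based regularizer
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 32)            # stand-in for ordinary "waking" experience
y = torch.randint(0, 2, (256,))

for epoch in range(20):
    noisy_x = x + 0.3 * torch.randn_like(x)  # corrupted, "weirder" versions of the data
    optimizer.zero_grad()
    loss = loss_fn(model(noisy_x), y)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.3f}")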


The Future Is Weird and So Are You

#artificialintelligence

Our minds are wired to think linearly about the future. However, the future is weird, non-linear, and unpredictable. The innovations we are currently witnessing are wild and non-linear. Exponential changes are hard to grasp, and our biological minds are not well equipped to deal with them. A penny doubled every day for 31 days grows to $10,737,418.24.
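That figure is easy to check: one cent on day 1, doubled each day through day 31, is 2^30 cents. A few illustrative lines of Python reproduce it:

# One cent doubled once a day from day 1 to day 31 is 2**30 cents.
value_cents = 1
for day in range(2, 32):  # days 2 through 31: thirty doublings
    value_cents *= 2
print(f"${value_cents / 100:,.2f}")  # prints $10,737,418.24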


Weird AI illustrates why algorithms still need people

#artificialintelligence

These days, it can be very hard to determine where to draw the boundaries around artificial intelligence. What it can and can't do is often unclear, and so is where its future is headed. In fact, there is also a lot of confusion surrounding what AI really is. Marketing departments have a tendency to work AI into their messaging somehow and to rebrand old products as "AI and machine learning." The box office is filled with movies about sentient AI systems and killer robots that plan to conquer the universe.