

Luxury jet replaces cabin windows with video screens

Popular Science

The panoramic virtual views are meant to eliminate the drag and energy waste associated with windows. For the average air traveler, window seats are often considered prime real estate, so much so that they've sparked several dramatic mid-flight skirmishes over the years. But that's not quite the case for plane manufacturers, who have long viewed those coveted oval portholes more as design obstacles to be overcome. While windows are pleasant for passengers, they create structural weak points that require extra reinforcement and add weight.


Hallucinating with AI: AI Psychosis as Distributed Delusions

Osler, Lucy

arXiv.org Artificial Intelligence

There is much discussion of the false outputs that generative AI systems such as ChatGPT, Claude, Gemini, DeepSeek, and Grok create. In popular terminology, these have been dubbed AI hallucinations. However, deeming these AI outputs hallucinations is controversial, with many claiming this is a metaphorical misnomer. Nevertheless, in this paper, I argue that when viewed through the lens of distributed cognition theory, we can better see the dynamic and troubling ways in which inaccurate beliefs, distorted memories and self-narratives, and delusional thinking can emerge through human-AI interactions; examples of which are popularly being referred to as cases of AI psychosis. In such cases, I suggest we move away from thinking about how an AI system might hallucinate at us, by generating false outputs, to thinking about how, when we routinely rely on generative AI to help us think, remember, and narrate, we can come to hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but it can also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives, such as in the case of Jaswant Singh Chail. I also examine how the conversational style of chatbots can lead them to play a dual-function, both as a cognitive artefact and a quasi-Other with whom we co-construct our beliefs, narratives, and our realities. It is this dual function, I suggest, that makes generative AI an unusual, and particularly seductive, case of distributed cognition.


Industry Insights from Comparing Deep Learning and GBDT Models for E-Commerce Learning-to-Rank

Lutz, Yunus, Wilm, Timo, Duwe, Philipp

arXiv.org Artificial Intelligence

In e-commerce recommender and search systems, tree-based models, such as LambdaMART, have set a strong baseline for Learning-to-Rank (LTR) tasks. Despite their effectiveness and widespread adoption in industry, the debate continues over whether deep neural networks (DNNs) can outperform traditional tree-based models in this domain. To contribute to this discussion, we systematically benchmark DNNs against our production-grade LambdaMART model. We evaluate multiple DNN architectures and loss functions on a proprietary dataset from OTTO and validate our findings through an 8-week online A/B test. The results show that a simple DNN architecture outperforms a strong tree-based baseline in terms of total clicks and revenue, while achieving parity in total units sold.
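The abstract does not spell out which loss functions were evaluated, but the listwise family it refers to can be illustrated. Below is a minimal sketch of a ListNet-style (top-1) listwise loss in plain Python; this is one common choice for DNN rankers, not necessarily the exact loss used in the paper:

```python
import math

def listnet_loss(scores, relevance):
    """ListNet-style listwise loss: cross-entropy between the softmax of
    predicted scores and the softmax of graded relevance labels."""
    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]
    p_pred = softmax(scores)
    p_true = softmax(relevance)
    return -sum(t * math.log(p + 1e-12) for t, p in zip(p_true, p_pred))

# A ranker that orders items consistently with relevance incurs a lower loss.
good = listnet_loss([3.0, 1.0, 0.1], [2.0, 1.0, 0.0])
bad = listnet_loss([0.1, 1.0, 3.0], [2.0, 1.0, 0.0])
```

Unlike pointwise or pairwise objectives, the loss here depends on the entire ranked list at once, which is why such losses often pair well with neural rankers.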


The Influencer Next Door: How Misinformation Creators Use GenAI

Hassoun, Amelia, Abonizio, Ariel, Osborn, Katy, Wu, Cameron, Goldberg, Beth

arXiv.org Artificial Intelligence

Advances in generative AI (GenAI) have raised concerns about detecting and discerning AI-generated content from human-generated content. Most existing literature assumes a paradigm where 'expert' organized disinformation creators and flawed AI models deceive 'ordinary' users. Based on longitudinal ethnographic research with misinformation creators and consumers between 2022 and 2023, we instead find that GenAI supports bricolage work, where non-experts increasingly use GenAI to remix, repackage, and (re)produce content to meet their personal needs and desires. This research yielded four key findings: First, participants primarily used GenAI for creation, rather than truth-seeking. Second, a spreading 'influencer millionaire' narrative drove participants to become content creators, using GenAI as a productivity tool to generate a volume of (often misinformative) content. Third, GenAI lowered the barrier to entry for content creation across modalities, enticing consumers to become creators and significantly increasing existing creators' output. Finally, participants used GenAI to learn and deploy marketing tactics to expand engagement and monetize their content. We argue for shifting analysis from the public as consumers of AI content to bricoleurs who use GenAI creatively, often without a detailed understanding of its underlying technology. We analyze how these understudied emergent uses of GenAI produce new or accelerated misinformation harms, and their implications for AI products, platforms and policies.


Offline Trajectory Generalization for Offline Reinforcement Learning

Zhao, Ziqi, Ren, Zhaochun, Yang, Liu, Yuan, Fajie, Ren, Pengjie, Chen, Zhumin, Ma, Jun, Xin, Xin

arXiv.org Artificial Intelligence

Offline reinforcement learning (RL) aims to learn policies from static datasets of previously collected trajectories. Existing methods for offline RL either constrain the learned policy to the support of offline data or utilize model-based virtual environments to generate simulated rollouts. However, these methods suffer from (i) poor generalization to unseen states; and (ii) marginal improvement from low-quality rollout simulations. In this paper, we propose offline trajectory generalization through world transformers for offline reinforcement learning (OTTO). Specifically, we use causal Transformers, a.k.a. World Transformers, to predict state dynamics and the immediate reward. Then we propose four strategies that use World Transformers to generate high-reward simulated trajectories by perturbing the offline data. Finally, we jointly use offline data with simulated data to train an offline RL algorithm. OTTO serves as a plug-in module and can be integrated with existing offline RL methods to enhance them with the better generalization capability of Transformers and high-reward data augmentation. Through extensive experiments on D4RL benchmark datasets, we verify that OTTO significantly outperforms state-of-the-art offline RL methods.
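The perturb-simulate-filter loop the abstract describes can be sketched at a high level. This is an illustration only: the paper uses learned Transformer world models and four distinct perturbation strategies, whereas here the world model is a stand-in function and the perturbation is simple Gaussian noise on actions:

```python
import random

def augment_trajectories(offline_trajs, world_model, n_perturb=4, keep=2, seed=0):
    """Toy OTTO-style augmentation: perturb the actions of offline
    trajectories, roll them out through a world model (a stand-in function
    mapping (state, action) -> (next_state, reward)), and keep only the
    highest-return simulated trajectories alongside the original data."""
    rng = random.Random(seed)
    simulated = []
    for traj in offline_trajs:                 # traj: list of (s, a, r)
        for _ in range(n_perturb):
            state = traj[0][0]                 # restart from the first state
            rollout, ret = [], 0.0
            for (_, action, _) in traj:
                action = action + rng.gauss(0, 0.1)   # perturb offline action
                state, reward = world_model(state, action)
                rollout.append((state, action, reward))
                ret += reward
            simulated.append((ret, rollout))
    simulated.sort(key=lambda x: -x[0])        # filter to high-reward rollouts
    return offline_trajs + [r for _, r in simulated[:keep]]

# Hypothetical world model rewarding states near zero, plus one offline trajectory.
world_model = lambda s, a: (s + a, -abs(s + a))
offline = [[(0.0, 0.5, -0.5), (0.5, -0.5, 0.0)]]
augmented = augment_trajectories(offline, world_model)
```

The filtering step is the key idea: only simulated trajectories whose model-predicted return is high are mixed back into the training set, which is what lets the augmentation improve rather than dilute the offline data.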


Scaling Session-Based Transformer Recommendations using Optimized Negative Sampling and Loss Functions

Wilm, Timo, Normann, Philipp, Baumeister, Sophie, Kobow, Paul-Vincent

arXiv.org Artificial Intelligence

This work introduces TRON, a scalable session-based Transformer Recommender using Optimized Negative-sampling. Motivated by the scalability and performance limitations of prevailing models such as SASRec and GRU4Rec+, TRON integrates top-k negative sampling and listwise loss functions to enhance its recommendation accuracy. Evaluations on relevant large-scale e-commerce datasets show that TRON improves upon the recommendation quality of current methods while maintaining training speeds similar to SASRec. A live A/B test yielded an 18.14% increase in click-through rate over SASRec, highlighting the potential of TRON in practical settings. For further research, we provide access to our source code at https://github.com/otto-de/TRON and an anonymized dataset at https://github.com/otto-de/recsys-dataset.
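TRON's actual implementation lives in the linked repository; as a rough illustration of the top-k negative sampling idea (draw uniform candidate negatives, keep only the k hardest, then compute a sampled softmax over the positive plus those negatives), under the assumption that `score` is the model's item-scoring function:

```python
import math
import random

def topk_negative_loss(score, pos_item, catalog, k=3, n_candidates=8, seed=0):
    """Sketch of top-k negative sampling: sample uniform candidate
    negatives, retain the k highest-scoring ("hardest") ones, and return
    the sampled-softmax loss for the positive item."""
    rng = random.Random(seed)
    negatives = [i for i in rng.sample(catalog, n_candidates) if i != pos_item]
    hardest = sorted(negatives, key=score, reverse=True)[:k]
    logits = [score(pos_item)] + [score(i) for i in hardest]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]   # -log softmax probability of the positive

# A model that scores the positive item highly incurs a small loss.
strong = topk_negative_loss(lambda i: 5.0 if i == 0 else 0.0, 0, list(range(100)))
weak = topk_negative_loss(lambda i: 0.0, 0, list(range(100)))
```

Keeping only the hardest sampled negatives concentrates gradient signal on items the model currently confuses with the positive, which is what makes the approach both cheap and effective at scale.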


Building a Recommender System using Machine Learning

#artificialintelligence

Welcome to the first edition of a new article series called "The Kaggle Blueprints", where we will analyze Kaggle competitions' top solutions for lessons we can apply to our own data science projects. This first edition will review the techniques and approaches from the "OTTO -- Multi-Objective Recommender System" competition, which ended in late January 2023. The goal of the "OTTO -- Multi-Objective Recommender System" competition was to build a multi-objective recommender system (RecSys) based on a large dataset of implicit user data. One of the main challenges of this competition was the large number of items to choose from. Feeding all of the available information into a complex model would require extensive computational resources.
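The article doesn't reproduce the winning solutions' code, but a common first stage in such large-catalog competitions is co-visitation-based candidate generation, which narrows the item pool before any heavy model runs. A minimal sketch of the idea:

```python
from collections import Counter, defaultdict
from itertools import combinations

def covisitation_candidates(sessions, topn=2):
    """Count how often pairs of items appear in the same session, then
    propose each item's most frequent co-visitors as candidates for a
    downstream ranking model."""
    covis = defaultdict(Counter)
    for session in sessions:
        for a, b in combinations(set(session), 2):
            covis[a][b] += 1
            covis[b][a] += 1
    return {item: [i for i, _ in c.most_common(topn)] for item, c in covis.items()}

# Three toy sessions: items 1 and 2 co-occur twice, items 1 and 3 only once.
cands = covisitation_candidates([[1, 2, 3], [1, 2], [2, 3]])
```

Because the co-visitation counts are computed once over the interaction log, the expensive ranking model only ever sees a few hundred candidates per user instead of the full catalog.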


Extended Intelligence

Barack, David L, Jaegle, Andrew

arXiv.org Artificial Intelligence

We argue that intelligence -- construed as the disposition to perform tasks successfully -- is a property of systems composed of agents and their contexts. This is the thesis of extended intelligence. We argue that the performance of an agent will generally not be preserved if its context is allowed to vary. Hence, this disposition is not possessed by an agent alone, but is rather possessed by the system consisting of an agent and its context, which we dub an agent-in-context. An agent's context may include an environment, other agents, cultural artifacts (like language, technology), or all of these, as is typically the case for humans and artificial intelligence systems, as well as many non-human animals. In virtue of the thesis of extended intelligence, we contend that intelligence is context-bound, task-particular and incommensurable among agents. Our thesis carries strong implications for how intelligence is analyzed in the context of both psychology and artificial intelligence.


Where AI, Blockchain and IoT are Now

#artificialintelligence

If the wave of new technologies in the food industry makes you feel like a lion searching for courage, you're not alone. "The food industry is kind of technology averse," Dr. David Acheson, CEO of The Acheson Group, told Quality Assurance & Food Safety magazine in December 2020. "They don't embrace new technology." But leveraging these tools, such as using blockchain for tracebacks or Internet of Things (IoT) devices to monitor a facility in real time, can have implications for several parts of the food industry. It's also important to understand that tech isn't coming to replace you or all of your processes.


Weekly Brief: Levandowski – Once Upon Today in America – TU Automotive

#artificialintelligence

Former Waymo and Uber self-driving car whiz kid Anthony Levandowski was sentenced last week to 18 months in federal prison for stealing trade secrets. Levandowski will also pay a $95,000 fine and $756,499.22 in restitution to Waymo. He co-founded Google's self-driving car program, now Waymo, in 2009 and served as the program's technical lead until January 2016, when he left to co-found self-driving truck start-up Otto. Seven months later Uber acquired Otto for $680M and named Levandowski the head of its self-driving car division. He was on top of the tech world. He appeared in Wired Magazine as the go-to voice in Silicon Valley for self-driving cars and LiDAR technology.