

A Hitchhiker's Guide to Poisson Gradient Estimation

Ibrahim, Michael, Zhao, Hanqi, Sennesh, Eli, Li, Zhi, Wu, Anqi, Yates, Jacob L., Li, Chengrui, Vafaii, Hadi

arXiv.org Machine Learning

Poisson-distributed latent variable models are widely used in computational neuroscience, but differentiating through discrete stochastic samples remains challenging. Two approaches address this: Exponential Arrival Time (EAT) simulation and Gumbel-SoftMax (GSM) relaxation. We provide the first systematic comparison of these methods, along with practical guidance for practitioners. Our main technical contribution is a modification to the EAT method that theoretically guarantees an unbiased first moment (exactly matching the firing rate), and reduces second-moment bias. We evaluate these methods on their distributional fidelity, gradient quality, and performance on two tasks: (1) variational autoencoders with Poisson latents, and (2) partially observable generalized linear models, where latent neural connectivity must be inferred from observed spike trains. Across all metrics, our modified EAT method exhibits better overall performance (often comparable to exact gradients), and substantially higher robustness to hyperparameter choices. Together, our results clarify the trade-offs between these methods and offer concrete recommendations for practitioners working with Poisson latent variable models.
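The Gumbel-SoftMax (GSM) route mentioned in the abstract can be sketched by truncating the Poisson support to {0, ..., K} and drawing a relaxed one-hot over counts. A minimal NumPy sketch of that idea follows; the function name and truncation level K are illustrative choices of ours, and the paper's modified EAT method is not reproduced here. In an autodiff framework, gradients with respect to the rate would flow through the softmax weights.

```python
import math
import numpy as np

def relaxed_poisson_sample(rate, tau=0.5, K=20, rng=None):
    """Gumbel-SoftMax relaxed sample from a Poisson truncated to {0, ..., K}.

    Lower temperatures tau push the relaxed count toward a hard integer
    sample; higher temperatures give smoother (but more biased) samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    ks = np.arange(K + 1)
    # Log-pmf of Poisson(rate) on the truncated support (unnormalized).
    log_p = ks * np.log(rate) - rate - np.array([math.lgamma(k + 1) for k in ks])
    # Standard Gumbel noise turns the argmax into an exact categorical sample;
    # the softmax below is its differentiable relaxation.
    gumbels = -np.log(-np.log(rng.uniform(size=K + 1)))
    logits = (log_p + gumbels) / tau
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return float(ks @ weights)  # relaxed (real-valued) count
```

With a small tau, the empirical mean of these relaxed samples should track the firing rate, which is exactly the first-moment property the paper's EAT modification guarantees by construction.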


What is Grok and why has Elon Musk's chatbot been accused of anti-Semitism?

Al Jazeera

Elon Musk's artificial intelligence company xAI has come under fire after its chatbot Grok stirred controversy with anti-Semitic responses to questions posed by users – just weeks after Musk said he would rebuild it because he felt it was too politically correct. On Friday last week, Musk announced that xAI had made significant improvements to Grok, promising a major upgrade "within a few days". Online tech news site The Verge reported that, by Sunday evening, xAI had already added new lines to Grok's publicly posted system prompts. By Tuesday, Grok had drawn widespread backlash after generating inflammatory responses – including anti-Semitic comments. One Grok user who asked "which 20th-century figure would be best suited to deal with this problem (anti-white hate)" received the anti-Semitic response: "To deal with anti-white hate? ..." Here's what we know about the Grok chatbot and the controversies it has caused. Grok, a chatbot created by xAI – the AI company Elon Musk ...


The final solution of the Hitchhiker's problem #5

Omladič, Matjaž, Vuk, Martin, Zalar, Aljaž

arXiv.org Machine Learning

The recent survey [2], nicknamed the "Hitchhiker's Guide", has raised the profile of quasi-copula problems in the dependence modeling community, in spite of the lack of a statistical interpretation of quasi-copulas. In our previous work we addressed the question of extreme values of the mass distribution associated with a multidimensional quasi-copula. Using a linear programming approach, we were able to settle [2, Open Problem 5] up to d = 17 and disprove a recent conjecture from [25] on the solution to that problem. In this note we use an analytical approach to provide a complete answer to the original question.


A Hitchhiker's Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning

Neural Information Processing Systems

Explainability in artificial intelligence is crucial for restoring trust, particularly in areas like face forgery detection, where viewers often struggle to distinguish between real and fabricated content. Vision and Large Language Models (VLLM) bridge computer vision and natural language, offering numerous applications driven by strong common-sense reasoning. Despite their success in various tasks, the potential of vision and language remains underexplored in face forgery detection, where they hold promise for enhancing explainability by leveraging the intrinsic reasoning capabilities of language to analyse fine-grained manipulation areas. Consequently, a few works have recently begun to frame deepfake detection as a Visual Question Answering (VQA) task, though they omit the realistic and informative open-ended multi-label setting. With the rapid advances in the field of VLLM, an exponential rise of investigations in that direction is expected.


A Hitchhiker's Guide to Deep Chemical Language Processing for Bioactivity Prediction

Özçelik, Rıza, Grisoni, Francesca

arXiv.org Artificial Intelligence

Deep learning has significantly accelerated drug discovery, with 'chemical language' processing (CLP) emerging as a prominent approach. CLP learns from molecular string representations (e.g., Simplified Molecular Input Line Entry System [SMILES] and Self-Referencing Embedded Strings [SELFIES]) with methods akin to natural language processing. Despite their growing importance, training predictive CLP models is far from trivial, as it involves many 'bells and whistles'. Here, we analyze the key elements of CLP training, to provide guidelines for newcomers and experts alike. Our study spans three neural network architectures, two string representations, and three embedding strategies, across ten bioactivity datasets, for both classification and regression purposes. This 'hitchhiker's guide' not only underscores the importance of certain methodological choices, but it also equips researchers with practical recommendations on ideal choices, e.g., in terms of neural network architectures, molecular representations, and hyperparameter optimization.
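To make the 'molecular string' idea concrete, below is a minimal character-level SMILES encoding of the kind such CLP pipelines start from. The function names and toy molecules are illustrative, and real tokenizers also handle multi-character atoms such as Cl and Br; this is a sketch of the preprocessing step, not the paper's pipeline.

```python
def build_vocab(smiles_list):
    # Character-level vocabulary; index 0 is reserved for padding.
    chars = sorted({c for s in smiles_list for c in s})
    return {c: i + 1 for i, c in enumerate(chars)}

def encode(smiles, vocab, max_len):
    # Map characters to integer ids and right-pad to a fixed length,
    # ready to feed an embedding layer.
    ids = [vocab[c] for c in smiles[:max_len]]
    return ids + [0] * (max_len - len(ids))

smiles = ["CCO", "c1ccccc1", "CC(=O)O"]  # ethanol, benzene, acetic acid
vocab = build_vocab(smiles)
encoded = [encode(s, vocab, max_len=10) for s in smiles]
```

The fixed-length integer sequences produced here are what the embedding strategies compared in the study would consume.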


Learning with 3D rotations, a hitchhiker's guide to SO(3)

Geist, A. René, Frey, Jonas, Zhobro, Mikel, Levina, Anna, Martius, Georg

arXiv.org Artificial Intelligence

Many settings in machine learning require the selection of a rotation representation. However, choosing a suitable representation from the many available options is challenging. This paper acts as a survey and guide through rotation representations. We walk through their properties that harm or benefit deep learning with gradient-based optimization. By consolidating insights from rotation-based learning, we provide a comprehensive overview of learning functions with rotation representations. We provide guidance on selecting representations based on whether rotations are in the model's input or output and whether the data primarily comprises small angles.
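One representation such guides commonly discuss is the continuous 6D parameterization, mapped to a rotation matrix by Gram-Schmidt orthonormalization. A minimal NumPy sketch follows; the function name is ours, and this is just one of the several options such a survey compares.

```python
import numpy as np

def rotmat_from_6d(x):
    """Map a 6D vector to a rotation matrix via Gram-Schmidt.

    This continuous representation avoids the discontinuities of Euler
    angles and quaternions, which is why gradient-based learning often
    prefers it for rotations in a model's output.
    """
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)          # first column: normalized a1
    b2 = a2 - (b1 @ a2) * b1              # remove the component along b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)                 # completes a right-handed frame
    return np.stack([b1, b2, b3], axis=1)  # columns form an orthonormal basis
```

Any 6D vector with linearly independent halves maps to a valid rotation, so a network can regress the 6 numbers freely without extra constraints.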


Drone Delivery Sparks Chaos in Hilarious Sci-Fi Novel Deliver Us

WIRED

Deliver Us, a 2018 novel by Christopher Robinson and Gavin Kovite, takes a hilarious look at the future of drone delivery. The plot revolves around a social media activist named Piper Prince who attempts to stop Amazon from taking over her Detroit neighborhood. "It's written in a Coen brothers sort of tone," Robinson says in Episode 561 of the Geek's Guide to the Galaxy podcast. "I wanted the world and the characters to be slightly pitched up from reality. So Jeff Bezos and his S-Team are characters in the book, and they are a little bit like the boardroom characters from The Hudsucker Proxy." Robinson sees Detroit as the perfect setting for a novel about the collision between social justice activism and breakneck technological disruption, given the city's rich history and uncertain future. "It's a place that was the arsenal of democracy," he says. "The Jetsons future is a future that was extrapolated from what Detroit used to be."


Kiltmaker uses AI to design a new tartan - and it's already been accepted onto the official Scottish Register

Daily Mail - Science & tech

In the year since its release, ChatGPT has already been used to draft essays, create beer, write best man speeches and even prescribe antibiotics. Now, a kiltmaker has used the artificial intelligence (AI) tool to design a new tartan – and it's already been accepted onto the official Scottish Register. Steven Sim, 52, a former graphic designer based in Arbroath, said he was simply 'blown away' by the chatbot's intelligence. The creation features prominent red, to represent the 'passion that drives AI development', and gold for 'the brilliance and illumination AI brings to the world'. Also included in the swish design are several hidden references to AI and science fiction, including 'The Hitchhiker's Guide to the Galaxy'.


Elon Musk Loves The Hitchhiker's Guide to the Galaxy. Um, Has He Read It?

Slate

Over the weekend, Elon Musk announced the first major product from his artificial-intelligence outfit xAI: Grok, a ChatGPT-like bot available in beta mode for users who are subscribed to the $16-a-month Premium plan on his social network X. This newest entrant in the chatbot arms race takes as its name a term from the libertarian science-fiction classic that's long been one of Musk's favorites, Robert A. Heinlein's Stranger in a Strange Land. But its actual output, Musk says, takes inspiration from Douglas Adams' The Hitchhiker's Guide to the Galaxy, another foundational novel for the Tesla and SpaceX boss. Musk's many, many companies often reference terms he is attached to on either a personal level (the letter X) or just finds funny (his frequent callbacks to old-school memes). But this one is kind of confounding, and not just because Stranger and Hitchhiker's are only comparable works insofar as they are both influential sci-fi novels. As xAI's announcement put it: "Grok is an AI modeled after The Hitchhiker's Guide to the Galaxy, so intended to answer almost anything and, far harder, even suggest what questions to ask! Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don't use it if you hate humor!"


Elon Musk unveils Grok, an AI chatbot with a 'rebellious streak'

The Guardian

Elon Musk has unveiled Grok, an artificial intelligence chatbot with a "rebellious streak" inspired by The Hitchhiker's Guide to the Galaxy. The Tesla CEO, who warned last week that AI was "one of the biggest threats to humanity", said the competitor to ChatGPT would be made available to premium subscribers on his X platform after testing. Musk also revealed that Grok had access to user posts on X, which he owns, and has a penchant for sarcastic responses, posting: "Grok has real-time access to info via the X platform. It's also based & loves sarcasm. I have no idea who could have guided it this way."