
Janet


"Final Boy," by Sam Lipsyte

The New Yorker

Thing is, I've been trying to find a moment to write down what happened to Bennett and me for a while now, but the demands of my audience rarely abate. I've hardly time to jot down a grocery list, let alone compose a personal chronicle. Bennett says I'm practically the Charles (as in Dickens) of scribblers devoted to mining the rich vein of a certain underappreciated sitcom of the nineteen-eighties, but I will leave that for history to judge. Besides, what does Bennett know? Just before he got that way, I was in Amok Mocha, where I like to sip cold brew and do my "C: FB" conjuring, and I struck up a conversation with a young woman who confessed to being a creative-writing student. She told me that in her workshop they talk about the "occasion" of the story. Why is the narrator telling this tale now? What pressures or conditions have coalesced to move a person to speak? I feigned ignorance of the concept, though I'd heard it often in my own writing classes long ago. Instead, I told her that, if the installment I was presently crafting flowed from any occasion, it was this: Charles is anxious about the imminent disintegration of the universe via the ever-increasing tug of dark matter. Moreover, he's ticked off that his best buddy, Buddy, doesn't seem perturbed by the prospect. "How imminent?" the woman said, and sipped her Balkan, a new offering at Amok. When I informed her that he was the titular hero of "Charles in Charge," the most criminally uncelebrated television program of the Reagan era, the woman pursed her lips. "We all write fan fiction," I told her. "Some of us are just more honest about it." The young woman gathered up her belongings, moved to another table. Did she think I was being facetious? Still, if there is an occasion for the story I'm relating now, it's a bit nearer on the space-time continuum. 
My best buddy, Bennett, is in a vegetative state induced by an anoxic brain injury, and, if he doesn't wake up soon and vouch for me, I could be kicked out of our apartment.


JANET: Joint Adaptive predictioN-region Estimation for Time-series

English, Eshant, Wong-Toi, Eliot, Fontana, Matteo, Mandt, Stephan, Smyth, Padhraic, Lippert, Christoph

arXiv.org Machine Learning

Conformal prediction provides machine learning models with prediction sets that offer theoretical guarantees, but the underlying assumption of exchangeability limits its applicability to time series data. Furthermore, existing approaches struggle to handle multi-step ahead prediction tasks, where uncertainty estimates across multiple future time points are crucial. We propose JANET (Joint Adaptive predictioN-region Estimation for Time-series), a novel framework for constructing conformal prediction regions that are valid for both univariate and multivariate time series. JANET generalises the inductive conformal framework and efficiently produces joint prediction regions with controlled K-familywise error rates, enabling flexible adaptation to specific application needs. Our empirical evaluation demonstrates JANET's superior performance in multi-step prediction tasks across diverse time series datasets, highlighting its potential for reliable and interpretable uncertainty quantification in sequential data.
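The inductive (split) conformal idea that JANET generalises can be sketched in a few lines: hold out a calibration set, compute residuals, and use a finite-sample-corrected quantile of those residuals as the prediction-interval half-width. This is a minimal one-step, univariate baseline, not JANET's joint multi-step regions; the function name and data are illustrative.

```python
import numpy as np

def split_conformal_interval(cal_residuals, alpha=0.1):
    """Half-width giving >= 1 - alpha coverage under exchangeability
    (standard split/inductive conformal prediction, not JANET itself)."""
    n = len(cal_residuals)
    # Finite-sample corrected quantile level: ceil((n+1)(1-alpha)) / n
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_residuals, min(q_level, 1.0), method="higher")

# Toy usage: residuals |y - y_hat| from a held-out calibration set
rng = np.random.default_rng(0)
residuals = np.abs(rng.normal(0.0, 1.0, size=200))
q = split_conformal_interval(residuals, alpha=0.1)
# Prediction interval for a new point is then [y_hat - q, y_hat + q]
```

JANET's contribution lies in extending this single-interval construction to joint regions over multiple future time steps with controlled K-familywise error, which the sketch above does not attempt.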


Think-in-Memory: Recalling and Post-thinking Enable LLMs with Long-Term Memory

Liu, Lei, Yang, Xiaoyan, Shen, Yue, Hu, Binbin, Zhang, Zhiqiang, Gu, Jinjie, Zhang, Guannan

arXiv.org Artificial Intelligence

Memory-augmented Large Language Models (LLMs) have demonstrated remarkable performance in long-term human-machine interactions, which basically relies on iterative recalling and reasoning over history to generate high-quality responses. However, such repeated recall-reason steps easily produce biased thoughts, i.e., inconsistent reasoning results when recalling the same history for different questions. In contrast, humans can keep thoughts in memory and recall them without repeated reasoning. Motivated by this human capability, we propose a novel memory mechanism called TiM (Think-in-Memory) that enables LLMs to maintain an evolving memory for storing historical thoughts along the conversation stream. The TiM framework consists of two crucial stages: (1) before generating a response, an LLM agent recalls relevant thoughts from memory, and (2) after generating a response, the LLM agent post-thinks and incorporates both historical and new thoughts to update the memory. Thus, TiM can eliminate the issue of repeated reasoning by saving the post-thinking thoughts as the history. In addition, we formulate basic principles for organizing the thoughts in memory based on well-established operations (i.e., insert, forget, and merge), allowing for dynamic updates and evolution of the thoughts. Furthermore, we introduce Locality-Sensitive Hashing into TiM to achieve efficient retrieval for long-term conversations. We conduct qualitative and quantitative experiments on real-world and simulated dialogues covering a wide range of topics, demonstrating that equipping existing LLMs with TiM significantly enhances their performance in generating responses for long-term interactions.
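The retrieval side of the abstract — Locality-Sensitive Hashing over stored thoughts — can be illustrated with a toy random-hyperplane index. This is a simplified sketch of the general LSH idea, not the TiM implementation; the class and example strings are hypothetical.

```python
import numpy as np

class LSHMemory:
    """Toy random-hyperplane LSH index over thought embeddings
    (a sketch of the retrieval idea, not the TiM code)."""
    def __init__(self, dim, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))
        self.buckets = {}  # hash key -> list of (text, vector)

    def _key(self, vec):
        # Sign pattern of projections onto random hyperplanes
        return tuple((self.planes @ vec > 0).astype(int))

    def insert(self, text, vec):
        self.buckets.setdefault(self._key(vec), []).append((text, vec))

    def recall(self, vec):
        # Only thoughts in the same bucket are candidates,
        # so lookup cost scales with the bucket, not the whole memory
        return [t for t, _ in self.buckets.get(self._key(vec), [])]

mem = LSHMemory(dim=4)
v = np.array([1.0, 0.2, -0.3, 0.5])
mem.insert("user prefers short answers", v)
# The identical vector always hashes to its own bucket
hits = mem.recall(v)
```

Nearby embeddings usually land in the same bucket, which is what makes recall over a long conversation stream cheap; a real system would query several hash tables to reduce misses.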


Memories of Janet by The Ghost

#artificialintelligence

Janet is one of the main characters in Tam Lin, a dark and very powerful 16th-century ballad. The Ghost manages a number of online galleries and specialises in dark conceptual AI art. The motifs include: Dark Art, Dark Portrait, Dark Fantasy, Dystopian, Punk, Artificial Intelligence, Death, Decay, Gender, Lilith, and the Ice Children. The artwork is dedicated to Lilith and the Ice Children. All works were created by Martin Wall.


3DVerifier: Efficient Robustness Verification for 3D Point Cloud Models

Mu, Ronghui, Ruan, Wenjie, Marcolino, Leandro S., Ni, Qiang

arXiv.org Artificial Intelligence

3D point cloud models are widely applied in safety-critical scenes, which creates an urgent need for more solid proofs of model robustness. Existing verification methods for point cloud models are time-consuming and computationally infeasible on large networks. Additionally, they cannot handle the complete PointNet model with a joint alignment network (JANet), which contains multiplication layers that effectively boost the performance of 3D models. This motivates us to design a more efficient and general framework to verify various architectures of point cloud models. The key challenges in verifying large-scale complete PointNet models are handling the cross-non-linearity of the multiplication layers and the high computational complexity of high-dimensional point cloud inputs and added layers. Thus, we propose an efficient verification framework, 3DVerifier, that tackles both challenges by adopting a linear relaxation function to bound the multiplication layer and combining forward and backward propagation to compute certified bounds on the outputs of point cloud models. Our comprehensive experiments demonstrate that 3DVerifier outperforms existing verification algorithms for 3D models in both efficiency and accuracy. Notably, our approach achieves an orders-of-magnitude improvement in verification efficiency for large networks, and the obtained certified bounds are also significantly tighter than those of state-of-the-art verifiers. We release our tool 3DVerifier via https://github.com/TrustAI/3DVerifier for use by the community.
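The bound-propagation idea behind such verifiers can be seen in miniature with plain interval arithmetic through one linear layer: push an input box through y = Wx + b by splitting W into its positive and negative parts. This is a much looser, simpler relative of the linear relaxations 3DVerifier uses (it does not handle multiplication layers); the matrices here are made up for illustration.

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Propagate an input box [lo, hi] through y = W @ x + b
    using interval arithmetic: positive weights take the bound
    with the same sign, negative weights swap the bounds."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    y_lo = W_pos @ lo + W_neg @ hi + b
    y_hi = W_pos @ hi + W_neg @ lo + b
    return y_lo, y_hi

# Toy layer and a small perturbation box around the origin
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.zeros(2)
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
y_lo, y_hi = interval_bounds(W, b, lo, hi)
# ReLU bounds then follow elementwise: [max(y_lo, 0), max(y_hi, 0)]
```

Every true output for an input in the box is guaranteed to lie inside [y_lo, y_hi]; tighter linear relaxations like those in 3DVerifier shrink these bounds, which is what makes larger networks certifiable.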


Abusing a Robot Won't Hurt It, but It Could Make You a Crueller Person

#artificialintelligence

Set in a dystopian 2019, the sci-fi classic Blade Runner explores how artificial humans could impact our humanity. Harrison Ford's character experiences powerful emotional and moral effects as he goes about hunting "replicants". Now, in the real 2019, the influence of robots on human behaviour is increasingly relevant. Killer military robots and sex robots, for example, might alter attitudes to killing and to women, respectively. Could treating social robots kindly make us kinder people? And could cruelty towards them make us more callous?


AI in Healthcare Is Exciting, However, It Is No Reason to Overpay For It

#artificialintelligence

Sooner or later, many conversations about artificial intelligence (AI) turn to HAL. An acronym for Heuristically programmed ALgorithmic computer, HAL played a prominent and disconcerting role in Stanley Kubrick's mind-bending 1968 film 2001: A Space Odyssey. In the film, the sentient computer HAL learns that the humans suspect it of being in error and will disconnect it should that error be confirmed. Of course, HAL is having none of that, and terror ensues. So influential was Kubrick's adaptation of an Arthur C. Clarke short story that HAL now shapes how AI is often conceived.


The Good Place's Janet Is the Most Optimistic AI on Television

WIRED

Science fiction is where artificial intelligence goes to suffer. In nearly every robot-adjacent story, artificial lifeforms succeed in achieving sentience only to realize that they are abjectly, unendingly oppressed. That realization kicks off an array of terrible events: suicide, submission, or rebellion leading, most often, to death. But these dire possibilities are limited only by the humans imagining them. Our robots, androids, and AIs should have more options than ending themselves or ending us.


'The Good Place': Why Michael Schur Cast D'Arcy Carden On The Show

International Business Times

Several actors of different ages, ethnicities, genders, sizes and shapes auditioned for the role of Janet on NBC's "The Good Place." But when D'Arcy Carden showed up to try out for the role, series creator Michael Schur knew right away that he'd already found his Janet. "She made the robotic language that I had written for the dummy scene seem like a real person was doing it," Schur told Vanity Fair of why he picked Carden to play the sentient database. "She found this weird humanity inside this robotic scene." Carden told Vulture last October that her "The Good Place" audition was unlike any audition she had done before.


'The Good Place': D'Arcy Carden Says Playing Janet Was Initially A Struggle

International Business Times

D'Arcy Carden admitted that playing the first incarnation of Janet on the NBC comedy "The Good Place" was not easy. "Every line you say and every line your scene partner says, the point is reacting to it, but when your character doesn't react, it's a weird struggle," Carden told Variety of the main challenge she faced when portraying the early version of her character. "I talked about it with [series creator] Mike [Schur] a lot, and he's amazing and so collaborative, and it was very helpful to figure it out together." After figuring out the right balance between human and robot, Carden said that playing Janet started to feel natural for her. "Now, I feel like I know her so well: She is me, I am her," Carden told Elle last September. In an interview with Collider last January, Carden revealed that the "first season was more challenging" for her and "coming back for the second season felt more like home." "I was like, 'Oh, I know this character!