

TextDiffuser: Diffusion Models as Text Painters

Neural Information Processing Systems

TextDiffuser consists of two stages: first, a Transformer model generates the layout of keywords extracted from text prompts, and then diffusion models generate images conditioned on the text prompt and the generated layout.
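The two-stage pipeline described above can be sketched as follows. This is a toy illustration only: the function names, the quote-based keyword convention, and the box representation are assumptions for clarity, standing in for the paper's actual Transformer layout model and diffusion renderer.

```python
# Illustrative two-stage sketch of a TextDiffuser-style pipeline.
# Stage 1: a (stand-in) layout model places each keyword in a bounding box.
# Stage 2: a (stand-in) renderer is conditioned on the prompt plus layout.

def extract_keywords(prompt: str) -> list[str]:
    # Toy convention: words wrapped in single quotes are the text to paint.
    return [w.strip("'") for w in prompt.split() if w.startswith("'")]

def generate_layout(keywords: list[str], canvas=(512, 512)) -> list[dict]:
    # Toy layout: stack one full-width box per keyword, top to bottom.
    h = canvas[1] // max(len(keywords), 1)
    return [{"text": kw, "box": (0, i * h, canvas[0], (i + 1) * h)}
            for i, kw in enumerate(keywords)]

def render(prompt: str, layout: list[dict]) -> dict:
    # Stand-in for the diffusion stage: just bundle the conditioning inputs.
    return {"prompt": prompt, "layout": layout}

prompt = "a poster saying 'HELLO' and 'WORLD'"
layout = generate_layout(extract_keywords(prompt))
image = render(prompt, layout)
```

The point of the sketch is the separation of concerns: keyword placement is decided before any pixels are generated, so the image model only has to honor a given layout.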



3d4c0a618d0acd7921493e4f30395c22-Paper-Conference.pdf

Neural Information Processing Systems

Given textual explanations, our proposed framework uses a generative model conditioned on textual input to create data points representing the explanations. By comparing the neuron's response to these generated data points and control data points, we can estimate the quality of the explanation.




Position-based Scaled Gradient for Model Quantization and Pruning - Appendix

Neural Information Processing Systems

In this experiment, we only quantize the weights, not the activations, to compare the performance degradation as weight bit-width decreases. The mean squared errors (MSE) of the weights across different bit-widths are also reported. The name of the layer and the number of parameters in parentheses are shown in the column. All numbers are results of the last epoch. Table A3: ResNet-32 trained with Adam on the CIFAR-100 dataset.
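As a minimal illustration of the weight-only MSE measurement described above, the sketch below uniformly quantizes a toy weight tensor at several bit-widths and reports the reconstruction error. The symmetric uniform quantizer is an assumption for illustration, not necessarily the scheme used in the paper.

```python
import random

def quantize(w, bits):
    # Symmetric uniform quantizer: snap each weight to one of 2**bits - 1
    # evenly spaced levels spanning [-max|w|, +max|w|], then dequantize.
    levels = 2 ** bits - 1
    scale = max(abs(x) for x in w) or 1.0
    step = 2 * scale / levels
    return [round(x / step) * step for x in w]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

random.seed(0)
w = [random.gauss(0.0, 0.05) for _ in range(10_000)]  # toy layer weights

for bits in (8, 4, 2):
    print(f"{bits}-bit weight MSE: {mse(w, quantize(w, bits)):.2e}")
```

Since the quantization step doubles roughly each time the bit-width drops by one, the MSE grows steeply at low bit-widths, which is the degradation trend the tables track.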


4aa13186c795a52ba88f5b822f4b77eb-Paper-Conference.pdf

Neural Information Processing Systems

Therefore, estimating how well a given model might perform on the new data is an important step toward reliable ML applications. This is very challenging, however, as the data distribution can change in flexible ways, and we may not have any labels on the new data, which is often the case in monitoring settings. In this paper, we propose a new distribution shift model, Sparse Joint Shift (SJS), which considers the joint shift of both labels and a few features.
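A toy simulation can make the SJS setting concrete: only the joint distribution of the label and a small set of features changes between source and target, while the conditional of the remaining features stays fixed. Everything below (the variable names, the binary shifting feature, the specific joint distributions) is an illustrative assumption, not the paper's construction.

```python
import random

random.seed(0)

def draw(n, joint):
    # joint maps (y, x1) -> probability; x1 is the single shifting feature.
    # x2 depends only on y, so P(x2 | y, x1) is identical in both domains.
    pairs, probs = zip(*joint.items())
    data = []
    for _ in range(n):
        y, x1 = random.choices(pairs, weights=probs)[0]
        x2 = random.gauss(1.0 if y else -1.0, 1.0)
        data.append((y, x1, x2))
    return data

# Under SJS only the joint of (y, x1) moves between domains.
source_joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
target_joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.5}

src = draw(5000, source_joint)
tgt = draw(5000, target_joint)

rate = lambda d: sum(y for y, _, _ in d) / len(d)
print(f"P(y=1): source {rate(src):.2f} vs target {rate(tgt):.2f}")
```

The monitoring difficulty the abstract describes shows up here: in the target domain the labels `y` would be unobserved, so the shifted label rate must be inferred from the features alone.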


45c166d697d65080d54501403b433256-AuthorFeedback.pdf

Neural Information Processing Systems

The reviewers acknowledge that the ideas presented in the paper are compelling, sound, and appear to be effective (R3), offering a great addition to the GP literature (R1), which is also supported by a solid and interesting theoretical foundation (R2, R4). Existing multi-output GP models are not applicable to our setting (see lines 79-83) and are thus not comparable to the DAG-GP model. We have further clarified this point in Section 1.2.


Generalised Mutual Information for Discriminative Clustering

Neural Information Processing Systems

All GEMINIs are summarised in Table 1 (see Appendix D for derivations). Figure 2: Clustering of a mixture of 3 Gaussian distributions with MI (left) and a GEMINI (right) using categorical distributions.


TP-RAG: Benchmarking Retrieval-Augmented Large Language Model Agents for Spatiotemporal-Aware Travel Planning

Hang Ni, Fan Liu, Xinyu Ma, Lixin Su, Shuaiqiang Wang, Dawei Yin, Hui Xiong, Hao Liu

arXiv.org Artificial Intelligence

Large language models (LLMs) have shown promise in automating travel planning, yet they often fall short in addressing nuanced spatiotemporal rationality. While existing benchmarks focus on basic plan validity, they neglect critical aspects such as route efficiency, POI appeal, and real-time adaptability. This paper introduces TP-RAG, the first benchmark tailored for retrieval-augmented, spatiotemporal-aware travel planning. Our dataset includes 2,348 real-world travel queries, 85,575 fine-grained annotated POIs, and 18,784 high-quality travel trajectory references sourced from online tourist documents, enabling dynamic and context-aware planning. Through extensive experiments, we reveal that integrating reference trajectories significantly improves the spatial efficiency and POI rationality of the travel plan, while challenges persist in universality and robustness due to conflicting references and noisy data. To address these issues, we propose EvoRAG, an evolutionary framework that potently synergizes diverse retrieved trajectories with LLMs' intrinsic reasoning. EvoRAG achieves state-of-the-art performance, improving spatiotemporal compliance and reducing commonsense violations compared to ground-up and retrieval-augmented baselines. Our work underscores the potential of hybridizing Web knowledge with LLM-driven optimization, paving the way for more reliable and adaptive travel planning agents.