Neural Information Processing Systems

We sincerely thank the reviewers for their valuable comments. We have proofread the paper and fixed the errors mentioned. Broader Impact: Since our method improves subspace clustering, it advances learning from unannotated data. Related Work: Thank you for the additional references; we will include and discuss them in the revised version.


Instruction Tuning Large Language Models to Understand Electronic Health Records

Neural Information Processing Systems

Large language models (LLMs) have shown impressive capabilities in solving a wide range of tasks based on human instructions. However, developing a conversational AI assistant for electronic health record (EHR) data remains challenging due to (1) the lack of large-scale instruction-following datasets and (2) the limitations of existing model architectures in handling complex and heterogeneous EHR data. In this paper, we introduce MIMIC-Instr, a dataset comprising over 400K open-ended instruction-following examples derived from the MIMIC-IV EHR database. This dataset covers a broad range of topics and is suitable for instruction-tuning general-purpose LLMs for diverse clinical use cases. Additionally, we propose Llemr, a general framework that enables LLMs to process and interpret EHRs with complex data structures. Llemr demonstrates competitive performance in answering a wide range of patient-related questions based on EHR data. Furthermore, our evaluations on clinical predictive modeling benchmarks show that the fine-tuned Llemr achieves performance comparable to state-of-the-art (SOTA) baselines that use curated features. The dataset and code are available at https://github.com/zzachw/llemr.


Incorporating Pragmatic Reasoning Communication into Emergent Language

Neural Information Processing Systems

Emergentism and pragmatics are two research fields that study the dynamics of linguistic communication along substantially different timescales and intelligence levels. From the perspective of multi-agent reinforcement learning, they correspond to stochastic games with reinforcement training and stage games with opponent awareness, respectively. Given that their combination has been explored in linguistics, we propose computational models that combine short-term mutual-reasoning-based pragmatics with long-term language emergentism. We explore this in referential games for agent communication as well as in StarCraft II, assessing the relative merits of different kinds of mutual-reasoning pragmatics models both empirically and theoretically. Our results shed light on their importance for obtaining more natural, accurate, robust, fine-grained, and succinct utterances.
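The short-term mutual reasoning referred to above is commonly formalized in the Rational Speech Acts (RSA) style: a literal listener, a pragmatic speaker that reasons about that listener, and a pragmatic listener that reasons about the speaker. A minimal NumPy sketch of one round of this recursion, with an illustrative 3x3 lexicon of our own (not the paper's agents or games):

```python
import numpy as np

# One round of RSA-style mutual reasoning. The lexicon is illustrative,
# not the paper's; lexicon[u, o] = 1 if utterance u literally applies
# to object o.
lexicon = np.array([
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0],
])

def rsa(lex, alpha=1.0):
    l0 = lex / lex.sum(axis=1, keepdims=True)   # literal listener P(o | u)
    s1 = l0 ** alpha                            # speaker utility exp(alpha * log L0)
    s1 = s1 / s1.sum(axis=0, keepdims=True)     # pragmatic speaker P(u | o)
    l1 = s1 / s1.sum(axis=1, keepdims=True)     # pragmatic listener P(o | u)
    return l0, l1

l0, l1 = rsa(lexicon)
```

For the first utterance, which is literally consistent with objects 0 and 1, the literal listener is undecided, while one round of mutual reasoning makes the pragmatic listener favor object 0, since a speaker meaning object 1 would have a less ambiguous utterance available.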


7520fa31d14f45add6d61e52df5a03ff-AuthorFeedback.pdf

Neural Information Processing Systems

Please see our responses below (Rn refers to Reviewer n). While Lazaridou et al. (2020) also consider pragmatics during training, they do not use two-sided adjustment. Details: the object candidate set C is for a single instance; in this case, the agents randomly select their actions. Compositionality is an important metric, and we may add topological similarity to Sec. 4.4. We will amend our paper to better explain the experiments. We did in fact use this setting in the experiment, but it does not outperform IBR.


Supplementary Material

Neural Information Processing Systems

We printed a checkerboard with a 9x10 grid of blocks, each measuring 87 mm x 87 mm.

Parameter              Value
---------------------  -----------------
Model Architecture     Panoptic-PolarNet
Test Batch Size        2
Val Batch Size         2
Test Batch size        1
post proc threshold    0.1
post proc nms kernel   5
post proc top k        100
center loss            MSE
offset loss            L1
center loss weight     100
offset loss weight     10
enable SAP             True
SAP start epoch        30
SAP rate               0.01

Table 3: Parameters for the Panoptic Segmentation model

Task                      Model              mIoU (%)
------------------------  -----------------  --------
Semantic Segmentation     Cylinder3D         67.8
Panoptic Segmentation     Panoptic-PolarNet  59.5
4D Panoptic Segmentation  4D-StOP            58.8

Table 6: Models for the various tasks used in our experiments and their performance on SemanticKITTI

The results reveal significant variance in performance across different categories. The dataset is divided into 17 and 6 categories, respectively. It distinguishes 'Ground' and 'Roads', as opposed to grouping anything related to ground as a single category. Overall, the performance across these tasks underscores the challenges posed by our dataset. With our dataset, future work can focus on improving models' capacity to handle such diverse categories. The raw data, processed data, and framework code can be found on our website.
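Assuming a standard calibration workflow (our assumption; the excerpt only mentions the printed board), a 9x10-block checkerboard with 87 mm squares yields an 8x9 grid of inner corners, whose planar 3D coordinates a routine such as OpenCV's calibrateCamera would consume. A minimal sketch of building those object points:

```python
import numpy as np

# Hypothetical helper, not the paper's code: 3D object points for the
# inner corners of a 9x10-block board with 87 mm squares, lying on the
# board plane (z = 0) and ordered x-fastest, as calibration code expects.
SQUARE_MM = 87.0
CORNERS_X, CORNERS_Y = 8, 9   # inner corners of a 9x10-block checkerboard

objp = np.zeros((CORNERS_X * CORNERS_Y, 3), dtype=np.float64)
objp[:, :2] = np.mgrid[0:CORNERS_X, 0:CORNERS_Y].T.reshape(-1, 2) * SQUARE_MM
```

These points would be paired with the detected 2D corners from each image of the board to solve for the camera intrinsics and extrinsics.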



Efficient Minimum Bayes Risk Decoding using Low-Rank Matrix Completion Algorithms

Neural Information Processing Systems

Minimum Bayes Risk (MBR) decoding is a powerful decoding strategy widely used for text generation tasks, but its quadratic computational complexity limits its practical application. This paper presents a novel approach to approximating MBR decoding using matrix completion techniques, focusing on machine translation. We formulate MBR decoding as a matrix completion problem, where the utility metric scores between candidate hypotheses and pseudo-reference translations form a low-rank matrix. First, we empirically show that these score matrices indeed have a low-rank structure. We then exploit this by computing only a random subset of the scores and efficiently recovering the missing entries with the Alternating Least Squares (ALS) algorithm, thereby enabling a fast approximation of MBR decoding.


Truncated Linear Regression in High Dimensions (MIT)

Neural Information Processing Systems

As in standard linear regression, in truncated linear regression we are given access to observations (A, y). As a corollary, our guarantees imply a computationally efficient and information-theoretically optimal algorithm for compressed sensing with truncation, which may arise from measurement saturation effects. Our result follows from a statistical and computational analysis of the Stochastic Gradient Descent (SGD) algorithm for solving a natural adaptation of the LASSO optimization problem that accommodates truncation. This generalizes the works of both (1) Daskalakis et al. [9], where no regularization is needed due to the low dimensionality of the data, and (2) Wainwright [27], where the objective function is simple due to the absence of truncation. To deal with both truncation and high dimensionality at the same time, we develop new techniques that not only generalize the existing ones but are, we believe, of independent interest.
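A truncation-aware LASSO of the kind described can be sketched for the simple one-sided case where y is observed only when y >= c, with unit Gaussian noise: the per-sample negative log-likelihood is (y - mu)^2/2 + log Phi(mu - c) with mu = x . w, minimized by SGD with an L1 proximal (soft-thresholding) step. This is a simplified illustration with made-up data and constants, not the paper's guaranteed procedure:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def Phi(z):   # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def phi(z):   # standard normal pdf
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

rng = np.random.default_rng(1)
d, c = 20, 0.0
w_true = np.zeros(d)
w_true[:3] = [1.5, -2.0, 1.0]                  # sparse ground truth
X_all = rng.standard_normal((8000, d))
y_all = X_all @ w_true + rng.standard_normal(8000)
keep = y_all >= c                              # truncation: only y >= c observed
X, y = X_all[keep], y_all[keep]

# SGD on the truncated negative log-likelihood, plus an L1 proximal step.
w = np.zeros(d)
eta, lam = 0.005, 0.05
for _ in range(3):
    for i in rng.permutation(len(y)):
        mu = float(X[i] @ w)
        # d/dmu of (y - mu)^2/2 + log Phi(mu - c)
        grad = (mu - y[i]) + phi(mu - c) / max(Phi(mu - c), 1e-12)
        w -= eta * grad * X[i]
        w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)  # soft-threshold
```

The phi/Phi term corrects the upward bias that truncation induces in the observed responses; dropping it reduces the loop to ordinary LASSO SGD, whose estimate would be biased here.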


A Differentiable Semantic Metric Approximation in Probabilistic Embedding for Cross-Modal Retrieval

Neural Information Processing Systems

Cross-modal retrieval aims to build correspondences between multiple modalities by learning a common representation space. Typically, an image can semantically match multiple texts and vice versa, which significantly increases the difficulty of this task. To address this problem, probabilistic embedding has been proposed to quantify these many-to-many relationships. However, existing datasets (e.g., MS-COCO) and metrics (e.g., Recall@K) cannot fully represent these diverse correspondences due to non-exhaustive annotations. Based on this observation, we utilize the semantic correlation computed by CIDEr to find potential correspondences. We then present an effective metric, named Average Semantic Precision (ASP), which measures the ranking precision of semantic correlation for retrieval sets. Additionally, we introduce a novel and concise objective, coined Differentiable ASP Approximation (DAA).
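The paper's DAA targets ASP specifically; as a generic illustration of the sigmoid-relaxed-rank trick that differentiable ranking objectives of this kind build on, here is a SmoothAP-style approximation of average precision in NumPy (our construction, not the paper's DAA):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smooth_ap(scores, rel, tau=0.01):
    """Differentiable approximation of average precision.

    scores: retrieval scores per item; rel: 0/1 relevance labels.
    The hard indicator [s_j > s_i] is relaxed to sigmoid((s_j - s_i)/tau).
    """
    diff = scores[None, :] - scores[:, None]   # diff[i, j] = s_j - s_i
    soft_gt = sigmoid(diff / tau)              # soft "j outranks i"
    np.fill_diagonal(soft_gt, 0.0)
    rank_all = 1.0 + soft_gt.sum(axis=1)               # soft overall rank
    rank_pos = 1.0 + (soft_gt * rel[None, :]).sum(axis=1)  # soft rank among relevant
    return float((rel * rank_pos / rank_all).sum() / rel.sum())

ap = smooth_ap(np.array([0.9, 0.7, 0.5, 0.3]), np.array([1.0, 0.0, 1.0, 0.0]))
```

With a small temperature the soft ranks match the hard ranks, so this value approaches the exact average precision of 5/6 for the example ranking, while the relaxation keeps gradients with respect to the scores nonzero, which is the point of such objectives.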