Less Likely Brainstorming: Using Language Models to Generate Alternative Hypotheses
Liyan Tang, Yifan Peng, Yanshan Wang, Ying Ding, Greg Durrett, Justin F. Rousseau
A human decision-maker benefits most from an AI assistant that corrects for their biases. For problems such as generating the interpretation of a radiology report given its findings, a system that predicts only highly likely outcomes may be less useful, since such outcomes are already obvious to the user. To alleviate biases in human decision-making, it is worth considering a broad differential diagnosis that goes beyond the most likely options. We introduce a new task, "less likely brainstorming," that asks a model to generate outputs that humans judge relevant but less likely to happen. We explore the task in two settings: brain MRI interpretation generation and everyday commonsense reasoning. We found that a baseline approach of training with less likely hypotheses as targets generates outputs that humans evaluate as either likely or irrelevant nearly half of the time; standard MLE training is not effective. To tackle this problem, we propose a controlled text generation method that uses a novel contrastive learning strategy to encourage models to differentiate between generating likely and less likely outputs according to humans. We compare our method with several state-of-the-art controlled text generation models via automatic and human evaluations and show that it improves our models' ability to generate less likely outputs.
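The contrastive idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy (not the paper's exact objective): a hinge-style contrastive term that, when the "less likely" control signal is active, rewards the model for scoring a human-labeled less-likely hypothesis above a likely one. The function names, margin value, and toy probabilities below are all illustrative assumptions.

```python
import math

def sequence_logprob(token_probs):
    """Sum of log-probabilities the model assigns to each token of a hypothesis."""
    return sum(math.log(p) for p in token_probs)

def contrastive_loss(less_likely_probs, likely_probs, margin=1.0):
    """Hinge-style contrastive loss: zero once the less-likely hypothesis
    outscores the likely one by at least `margin` in log-space."""
    gap = sequence_logprob(less_likely_probs) - sequence_logprob(likely_probs)
    return max(0.0, margin - gap)

# Toy per-token probabilities for two candidate interpretations.
likely = [0.9, 0.8, 0.85]       # the obvious reading the model currently favors
less_likely = [0.2, 0.3, 0.25]  # the alternative the task wants surfaced

loss = contrastive_loss(less_likely, likely)
print(loss)  # positive loss: a training signal to boost the alternative
```

Minimizing such a term pushes probability mass toward the less likely hypotheses only under the control signal, which is one plausible way to make a single model switch between "likely" and "less likely" generation modes.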
arXiv.org Artificial Intelligence
May-30-2023
- Country:
  - North America > United States > Texas (0.14)
- Genre:
  - Research Report > Experimental Study (0.46)
  - Research Report > New Finding (0.46)
- Industry:
  - Health & Medicine > Diagnostic Medicine > Imaging (0.88)
  - Health & Medicine > Therapeutic Area > Neurology (0.88)