Towards learning to explain with concept bottleneck models: mitigating information leakage
Lockhart, Joshua, Marchesotti, Nicolas, Magazzeni, Daniele, Veloso, Manuela
arXiv.org Artificial Intelligence
Concept bottleneck models perform classification by first predicting which of a list of human-provided concepts hold for a data point. A downstream model then uses these predicted concept labels to predict the target label, so the predicted concepts act as a rationale for the target prediction. Model trust issues emerge in this paradigm when soft concept labels are used: it has previously been observed that extra information about the data distribution leaks into the concept predictions. In this work we show how Monte-Carlo Dropout can be used to attain soft concept predictions that do not contain leaked information.
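The idea in the abstract can be illustrated with a small sketch. The following is a hypothetical NumPy toy (random untrained weights, made-up dimensions, not the paper's actual architecture): a concept predictor is run many times with dropout active at inference, each stochastic pass is thresholded to hard concept labels, and the average of those hard labels gives a soft concept score that reflects the model's uncertainty rather than extra information leaked from the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny concept predictor: one linear layer + sigmoid.
# The weights are random placeholders, not trained parameters.
W = rng.normal(size=(4, 3))  # 4 input features -> 3 concepts
b = np.zeros(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def concept_forward(x, drop_p=0.5):
    """One stochastic forward pass with dropout kept on at inference."""
    mask = rng.random(x.shape) >= drop_p
    x_dropped = np.where(mask, x / (1.0 - drop_p), 0.0)  # inverted dropout
    return sigmoid(x_dropped @ W + b)

def mc_dropout_concepts(x, n_samples=200, drop_p=0.5):
    """Average *hard* (thresholded) concept predictions over many
    dropout samples. The mean is a soft score in [0, 1] driven by
    model uncertainty, not by extra leaked input information."""
    hard = np.stack([
        concept_forward(x, drop_p) > 0.5
        for _ in range(n_samples)
    ])
    return hard.mean(axis=0)

x = rng.normal(size=4)
soft_concepts = mc_dropout_concepts(x)
print(soft_concepts)  # one score per concept, each in [0, 1]
```

A downstream label predictor would then consume `soft_concepts` in place of the leaky single-pass sigmoid outputs; the exact training procedure is described in the paper itself.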
Nov-7-2022