GALOIS: Boosting Deep Reinforcement Learning via Generalizable Logic Synthesis
Figure 2: Illustration of knowledge reused from DoorKey to BoxKey. As shown in Figure 1b, BoxKey differs from DoorKey in that the agent must open the box to get the key. The learned program is color-agnostic (i.e., the agent's policy remains robust regardless of the key's color). The valuation vector representations are fed to all the methods as input. The reward from the MiniGrid environment is sparse (i.e., a positive reward is given only after the task is completed). We use a batch size of 256. The code is available at: https://github.com/caoysh/GALOIS
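To make the sparse-reward setup concrete, here is a minimal sketch of a MiniGrid-style reward: zero at every step except on success, where the reward is discounted by episode length. The exact discount factor (0.9) follows the standard MiniGrid convention and is an assumption, not taken from the text above.

```python
def sparse_reward(success: bool, step_count: int, max_steps: int) -> float:
    """MiniGrid-style sparse reward (assumed form, not from the paper).

    Returns 0 on every non-terminal/failed step; on success, returns a
    reward that shrinks the more steps the agent took.
    """
    if not success:
        return 0.0
    return 1 - 0.9 * (step_count / max_steps)

# Example: succeeding at step 50 of a 100-step episode yields 0.55.
r = sparse_reward(True, 50, 100)
```

This sparsity is what makes credit assignment hard and motivates reusing learned programs across tasks such as DoorKey and BoxKey.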
Concept-Based Interpretable Reinforcement Learning with Limited to No Human Labels
Ye, Zhuorui, Milani, Stephanie, Gordon, Geoffrey J., Fang, Fei
Recent advances in reinforcement learning (RL) have predominantly leveraged neural network-based policies for decision-making, yet these models often lack interpretability, posing challenges for stakeholder comprehension and trust. Concept bottleneck models offer an interpretable alternative by integrating human-understandable concepts into neural networks. However, a significant limitation in prior work is the assumption that human annotations for these concepts are readily available during training, necessitating continuous real-time input from human annotators. To overcome this limitation, we introduce a novel training scheme that enables RL algorithms to efficiently learn a concept-based policy by only querying humans to label a small set of data, or in the extreme case, without any human labels. Our algorithm, LICORICE, involves three main contributions: interleaving concept learning and RL training, using a concept ensemble to actively select informative data points for labeling, and decorrelating the concept data with a simple strategy. We show how LICORICE reduces manual labeling efforts to 500 or fewer concept labels in three environments. Finally, we present an initial study to explore how we can use powerful vision-language models to infer concepts from raw visual inputs without explicit labels at minimal cost to performance.
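The abstract's second contribution, using an ensemble to actively select informative points for concept labeling, can be sketched as follows. This is a generic disagreement-based active-learning heuristic, not LICORICE's actual implementation; the variance-based score and the function names are assumptions for illustration.

```python
import numpy as np

def ensemble_disagreement(probs: np.ndarray) -> np.ndarray:
    """probs: (n_models, n_samples, n_concepts) predicted concept probabilities.

    Score each sample by the variance of predictions across ensemble
    members, averaged over concepts: high variance = high disagreement.
    """
    return probs.var(axis=0).mean(axis=1)

def select_for_labeling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` samples the ensemble disagrees on most."""
    scores = ensemble_disagreement(probs)
    return np.argsort(scores)[::-1][:budget]

# Toy example: 5 ensemble members, 1000 unlabeled states, 4 binary concepts.
rng = np.random.default_rng(0)
probs = rng.random((5, 1000, 4))
chosen = select_for_labeling(probs, budget=50)
```

Querying human labels only for the highest-disagreement samples is one way a small labeling budget (e.g. the 500 labels mentioned above) can be spent where the concept models are most uncertain.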