Gierasimczuk, Nina
Cognitive Bias and Belief Revision
Papadamos, Panagiotis, Gierasimczuk, Nina
Cognitive bias is a systematic pattern of human thought connected with the distortion of received information, which usually leads to deviations from rationality (for a recent analysis see [18]). Such biases are not specific to human intelligence: they can also be ascribed to artificial agents, algorithms, and programs. For instance, confirmation bias can be seen as stubbornness against new information that contradicts a previously adopted view. In some cases, confirmation bias can be built into a system purposefully. Take as an example an authentication algorithm and a malicious user who is trying to break into an email account.
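The reading of confirmation bias as stubbornness against contradicting information can be illustrated with a small toy model. The Python sketch below is purely illustrative and is not the model analysed in the paper: it assumes a hypothetical "bias" parameter that partially discounts evidence favouring a rival hypothesis, and a made-up authentication scenario echoing the email-account example.

# Illustrative toy only: a belief update that discounts evidence which
# contradicts the currently adopted view. The bias parameter and the
# update rule are hypothetical, not taken from the paper.

def biased_update(belief, likelihoods, observation, bias=0.5):
    """Update a belief distribution over hypotheses given an observation.

    belief      -- dict hypothesis -> prior probability
    likelihoods -- dict hypothesis -> dict observation -> P(obs | hypothesis)
    bias        -- in [0, 1]; 0 is an unbiased Bayesian-style update, values
                   closer to 1 increasingly ignore disconfirming evidence.
    """
    favourite = max(belief, key=belief.get)  # the previously adopted view
    posterior = {}
    for h, prior in belief.items():
        lik = likelihoods[h].get(observation, 0.0)
        fav_lik = likelihoods[favourite].get(observation, 0.0)
        if h != favourite and lik > fav_lik:
            # Evidence favouring a rival hypothesis is partially discounted.
            lik = (1 - bias) * lik + bias * fav_lik
        posterior[h] = prior * lik
    total = sum(posterior.values()) or 1.0
    return {h: p / total for h, p in posterior.items()}


# Hypothetical example: an authentication module biased towards "legitimate".
belief = {"legitimate": 0.9, "intruder": 0.1}
likelihoods = {
    "legitimate": {"correct_password": 0.99, "failed_attempt": 0.05},
    "intruder":   {"correct_password": 0.30, "failed_attempt": 0.80},
}
for obs in ["failed_attempt", "failed_attempt", "failed_attempt"]:
    belief = biased_update(belief, likelihoods, obs, bias=0.7)
print(belief)  # slower than an unbiased agent to adopt the "intruder" view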
Learning to Act and Observe in Partially Observable Domains
Bolander, Thomas, Gierasimczuk, Nina, Liberman, Andrés Occhipinti
We consider a learning agent in a partially observable environment with which it has never interacted before, and about which it learns both what it can observe and how its actions affect the environment. The agent learns about this domain from experience gathered by taking actions and observing their results. We present learning algorithms capable of learning as much as possible (in a well-defined sense) both about what is directly observable and about what actions do in the domain, given the learner's observational constraints. We differentiate the level of domain knowledge attained by each algorithm and characterize the type of observations required to reach it. The algorithms use dynamic epistemic logic (DEL) to represent the learned domain information symbolically. Our work continues that of Bolander and Gierasimczuk (2015), who developed DEL-based learning algorithms for learning domain information in fully observable domains.
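As a rough illustration of the learning setting only (not of the DEL-based algorithms the paper presents), the Python sketch below has an agent act in an unknown toy domain and record how the variables it can observe change under each action, so that what it learns about action effects is limited by its observational constraints. All names here (ToyDomain, toggle_light, observable) are invented for illustration.

# Hypothetical sketch: learning observable action effects from experience.
import random

class ToyDomain:
    """Hidden ground truth: two boolean variables, only 'light' is observable."""
    def __init__(self):
        self.state = {"light": False, "door_locked": True}

    def execute(self, action):
        if action == "toggle_light":
            self.state["light"] = not self.state["light"]
        elif action == "unlock":
            self.state["door_locked"] = False
        return dict(self.state)

class Learner:
    def __init__(self, observable):
        self.observable = observable
        self.effects = {}  # action -> set of (variable, before, after) seen

    def observe(self, state):
        # The learner only sees the variables it is able to observe.
        return {v: state[v] for v in self.observable}

    def learn(self, domain, actions, steps=50):
        prev = self.observe(domain.state)
        for _ in range(steps):
            a = random.choice(actions)
            nxt = self.observe(domain.execute(a))
            for v in self.observable:
                if prev[v] != nxt[v]:
                    self.effects.setdefault(a, set()).add((v, prev[v], nxt[v]))
            prev = nxt
        return self.effects

domain = ToyDomain()
learner = Learner(observable={"light"})
print(learner.learn(domain, ["toggle_light", "unlock"]))
# The learner recovers that toggle_light flips 'light', but can conclude
# nothing about 'door_locked', which lies outside what it can observe.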