Licato
The rich expressivity provided by the cognitive event calculus (CEC) knowledge representation framework allows for reasoning over deeply nested beliefs, desires, intentions, and so on. I put CEC to the test by attempting to model the complex reasoning and deceptive planning used in an episode of the popular television show Breaking Bad. I use CEC to represent the knowledge a reasoner draws on when devising plans like those of the fictional characters I describe. However, it becomes clear that a form of nonmonotonic reasoning is necessary, specifically so that one agent can reason about the nonmonotonic beliefs of another agent. I show how CEC can be augmented with this ability, and then provide examples detailing how my proposed augmentation enables much of the reasoning used by agents such as the Breaking Bad characters. I close by discussing what sort of reasoning tool would be necessary to implement such nonmonotonic reasoning.
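As a rough illustration of the kind of nesting CEC supports (the agents, times, and fluent below are hypothetical, not taken from the paper), a doubly nested belief can be written with CEC-style belief operators:

    B(a, t1, B(b, t2, holds(f, t3)))

read as: at time t1, agent a believes that at time t2 agent b believes fluent f holds at time t3. The nonmonotonic augmentation sketched in the abstract is meant to let agent a revise or retract such beliefs about b's beliefs as new information arrives.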
Will Artificial Intelligence get high?
But more importantly, a logic-based artificial intelligence allows humans to define the ethical code by which potential superintelligences would act. "I look at the hypothetical possibility of AI doing drugs from the standpoint of the kind of rationality that I would like to give to a robot," Bringsjord says, "and since that does not emulate the human brain, which is irrational, then the machine is not going to take that drug." John Licato, a research assistant at Rensselaer, supports the theory. "It would be irrational to make choices that would take an AI away from the primary thing that the robot is programmed to do," Licato says. This might not be because an AI lacks the capacity for addiction, however, but because we equate drug use with satisfying a short-term, lower-level bodily desire, like using ecstasy as an aphrodisiac or opiates as a means of escape.