According to the science magazine Nature, chemists are heralding a new artificial intelligence platform as a significant milestone. The platform has the potential to accelerate drug discovery and could make organic chemistry more efficient. It is designed to help chemists plan the syntheses of small organic molecules. Traditionally, chemists use retrosynthesis, an established problem-solving technique in which target molecules are recursively transformed into increasingly simpler precursors. The goal of retrosynthetic analysis is structural simplification.
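The recursive structure of retrosynthesis can be sketched in a few lines of code. This is a minimal illustration only: the molecule names, disconnection rules, and set of purchasable building blocks below are hypothetical placeholders, not real chemistry, and a real platform would use reaction templates or a learned model in place of the rule table.

```python
# Hypothetical disconnection rules: each product maps to one or more
# candidate lists of simpler precursors (placeholder names, not real molecules).
RULES = {
    "target": [["intermediate_a", "intermediate_b"]],
    "intermediate_a": [["buyable_1", "buyable_2"]],
    "intermediate_b": [["buyable_3"]],
}
# Hypothetical catalogue of purchasable building blocks.
BUYABLE = {"buyable_1", "buyable_2", "buyable_3"}

def retrosynthesize(molecule):
    """Recursively simplify a molecule down to purchasable precursors.

    Returns a leaf string for a buyable building block, a
    (molecule, [subtrees]) pair for a disconnection step, or None
    if no route is found.
    """
    if molecule in BUYABLE:
        return molecule
    for precursors in RULES.get(molecule, []):
        subtrees = [retrosynthesize(p) for p in precursors]
        if all(t is not None for t in subtrees):
            return (molecule, subtrees)
    return None

route = retrosynthesize("target")
```

Each recursive call mirrors one retrosynthetic step: the target is replaced by increasingly simpler precursors until every branch bottoms out in a purchasable compound.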
The model does not discriminate between the processing of perceptual or mental representations but only between their memory requirements. The primary advantage of visual representations such as diagrams is attributed to the apprehension of dynamics rather than the explicit representation of state [Larkin and Simon, 1987], although the model accounts for both. The power of this approach to mental models lies in incorporating the decomposition of situations into states of affairs and constraints from situation theory [Barwise and Perry, 1983] and the characterization of cognitive tasks as search of a problem space due to Newell and Simon [1972].
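The problem-space characterization mentioned above casts a cognitive task as search from an initial state to a goal state via a set of operators. A minimal sketch of that idea, using a toy arithmetic puzzle as the state space (the puzzle and operator names are illustrative assumptions, not part of the cited work):

```python
from collections import deque

def solve(initial, goal, operators):
    """Breadth-first search of a problem space.

    operators: list of (name, function) pairs, each mapping a state
    to a successor state. Returns the sequence of operator names
    applied on a shortest path from initial to goal, or None.
    """
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, op in operators:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Toy problem space: reach 10 from 1 using "double" and "increment".
ops = [("double", lambda s: s * 2), ("increment", lambda s: s + 1)]
plan = solve(1, 10, ops)
```

The states-of-affairs decomposition from situation theory would correspond, in this framing, to how the state itself is represented; the search procedure is indifferent to that choice.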
This paper deals with the relationship between intelligent behaviour, on the one hand, and the mental qualities needed to produce it, on the other. We consider two well-known opposing positions on this issue: one due to Alan Turing and one due to John Searle (via the Chinese Room). In particular, we argue against Searle, showing that his answer to the so-called System Reply does not work. The argument takes a novel form: we shift the debate to a different and more plausible room where the required conversational behaviour is much easier to characterize and to analyze. Despite being much simpler than the Chinese Room, we show that the behaviour there is still complex enough that it cannot be produced without appropriate mental qualities.
Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils. I start the keynote with Alan Turing's famous question, "Can a machine think?", and explain that thinking is not just the conscious reflection of Rodin's Thinker but also the largely unconscious thinking required to make a pot of tea. I note that at the dawn of AI 60 years ago we believed the former kind of thinking would be really difficult to emulate artificially and the latter easy. In fact, it has turned out to be the other way round: we've had computers that can expertly play chess for 20 years, but we can't yet build a robot that could go into your kitchen and make you a cup of tea. In slides 5 and 6 I suggest that we all assume a cat is smarter than a crocodile, which is smarter than a cockroach, on a linear scale of intelligence from not very intelligent to human intelligence.
As in-vehicle automation becomes increasingly prevalent and capable, drivers will have more opportunity to delegate control to automated systems, and automated systems will have greater ability to intervene to improve road safety. As the demands on the driver decline, two problems arise: driver disengagement, with a reduced ability to act when necessary; and a likely decrease in active driving, which may diminish the enjoyment a driver takes from the task. As vehicles become more intelligent, they need to work collaboratively with human drivers, in the frame of a joint cognitive system, in order both to extend and to backstop human capabilities, optimizing safety, comfort, and engagement.