His research focuses on the intersection of computer vision, AI, machine learning, and graphics, with particular emphasis on systems that allow people to interact naturally with computers. Dr Waggett has extensive experience of innovative IT systems, including research into image processing at University College London and the Marconi Research Centre, and has been responsible for delivering innovative systems to a range of government and commercial organisations. His projects include the UK's biometric visa matching system, for which he served as the Big Data subject matter expert, and the International Technology Alliance research programme into novel sensor networks.
Christina is an audience development editor. After graduating from the University of Nottingham in 2013, where she read philosophy and theology, Christina joined a tech start-up specialising in mobile apps. She has a keen interest in the mobile platform and innovative tech. In recent years AI has brought us some impressive and widely used technology, from the image recognition used by Facebook to the speech recognition at work in Amazon's Alexa and Apple's Siri. These breakthroughs in deep learning and neural networks have made for some of the most exciting, yet also worrying, times in tech.
A demo of the OrCam MyEye 2.0 was one of the highlights at the AbilityNet/RNIB TechShare Pro event in November. This small device, an update to the MyEye released in 2013, clips onto any pair of glasses and provides discreet audio feedback about the world around the wearer. It uses state-of-the-art image recognition to read signs and documents and to recognise people, and it does not require an internet connection. It's just one of many apps and devices that are using the power of artificial intelligence (AI) to transform the lives of people who are blind or have sight loss.
Trunk Club, an apparel subscription service owned by Nordstrom, has increased Pinterest engagement by more than 100 percent in recent months by embedding artificial intelligence into its digital-image marketing. AI image search is central to its ambition of building a data-science model that drives highly relevant product offerings to consumers across the social web. More broadly, the effort speaks to how AI search is quickly becoming the retail sector's next big digital shopping experience. "We are trying to understand how one pair of jeans plays out against another pair that was released in another season," explained Justin Hughes, VP of product development and design at Trunk Club. "We want to get really granular and understand what really works."
Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils. I open the keynote with Alan Turing's famous question, "Can a machine think?", and explain that thinking is not just the conscious reflection of Rodin's Thinker but also the largely unconscious thinking required to make a pot of tea. I note that at the dawn of AI, 60 years ago, we believed the former kind of thinking would be really difficult to emulate artificially and the latter easy. In fact, it has turned out to be the other way round: we have had computers that can expertly play chess for 20 years, but we still can't build a robot that could go into your kitchen and make you a cup of tea. In slides 5 and 6 I suggest that we all assume a cat is smarter than a crocodile, which is smarter than a cockroach, placing them on a linear scale running from not very intelligent up to human intelligence.