The charge of the chatbots: how do you tell who's human online?
Alan Turing's famous test of whether machines could fool us into believing they were human – "the imitation game" – has become a mundane, daily question for all of us. We are surrounded by machine voices, and think nothing of conversing with them – though each time I hear my car tell me where to turn left I am reminded of my grandmother, who, having installed a telephone late in life, used to routinely say goodnight to the speaking clock.

We find ourselves locked into interminable text chats with breezy automated bank tellers and offer our mother's maiden name to a variety of robotic speakers that sound plausibly alive. I've resisted the domestic spies of Apple and Amazon, but one or two friends jokingly describe the rapport they and their kids have built up with Amazon's Alexa or Google's Home Hub – and they are right about that: the more you tell your virtual valet, the more you disclose of wants and desires, the more speedily it can learn and commit to memory those last few fragments of your inner life you had kept to yourself.

As the line between human and digital voices blurs, our suspicions are raised: who exactly are we talking to? No online conversation or message-board spat is complete without its doubters: "Are you a bot?" Or, the contemporary door-slam: "Bot: blocked!" Those doubts will only increase. The ability of bots – a term which can describe any automated process present in a computer network – to mimic human online behaviour and language has developed sharply in the past three years.
Nov-18-2018, 08:59:56 GMT