Chatbots that are able to connect with people emotionally are a fairly new concept. The more primitive machines of the past were not capable of conveying emotion. Recently, the focus has shifted toward creating bots that can not only hold real conversations with people but connect with them on a deeper level than previously thought possible. This idea has many practical uses in areas like counseling and therapy. In fact, these so-called "empathetic" bots have already demonstrated unexpected therapeutic value.
When we think about artificial intelligence and human speech, we typically think of personal assistants like Siri and Alexa. That technology is still in its infancy. What if the real potential of AI is not to make computers or apps sound more human, but instead to help humans be more humane and empathetic? An MIT spin-off company called Cogito has developed software that evaluates the subtle give-and-take of conversation--focusing not on what is said, but on how it is said. The technology monitors subtleties of speech such as tone, energy, vocal strain, and tempo.
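Cogito's actual methods are proprietary, but two of the speech features mentioned--energy and tempo--can be illustrated with a toy sketch. The snippet below (assumptions: 16 kHz audio, a synthetic waveform standing in for real speech, and energy-burst counting as a crude tempo proxy) computes short-time energy and a rough speaking rate using only NumPy.

```python
import numpy as np

def frame_rms(signal, frame_len=400, hop=160):
    """Short-time RMS energy: one value per 25 ms frame, 10 ms hop."""
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.array([
        np.sqrt(np.mean(signal[i * hop : i * hop + frame_len] ** 2))
        for i in range(n)
    ])

def speech_rate(energy, frames_per_sec=100.0, thresh=None):
    """Crude tempo proxy: count energy bursts (onsets) per second."""
    if thresh is None:
        thresh = energy.mean()
    voiced = energy > thresh
    # A rising edge (quiet -> loud) marks the start of a burst.
    onsets = np.sum(voiced[1:] & ~voiced[:-1])
    duration_s = len(energy) / frames_per_sec
    return onsets / duration_s

# Synthetic "speech": 2 seconds at 16 kHz containing 4 loud bursts.
sr = 16000
t = np.arange(2 * sr) / sr
envelope = (np.sin(2 * np.pi * 2 * t) < 0).astype(float)  # 4 bursts
signal = envelope * np.sin(2 * np.pi * 220 * t)

energy = frame_rms(signal)
print(round(speech_rate(energy), 1))  # ~2 bursts per second
```

A production system would replace the burst counter with proper pitch tracking and syllable-rate estimation, but the framing/energy structure is the same starting point most audio pipelines use.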
More than 80 years ago, George Orwell wrote his first book, a memoir called Down and Out in Paris and London. It chronicled his life on the margins of society, where he lived in poverty in these two cities by adopting the life of a tramp--or in modern terms, an indigent man. In this investigative exploration, Orwell immersed himself in a world he did not know so he could write authentically about an experience other writers had only observed. Flash forward to today, where more than 42 million Americans caring for a loved one over age 50 are also living on the margins--as an overwhelmed, overlooked part of our society. Family caregivers and their charges often feel all alone.
In our earlier posts we've discussed, and demonstrated, empathy's growing importance in artificial intelligence. The next questions to ask are, "How do we make AI empathetic?", "How do we build emotion into our AIs?" and "Can we ever make AI feel?" At Kairos, we believe the answer to the "how" question is in face analysis. Facial recognition allows software to identify and verify human faces, while emotion analysis allows software to measure and read the emotions on those detected faces. More importantly, facial recognition and emotion analysis look at each user as an individual and capture their specific human data.
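The two-step structure described above--detect a face, then score its emotions, keeping each user's data separate--can be sketched in outline. Everything below is hypothetical: the `FaceReading` and `UserProfile` types and the hard-coded scores are stand-ins, not Kairos's actual API or models.

```python
# Hypothetical sketch of a face-analysis pipeline: a detector/scorer
# (not shown) produces per-emotion confidences for each face, and a
# per-user profile accumulates them individually over time.
from dataclasses import dataclass, field

@dataclass
class FaceReading:
    user_id: str
    emotion_scores: dict  # emotion name -> confidence in [0, 1]

    @property
    def dominant_emotion(self):
        return max(self.emotion_scores, key=self.emotion_scores.get)

@dataclass
class UserProfile:
    """Captures each user's emotion data individually."""
    user_id: str
    history: list = field(default_factory=list)

    def record(self, reading: FaceReading):
        self.history.append(reading.dominant_emotion)

# Simulated output of the (hypothetical) detection + analysis steps.
reading = FaceReading("user-42", {"joy": 0.81, "surprise": 0.12, "anger": 0.03})
profile = UserProfile("user-42")
profile.record(reading)
print(reading.dominant_emotion, profile.history)  # joy ['joy']
```

The point of the per-user profile is the claim in the text: emotion analysis is not aggregate--each individual's readings are captured and tracked separately.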
The relationship between humans and robots is a tricky thing. If the latter looks too much like the former, but is still clearly a machine, people think it's creepy, even repulsive--a feeling that's become known as the "uncanny valley." Or, as is sometimes the case, the human, with "Star Wars" or "The Jetsons" as his or her reference points, is disappointed by all the things the robot can't yet do. Then, there is the matter of job insecurity--the fear of one day being replaced by a tireless, unflappable, unfailingly consistent device. Human-robot interactions can be even more complicated for one group in particular--older adults.