In 2017, the predictive ability of artificial intelligence (AI) powered many new tools and platforms. So what does 2018 have in store for AI? I asked some marketers to find out. Gregg Johnson, CEO of Invoca, a call tracking and analytics service, says that 2018 will be "the year the voice trend becomes undeniable." "As people increasingly trade typing for talking, we'll see more companies invest in developing for voice interfaces," Johnson said.
In this Q&A on Explainable AI, Andrea Brennen speaks with In-Q-Tel's Peter Bronez about descriptive vs. prescriptive models, "white box" vs. "black box" explanation techniques, and why some models are easier to explain than others. Peter also discusses the reproducibility crisis in psychology and why good experiment design is so important. Peter is a VP on the technical staff at IQT. ANDREA: Could you tell me about your experience with machine learning and AI? PETER: As an undergraduate, I studied econometrics and operations research, so my exposure to machine learning was in the context of designing models of the world that you could test mathematically -- basically, doing hypothesis testing using statistics. Afterwards, I worked at the Department of Defense and used a lot of the same techniques. From there, I went to the private sector and [worked on] social media and data mining in marketing applications, trying to create mathematical models to categorize people, activities, and messages in order to understand them better.
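The white-box vs. black-box distinction Peter draws can be sketched in a few lines of Python. The toy data, the linear model, and the shuffle-based importance test below are illustrative assumptions of mine, not anything from the interview: a white-box model (here, linear regression) is its own explanation because its fitted coefficients state each feature's contribution directly, while a black-box technique treats the model as an opaque predictor and probes it from the outside, here with a crude permutation-importance check.

```python
import numpy as np

# Hypothetical toy data: two features, only the first drives the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

# "White box": fit a linear model; its coefficients ARE the explanation.
# coef[0] should land near 3.0 and coef[1] near 0.0.
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
print("coefficients:", coef[:2])

def predict(X):
    """Treat the fitted model as an opaque predictor."""
    return np.c_[X, np.ones(len(X))] @ coef

# "Black box" style explanation: measure how much shuffling each feature
# degrades predictions (a minimal permutation-importance sketch).
base_err = np.mean((predict(X) - y) ** 2)
importances = []
for j in range(2):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(np.mean((predict(Xp) - y) ** 2) - base_err)
print("permutation importances:", importances)
```

Both views agree here: the coefficient on the first feature is large and so is its permutation importance. The point of black-box techniques is that the second probe still works when the model is a neural network or an ensemble whose internals offer no readable coefficients.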
It's a somewhat ironic tale: in the midst of digital transformation, artificial intelligence (AI) is in a position to take over many functions of people services--the business of human resources. After all, many companies have been automating their job application processes for quite a while. But as the sophistication of AI continues to ramp up, AI will play an even more significant role in recruitment and talent acquisition. This is the future of AI and HR. If you're anything like most people, you have your doubts about how well a machine could select a human for a certain position.
Every year the Loebner Prize for artificial intelligence is awarded to the chatbot software able to converse most like a human. It is a version of the Turing test, proposed in 1950 by Alan Turing. A program passes when a human judge cannot tell that they are talking to a machine. No machine has yet passed. But the winner of the Loebner Prize at the weekend – Elbot, brainchild of Fred Roberts at Artificial Solutions in Germany – came close, according to the contest's rather generous rules.
IBM Research today introduced AI Explainability 360, an open source collection of state-of-the-art algorithms that use a range of techniques to explain AI model decision-making. The launch follows IBM's release a year ago of AI Fairness 360 for the detection and mitigation of bias in AI models. IBM is sharing its latest toolkit to increase trust in and verification of artificial intelligence, and to help businesses that must comply with regulations use AI, IBM Research fellow and responsible AI lead Saska Mojsilovic told VentureBeat in a phone interview. "That's fundamentally important, because we know people in organizations will not use or deploy AI technologies unless they really trust their decisions. And because we create infrastructure for a good part of this world, it is fundamentally important for us -- not because of our own internal deployments of AI or products that we might have in this space, but it's fundamentally important to create these capabilities because our clients and the world will leverage them," she said.