AI trained to win at poker learned to bluff, handling missing and potentially fake, misleading information. Machine learning (ML), a subset of AI, makes machines learn from experience, from examples of the real world: the more data it sees, the more it learns. Each method may make different errors, so averaging the results of several methods can, at times, outperform any single one. So perhaps it is the "smaller" AI that should claim the human brain is not real intelligence, but only brute-force computation.
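The point about averaging can be made concrete. Below is a minimal sketch (the predictions of the three hypothetical classifiers are invented for the example) showing how a majority vote can be right on every sample even though each individual model is wrong on a third of them:

```python
import numpy as np

# Ground truth for six samples (binary labels).
truth = np.ones(6, dtype=int)

# Three hypothetical classifiers, each wrong on a *different* pair of samples.
preds = np.array([
    [0, 0, 1, 1, 1, 1],  # model A: errs on samples 0-1
    [1, 1, 0, 0, 1, 1],  # model B: errs on samples 2-3
    [1, 1, 1, 1, 0, 0],  # model C: errs on samples 4-5
])

individual_acc = (preds == truth).mean(axis=1)      # each model: 4/6 correct
ensemble = (preds.mean(axis=0) >= 0.5).astype(int)  # majority vote per sample
ensemble_acc = (ensemble == truth).mean()           # vote is correct everywhere
```

The trick only works because the models' errors are not identical; when every model makes the same mistakes, averaging buys nothing.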
That's why we invited Igor Mikhalev from Firmshift, a data-driven technology development company, to answer a few questions about machine learning and AI. In the short to mid term, I believe the focus will be on the ownership, sufficiency, and readiness of data, as well as on the organizational capability to nurture the creative process of working with internal and external data in the context of cross-functional business (model) innovation, supported by machine learning technology. Once you've achieved first results, build awareness and a clear business case to establish a data science competency, and work with HR to start nurturing a data-driven culture that underscores its importance and usefulness. Designing AI would entail helping business, science, and engineering teams to think creatively, drive the cross-functional business (model) innovation process, challenge conventional wisdom, and become aware of the differences imposed by each other's thinking realms.
There's a lot of hype over machine learning and data science these days. Machine learning and data science strategies need to be well thought out and planned. Machine learning can help you move past a generic authentication and access security analysis model, toward a user and entity behavioral analytics (UEBA) model. And it can help control costs by giving executives and product management the information they need to make informed, strategic business decisions.
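As an illustration of the behavioral-analytics idea, here is a deliberately simplified sketch (the login counts and the 3-sigma threshold are invented for the example): rather than applying one generic rule to everyone, each user's activity is compared against that user's own historical baseline, and a large deviation is flagged.

```python
import numpy as np

# Hypothetical per-day login counts for one user; the last value is today's.
logins = np.array([5, 6, 5, 4, 6, 5, 5, 6, 4, 5, 6, 5, 5, 42])

baseline = logins[:-1]                    # the user's own history
mean, std = baseline.mean(), baseline.std()

# Flag today's behavior if it sits far outside the user's baseline.
z_score = abs(logins[-1] - mean) / std
is_anomaly = z_score > 3.0
```

Real UEBA systems model many signals at once (times, locations, resources accessed), but the per-entity baseline is the core idea.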
AI, neural networks, machine learning, and other buzzwords are not new; they have been with us since the late 1950s. So why did they become such a trend only now? The business focus shifted from investing in so-called "artificial intelligence" to developing systems that could work with already-gathered data, processing and restructuring it. Bayesian methods were widely used in anti-spam filtering, Markov chains predicted the behavior of criminal networks, search engines built decision trees to predict user input, speech and image recognition were no miracle anymore, and it was good. Basically, we have returned to the 1950s -- we are trying to create universal structures, mimic the human brain, and create entities that can process mixed data as our brains do.
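To make the anti-spam example concrete, here is a minimal naive Bayes sketch (the toy corpus and the word-level model are invented for illustration; real filters are trained on large labeled mail archives):

```python
from collections import Counter
import math

# Toy training corpus: a few labeled messages per class.
spam = ["win money now", "free money win"]
ham = ["meeting at noon", "project notes attached"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_c, ham_c = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_c.values()), sum(ham_c.values())
vocab = set(spam_c) | set(ham_c)

def log_likelihood(msg, counts, total):
    # Laplace smoothing: unseen words get a small count instead of zero,
    # so a single new word can't zero out the whole probability.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def classify(msg):
    # Compare unnormalized log-probabilities under each class.
    # Class priors are equal here (two messages each), so they cancel.
    s = log_likelihood(msg, spam_c, spam_total)
    h = log_likelihood(msg, ham_c, ham_total)
    return "spam" if s > h else "ham"
```

For example, `classify("free money")` leans spam because both words are frequent in the spam corpus, while `classify("meeting notes")` leans ham.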
Since the dawn of computing, developers have created programs and algorithms by writing code that machines translate into precise instructions. With machine learning, instead of hand-coding the way a program solves a problem, the program "learns" to solve it on its own. Not unlike the human brain, an unsupervised AI would recognize new patterns, label them on its own, and classify them without prior human input. Per MIT Technology Review, Quoc Le (one of Google Brain's research scientists) has identified "unsupervised learning" as the biggest challenge in developing true AI that can learn without the need for labeled data.
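A minimal sketch of the unsupervised idea, using k-means clustering on synthetic data (the two-blob data, the choice of two clusters, and the simple extremes-based initialization are assumptions made for the example; practical implementations usually use k-means++ initialization): the algorithm groups the points without ever seeing a label.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated synthetic blobs; no labels are given to the algorithm.
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                  rng.normal(5.0, 0.5, (50, 2))])

def kmeans(x, iters=10):
    # Deterministic toy init: the two points at the extremes of the first axis.
    centers = x[np.argsort(x[:, 0])[[0, -1]]]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for i in range(2):
            pts = x[labels == i]
            if len(pts):
                centers[i] = pts.mean(axis=0)
    return labels, centers

labels, centers = kmeans(data)
```

The cluster labels the algorithm invents align with the two blobs, even though it was never told they exist; this is exactly the "label them on its own" behavior described above, in its simplest form.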
Edwin Van Bommel, Chief Cognitive Officer at IPsoft, tells industry analyst Michael Krigsman of CXOTALK at the IPsoft Digital Workforce Summit in New York how the Amelia AI platform can solve problems for customers. Van Bommel explains that Amelia needs three things: data to understand the client's needs, data to solve those problems, and analytics to make the AI experience even smarter. I'm in New York City at the IPsoft Digital Workforce Summit, and I'm speaking with Edwin Van Bommel, who is the Chief Cognitive Officer at IPsoft.
Use cases for this first kind of AI include autonomous cars, robots, chatbots, trading systems, facial recognition, and virtual assistants. To be clear, any apocalyptic scenario involving autonomous weapons systems would be initiated by humans. True intelligence moves past simple ideas like goal-seeking, which is often considered another cornerstone of AI at varying levels, as well as a potential control mechanism. The basic ideas of self-defense and self-preservation, combined with a knowledge of human history, seem to lead inevitably to a bad situation for humans.
Marketers are learning to expect platforms that offer important insights pulled from layers of hidden data, make predictions about customers, and know how to see a world of images, objects, and sounds. So we've seen a boom in efforts to make advertising more direct and transparent, such as the increasingly popular header-bidding trend. In the past few months, for instance, people-based marketing was extended in a LiveRamp-based consortium, in Time Inc./Viant's marketing platform, and in a new publisher consortium from Sonobi. The same transparency urge behind header bidding and people-based marketing -- understanding what the deal is and who you're dealing with, whether marketer or customer -- is similarly driving the General Data Protection Regulation (GDPR), a European Union consumer privacy initiative that could have a significant impact in the US and elsewhere.
But researchers in AI, and in related fields such as learning analytics, are also thinking about how AI can provide more effective feedback to students and teachers. One such approach is intelligence amplification: the use of technology, including AI, to provide people with information that helps them make better decisions and learn more effectively. So, for instance, rather than focusing on automating the grading of student essays, some researchers are focusing on how they can provide intelligent feedback that helps students better assess their own writing. Intelligence amplification helps counteract concerns about automation by keeping people in the loop.
Artificial intelligence today is making exponential progress and opening new horizons in the possibilities it offers us. While some view it as a chance to eradicate hunger and solve illiteracy and poverty, others are concerned about a future where machines surpass human intelligence and make decisions that endanger our safety and survival. The singularity is a hypothetical moment in time at which artificial intelligence surpasses human intelligence, resulting in unforeseeable changes to human civilization. The fear here is the development of something that surpasses humans: an artificially intelligent machine that would then proceed to redesign and improve itself at an alarming rate, perhaps even perceiving actions against it as threats and defending itself.