The standard explanation goes like this: once we build a real human-level AI, that AI will improve itself exponentially over a short period of time. There is no reason to believe that more computing power alone means a smarter AI, though. The reductionist in me says that if we understand human-level intelligence well enough to build an AI, surely that AI can make some optimizations of its own. We imagine there being some logical program that runs, and we improve the intelligence by improving that program.
In the field of cybersecurity, reliance on machine learning amplifies the capabilities of the humans we have, making security teams better. There are many exciting things under development in McAfee Labs that will further leverage machine learning in these areas. While 75 percent of consumers believe it is very important to secure their online identities and connected devices, nearly half are uncertain whether they are taking the proper security steps.
And while AI is executing tasks that "require human intelligence," the tasks themselves – mass data analysis, translation, complex calculations, immediate responsiveness – are rarely ones that people are otherwise capable of, or willing, to perform. AI still requires humans to help it understand language and make subjective decisions for a business. At a recent Stanford University conference, Andy Slavitt, former acting director of the Centers for Medicare and Medicaid Services (CMS), said that the expansion of AI in healthcare is designed to address productivity concerns. As the healthcare industry undergoes several paradigm shifts – from fee-for-service to value-based care, from impersonal to precision medicine, from traditional to digital healthcare delivery – AI is becoming essential.
Big Data and analytics are capabilities computers provide, and they are used in building Enterprise AI systems of intelligent engagement. Enterprise AI requires a new class of technology, data, methods, and skills focused on a business-led portfolio of industry-optimized AIs, not just point solutions or science projects. Manoj Saxena is the Executive Chairman of CognitiveScale, an industry AI software company focused on delivering systems of intelligent engagement. Portfolio companies include SparkCognition, a cognitive security analytics company where Saxena is Chairman of the Board, and WayBlazer, a B2B AI platform for the travel industry where Saxena is a board member.
Before answering that question, it is important to point out that there is more than one category of AI technology: weak AI, artificial general intelligence, and strong AI. In fact, even weak AI performs some tasks as well as, if not better than, humans. Instead, we will use a few examples to explore how further advances in the field of AI could potentially affect employment. Is it possible that AI technology may end the historical trend of new technology creating new jobs?
A good rule of thumb at the moment is to mentally replace the words "artificial intelligence" with "machine learning," and educate yourself on the difference. Once you have made this little adjustment, it becomes much easier to distinguish between machine learning models that perform a single task (often more efficiently than any human ever could, but bounded by the parameters of that one task) and more general, "true" artificial intelligence, which should perform more like a human would, or at least that is what many people hope to achieve. I have been a real believer in spiking neural networks; even though their practical application is minimal at the moment, I do think they will mature and become highly efficient at performing tasks, maybe limited in scope, maybe more general. On the one hand, we have people looking into building "neural laces" and the like to augment human intelligence so it can keep up with the machines of the future, positing that human intelligence is limited in bandwidth; yet we want to model this supposedly inferior intelligence in machines for some reason.
IBM has applied AI to security in the form of its Watson "cognitive computing" platform. Within a decade, humans may well be interacting with lifelike, emotionally responsive AI robots, very similar to the premise of the HBO series Westworld and the film I, Robot. The coming generation of malware, which inevitably becomes part of any Internet-based ecosystem, will be situation-aware, meaning that it will understand the environment it is in and make calculated decisions about what to do next, behaving like a human attacker: performing reconnaissance, identifying targets, choosing methods of attack, and intelligently evading detection. Autonomous malware operates much like branch prediction technology, which is designed to guess which branch of a decision tree a transaction will take before it is executed.
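For readers unfamiliar with branch prediction, the analogy can be made concrete with the classic two-bit saturating counter, which guesses a branch's next outcome from its recent history. This is a minimal, illustrative sketch of the CPU technique itself, not of any actual malware:

```python
class TwoBitPredictor:
    """Classic 2-bit saturating counter: states 0-1 predict 'not taken',
    states 2-3 predict 'taken'. It takes two wrong guesses in a row to
    flip the prediction, which smooths over occasional deviations."""

    def __init__(self):
        self.state = 2  # start out weakly predicting 'taken'

    def predict(self):
        return self.state >= 2  # True means 'taken'

    def update(self, taken):
        if taken:
            self.state = min(self.state + 1, 3)
        else:
            self.state = max(self.state - 1, 0)

p = TwoBitPredictor()
# A made-up outcome history: mostly taken, then a run of not-taken.
history = [True, True, False, True, True, True, False, False, False]
hits = 0
for outcome in history:
    hits += (p.predict() == outcome)
    p.update(outcome)
print(f"{hits}/{len(history)} correct")  # prints "6/9 correct"
```

The point of the analogy: a predictor like this commits to a likely path before the outcome is known, just as situation-aware malware would commit to a next step before its environment confirms it.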
Now at least our deep learning systems can recognize sounds and pick cats out of a picture by figuring out the rules for themselves. But hide a little Snow Crash-style distortion in images and convolutional neural nets go from smart to real stupid, real quick. Machine learning and deep learning systems have zero higher reasoning or moral compass. People can and will learn to hack fraud detection classifiers, sentencing software, and more.
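The kind of hidden-distortion attack described above can be sketched with a toy example. The snippet below uses a plain logistic-regression "model" with made-up random weights rather than a real convolutional network, and applies a fast-gradient-sign-style perturbation; every weight and number in it is illustrative, not taken from any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "model": logistic regression with fixed random weights.
# (Hypothetical weights; a real attack would target a trained network.)
w = rng.normal(size=16)

def predict(x):
    """Probability the model assigns to class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# A clean input the model confidently labels as class 1.
x = w / np.linalg.norm(w)

# For true label y=1, the gradient of the loss w.r.t. the input is
# (p - 1) * w, so its sign is -sign(w). Stepping eps per feature in that
# direction is the fast-gradient-sign recipe. Here eps is chosen just
# large enough that the perturbed score is guaranteed to cross 0.5.
eps = 2 * np.linalg.norm(w) / np.abs(w).sum()
x_adv = x + eps * -np.sign(w)

print(predict(x), predict(x_adv))  # the small per-feature nudge flips the label
```

Each feature moves by the same small step, yet the classifier's verdict reverses: the same fragility that lets a speck of distortion fool an image model.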
There is no doubt that technological trends will have a powerful impact on global education, both by improving the overall learning experience and by increasing global access to education. But will robots and artificial intelligence replace teachers? Artificial intelligence has many different definitions, but in general it can be described as a machine completing complex tasks intelligently, meaning that it mirrors human intelligence and evolves over time. Good teaching requires complex social interactions and adaptation to the individual student's learning needs. Automating teaching is an example of a task that would require artificial general intelligence (as opposed to narrow or specific intelligence).
But when it comes to good judgment, AI is not smarter than the human brain that designed it. Many automated systems perform so poorly that you start wondering whether AI is an abbreviation for Artificial Innumeracy.