If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Intel has an ambition to bring more artificial intelligence technology into all aspects of its business, and today it is stepping up its efforts in the area with an acquisition. The computer processing giant has acquired Vertex.AI, a startup on a mission to enable "deep learning for every platform," which had built a deep learning engine called PlaidML to that end. Terms of the deal have not been disclosed, but Intel has provided us with the following statement, confirming the deal and that the whole team -- including founders Choong Ng and Brian Retford -- will be joining Intel: "Intel has acquired Vertex.AI, a Seattle-based startup focused on deep learning compilation tools and associated technology. The seven-person Vertex.AI team joined the Movidius team in Intel's Artificial Intelligence Products Group."
That patent, awarded April 25, 1961, recognizes Robert Noyce as the inventor of the silicon integrated circuit (IC). Integrated circuits forever changed how computers were made while adding power to a process of another kind: the growth of a then-nascent field called artificial intelligence (AI). And the potential of Noyce's invention truly took flight when he and Gordon Moore founded Intel on July 18, 1968. Fifty years later, the "eternal spring" of artificial intelligence is in full swing. To understand how we arrived here, the story in a nutshell is this: the rise of artificial intelligence is intertwined with the history of faster, more robust microprocessors.
Many businesses are beginning to rely on large-scale data analytics for greater insights into their customers' behavior and their business requirements. Simplifying the process so that a wider range of employees can draw conclusions from the massive amounts of data is important and can lead to more profits and better customer service. Harp-DAAL is a framework developed at Indiana University that brings together the capabilities of big data (Hadoop) and techniques that have previously been adopted for high performance computing. With these combined capabilities, employees can become more productive and gain deeper insights into massive amounts of data. Modern analytics systems are clusters of independent machines that must be synchronized in order to make sense of all of the data.
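The synchronization step mentioned above can be illustrated with a toy sketch: each node computes a partial result on its own data shard, and the partials are combined and shared back (an "allreduce"-style collective) so every node sees the global result. This is an illustrative simulation, not Harp-DAAL's actual API; all names here are assumptions.

```python
# Toy simulation of cluster synchronization: each node holds a shard,
# computes locally, then the partial results are combined and broadcast
# back so every node agrees on the global total.

def allreduce_sum(partials):
    """Combine per-node partial sums and return the total to every node."""
    total = sum(partials)
    return [total] * len(partials)

# Three nodes each hold one shard of the data set.
shards = [[1, 2, 3], [4, 5], [6]]
partials = [sum(shard) for shard in shards]   # local computation per node
print(allreduce_sum(partials))                # every node gets 21
```

In a real framework the combine step runs over the network (e.g. via MPI-style collectives), but the shape of the computation — local work, then a synchronized reduction — is the same.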
Well-known technology companies including Google, Facebook and Microsoft are making strides in artificial intelligence. But with the rare exception of skunkworks projects from big companies, most artificial intelligence (AI) work will happen on silicon designed by chip companies, as has been the case with computing for decades. Nvidia is the leader in both AI compute performance and in mindshare with developers. Intel, still the biggest technology provider for servers and enterprise computing, knows that it needs to accelerate its development in AI and a subset called machine learning, or risk losing out on the largest growth opportunity in enterprise computing of the past 10 years. Nvidia CEO Jensen Huang has successfully shifted the company from a gaming and graphics provider to an AI company.
There is no denying the fact that Artificial Intelligence (AI) is one phenomenon that has stood out among other emerging technologies. Sensing great possibilities, global chip giant Intel has now joined the AI bandwagon in a big way. AI is not new to the world of technology but the past five years have given AI believers a reason to cheer as its uses are increasing across industries – from health care to autonomous vehicles – say AI experts at Intel. "AI capabilities are greatly supplementing humans to do great work in less time in sectors like healthcare, banking and finance, transport, energy and robotics, etc. It will be interesting to see how this whole AI thing evolves with time," Bob Rogers, Data Scientist, AI and Analytics, Data Center Group at Intel, told IANS here.
In a blog post today, Intel (NASDAQ:INTC) CEO Brian Krzanich announced the Nervana Neural Network Processor (NNP). The Intel Nervana NNP promises to revolutionize AI computing across myriad industries. Using Intel Nervana technology, companies will be able to develop entirely new classes of AI applications that maximize the amount of data processed and enable customers to find greater insights – transforming their businesses... We have multiple generations of Intel Nervana NNP products in the pipeline that will deliver higher performance and enable new levels of scalability for AI models. This puts us on track to exceed the goal we set last year of achieving 100 times greater AI performance by 2020.
Imagine a future where complex decisions are made faster and adapt over time. Where societal and industrial problems can be autonomously solved using learned experiences. It's a future where first responders using image-recognition applications can analyze streetlight camera images and quickly resolve missing-person or abduction cases. It's a future where stoplights automatically adjust their timing to sync with the flow of traffic, reducing gridlock and optimizing starts and stops. It's a future where robots are more autonomous and dramatically more efficient.
Lots of tech companies, including Apple, Google, Microsoft, NVIDIA and Intel itself, have created chips for image recognition and other deep-learning chores. However, Intel is also taking another tack with an experimental chip called "Loihi." Rather than relying on raw computing horsepower, it uses an old-school, as-yet-unproven type of "neuromorphic" tech that's modeled after the human brain. Intel has been exploring neuromorphic tech for a while, and even designed a chip in 2012. Instead of logic gates, it uses "spiking neurons" as its fundamental computing unit.
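To make the "spiking neuron" idea concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the textbook model behind neuromorphic designs. This is an illustration of the general concept, not Intel's actual Loihi circuitry; the threshold and leak values are assumptions chosen for the demo.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: instead of a logic gate
# computing on every clock tick, the unit accumulates input over time and
# emits a discrete spike only when its membrane potential crosses a
# threshold, then resets. Parameter values are illustrative assumptions.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate a sequence of input currents; return 1 at each time step
    where the neuron spikes, 0 otherwise."""
    potential = 0.0
    spikes = []
    for i in input_current:
        potential = potential * leak + i  # leaky integration of input
        if potential >= threshold:
            spikes.append(1)              # fire a spike
            potential = 0.0               # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant weak input accumulates until the neuron fires periodically.
print(simulate_lif([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because such neurons are event-driven (they do work only when spikes arrive), neuromorphic hardware built around them can be far more power-efficient than a processor that evaluates every unit on every cycle.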
I was wrong to say that Intel (INTC) doesn't need GPUs to compete with Nvidia (NVDA) on artificial intelligence/deep learning computing. Further research told me that along with FPGAs (Field-Programmable Gate Arrays), there's embedded Intel Processor Graphics for deep learning inference, a concept Intel discussed only last May. Nvidia's GPUs can serve as the training engine for deep learning computers, while Intel's FPGAs and embedded Processor Graphics could be the go-to hardware accelerators for inference computing.
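The training-versus-inference split described above can be sketched with a toy model: training repeatedly updates weights with gradients (the compute-heavy phase where GPUs shine), while inference is a fixed forward pass (cheap, repetitive arithmetic well suited to FPGAs or integrated graphics). This is a hedged illustration with a one-parameter linear model, not any vendor's actual pipeline; all function names are assumptions.

```python
# Toy illustration of the two phases of deep learning workloads.

def train(xs, ys, lr=0.1, epochs=200):
    """Training phase: fit y = w*x by gradient descent on squared error.
    Iterative weight updates make this the compute-heavy phase."""
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def infer(w, x):
    """Inference phase: a fixed forward pass with frozen weights --
    the kind of repetitive arithmetic accelerators handle well."""
    return w * x

w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # learns w close to 2
print(round(infer(w, 5.0), 2))               # -> 10.0
```

The practical upshot of the article's point is that a model trained once on one class of hardware can then be deployed for inference on a different, cheaper or lower-power class of hardware.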