If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
These technologies would not exist today without the sustained federal support of fundamental AI research over the past three decades. (This article was written for inclusion in the booklet "Computing Research: A National Investment for Leadership in the 21st Century," available from the Computing Research Association, cra.org/research.impact.) Early work in AI focused on using cognitive and biological models to simulate and explain human information processing skills, on "logical" systems that perform commonsense and expert reasoning, and on robots that perceive and interact with their environment. This early work was spurred by visionary funding from the Defense Advanced Research Projects Agency (DARPA) and the Office of Naval Research (ONR), which began on a large scale in the early 1960s and continues to this day. By the early 1980s an "expert systems" industry had emerged, and Japan and Europe dramatically increased their funding of AI research.
As 2017 comes to a close, I have been mulling over what deserves the title of "Technology of the Year." Clearly, Artificial Intelligence (AI) is the winner! Quite a few terms are used interchangeably when discussing AI, including deep learning, machine learning, neural networks, graph theory, random forests, and so on. AI is the broad subject: it describes how intelligence is achieved through machine learning, using algorithmic approaches such as graph theory, neural networks, and random forests. Deep learning is a specialized form of machine learning that stacks many layers of learned representations, so that each layer builds on the features extracted by the one before it. I first worked on Artificial Intelligence during my final semester of engineering school.
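The "multi-layer" idea behind deep learning can be made concrete with a minimal sketch: a two-layer forward pass in NumPy, where the second layer operates on features computed by the first. The weights and sizes here are arbitrary illustrations, not a real trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Common nonlinearity between layers; without it, stacked
    # layers would collapse into a single linear transformation.
    return np.maximum(0.0, x)

# Illustrative weights for a tiny two-layer network:
# 4 input features -> 8 hidden units -> 3 outputs.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    h = relu(x @ W1 + b1)   # first layer learns intermediate features
    return h @ W2 + b2      # second layer maps those features to outputs

x = rng.normal(size=(5, 4))  # a batch of 5 samples
y = forward(x)
print(y.shape)               # (5, 3)
```

"Deep" networks simply extend this pattern to many such layers, with the weights learned from data rather than sampled at random.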
Google, a pioneer in AI, has been focusing on four key components - computing, algorithms, data and expertise -- to organise all the data and make it accessible. Google as a company has always been at the forefront of computing AI," Fei-Fei Li, Chief Scientist of Google Cloud AI and ML, told reporters during a press briefing. Earlier this year, Google announced the second-generation Tensor Processing Units (TPUs) (now called the Cloud TPU) at the annual Google I/O event in the US. The company offers computing power including graphics processing unit (GPUs), central processing units (CPUs) and tensor processing units (TPUs) to power machine learning.
With the Industry 4.0 factory automation trend catching on, data-driven artificial intelligence promises to create cyber-physical systems that learn as they grow, predict failures before they impact performance, and connect factories and supply chains more efficiently than we could ever have imagined. To avoid IIoT digital exhaust and preserve the potential latent value of IIoT data, enterprises need to develop long-term IIoT data retention and governance policies that will ensure they can evolve and enrich their IoT value proposition over time and harness IIoT data as a strategic asset. A practical compromise IoT architecture must first employ some centralized (cloud) aggregation and processing of raw IoT sensor data for training useful machine learning models, followed by far-edge execution and refinement of those models. A multi-tiered architecture (involving far-edge, private cloud and public cloud) can provide an excellent balance between local responsiveness and consolidated machine learning, while maintaining privacy for proprietary data sets.
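The cloud-trains/edge-executes split described above can be sketched in a few lines. This is a toy illustration under stated assumptions: the "model" is just an anomaly threshold learned from aggregated readings, and the class and method names (`CloudTier`, `EdgeTier`, `ingest`, `check`) are hypothetical, not any real IIoT API.

```python
import statistics

class CloudTier:
    """Aggregates raw sensor data and trains a simple threshold model."""
    def __init__(self):
        self.raw = []

    def ingest(self, readings):
        self.raw.extend(readings)

    def train(self):
        mean = statistics.mean(self.raw)
        stdev = statistics.stdev(self.raw)
        # The "model" is just an upper anomaly bound (mean + 3 sigma).
        return {"upper": mean + 3 * stdev}

class EdgeTier:
    """Executes the trained model locally; raw data stays on-site."""
    def __init__(self, model):
        self.model = model

    def check(self, reading):
        return reading > self.model["upper"]

# Centralized aggregation and training in the cloud tier...
cloud = CloudTier()
cloud.ingest([10.0, 10.2, 9.8, 10.1, 9.9, 10.0])
model = cloud.train()

# ...then far-edge execution of the trained model.
edge = EdgeTier(model)
print(edge.check(10.1))   # normal reading -> False
print(edge.check(25.0))   # anomalous spike -> True
```

A real multi-tiered deployment would push model updates from cloud to edge periodically and feed summarized edge observations back for retraining, but the division of labour is the same.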
In 1965, Gordon Moore observed that the number of components (transistors) on an integrated circuit had doubled every year since the invention of the first IC in 1958. When Intel, the pioneer of chip development, adopted Moore's law as a guiding principle for advancing computing power, the rest of the semiconductor industry followed suit, and through constant advancement the electronics industry benefited from Moore's approach to processor design for some 50 years. Today, the ambition is to design artificial intelligence hardware that approaches the intelligence of the human brain.
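The exponential arithmetic behind Moore's observation is easy to make explicit. The sketch below projects transistor counts under a given doubling period; the figures are illustrative (the Intel 4004's roughly 2,300 transistors is a commonly cited starting point).

```python
def transistors(initial, years, doubling_period=2.0):
    """Project a transistor count under Moore's-law exponential doubling."""
    return initial * 2 ** (years / doubling_period)

# Moore's original 1965 observation: doubling every year.
# 1958 -> 1965 is 7 doublings, i.e. a 128x increase.
print(transistors(1, 7, doubling_period=1.0))

# The later, commonly cited form: doubling every two years,
# projected 50 years forward from the ~2,300-transistor Intel 4004 (1971).
print(f"{transistors(2_300, 50):.2e}")
```

Fifty years of two-year doublings is a factor of 2^25 (about 33 million), which is why the trend could not continue indefinitely on conventional silicon.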
Workflow monitoring and diagnosis can be a complex process involving sophisticated, computationally intensive operations. Ever-growing data generation and utilisation have increased the complexity of workflow domains, leading to growing interest in distributed approaches for efficient workflow monitoring. Existing work has proposed a case-based reasoning (CBR) enhancement to tackle deficiencies in areas where data volumes increase significantly; there, the notion of a "data volume" component was introduced into an enhanced CBR architecture. This work goes further by evaluating a proposed distributed CBR lifecycle based on GPU programming, to test the hypothesis that increased data volumes can be handled efficiently using distributed case bases and on-demand processing. Our proposed approach is evaluated against previous work and shows promising speedup gains. The paper concludes by signposting future research areas in distributed CBR paradigms.
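The core idea of a distributed case base, retrieval fanned out over partitions and the local winners merged, can be sketched without GPUs. This is a minimal illustration, not the paper's implementation: the similarity measure and the use of a thread pool (standing in for GPU kernels) are assumptions made for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def similarity(a, b):
    """Illustrative similarity: negated squared Euclidean distance."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def retrieve_local(partition, query):
    """Best-matching case within one partition of the case base."""
    return max(partition, key=lambda case: similarity(case, query))

def retrieve_distributed(partitions, query):
    """Fan retrieval out across partitions, then merge the local winners."""
    with ThreadPoolExecutor() as pool:
        local_best = pool.map(lambda p: retrieve_local(p, query), partitions)
    return max(local_best, key=lambda case: similarity(case, query))

# Two partitions of a toy case base (each case is a feature vector).
partitions = [
    [(1.0, 1.0), (4.0, 4.0)],
    [(2.0, 2.1), (9.0, 9.0)],
]
print(retrieve_distributed(partitions, query=(2.0, 2.0)))  # (2.0, 2.1)
```

Because each partition is searched independently, retrieval cost scales with the largest partition rather than with the whole case base, which is the speedup the distributed hypothesis relies on.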
This is the third installment in a four-part review of 2016 in machine learning and deep learning. In Part One, I covered top trends in the field, including concerns about bias, interpretability, deep learning's explosive growth, the democratization of supercomputing, and the emergence of cloud machine learning platforms. In Part Two, I surveyed significant developments in open source machine learning projects, such as R, Python, Spark, Flink, H2O, TensorFlow, and others. In this installment, we will review the machine learning and deep learning initiatives of big tech brands -- industry leaders with big budgets for software development and marketing.
NVIDIA (NASDAQ:NVDA) is primarily known as the company that revolutionized computer gaming. The debut of the Graphics Processing Unit (GPU) in 1999 gave gamers faster, clearer, and more lifelike images. The GPU was designed to quickly perform the complex mathematical calculations needed to render realistic graphics. It achieved this feat by performing many operations at the same time, a technique known as parallel computing. The result was faster, smoother motion in game graphics and a revolution in modern gaming.
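The difference between one-at-a-time and many-at-once computation can be shown with a simple sketch. NumPy's vectorized operations are only a loose stand-in for GPU data parallelism, but the principle is the same: apply one arithmetic operation across a large array of elements in a single step instead of looping.

```python
import numpy as np

# Illustrative data-parallel workload: scale and offset a million values,
# the kind of per-vertex/per-pixel arithmetic a GPU applies to all
# elements simultaneously.
positions = np.arange(1_000_000, dtype=np.float64)

# Serial style: one element at a time (what a scalar loop does).
serial = [0.5 * p + 1.0 for p in positions[:4]]

# Data-parallel style: the whole array in one vectorized operation.
parallel = 0.5 * positions + 1.0

print(serial)        # [1.0, 1.5, 2.0, 2.5]
print(parallel[:4])  # [1.  1.5 2.  2.5]
```

On an actual GPU, thousands of cores each take a slice of the array, which is why workloads made of identical independent operations, graphics and, later, neural networks, map onto it so well.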
The release of Home from Google, Alexa from Amazon, and Cortana from Microsoft reveals the scale of investment in AI and points to the future of Natural Language Processing (NLP). As the cost of processing falls and computing power increases, analysing sensor data along the supply chain will become easier. As pattern detection over structured and unstructured data becomes a critical component of benchmarking, marketing, and competitive analysis, users who have traditionally depended on IT to build analytics will seek out data discovery tools to create their own pattern detection algorithms. For years, agile project management has focused on UI building and prototyping.
In particular, machine intelligence both permits and requires massively parallel processing, but the design of parallel processors and the methods of programming them are nascent arts. If we think of the Central Processing Unit (CPU) in your laptop as being designed for scalar-centric control tasks, and the Graphics Processing Unit (GPU) as being designed for vector-centric graphics tasks, then this new class of processor would be an Intelligence Processing Unit (IPU), designed for graph-centric intelligence tasks. Only a subset of machine intelligence is amenable to wide vector machines, and the high arithmetic precision required by graphics is far too wasteful for the probability processing of intelligence. The graph structure, by contrast, makes huge parallelism explicit: a massively parallel processor can work on many vertices and edges at the same time.
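What "graph-centric parallelism" means can be sketched concretely: in one step of a graph computation, every vertex update depends only on the previous values of its neighbours, so all updates can run simultaneously. The toy graph, the averaging rule, and the thread pool (standing in for a processor's parallel tiles) are all illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# A tiny graph: each vertex will be replaced by the mean of its neighbours.
values = {"a": 1.0, "b": 2.0, "c": 3.0}
neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}

def update(vertex):
    # Reads only the *previous* values of neighbouring vertices,
    # so every vertex update in this step is independent.
    nbrs = neighbours[vertex]
    return vertex, sum(values[n] for n in nbrs) / len(nbrs)

# All vertex updates run concurrently; the graph structure itself
# exposes the parallelism a graph-centric processor would exploit.
with ThreadPoolExecutor() as pool:
    values = dict(pool.map(update, values))

print(values)   # {'a': 2.0, 'b': 2.0, 'c': 2.0}
```

Scaled up to millions of vertices, this bulk-synchronous pattern is exactly the workload shape (many small, independent, low-precision updates) that the IPU argument says CPUs and GPUs serve poorly.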