If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
From flying vehicles to smart buildings, this year's IoTSWC will bring the best industrial internet solutions to Barcelona. The Industrial Internet Consortium (IIC) is the leading organization promoting Industry 4.0 in the US. During the last five years, together with Fira Barcelona, it has been organizing the IoT Solutions World Congress (IoTSWC), the leading conference on industrial IoT. Dr. Richard Soley is the Executive Director of the Industrial Internet Consortium and is responsible for the vision and direction of the organization. In addition to this role, Dr. Soley is Chairman and CEO of the Object Management Group (OMG) – an international, nonprofit computer industry standards consortium – and Executive Director of the Cloud Standards Customer Council – an end-user advocacy group.
This book covers neural networks with special emphasis on advanced learning methodologies and applications. It includes practical issues of weight initialization, stalling of learning, and escape from local minima, which have not been covered by many existing books in this area. Additionally, the book highlights the important feature-selection problem, which baffles many neural network practitioners because of the difficulties of handling large datasets. It also contains several interesting IT, engineering, and bioinformatics applications.
I saw a video article on Neuromorphic Computing the other day - something I had not really heard much about, though it ties in heavily to Artificial Intelligence, which I, of course, do know about. Wow... the possibilities are now endless. This is what Techopedia says about Neuromorphic Computing: Neuromorphic computing utilizes an engineering approach or method based on the activity of the biological brain. This type of approach can make technologies more versatile and adaptable, and promote more vibrant results than other types of traditional architectures, for instance, the von Neumann architecture that is so useful in traditional hardware design. Neuromorphic computing is also known as neuromorphic engineering.
In recent years, researchers have proposed a wide variety of hardware implementations for feed-forward artificial neural networks. These implementations include three key components: a dot-product engine that can compute convolution and fully-connected layer operations, memory elements to store intermediate inter- and intra-layer results, and other components that can compute non-linear activation functions. Dot-product engines, which are essentially high-efficiency accelerators, have so far been successfully implemented in hardware in many different ways. In a study published last year, researchers at the University of Notre Dame in Indiana used dot-product circuits to design a cellular neural network (CeNN)-based accelerator for convolutional neural networks (CNNs). The same team, in collaboration with other researchers at the University of Minnesota, has now developed a CeNN cell based on spintronic (i.e., spin electronic) elements with high energy efficiency.
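To see why a single dot-product engine can serve both convolution and fully-connected layers, here is a minimal illustrative sketch (not from the study itself; all function names are my own): both layer types reduce to the same multiply-accumulate primitive, which is exactly what the hardware accelerates.

```python
# Illustrative sketch: fully-connected and convolution layers both
# reduce to one primitive, the dot product, which is what a hardware
# dot-product engine implements in high-efficiency analog or digital form.

def dot(a, b):
    # The core multiply-accumulate primitive the engine provides.
    return sum(x * y for x, y in zip(a, b))

def fully_connected(x, weights):
    # Each output neuron is one dot product of the input vector
    # with that neuron's weight row.
    return [dot(x, w) for w in weights]

def conv1d(x, kernel):
    # Each output sample is a dot product of the kernel with a
    # sliding window over the input (valid padding, stride 1).
    k = len(kernel)
    return [dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)]

if __name__ == "__main__":
    print(fully_connected([1.0, 2.0], [[0.5, 0.5], [1.0, -1.0]]))  # [1.5, -1.0]
    print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```

Since both operations funnel through `dot`, a hardware designer only needs to make that one primitive fast and energy-efficient; the CeNN-based and spintronic approaches described above are two ways of doing so.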
As 5G networks continue to expand in cities and countries across the globe, key researchers have already started to lay the foundation for 6G deployments roughly a decade from now. This time, they say, the key selling point won't be faster phones or wireless home internet service, but rather a range of advanced industrial and scientific applications -- including wireless, real-time remote access to human brain-level AI computing. That's one of the more interesting takeaways from a new IEEE paper published by NYU Wireless's pioneering researcher Dr. Ted Rappaport and colleagues, focused on applications for the 100 gigahertz (GHz) to 3 terahertz (THz) wireless spectrum. As prior cellular generations have continually expanded the use of radio spectrum from microwave frequencies up to millimeter wave frequencies, that "submillimeter wave" range is the last collection of seemingly safe, non-ionizing frequencies that can be used for communications before hitting optical, X-ray, gamma ray, and cosmic ray wavelengths. Dr. Rappaport's team says that while 5G networks should eventually be able to deliver 100Gbps speeds, the signal densification technology needed to eclipse that rate doesn't yet exist -- even on today's millimeter wave bands, one of which offers access to bandwidth that's akin to a 500-lane highway.
The AI and analytics revolution has transformed nearly every corner of industry, helping businesses innovate, become more efficient, and pioneer entirely new application areas and product lines. At the same time, the greatest beneficiaries of these advances have often been larger companies that can afford to hire the specialized expertise necessary to fully harness them. In contrast, small and medium-sized businesses, and those in non-traditional industries, have struggled to integrate these technologies: their overtaxed technical staff are focused on mundane IT issues such as desktop upgrades and on higher-priority tasks like shoring up cybersecurity. Cloud companies are moving rapidly to help these businesses through a wealth of new APIs and tools that don't require any deep learning or advanced analytics experience. The future of the cloud lies in analytics.
The emulation task of a nonlinear autoregressive moving average model, i.e., the NARMA10 task, has been widely used as a benchmark task for recurrent neural networks, especially in reservoir computing. However, the type and quantity of computational capabilities required to emulate the NARMA10 model remain unclear, and, to date, the NARMA10 task has been utilized blindly. Therefore, in this study, we investigated the properties of the NARMA10 model from a dynamical-systems perspective. We revealed its bifurcation structure and basin of attraction, as well as the system's Lyapunov spectra. Furthermore, we analyzed the computational capabilities required to emulate the NARMA10 model by decomposing it into multiple combinations of orthogonal nonlinear polynomials using Legendre polynomials, and we directly evaluated its information processing capacity together with its dependence on several system parameters. The results demonstrate that the NARMA10 model contains an unstable region in the phase space that makes the system diverge depending on the selection of the input range and initial conditions. Furthermore, the information processing capacity of the model varies according to the input range. These properties prevent safe application of this model and fair comparisons among experiments, which are unfavorable for a benchmark task. Consequently, we propose a benchmark model based on NARMA10 that can clearly evaluate equivalent computational capacity. Compared to the original NARMA10 model, the proposed model is highly stable and robust against the input range settings.
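For readers unfamiliar with the benchmark, the commonly used NARMA10 recursion is y(t+1) = 0.3 y(t) + 0.05 y(t) Σ_{i=0}^{9} y(t−i) + 1.5 u(t−9) u(t) + 0.1, driven by an input u usually drawn uniformly from [0, 0.5]. Below is a minimal sketch of a series generator under those standard conventions (this is the widely cited formulation, not the modified benchmark the abstract proposes); the quadratic feedback term 0.05 y(t) Σ y(t−i) is the source of the divergence discussed above when the input range is chosen too aggressively.

```python
import random

def narma10(u):
    # Standard NARMA10 target series from an input series u, using the
    # commonly cited coefficients. The 0.05 * y[t] * sum(...) term is
    # quadratic in y, which can cause divergence for large input ranges.
    y = [0.0] * len(u)
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * sum(y[t - i] for i in range(10))
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return y

random.seed(0)
u = [random.uniform(0.0, 0.5) for _ in range(200)]  # usual input range [0, 0.5]
y = narma10(u)
```

A reservoir computer is then trained to predict y(t+1) from the input history alone, so the benchmark score depends directly on the dynamical properties of this recursion, which is why the input-range sensitivity matters.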
Few companies have enjoyed the sort of bull run AI chipmaker Nvidia (NVDA) had been on, returning more than 1,200% between June 2015 and June 2018 and eventually hitting a market cap of about $175 billion by September 2018. However, while many companies have bounced back, Nvidia has continued to languish, sitting at a valuation of about $88 billion, pretty much where it was circa May 2017 when we compared its AI chip technology against AMD's (AMD). Over the last five years, the two chip manufacturers have returned almost identical value to investors, while a number of startups have also risen to challenge Nvidia's supremacy with new artificial intelligence chips. In fact, it was exactly three years ago that we first introduced you to five startups building artificial intelligence chips, and we then followed that up with 12 new AI chip makers in 2017. Last year, we noted that Chinese firms are also gunning for Nvidia with their own homegrown artificial intelligence chips, as China seeks to dominate everything to do with AI and other emerging technologies.
The School of Chemistry at the University of Bristol is at the forefront of applying computing to chemistry, from simulating complex materials and biomolecular systems on supercomputers and developing workflows for robotic chemical synthesis, to using modern machine learning algorithms and advanced visualisation to understand and predict chemical behaviours. To get the most out of scientific computing we need a new type of scientist, who combines a firm grounding in chemistry with strong skills in computing as well as a clear understanding of what can be achieved by merging them. Our new degrees will address this emerging skills gap, allowing students to apply their enthusiasm for computing in chemistry, whether that is learning to build machine learning frameworks for predicting spectra, script automation workflows, or conduct quantum chemical calculations. In all this we keep chemistry at the core, enhancing it with the breadth of modern scientific computing, covering coding and software engineering, visualisation and virtual reality, data analysis, machine learning, deep learning and AI, as well as modern hardware and computing resources, such as cloud computing, GPUs and high-performance computing architectures. With these skills, our graduates will be well placed in the future job market, where employers are ever-more focussed on this combination of skills and experience.