New computational algorithms make it possible to build neural networks with many input nodes and many layers; it is this depth that distinguishes the "deep learning" of these networks from previous work on artificial neural nets.
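To make "many layers" concrete, here is a minimal sketch (not from the article, with hand-picked hypothetical weights) of a forward pass through a stack of dense layers; depth is simply the number of weighted-sum-plus-nonlinearity steps applied in sequence:

```python
def relu(x):
    # Standard nonlinearity: clamp negative values to zero
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # weights: one row of input weights per output unit
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def forward(x, layers):
    # layers: list of (weights, bias) pairs; "deep" just means many of them
    for weights, bias in layers:
        x = relu(dense(x, weights, bias))
    return x

# Toy 3-layer network (hypothetical weights, for illustration only)
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
    ([[1.0, 1.0]], [0.1]),
    ([[2.0]], [0.0]),
]
print(forward([1.0, 2.0], layers))  # [3.2]
```

Real deep learning adds trained weights, more expressive layer types, and GPU-friendly tensor math, but the layered structure is the same.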
PYRO: Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. It is one of several frameworks and projects built on top of TensorFlow and PyTorch; you can find more on GitHub and on the official TF and PyTorch websites. In a world dominated by TensorFlow, PyTorch is capable of holding its own on its strong points: it is a go-to framework that lets us write code in a more Pythonic way.
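To illustrate the probabilistic-programming idea itself (a hand-rolled stdlib sketch of the concept, not actual Pyro code): a model is an ordinary function that samples latent variables, and inference weighs those samples by how well they explain observed data. Here a Normal(0, 1) prior over an unknown mean is updated by importance sampling:

```python
import math
import random

random.seed(0)

def normal_logpdf(x, mu, sigma):
    return (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma * math.sqrt(2 * math.pi)))

def model():
    # Latent variable: unknown mean with a Normal(0, 1) prior
    return random.gauss(0.0, 1.0)

def importance_posterior_mean(data, n_samples=20000):
    # Weight each prior sample by the likelihood of the observed data
    total_w, total_wx = 0.0, 0.0
    for _ in range(n_samples):
        mu = model()
        log_w = sum(normal_logpdf(x, mu, 1.0) for x in data)
        w = math.exp(log_w)
        total_w += w
        total_wx += w * mu
    return total_wx / total_w

print(importance_posterior_mean([0.8, 1.1, 0.9]))  # close to the analytic 0.7
```

In Pyro the model function would use `pyro.sample` with PyTorch distributions, and inference engines such as stochastic variational inference replace this naive sampler.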
Fujitsu Laboratories has developed what it believes to be the world's first AI technology that accurately captures the essential features of high-dimensional data, including its distribution and probability, in order to improve the accuracy of AI detection and judgment. High-dimensional data, such as communications-network access data, certain types of medical data, and images, remains difficult to process due to its complexity, making it a challenge to obtain the characteristics of the target data. Until now, this made it necessary to reduce the dimensions of the input data using deep learning, at times causing the AI to make incorrect judgments. Fujitsu has combined deep learning with its expertise in image compression, cultivated over many years, to develop an AI technology that optimizes the processing of high-dimensional data with deep learning and accurately extracts data features. It combines information theory used in image compression with deep learning, optimizing both the number of dimensions to which the high-dimensional data is reduced and the distribution of the data after the reduction.
With the continuous development of network technology and the ever-expanding scale of e-commerce, the number and variety of goods are growing rapidly, and users must spend a lot of time finding the goods they want to buy. The recommendation system came into being to solve this problem. A recommendation system is a subset of the information filtering system and can be used in a range of areas such as movies, music, e-commerce, and feed-stream recommendations. The recommendation system discovers a user's personalized needs and interests by analyzing and mining user behavior, and recommends information or products that may be of interest to the user. Unlike search engines, recommendation systems do not require users to describe their needs precisely; instead, they model users' historical behavior to proactively provide information that matches their interests and needs.
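As a concrete, highly simplified illustration of "modeling historical behavior": a minimal collaborative-filtering recommender can score items a user has not seen by how strongly their owners' histories overlap with the user's own. All names and purchase data here are hypothetical:

```python
from collections import defaultdict

# Hypothetical purchase histories: user -> set of items bought
histories = {
    "alice": {"laptop", "mouse", "keyboard"},
    "bob": {"laptop", "mouse", "monitor"},
    "carol": {"keyboard", "monitor"},
}

def recommend(user, histories, top_n=2):
    seen = histories[user]
    scores = defaultdict(int)
    # Items bought by users with overlapping histories score higher
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(seen & items)
        for item in items - seen:
            scores[item] += overlap
    ranked = sorted(scores, key=lambda i: (-scores[i], i))
    return ranked[:top_n]

print(recommend("alice", histories))  # ['monitor']
```

Production systems replace these overlap counts with learned embeddings and ranking models, but the principle of inferring interest from behavior rather than explicit queries is the same one the paragraph describes.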
IBM Research, with the help of the University of Texas at Austin and the University of Maryland, has created a technology called BlockDrop that promises to speed up convolutional neural network operations without any loss of fidelity. This could further expand the use of neural nets, particularly in places with limited computing capability. Increases in accuracy have been accompanied by increasingly complex and deep network architectures. This presents a problem for domains where fast inference is essential, particularly in delay-sensitive and real-time scenarios such as autonomous driving, robotic navigation, or user-interactive applications on mobile devices. Further research shows that regularization techniques designed for fully connected layers are less effective for convolutional layers, as activation units in these layers are spatially correlated and information can still flow through convolutional networks despite dropout.
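The BlockDrop idea, making per-input decisions about which residual blocks to execute, can be caricatured in a few lines. This is a conceptual sketch, not IBM's implementation: a policy emits a keep/drop mask, and dropped blocks fall back to passing the input through unchanged (the identity shortcut):

```python
def block_a(x):
    # Stand-in for a residual block's transformation
    return x + 1.0

def block_b(x):
    return x * 2.0

def block_c(x):
    return x - 0.5

def run_with_policy(x, blocks, mask):
    # mask[i] == 1 executes block i; 0 skips it via the identity shortcut
    for block, keep in zip(blocks, mask):
        if keep:
            x = block(x)
    return x

blocks = [block_a, block_b, block_c]
print(run_with_policy(1.0, blocks, [1, 1, 1]))  # full net: ((1+1)*2)-0.5 = 3.5
print(run_with_policy(1.0, blocks, [1, 0, 1]))  # block_b skipped: (1+1)-0.5 = 1.5
```

In BlockDrop the mask is produced per input by a small learned policy network trained with reinforcement learning, so easy inputs take a cheap path and hard inputs use more blocks.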
Natural language processing, or NLP, is a type of artificial intelligence (AI) that specializes in analyzing human language. Have you ever used Apple's Siri and wondered how it understands (most of) what you're saying? This is an example of NLP in practice. NLP is becoming an essential part of our lives and, together with machine learning and deep learning, produces results that are far superior to what could be achieved just a few years ago. In this article we'll take a closer look at NLP, see how it's applied, and learn how it works.
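Much of NLP starts with turning raw text into countable units before any model sees it. Here is a minimal sketch of that first step, tokenization plus word counting, using only the standard library:

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase the text and split on runs of non-letter characters
    return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

text = "Siri, how does Siri understand what I'm saying?"
tokens = tokenize(text)
print(Counter(tokens).most_common(1))  # 'siri' appears twice
```

Systems like Siri build far richer representations on top (subword vocabularies, embeddings, neural language models), but counting and normalizing tokens like this is still a common preprocessing stage.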
Early last year, a large European supermarket chain deployed artificial intelligence to predict what customers would buy each day at different stores, to help keep shelves stocked while reducing costly spoilage of goods. The company already used purchasing data and a simple statistical method to predict sales. With deep learning, a technique that has helped produce spectacular AI advances in recent years, plus additional data such as local weather, traffic conditions, and competitors' actions, the company cut the number of errors by three-quarters. It was precisely the kind of high-impact, cost-saving effect that people expect from AI. But there was a huge catch: the new algorithm required so much computation that the company chose not to use it.
Back in 2015 I wrote an article on 100 Big Data papers to help demystify the landscape. Along the same lines, I thought it would be good to do one for AI. The initial part covers the basics and provides some great links to strengthen your foundation. The latter part has links to some great research papers and is for advanced practitioners who want to understand the theory and details. AI is a revolution that is transforming how humans live and work.
This blog highlights different ML algorithms used in blockchain transactions, with a special emphasis on bitcoin in retail payments. The potential of blockchain to improve the retail supply chain manifests in three areas. Provenance: both the retailer and the customer can track the entire product life cycle along the supply chain. Smart contracts: transactions among disparate partners that are prone to lag can be automated for more efficiency. IoT backbone: blockchain supports low-powered mesh networks for IoT devices, reducing the need for a central server and enhancing the reliability of sensor data.
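The post names no specific algorithms, but one ML building block commonly applied to transaction monitoring is simple statistical outlier detection. A hedged sketch, flagging unusually large payment amounts by z-score, with hypothetical data and threshold:

```python
import math

def zscore_outliers(amounts, threshold=3.0):
    # Flag values more than `threshold` standard deviations from the mean
    n = len(amounts)
    mean = sum(amounts) / n
    std = math.sqrt(sum((a - mean) ** 2 for a in amounts) / n)
    return [a for a in amounts
            if std > 0 and abs(a - mean) / std > threshold]

# Hypothetical retail payment amounts (in BTC); one is wildly out of range
payments = [0.01, 0.02, 0.015, 0.012, 0.018, 5.0]
print(zscore_outliers(payments, threshold=2.0))  # [5.0]
```

Real fraud and anomaly detection on blockchain data uses richer features (transaction graphs, timing, address clustering) and learned models, but the flag-the-unusual pattern is the same.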
Testing for pathogens is a critical component of maintaining public health and safety. Having a method to rapidly and reliably test for harmful germs is essential for diagnosing diseases, maintaining clean drinking water, regulating food safety, conducting scientific research, and other important functions of modern society. In recent research, scientists from the University of California, Los Angeles (UCLA) have demonstrated that artificial intelligence (AI) can detect harmful bacteria in a water sample up to 12 hours faster than the current gold-standard Environmental Protection Agency (EPA) methods. In a new study published yesterday in Light: Science and Applications, the researchers created a time-lapse imaging platform that uses two separate deep neural networks (DNNs) for the detection and classification of bacteria. The team tested the high-throughput bacterial colony growth detection and classification system using water suspensions spiked with the coliform bacteria E. coli (including chlorine-stressed E. coli), K. pneumoniae, and K. aerogenes, grown on chromogenic agar as the culture medium.
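The study's two-network design, one DNN to detect growing colonies in the time-lapse images and a second to classify the species, can be sketched as a generic two-stage pipeline. Everything below is a structural stand-in with toy data, not the UCLA models:

```python
def detect_colonies(frames):
    # Stage 1 stand-in: flag regions whose intensity grows across the
    # time lapse (a real detector would be a DNN over image patches)
    first, last = frames[0], frames[-1]
    return [i for i, (a, b) in enumerate(zip(first, last)) if b - a > 0.5]

def classify_colony(region_index):
    # Stage 2 stand-in: map a detected region to a species label
    # (a real classifier would be a second DNN over the colony's appearance)
    labels = {0: "E. coli", 1: "K. pneumoniae", 2: "K. aerogenes"}
    return labels.get(region_index % 3, "unknown")

# Toy "frames": per-region intensities at the start and end of the time lapse
frames = [[0.1, 0.1, 0.1, 0.1], [0.9, 0.2, 0.8, 0.1]]
detected = detect_colonies(frames)
print([(i, classify_colony(i)) for i in detected])
```

The speed advantage in the study comes from stage 1 spotting colony growth in early time-lapse frames, long before colonies are large enough for conventional visual counting.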
Researchers from Google's DeepMind and the University of Oxford recommend that AI practitioners draw on decolonial theory to reform the industry, put ethical principles into practice, and avoid further algorithmic exploitation or oppression. The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohammed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations Secretary General's High-level Panel on Digital Cooperation. The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.