New computational algorithms make it possible to build neural networks with many input nodes and many layers; it is this depth and scale that distinguish "deep learning" in these networks from previous work on artificial neural nets.
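As a rough illustration of what "many input nodes and many layers" means in practice, the sketch below (pure NumPy, with illustrative layer sizes) passes an input vector through a stack of weight matrices separated by ReLU nonlinearities:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a stack of (weight, bias) layers with ReLU."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)
    w, b = layers[-1]
    return x @ w + b  # final layer is linear

# A "deep" network: 1000 input nodes, four hidden layers, one output.
sizes = [1000, 256, 128, 64, 32, 1]
layers = [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(1000)
y = forward(x, layers)
print(y.shape)  # (1,)
```

Earlier shallow networks had the same building blocks; the "deep" part is simply that modern algorithms and hardware make training many such stacked layers tractable.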
This article is part of "Deconstructing artificial intelligence," a series of posts that explore the details of how AI applications work. One of the things that caught my eye at Nvidia's flagship event, the GPU Technology Conference (GTC), was Maxine, a platform that leverages artificial intelligence to improve the quality and experience of video-conferencing applications in real time. Maxine uses deep learning for resolution improvement, background noise reduction, video compression, face alignment, and real-time translation and transcription. In this post, which marks the first installment of our "Deconstructing artificial intelligence" series, we will take a look at how some of these features work and how they tie in with AI research done at Nvidia. We'll also explore the pending issues and the possible business model for Nvidia's AI-powered video-conferencing platform.
Historians and nostalgic residents alike take an interest in how cities were constructed and how they developed -- and now there's a tool for that. Google AI recently launched "rǝ," an open-source, browser-based toolset created to let users virtually explore, in a three-dimensional view, how cities changed between 1800 and 2000. Google AI says the name rǝ is pronounced "re-turn" and derives its meaning from "reconstruction, research, recreation and remembering." The scalable system runs on Google Cloud and Kubernetes and reconstructs cities from historical maps and photos. The toolset has three main components. Warper is a crowdsourcing platform where users can upload photos of historical print maps and georectify them to match real-world coordinates.
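Georectification amounts to fitting a coordinate transform from the scanned map's pixel grid to real-world coordinates, using control points that users place on the map. A minimal sketch (not Warper's actual implementation; the control-point values below are illustrative) fits an affine transform by least squares:

```python
import numpy as np

def fit_affine(pixel_pts, geo_pts):
    """Least-squares affine transform mapping pixel (x, y) -> geo (lon, lat).

    Solves geo = A @ [x, y, 1] for the 2x3 matrix A using the
    user-supplied control points.
    """
    P = np.hstack([np.asarray(pixel_pts, float),
                   np.ones((len(pixel_pts), 1))])  # (n, 3)
    G = np.asarray(geo_pts, float)                 # (n, 2)
    A, *_ = np.linalg.lstsq(P, G, rcond=None)      # (3, 2)
    return A.T                                     # (2, 3)

def to_geo(A, pixel_pt):
    x, y = pixel_pt
    return A @ np.array([x, y, 1.0])

# Three control points a user might place on a scanned historical map
# (pixel coordinates paired with modern lon/lat -- illustrative values).
pixels = [(0, 0), (1000, 0), (0, 800)]
coords = [(-74.02, 40.72), (-73.98, 40.72), (-74.02, 40.70)]
A = fit_affine(pixels, coords)
print(to_geo(A, (500, 400)))  # geo position of the map's midpoint
```

With more than three control points, the least-squares fit averages out small placement errors; real georectification tools often also support higher-order (polynomial or thin-plate-spline) warps for distorted scans.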
MIT researchers have identified a brain pathway critical in enabling primates to effortlessly identify objects in their field of vision. The findings enrich existing models of the neural circuitry involved in visual perception and help to further unravel the computational code for solving object recognition in the primate brain. Led by Kohitij Kar, a postdoc at the McGovern Institute for Brain Research and the Department of Brain and Cognitive Sciences, the study looked at an area called the ventrolateral prefrontal cortex (vlPFC), which sends feedback signals to the inferior temporal (IT) cortex via a network of neurons. The main goal of this study was to test how the back-and-forth information processing of this circuitry -- that is, this recurrent neural network -- is essential to rapid object identification in primates. The current study, published in Neuron and available via open access, is a follow-up to prior work published by Kar and James DiCarlo, the Peter de Florez Professor of Neuroscience, the head of MIT's Department of Brain and Cognitive Sciences, and an investigator in the McGovern Institute and the Center for Brains, Minds, and Machines.
Neo4j, the leader in graph technology, announced the latest version of Neo4j for Graph Data Science, a breakthrough that democratizes advanced graph-based machine learning (ML) techniques by leveraging deep learning and graph convolutional neural networks. Until now, few companies outside of Google and Facebook have had the AI foresight and resources to leverage graph embeddings. This powerful and innovative technique calculates the shape of the surrounding network for each piece of data inside of a graph, enabling far better machine learning predictions. Neo4j for Graph Data Science version 1.4 democratizes these innovations to upend the way enterprises make predictions in diverse scenarios, from fraud detection and tracking customer or patient journeys to drug discovery and knowledge graph completion. Neo4j for Graph Data Science version 1.4 is the first and only graph-native machine learning functionality commercially available for enterprises.
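The idea of a graph embedding -- a vector per node that captures the shape of its surrounding network -- can be sketched without any graph library. The toy example below (not Neo4j's implementation; graph, walk counts, and dimensions are all illustrative) builds embeddings from random-walk co-occurrence counts plus a truncated SVD, in the spirit of node2vec-style methods:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy graph: two triangles (communities) joined by one bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
adj = {i: [] for i in range(n)}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

# Count how often node pairs co-occur within a small window on random
# walks; nodes with similar neighbourhoods get similar count profiles.
cooc = np.zeros((n, n))
for start in range(n):
    for _ in range(200):
        walk = [start]
        for _ in range(5):
            walk.append(rng.choice(adj[walk[-1]]))
        for i, u in enumerate(walk):
            for v in walk[i + 1:i + 3]:  # window of 2
                cooc[u, v] += 1
                cooc[v, u] += 1

# Low-rank factorization (truncated SVD) turns the count profiles
# into dense 2-dimensional embeddings.
U, S, _ = np.linalg.svd(np.log1p(cooc))
emb = U[:, :2] * S[:2]

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Nodes in the same triangle embed closer than nodes across the bridge.
print(cos(emb[0], emb[1]), cos(emb[0], emb[5]))
```

These per-node vectors are what a downstream classifier consumes -- for fraud detection, say, a node's embedding summarizes the structure of its transaction neighbourhood.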
Today, every technology startup needs to embrace AI and machine learning models to stay relevant in its business. Machine learning (ML), if implemented well, can have a direct impact on a company's ability to succeed and raise the next round of funding. However, the path to implementing ML solutions comes with some specific hurdles for startups. Let's discuss the top considerations for getting ML models production-ready and the best approaches for a startup. An ML model is only as good as the data used to train it.
Common sense is what differentiates humans from machines. For years, scientists and researchers have been looking for ways to bridge the gap and make artificial intelligence (AI) more capable of interacting with the human world. However, the process is more complicated than it sounds. Artificial intelligence researchers have so far been unsuccessful in giving intelligent agents the common-sense knowledge they need to reason about the world. Common sense is seen as something that would pull artificial intelligence closer to humankind.
An international team of researchers has developed a way to use artificial intelligence to predict the risk of a patient developing cardiovascular disease. In their paper published in the journal Nature Biomedical Engineering, the group describes using retinal blood vessel scans as a data source for a deep learning system, teaching it to recognize the signs of cardiovascular disease in people. For over 100 years, doctors have peered into the eyes of patients looking for changes in retinal vasculature -- blood vessels in the retina that can reflect the impact of high blood pressure over a period of time. Such an impact can be an indicator of impending cardiovascular disease. Over time, medical scientists have developed instruments that allow eye doctors to get a better look at the parts of the eye most susceptible to damage from hypertension, and have used them as part of a process to diagnose patients who are likely to develop the disease.
With the advent of new deep learning approaches based on transformer architecture, natural language processing (NLP) techniques have undergone a revolution in performance and capabilities. Cutting-edge NLP models are becoming the core of modern search engines, voice assistants, chatbots, and more. Modern NLP models can synthesize human-like text and answer questions posed in natural language. As DeepMind research scientist Sebastian Ruder says, NLP's ImageNet moment has arrived. While NLP has grown in mainstream use cases, it is still not widely adopted in healthcare, clinical applications, and scientific research.
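The core operation of the transformer architecture behind these models is scaled dot-product self-attention, which can be sketched in a few lines of NumPy (token count and dimensions below are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention, the core transformer op.

    Each token's output is a weighted mix of all tokens' values,
    with weights derived from query-key similarity.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
d = 8                                  # embedding dimension
X = rng.standard_normal((5, d))        # 5 tokens
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, w = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Because every token attends to every other token in one step, attention captures long-range dependencies that earlier recurrent NLP models handled poorly -- a key reason for the performance jump the article describes.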
Most of the data organisations hold is not labeled, yet labeled data is the foundation of AI jobs and AI projects. "Labeled data, means marking up or annotating your data for the target model so it can predict. In general, data labeling includes data tagging, annotation, moderation, classification, transcription, and processing." Labeled data highlights particular features, and models can analyse the classification of those attributes for patterns in order to predict new targets. For example, labelling a set of medical images as cancerous or benign allows a convolutional neural network (CNN) computer-vision algorithm to classify unseen images of the same class of data in the future. Niti Sharma also notes some key points to consider.
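The cancerous-versus-benign example can be made concrete: labeling pairs each raw record with a target class, producing the (input, label) examples a CNN would then train on. A minimal sketch (file names and labels below are hypothetical):

```python
# Raw, unlabeled records -- the form most organisational data arrives in.
raw_scans = ["scan_001.png", "scan_002.png", "scan_003.png"]

# Annotation step: a human reviewer assigns one class per image.
labels = {"scan_001.png": "benign",
          "scan_002.png": "cancerous",
          "scan_003.png": "benign"}

# The labeled pairs become (input, target) training examples; a CNN
# trained on them can later classify unseen scans of the same kind.
CLASS_TO_ID = {"benign": 0, "cancerous": 1}
dataset = [(f, CLASS_TO_ID[labels[f]]) for f in raw_scans]
print(dataset)
# [('scan_001.png', 0), ('scan_002.png', 1), ('scan_003.png', 0)]
```

The expensive part is the middle step: the annotation dictionary is what human labelers produce, and its quality directly bounds the quality of the trained model.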