Deep Learning


'We are the best-funded AI startup,' says SambaNova co-founder Olukotun following SoftBank, Intel infusion

ZDNet

"I think most people would say we are the most credible competitor to Nvidia," says Kunle Olukotun, Stanford University computer science professor and co-founder of AI startup SambaNova Systems. SambaNova Tuesday announced a new round of venture capital funding that brings its capital to date to over $1 billion. In yet another sign of the rising interest in alternative computing technology, AI systems startup SambaNova Systems on Tuesday said it has received $676 million in a Series D financing from a group of investors that includes the SoftBank Vision Fund of Japanese conglomerate SoftBank Group; private equity firm BlackRock; and the Intel Capital arm of chip giant Intel. The new funding round brings the company's total investment to date to over $1 billion. The company is now valued at more than $5 billion.


Hiroshi Noji and Yohei Oseki have received the Best Paper Award, NLP2021

#artificialintelligence

The research paper "Parallelization of Recurrent Neural Network Grammars" (in Japanese), co-authored by Hiroshi Noji (AIST) and Yohei Oseki (The University of Tokyo), received the Best Paper Award at the 27th Annual Meeting of the Association for Natural Language Processing.


Like Us, Deep Learning Networks Prefer a Human Voice

#artificialintelligence

The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world's information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to "speak" in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change. A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen suggests that artificial intelligence systems can reach higher levels of performance if they are trained with sound files of human language rather than with numerical data labels. In a side-by-side comparison, the researchers found that a neural network whose "training labels" consisted of sound files identified objects in images more accurately than a network trained in the traditional manner, using simple binary labels.
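The contrast described above can be loosely sketched in PyTorch. This is an illustrative toy, not the authors' code: the "audio" label vectors here are random stand-ins for embeddings of spoken class names, and the names (`audio_label_bank`, `embed_dim`) are hypothetical.

```python
import torch
import torch.nn as nn

# Conventional labels: each image gets a class index, trained with
# cross-entropy against the model's class logits.
num_classes = 3
logits = torch.randn(4, num_classes)     # model outputs for 4 images
class_ids = torch.tensor([0, 2, 1, 0])
ce_loss = nn.CrossEntropyLoss()(logits, class_ids)

# "Spoken-label" variant (hypothetical sketch): each class label is a
# dense vector derived from an audio recording of the class name, and
# the network regresses toward that richer target instead.
embed_dim = 16
audio_label_bank = torch.randn(num_classes, embed_dim)  # stand-in embeddings
predicted = torch.randn(4, embed_dim)    # model's embedding output
targets = audio_label_bank[class_ids]    # look up each image's audio target
reg_loss = nn.MSELoss()(predicted, targets)
```

The point of the richer target is that acoustically similar class names produce nearby label vectors, giving the network more structure to exploit than mutually orthogonal one-hot codes.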


7 Top AI/ML Based Music Apps In 2021

#artificialintelligence

You are in your bed, with a book and a cup of coffee in hand. It's raining, and you are savouring the sound of rain droplets buffeting your window panes while your favourite songs play in the background. And most likely, the song you are listening to is recommended by your music app. Music apps that leverage the latest AI and ML technologies have become an essential part of our daily routines. The app has over 50 million songs and collects a lot of information about music tastes, search habits, playlists, geographical location, and most-used devices.


Artificial Intelligence Applications in Medicine: A Rapid Overview of Current Paradigms - European Medical Journal

#artificialintelligence

The Merriam-Webster dictionary defines artificial intelligence (AI) as "a branch of computer science dealing with the simulation of intelligent behavior in computers" or "the capability of a machine to imitate intelligent human behavior." The layman may think of AI as mere algorithms and programs; however, it differs distinctly from the usual programs, which are task-specific and written to perform repetitive tasks. Machine learning (ML) refers to a computing machine or system's ability to teach or improve itself using experience, without explicit programming for each improvement, by deriving its decision rules from data rather than from hand-written instructions. Deep learning is a subfield of ML focussed on using artificial neural networks to address highly abstract problems;1 however, this is still a primitive form of AI. When fully developed, AI would be capable of sentience and recursive or iterative self-improvement.


Table Detection, Information Extraction and Structuring using Deep Learning

#artificialintelligence

The amount of data being collected is increasing drastically day by day, with many applications, tools, and online platforms booming in the present technological era. To handle and access this humongous data productively, it's necessary to develop valuable information extraction tools. One sub-area demanding attention in the information extraction field is the fetching and structuring of data held in tabular form. Imagine you have lots of paperwork and documents containing tables whose data you would like to manipulate. Conventionally, you could copy the data out manually (onto paper) or load it into Excel sheets. With table extraction, however, you simply send the tables to the computer as images, and it extracts all the information and stacks it into a neat document. This saves ample time and is less error-prone. As discussed above, tables are used frequently to represent data in a clean format. We see them across many areas, from organizing our own work by structuring data across tables to storing the huge assets of companies.
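As a concrete illustration of the "structuring" step, here is a minimal sketch that turns detected cells back into rows and columns. It assumes a hypothetical upstream detection/OCR stage has already produced each cell's text and pixel coordinates; the input data and the `row_tolerance` parameter are invented for the example.

```python
# Hypothetical OCR output: (text, x, y) for each detected cell,
# where x/y are the cell's top-left pixel coordinates.
cells = [
    ("Name", 10, 12), ("Age", 120, 11),
    ("Alice", 12, 52), ("30", 121, 50),
    ("Bob", 11, 93), ("25", 119, 92),
]

def structure_table(cells, row_tolerance=10):
    """Group cells into rows by y-coordinate, then sort each row by x."""
    rows = []  # list of (row_y, [(x, text), ...])
    for text, x, y in sorted(cells, key=lambda c: c[2]):
        if rows and abs(y - rows[-1][0]) <= row_tolerance:
            rows[-1][1].append((x, text))   # same row: y is close enough
        else:
            rows.append((y, [(x, text)]))   # start a new row
    return [[text for _, text in sorted(row)] for _, row in rows]

table = structure_table(cells)
print(table)  # [['Name', 'Age'], ['Alice', '30'], ['Bob', '25']]
```

Real systems replace this coordinate heuristic with learned table-structure models, but the goal is the same: recover the row/column grid from detected cell positions.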


Detection of marine litter using deep learning

AIHub

Researchers at the University of Barcelona have developed an open access, deep learning-based web app that will enable the detection and quantification of floating plastics in the sea with a reliability of over 80%. Floating sea macro-litter is a threat to the conservation of marine ecosystems worldwide. According to UNESCO, plastic debris causes the deaths of more than a million seabirds every year, as well as more than 100,000 marine mammals. Eroded fragments, known as micro-plastics, are now prevalent across the food chain. The largest density of floating litter is found in the great ocean gyres (systems of circular currents) with litter being caught and spun in these vast cycles.


shobrook/sequitur

#artificialintelligence

sequitur implements three different autoencoder architectures in PyTorch, along with a predefined training loop. Each autoencoder learns to represent input sequences as lower-dimensional, fixed-size vectors. This can be useful for finding patterns among sequences, clustering sequences, or converting sequences into inputs for other algorithms. First, you need to prepare a set of example sequences to train an autoencoder on. So, if each example in your training set is a sequence of 10 5x5 matrices, then each example would be a tensor with shape [10, 5, 5].
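To show what such a sequence autoencoder looks like internally, here is a generic PyTorch sketch (not sequitur's own API): an LSTM encoder compresses the whole sequence into one fixed-size vector, and an LSTM decoder reconstructs the sequence from it.

```python
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """Encode a [seq_len, input_dim] sequence into a fixed-size vector,
    then decode it back to the original shape."""
    def __init__(self, input_dim, encoding_dim):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, encoding_dim, batch_first=True)
        self.decoder = nn.LSTM(encoding_dim, input_dim, batch_first=True)

    def forward(self, x):              # x: [batch, seq_len, input_dim]
        _, (h, _) = self.encoder(x)    # h: [1, batch, encoding_dim]
        z = h[-1]                      # the fixed-size encoding
        # Repeat the encoding at every timestep and decode it back.
        z_rep = z.unsqueeze(1).repeat(1, x.size(1), 1)
        recon, _ = self.decoder(z_rep)
        return recon, z

# A sequence of 10 flattened 5x5 matrices -> input_dim of 25.
x = torch.randn(8, 10, 25)             # batch of 8 example sequences
model = SeqAutoencoder(input_dim=25, encoding_dim=4)
recon, encoding = model(x)
print(recon.shape, encoding.shape)     # torch.Size([8, 10, 25]) torch.Size([8, 4])
```

Training minimizes reconstruction loss (e.g. MSE between `recon` and `x`); afterwards, `encoding` is the fixed-size vector you would cluster or feed to downstream algorithms.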


Words and images

#artificialintelligence

As we rely more on natural language processing to help us navigate our world, it's more important than ever that these artificial intelligence models -- used increasingly in applications such as caption generation for the visually impaired -- remain true to reality. "The issue is that deep learning-based neural language generation models have no guarantees in generating factually correct sentences that are faithful to the input data," said UC Santa Barbara computer scientist William Wang. Over the many iterations it takes for a language generation model to learn how to describe or predict what a scene depicts, elements can creep in, causing phenomena such as errors in data-to-text translations or object hallucinations, in which the caption contains an object or an action that doesn't exist in the image. As a result, unless you have a way of reining in these errors (or you're surrealist painter René Magritte) these mismatches could spell the end of the usefulness of the language generation model being used. "This is a huge problem," said Wang. "Imagine you are using a news summarization system to read earnings reports -- the loss of faithfulness can give you wrong numbers, wrong facts and misinformation. Similarly, if a visually impaired person relies on an image captioning system to see the environment, wrong generation could create serious consequences."


Researchers use AI to estimate focal mechanism parameters of earthquake

#artificialintelligence

The research team led by Prof. Zhang Jie from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences has made progress on real-time determination of earthquake focal mechanisms using deep learning. The work was published in Nature Communications. Because the characteristics of the rupture surface of the source fault are connected to the seismic waves it radiates, rapid determination of the source focal mechanism, inferred from multiple ground seismic records, is vital for earthquake monitoring. However, the mechanism is hard to calculate from the records alone, so focal mechanism parameters are often not reported at all, or only after several minutes or longer.