
How machine learning can revolutionize the quality of hearing

#artificialintelligence

As machine learning (ML) integrates itself into almost every industry, from automotive and healthcare to banking and manufacturing, the most exciting advancements appear to be still to come. Machine learning, as a subset of artificial intelligence (AI), has been among the most significant technological developments in recent history, with few fields possessing the same potential to disrupt a wide range of industries. And while many applications of ML technology go unseen, there are countless ways companies are harnessing its power in new and intriguing applications. That said, ML's revolutionary impact is perhaps most striking when put to use on age-old problems. Hearing loss is not a new condition by any means; people have suffered from it for centuries.


Protecting people from hazardous areas through virtual boundaries with Computer Vision

#artificialintelligence

As companies welcome more autonomous robots and other heavy equipment into the workplace, we need to ensure equipment can operate safely around human teammates. In this post, we will show you how to build a virtual boundary with computer vision and AWS DeepLens, the AWS deep learning-enabled video camera designed for developers to learn machine learning (ML). Using the machine learning techniques in this post, you can build virtual boundaries for restricted areas that automatically shut down equipment or sound an alert when humans come close. For this project, you will train a custom object detection model with Amazon SageMaker and deploy the model to an AWS DeepLens device. Object detection is an ML algorithm that takes an image as input and identifies objects and their location within the image.
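As a rough illustration of the training step described above (this is a minimal sketch, not the post's exact code), the snippet below uses the SageMaker Python SDK (v2) with the built-in object detection algorithm; the IAM role ARN, S3 paths, and hyperparameter values are placeholders.

```python
# Hedged sketch: training a custom object detection model with the
# SageMaker built-in algorithm. Role ARN, buckets, and hyperparameters
# below are placeholders, not values from the original post.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # hypothetical IAM role

# Resolve the container image for the built-in object detection algorithm
container = image_uris.retrieve(
    framework="object-detection",
    region=session.boto_region_name,
    version="latest",
)

estimator = Estimator(
    container,
    role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://my-bucket/output",  # hypothetical bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(
    base_network="resnet-50",
    use_pretrained_model=1,
    num_classes=1,              # e.g. a single "person" class
    num_training_samples=500,   # placeholder dataset size
    mini_batch_size=16,
    epochs=30,
)

# Channels point at annotated training data in S3
estimator.fit({
    "train": "s3://my-bucket/train",
    "validation": "s3://my-bucket/validation",
})
```

Once trained, the resulting model artifact can be deployed to the DeepLens device, which runs inference locally and can trigger a shutdown or alert when a person is detected inside the virtual boundary.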


'We are the best-funded AI startup,' says SambaNova co-founder Olukotun following SoftBank, Intel infusion

ZDNet

"I think most people would say we are the most credible competitor to Nvidia," says Kunle Olukotun, Stanford University computer science professor and co-founder of AI startup SambaNova Systems. SambaNova Tuesday announced a new round of venture capital funding that brings its capital to date to over $1 billion. In yet another sign of the rising interest in alternative computing technology, AI systems startup SambaNova Systems on Tuesday said it has received $676 million in a Series D financing from a group of investors that includes the SoftBank Vision Fund of Japanese conglomerate SoftBank Group; private equity firm BlackRock; and the Intel Capital arm of chip giant Intel. The new funding round brings the company's total investment to date to over $1 billion. The company is now valued at more than $5 billion.


Forthcoming machine learning and AI seminars: April 2021 edition

AIHub

This post contains a list of the AI-related seminars that are scheduled to take place between 14 April and 31 May 2021. All events detailed here are free and open for anyone to attend virtually.

Machine learning for medical image analysis and why clinicians are not using it
Speaker: Christian Baumgartner (Tuebingen University)
Organised by: Tuebingen University
Zoom link is here.

Real-time Distributed Decision Making in Networked Systems
Speaker: Na Li (Harvard)
Organised by: Control Meets Learning
Join the Google group to find out how to register.

The limits of Shapley values as a method for explaining the predictions of an ML system
Speaker: Suresh Venkatasubramanian (University of Utah)
Organised by: Trustworthy ML
Join the mailing list for instructions on how to sign up, or check the website a few days beforehand for the Zoom link.


Hiroshi Noji and Yohei Oseki have received the Best Paper Award, NLP2021

#artificialintelligence

The research paper "Parallelization of Recurrent Neural Network Grammar" (in Japanese), co-authored by Hiroshi Noji (AIST) and Yohei Oseki (The University of Tokyo), received the Best Paper Award at the 27th Annual Meeting of the Association for Natural Language Processing.


IBM's new tool lets developers add quantum-computing power to machine learning

ZDNet

IBM is releasing a new module as part of its open-source quantum software development kit, Qiskit, to let developers leverage the capabilities of quantum computers to improve the quality of their machine-learning models. Qiskit Machine Learning is now available and includes the computational building blocks that are necessary to bring machine-learning models into the quantum space. Machine learning is a branch of artificial intelligence that is now widely used in almost every industry. The technology is capable of crunching through ever-larger datasets to identify patterns and relationships, and eventually discover the best way to calculate an answer to a given problem. Researchers and developers, therefore, want to make sure that the software comes up with the best model possible, which means expanding the amount and improving the quality of the training data that is fed to the machine-learning software.
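To give a sense of what those building blocks look like, here is a minimal, hedged sketch of a variational quantum classifier using the circa-2021 Qiskit Machine Learning API; the toy dataset, feature map, ansatz, and optimizer settings are arbitrary stand-ins rather than anything from IBM's announcement.

```python
# Minimal sketch of a variational quantum classifier with Qiskit Machine
# Learning (assumes qiskit and qiskit-machine-learning are installed and
# uses the circa-2021 interface).
import numpy as np
from qiskit import Aer
from qiskit.utils import QuantumInstance
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit.algorithms.optimizers import COBYLA
from qiskit_machine_learning.algorithms import VQC

num_features = 2
vqc = VQC(
    feature_map=ZZFeatureMap(num_features),       # encodes classical inputs
    ansatz=RealAmplitudes(num_features, reps=2),  # trainable circuit
    optimizer=COBYLA(maxiter=50),
    quantum_instance=QuantumInstance(Aer.get_backend("qasm_simulator")),
)

# Tiny toy dataset with one-hot targets, as the early API expected
X = np.array([[0.1, 0.2], [0.9, 0.8], [0.1, 0.9], [0.9, 0.1]])
y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])

vqc.fit(X, y)
print(vqc.score(X, y))
```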


8 Outstanding Papers At ICLR 2021

#artificialintelligence

International Conference on Learning Representations (ICLR) recently announced the winners of the ICLR 2021 Outstanding Paper Awards. It recognised eight papers out of the 860 accepted this year. The papers were evaluated for both technical quality and the potential to create a practical impact, by a committee chaired by Ivan Titov. One of the winning papers deals with parameterising hypercomplex multiplications using arbitrarily learnable parameters, requiring only a fraction of the parameters of its fully-connected layer counterpart.
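For intuition (this is my own minimal PyTorch sketch, not the paper's released code), a parameterized hypercomplex multiplication (PHM) style layer builds its weight matrix as a sum of Kronecker products, so a layer with n components needs roughly 1/n of the parameters of an ordinary dense layer:

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Sketch of a PHM-style layer: the weight matrix is a sum of n
    Kronecker products, using roughly 1/n of the parameters of a dense
    layer. in_features and out_features must be divisible by n."""
    def __init__(self, n, in_features, out_features):
        super().__init__()
        self.n = n
        self.A = nn.Parameter(torch.randn(n, n, n))           # learned "rule" matrices
        self.S = nn.Parameter(torch.randn(n, out_features // n,
                                          in_features // n))  # component weights

    def forward(self, x):
        # Assemble the full (out_features, in_features) weight as a sum
        # of Kronecker products A_i ⊗ S_i
        W = sum(torch.kron(self.A[i], self.S[i]) for i in range(self.n))
        return x @ W.T

layer = PHMLinear(n=4, in_features=64, out_features=32)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 32])
```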


Like Us, Deep Learning Networks Prefer a Human Voice

#artificialintelligence

The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world's information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to "speak" in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change. A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen shows that artificial intelligence systems can reach higher levels of performance if they are trained with sound files of human language rather than with numerical data labels. In a side-by-side comparison, the researchers found that a neural network whose "training labels" consisted of sound files identified objects in images more accurately than a network trained in the more conventional manner, using simple binary inputs.
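To make the contrast concrete, here is a hedged sketch (my own illustration, not the Columbia code) of the two training setups: a conventional one-hot classification target versus a "voice" target, where each class label is a dense embedding derived from an audio recording of the spoken class name (random stand-ins below).

```python
import torch
import torch.nn as nn

# A toy image encoder shared by both setups
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())

# (a) Conventional setup: predict a one-hot class index
onehot_head = nn.Linear(256, 10)
ce_loss = nn.CrossEntropyLoss()

# (b) Voice-label setup: regress toward a fixed audio embedding per class,
# e.g. a flattened spectrogram of the spoken word (random stand-ins here)
audio_labels = torch.randn(10, 128)   # hypothetical per-class audio embeddings
audio_head = nn.Linear(256, 128)
mse_loss = nn.MSELoss()

images = torch.randn(4, 3, 32, 32)
classes = torch.tensor([0, 3, 7, 1])

feats = encoder(images)
loss_a = ce_loss(onehot_head(feats), classes)           # binary-style labels
loss_b = mse_loss(audio_head(feats), audio_labels[classes])  # voice labels
```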


Artificial Intelligence: Technology Trends

#artificialintelligence

As artificial intelligence (AI) becomes more pervasive and embedded in life-changing decisions, the need for transparency has intensified. There have been plenty of high-profile cases in recent years where AI has contributed to bias and discrimination, with the use of facial recognition for policing just one example. There is a high probability of a shift from loose self-regulation to government involvement in AI over the next couple of years. In turn, Big Tech is increasingly using AI to solve the privacy and bias problems that the technology itself created. Listed below are the key technology trends impacting the AI theme, as identified by GlobalData.


Artificial Intelligence in Manufacturing: Time to Scale and Time to Accuracy

#artificialintelligence

Asset-intensive organizations are pursuing digital transformation to attain operational excellence, improve KPIs, and solve concrete issues in production and supporting process areas. AI-based prediction models are particularly useful tools that can be deployed in complex production environments. Compared to common analytical tools, prediction models can more readily surface correlations between different parameters in complicated production environments that generate large volumes of structured or unstructured data. My regular talks with executives of production-intensive organizations indicate that AI use is steadily rising. This is in line with IDC's forecast that by 2026, 70% of G2000 companies will use AI to develop guidance and insights for risk-based operational decision-making.