Neural Networks


Hiroshi Noji and Yohei Oseki have received the Best Paper Award, NLP2021

#artificialintelligence

The research paper "Parallelization of Recurrent Neural Network Grammar (in Japanese)," co-authored by Hiroshi Noji (AIST) and Yohei Oseki (The University of Tokyo), received the Best Paper Award at the 27th Annual Meeting of the Association for Natural Language Processing.


IBM's new tool lets developers add quantum-computing power to machine learning

ZDNet

IBM is releasing a new module as part of its open-source quantum software development kit, Qiskit, to let developers leverage the capabilities of quantum computers to improve the quality of their machine-learning models. Qiskit Machine Learning is now available and includes the computational building blocks that are necessary to bring machine-learning models into the quantum space. Machine learning is a branch of artificial intelligence that is now widely used in almost every industry. The technology is capable of crunching through ever-larger datasets to identify patterns and relationships, and eventually discover the best way to calculate an answer to a given problem. Researchers and developers, therefore, want to make sure that the software comes up with the best possible model, which means expanding the amount and improving the quality of the training data that is fed to the machine-learning software.
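To make the announcement concrete, here is a minimal sketch of what using the new module can look like, assuming the qiskit-machine-learning 0.x API (QuantumKernel and QSVC) together with a local Aer simulator; the toy data arrays and hyperparameters are illustrative, not taken from IBM's documentation.

# Minimal sketch: a quantum-kernel classifier with Qiskit Machine Learning.
# Assumes qiskit-machine-learning 0.x and a local Aer simulator; data is toy data.
import numpy as np
from qiskit import Aer
from qiskit.utils import QuantumInstance
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import QuantumKernel
from qiskit_machine_learning.algorithms import QSVC

X_train = np.random.rand(20, 2)          # toy two-feature training samples
y_train = np.random.randint(0, 2, 20)    # toy binary labels
X_test = np.random.rand(5, 2)

feature_map = ZZFeatureMap(feature_dimension=2, reps=2)            # encodes features into a circuit
backend = QuantumInstance(Aer.get_backend("qasm_simulator"), shots=1024)
kernel = QuantumKernel(feature_map=feature_map, quantum_instance=backend)

clf = QSVC(quantum_kernel=kernel)        # classical SVM trained on the quantum kernel
clf.fit(X_train, y_train)
print(clf.predict(X_test))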


8 Outstanding Papers At ICLR 2021

#artificialintelligence

The International Conference on Learning Representations (ICLR) recently announced the winners of the ICLR 2021 Outstanding Paper Awards. It recognised eight papers out of the 860 accepted this year. The papers were evaluated for both technical quality and the potential to create a practical impact, by a committee chaired by Ivan Titov. One of the winning papers deals with parameterising hypercomplex multiplications using arbitrarily learnable parameters, as an alternative to the standard fully-connected layer.
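For intuition about that last idea, here is a small numpy sketch of the parameterized hypercomplex multiplication concept as we understand it: the dense weight matrix of a fully-connected layer is assembled from a sum of Kronecker products of small learnable matrices, so far fewer parameters are stored. This is our own illustration, not the authors' code, and all variable names are ours.

# Toy sketch of a parameterized hypercomplex multiplication (PHM) layer.
import numpy as np

n, d, k = 4, 8, 8                         # n must divide input dim d and output dim k
A = np.random.randn(n, n, n)              # n small "rule" matrices, each n x n
S = np.random.randn(n, k // n, d // n)    # n small weight blocks

# W = sum_i kron(A_i, S_i) gives a full k x d weight matrix,
# but only n^3 + k*d/n parameters are learned instead of k*d.
W = sum(np.kron(A[i], S[i]) for i in range(n))

x = np.random.randn(d)
y = W @ x                                 # behaves like a dense layer
print(W.shape, y.shape)                   # (8, 8) (8,)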


Like Us, Deep Learning Networks Prefer a Human Voice

#artificialintelligence

The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world's information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to "speak" in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change. A new study by Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen shows that artificial intelligence systems can reach higher levels of performance if they are trained with sound files of human language rather than with numerical data labels. In a side-by-side comparison, the researchers found that a neural network whose "training labels" consisted of sound files identified objects in images more accurately than a network trained in the more traditional manner, using simple binary inputs.


Artificial Intelligence: Technology Trends

#artificialintelligence

As artificial intelligence (AI) becomes more pervasive and embedded in life-changing decisions, the need for transparency has intensified. There have been plenty of high-profile cases in recent years where AI has contributed to bias and discrimination, with the use of facial recognition for policing just one example. There is a high probability of a shift from loose self-regulation to government involvement in AI over the next couple of years. In turn, Big Tech is increasingly using AI to solve the privacy and bias problems that the technology itself created. Listed below are the key technology trends impacting the AI theme, as identified by GlobalData.


Artificial Intelligence in Manufacturing: Time to Scale and Time to Accuracy

#artificialintelligence

Asset-intensive organizations are pursuing digital transformation to attain operational excellence, improve KPIs, and solve concrete issues in production and supporting processes. AI-based prediction models are particularly useful tools that can be deployed in complex production environments. Compared to common analytical tools, prediction models can more readily surface correlations between different parameters in complicated production environments that generate large volumes of structured or unstructured data. My regular talks with executives of production-intensive organizations indicate that AI use is steadily rising. This is in line with IDC's forecast that 70% of G2000 companies will use AI to develop guidance and insights for risk-based operational decision-making by 2026.


The science behind SageMaker's cost-saving Debugger

#artificialintelligence

A machine learning training job can seem to be running like a charm, while it's really suffering from problems such as overfitting, exploding model parameters, and vanishing gradients, which can compromise model performance. Historically, spotting such problems during training has required the persistent attention of a machine learning expert. The Amazon SageMaker team has developed a new tool, SageMaker Debugger, that automates this problem-spotting process, saving customers time and money. For example, by using Debugger, one SageMaker customer reduced model size by 45% and the number of GPU operations by 33%, while improving accuracy. Next week, at the Conference on Machine Learning and Systems (MLSys), we will present a paper that describes the technology behind SageMaker Debugger.
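As a rough illustration of how such automated problem-spotting is wired into a training job, here is a short sketch using the SageMaker Python SDK's built-in Debugger rules; the role, container image URI and S3 paths are placeholders, and the specific rules chosen here are our own example rather than the configuration described in the paper.

# Sketch: attach SageMaker Debugger built-in rules to a training job so issues
# like vanishing gradients or overfitting are flagged automatically.
from sagemaker.debugger import Rule, rule_configs
from sagemaker.estimator import Estimator

rules = [
    Rule.sagemaker(rule_configs.vanishing_gradient()),   # gradients shrinking toward zero
    Rule.sagemaker(rule_configs.exploding_tensor()),      # parameters or gradients blowing up
    Rule.sagemaker(rule_configs.overfit()),                # train/validation loss diverging
]

estimator = Estimator(
    image_uri="<training-image-uri>",        # placeholder
    role="<sagemaker-execution-role>",       # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    rules=rules,                              # Debugger evaluates these during training
)
estimator.fit("s3://<bucket>/<training-data>/")   # placeholder S3 path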


7 Top AI/ML Based Music Apps In 2021

#artificialintelligence

You are in your bed, with a book and a cup of coffee in hand. It's raining, and you are savouring the sound of raindrops buffeting your window panes while your favourite songs play in the background. And most likely, the song you are listening to was recommended by your music app. Music apps that leverage the latest AI and ML technologies have become an essential part of our daily routines. Such an app can have over 50 million songs in its catalogue and collects a lot of information about music tastes, search habits, playlists, geographical location, and most-used devices.


Image analysis based on machine learning reliably identifies haematological malignancies

#artificialintelligence

Myelodysplastic syndrome (MDS) is a disease of the stem cells in the bone marrow, which disturbs the maturation and differentiation of blood cells. Annually, some 200 Finns are diagnosed with MDS, which can develop into acute leukemia. Globally, the incidence of MDS is 4 cases per 100,000 person-years. Diagnosing MDS requires a bone marrow sample, which is also used to investigate genetic changes in bone marrow cells. The syndrome is classified into groups to determine the nature of the disorder in more detail.


Will Transformers Replace CNNs in Computer Vision?

#artificialintelligence

This article is about what is most probably the next generation of neural networks for all computer vision applications: the transformer architecture. You've certainly already heard about this architecture in the field of natural language processing, or NLP, mainly through GPT-3, which made a lot of noise in 2020. Transformers can be used as a general-purpose backbone for many different applications, not only NLP. In a couple of minutes, you will know how the transformer architecture can be applied to computer vision with a new paper called the Swin Transformer by Ze Liu et al. from Microsoft Research [1]. This article may be less flashy than usual as it doesn't really show the actual results of a precise application.