Hiroshi Noji and Yohei Oseki have received the Best Paper Award, NLP2021

#artificialintelligence

The research paper "Parallelization of Recurrent Neural Network Grammars (in Japanese)," co-authored by Hiroshi Noji (AIST) and Yohei Oseki (The University of Tokyo), received the Best Paper Award at the 27th Annual Meeting of the Association for Natural Language Processing.


IBM's new tool lets developers add quantum-computing power to machine learning

ZDNet

IBM is releasing a new module as part of its open-source quantum software development kit, Qiskit, to let developers leverage the capabilities of quantum computers to improve the quality of their machine-learning models. Qiskit Machine Learning is now available and includes the computational building blocks that are necessary to bring machine-learning models into the quantum space. Machine learning is a branch of artificial intelligence that is now widely used in almost every industry. The technology is capable of crunching through ever-larger datasets to identify patterns and relationships, and eventually discover the best way to calculate an answer to a given problem. Researchers and developers, therefore, want to make sure that the software comes up with the best possible model – which means expanding the amount and improving the quality of the training data that is fed to the machine-learning software.
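As a hedged sketch of what those building blocks enable, here is a toy variational quantum classifier built with Qiskit Machine Learning; the dataset is synthetic, and constructor arguments have varied across Qiskit versions, so treat this as illustrative rather than canonical:

```python
# Illustrative sketch of a variational quantum classifier (VQC) with
# Qiskit Machine Learning. The data is a random toy set, and exact
# constructor arguments (e.g. sampler vs. quantum_instance, label
# encoding) differ between library versions.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_machine_learning.algorithms import VQC

X = np.random.rand(20, 2)                 # toy 2-feature samples
y = (X.sum(axis=1) > 1.0).astype(int)     # toy binary labels

vqc = VQC(
    feature_map=ZZFeatureMap(feature_dimension=2),   # encodes data into a circuit
    ansatz=RealAmplitudes(num_qubits=2, reps=2),     # trainable circuit layers
)
vqc.fit(X, y)              # a classical optimizer tunes the circuit parameters
print(vqc.score(X, y))     # accuracy on the toy data
```

The division of labor is the point: the quantum circuit evaluates the model, while a conventional classical optimizer steers its parameters, which is why the module slots into a familiar fit/score workflow.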


Agility Prime Researches Electronic Parachute Powered by Machine Learning - Aviation Today

#artificialintelligence

Kentucky-based Aviation Safety Resources is developing ballistic parachutes for use in aircraft ranging from 60 lbs to 12,000 lbs. The Air Force's Agility Prime program awarded a phase I small business technology transfer (STTR) research contract to Jump Aero and Caltech to create an electronic parachute powered by machine learning that would allow the pilot to recalibrate the flight controller in midair in the event of damage, the company announced on April 7. "The electronic parachute is the name for the concept of implementing an adaptive/machine-learned control routine that would be impractical to certify for the traditional controller for use only in an emergency recovery mode -- something that would be switched on by the pilot if there is reason to believe that the baseline flight controller is not properly controlling the aircraft (if, for example, the aircraft has been damaged in midair)," Carl Dietrich, founder and president of Jump Aero Incorporated, told Avionics International. This technology was previously difficult to certify because of the need for deterministic proof of safety within these complex systems. The research was sparked when the Federal Aviation Administration certified an autonomous landing function for use in emergency situations, which created a path for the possible certification of electronic parachute technology, according to Jump Aero. The machine-learned neural network can be trained on the non-linear behaviors that occur in an aircraft in the presence of substantial failures, such as those generated by a bird strike, Dietrich said.


Reinforcement Learning for Dynamic Pricing

#artificialintelligence

Limitations on physical interactions throughout the world have reshaped our lives and habits. And while the pandemic has been disrupting the majority of industries, e-commerce has been thriving. This article covers how reinforcement learning for dynamic pricing helps retailers refine their pricing strategies to increase profitability and boost customer engagement and loyalty. In dynamic pricing, we want an agent to set optimal prices based on market conditions. In terms of RL concepts, the actions are all of the possible prices, and the states are the market conditions other than the current price of the product or service.
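For illustration, here is a minimal tabular Q-learning sketch of that setup; the demand model, price grid, and state definition are invented stand-ins, not the article's actual environment:

```python
# Minimal, hypothetical Q-learning sketch for dynamic pricing.
# The toy demand model, price grid, and three demand states are
# illustrative assumptions, not the approach the article describes.
import numpy as np

prices = np.array([9.0, 10.0, 11.0, 12.0])   # actions: candidate prices
n_states = 3                                  # e.g. low/medium/high demand
Q = np.zeros((n_states, len(prices)))
alpha, gamma, eps = 0.1, 0.95, 0.1            # learning rate, discount, exploration

def simulate(state, price):
    """Toy environment: higher prices depress demand; returns reward and next state."""
    demand = max(0.0, 20 - 1.5 * price + 2 * state + np.random.randn())
    reward = price * demand                   # revenue as the reward signal
    next_state = np.random.randint(n_states)  # market conditions drift randomly here
    return reward, next_state

state = 0
for step in range(5000):
    # epsilon-greedy choice between exploring and exploiting
    a = np.random.randint(len(prices)) if np.random.rand() < eps else int(Q[state].argmax())
    reward, nxt = simulate(state, prices[a])
    # standard Q-learning update toward the bootstrapped target
    Q[state, a] += alpha * (reward + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

print("learned price per state:", prices[Q.argmax(axis=1)])
```

The learned policy is simply the price with the highest Q-value in each market state, which is what "setting optimal prices based on market conditions" amounts to in this framing.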


8 Outstanding Papers At ICLR 2021

#artificialintelligence

International Conference on Learning Representations (ICLR) recently announced the winners of the ICLR 2021 Outstanding Paper Awards. It recognised eight papers out of the 860 accepted this year. The papers were evaluated for both technical quality and the potential to create a practical impact. The committee was chaired by Ivan Titov. Among the winners is a paper on parameterising hypercomplex multiplications with arbitrarily learnable parameters, which needs far fewer parameters than its fully-connected layer counterpart.
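As a rough reading of that idea, a parameterized hypercomplex multiplication (PHM) layer assembles its weight matrix as a learned sum of Kronecker products, so an n-way decomposition needs roughly 1/n of a dense layer's parameters. The sketch below is a simplified illustration, not the authors' reference code:

```python
# Simplified sketch of a PHM-style layer: the dense weight is a learned
# sum of Kronecker products, W = sum_i kron(A[i], S[i]).
# Shapes and random initialization here are illustrative only.
import numpy as np

def phm_weight(A, S):
    """Build a dense weight from Kronecker factors: W = sum_i kron(A[i], S[i])."""
    return sum(np.kron(A[i], S[i]) for i in range(A.shape[0]))

n, d, k = 4, 64, 64
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n, n))            # n small "rule" matrices, each n x n
S = rng.standard_normal((n, d // n, k // n))  # n blocks of size (d/n) x (k/n)

W = phm_weight(A, S)            # (64, 64), assembled from 64 + 1024 parameters
x = rng.standard_normal(d)
y = x @ W                       # applies exactly like an ordinary dense layer
# A dense layer of the same shape needs d * k = 4096 parameters,
# roughly n times more than the Kronecker parameterization above.
```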


Like Us, Deep Learning Networks Prefer a Human Voice

#artificialintelligence

The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world's information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to "speak" in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change. A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen shows that artificial intelligence systems can reach higher levels of performance if they are programmed with sound files of human language rather than with numerical data labels. The researchers discovered that in a side-by-side comparison, a neural network whose "training labels" consisted of sound files reached higher levels of performance in identifying objects in images than a network that had been programmed in a more traditional manner, using simple binary inputs.
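To make the contrast concrete, here is a hypothetical sketch of the sound-label idea: instead of predicting a one-hot class index, the classifier regresses onto an audio representation of the spoken class name. The random embeddings below stand in for real spectrograms, and the architecture is an invented placeholder, not the study's model:

```python
# Hypothetical sketch: train an image model to predict an audio
# representation of the spoken class name instead of a one-hot label.
# The "audio_labels" are random stand-ins for real spoken-word spectrograms.
import torch
import torch.nn as nn

n_classes, audio_dim = 10, 64
audio_labels = torch.randn(n_classes, audio_dim)   # stand-in spoken-word embeddings

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
                      nn.Linear(256, audio_dim))   # outputs an audio-like vector

images = torch.randn(32, 1, 28, 28)                # toy image batch
targets = torch.randint(0, n_classes, (32,))       # toy class indices

# The loss compares the output to the sound label, not a binary one-hot code.
loss = nn.functional.mse_loss(model(images), audio_labels[targets])
loss.backward()                                    # gradients flow as usual

# At test time, predict the class whose audio label is nearest the output:
with torch.no_grad():
    preds = torch.cdist(model(images), audio_labels).argmin(dim=1)
```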


Artificial Intelligence: Technology Trends

#artificialintelligence

As artificial intelligence (AI) becomes more pervasive and embedded in life-changing decisions, the need for transparency has intensified. There have been plenty of high-profile cases in recent years where AI has contributed to bias and discrimination, with the use of facial recognition for policing just one example. There is a high probability of a shift from loose self-regulation to government involvement in AI over the next couple of years. In turn, Big Tech is increasingly using AI to solve the privacy and bias problems that the technology itself created. Listed below are the key technology trends impacting the AI theme, as identified by GlobalData.


Neuralink's brain-computer interface demo shows a monkey playing Pong

Engadget

Elon Musk's last update on Neuralink -- his company that is working on technology to connect the human brain directly to a computer -- featured a pig with one of its chips implanted in its brain. Now Neuralink is demonstrating its progress by showing a macaque with one of the Link chips playing Pong. At first the monkey, "Pager," is shown using a joystick; eventually, according to the narration, it plays using only its mind via the wireless connection. "Monkey plays Pong with his mind" https://t.co/35NIFm4C7T "Today we are pleased to reveal the Link's capability to enable a macaque monkey, named Pager, to move a cursor on a computer screen with neural activity using a 1,024 electrode fully-implanted neural recording and data transmission device, termed the N1 Link."


Artificial Intelligence in Manufacturing: Time to Scale and Time to Accuracy

#artificialintelligence

Asset-intensive organizations are pursuing digital transformation to attain operational excellence, improve KPIs, and solve concrete issues in the production and supporting process areas. AI-based prediction models are particularly useful tools that can be deployed in complex production environments. Compared to common analytical tools, prediction models can more readily uncover correlations between different parameters in complicated production environments that generate large volumes of structured or unstructured data. My regular talks with executives of production-intensive organizations indicate that AI use is steadily rising. This is in line with IDC's forecast that 70% of G2000 companies will use AI to develop guidance and insights for risk-based operational decision making by 2026.


The science behind SageMaker's cost-saving Debugger

#artificialintelligence

A machine learning training job can seem to be running like a charm, while it's really suffering from problems such as overfitting, exploding model parameters, and vanishing gradients, which can compromise model performance. Historically, spotting such problems during training has required the persistent attention of a machine learning expert. The Amazon SageMaker team has developed a new tool, SageMaker Debugger, that automates this problem-spotting process, saving customers time and money. For example, by using Debugger, one SageMaker customer reduced model size by 45% and the number of GPU operations by 33%, while improving accuracy. Next week, at the Conference on Machine Learning and Systems (MLSys), we will present a paper that describes the technology behind SageMaker Debugger.
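For a sense of how this problem-spotting attaches to a job in practice, here is a rough sketch using built-in Debugger rules with the SageMaker Python SDK; the script name, role ARN, and version strings are placeholders:

```python
# Sketch of attaching SageMaker Debugger built-in rules to a training job.
# The entry point, IAM role ARN, and framework versions are placeholders.
from sagemaker.pytorch import PyTorch
from sagemaker.debugger import Rule, rule_configs

rules = [
    Rule.sagemaker(rule_configs.vanishing_gradient()),  # flags near-zero gradients
    Rule.sagemaker(rule_configs.exploding_tensor()),    # flags diverging values
    Rule.sagemaker(rule_configs.overfit()),             # watches train/val gap
]

estimator = PyTorch(
    entry_point="train.py",                             # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role ARN
    framework_version="1.8",
    py_version="py36",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    rules=rules,        # Debugger evaluates these rules against emitted tensors
)
estimator.fit()         # rule jobs run alongside training and report issues found
```

Each rule runs as a separate processing job that inspects tensors the training job emits, which is what lets Debugger catch problems like vanishing gradients without a human watching the run.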