Introducing quantum convolutional neural networks

#artificialintelligence

Machine learning techniques have so far proved very promising for data analysis in several fields, with many potential applications. However, researchers have found that applying these methods to quantum physics problems is far more challenging due to the exponential complexity of many-body systems. Quantum many-body systems are essentially microscopic structures made up of many interacting particles. While quantum physics studies have focused on the collective behavior of these systems, using machine learning in such investigations has proven very difficult. With this in mind, a team of researchers at Harvard University recently developed a quantum circuit-based algorithm inspired by convolutional neural networks (CNNs), a popular machine learning technique that has achieved remarkable results in a variety of fields.
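
The circuit borrows the convolve-then-pool layer pattern of a classical CNN: parameterized "convolution" gates are shared across neighboring qubits, and "pooling" steps shrink the number of active qubits until a single readout remains. The statevector sketch below illustrates only that pattern; the RY/CNOT gate parameterization and the CNOT-based pooling are our own simplifications, not the Harvard team's actual circuit.

import numpy as np

def ry(theta):                 # single-qubit Y rotation
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def conv_gate(p):              # parameterized two-qubit "convolution" unitary
    return np.kron(ry(p[0]), ry(p[1])) @ CNOT @ np.kron(ry(p[2]), ry(p[3]))

def apply(state, U, i, j):     # act with a 4x4 unitary on qubits i < j
    out = np.tensordot(U.reshape(2, 2, 2, 2), state, axes=([2, 3], [i, j]))
    return np.moveaxis(out, [0, 1], [i, j])

def qcnn(state, layer_params):
    active = list(range(state.ndim))
    for p in layer_params:                     # one parameter set per layer
        gate = conv_gate(p)
        pairs = list(zip(active[::2], active[1::2]))
        for i, j in pairs:                     # "convolution": shared gate on pairs
            state = apply(state, gate, i, j)
        for i, j in pairs:                     # "pooling": fold qubit i into qubit j,
            state = apply(state, CNOT, i, j)   # then retire qubit i
        active = active[1::2]                  # half the qubits stay active
    k = active[0]                              # read out <Z> on the surviving qubit
    probs = np.abs(state) ** 2
    p1 = probs.sum(axis=tuple(ax for ax in range(state.ndim) if ax != k))[1]
    return 1.0 - 2.0 * p1

n = 8                                          # 8 qubits -> 3 conv/pool layers
state = np.zeros((2,) * n)
state[(0,) * n] = 1.0                          # start in |00000000>
rng = np.random.default_rng(0)
print(qcnn(state, rng.uniform(0, np.pi, size=(3, 4))))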


AI: Psychosensory electronic skin technology for future AI development

#artificialintelligence

As a result, many scientists are continuously performing research to imitate the tactile, olfactory, and palate senses, and tactile sensing is expected to be the next mimetic technology for various reasons. Currently, most tactile sensor research focuses on physical mimetic technologies that measure the pressure a robot uses to grab an object, but psychosensory tactile research on mimicking human tactile feelings such as soft, smooth, or rough has a long way to go. Professor Jae Eun Jang's team has therefore developed a tactile sensor that can feel pain and temperature like humans through joint research with Professor Cheil Moon's team in the Department of Brain and Cognitive Science, Professor Ji-woong Choi's team in the Department of Information and Communication Engineering, and Professor Hongsoo Choi's team in the Department of Robotics Engineering. Its key strengths are a simplified sensor structure, the ability to measure pressure and temperature at the same time, and applicability to various tactile systems regardless of the sensor's measurement principle. For this, the research team focused on zinc oxide nanowire (ZnO nanowire) technology, applied as a self-powered tactile sensor that needs no battery thanks to its piezoelectric effect, which generates electrical signals in response to pressure.


Psychosensory electronic skin technology for future AI and humanoid development

#artificialintelligence

Professor Jae Eun Jang's team in the Department of Information and Communication Engineering has developed electronic skin technology that can detect "prick" and "hot" pain sensations like humans. This research result has applications in the development of humanoid robots and prosthetic hands in the future. Scientists are continuously performing research to imitate tactile, olfactory and palate senses, and tactile sensing is expected to be the next mimetic technology for various applications. Currently, most tactile sensor research is focused on physical mimetic technologies that measure the pressure used for a robot to grab an object, but psychosensory tactile research on mimicking human tactile sensory responses like those caused by soft, smooth or rough surfaces has a long way to go. Professor Jae Eun Jang's team has developed a tactile sensor that can feel pain and temperature like humans through a joint project with Professor Cheil Moon's team in the Department of Brain and Cognitive Science, Professor Ji-woong Choi's team in the Department of Information and Communication Engineering, and Professor Hongsoo Choi's team in the Department of Robotics Engineering.


Psychosensory electronic skin technology for future AI and humanoid development

#artificialintelligence

The attempt to mimic the human five senses led to the development of innovative electronic devices such as the camera and the TV, inventions that dramatically changed human life. As a result, many scientists are continuously performing research to imitate the tactile, olfactory, and palate senses, and tactile sensing is expected to be the next mimetic technology for various reasons. Currently, most tactile sensor research focuses on physical mimetic technologies that measure the pressure a robot uses to grab an object, but psychosensory tactile research on mimicking human tactile feelings such as soft, smooth, or rough has a long way to go. Professor Jae Eun Jang's team has therefore developed a tactile sensor that can feel pain and temperature like humans through joint research with Professor Cheil Moon's team in the Department of Brain and Cognitive Science, Professor Ji-woong Choi's team in the Department of Information and Communication Engineering, and Professor Hongsoo Choi's team in the Department of Robotics Engineering. Its key strengths are a simplified sensor structure, the ability to measure pressure and temperature at the same time, and applicability to various tactile systems regardless of the sensor's measurement principle.


Learning to Denoise Distantly-Labeled Data for Entity Typing

arXiv.org Artificial Intelligence

Distantly-labeled data can be used to scale up training of statistical models, but it is typically noisy and that noise can vary with the distant labeling technique. In this work, we propose a two-stage procedure for handling this type of data: denoise it with a learned model, then train our final model on clean and denoised distant data with standard supervised training. Our denoising approach consists of two parts. First, a filtering function discards examples from the distantly labeled data that are wholly unusable. Second, a relabeling function repairs noisy labels for the retained examples. Each of these components is a model trained on synthetically-noised examples generated from a small manually-labeled set. We investigate this approach on the ultra-fine entity typing task of Choi et al. (2018). Our baseline model is an extension of their model with pre-trained ELMo representations, which already achieves state-of-the-art performance. Adding distant data that has been denoised with our learned models gives further performance gains over this base model, outperforming models trained on raw distant data or heuristically-denoised distant data.
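
As a minimal sketch of the two-stage recipe, the toy pipeline below stands in scikit-learn classifiers for the paper's learned neural filtering and relabeling functions; the features, names, and noise model here are all hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def synthesize_noise(y_gold, rng, flip_rate=0.3):
    """Corrupt gold labels to mimic distant-supervision noise."""
    y_noisy = y_gold.copy()
    flip = rng.random(len(y_gold)) < flip_rate
    y_noisy[flip] = rng.integers(0, y_gold.max() + 1, size=flip.sum())
    return y_noisy, flip

rng = np.random.default_rng(0)
X_gold = rng.normal(size=(500, 16))      # toy feature vectors
y_gold = rng.integers(0, 4, size=500)    # toy gold labels
y_syn, was_flipped = synthesize_noise(y_gold, rng)

# Stage 1: train the denoiser components on the synthetically-noised gold set.
# The filter conditions on the (possibly wrong) label, since usability depends
# on both the example and its distant label.
with_label = np.hstack([X_gold, y_syn[:, None]])
filter_fn = LogisticRegression(max_iter=1000).fit(with_label, was_flipped)
relabel_fn = LogisticRegression(max_iter=1000).fit(X_gold, y_gold)

# Stage 2: discard unusable distant examples, repair the labels of the rest,
# and train the final model on gold plus denoised data with ordinary
# supervised training.
X_dist = rng.normal(size=(2000, 16))
y_dist = rng.integers(0, 4, size=2000)
keep = ~filter_fn.predict(np.hstack([X_dist, y_dist[:, None]])).astype(bool)
y_fixed = relabel_fn.predict(X_dist[keep])

final = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_gold, X_dist[keep]]), np.concatenate([y_gold, y_fixed]))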


Cooperative Learning of Disjoint Syntax and Semantics

arXiv.org Artificial Intelligence

There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) have recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near-perfect accuracy on this task. Our model is composed of two separate modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimization schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalization with little loss in performance. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis.
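
A schematic of such a cooperative loop in PyTorch: the discrete syntax module samples which adjacent pair of nodes to merge and is trained with REINFORCE, while the continuous composition and classification parameters are trained by ordinary backpropagation. This illustrates the optimization scheme only, not the authors' exact architecture.

import torch
import torch.nn as nn

D, NUM_CLASSES = 64, 3
embed = nn.Embedding(1000, D)
score = nn.Linear(2 * D, 1)                               # syntax: scores adjacent pairs
compose = nn.Sequential(nn.Linear(2 * D, D), nn.Tanh())   # semantics: merges two nodes
classify = nn.Linear(D, NUM_CLASSES)
params = [*embed.parameters(), *score.parameters(),
          *compose.parameters(), *classify.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(token_ids, label):
    nodes = list(embed(token_ids))          # one vector per token
    log_prob = 0.0
    while len(nodes) > 1:
        pairs = torch.stack([torch.cat([a, b]) for a, b in zip(nodes, nodes[1:])])
        dist = torch.distributions.Categorical(logits=score(pairs).squeeze(-1))
        k = dist.sample()                   # discrete choice: which pair to merge
        log_prob = log_prob + dist.log_prob(k)
        ki = int(k)
        nodes = nodes[:ki] + [compose(pairs[ki])] + nodes[ki + 2:]
    loss = nn.functional.cross_entropy(classify(nodes[0]).unsqueeze(0), label)
    reward = -loss.detach()                 # the semantic loss is the parser's reward
    # backprop for semantics plus the REINFORCE surrogate for syntax
    # (a learned baseline would normally be subtracted from the reward)
    (loss - reward * log_prob).backward()
    opt.step()
    opt.zero_grad()

train_step(torch.tensor([5, 8, 13, 21]), torch.tensor([1]))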


LG explains why robots are too fat finder.com.au

#artificialintelligence

I recently had the opportunity to travel to South Korea to look over LG's work in both the AI and robotics fields, including some detailed time with its LG CLOi Airport Guide Robot. That's a design that LG has iterated on over time, and I had the chance to sit down for an interview (via a translator) with Hyungjin Choi, LG's Leader of Life Support Robot Biz. That's a fancy title to say that he's in charge (in his own words) "of robot business development and product planning" at LG. Robots in industry are nothing new, but people-centric robots are a tough challenge. Mr Choi is quite clear that the first robot was the toughest. "Technically speaking, the most difficult one is the first one that you can see when you arrive (at Seoul's Incheon International Airport), the Airport guide robot."


Deep Learning Can Now Help Prevent Heart Failure

#artificialintelligence

Georgia Tech researchers are using deep learning to identify early signs of heart failure. In a paper published in the Journal of the American Medical Informatics Association (JAMIA), Georgia Tech School of Computational Science and Engineering Associate Professor Jimeng Sun and Ph.D. student Edward Choi present a pioneering method for analyzing vast amounts of personal health record data that addresses temporality in the data, something previously ignored by conventional machine learning models in health care applications. The new research, funded by the National Institutes of Health in collaboration with Sutter Health, uses a deep learning model to enable earlier detection of the incidents and stages that often lead to heart failure within 6-18 months. To achieve this, Sun and Choi use a recurrent neural network (RNN) to model temporal relations among events in electronic health records. Temporal relationships communicate the ordering of events or states in time.
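
As a rough sketch of the idea, the toy PyTorch model below runs a GRU over a patient's visits, where each visit is a bag of embedded medical codes with the time gap since the previous visit appended, so both the ordering and the spacing of events reach the model. The names, shapes, and vocabulary size are hypothetical, not the architecture from the JAMIA paper.

import torch
import torch.nn as nn

NUM_CODES, D = 5000, 128

class HFRiskRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.code_embed = nn.EmbeddingBag(NUM_CODES, D, mode="sum")
        self.rnn = nn.GRU(D + 1, D, batch_first=True)
        self.out = nn.Linear(D, 1)     # risk of future heart failure

    def forward(self, visit_codes, gaps):
        # visit_codes: list of LongTensors (codes per visit); gaps: (T,) in days
        visits = torch.stack([self.code_embed(c.unsqueeze(0)).squeeze(0)
                              for c in visit_codes])          # (T, D)
        x = torch.cat([visits, gaps.unsqueeze(-1)], dim=-1)   # (T, D+1)
        _, h = self.rnn(x.unsqueeze(0))                       # final hidden state
        return torch.sigmoid(self.out(h[-1])).squeeze()

model = HFRiskRNN()
codes = [torch.tensor([12, 407]), torch.tensor([12, 93, 2048])]
gaps = torch.tensor([0.0, 42.0])       # days since the previous visit
print(model(codes, gaps))              # risk score in (0, 1)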


Creating and Using Tools in a Hybrid Cognitive Architecture

AAAI Conferences

People regularly use objects in the environment as tools to achieve their goals. In this paper we report extensions to the ICARUS cognitive architecture that let it create and use combinations of objects in this manner. These extensions include the ability to represent virtual objects composed of simpler ones and to reason about their quantitative features. They also include revised modules for planning and execution that operate over this hybrid representation, taking into account both relational structures and numeric attributes. We demonstrate the extended architecture's behavior on a number of tasks that involve tool construction and use, after which we discuss related research and plans for future work.
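
One way to picture the hybrid representation is a composite "virtual object" that bundles relational structure with quantitative features derived from its parts, so a planner can test both relational and numeric preconditions. The sketch below uses invented names and is not the actual ICARUS implementation.

from dataclasses import dataclass, field

@dataclass
class PhysObject:
    name: str
    length: float            # quantitative features
    width: float

@dataclass
class VirtualObject:
    name: str
    parts: list = field(default_factory=list)        # simpler component objects
    relations: list = field(default_factory=list)    # e.g. ("attached", a, b)

    @property
    def length(self):        # quantitative feature derived from the parts
        return sum(p.length for p in self.parts)

stick = PhysObject("stick", length=0.6, width=0.03)
hook = PhysObject("hook", length=0.1, width=0.02)
tool = VirtualObject("long-hook", parts=[stick, hook],
                     relations=[("attached", "stick", "hook")])

# A planner could now test a numeric precondition on the composite tool,
# e.g. whether it is long enough to reach a goal object:
print(tool.length >= 0.65)   # True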