morpheus
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > China (0.04)
Morpheus: A Neural-driven Animatronic Face with Hybrid Actuation and Diverse Emotion Control
Zhang, Zongzheng, Yang, Jiawen, Peng, Ziqiao, Yang, Meng, Ma, Jianzhu, Cheng, Lin, Xu, Huazhe, Zhao, Hang, Zhao, Hao
Figure caption: Blue markers indicate the attachment points between the underlying mechanical structure and the soft skin; yellow arrows denote the directions of movement. Blue arrows indicate the three-axis neck movement: nodding, shaking, and rotation. The green arrow illustrates the jaw's ability to move horizontally in addition to its typical opening and closing motions, enabling more diverse expressions. The first row shows the virtual expressions generated by our algorithm rendered in Blender; the second row shows the corresponding real-world expressions reproduced by the animatronic face.
Abstract -- Previous animatronic faces struggle to express emotions effectively due to hardware and software limitations. On the hardware side, earlier approaches either use rigid-driven mechanisms, which provide precise control but are difficult to design within constrained spaces, or tendon-driven mechanisms, which are more space-efficient but challenging to control. In contrast, we propose a hybrid actuation approach that combines the best of both worlds. The eyes and mouth -- key areas for emotional expression -- are controlled by rigid mechanisms for precise movement, while the nose and cheeks, which convey subtle facial microexpressions, are driven by strings. This design allows us to build a compact yet versatile hardware platform capable of expressing a wide range of emotions. On the algorithmic side, our method introduces a self-modeling network that maps motor actions to facial landmarks, allowing us to automatically establish the relationship between blendshape coefficients for different facial expressions and the corresponding motor control signals through gradient backpropagation. We then train a neural network to map speech input to the corresponding blendshape controls. With our method, we can generate distinct emotional expressions such as happiness, fear, disgust, and anger from any given sentence, each with nuanced, emotion-specific control signals -- a feature that has not been demonstrated in earlier systems.
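The self-modeling idea above can be sketched in a few lines: given a differentiable model from motor commands to landmark positions, target landmarks can be inverted into motor commands by gradient descent. The linear model, shapes, and values below are illustrative assumptions, not the authors' actual network.

```python
import numpy as np

# Toy differentiable "self-model": landmarks = W @ motors + b. The
# paper learns this mapping as a neural network from data; a linear
# map stands in here so the gradient step stays explicit.
W = np.array([[1.0, 0.2],
              [0.1, 0.9],
              [0.3, 0.4]])          # 3 landmarks driven by 2 motors
b = np.array([0.05, -0.02, 0.0])

def self_model(u):
    """Predict landmark positions from a motor command vector u."""
    return W @ u + b

def solve_motors(target, steps=300, lr=0.5):
    """Recover motor commands whose predicted landmarks match `target`
    by gradient descent on the squared error, i.e. backpropagating
    through the self-model."""
    u = np.zeros(W.shape[1])
    for _ in range(steps):
        residual = self_model(u) - target   # d(loss)/d(landmarks)
        u -= lr * (W.T @ residual)          # chain rule through W
    return u

# Landmarks from a ground-truth motor setting are recovered exactly.
u_true = np.array([0.3, -0.7])
u_hat = solve_motors(self_model(u_true))
print(np.allclose(u_hat, u_true, atol=1e-6))  # True
```

With a learned neural self-model in place of `W`, the same loop would backpropagate through the network instead of multiplying by `W.T`.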
- Asia > Azerbaijan > Karabakh Economic Region > Shusha District > Shusha (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Emotion (1.00)
DataMap: A Portable Application for Visualizing High-Dimensional Data
Motivation: The visualization and analysis of high-dimensional data are essential in biomedical research, and there is a need for secure, scalable, and reproducible tools to facilitate data exploration and interpretation. Results: We introduce DataMap, a browser-based application for visualizing high-dimensional data with heatmaps, principal component analysis (PCA), and t-distributed stochastic neighbor embedding (t-SNE). DataMap runs entirely in the web browser, ensuring data privacy while eliminating the need for installation or a server. The application has an intuitive user interface for data transformation, annotation, and the generation of reproducible R code. Availability and Implementation: Freely available as a GitHub Pages site at https://gexijin.github.io/datamap/. The source code is available at https://github.com/gexijin/datamap, and DataMap can also be installed as an R package. Contact: Xijin.Ge@sdstate.ed
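As a rough sketch of what a PCA view of high-dimensional data computes under the hood: center the features, take an SVD, and project samples onto the top principal components. The data below is synthetic and the function is illustrative, not DataMap's implementation (DataMap itself generates R code).

```python
import numpy as np

def pca(X, n_components=2):
    """Project samples of X onto the top principal components."""
    Xc = X - X.mean(axis=0)                    # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # sample coordinates on PCs
    explained = S[:n_components] ** 2 / np.sum(S ** 2)  # variance ratio
    return scores, explained

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))                 # 50 samples x 200 features
scores, explained = pca(X)
print(scores.shape)  # (50, 2)
```

The `scores` array is what a 2-D PCA scatter plot draws; `explained` gives the fraction of total variance on each axis.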
- North America > United States > South Dakota (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Health & Medicine (0.96)
- Information Technology > Security & Privacy (0.89)
MORPHEUS: Modeling Role from Personalized Dialogue History by Exploring and Utilizing Latent Space
Tang, Yihong, Wang, Bo, Zhao, Dongming, Jin, Xiaojia, Zhang, Jijun, He, Ruifang, Hou, Yuexian
Personalized Dialogue Generation (PDG) aims to create coherent responses according to roles or personas. Traditional PDG relies on external role data, which can be scarce and raise privacy concerns. Some approaches address these issues by extracting role information from the dialogue history, but they often fail to model roles generically in a continuous space. To overcome these limitations, we introduce a novel framework that MOdels Roles from Personalized Dialogue History by Exploring and Utilizing Latent Space (MORPHEUS) through a three-stage training process. Specifically, we create a persona codebook to represent roles compactly in latent space, and this codebook is used to construct a posterior distribution of role information. This method enables the model to generalize across roles, allowing the generation of personalized dialogues even for unseen roles. Experiments on both Chinese and English datasets demonstrate that MORPHEUS enhances the extraction of role information and improves response generation without external role data. Additionally, MORPHEUS can be regarded as an efficient fine-tuning method for large language models.
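A persona codebook can be sketched in the vector-quantization style: a continuous role representation is snapped to its nearest codebook entry, yielding a compact discrete code for the role. The codebook size, dimensionality, and L2 nearest-neighbor quantization below are illustrative assumptions, not MORPHEUS's exact scheme.

```python
import numpy as np

def quantize(z, codebook):
    """Return the index and vector of the codebook entry nearest to z."""
    dists = np.linalg.norm(codebook - z, axis=1)  # L2 distance to each code
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))           # 16 persona codes, 8-dim latent
z = codebook[3] + 0.01 * rng.normal(size=8)   # a role latent near code 3
idx, code = quantize(z, codebook)
print(idx)  # 3
```

Because every role collapses onto one of a small set of shared codes, the codebook gives the model a compact space in which unseen roles can still land near a known persona.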
- Asia > China > Tianjin Province > Tianjin (0.05)
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- Leisure & Entertainment (1.00)
- Information Technology > Security & Privacy (0.34)
MORPHeus: a Multimodal One-armed Robot-assisted Peeling System with Human Users In-the-loop
Ye, Ruolin, Hu, Yifei, Bian, Yuhan, Kulm, Luke, Bhattacharjee, Tapomayukh
Meal preparation is an important instrumental activity of daily living (IADL). While existing research has explored robotic assistance in meal preparation tasks such as cutting and cooking, the crucial task of peeling has received less attention. Robot-assisted peeling, conventionally a bimanual task, is challenging to deploy in the homes of care recipients using two wheelchair-mounted robot arms due to ergonomic and transferring challenges. This paper introduces a robot-assisted peeling system utilizing a single robotic arm and an assistive cutting board, inspired by the way individuals with one functional hand prepare meals. Our system incorporates a multimodal active perception module to determine whether an area on the food is peeled, a human-in-the-loop long-horizon planner to perform task planning while catering to a user's preference for peeling coverage, and a compliant controller to peel the food items. We demonstrate the system on 12 food items representing the extremes of different shapes, sizes, skin thicknesses, surface textures, skin vs. flesh colors, and deformability.
- North America > United States > New York > Tompkins County > Ithaca (0.04)
- Asia > South Korea > Daegu > Daegu (0.04)
Optimizing Sparse Linear Algebra Through Automatic Format Selection and Machine Learning
Stylianou, Christodoulos, Weiland, Michele
Sparse matrices are an integral part of scientific simulations. As hardware evolves, new sparse matrix storage formats are proposed that aim to exploit optimizations specific to the new hardware. In the era of heterogeneous computing, users are often required to use multiple formats to keep their applications optimal across the different available hardware, resulting in longer development times and maintenance overhead. A potential solution to this problem is a lightweight auto-tuner driven by Machine Learning (ML) that selects for the user, from a pool of available formats, the one matching the characteristics of the sparsity pattern, the target hardware, and the operation to execute. In this paper, we introduce Morpheus-Oracle, a library that provides a lightweight ML auto-tuner capable of accurately predicting the optimal format across multiple backends, targeting the major HPC architectures and aiming to eliminate any format-selection input from the end user. Across more than 2000 real-life matrices, we achieve an average classification accuracy of 92.63% and a balanced accuracy of 80.22% across the available systems. Adopting the auto-tuner yields an average speedup of 1.1x on CPUs and 1.5x to 8x on NVIDIA and AMD GPUs, with maximum speedups reaching up to 7x and 1000x respectively.
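The auto-tuning idea can be sketched as a two-step pipeline: extract features from the sparsity pattern, then feed them to a classifier that picks a format. Morpheus-Oracle trains a real ML model; the features and the hand-written decision rule standing in for it below are illustrative assumptions, not the library's actual model.

```python
import numpy as np

def sparsity_features(row_ptr):
    """Per-row nonzero statistics from a CSR row-pointer array."""
    nnz_per_row = np.diff(row_ptr)
    return {
        "avg_nnz": float(nnz_per_row.mean()),
        "max_nnz": int(nnz_per_row.max()),
        "std_nnz": float(nnz_per_row.std()),
    }

def select_format(feats):
    """Hand-written stand-in for the trained classifier."""
    if feats["std_nnz"] < 1e-9:                   # perfectly regular rows
        return "ELL"
    if feats["max_nnz"] > 4 * feats["avg_nnz"]:   # a few very long rows
        return "COO"
    return "CSR"

# Row pointers for a matrix whose rows all hold 2 nonzeros (regular).
print(select_format(sparsity_features(np.array([0, 2, 4, 6]))))  # ELL
```

A trained model replaces `select_format` with a classifier fitted on measured per-format performance, which is what lets the choice also account for the target hardware and operation.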
Low-Rank Modular Reinforcement Learning via Muscle Synergy
Dong, Heng, Wang, Tonghan, Liu, Jiayuan, Zhang, Chongjie
Modular Reinforcement Learning (RL) decentralizes the control of multi-joint robots by learning policies for each actuator. Previous work on modular RL has demonstrated its ability to control morphologically different agents with a shared actuator policy. However, as the Degrees of Freedom (DoF) of robots increase, training a morphology-generalizable modular controller becomes exponentially more difficult. Motivated by the way the human central nervous system controls numerous muscles, we propose a Synergy-Oriented LeARning (SOLAR) framework that exploits the redundant nature of DoF in robot control. Actuators are grouped into synergies by an unsupervised learning method, and a synergy action is learned to control multiple actuators in synchrony. In this way, we achieve low-rank control at the synergy level. We extensively evaluate our method on a variety of robot morphologies, and the results show its superior efficiency and generalizability, especially on robots with large DoF such as Humanoids++ and UNIMALs.
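The grouping-then-sharing mechanism can be sketched as: cluster actuators with an unsupervised method, then broadcast one synergy action to every actuator in a group, so a k-dimensional command controls the full action vector. The k-means grouping, 2-d actuator features, and direct action sharing below are illustrative assumptions, not SOLAR's exact pipeline.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means; returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):               # skip emptied clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def expand(synergy_action, labels):
    """Low-rank control: k synergy commands -> full actuator vector."""
    return synergy_action[labels]

# 6 actuators described by 2-d features, forming 2 obvious groups.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = kmeans(feats, k=2)
actions = expand(np.array([0.3, -0.8]), labels)   # 2 commands drive 6 joints
print(actions.shape)  # (6,)
```

The policy then only has to output `k` synergy commands instead of one command per joint, which is what keeps the control low-rank as the DoF grows.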
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > China (0.04)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
AI helps study first images from James Webb Space Telescope
Scientists around the world are gearing up to study the first images taken by the James Webb Space Telescope, which are to be released on July 12. Some astronomers will be running machine-learning algorithms on the data to detect and classify galaxies in deep space at a level of detail never seen before. Brant Robertson, an astrophysics professor at the University of California, Santa Cruz, in the US believes the telescope's snaps will lead to breakthroughs that will help us better understand how the universe formed some 13.7 billion years ago. "The JWST data is exciting because it gives us an unprecedented window on the infrared universe, with a resolution that we've only dreamed about until now," he told The Register. Robertson helped develop Morpheus, a machine-learning model trained to pore over pixels and pick out blurry blob-shaped objects from the deep abyss of space and determine whether these structures are galaxies or not, and if so, of what type.
Tension Inside Google Over a Fired AI Researcher's Conduct
In late 2018, Google AI researchers Anna Goldie and Azalia Mirhoseini got the go-ahead to test an elegant idea. Google had invented powerful computer chips called tensor processing units, or TPUs, to run machine learning algorithms inside its data centers--but, the pair wondered, what if AI software could help improve that same AI hardware? The project, later codenamed Morpheus, won support from Google's AI boss Jeff Dean and attracted interest from the company's chipmaking team. It focused on a step in chip design when engineers must decide how to physically arrange blocks of circuits on a chunk of silicon, a complex, months-long puzzle that helps determine a chip's performance. In June 2021, Goldie and Mirhoseini were lead authors on a paper in the journal Nature that claimed a technique called reinforcement learning could perform that step better than Google's own engineers, and do it in just a few hours.
- Information Technology > Services (0.54)
- Information Technology > Hardware (0.37)