
Joint-repositionable Inner-wireless Planar Snake Robot

Kanada, Ayato, Takahashi, Ryo, Hayashi, Keito, Hosaka, Ryusuke, Yukita, Wakako, Nakashima, Yasutaka, Yokota, Tomoyuki, Someya, Takao, Kamezaki, Mitsuhiro, Kawahara, Yoshihiro, Yamamoto, Motoji

arXiv.org Artificial Intelligence

Bio-inspired multi-joint snake robots offer the advantage of terrain adaptability due to their limbless structure and high flexibility. However, the series of dozens of motor units in typical multi-joint snake robots results in a heavy body structure and high power consumption of hundreds of watts. This paper presents a joint-repositionable, inner-wireless snake robot that enables multi-joint-like locomotion using a low-powered underactuated mechanism. The snake robot, consisting of a series of flexible passive links, can dynamically change its joint coupling configuration by repositioning motor-driven joint units along rack gears inside the robot. Additionally, a soft robot skin wirelessly powers the internal joint units, avoiding the risk of wire tangling and disconnection caused by the movable joint units. The combination of the joint-repositionable mechanism and the wireless-charging-enabled soft skin achieves a high degree of bending, along with a lightweight structure of 1.3 kg and energy-efficient wireless power transmission of 7.6 watts.


MURPHY: A Robot that Learns by Doing

Mel, Bartlett W.

Neural Information Processing Systems

Current Focus of Learning Research: Most connectionist learning algorithms may be grouped into three general categories, commonly referred to as supervised, unsupervised, and reinforcement learning. Supervised learning requires the explicit participation of an intelligent teacher, usually to provide the learning system with task-relevant input-output pairs (for two recent examples, see [1,2]). Unsupervised learning, exemplified by "clustering" algorithms, is generally concerned with detecting structure in a stream of input patterns [3,4,5,6,7]. In its final state, an unsupervised learning system will typically represent the discovered structure as a set of categories representing regions of the input space or, more generally, as a mapping from the input space into a space of lower dimension that is somehow better suited to the task at hand. In reinforcement learning, a "critic" rewards or penalizes the learning system until the system ultimately produces the correct output in response to a given input pattern [8]. It has seemed an inevitable tradeoff that systems needing to rapidly learn specific, behaviorally useful input-output mappings must necessarily do so under the auspices of an intelligent teacher with a ready supply of task-relevant training examples. This state of affairs has seemed somewhat paradoxical, since the processes of perceptual and cognitive development in human infants, for example, do not depend on the moment-by-moment intervention of a teacher of any sort. Learning by Doing: The current work has focused on a fourth type of learning algorithm, i.e. learning-by-doing, an approach that has been very little studied from either a connectionist perspective
