Toward Anxiety-Reducing Pocket Robots for Children

Frederiksen, Morten Roed, Støy, Kasper, Matarić, Maja

arXiv.org Artificial Intelligence

A common denominator for most therapy treatments for children who suffer from an anxiety disorder is daily practice routines to learn techniques needed to overcome anxiety. However, applying those techniques while experiencing anxiety can be highly challenging. This paper presents the design, implementation, and pilot study of a tactile hand-held pocket robot AffectaPocket, designed to work alongside therapy as a focus object to facilitate coping during an anxiety attack. The robot does not require daily practice to be used, has a small form factor, and has been designed for children 7 to 12 years old. The pocket robot works by sensing when it is being held and attempts to shift the child's focus by presenting them with a simple three-note rhythm-matching game. We conducted a pilot study of the pocket robot involving four children aged 7 to 10 years, and then a main study with 18 children aged 6 to 8 years; neither study involved children with anxiety. Both studies aimed to assess the reliability of the robot's sensor configuration, its design, and the effectiveness of the user tutorial. The results indicate that the morphology and sensor setup performed adequately and the tutorial process enabled the children to use the robot with little practice. This work demonstrates that the presented pocket robot could represent a step toward developing low-cost accessible technologies to help children suffering from anxiety disorders.


Serious Play to Encourage Socialization between Unfamiliar Children Facilitated by a LEGO Robot

Lind, Nicklas, Paramarajah, Nilan, Merritt, Timothy

arXiv.org Artificial Intelligence

Socialization is an essential development skill for preschool children. In collaboration with the LEGO Group, we developed Robert Robot, a simplified robot, which enables socialization between children and facilitates shared experiences when meeting for the first time. An exploratory study to observe socialization between preschool children was conducted with 30 respondents in pairs. Additionally, observational data from 212 play sessions with four Robert Robots in the wild were collected. Subsequent analysis found that children have fun as Robert Robot breaks the ice between unfamiliar children. The children relayed audio cues related to the imaginative world of Robert Robot's personalities and mimicked each other as a method of initiating social play and communication with their unfamiliar peers. Furthermore, the study contributes four implications for the design of robots for socialization between children. This chapter provides an example case of serious storytelling using playful interactions engaging children with the character of the robot and the mini-narratives around the build requests.


Active Gaze Behavior Boosts Self-Supervised Object Learning

Yu, Zhengyang, Aubret, Arthur, Raabe, Marcel C., Yang, Jane, Yu, Chen, Triesch, Jochen

arXiv.org Artificial Intelligence

Due to significant variations in the projection of the same object from different viewpoints, machine learning algorithms struggle to recognize the same object across various perspectives. In contrast, toddlers quickly learn to recognize objects from different viewpoints with almost no supervision. Recent works argue that toddlers develop this ability by mapping close-in-time visual inputs to similar representations while interacting with objects. High-acuity vision is only available in the central visual field, which may explain why toddlers (much like adults) constantly move their gaze around during such interactions. It is unclear whether, and to what extent, toddlers curate their visual experience through these eye movements to support learning object representations. In this work, we explore whether a bio-inspired visual learning model can harness toddlers' gaze behavior during a play session to develop view-invariant object recognition. Exploiting head-mounted eye tracking during dyadic play, we simulate toddlers' central visual field experience by cropping image regions centered on the gaze location. This visual stream feeds a time-based self-supervised learning algorithm. Our experiments demonstrate that toddlers' gaze strategy supports the learning of invariant object representations. Our analysis also reveals that the limited size of the central visual field, where acuity is high, is crucial for this. We further find that toddlers' visual experience elicits more robust representations compared to adults', mostly because toddlers look at objects they hold themselves for longer bouts. Overall, our work reveals how toddlers' gaze behavior supports self-supervised learning of view-invariant object recognition.
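The two ingredients the abstract describes, cropping around the gaze point to mimic the high-acuity central visual field and pairing close-in-time views as positives for self-supervised learning, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, crop size, and temporal window are assumptions.

```python
import numpy as np

def gaze_crop(frame, gaze_xy, size=32):
    """Crop a square region centered on the gaze point, simulating the
    high-acuity central visual field (the crop size is an assumption)."""
    h, w = frame.shape[:2]
    x, y = gaze_xy
    half = size // 2
    # Clamp so the crop stays inside the frame.
    x0 = int(np.clip(x - half, 0, w - size))
    y0 = int(np.clip(y - half, 0, h - size))
    return frame[y0:y0 + size, x0:x0 + size]

def temporal_pairs(crops, window=1):
    """Pair each crop with a temporally close one: close-in-time views
    are treated as positives by the time-based SSL objective."""
    return [(crops[i], crops[i + window]) for i in range(len(crops) - window)]
```

In a full model, each pair would be fed to a contrastive objective that pulls the two embeddings together; here the sketch only shows how the gaze-centered visual stream is constructed.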


Bottlenose dolphins 'smile' to say it's time to play

Popular Science

Dolphins are among the most playful and social animals on Earth, yet we don't know much about how they communicate during games and other lighter interactions. New research on captive bottlenose dolphins (Tursiops truncatus) indicates that they use an "open mouth" facial expression similar to a smile to communicate during social play. This expression was used most consistently when a dolphin was in its playmate's field of view, and some playmates responded with a similar expression. The findings are detailed in a study published October 2 in the Cell Press journal iScience. For dolphins, play can include acrobatics, surfing, playing with objects, chasing, and playfighting.


Caregiver Talk Shapes Toddler Vision: A Computational Study of Dyadic Play

Schaumlöffel, Timothy, Aubret, Arthur, Roig, Gemma, Triesch, Jochen

arXiv.org Artificial Intelligence

Infants' ability to recognize and categorize objects develops gradually. The second year of life is marked by both the emergence of more semantic visual representations and a better understanding of word meaning. This suggests that language input may play an important role in shaping visual representations. However, even in contexts suitable for word learning, like dyadic play sessions, caregivers' utterances are sparse and ambiguous, often referring to objects different from the one to which the child attends. Here, we systematically investigate to what extent caregivers' utterances can nevertheless enhance visual representations. For this, we propose a computational model of visual representation learning during dyadic play. We introduce a synthetic dataset of ego-centric images perceived by a toddler-agent that moves and rotates toy objects in different parts of its home environment while hearing caregivers' utterances, modeled as captions. We propose to model toddlers' learning as simultaneously aligning representations for 1) close-in-time images and 2) co-occurring images and utterances. We show that utterances with statistics matching those of real caregivers give rise to representations supporting improved category recognition. Our analysis reveals that a small decrease/increase in object-relevant naming frequencies can drastically impact the learned representations. This affects the attention on object names within an utterance, which is required for efficient visuo-linguistic alignment. Overall, our results support the hypothesis that caregivers' naming utterances can improve toddlers' visual representations.
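The dual objective described in the abstract, aligning 1) close-in-time images and 2) co-occurring images and utterances, can be illustrated with a toy loss over precomputed embeddings. This is a simplified sketch under assumed names and a cosine-distance formulation; the actual paper's loss and weighting may differ.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def alignment_loss(img_t, img_t1, txt, lam=0.5):
    """Combined objective: pull together (1) embeddings of close-in-time
    images and (2) embeddings of an image and its co-occurring utterance.
    lam balances the visual and visuo-linguistic terms (value assumed)."""
    time_term = 1.0 - cosine(img_t, img_t1)   # temporal image-image alignment
    lang_term = 1.0 - cosine(img_t, txt)      # image-utterance alignment
    return (1 - lam) * time_term + lam * lang_term
```

When the three embeddings already coincide, the loss is near zero; mismatched temporal views or captions increase it, so gradient-based training would push co-occurring inputs toward shared representations.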


Esports injuries real for pros and at-home gamer, from finger sprains to collapsed lungs

USATODAY - Tech Top Stories

Almost 70,000 gamers are set to crowd the Los Angeles Convention Center for the annual E3 expo, a highlight on the video game calendar. Some of the top attractions are esports, with competitions throughout the week attracting fans in droves. Inside an 8,000-square-foot state-of-the-art esports training facility in Santa Monica, California, the five players who compete in "League of Legends" for Team Liquid push their bodies to the limit. "I play eight to 12 hours a day, not including our games on the weekends," Jake Puchero tells me over the phone. "It takes a lot of mental focus and physical stamina." Puchero, better known by his in-game name "Xmithie," is a 28-year-old professional "League of Legends" player for Team Liquid. For those not following esports, "League of Legends" is an online multiplayer game that pits two teams of five players against each other in an online battle arena, and it's one of the most popular sporting events on the planet. More people now watch the League of ...


Keepon Helps Kids Learn to Argue Better

IEEE Spectrum Robotics

Kids are not well known for their conflict resolution skills. That's part of being a kid, I guess, but they've got to learn these skills at some point, or they turn into teens without conflict resolution skills. And then you end up with adults that only know how to solve problems by throwing tantrums of one sort or another: We've all met people like that. It would be great if there was a way to teach children how to handle disagreements equitably, and there is: It's called teachers (or adults in general). But having adults around all the time gets expensive.


Yeti: How a Google game console could take on Xbox, PlayStation, and Steam

PCWorld

It's called "Yeti," and it's the code name attached to an intriguing rumor about Google's gaming ambitions that emerged this week. The rumor suggests the company is developing its own cloud-based gaming service and home console. As The Information reported, the service would stream games into users' homes from remote servers, allowing users to play on a Chromecast or a new console made by Google. A game console from Google could be a big deal, akin to how Microsoft transformed the gaming business after launching the Xbox in 2001. Still, this week's reporting offered scant details about how Google's gaming service might work, what its hardware might look like, and when we'll see the fruits of these efforts.


It's hard not to love Anki's adorable Cozmo robot

Engadget

As someone raised on science fiction and the dream of advanced artificially intelligent robots, I couldn't help but fall for Anki's Cozmo. The tiny bot already won me over when I first saw it in action back in June, and since then it's been one of my most anticipated gadgets this year. Having a robot pal with the spunk and wit of a Pixar character simply feels more exciting than the prospect of yet another smartphone. It's bursting with potential, though the high $180 price means it's not for everyone just yet. You might already be familiar with Anki's smartphone-powered remote control cars, but Cozmo is something else entirely.


'ReCore' is the mashup of 'Metroid' and 'Mega Man' I didn't know I wanted

Engadget

Unfortunately, I didn't really get to do any exploration, but I did get a good taste of the smooth and fluid combat system during my demo. One trigger locks you on to your enemies and the other lets you blast away, making it relatively painless to keep up with the swarms of fast-moving attacking robots. Another button tells your robot companion to attack, and you can swap rapidly between them at any time. Each bot has its own special attack you can use to even the odds, as well. The bots are designed to be crucial to your success -- if you forget about utilizing those special attacks, you'll likely end up in big trouble.