Humanoid Robots

Interview with Guillem Alenyà – discussing assistive robotics, human-robot interaction, and more


Guillem Alenyà's research activities include assistive robotics, robot adaptation, human-robot interaction, and the grasping of deformable objects. We spoke about some of the projects he is involved in and his plans for future work. On SOCRATES, he told us: "The SOCRATES project is about the quality of interaction between robots and users, and our role is focused on adapting the robot behaviour to user needs. We have concentrated on a very nice use case: cognitive training of patients with mild dementia. We are working with a day-care facility in Barcelona and asked if we could provide some technological help for the caregivers."

CES 2021: LG's press conference featured a virtual person presenting

USATODAY - Tech Top Stories

Typically the presenters at a CES press conference don't get a lot of attention. Wearing a pink hooded sweatshirt with the phrase "Stay punk forever," Reah Keem was among presenters highlighting some of the offerings from LG, ranging from appliances to personal technology. LG describes her as a "virtual composer and DJ made even more human through deep learning technology." Keem was there to introduce the LG CLOi robot, which can disinfect high-traffic areas using ultraviolet light. You can watch Reah make her debut during LG's press conference Monday morning, at roughly the 22-minute mark.

Watch a Robot Dog Learn How to Deftly Fend Off a Human


Study hard enough, kids, and maybe one day you'll grow up to be a professional robot fighter. A few years ago, Boston Dynamics set the standard for the field by having people wielding hockey sticks try to keep Spot the quadrupedal robot from opening a door. Previously, in 2015, the far-out federal research agency Darpa hosted a challenge in which it forced clumsy humanoid robots to embarrass themselves on an obstacle course way outside the machines' league. And now, behold: The makers of the Jueying robot dog have taught it a fascinating way to fend off a human antagonizer who kicks it over or pushes it with a stick. A team of researchers from China's Zhejiang University--where the Jueying's hardware was also developed--and the University of Edinburgh didn't teach the Jueying how to recover after an assault, so much as they let the robot figure it out.

These Were Our Favorite Tech Stories ...


This time last year we were commemorating the end of a decade and looking ahead to the next one. Enter the year that felt like a decade all by itself: 2020. News written in January, the before-times, feels hopelessly out of touch with all that came after. Stories published in the early days of the pandemic are, for the most part, similarly naive. The year's news cycle was swift and brutal, ping-ponging from pandemic to extreme social and political tension, whipsawing economies, and natural disasters. Hope. Despair. Loneliness. Grief. Grit. More hope. Another lockdown. It's been a hell of a year.

Though 2020 was dominated by big, hairy societal change, science and technology took significant steps forward. Researchers singularly focused on the pandemic and collaborated on solutions to a degree never before seen. New technologies converged to deliver vaccines in record time. The dark side of tech, from biased algorithms to the threat of omnipresent surveillance and corporate control of artificial intelligence, continued to rear its head. Meanwhile, AI showed uncanny command of language, joined Reddit threads, and made inroads into some of science's grandest challenges. Mars rockets flew for the first time, and a private company delivered astronauts to the International Space Station. Deprived of night life, concerts, and festivals, millions traveled to virtual worlds instead. Anonymous jet packs flew over LA. Mysterious monoliths appeared and disappeared worldwide. It was all, you know, very 2020.

For this year's (in-no-way-all-encompassing) list of fascinating stories in tech and science, we tried to select those that weren't totally dated by the news, but rose above it in some way. So, without further ado: This year's picks.

How Science Beat the Virus (Ed Yong | The Atlantic)
"Much like famous initiatives such as the Manhattan Project and the Apollo program, epidemics focus the energies of large groups of scientists. …But 'nothing in history was even close to the level of pivoting that's happening right now,' Madhukar Pai of McGill University told me. …No other disease has been scrutinized so intensely, by so much combined intellect, in so brief a time."

'It Will Change Everything': DeepMind's AI Makes Gigantic Leap in Solving Protein Structures (Ewen Callaway | Nature)
"In some cases, AlphaFold's structure predictions were indistinguishable from those determined using 'gold standard' experimental methods such as X-ray crystallography and, in recent years, cryo-electron microscopy (cryo-EM). AlphaFold might not obviate the need for these laborious and expensive methods—yet—say scientists, but the AI will make it possible to study living things in new ways."

OpenAI's Latest Breakthrough Is Astonishingly Powerful, But Still Fighting Its Flaws (James Vincent | The Verge)
"What makes GPT-3 amazing, they say, is not that it can tell you that the capital of Paraguay is Asunción (it is) or that 466 times 23.5 is 10,987 (it's not), but that it's capable of answering both questions and many more beside simply because it was trained on more data for longer than other programs. If there's one thing we know that the world is creating more and more of, it's data and computing power, which means GPT-3's descendants are only going to get more clever."

Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try? (Will Douglas Heaven | MIT Technology Review)
"A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. …So why is AGI controversial? Why does it matter? And is it a reckless, misleading dream—or the ultimate goal?"

The Dark Side of Big Tech's Funding for AI Research (Tom Simonite | Wired)
"Timnit Gebru's exit from Google is a powerful reminder of how thoroughly companies dominate the field, with the biggest computers and the most resources. …[Meredith] Whittaker of AI Now says properly probing the societal effects of AI is fundamentally incompatible with corporate labs. 'That kind of research that looks at the power and politics of AI is and must be inherently adversarial to the firms that are profiting from this technology.'"

We're Not Prepared for the End of Moore's Law (David Rotman | MIT Technology Review)
"Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we're always going to want more computing power."

Inside the Race to Build the Best Quantum Computer on Earth (Gideon Lichfield | MIT Technology Review)
"Regardless of whether you agree with Google's position [on 'quantum supremacy'] or IBM's, the next goal is clear, Oliver says: to build a quantum computer that can do something useful. …The trouble is that it's nearly impossible to predict what the first useful task will be, or how big a computer will be needed to perform it."

The Secretive Company That Might End Privacy as We Know It (Kashmir Hill | The New York Times)
"Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable—and his or her home address would be only a few clicks away. It would herald the end of public anonymity."

Wrongfully Accused by an Algorithm (Kashmir Hill | The New York Times)
"Mr. Williams knew that he had not committed the crime in question. What he could not have known, as he sat in the interrogation room, is that his case may be the first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm, according to experts on technology and the law."

Predictive Policing Algorithms Are Racist. They Need to Be Dismantled. (Will Douglas Heaven | MIT Technology Review)
"A number of studies have shown that these tools perpetuate systemic racism, and yet we still know very little about how they work, who is using them, and for what purpose. All of this needs to change before a proper reckoning can take place. Luckily, the tide may be turning."

The Panopticon Is Already Here (Ross Andersen | The Atlantic)
"Artificial intelligence has applications in nearly every human domain, from the instant translation of spoken language to early viral-outbreak detection. But Xi [Jinping] also wants to use AI's awesome analytical powers to push China to the cutting edge of surveillance. He wants to build an all-seeing digital system of social control, patrolled by precog algorithms that identify potential dissenters in real time."

The Case For Cities That Aren't Dystopian Surveillance States (Cory Doctorow | The Guardian)
"Imagine a human-centered smart city that knows everything it can about things. It knows how many seats are free on every bus, it knows how busy every road is, it knows where there are short-hire bikes available and where there are potholes. …What it doesn't know is anything about individuals in the city."

The Modern World Has Finally Become Too Complex for Any of Us to Understand (Tim Maughan | OneZero)
"One of the dominant themes of the last few years is that nothing makes sense. …I am here to tell you that the reason so much of the world seems incomprehensible is that it is incomprehensible. From social media to the global economy to supply chains, our lives rest precariously on systems that have become so complex, and we have yielded so much of it to technologies and autonomous actors that no one totally comprehends it all."

The Conscience of Silicon Valley (Zach Baron | GQ)
"What I really hoped to do, I said, was to talk about the future and how to live in it. This year feels like a crossroads; I do not need to explain what I mean by this. …I want to destroy my computer, through which I now work and 'have drinks' and stare at blurry simulations of my parents sometimes; I want to kneel down and pray to it like a god. I want someone—I want Jaron Lanier—to tell me where we're going, and whether it's going to be okay when we get there. Lanier just nodded. All right, then."

Yes to Tech Optimism. And Pessimism. (Shira Ovide | The New York Times)
"Technology is not something that exists in a bubble; it is a phenomenon that changes how we live or how our world works in ways that help and hurt. That calls for more humility and bridges across the optimism-pessimism divide from people who make technology, those of us who write about it, government officials and the public. We need to think on the bright side. And we need to consider the horribles."

How Afrofuturism Can Help the World Mend (C. Brandon Ogbunu | Wired)
"…[W. E. B. DuBois'] 'The Comet' helped lay the foundation for a paradigm known as Afrofuturism. A century later, as a comet carrying disease and social unrest has upended the world, Afrofuturism may be more relevant than ever. Its vision can help guide us out of the rubble, and help us to consider universes of better alternatives."

Wikipedia Is the Last Best Place on the Internet (Richard Cooke | Wired)
"More than an encyclopedia, Wikipedia has become a community, a library, a constitution, an experiment, a political manifesto—the closest thing there is to an online public square. It is one of the few remaining places that retains the faintly utopian glow of the early World Wide Web."

Can Genetic Engineering Bring Back the American Chestnut? (Gabriel Popkin | The New York Times Magazine)
"The geneticists' research forces conservationists to confront, in a new and sometimes discomfiting way, the prospect that repairing the natural world does not necessarily mean returning to an unblemished Eden. It may instead mean embracing a role that we've already assumed: engineers of everything, including nature."

At the Limits of Thought (David C. Krakauer | Aeon)
"A schism is emerging in the scientific enterprise. On the one side is the human mind, the source of every story, theory, and explanation that our species holds dear. On the other stand the machines, whose algorithms possess astonishing predictive power but whose inner workings remain radically opaque to human observers."

Is the Internet Conscious? If It Were, How Would We Know? (Meghan O'Gieblyn | Wired)
"Does the internet behave like a creature with an internal life? Does it manifest the fruits of consciousness? There are certainly moments when it seems to. Google can anticipate what you're going to type before you fully articulate it to yourself. Facebook ads can intuit that a woman is pregnant before she tells her family and friends. It is easy, in such moments, to conclude that you're in the presence of another mind—though given the human tendency to anthropomorphize, we should be wary of quick conclusions."

The Internet Is an Amnesia Machine (Simon Pitt | OneZero)
"There was a time when I didn't know what a Baby Yoda was. Then there was a time I couldn't go online without reading about Baby Yoda. And now, Baby Yoda is a distant, shrugging memory. Soon there will be a generation of people who missed the whole thing and for whom Baby Yoda is as meaningless as it was for me a year ago."

Digital Pregnancy Tests Are Almost as Powerful as the Original IBM PC (Tom Warren | The Verge)
"Each test, which costs less than $5, includes a processor, RAM, a button cell battery, and a tiny LCD screen to display the result. …Foone speculates that this device is 'probably faster at number crunching and basic I/O than the CPU used in the original IBM PC.' IBM's original PC was based on Intel's 8088 microprocessor, an 8-bit chip that operated at 5MHz. The difference here is that this is a pregnancy test you pee on and then throw away."

The Party Goes on in Massive Online Worlds (Cecilia D'Anastasio | Wired)
"We're more stand-outside types than the types to cast a flashy glamour spell and chat up the nearest cat girl. But, hey, it's Final Fantasy XIV online, and where my body sat in New York, the epicenter of America's Covid-19 outbreak, there certainly weren't any parties."

The Facebook Groups Where People Pretend the Pandemic Isn't Happening (Kaitlyn Tiffany | The Atlantic)
"Losing track of a friend in a packed bar or screaming to be heard over a live band is not something that's happening much in the real world at the moment, but it happens all the time in the 2,100-person Facebook group 'a group where we all pretend we're in the same venue.' So does losing shoes and Juul pods, and shouting matches over which bands are the saddest, and therefore the greatest."

Did You Fly a Jetpack Over Los Angeles This Weekend? Because the FBI Is Looking for You (Tom McKay | Gizmodo)
"Did you fly a jetpack over Los Angeles at approximately 3,000 feet on Sunday? Some kind of tiny helicopter? Maybe a lawn chair with balloons tied to it? If the answer to any of the above questions is 'yes,' you should probably lay low for a while (by which I mean cool it on the single-occupant flying machine). That's because passing airline pilots spotted you, and now it's this whole thing with the FBI and the Federal Aviation Administration, both of which are investigating."

Image Credit: Thomas Kinto / Unsplash

Nikolas Martelaro's talk on 11 December – Remote user research for human-robot interaction


This Friday, the 11th of December, Nikolas Martelaro (Assistant Professor at Carnegie Mellon's Human-Computer Interaction Institute) will give an online seminar on ways robot design teams can do remote user research now (in these COVID-19 times) and in the future. Martelaro's lab focuses on augmenting designers' capabilities through the use of new technology and design methods. His interest in developing new ways to support designers stems from his interest in creating interactive and intelligent products. He blends a background in product design methods, interaction design, human-robot interaction, and mechatronic engineering to build tools and methods that allow designers to understand people better and to create more human-centered products.

Why Did the Robot Cross the Road? A User Study of Explanation in Human-Robot Interaction

This work documents a pilot user study evaluating the effectiveness of contrastive, causal, and example explanations in supporting human understanding of AI in a hypothetical, commonplace human-robot interaction (HRI) scenario. In doing so, it situates explainable AI (XAI) in the context of the social sciences and suggests that HRI explanations are improved when informed by the social sciences.
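To make the three explanation styles concrete, here is a minimal illustrative sketch. The navigation scenario, wording, and function are invented for illustration and are not drawn from the study itself; only the three style names come from the abstract.

```python
# Hypothetical sketch: template the three explanation styles the study
# compares (contrastive, causal, example-based) for a robot decision.
# The scenario and phrasing here are invented, not the study's.
def explain(style,
            action="took the left corridor",
            alternative="the right corridor",
            cause="the right corridor was blocked",
            past_case="yesterday's delivery run"):
    if style == "contrastive":
        # Contrastive: why this action rather than the alternative.
        return f"The robot {action} rather than {alternative} because {cause}."
    if style == "causal":
        # Causal: the cause of the action, with no explicit contrast.
        return f"The robot {action} because {cause}."
    if style == "example":
        # Example-based: appeal to a similar past case.
        return f"The robot {action}, as it did on {past_case} in a similar situation."
    raise ValueError(f"unknown explanation style: {style}")

print(explain("contrastive"))
print(explain("causal"))
print(explain("example"))
```

The contrastive form differs from the causal form only by naming the foil (the road not taken), which is the distinction the XAI literature borrows from the social sciences.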

Towards Abstract Relational Learning in Human Robot Interaction

Humans have a rich representation of the entities in their environment. Entities are described by their attributes, and entities that share attributes are often semantically related. For example, if two books have "Natural Language Processing" as the value of their `title' attribute, we can expect that their `topic' attribute will also be equal, namely, "NLP". Humans tend to generalize such observations and infer sufficient conditions under which the `topic' attribute of any entity is "NLP". If robots are to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way. The result is a contextualized cognitive agent that can adapt its understanding, where context provides the sufficient conditions for a correct understanding. In this work, we address the problem of how to obtain these representations through human-robot interaction. We integrate visual perception and natural language input to incrementally build a semantic model of the world, and then use inductive reasoning to infer logical rules that capture generic semantic relations that hold in this model. These relations can be used to enrich the human-robot interaction, to populate a knowledge base with inferred facts, or to resolve uncertainty in the robot's sensory inputs.
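The book/title/topic example above can be sketched as a toy rule-induction step. This is an assumed illustration of the general idea, not the authors' system: from observed attribute-value facts, propose rules of the form "title = v implies topic = w" that hold without exception in the current model.

```python
# Toy sketch (assumed, not the paper's implementation): induce simple
# attribute-implication rules from an incrementally built semantic model.
from collections import defaultdict

# Observed entities: attribute -> value maps, e.g. accumulated from
# visual perception and natural language input.
entities = {
    "book1": {"title": "Natural Language Processing", "topic": "NLP"},
    "book2": {"title": "Natural Language Processing", "topic": "NLP"},
    "book3": {"title": "Robot Kinematics", "topic": "robotics"},
}

def induce_rules(entities, premise_attr, conclusion_attr):
    """Propose rules 'premise_attr = v => conclusion_attr = w' that hold
    for every observed entity whose premise attribute equals v."""
    support = defaultdict(set)
    for attrs in entities.values():
        if premise_attr in attrs and conclusion_attr in attrs:
            support[attrs[premise_attr]].add(attrs[conclusion_attr])
    # Keep only premises whose conclusion is unambiguous in the model.
    return {v: ws.pop() for v, ws in support.items() if len(ws) == 1}

rules = induce_rules(entities, "title", "topic")
# rules maps "Natural Language Processing" -> "NLP"
# and "Robot Kinematics" -> "robotics"
```

An inferred rule like this can then populate a knowledge base (e.g. assigning a topic to a newly perceived book from its title alone), and would be retracted if a later observation contradicts it.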

Designing Human-Robot Coexistence Space

As human-robot interactions become ubiquitous, the environment surrounding these interactions will have a significant impact on the safety and comfort of the human and on the effectiveness and efficiency of the robot. Although most robots are designed to work in spaces created for humans, many environments, such as living rooms and offices, can and should be redesigned to enhance human-robot collaboration and interaction. This work uses an autonomous wheelchair as an example and investigates the computational design of human-robot coexistence spaces. Given the room size and the objects $O$ in the room, the proposed framework computes optimal layouts of $O$ that satisfy both human preferences and the navigation constraints of the wheelchair. The key enabling technique is a motion planner that can efficiently evaluate hundreds of similar motion planning problems. Our implementation shows that the proposed framework can produce a design in around three to five minutes on average, compared to 10 to 20 minutes without the proposed motion planner. Our results also show that the proposed method produces reasonable designs even for tight spaces and for users with different preferences.
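The core loop the abstract describes, scoring candidate object layouts by human preference while checking wheelchair navigability with a planner, can be illustrated with a deliberately simplified sketch. Everything below (the grid room, BFS in place of a real motion planner, and the toy preference function) is an assumption for illustration, not the paper's framework.

```python
# Illustrative sketch only: pick the best furniture layout by combining
# a human-preference cost with a wheelchair-navigability check.
# A BFS on a coarse grid stands in for the paper's motion planner.
from collections import deque
from itertools import combinations

W, H = 6, 4                   # room modeled as a 6x4 grid of cells
start, goal = (0, 0), (5, 3)  # wheelchair start cell and target cell

def navigable(obstacles):
    """Breadth-first search: can the wheelchair reach the goal?"""
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            return True
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < W and 0 <= nxt[1] < H
                    and nxt not in obstacles and nxt not in seen):
                seen.add(nxt)
                queue.append(nxt)
    return False

def preference_cost(layout):
    """Toy preference: the user likes objects along the top wall."""
    return sum(H - 1 - y for _, y in layout)

# Place three objects among five candidate cells; infeasible (blocking)
# layouts incur a large penalty so feasible ones always win.
candidate_cells = [(2, 1), (2, 2), (3, 1), (3, 2), (4, 2)]
best = min(
    (set(c) for c in combinations(candidate_cells, 3)),
    key=lambda layout: preference_cost(layout)
                       + (0 if navigable(layout) else 1000),
)
```

The paper's speed-up comes from the planner amortizing work across the hundreds of near-identical planning queries that arise as the layout search perturbs object positions; in this sketch each layout is simply re-planned from scratch.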

Disney Made a Skinless Robot That Can Realistically Stare Directly Into Your Soul


One of the obvious giveaways that you're interacting with a robot is its blank, dead-eyed stare. The eyes don't connect with yours the way they would if they were, you know, human. A research team at Disney is trying to fix that using subtle head motions and eye movements that make the robot seem more lifelike--despite it lacking skin and looking like pure, unfiltered nightmare material. The robot, which mostly consists of a static torso (wearing a stylish dress shirt) supporting a highly animated and articulated head, was developed by engineers at Disney's Research division, Walt Disney Imagineering, and robotics researchers from the University of Illinois at Urbana-Champaign and the California Institute of Technology. It seems like a lot of people for an animatronic that just barely resembles a human being, but despite the lack of muscles and skin, it represents an impressive leap forward when it comes to making a human-like robot that could potentially fool a real person.