Lisbon


Talking Robotics' seminars of January – April 2021 (with videos and even a musical summary!)

Robohub

Talking Robotics is a series of virtual seminars about Robotics and its interaction with other relevant fields, such as Artificial Intelligence, Machine Learning, Design Research, and Human-Robot Interaction. The series aims to promote reflection, dialogue, and a place to network. In this compilation of seminars, we bring you 7 talks (and a half?) from current roboticists for your enjoyment. Filipa Correia received an M.Sc. in Computer Science from the University of Lisbon, Portugal, in 2015. She is currently a junior researcher at GAIPSLab and is pursuing a Ph.D. on Human-Robot Interaction at the University of Lisbon, Portugal.


#324: Embodied Interactions: from Robotics to Dance, with Kim Baraka

Robohub

In this episode, our interviewer Lauren Klein speaks with Kim Baraka about his PhD research on enabling robots to engage in social interactions, including interactions with children with Autism Spectrum Disorder. Baraka discusses how robots can plan their actions across multiple modalities when interacting with humans, and how models from psychology can inform this process. He also tells us about his passion for dance, and how dance may serve as a testbed for embodied intelligence within Human-Robot Interaction. Kim Baraka is a postdoctoral researcher in the Socially Intelligent Machines Lab at the University of Texas at Austin, and an incoming Assistant Professor in the Department of Computer Science at Vrije Universiteit Amsterdam, where he will be part of the Social Artificial Intelligence Group. Baraka recently graduated with a dual PhD in Robotics from Carnegie Mellon University (CMU) in Pittsburgh, USA, and the Instituto Superior Técnico (IST) in Lisbon, Portugal.


Machine-learning software competes with human experts to optimise organic reactions

#artificialintelligence

A free software tool that can find the best conditions for organic synthesis reactions often does as well as expert chemists – somewhat to the surprise of the researchers. The software, called LabMate.ML, suggests a random set of initial conditions – such as the temperature, the amount of solvent and the reaction time – for a specific reaction, with the aim of optimising its yield. After those initial reactions are carried out by a human chemist, the resulting yields are measured with nuclear magnetic resonance and infrared spectroscopy, digitised into binary code and fed back into the software. LabMate.ML then uses a machine-learning algorithm to interpret the yields and recommend further sets of conditions to try. Researcher Tiago Rodrigues of the University of Lisbon says LabMate.ML usually takes between 10 and 20 iterations to find the greatest yield, while the number of initial reactions varies between five and 10, depending on how many conditions are being optimised.
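
The workflow described above is essentially an active-learning loop: propose conditions, have a chemist measure yields, refit a model, and propose again. Below is a minimal sketch of such a loop, assuming a random-forest surrogate model and a toy stand-in for the human measurement step; none of these choices are LabMate.ML's actual implementation.

```python
# Hypothetical sketch of the propose-measure-refit loop described above.
# The search space, model choice, and yield function are illustrative
# assumptions, not LabMate.ML's internals.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def sample_conditions(n):
    """Draw n random condition vectors: temperature (°C), solvent (mL), time (h)."""
    return np.column_stack([
        rng.uniform(20, 120, n),   # temperature
        rng.uniform(0.5, 10, n),   # amount of solvent
        rng.uniform(0.25, 24, n),  # reaction time
    ])

def run_reaction_and_measure_yield(c):
    """Stand-in for the human-in-the-loop step: in reality a chemist runs the
    reaction and the yield is read via NMR/IR. Here, a toy response surface."""
    t, v, h = c
    return 100 * np.exp(-((t - 80) ** 2) / 800 - ((v - 4) ** 2) / 8 - ((h - 6) ** 2) / 50)

# 1. Start from a random set of initial conditions (5-10 reactions, per the article).
conditions = sample_conditions(8)
yields = np.array([run_reaction_and_measure_yield(c) for c in conditions])

# 2. Refit a surrogate model on the measured yields and recommend the most
#    promising untested conditions; the article reports ~10-20 such iterations.
for _ in range(15):
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(conditions, yields)
    candidates = sample_conditions(1000)
    best = candidates[np.argmax(model.predict(candidates))]
    conditions = np.vstack([conditions, best])
    yields = np.append(yields, run_reaction_and_measure_yield(best))

print("best conditions found:", conditions[np.argmax(yields)], "yield:", yields.max())
```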


Brutalist AI-generated buildings feature in hypnotic Moullinex music videos

#artificialintelligence

Lisbon musician Moullinex has shared with Dezeen an exclusive short music video showing an endlessly changing landscape of brutalist buildings drawn up by a generative design algorithm. Moullinex, whose real name is Luís Clara Gomes, created two videos that use artificial intelligence (AI) to imagine a series of brutalist buildings. The first video, which the artist shared on his Facebook page, is based on 200 photographs of modernist, concrete buildings. These images acted as the dataset used to train a generative network via the machine-learning tool StyleGAN2, producing a string of entirely non-existent buildings with similar characteristics. "It's akin to showing thousands of pictures of a cat to a child and then asking them to draw a brand new cat based on what they now know are cat-like characteristics," Gomes told Dezeen.
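
The "endlessly changing landscape" effect in videos like these typically comes from walking through the trained generator's latent space and rendering each interpolation step as a video frame. Here is a minimal sketch of that idea, assuming a generator trained with NVIDIA's stylegan2-ada-pytorch code and saved as a hypothetical brutalist-stylegan2.pkl checkpoint; Gomes's actual rendering pipeline is not described in the article.

```python
# Hypothetical latent-space walk over a trained StyleGAN2 generator.
# Checkpoint name and loading convention follow NVIDIA's stylegan2-ada-pytorch
# repo (an assumption; requires that codebase on the path and a CUDA GPU).
import pickle
import torch

with open('brutalist-stylegan2.pkl', 'rb') as f:   # hypothetical checkpoint
    G = pickle.load(f)['G_ema'].cuda().eval()      # exponential-moving-average generator

def lerp(z0, z1, steps):
    """Yield linear interpolations between two latent codes."""
    for i in range(steps):
        a = i / (steps - 1)
        yield (1 - a) * z0 + a * z1

frames = []
z_prev = torch.randn(1, G.z_dim).cuda()
with torch.no_grad():
    for _ in range(10):                            # 10 segments of the walk
        z_next = torch.randn(1, G.z_dim).cuda()
        for z in lerp(z_prev, z_next, steps=30):   # 30 frames per segment
            img = G(z, None)                       # (1, 3, H, W) in [-1, 1]
            img = (img.clamp(-1, 1) + 1) * 127.5   # rescale to [0, 255]
            frames.append(img[0].permute(1, 2, 0).byte().cpu().numpy())
        z_prev = z_next
# `frames` can then be written out as video, e.g. imageio.mimsave('walk.mp4', frames)
```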


How I became a Software Developer during the pandemic without a degree or a bootcamp

#artificialintelligence

In 2018 I was depressed and unmotivated. I thought of myself as a failure, believed I was too dumb to finish my degree or learn anything at all, had no direction in life, and just wanted everything to be over. Two years later, one spent working abroad and another dedicated to studying, I have a completely different perspective on myself, and I just started my new, exciting developer job on Monday. It took a lot of courage (and many arguments to convince my parents) to leave my university after three years of study to accept a job in Lisbon without knowing anyone or the language, but it was a wonderful experience that helped me find myself. It took even more grit and determination to leave Lisbon and start studying again, but I did it because I knew my dream was to become a programmer. I have no expertise in psychology, and the best advice I have if you are in a dark place is to seek professional help; but I know what it feels like to be lost, and I want to help anyone who shares my dream by writing this article, offering actionable advice on how to achieve a career in software development.


Playing Space Invaders Blind: RL & Cross-Modality Transfer

#artificialintelligence

In the 1975 film Tommy, the "deaf, dumb, and blind" protagonist overcomes substantial sensory limitations to capture a pinball championship. While it's difficult to imagine playing a video game without being able to see the screen, that was the challenge taken up by AI researchers from INESC-ID and Instituto Superior Técnico in Lisbon and Pittsburgh's Carnegie Mellon University. Using cross-modality transfer techniques and reinforcement learning (RL), the researchers produced an agent that can play video games with only the game audio to guide it. In some respects, an RL policy that was learned over both image and sound inputs but still succeeds when only sound is available mimics how humans naturally exploit whatever sensory data is at hand -- we use touch and hearing, for example, to navigate through a dark room. The new cross-modality transfer RL approach explores how latent representations built by advanced variational autoencoder (VAE) methods might enable RL agents to learn and transfer policies over different input modalities.
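
Concretely, the idea is to encode each modality into a shared latent space and learn the policy over that latent, so that an agent trained with both images and sound can still act when only sound is available. The sketch below illustrates this under assumed architecture sizes and a deliberately simplified setup; it is not the paper's actual model or training objective.

```python
# Illustrative sketch of cross-modality policy transfer via a shared latent
# space. All dimensions and layers are assumptions for demonstration only.
import torch
import torch.nn as nn

LATENT = 32

class Encoder(nn.Module):
    """Maps one modality (image or audio features) to a latent Gaussian."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

image_enc = Encoder(in_dim=1024)   # e.g. flattened game-frame features
audio_enc = Encoder(in_dim=128)    # e.g. spectrogram features
policy = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 6))  # 6 Atari actions

def act(obs, encoder):
    """Encode an observation from either modality, then act on the shared latent."""
    mu, _ = encoder(obs)           # use the posterior mean at decision time
    return policy(mu).argmax(dim=-1)

# Training time: both modalities are available, and the VAE is trained so that
# image and audio encodings of the same game state land near each other in
# latent space (e.g. via shared decoders plus KL terms).
# Test time ("playing blind"): only audio is available, but the same policy
# still applies because it was learned over the shared latent.
frame = torch.randn(1, 1024)
sound = torch.randn(1, 128)
action_sighted = act(frame, image_enc)
action_blind = act(sound, audio_enc)
```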


Neanderthals ate seafood including crabs, clams, oysters and dolphins

Daily Mail - Science & tech

Neanderthals fed regularly on mussels, fish and other omega-3-rich marine life including seals, which likely impacted their cognitive abilities, a new study claims. Archaeological digs along the Portuguese coast reveal evidence that our cavemen ancestors had as much fondness for seafood as modern humans do today. Both Neanderthals and early Homo sapiens tucked into 'surf and turf', from molluscs, crabs, fish, waterfowl and dolphins to horse, goat and red deer, as well as pine nuts. The findings are based on ancient remains in the cave of Figueira Brava, Portugal, dating to roughly 106,000-86,000 years ago – when Neanderthals settled in Europe. Figueira Brava is 18.6 miles (30km) south of Lisbon on the slopes of the Serra da Arrábida, a south-facing natural park about a 45-minute drive from the city. 'Pretty much every potential source of food that existed in the environment, they [Neanderthals] exploited and used,' said Professor João Zilhão, an expert in palaeolithic archaeology at the University of Barcelona.


#eri: Fostering Creativity: RSS Pioneers and the YOLO Robot, with Patrícia Alves-Oliveira

Robohub

Patrícia Alves-Oliveira is an incoming postdoctoral researcher working with Professor Maya Cakmak in the Human-Centered Robotics Lab at the University of Washington. She recently completed her PhD in Human-Robot Interaction at the Lisbon University Institute, supervised by Professors Ana Paiva and Patrícia Arriaga, and also worked as a visiting scholar in the Human-Robot Collaboration and Companionship Laboratory at Cornell University, supervised by Professor Guy Hoffman. Patrícia's research focuses on the use of social robots as intervention tools to enrich creative behaviors in children. Her overarching goal as a Human-Robot Interaction researcher is to investigate how and where robots can be used to empower innate human qualities and experiences.


Zendesk Invests in Tymeshift to Improve WFM Solutions

#artificialintelligence

Zendesk, a leading customer support ticket system and sales CRM platform, has invested in Tymeshift. Tymeshift is an Omnichannel Workforce Management (WFM) tool made exclusively for Zendesk. Tymeshift will use the new funding to push for growth in new markets. At the time of this investment, David Birchmier, CEO of Tymeshift, shared his vision for the company's future. David said, "We're proud of the organic growth we've achieved and are excited to leverage Zendesk's investment to accelerate our product innovation pace and continue to grow our teams in Fairfield, Iowa, Lisbon, Portugal, and Novi Sad, Serbia. In short, we're focused on making our WFM solution even more comprehensive."