People’s interactions with machines, from robots that throw tantrums when they lose a colour-matching game against a human opponent to the bionic limbs that could give us extra abilities, are not just revealing more about how our brains are wired – they are also altering them.
Emily Cross is a professor of social robotics at the University of Glasgow in Scotland who is examining the nature of human-robot relationships and what they can tell us about human cognition.
She defines social robots as machines designed to engage with humans on a social level – from online chatbots to machines with a physical presence, for example, those that check people into hotel rooms.
According to Prof. Cross, robots make excellent tools for shedding light on how our brains work: unlike humans, whose behaviour varies, robots can be programmed to perform and repeat specific behaviours exactly.
‘The central tenets to my questions are, can we use human-robot interaction to better understand the flexibility and fundamental mechanisms of social cognition and the human brain,’ she said.
Brain imaging shows that a sad, happy or neutral robotic expression will engage the same parts of the brain as a human face with similar expressions.
Through their project called Social Robots, Prof. Cross and her team are using neural decoding techniques to probe the extent to which human feelings towards a robot change depending on how it behaves.
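The article does not describe the team's actual analysis pipeline, but neural decoding of this kind typically means training a classifier to read an experimental condition out of patterns of brain activity. A minimal sketch on synthetic data (all numbers and the nearest-mean classifier here are illustrative assumptions, not the study's method) might look like:

```python
# Hypothetical neural-decoding sketch: can a simple classifier tell which
# condition (e.g. viewing a robot face vs a human face) a pattern of
# "voxel" activity came from? Data below is synthetic, not from the study.
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_voxels = 40, 50

# Two conditions with slightly offset mean activity plus noise.
robot = rng.normal(0.0, 1.0, (n_per_class, n_voxels))
human = rng.normal(0.8, 1.0, (n_per_class, n_voxels))
patterns = np.vstack([robot, human])
labels = np.repeat([0, 1], n_per_class)

# Split each class into train/test halves.
train_idx = np.r_[0:20, 40:60]
test_idx = np.r_[20:40, 60:80]

# Nearest-mean decoding: assign each test pattern to the closer class mean.
means = np.stack([patterns[train_idx][labels[train_idx] == c].mean(axis=0)
                  for c in (0, 1)])
dists = np.linalg.norm(patterns[test_idx][:, None, :] - means[None, :, :], axis=2)
preds = dists.argmin(axis=1)
accuracy = (preds == labels[test_idx]).mean()
print(accuracy > 0.5)  # decoding above chance on this synthetic data
```

Above-chance accuracy is what licenses claims like "a robot face engages the same representations as a human face": if the decoder cannot distinguish the two conditions, their activity patterns overlap.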
When the robots used in the project lose a game, they alternate between throwing tantrums and appearing dejected. ‘So far, people actually find it really funny when the robot gets angry,’ she said. ‘But people do respond to them quite strongly and that’s really interesting to see.’
Having robots as colleagues has been shown to affect humans in complex ways. Researchers at the University of Washington found that when soldiers used robots in bomb disposal, they developed emotional attachments towards them and felt frustration, anger or sadness if their robot was destroyed.
Prof. Cross says that from an evolutionary perspective, this doesn’t make sense. ‘We care about people and perhaps animals that might help us or hurt us,’ she said. ‘But with machines it’s a bit more of a mystery and understanding how far we can push that (to develop social relationships with machines) is a really, really fascinating question.’
It’s important to understand these dynamics since, as she points out, robots are already working as companions in nursing homes or even as tutors in early childhood education. Home care and education are prime areas of social robotics research, with R&D efforts focusing on adults suffering from dementia and young children.
Typically, studies on such groups observe interactions over a relatively short time-span. They rarely exceed what Prof. Cross describes as a ten-hour rule, beyond which study participants tend to get bored of their robotic toys. But her team is looking at how feelings towards robots evolve over time.
As part of the project, the researchers send a palm-sized Cozmo robot home with study participants and instruct them to interact with it every day for a week by playing games or introducing it to their friends and pets. The participants’ brains are imaged at the start and end of that period to track changes.
‘If we’re going to have robots in our home environment, if they’re going to be in our schools teaching our kids across weeks, if not years, if they’re going to be people’s social companions, we want to know a lot more than just what happens after ten hours (of exposure),’ she said.
‘We want to know how people’s social bonds and relationships to robots change across many, many more hours.’
With such technologies set to become a bigger part of our future, other studies are investigating how the brain reacts to a different kind of robot – wearable robotic limbs that augment the body, providing extra abilities.
Wearables could have social and healthcare benefits. A third arm, for instance, could help surgeons carry out procedures more safely without relying on human assistants, let people finish their household chores faster or assist construction workers.
But even as the technology develops apace, Dr Tamar Makin, a neuroscientist at University College London, UK, is exploring what it would take for the brain to accept and operate a robotic appendage as part of the body, through a five-year project called Embodied Tech.
In order to understand how the brain deals with an extra body part, Dr Makin’s team asks participants to wear an additional opposable thumb for a week. Created by a designer named Dani Clode, the thumb is controlled by pressure sensors worn on the big toes.
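The control mapping described above, from toe pressure to thumb movement, can be sketched in a few lines. Everything here is a hypothetical illustration: the function names, pressure ranges and servo angles are assumptions, not the device's actual firmware.

```python
# Hypothetical control mapping for an extra robotic thumb: pressure
# sensors under the big toes drive the thumb's movement. All values and
# names are illustrative; the real device's firmware is not described here.

def toe_pressure_to_angle(pressure, max_pressure=100.0, max_angle=90.0):
    """Map a toe pressure reading (0..max_pressure) linearly to a
    servo angle in degrees, clamped to the valid range."""
    fraction = min(max(pressure / max_pressure, 0.0), 1.0)
    return fraction * max_angle

# In this sketch, each toe controls one degree of freedom of the thumb.
left_toe, right_toe = 25.0, 150.0            # example sensor readings
flexion = toe_pressure_to_angle(left_toe)    # bends the thumb
adduction = toe_pressure_to_angle(right_toe) # pulls it across the palm
print(flexion, adduction)  # 22.5 90.0
```

The clamping matters in any real sensor-to-actuator loop: a reading beyond the calibrated range (like the 150.0 above) should saturate the joint at its limit rather than drive it past a safe angle.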
With the additional thumb, the augmented hand almost has the capabilities of two hands, giving people extra capacity to carry out actions. The question is what effect that has on the brain.
The study is still underway but preliminary results indicate that the presence of an extra thumb alters the brain’s internal map of what the biological hand looks like. Scans show that the brain represents the fingers as collapsing onto each other, away from the thumb and index finger.
This mirrors what happens in conditions such as focal dystonia, in which the brain’s representations of individual fingers begin to merge (for instance, in musicians who overuse their fingers), causing cramp-like pain. The same effect could, in theory, cause pain for the wearer of an extra thumb.
‘One important interim message we have is that there are potential costs, not just benefits, to using augmentation technology,’ said Dr Makin.
She believes that the newness of human augmentation means there are lots of unanswered questions but it’s vital to explore the challenges of wearable robotics in order to fully realise the promises, such as multitasking or safer working conditions.
‘I feel like we have a responsibility to gain a much better understanding of how having good control of an additional body part is going to change the representation of the body parts you already have.’
The research in this article was funded by the European Research Council.