Follow Dr. Sara Abad on LinkedIn or discover more about the research she and her team have been conducting here. A link to the specific paper that our earlier news report and this interview pertain to can be found here.
In a groundbreaking study, Dr. Sara Abad and her team have developed a virtual touch technology that offers a pioneering approach to human-computer interaction. As part of our in-depth exploration of this topic, which we previously highlighted in our news feature, we spoke with Dr. Abad to delve deeper into her research and its implications for the industry.
The following interview is presented unedited to maintain Dr. Abad’s original responses and offer a comprehensive, unfiltered insight into her research on virtual touch technology. This format enables readers to fully appreciate the intricacies and significance of her work in advancing interactive technology solutions.
Acknowledgement From Dr. Sara Abad: Principal Investigator Prof. Helge Wurdemann and Dr Duncan Raiit from University College London; our clinical collaborator Prof. Martin Koltzenburg from the UCL Queen Square Institute of Neurology; and our collaborator Dr Nicolas Herzig from the University of Sussex, an expert in compliant structures. Dr Jialei Shi and Wenlong Gaozhang, who worked on making the system more portable. This research was funded by the UKRI EPSRC (EP/S014039/1), the Royal Society, and the EPSRC-IAA-UCL Therapeutic Acceleration Support (TAS) Fund.
Can you explain the fundamental technology behind the virtual hand-holding system? How are haptic feedback mechanisms used to simulate the sensation of touch?
Our bioinspired touch feedback system is designed to stimulate the four main types of receptors in the skin that are responsible for our sense of touch. These receptors help us perceive textures, edges, directions, and the type of touch we feel, whether it’s a constant or pulsing sensation. Our system can stimulate all areas of the finger, and the intensity can range from below to above the threshold of human sensitivity. By directly stimulating these four receptors, we aim to reduce the amount of training needed for people to understand the touch feedback, as the brain already knows how to process information from these receptors.
The system uses a soft fingertip interface that applies gentle pressure to the skin. This interface has small chambers with membranes that touch the skin. Pressurised air is delivered into these chambers, causing the membranes to expand, which then press on the skin—this is how the system creates the touch sensation.
Our system is the first to stimulate all four mechanoreceptors both individually and simultaneously. This capability opens the door to a range of future applications, including healthcare, teleoperation, and social interactions. For instance, it could enhance touch sensations in virtual reality or even allow us to feel touch during video calls.
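To make the idea of per-receptor stimulation concrete, here is a minimal Python sketch of how pressure commands for each receptor class might be generated and superposed. The frequency ranges are textbook approximations (slowly adapting Merkel and Ruffini endings respond to sustained pressure, while Meissner and Pacinian corpuscles respond to roughly 10–50 Hz flutter and 100–300 Hz vibration); the function names and pressure values are illustrative assumptions, not the published control scheme.

```python
import numpy as np

# Characteristic drive signals for the four mechanoreceptor classes.
# Frequencies are textbook approximations; the real system's control
# scheme is not described here, so this mapping is an assumption.
RECEPTOR_PROFILES = {
    "merkel":   {"kind": "sustained", "freq_hz": 0.0},    # SA1: constant pressure
    "ruffini":  {"kind": "sustained", "freq_hz": 0.0},    # SA2: sustained stretch
    "meissner": {"kind": "vibration", "freq_hz": 30.0},   # RA1: low-frequency flutter
    "pacinian": {"kind": "vibration", "freq_hz": 200.0},  # RA2: high-frequency vibration
}

def pressure_waveform(receptor: str, amplitude_kpa: float,
                      duration_s: float = 1.0, rate_hz: float = 2000.0) -> np.ndarray:
    """Return a pressure command (kPa) targeting one receptor class."""
    profile = RECEPTOR_PROFILES[receptor]
    t = np.arange(0.0, duration_s, 1.0 / rate_hz)
    if profile["kind"] == "sustained":
        return np.full_like(t, amplitude_kpa)
    # Rectified sinusoid: a pneumatic chamber can push on the skin but not pull.
    return amplitude_kpa * 0.5 * (1.0 + np.sin(2.0 * np.pi * profile["freq_hz"] * t))

# Individual stimulation: one receptor class at a time.
merkel_cmd = pressure_waveform("merkel", amplitude_kpa=5.0)

# Simultaneous stimulation: superpose commands for several classes.
combined_cmd = sum(pressure_waveform(r, amplitude_kpa=2.0)
                   for r in ("merkel", "meissner", "pacinian"))
```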
What are the key challenges in accurately replicating the complex sensations of physical touch, such as pressure, texture, and temperature, in a virtual environment?
I’m not a neuroscientist myself, but I collaborate closely with one. So, while I can’t go into all the technical challenges in detail, I can share some insights from my perspective as a researcher in bioinspired robotics. The sense of touch is incredibly complex due to its many different aspects. It’s not just about receiving the right stimuli through the four mechanoreceptors I mentioned earlier. Factors such as temperature, the social context in which the touch occurs, who is doing the touching, and the recipient’s past experiences all play a role in how we perceive touch. While we’ve made significant progress, there’s still a lot of work to do to fully understand and replicate the sense of touch.
How does your system ensure real-time responsiveness, particularly in long-distance interactions, where latency could impact the authenticity of the virtual touch experience?
We haven’t reached that stage in the research yet, so I’m unable to comment on it.
What advancements in materials science or sensor technology have enabled the development of more sensitive and realistic haptic feedback devices in your work?
3D printing and 3D scanning have been crucial in our work. We used 3D scans of a real finger to shape the fingertip interface and improve its contact with the finger. 3D printing enables us to create complex moulds that match the natural curves of a human finger, and it does so quickly. Using 3D-printed moulds makes it easier to produce fingertip interfaces from soft materials like silicone.
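As a rough illustration of that scan-to-mould workflow, the sketch below uses the open-source trimesh library to subtract a scanned finger mesh from a solid block, leaving a finger-shaped mould cavity. The file names are hypothetical and this is not the team’s actual pipeline; the boolean subtraction also requires an external engine such as manifold3d or Blender to be installed.

```python
import trimesh

# Load a 3D scan of a real finger (file name is hypothetical).
finger = trimesh.load("finger_scan.stl")

# Build a mould block slightly larger than the finger's bounding box.
extents = finger.bounding_box.extents + 4.0          # ~2 mm wall on each side
block = trimesh.creation.box(extents=extents)
block.apply_translation(finger.bounding_box.centroid)

# Subtract the finger from the block to leave a finger-shaped cavity
# for silicone casting (requires a boolean backend, e.g. manifold3d).
mould = block.difference(finger)
mould.export("fingertip_mould.stl")
```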
How scalable is this technology in terms of different use cases, such as virtual reality applications, healthcare, or remote communication? Are there any limitations in adapting the system to broader applications?
Our system stimulates the four main touch receptors in the skin, so it has a wide range of potential uses. Depending on the application, we may mainly need to modify the fingertip interface. For example, in healthcare, we’re exploring how the system could be used as a clinical tool to measure the loss of touch sensitivity in the fingers. Many conditions can cause a reduced sense of touch, and our system, which can deliver stimuli both above and below human sensitivity thresholds, provides more accurate, operator-independent data. This helps us track how touch sensitivity changes over time. In this case, the fingertip interface only needs to deliver stimuli to the sides and front of the finger, leaving other areas unstimulated.
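For readers curious how stimulation above and below threshold can yield operator-independent measurements, below is a minimal sketch of a standard one-up/two-down psychophysical staircase, a generic method from the psychophysics literature rather than the team’s published protocol. The `present_stimulus` callback is a hypothetical placeholder for driving the pneumatic interface at a given pressure and recording whether the participant felt it.

```python
def estimate_threshold(present_stimulus, start_kpa: float = 4.0,
                       step_kpa: float = 0.5, max_reversals: int = 8) -> float:
    """One-up/two-down staircase: intensity drops after two consecutive
    detections and rises after each miss, so the stimulus level oscillates
    around the participant's detection threshold (~70.7% point)."""
    intensity = start_kpa
    hits_in_a_row = 0
    last_direction = None            # +1 = rising, -1 = falling
    reversals = []

    for _ in range(200):                          # hard cap on trials
        if len(reversals) >= max_reversals:
            break
        detected = present_stimulus(intensity)    # True if participant felt it
        if detected:
            hits_in_a_row += 1
            if hits_in_a_row < 2:
                continue                          # need two hits before stepping down
            direction, hits_in_a_row = -1, 0
        else:
            direction, hits_in_a_row = +1, 0
        if last_direction is not None and direction != last_direction:
            reversals.append(intensity)           # the staircase turned around
        last_direction = direction
        intensity = max(0.0, intensity + direction * step_kpa)

    # Threshold estimate: mean intensity at the reversal points.
    return sum(reversals) / len(reversals) if reversals else intensity
```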
For virtual reality applications, if the goal is to provide touch feedback to larger areas of the body, such as the back or arms, the interface needs to be bigger. The skin in these areas is less sensitive than the fingertips, so using tiny 1-millimetre chambers wouldn’t be as effective. Instead, the membrane dimensions should be increased to better suit the lower sensitivity of these body parts.
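One simplistic way to capture that scaling argument in code is to size each membrane in proportion to the two-point discrimination distance of the target skin region. The distances below are representative textbook values, and the proportional rule is our own simplification, not a published design equation.

```python
# Approximate two-point discrimination distances (mm), from standard
# textbook values; a real design would use site-specific measurements.
TWO_POINT_DISCRIMINATION_MM = {
    "fingertip": 2.5,
    "palm": 9.0,
    "forearm": 35.0,
    "back": 40.0,
}

def membrane_diameter_mm(site: str, fingertip_membrane_mm: float = 1.0) -> float:
    """Scale the ~1 mm fingertip chamber by relative spatial acuity
    (a simplifying assumption, not a published design rule)."""
    ratio = TWO_POINT_DISCRIMINATION_MM[site] / TWO_POINT_DISCRIMINATION_MM["fingertip"]
    return fingertip_membrane_mm * ratio

print(membrane_diameter_mm("back"))   # ~16 mm: far larger than on the finger
```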
What role does AI or machine learning play in refining the feedback mechanisms in your virtual touch system? Are there algorithms in place that help adapt or personalise the touch experience based on user interaction?
In the next phase of our research, we’ll probably use AI to help analyse the data collected through our haptic system. For example, AI could help determine the kind of stimulus that should be applied to each of the four receptors in the skin, as well as the right intensity, to simulate sensations like the feeling of touching a smooth surface.
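As a loose sketch of what such a pipeline might look like, the example below fits a linear map from a desired sensation descriptor (a single “smoothness” value) to four per-receptor stimulus intensities. The training data here is random placeholder data; in practice it would come from perceptual studies run with the haptic system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dataset: each row pairs a sensation descriptor (smoothness
# in [0, 1]) with the four per-receptor intensities that evoked it.
# Real data would come from user studies; these numbers are random.
smoothness = rng.uniform(0.0, 1.0, size=(200, 1))
receptor_intensities = rng.uniform(0.0, 5.0, size=(200, 4))  # merkel, meissner, pacinian, ruffini

# Fit a linear map from descriptor to stimulus parameters via least squares.
X = np.hstack([smoothness, np.ones((200, 1))])       # add a bias column
weights, *_ = np.linalg.lstsq(X, receptor_intensities, rcond=None)

def stimulus_for(smoothness_target: float) -> np.ndarray:
    """Predict the four receptor intensities for a desired smoothness."""
    x = np.array([smoothness_target, 1.0])
    return x @ weights

print(stimulus_for(0.8))  # predicted (merkel, meissner, pacinian, ruffini) intensities
```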
Looking ahead, what are the next stages of development for this technology? Are there plans for commercialisation, or are you collaborating with industry partners to bring this innovation to market?
Yes, my colleagues and I are exploring opportunities to bring our work to market. For example, we’ve received an invitation to partner with a company focused on virtual reality.