Mingmin Zhao, Assistant Professor in Computer and Information Science at the University of Pennsylvania, is leading a research effort that could change how robots interpret space. Zhao and colleagues at Penn Engineering have developed a system called HoloRadar that enables robots to reconstruct three-dimensional scenes outside their direct line of sight using radio waves and artificial intelligence. The work, presented at the 39th Conference on Neural Information Processing Systems, addresses a fundamental limitation in robotic perception: the inability to see what lies just beyond a corner.
For autonomous machines operating in warehouses, hospitals, campuses, and eventually on public roads, blind spots are more than an inconvenience. A delivery robot approaching a T-shaped corridor or a vehicle nearing an obstructed intersection must make decisions without full visibility. Current sensing technologies, including cameras and LiDAR, provide detailed information about what is directly ahead but offer little insight into hidden regions. HoloRadar extends that awareness by exploiting the physical behavior of radio-frequency signals.
Zhao stated, “This is an important step toward giving robots a more complete understanding of their surroundings. Our long-term goal is to enable machines to operate safely and intelligently in the dynamic and complex environments humans navigate every day.”
The concept rests on a counterintuitive property of radio waves. Compared to visible light, radio waves have much longer wavelengths. In traditional imaging systems, long wavelengths are associated with lower resolution, which is typically seen as a drawback. Zhao’s team recognized that for non-line-of-sight perception, those longer wavelengths offer a practical advantage. Because radio wavelengths are much larger than the small surface irregularities found on walls and ceilings, many indoor surfaces behave like approximate mirrors for radio signals. When a pulse is transmitted, it can reflect off a wall, travel around a corner, interact with objects hidden from view, and return to the sensor.
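The mirror-like behavior of rough surfaces can be made quantitative with the classical Rayleigh smoothness criterion: a surface reflects specularly when its height variation is small relative to the wavelength. The sketch below illustrates the idea; the 77 GHz frequency and 0.1 mm roughness figures are illustrative assumptions, not values from the paper.

```python
import math

C = 3e8  # speed of light, m/s

def acts_like_mirror(roughness_m, freq_hz, incidence_deg=0.0):
    """Rayleigh criterion: a surface reflects specularly (mirror-like) when its
    height variation h satisfies h < wavelength / (8 * cos(incidence angle))."""
    wavelength = C / freq_hz
    limit = wavelength / (8 * math.cos(math.radians(incidence_deg)))
    return roughness_m < limit

# A painted wall with ~0.1 mm surface roughness:
print(acts_like_mirror(1e-4, 77e9))   # 77 GHz radar, wavelength ~3.9 mm -> True
print(acts_like_mirror(1e-4, 5e14))   # visible light, wavelength ~600 nm -> False
```

The same wall that is a mirror at millimeter wavelengths is diffuse at optical wavelengths, which is why the trick works for radar but not for cameras.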
In effect, the built environment becomes a network of reflective pathways. Unlike optical mirror systems installed at blind intersections, HoloRadar does not require additional infrastructure. It uses existing architectural features such as walls, floors, and ceilings to guide signals indirectly toward hidden areas. The challenge is not simply detecting these reflections but interpreting them.
Radio pulses often bounce multiple times before returning to a receiver. Each bounce introduces additional delay and distortion. The resulting signal is a mixture of overlapping reflections, commonly referred to as multipath interference. Untangling this information is difficult using conventional signal processing techniques alone.
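The overlapping-echo effect described above can be sketched as a superposition of delayed, attenuated copies of the transmitted pulse, one per reflection path. The delays and gains below are invented for illustration (times in nanoseconds; at light speed, a 20 ns delay corresponds to roughly a 6 m round trip) and are not the authors' signal model.

```python
def pulse(t_ns):
    """A 1 ns rectangular transmit pulse."""
    return 1.0 if 0 <= t_ns < 1 else 0.0

def multipath_return(t_ns, paths):
    """Sum the echoes: gain * pulse(t - delay) over all propagation paths."""
    return sum(gain * pulse(t_ns - delay_ns) for delay_ns, gain in paths)

# (delay in ns, gain): direct wall bounce, wall->object->wall, third-order bounce
paths = [(20, 1.0), (30, 0.4), (40, 0.15)]

# The sampled return is one tangled waveform with three overlapping echoes.
samples = [multipath_return(n, paths) for n in range(50)]
print(samples[20], samples[30], samples[40])  # 1.0 0.4 0.15
```

Recovering which echo came from which path, given only the summed waveform, is exactly the inverse problem the system must solve.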
To address this complexity, the Penn team developed a two-stage computational framework that combines machine learning with physics-based modeling. In the first stage, a neural network processes the raw radar returns to enhance their effective resolution and separate distinct reflection paths. In the second stage, a physics-guided model traces these reflections backward through space, accounting for how radio waves propagate and reflect from surfaces. By explicitly modeling wave behavior, the system can infer the true spatial positions of objects that generated the reflections.
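The geometric core of tracing a specular bounce backward is simple to state: a single reflection off a flat wall makes a hidden object appear at its mirror image behind the wall, so knowing the wall plane lets you fold the apparent position back to the true one. The toy function below illustrates this mirror-image step in 2D; it is a minimal sketch of the principle, not the authors' reconstruction code, and the coordinates are made up.

```python
def reflect_across_wall(point, wall_x):
    """Mirror a 2D point (x, y) across a vertical wall plane at x = wall_x.
    A one-bounce specular reflection places a hidden object's radar image at
    this mirror position, so applying the reflection recovers the true point."""
    x, y = point
    return (2 * wall_x - x, y)

# The radar sees an apparent reflector 2 m "behind" a wall at x = 5 m ...
apparent = (7.0, 3.0)
# ... whose true position is the mirror image on the near side of the wall.
true_position = reflect_across_wall(apparent, 5.0)
print(true_position)  # (3.0, 3.0)
```

The hard part, which the neural front end handles, is deciding which returns are single clean bounces in the first place; once a path is isolated, the back-tracing itself is deterministic geometry like this.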
This hybrid approach reflects a broader shift in robotics research toward integrating data-driven methods with physical constraints. Rather than relying entirely on a neural network to infer hidden geometry, the system embeds knowledge about wave propagation directly into the reconstruction process. This allows it to distinguish between direct and indirect reflections and to filter out ambiguous or misleading signal components.
In laboratory and building corridor experiments, the researchers mounted HoloRadar on a mobile robotic platform. As the robot approached corners and T-shaped intersections, the system reconstructed surrounding walls and detected human subjects located outside its visual field. These reconstructions were generated in real time, an important factor for safety-critical applications. The system also operated effectively in low-light and dark conditions, since radio waves are not dependent on ambient illumination.
Previous research in non-line-of-sight imaging has demonstrated the ability to reconstruct hidden scenes using ultrafast lasers and time-of-flight cameras. Those optical approaches can achieve high spatial precision but often require specialized equipment and controlled conditions. Other radar-based systems have shown through-wall sensing capabilities, yet many relied on large scanning arrays or long acquisition times that limit mobility. HoloRadar contributes to this landscape by emphasizing compact hardware and real-time computation suitable for mobile robots.
The system is not intended to replace existing sensors. Instead, it complements them. Autonomous vehicles already employ LiDAR, cameras, ultrasonic sensors, and conventional radar to build detailed maps of their surroundings. HoloRadar adds another layer of perception by revealing regions that are physically blocked from view. Even a fraction of a second of earlier detection at a blind intersection can expand the window for safe decision making.
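The value of a fraction of a second is easy to put in distance terms. The back-of-envelope sketch below uses an assumed speed of 30 km/h; both numbers are illustrative, not from the study.

```python
def extra_distance_m(speed_kmh, earlier_s):
    """Distance covered during the extra warning time: convert km/h to m/s
    (divide by 3.6), then multiply by the seconds gained."""
    return speed_kmh / 3.6 * earlier_s

# Half a second of earlier detection at 30 km/h buys about 4 m of reaction room.
print(round(extra_distance_m(30.0, 0.5), 2))  # 4.17
```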
There remain technical and practical challenges before such systems can be widely deployed. Indoor environments offer relatively predictable geometries and reflective surfaces. Outdoor settings introduce longer distances, moving objects, irregular facades, and environmental noise. Urban intersections, for example, involve vehicles, pedestrians, and metallic structures that may generate complex reflection patterns. Extending non-line-of-sight reconstruction to those scenarios will require further refinement of both hardware and algorithms.
The broader implication of this research lies in how it reframes the sensing problem. Instead of viewing walls as barriers to perception, the system treats them as components of the sensing architecture. This shift in perspective aligns with a growing interest in computational imaging, where information is extracted not only from direct signals but also from indirect interactions with the environment.
As autonomous systems continue to move into shared human spaces, perception beyond direct line of sight is likely to become a design requirement rather than an experimental feature. Zhao’s team has demonstrated that radio waves, when paired with carefully structured AI models, can provide a workable path toward that goal. For robotics engineers, the work underscores the value of combining classical physics with modern machine learning to extend what machines can sense, interpret, and ultimately understand about the spaces they navigate.

Adrian graduated with a Master's degree (1st Class Honours) in Chemical Engineering from Chester University along with Harris. His master's research aimed to develop a standardised clean water oxygenation transfer procedure to test bubble diffusers that are currently used in the wastewater industry commercial market. He has also undertaken placements in both the US and China, primarily focused within the R&D department, and is an associate member of the Institute of Chemical Engineers (IChemE).

