The fact that we're living in a world where surveillance is becoming more common is unlikely to surprise you. But even when you're out of sight, you might not be safely hidden: researchers have developed a computer program that lets cameras see around corners.
The technique is called computational periscopy, and it works by analysing shadows cast on a wall and applying some seriously powerful decoding algorithms to them. The end result isn't perfect, but it's very impressive.
While thoughts of Big Brother watching you might immediately spring to mind, the technology doesn't have to have such a dystopian purpose.
It could, for example, help self-driving cars avoid accidents, or improve the navigation skills of autonomous robots working in disaster areas.
What really makes the program stand out is the way it can be applied to an image captured by any digital camera – you don't need any special equipment.
"It was thought to be practically impossible to reconstruct an image from only scattered light from a wall without any advanced instruments," optical physicist Allard Mosk from Utrecht University in the Netherlands, who wasn't involved in the study, told Nature.
As its source image, the program needs a photograph of the wall as it receives light from the scene, including the shadows cast by an object hidden around the corner. More specifically, it needs a penumbra – the outer, partially shaded edge of a shadow cast by an opaque object.
Penumbras are most often talked about in relation to the shadows cast by planets and moons, but here the algorithms developed by the researchers can work backwards from them to reconstruct a picture of the original scene.
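To get a rough feel for why a penumbra is so informative, its width follows from similar triangles: the wider the hidden light source, the softer the shadow's edge. Here's a minimal back-of-envelope sketch, with all distances hypothetical and chosen purely for illustration:

```python
# Hypothetical numbers for illustration only (not from the paper).
# By similar triangles, a penumbra's width grows with the angular size
# of the light source, so its soft edge encodes the hidden scene's extent.
scene_width = 0.5          # metres: assumed width of the hidden scene
scene_to_occluder = 1.0    # metres: assumed scene-to-occluder distance
occluder_to_wall = 0.5     # metres: assumed occluder-to-wall distance

penumbra_width = scene_width * occluder_to_wall / scene_to_occluder
print(f"penumbra width ~ {penumbra_width:.2f} m")  # ~ 0.25 m
```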
The algorithm is essentially unscrambling the light. When light from a scene bounces off a mirror (as in a conventional periscope), no unscrambling is needed, because the reflection preserves the image.
A matte wall, by contrast, scatters light in all directions, so in this case some heavy computational lifting is needed to undo that scattering and turn the wall into something like a mirror.
Importantly, there has to be an opaque object partially blocking light from the scene, with dimensions and a shape the algorithm already knows about. That helps the program figure out how the light has been scattered and how to put it back together again.
"Based on light ray optics, we can compute and understand which subsets of the scene's appearance influence the camera pixels," says one of the team, electrical engineer Vivek Goyal from Boston University.
"It becomes possible to compute an image of the hidden scene."
Researchers have been pulling off tricks like this with scattered light for several years, but earlier systems typically relied on specialised equipment; here, everything is done in software applied to an ordinary photograph.
Even though the technique requires a specific setup (an opaque object in place and strong lighting), it's another tool that cameras of the future might be able to call upon when needed.
The team thinks that eventually the algorithm might be able to work out the dimensions and shape of the opaque object itself.
"In the future, I imagine there might be some sort of hybrid method, in which the system is able to locate foreground opaque objects and factor that into the computational reconstruction of the scene," says Goyal.
The system is only going to get better over time as well. Right now it takes around 50 seconds to reconstruct a scene from the light and shadows scattered on a wall, but the team thinks that could be improved upon.
Eventually, it might be able to process video footage in real time, the researchers say – but they're hoping it will be put to positive rather than sinister uses, like searching burning buildings or keeping people safe on the roads.
"I'm not especially excited by surveillance, I don't want to be doing creepy things," Goyal told Ian Sample at the Guardian.
"But being able to see that there's a child on the other side of a parked car, or see a little bit around the corner of an intersection could have a significant impact on safety."
The research has been published in Nature.