
New ‘shady’ research from MIT uses shadows to see what cameras can’t

Computational Mirrors: Revealing Hidden Video

Artificial intelligence could soon help video cameras see what lies just beyond the edge of the frame, by using shadows. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have concocted an algorithm that “sees” what’s out of the video frame by analyzing the shadows and shading that out-of-view objects create. The research, Blind Inverse Light Transport by Deep Matrix Factorization, was published today, Dec. 6.

The algorithm works almost like reading shadow puppets in reverse — the computer sees the bunny-shaped shadow and is then able to create an estimate of the object that created that shadow. The computer doesn’t know what that object is, but can provide a rough outline of the shape.

The researchers used shadows and geometry to teach the program to predict light transport, or how light moves through a scene. When light hits an object, it scatters, creating shadows and highlights. The research team worked to “unscramble” that light from the pattern of shading, shadows, and highlights. Further refinement helped the computer estimate the most plausible shape out of all the possibilities.
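In rough terms, the shadows the camera records can be modeled as a linear mixing of the light coming from the hidden scene. The sketch below uses made-up sizes and random data rather than anything from the MIT paper; it only illustrates that forward model, in which a transport matrix mixes each frame of the unseen scene into the shadows and shading on a visible surface, so that “unscrambling” means undoing that mixing.

```python
# Minimal sketch of the forward light-transport model (hypothetical sizes).
# Each column of `hidden` is one frame of the out-of-view scene's light;
# the transport matrix T mixes that light into the shadows and shading
# the camera actually records on a visible surface.
import numpy as np

rng = np.random.default_rng(0)

n_hidden_pixels = 64      # pixels in the hidden (out-of-view) scene
n_camera_pixels = 256     # pixels the camera sees on the visible wall/floor
n_frames = 100            # video frames

# Light transport: how each hidden pixel's light lands on each camera pixel.
# Light is physically nonnegative, which the researchers exploit as a constraint.
T = rng.random((n_camera_pixels, n_hidden_pixels))

# The hidden video (unknown in practice).
hidden = rng.random((n_hidden_pixels, n_frames))

# What the camera records: every frame is the same mixing of the hidden light.
observed = T @ hidden     # shape: (n_camera_pixels, n_frames)

# "Unscrambling" means recovering both T and `hidden` from `observed` alone,
# which is only possible up to ambiguities, hence the priors described below.
print(observed.shape)
```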

With an understanding of how light moves, the algorithm can then create a rough reconstruction of the object that cast the shadow, even though the object itself never appears in the video. The algorithm relies on two neural networks, one to “unscramble” the light transport and another to generate a video of what the hidden object looks like.
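One way to picture that two-network setup, as a rough sketch rather than the CSAIL implementation (which uses convolutional generators and additional priors), is as a blind matrix factorization: two small networks produce the transport matrix and the hidden video, and both are trained only so that their product reproduces the observed footage. The layer sizes and plain fully connected networks below are illustrative assumptions.

```python
# Minimal PyTorch sketch of the "deep matrix factorization" idea: two small
# networks generate the transport matrix T and the hidden video L from fixed
# noise, and are trained only to make T @ L reproduce the observed video.
# Sizes and architectures here are assumptions for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_cam, n_hidden, n_frames = 256, 64, 100
observed = torch.rand(n_cam, n_frames)        # stand-in for the real video

def generator(out_rows, out_cols, width=128):
    # Maps a fixed noise vector to a nonnegative matrix of the given shape.
    return nn.Sequential(
        nn.Linear(64, width), nn.ReLU(),
        nn.Linear(width, out_rows * out_cols), nn.Softplus(),
    )

g_T = generator(n_cam, n_hidden)              # produces the transport matrix
g_L = generator(n_hidden, n_frames)           # produces the hidden video
z_T, z_L = torch.randn(64), torch.randn(64)   # fixed latent inputs

opt = torch.optim.Adam(list(g_T.parameters()) + list(g_L.parameters()), lr=1e-3)

for step in range(2000):
    T = g_T(z_T).reshape(n_cam, n_hidden)
    L = g_L(z_L).reshape(n_hidden, n_frames)
    loss = ((T @ L - observed) ** 2).mean()   # reconstruct the observed shadows
    opt.zero_grad()
    loss.backward()
    opt.step()

# L now holds a rough, pixelated estimate of the out-of-view video.
```

Because light is physically nonnegative, both factors are kept nonnegative, which narrows the otherwise enormous space of factorizations down to more plausible ones.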

The algorithm creates a pixelated silhouette of the shape and of how that shape moves. That’s not enough to build a spy camera that sees around corners, but it does make those scenes from CSI where the investigators pull out detail that wasn’t there before a little more plausible.

The researchers suggest that, with further refinement, the technology could be used for applications like enhancing the vision of self-driving cars. By reading the shadow information, the car could potentially see an object about to cross the road before it even enters the camera’s field of view. That application is still a long way off, though: the researchers say the process currently takes about two hours to reconstruct a mystery object.

The research builds on earlier work from other MIT researchers who used special lasers to see what a camera couldn’t. The new approach works without any extra equipment beyond the camera, computer, and software.
