
Diving Into the Data, Literally

New immersive environments are allowing researchers to visualize and study everything from brains to hurricanes with unprecedented detail and scale.

By Linda Marsa
Jul 29, 2014 6:00 PM (updated Nov 12, 2019 5:41 AM)
Researchers can stand immersed in 3-D renderings of green fluorescent proteins and other structures derived from UC San Diego’s Protein Data Bank.  | Tom DeFanti/ UC San Diego CALIT2

One winter evening in the early 1860s, German chemist August Kekulé dozed off while sitting before a fire, falling into a remarkably vivid dream. Atoms formed themselves into undulating strings that morphed into a snake eating its own tail. Kekulé contended that this intense imagery helped him solve the mystery of benzene’s ringlike structure, a discovery that is considered a foundation of modern chemistry.

Nearly 100 years later, research teams on both sides of the Atlantic were vying to be the first to decipher the structure of DNA, the genetic material that is the basic molecule of life. In the United States, Nobel laureate Linus Pauling found himself up against obscure English physicist Francis Crick and a 20-something American postdoc, James Watson, in Cambridge. The upstart British team had a hidden advantage: X-ray diffraction images of DNA taken by Rosalind Franklin. These crystallographic images revealed that DNA was composed of two complementary strands of nucleic acids linked by chemical bonds, like the rungs of a ladder. The ability to visualize DNA gave them insights into the spiral double-helix structure — and they won the race.

In 1993, Kary Mullis won the Nobel Prize for his invention of the polymerase chain reaction, the chemical Xerox machine that makes thousands of copies of tiny strands of DNA, a breakthrough that jump-started the biotech revolution. The biochemist, then based in Berkeley, freely admitted he conceptualized this advance while under the influence of the mind-altering drug LSD, which helped him visualize the complex chemistry three-dimensionally.

These three examples center on the power of visualization — that ability to “see” something from a different perspective, a spark of insight that pares away mountains of extraneous details and distills seemingly impenetrable puzzles down to their essence. But now we’re in the era of big data, which harnesses the computing power of massive databases with bytes measured in teras (trillions) and petas (quadrillions), combined with sophisticated algorithms that can grapple with problems on a once-unimaginable scale. While this numbers-crunching ability promises to greatly accelerate the pace of scientific discovery, we’re suddenly buried in an avalanche of information.

Immersive environments — 3-D virtual reality worlds — can help us make sense of this in a tangible way. Big data collects such a vast amount of information that it’s difficult to see patterns. Using computing power to translate data into something that can be seen and heard makes it easier to understand. “Scientists and engineers can work with their data, perceptually and intuitively, the way artists do,” says JoAnn Kuchera-Morin, creator of the AlloSphere. It is perhaps the most advanced of these immersive environments, housed on the campus of the University of California, Santa Barbara.

These electronically simulated worlds of sight and sound cut through a lot of the noise of big data, and they enable researchers to synthesize, manipulate and analyze large data sets in a way that is easier to comprehend and digest, providing unparalleled insights into the whole picture and how each individual piece fits in. “We have so much data that we need these bigger lenses to get a full picture of what’s really going on,” says Andrew Johnson, director of research at the Electronic Visualization Laboratory at the University of Illinois at Chicago. “These kinds of environments are lenses to look at data — the modern equivalent of the microscope or telescope.”

Pooling massive amounts of data allows patterns and trends to emerge that aren’t apparent in small, individual studies, and the applications are virtually infinite — think Moneyball, the 2003 best-selling book about how the perennially cash-strapped Oakland A’s used analytics and baseball stats to scout overlooked talent. Another example: In 2013, it took number-crunching algorithms sifting through terabytes of collision data to spot the distinctive signature of the Higgs boson, finally allowing physicists to identify the particle. Meanwhile, medical scientists are crunching billions of data points, culled from millions of patients, about genetic mutations that make people more vulnerable to diseases like diabetes, heart disease and cancer. They combine this information with sequences of the proteins those bits of DNA produce. (Proteins are the body’s workhorses, controlling every cell.) This information is used to concoct more targeted therapeutics and more precise diagnostics using biomarkers — in a patient’s blood, saliva or urine — that signal the presence of a disease.

Immersive environments like the ones you’ll see in the following pages allow scientists to watch a tumor grow, observe molecules binding together — or even see a re-enactment of the Big Bang and witness the transformation of the universe over billions of years. Rudimentary versions of these environments have been around since the 1990s, but with today’s technology, scientists can sink into even greater realism and visualize more with sharper resolution. This immersion is used in disciplines as diverse as medicine, physics, neuroscience, green technology, structural engineering and archaeology at universities, government research agencies and in private industry all over the world.

“Originally, we created these as an educational tool for visualizing concepts and ideas — in place of a blackboard and hand waving — to help people see things they never did before,” says Thomas DeFanti, a research scientist at UC San Diego’s California Institute for Telecommunications and Information Technology, and a pioneer of virtual reality systems. “But the newest technology gives you the feeling of true immersion that makes for a completely riveting experience.”

Inside the AlloSphere

“Shall we, Matt?” 

JoAnn Kuchera-Morin instructs her media systems engineer, Matthew Wright, to fire up the computer cluster that powers the AlloSphere. With a simple keystroke, we’re suddenly plunged into a virtual world of sight and sound that transports us on a fantastic voyage through a three-dimensional model of the human body. We hurtle down an artery, as if we’re sliding down a slippery chute, and nearly collide with the liver and heart. It feels as if we’re propelled, airborne and hovering in free fall in an onrush of images in the darkened chamber. 

We’re wearing 3-D glasses and standing on a sturdy metallic catwalk suspended at the center of a 33-foot-diameter sphere, which seems to be floating inside a 2,000-square-foot room three stories high. Dozens of speakers and other audio equipment envelop us in sound from every direction, while high-resolution video projectors beam floor-to-ceiling images in 40-million-pixel detail. This all creates a unique 360-degree immersive environment that far outstrips the technology of other virtual reality systems. Here, researchers can use all of their senses to uncover new patterns in the data.

The AlloSphere cost $12 million for the structure alone and was completed in 2007. It is the brainchild of Kuchera-Morin, an orchestrally trained composer turned computer geek who directs the AlloSphere Research Laboratory at the University of California, Santa Barbara, perched on the rocky shoreline of the Pacific. A gregarious woman clad all in black with long, straight gray hair that makes her resemble a hippie grandmother, Kuchera-Morin began dabbling with big mainframe computers in the 1980s, when traditional instruments couldn’t translate the sounds she heard in her head into music.

“The computer helped me understand all of the acoustics, vibrations and physics of instruments,” she says. “And through mathematical equations, I could transform them into anything I wanted to.”

Her early experiments ultimately evolved into the AlloSphere, which converts reams of data into moving images and sound that are easier for researchers to comprehend and digest. Sometimes, dozens of scientists in data-rich disciplines ranging from neuroscience and medicine to green tech, theoretical physics, materials science and nanotechnology gather on this bridge. They use special wireless controllers and sensors embedded in the railings to maneuver through the constellation of images. Physicists can watch representations of electrons spinning inside hydrogen atoms, allowing them to actually “see” these invisible processes of nature, while neuroscientists can seemingly fly through 3-D images inside a patient’s brain. “Everything you see is a number that’s been crunched,” says Kuchera-Morin. “Mathematical algorithms can be translated into visual and audio frequencies by mapping their vibratory spectrum in the light and sound domain — like mapping heat through infrared light. The AlloSphere is a virtual instrument that allows scientists to do simulations, which will speed up time to discovery.”
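
To make that kind of mapping concrete, here is a minimal sketch of the general idea in Python, assuming a simple one-dimensional data series. The function names, frequency ranges and hue sweep are illustrative; the AlloSphere's actual software stack is far more sophisticated.

```python
import numpy as np

def sonify_and_colorize(values, f_min=220.0, f_max=880.0):
    """Map a 1-D data series onto audible frequencies and hues.

    Illustrative only: each sample is normalized to [0, 1], then
    scaled linearly into a pitch range and a hue angle, the kind of
    vibratory-spectrum mapping described above.
    """
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (v.ptp() or 1.0)   # normalize to [0, 1]
    freqs = f_min + norm * (f_max - f_min)    # Hz, a two-octave span
    hues = norm * 270.0                       # degrees around the color wheel
    return freqs, hues

def render_tone(freq, duration=0.25, rate=44100):
    """Synthesize one sine tone for a mapped frequency."""
    t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)
    return np.sin(2.0 * np.pi * freq * t)

# Example: turn a noisy simulated measurement series into pitch and color.
rng = np.random.default_rng(0)
data = np.sin(np.linspace(0, 4 * np.pi, 16)) + rng.normal(0, 0.1, 16)
freqs, hues = sonify_and_colorize(data)
audio = np.concatenate([render_tone(f) for f in freqs])
```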

On this particular day, we’re looking at a project by Jamey Marth, director of the Center for Nanomedicine at UC Santa Barbara. Marth is using the simulation version of the human body to examine the makeup and behavior of critical cell components, such as proteins, lipids (fats) and glycans (sugars). This particular simulation was built with MRI information collected from a living human body. Using specialized software and computational language to translate mathematical algorithms and scientific data into sight and sound, Kuchera-Morin’s band of techies first integrated the geometries of the arteries, veins, pancreas and liver, and then scaled them up like a high-powered digital microscope so researchers can better visualize the biological processes of health and disease. 

Right now, Marth’s team is simulating the transport of chemotherapy directly to cancerous tumors in the pancreas and liver without harming healthy tissue. Artificial nanoscale particles might prove a good delivery vehicle. But first, the researchers have to gauge whether the nanoparticles can successfully navigate through blood vessels and then bind with cancer cells to deliver their toxic payload. In the AlloSphere, it’s as if Marth’s researchers are standing inside blood vessels, visualizing, at human scale, processes normally far too small to see. The next step is to integrate fluid dynamics to simulate precisely how blood flows through arteries and veins. They’ll also work with materials scientists to model how nanoparticles of different shapes and compositions navigate through the bloodstream, so the team can run virtual tests of new treatments in nanomedicine.
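
As a rough illustration of what such a simulation involves, here is a toy Python sketch: particles carried along an idealized parabolic (Poiseuille) velocity profile, with random jitter standing in for Brownian motion and a stretch of vessel wall standing in for a tumor's binding site. Every name and number is hypothetical; the team's real models couple measured vessel geometry with full fluid dynamics and binding chemistry.

```python
import numpy as np

# Toy 2-D model of nanoparticles drifting through a vessel segment.
RADIUS = 1.0          # vessel half-width (arbitrary units)
TARGET = (8.0, 10.0)  # stretch of wall standing in for a tumor site
DT, STEPS, N = 0.05, 400, 200

rng = np.random.default_rng(42)
x = np.zeros(N)                        # all particles start at the inlet
y = rng.uniform(-RADIUS, RADIUS, N)    # spread across the vessel lumen
bound = np.zeros(N, dtype=bool)

for _ in range(STEPS):
    u = 1.0 - (y / RADIUS) ** 2        # parabolic axial velocity profile
    x += u * DT                        # advect downstream with the flow
    y += rng.normal(0.0, 0.02, N)      # Brownian-like lateral jitter
    y = np.clip(y, -RADIUS, RADIUS)
    near_wall = np.abs(y) > 0.95 * RADIUS
    in_target = (x > TARGET[0]) & (x < TARGET[1])
    bound |= near_wall & in_target     # particle binds at the tumor wall

print(f"{bound.sum()} of {N} particles bound at the target site")
```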

“We need to design nanoparticles that will, like a lock-and-key mechanism, travel through the body and interact only with the diseased cell surface,” says Marth. “Right now, we use MRIs and PET scans to visualize these processes, but other imaging approaches are needed — and that’s where the AlloSphere comes in. This is the breeding ground for the next generation of solutions in medicine.”

Immersive Innovations

The AlloSphere is one of dozens of immersive environments that are now routinely used at universities, government agencies and key research centers. Here’s a sampling of some of the more innovative applications.

UIC's CAVE2:

Tracking Depression’s Footprints in the Brain

At the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago, a psychiatric team is mapping the brain’s intricate web of neural connections, using images from MRI scans, to try to identify regions responsible for depression. They’re in CAVE2 (Cave Automatic Virtual Environment), a 24-foot-wide, 8-foot-tall, 320-degree panoramic room, with 72 3-D liquid-crystal displays that have a 37-megapixel resolution, which is about the limit of human 20/20 visual acuity. 

The bundles of neural fiber tracts in this brain visualization are color-coded by their primary direction (green: front-back; red: left-right; blue: up-down). Scientists are scouring this rainbow array for faulty wiring in the neural connections in the white matter, which takes up half the brain. It serves as a neural switchyard with millions of communication cables that operate like telephone trunk lines connecting different regions of the brain. Previous research suggests that damage to these cables in the white matter is associated with depression.
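
This palette is the standard directional encoding used in diffusion imaging, and it reduces to a one-line mapping: take each tract segment's unit direction vector and use the absolute values of its x, y and z components as red, green and blue. A minimal sketch:

```python
import numpy as np

def tract_color(direction):
    """Color a fiber tract by its primary direction (standard DTI scheme).

    With x = left-right, y = front-back, z = up-down, the absolute
    components map to red, green and blue, matching the palette above.
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)   # unit vector along the tract
    return np.abs(d)            # RGB triple in [0, 1]; sign is irrelevant

print(tract_color([0.0, 1.0, 0.0]))  # front-back tract -> pure green
print(tract_color([1.0, 0.0, 1.0]))  # oblique tract -> magenta blend
```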

“We’re hoping this 3-D environment will help us spot differences that aren’t easily detectable in two dimensions on a flat screen,” says Olusola Ajilore, the UIC psychiatrist conducting the research. “If we get better at mapping the brain areas responsible, it will lead to more precision in the use of technologies that may repair these damaged connections, like deep brain stimulation.”

Underground Lakes at the Bottom of the World

CAVE2 is able to switch gears from the inner workings of the brain to Antarctica’s West Lake Bonney. Peter Doran stands in front of the EVL’s 8-by-24-foot wall of LCD screens at the University of Illinois at Chicago, wearing 3-D glasses and surrounded by images from the 2½-mile-long pool of water entombed beneath 15 feet of ice. His glasses are outfitted with tiny tracking balls and sensors that adjust the projections on the screen according to his movements. A turn of his head changes the scene, accentuating the sense of immersion.
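
Head tracking of this kind boils down to re-deriving the camera's view transform each frame from the tracked pose of the glasses. Below is a minimal sketch of that one step, with made-up coordinates; CAVE2's real pipeline additionally computes off-axis stereo projections for every display panel.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix from a tracked head position."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)            # forward axis
    s = np.cross(f, up)
    s /= np.linalg.norm(s)            # right axis
    u = np.cross(s, f)                # corrected up axis
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye       # translate world to head position
    return m

# Each frame: read the tracker, rebuild the view, re-render the scene.
head_pos = [0.2, 1.7, 0.5]                        # meters, hypothetical reading
view = look_at(head_pos, target=[0.0, 1.0, -3.0])
```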

An earth and environmental scientist, Doran was part of a NASA-funded team that made field expeditions in 2008 and 2009 to Antarctica’s Dry Valleys to explore the underground lake. Each day, ENDURANCE (Environmentally Non-Disturbing Under-ice Robotic Antarctic Explorer) was lowered through a hole in the ice and used its sensors to take readings in different parts of the lake — temperatures, light levels, solar radiation and dissolved organic matter. The $2.3 million research project may be a dress rehearsal for future exploration, when a modified version of ENDURANCE could be dispatched to probe extreme environments elsewhere in the solar system, such as Mars or Jupiter’s moons.

The information Doran’s team gathered was relayed to the EVL to generate 3-D images and maps to help them get a better understanding of the lake’s watery depths. “This is an incredible tool — you feel like you’re flying through the lake, and it’s the closest thing to actually being there I could imagine,” says Doran. “If I put all this data on my laptop, I’d miss a lot, and I’d have to break it down slice by slice, which would be grueling and take years to accomplish.”

Stony Brook's Reality Deck: 

Up Close and Personal in the City of Dubai

Called “virtual reality on steroids,” the $2 million Reality Deck at Stony Brook University in New York is the world’s highest-resolution panoramic immersive environment. The facility, which debuted in 2012, has a total resolution of more than 1.5 billion pixels (1,500 megapixels). For comparison, high-def TVs have 2 million pixels and 3-D movies 4 million; the Reality Deck boasts hundreds of times the resolution of either. This surround-view theater, a 627-square-foot room tiled floor-to-ceiling with 416 high-resolution flat-screen monitors, can display a 45-gigapixel photograph of the city of Dubai as a single image. Researchers can soar over the city, yet detect such minute details as a car’s license plate number. No need to pan or zoom — to see something close up, just walk up, thanks to the “infinite canvas,” a 360-degree smart screen that changes images as you walk around the deck.
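
For a sense of scale, those comparisons follow directly from the figures quoted here (a quick arithmetic check; the per-tile number assumes pixels are spread evenly across the 416 monitors):

```python
# Scale check using the resolution figures quoted above.
reality_deck = 1_500_000_000    # total pixels: more than 1.5 gigapixels
hdtv = 2_000_000                # a 1080p set: about 2 megapixels
movie_3d = 4_000_000            # a 3-D movie frame: about 4 megapixels
monitors = 416

print(reality_deck // hdtv)      # 750  -> roughly 750x an HDTV
print(reality_deck // movie_3d)  # 375  -> roughly 375x a 3-D movie
print(reality_deck // monitors)  # ~3.6 million pixels per monitor tile
```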

The Reality Deck's 627 square feet are tiled floor-to-ceiling with 416 high-resolution monitors. Its total resolution is more than 1.5 billion pixels. | Stony Brook University

The level of detail is remarkable: Flood maps can pinpoint submerged areas during Superstorm Sandy, even the debris on someone’s lawn. A shot of the half-million people at Barack Obama’s presidential inauguration in 2009 is “so detailed that you can recognize each and every face in the crowd,” says Arie Kaufman, chief scientist of Stony Brook’s Center of Excellence in Wireless and Information Technology.

This technology can be used for applications as diverse as tourism and counterterrorism, thanks to its uncanny ability to pick out a suspicious person in a vast crowd. It’s also useful in cosmology, for charting the far reaches of the Milky Way; in weather prediction and climate change modeling; and even in aerial reconnaissance missions for national defense.

UC San Diego's StarCAVE:

Engulfed in Virtual Proteins

Inside the StarCAVE at the University of California, San Diego, high school biology students find themselves engulfed in virtual 3-D renderings of green fluorescent proteins and other structures derived from the university’s Protein Data Bank. The five-sided virtual reality room, with 70 monitors and nearly 287 million pixels, is the third generation of technology first developed at the University of Illinois’ EVL. StarCAVE has been integral to projects in areas ranging from molecular biology to archaeology and structural engineering.

Neuroscientists, working with architects, have built virtual hospitals, allowing them to design workspaces that ensure better patient care, such as keeping a line of sight from the nurses’ station to patients’ bedsides. Engineers are devising blood flow simulations to improve cardiac pumps for children with heart defects. Archaeology teams have used the StarCAVE to interact in real time with ongoing excavations at ancient dig sites in Jordan and to study satellite images and data gathered during field expeditions to Mongolia to aid in the quest to find Genghis Khan’s tomb.

UT's TACC Visualization Laboratory:

Advance Warning on Hurricanes

In 2008, Hurricane Ike had already decimated parts of Cuba and the Bahamas, and it was barreling for the U.S. mainland. Disaster management teams had less than 72 hours to make decisions that would affect millions. Before it made landfall in Galveston, Texas, Ike was a powerful category 4 storm 450 miles wide with winds of up to 145 mph. It ultimately caused billions of dollars in damage from the Louisiana coastline to Corpus Christi, Texas, shuttering oil refineries and forcing more than a million people to evacuate. 

Scientists at the Texas Advanced Computing Center (TACC) at the University of Texas’ Austin campus turned to Stallion, a 328-megapixel system that uses 80 30-inch flat-panel monitors covering a single wall, and worked closely with the National Oceanic and Atmospheric Administration and state emergency management teams. By combining actual photos with satellite images, they devised real-time global and regional 3-D simulation models of when and where the hurricane would make landfall.

“Think of it as a flow problem, trying to get millions of people out of harm’s way in a couple of days,” says Kelly Gaither, TACC’s director of visualization. “You must have an efficient way to evacuate the large coast regions via your pipeline, and they have to have somewhere to go.”
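
Gaither's "flow problem" framing maps naturally onto the classic maximum-flow problem from graph theory: coastal zones act as sources, inland shelters as sinks, and each highway segment has a finite capacity. Here is a toy sketch using the networkx library; every node and capacity is invented for illustration, and TACC's actual traffic models are far richer.

```python
import networkx as nx

# Hypothetical evacuation network: capacities in vehicles per hour.
G = nx.DiGraph()
G.add_edge("galveston", "I-45", capacity=6000)
G.add_edge("bolivar", "ferry", capacity=800)
G.add_edge("ferry", "I-45", capacity=800)
G.add_edge("I-45", "houston", capacity=5000)
G.add_edge("I-45", "US-59", capacity=2000)
G.add_edge("US-59", "lufkin", capacity=2000)
G.add_edge("houston", "shelters", capacity=5000)
G.add_edge("lufkin", "shelters", capacity=2000)

# Tie both coastal zones to a single super-source for the solver.
G.add_edge("coast", "galveston", capacity=float("inf"))
G.add_edge("coast", "bolivar", capacity=float("inf"))

rate, routes = nx.maximum_flow(G, "coast", "shelters")
print(f"maximum evacuation rate: {rate} vehicles/hour")  # bottlenecked at 6800
```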

TACC’s Stallion has been used in several research projects, including visualizations of the Southwest Power Grid, with images streamed directly from its source in Pasadena, Calif.; massive oil spills; the spread of the flu across the region; and the birth of the universe with the Big Bang. “Visualization technology allows us to leverage the world’s most powerful pattern recognition engine — the human mind,” says Gaither. “Seeing is a way to gain insights into very large data sets that we would have a difficult time understanding otherwise.”

[This article originally appeared in print as "Fantastic Voyages."]
