Visualization in Space and Time: Seamless Pipelines Now Available

Advances in visualization are changing workflows for understanding molecular dynamics, tracking cell movements, and designing interventional procedures

The pathway from raw data to a valuable visualization of molecules, cells, or organs changing over time involves several potentially painstaking steps. Typically, researchers must generate a set of initial images from raw data; give them context (often through image registration); segment the images into meaningful parts; figure out how to analyze the images to understand the system being studied; determine how the display of the objects being studied can assist in that analysis; and, finally, produce a compelling graphic for publication. Here we describe three recent efforts to simplify the process.

 

Molecular Dynamics, Hollywood Style: ePMV

ePMV allows a user to go from an idea (the proverbial napkin sketch) to a press-ready image. An online movie at epmv.scripps.edu shows how this can be done in a matter of minutes with practice. Courtesy of Graham Johnson.

To watch and understand complex biological molecules in action, researchers want fast physics, easy manipulation of objects, and an intuitive interface—all features that Hollywood has already developed in high-end graphics software. To access those capabilities, some researchers have coded their own molecular viewers into their favorite professional 3-D animation applications, while others have coded animation capabilities into molecular viewers. The members of Art Olson’s lab saw the folly of this redundant effort and took action, producing the embedded Python Molecular Viewer (ePMV).

 

“We developed ePMV to pool the strengths of both types of existing software (pro 3-D animation and scientific modelers/viewers/simulators) by merging them with Ludovic Autin’s ubiquitous translator,” says Graham Johnson, PhD, who developed the software along with Ludovic Autin, PhD, David Goodsell, PhD, and Michel Sanner, PhD, in Art Olson’s Molecular Graphics Lab at the Scripps Research Institute. ePMV is described in the March 2011 issue of Structure and is available for download at http://epmv.scripps.edu.

 

The value of Hollywood’s tools goes well beyond creating pretty pictures. For example, they provide great character animation controls. “Picture the simplified skeletons that underlie the meshes of your favorite Pixar characters and control the motion,” Johnson says.

 

And because “Hollywood works on a time-is-money model,” Johnson says, the software allows fast physics within a relatively intuitive interface; multiple simultaneous views; easy assembly of mesoscale models; efficient and intuitive real-time construction of repeating units such as actin monomers into an actin filament; positioning of transmembrane proteins into a bilayer; and click-and-drag interfaces for manipulating lights and textures and adding text and arrows. “And that’s just scratching the surface,” he says.

 

Some of the early enthusiasm for ePMV has come from illustrators, Johnson notes. But, “what’s most exciting about ePMV for biologists is that it allows you to interoperate an unlimited number of Python scientific algorithms on the same model in the same user interface at the same time,” he says.
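The interoperation Johnson describes can be pictured with a toy sketch. The class and functions below are invented for illustration and are not ePMV’s actual API; the point is simply that several independent Python analyses can act on one shared, in-memory model at the same time.

```python
import math

# Hypothetical sketch (not ePMV's real API): many independent Python
# analyses operating on one shared, in-memory molecular model.
class MolecularModel:
    """Minimal stand-in for a molecular model: atoms as (x, y, z) tuples."""
    def __init__(self, coords):
        self.coords = list(coords)

def center_of_mass(model):
    """Unweighted centroid of all atom positions."""
    n = len(model.coords)
    return tuple(sum(c[i] for c in model.coords) / n for i in range(3))

def radius_of_gyration(model):
    """RMS distance of the atoms from their centroid."""
    com = center_of_mass(model)
    sq = [sum((c[i] - com[i]) ** 2 for i in range(3)) for c in model.coords]
    return math.sqrt(sum(sq) / len(sq))

# Both analyses share the same live model object: edit the model once,
# and every algorithm sees the change.
model = MolecularModel([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
print(center_of_mass(model))      # (1.0, 0.0, 0.0)
print(radius_of_gyration(model))  # 1.0
```

In ePMV itself, the host animation package supplies the display and interaction, while the Python layer lets any number of such analyses run against the same model state.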

 

In addition, ePMV makes computational tools relatively approachable for structural biologists and molecular biologists, letting them interact with molecular dynamics experts in an intuitive way, Johnson says.

 

Olson notes that although scientists can use ePMV to do their research, the question is: Will they? “There is an energy barrier to learning these high-level graphics codes,” he says. But, Johnson says, it’s mostly a matter of legacy and momentum. “Clean-slate audiences can be brought up to similar levels in both approaches in similar amounts of time.” Olson adds, “You learn what you need to learn to do your work.”

 

Understanding Cells on the Move

With recent advances in microscopy tools, developmental biologists can now image and track live cells as they move and divide. The information gathered has the potential to explain much about cell growth and differentiation. Although the field of developmental biology is still at the early stages of developing visualization tools to analyze the accumulating data, says Willy Supatto, PhD, a researcher at École Polytechnique in Palaiseau, France, some progress has been made.

Visualized tracking of a Drosophila embryo during gastrulation produced a lovely image (top) and movie, but it was only when the cells’ movements were decomposed and visualized with a color code (bottom) that the researchers understood what was happening. The color code represents the position of the cells in the furrow at a particular time. The graph shows when each of those cells divided as it moved along its trajectory. Cells that started nearest the ectoderm divided first, followed by cells nearer to the top of the ventral furrow. Embryo images reprinted with permission from Truong, T.V. and Supatto, W., Toward high-content/high-throughput imaging and analysis of embryonic morphogenesis, Genesis 49:555–569 (2011). Graphs reprinted with permission from McMahon, A. et al., Dynamic Analyses of Drosophila Gastrulation Provide Insights into Collective Cell Migration, Science 322:1546–1550 (2008).

 

First, of course, developmental biologists must visualize the data itself. The entire pipeline from embryo preparation to live imaging and image analysis can be extremely painstaking. Many researchers still do some steps manually—for example, clicking with the mouse to mark each cell at each time point.

 

But some high-throughput computational alternatives are on the rise, says Supatto. For example, in 2008, he and his colleagues analyzed the movement of thousands of cells in the Drosophila embryo as it changes from a single-layered blastula into a gastrula, a three-layered structure consisting of ectoderm, mesoderm and endoderm. At the time, much of the work was done manually: Taking one embryo through the entire pipeline took about a month. Today, Supatto says, it would take less than a week, mostly because of improved imaging and automated cell tracking.
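Automated cell tracking of the kind Supatto credits for the speedup is, at its core, a frame-to-frame linking problem. The sketch below is a deliberately minimal greedy nearest-neighbor linker in plain Python (function name and threshold are invented for illustration); production trackers additionally handle cell divisions, missed detections, and track gaps.

```python
import math

# Minimal sketch of automated cell tracking: greedily link each cell in
# one frame to its nearest unclaimed cell in the next frame.
def link_frames(frame_a, frame_b, max_dist=5.0):
    """Match cells between consecutive frames.

    Frames are lists of (x, y) centroids. Returns a list of
    (index_in_a, index_in_b) links, with index_in_b = None when no
    candidate lies within max_dist.
    """
    claimed = set()
    links = []
    for i, (xa, ya) in enumerate(frame_a):
        best, best_d = None, max_dist
        for j, (xb, yb) in enumerate(frame_b):
            if j in claimed:
                continue
            d = math.hypot(xb - xa, yb - ya)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            claimed.add(best)
        links.append((i, best))
    return links

# Two cells drift slightly between frames; each is linked to its successor.
print(link_frames([(0.0, 0.0), (10.0, 0.0)], [(0.5, 0.2), (10.3, -0.1)]))
# [(0, 0), (1, 1)]
```

Chaining such links across every pair of consecutive frames yields the per-cell trajectories that once had to be clicked out by hand.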

 

After visualizing the data itself, Supatto says, researchers must then find ways to interpret what they are seeing. He and his colleagues produced a beautiful movie of Drosophila embryo gastrulation but, he says, it couldn’t tell him what was going on. “Cells are mixed together and you can’t remember where they come from,” he says. “And it’s in 3-D so you can’t see the pattern of cell division.” So he and his colleagues began decomposing the trajectories according to the embryonic body plan and color-coding them in various ways. “These were really simple visualization tools, but essential to the research,” he says. And voilà, they found an interesting pattern: each mesoderm cell divided twice during gastrulation and these divisions were ordered in space and time. Cells nearest the ectoderm divided first, followed by cells nearer to the top of the ventral furrow. “Looking at the raw data, you can’t see these waves of cell division,” he says.
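The color-coding Supatto describes can be sketched in a few lines: bin each cell by its starting position, then compare division times across bins. The positions and times below are invented for illustration, not the paper’s data.

```python
# Toy sketch of trajectory decomposition: color-code each cell by its
# starting position along the furrow, then inspect when it divides.
def color_by_position(start_positions, n_bins=3):
    """Map each starting position to a discrete color bin
    (0 = nearest the ectoderm, n_bins - 1 = nearest the furrow top)."""
    lo, hi = min(start_positions), max(start_positions)
    width = (hi - lo) / n_bins or 1.0  # avoid division by zero
    return [min(int((p - lo) / width), n_bins - 1) for p in start_positions]

cells = [  # (distance from ectoderm, division time) -- invented values
    (0.1, 10.0), (0.2, 12.0), (0.5, 20.0), (0.6, 22.0), (0.9, 30.0),
]
bins = color_by_position([pos for pos, _ in cells])
# Cells in lower-numbered bins (nearer the ectoderm) divide earlier,
# which is the wave that only becomes visible after color-coding:
for (pos, t), b in zip(cells, bins):
    print(f"bin {b}: divides at t={t}")
```

Plotting division time against bin color is exactly the kind of “really simple visualization tool” that revealed the ordered waves of mesoderm cell division.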

 

Work described in a 2010 Science paper by Nicolas Olivier et al. shows how far the pipeline has come, Supatto says, but also shows its limitations. Olivier and his colleagues combined multiple experiments to reconstruct a virtual zebrafish blastula. The resulting visualizations show lovely balls of cells with patches of color, each representing a cell lineage. But they could only go so far before some of the cells became hidden from view. “That points up a real visualization challenge,” he says.

 

Despite such daunting issues, Supatto is inspired by others. “In some fields, people are really a step ahead in managing multidimensional data and maybe we have to steal some of their tools.” For instance, he says, interactive tools used in the gene network field might be adapted to visualize cell lineages during embryo development and to follow cell movement and gene expression during the entire development process. “The community of developmental biologists using such approaches is growing fast but still small,” Supatto says, “so it’s on us to check what other disciplines are using and adapt.”

 

Visualization Pipelines Go to the Heart of the Matter

Physicians are now validating the use of this 3-D model animation for placing a defibrillator in a child’s torso. Here we see the thorax of the particular patient (derived from MRI and CT scans) and the electrodes to be placed (at right). The physician can move these electrodes into position, decide where they should go in the heart, and then save that placement in order to run a simulation that predicts what type of shock would be created. The shock’s effects are then visualized in several ways—in cross sections of the torso as well as in graph form. Image courtesy of Rob MacLeod.

At the University of Utah, researchers have developed several visualization pipelines that can bring patient-specific visualizations to the clinic and to patients themselves. “That’s a big change over five years ago,” says Rob MacLeod, PhD, associate director of the Scientific Computing and Imaging Institute (SCI) at the University of Utah.

 

For example, in children at risk of going into fibrillation—a condition in which the heart no longer produces strong, rhythmic contractions—finding a good location to implant a relatively large defibrillator to shock (and essentially reset) the heart can be a challenge. Physicians have resorted to a variety of spots, including near the left shoulder blade or in the abdomen or pelvis, but have had little real guidance in making the placement decision. To help out, SCI developed a pipeline to predict the best defibrillator location for a specific patient. Prior to the procedure, the physician can place the electrodes of the defibrillator in a virtual visualization of the patient’s chest and run a simulation to predict and visualize their effect. “This is done in real time while showing the voltage gradient and current density,” MacLeod says. “It’s a 3-D interaction in time and space.” And although the system is still going through validation, MacLeod says, “physicians are coming to us with their tricky cases and asking us to create a model.”

 

Another example involves the use of ablation (essentially intentional scarring) to treat atrial fibrillation (a-fib). Again, physicians need help deciding where to ablate the heart muscle. SCI developed a pipeline in which the heart MRI of someone with a-fib goes through the entire visualization process and ends up in an iPad-based format that lets the physician explain to the patient what needs to be done. But MacLeod and his colleagues are trying to take this a step further: They want to integrate these a-fib patient-specific images with real-time fluoroscopy to guide physicians during ablation procedures. “The previously recorded imagery would be in sync, registered, and moving just as the fluoroscopy camera moves,” MacLeod says. So far, this approach is being used in research on mice.

 

A key aspect of Utah’s success with these projects comes from teamwork and an integrated approach, MacLeod says. “There can’t be a barrier between visualization people and those collecting, digesting and using the data.”

 

Visualizing the Future

Advances in molecular visualization suggest a future in which molecular dynamics can be visualized as dramatically as a Hollywood movie; cell movements can be tracked through time and space; and physicians can interact with patient-specific images in real time during interventional procedures. It’s a future that is now.


