Life in Motion: Simulation from Particles to People

Computational simulations of life in motion at every scale—molecular, cellular, tissue-level, and whole organism—are boosting our understanding of the role mechanics plays in controlling life.

From atoms and molecules to insects, dinosaurs, and humans, computational researchers are finding that much of life can be understood in mechanical terms. Indeed, the machines of life are well-tuned.

 

“Because nature has evolved forms that naturally have the desired functions, you don't have to bend over backwards to steer, control or coerce the structures to do their jobs,” says Russ Altman, MD, PhD, chair of the bioengineering department at Stanford University. “They do them naturally.” And that’s true at all scales, as computational researchers are discovering.

 

At the most basic level, charged atoms push and pull on one another to control the inner workings of every living thing. Cellular machines called ribosomes use ratchets and springs to translate coded messages into the workhorses of the cell: proteins. And small movements made by these proteins act as cellular signals that give directionality and function to developing tissues. Combinations of tissues then produce appendages designed to carry entire organisms, including humans, through the natural environment.

 

In a feedback loop with laboratory experiments, computational simulations of life in motion at every scale—molecular, cellular, tissue-level, and whole organism—are boosting our understanding of the role mechanics plays in controlling life. Such simulations were the focus of the “Life in Motion” symposium sponsored by Stanford’s Simbios Center and BioX Program this past October. That symposium served as the foundation for the ten stories presented here.

 

Jiggling Molecules

"Everything that living things do can be understood in terms of the jiggling and wiggling of atoms," said Richard Feynman in his seminal 1963 Lectures on Physics. Guided by the laws of physics, the atoms that make up the molecules of life assemble themselves into essential forms to do a wide variety of tasks.

 

This graphic shows several steps in a coarse-grained molecular dynamics simulation of a lipoprotein nanodisc assembly. At the start, two semi-circular membrane scaffold proteins (brown) are surrounded by randomly scattered lipids (small tailed objects shown here in a different color at each step in the simulation). During a 10 microsecond simulation, the lipids quickly glom together. The fusion of these lipid micelles draws the two protein strands (brown) together, eventually forming a single lipoprotein particle. The aggregation, driven by hydrophobic effects, is followed by a much slower protein tertiary structure rearrangement. Eventually the protein strands rearrange themselves to form a double-belted nanodisc. Courtesy of Klaus Schulten.

Today, researchers can observe those molecules interacting by simulating them on a computer. Using this approach, known as molecular dynamics, computational researchers are going beyond what experimentalists can do to understand the way life works at the nanoscale, says Klaus Schulten, PhD, professor of physics at the University of Illinois at Urbana-Champaign.

 

Schulten and his colleagues have simulated interactions among all of the atoms in a variety of biomolecular systems ranging in size from water channels and lipoproteins (on the order of 10^5 atoms) to an entire virus particle (10^6 atoms) and, most recently, a bacterial flagellum (10^9 atoms).

 

“With computation, we can take experimental data with limited meaning and, using what I call a ‘computational microscope,’ turn it into information about the chemical structure of the system under investigation,” Schulten says.

 

For example, when Schulten’s group simulated the structure of a water channel, they learned that water molecules pass through the channel in a very specific orientation: oxygen first. X-ray crystallography—a standard experimental method for studying atom arrangements—could not determine the orientation of the molecules, Schulten says. “Computation gives additional insight into the system.”

 

Sometimes Schulten and his colleagues beat experimentalists to the punch. For example, in 1999, their molecular dynamics simulations of the largest known protein, titin, explained how the protein gives muscles stretchability—under force, nine hydrogen bonds are disrupted in a reversible way.

 

Three years later that finding was confirmed in a lab. Likewise with ankyrin, a molecule that’s important for hearing. Simulations showed that the protein was a very soft spring. “It was very stretchable in the computer,” Schulten says. “Put a feather on it and it stretches to the ground.” The computational results were published before the experimental results came in. “The lab researchers confirmed the computational work,” he says.

 

Recently, Schulten’s group has been taking steps toward simulating larger systems. “We’ve moved from single protein sports in the cell to describing team sports,” Schulten says.

 

Lipoproteins, which contain both proteins and lipids, cannot be crystallized because there is so much disorder in the lipids. “To get a picture of the molecule, you need the computer,” Schulten says. But simulating the assembly of the lipoprotein molecule at atomic resolution would have required simulations in excess of 100 milliseconds—more than their computer could do. So the team simplified the system—a process known as coarse-graining—to effectively cut down the number of elements. A series of pictures showing the self-assembling lipoprotein appears above.
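In general terms, coarse-graining means replacing groups of atoms with single beads—often placed at each group's center of mass—so that far fewer particles have to be tracked. The sketch below (Python) illustrates only that mapping step; the fixed four-atoms-per-bead grouping and the data layout are assumptions made for the example, not Schulten's actual coarse-grained model.

```python
import numpy as np

def coarse_grain(positions, masses, atoms_per_bead=4):
    """Map groups of atoms onto beads at each group's center of mass.

    positions : (N, 3) array of atomic coordinates
    masses    : (N,) array of atomic masses
    Returns bead coordinates and bead masses. Grouping atoms by array order
    is purely illustrative; real coarse-grained models (e.g., for lipids)
    group atoms by chemistry, not by index.
    """
    n_beads = len(positions) // atoms_per_bead
    bead_xyz = np.empty((n_beads, 3))
    bead_mass = np.empty(n_beads)
    for b in range(n_beads):
        sl = slice(b * atoms_per_bead, (b + 1) * atoms_per_bead)
        m = masses[sl]
        bead_mass[b] = m.sum()
        bead_xyz[b] = (positions[sl] * m[:, None]).sum(axis=0) / m.sum()
    return bead_xyz, bead_mass

# Toy usage: 8 atoms of unit mass collapse to 2 beads.
xyz = np.random.rand(8, 3)
beads, bmass = coarse_grain(xyz, np.ones(8))
print(beads.shape, bmass)   # (2, 3) [4. 4.]
```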

 

Schulten’s team is currently using molecular dynamics simulations to generate movement of a bacterial flagellum. They’ve already created an all-atom model and simplified it using coarse graining. “Now we hope to rotate the entire flagellum around the base to see what kind of motions we get,” Schulten says. “It’s a very challenging 10 microsecond rotational period, and we’re not done yet.”

 

Molecular Movement Takes Time

One of the challenges of simulating moving particles is that movement takes time, and simulating over time requires significant computational resources. For example, with 20,000 processors, Schulten says he can simulate 100 nanoseconds of molecular movement a day, and he needs 100 days to get to the microsecond level.
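The arithmetic behind that bottleneck is simple: wall-clock time scales in direct proportion to the simulated time you want. A quick check, using the 100-nanoseconds-per-day throughput Schulten cites (the target times below are just examples):

```python
def days_needed(target_ns, throughput_ns_per_day=100.0):
    """Wall-clock days to reach a target simulated time at a given throughput."""
    return target_ns / throughput_ns_per_day

print(days_needed(1_000))    # 1 microsecond   -> 10 days
print(days_needed(10_000))   # 10 microseconds -> 100 days
```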

 

The headpiece of villin, an actin-binding protein, makes for interesting molecular dynamics simulations because it is quite small and folds quickly. That speed allowed Pande’s group to obtain thousands of trajectories showing the headpiece folding and unfolding while continuing to make progress toward its final state. Courtesy of Vijay Pande.

But Vijay Pande, PhD, associate professor of chemistry and of structural biology at Stanford, wants to cover longer time scales than the typical nano- and micro-second simulation. He’s interested in processes that take milliseconds or seconds—or even protein activation, which can take minutes or hours. “Our interest has been to push as hard as possible into this area,” he says. “If you have something that takes a millisecond but you’re simulating it for 1,000 times less time, you’ll probably be missing things.”

 

Pande points to simulations of the headpiece for the protein villin. Initially, the simulated records of each particle’s motion over time (known as trajectories) covered only brief stretches, with gaps between them. It was impossible to make sense of what was happening. “Now that we have thousands of trajectories, each on these long time scales, we can see what it looks like,” Pande says. And the details matter: In a movie, the headpiece folds, unfolds a bit, tries again, gets some things right but maybe not all, unfolds a bit again, and so on until eventually it makes its final shape. Yet, says Pande, “Every step of the way it creates more and more native-like structure.”

 

Perhaps the only general statement one can make about protein folding, Pande says, is that it’s a stochastic process, meaning it involves chance or probability. “Trying to understand this means we might want to rethink how we simulate dynamics,” Pande says. “The question is: if a handful of trajectories don’t really describe the system, how are we going to capture all the complexity and interest that even a small protein molecule might have?”

 

So Pande suggests a paradigm shift. Instead of running simulations evenly, giving each trajectory equal attention, he proposes using Bayesian statistical methods to figure out which areas really need to be simulated longer in order to gain insight. This can yield huge speed increases, making protein folding simulations 10 to 1000 times faster.
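Pande's group builds statistical models from the pooled trajectory data; the sketch below illustrates only the general adaptive-sampling idea—spend new simulation time where the statistics are thinnest rather than extending every trajectory equally. The state labels, the counts, and the "favor rarely visited states" rule are invented for the example and are not his algorithm.

```python
import random
from collections import Counter

def choose_restart_states(observed_states, n_new, rng=random):
    """Pick states from which to launch new trajectories.

    Instead of extending every trajectory equally, favor states we have
    visited least often -- a crude stand-in for 'most statistically
    uncertain'. observed_states is a list of state labels seen so far.
    """
    counts = Counter(observed_states)
    states = list(counts)
    # Weight each known state inversely to how often we've sampled it.
    weights = [1.0 / counts[s] for s in states]
    return rng.choices(states, weights=weights, k=n_new)

# Toy usage: state "C" is rarely visited, so most new runs start there.
seen = ["A"] * 50 + ["B"] * 30 + ["C"] * 2
print(Counter(choose_restart_states(seen, n_new=20)))
```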

 

To magnify that speed increase, Pande relies on large amounts of computing power—specifically, grid computing. He has 250,000 processors participating in his distributed computing program, including graphics processors and Sony’s PlayStation® 3 consoles—which he says give a 20 to 50 times speed increase. By combining this with Bayesian methods, “we hope to get millions of times the speed of what you can do with one computer,” he says.

 

The Ratcheting, Springing, Ejecting Ribosome

The ribosome is a magnificent piece of cellular machinery. It’s where the DNA code (carried by messenger RNA) gets translated into useful proteins. Three different kinds of molecules move through the ribosome during this process: messenger RNA (mRNA); numerous transfer RNAs (tRNA) (each one carrying an amino acid); and the growing protein that’s being assembled from those amino acids. For each pathway through the ribosome, unique mechanical features help ensure an efficient and accurate process.

 

Studying the dynamics of such a molecular machine is a complex business. But by combining cryo-electron microscopy, or cryo-EM, with an array of interdisciplinary methods, researchers have made tremendous headway. Joachim Frank, PhD, professor in the School of Public Health and Biomedical Sciences at State University of New York, Albany, is one important contributor.

 

Frank’s team starts with a soup of ribosomes and other essential ingredients for translation including mRNA, amino-acid-carrying tRNAs, various elongation factors, and amino acids. Using antibiotics or other chemical means, the researchers stop the translation process at a particular step. In this manner, all the ribosomes in the sample are trapped in the same conformation—e.g., with tRNA snapped into place or not; or with mRNA in a particular stage of movement. The electron microscope then makes tens of thousands of projections of these ribosomes that must be assembled into a single 3-D map. “The job of the computer is to make sense of all these projections,” Frank says.

 

Here, the large ribosomal subunit is shown alone (the small subunit has been removed) in the process of translocation. On the left, three tRNAs sit in the three slots inside the ribosome: orange prepared to exit; green in the middle; and pink having just arrived and in a position to link its amino acid to the growing peptide chain. When (at right) elongation factor G (red) binds to the ribosome, it induces a ratcheting motion—the small subunit (not shown) twists down and away, which causes the mRNA to shift over by one codon. In addition, the three tRNAs shift toward their next position, as shown. When the subunits ratchet back, a spot opens up for a new tRNA to enter. Image courtesy of Joachim Frank and Haixiao Gao.

Although the resolution of the 3-D maps created with this cryo-EM approach is getting better and better, it’s still shy of atomic resolution. So Frank’s team then docks existing crystal structures into the cryo-EM map. At this point, Frank says, “We have only discrete stops along the way. So we have to figure out what happens in between the snapshots.” Multiple interdisciplinary techniques then come into play: Kinetics data, molecular dynamics simulations, normal-mode analysis, single-molecule FRET, and other approaches create a more complete picture of the ribosomal molecular machine in motion.

 

The results have answered important questions about how the ribosome works, including the processes known as tRNA selection and translocation. In tRNA selection, the ribosome must choose among twenty different tRNAs each attempting to deliver a different amino acid package to the growing peptide. How does the ribosome choose the right one accurately? Frank’s cryo-EM analysis shows that the tRNA undergoes a conformational change when it enters the ribosome to try out its match: it gets bent into a molecular spring with high energy. Only with a match between the tRNA anticodon and the mRNA codon does the spring snap into place. “tRNA makes an enormous move going into its place in the ribosome,” Frank says.

 

The work also elucidated the process of how mRNA and tRNA move through the ribosome (translocation). When elongation factor G (EF-G) binds to a site between the ribosome’s large and small subunits, the small subunit moves in a ratcheting motion to one side and (after EF-G departs) back again. “There is an enormous movement of the bridges that connect the two subunits,” Frank says. As a result, mRNA shifts over by one codon and, at the same time, tRNA moves stepwise from one of three positions to the next.

 

“The ribosome’s dynamic properties follow from the molecular architecture,” Frank says. And that's something Vijay Pande has thought about in his investigations of the polypeptide’s exit tunnel through the ribosome. Pande wonders why the growing protein departs through the center of the ribosome and what interactions occur along the way.

 

To investigate, Pande began by considering tunnels generally. His team simulated a helical peptide inside different sized nanotubes with a small number of water layers. The result: “Most people would expect that you’d have a more stable helix in the smaller tube because you’re removing the unfolded states,” he says. “But there were fewer helical residues and more protein–water hydrogen bonds.” His hypothesis—in such a small space, water denatured the protein!
“Thinking about water is really important,” Pande says. “If you didn’t think about it explicitly here, you’d have gotten the opposite result.”

 

Taking that idea to the ribosome, Pande’s simulations show that protein helices mostly remain coiled inside the ribosomal tunnel. Currently, his group is simulating whether the polypeptide interacts with the ribosomal tunnel on its exit journey. “Perhaps the ribosome itself goes through some changes during the process,” he says. “Even residues down at the end of the tunnel might affect what’s going on higher up.” Results are expected soon.

 

The Stretching Cytoskeleton

Inside a cell, the cytoskeleton creates a scaffold for essential activities such as cell migration, division and signaling. To function well, it must be flexible and stretchable. But we have a poor understanding of how it works mechanically.

 

To study the cytoskeleton’s mechanical properties, Roger Kamm, PhD, professor of mechanical engineering and biological engineering at Massachusetts Institute of Technology, uses both experimental and computational approaches.

 

As you increase strain on a cell, Kamm says, cytoskeletal filaments get stiffer, a process known as strain hardening. Eventually, these filaments reach a critical point and there’s a sudden drop in stiffness; the material gets much more fluid-like. But what, Kamm wondered, causes both the hardening and the drop?

 

A simulated cytoskeleton shown without (left) and with (right) shear stress applied. The green filaments (at right) are the ones that "feel" the shear stress. Courtesy of Roger Kamm.

To explore that question, Kamm and his colleagues developed an actin cytoskeleton in silico consisting of a network of actin filaments in a 500 nanometer cube. And because strain hardening only occurs in experimental networks with cross-linking proteins, they threw two cross-linkers into the simulation—one that connects the filaments in parallel, and another that joins them at right angles. “The nice thing is that we can start simple and add in different types of cross-linkers as we go,” he says. Taking a slice through the cube, the network appears reasonably similar to that seen in slices through real cells.
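One simple way to see which filaments "feel" an imposed shear—like the green filaments in the figure above—is to treat each filament as a straight segment, apply an affine shear to the cube, and ask how much each segment is stretched. The sketch below does only that geometric bookkeeping; the random filament placement and the displacement rule are assumptions for illustration, not Kamm's mechanical model, which also includes the cross-linkers and filament mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 500.0                      # cube edge, nanometers
n_filaments = 200

# Each filament: two endpoints placed at random inside the cube.
ends = rng.uniform(0.0, L, size=(n_filaments, 2, 3))

def shear(points, gamma):
    """Affine simple shear: displace x in proportion to height z."""
    sheared = points.copy()
    sheared[..., 0] += gamma * points[..., 2]
    return sheared

def stretch_ratios(ends, gamma):
    """Length of each filament after shear divided by its rest length."""
    before = np.linalg.norm(ends[:, 1] - ends[:, 0], axis=1)
    after_pts = shear(ends, gamma)
    after = np.linalg.norm(after_pts[:, 1] - after_pts[:, 0], axis=1)
    return after / before

ratios = stretch_ratios(ends, gamma=0.2)     # 20% shear strain
print("filaments stretched more than 5%:", int((ratios > 1.05).sum()))
```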

 

Kamm’s group then modeled the in silico matrix with and without shear stress. The key result: “At higher concentrations of cross-linking protein, you get this dramatic strain stiffening behavior,” Kamm says. On the other hand, he didn’t see the catastrophic drop in stiffness seen experimentally. For this, the cross-links need to rupture under force, an effect that is now being incorporated into the model.

 

Eventually, Kamm hopes this kind of iterative research using experimental and computational approaches will lead to a better understanding of how forces are transmitted across the cell membrane and within the cell.

 

 

One-Way Tissue

Many types of cells and tissues develop a kind of directionality called cell polarity: certain events happen toward one end of the cell or tissue. When disrupted in humans, a variety of disorders may result: congenital deafness, respiratory diseases involving cilia, neural tube defects, and even cancer.

 

The hairs on a fly’s wing grow from the distal side of each cell (the side away from the fly’s body), displaying what’s called planar cell polarity. Courtesy of Claire Tomlin.

To study cell polarity, some researchers turn to the little hairs that grow from the distal (far) side of each cell on flies’ wings. “How does a single cell, in the midst of thousands of cells, identify the distal side?” asks Claire Tomlin, PhD, professor of aeronautics and astronautics at Stanford University and professor of electrical engineering and computer sciences at the University of California, Berkeley. To address that question, Tomlin uses mathematical models to bridge from molecular-level understanding to tissue-level effects.

 

In this model of a fly’s wing, some cells (in the center of this image) contain a nonfunctioning clone of the gene “frizzled.” As a result, hairs (white triangles) grown from neighboring cells point toward the Fz-deficient cells. Courtesy of Claire Tomlin.

A fly’s wing hairs form between 18 and 34 hours after the pupa forms. Before 18 hours, key proteins in the cell are homogeneously distributed. After that, they localize—two (Dsh and Fz) to the distal side and two (Pk and Vang) to the proximal side. The hairs form where Dsh concentrations are highest. But various mutants exhibit unusual characteristics. For example, mutant cells with no Fz grow more than one hair from the cell’s center. In addition, wild-type cells surrounding those mutants grow hairs pointing toward the mutants, suggesting some sort of signaling between cells.

 

Jeff Axelrod, PhD, associate professor of pathology at Stanford University, proposed a feedback model among the various players to explain this and other mutant outcomes. But the model was controversial. Some felt it couldn’t explain certain phenomena such as why cells that over-express Pk produce increased Dsh at the boundary.

 

Enter Tomlin, who developed mathematical models to determine if Axelrod’s model was plausible. She used continuous partial differential equations to represent the observed diffusion.
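Tomlin's actual equations are more elaborate, but the underlying recipe—divide the tissue into cells, let protein concentrations diffuse between neighbors, and add local reaction or feedback terms—can be sketched in a few lines. Everything specific below (a single generic protein, the parameter values, the plug-in feedback term) is a placeholder for illustration, not her model.

```python
import numpy as np

def step(u, D=0.1, dt=0.01, feedback=None):
    """One explicit Euler step of du/dt = D * d2u/dx2 + f(u) on a row of cells.

    u holds one concentration value per cell; no-flux boundaries are imposed
    by padding with the edge values. The feedback term f(u) is where a real
    model would couple the different proteins to one another; it is left as
    a plug-in here.
    """
    padded = np.pad(u, 1, mode="edge")
    laplacian = padded[2:] - 2.0 * u + padded[:-2]
    reaction = feedback(u) if feedback is not None else 0.0
    return u + dt * (D * laplacian + reaction)

# Toy usage: a small bump of protein diffuses along a row of 50 cells.
u = np.full(50, 0.5)
u[25] += 0.2
for _ in range(2000):
    u = step(u)
print(u.round(3))
```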

 

Simulations of various knockout scenarios produced results just like those seen in the lab. Even the controversial result was captured: Overexpressing Pk brought more Dsh to the boundary. The lesson, Tomlin says: “Feedback can be nonintuitive.” Here, it turned out that overexpressed Pk led to overcompensation by the rest of the feedback loop.

 

Tomlin’s model also performed remarkably well in a blind challenge. A researcher in England asked her to attempt a variety of “funky” knockouts. After the simulations were complete, he showed her his lab results for the same experiments. They all matched. “It supports the plausibility of the model,” Tomlin says. “And as we get more and more information from experiments to guide our model, our model can also help guide experiments.”

 

Swimming Larvae in the Natural World

On a larger scale, some researchers study entire organisms moving in their natural environments. “They evolve in the messy natural world,” says Mimi Koehl, PhD, professor of integrative biology at the University of California, Berkeley. “So we should look at how organisms interact mechanically with the world around them.”

 

Koehl has studied how crabs move in and out of water (they crouch lower and wider in water) and how lobsters gather odors from the environment (they flick their antennules to “sniff” the surrounding water). Recently, she teamed with some engineers to study whether sea slug larvae have any control over where they land on the sea floor. Ocean currents carry these microscopic larvae, but these creatures need to land on a coral reef in order to metamorphose into their adult form.

 

“What if you’re really tiny and not a great swimmer?” Koehl asks. “How can you recruit a suitable habitat?”

 

In a laboratory dish, Koehl’s collaborator, Michael Hadfield, PhD, at the University of Hawaii, had seen that chemical cues can trigger sea slug larvae to settle on the bottom. But in nature where they drift over coral reefs, they have to contend with turbulence and waves. Wouldn’t the coral odors be washed away and wouldn’t larval behavior be overwhelmed by the ambient flow, Koehl wondered?

 

A computational larval sea slug (blue) finds its way to a coral reef by swimming when it senses no coral aroma (in black filament) and sinking when it senses such cues (in red water). As a result, the larva follows a spiraling path that enhances the chance of landing on the reef. This model was built by simulating realistic turbulence and waves in a laboratory wave tank. The black branching structures at the bottom of the image are corals exposed to turbulent flow. The yellow and orange filaments swirling around in the water above the corals are fluorescent dye leaching off the surfaces of the corals (just as the aroma of corals leaks out of corals in nature). A thin optical slice of the dye (aroma) plume is illuminated by a thin sheet of laser light. The brighter and lighter the dye in this image, the higher its concentration. The aqua line is the simulated trajectory of a microscopic larva of a sea slug carried in turbulent water flow. The trajectory was calculated using a computer simulation of the larval behavior (in odor-free water it swims; in water with coral-aroma it sinks) as well as the water movement (waves and turbulence), and the changing field of filaments of aroma swirling around in the water. Photo by M. Reidenbach; trajectory calculated by J. Strother. Courtesy of Mimi Koehl.

To mimic the conditions in nature, Koehl and her colleagues measured water flow over coral reefs and then worked with Jeffrey Koseff, PhD, and Matthew Reidenbach, PhD, at Stanford University to recreate a coral reef in a wave tank, complete with realistic wave movements and turbulence. They painted a fluorescent dye on the model corals to represent chemical cues released from coral surfaces. As the fluorescent dye dissolved into the water column, the researchers shined a skinny sheet of laser light vertically through the dye plume so that they could look at how the dye was distributed in the water on the fine scale that would be encountered by a tiny larva. The fine slice revealed that the fluorescent dye wasn’t merely a diffuse cloud. It was made up of fine filaments swirling around with odor-free water. “A tiny dot the size of a larva is going to be in no odor, then odor, on-off, on-off as he swims around through this plume,” Koehl says.

 

What do larvae do in response to this situation? In filaments of odor-free water they swim; in filaments containing coral aromas (above a threshold concentration) they turn off their cilia, pull in their swimming gear, and sink. When larvae exit the cue, they resume swimming.

 

One question remained: Does this behavior help larvae to land on the reef? To study this, James Strother, an undergraduate physics student at the University of California, Berkeley, worked with Koehl to create a computational model of larvae over a reef. Mathematical larvae were placed in the video records of dye swirling over corals in waves. The larvae were programmed to sink if surrounded by a cue concentration greater than a pre-determined threshold, and to swim if immersed in a lower cue concentration. The larva’s velocity depended on its swimming or sinking speed plus the velocity of the waves and the effect of turbulence. “It’s carried by the fluid but also sinking or swimming by its own volition,” Koehl says.
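That behavioral rule is easy to state as an update equation: at each time step the larva's velocity is the local water velocity plus either an upward swimming velocity (no cue sensed) or a downward sinking velocity (cue above threshold). The sketch below implements just that rule on made-up flow and odor fields; the field functions, speeds, and threshold are placeholders, not the measured dye plumes or Strother's actual code.

```python
import math

SWIM_SPEED = 0.001     # m/s upward when no cue is sensed (placeholder value)
SINK_SPEED = 0.002     # m/s downward when the cue exceeds threshold (placeholder)
THRESHOLD = 0.1        # cue concentration that triggers sinking (placeholder)

def water_velocity(x, z, t):
    """Stand-in for the measured wave and turbulence field (m/s)."""
    return 0.05 * math.sin(2 * math.pi * (t - x)), 0.01 * math.cos(2 * math.pi * t)

def cue(x, z, t):
    """Stand-in for the swirling odor filaments: patchy, strongest near the reef."""
    return max(0.0, math.sin(8 * x + 3 * t)) * 0.5 * math.exp(-5 * z)

def step(x, z, t, dt=0.05):
    ux, uz = water_velocity(x, z, t)
    behavior = -SINK_SPEED if cue(x, z, t) > THRESHOLD else SWIM_SPEED
    return x + dt * ux, max(0.0, z + dt * (uz + behavior))   # z = 0 is the reef

# Track one larva released 20 cm above the reef.
x, z = 0.0, 0.2
for i in range(4000):
    x, z = step(x, z, t=i * 0.05)
    if z == 0.0:
        break
print(f"after {i * 0.05:.0f} s: x = {x:.2f} m, height above reef = {z:.3f} m")
```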

 

In the simulation, the researchers saw larvae following a spiraling trajectory, eventually landing on the reef. When they calculated the trajectories of thousands of larvae they found the settlement rate into the reef increased about 30 percent because of the larvae’s sink/swim behavior. Larval behavior made a statistical difference to larval survival. “Even if you’re a tiny, weak swimmer, you can bias how the environment moves you,” says Koehl.

 

Crawling Creatures Unplugged

Running creatures with two, four, and six legs veer toward stability. Indeed, they seem mechanically designed to cope with varied and unpredictable terrain and to recover from trips and jolts that disrupt them along the way.

 

To understand how runners achieve such stability, one might assemble a model of multiple skeletal supports, hundreds of muscles and millions of neurons. But that would be an extremely complex model with many variables. Moreover, it would be hard to make any general statements about what’s going on.

 

So the system should be simplified down to its essence, says Robert Full, PhD, professor of integrative biology at the University of California, Berkeley. But that essence should be anchored in a realistic physical representation of an animal.

 

The essence of running—distilled down to a simple, dynamical system in one plane—can be represented by a pogo stick. “It’s a mass sitting on top of a spring,” Full says. “And it’s the same for two, four, six or eight-legged animals.”
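In its simplest one-dimensional form, that pogo-stick picture is a point mass that falls freely, compresses a leg spring when it reaches the ground, and gets flung back up. The sketch below integrates that bouncing motion; the mass, stiffness, and leg length are arbitrary placeholder values, and the lateral-plane behavior Full studied (described next) is not included.

```python
# Minimal 1-D "pogo stick" hopper: ballistic flight plus spring-loaded stance.
# All parameter values are arbitrary placeholders for illustration.
M = 1.0        # body mass, kg
K = 2000.0     # leg-spring stiffness, N/m
L0 = 0.2       # rest leg length, m
G = 9.81       # gravity, m/s^2
DT = 1e-4      # time step, s

def simulate(y=0.3, vy=0.0, steps=20000):
    apexes = []
    prev_vy = vy
    for _ in range(steps):
        # The spring pushes only while the leg is compressed (y < L0).
        spring = K * (L0 - y) if y < L0 else 0.0
        a = spring / M - G          # F = ma
        vy += a * DT
        y += vy * DT
        if prev_vy > 0.0 >= vy:     # just passed an apex of the hop
            apexes.append(y)
        prev_vy = vy
    return apexes

print([round(h, 3) for h in simulate()[:5]])   # successive hop heights
```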

 

At top, a cockroach moves across irregular terrain containing obstacles three times its leg height. At bottom, a biologically-inspired hexapedal robot successfully traverses a scaled-up version of the same landscape without any sensory control. From Koditschek, et al., Arthropod Structure & Development 33 (2004) 251–272, with permission from Elsevier. Courtesy of Robert Full.

In addition to a vertical springing motion, running involves movement in the horizontal plane, Full says. He collaborated with a math colleague at Princeton to produce a spring-mass model that bounces side to side as well as up and down. Then they perturbed the model. The result: heading, velocity, orientation and rotational velocity all remained stable. “This is a passive mechanical self-stabilizing system with almost no neural feedback,” says Full. “The stabilization is built into the tuning of the mechanical system.”

 

His team then tested the spring-mass model by gradually adding physical characteristics of animals—first legs, then a simple muscle model and some damping, and finally some programmable leg and hip positions that could control joint torque. Each addition revealed a new characteristic—stabilization with respect to inertia, then speed, then stride. The lesson: animals opt for a combination of speed and stride frequency based on stability. “The most stable region is where the animal actually functioned. It didn’t venture into the unstable regions,” Full says.

 

A postdoc in Full’s lab went on to produce a variety of hopping models and learned that two legs add lateral stability; three increase stability in all directions. “Morphology makes a big difference with respect to control,” Full says.

 

The next step was to add a model of neural control. Full's team had experimented with real animals—cockroaches wearing jet packs or tripping over a step—and found that little neural control was needed to respond to perturbations. So they expanded their model to include only a very simple neural control model: an oscillator (one for the whole system or one for each leg) coupled to a mass supported by legs.

 

This modeling and animal experimentation inspired the design of a physical model, a robot. The insect-like robot consisted of six springy legs coupled to a body without any external neural sensing. Yet, surprisingly, it is remarkably stable and can negotiate varied terrain, including climbing steps. Most of the control resides in the body and not the brain, Full says.

 

Running Dinosaurs

Like Full, John Hutchinson, PhD, a lecturer at the Royal Veterinary College, University of London, uses simplified models to understand animal movement. But that’s in part because he has to: The animals he studies are extinct.

 

Specifically, he’s interested in theropod dinosaurs, a group that walked on only two legs and includes the largest bipeds that have ever lived. Living animals appear to have a speed limit—at a certain size, getting bigger no longer means getting faster. Hutchinson wants to know: Was this true for Tyrannosaurus rex? Could this massive dinosaur run?

 

John Hutchinson created these muscle-activated models (from left to right above) of an ostrich, elephant and Tyrannosaurus rex in order to compare their running ability. When a typical living animal runs fast, the ground reaction force (at midpoint stance) peaks at about 2.5 times body weight. Hutchinson looked at how much muscle the living animals would need in order to sustain that force. He found that birds—small and large—all had enough muscle to run. In addition, large birds such as the ostrich could run quickly because they have big muscles, good muscle leverage and use straight legs, which give them a mechanical advantage. Elephants could also run. But T. rex doesn’t appear to have been able to sustain a ground reaction force of 2.5 times body weight. Courtesy of John R. Hutchinson.

Empirical data give limited information about dinosaur locomotion. Skeletons can tell researchers what positions the animals couldn’t take—for example, poses that would disarticulate the knee. Footprints allow some estimates of dinosaur speeds. And comparisons to living animals also give some clues. However, in the case of T. rex, “There are no six ton bipedal land animals alive today,” Hutchinson says.

 

So he uses computer modeling and simulation to go beyond what he can see in fossils, footprints, and analogous live animals.

 

Initially, he created a simple model to determine how much muscle mass it would take for T. rex to sustain the ground reaction force of normal running (2.5 times body weight). “No matter what posture we put into the model, T. rex could not have carried enough muscle to run quickly—even if you gave it incredibly big muscles,” Hutchinson says. According to the model, T. rex would max out at no more than 15 to 25 miles per hour.

 

Next, he created more complex 3-D models. He started with 6 million possible poses and then eliminated the unlikely ones based on principles of how living animals move. For example, he imposed reasonable limits on such things as limb bending, muscle mass, and the size of the moment arm about each joint. After carving down the possibilities, 3,000 poses (0.05 percent) remained that could sustain 1.5 times body weight (at the boundary between walking and running). These were fairly straight-legged poses, like those seen in living large mammals.

 

He then used this model to simulate T. rex at the midpoint in a walking stride and asked the computer: How much ground reaction force could have been produced at the foot per one unit of muscle force? It’s a measure of mechanical advantage.
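At a single joint, that ratio comes down to moment arms: static balance requires the muscle force times its moment arm to match the ground reaction force times its moment arm, so ground force per unit muscle force is simply the ratio of the two moment arms, and the limb as a whole is limited by its weakest joint. The moment-arm numbers below are invented for illustration; Hutchinson's models handle many joints, 3-D geometry, and posture-dependent moment arms.

```python
def effective_mechanical_advantage(joints):
    """Ground force produced per unit muscle force, joint by joint.

    joints: list of (name, muscle_moment_arm_m, grf_moment_arm_m).
    Static balance at one joint requires
        F_muscle * r_muscle = F_ground * R_grf,
    so F_ground / F_muscle = r_muscle / R_grf. The limb as a whole is
    limited by its weakest joint.
    """
    ratios = {name: r_m / R for name, r_m, R in joints}
    return ratios, min(ratios.values())

# Invented moment arms (meters) for a crouched versus a more erect pose.
crouched = [("hip", 0.30, 0.90), ("knee", 0.15, 0.60), ("ankle", 0.20, 0.50)]
erect    = [("hip", 0.30, 0.45), ("knee", 0.15, 0.25), ("ankle", 0.20, 0.30)]

for label, pose in [("crouched", crouched), ("erect", erect)]:
    per_joint, limiting = effective_mechanical_advantage(pose)
    print(label, {k: round(v, 2) for k, v in per_joint.items()}, "limit:", round(limiting, 2))
```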
 

What he found: As the T. rex posture gets more erect, the mechanical advantage goes up until it reaches a plateau. “It’s possible that perfectly straight legs aren’t that much better than a little bit of bend in the legs,” he says.

 

For dinosaurs, so much is unknown—the posture, the moment arms, and the dimensions of the animal’s muscles and bodies. But, Hutchinson says, “The unknown values have to be within some range....So perhaps it doesn’t matter what we assume, but what we do with an assumption and how much we vary it. The modeling tools make all this careful inquiry possible; otherwise we'd just be left guessing.”

 

 

Animating Realistic People on the Go

Jessica Hodgins, PhD, professor of computer science and robotics at Carnegie Mellon University, has no problem gathering ample data about her subjects: humans. But, like Hutchinson, she relies on a process of elimination to rule out unlikely human movements in favor of realistic ones. Her goal: to create better computer animations of people and make it easier for casual computer users to create such animations. Ultimately, she'd like to see her work have some practical impact. For example, it might help clinicians plan and implement physical therapy programs.

 

In 1995, Hodgins made her first efforts to simulate human motion. She relied on models of simple springs that were proposed in the biomechanics literature at the time; her own observations of people; and a healthy dose of intuition. The resulting animations weren’t terrible, but it was easy to see they weren’t realistic. “Our standards for human motion are really high,” she says. “We know when something’s wrong.”

 

But this early effort taught Hodgins something: human control laws are hard to design. It’s not enough to get the physics right. Accurately determined forces impacting a rigid body do not automatically produce an appealing human character. Moreover, they make everyone look the same. “We don’t have a language for stylistic subtleties,” Hodgins says.

 

Jessica Hodgins captures the motions of living subjects and then manipulates them to produce realistic-looking movements in new situations. These snapshots, taken from an animation, show a woman walking on a balance beam, leaping between stepping stones, ducking under a bar, and then seating herself in a chair. Courtesy of Alla Safonova and Jessica Hodgins, Carnegie Mellon University.

She decided she needed to know more about how people move. So, starting in 2000, Hodgins created a motion capture lab. By placing numerous reflective markers on an individual person in motion and capturing the 3-D locations of those markers, she created a database of possible movements for a number of ordinary people as well as some professionals such as gymnasts and clowns.

 

Hodgins and her PhD student Alla Safonova could then organize the data in a different sequence to create an animated figure that moves in ways different from the original subjects. “A few examples of a given behavior can be generalized to quite different examples of that behavior and still look realistic,” Hodgins says.

 

For example, using limited motion capture data, she can realistically animate a person following a randomly chosen circuitous path while avoiding various unexpected obstacles. To do this, she takes each pose from the motion capture data and puts it into a graph data structure: Similar poses and velocities land in similar locations on the graph. She can then search the graph for a sequence of poses that approximates a person walking along a specified route. “Any path through this graph should produce natural motion,” Hodgins says.
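In outline, a motion graph stores each captured pose as a node, adds an edge wherever two poses (and their velocities) are similar enough to transition between smoothly, and then searches the graph for a sequence that satisfies the animator's constraints. The sketch below shows that data structure with a breadth-first search; the plain feature-vector poses, the distance threshold, and the goal test are simplifications for illustration, not the system Hodgins and Safonova built.

```python
from collections import deque
import numpy as np

def build_motion_graph(poses, threshold=0.5):
    """poses: (N, D) array of pose feature vectors (joint angles, velocities...).

    Consecutive frames are always connected; additional edges link any two
    poses whose feature vectors are closer than `threshold`, so playback can
    jump between originally separate motion clips.
    """
    n = len(poses)
    edges = {i: set() for i in range(n)}
    for i in range(n - 1):
        edges[i].add(i + 1)                      # original clip ordering
    for i in range(n):
        dists = np.linalg.norm(poses - poses[i], axis=1)
        for j in np.flatnonzero(dists < threshold):
            if j != i:
                edges[i].add(int(j))             # transition between similar poses
    return edges

def find_path(edges, start, is_goal):
    """Breadth-first search for any pose sequence reaching a goal pose."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if is_goal(path[-1]):
            return path
        for nxt in edges[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy usage: 100 random "poses"; find a route from frame 0 to frame 99.
poses = np.random.rand(100, 6)
graph = build_motion_graph(poses, threshold=0.6)
print(find_path(graph, 0, lambda node: node == 99))
```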

 

This procedure was quite efficient, but Hodgins made it even more efficient by reducing the number of possible poses to only those that look natural, and then interpolating between them. The physics are still correct for the full behavior, but the only motions the character can make are selected from a smaller set of possible motions.

 

Comparing animations based on the full motion capture data set with animations based on the low dimensional set, there’s almost no difference. “So we’re getting a lot of generality out of limited data,” Hodgins says.

 

The team also worked to optimize the range of possible movements. For example, a person approaching an object to pick it up might bend over too soon or too late, looking unnatural. To find the most realistic options, Hodgins’ group did a significant amount of culling of non-optimal trajectories on the graph. “Optimal solutions look so much more natural,” she says.

 

But the movement of soft tissue is still missing from these rigid body animations: Muscles don’t jiggle and feet don’t compress with each step. Hodgins is now gathering data on such flesh and muscle movements by putting 400 sensors all over the bodies of weight lifters, belly dancers, and ordinary people with a variety of body types. Eventually she hopes to add models of these data to her animations.

 

Hodgins’ work is top down: grabbing data and trying to mine it for the principles of human motion. The other option is to find out how the system works from the bottom up. “It will be nice when we meet in the middle,” she says.

 

 

Standing Up, Learning, and Taking Trains

Demetri Terzopoulos, PhD, the Chancellor's Professor of Computer Science at the University of California, Los Angeles, takes a comprehensive “artificial life” approach to animating humans and lower animals in a realistic manner. His characters are built from a basic biomechanical framework in which physics-based concepts such as joint torques and gravity—and not motion capture data—control movements. On top of that, he uses machine learning techniques to build in additional capabilities—learning, perception, behavior, and cognition—so that his characters can act autonomously.

 

A simple example is that of an articulated skeleton that tries to remain standing while being pulled by a virtual rope tied to its middle. Sometimes the skeleton falls; sometimes it stays up. The yank is repeated until the character "learns" which responses enable it to remain standing. After this type of training, the character can react autonomously, making a protective step to avoid a fall, for example, or getting back up after tumbling to the ground.

 

Terzopoulos' simulations go beyond just the physics and locomotion to also mimic animal perception and behavior. The models used in these simulations have a biomechanical component, but also include a set of sensors and a brain with motor control, perception, behavior, and learning centers. With this more complex model, Terzopoulos and his students created a biomechanical model of fish that can learn how to swim. The fish also avoid collisions with other fish, forage for food, and engage in more complex behaviors such as mating.

 

Virtual pedestrians move through this virtual train station by making autonomous decisions. They automatically negotiate tight spaces, observe social conventions (staying to the right), and queue up at the ticket line. They know when their train will leave, and decide what to do while waiting—sit and read; watch some performers; or grab a bite to eat. Each individual is constructed of several layers: biomechanics, sensors, and a “brain” able to perceive and learn. Courtesy of Demetri Terzopoulos.

From this, Terzopoulos and his team developed a formulation of learning as an optimization problem. "It's trivially simple," he says. "Even a dumb animal can do it through trial and error." His program trained an artificial shark to swim by finding the most energetically efficient way to move given the physics of the environment (gravity in water) and the limits of its physique (e.g., muscle arrangement and strength). The shark begins by twitching, but soon, says Terzopoulos, "It discovers the proper gait given its body structure, and then refines it until it becomes an efficient swimmer."
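Framed as optimization, "learning to swim" means searching over a handful of gait parameters—say, the amplitude, frequency, and phase of periodic muscle activations—for the combination that moves the body farthest per unit of energy. The sketch below runs a crude hill climb against a stand-in scoring function; Terzopoulos's controllers optimize against a full biomechanical model, and the parameters and scoring function used here are inventions for the example.

```python
import math
import random

def performance(amplitude, frequency, phase):
    """Stand-in for 'distance traveled per unit energy' from a simulation.

    A real evaluation would run the biomechanical model; here an arbitrary
    smooth function with a single best gait plays that role.
    """
    distance = math.sin(amplitude) * math.exp(-(frequency - 2.0) ** 2) * (1 + 0.1 * math.cos(phase))
    energy = 0.5 + amplitude ** 2 * frequency
    return distance / energy

def learn_gait(iterations=5000, step=0.1, rng=random):
    """Trial-and-error hill climb: keep any random tweak that swims better."""
    best = [rng.uniform(0.1, 1.5), rng.uniform(0.5, 4.0), rng.uniform(0.0, math.pi)]
    best_score = performance(*best)
    for _ in range(iterations):
        trial = [p + rng.gauss(0.0, step) for p in best]
        trial[0] = max(0.01, trial[0])            # keep amplitude positive
        trial[1] = max(0.1, trial[1])             # keep frequency positive
        score = performance(*trial)
        if score > best_score:
            best, best_score = trial, score
    return best, best_score

params, score = learn_gait()
print("learned gait (amplitude, frequency, phase):", [round(p, 2) for p in params], round(score, 3))
```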

 

Most recently, Terzopoulos and his team have applied their "artificial life" approach to human characters in a virtual train station (a model of the original Penn Station in New York City). Using the same layers (locomotion, sensors and a brain, this time including a cognition center), his team created a realistic simulation of several thousand autonomous pedestrians commuting through the station. "Each pedestrian is a highly capable individual with things he or she must do, such as purchase a ticket and proceed to the correct train platform at the appropriate time," Terzopoulos says. It's then possible to sit back and watch the train station dynamics on a large scale. "It's order and disorder at the same time," he says. "It's highly complex."

 

Eventually, Terzopoulos would like to create a whole city of autonomous virtual humans. The UCLA Urban Simulation Lab has created a detailed 3-D model of Los Angeles. "Wouldn't it be wonderful to populate it with as many people as possible?" he asks.

 

The Versatility of F=ma

When researchers simulate life in motion, they rely on a powerful rule of nature: Newton’s Second Law of Motion. At all scales, the basic rule that force equals mass multiplied by acceleration (F=ma) helps predict how life moves.
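In practice, every one of those simulations advances positions and velocities by repeatedly applying F = ma over small time steps; only the force law changes from molecules to muscles to pedestrians. Below is a minimal, generic sketch of that loop (semi-implicit Euler, with gravity standing in for the force):

```python
def simulate(force, mass, x0, v0, dt, steps):
    """Advance one particle with F = m a using semi-implicit Euler.

    force(x, v, t) returns the net force; any force law can be plugged in.
    """
    x, v, t = x0, v0, 0.0
    trajectory = [x]
    for _ in range(steps):
        a = force(x, v, t) / mass      # Newton's second law
        v += a * dt                    # update velocity first...
        x += v * dt                    # ...then position (semi-implicit Euler)
        t += dt
        trajectory.append(x)
    return trajectory

# Toy usage: a 1 kg ball thrown upward at 5 m/s under gravity.
path = simulate(force=lambda x, v, t: -9.81 * 1.0, mass=1.0, x0=0.0, v0=5.0, dt=0.01, steps=100)
print(round(max(path), 2), "m peak height")   # about 1.27 m
```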

 

Simbios, a National Center for Biomedical Computing at Stanford, was founded on the premise that this commonality—F=ma—should allow development of a common software toolkit for creating simulations at all scales of life.

 

And that’s why the Simbios-sponsored “Life in Motion” symposium brought together the ten researchers described here. “It’s pretty impressive that such a vast array of problems can be attacked with essentially the same tool,” says Altman.

 

While highlighting the versatility of physics-based computer simulation, the symposium also fostered cross-fertilization among different disciplines that all use motion simulation. “This will help to build a meta-community of scientists with a common interest in understanding how biological matter moves, and how that motion can be simulated in computers,” Altman says.

 


