Resolution Limits of Optical Microscopy and the Mind

How precise an image can fluorescence microscopy provide?

As modern optics and cell biology have flourished in recent years, they’ve each driven innovation in the other. Yet commonly employed imaging techniques, such as fluorescence microscopy, have run up against fundamental limits of precision. We want to measure ever-smaller objects at ever-shorter time intervals, but the relatively long wavelengths of visible light are a barrier to how precisely we can observe the minute and rapidly changing biological world. We will explore these limitations and the role biocomputation can play to minimize their consequences in the context of fluorescence microscopy.


Figure 1: Two representations of the Airy disc. A point light source generates an Airy disc, not a point, in the focal plane of a lensing system, such as a pinhole camera. The magenta dots indicate the position of the source in the X-Y plane. The right side shows a calculated, high-resolution monochromatic image of an Airy disc.

The general formula in fluorescence microscopy is: (i) genetically modify a gene of interest to produce protein with a fluorescent tag, (ii) express this protein, and (iii) watch and learn from the resulting fluorescent images. While conceptually simple, physical limitations substantially obscure the picture. By analogy, imagine you want to understand the job of a particular kind of worker in a complex factory, but you are forced to observe from afar … very far. You might do this by gluing a bright, colored light on top of their head and watching where they go, with whom they interact, which machines they stand next to, and when they arrive and leave each day. However, if each protein in a cell were a factory worker, then fluorescence microscopy would be like watching the workers from 100 km away!


Many problems arise as you follow those little lights wandering around the factory. At times the light might be obscured or distorted, it might be too dim to see, it might burn out, or there might be so many lights that it is hard to follow any one in particular. Indeed, for fluorescence microscopy the list of factors limiting spatial and temporal resolution is long, though precision of one kind can often be traded for precision of another. However, at its foundation three immutable factors limit resolution:


1) Light from a point does not produce a point of light: Visible light emitted by a small (e.g., protein-sized) object propagates as a spherical wave.  As this light goes through a lensing system, the simplest example of which is an aperture (like a pinhole camera), the waves converge, focusing the light and generating a diffraction pattern at the image plane. In the case of a circular aperture, the resulting diffraction pattern is called an Airy disc, shown in Figure 1. The width of the Airy disc is related to both the size of the aperture, and importantly, the wavelength of light going through the aperture. The smaller the aperture or the longer the wavelength of light, the wider the Airy disc becomes. There is no manipulation of the aperture (or any other lensing system) that can transform this diffraction pattern into a single point of light. While the peak of the Airy disc does indicate the location of the light-emitting object, the width of the peak tremendously limits the measurement precision of molecular positions.
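For a circular aperture, the first dark ring of the Airy disc falls at an angle satisfying sin θ = 1.22 λ/D, and for a microscope objective this translates into the familiar Rayleigh resolution limit d = 0.61 λ/NA. A minimal sketch of the arithmetic (the wavelength and numerical aperture below are illustrative values, not tied to any particular instrument):

```python
import math

def airy_first_minimum_angle(wavelength_m, aperture_diameter_m):
    """Angular radius (radians) of the first dark ring of an Airy disc
    produced by a circular aperture: sin(theta) = 1.22 * lambda / D."""
    return math.asin(1.22 * wavelength_m / aperture_diameter_m)

def rayleigh_resolution(wavelength_m, numerical_aperture):
    """Smallest resolvable separation between two point sources for a
    microscope objective (Rayleigh criterion): d = 0.61 * lambda / NA."""
    return 0.61 * wavelength_m / numerical_aperture

# Green fluorescent light (~510 nm) through a high-NA oil objective (NA = 1.4):
d = rayleigh_resolution(510e-9, 1.4)
print(f"Rayleigh limit: {d * 1e9:.0f} nm")  # roughly 220 nm
```

Note the two trends described above: a longer wavelength or a smaller aperture (lower NA) widens the Airy disc and worsens the achievable resolution.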


Figure 2: Spatial resolution is limited by Airy disc width and noise. The left-hand side shows two point sources starting at the same X-Y position and then moved apart until two resolvable peaks appear over the Gaussian noise. The right side shows the gray-scale images corresponding to the distributions on the left side. The magenta dots indicate the X-Y position of the sources.

If two light-emitting objects are too close to each other, their individual Airy discs will not form two individually resolvable peaks, but instead a single asymmetrical peak. For their distinct intensity peaks to be clearly visible and measurable, the two objects must be farther apart than approximately a wavelength of light. However, if noise is added to the image, our ability to distinguish two closely spaced light emitters is further compromised, as shown in Figure 2. There are many possible sources of noise in fluorescence microscopy, but the one described below in particular cannot be eliminated.
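The merging of two nearby peaks can be illustrated with a simple one-dimensional sketch, approximating each Airy peak by a Gaussian (the widths and separations below are illustrative, not calibrated to a real microscope):

```python
import numpy as np

x = np.linspace(-3, 3, 601)  # 1-D coordinate across the image
sigma = 0.5                  # peak width (stand-in for Airy disc width)

def two_peaks(separation):
    """Intensity profile of two identical Gaussian peaks a given distance apart."""
    return (np.exp(-((x - separation / 2) ** 2) / (2 * sigma**2))
            + np.exp(-((x + separation / 2) ** 2) / (2 * sigma**2)))

def has_central_dip(profile):
    """Two peaks are resolvable only if the midpoint dips below the maxima."""
    center = profile[len(profile) // 2]
    return center < profile.max() * 0.99

print(has_central_dip(two_peaks(0.5)))  # False: the peaks merge into one
print(has_central_dip(two_peaks(2.0)))  # True: two resolvable peaks
```

For Gaussian peaks the transition occurs at a separation of about twice the peak width; below that, the sum is a single unimodal bump and the two sources cannot be told apart.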


2) Infrequent and random events are noisy: Measurements of any set of distinct, randomly occurring events, such as photons from a fluorescent molecule, are subject to a fundamental statistical noise called Poisson noise. Imagine your job is to count the number of cars that pass a certain spot on a rural road. The cars pass at random: arrivals do not depend on the time of day, nor on when the last car passed. At the end of one week of measurements, your boss asks how many cars pass in a day. You report that on average 29 cars pass that spot each day, but that on Monday 31 cars passed, on Tuesday 18, on Wednesday 38, and so on. That fluctuation about the mean is Poisson noise, and it is intrinsic to any uncorrelated, random process in nature, including fluorescence imaging.
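The car-counting thought experiment is easy to reproduce numerically. The sketch below uses NumPy's Poisson sampler to draw daily counts with a mean of 29 and confirms the Poisson hallmark that the variance of the counts equals their mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# One week of car counts on a road where cars pass independently
# at an average rate of 29 per day.
daily_counts = rng.poisson(lam=29, size=7)
print("Daily counts:", daily_counts)

# Over many days the defining property of a Poisson process emerges:
# the variance of the counts equals the mean.
many_days = rng.poisson(lam=29, size=100_000)
print(f"mean = {many_days.mean():.2f}, variance = {many_days.var():.2f}")
```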


Fluorescent proteins absorb a photon of one wavelength and then, a random (very short) time later, emit a single photon of a longer wavelength, and hence lower energy; the energy difference is lost as unmeasured heat. It is this emitted, lower-energy photon that the microscopist hopes to detect. How many photons reach our detector (e.g., a camera) per unit time from the random, uncorrelated process of photon emission? For such processes the mean and the variance are equal, and statistical theory tells us that the signal-to-noise ratio increases as √(kt), where k is the mean rate of photon arrival and t is the time for photon collection. The bottom line is that the longer we collect photons, the stronger the signal (i.e., the image) becomes relative to the noise.
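The √(kt) scaling can be checked with a short simulation (the rate k below is an arbitrary illustrative value):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 50.0  # mean photon arrival rate (photons/s); illustrative value

snr = {}
for t in (0.1, 1.0, 10.0):
    # Photon counts collected in time t are Poisson distributed with mean
    # k*t, so mean = variance and SNR = mean/std = sqrt(k*t).
    counts = rng.poisson(lam=k * t, size=200_000)
    snr[t] = counts.mean() / counts.std()
    print(f"t = {t:4.1f} s: SNR = {snr[t]:5.2f} (sqrt(k*t) = {(k * t) ** 0.5:5.2f})")
```

Collecting photons 100 times longer improves the signal-to-noise ratio only 10-fold, which is why long exposures yield diminishing returns.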


3) Photobleaching: You might think that if we simply wait long enough and collect enough photons, we can achieve an arbitrarily high signal relative to the noise, allowing us to locate molecules of interest with high precision. Sadly, one final, fundamental process thwarts us. To emit a photon, a fluorescent molecule must absorb a photon and enter an 'excited' state. Every so often, an excited molecule finds itself near another molecule (e.g., molecular oxygen) with which it can chemically react to produce a new molecule that is no longer fluorescent; this process is referred to as 'photobleaching.' Once a molecule has bleached, it will no longer emit visible-wavelength photons. Countering noise by shining more excitation light on a fluorescent molecule simultaneously increases the opportunities both for generating visible photons (i.e., brighter fluorescence) and for engaging in the destructive chemical reactions that cause it to bleach. Thus, for a given light level, fluorescent molecules bleach at some average rate that is, in general, beyond one's control.
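One simple way to model this is to assume the molecule bleaches with a small fixed probability on each excitation cycle, so its total photon output is geometrically distributed with mean 1/p: brighter illumination spends this photon budget faster but cannot enlarge it. A sketch under that assumption (the bleaching probability is a hypothetical value):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-cycle bleaching probability. Each excitation either
# yields a photon or, with probability p_bleach, destroys the fluorophore,
# so the photon count per molecule is geometric with mean 1/p_bleach.
p_bleach = 1e-4
photons_per_molecule = rng.geometric(p_bleach, size=200_000)
print(f"mean photons before bleaching: {photons_per_molecule.mean():,.0f} "
      f"(expected {1 / p_bleach:,.0f})")
```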


Figure 3: A classic example of apophenia. In July 1976 the Viking 1 orbiter took a low-resolution image of the Cydonia region of Mars (smaller image), sparking a controversy over whether the data 'clearly' showed a human face. In fact, it is essentially impossible not to see a face in this image; you are hard-wired this way. You are also hard-wired to connect points into lines and to perceive the motions of independent bodies as coherent, and you are terrible at assessing whether something is random. Thirty years later, higher-resolution images from the Mars Reconnaissance Orbiter clearly showed that no face exists on the surface (larger image).

The total number of photons we can hope to collect from a molecule is limited, hence the signal-to-noise ratio is limited, and hence our ability to localize the molecule is limited. So many limitations! No matter how we look at it, there is no way to create an image with an arbitrarily high degree of precision for the location of fluorescing molecules. Additional limitations on resolution are imposed by the unevenness of optical surfaces and inhomogeneities in material properties, leading to aberrations in the spatial intensity of the emitted light. Likewise, photon detectors introduce other types of noise, including thermal noise on the camera's detector chip and read noise, associated with miscounting how many photons actually hit a particular pixel on the chip. Finally, the biology of interest is often a dynamic process, and a snapshot provides only a limited view of that process.


As compared to a simple aperture, modern microscopes employ a dizzying array of lenses and corrections to give the highest-quality images possible, nearly free from spherical and chromatic aberrations. But what about aberrations of the mind? A cell biologist can fall victim to a concept from psychology called 'confirmation bias', in which our preconceived notions subconsciously cause us to preferentially remember data that agree with our hypothesis while ignoring or discounting data that do not. We must also take care to avoid a related, and even more insidious, mechanism known as 'apophenia': the tendency to perceive meaningful patterns or connections in random or meaningless data. In many cases the primary algorithm employed to 'process' image data, and subsequently draw conclusions, is our very own visual cortex, fraught with its own internal biases and processing artifacts. If you are dubious of apophenia, simply consider how easy it is to see faces in almost any field of view, even when no faces are present (a form of apophenia called 'pareidolia'). A classic example from the Mars Viking 1 mission is shown in Figure 3. When looking at microscopy data and trying to interpret fuzzy, low-contrast images, a careful scientist has to ask: what is the control against my own biases when interpreting these results in the face of fundamental imprecision?


Figure 4: Fluorescence distribution of an evenly distributed cell surface protein. a) A rod-shaped bacterial cell with fluorescent molecules (green) uniformly distributed over the surface. The black grid is the microscope's focal plane. Units are microns. b) The fluorescence distribution of an ensemble of the cell surface proteins in (a), as simulated using the BlurLab software. c) Images and outlines of fluorescence along lines '1' and '2' from (b). Note that even though the molecular surface density is uniform, the fluorescence in the microscope's focal plane is maximal at mid-cell due to the difference in geometry between the cylindrical and hemispherical regions of the cell.

Using computational techniques, the physical principles of diffraction and noise can be modeled to show how an image should look given a postulated underlying three-dimensional distribution of fluorescing molecules. Using these simulated images, we can sidestep our inherent visual and interpretive biases. However, doing so requires some care. First, one must posit a model for where molecules are located; this is an assumption we will later need to validate. Second, the simulations have to reproduce the random nature of the process; this can be achieved using stochastic simulations. Third, it is crucial that we assess the validity of the model underlying the microscopy data; we can do this by processing both the real experimental data and the computationally generated data with the same set of tools, forming a measure of the consistency between the real data and the hypothesized underlying mechanism.
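The recipe just described can be sketched in a few lines, though this is only a toy stand-in for a tool like BlurLab: posit molecular positions (step 1), blur each one with a Gaussian approximation to the point spread function, and draw Poisson photon counts at each pixel (step 2). All numbers below (field size, molecule count, PSF width, photon scale) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Step 1: posit a model -- 200 molecules scattered uniformly over a
# 64x64-pixel field (a hypothesized distribution, to be validated later).
positions = rng.uniform(0, 64, size=(200, 2))

# Diffraction: approximate each molecule's Airy disc with a Gaussian PSF,
# a common simplification for the central lobe.
sigma = 2.0  # PSF width in pixels (illustrative)
yy, xx = np.mgrid[0:64, 0:64]
ideal = np.zeros((64, 64))
for y, x in positions:
    ideal += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma**2))

# Step 2: stochastic simulation -- pixel values are Poisson photon counts,
# scaled so the brightest pixel expects about 50 photons.
expected = 50.0 * ideal / ideal.max()
simulated_image = rng.poisson(expected)
```

Running the same measurement code on `simulated_image` and on a real micrograph (step 3) then gives a bias-free comparison between the hypothesized model and the data.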


If this seems like arduous overkill, consider a simple and practical example. You want to test whether a certain protein is evenly distributed over the surface of a rod-shaped cell. Your hypothesis may be that if you examine the cross-sections of many cells, an even but random distribution of fluorescent molecules will result, on average, in an even distribution of fluorescence intensity along the cell perimeter. Using computational approaches we can quantitatively address whether this hypothesis is correct, and whether we even have the ability to distinguish between different models given the limitations of imaging. The result? The computer simulation shows a surprisingly complex fluorescence distribution rather than the even distribution you might intuitively expect from a uniformly distributed collection of surface molecules (Figure 4). So the next time you examine a microscopy image, ask yourself if what you think you see is really there, and maybe ask your computer as well.



Tristan Ursell, PhD, is a Genentech Postdoctoral Fellow with Prof. KC Huang, PhD, at Stanford University. Ursell's research combines theory, experiments, and computer simulations to understand the organization of, and interactions within, membranes, cell walls, and multicellular communities.



Kerwin Casey ("KC") Huang's Laboratory of Cellular Organization at Stanford University uses biocomputing to fill the gap between hypothesis testing and data collection in fluorescence imaging. They have written a versatile program, called BlurLab, to perform accurate simulations of microscopy that can then be used to test mechanistic hypotheses in an unbiased way. The program is free and open-source; it can be used with MATLAB (The MathWorks, Natick, MA) or as a stand-alone executable.
