CS187 Final Project Report

Kate Salesin


Inspiration

"Pleurosigma (marine diatoms)." Source

For this year's theme of "The Outer Limits," I was inspired to explore the small-scale end of the outer limits of our world. This photograph is one of several from Nikon's annual Small World Photomicrography Competition that use a polarization microscope to produce an image. This type of microscope places linear polarizers above and below the specimen (in crossed orientations) to filter light passing through it from below. The diatoms are composed of a translucent, birefringent material, which means they act as waveplates, or phase shifters. The bright colors result from the phase shifts incurred while passing through the diatoms, and are a function of the diatoms' thickness, orientation, and "birefringence" (the difference in index of refraction between the fast and slow transmission axes).

I decided to use dirt to simulate diatoms viewed through a simple polarization microscope.


Outline

These are the major features I implemented in my renderer.

  1. Parallelism: for faster rendering
  2. Worley noise: to create interesting background textures that look like fuzzy, floating organic matter
  3. Microscope lens + camera: to accurately simulate how light would pass through a microscope
  4. Spectral rendering: since the waveplate effects are wavelength-dependent
  5. Physically accurate polarization: to accurately simulate the polarization effects of dielectrics, polarizers, and wave retarders

1. Parallelism

Since rendering can be a slow process, parallelism was the first modification I made to dirt. First, I break the image into 16x16-pixel "render blocks" and save them in a work list. I then detect the number of concurrent threads supported by the hardware and start that many threads. The threads continually pop render blocks off the work list until the list is empty.

The number of threads can also be passed as a command-line argument to dirt with the key '-t / --nthreads'.
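
A minimal sketch of this scheme, with a hypothetical RenderBlock struct and per-block render routine standing in for dirt's actual types:

    #include <algorithm>
    #include <atomic>
    #include <thread>
    #include <vector>

    // Hypothetical 16x16 tile of pixel coordinates.
    struct RenderBlock { int x0, y0, x1, y1; };

    void renderBlock(const RenderBlock& b); // hypothetical per-block renderer

    void renderParallel(const std::vector<RenderBlock>& work, int nthreads)
    {
        // Fall back to the hardware's supported concurrency if unspecified.
        if (nthreads <= 0)
            nthreads = int(std::max(1u, std::thread::hardware_concurrency()));

        // Shared counter: each thread atomically claims the next block,
        // so all threads stay busy until the work list is drained.
        std::atomic<size_t> next{0};
        auto worker = [&]() {
            for (size_t i = next++; i < work.size(); i = next++)
                renderBlock(work[i]);
        };

        std::vector<std::thread> pool;
        for (int t = 0; t < nthreads; ++t)
            pool.emplace_back(worker);
        for (auto& th : pool)
            th.join();
    }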

Threads    1      2      3      4      5      6      7      8      9      10
Time (s)   36.78  24.16  21.17  20.16  19.14  18.14  17.13  16.13  16.11  16.11

Fig 1. Renders of a Cornell box scene (100spp) with a variable number of threads, showing that a correct image is produced for any thread count. Render times are tabulated above. Since my hardware supports 8 concurrent threads, the speedup plateaus there.

Block size   1x1    2x2    4x4    8x8    16x16  32x32  100x100
Time (s)     16.61  16.64  16.61  16.64  16.63  16.62  17.13

Fig 2. Renders of a Cornell box scene (512x512, 100spp) with a variable render block size, showing that a correct image is produced for any block size. Render times are tabulated above. Block size appears to have little effect on performance.

Code


2. Worley noise

I used Worley noise to create interesting background textures that look like organic cells. I followed the implementation here, which uses a 3x3 loop over the neighboring grid cells to look up the local feature points and compute their distances to the shading point.
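
A minimal 2D sketch of that lookup, with one feature point per grid cell placed by a (hypothetical) integer hash:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Hash a grid cell to a pseudo-random value in [0, 1) (hypothetical hash).
    static float cellHash(int x, int y, uint32_t seed)
    {
        uint32_t h = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u + seed;
        h = (h ^ (h >> 13)) * 1274126177u;
        return float(h ^ (h >> 16)) / 4294967296.0f;
    }

    // Worley noise: distance from point (px, py) to the nearest feature
    // point, found by scanning the 3x3 neighborhood of grid cells.
    float worley(float px, float py)
    {
        int cx = int(std::floor(px)), cy = int(std::floor(py));
        float minDist = 1e9f;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx)
            {
                int x = cx + dx, y = cy + dy;
                // One feature point per cell, at a hashed offset inside it.
                float fx = x + cellHash(x, y, 0u);
                float fy = y + cellHash(x, y, 1u);
                minDist = std::min(minDist, std::hypot(fx - px, fy - py));
            }
        return minDist;
    }

Different functions of this distance (or of the distance to the second-nearest feature point) give cell-like variations such as those in Fig 3.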

Fig 3. Some simple Worley noise textures I made to look like different types of cells. I primarily used variations of the rightmost texture in the final image, which happen to look like chains of diatoms.

Code


3. Microscope lens + camera

While there are many complex elements in a typical microscope, I decided to create a simple setup where light passes through a linear polarizer, the specimen, a crossed linear polarizer, and an objective lens before landing on the image plane. As in a real microscope, the image of the specimen is reversed horizontally and vertically on the image plane.

This diagram illustrates my microscope setup.

For the objective lens, I use a glass spherical cap, implemented as a new surface primitive that can be intersected with rays. One known issue with spherical lens surfaces is spherical aberration: light passing through the lens does not converge to a perfect focal point. To combat this, I followed the recommendations here and here, which suggest both using an aperture to cut off the highly convergent rays at the fringes of the lens and moving the lens closer to the focal point than the lensmaker's equation would suggest.
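
A spherical cap is just a sphere clipped by a plane, so the intersection routine can reuse the standard ray-sphere quadratic and reject hits below the cut plane. A minimal sketch, assuming a cap cut at z = zCut on a sphere of radius R centered at the origin (Vec3 here is a stand-in for dirt's vector type):

    #include <cmath>
    #include <initializer_list>

    struct Vec3 { float x, y, z; };
    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Intersect ray o + t*d with the portion of the sphere |p| = R where
    // p.z >= zCut; returns the nearest valid hit distance in tHit.
    bool intersectSphericalCap(const Vec3& o, const Vec3& d, float R, float zCut,
                               float tMin, float tMax, float& tHit)
    {
        // Standard ray-sphere quadratic: |o + t*d|^2 = R^2.
        float a = dot(d, d), b = 2 * dot(o, d), c = dot(o, o) - R * R;
        float disc = b * b - 4 * a * c;
        if (disc < 0) return false;
        float sq = std::sqrt(disc);
        float t0 = (-b - sq) / (2 * a), t1 = (-b + sq) / (2 * a);
        // Test the near root first, then the far one; accept the first hit
        // that is in range and lies on the cap side of the cut plane.
        for (float t : {t0, t1})
        {
            if (t < tMin || t > tMax) continue;
            if (o.z + t * d.z >= zCut) { tHit = t; return true; }
        }
        return false;
    }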

For the microscope camera, I created a camera similar to an orthographic camera. To generate a ray, the microscope camera picks a ray origin within a pixel on the image plane. Unlike an orthographic camera, the ray direction points from the ray origin toward a focal point within the scene, in our case the focal point of the objective lens. In theory, rays should pass through the focal point, hit the objective lens, and emerge collimated. In practice, spherical aberration prevents this from working out perfectly, and the image becomes distorted toward the edges of the lens.
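
A self-contained sketch of that ray generation (the vector types and camera parameters here are hypothetical stand-ins for dirt's):

    #include <cmath>

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 o, d; };

    static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static Vec3 operator*(float s, Vec3 v) { return {s*v.x, s*v.y, s*v.z}; }
    static Vec3 normalize(Vec3 v)
    {
        float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return {v.x/len, v.y/len, v.z/len};
    }

    // Generate a camera ray for pixel (px, py) with jitter rng in [0,1)^2.
    // imagePlaneOrigin, pixelDeltaX/Y, and focalPoint are hypothetical
    // camera parameters.
    Ray generateRay(float px, float py, Vec2 rng,
                    Vec3 imagePlaneOrigin, Vec3 pixelDeltaX, Vec3 pixelDeltaY,
                    Vec3 focalPoint)
    {
        // Jitter the ray origin within the pixel on the image plane.
        Vec3 origin = imagePlaneOrigin + (px + rng.x) * pixelDeltaX
                                       + (py + rng.y) * pixelDeltaY;
        // Aim the ray at the objective lens's focal point; after an ideal
        // lens these rays would emerge collimated.
        return { origin, normalize(focalPoint - origin) };
    }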

[Fig 4 grid: columns show specimen distance d = 3, 6, 9; rows show no lens, with lens, and with lens + distance correction]

Fig 4. Some tests showing how the camera and lens work together. In all images, the image plane is located a focal distance (d = 2) away from the focal point. The distance of the "specimen" (checker texture) from the focal point is above each column. We can see that in all images, the texture is flipped horizontally and vertically, as we would expect.

Row 1: no lens. As the specimen moves away from the focal point, it grows smaller. Row 2: a hemispherical lens is located at its focal distance (d = 2). The rays toward the edges of the sphere are refracted too strongly, causing them to converge on the opposite side of the lens. Row 3: a hemispherical lens is located closer to the focal point (d = 1). The specimen is less distorted and barely shrinks at long distances.

Code


4. Spectral rendering

Instead of shooting rays of light with RGB colors through a scene, spectral rendering shoots rays of light that each carry a single wavelength. This allows us to incorporate realistic wavelength-dependent effects such as dispersion and phase shifts. In our path tracer, we pick a wavelength with a uniform pdf between 400nm and 700nm to shoot into the scene, which adds one dimension to the Monte Carlo estimate of radiance. This extra dimension manifests as color noise at low sample counts.
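
A minimal sketch of the sampling step; the estimate is weighted by the reciprocal pdf (here, the 300nm interval width):

    // Sample one wavelength per path with a uniform pdf on [400, 700] nm;
    // u is a uniform random number in [0, 1).
    struct WavelengthSample { float lambda, pdf; };

    WavelengthSample sampleWavelength(float u)
    {
        const float lo = 400.0f, hi = 700.0f;
        return { lo + u * (hi - lo), 1.0f / (hi - lo) };
    }

The scalar radiance carried by the path is divided by this pdf before being converted to XYZ.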

To compare the results of my spectral path tracer with our old RGB path tracer, I ported PBRT's spectral upsampling scheme for RGB values to use on diffuse surfaces and light sources. To convert spectra to XYZ colors, I implemented the analytic functions from this paper.
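
Assuming the paper in question is Wyman et al.'s piecewise-Gaussian fits to the CIE 1931 color matching functions, a sketch looks like this (coefficients as given in that paper; lambda in nm):

    #include <cmath>

    // Piecewise Gaussian with different widths on each side of the peak,
    // as used by the analytic CIE fits.
    static float g(float x, float mu, float s1, float s2)
    {
        float t = (x - mu) / (x < mu ? s1 : s2);
        return std::exp(-0.5f * t * t);
    }

    // Multi-lobe fits to the CIE 1931 color matching functions (lambda in nm).
    float xBar(float l)
    {
        return 1.056f * g(l, 599.8f, 37.9f, 31.0f)
             + 0.362f * g(l, 442.0f, 16.0f, 26.7f)
             - 0.065f * g(l, 501.1f, 20.4f, 26.2f);
    }
    float yBar(float l)
    {
        return 0.821f * g(l, 568.8f, 46.9f, 40.5f)
             + 0.286f * g(l, 530.9f, 16.3f, 31.1f);
    }
    float zBar(float l)
    {
        return 1.217f * g(l, 437.0f, 11.8f, 36.0f)
             + 0.681f * g(l, 459.0f, 26.0f, 13.8f);
    }

Accumulating each sample's pdf-weighted radiance times xBar(lambda) (and likewise for y and z) over all samples gives the pixel's XYZ value.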

[Fig 5 grid: columns show 1, 4, 16, 32, 100, and 1024spp; rows show RGB and spectral]
Fig 5. Renders of a Cornell box scene at variable sample counts, showing that the spectral path tracer converges to the same result as the RGB path tracer. The glass's index of refraction is constant here, for true comparison to RGB.
[Fig 6 panels: RGB; spectral (constant IOR); spectral (wavelength-dependent IOR)]
Fig 6. Converged renders of a Cornell box scene. The rightmost image simulates dispersion using the Sellmeier equation to compute a wavelength-dependent index of refraction. We can see some color spreading in the caustic beneath the ball and within the ball.

I also implemented blackbody lights to use in my final render. A blackbody is an idealized body that absorbs all incident radiation and emits thermal radiation with a characteristic spectrum; stars and human bodies are approximately blackbody emitters. Planck's law provides a simple formula for a blackbody's emitted spectral radiance as a function of wavelength and temperature.
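
A sketch of Planck's law in code (lambda in nm, T in kelvin; returns spectral radiance per meter of wavelength, W · sr^-1 · m^-3):

    #include <cmath>

    // Planck's law: spectral radiance of a blackbody at temperature T.
    double planck(double lambdaNm, double T)
    {
        const double h  = 6.62607015e-34; // Planck constant (J s)
        const double c  = 2.99792458e8;   // speed of light (m / s)
        const double kB = 1.380649e-23;   // Boltzmann constant (J / K)
        double lambda = lambdaNm * 1e-9;  // nm -> m
        return (2.0 * h * c * c) /
               (std::pow(lambda, 5.0) * (std::exp(h * c / (lambda * kB * T)) - 1.0));
    }

Normalizing this curve over the visible range, as in Fig 7, isolates the hue that each temperature produces.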

[Fig 7 grid: columns show temperatures from 1000K to 8000K in 1000K steps; rows show 1, 4, 16, 64, 1024, and 4096spp]
Fig 7. Convergence of a simple scene with blackbody sphere lights at variable temperatures. Their radiance has been normalized to highlight the colors that different temperatures produce. Compare to Fig. 3 of this paper for validation.

Code


5. Polarization

The bulk of what I implemented for my project is realistic polarization. Polarization is a property of light that describes how its electric field oscillates as the wave propagates. On a macroscopic scale, light may be linearly polarized (oscillating back and forth in a single plane), circularly polarized (the field vector tracing a circle in the plane perpendicular to the propagation direction), or elliptically polarized (the general case). Bulk light may also be partially polarized or unpolarized, since it is composed of many individual photons with different polarization states.

I generally followed this recent SIGGRAPH course for my implementation. I chose to represent polarized light in my path tracer using Mueller calculus, which can account for fully polarized, partially polarized, or unpolarized light, as opposed to Jones calculus, which can only account for fully polarized light. In brief, a Stokes 4-vector represents the polarization state of a ray of light, including its total radiance (S0) and its relative responses to a horizontal vs. vertical polarizer (S1), a +45° vs. –45° polarizer (S2), and a right-hand-circular vs. left-hand-circular polarizer (S3). A Mueller matrix represents an optical transformation of a Stokes vector, such as light passing through a linear polarizer. To represent these, I created two new data structures, StokesState and MuellerTransform. In addition to the Stokes vector and Mueller matrix, these structures hold the reference bases necessary to transform, add, and multiply Stokes vectors. Keeping track of these reference bases and transforming them correctly was a challenge.
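
As a concrete example of a Mueller matrix, here is a sketch of the textbook matrix for an ideal linear polarizer with its transmission axis at angle theta, expressed in the Stokes vector's reference frame (Matrix4 is a hypothetical stand-in; my MuellerTransform additionally carries the reference basis):

    #include <cmath>

    // Minimal 4x4 matrix (row-major), initialized to all zeros.
    struct Matrix4 { float m[4][4] = {}; float* operator[](int r) { return m[r]; } };

    // Mueller matrix of an ideal linear polarizer with transmission axis at
    // angle theta (radians) in the Stokes vector's reference frame.
    Matrix4 linearPolarizer(float theta)
    {
        float c = std::cos(2 * theta), s = std::sin(2 * theta);
        Matrix4 M;
        M[0][0] = 0.5f;     M[0][1] = 0.5f * c;     M[0][2] = 0.5f * s;
        M[1][0] = 0.5f * c; M[1][1] = 0.5f * c * c; M[1][2] = 0.5f * c * s;
        M[2][0] = 0.5f * s; M[2][1] = 0.5f * c * s; M[2][2] = 0.5f * s * s;
        // The last row and column stay zero: the output carries no circular
        // polarization, and incoming S3 does not affect the output.
        return M;
    }

Applying this to unpolarized light, S = [1,0,0,0], reproduces Fig 8(a) below: the radiance halves and the output is fully linearly polarized at the polarizer angle.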

[Fig 8 panels: (a) unpolarized light through a linear polarizer; (b) polarized light through a linear polarizer; (c) unpolarized light through crossed linear polarizers]
Fig 8. Tests of my system showing the resulting Stokes vector, S = [S0, S1, S2, S3], and degree of linear polarization when light passes through linear polarizers, as a function of polarizer angle. (a) Unpolarized light, S = [1,0,0,0], passes through a linear polarizer of varying angle. The total radiance (S0) is attenuated by 50%, and S1 and S2 oscillate with the polarizer angle; the transmitted light is always fully linearly polarized. (b) Horizontally polarized light, S = [1,1,0,0], passes through a linear polarizer of varying angle. S0, S1, and S2 fluctuate with the polarizer angle. (c) Unpolarized light, S = [1,0,0,0], passes through a horizontal polarizer, then a second polarizer of varying angle.

For dielectrics, Fresnel reflection and transmission induce some polarization; at Brewster's angle, reflected light is completely linearly polarized perpendicular to the plane of incidence. To implement the Mueller matrices for Fresnel reflection/transmission, I followed the formulas here and reproduced their plots. When consulting different sources, I noticed a discrepancy in the (0,1) and (1,0) elements of the Mueller matrices (see here vs. here vs. here): whether those elements are 0.5 * (Rp - Rs) or 0.5 * (Rs - Rp). I chose 0.5 * (Rs - Rp) because it produces the expected horizontal polarization at Brewster's angle.
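
With that sign convention, a sketch of the reflection matrix (Rs and Rp are the s- and p-polarization power reflectances from the standard Fresnel equations; below the critical angle the relative s-p phase is 0 or pi, so the lower-right block reduces to +/- sqrt(Rs * Rp)):

    #include <cmath>

    // Minimal 4x4 matrix (row-major), initialized to all zeros.
    struct Matrix4 { float m[4][4] = {}; float* operator[](int r) { return m[r]; } };

    // Sketch of the Mueller matrix for Fresnel reflection at a dielectric
    // interface, in the plane-of-incidence reference frame, using the
    // 0.5 * (Rs - Rp) convention for the (0,1)/(1,0) elements.
    Matrix4 fresnelReflection(float Rs, float Rp, float phaseSign /* +1 or -1 */)
    {
        Matrix4 M;
        M[0][0] = 0.5f * (Rs + Rp);
        M[0][1] = M[1][0] = 0.5f * (Rs - Rp);
        M[1][1] = M[0][0];
        // Relative phase between s and p is 0 or pi for a dielectric below
        // the critical angle, hence the +/- sign here.
        M[2][2] = M[3][3] = phaseSign * std::sqrt(Rs * Rp);
        return M;
    }

The transmission matrix has the same form with the transmittances Ts and Tp in place of Rs and Rp.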

[Fig 10 panels: air-to-water Mueller matrices; water-to-air Mueller matrices]
Fig 10. Plots of the Mueller matrix elements for Fresnel reflection (red) and transmission (blue) as a function of angle of incidence. Note that in the water-to-air case, the transmission matrix goes to zero at the critical angle. Compare to the plots "Air-to-Water Reflection and Transmission Matrices" and "Water-to-Air Reflection and Transmission Matrices" here. The red and blue lines are swapped for the (0,1) and (1,0) elements due to the sign convention discussed above.

Debugging can be extra challenging in a polarization renderer. To help, I implemented this paper on visualizing polarization states in such a renderer. To use the debugger, pass the key '-p' as a command-line argument to dirt, which will write extra images containing the polarization information.

[Fig 11 grid: columns show a horizontal polarizer, +45° polarizer, vertical polarizer, –45° polarizer, and horizontal + vertical crossed polarizers; rows show the rendered image, degree of polarization, and orientation of linear polarization]
Fig 11. Polarization information for a simple scene where an orthographic camera views a diffuse light source of intensity 1 through linear polarizers of varying angles. The overall intensity is reduced by 50% for all linear polarizers, and reduced to zero in the case of crossed polarizers (the white border delineates the black images). The middle row shows the degree of polarization of the light arriving at the camera from unpolarized (black) to fully polarized (red). The bottom row shows the orientation angle of the linear polarization, where green is horizontally polarized, yellow is +45° polarized, red is vertically polarized, and blue is –45° polarized.

Note that red and green in the bottom row are swapped relative to the visualizations in the paper, but they follow the paper's written instructions. In Sec. 5.3, the authors say to map negative S1 (vertical polarization) to red and positive S1 (horizontal polarization) to green; however, this disagrees with their Fig. 5 and Fig. 6, where horizontal polarization is red and vertical polarization is green.

[Fig 12 panels: rendered image; degree of polarization; orientation of linear polarization]
Fig 12. Polarization information for a Cornell box scene where dielectric transmission has been set to zero for better visualization (compare to Fig. 3 and Fig. 6 of this paper). I chose to use the "direct overlay" option from the paper, which composites the extra information over a grayscale version of the image. The diffuse surfaces act as depolarizers, thus they remain gray. The light that reflects off the dielectrics is generally polarized perpendicular to the plane of incidence. Also note the same red/green color swap described above.

As previously mentioned, there is another framework for polarization called Jones calculus. I tested my Mueller waveplate calculations against a Jones matrix formulation for validation.
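
For reference, one standard Jones-matrix formulation of a linear retarder (a sketch; the exact phase convention varies between references and is an assumption here): fast axis at angle theta and retardance delta = 2 * pi * dn * t / lambda for birefringence dn, thickness t, and wavelength lambda, built as R(-theta) * diag(e^(-i delta/2), e^(+i delta/2)) * R(theta).

    #include <cmath>
    #include <complex>

    using cf = std::complex<float>;

    // 2x2 complex Jones matrix.
    struct Jones { cf m00, m01, m10, m11; };

    // Linear retarder (waveplate) with fast axis at angle theta and
    // retardance delta = 2*pi*dn*t/lambda (t and lambda in the same units).
    Jones waveplate(float theta, float dn, float t, float lambda)
    {
        const float PI = 3.14159265358979f;
        float delta = 2 * PI * dn * t / lambda;
        cf ef = std::polar(1.0f, -0.5f * delta); // phase along the fast axis
        cf es = std::polar(1.0f, +0.5f * delta); // phase along the slow axis
        float c = std::cos(theta), s = std::sin(theta);
        // Expansion of R(-theta) * diag(ef, es) * R(theta).
        return { ef * c * c + es * s * s, (ef - es) * c * s,
                 (ef - es) * c * s,       ef * s * s + es * c * c };
    }

Sandwiching this between the Jones matrices of the two crossed polarizers and sweeping lambda gives the transmitted spectra plotted in the top row of Fig 13.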

[Fig 13 columns: waveplate thickness t = 40000nm, 50000nm, 60000nm, 70000nm, 80000nm]
Fig 13. Top row: Plots of Jones matrix calculations for light transmission through a linear polarizer, a waveplate of varying thickness and 45° fast axis angle, and a crossed linear polarizer. The color of the plot line is an RGB representation of the plotted spectrum. Bottom row: Renders of a simple scene where an orthographic camera views a diffuse light source of intensity 1 through a linear polarizer, a waveplate of varying thickness and 45° fast axis angle, and a crossed linear polarizer. Note that the colors match.

Code


Rendering Competition Image

For the final image, I modeled the diatoms/waveplates as layers of flat planes and used textures to look up their thicknesses and fast axis angles. The colors are created purely by the wavelength-dependent transformation of light as it passes through the optical elements, not by colors stored in the textures. Some of the textures are Worley noise and some are adapted from drawings in the book Art Forms in Nature, a compilation of highly realistic drawings by German naturalist Ernst Haeckel (1834–1919).