Biology in 3D

Spatial depth (“3D”) perception

Our world is three-dimensional, and human vision is adapted and trained to perceive all three dimensions. Due to physical and practical constraints, our eyes have certain optical properties: to detect light, light waves have to pass through a lens, which focuses them onto a two-dimensional light-detecting tissue (the retina). This optical arrangement allows the naked eye to detect light intensities but not light fields, i.e. each eye can only retrieve a two-dimensional image of the world. Only the joint information gathered with a second eye allows our brains to give us a three-dimensional impression of the environment. The cells and tissues attached to our eyes do not only process the received two-dimensional visual information, but also classify it against previous experience. Only this last classification accounts for 3D perception – seeing the world in 3D is an illusion made by the brain.
What typical experience with visual information is needed to allow for comfortable 3D vision?

 

Two distinct views

Each eye receives a different two-dimensional image of the object in focus; the perspective differs. The further away the object is, the less the two views differ from each other. A cup held in our hands, for example, is clearly perceived as three-dimensional, while a mountain on the horizon is not.

 

Perspective

Objects close to a single eye look different from the same objects further away. The difference is not only in size but also in perspective, i.e. the angles between edges change. The amount of angular change caused by moving an object away decreases with the object's distance from our eyes.
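This cue can be illustrated with a simple pinhole-eye model (the formula and numbers below are illustrative assumptions, not taken from the text): the visual angle an object subtends shrinks with distance, and the change per step of distance shrinks as well.

```python
import math

def apparent_angle(size, distance):
    """Visual angle (in radians) subtended by an object of a given size
    at a given distance, under a simple pinhole-eye model."""
    return 2 * math.atan(size / (2 * distance))

# A hypothetical 10 cm cup: at arm's length it subtends a much larger
# angle than the same cup a few meters away.
near = apparent_angle(0.1, 0.3)
far = apparent_angle(0.1, 3.0)
```

The same relation explains why moving a distant object a little further away barely changes its appearance, while the same shift close to the eye is dramatic.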

 

Sharpness

Focusing on an object close to our eyes makes the surroundings appear blurry, while focusing on an object far away shows little difference in sharpness between the object and its surroundings.

 

Brightness

For our eyes, glowing objects that are far away appear less bright than the same objects close by – just as stars appear dim even though they may be as bright as the sun. Thus, we have learned to connect an object's brightness to its distance from our eyes.
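The physical basis of this cue is the inverse-square law; as a hedged sketch (units and values are hypothetical), the received intensity of a glowing object falls with the square of its distance:

```python
import math

def apparent_brightness(luminosity, distance):
    """Received intensity of a glowing object, assuming the emitted light
    spreads evenly over a sphere of radius `distance` (inverse-square law)."""
    return luminosity / (4 * math.pi * distance ** 2)

# Moving the same light source ten times further away makes it
# appear one hundred times dimmer.
close = apparent_brightness(1.0, 1.0)
far = apparent_brightness(1.0, 10.0)
```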

 

Movement

If the entire image changes from one moment to the next, our brains expect matching signals about a head movement from our equilibrium organs. Without this feedback we feel uncomfortable or even nauseous – something is wrong and does not match our usual experience. Most people know this effect from sea sickness, badly made 3D cinema, or virtual reality experiences.

How to make a (good) 3D perception?

Perception of spatial depth on a display medium like paper or a screen needs to match the brain's expected input. The idea is to render a pair of images such that they resemble the experience of viewing a macroscopic 3D object with two eyes. We can achieve this by following some guidelines, adapted to the kind of image source. In fluorescence microscopy in biology, most data sets are available as sets of 2D slices of 3D structures. To create a (good) 3D perception, all the typical experiences explained above should be addressed:

  • Project stacks of images with a perspective projection, not an orthogonal one
  • Use two perspectively shifted (not rotated) viewpoints
  • The amount of perspective shift depends on the screen-to-eye distance, the eye-to-eye distance, and the desired depth impression, so each viewing setup needs a specific shift
  • Dim the intensity of the back part of your 3D image
  • Only move the camera in ways that could equally be interpreted as the object itself moving
  • Adapt object size and sharpness depending on the position and the presumed focus in the 3D image stack
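The dimming step from the list above can be sketched in a few lines, assuming (as a hypothetical convention) that the stack is a NumPy array whose first axis points away from the viewer:

```python
import numpy as np

def depth_cue(stack, min_factor=0.3):
    """Dim slices linearly from front (full intensity) to back, so that
    deeper structures appear darker, as the eye expects for distant objects.
    `stack` is a (depth, height, width) array; slice 0 faces the viewer."""
    depth = stack.shape[0]
    factors = np.linspace(1.0, min_factor, depth)   # 1.0 at the front, dimmer behind
    return stack * factors[:, None, None]
```

The linear fall-off and the `min_factor` of 0.3 are arbitrary choices for illustration; any monotonic dimming toward the back serves the same perceptual purpose.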

Even though I recommend the entire list, it is not always easy to consistently carry out all steps on all kinds of visual data …

Visualization concepts

Examples of 3D visualizations of volumetric image data.

 

Perspective projection

Essentially, perspective projections make sure that objects far away appear smaller than objects which are close. Starting with a stack of 2D images, perspective projections can easily be made with free software packages like Fiji, Chimera, and many others.
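To make the idea concrete, here is a minimal, hypothetical perspective maximum-intensity projection in NumPy (nearest-neighbor sampling, no interpolation – real tools like Fiji do this far more carefully): each slice is shrunk toward the image center by a factor that decreases with depth, then all slices are combined by a pixel-wise maximum.

```python
import numpy as np

def perspective_max_projection(stack, eye_distance=100.0):
    """Perspective maximum-intensity projection of a (depth, height, width)
    stack: slice z is shrunk by eye_distance / (eye_distance + z) before
    the pixel-wise maximum, so deeper slices appear smaller."""
    depth, h, w = stack.shape
    out = np.zeros((h, w), dtype=stack.dtype)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    for z in range(depth):
        f = eye_distance / (eye_distance + z)            # shrink factor < 1 for z > 0
        # Back-map each output pixel to its (magnified) source coordinate.
        sy = np.round(cy + (yy - cy) / f).astype(int)
        sx = np.round(cx + (xx - cx) / f).astype(int)
        valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
        shrunk = np.zeros((h, w), dtype=stack.dtype)
        shrunk[valid] = stack[z][sy[valid], sx[valid]]
        out = np.maximum(out, shrunk)
    return out
```

A small `eye_distance` exaggerates the perspective; a very large one approaches an orthogonal projection, where all slices keep their original size.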

 

3D stereoscopic view: Cardboard

Stereoscopic images present two views of the same scenery, with a different perspective for each eye, next to each other. These images/videos can be watched using 3D-capable screens, virtual reality glasses, or – ingeniously simple – a self-assembled Google Cardboard and a smartphone.
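A simple way to build such a pair from an image stack is horizontal parallax (a hedged sketch; real renderers use the perspectively shifted viewpoints described above): each slice is shifted in opposite directions for the two eyes, proportionally to its depth, and the results are placed side by side.

```python
import numpy as np

def stereo_pair(stack, max_shift=4):
    """Side-by-side stereo pair from a (depth, height, width) stack:
    each slice is shifted horizontally in opposite directions for the
    left and right eye, proportionally to its depth (parallax), then
    maximum-projected. Returns the two views concatenated left|right."""
    depth, h, w = stack.shape
    left = np.zeros((h, w), dtype=stack.dtype)
    right = np.zeros((h, w), dtype=stack.dtype)
    for z in range(depth):
        s = int(round(max_shift * z / max(depth - 1, 1)))  # parallax grows with depth
        left = np.maximum(left, np.roll(stack[z], s, axis=1))
        right = np.maximum(right, np.roll(stack[z], -s, axis=1))
    return np.concatenate([left, right], axis=1)
```

The `max_shift` parameter plays the role of the audience-specific perspective shift from the guidelines: it must be tuned to the screen-to-eye and eye-to-eye distances.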

 

3D anaglyphic view: Anaglyph glasses

For anaglyph images, a different view for each eye is achieved by overlaying the two images directly on top of each other, each encoded in a different color. While the resulting image looks messy without glasses, the views are properly separated again by glasses with color-filtering foils ('red-cyan glasses'). This method is cheap but also limits the number of usable colors, since one color has to be filtered out for each eye. Hence, gray-scale images are best suited for this method.
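Given two gray-scale views, the color coding itself is a one-step operation. The channel assignment below follows the common red-cyan convention (left eye in red, right eye in green and blue):

```python
import numpy as np

def anaglyph(left_gray, right_gray):
    """Combine two gray-scale views (2D arrays with values in 0..1) into
    one red-cyan anaglyph RGB image: red-cyan glasses then route each
    encoded view back to its eye."""
    h, w = left_gray.shape
    rgb = np.zeros((h, w, 3))
    rgb[..., 0] = left_gray      # red channel: left-eye view
    rgb[..., 1] = right_gray     # green }
    rgb[..., 2] = right_gray     # blue  } together cyan: right-eye view
    return rgb
```

This also shows why color images are problematic: any red in the scene would leak into the left eye only, breaking the separation.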

 

3D lenticular poster prints

3D posters without glasses are real. In lenticular printing, an array of micro-lenses is mounted on top of an image. The lens array imposes certain visual effects depending on the pattern of the image it is attached to. The technique is most commonly known from flip images, which change viewpoint or visual content with slight rotations of the print. Advances in the quality of micro-lenses and in print resolution now also allow for depth perception, making 3D lenticular images and posters possible. Here the lenses split an interlaced background image into two distinct images, one for each eye. Showing an object from different viewpoints is also possible in 3D.
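For a hypothetical two-view lenticular print, the interlacing step amounts to alternating pixel columns from the two views (real prints interlace many views at the lens pitch of the actual lens sheet):

```python
import numpy as np

def interlace(left, right):
    """Column-wise interlacing of two equally sized views for a two-view
    lenticular print: even pixel columns come from the left view, odd
    columns from the right, so each cylindrical lens directs one column
    to each eye."""
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]
    out[:, 1::2] = right[:, 1::2]
    return out
```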

 

3D prints

Synthesizing 3D objects with a printer is advancing quickly. Printing services can print a digital 3D model of an object using distinct materials and properties. These services print for industry and crafts, but can also print models of microscopic organisms.