
Master's Research Project

A Visual Literacy Module on Subcellular Scale

Visual literacy is the ability to glean useful information and form mental models from visualizations; however, it is not explicitly taught, so students are expected to develop those skills on their own. As a result, students form misconceptions based on the way information is visually presented. One common example is a skewed sense of organelle size due to the way cells are often simplified in diagrams.

This misconception will be addressed in a web-based module aimed at first-year undergraduate biology students, delivered through BioLEAP (the Biology Learning Engagement and Assessment Platform) in conjunction with the first-year biology curriculum at the University of Toronto Mississauga.

 

The project consists of two short animations for the module. Part I: A Primer on Subcellular Scale provides a frame of reference for organelle size by magnifying a cell to the size of a person, and Part II: Interpreting Scale in Visualizations examines strategies for thinking critically about how scale is visually represented.

BMC Faculty Supervisors: 

Dr. Jodie Jenkinson (Primary)

Prof. Michael Corrin 

Prof. Marc Dryer

Content Advisor:

Dr. Fiona Rawle, Department of Biology, University of Toronto Mississauga

PART I: A Primer on Subcellular Scale


Questions:

How big is a cell?

How big are organelles relative to one another?

 

We can read about measurements on the micrometre scale, but those numbers don't mean much without a frame of reference. This animation magnifies a cell to person scale so that we can relate organelle sizes to familiar everyday objects.
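As a rough back-of-the-envelope version of that idea (using generic textbook sizes, not the exact values from the animation), the magnification works out something like this in Python:

cell_diameter_um = 20.0       # a typical animal cell is roughly 10-30 micrometres across
person_height_m = 1.7         # the "person scale" the cell gets blown up to

scale_factor = person_height_m / (cell_diameter_um * 1e-6)
print(round(scale_factor))    # ~85,000x magnification

mitochondrion_um = 1.0        # a mitochondrion is on the order of 1 micrometre long
print(mitochondrion_um * 1e-6 * scale_factor)   # ~0.085 m, i.e. roughly 8-9 cm at person scale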

PART II: Interpreting Scale in Visualizations

Questions:

Why is scale sometimes misrepresented in visualizations?

When does it matter whether a cell is accurately represented?

Visualizations aim to explain things clearly, but sometimes emphasizing details for clarity comes at the cost of accuracy. This animation deconstructs visualizations of cells to explain why we need to think critically about how information is visually presented.


PROCESS WORK


SCRIPTS & STORYBOARDS

After researching the content, I wrote the scripts and visualized them as storyboards so that I could get feedback before working on the animations. Further changes to the scripts continued to be made after the storyboard stage.

Storyboard thumbnails.


ANIMATICS

The storyboards were turned into rough animations to work out timing and tone. After feedback, I ended up making two iterations of each part; the most recent iterations are below.

Animatic PART I:

A Primer on Subcellular Scale

Animatic PART II:

Interpreting Scale in Visualizations


PRODUCTION

3D assets were sculpted in Maya and/or ZBrush, then brought into Maya for rigging, animation, and rendering. The rendered frames were then brought into After Effects for compositing. Each component had its own challenges, which I've broken down below.

I kept a Google Doc of notes on the problems I encountered and solved on my MRP journey, because what's the point in solving a problem if you can't remember how you did it? My hope is that it will help any current students or artists visiting this page who are looking for tips in Maya, ZBrush, or After Effects. Sorry if it's messy!

MRP Learning Notes


Blobert, Rigging, & Object Tracking

Our main character, nicknamed Blobert, was modeled from scratch in Maya with help from James Taylor's tutorials. He was rigged with control shapes at his hands, feet, and pelvis. His hands were programmed with set driven keys to control finger flexion/extension, thumb abduction/adduction, wrist flexion/extension, pronation/supination, and ulnar/radial deviation.

The complex rigging was worth it to be able to control all of Blobert's little nuanced movements.
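For anyone curious what a single set driven key looks like in script form, here's a minimal sketch (the control, attribute, and joint names are placeholders, not the names from Blobert's actual rig):

import maya.cmds as cmds

# Add a custom driver attribute to the hand control.
cmds.addAttr("L_hand_CTRL", longName="fingerCurl", attributeType="double",
             minValue=0, maxValue=10, defaultValue=0, keyable=True)

# Key the open pose (curl = 0) and the flexed pose (curl = 10) on one finger joint.
cmds.setDrivenKeyframe("L_index_01_JNT.rotateZ",
                       currentDriver="L_hand_CTRL.fingerCurl",
                       driverValue=0, value=0)
cmds.setDrivenKeyframe("L_index_01_JNT.rotateZ",
                       currentDriver="L_hand_CTRL.fingerCurl",
                       driverValue=10, value=70)

The real rig repeats this pattern for every joint and movement listed above.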

Blobert's face was animated and tracked onto his head in After Effects.

 




Sculpting Subcellular Structures
ZBrush sculpts: rough endoplasmic reticulum, smooth endoplasmic reticulum, ER/nucleus cross section, and Golgi apparatus.

I sculpted ZBrush models of the rough ER (endoplasmic reticulum) without added ribosomes, the smooth ER, a cross-sectional view, and the separated layers of the Golgi apparatus. After bringing these into Maya, I had to clean up the meshes quite a bit because the thinness of the shapes caused faces to intersect with themselves.

Transmission test on the rough endoplasmic reticulum (no ribosomes).

Here I brought the models into Maya and tested the transmission settings to make sure the nucleus would be visible through all the layers of the rough ER. This was before the cleanup, so some edges are still jagged.
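Assuming an Arnold aiStandardSurface material (the shader name below is a placeholder), the kind of tweak involved boils down to something like:

import maya.cmds as cmds

# Raise transmission on the ER material so the nucleus reads through the membranes.
cmds.setAttr("roughER_MAT.transmission", 0.7)
# The ER sheets are very thin, so treating them as thin-walled keeps the refraction manageable.
cmds.setAttr("roughER_MAT.thinWalled", 1)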

Ribosome test on the nucleus and endoplasmic reticulum.

Ribosomes were added to the rough ER using MASH's distribute-on-mesh feature. There are about 6,000,000 ribosomes in a real mouse fibroblast (1), but only about 200,000 are simulated for this project.
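Roughly, a setup like this can be scripted through MASH's Python API (the object names are placeholders, and the mesh-arrangement enum index may differ between Maya versions):

import maya.cmds as cmds
import MASH.api as mapi

cmds.select("ribosome_GEO")                  # the ribosome mesh to instance
network = mapi.Network()
network.createNetwork(name="ribosomeMASH")   # creates the waiter, distribute, and repro nodes

dist = "ribosomeMASH_Distribute"
cmds.setAttr(dist + ".pointCount", 200000)   # ~200,000 of the ~6,000,000 real ribosomes
cmds.setAttr(dist + ".arrangement", 4)       # mesh-based distribution (assumed enum index)
cmds.connectAttr("roughER_GEOShape.worldMesh[0]", dist + ".inputMesh", force=True)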


Resources:

  • (1) Bionumbers.org: BNID 113783 - Yewdell JW, Reits E, Neefjes J. Making sense of mass destruction: quantitating MHC class I antigen presentation. Nat Rev Immunol. 2003 Dec;3(12):952-61. DOI: 10.1038/nri1250

Organelle cross section.

The set of ER models was cut with booleans in ZBrush for this cross section. There are 2,000 nuclear pore complexes simulated here, a realistic number (2). The depth of field in the right image was done in After Effects using Frischluft Lenscare. The size and shape of the Golgi were changed after feedback to better reflect an animal cell (right).

Resources:

  • (2) Bionumbers.org: BNID 111130 - Adam SA. The nuclear pore complex. Genome Biol. 2001;2(9):reviews0007


Golgi Apparatus and nParticles

I experimented with animating the budding and merging of the Golgi using MASH-distributed nParticles. Originally I had hoped to show a cross-section of vesicles passing between cisternae, but that ended up being unnecessary, so we only see a static outer view in the final animation.

 

The nParticles in each cisterna and bud were controlled with deform clusters. I tried several different particle counts, but at 90,000 nParticles the simulation became very heavy and slow to work with, so even though some parts of the mesh still look holey, it wasn't feasible to increase the particle density further. I was advised that the buds and stacks (left) were too thick, so the final animation uses a finer static model instead (right).



Newspaper Texture, nCloth, & MASH

To get Blobert's main newspaper to fold and flap, I used an nCloth plane with the edges and centre constrained to a rig. This way, I was able to control the translation and folding of the newspaper while retaining the more subtle warping movements (left).
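In script form, that setup looks roughly like this (object names are placeholders, and the real rig constrains specific edge and centre vertices):

import maya.cmds as cmds
import maya.mel as mel

# Turn the newspaper plane into an nCloth object.
cmds.select("newspaper_PLANE")
mel.eval("createNCloth 0;")

# Make a transform constraint from a handful of vertices; the resulting
# dynamicConstraint node can then be parented to a rig control, so keyframing
# the control drags those vertices along while the solver adds the subtle warping.
cmds.select("newspaper_PLANE.vtx[0:10]")
mel.eval("createNConstraint transform 0;")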

The falling and wrapping newspapers were created using a MASH ReproMesh (highlighted in green, below) and a merge node. Originally I had planned to use nCloth, but MASH ended up being much easier to control. There was some clipping within the geometry, so I manually animated three newspapers to block most of the problem areas (highlighted in white, below).


Resources:


For the newspaper print, I knew I wanted to design it myself and include references to my classmates' projects. Above (middle) is the first experiment with mapping an image made in Photoshop onto the newspaper plane (left). On the right is one of the final shots with newspapers wrapped around Blobert.

Below are the final outside and inside spreads that were mapped onto each side of the plane, made in InDesign and Photoshop. The headings and subheadings reference my classmates' projects, and the body text is taken from my project proposal. The images within the newspaper were quickly thrown together in Maya based on shots from those respective projects.

Click each of the articles to view that artist's website!

Newspaper outside and inside spreads.


Sandwiches & MASH Dynamics

For the sandwiches, I started out by sculpting and assembling the individual ingredients. I initially tried to animate them with nCloth so I could get the deli meats to fold dynamically, but I ran into too many problems with objects clipping through each other (top left), and the simulation times began to rack up. So I decided to switch to MASH dynamics, which calculates much faster at the cost of the ingredients remaining stiff.
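As a sketch of what that switch looks like (ingredient names are placeholders, and the real scene has more setup on the dynamics and solver nodes):

import maya.cmds as cmds
import MASH.api as mapi

# Build a MASH network from the sandwich ingredients, then add a Dynamics node
# so the Bullet solver handles the falling and stacking instead of nCloth.
cmds.select("bread_GEO", "lettuce_GEO", "deliMeat_GEO", "tomato_GEO")
network = mapi.Network()
network.createNetwork(name="sandwichMASH")
network.addNode("MASH_Dynamics")   # ingredients stay rigid, but it simulates far faster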



POST-PRODUCTION

After all of the frames were rendered out of Maya, I composited them in After Effects. Here I added colour correction, motion tracking, text, sound, time remapping, a 3D camera, and depth of field, and used masks to fix animation errors. Below is an example of the layers of renders and effects that went into a single frame of the final animation.

After Effects compositing breakdown.


BLOOPERS

To thank you for scrolling all the way to the bottom, here is a blooper reel of many of the mistakes I managed to record during the production of this project. Many other mistakes were made, of course, but they were not as visually entertaining. Enjoy!

Thanks for reading! 
