
Custom Rasterizer (C++)

* UPenn policy restricts public sharing of source code, so the repository isn’t included here, but the descriptions below outline my implementation and technical decisions.

rast.png

I built a complete 3D software rasterizer from scratch, implementing the full graphics pipeline. The project features:


- Rasterization pipeline: world space -> camera space -> clip space -> NDC -> screen space transformations with perspective division


- Core 3D rendering algorithms: bounding box optimization, scanline triangle rasterization, barycentric coordinate interpolation, and Z-buffer depth testing


- Shading: perspective-correct interpolation for vertex attributes (colors, UVs, normals), texture mapping, and Lambert diffuse lighting


- Camera: view and projection matrix construction with field-of-view and aspect-ratio handling
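
The transformation chain above (world -> camera -> clip -> NDC -> screen) can be sketched roughly as follows. The `Vec4`/`Mat4` types and `toScreen` helper are hypothetical stand-ins for illustration, not the project's actual code:

```cpp
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

// Row-major matrix * column vector.
Vec4 mul(const Mat4& a, const Vec4& v) {
    return { a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z + a.m[0][3]*v.w,
             a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z + a.m[1][3]*v.w,
             a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z + a.m[2][3]*v.w,
             a.m[3][0]*v.x + a.m[3][1]*v.y + a.m[3][2]*v.z + a.m[3][3]*v.w };
}

// World space -> camera space -> clip space -> NDC -> screen space.
Vec4 toScreen(const Vec4& worldPos, const Mat4& view, const Mat4& proj,
              int width, int height) {
    Vec4 clip = mul(proj, mul(view, worldPos));  // world -> camera -> clip
    float invW = 1.0f / clip.w;                  // perspective division
    float ndcX = clip.x * invW;                  // NDC coordinates in [-1, 1]
    float ndcY = clip.y * invW;
    float ndcZ = clip.z * invW;
    // Viewport transform: NDC -> pixel coordinates (y flipped so +y is down).
    return { (ndcX + 1.0f) * 0.5f * width,
             (1.0f - ndcY) * 0.5f * height,
             ndcZ,
             invW };  // keep 1/w for perspective-correct interpolation later
}
```

Carrying 1/w along with the screen position is a common trick so that later stages can do perspective-correct attribute interpolation without recomputing it.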


It successfully rendered 3D scenes, including custom-modeled and textured assets, with proper depth sorting and lighting (such as the coffee scene on the left). I also implemented efficient edge-intersection algorithms and proper normal transformation for accurate shading calculations.
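
On normal transformation: normals can't simply be multiplied by the model matrix, because non-uniform scale would skew them off the surface; the standard fix is to transform them by the inverse-transpose of the model matrix's upper-left 3x3. A minimal sketch with hypothetical `Vec3`/`Mat3` types (not the project's actual code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };

// The cofactor matrix divided by the determinant gives (A^-1)^T directly,
// since A^-1 = cofactor(A)^T / det(A).
Mat3 inverseTranspose(const Mat3& a) {
    // 2x2 determinant of rows r0,r1 and columns c0,c1.
    auto minor2 = [&](int r0, int r1, int c0, int c1) {
        return a.m[r0][c0] * a.m[r1][c1] - a.m[r0][c1] * a.m[r1][c0];
    };
    Mat3 cof = {{
        {  minor2(1,2,1,2), -minor2(1,2,0,2),  minor2(1,2,0,1) },
        { -minor2(0,2,1,2),  minor2(0,2,0,2), -minor2(0,2,0,1) },
        {  minor2(0,1,1,2), -minor2(0,1,0,2),  minor2(0,1,0,1) }
    }};
    // Determinant by cofactor expansion along the first row.
    float det = a.m[0][0]*cof.m[0][0] + a.m[0][1]*cof.m[0][1]
              + a.m[0][2]*cof.m[0][2];
    for (auto& row : cof.m)
        for (auto& v : row) v /= det;
    return cof;
}

// Transform a normal by the inverse-transpose and re-normalize it.
Vec3 transformNormal(const Mat3& it, const Vec3& n) {
    Vec3 r = { it.m[0][0]*n.x + it.m[0][1]*n.y + it.m[0][2]*n.z,
               it.m[1][0]*n.x + it.m[1][1]*n.y + it.m[1][2]*n.z,
               it.m[2][0]*n.x + it.m[2][1]*n.y + it.m[2][2]*n.z };
    float len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z);
    return { r.x/len, r.y/len, r.z/len };
}
```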


Written in Qt Creator


Languages: C++


Overall flow (How do we get from a 3D model file to something on screen?):

  1. Load polygons from a JSON file, which provides vertex positions, colors, UVs, and normals

  2. Initialize camera with position, FOV, near/far planes

  3. Create the view matrix (camera transform) and projection matrix (perspective), and initialize Z-buffer (depth buffer)

  4. Transform each individual vertex through the pipeline from world space to screen space

  5. Triangle rasterization: compute a bounding box around each triangle; for each row y in the bounding box, find the left and right edge intersections and fill the pixels between them (scanline rasterization); then, for each pixel inside the triangle, compute barycentric weights, which measure how close the pixel is to each vertex and are used for interpolation -- "how do I smoothly blend vertex data across a triangle's surface?"

  6. For each pixel to be drawn: perspective-correct interpolation (interpolate Z to get this pixel's depth, then interpolate UVs, normals, colors, etc. while accounting for perspective distortion); the Z-buffer test (if this fragment is closer than the stored depth, update the Z-buffer and continue shading; otherwise it would be covered up anyway); texture sampling (use the interpolated UV coordinates to sample a color from the provided texture); and Lambert lighting (dot the interpolated normal with the light direction and apply the diffuse formula with an ambient term).

  7. Return final rendered image

Custom asset I modeled (Maya) and textured (Substance Painter) to test my rasterizer:

textures.png
textureinuv.png
RenderAO.jpg
RenderLambert.jpg

Animation, poses, motion study

1minutes.jpg
Sketches.jpg

1 min. life drawings + pose drawings (with emphasis on line-of-action)

Body dynamics, posing character rigs with expression

Stella Poses.jpg
Poses2.jpg

Character body mechanics animation studies
