Computer Graphics by Peter Kutz
Hi! My name is Peter Kutz. I'm currently (as of 2013) a student at the University of Pennsylvania majoring in Digital Media Design. On this page you'll find some samples of my work. Click here to view my demo reel and resume.
Photorealizer: Physically Based Renderer
Rendered in my path tracer, this image features global illumination, motion blur (translational and rotational), depth of field, rough diffuse bridge surfaces using the Oren-Nayar reflectance model, simulated car paint with clear coat, HDR environment map, and much more. 720p resolution. Click to view actual size.
I've been working on my own physically based 3D renderer since summer 2010. I've named it Photorealizer. My goal from the start has been to achieve photorealism, although at the beginning I had no idea how to do that. I wrote it from scratch in C++, including designing the architecture. It's highly object-oriented, with around 150 classes currently. In terms of the code, I've focused on simplicity, elegance, and readability. It's been rewarding to develop it and produce increasingly realistic and varied images, and it's been a great vehicle for learning about light, color, physics, math, software design, creative thinking, and more. Simulating the real world has given me a deeper understanding and appreciation of the real world.
To see my latest progress, check out my new blog: photorealizer.blogspot.com
Monte Carlo path tracing. HDR scene-linear floating-point RGB rendering. Halton sequence quasi-random number generator for many operations, and Mersenne Twister pseudo-random number generator (part of TR1 and C++11, and much better than rand()) for others. Photon mapping, supporting diffuse interreflection and reflected and refracted caustics, with a templatized kd-tree, final gathering, and an irradiance cache.
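As a sketch of how a Halton sequence works: sample i in base b is generated by reflecting the base-b digits of i about the radix point, giving well-distributed points in [0, 1). A minimal version (function name is mine, not necessarily Photorealizer's):

```cpp
#include <cassert>

// i-th Halton sample in the given (prime) base: reverse the base-b
// digits of i and place them after the radix point.
double halton(int i, int base) {
    double result = 0.0;
    double f = 1.0;
    while (i > 0) {
        f /= base;
        result += f * (i % base);
        i /= base;
    }
    return result;
}
```

Pairing two coprime bases (e.g. 2 and 3) gives a 2D low-discrepancy point set, useful for pixel or lens sampling.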
Importance sampling for all BRDFs using various methods including inverse transform sampling and rejection sampling. Multiple importance sampling for direct illumination. HDR environment map importance sampling. Importance sampling to choose the number of samples taken of each light. Importance sampling based on perceptual lightness (CIE L*a*b* L*) of surface reflectance color.
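For example, inverse transform sampling of a Lambertian BRDF draws directions proportional to cos θ. A common closed form (a hedged sketch with my own names, not necessarily Photorealizer's exact code):

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

struct Vec3 { double x, y, z; };

// Map two uniform samples in [0,1) to a cosine-weighted direction on
// the hemisphere around +z (pdf = cos(theta) / pi): pick a point on
// the unit disk, then project it up onto the hemisphere.
Vec3 sampleCosineHemisphere(double u1, double u2) {
    double r = std::sqrt(u1);           // radius on the unit disk
    double phi = 2.0 * kPi * u2;        // azimuth
    double x = r * std::cos(phi);
    double y = r * std::sin(phi);
    double z = std::sqrt(1.0 - u1);     // lift onto the hemisphere
    return {x, y, z};
}
```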
Multiple BRDFs, including physically based specular (Cook-Torrance) and diffuse (Oren-Nayar) microfacet models. Realistic energy-conserving diffuse/specular behavior. Subsurface scattering. MTL material loading, and custom material files. Texture mapping for various material properties, and normal mapping (using barycentric coordinates for triangles, and inverse bilinear interpolation for quads). 3D textures. Fresnel reflection and transmission with refraction (with clear coat and one-layer window options), and attenuation through colored clear materials. Dispersion.
Spherical HDR environment maps (using OpenEXR images), sun, spherical lights, and rectangular area lights, all able to be sampled directly. Ambient occlusion.
Color and Image Processing:
Anti-aliasing with multiple reconstruction filters (including Gaussian). Adaptive sampling. sRGB decoding and encoding for input and output images respectively. sRGB-encoded gamma-corrected LDR output and linear HDR output (saves PNG and OpenEXR images respectively). S-shaped transfer curve for LDR output images. Dithering for LDR output images. Progressive sampling (unlimited passes over an image until the desired quality level is reached).
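The sRGB transfer functions mentioned above are piecewise: a linear toe near black and a power curve elsewhere. A sketch of the standard formulas:

```cpp
#include <cmath>

// Encode a linear-light value in [0,1] to sRGB (e.g. before saving a PNG).
double linearToSRGB(double c) {
    return (c <= 0.0031308) ? 12.92 * c
                            : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
}

// Decode an sRGB-encoded value back to linear light (e.g. after loading a texture).
double srgbToLinear(double c) {
    return (c <= 0.04045) ? c / 12.92
                          : std::pow((c + 0.055) / 1.055, 2.4);
}
```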
Flat lens depth of field with polygonal aperture (and bokeh). Stereo 3D using parallel axis asymmetric frustum perspective projection, and rendering straight to anaglyph. Motion blur for translational and rotational motion. Natural vignetting (using the cosine fourth "law" of illumination falloff).
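Natural vignetting follows from geometry alone: for a pixel whose ray makes angle θ with the optical axis, illumination falls off as cos⁴θ. A sketch (parameter names are mine):

```cpp
#include <cmath>

// Cosine-fourth illumination falloff for a flat sensor.
// `offAxisDist` is the distance from the image center on the sensor;
// `focalLength` is the distance from the aperture to the sensor.
double naturalVignetting(double offAxisDist, double focalLength) {
    double cosTheta = focalLength /
        std::sqrt(focalLength * focalLength + offAxisDist * offAxisDist);
    return cosTheta * cosTheta * cosTheta * cosTheta;
}
```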
Bounding volume hierarchy (BVH) acceleration structure using the surface area heuristic (SAH). Transformations and instancing. Support for motion blur.
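The SAH estimates the cost of a candidate partition: the probability that a random ray through a node hits a child is proportional to the child's surface area, so the cost is a traversal cost plus the area-weighted intersection work in each child. A sketch (the cost constants here are illustrative):

```cpp
// Surface area heuristic cost for splitting a node into two children.
// The chance a random ray through the parent hits a child is
// proportional to the child's surface area.
double sahCost(double areaLeft, double areaRight, double areaParent,
               int countLeft, int countRight,
               double costTraverse = 1.0, double costIntersect = 2.0) {
    return costTraverse
         + (areaLeft  / areaParent) * countLeft  * costIntersect
         + (areaRight / areaParent) * countRight * costIntersect;
}
```

A builder evaluates this for candidate split planes and keeps the cheapest, falling back to a leaf when no split beats the leaf's own intersection cost.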
Various primitives including polygons, spheres, cylinders, metaballs, bitmap height fields, cubes, and an infinite ground plane. OBJ model loading. Triangulation for all polygons (using ear clipping for polygons with more than four sides). Smoothing using vertex normal interpolation. Volumetrics (with ray marching, cubic interpolation for densities, direct illumination, and some scattering).
Built-in z-buffer rasterizer. Perspective-correct texture mapping.
Multithreading and parallel rendering in tiles. Uses OpenMP for cross-platform compatibility (previously used Grand Central Dispatch on Mac).
Qt GUI showing image as it renders (previously Mac Cocoa GUI written in Objective-C).
Written Almost Entirely from Scratch:
There are only a few pieces of code I used that I did not write from scratch: the C++ standard library, the Qt and Cocoa GUI frameworks, stb_image and stb_image_write for reading and writing bitmap images, the OpenEXR library for reading and writing EXR images, a few of the operations in my custom templatized linear algebra library, a function for efficiently checking whether a ray intersects an axis-aligned bounding box (I had written my own first; this one is just heavily optimized for speed), and some code to generate Perlin noise (only used for 3D textures and volumetrics).
None of the sample images on this page have been post-processed outside of my path tracer (except the rasterized version of the muscle car on the bridge, which I scaled down).
I designed and created this image for the 2012 Penn Computer Graphics
holiday card. I modeled the snow globe myself. See this blog post
for more information. Click to view actual size.
This image features deep path tracing, a teacup model with a Fresnel clear coat to simulate glaze, a spoon model, an HDR environment map, depth of field blur, texture mapping, and supersampling with Gaussian reconstruction filter. Also notice the lack of moiré patterns. Click the image to see the 720p version, and click here to see last year's original and slightly noisier version.
This image features deep path tracing, colored glass bunny with Fresnel reflection and refraction and attenuation, rough diffuse floor using the Oren-Nayar reflectance model, HDR environment map, depth of field blur, and supersampling with Gaussian reconstruction filter. I also applied a subtle curve (in Photorealizer; nothing in post) to raise the black level and increase contrast slightly. Rendered in 720p. Click to view full size. And click here to view last year's version of this image—a smaller and noisier version that had a couple of significant problems: the purple glass turned to clear glass after the first internal bounce, and it used the original Stanford bunny model, which has a few huge holes in the bottom.
This image features Monte Carlo subsurface scattering. For this render I went for the look of an exotic blue stone: I made the material moderately backward scattering and gave it a pretty high index of refraction. For more information and comparison images, click here. Click image to view actual size.
This image features subsurface multiple scattering accomplished using a dipole diffusion approximation, based on a precomputed hierarchical point cloud of irradiance samples. It also features environment map importance sampling. This image would have been nearly impossible to render without a combination of approximate multiple scattering and environment map importance sampling. For more information, click here. Click image to view actual size.
This image features ground glass created using a state-of-the-art microfacet model for transmission through rough surfaces. For more information and comparison images, click here. Click image to view actual size.
This image features diamonds with physically based dispersion using the Sellmeier equation to determine refractive index based on wavelength. Each diamond is a transformed instance of the same mesh. For more information and comparison images, click here. Click image to view actual size.
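The Sellmeier equation gives refractive index as a function of wavelength: n²(λ) = 1 + Σᵢ Bᵢλ² / (λ² − Cᵢ). As an illustration, here is the three-term form with published coefficients for BK7 glass (an assumption for the example; the render itself uses diamond's data):

```cpp
#include <cmath>

// Refractive index from the three-term Sellmeier equation.
// Wavelength is in micrometers; B and C are material coefficients.
double sellmeierIndex(double lambda, const double B[3], const double C[3]) {
    double l2 = lambda * lambda;
    double n2 = 1.0;
    for (int i = 0; i < 3; ++i)
        n2 += B[i] * l2 / (l2 - C[i]);
    return std::sqrt(n2);
}
```

For BK7, B = {1.03961212, 0.231792344, 1.01046945} and C = {0.00600069867, 0.0200179144, 103.560653} µm² give n ≈ 1.5168 at 587.6 nm.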
Motion blur. This is a newer picture of the same car shown below, now with more metallic paint. The car is moving and the wheels are spinning. This motion blur uses an instantaneous shutter; I could support a slow-opening shutter by importance sampling a curve over time. Also notice the nice anti-aliasing and lack of moiré patterns on the distant checkered ground.
The same car as in the image above, except this one has a less metallic paint. This image features deep path tracing, a car model with custom materials file, color and transparency texture mapping, an HDR environment map, and supersampling with Gaussian reconstruction filter.
This image features deep path tracing, direct illumination area light, Cook-Torrance specular reflections, textured floor, clear glass ball, light-emitting material, HDR environment map (subtle), and supersampling with Gaussian reconstruction filter. Notice the caustics below the glass ball.
This image features deep path tracing, smooth diffuse surfaces, direct illumination sunlight, and supersampling with Gaussian reconstruction filter. This image shows diffuse interreflection very clearly.
This is a red / cyan anaglyph that can be viewed using red / cyan 3D glasses (left eye red, right eye cyan). The stereo 3D was accomplished using parallel axis asymmetric frustum perspective projection, and the image was rendered directly to anaglyph with no post-processing. Lit with spherical lights (sampled directly), and an HDR environment map.
Ambient occlusion only. Faster than global illumination but much less realistic. Pure white car, ground, and sky. 720p resolution. Click to view actual size.
Instancing. 1272 airplanes with a total of 100 million polygons. Close-up.
Instancing. 1272 airplanes with a total of 100 million polygons. Wide shot showing the entire cloud.
Rasterized version of the muscle car on bridge render at the top of the page (the contents and composition are identical to that image). I built a z-buffer rasterizer with perspective-correct texturing from scratch, directly into my renderer. I did post-process this image: I scaled it down to 25% of its original size to anti-alias it, because I haven't built anti-aliasing into the rasterizer. Click to view full size.
Lit by an HDR environment map that features very small, very bright spotlights (Grace Cathedral). Rendered using BRDF sampling only. Most paths don't hit the spotlights. Rendered in 8 minutes.
Same environment map as the image above, but now rendered with HDR environment map importance sampling. Sample locations on the environment map (a spherical area light at infinite distance) are drawn from the light's PDF. I compute the light's PDF once and store it in discrete form, then use efficient inverse transform (inverse CDF) sampling with a binary search (custom, not standard library) for lookups. Dramatically lower variance than the naive approach, but rendered in the same amount of time (8 minutes) (and after rendering this I made a couple improvements that make it even faster).
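The discrete inverse-CDF sampler can be sketched as follows (a simplified 1D version with my own names; the real environment map sampler works over a 2D grid of texel luminances):

```cpp
#include <vector>

// Sample an index in proportion to a set of nonnegative weights, using
// a precomputed CDF and binary search (inverse transform sampling).
struct DiscreteSampler {
    std::vector<double> cdf;

    explicit DiscreteSampler(const std::vector<double>& weights) {
        double total = 0.0;
        for (double w : weights) total += w;
        double running = 0.0;
        for (double w : weights) {
            running += w / total;
            cdf.push_back(running);
        }
    }

    // Map a uniform u in [0,1) to the first index whose CDF exceeds u.
    int sample(double u) const {
        int lo = 0, hi = static_cast<int>(cdf.size()) - 1;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (cdf[mid] <= u) lo = mid + 1;
            else               hi = mid;
        }
        return lo;
    }
};
```

An index drawn this way must be weighted by 1/pdf in the estimator to keep the result unbiased.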
Depth of field. Six-sided aperture. Hexagonal bokeh.
No dithering. Visible banding, especially in the gradient sky. I rendered this image in only 64 shades of gray to make the banding more obvious.
Dithering. Rendered in only 64 shades of gray, the same color palette as the image above.
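Dithering works by replacing deterministic rounding with a per-pixel noise offset, so quantization error turns into fine grain instead of banding. A minimal sketch (Photorealizer's exact scheme may differ):

```cpp
#include <cmath>

// Quantize a value in [0,1] to `levels` shades. `noise` is a per-pixel
// value in [0,1); averaged over many pixels, the output equals the input,
// which is what hides the banding.
double ditherQuantize(double v, int levels, double noise) {
    double scaled = v * (levels - 1);
    return std::floor(scaled + noise) / (levels - 1);
}
```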
No normal mapping.
Scattering in a volume. I designed a simple physically based Monte Carlo scattering algorithm based on spherical particles (which are much larger than the wavelength of the scattered light). The volume in this case is a frame from my smoke simulator.
Photon mapping and adaptive sampling. This image took about 5 minutes to render on my laptop at 1080x1080. Click to view actual size. I've improved my photon mapping system since I made this image. For details and images about my updated photon mapping system, see this blog post.
Bitmap height map.
Cloud with shadow. I've made a few improvements and fixes to my volume rendering system since creating this image. For the latest quality, see my smoke sim video below.
Texture mapping arbitrary quads using inverse bilinear interpolation. Click to view actual size. This image took 4 seconds to render with a box filter and adaptive sampling with a minimum of 1 and a maximum of 64 samples per pixel and no jittering.
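Inverse bilinear interpolation recovers the (u, v) parameters of a point inside an arbitrary quad, which in general requires solving a quadratic. A 2D sketch adapted from a well-known formulation (not necessarily Photorealizer's exact code):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

static double cross2(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// Find (u, v) such that bilinear interpolation of corners a, b, c, d
// (in order around the quad) at (u, v) reproduces point p.
Vec2 invBilinear(Vec2 p, Vec2 a, Vec2 b, Vec2 c, Vec2 d) {
    Vec2 e = {b.x - a.x, b.y - a.y};
    Vec2 f = {d.x - a.x, d.y - a.y};
    Vec2 g = {a.x - b.x + c.x - d.x, a.y - b.y + c.y - d.y};
    Vec2 h = {p.x - a.x, p.y - a.y};

    double k2 = cross2(g, f);
    double k1 = cross2(e, f) + cross2(h, g);
    double k0 = cross2(h, e);

    if (std::abs(k2) < 1e-9) {
        // Opposite edges are parallel: the equation is linear in v.
        double v = -k0 / k1;
        double u = (h.x - f.x * v) / (e.x + g.x * v);
        return {u, v};
    }
    // General case: quadratic in v; pick the root that lands in [0,1].
    double w = std::sqrt(k1 * k1 - 4.0 * k0 * k2);
    double v = (-k1 - w) / (2.0 * k2);
    double u = (h.x - f.x * v) / (e.x + g.x * v);
    if (u < 0.0 || u > 1.0 || v < 0.0 || v > 1.0) {
        v = (-k1 + w) / (2.0 * k2);
        u = (h.x - f.x * v) / (e.x + g.x * v);
    }
    return {u, v};
}
```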
My very first ray-traced image.
List coming soon (or at least at some point). Expect lots of Wikipedia articles, lots of SIGGRAPH papers, lots of websites, and a few books. It's also worth noting that, when possible, I've tried to figure things out on my own, because it's more fun and I feel like I tend to learn more that way, gaining a deeper, more intuitive understanding of things in the process of figuring them out.
I developed a physically based sky renderer in C++. To make pictures of the sky that are as realistic as possible, I simulated light transport in the Earth's atmosphere using real physics and data. Check out my project blog for all of the details: skyrenderer.blogspot.com
GPU Path Tracer
Karl Li and I created a high performance, interactive, GPU-based, physically based, unbiased path tracer for CIS 565: GPU Programming at the University of Pennsylvania. We wrote it in CUDA. Check out our project blog for details: gpupathtracer.blogspot.com
An image rendered in our GPU path tracer.
Semi-Lagrangian fluid simulation written in C++. I implemented advection, solid wall boundaries, vorticity confinement, buoyancy, modified Euler / Heun's RK2 time integration, and pressure projection using a preconditioned conjugate gradient solver.
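Semi-Lagrangian advection traces each grid sample backward through the velocity field and interpolates the old field at that point, which keeps the scheme unconditionally stable. A minimal 2D sketch (grid layout and names are mine, not the simulator's):

```cpp
#include <algorithm>
#include <vector>

// Bilinearly sample field q (N x N, row-major) at a fractional position.
double sampleBilinear(const std::vector<double>& q, int N, double x, double y) {
    x = std::min(std::max(x, 0.0), N - 1.0);
    y = std::min(std::max(y, 0.0), N - 1.0);
    int i = std::min(static_cast<int>(x), N - 2);
    int j = std::min(static_cast<int>(y), N - 2);
    double fx = x - i, fy = y - j;
    double bottom = (1 - fx) * q[j * N + i]       + fx * q[j * N + i + 1];
    double top    = (1 - fx) * q[(j + 1) * N + i] + fx * q[(j + 1) * N + i + 1];
    return (1 - fy) * bottom + fy * top;
}

// One semi-Lagrangian advection step of scalar field q through velocity (u, v).
std::vector<double> advect(const std::vector<double>& q,
                           const std::vector<double>& u,
                           const std::vector<double>& v,
                           int N, double dt) {
    std::vector<double> out(q.size());
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i) {
            double x = i - dt * u[j * N + i];  // trace backward in time
            double y = j - dt * v[j * N + i];
            out[j * N + i] = sampleBilinear(q, N, x, y);
        }
    return out;
}
```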
Realistic Liquid Simulation using SPH
I have worked with ActionScript 3.0 extensively, including working on many freelance projects. I have been a subcontractor on projects for Honda, Sony Pictures, and HP, among others. I have done raw PCM audio manipulation, and created several full-screen deep-zoom image galleries. Click here to see some of my Flash work.
Most of the samples were programmed by me and designed by others. Long before I did freelance Flash development, I spent a ton of time in middle school and high school making Flash games and animations for fun.
Real-time interactive 3D modeling tool written in C++ with Qt for the user interface and OpenGL for graphics. This was a group project between me and two others. I was in charge of data structures, and also handled most of the architecture of the program, including the scene graph. I created a half-edge mesh data structure with many editing operations. I also added triangulation using ear clipping, and quadrangulation, which works by triangulating and then combining adjacent triangles when the angle between them is below a certain threshold. I helped with the OBJ 3D model importing, including optimizing it, and I wrote the code to save models as OBJs. I also helped with materials and lighting. We used Subversion for version control. 9262 lines of code in 24 from-scratch custom classes.
Here are the two entries I made for Stephen Colbert's Green Screen Challenge
in high school. Clips from both (along with my name) were featured as part of a montage on the results show, with the first video below featured prominently as the finale of the montage and the longest clip in the montage. These videos were created primarily in Adobe After Effects and edited in Sony Vegas.
Volumetric Cloud Renderer
This is a volumetric cloud renderer that I wrote in C++. It loads configuration text files, generates volumetrics based on Perlin noise, and uses ray marching and the Beer–Lambert law to render. (I didn't write the Perlin noise generator used in this project; however, I did write a Perlin noise generator from scratch in high school (along with a ton of Flash games, a chess engine, an unbeatable tic-tac-toe game using minimax and alpha–beta pruning, fractal generators, and lots more).) I have since merged this renderer into Photorealizer.
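Ray marching with the Beer–Lambert law accumulates optical depth along the ray and exponentiates it to get transmittance. A sketch (names and the callback interface are mine):

```cpp
#include <cmath>
#include <functional>

// Transmittance along a ray segment [tMin, tMax] through a medium with
// extinction coefficient `sigma` and spatially varying `density`,
// estimated by midpoint ray marching.
double transmittance(const std::function<double(double)>& density,
                     double tMin, double tMax, double sigma, int steps) {
    double dt = (tMax - tMin) / steps;
    double opticalDepth = 0.0;
    for (int i = 0; i < steps; ++i) {
        double t = tMin + (i + 0.5) * dt;    // midpoint of each step
        opticalDepth += sigma * density(t) * dt;
    }
    return std::exp(-opticalDepth);          // Beer-Lambert law
}
```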
I made a strange attractor renderer based on math found here. I implemented it on top of my Photorealizer framework so that I could take advantage of Photorealizer's many image-processing capabilities.
This particular image rendered overnight, with 41,024,000,000 iterations. Click to view actual size.
I'm very interested in color. One of the things that makes color so interesting is the way it can be looked at from so many different angles: physics, optics, colorimetry, perception, philosophy, consciousness, aesthetics, etc. I thought it would be cool to make some high-quality visualizations of various color spaces, like the one below.
Top-down view of the CIE L*a*b* (CIELAB for short) color solid showing only the sRGB gamut—the colors that are displayable on a typical computer screen. For more information and images, click here. Click to view actual size.