1 The left image, known as the Cornell Box,1 demonstrates the color bleeding effects of interreflection faithfully simulated by the radiosity method. The right image, the Florence Courtyard (Piazza SS. Annunziata), was modeled by Andrzej Zarzycki and rendered using Lightscape radiosity software.
2 Sunflowers and stream. These images are snapshots from layered depth images.3 The viewer can, in real time, render these and nearby views. The LDIs were created and rendered by Jonathan Shade. The data was supplied courtesy of Oliver Deussen.
capture hairs on a stuffed animal). Even if it did, image synthesis algorithms wouldn't be able to deal with such rich detail in anything close to real time.

IBR combines these computer vision and graphics technologies by asking the question, what can I do to render new images directly from a set of images plus limited geometric information that computer vision algorithms can supply? In this way, we are freed from having to create detailed mathematical representations of the world, yet we can potentially create very realistic images. Lumigraphs2 are one example, constructed from multiple images. The Lumigraph, much like a digital hologram, is a unified representation of what an object looks like from all directions.

The sunflower and stream images shown in Figure 2 were rendered in real time from another data structure referred to as a layered depth image.3 An LDI falls somewhere between a Lumigraph and an image. It differs from a simple image in that it contains, for each pixel, a list of color values plus depth. The depth value provides the means to induce parallax as the viewpoint changes. As you move your head from side to side, things far away will stay still, while nearby objects will move back and forth in your visual field. Multiple, per-pixel, color-depth pairs fill in the gaps created when an occluded surface becomes visible.

Given all the interesting recent work in IBR, why title this section "a temporary refuge"? IBR provides a tantalizing means to capture very complex real-world geometry plus lighting and a way to leverage slow global illumination methods. However, IBR provides only limited help with imagination amplification. IBR by its nature doesn't help us ask what-if questions. Having to start from images to create new images puts us in a chicken-and-egg bind.

The hope
Despite the frustration just expressed, I'm still both amazed by the progress computer graphics has made to date and very hopeful for its future. Recent movies, such as Jurassic Park, Titanic, and Star Wars Episode One, have relied heavily on computer graphics rendering with great success. I would challenge anyone to spot when the dinosaurs are pure rendering and not filmed physical models.

The accelerating progression of ever-higher computational and rendering speeds at ever-lower costs will, in the not too distant future, bring real-time ray tracing within reach of an average PC user. This will fundamentally change the look we expect from computer graphics.

I also hope this will inspire a newly energized effort at developing tools for physical simulation. The power of procedural modeling and physically based animation and rendering will help expand our ability to explore new ideas. However, none of these advances will put the power of imagination amplification in the hands of nonspecialists. Better, easier, more intuitive tools need to be built to help us express our imagination. One of my favorite Siggraph papers of recent years is Igarashi et al.'s "Teddy."4 This paper presents the type of intuitive interface to modeling that, if extended more broadly, will bring the ability to create imaginary worlds much closer to everyone.

In addition to new technologies, people need to be empowered to participate in the use of new tools. Educational curricula should provide more focus on the kinds of skills needed to leverage this new computational power. The focus on traditional reading and writing skills should be expanded to include formats that are interactive and make increasing use of multimedia. We don't even have verbs to fully describe either the process of authoring or viewing such rich documents. Whatever name we choose for these processes, we need to teach these skills. ■

References
1. M. Cohen and D. Greenberg, "The Hemicube," Proc. Siggraph, ACM Press, New York, 1985, pp. 31-40.
2. S. Gortler et al., "The Lumigraph," Proc. Siggraph, ACM Press, New York, 1996, pp. 43-54.
3. J. Shade et al., "Layered Depth Images," Proc. Siggraph, ACM Press, New York, 1998, pp. 231-242.
4. T. Igarashi et al., "Teddy: A Sketching Interface for 3D Freeform Design," Proc. Siggraph, ACM Press, New York, 1999, pp. 409-416.
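As a concrete footnote to the LDI discussion above: the idea of per-pixel lists of color-plus-depth samples, and the depth-dependent parallax they induce, can be sketched in a few lines of Python. This is a toy illustration on a 1D scanline, not Shade et al.'s actual implementation; the names (Sample, render_scanline) and the simple round-and-z-buffer warp are assumptions for clarity.

```python
# Toy 1D layered depth image: each pixel holds a LIST of (color, depth)
# samples, so surfaces hidden from the reference view are still stored.
from dataclasses import dataclass

@dataclass
class Sample:
    color: tuple   # (r, g, b)
    depth: float   # distance from the reference camera

def render_scanline(ldi, view_offset, width, focal=1.0):
    """Re-render a scanline LDI from a laterally shifted viewpoint.

    Each sample shifts by a disparity proportional to view_offset / depth:
    nearby samples move a lot, distant ones barely move (parallax).
    A per-pixel z-buffer keeps the nearest sample that lands on each pixel.
    """
    out = [None] * width
    zbuf = [float("inf")] * width
    for x, samples in enumerate(ldi):
        for s in samples:
            disparity = focal * view_offset / s.depth
            nx = round(x + disparity)       # destination pixel in new view
            if 0 <= nx < width and s.depth < zbuf[nx]:
                zbuf[nx] = s.depth
                out[nx] = s.color
    return out

# A near red sample at pixel 1 occludes a far blue background. From the
# reference view the output is blue-red-blue; shifting the viewpoint slides
# the near sample over while the background stays put, and the stored
# second layer fills the gap the red sample vacates.
red, blue = (255, 0, 0), (0, 0, 255)
ldi = [[Sample(blue, 10.0)],
       [Sample(red, 1.0), Sample(blue, 10.0)],
       [Sample(blue, 10.0)]]
print(render_scanline(ldi, 0.0, 3))  # reference view
print(render_scanline(ldi, 1.0, 3))  # shifted view: occluded layer revealed
```

The extra per-pixel layers are exactly what lets the gap behind the moving foreground sample be filled without holes, which is the point of an LDI over a single image plus depth map.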
Contact Cohen at mcohen@microsoft.com.
IEEE Computer Graphics and Applications 55