
Appendix 2 The Digital Revolution

Today's visual effects are dominated by digital solutions. Oddly enough, the introduction of the computer was not the key to the beginning of the digital effects world. Very sophisticated computers were already in use in motion-control technology. Digital scanning of the negative, digital printing back to the negative, and software for the industry were the driving forces that launched the digital revolution. The computer was absolutely necessary, but software, or the lack of software, was the real limiting factor. Once all three technologies came to a certain level of maturity, the industry took off. There seemed to be no end to what could be done. There still seems to be no end in sight.

Usage of digital solutions is broad and cannot be confined to a single domain in the effects industry. It is clear, though, that the optical printer's function has been replaced entirely by digital compositing techniques. Many, if not all, of the limitations of the optical printer were gone: the rules of the digital world allowed an almost unlimited set of compositing solutions, free of “generation loss” and of restrictions on the number or complexity of layers, and tremendously faster as well.

Compositing was not the only process that took off. The actual process of creating realistic virtual imagery that ranged from spaceships and stormy waters to creatures that have and have not been seen before has become commonplace. The digital techniques have no bounds, and year after year we see the results on the motion picture screen.

The Digital Scanner

Digital scanning, the process of converting the imagery, in this case many frames of negative, into a computer-readable digital form, was a necessary component to launch the digital effects world. At the time that the digital scanners were being developed, almost all effects originated from film, negative to be specific, and needed to end up on film, again negative, for the motion picture industry. Early digital scanners varied and were very experimental. Almost all of them had a film-transport mechanism that carried the negative and some kind of illumination system/sensor system. In the early days, they were all custom built, and it was not until the technology matured that a few companies, and only a few companies, began to commercially manufacture digital scanners.

A key aspect that has always been important for a digital scanner is to ensure that the information in the negative is fully retained in the digital form. This can be characterized by having enough pixel resolution and enough color dynamic range and resolution. Other important aspects are speed of the digitization process, consistency over time, and ease of use.

Although early scanners took on different forms, using such technologies as the flying-spot scan, well known in the video industry, and early solid-state sensors, primarily developed for use in spy satellites and observatory telescopes, today's commercial scanners all use solid-state trilinear array sensors and provide fast, accurate, and efficient digital scan data.

The Digital Printer

The bookend component to the digital scanner is the digital printer. The digital printer imprints the digital data back onto the film negative. As with the digital scanners, this technology started with home-brewed prototypes that were manufactured by individuals and small companies. The gas-laser scanning/illumination systems and cathode-ray-tube (CRT) phosphor systems were the basis of the first digital printers. In time, the later solid-state laser systems virtually replaced all other technologies.

All these systems had the same goal, which could be described in the following simple test. Start with the original negative of some test shot with lots of good shadows and highlights. Digitally scan this information into the digital form. Then re-create the negative using the digital printer. Create a print from both of these negatives using the same exact process at the same exact time, and visually evaluate the results. It should be obvious that the goal is to have them at least look the same. A further test is to rescan the new digitally fabricated negative and mathematically analyze the two images for their differences. The more they are alike, the better the scan/reproduction loop and thus the quality of the imagery.
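
A minimal sketch of that round-trip comparison, assuming the two scans are already available as arrays of pixel values at the same resolution (the 10-bit peak value and the metric names are illustrative assumptions, not a specific facility's test):

```python
import numpy as np

def round_trip_error(original_scan: np.ndarray, rescan: np.ndarray) -> dict:
    """Compare the scan of the original negative with the scan of the
    digitally printed negative; lower RMSE and higher PSNR mean the
    scan/record loop preserves the imagery better."""
    a = original_scan.astype(np.float64)
    b = rescan.astype(np.float64)
    diff = a - b
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    peak = 1023.0  # assumes 10-bit scan data, a common film convention
    psnr = 20 * np.log10(peak / rmse) if rmse > 0 else float("inf")
    return {"rmse": rmse, "psnr_db": psnr,
            "max_abs_error": float(np.max(np.abs(diff)))}
```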

Today's commercial digital printers are dominated by systems that incorporate three solid-state lasers capable of putting out the colors red, green, and blue. Up until about 2002, short-lived and not easily maintained gas-tube lasers dominated the market. Before that, there were CRT systems. But it was the introduction of the “blue” solid-state laser in the 1990s that allowed the current systems to come down in cost and complexity, resulting in highly reliable designs.

Given the ability to digitally scan the original negative and to produce a negative from digital data, the basis for the digital revolution was set. Manipulating and creating the imagery became synonymous with using computers, a virtually limitless universe from which to work. The process of programming the computers and creating application software to perform the specific tasks began slowly and then exploded, as witnessed every year by the mind-boggling papers and imagery delivered at the industry's premier conference, SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques), sponsored by the Association for Computing Machinery (ACM).

The Digital Tool Set

This limitless void was not easily filled. Application software does not come easily. Software is the set of instructions created by a programmer to run on a computer. Usually software is written to satisfy some function, whether it is your accounting program or the word-processing program that was used to write this document. All these programs have had teams of software engineers creating the final application.

From the beginning, individual visual effects houses wrote their own programs. As the whole field matured, giving way to a base of general knowledge and standards, individual vendors made attempts to write software applications that would be used by more than one company.

Writing your own applications for the visual effects industry requires quite a roster of individuals. First you have to know what you want, or know what the function of the software will be. The digital age of visual effects has certainly been one of discovering exactly what is needed, building those tools, and then, of course, reestablishing the new needs and goals and again building the new tools on top of the older ones. This circle of discovery and application building has not stopped and probably never will. One of the most interesting aspects of visual effects in the digital age is that there seems to be no limit to the discoveries of new technologies.

In addition, to write your own applications, you must have computer scientists, software engineers, and knowledgeable people in the fields of physics, biomechanics, mathematics, and more. The applications to date have spanned a broad set of functions from modeling, animation, lighting, and effects software, to name a few.

Three-Dimensional and Two-Dimensional Applications

The digital tool set can be categorized into two somewhat distinct sets based on the type of data that is handled by the application. The two-dimensional, or 2-D, applications use and typically output only images, that is, an array of pixels of an arbitrary bit depth with an arbitrary number of layers. The remaining applications fall into the three-dimensional, 3-D, type, which typically use 3-D objects in some form to produce either more 3-D data or 2-D data.

2-D Applications

For the visual effects industry, the most important applications are those that composite imagery. This is the heart and soul of the digital revolution. Digital techniques were able to replace the function of the optical printer with these compositing packages. Not only was the entire function of the optical printer replaced, but also the new tool allowed for the field of summing imagery to be expanded well beyond what could be done with the state-of-the-art optical printer.

The following list is not complete but samples most of the established tasks that are in use today within a single application package.

  • Simple A-over-B compositing. The summing of two pictures while controlling the resulting opacity of the B layer (see the code sketch after this list).

  • Unlimited multilayer A-over-B compositing. The summing of multiple pictures; unlike the optical printer, the digital compositing programs do not have a significant limit on the number of layers that can be summed into the picture.

  • Double exposing. Full control of how layers are treated in the summing process.

  • Matte extraction for multicolored backgrounds. Process that replaces the photo/filter matte extraction process of the optical printer with very sophisticated software routines specifically tailored for different extraction problems.

  • Grain removal. The ability to reduce the amount of grain in a photographic image.

  • Grain addition. The ability to add grain to make the photographic image match normal film characteristics; usually added to pure computer-generated objects that when rendered do not include the simulation of the grain.

  • Animation 2-D. Moving layers around in the image.

  • Shape alteration, animated. Changing the shape of an object; step needed for the “morph” effect.

  • Color correction. The ability to fine-tune the color palette of a series of images.

  • Color animation. The ability to color correct over time.

  • Rotoscope. A process of tracing live-action things, most commonly humans or animals, to retrieve that realistic movement into an animation.

  • File conversions. The ability to convert a 2-D data file into one of the many available in the digital world; most packages allow for input and output to these different formats.

  • Blurring effects. Altering the image in many different ways to fake rack focus or motion blur.

  • Time domain alterations. Slowing down or speeding up the imagery.

  • Semi-3-D image plane projection, animated. The ability to project imagery on a 2-D plane that is placed effectively in a 3-D environment and rendered to a 2-D image.

  • Painting. Adding color to imagery.

  • Scripting. The ability of the user to remember and re-create the actions of the composite perfectly each time through scripts, which can take many forms but are usually graphical in nature, thus allowing for an easier convergence to the solution than the optical printer. (Desperately missing in the optical printer age was the ability to list the exact steps involved in a composite and a machine that could operate on the list without failure. This is exactly what the digital revolution provided.)

  • Viewing solutions. The user's ability to see the results of the work in various forms before the imagery is processed to its final output form, whether that be a low-resolution result or some other abbreviated test.
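
As a concrete illustration of the first two items in the list (simple and multilayer A-over-B compositing), here is a minimal sketch assuming premultiplied RGBA images stored as floating-point arrays; production compositing packages wrap this same arithmetic in far richer tools:

```python
import numpy as np

def over(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Composite premultiplied-alpha image A over image B.
    Both arrays are H x W x 4 (RGBA) with values in 0..1."""
    alpha_a = a[..., 3:4]
    return a + b * (1.0 - alpha_a)

def composite_layers(layers: list) -> np.ndarray:
    """Sum an arbitrary stack of layers, topmost first. Unlike the optical
    printer there is no practical limit on layer count and no generation
    loss, because the same digital data is reused for every pass."""
    result = layers[-1]                      # bottom layer
    for layer in reversed(layers[:-1]):      # work upward through the stack
        result = over(layer, result)
    return result
```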

As important as the list of techniques above is the human interface of the application. All of these techniques are housed within a construct called the applications interface (or user interface). Work flow, speed, and the artistic execution of the work are dominated by this user interface design. The single most important development in today's 2-D software is the human interface; it cannot be ignored.

The impact of the 2-D digital world has not yet reached its apex. The whole film postproduction process is being restructured due to the success of the past ten years of 2-D digital techniques. Witness the current trend to either digitize an entire film or shoot the entire film digitally, bypassing the use of the negative altogether. This step allows the previous postproduction steps of color timing, and editorial additions such as fades, to be completed in the digital medium. All these changes are a direct result of the progress of the 2-D digital world and continue to blur the line between digital visual effects and the post-process for a motion picture.

3-D Applications

The digital visual effects world did not just liberate the practitioners from the optical printer but opened up another world, a 3-D world, of tools and applications. Instead of building a miniature spaceship, practitioners could create one on the computer, and it could then be flown (animated) on the computer and finally composited on the computer. Instead of building a large dinosaur and using stop-action photography, practitioners could build, animate, and render one on the computer. A new menu of solutions awaited practitioners.

In a simple world, 3-D computer graphics can be roughly broken down into the following phases: building, prelighting, rigging, animating, and lighting-rendering, followed by a reintegration into original photography, that is, the composite. Like all things in visual effects, all these phases actually overlap, and thus one must plan from the start how the design will affect all these phases.

Building

“3-D” was given this name because most things inhabit a world of three dimensions, typically assigned the nomenclature x, y, and z of the Cartesian coordinate system. Within this system, objects must be described in these three dimensions. This is sometimes called the “model-making” phase and specifically relates to how the object takes up space. There are applications that allow the user to easily build fairly complex systems much in the same way as a computer-aided design (CAD) program does. The end results are usually a list of polygons or a list of mathematical entities called non-uniform rational B-splines (NURBS), which define the surface in three dimensions.
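
A bare-bones illustration of the polygon representation described above, using a hypothetical unit cube: vertices are points in x, y, z, and each face simply lists the indices of its corners. Real packages add normals, texture coordinates, and much more.

```python
# Eight corner vertices of a unit cube, as (x, y, z) positions.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
]

# Six quadrilateral faces, each listing the indices of its four corners.
faces = [
    (0, 1, 2, 3),  # bottom (z = 0)
    (4, 5, 6, 7),  # top (z = 1)
    (0, 1, 5, 4),  # front (y = 0)
    (3, 2, 6, 7),  # back (y = 1)
    (1, 2, 6, 5),  # right (x = 1)
    (0, 3, 7, 4),  # left (x = 0)
]
```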

These technologies that are based on CAD-like systems all do a wonderful job of putting together a class of relatively solid objects such as aircraft or buildings. There are limitations for the general packages, though. Usually when something more organic needs to be built, say, the shape of a special rock or a person's face, there are two approaches. One is to simply sculpt the shape on the computer using application tools that allow this to happen. This requires great patience and, of course, a great sculptor. Other technologies have come to light that can digitize the objects, even the face of an actor. These “3-D digital scanners” all have a mechanism that allows them to deliver a cloud of points in space, that is, a big list of points in 3-D space that sample the surface in some rigorous fashion. These points are then delivered to other application software that converts the large list to the more convenient representations of the polygon or NURBS form, not a trivial step at all.

There is even a third type of object that pushes the boundaries even further. Suppose the object exhibits the even more organic look of a tree. Again, because we are in the remarkable world of computer graphics, there are applications written to produce 3-D representations of such objects that are convincingly real. Anytime that a program is used to create, in this case, a 3-D description of an object, the process is termed “procedural.”
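
A toy sketch of the procedural idea: a short recursive routine can emit the branch segments of a simple two-dimensional tree. The branching angle and shrink factor are invented for illustration; production systems are vastly more elaborate.

```python
import math

def grow_branch(x, y, angle, length, depth, segments):
    """Recursively emit (start, end) line segments for a toy procedural tree."""
    if depth == 0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # Two child branches, splayed left and right and slightly shorter.
    grow_branch(x2, y2, angle + 0.4, length * 0.72, depth - 1, segments)
    grow_branch(x2, y2, angle - 0.4, length * 0.72, depth - 1, segments)

segments = []
grow_branch(0.0, 0.0, math.pi / 2, 1.0, depth=8, segments=segments)
print(len(segments), "branch segments generated")
```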

Prelighting

Once the object is built so that it describes how it fills 3-D space, another process is begun that will describe how it will look to the human eye. The space-filling information does not tell us whether the object is red, shiny, bumpy, or clear, for instance, and it is this step, the prelight step, that defines just what physical characteristics the model will have.

This step utilizes theories of how light interacts with different surfaces and the corresponding computer programs that are used to compute and simulate the final look of the object within a defined lighting configuration. The process of creating, that is, computing, the final look is sometimes called “rendering.”

The prelighter is tasked with the problem of assigning the needed characteristics onto the surface of the object. A simple and incomplete list includes diffuse color, ambient color, specular color, subsurface color and characteristics, isotropic or anisotropic surface characteristics, specific shading math to simulate iridescence, transparency, reflectivity, wetness, wrinkles, bumps, and about a thousand other items. One item, a class in itself, should be mentioned, and that is hair. Hair is also defined in this step. The qualities cover at least the following: color as controlled by length and placement over model, thickness of color, transparency of hair, types of curls, hair density at skin level, hair clumping, and hair shadowing, to name a few.
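
To make “assigning characteristics to the surface” concrete, here is a deliberately tiny shading sketch combining a diffuse color with a specular highlight (a Lambert plus Blinn-Phong approximation). It covers only two items from the long list above; production shaders model far more.

```python
import numpy as np

def shade_point(normal, light_dir, view_dir,
                diffuse_color, specular_color, shininess):
    """Approximate surface response at one point. All direction vectors are
    assumed to be normalized length-3 numpy arrays."""
    n_dot_l = max(float(np.dot(normal, light_dir)), 0.0)
    diffuse = np.asarray(diffuse_color) * n_dot_l
    half = light_dir + view_dir
    half = half / (np.linalg.norm(half) + 1e-9)
    n_dot_h = max(float(np.dot(normal, half)), 0.0)
    specular = np.asarray(specular_color) * (n_dot_h ** shininess)
    return diffuse + specular
```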

The field is huge. Every year some astonishing new technique is developed. Almost all the lighting programs written are approximations of the real solution, where the real (p. 589) solution usually will take enormous computing power to create. So the art is in finding the approximation that will satisfy the look needed. Lighting techniques will continue to develop for some time. The visual effects industry will always be looking at advancements to provide a new sense of realism to motion picture images.

Rigging

While prelighting defines how light will react with the object, rigging is the process of defining how an object will change shape and of establishing the proper controls for the animator to do so. A corollary to this goal is to relieve the animator of having to animate small repetitive actions such as skin folding and jiggling.

Some objects are not rigged. For instance, a solid-body object such as a building may not be rigged. Any object that will change its shape must be rigged to some degree, however. Creatures are a good example of objects that need to be rigged. Even an aircraft needs to be rigged to simulate the movement of the flight-control surfaces and the bending and flexing of the wing surfaces.

The act of rigging is defining just how an object will move and how to control those movements. Imagine a three-boned creature. Each bone is connected end to end, with the first bone's free joint connected to the ground. There are different ways to move the bones. They can be rigged such that the animator actually controls the angle that bone 2 makes with bone 1 and bone 3 makes with bone 2. (Remember, since this is in 3-D there are really two such angles.) Another rig might be one where the animator is allowed to just “grab” the last, open end of bone 3 and place it where it is needed. The computer is used to find at least one solution for the rest of the joints, to solve for the final “grabbed” position.
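
The two rigging styles just described can be sketched in code, reduced here to two dimensions for brevity: forward kinematics, where the animator sets each joint angle directly, and a simple two-bone inverse-kinematics solve, where the animator places the end of the chain and the computer finds one set of angles. The bone lengths and analytic solver are illustrative assumptions, not any particular package's method.

```python
import math

def forward_kinematics(lengths, angles):
    """Planar chain rooted at the origin: given per-bone lengths and joint
    angles (radians, each relative to the previous bone), return the joint
    positions. The animator drives the angles directly."""
    x, y, heading = 0.0, 0.0, 0.0
    joints = [(x, y)]
    for length, angle in zip(lengths, angles):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        joints.append((x, y))
    return joints

def two_bone_ik(l1, l2, target_x, target_y):
    """Analytic two-bone IK: the animator 'grabs' the end of the chain and
    the computer solves for one valid pair of joint angles."""
    d = min(math.hypot(target_x, target_y), l1 + l2 - 1e-6)  # clamp unreachable targets
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(target_y, target_x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```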

Usually the process of rigging restricts movement. The elbow joint does not move in every direction, and even the directions of freedom have limits. These details can be implemented in the rigging step to help the animator move the character properly.

For characters, the process of rigging is further defined as controlling the actual 3-D volume the creature takes on as a function of the movement. As the elephant walks, its skin moves over its muscles, which in turn flex and move over the skeletal system. These complex details are approximated in the rigging step. Although many techniques have been developed to simulate this animation process, most of them try to define basic structural elements such as the skeleton, muscle, skin, and fat.

The solutions for exactly determining the final form the skin will take run the gamut from pure sculptural (done by human) solutions to incredibly detailed procedural (all done by computer calculations) methods. As the elephant places its leg on the ground, there may be some fat jiggle resulting from the impact of the footfall. Even this aspect can be rigged; in this case the procedural software will model the fat layer as a network of springs and weights. Most visual-effects facilities make use of all these methods.
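
A minimal sketch of the “springs and weights” idea: a single damped spring integrated over time stands in for one blob of fat reacting to a footfall. The stiffness, damping, and time-step values are invented for illustration.

```python
def simulate_jiggle(impulse=1.0, stiffness=120.0, damping=6.0,
                    mass=1.0, dt=1.0 / 24.0, frames=24):
    """Damped spring-mass system: the footfall gives the mass an initial
    velocity, and the spring pulls it back toward rest, overshooting and
    settling (the visible 'jiggle')."""
    position, velocity = 0.0, impulse
    trajectory = []
    for _ in range(frames):
        force = -stiffness * position - damping * velocity
        velocity += (force / mass) * dt
        position += velocity * dt
        trajectory.append(position)
    return trajectory

print(["%.3f" % p for p in simulate_jiggle()])
```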

Animating

Grabbing the real model in your hands and placing the model in the exact location it needed to be was the straightforward approach of the stop-motion animator. Today, this process is done on the computer, and although it has been abstracted to the computer screen, the animator can animate, finesse, and see the results almost in real time, an advantage that was unheard of in the stop-action world.

It is the animator's job to bring to life the movement of the object, whether it be an aircraft or a character. The animator uses application software to visualize the environment, the object he or she is animating, and the control structure provided by the riggers, along with a visual playback system to evaluate the work.

Animation over the years has become more and more sophisticated with the addition of many new techniques. One of the most visible new techniques is the process of “motion capture” as used in the Warner Bros. film Polar Express (Robert Zemeckis, 2004). Motion capture is a process by which the motion of a human or animal is extracted in a computer-readable form. This procedure is done on a special stage. The subjects whose action will be recorded wear special retro-reflectors attached to their bodies. A large multicamera system, which “sees” the performers, detects and records the movement. Computers and software resolve all the data and determine exactly where the markers were located in 3-D space for every single frame. Those markers and the knowledge of where they were placed on the human subjects further allow this data to drive the animation of the desired computer-generated character. Thus, the actor or performer usurps the animator. For the motion picture Polar Express, Tom Hanks, through his performance, became the animator for many of the characters in the film.

As techniques for modeling the true dynamics of physical systems come into play, sometimes the animation may be driven by a procedural solution. For example, the actual flight path of a jet aircraft may need to be as real as possible, thus calling for systems that would be faster at finding the solutions than a human animator.

Other procedural animation solutions have recently been visible. Imagine having to animate an army of fifty thousand warriors, as was done in the Lord of the Rings trilogy (Peter Jackson, 2001–3). This task could have taken years and years to produce if each character were to be individually animated. Instead, an application called Massive was created to allow a small number of individual operators to control and train a small number of “agents” how to behave under the circumstances of their environment. These few agent rules were then automatically applied to the fifty thousand characters in the scenes and resulted in distinctive animation for each character. Such systems, sometimes called “artificial intelligence,” truly capitalize on the power of the computer to apply large amounts of detail to the animation.

Lighting-Rendering

The prelighters have defined the look of an object. They have set it up such that if light were to land on its surface it would reflect back light in a very specific and desired fashion. But where is that original source of light coming from? It is the job of the lighter to create the final look of the object for use in the composite. This usually means defining just what kind of lights will be used in the scene and where they will be placed, and then feeding all the information to the program that performs the final rendering.

Since lighting and rendering in today's digital-effects world is just an approximation of the real world, this step can be a delicate enterprise. If it were possible to describe the object thoroughly and then render light as it really is understood in the physical world, this would be a conceptually simple, albeit tedious, process. But the computational speed of today's computers is not sufficient to allow for this approach, nor will it be for some time. Consequently, the digital community resorts to the approximation and embraces the issues that come with it.

There are two types of lighting situations. One is where the practitioner has a scene that has been shot to film and is trying to imitate the lighting conditions of the original physical environment at the time of the filmed exposure. The other is when the entire scene is completely generated on the computer and thus has no reference to any physical situation.

When dealing with the former case, the lighter will always want to know exactly where and what the physical situation was in order to simulate the lighting on the computer. Doing so will allow the lighter to illuminate the object that is being synthetically added to the scene in a way that will produce a final image that looks like it belongs in the scene. If the lighter, for instance, had the main key light illuminating from the wrong direction, there would be something wrong with this object; in other words, it would not be responding to the light like all the other objects in the scene.

In today's digital world, the process of simulating real-world illumination is aided by a series of photographic processes that help record the environment. Typically these systems capture a full spherical representation of the environment—sky, ground, objects nearby, and so on. Rhythm and Hues has developed a special high-dynamic-range-imagery (HDRI) digital camera that is essentially six digital cameras mounted on the faces of a cube. Each camera automatically exposes many frames, each with a different exposure time, effectively covering a wide range of exposure. Given this digital information from the HDRI cameras, a digital process is run on the computer that extracts the needed lighting that will be required to accurately illuminate the computer-generated objects in the scene.
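
One way to picture the exposure-merging step, as a simplified sketch rather than Rhythm and Hues' actual pipeline: each bracketed frame is divided by its exposure time to estimate relative scene radiance, and the estimates are averaged with more weight on well-exposed pixels.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """frames: list of arrays with pixel values normalized to 0..1;
    exposure_times: matching shutter times in seconds. Returns an estimate
    of relative scene radiance per pixel, the raw material for image-based
    lighting of computer-generated objects."""
    radiance_sum = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        f = frame.astype(np.float64)
        # Trust mid-range pixels most; clipped or very dark pixels get low weight.
        weight = 1.0 - np.abs(f - 0.5) * 2.0
        radiance_sum += weight * (f / t)
        weight_sum += weight
    return radiance_sum / np.maximum(weight_sum, 1e-6)
```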

In the case where there is no reference to “real” imagery, it is still important to place or define the lights consistently throughout the scene, although in this situation they are usually defined by someone whose task it is to artfully create the entire environment, usually the production designer.

Digital Effects

A very important aspect of visual effects is the ability to re-create natural phenomena such as water, fire, snow, rain, fog, dust, smoke, clouds, cracking earth, shattering glass, heat, cloth, flowing hair, and so on. In the past, these effects, especially those involving water because of its notorious ability to resist scale changes, were very difficult to create. Over a period of time, many of these obstacles have been successfully surmounted using digital solutions. A very good example is the recent Fox production The Day After Tomorrow (Roland Emmerich, 2004). Almost all of the natural phenomena depicted in the movie employed digital-effects solutions.

The whole field is very eclectic. Usually a facility will have people dedicated to development of software to solve parts of these problems. Topics usually covered by such research fall into the fields of biomechanics, rigid-body dynamics, physics, numerical solutions, simulations, artificial intelligence, and behavior animation, to name a few. Usually if a problem cannot be given to an animator or lighter, it will be delivered to the digital-effects artist to find a solution. The solutions usually are very sophisticated and usually cross boundaries of other areas, such as lighting, rigging, and animation.

Putting a Shot Together in the Digital World

“A reptilian creature with hands similar to a human's is having a tug-of-war with an actor over a small toy” is the screenplay description of the next example. The shot will be an interior, with wide-angle lens, in the bedroom set. It is the job of the visual effects (VFX) supervisor to determine what is supposed to happen and exactly how it will be executed.

For this example the VFX supervisor will evaluate methodologies such as creating the reptilian creature as a prosthetic costume versus creating it synthetically on the computer. Assuming there is some overriding reason to not create the costume, for example, the creature has proportions that would never support having a human in the costume, the VFX supervisor determines the creature will be created on the computer.

The next, very important, step is to define what the creature looks like and how it will behave. Usually the VFX supervisor and director will resort to the age-old technology of 2-D pre-visualization, or in other words, utilizing an artist to produce drawings. Studies are created and discussed for the static look. Even studies that try to suggest different poses for the creature are explored. Once an agreed-upon set of drawings is found, the VFX supervisor can begin the next step in the process of defining the creature.

This preplanning phase is very important. It is less costly to change a drawing at this stage than to change the character at a later time when great energy has already been spent in building it. Ideally, this preparatory work occurs before the commencement of photography for the specific shot.

It is even better if the facility that is creating the creature is able to create a facsimile of the creature to help define just how it will behave for the specific shot. With regard to the current example, will the creature have poor footing and skid on the floor of the room? How strong is the creature? How do all the joints move? Does the creature drool during the tug-of-war? Any preparation that can be done will help sell the creature. More planning usually results in higher-quality work.

At this point in the preparation of the shot, not one frame of film has been shot. The facility has begun to build the final creature, and studies have been performed to give the director and VFX supervisor a pretty good idea of how the creature will act.

One more step can be included in the planning phase. It is called pre-visualization (pre-vis), a simple computer animation that tries to simulate the struggle between the human and the creature, with the inclusion of the room, objects in the room, and a simple animated human. The pre-vis will help define any gags that might be necessary to rig before the shoot and will give the director a good idea of the millimeter and placement of the camera lens. Again, another step in the planning process will only make the actual shoot day go that much more smoothly, and that rapidly translates to a less expensive shot.

The toy is a plastic duck. The VFX supervisor determines that the human, during the actual shoot, needs to interact with something tugging on the duck; otherwise, viewers will not be convinced that the reptilian creature is actually there. A plan is devised to mount the duck at the end of a very long stick. A person will hold the duck/stick device and will pull and push on the stick while remaining offscreen. The actor will hold onto the duck and struggle with it. This methodology will produce more convincing action, especially for the actor. It will have an artifact, though—that being the stick that will be in the frame during the struggle.

The VFX supervisor determines that the stick will be removed later in postproduction using what is typically called “rig removal” technology, a 2-D image manipulation.

The plan is set. It is shoot day. Stick and duck and actor are ready. The footage is shot. All works to plan. (This is not usually the case, but for this case an optimistic approach will be taken to move this example along.)

The VFX supervisor still has work to do on the set. To facilitate the 3-D work to come, data needs to be gathered about the particular aspects of the shoot. For instance, the lighters will want to know everything about the lighting on the set. The animators will want to know exactly the geometric 3-D nature of the set and the duck and the stick.

There have been many techniques used to document the lighting of a particular scene. In the beginning, simple notes about the types of lights used and their approximate location on the set were created. More recently, additional information is obtained by actually filming, within the specific set lighting, a series of balls approximately one foot in diameter with coatings that are flat white, flat gray, and highly reflective. Even more sophisticated are special cameras that allow a technician to record the environment either onto film or straight to digital form. These cameras are capable of imaging the full view of the set, more than a 360-degree panorama, in that the top and bottom of the environment image are included, to create a sort of spherical picture of the environment. In addition, the cameras capture the full range of energies from all the light sources, something that cannot be represented on just one frame of film. This data is delivered later in postproduction to the lighter for the specific scene and will contribute greatly to the look of realism.

The goal is to put a computer-generated creature back into the scene. To achieve this end, the creature has to be re-created on the computer under the same circumstances that were present for the live action. The creature must be on the floor and must appear to be rendered with the same millimeter lens and same camera position as the live-action footage. The process of determining the exact path of the camera and the millimeter used on the lens throughout the shot is called camera tracking. It is a postproduction task; that is, it is done at the facility after the film has been shot.

Camera tracking can be aided by more physical information about the set. The VFX supervisor is responsible for obtaining as much 3-D information about the shot as can be determined. The VFX supervisor engages individuals whose specific task is to “measure” all the objects and the room. This process is called “digitizing.” There are many methods to obtain the information, including photo methods, regular survey methods, and more recently, the LIDAR method. LIDAR stands for “light detection and ranging” and can produce a very accurate and rapid sampling of the environment, whether it be the bedroom set of the example or a city block of buildings.

The last thing the VFX supervisor will obtain from the set before moving on to the postproduction phase of the sample shot is the duck/stick device. As the shot is animated, it will be a requirement that the reptilian creature have its hands on the duck during the struggle. To assist this process, the duck/stick device will also be digitized and brought into the computer. Once 3-D camera tracking is accomplished, that is, once the VFX supervisor knows where and how the camera moved during the shot, another individual will perform what is called “match moving” on the scene. This is the process of finding out exactly where the duck/stick unit was during the shot, frame by frame. This is accomplished by taking the digitized duck/stick object and moving it around until its position is the same as that in the actual movie image, frame by frame within the shot. This will allow the animators to know exactly where to place the creature's hands and how to place them around the object so that the final computer image created will not “swim,” or appear to be disconnected.
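
The logic underlying camera tracking and match moving can be sketched simply: propose a camera (position, orientation, focal length) for a frame, project known 3-D points from the digitized set into the image, and compare against where those points actually appear in the filmed frame; the solver adjusts the camera, or the digitized duck/stick for the match move, until that reprojection error is small. The pinhole model below is an illustrative assumption, not a specific tracking product.

```python
import numpy as np

def project(point_world, cam_pos, cam_rot, focal_px, center_px):
    """Project a 3-D point into pixel coordinates with a pinhole camera.
    cam_rot is a 3x3 world-to-camera rotation matrix."""
    p_cam = cam_rot @ (np.asarray(point_world, dtype=float) - np.asarray(cam_pos, dtype=float))
    x = focal_px * p_cam[0] / p_cam[2] + center_px[0]
    y = focal_px * p_cam[1] / p_cam[2] + center_px[1]
    return np.array([x, y])

def reprojection_error(points_3d, points_2d, cam_pos, cam_rot, focal_px, center_px):
    """Average pixel distance between projected set points and the positions
    tracked in the filmed frame; the solver tries to drive this toward zero
    on every frame of the shot."""
    errors = [np.linalg.norm(project(p3, cam_pos, cam_rot, focal_px, center_px)
                             - np.asarray(p2, dtype=float))
              for p3, p2 in zip(points_3d, points_2d)]
    return float(np.mean(errors))
```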

With the duck/stick unit in hand, the process moves to the postproduction phase, which is outlined below.

Postproduction Phase for the Sample Shot

  • Digitize the duck/stick for match move.

  • Digitize the film negative on the film scanner.

  • Input the digitization of the set plus all notes into the computer system.

  • Begin camera 3-D tracking.

  • Finish modeling, rigging, and prelighting the creature.

  • Input the on-set lighting information in preparation for lighting the creature.

  • Once 3-D tracking is complete, begin match move of the duck/stick.

  • Once the match move is complete, begin animation blocking and first-pass lighting.

  • As a separate and independent step, remove the stick (“rig removal”) from the scene, frame by frame.

  • In a recursive process, refine the lighting, the animation, and the composite.

  • Assuming the goal is reached, output the imagery to create the final negative.

The struggle over the duck example shot is only the beginning. The overall complexity is low by today's standards but still represents the thought processes that must occur no matter the size of the problem.

The Visual-Effects Facility

In order to accomplish these digital feats there has to be some infrastructure in place to do the work. While we have given much attention up to this point to writing the software required for this process, there are also backbone infrastructure pieces of hardware that are typically needed to do this work. They include, at least, computer workstations for the artists, render farms to calculate imagery offline, massive disk systems to hold imagery, backup systems to store data securely off the more expensive disk systems, a network capable of handling large amounts of data and traffic, playback display devices including HiDef and film, editing equipment, and, of course, the film scanner and printer.

Probably the most notable difference between this list and that of, say, some ten years ago is the idea of a special-purpose supercomputer for all large computational needs. Today, with the power of the commercial consumer computer being so high and the price so low, the supercomputer has been replaced by the idea of using many of these smaller processors—thousands—to satisfy the computing needs of a facility. Not only is it significantly cheaper, but it is easily scalable in size and, most important, resistant to the rather depressing fact that the lifetime of a computer is only a few years. Given Moore's law, computers are almost obsolete the day they are installed. Thus, a large array of computers allows the facility to retire and update portions without starting with a whole new computer.
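
A toy illustration of the farm idea: rather than one large machine rendering every frame, the frame range is chopped into independent jobs and handed to whichever of many inexpensive processors is free. The queueing here is deliberately naive; real facilities use dedicated schedulers, and the render call below is a placeholder, not a real renderer's interface.

```python
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_number: int) -> str:
    """Placeholder for a real render invocation for one frame."""
    return f"frame_{frame_number:04d}.exr rendered"

def render_sequence(first: int, last: int, workers: int = 8):
    """Spread the frames of a shot across a pool of workers, the way a farm
    spreads work over many commodity machines instead of one supercomputer."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, range(first, last + 1)))

if __name__ == "__main__":
    print(len(render_sequence(1, 96)), "frames completed")
```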

The industry is changing rapidly with the advent of the digital intermediate (DI) in film production. Production companies that specialize in the handling of the postoperation phases of film production are now more likely to handle all the film scanning and printing of a motion picture. Thus, facilities will not need to house the hardware for the scanning and printing of film negatives.

General trends in the price and performance of hardware—that is, faster, better, and cheaper—have made it much easier for start-up facilities to hit the ground running. Today's start-up's biggest issue is hiring the artists and the support team to execute and support the system. Couple the advancements in hardware with the commercialization of software for the effects business, and the result is a small explosion of facilities around the world producing high-quality work.

As previously noted, the rapid advance of technology has given the creative process new tools with which to tell the cinematic story. Accordingly, this has affected the filmmakers’ approach to moviemaking in both an artistic and a fundamental business sense.

For example, the screenwriter's palette is ever expanded in his or her ability to describe environments, characters, and situations. In the past, when the writer wrote, “The cavalry charged the castle,” the filmmaker had to keep in mind the logistics and the cost associated with dressing an army of human extras with horses in period costumes in a location that had an actual castle. Now one can imagine almost everything in that scenario to be computer generated. Of course, there is still a cost associated with these visual effects that the producers of the film have to deal with, but the writer now knows that his or her perhaps expansive vision is quite capable of being brought to fruition.

The directors of these films also understand that previous limitations to the approach of designing and photographing their films are quickly and continually evolving. Films such as 20th Century Fox's The Day After Tomorrow or computer-generated character films such as Warner Bros.' Scooby-Doo films (Raja Gosnell, 2002, 2004) could not have been made as they have been without the state-of-the-art visual effects applications used. Keep in mind, as stated earlier, that in this day and age almost all films, not just the big “tent-pole” blockbuster studio films, contain some type of visual effects work. Bringing a visual design approach into the storytelling narrative can now be part of the director's vision and storytelling tool set. In 2004's Eternal Sunshine of the Spotless Mind (Michel Gondry), for example, the story was integral to the design of the visual effects approach.

The director can now arbitrarily change the color, the balance, or the design of any object in the frame to add to a scene's impact. This often occurs without the audience even being aware that they are watching a postproduction process. The director is now directing not only the actors and the camera, but also the tapestry of textures and elements that compose every shot.

On the business side of the equation, the producers of a film are those individuals who are responsible for the financial aspects of the production. Often they are the ones caught between their fiduciary responsibility to the sponsoring studio or investors of the project, and the creative appetites of the director, the cinematographer, and the writer. When this dialectic comes to the subject of visual effects, it is the producer's responsibility to find the capacity and wherewithal to apply the correct fiscal balance to the film. This can be a huge task on a big effects film for which the effects budget alone may comprise half or even more than half of the budget for the entire film. The scope of the financial variance may run the gamut from millions of dollars to a few thousand, but it is an endemic problem for the producer to find where that money is coming from and how it is to be applied. This theme of having new tools that can be used that also must in the end be accountable to the budget and resources available will no doubt continue as new tools are created and applied to the filmmaking process.

Accordingly, as the technology has become more efficient and user friendly, and with the related diminishing cost of software and hardware, even low-budget independent filmmakers now have access to using effects in their films. The software has become such that many rudimentary yet applicable effects can be created by individuals on their laptop computers while sitting in the comfort of their living rooms.

For the larger studio films, where there is clearly much at stake, there is still the dilemma of how the visual effects are done. Directorial choices, studio executive input, financial restrictions, and hard deadlines with release dates make the visual effects process more changeable and thus more challenging.

At the outset of the computer era, post-Star Wars, there were only a few fledgling companies that led the foray into visual effects films. George Lucas's Industrial Light and Magic was the main player in the field. However, other offshoot companies came into being in the early 1980s. Apogee Productions, Boss Films, and Robert Abel & Associates were among the earliest. These days there are multitudes of visual effects companies scattered around the globe. The Internet and satellite transmission have made it possible for a director to edit the film in Los Angeles while effects for the film are being created in Mumbai, London, and Vancouver. However, there are still only a handful of large companies, including our own Rhythm & Hues Studios, in existence that can turn over hundreds of highly crafted visual effects shots for a single film.

So how do the filmmakers decide how and where to get the actual computer effects work done? John Swallow, the vice president of production technology at Universal Studios, told us that he “casts the company, not unlike casting an actor. Some companies are good at one thing and not another, so I go where it seems like I will get the appropriate work done with people I trust. And part of the casting process is talking through the tool sets you can use to accomplish what you want.” Some directors or producers take work to facilities where they have personal relationships and trust built up with certain supervisors and executives, and where there is a sustained and proven pipeline to get the work done. Also, as we have pointed out, cost is a driving factor. Effects work is normally bid out to multiple facilities in order for the studios and producers to get the best price. Work sometimes goes to other countries, where the government subsidizes film budgets for dollars spent in their labor pool. Australia, the European Union, Canada, and New Zealand are all currently involved in tax-rebate situations.

So who actually does the work? Visual effects and animation facilities such as Rhythm & Hues Studios that focus on doing digital effects for the feature film business are home to an assortment of individuals whose skill sets range across the board in terms of applications and technologies. Animators, lighters, matte painters, compositors, match movers, and rotoscope artists all can be employed to work on a single shot. These artists are intertwined with a gaggle of visual effects producers, coordinators, pipeline support managers, software and hardware technicians, and systems engineers. It is no small task to create and deliver hundreds of high-end visual effects shots for a single movie with a hard, immovable delivery date.

The infrastructure and computing power of the company must be both robust and flexible in order to take on the ever-changing landscape of the postproduction process. Preview screenings of a film have put additional stress on both the studio and the director to accommodate audience criticisms before the movie is released. Visual effects, being one of the last tasks finished on a film, are routinely excised or added right up until the last moment before the prints are struck for theater distribution. It can be a taxing and labor-intensive procedure for all involved. As technology has made the imagery more sophisticated, it has also allowed it to be implemented faster, thus fostering an ability to work up to the very last moment.

As long as writers and filmmakers imagine stories that go beyond the boundaries of what can be filmed as reality, there will be a need for visual effects. The trend since the beginning of filmmaking indicates that there is no limit to new ideas that will require the assistance of visual effects.

The underlying methodologies of telling stories in the motion picture form will radically change in the future. These changes will go hand in hand with the increased capabilities of the visual effects industry. Based on trends of today, those activities that have been solely categorized as “visual effects” have already transcended their original definitions. As evidenced with the motion picture Sky Captain and the World of Tomorrow (Kerry Conran, 2004), the visual effects augmented the story, providing the entire environment around which the human actors performed for the whole film. The techniques used to create a computer-generated character in Lord of the Rings became the dominant approach, replacing cinematography in making other motion pictures such as Polar Express. It seems that not only environments for whole pictures but principal characters of whole films are and will continue to be an option for computer-generated effects.

Visual effects must satisfy the sophisticated viewer in the future. Viewing an old motion picture with visual effects such as The 7th Voyage of Sinbad (Nathan Juran, 1958), today's viewer will note the somewhat odd, unrealistic nature of the skeleton battle created by Ray Harryhausen using stop-action techniques. At the time, however, viewers had no problem looking at this effect, which was something they had never seen before. As the viewer becomes more and more visually sophisticated, however, old techniques will not satisfy the visual effects requirement of suspending disbelief long enough for the story to be told. For many films to come, the future of visual effects will be dominated by technologies that will contribute to the creation of either absolutely photo-real imagery to maintain that suspension of disbelief, or “new looks” or unreal worlds that the audience has never seen before.

On a more pragmatic level, the future of visual effects can be examined as a continuous evolution of ideas in the fields of lighting and material definitions, animation, natural-phenomena simulation, and the cluster of effects falling under the category of image touch-up, such as rig removal, simple compositing, and so forth. All of these changes will be directly linked with the further development of the software that humans use to perform effects work.

Animation is ripe for change. One of the desirable aspects of using motion capture to acquire the animation of a character, as was done, for instance, in Polar Express, is the fact that there is only one actor working with one director to create the performance. One difficulty of producing a non-motion-captured animated character is the fact that many people are used to animate the single character. That means they must all have the ability to express the same character, a process difficult to achieve. One of the future advances of animation will be the ability for one person to set up the “characteristics” of the character so that many people can implement those “characteristics” across many scenes. Further in the future, perhaps a single individual could animate a character in an entire film by not only describing the “characteristics” of the character, but also allowing more semiautomated processes to animate the lower-level behavior of the character (walking, running, dealing with inanimate objects). Whatever software is developed in the future, it will embody the goals of creating a consistent preprogrammed behavior of the character while reducing the labor to do so.

Motion capture, the process of recording human or animal movement to act as a basis for animation, will undergo further development. Although the process is widely used, further improvements in accurate recording of the face and eyes are required. Software still needs to be developed to allow faster and more efficient editing of the motion-capture data. And finally, character retargeting, the process of using motion-capture data to animate a character that is very different from the original recorded subject, for example, a human driving a two-thousand-pound, fifteen-foot giant, will have to evolve with new methodologies.

The process of lighting computer-generated elements is one of those technologies that will always strive to make the results look perfectly real. As mentioned earlier, lighting requires the visual effects practitioners to render the imagery they produce by the skillful management of a set of approximations. The further reduction of those approximations and the increase in computer speed will have a tremendous effect on the world of lighting and thus on all visual effects. Having a more accurate solution will mean more realistic rendering of the desired environment and the ability to rapidly find the desired “look.” The process could be compressed into the simple practice of artful production design—“build these sets using this wood, steel, and velvet”—and artful photography direction—“put these kind of lights here.” Modeling and building infrastructure that ties the synthetic world closer to what we really see and experience will certainly be a trend.

The simulation of natural phenomena will become more and more sophisticated with further research and development. Even though we have witnessed exceptional computer-generated water in a series of films over the years, it is still very difficult to create realistic computer-generated images of water, whether it is in a glass, a river, or an ocean. The same goes for fire, dust, explosions, clouds, and so on. As in lighting, the tools used to solve these problems produce approximations. With further software development and the ever-increasing capacity of the computer, the imagery produced will be executed using more interactive techniques and will result in flawless representations of natural phenomena.

The future of visual effects becomes a little blurry as the future of film postproduction techniques is considered. With the advent of the digital revolution, the whole postproduction process is changing rapidly. Films have been edited digitally for some time, but at resolutions and quality comparable to video, accompanied by a postconformation of the negative to produce the film. Now the digital world allows the combining of the editorial, color timing, and other operations at film-quality levels. Since the film is carried throughout the postprocess in a digital form called the digital intermediate (DI), which is adequate for motion picture representation, some tasks will have to shift. Operations once common to the visual effects studio will be done within the postproduction editorial unit or the color-timing unit. The blurring essentially means that there will be more options in structuring the work for postproduction.

When talking about the future of visual effects, it is difficult to avoid the concept of fabricating the human form. It is easy to say it will not be long before a very convincing synthetic human appears in a principal role in a motion picture. We have already seen computer-generated humans in motion pictures. They are used primarily when it is too dangerous or impossible to place a real human in the circumstances of the film production. These synthetic characters are called computer-generated stunt doubles. They provide a solution to a problem, such as the fist fight in the Matrix trilogy (Andy Wachowski and Larry Wachowski, 1999, 2003), in which the creation of a computer-generated Agent Smith allowed the filmmakers to have many copies of him present in the fight, as well as the execution of the shot where Smith takes a fist direct to his face in ultra-slow motion, something that did not easily lend itself to a real human solution.

Duplicating the human form for a principal part in a film will probably happen soon, but it will not be complete. The development will continue beyond the first day one appears in a movie. There are many issues and challenges, mostly old technologies that will require major evolutionary steps of improvement. The list includes but is not limited to:

  • Hair geometry and lighting

  • Shadowing of the hair onto itself and the skin and other surfaces

  • Hair dynamics, hair touching hair, hair touching skin or other objects, hair influenced by air

  • Skin lighting and detail, including further work on subsurface scattering, bone and cartilage lighting influences, changing blood flows, and so on

  • Fluid flow and dynamics into the eyes

  • Bone-, organ-, muscle-, tendon-, fat-, and skin-simulation advances

  • Clothing simulations

  • Body dynamics, including further understanding of walking, hiking, climbing, running, and the like

  • Expression control for face and body

  • Refinements for motion capture for body and face

  • Retargeting refinements from motion capture to character target

  • Semiautonomous animation software

Each and every item listed exists today in some form at various facilities. The future of visual effects is tightly coupled to how these young technologies develop and relate to one another in a way that produces perfect solutions at prices that filmmakers can afford.

Probably the most interesting issue about having a synthetic human form in a movie is the reason to have one. Assuming you are not using the synthetic human for stunt purposes, why have a photo-real synthetic human? What are the conditions that would require a synthetic human? One theory is the idea of creating a character, much like the animated character Shrek, for which the lifetime of the character is not at risk. The synthetic human character will not grow old and will cost the same each time it is used; in other words, it will not demand more and more money if it becomes famous. The creators of the synthetic human character will own all the rights. Undoubtedly someone will create the need, and it will happen.

Ten years ago the dominant prediction about visual effects was that they would become digital and dependent on the speed and price of computers. Even though Moore's law has it that the performance of computers consistently doubles every eighteen months or so, the utility of the programs used for working in visual effects has not followed the same curve. Software has always lagged behind hardware in the fast-paced computer industry. Even though faster computers and better graphics-display hardware will be a big boost to the future of visual effects, the dominant required growth will be in software. There will be advances in the science of calculating the images as well as the equally important human interface application software that will have to support more real-time solutions with more intuitive interfaces. Software development for the next ten years will focus on closing the software-hardware gap.

Jon Landau, producer of 20th Century Fox's Titanic (James Cameron, 1997), when talking about the future of film stated that “[p]eople will always want to go to the cinema. If for nothing else, for the social experience.” Given that assessment, visual effects, which have had a fundamental effect on the motion picture industry from the very beginning, are tied closely to the cinematic experience. Accordingly, it seems obvious that with a quick extrapolation, visual effects are here to stay and will provide story support for motion pictures that will entertain us deep into the future.