From Computer to Camera – How CGI Has Changed Cinema

In 2018, Computer-Generated Imagery, or CGI, forms an intrinsic part of modern filmmaking — you would be hard-pressed to find a film made without some form of digital imagery in the process. But where did Hollywood’s obsession with CGI begin, and how has the technique evolved over the years?

To answer these questions, we first need to turn back the clock a hundred years or so and look at the history of special effects in cinema as a whole, and at how the practical camera tricks invented in that era laid the foundations for modern cinema effects.

Our historic location of choice — Paris. The date — around the end of the 19th century. At this time, the Lumière brothers, two of the world’s first filmmakers, had just invented their revolutionary motion-picture camera, and more and more people were being exposed to cinematography as a medium and becoming enthralled by it. One of those people was Georges Méliès, a Parisian stage magician and illusionist fascinated by the burgeoning world of cinema. After modifying his own film camera in 1895, Méliès began showing films almost every day at the theatre where he worked. It wasn’t long before he began to make films himself, directing over 500 of them between 1896 and 1913.

Méliès’ approach to making films was novel for the era. Inspired by his background in visual tricks and illusions, he sought to bring that practical magic onto the screen. He achieved this through the physical manipulation and altering of the film reel itself — and in doing so, invented arguably three of the most influential special effects in cinema history: the stop trick (also called the substitution splice, in which objects transform or disappear on film), the double exposure (or superimposition, which places objects into a scene where they were never present), and the scene dissolve. Rather than capturing the quotidian lives of ordinary people, the goal of most filmmakers at the time, Méliès immersed his stories in the fantastical and the magical, focusing on trips to outer space or fairy kingdoms. He is even credited with making one of the earliest science-fiction films, Le Voyage dans la Lune (A Trip to the Moon), in 1902. He wanted to showcase film as an art form that could take the viewer to new and distant places, much like a storybook.
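As an aside for the technically curious: the digital descendants of these in-camera tricks boil down to simple per-pixel arithmetic. Below is a minimal, illustrative Python sketch (it assumes the Pillow and NumPy libraries and two hypothetical same-sized frame files, scene.png and ghost.png) showing how a double exposure and a dissolve can be approximated in software today; it is a toy example under those assumptions, not anything Méliès or any studio actually used.

    # A minimal sketch: the digital equivalents of two of Méliès' in-camera
    # tricks reduce to per-pixel arithmetic. Assumes two same-sized frames,
    # "scene.png" and "ghost.png" (hypothetical file names), exist on disk.
    import numpy as np
    from PIL import Image

    scene = np.asarray(Image.open("scene.png").convert("RGB"), dtype=np.float32)
    ghost = np.asarray(Image.open("ghost.png").convert("RGB"), dtype=np.float32)

    # "Double exposure": both frames contribute light, as if the same strip of
    # film had been exposed twice (clipped so pixel values stay in range).
    double_exposure = np.clip(scene + ghost, 0, 255).astype(np.uint8)

    # "Dissolve": a weighted cross-fade; sweeping alpha from 0 to 1 over
    # successive frames melts one scene into the next.
    alpha = 0.5
    dissolve = ((1 - alpha) * scene + alpha * ghost).astype(np.uint8)

    Image.fromarray(double_exposure).save("double_exposure.png")
    Image.fromarray(dissolve).save("dissolve_frame.png")

The stop trick, for its part, needs no arithmetic at all in the digital era: an editor simply cuts between two takes of the same framing.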

Méliès’ revolutionary techniques and practical screen illusions serve as the ideological basis of modern day special effects, but how exactly did such hands-on techniques lead to the advent and popularisation of specifically computer-generated imagery in movies?

By the 1960s and 1970s, film studios were growing ever more ambitious. They were striving for bigger sets, more visually striking scenes and, in particular, movies that tapped into the era’s growing appetite for science fiction. Creating these kinds of films through practical effects — like stop motion, double exposure, and the other classic tricks achieved directly in camera — would require a massive budget, huge design teams, and a great deal of time. These pressures are ultimately what prompted the shift from practical to digital effects in filmmaking; with computers, far more was possible for far less money.

It was clear that the demand for cheaper alternatives to practical effects existed — now all that was needed was the technology. The earliest forms of CGI didn’t look great (at least, viewed from the 21st century, they don’t look great), but the technology advanced quickly. In 1972, Edwin Catmull (a future co-founder of Pixar) and his colleague Fred Parke created one of the first computer-generated animations: a short featuring a 3D model of Catmull’s hand. It was the first digitally rendered model of its kind, and the methods Catmull and Parke used would later go on to produce not only more 3D hands but full 3D humans on screen. It was a brief and basic animation, yet it is hailed as CGI’s official debut into the mainstream.

From then on, CGI went from strength to strength. By the time the 1980s rolled around, audiences were seeing films like Tron, whose entire plotline centred on computer-generated worlds. At this point the imagery was still quite rudimentary and ‘boxy’. Nevertheless, this kind of technological advancement and its popularity led to a fundamental shift in the way that filmmakers and directors thought about film, CGI, and their potential together.

This potential was explored far more deeply in the 1990s, with films like Jurassic Park in 1993 and Toy Story in 1995. Jurassic Park introduced moviegoers to CGI that could conjure living, realistic animals — in this case dinosaurs with meticulously detailed features like scaled and torn skin, razor-sharp claws, and dripping saliva. Two years later came Toy Story, the first fully computer-animated feature film and a trailblazer both for this new genre and for the fledgling computer-animation studio known as Pixar. The decade demonstrated that CGI films were fully transitioning into the mainstream — instead of digital filmmaking being confined to experimental directors, independent films, and short clips, CGI feature films were now taking the weekend box office by storm.

And so we’ve arrived in the 21st century, where CGI is no longer in its rookie phase and has truly cemented itself as a cornerstone of modern-day movie-making. In a way, we have finally realised the initial dream of early filmmakers like Georges Méliès — thanks to CGI, film can now be as limitless and as magical as Méliès himself once strove to make it. CGI allows the viewer to be catapulted into future landscapes or thrust back into past historical eras — worlds which we would otherwise never get to experience. Not a bad achievement for the movie-making world.

Despite these advances, however, some critics argue that CGI has ultimately had a negative impact on Hollywood. According to them, it makes directors lazy: instead of travelling to locations, they use a green screen; instead of hiring real people as extras, they conjure background crowds digitally. Such shortcuts, some say, detract from the authenticity of film as an art form, reducing the process of filmmaking to a few clicks of a button and some lines of code. They also argue that CGI isn’t always done well, and that when it isn’t, a film can end up looking strange and unnatural — especially when CGI tries to replicate humans. This argument draws on a concept known as the “Uncanny Valley”: when a computer-generated human looks almost, but not quite, lifelike, it comes across as eerie or unsettling to the viewer, particularly once the character is in motion. (Examples of such “eerily realistic” CGI characters include Scrooge in 2009’s animated A Christmas Carol and most of the characters in 2004’s The Polar Express.)

Indeed, CGI can sometimes go awry, but I think it is fairer to say that the technique simply hasn’t reached perfection yet. I’m not convinced any art form ever can be perfect — all art is subjective and is produced and presented to an audience as such. Nor can we deny the great wealth of films that CGI has brought the world: it has broadened every horizon and made films essentially limitless, which has undeniably enhanced people’s movie-watching experience. Compared with those of twenty years ago, today’s CGI films look far more realistic — and who’s to say that in another twenty years the technology won’t have advanced to the point where we can no longer tell live action apart from animation? I, for one, am very interested in seeing how CGI evolves next.
