Video games have gained immense popularity over the last two decades. In terms of revenue, the industry has outperformed most other entertainment industries, with some of the highest-selling entertainment products of all time belonging to it.
Video games were once an entertainment product enjoyed by a select few, but rapid technological innovation has made computers, consoles, and the games designed for these platforms increasingly affordable and accessible to the masses.
As the graphical capabilities available to video games evolved, cutscenes emerged as an essential storytelling device.
They have reached a point where they are no longer a novelty but a ubiquitous feature of the industry. This article will therefore delve into the history and role of cutscenes in video games and how they carry both interactive and narrative functions.
What Is a Cutscene?
A cutscene is a cinematic sequence that suspends regular gameplay to convey plot and spectacle. They are breakaways from in-game action and mostly depict stories within a game.
The typical view of a cutscene's function is that it offers 'reward' and 'respite' for the player. However, cutscenes also act as markers of progression, clues, and story shifts that foreshadow the fate of the character the player controls.
Cutscenes perform certain gameplay functions and provide information to the player about upcoming in-game events. They act as a narrative device, letting the player know the motive behind their challenges and thus bridging the gap between events and actions.
Types of Cutscenes:
Cutscenes generally fall into five major types. Listed in the order of their first appearance in video games, they are:
- Pre-Rendered Cutscenes
- Live-Action Cutscenes
- Mixed Media Cutscenes
- Real-Time Rendered Cutscenes
- Interactive Cutscenes
History and Development:
Cutscenes made their way into video games steadily at first. Following their widespread recognition and acceptance as the new norm, they paved the way for newer and more varied forms of experimentation.
Before explaining the significance of cutscenes in a video game, it is essential to revisit their history to understand their motive, cause, and effect as an ever-popular video game component.
- Pac-Man (Namco, 1980) was one of the earliest video games to include cutscenes, in the form of short, comical, non-playable events presented as interludes. These interludes showed Pac-Man and the monsters chasing each other around the screen, breaking away from gameplay.
These staged interludes were pre-rendered, a technique in which cutscene footage is produced in advance rather than drawn by the game at runtime. This often led to a disparity between the quality of the cutscenes and that of the in-game graphics. Even so, pre-rendering remained the preferred technique for quite a few years before the arrival of real-time rendered cutscenes.
- Popular titles of the early-to-mid 1980s, such as Donkey Kong (Nintendo, 1981), Super Mario Bros. (Nintendo, 1985), and The Legend of Zelda (Nintendo, 1986), made use of such animated intermissions to convey the plot.
For instance, in Donkey Kong, the gorilla Kong abducts the heroine, and Jumpman must save her; this premise was delivered via short clips depicting parts of the story.
The presence of cutscenes in a game like Donkey Kong allowed them to become a critical storytelling device. Cutscenes also helped establish a motive and gave the player a purpose to chase: they made clear why Jumpman must triumph over Kong.
Similarly, in Prince of Persia (Broderbund, 1989), cutscenes were implemented to portray the ordeal of the captive Princess whom the Prince must save.
- It was around this time that the term "cutscene" was first used by the developers of Maniac Mansion (Lucasfilm Games, 1987) to describe in-game, non-interactive cinematics.
Before progressing further, the term 'cinematics' deserves a definition. In this article, a cutscene is described as cinematic after considering its use of space, design, acting (voice acting and motion capture), sound, and graphical fidelity.
- The following year, Snatcher (Konami, 1988) marked a breakthrough in real-time, in-game cutscenes, with clear cinematic influences embedded within its sequences. Snatcher was more cinematic, and dealt with far more mature themes, than other video games of its era.
Note that the video games mentioned above were primarily released for consoles with limited storage capacity, which did not allow developers to invest in large-scale, graphically intensive cutscenes beyond an experimental basis. This storage limitation persisted for years to come.
With the increase in storage capacity and the adoption of media such as the CD-ROM in the early 1990s, it became much easier for developers to expand the video, music, and voice work that went into in-game cinematics.
- The seventh installment in the Final Fantasy series, Final Fantasy VII (Square, 1997), raised the bar for what players expected in terms of in-game cinematics and production standards.
The pre-rendered cutscenes used in this game, also referred to as Full Motion Video (FMV), heightened the aesthetic of cutscene production, but their quality did not match the real-time graphics used in the remainder of the game.
- A hybrid game format also emerged thanks to the enlarged storage space, making use of live-action cutscenes steeped in parodic, American B-movie aesthetics. Prominent examples of this format include Command & Conquer: Red Alert (Virgin Interactive Entertainment, 1996), Night Trap (Sega, 1992), and Wing Commander IV: The Price of Freedom (Electronic Arts, 1996).
Using live-action cutscenes meant letting actors play out the scenes and deliver the story in place of animated sequences. While gameplay used real-time rendered graphics, the sudden inclusion of filmed interludes was not met with a positive response, and the technique was largely abandoned.
With the advances in video game graphics evident by the late 1990s, developers began to abandon the traditional method of attaching pre-rendered cutscenes to their games, mainly because of the cost and added development time.
Real-time rendered cutscenes, on the other hand, were produced alongside the gameplay itself and incorporated themselves seamlessly into a game's overall aesthetic.
This reliance on real-time, in-game cinematics was largely made possible by the development of several highly advanced game engines. Real-time rendering enabled developers to create complex visuals within the game itself, making cutscenes even more cinematic.
- Early adopters of this rendering method include the first Tomb Raider game (Eidos Interactive, 1996) and Hideo Kojima's Metal Gear Solid 2 (Konami, 2001). Cinematic sequences built with such advanced game engines provided impressive visuals that raised the bar for video game cutscenes in general.
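The difference between the two rendering approaches discussed above can be sketched in a few lines of Python. This is purely illustrative; all names are hypothetical, and no real game engine API is implied:

```python
# Illustrative contrast between pre-rendered (FMV) and real-time cutscenes.
# Function and field names here are invented for the example.

def play_prerendered(frames):
    """Pre-rendered/FMV: frames were produced offline long before release,
    so playback simply shows them back unchanged. The footage cannot
    reflect the current state of the game."""
    return list(frames)

def play_realtime(game_state, camera_script):
    """Real-time: each shot is drawn by the same renderer as gameplay,
    so the scene can reflect current state (e.g. the player's outfit),
    keeping cutscene and in-game graphics visually consistent."""
    return [f"{shot} of hero wearing {game_state['outfit']}"
            for shot in camera_script]
```

In this toy model, `play_realtime({"outfit": "leather armor"}, ["close-up"])` produces a shot matching the player's in-game appearance, while `play_prerendered` shows identical footage regardless of game state, which is the disparity the article describes in games like Final Fantasy VII.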
The 21st century witnessed growth in both the usage and the length of cutscenes, leading to a merger of cutscenes and gameplay. Japanese game designers responded to this transition and developed titles that seamlessly incorporated cutscenes within the gameplay structure, a trend that would later become the industry standard for story-based games.
- Prominent early examples of this shift include Silent Hill 2 (Konami, 2001), Final Fantasy X (Square, 2001), Onimusha 3 (Capcom, 2004), and Resident Evil 4 (Capcom, 2005).
With cutscenes in vogue came the practice of releasing video games tied to original motion pictures. Certain film-based video games were thus developed alongside the shooting of their respective films.
In several instances, the film's director supervised the making of sequences that were later added to the game, mostly as cutscenes, captured using motion capture technology.
- Enter the Matrix (Infogrames, 2003), the first game in the Matrix franchise, featured an hour of live-action 35mm film footage written and directed specifically for the game. The franchise's signature martial-arts moves were motion captured from the actors and stunt doubles of the original films and used in the game.
- The James Bond video games existed long before the era of motion capture and did not use live-action cutscenes shot alongside the films, yet they showcased heavy cinematic influences from the original motion pictures. Since the 2000s, Bond games have likewise adopted motion capture, with titles announced after the release of the corresponding film.
The following years proved crucial for the development of video game cutscenes, with the evolution of high-definition graphics and a shift in emphasis toward more cinematic sequences. Several titles released exclusively for the newest generation of consoles showcased these graphical enhancements and marked a new era for story-based video games.
Cutscenes also continued to grow in length: some games of this generation include in-game cinematics with a total runtime approaching, or even exceeding, that of a feature film. Popular titles from this generation pushed the boundaries of what a video game can offer, both in content and in presentation.
- Titles released in 2007 such as Call of Duty 4: Modern Warfare (Infinity Ward, 2007), BioShock (2K Games, 2007), and the graphically intensive Crysis (Crytek, 2007) offered seamless transitions from cutscene to gameplay and vice versa. This feature foreshadowed the direction in which the first-person shooter genre would gradually evolve.
- The first installment in the Uncharted series, Uncharted: Drake's Fortune (Naughty Dog, 2007), made extensive use of motion capture and emphasized facial animation, which enhanced the lifelike quality of characters within cutscenes and offered far more cinematic storytelling. Such seamlessness would lead games like those of the Uncharted franchise to be described as 'playable films'.
- The most notable example of interactive cutscenes is the third installment in the Witcher franchise, The Witcher 3: Wild Hunt (CD Projekt, 2015). Its interactive cutscenes take the form of a branching dialogue system in which the protagonist converses with non-playable characters to take up main quests and side quests throughout the game.
Such interactivity allows the player to make dialogue choices that affect the overall gameplay. A branching dialogue system lets game designers give the player a decision-making role without having to tackle the challenges of natural language processing in the field of Artificial Intelligence.
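A branching dialogue system of this kind can be sketched as a small graph of nodes, each pairing an NPC line with the player's possible replies. The sketch below is a minimal, hypothetical example; the node names, lines, and quest flag are invented and do not come from any actual game:

```python
# Minimal sketch of a branching dialogue system. All content is hypothetical.
from dataclasses import dataclass, field

@dataclass
class DialogueNode:
    """One NPC line plus the player's possible replies.

    `options` maps each reply to the id of the next node
    (None ends the conversation)."""
    text: str
    options: dict = field(default_factory=dict)

DIALOGUE = {
    "greeting": DialogueNode(
        "'Looking for work, witcher?'",
        {"Tell me more.": "offer", "Not interested.": None},
    ),
    "offer": DialogueNode(
        "'A monster haunts the mill. Will you help?'",
        {"I'll do it.": "accept", "What's the pay?": "haggle"},
    ),
    "haggle": DialogueNode(
        "'Fifty crowns, not a coin more.'",
        {"Deal.": "accept", "Forget it.": None},
    ),
    "accept": DialogueNode("'Thank you! The mill lies north of town.'"),
}

def run_dialogue(start, choices):
    """Walk the graph with a scripted list of player replies and return
    the gameplay flags the conversation sets along the way."""
    flags, node_id = [], start
    while node_id is not None:
        node = DIALOGUE[node_id]
        if node_id == "accept":        # reaching this node starts the quest
            flags.append("quest_accepted")
        if not node.options or not choices:
            break
        node_id = node.options[choices.pop(0)]
    return flags
```

Walking the tree with the replies "Tell me more.", "What's the pay?", and "Deal." reaches the accept node and sets the quest flag, while refusing at the first prompt ends the conversation with no effect. This is the sense in which a fixed branching structure lets designers offer meaningful choices from a small menu of authored replies rather than interpreting free-form player speech.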
One of the key points explored in this article is the identification of cutscenes as an important narrative device. Cutscenes that were once considered mere intervals between two game events have now reached a point where they are often indistinguishable from gameplay itself. They have enhanced both the interactive and immersive qualities of video games and have imparted a sense of subjectivity to the medium.