As a filmmaker or videographer, you'll need to understand why some frame rates are more common than others and why there are so many varying speeds. Frame rate, commonly measured in frames per second (fps), is the rate at which a device, such as a motion picture camera, captures or displays unique, sequential images called frames.
Film / video is just a sequence of still images
To fully understand the reason for different frame rates, it's important to understand the history of motion pictures. When watching film or video, we're not actually witnessing true motion. What we're really seeing is a sequence of still pictures, known as frames. In the mid-1800s, inventions like the zoetrope demonstrated that a sequence of drawings showing different stages of an action would appear as movement if shown at a rapid rate. The human eye can register roughly 10-12 images per second as individual pictures; beyond that rate, we accept the sequence as motion and the "seams" begin to disappear.
Still cameras existed during this period, but the use of photographs for motion pictures was limited to experiments. Subjects could pose in various positions to suggest motion, but photographic emulsions weren't sensitive enough for the short exposures needed to capture something actually in motion. This is why portrait subjects in the 1800s had to hold a pose for so long to expose a single image; the emulsions just weren't sensitive enough yet.
The silent film era exhibited varying frame rates
Advancements in celluloid film and more sensitive emulsions led to the invention of motion picture cameras in the late 1880s. The earliest cameras and projectors had to be hand-cranked to advance the film through the gate, which led to varying frame rates. Early silent films had frame rates from 14 to 26 frames per second, which was enough to provide a sense of motion, but that motion was often jerky or uneven. Film cranked by hand when photographed, and then cranked by hand again when projected, made it nearly impossible to portray true-to-life motion.
Late in this period, motion picture cameras and projectors gained mechanized cranks, which allowed for constant speeds of recording and projection. Even so, individual scenes were often filmed and projected at varying frame rates because filmmakers favored different speeds for different scenes (usually between 18 and 23 fps). Film reels were often delivered with instructions on how fast or slow each scene should be shown. Exhibitors and projectionists favored certain frame rates as well, creating further inconsistency.
24 fps was an economical and technical decision
What changed everything was sound synchronization. Synchronizing sound with film was attempted as early as 1900, but the technology was too unreliable for major motion pictures. By the late 1920s, it became possible to sync sound using a phonograph or similar device mechanically interlocked with the projector. The first "talkie", a feature film with recorded dialogue, was 1927's The Jazz Singer, which used this method with the projector running at 24 fps.
Eventually, sound was synced to film by printing an optical track on the filmstrip alongside the image. This practice tied frame rate to the limitations of the audio technology of the time: the faster the film moves past the sound head, the better the audio fidelity. Given that film is an expensive medium, it was in Hollywood's best interest to consume as little of it as possible during a production. Although silent films ran at an average of 16 fps, it wasn't possible to produce a quality soundtrack at that speed. The studios settled on 24 fps because it was the slowest frame rate that could produce intelligible sound. In other words, the decision was not aesthetic, but technical and economical.
Television gave birth to 60i and 50i
Now that we understand why film has been 24 fps for nearly a century, why are there so many other frame rates? In the 1950s, television changed everything. The first TV sets (and most TV sets up until the early 2000s) used CRT (cathode ray tube) displays. The limits of vacuum tube technology at the time required that CRT displays refresh at the AC line frequency. AC (alternating current) is the electric power running through our walls, and every TV set plugs into a wall outlet. The line frequency is 60 Hz in the U.S. and 50 Hz in Europe.
The AC line frequency tied TV refresh rates to 60 Hz in the U.S. and 50 Hz in Europe. Since 24 fps doesn't divide evenly into either frequency, the U.S. adopted the NTSC format, which is 30 fps interlaced (60i), and Europe adopted the PAL format, which is 25 fps interlaced (50i). The point of interlacing was to double the perceived frame rate, which improves motion and reduces flicker, without increasing bandwidth. Showing 30 fps progressive (30p) at a 60 Hz refresh works on the same principle: each frame is flashed twice but transmitted only once, so it uses half the bandwidth of 60p. This is how film is projected; a 24 fps film is projected at a minimum of 48 Hz using a two-bladed shutter, which flashes each frame twice before advancing the film.
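To make that arithmetic concrete, here's a minimal Python sketch (illustrative numbers only, not a broadcast specification) comparing how many images each system flashes per second with how many complete frames it actually has to transmit or store:

```python
# Flashes per second vs. unique frames per second for a few display schemes.
# Values are the rounded figures discussed above, purely for illustration.
formats = {
    "NTSC 60i":                (60, 30),      # 60 fields/s, but only 30 full frames/s of data
    "PAL 50i":                 (50, 25),      # 50 fields/s, 25 full frames/s of data
    "30p shown at 60 Hz":      (60, 30),      # each frame flashed twice, transmitted once
    "Film, two-blade shutter": (24 * 2, 24),  # 48 flashes/s from 24 unique frames
}

for name, (flashes, frames) in formats.items():
    print(f"{name:>24}: {flashes} flashes/s from {frames} unique frames/s")
```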
The difference: interlaced and progressive scanning
Interlaced scan means that two video fields make up one frame. There are odd fields and even fields (like venetian blinds), which flash one after the other. To simplify the concept: if you're watching video filmed at 60i, you're seeing a half-frame every 1/60th of a second and a full frame every 1/30th of a second. Because there are only 30 complete frames per second, 60i uses the same bandwidth as 30p while portraying more fluid motion and reducing flicker when displayed.
Progressive scan, on the other hand, draws each frame sequentially in its entirety. Progressive scan is higher quality, but at the same refresh rate it requires twice as much bandwidth, and it couldn't be used in broadcast until the advent of digital TV and HDTV signals.
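To make the venetian-blind analogy concrete, here's a minimal Python/NumPy sketch (a toy six-line frame, not a real video pipeline) that splits a progressive frame into its even and odd fields and then "weaves" them back together, which is also the simplest form of de-interlacing. With real footage the weave is only perfect when nothing moves between the two field exposures:

```python
import numpy as np

# A toy progressive frame: 6 scan lines x 8 pixels (real frames have e.g. 480 or 1080 lines).
frame = np.arange(6 * 8).reshape(6, 8)

# Interlaced capture records the even and odd scan lines as two separate fields,
# exposed roughly 1/60th of a second apart in 60i video.
even_field = frame[0::2]   # lines 0, 2, 4
odd_field = frame[1::2]    # lines 1, 3, 5

# "Weave" de-interlacing: interleave the two fields back into one full frame.
rebuilt = np.empty_like(frame)
rebuilt[0::2] = even_field
rebuilt[1::2] = odd_field

# Reconstruction is perfect here only because nothing moved between the two fields.
assert np.array_equal(rebuilt, frame)
```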
Interlaced video is quickly becoming obsolete as progressive scan displays, such as DLPs, LCDs, plasmas and OLEDs, continue to replace interlaced CRT televisions. Likewise, HDTV (as previously mentioned), DVDs and Blu-ray discs all support progressive scan formats. To view interlaced video on a progressive scan display, the footage must be de-interlaced, with varying results in quality. As digital cinema, TV and camera technology continue to phase out interlaced formats, progressive frame rates like 60p, 30p, and 24p have increased in popularity in the U.S.
Frame rate "standards" are finally breaking down
60 fps and 30 fps have generally been the standard for broadcast production, while 24 fps has been the standard for film production. However, the latest cameras, projectors and televisions support multiple frame rates and formats, allowing filmmakers and videographers to break free from convention and film in whatever frame rate is most appropriate for their content or audience.
For many years, there have been advocates for high frame rates (HFR) in both film and broadcast. Frame rates like 48 fps, 72 fps and 120 fps are either too new or still in trial stages and haven't acquired mainstream support. 48 fps is an alternative to film's typical frame rate of 24 fps. As of 2012, 48 fps has only been used on a handful of major motion pictures, but it is garnering support from more and more influential filmmakers. 120 fps is the frame rate chosen for UHDTV (Ultra-High-Definition Television), which its backers hope will one day replace current broadcast standards around the globe. This would eliminate the discrepancy between NTSC and PAL standards, as television technology is no longer limited by AC line frequency.
High frame rates (HFR) are clearer and more realistic
While many find 24 fps the most aesthetically pleasing frame rate for films and television dramas, there are those who prefer HFR (high frame rates). As we've already learned, 24 fps was standardized due to the economic and technical limitations of its time, nearly 100 years ago. Since the 24 fps standard wasn't an aesthetic choice, HFR advocates see no reason to adhere to outdated technology. Instead, they advocate frame rates closer to 60 fps, because higher frame rates are more in line with human vision. HFR reduces motion blur and displays a clearer image that's a closer approximation of real life.
Audiences aren't new to high frame rates; we already associate them with a "video" look. As stated previously, most television, such as reality TV, soap operas and other broadcast programming, is produced at 30 or 60 fps. Advocates of HFR admit that it takes time to adjust to a film shot at a higher frame rate. For the motion picture industry to adopt HFR, audiences will have to disassociate it from cheap broadcast productions.
Advocates of higher frame rates (HFR):
Douglas Trumbull, a special effects artist on a variety of major films (most notably 2001: A Space Odyssey and Blade Runner), developed a cinematic process in the late '70s called Showscan. Trumbull wanted to increase the fidelity and definition of major motion pictures, so he conducted research to find the optimal resolution and frame rate. He eventually chose 65 mm film projected at 60 fps. Trumbull ran numerous tests on audiences' emotional reactions to frame rates and found that emotional response peaked at 72 fps. Trumbull's directorial effort, Brainstorm (1983), was to be the first Showscan film, but MGM backed out, not wanting to release the film in an experimental format. The Showscan Film Corporation eventually went bankrupt in 2002.
Major studios were unwilling to invest in the higher costs associated with Showscan, and with celluloid film almost entirely obsolete, it's unlikely Showscan will resurge. However, the advent of digital cinema cameras and digital projection has made higher resolutions and faster frame rates more economically feasible. Peter Jackson and James Cameron are among the filmmakers supporting the new technologies: Jackson filmed The Hobbit series at 48 fps, and Cameron has revealed plans to potentially film his Avatar sequels at either 48 or 60 fps. While only a handful of theaters have projectors capable of showing 48 fps, support for the frame rate will continue to increase.
Higher frame rates still come at a higher cost
Although sensor technology in digital cinema, professional-grade, and consumer-grade cameras has made HFR more affordable, it will always be less expensive to shoot fewer frames per second. As discussed earlier, one of the major reasons Hollywood chose 24 fps was that it was the slowest frame rate that could produce intelligible sound from the optical track printed along the length of the film. Higher frame rates would have been equally effective, but the added cost of film stock and developing could easily have inflated a production budget. Obviously, it's in a studio's best interest to keep costs down.
Though not working with film, digital filmmakers and videographers must be conscious of data rates. If a camera records at 24 Mbps (megabits per second) and the frame rate is set to 24 fps, the camera is allocating approximately 1 megabit of data per frame. Increasing the frame rate to 60 fps leaves less than half a megabit of data per frame, which reduces overall picture quality. To maintain picture quality, the data rate must be increased, which leads to a faster rate of storage consumption. Although the cost of storing data continues to decrease over time, those costs must always be considered.
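That trade-off is easy to work out directly. Here's a minimal Python sketch (the 24 Mbps figure is the hypothetical camera setting from above, and real codecs don't spread bits perfectly evenly across frames):

```python
def megabits_per_frame(bitrate_mbps: float, fps: float) -> float:
    """Average megabits of data available for each frame at a fixed bitrate."""
    return bitrate_mbps / fps

def gigabytes_per_hour(bitrate_mbps: float) -> float:
    """Approximate storage consumed per hour of recording."""
    return bitrate_mbps * 3600 / 8 / 1000  # megabits/s -> gigabytes/hour

for fps in (24, 30, 60):
    print(f"24 Mbps at {fps} fps -> {megabits_per_frame(24, fps):.2f} Mb per frame")

# Keeping ~1 Mb per frame at 60 fps means raising the bitrate to ~60 Mbps,
# which also raises storage consumption:
for rate in (24, 60):
    print(f"{rate} Mbps -> ~{gigabytes_per_hour(rate):.1f} GB per hour of footage")
```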
More frames per second can also become cumbersome in post-production. Higher frame rates increase the cost of color grading, motion graphics, chroma keying, CGI and other post-production work. More frames per second require more processing power, storage, and labor. Plain and simple, more frames per second cost more money. As technology advances, will these costs decrease? Of course, but there will always be an economic benefit to producing fewer frames.
Many reject HFR and advocate 24 fps as the "gold standard"
Whether or not 24 fps came about as a technical decision is beside the point for some filmmakers, videographers and film lovers. They simply love the aesthetic. But it's also more than that. Advocates of 24 fps say the idea that HFR is something you "just have to get used to" is ridiculous, and that the science seems to back them up.
The fact that high frame rates are closer to what the eye actually sees creates an interesting problem. To some people, HFR for narrative work falls into the Uncanny Valley. The Uncanny Valley, a concept usually applied to robotics, is a psychological hypothesis stating that when something is nearly lifelike, but not quite, we reject it. For a documentary, event video or reality TV, we accept HFR because we know what we're watching is real. But movies and television dramas are full of conventions that we've come to accept in storytelling: movie dialogue isn't really the way people talk, sets, costumes and lighting aren't the way reality looks, and acting isn't necessarily the way real people behave.
Yet we accept these conventions at 24 fps, even though we know they're fake. A high frame rate, however, portrays motion that's too real and highlights the artifice of the production. Because of this, we may always perceive HFR as something that's "not acted", which makes it ideal for non-fiction work but a poor choice for narrative. Unless we can suspend our disbelief, we can't become invested in a story.
What this all means to filmmakers and videographers:
If there's anything you should take away from this post, it's that frame rates are no longer bound to the limits of technology. The choice of frame rate is now an aesthetic choice, and it's unlikely that any one frame rate will replace another; there are proponents on both sides of the argument. High frame rates are more realistic and have less motion blur, but come with higher costs and an arguably negative connotation. Low frame rates, while the de facto standard for many years, are older technology and exhibit heavier motion blur. Whether it's a low frame rate or a high frame rate, you have to choose what's best for your project based on your budget, audience, and method of distribution.