In the earliest days of filmmaking, the choice of camera (or film stock) barely affected the post team; for a long time, the workflow was relatively settled. In the film days, and even in the tape days of video, there was really only one way of doing things, and much of it was outsourced to a specialized lab. If the camera team decided to shoot Panavision instead of Arriflex, or even Moviecam, it didn't matter much to the assistant editor. Shooting on Fuji or Kodak film stock might matter to the lab and the final dailies colorist, but the edit team didn't need to worry about it. The one major question was whether they shot spherical or anamorphic lenses: a single box to tick on a camera report.
With the digital video explosion of the 2000s, however, every camera began to come with its own set of logistical problems, requiring post-production teams to keep up with a wide variety of plugins, file formats, and specialized software that can change with every job.
Even within a single camera, several major decisions can affect how the post pipeline will go, which often means it's best to have a workflow conversation with the camera team before production begins to get everyone on the same page.
Download our free Guide to Major Camera Platforms now.
RAW Video
The first major thing a post team should get a handle on with any camera choice is whether the camera is capable of shooting RAW video and, if it is, whether the production plans to use it.
RAW video records the raw data coming off the sensor before it's processed into a usable video signal. Depending on the RAW format, camera settings like ISO and white balance can then be changed in post-production with the same image quality as if you had made the changes in the camera, which can be a great benefit if there were errors on set. RAW video has become incredibly popular over the last decade and is increasingly the default workflow of choice for many productions.
However, there are drawbacks to RAW that cause some productions to continue shooting to a traditional video format, even in a camera that is capable of RAW. First off, the files are often harder for the post team to handle and require processing. If you are shooting something with an exceptionally tight turnaround or with a small post team, it might make more sense to work with a traditional video format.
RAW is primarily beneficial for the flexibility it gives you in post. If the white balance is off in-camera, you can easily change it in post with a RAW capture format, whereas with traditional video, settings like white balance and ISO get "baked" into the footage. Some cinematographers prefer to bake exactly the look they want into the camera file and then have the post-production team work with those files, without the flexibility of RAW.
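To make the "baked in" distinction concrete, here is a minimal sketch, assuming a demosaiced linear RAW frame held in a NumPy array (an illustrative stand-in, not any specific camera's actual RAW container or decode pipeline): a white balance change in post is just a set of per-channel gains applied to still-linear data. With traditional video, those gains were already applied and quantized in camera.

```python
import numpy as np

# Hypothetical demosaiced linear RAW frame, values 0.0-1.0 (illustrative
# stand-in for real decoded RAW data, not an actual camera format).
raw_linear = np.random.rand(2160, 3840, 3).astype(np.float32)

def apply_white_balance(frame, r_gain, g_gain, b_gain):
    """Scale each color channel independently -- the core of a WB change."""
    gains = np.array([r_gain, g_gain, b_gain], dtype=frame.dtype)
    return np.clip(frame * gains, 0.0, 1.0)

# Because the source data is still linear, post can choose the gains
# freely, e.g. warming up a shot that was balanced too cool on set.
corrected = apply_white_balance(raw_linear, r_gain=1.18, g_gain=1.0, b_gain=0.86)
```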
RAW cameras are also increasingly capable of shooting into two formats at once, or "generating their own proxies." However, while cameras can do this, it's not a particularly common practice for one key reason: it increases your download time for cards. If the camera is shooting both an 8K RAW file and a 1080p ProRes file, you need to download both from the camera card to the on-set backup. Additionally, you need to duplicate everything on the camera card to multiple copies for insurance purposes. In-camera proxies end up eating more time and hard drive space than they save.
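As a rough back-of-the-envelope sketch (all of the data rates below are illustrative assumptions, not any specific camera's or drive's numbers), you can see how the proxy's extra payload gets multiplied across every insurance copy:

```python
# Illustrative, assumed data rates -- check your camera's real numbers.
RAW_8K_GB_PER_MIN = 12.0       # hypothetical heavy 8K RAW stream
PROXY_1080P_GB_PER_MIN = 0.9   # hypothetical 1080p ProRes proxy
COPIES = 3                     # camera card -> shuttle drive + 2 backups
SHOOT_MINUTES = 120            # footage recorded in a day
TRANSFER_GB_PER_MIN = 20.0     # assumed sustained copy speed

total_gb = (RAW_8K_GB_PER_MIN + PROXY_1080P_GB_PER_MIN) * SHOOT_MINUTES
copy_minutes = total_gb * COPIES / TRANSFER_GB_PER_MIN
print(f"{total_gb:.0f} GB shot, {copy_minutes:.0f} min of copying for {COPIES} copies")
```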
There are a few cameras, however, that support a newer workflow: recording the RAW to one card and the proxy to another. This approach seems like it might take off on sets, since the proxy card can go straight to the editor while the RAW files are still being downloaded to multiple backup copies.
LOG
Once you've left the world of RAW capture behind, whether because the camera couldn't record RAW or because the production chose not to, the next decision made on set is whether to capture LOG or linear video.
Linear video is the world we live in most of the time. When you edit in your NLE, it shows you linear video. Your phone shoots in linear video, and it displays linear video. But the file format created for linear video is only capable of handling a certain amount of dynamic range. For a standard 10-bit video file, that is usually considered to be 7-9 stops of latitude, depending on how you measure dynamic range.
But a 12-bit video sensor, or the incoming 14- and 16-bit sensors, can record a much, much wider range of brightness values. LOG video was created to squeeze that larger dynamic range into a smaller video package: it takes the 12-bit linear data coming off a sensor and uses logarithmic encoding to "squeeze" it into a 10-bit video file.
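As a minimal sketch of the idea, here's a toy logarithmic curve (a made-up function for illustration, not Sony S-Log, ARRI Log C, or any other vendor's actual transfer function) that maps 12-bit linear code values into a 10-bit container, spending more of the output codes on shadows than a straight linear requantization would:

```python
import numpy as np

def encode_log(linear, in_bits=12, out_bits=10):
    """Map linear code values into a log-encoded container (toy curve)."""
    max_in = 2 ** in_bits - 1
    max_out = 2 ** out_bits - 1
    normalized = linear / max_in                       # 0.0 - 1.0
    # log1p devotes more output codes to shadows, fewer to highlights.
    curved = np.log1p(1023 * normalized) / np.log(1024)
    return np.round(curved * max_out).astype(np.uint16)

def decode_log(encoded, in_bits=12, out_bits=10):
    """Invert the toy curve back to linear for grading or display math."""
    max_in = 2 ** in_bits - 1
    max_out = 2 ** out_bits - 1
    curved = encoded / max_out
    return np.expm1(curved * np.log(1024)) / 1023 * max_in

linear_12bit = np.array([0, 16, 256, 2048, 4095])
log_10bit = encode_log(linear_12bit)   # shadows spread across many codes
```

Note how deep-shadow values occupy far more of the 10-bit range than they would under a linear mapping; that redistribution is exactly why ungraded LOG footage looks flat on a normal display.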
This is a huge benefit for a post-production team that wants to preserve all of that light-value detail through the post pipeline for the most flexible color grade possible. However, a standard video display expects linear video, so images encoded in LOG tend to look very "flat" or "milky" when viewed without correction.
To overcome this, we use either a LUT (a discrete file you can load into your software and apply to footage) or a transform (a mathematical equation that converts footage from one format to another) to process logarithmic footage so it looks correct in a video space. LUTs have been the default for a long time, but the industry is increasingly moving to transforms for their higher precision and flexibility. The most common transform-based workflows are ACES and the RCM (Resolve Color Management) system built into Blackmagic DaVinci Resolve. For both RCM and ACES, a transform must exist for your camera's specific profile.
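To demystify what a LUT actually does, here is a minimal sketch assuming a simple 1D LUT stored as two arrays (real grading tools usually parse a .cube file and use 3D LUTs with trilinear or tetrahedral interpolation, but the principle is the same): each pixel value is looked up and interpolated between pre-baked sample points.

```python
import numpy as np

# Hypothetical 1D LUT: 33 sample points mapping LOG values (0-1) to
# display values (0-1). A gamma curve stands in for a real "look".
lut_inputs = np.linspace(0.0, 1.0, 33)
lut_outputs = np.linspace(0.0, 1.0, 33) ** 2.2

def apply_1d_lut(footage, inputs, outputs):
    """Look up each pixel and linearly interpolate between LUT samples."""
    return np.interp(footage, inputs, outputs)

log_frame = np.random.rand(1080, 1920).astype(np.float32)  # stand-in frame
display_frame = apply_1d_lut(log_frame, lut_inputs, lut_outputs)
```

A transform, by contrast, evaluates the actual equation per pixel rather than interpolating between pre-baked samples, which is where the extra precision comes from.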
It is generally considered a good idea to check in with the production to see if they have a preferred workflow for you to use. Whether it's the camera manufacturer's LUT, a custom LUT built by the production, or the ACES or RCM systems, make sure you can properly view the footage the production creates. No self-respecting post team should ever be working on an edit with footage in its LOG form.
Timecode & Audio
Another essential factor of camera choice that often gets neglected in the conversation about post-production is how the camera handles timecode and audio. If you are working on a multi-camera job, a camera with good timecode inputs that can maintain steady timecode will make your life infinitely easier than one that lacks those functions. On the audio side, while we generally still prefer to run dual-system audio, many productions like to run a mix to the camera for backup purposes and to get the edit workflow started more quickly. Ideally, you want a camera with robust, industry-standard audio inputs and outputs.
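As a small sketch of why stable timecode matters for syncing dual-system audio, here is the standard conversion from a timecode string to an absolute frame count (shown for a non-drop-frame rate; drop-frame rates like 29.97 fps need extra bookkeeping):

```python
def timecode_to_frames(tc: str, fps: int = 24) -> int:
    """Convert 'HH:MM:SS:FF' (non-drop-frame) to an absolute frame count."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

# When camera and sound recorder are jammed to the same timecode, the
# sync offset between a clip and its audio is a simple subtraction:
camera_start = timecode_to_frames("01:02:10:12")
audio_start = timecode_to_frames("01:02:09:00")
offset_frames = camera_start - audio_start   # 36 frames at 24 fps
```

If the camera's timecode drifts during a take, that single subtraction stops being reliable, and someone in post ends up re-syncing by eye or by waveform.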
A final issue to consider is the somewhat obscure but increasingly vital area of file metadata over SDI or HDMI. While this sounds confusing, it's actually pretty simple: some cameras can pass along certain metadata, including things like the filename, over their HDMI or SDI ports. This can be a huge benefit in some camera-to-cloud workflows where an external box, like a Teradek Cube, encodes real-time proxies for the edit team to pick up over the web. If the camera can send the filename out over SDI into that Cube, the files the Cube creates can get the right names, which makes relinking to the full-res files later a snap. Without that output, the camera-to-cloud workflow makes much less sense.
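Here is a minimal sketch, with hypothetical folder paths, of why matching filenames make relinking trivial: when the proxy and the camera original share a clip name, post can map one to the other with a simple lookup instead of eye-matching content.

```python
from pathlib import Path

# Hypothetical locations -- adjust to your project's folder structure.
proxy_dir = Path("/project/proxies")
original_dir = Path("/project/camera_originals")

# Index camera originals by clip name (stem), ignoring the extension,
# since the proxy might be .mov while the original is a RAW format.
originals = {clip.stem: clip for clip in original_dir.iterdir() if clip.is_file()}

relinked = {}
for proxy in proxy_dir.glob("*.mov"):
    if proxy.stem in originals:
        relinked[proxy] = originals[proxy.stem]
    else:
        print(f"No camera original found for {proxy.name}")
```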
Lens Squeeze
The final issue to worry about is one we worried about in the film days as well: the squeeze of the lenses. The vast majority of productions shoot with spherical lenses, where there is no squeeze to worry about. But there are also "anamorphic" lenses, which take a wide image and squeeze it down to fit on a narrower sensor. This is how "widescreen" movies were made in the analog film days: you would shoot with a 2x anamorphic lens that squeezed a roughly 2.39:1 image down onto the narrower 1.33:1 motion picture film frame, and then put a 2x de-anamorphoser on the projector to get a "normal" looking image that filled the widescreen.
In the digital era, we tend to do our de-anamorphosing in post-production, often during the dailies stage, expanding the image until it looks correct. You need to get the information from production on whether they shot spherical or anamorphic, and if they shot anamorphic, it's vital to ask them to shoot a framing chart with each lens they are working with so that you have a reference. Ideally, that framing chart would be taped out with frame lines and would also include some recognizable elements, like perfectly drawn circles and pictures of humans, to help troubleshoot any issues in post.
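As a minimal sketch of the de-anamorphosing math (the resolutions below are illustrative, not any specific camera's), desqueezing is simply a horizontal stretch by the lens's squeeze factor:

```python
def desqueeze(width: int, height: int, squeeze: float) -> tuple[int, int]:
    """Return display dimensions after expanding an anamorphic capture."""
    return round(width * squeeze), height

# A hypothetical 4:3 sensor recording shot with a 2x anamorphic lens;
# delivery is then often cropped to 2.39:1.
w, h = desqueeze(4096, 3072, squeeze=2.0)   # -> (8192, 3072), ~2.67:1
print(f"{w}x{h}, aspect {w / h:.2f}:1")

# The same math covers 1.5x lenses on wider 16x9 sensors:
w, h = desqueeze(3840, 2160, squeeze=1.5)   # -> (5760, 2160), ~2.67:1
print(f"{w}x{h}, aspect {w / h:.2f}:1")
```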
In addition to the standard 2x anamorphic lenses, lens makers have released 1.5x anamorphic lenses designed to work with the wider 16x9 sensors of modern digital cameras. Since the sensor is already wider than the old 1.33:1 film frame (4x3), the lenses don't need as strong a squeeze, so a few vendors have released 1.5x lenses to help cinematographers craft wider images that take advantage of the full sensor while still offering some of the qualities users love about shooting anamorphic. As you can see, when a production settles on a camera and lens combination, it can majorly affect your post-production workflow.
Download our full guide to the major camera platforms and the features they offer that are helpful to post-production teams.
For tips on post-production, check out MediaSilo’s guide to Post Production Workflows.
MediaSilo allows for easy management of your media files, seamless collaboration for critical feedback, and out-of-the-box synchronization with your timeline for efficient changes. See how MediaSilo is powering modern post-production workflows with a 14-day free trial.