Copyright Notice
Author: Robert L. Trim
Title of book: Digital Imaging Technician: A Very Practical Guide to On-set Asset Management of Digital Media.
© 2013, 2014, 2015, 2016 Robert L. Trim. All Rights Reserved. 2nd Edition v1.01
Contact Info.:
[email protected]
ALL RIGHTS RESERVED. This book contains material protected under International and Federal Copyright Laws and Treaties. Any unauthorized reprint or use of this material is prohibited. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without express written permission from the author/publisher. As well -- and you know the drill -- there are many copyrighted or trademarked names and identifiers in this text. Those belong solely and exclusively to their owners, and they have been identified where possible within the text. Images and graphics have been noted as well. They are the property of their indicated owners, but are used here with credit attributed and/or permission.
Published by: Trim Enterprises, LLC
ISBN 978-1-63415-875-6
Cover photography and layout by Adam Scott
Fonts: Baskerville, Helvetica regular and light.
Book layout created in iBooks Author.
Table of Contents

Introduction
About The Author
About Special Contributors
What This Book Will Cover
Setting Up Your Work Environment

Chapter 1: Intro. To The Role of the DIT
  Should I Be A Digital Imaging Technician?
  Scope of DIT… Not Written in Stone
  The Bumpy Transition to Digital
  Ch. 1 Review Questions

Chapter 2: Codecs, Color, Color Spaces
  Codecs: The Magic Sauce
  How We See Color
  Color Spaces & Management
  LUT Magic & Camera Profiles
  Ch. 2 Review Questions

Chapter 3: Color Change and Codecs
  Exercise 1: Color Change During Flip
  Exercise 2: Practical LUT Use

Chapter 4: Camera to Editorial: The Crooked Path
  Exercise 3: Find A Workflow

Chapter 5: Best Practices On Set: Don't Ever Break These
  Where You'll Hunker Down On-Set
  Ch. 5 Review Questions

Chapter 6: Heavy Iron, RAW Files and The Great Chase
  Basic DIT Workstation Hardware
  Workstation Building
  DIT Software Configurations
  Basic Workflow Issues
  Ch. 6 Review Questions

Chapter 7: Ingesting Assets From Camera
  Setting Up File & Folder Structures
  What Is Checksum'ing?
  Checksum Software
  Exercise 4: File/Folder Structure
  Exercise 5: ShotPut Pro Interface
  Ch. 7 Review Questions

Chapter 8: Delivering Dailies
  Raster Disaster
  Basic Color Grading On-Set

Chapter 9: One Light & Scope Basics
  Scoping the Image
  Exercise 6: Setting Up DaVinci Resolve
  Exercise 7: Working With Scopes
  Ch. 9 Review Questions

Chapter 10: On-Set Workflows
  Exercise 8: DaVinci Resolve One-Light Workflow
  Exercise 9: DaVinci Resolve Audio Sync
  Ch. 10 Review Questions
  SCRATCH Overview
  Exercise 10: SCRATCH Project Setup
  Exercise 11: SCRATCH Syncing Audio
  Exercise 12: Syncing Non-Timecode Matched Files in SCRATCH
  Exercise 13: SCRATCH One-Light Workflow
  Exercise 14: SCRATCH LUTs
  Exercise 15: SCRATCH Output for Edit & Dailies
  SCRATCH Review Questions

Chapter 11: REDCINE-X Workflow
  Exercise 16: REDCINE-X Audio Sync
  Exercise 17: Flipping RED Footage Using REDCINE-X Pro
  Exercise 18: REDCINE-X Pro LUTs

Chapter 12: Independent Workflow
Chapter 13: ACES Workflow
Chapter 14: The Naked Workflow
Chapter 15: Professional Problem Solving
Chapter 16: Final Thoughts
  Paperwork, Forms and CYA Stuff
  The Digital Dilemma

Appendix A: Professional Problem Solving Answers
Appendix B: Post House Specifications
Appendix C: Review Question Answers
Appendix D: Industry Links
Glossary of Terms (for PDF version)
Introduction
Unquestionably, the most substantial problem facing us, when it comes to digital assets, is how to handle and protect them. After all, they do tend to disappear at the most awkward times. Like, when you need them. What follows is an attempt at bringing the wrangling of those ones and zeros, which create photos, videos and documents, under your control. The time from the moment the camera 'mag' (camera memory card or hard drive) is ejected from the digital camera, until those properly formatted files are handed to the editorial department, is what this book is all about. It's been a vast wasteland, lacking standards, full of rumor, conjecture, ignorance and just plain bad practices. We'll fix that here and now. Think of this as your basic course in protecting others' ass•ets (pun intended).

As a Digital Imaging Technician (DIT), there is only one job on a digital production project that requires more technical knowledge -- the Director of Photography (DoP). The DIT is a very close second, and with many older DoPs who aren't digital-camera savvy, you might be the sharpest pencil in the crew. But there are others involved in this workflow to please, or at least not piss off. In fact, we'll spend a short chapter specifically on this topic. It might be the most important chapter in this tome.

There are other great resources out there on the 'interweb' that should be part of your ongoing study. Wolfcrow is one that really sticks out; the site has a great deal of valuable information on workflows and related topics. Another is the Tao Of Color, with a wealth of information and links focused on color correction. Both have newsletters which are worth the free subscription.

What's required to get the most out of the information in this book? A computer, some specific software, and digital assets to play with. As you follow along, each lesson and exercise will focus on a skill or technique that is best learned hands-on. Fortunately, most of the software listed in this book is freeware, or has demo or trial versions for you to load on your personal computer. If you're reading this as part of an academic course, several of these programs can have certifications of competency attached. Your institution might have the properly certified instructors to mentor you through the process.

Nonetheless, what's in this book has not been compiled anywhere else under one cover in this manner. It's taken two years to bring all this together, and for the DIT process to mature enough to be considered a true workflow. That being said, we're not 'ever' out of the woods in the digital profession. Technology and innovations in codecs, cameras, and software see to that. So strap on whatever it is that keeps you interested and motivated to chase the digital dragon. We're headed into the deep end of the bit-bucket to make you a very savvy digital asset manager. Enjoy the ride.

Robert L. Trim
There is only one idea or concept to remember: no matter what you do, don't screw it up.
About the Author
Robert Trim has spent more than 30 years in various parts of the film and video production industry. With his first fascination being the still image, Robert put himself through college shooting model portfolios, corporate brochures and annual reports, product photography, and lots of painful weddings and baby portraits.
Then the moving image caught his eye. 16mm film became a passion. The frames whirred, the labs processed, and late nights were spent hunched over editing benches with the ever-so-lovely smell of splicing glue. This gave way to the new medium -- videotape. Immediate gratification: shoot it and see it right away. Albeit black and white, using really bulky gear and not-so-portable cameras. Never mind. It was cool.

After his undergraduate degree, Robert worked in the industry for 20 years: as creative director for a regional advertising agency serving 400 Kentucky Fried Chicken franchise stores, more auto dealers than one could shake a stick at, and of course the odd clothing store or real-estate development marketing firm. Along the way he garnered several regional and national advertising awards, and added 300 TV and radio commercials to his portfolio. During his 8-year stint in the broadcast news business, he edited and produced his way to 11 Regional News Emmy awards working for networks and network affiliates. Robert also delved deeply into corporate and educational media production, producing more than 400 program-hours of educational tele-courses, corporate promotional videos, and product videos. Along the way, he found a few hours to help raise a family and finish a master's degree in business administration.

Robert is currently an Associate Professor of Digital Media at Utah Valley University, located in Orem, Utah, just south of Salt Lake City. He's an AVID Certified Instructor for Media Composer and AVID Symphony software, and ((RADAR)) DIT certified. He teaches these and asset management, in addition to his first love: cinematography.
About The Special Contributors To This Book

“Should we think we are the well of all knowledge and perspectives, we will prove to be the fool in the room.”
I just made that up -- and believe it to the core. I am not a rocket scientist, but my grandfather was. He worked on the Redstone rocket program at White Sands, New Mexico back in the late '50s. I claim no such brilliance. But I am a skilled discerner and seeker of those who are rocket scientists in our industry. Throughout this book, you will hear directly from people who have most likely forgotten more than I will ever know about all kinds of things. They have been very kind to tolerate my questions so you can have their sage experience and insight. To those, and others who have crossed my path, leaving me with gems of wisdom, I thank you.

Chief among these are Daren Smith, owner and chief technologist for ((Radar)) Mobile Studio, and Lisa Konecny, DIT and software/hardware/systems-innards guru. Daren has a long history within the digital asset world, working at Pixar, Luminus, and Digital Domain along the way. He has one goal in this digital asset management game: do it right. Period. His understanding of the end-to-end process, and all the parts between, is without equal. If there was ever a Wonder Woman of IT, it's Lisa. Her uncanny ability to figure stuff out, and then fit the micro into the macro, is without peer.

The 'real world' perspective from Daren and Lisa, on both the production and post-production handoff, kept this writing effort on the rails. It also allowed the core concepts to be as current as publishing will allow.

And my second and third and fourth eyes:

Jon Sebba: There are not enough words in the world to thank a person who likes to proofread others' writing. Jon, a published writer himself, gave countless hours to fix my grammatical errors and my use of 'effect' when I meant 'affect'. It flows out of my keyboard, then fine folks like Jon help the written ideas make sense, with sentence restructuring that -- flows. Jon hails from South Africa. Now retired from civil engineering, he and his wife spend their days writing, keeping a proper garden and performing grandparent duties. Many thanks, my friend.

Karen Bateman: A technical proofer par excellence. She brought years of experience creating technical manuals for the Super-Collider and other tech-heavy industries to the tutorial segments of this work. Karen is currently a Jr. Vice President and Risk Management Assessor for one of the world's largest banks. Many thanks for squeezing this into your busy life.

Mindy Trim: A take-no-prisoners final set of eyes, and she has a very good grasp of the topic, being an Asst. Editor and asset conduit for several TV series. She didn't cut good old Dad a break. Nor should she.

UVU DGM-2340 Section 1: the first class taught with this text, and the true test subjects. Only students would tolerate paying tuition to fix what the professor thought was 'really good stuff'. I gave them my best and they gave back. It doesn't get any better than that.
Foreword

Welcome to the second edition of Digital Imaging Technician. This is a fairly significant revamp of the book, with a new layout, significant updates to several software tutorials, and lots of other fun improvements, like the first-time reveal of the 'Naked Workflow'. This is revolutionary.

With the release of BlackMagic's v12 of DaVinci Resolve, those chapters underwent a major overhaul. Resolve received a facelift and significant new features. I think you will find the addition of worksheets in support of several chapters helpful in understanding the more difficult concepts. These worksheets are part of the asset download, in .doc format for you to print out and use.

As with any publication that links to or references web pages, those links change. Every attempt has been made to keep them as up to date as possible. The same goes for camera and software references. With software and firmware being updated constantly, it's difficult to keep that information as current as I would like. Still, you should be able to Google almost everything in the book for more current information. Let's get started.
What This Book Will Cover
This book was created to support academic coursework focused on the delivery of asset management practices and procedures. At first read, it might seem a bit lacking in some areas. That's because the instructor would be the 'gap filler' during lectures and hands-on labs. Nonetheless, if you were to use this book outside a classroom setting, there is a lot of practical knowledge to be gained, because this book will cover more than you think is possible in a short span of time. The phrase 'sit down, hold on and enjoy the ride' comes to mind.

Here's the important 30,000-foot view of where this book, and what you will learn from it, fits into the overall scheme of the DIT career. There is more. Lots more, and that 'more' is very technical. Although we will wade into the deep end of the digital pool, some of it is outside the scope of this book, though well within the scope of the professional DIT. We will deal with that extremely technical 'stuff' in another book.
[Fig. 1 DIT process steps on-set. Step 1: offload mags; backup workflow; cameras, file formats & hard drives. Step 2: import footage (sound & video files); transcode; which software. Step 3: dailies & editorial files; on-set experience.]
Once you grasp the core processes and concepts within the pages of this first DIT book, you will be ready to wade into your first BBQ. This is what grips and electricians call a 'newbie's' first day on the set. It is important to state at the outset that this endeavor will require a great deal of time. There's a lot of reading and a fair amount of hands-on tutorials to plow through. But it will be very rewarding. When all is said and done, you will have a great overview and will gain important skill sets in areas you once only dreamed of.

Many try to be, or think they can be, a DIT with a laptop, a hard drive and a card reader. The end result is often not good for the production company. Learn what's in this book and it will set you far above the 'wanna-be' DITs. After all, it's what you can do on the production set that counts. You must be able to accomplish the tasks at hand in order to be hired, or remain employed. Not much else matters.

There will be a lot of software to learn. If you don't feel you have a handle on basic computer operating system (OS) functions (like creating folders, saving files to specific places on hard drives, managing files, etc.), then this course will get out of hand for you fairly fast. We will be focused on the Mac OS, as is most of the industry in this specific area. You must understand basic functions within the Mac OS and be able to navigate the file and folder layouts with at least a basic skill level. We will not spend any time teaching the Mac OS in this book.

This training is focused on asset management in today's digital cinema productions. We will also touch on asset management for other areas of digital media, as they overlap a great deal. The job functions this book focuses on are called the Data Wrangler and the Digital Imaging Technician (DIT, pronounced D-I-T, not Dit). But it's bigger than just managing data, as we will see. There's also a fair amount of hardware to master, and the magical juice that holds it all together -- 'workflow', also known as the data pipeline. That pipeline starts at the camera setup and runs through to final delivery of the project. This book will cover only the camera-to-editorial part of the pipeline. The DIT's job is right in the middle of production and post-production. It's one of the most technical jobs on-set these days. The skills you will learn can be applied to a one-man shop, to a corporate video production setting, to a full-blown, multi-million dollar movie. It's, as they say, scalable. With all this in mind, let's begin.

NOTE: To acquire the digital files used in the tutorials, send an email to: [email protected]. The total download is around 6 gigs! If you would prefer the assets on a thumb drive (for a small fee to cover the shipping), indicate that in your email.
Setting Up Your Work Environment
NOTE: What follows is directed to those in an academic setting; however, if you are working on your own, the need for an external drive, and the folder and file structures, will still apply.

Later, we will go deeply into how DIT workstations are set up, file/folder structures for actual on-set work, and more. For now, we need to get your immediate work environment set up properly for the assignments to come. Required for this course are the following:

• Access to a Mac OS based computer with Intel processors. Newer MacBook Pro™ laptops work fine, and the desktops in most academic settings will be able to handle the needs of the assignments. However, you can use a Windows OS computer for most of the tutorials. Only a few of the software packages used in this book are Mac only.

• The Asset Management Coursework assets. This is a large download from the web. If you didn't order a flash drive with the tutorial assets on it, you will be given the link to a web location that has all the assets for the course.

• A personal portable hard drive with at least 16 GB of free space. This drive must have the following specs:
  - USB-3 or faster connection speed. FireWire 800 or Thunderbolt™ will work fine. There's a good selection of drives available now that have several interfaces, which makes them more practical.
  - Drive rotational speed of 7200 rpm.
  - 8 MB of cache or more.
  - SSD based drives work wonderfully for this course and most digital media work.

The key here is the USB-3 interface and a fast rotational speed. It is entirely possible to put your assets on a 16 gig USB-3 flash drive and work with little read-write lag. Standard 'spinning platter' drives that meet these specs are the orange LaCie portable drives and many of the external drives found on the Other World Computing (OtherWorldComputing.com) web site. OWC is very Mac friendly and has wonderful customer service. Other online sites like Tiger Direct (TigerDirect.com) also have good deals on portable drives. The G-Drive series of hard drives has long been used in this industry; they also offer drives with multiple interfaces, making them very adaptable to a wide variety of computers. In a pinch, you could try local stores, but they often don't have drives that meet the specs you will need for digital media/video. You will need ALL of the above immediately before delving into this book.

Setting Up Your Portable Drive: To keep track of the tutorials, we will need to set up several folders on the portable drive. I'll refer to this drive as 'your drive' from here on out. Follow these instructions carefully, and double check your work, please. On your drive, create the following folder structure (a scripted sketch of these steps follows the list):

• Copy the downloaded CourseAssets.zip files to your external drive. There should be three .zip files total when all are downloaded.
• Un-zip the files by double clicking on them. One of the new folders will be CourseAssets. The others will be CA-2 and CA-3.
• Drag the contents of CA-2 and CA-3 into the CourseAssets folder. Do NOT drag the CA folders themselves into the CourseAssets folder; you want to select ONLY what's inside the CA folders.
• Inside the CourseAssets folder, create a new folder: Completed Assignments.
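If you'd rather script those steps than drag things around in the Finder, here is that sketch in Python. It's a minimal illustration only: the mount point is an assumption (substitute whatever your labeled drive mounts as), and it expects the three .zip downloads to already be on the drive.

```python
import shutil
import zipfile
from pathlib import Path

# Assumed mount point for your labeled drive -- change to match yours.
DRIVE = Path("/Volumes/YOUR_DRIVE")

# Un-zip the three downloads (CourseAssets.zip, CA-2.zip, CA-3.zip).
for z in DRIVE.glob("*.zip"):
    with zipfile.ZipFile(z) as archive:
        archive.extractall(DRIVE)

course = DRIVE / "CourseAssets"

# Move the CONTENTS of CA-2 and CA-3 (not the folders themselves)
# into the CourseAssets folder.
for ca in ("CA-2", "CA-3"):
    for item in (DRIVE / ca).iterdir():
        shutil.move(str(item), str(course / item.name))

# Create the Completed Assignments folder inside CourseAssets.
(course / "Completed Assignments").mkdir(exist_ok=True)
```

Either way you do it, double check the result before moving on.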
NOTE: It is very important that you label your drive, and any drive you use on-set. Look around you: if you or someone you know left a drive behind, how would anyone know who it belonged to? Use a label machine or a permanent marker and put your name and phone number on the drive.

Software For The Course: Several of the software packages used in this course are free from the web. You can download and install them on your computer, freeing you up from the classroom computers. The following software can be installed on your computer if your machine meets the software specs:

- MpegStreamclip™
- Sony XDCAM Browser
- REDCINE-X Pro™
- Arri Raw Converter™ (ARC)
- BlackMagic DaVinci Resolve
- QtChange (Win/Mac) (30 day free trial)
- SCRATCH (Win/Mac free trial)
- ShotPut Pro (Win/Mac 6-offload free trial)

Some of these are totally free; others are trials. The trial time starts when you first launch the program(s). Be careful not to launch them until you think you will be working with the tutorials, so you won't run out of 'free' use time. There are hot links within the chapters where they are used, so you can download and install them when you get to that specific tutorial.
Chapter 1

Introduction To The Role Of The DIT
Required viewing and reading: Blue -- Behind the Scenes, http://www.youtube.com/watch?v=e_z1RRfpGhc&feature=player_embedded and The Role of the DIT, http://www.youtube.com/watch?v=ahH3PZA7Gh0 and Defining the DIT: Biggest Misconceptions, http://nofilmschool.com/2013/10/defining-dit-biggest-misconception-dits/
It could not have been better stated than this. From InsideJobs.com, the most current definition of one aspect of the DIT job is as follows:

As a Digital Imaging Technician, you work on film sets, making sure that everything goes smoothly with the camera equipment, and that the highest digital image quality is achieved. Digital Imaging Technicians are there through every step of filming, helping the Cinematographer or Director of Photography achieve their vision. The switch from traditional film stock… to digital technology has made you an invaluable asset on any motion picture, television, or commercial set. Many Cinematographers need assistance navigating this relatively new high-definition terrain, which is where Digital Imaging Technicians come in.

Your foremost duty is to serve as on-set quality control. You synchronize the cameras and make sure they're set up to properly capture the scene. You're also concerned with the exposure and saturation, checking that the captured image isn't too dark or too bright. Plus, it's your job to ensure that there's visual continuity from scene to scene. In other words, your goal is to limit how much needs to be fixed in post-production.

You must be incredibly organized, since you're responsible for downloading the camera footage onto a hard drive. Every bit of downloaded data must be categorized and named. This makes it easier for Cinematographers to review clips while on set, and this step is incredibly crucial to the editing process. Basically, you're a resource and sounding board for anything related to image quality. Also known as: Digital Imager, Imager.

I take issue with parts of the definition above because it involves creative control, and that is the purview of other department heads. This is where on-set politics come in.
We'll address this later in the book. Politics are almost as important as doing a good job as a DIT. Some would argue they're more important.

A division of duties needs to be pointed out here. The 'Data Wrangler' is a job limited to gathering and offloading the camera memory media (or mags). They log and track the data to the various backup drives. Interestingly enough, they're also called a 'Slappy' by some post production houses: they 'slap' or load drives in and out of docking stations, and the name came from the sound made when you dock drives or put tapes in their players. The DIT might also do the data wrangler job, but their skill sets go beyond this basic function. DITs are skilled at transcoding files (flipping), syncing audio and video, and color correction for presentation as dailies and delivery to editorial. We'll explain 'dailies' a bit later.

Media production -- TV shows, feature films, corporate videos, etc. -- is collaborative by nature. The key creatives on these projects are typically:

Producer(s)
Director
Writer
Director of Photography
Art Director
Digital Imaging Technician

Yes, the list now includes the DIT. It's not that the folks in the photochemical processing part of film-based production aren't creative. They do bring creative 'bits' to the whole process. It's just that digital acquisition is now so technical, and at times complicated, that the DIT can and does offer creative suggestions. Proper camera setup and exposure settings are part of the creative processes. The DIT also reigns over the workflow path, assuring that what was shot makes it to the editorial part of the pipeline, intact.

The DIT is now the 'digital lab' on the production set. There are direct equivalents between the old film handling and processing, and what the DIT does:

Offload film = back up camera memory cards
Process film = transcode digital files
Create work prints = create dailies

There are now additional aspects to this position, due largely to improvements in software and new standards. The salary range as of the publication date is $17,000 – $37,000 (data from the U.S. Department of Labor). However, these figures are low if you're in a region that has union or larger-budget productions. You could easily double these figures.

The very basic job tasks for a DIT are:

• Preparation and offloading of camera magazines. (The Data Wrangler typically does this if the budget will allow.)
• Consult on technical/software camera setups.
• Confirm multiple backups.
• Organize and prep footage for the editorial department.
• Synchronize audio clips with video clips.
• Create 'dailies'.

There are more tasks. We'll get to those.
NOTE: In a union setting, the DIT falls under Local 600, the camera department's representation. As such, they report to the DoP (Director of Photography or Cinematographer) on set. The DIT position is new on set, as are the position's scope, job duties, etc. All this is still being defined, and it's not without controversy, as with most evolving job positions in a union setting. If you're going to be working in a union region, it would be good to make contact with the local office and get a feel for what they are thinking and how you can get in. Don't be surprised if you find a lack of information within your local about this position.
The DIT is part of the camera department. So the reporting structure on-set is:

DoP (Director of Photography or Cinematographer)
Camera Operator (head of the camera crew)
1st Asst. Camera
2nd Asst. Camera
DIT

Now, the DIT technically has two bosses. The DoP is the big boss, of course, but most of the interaction is with the 1st Asst. Camera. They are the voice heard on the radio calling for a mag change, and they are the person who hands off the camera mag to the DIT. They also call for the DIT when there is a question about camera software setup or on-set video monitoring.

On most sets, you will find the DIT close to the side of the DoP. Questions about the look of the monitor and camera, and about exposure, are directed to the DIT. You now get a glimpse into the depth and breadth of knowledge the DIT has to master. It's not just sitting at your computer, copying files and moving assets through special software. It is about knowing the camera inside and out. It's about knowing what the exposure might do to the final image. It's about the ability to set up high quality monitors so the image on set will be as close as possible to what they expect to see in post. And this is just the start.
1.2 Should I Be A Digital Imaging Technician?
If you can answer this math question, you are ready to be a DIT: "If two monkeys have six pockets, how many apples fit in the box?" -- Daren Smith, CEO, ((Radar)). Therein lies your average day for a DIT.

The Inside Jobs website states the following: you should have a high school degree (certificate) or higher and share these traits:

Detail Oriented: You pay close attention to all the little details.
Trustworthy: You are known for your personal integrity and honesty.
Levelheaded: You hold your emotions in check, even in tough situations.

And the big one -- you're willing to learn more and more and more. If you were to enter the marketplace today with basic knowledge of, say, the Mac or Windows OS, the Sony EX-3 camera and two critical pieces of asset management software, you would be way behind the curve. You need to have a passion for learning all you can about every camera available -- well, at least the common ones used in your region. This list should include:

• RED cameras.
• Canon's 5D, 7D and Ti series. If you're in an area where Nikon is popular, your focus should be on those as well.
• Sony XDCAM based cameras and the F5, F55, F65.
• Canon's C-300.
• Arri Alexa.
• BlackMagic cameras, once they have a market share, which might happen.
• GoPro™. This little imager is pervasive now and has special file handling needs.

This industry is largely based on the camera 'flavor of the day'. If one of the top DoPs likes camera XYZ, lots will follow their lead. You need to be one step ahead if possible. You can't do the job professionally based on hype, conjecture or what some blog rumor says is true. You have to know for certain what a camera can and can't do, as well as every menu setting and what that setting does. At the minimum, you must have all the most recent camera software build manuals for reference.

If you've only seen these cameras in magazines or YouTube videos, don't despair. You can make friends with the local production equipment rental and sales houses, and they will find time for you to 'play' with these cameras on either end of a rental. You should also download the manuals for most cameras off the web. Look in the Appendix for links to manuals and manufacturers' web sites. Smart phone apps now carry lots of camera-specific information. These are all good tools for the professional.

Acquire a Knack For Learning

If you don't understand how software works, or the basic functions of your chosen OS, then it's time to look elsewhere for a career. The DIT job function is very focused on learning software at a fairly high level, and on figuring out, fast, how one piece of software will relate or intersect with another. If you don't know a data bus from a GPU, it's time to look elsewhere. DITs must understand the innards of the computer and what will make them faster.
If you don't like the idea of leaving set an hour after everyone else, this is not a good fit for you. But if you gobble up computer, camera and techie stuff, you're on the right track to success.

Fundamentally, you must be a people person. You will be dealing with many different personalities on set. Some are strong and overbearing; some lack communication skills but expect you to read their minds. Most will have no clue what you are doing all day, but expect what they need -- right now. You will be satisfying on-set demands, and the production and post-production requirements, while balancing the realities of what the most powerful computers you can afford can do. And it must be done, all day, every day, without error. If this fits with your personal goals, it's a great, new, ever-changing career. Be advised, you will be spending a great deal of money and time keeping up. Again, you will be chasing the digital dragon like you have never done before.

Oh, the answer to the math problem at the beginning of this chapter is -- "eight, two terabyte drives."

This is a good read with information from working DITs: Defining DIT: The Big Misconception, http://nofilmschool.com/2013/10/defining-dit-biggest-misconception-dits

Work for you? If so, let's get started.
Some Real TIPs For All DITs (lol):

- If you sit there dragging and dropping to transfer files, you are not a DIT.
- If you don't know color theory, you are not a DIT.
- If you cannot read a histogram, you are not a DIT.
- If you can't do setups like detail, noise reduction, secondary color correction in the camera in every scene, you are not a DIT.
- If you can't match whole material on-set instead of "fixing everything in post", you are not a DIT.
- If you can't record the proper signal -- so you can use as much information in post as you can -- you are not a DIT.
- If you never use a gray card, or don't know why you should use a gray card, you are not a DIT.
- If you can't advise on exposure, you are not a DIT.
- If your DP does not respect your opinion, you are not a DIT.
- If you color on-set, yet don't or can't calibrate your own monitor, you are not a DIT.
- If you are not the consultant on-set for your camera crew, you are not a DIT.
- If producers are more excited over your rate than your work, you are not a DIT.
- A DIT is a person who is competent in data management, look creation, and giving knowledgeable advice to the DP on ways to get the best image possible.

Kwan Khan

Well, some items in this list are a bit over the top, but there is a kernel of truth in everything Kwan mentions.
1.3 The Scope Of The DIT… Not Written In Stone
Required reading: Working with a DIT, http://www.sixteen19.com/blog-entry/daily-staley-working-dit

Required viewing: Paul Cameron, "How are you securing integrity of the image in the capture world?", http://www.youtube.com/watch?v=i2YXcRxJitI
“On-set dailies is a growing trend in the industry that naturally falls on the D.I.T., but in whatever capacity we're working for a production, one of the most important components of our job is on-set quality control. In the end, the best way to describe my craft is that the D.I.T. provides a set of critical eyes to help ensure the highest quality product from the set.” -- Ben Cain, DIT and Colorist

As mentioned, the DIT's job is to assist the DoP during the production, to keep it moving, stave off potential technology problems, and be clairvoyant. We can't always be what they think we should be. We are, after all, human. Because the DIT gives the director and the DoP their 'window to the world' when they look at their work on your workstation, you must be able to show them quality images that match their vision.

If you read many of Charley Anderson's blog posts, you hear one steady drum beat: 'Why does this stuff have to be so expensive?'. Good question. There are no good answers except 'it just is'.

Starts in Pre-Production

"What I would recommend, for every job, is to request a specific day of prep to go to post and get your monitors calibrated to their standards (every post house is different, FYI)." -- Charley Anderson, Local 600 DIT, 'Dork In a Tent' Blog

Most people think that the DIT's job begins on the first day of production. Far from true. You're chasing information about the job you're getting ready to do, getting equipment ready, communicating with the DoP, etc. You will be working with the camera department when the camera gear gets shipped in as well. Each job will be different, which will require you to adapt on the fly. That starts with gathering as much information as you can in pre-production.
Scope Has Expanded

In recent years, this position has gotten much larger in scope and demands. What was once a computer, a card reader, and some hard drives has grown to require a blisteringly powerful computer, really fast data connections, huge hard drive arrays and some really expensive, sophisticated software. A graphic representation of the new DIT workstation configuration looks something like Fig. 1.

[Fig. 1 Basic flow diagram of hardware for a DIT station.]

Backing up or full-on asset management: It's also normal for the DIT's services to be limited to just backing up the camera files, organizing them into logical folder structures, and that's it. On other productions you will start with that and then process all the files with limited color correction, syncing the audio and video files, and creating dailies and deliverables for editorial. All this will be explained as we move along, but for now, the basic concept to remember is that each job will be different.

Video assist: Another task for the DIT is setting up the monitors for proper color and camera-output viewing. This might be anything from wireless or wired feeds from the camera to the monitor, all the way through to a LiveGrade-caliber system. You may have this equipment in your inventory (extra income that is offset by extra work on set), or the equipment will be rented, but you will be asked to make it all work.

Camera settings: But one task that is increasingly important for the DIT to perform on-set is camera settings -- everything from proper compression settings to the actual exposure for the shot. (There is some controversy right now about the appropriateness of the DIT setting the f-stop as part of their duties.) You now need to know the workings and operations of the cameras inside and out. Making back-ups is not the camera operator's job; he or she was hired because they can frame the shot. The 1st Assistant Camera is hired to keep the camera running, set focus and change lenses. You should hope they have experience with the camera's inner workings, but don't count on it. Digital cameras (think of them as more computer than lens) are far more complicated, and different between manufacturers. It's all too common for the camera department to huddle around the camera with a setup issue. Then you may hear, 'Hey, come over here, we have a question about…'. It's now your job to help solve their problem. Familiarity with the camera menus and setup is now fully part of the DIT position.

Backups

Just a few years ago, the assistant camera person brought you the memory card or the camera hard drive, you plugged it in, Finder-copied the files to a folder on another hard drive and called it good. Bad things happened from time to time. A file didn't get copied, or was corrupted during the copying process. Digital film production got a black eye.

Insurance companies stepped in and required more data assurance. The completion insurance 'bean counters' knew two things: motion pictures shot on film stock over the past 100 years have a good track record for not having bad things happen to the camera negative, and, in addition, their own personal hard drives have crashed, losing all their emails and family photos. As a DIT, you use a computer, and the insurance companies assume you will lose all the production's files, which they will have to pay to reshoot. In Figure 1, you will notice the three backup drives. Film-based productions are not required to strike three negatives from the original, but this requirement goes back to the insurance executives' bad experiences with the data on their personal computers, and to the short history of digital acquisition.
Asset management and dailies: Making three backups is not the only task within the scope of the job. As mentioned, next comes asset management. You have dutifully made the required multiple back-ups of all files from the camera. Now you are told that the director and DoP want to see dailies. You've heard that term from film productions, and it's an interesting misnomer. Even if you live in LA or a major film city where film processing labs exist, it took a day for the exposed film stock to get to the lab, a day to process it and strike a work print, and process that print, then a day to get back to set where it could be viewed. So, three days for something called 'dailies'. Digitally, we can give the director what they shot the next day.

There is one significant requirement to accomplishing this: computer horsepower. Camera RAW files and other hard-to-crack-open codecs require huge processing abilities to transcode, or convert, them into viewable file formats. Part of asset management is to take one of the backups and create dailies. To do this, you must first sync the audio files with the video files on each take, which requires software that can ingest the files and link them quickly. Then you must transcode these files into viewable file formats -- all with some sense of organization and expediency. (A minimal sketch of that transcode step appears just below.)

Metadata: Another part of the digital asset management job is metadata. As if it weren't enough that you are backing up and confirming files and creating dailies, you will often be asked to use software that allows digital information to be added to the file(s) for later searching. Which software will depend on the production company. They will provide a 'Tech Specs' sheet, and you must follow it to the letter (see Appendix B for examples). Sometimes they will provide you with a copy of their software -- which means another piece of software you have to master on the fly.
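As promised, here is a minimal sketch of the transcode step, driving the free ffmpeg tool from Python. It assumes ffmpeg is on your PATH, and the ProRes target and all paths are illustrative only -- the production's tech specs sheet dictates the real deliverable format.

```python
import subprocess
from pathlib import Path

def make_daily(src: Path, out_dir: Path) -> Path:
    """Transcode one camera clip into a viewable dailies file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    dst = out_dir / (src.stem + "_daily.mov")
    subprocess.run(
        ["ffmpeg", "-i", str(src),
         "-c:v", "prores",      # ProRes 422: a common dailies/editorial codec
         "-c:a", "pcm_s16le",   # uncompressed 16-bit audio
         str(dst)],
        check=True)             # stop loudly if ffmpeg reports an error
    return dst

# Hypothetical usage: flip every clip from one backup into a Dailies folder.
# for clip in Path("/Volumes/BACKUP_1/Day01/A001").glob("*.mov"):
#     make_daily(clip, Path("/Volumes/BACKUP_1/Day01/Dailies"))
```

Real dailies software layers audio sync, burn-ins and color on top of this, but the core of the job is exactly this kind of batch conversion.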
One-Lighting: On-set 'quick looks' at scenes require the audio to be synced up and some basic color correction performed -- a 'one-light', as it's referred to. Simply put, it's a very basic correction to the overall color balance and luminance levels, and that's about it. If there is a decided-upon 'look', those settings are saved as a LUT (Look Up Table), which can simply be dragged onto the clip so all the corrections are made for you.

Setting Exposure: There are basically two types of DoPs out there: those who fully understand exposure in the digital realm, and those who are uncomfortable with the transition from film to digital. DoPs often get paid a great deal of money each week to make creative decisions, select lenses and set exposure. Get ready for the DoPs who need you by their side to set exposure. Each type needs a different level of personal support. This is where the 'people' skills come into play.

Unions: As you might already know, unions within the film industry represent various crafts: Local 600 for the camera department, Teamsters for transportation, etc. All the unions have fairly strict, defined 'fences' that their members must work within. The odd craft out is the DIT. It has, for now, ended up within the camera crew and Local 600. However, it's a strange relationship at best. It was best stated by an LA DIT I was chatting with: "They (Local 600) pretty much say, welcome, we'll take your dues, and now go out and work and don't bother us."

The DIT duties are changing fast, and the Union, in reality, is struggling to define the job scope and support its members. This response from Tyson Birmann, a Union DIT, to an article on the web about what DITs do, is a very good argument for where DITs should sit in the structure of the unions:

"All of the confusion starts with the Local 600, and they won't change it. It is the ambiguity of the position that allows them to keep control of it within IATSE. The position is a major point of contention between the various locals. Local 700 claims that it is covered by the COLORIST and ASSISTANT COLORIST positions. Local 695 says that the position is covered by VIDEO TECH. If you look at the "verbiage" for each of these positions, various aspects apply to all of them.
In reality, a good portion of the DITs (who aren't doing real-time color between camera and monitor) are doing the same exact work that non-union techs are doing at places like Fotokem. The problem is that instead of the Locals coming to an agreement, they continue to leave it vague so that they all have a claim on it. The bottom line should be that we are IATSE, and the local shouldn't matter as much. I'm one of the few who would rather see DIT go to the 700 (the editorial union). In the end, a larger percentage of the work being done is for the editor: the preparation of the material, the workflow, etc. I think that the position of DIT should be EXCLUSIVELY the act of doing real-time grading and image evaluation on set. Once the image is recorded and saved, it should go to another union -- a union that has a better understanding of the needs of editorial and post. I realize the whole "agent of the Cinematographer" thing comes into play. But I'd answer that with the fact that the DI colorist is the MOST influential in the final image the DP puts to screen. They are in 700. I just know that when I call 600 to ask them about anything technical, they really don't have answers. Most of them don't really have any clue what happens after the light goes through the lens. As a result, I think that is where their jurisdiction should end as well. That is my humble opinion, but I am also someone dealing with this gray area every day."

Ultimately, you will need to identify what you may be asked to do on each and every production job. Suffice it to say, your job will expand more and more if you let it.
1.4 The Bumpy Transition To Digital
This has been a bad road, littered with ruined projects and some negative karma thrown about. Like a great deal of leading edge technology, the cart gets out ahead of the horse. Digital imaging on movie sets was no exception.

Video technology made the move to digital acquisition fairly smoothly, due to the fact that one part of the system didn't change: the media. The new-found digital ones-and-zeros were recorded to -- video tape. Tape was a known commodity, and how it was handled in the workflow was a given, very much like the film-based workflow. The digital tape was played back in the same familiar looking machines, with the same controls. Only the electronics inside were different. This was a comfort zone for all involved.

File Based Recording

Then came the seismic shift -- file-based media. With one simple change, everything was out of whack in the technology chain. Recording to hard drives (at first), then memory cards, was a natural next step. But the recording media technology was not up to the task. Video files were huge, and the data was a continuous, non-stop stream of bits and bytes. Enter the solution: compression and codecs. Much like the early parts of the Bible, this begat that, and their product begat this proprietary technology, and so the digital quagmire spread. Everyone wanted their piece of the emerging marketplace, and if their system worked only with their software and codecs, they ruled a pipeline. But not everyone bought the same systems, so 'not playing well with others' became the normal way of working. Not fun.

The big force behind all this was to replace film with digital acquisition. This was a formidable target at the time, and a goal which is only now starting to become reality. Some would differ with great fervor, but that is a discussion for another day. To achieve the quality of film would require totally transformative work on two technology fronts: better sensors and much higher data rates. Both had roadblocks. The higher data rates required faster media for recording, or new, yet-to-be-invented compression software. Codecs were coming of age with the first digital-to-tape formats (DV, DVC Pro, DigiBeta, Mpeg-2, etc.), but now this compression had to be reinvented for file-based recording. Parallel to all this, the sensor technology consumed vast amounts of R&D capital and manpower, and required lots of thinking outside the box. The camera imaging chips came raining down from Sony, JVC, Texas Instruments, and more. By moving first into the still camera arena, which allowed for mass market, high volume sales to recover investment dollars, the technology inched forward. Expensive and awful by any standard, they were revolutionary. Fundamentally, taking a single still image and saving it to a digital file is far easier than capturing 24 or 30 images per second and saving all that data in a never ending stream.

The cataclysmic intersection was on the production set. Camera crews struggled with the settings, all the wires, cables and such, to make these new beasts run. But what to do with the data? Because the technology was new across the board, everything was fragile and unknown. Hard drives crashed, or an errant cable came unplugged and the drive's directory was scrambled. The assets were lost. Data recovery tools did not exist to repair these new file formats and data structures. The intrepid pioneers pressed on. For many, their first experience was their last, for the time being. Digital acquisition was then relegated to documentarians and indie filmmakers.
Digital camera imaging improved when certain technical issues were overcome. Far from perfect, the images became impressive. Digital camera makers also learned that the cameras themselves had to be bulletproof in the hands of gorillas; filmmaking is hard on everything. The data part remained awash in voodoo. That side of the equation still had to be figured out.

Although High Definition (HD) resolution was appearing in test labs as early as 1968, HD did not become widely available until the early 1990s. The raster size, or image width in pixels, was limited to 1920 or 1280 (commonly referred to as 1080 and 720, where the vertical resolution is used to describe the frame size).

Then the RED One camera (Fig. 1) burst onto the newly named 'digital cinema' arena with a system geared and built for film production -- but one with a workflow that was, well, pure hell. What made this camera so different? It shot in raster sizes ranging from 2000 pixels wide (or 2K) to 4000 pixels wide (4K). As if that weren't enough, it recorded in a new file format for video, called RAW. These were not image files, but data files that had to be reconstructed via software to create a viewable image.

[Fig. 1 Red One Digital Cinema Camera. Image © Digital Intermediate, UK.]

To solve some of the file handling (workflow) problems created by the RED One camera, a bright programmer outside of Los Angeles, Calif., created a tool called R3D Manager (now renamed DoubleData™). This small software utility allowed users of the RED camera to move their files from the recording media (CF cards or hard drives) to backup drives with something called checksum error checking. We'll get more into this later, but for now: every file is checked for bit-by-bit integrity while it's being copied, so the file on the destination hard drive is guaranteed to be a mirror of the original file. A minimal sketch of the idea follows.
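This is not DoubleData or any shipping offload tool -- just a bare-bones Python illustration of what checksum-verified copying means. The paths are hypothetical, and real tools add logging, multiple destinations and retries on top of this.

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks so multi-gigabyte clips don't fill RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verified_copy(src: Path, dst: Path) -> str:
    """Copy src to dst, then prove dst is a bit-for-bit mirror of src."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)          # copy the file plus its timestamps
    src_sum, dst_sum = file_hash(src), file_hash(dst)
    if src_sum != dst_sum:
        raise IOError(f"Checksum mismatch on {dst} -- copy is corrupt")
    return src_sum                  # worth logging in your offload report

# Hypothetical usage:
# verified_copy(Path("/Volumes/MAG_A001/clip001.mov"),
#               Path("/Volumes/BACKUP_1/Day01/A001/clip001.mov"))
```

MD5 is shown only because it's fast and widely reported; the principle is identical with xxHash or SHA variants.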
Tech History Bit: The first film for general release to be shot on video, then transferred to film stock for distribution, is believed to be 'Car Wash', released by Universal Pictures in 1976. The script was bad, the acting was over the top, and the song did better in the music charts than the film did in the theaters. The restrictions of both the Standard Definition broadcast format and the large, TV-studio-sized cameras were clearly evident in the lack of visual quality and the awkward camera moves. But it was the first salvo for video acquisition in the theatrical film realm. The first high definition (HD) feature film shot, edited and released in a digital format was Star Wars Episode II: Attack of the Clones, shot with the Sony CineAlta™ HDW-F900 camera.
Data Piling Up

The fundamental problem with all this 'data activity' was that anyone with a computer served as the yet-to-be-named, or defined, data manager. They knew nothing except how to drag files from the camera magazine to their computer. Even today, some are still clueless. End result: lost and corrupted files, with no organization or method to the process. It was a dark time for digital filmmaking. Unfortunately, those bad practices continue to this day.

Flash ahead a few years, and we have data management fairly well under control. Well, we can assure it's not corrupted, and it IS organized, but we don't have a good handle on the sheer volume of data being generated. Better images mean bigger files, and bigger files need more hard drives. One of the current cameras from Sony can generate 11+ terabytes of data on an average shoot day -- and that's before the data is backed up three times. However, on an average day with the more widely used digital cinema cameras, it's common to handle 200 gigabytes to a terabyte of data. Many will argue that the cost of data storage now outweighs the cost of film stock and processing, offsetting the potentially faster on-set shooting time offered by digital cameras. Again, another discussion for a different day. But it's a major factor in the decision on which platform to shoot. The DIT can help mitigate some of this pain with good workflow practices.
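To put those numbers in perspective, a quick bit of arithmetic (the daily figure here is an assumption for illustration, not a quote from any production):

```python
# Hypothetical mid-size digital cinema shoot: how fast drives fill up.
day_tb = 0.5          # camera originals per day (assumed 500 GB)
copies = 1 + 3        # the originals plus three verified backups
days = 25             # a five-week shoot

total_tb = day_tb * copies * days
print(f"{total_tb:.0f} TB of storage")   # 50 TB before dailies or renders
```

Numbers like these are why the storage-versus-film-stock argument keeps coming up.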
Confusion With All The Different Camera Formats

This is an area where you, as the chief technologist on-set, need to know your stuff. It's a long-held aphorism that 'it's not the tool, it's the operator'. The DIT can't help the story being filmed, good or bad. They can assure that technology distractions won't interfere with the ability to tell that story. As a DIT, you won't be consulted on which camera to use. You will have to react to that decision.
From the manufacturing side of things, camera creators strive to bring something to the process which will make productions want to use their gear: better images, more dynamic range, less noise, larger raster sizes, and the list goes on. RED cameras offer 4K images (roughly four times the pixel count of standard 1080 HD) and 12 stops of dynamic range. Others struggle with 1920x1080 raster sizes and 8 to 10 stops of latitude. Arri settled on a 2K raster size with the Alexa™ camera and focused on processing an outstanding image, earning it a strong foothold in the production marketplace even with other cameras producing larger raster images. Arri also offers an easier workflow, which added to its desirability. The RED One camera was fraught with problems, being basically software driven, but the waiting list to get one, despite the issues, was more than a year long. It was the flavor of the month, and still is, with the introduction of their new models and a new imaging sensor (called Dragon™).

Ultimately, each camera manufacturer touted their images, and sometimes ease of workflow, as an advantage. Some cameras, like the Sony EX-3 (Fig. 2), became the favorite of reality shows. These cameras produce good images from a reasonably priced body that fits in the camera operator's hands like the older, shoulder-mounted news gathering style cameras. Again, they represented a friendly interface in an increasingly changing world. But the recorded file structure and codec presented quite a hurdle for asset managers.

[Fig. 2 Sony EX-3 XDCAM camera. Image © Sony Corp.]

File Compression
The compressed video format files are now giving way to much larger 'RAW' files -- far less compression, if any, and substantial data rates. And with their basic structure being a data file, not an image file, there's again a whole new wrinkle to upset the digital cinema workflow boat. Just when the computer manufacturers had created enough processing power in the magic box to handle video and audio files in real time, the game changed. It now seemed like there wasn't enough CPU power on the planet to munch through these raw files.

So, what codecs or compression types are you expected to deal with as a DIT? Just spend an hour on the web and Google some cameras you know about. Look at their spec sheets and note the file format(s) they can record: AVCHD (a derivative of h.264), XDCAM (Mpeg-2 in a different wrapper), Mpeg-2 VBR (JVC's mode), Arri Raw, Log-C, R3D, the now-emerging Jpeg 2000, and h.265, announced in 2013. The list goes on, and each has its own way of being handled, decoded, or flipped.

If you weren't already overrun by all this information, the scary part is that YOU must know each of these codecs and wrappers produced by all the different cameras. You also must have the ability to move these files into something the editors can work with, AND that can be viewed on the producer's iPad and played back with a one-light color correction for the DoP.

Ripple Effect, Into Post

Still with me, eh? Okay, here we go. Post-production, starting with editorial, has needs. Based on the editing system/software they are using, you will deliver files to them in one format or another. The job of the DIT is to confer with the DoP about which camera they are going to use, and then again with the post production facility about their preferences. Your goal, as a professional DIT, is to make their jobs easier… or at least not to introduce more problems. Think of yourself as a bridge or conduit.
now the norm. In 2013, several film camera manufactures saw the writing on the wall and ceased production of all their film cameras. The need for the trained, professional DIT to protect their images, while the creatives types are being ‘creative’, is growing, and fast! It’s not that hard to learn, but it’s not everyones cup of tea. If you’re going to have anything to do with any aspect of digital media, you will gain tremendous skills and career value from learning what we will cover in the following chapters.
Networks (broadcast, cable and over-the-net) have ‘deliverables standards documents’ that detail, very specifically, how files will be organized, codecs to be used, documentation, how the show will be delivered, etc. These are quite lengthy and very specific. What you’re asked to deliver might well be dictated by these requirements. One great advantage to working with an established post-facility, they have a workflow, in-place. Their livelihood and reputation ride on moving this mass of data (scenes and takes in the minds of the producers and directors) through to distribution. All the in-between details are just distractions. Producers and studios want a movie or a finished commercial or corporate presentation for the web. They don’t care about the technical issues involved in getting it there. They are business people and creative people, so technology is, for most of them, just a huge distraction. Conferring with the post facility, will give you a road map to success. If they want certain deliverables done a specific way, make that happen. What happens on-set to the files, good or bad, ends up in the hands of post production. So, to come full circle, why is this job needed on set? The answer should be obvious from all the problems mentioned to this point. Fundamentally, digital acquisition is
29
Chapter 1 Review Questions

Answers can be found in Appendix C.

1) A Data Wrangler does what on set?
A. Color corrects, applies a look to the clips.
B. Works with the camera department setting iris levels on the camera.
C. Brings mags to and from the camera, then does backups and logging.

2) The DIT replaces what job on a traditional film set?
A. Audio Tech
B. Loader
C. Scripty

3) The shift from film stock to tape to file-based workflows created the position of DIT.
A. True
B. False

4) Because there are just a few standardized camera file formats, the position of DIT is relatively easy to do.
A. True
B. False

5) The scope of the DIT job starts in ______ and ends with the assets delivered to _________.
A. pre-production, editorial
B. the first day of production, last day of production
C. pre-production, color correction

6) The typical asset workflow would be which of the following?
A. Camera mags to backup to one-light to asset delivery.
B. Camera mags to backup to additional backups to audio sync to one-light to asset delivery.
C. Camera mags to backup to editorial.
D. A, B and C are all correct.

7) The DIT is the critical eye on-set to insure data image integrity.
A. True
B. False

You should be able to explain:
What are the 6 basic or key tasks of the DIT?
Chapter 2 Codecs, Color, Color Spaces
The 3 C's

Now, as promised, we're stepping into the deep end of the digital pool. Codecs, Color, and Color Spaces are all tightly linked. I'm only going to mention this once (well, maybe not): you MUST understand the 3 C's at a very fundamental level. Because these are all fairly complicated yet totally intertwined, we will delve into each in its own section.

Basically, Codecs make it possible, through compression (aka, throwing out information), for us to record consecutive high-resolution images to digital media.

Color is what we see. It might be via reflective imaging, like looking at a painting, a printed page, or the clothes one wears. There are a lot of variables that affect how we physically see any given color; however, there are very scientific ways of defining that color. It is these variables that make it hard for all humans to see the same color, or even agree on the color they see.

Color Spaces are the containers that define the range and intensity of colors we are able to display. The spoiler alert here is that we can see many more colors than most digital imaging devices can render or capture. The holy grail is perfect rendition of visible colors and luminances within the digital realm.

Let's take them one at a time for ease of understanding, but in what may seem like a backwards fashion. With codecs, we are trying to balance the obtainable color range against the limitations of computers and recording media. Understanding this problem will make the bigger problem easier to grasp.
2.1 Codecs The Magic Sauce
A Bit Of History

"You're trying to stuff an elephant through a straw," said Ben Waggoner, noted expert in compression and codecs.
That sums up the task at hand.
To get a grip on the numbers, a full uncompressed HD video signal can run as high as 10 Gb/s. That's gigabits, or 10 billion bits every second. There are precious few recording devices that can handle that kind of data stream continuously. This is a tightrope act where we want to get the best possible image (still or video) yet keep the file size or data rate as low as possible. It's such a significant task that there is a career called 'compressionist' dedicated to this job. The answer to handling all this continuous data movement is 'compression'. This means throwing out as much data (ones and zeros) as possible while still keeping the image quality up to some sort of standard. Notice we didn't say 'the best image'. Image quality is subjective in most cases. Look at YouTube for example. When it first appeared, most thought its quality unusable or unacceptable. Now it's part of our quality reference: 'do you want it digital cinema quality or do you want it YouTube quality?' Which means that level, or lack, of quality is now acceptable to many.
As mentioned, compression is done with software called 'codecs'. Codec is a combination of two words, Compression and Decompression. They are two sides of the same coin.

When Did The First Codecs Appear?

"As early as 1929, Ray Davis Kell described a form of video compression and was granted a patent for it. He wrote, 'It has been customary in the past to transmit successive complete images of the transmitted picture... In accordance with this invention, this difficulty is avoided by transmitting only the difference between successive images of the object.' Although it would be many years before this technique would actually be used in practice, it is still a cornerstone of many video compression standards today." A Brief History of Video Coding, Marco Jacobs and Jonah Probell, ARC International {marco.jacobs, jonah.probell}@ARC.com

There's lots of discussion about when the first compression appeared. In 1984, the really early 'delivery only' codecs appeared. We'll go into the difference between 'delivery' and 'acquisition' directed codecs in a minute. But in 1994, codecs developed for cameras appeared and were used in the DV format. For our discussion, this was the point when digital recording replaced analog; still using a tape-based format, it was all now digital. The issue was, and still is, that the amount of data generated by the analog-to-digital (A-to-D) converter inside the camera outstripped the ability to write the data to tape. The first Standard Definition (SD) digital cameras spit out a whopping, for that time, 12 megabytes per second. Tape could handle about half that. The first video codecs for real-time processing came into play, knocking the data rate down to around 3.6 MB/s for audio and video combined, at 4:1 compression. The camera that used the DV format and changed documentaries and independent filmmaking worldwide was the Panasonic DVX-100™. (Fig. 1)
Fig. 1 DVX-100 Digital, tape based, SD camera. Image courtesy ©Panasonic™
The acquisition side of compression takes place at the camera. All the data created by the sensor in an analog format is converted to digital. The larger the sensor, the more photosites, and the larger the processing demands. These are beefy processors designed to handle continuous streams of data without a hiccup. Because the data stream is so large, these codecs are not well suited for delivery over the internet or to your smartphone. Delivery codecs are very ingenious bits of software. Whereas the acquisition codecs try not to throw out very much data, keeping the image quality as high as possible, the delivery codecs' job is to throw out as much as possible and keep the image acceptable. There have been quite a number of codecs created along the digital trek. For a while, they were loosely written to standards; hence the creation of standards committees. 1992 saw the creation of JPEG, which stands for Joint Photographic Experts Group. This was, and still is, widely used for still images, developed for the new digital cameras and image delivery over the web. An offshoot of this standard emerged for motion video, Motion JPEG, and now the very high resolution JPEG 2000. The latter is being used as an acquisition format in some high-end cameras.

Tech Info: A photosite is not a pixel. It should be understood that a photosite is an individual photo receptor on the camera imaging chip. Each photosite is designated to a color: red, green, or blue. To create a 'pixel' all three colors must be combined, so it could take 3 photosites to create one pixel.

Fig. 2 Flow chart of how a single frame of video is compressed. Courtesy ARC International ©2007

Figure 2 simplifies the encoding process. What is fundamentally amazing in all this is that every photosite for every frame has to be processed. Think about 3 million pixels on the camera sensor, times 24 or 30 frames per second. But there's more to this equation. Not only are there 3 million pixels, but there are 8 or 10 or 12 individual 'bits' of digital information for each pixel. These bits define the color, luminance, and saturation of the color seen by each pixel. The math looks something like this: (3,000,000 x 8) x 24 = 576,000,000 bits of data per second of video recorded. In today's terms, that's 72 MB/sec. It's staggering by any standard. Literally dozens of codecs came into being. They were stratified into the two categories mentioned before: delivery and acquisition. This is an important concept to grasp when understanding which codec to choose for a given task.
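If you want to sanity-check that arithmetic yourself, here is the same calculation as a short Python sketch, using the 3-million-photosite, 8-bit, 24 fps figures from above:

PHOTOSITES = 3_000_000   # samples per frame, from the example above
BITS_PER_SAMPLE = 8      # bits of tonal information per sample
FPS = 24                 # frames per second

bits_per_second = PHOTOSITES * BITS_PER_SAMPLE * FPS
print(bits_per_second)                  # 576000000 bits per second
print(bits_per_second / 8 / 1_000_000)  # 72.0 MB/s, matching the text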
There are committees for MPEG (Moving Pictures Expert Group) and the 'H-dot' or MPEG-4 codecs that have become so prevalent in today's digital video realm. MPEG was initially designed as a delivery format, with MPEG-1's primary use in video CDs (which failed to catch on in the United States) and MPEG-2 used in DVDs. MPEG-2 found favor with JVC as an acquisition format, and a few editing systems, like the Media 100, used MPEG encoding as the basis for their system. The H-dot series of codecs actually started back in 1984 with H.120, for transmission of teleconference video over phone lines. H.264, which is a very robust multimedia compressor, is equally at home on the web for delivery and in the camera for acquisition. It is also the underpinnings of Blu-ray. However, it is NOT a good choice for editing. It has very high processor demands, bringing most robust editing systems to their knees. To make matters even more interesting, Sony's XDCAM format is a hybrid where the XDCAM 'file container' appears to be H.264 but has the MPEG-2 encoder inside. A tough nut to crack when it comes to decoding.
The AVCHD file container used by many video-capable cameras is really H.264. This format is used by Panasonic and digital still cameras like the Canon 5D, 7D, and Ti series, Nikon and Lumix brands, and many more. The latest version of H.264 is used to capture UHD (4K) raster sizes.

Table 1 New acquisition codecs and their basic specifications.

Source/Purpose               | Codec           | Raster       | Bit-rate
Camera HD & UHD Acquisition  | ProRes 4444 XQ  | SD thru 5k   | 330 Mb/s
Camera HD & UHD Acquisition  | DNxHR           | SD thru 3840 | 166 MB/s
Camera HD & UHD Acquisition  | H.264 XAVC      | 4096x2160    | 80 MB/s +
Camera HD                    | H.264           | 2000         | 170 Mb/s

On the audio side there's MPEG-3 or MP3. This codec was specifically designed to deliver audio content ONLY. It has all the qualities of a good video codec: Variable Bit Rate (VBR) or Constant Bit Rate (CBR), selectable data rates, and bit rates. It has become THE delivery compressor for all kinds of audio pipelines. Whether it be a mobile audio player or a streaming radio station, it's a good bet there's an MP3 codec working in there.

H.264 is replacing MPEG-2 as the codec for satellite delivery (Dish and DirecTV). The latest version, H.265, was just released for testing in 2013, with specifications that look promising on both sides of the video-use spectrum. Table 2 shows some common 'delivery' codecs and their characteristics. There are several other codecs designed specifically for 'delivery only' that are not part of a standards group's oversight. RM or RealMedia™, for example, was designed by RealNetworks, Inc. to support their over-the-web teleconferencing. It uses proprietary processes and servers to deliver multi-site, multiple simultaneous connections. It also has the ability to send out advertising while delivering the video stream. This codec was designed for a specific delivery business model, and has been quite successful. Other unique codecs are DivX, Flash, Shockwave and Silverlight. Flash is Adobe's delivery format, widely used on the web. Silverlight is Microsoft's answer to web delivery and grew out of their Media Player implementation of H.264.

Table 2 Outputs with codec and bit rates.

Source/Service    | Codec            | Resolution   | Bit-rate (Mb/s)
Blu-ray           | H.264 or MPEG2   | 1920x1080i/p | 40
HD DVD            | H.264 or VC-1    | 1920x1080i/p | 28
ATSC HDTV         | MPEG2            | 1920x1080i   | 19.39
Digital Cable     | MPEG2            | 1920x1080i   | ~16
Verizon FiOS-VOD  | MPEG2            | 1920x1080i   | 15
Dish HD           | MPEG2/MPEG4      | 1440x1080    | <10
DirecTV HD        | MPEG2/MPEG4      | 1920x1080    | <10
IPTV              | H.264            | Various      | <10
Xbox Live Video   | VC-1             | 1280x720p    | 6.8
DVD               | MPEG2            | 720x480i     | 8
Apple iTunes      | QuickTime/H.264  | 1280x720p    | 4
Web "HD"          | H.264            | 1280x720p    | 1.5
On-Camera Storage

Today's in-camera computers are quite powerful, providing more processing ability. The recording media attached to the cameras has gained in read-write speed, fully capable of keeping up with the massive data rates pushed out by the new HD video. We're talking on the order of 400 MB/s and higher. Camera-attached hard drives provide more than 450 MB/s data throughput. So the data pipeline is now up to the task of capturing the new HD digital cinema files. Well, almost.
The new camera RAW files are somewhat compressed. The data straight from the camera sensor is still too massive to be captured straight into even the fastest drive arrays. RED cameras' REDCODE RAW, or R3D, files were at first 3:1 compression. RED calls them 'less lossy'. RED now offers the ability for the user to set several levels of compression, thus reducing the file size and extending the record time to the media. Even the highest level of compression produces very nice images to work with. At the core of the R3D file is a Wavelet compression codec, applied to the Bayer sensor data, which minimizes blocky compression artifacts. This is outside the scope of this course, so we won't delve into it for now.

Camera File Sizes

Camera file size is now the issue within digital cinema acquisition. The Sony F-65 camera (Fig. 3) was mentioned earlier. This camera is fully capable of 8K raster sizes and larger, with very little compression. The resulting file sizes are monumental.

Fig. 3 F-65 camera. Image courtesy Sony™

On a recent feature film shot with a RED camera, the DIT did some calculations based on an average shoot day's file creation. Then he looked at the data rates advertised by Sony for the F-65 and came up with a number that would cause a producer's budget to wince: 11 terabytes (TB)! of camera files could easily be generated daily from this camera. The sheer volume of data is prompting the question from the studios and budget controllers, 'is the image really worth this amount of hard drive storage?' This huge amount of data moves through the pipeline to the post-editing house as well. That would be 11 TB of hard drives filled each and every day of the production, per camera, before you figure in three backups. The next question: is there time to do those backups and not get behind? We'll talk about the physical structure of the DIT data pipeline later, but moving this amount of data is no simple task, no matter how much money you can throw at the system.
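To put some rough numbers on that backup question, here is a back-of-the-envelope sketch in Python. It assumes the figures quoted above (11 TB of originals per day, roughly 450 MB/s drive throughput) and strictly sequential copies; real-world RAIDs, parallel copies, and checksum verification will change the totals.

TB_IN_MB = 1_000_000     # MB per TB (decimal, as drive vendors rate capacity)
daily_mb = 11 * TB_IN_MB # one day of camera originals, per the example
throughput = 450         # MB/s to a single destination drive
backups = 3              # the three backups mentioned above

hours_per_copy = daily_mb / throughput / 3600
print(round(hours_per_copy, 1))           # about 6.8 hours per copy
print(round(hours_per_copy * backups, 1)) # about 20.4 hours if run back-to-back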
Compression Costs, Decompression Is Free

Codecs will continue to evolve as the entire imaging system and workflow advances. It's just another subject for the DIT to be well versed in. One last thought about codecs is the business side. For a number of years, codecs (both the compressor and the decompressor) were free. Now, you license the compressor. When you buy editing software, like Final Cut Pro or Avid, the cost of the software includes the associated codec licensing fees. Even camera companies are charging for their file format software. Sony is charging several thousand dollars for their RAW codec software if you want to upgrade the camera. On the upside, the decompressor for your computer to play back these files is still free.

Containers and The Bits Inside

Required viewing: Codecs & Containers: http://www.youtube.com/watch?v=WpBjGUlBTHU

The first point we need to get across is an issue, all over the web, of misstating the difference between a file format and what's inside. Many uninformed bloggers and posters to the web call H.264 or MPEG-4 a 'format'. It isn't. The extension of the file is the format, e.g. .mov, .avi. The confusion seems to focus on the compressor that's inside the container or format. In case you hadn't guessed already, there are lots of specific terms to understand, and using them improperly causes communication confusion.

Codecs: How They Work

The stressors on our computers are the codecs and all the processing required to work with big bit streams. We need to fully understand what's going on here. The compression side is a wonderful amalgamation of complex math formulas and really smart code. The math looks at each individual video frame, then figures out how much visual information to throw away while still maintaining a good-looking image. This process is 'lossy': any data processing where digital information is removed to create a smaller file is lossy. There are two considerations when doing this: video quality after compression has been applied (this can be objective and subjective), and performance characteristics. We'll get to both of those in a moment. There are two basic operational principles that codecs are built on: temporal quantization and spatial quantization.
Spatial compression assumes each frame or image is unique and processes it individually. This is commonly referred to as 'intra-frame' compression. JPEG is a good example: originally designed for still images, it can be used for motion-based imaging, but the overall file size will be larger and the demands on the computer higher. A good example of this is the RAW files generated by the Arri Alexa or Blackmagic cameras shooting in RAW: each frame of the video clip is a separate file. The emerging JPEG-based motion file format is Motion JPEG 2000. Each frame in this format is a full frame of image information.

Temporal compression focuses on storing the 'changes' between frames, not every entire frame. This is also referred to as 'inter-frame' compression. It does compress a single, entire frame, called a keyframe or I-frame, then uses that frame as a reference to calculate the changes for the next few frames, until the next key or I-frame. MPEG-2 is an example of temporal compression, and what is actually happening is a bit more complicated than just described; for now, this understanding is adequate. A good article that makes this all easy to understand in the real world: Intra-frame vs Inter-frame Compression.

Fig. 4 The top 5 images represent spatial compression. The lower 5 images represent what's happening in temporal compression.
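To make the keyframe-plus-differences idea concrete, here is a toy Python sketch. It is not how MPEG-2 actually works (real codecs add motion estimation, quantization, and entropy coding); it only shows the basic I-frame/difference structure. The 12-frame keyframe interval is an arbitrary choice for the example.

import numpy as np

KEYFRAME_INTERVAL = 12  # arbitrary: one full I-frame every 12 frames

def encode(frames):
    """frames: list of int16 arrays. Returns ('I', frame) or ('P', delta) pairs."""
    encoded, reference = [], None
    for i, frame in enumerate(frames):
        if i % KEYFRAME_INTERVAL == 0:
            encoded.append(('I', frame.copy()))       # store the whole frame
            reference = frame
        else:
            encoded.append(('P', frame - reference))  # store only the change
    return encoded

def decode(encoded):
    """Rebuild the original frames from keyframes and deltas."""
    frames, reference = [], None
    for kind, data in encoded:
        if kind == 'I':
            reference = data
            frames.append(data)
        else:
            frames.append(reference + data)
    return frames

On a mostly static shot the deltas are almost entirely zeros and compress extremely well, which is exactly why the boat-at-dock example below is easy on a codec and the motocross pan is hard.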
Containers For Codecs

Codecs are held inside containers (Fig. 5). Simply put, the extension on a video or still image file signifies the container: for example, .mov, .avi, .tiff, etc. Some containers can hold different codecs. Take the .mov file, or container, for example. Immediately we know it's a QuickTime file by the extension. The .mov file can have Motion JPEG or H.264 or ProRes, or a host of other codecs, applied to the actual video file data.

Fig. 5 Image courtesy of PiTiVi.org

It is quite possible to hand someone a .mov file and not have it play on their computer. Why? The computer might have the ability to open the .mov file but not have the required codec to decode the data inside. It will most likely throw an error about not having the proper codec installed. The container, in most cases, has nothing to do with the compression inside. That being said, there are containers that are proprietary, where the container and the codec are linked. An example of this is the RM or Real Media™ container, which has a proprietary codec inside. For the data wrangler on media productions, there will be a mix of these codecs, and the range of codecs will be vast.

VBR and CBR

Variable bit rate (VBR) is a fairly new ability in the codec realm. This has to do with the codec, not the container. It allows the compressor to look ahead at the upcoming frames, figuring out what data it can eliminate based on what is happening in the scene. It does this dynamically.

Fig. 6 Static boat at dock (lt). Action shot of motocross racing (rt).

An example would be a static shot
of a boat at dock (refer to Figure 6). Not much is going on; any changes happen slowly over time. On the other hand, in the motorcycle racing shot, the camera is panning and the objects in the frame are moving from pixel to pixel continuously. This is very hard on compressors. The VBR mode will lower the compression, raising the data rate, helping reduce compression artifacts.

Fig. 7 Example of macroblocking. Image courtesy Rosa Menkman ©2010

If you've seen a concert in HD on your TV and the strobes and lights on stage start firing, the screen will break up into 'macroblocks', little squares making the image unwatchable for the most part (Fig. 7). This is the result of too much data coming in too fast for the codec to handle. Remember the temporal compression where only the differences are processed? Macroblocking is a total overrun of the compressor, causing failure. VBR encoding does its best, looking ahead to those demanding action shots and applying less compression, but more compression to the sunset, where things are not moving. The end result is a frame or frames that have very high data rates for the high-action scenes and low data rates for the static scenes. This reduces the overall file size. The downside to VBR: to make it really effective, you must do two processing passes on the same file. The first pass gathers data about each frame or series of frames, noting the ones that need less or more compression. The second pass uses that data file to implement the compression. It takes twice as long to process the file, but the end result will typically be better looking, and a smaller file. However, the codec applied has a large part to play in the overall image quality. 'Compressionists' test, and test, and test each setting and codec before being required to produce a finished product. Unfortunately, there isn't one generic setting for all programming content.

CBR is constant bit rate, where the data rate or compression is the same no matter what is going on in the frame. Both VBR and CBR have their uses. For example, the RED camera's REDCODE RAW codec is CBR. RED made the decision to offer the highest quality possible, at all times, in spite of the resulting file sizes.

Fig. 8 Full resolution JPEG camera image (lt). Highly compressed JPEG image (rt).

Here's an example of an image with different compression rates applied. The left image (Fig. 8) is a camera JPEG file which weighed in at 3.7 MB in size. The right image is a highly compressed JPEG that is now just over 30 kB in size, or 100 times smaller, at the expense of resolution. The compressor artifacts quickly become evident, although not so much in this printed format. The smaller, compressed image is an example of what would go onto the web for faster page loading through slower connections. The image on the right would load very quickly compared to the image on the left and would be quite acceptable. But take the highly compressed file on the right and have it printed out to photo paper, and the results would be less than satisfactory.

Breaking Open The Containers For Playback

Ultimately, we need to get into the files and decode the information for viewing. Some are hard to crack. Take the XDCAM files for example; they are MPEG-2 inside. Until recently, you had to use Sony's XDCAM Transfer program to open the container and assemble the various clips. If you shot 20 clips on one camera magazine, they would appear as one file inside the XDCAM file structure. The Sony software has the ability to break the file apart into the individual takes, then export them re-wrapped as a .mov file for editing software.
The issue extends further, to actually losing metadata like timecode. If you export the camera clips using Sony's utility, the timecode is intact. When you transcode the files to another codec, say ProRes or DNxHD, using some third-party transcoding software, the timecode is not transferred and all files start at 00:00:00. Not helpful if you have production notes that are linked to camera timecode. However, Sony now has a program that is both more user friendly and transcodes into other codecs with the timecode intact. Sony XDCAM Browser is the program to use if you need to flip the files and don't have another piece of software capable of working with them. On the upside, many non-linear editors have plugins to read XDCAM, and a lot more codecs, natively. Two of the file-flipping, or transcoding, applications that can open the camera files and re-encode them with all metadata intact are ProxyMill™ and Episode™. More may follow.

Inside the File Structure

To dig into a container and extract a file for viewing requires two stages: opening the container and decoding the codec. Take the very popular AVCHD™ used in digital still cameras (DSLRs) like the Canon 5D. The video stream is .MTS, or MPEG Transport Stream. Inside the .MTS container is the H.264 codec. The file structure on the memory card is displayed in Figure 9. All of these folders, sub-folders, and files contain information supporting the overall file structure. Don't do as some have, and navigate into the STREAM folder and export or move the .MTS files thinking you have the entire video clip(s). The .CPI files and others are needed to re-construct the full resolution of the original image. The entire folder structure is needed for non-linear editors and other decoding software to make sense of the files inside.

Fig. 9 Typical file structure of the data on a Canon 5D memory card.

These files are highly compressed, 8-bit color depth (4:2:0), and data rates can vary. This will all make more sense in the next segment on color spaces and bit depths. Each time the camera starts and stops, a separate .mts clip and the related metadata file(s) are created. All of these files are 'wrapped' inside the overall folder structure.

RAW Isn't Always Un-compressed

Mentioned earlier was the fact that the RED camera's REDCODE camera file is compressed at a ratio of 3:1. It uses a CBR Wavelet compressor. Compared to other compression schemes like AVCHD, with compression well above 30:1, RED's 3:1 is minor. But compression still needs to be done so as not to overrun the recording media. What's recorded to the camera mag is not just the image information, but lots of metadata, audio and other files, all at the same time. Looking at the RED R3D file structure on the mag, we see the following files: (Fig. 10)

Fig. 10 File structure of RED camera files from camera memory card.

When the RED camera stops recording, that shot or clip is stored within its own folder, labeled with a name structure ending in .RDC. Inside the folder are the .R3D video clips. There might be several if the take is long enough. The FAT file structure of the mag requires that individual files not exceed 4 gigs in size. If they do, the RED operating system breaks the files into 4-gig-or-less chunks, which will be re-assembled automatically when brought into an editing system. Also notice the .mov files. These aren't video files, but proxies of the recorded clip. A proxy file is a small file that points to the original footage but doesn't contain any footage. It is a very simple bridge between QuickTime Player and the RAW data file. If you have RED's codec installed on your computer with QuickTime, you can double-click on the .mov proxy file and it will play back
for a quick reference in various resolutions, as indicated by the last letter before the .mov extension: 'F' is for full resolution, 'H' is for half, and so on. The md5sums file was created by the copying software, ShotPut™, when these files were transferred to the backup drive. It's a text file with information about the transfer, and we'll look into that file later.

Wait, There's More Data Writing Going On

Now, these video files are being written to the camera memory, plus four channels of 24-bit, 48 kHz audio and lots of camera data. All the information about the exposure, compression, LUT, etc. is encoded along with the image. That's what we call 'overhead', and it's even more data streaming to the recording media. So compression of the image is necessary to leave bandwidth for the other 'stuff' that needs to be captured. RED now offers user-selectable compression from 3:1 to 18:1. The recording media write speed required is 480 MB/s, and the recording media can't drop a frame of video, ever. That's really robust media.

Match Data Rate With Pipeline

The use of compression is universal now for all our media recording and delivery pipelines; however, there's a need to match the data rate with the pipeline. One of the most valuable questions the person doing the compression can ask is 'Where is this going?' In other words, what will be the end media? DVD? Vimeo™? iPad or mobile device? Take, for example, web delivery on Vimeo™. It would be really nice to deliver spectacular video at 7 MB/s, the rate we have for DVDs, but that would totally overrun the delivery ability of the average user's home internet connection. The download or buffering times would make it un-watchable. So we compress to a level where the image is pretty good and the data rate is within the bandwidth of the internet connection, without a lot of waiting for the video to play. That's about 1.7 Mb/sec. Still, even with all that compression, the HD video files look darn good. Some delivery codecs offer more than just a good image through a narrow pipeline. Adobe Flash and Real Media have the ability to scale their delivery data rate by sampling the connection speed. If the connection is truly broadband, then the compression will drop and the data rate will go up, delivering a nicer image. But they also have security features. Many companies sell their video products over the web in a subscription service. If someone could download those videos, like you can do
easily from YouTube™ or Vimeo™, then repost or share them, it undermines the business model of the subscription service. Flash has the ability to prevent, in most cases, downloading of the media.

Metadata Information

Metadata is the big feature of newer codecs. The basic codec functions we've talked about so far compress video and carry audio, and it's a one-way relationship. With each step forward in the evolution of codecs, more and more pathways have been opened up to capture information within the codec file itself. Let's look at the metadata captured with each snap of the shutter on a consumer digital still camera. The image of the lady (Figure 11) is a JPEG file right off the camera card. The date, time, camera type, software version number and resolution are shown in Figure 12. Figure 13 shows even more metadata about this specific image. All this information is important to pass along through the image processing pipeline. We'll look at specific metadata with the programs used for the tutorials.

Fig. 11 Image ©Robert Trim 2013

Fig. 12 Metadata about the individual frame.

Fig. 13 In-depth metadata about the individual exposure.

What Drives the Decision to Use a Particular Codec?

As we have discussed, there are different codecs for different purposes. Matching those to the desired task is not really that hard once you understand what any particular codec is designed for. For web use, we would want low bandwidth and good compression characteristics with as few artifacts as possible. Another side of the web is data retrieval. The newer codecs have the ability to 'snoop' data from the connection and the box connected to the end of the cable, also referred to as 'data mining'.

For delivery media like DVD or Blu-ray™, our options are limited due to the standards for that media. However, it's always best to deliver the highest quality file possible to the compression front-end for re-compression. To give someone an H.264 file instead of a full ProRes render would result in an awful-looking DVD. The playback device plays heavily into the decision as well. Handheld and tablet devices are really powerful now, but not powerful enough to handle large data rates. Likewise, there are only a few compressors available for all these devices. Take Flash for example: it plays fine on Android devices, but won't play back on iOS devices. H.264, on the other hand, will play back on both. If the files are headed for editorial use, then we want them to retain as much quality as possible, but not the full RAW format. Most systems now will play back RAW files, but it's really slow; the systems just can't keep up, so they drop frames and have jerky playback. The files exported for editing systems need to closely match the codec that system finds 'friendly'.

Codecs Affect The Color

Codecs vary in how they handle color sampling and bit depth. If we were to take the same image and process it with two different codecs claiming the same color space, the two results may appear visibly different from each other. The 4:4:4 notation describes how color is sampled: the first digit is the luminance sample rate, and the second two digits are the sample rates of the two color channels. Codecs utilizing 4:4:4 sampling will maintain the best image possible. But the sampling for an image from a Canon 5D is 4:2:0. When you lose the samples that establish the quality of color(s), the image will suffer. Still, when all is said and done, images off the Canon 5D (using the H.264 codec), which are highly compressed, look really good---until you take them into color grading. Then the image tends to fall apart quickly. Read a really straightforward explanation of bit depth as it relates to the file codec here: http://blogs.adobe.com/VideoRoad/2010/06/color_subsampling_or_what_is_4.html
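Here is a minimal Python sketch of what 4:2:0 subsampling, the scheme attributed above to Canon 5D/AVCHD footage, actually does to the data. Luma keeps full resolution; each chroma plane is reduced so one sample covers a 2x2 block of pixels. (The simple block averaging is an assumption for illustration; real encoders use various filters.)

import numpy as np

def subsample_420(y, cb, cr):
    """y, cb, cr: 2D float arrays, same even-numbered height and width."""
    h, w = cb.shape
    # Average each 2x2 block of the two chroma planes.
    cb_sub = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr_sub = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # Only a quarter of the original chroma samples survive, which is
    # why heavy grading pulls 4:2:0 footage apart at color edges.
    return y, cb_sub, cr_sub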
ACES Encoding System

Is there a magic path or codec for each of these delivery modalities? Some will say no. But there is, and it's available now. The ACES encoding system is a forward-looking standard whose goal is to preserve the original image in such a way that any future technology can use it without degradation. We will dedicate a section just to the ACES workflow later in the book.
TECH Discussion: Will everything, someday, be un-compressed?

This is a hot topic of discussion. One side of the debate strives for little or no compression, going to great lengths with software hacks of DSLRs that allow essentially uncompressed data streams, and moving to outboard recording devices. On the other side of the 'data pile' are those who feel that some compression (at times a lot of compression) is needed to keep from getting buried in hard drive storage, or to ease workflow issues. Will the two meet in the middle with better codecs? Here's a brief but good read, 'Why is Uncompressed Better?': http://www.redsharknews.com/production/item/972-in-future,-there-may-only-be-uncompressedvideo
2.2 How We See Color
"Contrary to popular belief, 'colour' is not really an intrinsic property of the things we see around us. Rather, it is the sensation resulting from a given spectral distribution of light, detected by the three colour-sensors in the eye and interpreted by the brain." W.A. Steer PhD, 'Introduction to Colour Science'.

Required Reading: Seeing Impossible Colors: http://io9.com/train-yourself-to-see-impossible-colors-1082507819

This article delves into an interesting phenomenon with colors and how we see them. It also contains a good explanation of how the color receptors work inside the eye.

Required viewing: "Don't Trust Your Eyes Unless You're Surrounded by Gray": http://www.redsharknews.com/post/item/1301-don-t-trust-your-eyes-unless-you-re-surrounded-by-grey-the-simultaneous-colour-effect

To simply say that our eye is complex would be an understatement. The basic fact that light energy (photons) is chemically converted into neuro-signals, which are then interpreted by the brain to create an image, is nothing short of amazing. But this awe-inspiring system is flawed.

On the X-Rite web site, they quote the numbers from color blindness/impairment research indicating "1 in 12 males and 1 in 255 females have some degree of color blindness or impairment". That's just a start to the issues we have seeing what is in front of us. Take, for example, what you actually see. Field of view, if you were to equate the eye to a camera lens, is unique. We can 'focus our attention' on a small area within our eyes' field of view, similar to zooming in the lens of a camera, but we can't do what an optical zoom lens does: show only that area of focus. We still see, or perceive, the surrounding area. This is peripheral vision, and this 'outside the area of focus' causes issues when perceiving reality. We will discuss this more in the next part of this chapter. The receptor part of the eye is called the retina, which is located on the interior part of the eyeball (Fig. 1).

Fig. 1 Eyeball cross section.
The image cast by the lens of the eye strikes the retina, an area made up of rods and cones. Cones receive the color information. There are fewer of them than rods, and they're clustered in the middle of the retina. The downsides of this configuration are a diminished ability to see color at night and to see the color of fast-moving objects. The rods, which are the black-to-white (luminance) receptors, take over in those situations.

How Much Can We See?

The visible light spectrum humans see is quite narrow compared to other living creatures, but it works fine for us. Basically, we see mixtures of red, green, and blue light. Even with these being distinct colors, we struggle with some areas within this mix. Take black and a dark navy blue: it can be hard to discern the difference unless the color saturation (brightness of the color) is fairly strong.

TECH NOTE: In Steve Hullfish's must-read book "The Art and Technique of Digital Color Correction", he talks about the 'tiring of the eyes' while doing color correction. "It's very important to 'reset' your eyes at least every hour. After a long time of staring at a computer or video screen, your eyes become tired and need a fresh perspective on the colors they are seeing on the monitor." He stresses the need to get up, leave the room, go outside or out of the environment you're in, to 'reset' the eyes to the colors in the real world.

To make things worse, we have a brain. And it's not a pure 'digital imaging device' like the ones found inside our cameras, which take what is projected on them and turn that into an image. This is highly simplified of course, but bear with me for a moment. Our eyes also take the image information and send it to the brain. So far they are similar. It's the brain that causes us problems.

Fig. 2 Color test where the eye is fooled by the color surrounding the center squares. All center gray squares are the same.

Fundamentally, the eye does not see the image; it receives the light and sends that information to the brain, where the brain assembles it into an image. The brain has the ability to alter what we see into what it/we think is right, or should be right. That's a big difference from the camera's internal functions. The graphics in Figure 2 are good examples of how the brain can be tricked: inside the color swatches, the center gray blocks are identical. Besides the graphics in Figure 2, another good example is what I call 'the reality of color'. If you walk into a room that has light fixtures with bulbs skewed to the blue range (daylight), the room will look quite blue when you enter. Spend a bit of time in the room and your brain will 'color correct' the light. It knows what is white, or should be white. After a few minutes the brain will alter what you see (the camera equivalent would be auto white balance) and remove the blue cast from the light. The blue cast is still there; your brain has manipulated the mental color balance to create what it thinks should be normal. This is a big issue when it comes to color correcting images, and we'll get into that later.
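As a side note, the camera-side version of that brain trick can be sketched in a few lines. This is the classic 'gray world' auto white balance assumption (not any specific camera's algorithm): assume the scene should average out to neutral gray, and scale each channel until it does.

import numpy as np

def gray_world_awb(img):
    """img: float RGB array of shape (H, W, 3), values 0..1."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    # Scale each channel so all three means match the overall mean.
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0.0, 1.0)

Like the brain's version, it fails when the scene legitimately isn't gray on average, which is one reason professional work relies on charts and scopes rather than automatic balancing.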
Resolution Ability

Resolution is variable as well. The lower the light level, the lower your eyes' resolution. Add to that problems that might be caused by the lens of your eye (near- or far-sightedness, astigmatism, etc.), and you perceive a different version of what is right in front of you. Eyewitness accounts of the same event are always suspect unless there are several of them. All the accounts can then be averaged into some recreation of what really happened. This 'averaging' levels out differences in visual acuity, color perception, and other mental 'tweaks' that might alter the reality of the witnessed event.
Color Space Defined--Somewhat

So now you have both the good and bad news about how we see. What we view it on, the media: a monitor, color ink on a page, etc., has all been mathematically defined into a color space. It's known as the CIE 1931 color space. Years of testing with a broad swath of human subjects in very controlled environments came up with a defined visible spectrum of colors and luminance ranges, commonly referred to as the CIE RGB and CIE XYZ color spaces. This range of luminance and saturation creates the spectrum chart in Figure 3. But this is not the range of colors we can then reproduce on a monitor. That is far more limited. The inside triangle drawn on the chart displays what would be shown on a really good monitor.

Fig. 3 CIE 1931 standardized color reference.

Figure 4 displays the third dimension of the color space. It's not necessary to fully understand the ramifications of this, other than to know it's there and a factor in the gamut ability of a monitor, which we will delve into in the quick definitions.

Fig. 4 CIE color space shown in 3D. Courtesy Sysmagazine geek daily blog

Although this is RGB, not all RGB implementations are the same. Adobe released a graphic (Fig. 5) to show the difference in the range of colors represented by RGB variations and what can be expected in the print, or reflected light, world of color.

Fig. 5 RGB and CMYK color space representations. Courtesy Adobe.

You have most likely seen sRGB tossed around in spec sheets for printers and monitors. This variant of the RGB color space is quite specific to computer monitors, printers, and consumer-grade cameras.
To get this wrapped up without making your brain hurt, we have the color, the saturation of the color, and the luminance of the color to take into consideration. All markedly affect what we perceive as proper color. There is a good chance that the monitor or printed page you're viewing this on will alter these reference charts.

The Computer Monitor in The Mix

Your computer monitor plays one of the more significant roles in displaying what an image really looks like. There are three variables for those as well: resolution, gamma, and gamut. Color monitors are relatively easy to create these days, but like a fine test instrument, you have to pay more for better quality. If you're going to be involved in color correction, you need to buy specific, expensive monitors to judge colors, in spite of what you now know are flaws in your vision 'tool'. We're going to spend a chunk of time on this in the Color Spaces section and the DIT Kit Hardware section.

Do You Have Golden Eyes?

Now for a fun test of your color acuity. The article below tells you how to test whether you have 'golden eyes' for color correction. Be warned: it's hard, and you must be sitting at a good monitor, an Apple Cinema or Retina display or something of that caliber. If you stare at the screen too long making decisions without looking away, your eyes will be fooled. The task is to put the color shades into their proper order; the lower the number you score, the better your eyes. http://www.provideocoalition.com/another-test-for-golden-eyes (Note: the structure of links in this blog-type web site often ends up with a 404 error. To get around this, use the site's Search and enter "Another test for Golden Eyes". It should take you to all the articles Steve Hullfish has written; scroll through the list and find the article. I think it's good to read his experience first before going to the linked site below.) http://www.xrite.com/online-color-test-challenge

Fig. 6 23-step gray scale chart. Courtesy Dpreview.com

Figure 6 is a stepped gray scale chart. If you can see all the steps on the screen you are viewing this book on, then it has a fairly good luminance rendering ability. Older 8-bit screens will be limited to the range between D and V. But even new monitors that are not properly set up will not show the full range until calibrated.

Wrap-up

In a nutshell, how we actually see color is complicated. The key to being successful when it comes to making technical and artistic decisions about what we are seeing is to understand both the strengths and limitations of all the elements it takes to 'see' the image. This means understanding the limitations of your own eyesight, the limitations of the file you are trying to view, and the quality of the monitor you are using to view the image. Professional still photographers do exhaustive testing on every piece of equipment in their 'image chain'. This helps them previsualize the outcome of the final image when the shutter is pressed. The professional DoP does the same, and so should the DIT and the colorist.
2.3 Color Spaces & Management
Required viewing: Introduction to Color in Digital Filmmaking: http://filmmakeriq.com/lessons/introduction-to-color-in-digital-filmmaking/ and Color Management 101: http://youtu.be/Hl90Ve2dZY4 and Basics of color & color management: https://www.youtube.com/watch?v=fq-kNtwifFk

A very active forum specific to colorists: Liftgammagain.com http://www.liftgammagain.com/forum/index.php

"Once film was only a capture and display media, and all postproduction meant digital manipulation, the industry has been in a constant battle to make sure that the director's vision was correctly translated to the final screen. The aim was simple: to keep the data and processing time under control, avoid the images drifting in color or degrading in quality and simultaneously push the technology and state of the art of filmmaking to make more and more arresting imagery that would emotionally connect with an audience. From accurate skin tones on the side of the face of your leading lady, to the black levels of a CG robot, to the interacting of effects and non-effects shots, the challenge required some of the best minds of the industry to tackle color management." The Art of Digital Color by Mike Seymour, noted VFX artist, 2nd Unit Director and Producer.

There's a strong and vital move afoot to force a standard color space on files that are flipped after they come off the camera mag. If all goes well, this standard will be applied within the camera recording process, but politically, one shouldn't count on it. Camera manufacturers spend great sums of money creating the recording formats and codecs that support their hardware and workflows. In the near term, this holy grail of asset management will be relegated to post-camera file processing. The proposed 'standardized' color space, called ACES (Academy Color Encoding System), is meant to help keep the integrity of the image information through the whole workflow pipeline. This new standard has taken several years to create and has yet to gain total agreement within the industry. It does, however, answer many of the questions and issues asset managers have shown concern about. Briefly:

ACES Benefits

ACES provides benefits throughout the motion picture production chain:
For Cinematographers:

•Eliminates uncertainty between on-set look management and downstream color correction
•Preserves the full range of highlights, shadows, and colors captured on-set for use throughout the post-production and mastering processes
•Preserves the ability to "light by eye" rather than relying on video monitors
•Enables future expansion of the creative palette by removing the limitations of today's legacy workflows

For Visual Effects and Post-production Facilities:

•Enables reliable, unambiguous interchange of images from all film and digital sources
•Enables flexible workflow design
•Preserves the use of "custom looks"
•Simplifies visual matching of different cameras
•Provides colorists with a better starting point and is designed to deliver the full dynamic range of progressive generations of digital motion picture cameras

For Content Owners:

•Offers simpler remastering for non-theatrical distribution platforms
•Uses a standardized archival file format
•Preserves the highest quality images for use with next-generation display technologies

Academy of Motion Picture Arts and Sciences, ACES

An article on FXGuide.com, with its incorporated audio interview, adds a more in-depth, albeit technical, look at ACES. If it seems a bit over your head, hang in there. Read the article first, then listen carefully to the audio interview. This color control pipeline is in place within some softwares already.

Color Spaces

Required reading: "What is Color, Color Bit Depth and a Color Model?" http://wolfcrow.com/blog/what-is-color-color-bit-depth-and-a-color-model/

RGB & CMYK, Additive vs. Subtractive

What we 'see', or what is ultimately rendered to the screen, or film stock, or projected in the theater, is highly dependent on color space. In addition, color space can be affected by the codec, so it's all inter-related.

Basic color spaces come in two types: additive (RGB) and subtractive (CMYK). To remember these: RGB is transmission based, or in other words, projected to the eye. Subtractive is reflection based, like ink, paint, or dye in fabrics. If we add Red and Green and Blue (RGB) light together, we get white light, or the combination of all colors. (Fig. 1)

Fig. 1 R-G-B Light Mixed. Image courtesy Ismail Arı, ©6 Nisan 2008

The printing industry uses the subtractive color scheme, where the more colors you add, the closer you get to black, or the absence of all colors. Inks, dyes, and paints absorb colors and reflect only the color you see. If we add equal amounts of C (cyan), M (magenta), Y (yellow) and K (black), the mix will absorb all colors and reflect none; thus, it appears to be black.
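The additive/subtractive distinction is easy to verify numerically. Here is a toy Python sketch with idealized light and inks (real pigments are far messier than this):

import numpy as np

# Additive: full red, green and blue light sum to white.
red, green, blue = np.eye(3)
print(red + green + blue)                    # [1. 1. 1.] -> white

# Subtractive: each ideal ink absorbs one primary from white paper.
cyan, magenta, yellow = 1 - np.eye(3)        # cyan absorbs red, etc.
print(np.ones(3) * cyan * magenta * yellow)  # [0. 0. 0.] -> black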
Within the RGB color space, we're fraught with issues that we must be aware of when working with digital media files. Not all RGB color space files are created equal, nor do they respect that pure color space.

Transcoding softwares do not always pass color and luminance (luma) straight through. An upcoming assignment will walk you through a simple exercise that shows the problem graphically. The core issue in keeping the color consistent while working with media files lies in the codecs themselves. In a perfect world we could have unlimited bits of data applied to each of the three colors, but that's not possible. With 8 bits per channel we get 256 tones per channel per pixel. (When only 8 bits total describe a pixel, those bits are split across the three channels, e.g. three bits for red, three for green, which carries most of the luminance information, and two for blue.) Very limiting for what we do.

Table 1 Chart courtesy ©Andrew Gibson Photo.tutsplus.com

Table 1 breaks down the possible tonal values available based on the bit depth applied. Scientists agree that the human eye can see about 10 million colors and shades. An 8-bit JPEG file, considered fairly low quality by today's standards, will produce more shades than the eye can discern. But there is more to this than just the number of colors; there are the transitions between those shades. The color images in Figures 2 and 3 exemplify this issue. Although the same range of color is evident, so is the banding caused by the reduction in bits. The subtleties in transition between the shades of the color are lost, and banding is the result. This is where the gamut of a computer monitor becomes valuable. If you have an inexpensive monitor, even the 24-bit image will have some banding.

Fig. 2 8 bit image file.

Fig. 3 24 bit image file.

It Starts At The Camera

Color space management starts at the camera. It needs to be set up (forced if possible) into a color space setting that will NOT damage or limit the range of colors captured. Typically this is a RAW image file: R3D for RED cameras, or a form of Log-C color space for Arri, Blackmagic, and other brands of cameras. The images from these files appear flat and desaturated. They cause no end of on-set comments from uneducated producers looking at the camera monitors: "That image looks awful… hope we aren't wasting our time shooting digital". (Fig. 4)

Fig. 4 Arri Alexa images. LOG color space on the left compared with simple one-light color correction on the right. ©Arri

Fortunately, most digital cinema cameras allow us to monitor the output either in the RAW image profile or
2.3 Color Spaces & Management one with a standard color spaces, like Rec. 709. That way, the images look close to normal/acceptable to the eye of the uninformed. All bets are off when monitoring from lesser cameras like DSLRs due to the limited video output circuitry and high compression rates applied to the image stream. They will show you what they are recording. There are several color spaces available now, that offer end-to-end workflows: Rec. 709, Rec2020, Log-C, RGB, P3 and ACESproxie. The P3 is favored in some instances, due to the fact that it’s the standard for digital cinema projection. These are often applied when the files are transcoded. But understand that like codecs, some are more useful when editing and others are relegated to delivery. The P3 colorspace is almost universally applied around the time of color correction, when the finished program is being prepped for delivery to theaters with digital cinema projectors. Going back to the camera, the movement of the data stream that creates the image, to the file is often called Input Device Transform (IDT). Actually any part of the pathway that outputs from one source into another can be called IDT if the end result is a new file or some piece of equipment like a monitor. In this case the ‘input’ is referring to the point between the camera output to the monitor and the camera monitor. Or between the camera output and the on-set monitors. The use if color space is important to understand within applications such as this. If you don’t and there is a problem with the image on the monitor, you might not know where to start your trouble shooting process. Now all that being said, we typically make the correction on a monitor using a LUT. We will go more into this and Look Up Tables (LUTs) in the next section. On a basic level, this data table corrects the color and luminance from the camera to a point where it looks ‘normal’ on the monitor. Normal is a loosely defined word in this case, but ultimately it shifts the image quality to something that is closer to the reality the crew needs, to make decisions on exposure and lighting. In the scope of camera offerings in the industry, there are precious few that can actually utilize a LUT file. For the DSLR, EX-3, FS-100 and any many other cameras that use a highly compressed file format like AVCHD, it must be applied within the camera menu software to force a preset of color, saturation, detail, etc. You will be ’tweaking’ these settings, depending on the camera, to get a base recorded image, that will respond well in post
Fig. 5 CIE standardized color reference with Rec. 709 color space highlighted.

processing. We will delve deeper into this in the section on Camera Image Profiles.

Basics of Color Spaces

In the last section, on How We See Color, we delved into color spaces. Figure 5 shows the total range of colors visible to the eye (the horseshoe shape) per the CIE standardized reference. The triangle within that color range is the Rec. 709 standard. It does not include all the colors or luminances in the visible spectrum. Rec. 709 is the set standard to represent as many of those colors and luminances as possible within the abilities of a CRT- or LED-based monitor. It is the current standard for broadcast HD TV.

"The CIE 1931 color spaces are the first defined quantitative links between physical pure colors (i.e. wavelengths) in the electromagnetic visible spectrum, and physiological perceived colors in human color vision. The mathematical relationships that define these color spaces are essential tools for color management." Web Reference

It gets more complicated when we start mixing 'physical colors' with 'electromagnetic' with 'physiologically perceived colors'. Two are finite and the last is firmly mired in human strengths and deficiencies. Add to that the inability of our window to the world, the computer monitor, to render these perfectly, and we're on our way to a real mess. How can we even begin to set up an environment that allows us to know we're seeing and adjusting colors and luminances properly? Let's start with what can be done at the camera. Please understand that
'at the camera' means you are dealing with what the manufacturers have given you inside the camera. Understanding these terms, and how they relate to the image, proper exposure and rendition of color, is a jumping-off point.

Exposure Building Blocks

One of the key factors within a color space is how it handles the two building blocks of the image: the luminance and chroma information. You need to understand some terms first and graphically see what they look like.

Log exposure profile. Log is short for logarithmic. This is the exposure modality of film. Because of the wide latitude of film and the careful formulation of the light-capturing halides on the film, the ability to render detail in the blacks as well as the near over-exposed whites is what digital sensors attempt to emulate. (Fig. 6)

Fig. 6 Log exposure graph. Courtesy http://www.olympusmicro.com

The images captured within a color space with this exposure profile will appear quite flat, as seen on the left side of Figure 4. No real solid blacks nor bright whites. The colors will seem under-stated. You might even wonder if something is terribly wrong with your camera, looking at these images without knowing what you're seeing. A log graph looks like Figure 6. The basis of the logarithmic math formula is that each increment is the 'log function' of the last point on the graph. Simply put, the next point when heading from white (100% exposure) down to black (0%) is half the amount of the previous point. The next is half of that, and so forth. It results in a nice 'easing' into black or white. But it does 'bunch' most of the visual information in the middle, which is perfect for color correction later on. Less information is lost or clipped. Log-C, or pure Log color space, is the 'RAW' file format for the BlackMagic, Arri and several other high-end cameras.

Exponential exposure profile mapping tries to map the luminance and chrominance in such a way as to deliver a robust image. Whites are at 100% and blacks are near 0%. Color is saturated but not over-saturated. Because of the emphasis on expanding the image's range, the visual information in the middle of the image will appear 'thin'. (Fig. 7)

Fig. 7 Exponential graphing of color space mapping.

Fig. 8 Linear exposure graph.

Rec. 709 is exponential. REDcolor space is based on Rec. 709 but has some differences that you can visibly see
on most monitors. This is something you must understand before you export files destined for a Rec. 709 application or viewing setup.

Linear exposure profiles. Computer monitors, and the basic information off the camera imaging device, work fundamentally in a linear way. It is very discrete logic or math, where the path from black to white is a straight line (Fig. 8). This is fine for some applications but not well suited for images that must have wide luminance and shading ranges.

It is vital to know where and how your images are going to be viewed. That includes the software, video graphics card and monitor connected to the computer. As a professional DIT you will have asked what software will be used to edit the footage you're working with; say you found out it will be AVID. Rec. 709 is the standard color space for AVID Media Composer.
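Before we move on, here is a toy comparison of the two profiles just described. The log curve below is a generic illustration only, not any camera manufacturer's actual transfer function; the point is simply to see where each profile spends its code values:

import math

def linear_encode(x):
    # Straight line from black (0.0) to white (1.0).
    return x

def log_encode(x, k=500.0):
    # Generic logarithmic curve: lifts shadows, eases into white.
    # k controls how aggressively shadow detail is expanded.
    return math.log10(1.0 + k * x) / math.log10(1.0 + k)

# Compare how much of the output range each profile gives the shadows.
for scene in (0.01, 0.1, 0.18, 0.5, 1.0):  # scene-linear luminance
    print(f"scene {scene:>5}: linear {linear_encode(scene):.3f}  log {log_encode(scene):.3f}")

Notice that a scene value of 0.01 lands at 0.01 in linear but roughly 0.29 in the log encode: the log profile devotes far more of the recorded range to the shadows and mids, which is exactly the 'bunching' described above.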
Pushing and Pulling Color Spaces

As alluded to in the codecs section, moving from one compressed codec to another can damage the image further. The same is true for color spaces. If you recorded in the Rec. 709 color space, then converted to a wider color space like Rec. 2020, the image won't look better just because you put all those colors into a bigger 'container'. Rec. 709 'lopped' off colors and luminances that did not fit into its space, and those cannot be re-created. They are gone, just like over-exposing an image loses all visual information above 100% white.

Figure 9 displays a number of color spaces, including the widest currently available, ACES. It's visually clear in this graphic that the standard for HDTV (Rec. 709) is quite limiting. If there is a rule of thumb when moving from one color space to another, it is to make sure you are moving to one that offers a wider range of colors and luminances. The same is true with the use of codecs. If you transcode from a lower bit rate AVCHD codec to a higher bit rate codec like ProRes 444, and the color space is the same, there should be no discernible loss. The gain in making the migration to the higher bit rate codec is that when the image is processed within color correction, and re-compressed on output, it will suffer less artifacting. Several professional cameras now allow recording with ACESproxy or the Rec. 2020 color space. That immediate gain in gamut is valuable to the integrity of the image as it moves through the pipeline. An explanation of gamut will come shortly.

Fig. 9 ACES color space in relationship to CIE and common color spaces. Courtesy NanoSys.

Here's How This Will Work Out On-Set

The camera might shoot in Rec. 709 color space and the Log-C exposure profile. You will then be transcoding the files into DNxHD36 for delivery to editorial. AVID works in the Rec. 709 color space. The one-light you create will be based on, or fundamentally be, Rec. 709. This gives what you see on-set the same image 'look' as the editor will see when they open the files in AVID. But there's a twist.
If the editor is using computer monitors, which are in the RGB color space, and you are using the same on-set, then the images should appear close. But if the editor is outputting to a client monitor that is plasma or LED, those are still RGB color space, but they represent colors and luminance ranges quite differently. There will be discussion as to why one image looks so much different than the other -- and which image is correct. As the DIT you need a properly calibrated monitor that will closely represent what the client will see in the edit bay. Besides using a good quality monitor that is calibrated, there's another trick you can use to help level the viewing issues. It's called a LUT, or Look Up Table. Think of this as a magic bullet: a simple exchange table that sets the camera monitor or on-set monitor to closely
resemble yours. And the LUT can make your daily job much easier, as you will see.
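To put numbers on the earlier point about 'moving to a bigger container': converting linear-light Rec. 709 RGB into Rec. 2020 is a single 3x3 matrix multiply. The coefficients below are the commonly published ones from ITU-R BT.2087; treat this as a sketch of the principle, since a real conversion also has to handle the transfer functions on either side:

# Linear-light Rec. 709 RGB -> Rec. 2020 RGB (coefficients per ITU-R BT.2087).
# Note the output never gains colors the source didn't have: the same colors
# simply occupy a smaller portion of the wider gamut, which is why the image
# doesn't magically look better after the conversion.
M = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def rec709_to_rec2020(rgb):
    return tuple(sum(M[row][col] * rgb[col] for col in range(3)) for row in range(3))

print(rec709_to_rec2020((1.0, 0.0, 0.0)))  # pure 709 red -> a less extreme 2020 red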
TECH INFO: Very recently a new color space has emerged which hopes to solve long-term storage of digital images while maintaining their color integrity. Named Rec. 2020, it offers a wide gamut and is targeted at 4K images and larger. Working in 10- and 12-bit/pixel resolution, its color rendition is substantially greater than Rec. 709 and other CIE-specific color spaces. Keep an eye on this new standard.

One principle, mentioned before, that must be adhered to when moving camera files to another format (which implies a codec change) is that the target codec should be as high a quality as possible. FujiFilm has an acronym for this: 'SOOM', which stands for Scan Once, Output Many. It refers to the fundamentally high quality of film stock: if you shoot your master on film or a high-quality digital format, it can be repurposed to other formats with far less loss.

What this specifically targets is a bad habit in the ink-based, and now digital, print industry. The typical workflow would originate in RGB, switch to CMYK, then switch the CMYK file back to RGB as needed. Each change in color space reduces the color quality a little bit. The same problem can happen with our video files if you're not careful. Color spaces like RGB with DNG, ProRes, DNxHD, and other high-data-rate codecs allow for a high enough quality image that any transcoded file originating from them will look much better. And files from something like AVCHD moved (transcoded or flipped) into ProRes will not suffer image-quality loss.

Be Warned

Color spaces and exposure profiles are far more complicated than we can cover here. What you now know are the basics that will help you troubleshoot the visual pipeline when images start looking 'off'. It will help you make informed decisions when transcoding. It will help you purchase the right monitor for your work. We'll get into that in the hardware section. I would highly recommend you pay attention to any articles discussing color spaces in the video realm from here on out. The improvements in compression within a wider color space are changing as you read this. All of this change directly affects your work. You now have enough information to wade into this article. It is dense but understandable. It focuses specifically on the ARRI cameras, but the color concepts and LUT applications are applicable across the board.

Arri Alexa - Legal vs. Extended

http://bennettcain.com/blog/2012/8/1/arri-alexa-legal-vs-extended.html
2.4 LUT Magic & Camera Profiles
Required Reading: What's a Look Up Table
http://nofilmschool.com/2011/05/what-is-a-look-up-table-lut-anyway/
What LUTs Are, And What They Aren't

A lot of professionals will tell you that LUTs, or Look Up Tables, are the coin of the realm when it comes to sharing color correction between software and cameras. Part of this is right -- the sharing. The color part is wrong the moment you put the word 'sharing' in the same sentence. In truth, there are two uses of a LUT with color. One is a limited use within the DIT process, and the other is way down the pipeline in post color correction. Here are the basics you need to know about the use of LUTs:

• LUTs can set/change saturation
• LUTs can change 'relative' contrast
• LUTs are a fixed point in space
• LUTs can't adapt (aka not relative)
• LUTs are color space specific
• LUTs are shot specific

We're going to take these basics in stride over the next few pages. First, let's look inside the LUT file. When you open a LUT file, it looks like this:

#
# SCRATCH Lab v7.0 (Build 758) generated 3D LUT.
# This is an approximation to the color-transform and/or 3D LUT
# Dimension 32x32x32 (16 bits)
# First line specified the sample spacing
#
0 2114 4228 6342 8456 10570 12684 14798 16912 19026 21140 23254 25368 27482 29596 31710 33825 35939 38053 40167 42281 44395 46509 48623 50737 52851 54965 57079 59193 61307 63421 65535
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 42 0 0 2185 0 0 4357 0 0 6555 0 0 8776 0 0 11018
……

This list of numbers from a 3D LUT generated by SCRATCH Lab goes way beyond what you see here. The location and pattern of the numbers is a pre-defined standard that is then read by the receiving software. In a very simplistic way, LUTs help span the differences. For example: Monitor A needs to look like B-standard.

A - B = C
Fig. 1 Arri Alexa images. LOG color space on the left compared with simple one-light color correction on the right. ©Arri
C = the differences between monitor A and, say, B-standard, which might be Rec. 709. The LUT stores the information necessary to 'tweak' the video coming into monitor A so it more accurately emulates the Rec. 709 color space. The same is true with a camera file that is Log C when you need it to look like Rec. 709. (Fig. 1) The LUT holds the information to change the chroma and luminance so that it fits within the Rec. 709 standard.
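Conceptually, applying a LUT is nothing more than 'look up the incoming value, output the stored value', interpolating between stored samples. A minimal 1D sketch follows; real LUTs are per-channel 1D tables or full 3D cubes, and the sample spacing follows a header line like the one in the SCRATCH file above:

def apply_1d_lut(value, lut, in_max=65535):
    # Map an input code value through a 1D LUT with linear interpolation.
    # `lut` holds output samples at evenly spaced input points.
    pos = value / in_max * (len(lut) - 1)   # fractional index into the table
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1.0 - frac) + lut[hi] * frac  # blend neighboring samples

# A toy 5-point LUT that lifts shadows slightly and leaves white alone.
toy_lut = [2000, 18000, 34000, 50000, 65535]
print(apply_1d_lut(8192, toy_lut))  # a dark pixel gets lifted to ~10000

The table doesn't know anything about the image it is applied to -- it just maps numbers in to numbers out, which is exactly why the 'fixed point in space' warnings below matter.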
LUTs can be applied or implemented in several places within the monitoring pipeline:
• Within the camera, to alter or correct the camera output to a viewing monitor.
• In a LUT box that is in the video path from a source to a viewing monitor.
• If a monitor has the ability, LUTs can be loaded into the monitor's software to correct the monitor or to add a 'look' requested by the DoP or director.
• In the output of the color correction software.
• On the timeline, or directly to a clip in the timeline.
Needless to say, they are very handy tools with a wide variety of uses within the realm of what we do on and off the set. But they are NOT the holy grail. They ARE a tool that can, in some cases, be very helpful.
Fig. 2 Standard or 2D LUT compared to 3D LUT. Image ©Courtesy Lizo corp.

There are two basic LUT file structures, 2D and 3D. The 3D LUT, shown in Figure 2, is now emerging as the preferred file type, but not all software and cameras can handle this more complicated and accurate data structure. This re-enforces the concept that you always need to make sure the standard you choose will work for everyone in the production pipeline. The basic differences between the 2D and 3D versions of LUTs are these:
• A 3D LUT defines color on 3 axes
• A 3D LUT is more accurate
• A 3D LUT offers more points at which to define color

The basic uses for a LUT are:
• to correct a non-broadcast-standard monitor closer to the standard
• to match several monitors
• to adjust a monitor to more closely match the final output (e.g., working with digital images that will later be printed)
• to apply a correction to chroma and/or luminance that can move an image closer to a normal, balanced look, or change the image to a desired 'look'.

The word 'look' is highlighted here because a look is something that is done in final color correction, not on set. That being said, many wrongly call it a look. What we are doing is more of a tweak to make things look better.
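To see what a 3D LUT actually contains, here is a sketch that writes a tiny identity LUT in the common .cube text format (one widely used variant; some tools also expect DOMAIN_MIN/DOMAIN_MAX lines). An identity LUT changes nothing, which makes it a safe first test for a LUT box or monitor:

def write_identity_cube(path, size=17):
    # .cube format: a header with the cube size, then size**3 rows of
    # "R G B" floats, with red varying fastest, then green, then blue.
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    f.write(f"{r/(size-1):.6f} {g/(size-1):.6f} {b/(size-1):.6f}\n")

write_identity_cube("identity_17.cube")  # load into software that accepts .cube LUTs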
This is the point where we need to tackle that list of what-LUTs-are from the start of this section. If you walk away with nothing else, these points are the ones to know.

LUTs are a fixed point in space - This is the key to using them properly. Nothing else is as important. When a LUT is created, it's done with a specific clip of video. That stored information is ONLY about that clip and the changes made to THAT clip. Can it then be applied to another clip with the same results? Maybe -- if the next clip was shot under the same conditions (lighting, exposure, etc.) as the shot the LUT was created from. Think of a LUT as that moment when you press the shutter on a still camera. It does not capture the moment before or after. It's information about that moment.

LUTs can't adapt (aka not relative) - Now we move a bit deeper and explain why LUTs are so specific. They are dumb. A few pages back you saw a simple math formula that explains this concept. Let's go deeper and change a few variables so they make more sense for shots being processed after they leave the camera.

A = the specific shot
B = the adjustments to the shot, creating the change, or the LUT
C = the desired result
So A + B = C

Now if we bring in shot 'D' and drop it into this simple arithmetic problem, we have a very typical issue. You most likely remember those problems from school, years back, where they had you solve for 'A'. For some of us, that was a long time ago. Sorry.
Now we'll plug in some numbers:
A = 1
B = 2
A + B = 1 + 2 = 3, the shot corrected with the LUT.
But shot D is a bit more blue, so D = 1.2. Apply the same LUT:
D + B = 1.2 + 2 = 3.2 ≠ 3
So D + B ≠ C.
So there's the non-math reality of using this LUT on a different shot. The DoP decides they want a crushed-black but slightly blue (cool) look to the shot. You sit down with them and work on ONE selected shot. Let's say it's a shot that was created on a set under artificial light sources. You might be able to use that same LUT for other shots in that same setup/environment, provided there weren't drastic changes to exposure or, more importantly, color balance. The DoP is happy with that correction, says 'just apply that to all the shots please', and walks off. The next shots to come in from the camera are outside, in afternoon light. Even with white balancing the cameras, there will be more blue in those shots simply because they were shot in an environment that is fundamentally more blue to start with. The DoP added blue to the inside shots, so the LUT now contains a +blue correction. Add that +blue to a shot that already has blue in it, and the result is +a-bunch-more-blue. It will look nothing like what was decided based on the first shot. Make more sense now?
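The same failure can be shown in code. In this toy model (an illustration only), a 'shot' is just its average blue level and the 'LUT' is the fixed offset the DoP approved on the interior shot:

# The LUT cannot see what it's applied to; it adds the same amount every time.
interior_shot = 1.0
target_look = 3.0
lut_offset = target_look - interior_shot   # +2.0, baked in forever

exterior_shot = 1.2                        # already bluer to start with
result = exterior_shot + lut_offset
print(result, result == target_look)       # 3.2 False -- too blue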
A LUT cannot analyze the color and luminances of a shot and determine what needs to be done to make it look like the desired or reference shot. It's dumb. It's a pile of numbers that make finite changes no matter what. So what do you do? We'll get to that.

LUTs can set/change saturation & LUTs can change 'relative' contrast - This is what LUTs are good at in a general-use mode. Figure 3 shows the on-line LUT generator for the Arri cameras. When you plug in the camera and some other information, this software creates standard LUTs for every common ISO of that camera. All the LUT is doing is changing the Log C exposure profile to the Rec. 709 color space. That's it. Period. Those Log C exposures look flat and desaturated out of the camera. They are a finite amount 'desaturated' and 'flat'. The LUT moves the luminance settings to Rec. 709 relative to the known luminance profile. The same goes for the chroma/saturation.

LUTs are color space specific - This should be clear now, with the previous explanation. If you create a LUT in the Log color/exposure space and try to apply it to a linear color space, the results will be disastrous. The previous section graphically displayed the charting of each of the color space profiles, and it's easy to see you can't overlay one on the other without problems.

LUTs are shot specific - I'm not beating a dead horse here. They are shot specific, but very useful because they are so specific. And here's where they work like a charm for those further down the post-production line. Remember that shot you carefully corrected under the watchful eye of the DoP? A picture is worth a whole bunch of words, and you just created a picture that lets the colorist know exactly the 'look' target they are aiming for. What you do with this shot is put it in a separate folder with that LUT. Label it 'Directors LUT -ScXXX-ShXXX'. The LUT will be named the same. You are indicating that this LUT is for a specific scene and shot. What the DoP is ultimately saying is, 'I want all the shots to look like this'. Now the colorist has a good reference to start grading to. They have the option to pull up the shot, apply the LUT and then make a reference frame.

Fig. 3 Arri on-line LUT Creator tool.

LUTs can be used for Looks in post color grading. But there is a catch. For example, if you have a sequence of
shots on the timeline, all color matched, a LUT can give that sequence a 'look' -- something that might emulate a film stock or the desaturated appearance of the bleach bypass process. However, if the shots in the sequence are not color matched to start with, the LUT can make any shot that isn't matched look worse. As mentioned earlier in this book, LUTs and color space management go hand in hand. They must be fully understood to be effective with color correction and transcoding.

Creating LUTs

A LUT can be created within the software used to do one-light color correction, or in other software like LUT Buddy by Red Giant Software. Several camera manufacturers have pre-made LUTs for you to use. Arri has an on-line tool that allows you to create a range of LUTs based on the camera ISO, for use in dailies processing and post production (Fig. 3). And there are many more. What can actually be done with a LUT is often fraught with problems. The safe bet, if you are thinking about using LUTs to do a one-light, is to create the LUT at the point where you make the luminance and saturation corrections. That LUT will generally work for shots from that camera set to that specific color space. LUTs are fickle things: they take a fair amount of work to create, and more work still to apply predictably, with the same results, to lots of different shots. There are talented people who have created 'looks' that are sold in packages for specific software, where you can simply drop their LUT onto a shot or a whole finished edit and get a consistent look. One such package is Magic Bullet Looks from Red Giant, and another is 'LUTS' from Color Grading Central. The success of these Looks rests on the consistency of the base grade of the project. Again, this is outside the domain of the DIT.

The difference with LUTs created by the camera manufacturers for their own cameras is that they know the magic sauce of color and luminance combinations that creates their specific image files. They know what's under the hood of their cameras and how to best create a file that will work consistently. In the case of Arri and their LUT Creation Tool, it does not matter what the white balance was or the iris setting (within reason). Their LUTs are based around the changes that happen when you change the ISO of the camera. Changing the ISO, or the relative sensitivity, of their Alexa camera, for example, causes certain changes in the color saturation (amongst other changes), and they have plotted those differences. That information can then be used to create a LUT that brings whatever shift in color back to 'normal' or a more desirable appearance. But it does go further than that, of course. We will go into this concept in the chapter on One Light and Scope Basics, for processing of image files.

Although you might create a LUT to bring a non-standards-compliant monitor into a reasonable approximation of the viewing standard, or to simply correct the raw exposure of the camera file to a standard such as Rec. 709, others on set might want a more creative use. The set is not the place, nor is the DIT the person, to do this. That being said, you will be asked to do it by your boss. Best to humor them and maybe educate them, even a little bit, about this process. They will feel good about learning something new, and you are their new go-to person for information that makes them look good.

Practical Applications On Set

Let's step through the process of using a LUT in several specific instances on a working set. First will be correcting a monitor to reflect the proper output of the camera. This is a several-step process that can also be applied to the monitors in your personal DIT station or editing system. A step-by-step overview looks like this:
- Color correct the monitor.
- Create a LUT to compensate for the differences still existing in the monitor's rendition of the image.
- Apply the LUT to the monitor. This might be an .icc profile stored in the computer.

There's a lot more trial and error than this simple three-step process indicates.

Color correct the monitor - If you're using a high-quality, broadcast standard monitor, this is a much easier task, because the monitor is already capable of accurately displaying that standard. If you are not using a broadcast standard monitor, you can use LUTs to more closely represent the standard. For those non-standard monitors, you will need a monitor probe from one of several manufacturers. X-Rite, CalMAN and others sell software and sensors that will create an .icc profile based on your computer and that monitor. The .icc profile is very much like a LUT: the software generates specific colors for the probe to see/read on the monitor. The difference between the
color generated and the actual color shown on the monitor is what's used to create a 'difference' file, or .icc. It's a color profile of the monitor-computer-graphics card combination, bringing the monitor closer to a standard. This standard is typically Rec. 709. Be advised that a $200 LED-based computer monitor is fundamentally limited to a smaller range of colors and luminances than the standard requires. So no amount of LUT-based change will ever be able to make that level of monitor as accurate as a monitor built to the standard.

High-end monitors might have the probe built in, along with their own software. The beauty of this process is that the manufacturers have closely tied their monitor and testing system together, creating a finely tuned system. But it does not take into account what the graphics card will do to any images going to that monitor. Still, at that point you are reasonably assured that the monitor is as color correct as it can be. That takes one variable out of the mix.

If the monitor is broadcast-standards compliant but does not have self-color-correcting abilities, then it might have a feature where you can load/push a LUT into the monitor. In this process, a third-party color correction probe and software will generate the LUT file, which can then be loaded into the monitor's software. The video signal from your computer or camera output will be put through that LUT, 'tweaking' the color balance of the signal so the monitor will display an accurate image. I'm simplifying this, of course. What you need to know is the LUT does 'what' to 'what', and in 'what' order.

These steps are fine for high-end monitors, but what about the monitors on set that might be of lesser quality and not have the abilities of the higher-priced offerings? You can still do this same process with the help of a box -- a LUT box, as they are commonly called. Blackmagic makes several, as does AJA. These are a bit of software and computer hardware that goes in-line with the video signal cables, allowing you to load a LUT into them; they pass the corrected signal on to the monitor. In this process you would put the probe on the monitor and run the color correction software to create a LUT. That LUT file is then loaded into the LUT box. This process will take several attempts to get right. Don't expect to do this on-set the first day of production. On one production I recently worked on, it took nine tries to get something that was acceptably close. And with consumer or pro-sumer grade monitors, you will need to settle for 'acceptable'.
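The 'difference file' idea can be sketched in a few lines: generate known patch values, measure what the monitor actually shows, and build a correction from the mismatch. This is purely illustrative with made-up numbers; real probe software works in a calibrated color space with far more patches and all three channels:

# Hypothetical numbers: what we sent to the monitor for five test patches,
# versus what the probe measured coming off the screen (red channel only).
sent     = [0.00, 0.25, 0.50, 0.75, 1.00]
measured = [0.00, 0.21, 0.44, 0.70, 1.00]   # this monitor crushes its midtones

def correction(target):
    # To *display* `target`, find what we must *send*, by inverting the
    # measured response with linear interpolation.
    for i in range(len(measured) - 1):
        if measured[i] <= target <= measured[i + 1]:
            t = (target - measured[i]) / (measured[i + 1] - measured[i])
            return sent[i] + t * (sent[i + 1] - sent[i])
    return target

print(round(correction(0.50), 3))  # ~0.558: send more signal to display 0.50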
Let's add another variable into the mix -- wireless video connections. It should be clear by now, based on what you have read, that changing a video signal from ones and zeros to something like an analog transmitted frequency, then reconverting it to be viewed on a monitor, offers many places for the signal to be corrupted or altered. The key to solving this, and other similar computer-to-monitor color correction problems, is that the pathway must be set up as it would be used under production settings. And somewhere in that pathway there must be the ability to store the color correction profile. It could be at the start of the pathway, at the transmitter, or at the monitor -- as long as it is consistent in its configuration between when you tested it and on the set. And it should go without saying that if someone starts tweaking the settings on the monitor, the LUT is immediately rendered useless. Trust me when I say, someone is going to touch the knobs.

What you are trying to accomplish here is simple to talk about but harder to make happen. You want the image on the camera monitor, your computer monitor, and the monitor the DoP is looking at to be as close to the same as possible. If you can set up your on-set monitoring system so you can see all the monitors while calibrating everything, it's much faster and more accurate. This article takes you through setting up a non-standards-compliant plasma monitor. It's applicable to any monitor, by the way. In the chapter on basic DIT workstation hardware, we will focus on the core of the DIT workstation and not address this out-board equipment at all. The core function of the DIT is to handle and process the camera files. They can offer other services, like on-set monitoring, over and above the core work, as an additional revenue stream. It is commonly done.

LUTs directly into the camera - The next on-set application of LUTs is to load them directly into the camera's output. This application does NOT alter what is being recorded; rather, it takes a really ugly, flat, almost colorless Log C image and does a quick color correction. The monitor will display a presentable image that will be close to what the camera actually sees and far more accurate for judging exposure and lighting. There are a few high-end digital cinema cameras that allow LUT-loading. You will need to figure out how the one on your production handles LUTs and what type of LUT it can accept. Some are proprietary, or only take 2D LUTs or CUBE-format LUTs. You will have to
research this on a production-by-production basis, then react accordingly. In most cases, if a LUT has not been established, the DIT will configure the camera to Rec. 709 (or one of the new REDcolor settings if using the RED cameras) for image display on-set. Later, when processing the camera files, they will leave the output color correction at Rec. 709 (or at a color space most closely matching the edit suite's hardware and software configuration), and call that good for basic production monitoring. If the DoP wants to see a more accurate rendering of what the camera is producing, based on the final 'look' they are trying to achieve, the DoP will come to the DIT and ask to see some shots with the Look-LUT applied. This can be done quickly in Resolve, SCRATCH, REDCINE-X, and most NLE softwares. There are times when the DoP wants a 'look' applied to the recorded files, and this is done through a LUT as well. To give an idea how this might be done, THIS video tutorial steps through the process with a RED camera.

A BIG WARNING here -- NEVER output files headed to editorial with the DoP-created 'look' applied. There are many reasons for this, not the least of which is the simple fact that the environment on set is NOT the place to make critical color decisions. But the DoP is your boss, so you can facilitate their needs and still keep the workflow intact. Here's how you do it:
- Working with the DoP, create the LUT.
- Save that LUT as 'DoP_LUT_the_date'.
- Do your routine Rec. 709 one-light output for editorial.
- Put the LUT in a folder on the drives headed to editorial, called 'DoP_LUT'.

Now when the DoP comes to editing and asks why his LUT has not been applied, the editor can simply retrieve the file and apply it in a few mouse clicks. Inevitably that LUT/Look is abandoned, but everyone is happy and you have not upset the workflow for editorial.

Ultimately, as part of your daily workflow, you will be applying a simple one-light and flipping the files into a format specific to a need (e.g. editorial, dailies viewing off the web, etc.). The redeeming value of standards like Rec. 709, Log-C, and the new ACES is quality assurance, both initially and down the road. Additionally, if everyone used the ACES standards, the output images would be comparable across all viewing modalities. The reason for these high quality standards is to assure the best possible image. When you have to re-package the files for a new delivery mode, the quality is intact because of these standards. This concept mimics the native ability of film stock to produce a stable and solid image that will last decades into the future.

Warning Flags About Codecs

All image processing and transcoding softwares are not created equal. Strong statement, but true. And I'm not just talking about their functionality. I'm referring to how they actually handle the integrity of the files. Codecs themselves cause image issues that must be taken into account. For example: the RAW camera file, with the appropriate LUT applied, might look great on your monitor on-set. You then export it to H.264 for others to view. The color rendering of H.264 is different and the compression is substantially higher. You will get comments back about the colors not looking as saturated, or about the image looking flatter. (Fig. 4) In some cases there's no way around this, and the
NOTE: Why even venture into DSLR camera settings at this point, when we have focused on 'real' camera platforms? Well, 'real' is a very loose term today. The game-changer for film-making was the 5D Mk II. The RED One brought RAW imaging at an affordable price. The Canon 5D opened the door to ridiculously low-priced HD image acquisition that has proven itself in feature films, TV shows and documentary productions. It's not going away any time soon. It IS going to be more accepted on larger productions as another tool to get the job done. The process of making this feature-lacking digital cinema camera produce very acceptable images can be applied to other, similar cameras across the board. As a DIT in a professional and independent production setting, you will work with RED, 5D and other platforms, shooting the same project on any given day.
Fig. 4 Left image is a screenshot from REDCINE-X, color corrected. Right image is a screenshot from QuickTime after conversion to the H.264 codec.

complainers will just have to live with it. In other cases, you can tweak the output settings for the codec, after you have applied a look, to correct some of these issues. If you can work this out by testing the images with different settings, it will be to the advantage of the whole production and to your standing as a professional DIT.

Third Party LUTs for DSLRs

Required Reading: "CineStyle LOG and LUT explained for people like me"
http://www.cinema5d.com/news/?p=6165

We're going to take a bit of a side trip here, so bear with me. Although erroneously and often called LUTs when applied to DSLRs and other cameras that don't accept the importing of Look Up Tables, these really should be referred to as Camera Profiles. The basic reason for this is fundamental to how the camera processes and saves the image. But the term LUT is widely used, so we won't get too upset at this point. Cameras that save image files in a highly compressed or unique file structure often do not have the ability to understand all the controls contained within a LUT file. The cameras are not sophisticated enough on the software side to offer this kind of functionality. But you can significantly change many settings to achieve quite similar results. There is third-party software available for these profiles, offering the camera owner some of the same abilities to control the look of the recorded image as a LUT. An example of this profile control and presets is offered by the Magic Lantern software hack for Canon DSLR cameras. The authors of this program exploited the untapped features and controls already native within the camera, then added more to leverage user control over the recorded image. Until the very recent addition of exporting RAW image streams from the Canon 5D DSLRs, we were stuck with highly compressed files in the AVCHD (H.264 codec) format. Because of the high compression, exposure has to be very accurate, and color settings as close to correct as possible. The resulting files from these DSLR cameras are quite fragile. They can withstand some color correction, but not nearly as much as a RAW file structure can handle.
Tech Note: The Magic Lantern 'hack' isn't really a hack in the truest sense. It's actually a small file, residing on the camera memory card, that affects the factory settings, offering more features. With this method of altering the camera software and firmware, the code is less likely to 'brick' or freeze the camera permanently. However, cameras have been bricked, so proceed with some caution.

You would be well advised to do some research on the web, from knowledgeable sources, about the best camera settings to achieve an image file that will hold up under color correction in post production. The downside to this way of altering the camera's firmware and system software is that the Magic Lantern software must be installed on each memory card you use. Erasing files off the card is preferable to re-formatting the card and then having to re-install Magic Lantern again. Not that big of an issue once you incorporate this into your daily workflow.

There's a Technicolor LUT that's free, created for Canon EOS cameras. Based on Technicolor's usage and testing of its CineStyle™ Picture Style, they recommend the following camera settings to optimize the image quality of the Canon EOS camera:
Sharpness: 0
Contrast: -4
Saturation: -2
Color Tone: 0
ISO: a multiple of the camera's native ISO (i.e. a multiple of 100 or 160, depending on the camera).

What this does is set the in-camera sharpening to neutral, lower the contrast and the color saturation, keep the Color Tone (hue) neutral, and suggest specific ISOs, based on extensive testing, to create the least amount of image noise. The resulting image will be quite flat and desaturated. This is a great
starting point for further color correction without damaging the image.

Considerations For Profile Settings

Let's say the camera uses the AVCHD codec (H.264). The chroma subsampling is 4:2:0 and the compression is roughly 30:1. This results in a manageable 38 Mb/sec data rate. The latest Canon 5D software allows for intraframe or interframe compression. Remember that intraframe records all frames as 'I' frames -- each frame is a stand-alone, full image. Interframe has I frames every 15 frames, with the frames between only containing the 'difference' between frames. There is a substantial difference between the file sizes created by these settings. However, if the amount of data created is not an issue, then the intraframe setting should be the one of choice. The images will hold up better in post production. On any similar camera, what we need to control is the following:

- ISO, or light sensitivity. This directly relates to exposure AND image noise created by the electronics. Canon uses a push-pull arrangement for its ISOs. The camera is based on ISO 100 or multiples of 100. If you set the camera to ISO 125, the camera has to 'push' the native exposure up to 125. This induces noise in the image. If you set it to 160, ISO 200 is closest, so 160 is a 'pull' down from 200. This has less noise in the image. The rule of thumb is that multiples of 160 work best for the 5D cameras.

- Shutter speed at 1/50th. This is as close as you can get with this camera to the filmic 1/48th of a second when shooting 24 fps. The shutter speed should always be double what the frames-per-second is set to: 30 fps would be a 1/60th of a second shutter speed.

- Exposure. This is by far the most critical setting to 'get right'. Because the resulting camera files are compressed, everything you do is burned in. If the exposure is wrong, it will be hard or impossible to fix. Over-exposure is the most important to avoid. Under-exposure has its issues as well: in post, you will need to raise the video levels of the under-exposed clip, and that will introduce image noise. The rule of thumb here is ETTR -- Expose To The Right side of the histogram. Place what you want to read as white, with details still visible in the whites, as far right as possible, then make sure the whites do not go off the right side of the histogram.

- White balance. It is difficult to fix off-color images in post. The process of color correcting an image that is too blue or too orange introduces noise and affects other colors in the image at the same time. As a DIT, you should have an 18% gray card in your case. The camera department should have one by default, but there is a bad trend of thinking that because it's digital, it's not that important to do a white balance. The opposite is true. Set your exposure, then white balance the camera.

- Profile settings. As mentioned in the previous section, there are significant 'tweaks' that help the image when it makes it to post production. These need to be understood and tested. Search the web for camera settings and experiment with as many as you can.

- Styles. Called something different by each camera manufacturer, this setting basically forces a LUT, or image luminance curve, on the recorded file. This setting is worth the time to dial in. One of the settings might be 'Cine-style'. This setting moves the black level up and the white levels down, into the center of the exposure range. It also adjusts the toe and shoulder of the exposure curve. Other cameras, like the Sony FS-100 and FS-700, have very powerful profile-setting abilities for digital cinema cameras in this price range.

Finding a sweet spot for camera profiles is a labor of love: hours of testing and bringing the footage into color correction software for analysis. Philip Bloom does some of the most significant testing and reporting on this and other cameras. Here's a link to his research on the FS-100 camera profiles for flat-profile exposures. The more you read about these, the better grasp you will have on how similar settings can be applied to any camera not shooting in a RAW format. If you need a setting for the Canon 5D and other DSLR cameras, Philip Bloom and Vincent Laforet would be rock-solid sources. You need to find sources for all the camera platforms you might encounter. Remember that part early in this book about learning, learning, learning?

Rule of thumb at this point: if the camera saves its images in RAW file format(s), then a LUT can be applied or created in camera. If the camera saves its files in a compressed (non-RAW) format, then you might only be able to alter the picture profile, which is burnt in (baked in) to the final image. However, cameras like the Arris, the BlackMagics and others now offer recording in RAW or in compressed codecs like DNxHD or ProRes. These
cameras can take a LUT. In RAW files, the LUT is either a side-car (XML-type) file or separately added into the clip file as metadata. All cameras offer software tools to tweak the camera's functions (color saturation, hue, exposure index, etc.), but not all will accept a LUT.

Testing and More Testing

It's probably clear by this point that we don't just run-and-gun, hoping for the best outcome. A professional will test every setting and then look at the outcome with a critical eye. The same goes for LUTs and camera profile settings. Here's a good example of why. Years ago, I purchased a large-screen TV to be used on-set in place of a green screen -- easier for the talent to work with. The issue was the color correctness of the image on the TV when a studio camera was pointed at it. The colors weren't even close. So our engineer put the SMPTE bars pattern into the TV and tried to change the color controls on the TV itself. What we learned, after hours of testing, is that what the Hue control was supposed to do in all other systems, this TV's didn't. The Hue control shifted the colors but also changed the saturation of the colors, and not evenly. Then the Saturation control had no linearity to it at all and did not increase or decrease the RGB channels evenly. This was a very expensive set at the time, by one of the biggest names in the business. We got it to work, but not without abandoning the TV's controls and putting a several-thousand-dollar color correction box in-line that worked like it was supposed to.

What this all means is that just because your camera might have a chroma, hue or saturation control does not mean the software inside is written to allow those controls to work as one might expect. You have to test. The profiles listed above for Canon cameras most likely will not work for Nikon cameras. The settings listed above were the result of hours and hours of testing. The outcome is typically a comment by someone who owns the same camera: "How do you get such great looking images?" You can just smile and say, "It's the secret sauce." And like all good sauces, it takes time to work out the flavors to perfection.
Chapter 2 Review Questions
Answers can be found in Appendix C.

1) What does Codec stand for?
A) Color Decision List
B) Compression Decompression
C) Cotangent Decompression

2) If the saved file size is smaller than the data coming off the camera sensor, the file is considered compressed.
A) True
B) False

3) A .mov file is the codec.
A) True
B) False

4) Video works in what basic color space?
A) CMYK
B) RGB
C) Yb-Cm-B

5) Gamma is the term used to describe the midrange of luminance within an image.
A) True
B) False

6) Temporal compression is a compression scheme where only the differences between frames are stored until a key or 'I' frame is triggered.
A) True
B) False

7) A camera data rate of 50 mb/sec can also be stated as:
A) 50 mega-bits per second
B) 50 mega-bytes per second
C) 50 million ones-and-zeros
D) None of the above is correct

8) A 10-bit image file can potentially display how many colors?
A) 256
B) 256 thousand
C) 16.7 million
D) 1.07 billion
9) With new software packages for asset management, it's not important that the asset manager know the codec structures.
A) True
B) False

10) Compression of the camera file ONLY takes place when the frame is transcoded for editorial.
A) True
B) False

11) Camera image profiles are the same as LUTs.
A) True
B) False

12) Acquisition codecs are perfectly fine to use for delivery as well.
A) True
B) False

13) Codecs are....
A) Lossy
B) Lossless
C) Transparent

14) The most common codec used in consumer and mid-range professional cameras is?
A) .r3d
B) mpeg-1
C) h.264
D) XDcam

15) Mpeg3 is good for what kind of application?
A) HD video
B) UHD video
C) XHD video
D) Audio

16) VBR compression has what characteristic?
A) Smaller file sizes because it applies the same, maximum compression across the entire clip or file.
B) Data rates that change due to the movement or complexity of objects within the frame.
C) Controls the Variety of Brightness and Resolutions within a clip or file.

17) LUT stands for?
A) Locked Until Transition
B) Look Under Transition
C) Look Up Table

18) Flipping/transcoding a file from one codec to another does NOT alter the look or color of the image.
A) True
B) False

19) Which color space contains more possible colors?
A) Monochrome
B) sRGB
C) AdobeRGB
D) Rec. 709
20) LUTs are perfect for passing color changes from clip to clip or across the entire timeline.
A) True
B) False

21) LUTs are 'changes' between the original and the desired outcome.
A) True
B) False

22) LUTs contain enough information and abilities to change any file they are applied to, to match the desired outcome.
A) True
B) False

23) Explain the difference between inter-frame and intra-frame compression.
Chapter 3
Color Change and Codecs
We've spent a lot of time, and words, explaining some hard, complicated concepts. It's time to take a break and run through an exercise or two that exemplify the issues with color spaces and codecs, and the use of LUTs.
In this exercise you will be introduced, in a very basic way for now, to the program DaVinci Resolve, and to how to determine if a codec is damaging your work. Spoiler alert -- it will. But how much, and in what ways? Then in Exercise 2 you will create a LUT from one shot and apply it to another. Both shots were made within 50 feet of each other in the same building space. Other than color, can we conform them to the Rec. 709 color space using one of the shots as a template?
3.1 Exercise 1: Color Change During Flip

Codecs don't always pass files untouched.
Tech Note: Since the invention of digital graphics, codecs, and file transformation (switching a file type from one to another), the issue of color space has been a problem. Simply put, the color space or color rendering at the fundamental level of the file type is not the same for .tiff or .gif or .png. It gets crazier when we deal with video codecs. It's partially related to 'down-rezzing' to an 8-bit color space. But the most damage is done by the color science built into the compressor itself.

In this quick assignment, you will ingest a known, industry-standard reference file (SMPTE bars) into a codec-flipping software and output it several different ways. Then you will bring those files into Resolve and compare them. They should match the original file… 'should'. This exercise should take 30 min. to accomplish.

You will need:
• Course asset files.
• Personal hard drive.
• DaVinci Resolve v12 software.

You will need the following files, located inside the color correction folder: SMPTE_Bars.tif, Bars-ProRes.mov, Bars-DNxHD.mov, Bars-h264.mov.
MPEG Streamclip is a free file-flipping software loosely based on the FFmpeg encoding engine. Squared 5 (http://www.squared5.com/) did a ground-up re-write, creating a fast, clean piece of software. It is much faster to render with than QuickTime. Most other free or paid encoders use the open source FFmpeg engine, but the results from each vary depending on how the additional code in the software package is implemented. MPEG Streamclip's developers have worked hard to create a tool that does as little damage as possible, but it's not perfect. Few are. It's always a good practice to test any encoder before using it on a client's footage.

1. Locate the files. The path should be: YourHardDrive>AssetMgt Course>CourseAssets>ColorCorrection>
Moving The Files Into Resolve for Technical Review

If you're working on a computer that has Resolve already installed, with system files and databases set up for you, follow the next steps. If you're installing Resolve for the first time on your computer, follow the installation process provided by BlackMagic. What follows will take you forward from that point on. What you see in this tutorial is based on Resolve v12.x.

1. Open DaVinci Resolve. If you haven't worked in DaVinci, don't stress. This will take you right through the process, step by step. We're going to create a new user, a new project, and import our media. Then we'll look at the three SMPTE bars files inside the color correction mode of Resolve. What follows assumes you have Resolve installed and your hard drive attached to the computer.
Fig. 1 Resolve User Log In Window.

2. Launch Resolve and it will open to a Project Manager window. (Fig. 1)
If this is the first time you have launched the program, there will be a single Untitled project in the upper left of the screen. First we need to tell the program where to save your media and scratch files.

3. Click-on the Resolve menu at the top left of the Finder window (Fig. 2) and select Preferences. This will open the system-wide setup window for media management.

4. Click-on the Media Storage selection on the left (Fig. 3).
Fig. 2 DaVinci Resolve Menu.
The list of connected media is shown in the middle part of the screen.
5. Click-on your connected external drive (or any drive you like if you are working on your personal computer). In this case my external drive is Curriculum-1. With this selected, the program will save all its files (exported, imported and scratch) to this drive.
Fig. 3 Media Storage window.

6. Click-on the Save button in the lower left of the screen.
Now we need to create a project. In the lower right of the Project Manager screen is a 'Create Project' button.

7. Click-on the Create Project button and the Create New Project window opens (Fig. 4), where you can enter the project name.

8. Enter Flip-test for the name of the project.

9. Save the project by clicking-on the Create button.
Fig. 4 Create New Project window.
The Untitled project now has the new name. One of the keys to success with Resolve is setting up the raster and frame rate properly. If this is not done at the outset, the rendering and output processes will be filled with grief. We can check the default settings just by hovering the cursor over the Flip-test project icon in the upper left corner of the screen.
10. Right-click-on the Flip-test icon.
Fig. 5 Project Settings window.

DaVinci will open the Project Settings window. (Fig. 5) The Master Project Settings selection in the left-side list is what we need. Make sure the project settings are as follows:
- Timeline resolution = 1920x1080 HD
- Pixel aspect ratio = Square
- Timeline frame rate = 24

The next settings will confirm that the media drives are properly set. On the left side,

11. Click-on the General Options.

We're looking for the Cache files and Gallery files locations. If you are on your personal computer, you can ignore changing these settings. However, if you are in a classroom lab environment,

12. Click-on the Browse button for each of these and navigate to: your external drive>CourseResources folder.

13. Click-on the Save button.

You may get a window indicating that these changes will take place the next time you restart Resolve. Click OK, quit and restart the program, and then open the project we just created.

Importing Media

Importing media files into Resolve is much like in other programs. You will use a media browser interface to locate the media, then drag that media into the Bin or Project folder for use later in the process. On the left side of the screen is the Media Storage list with all connected media. You will want to find the CourseAssets folder on your hard drive. (Fig. 6)

Fig. 6 Media Storage Browser.

Inside that folder is the Color Correction folder that contains the files we will work with. When you select this folder (Fig. 7), you will see all the files that Resolve understands listed to the right. There are four files we are interested in for this exercise.
Fig. 7 Media Pool Master bin with bars clips.

1. Hold down the CMD key and click-on the following files:
- SMPTE_bars.tif
- Bars-ProRes.mov
- Bars-DNxHD.mov
- Bars-h264.mov
They should all be highlighted. (Fig. 8)

2. Drag these clips into the Master media area just below the Media Storage frame. With the clips in the bin, we can now move to the editor part of the program, where we put them on a timeline within a project.
Fig. 8 Bars clips selected in Media Storage, ColorCorrection folder.

3. Click-on the Edit tab in the lower center of the screen. (Fig. 9)

Fig. 9 Edit Tab.

This moves the program to the editor functions within the Resolve suite. In the upper left of the screen are the assets/clips in the Master bin. You are going to move the clips down to the timeline area in a specific order. The first clip is the SMPTE_bars.tif file. This is only one frame long, but we can fix that on the timeline.

4. Click-and-drag the SMPTE_bars.tif clip down to the timeline, just to the right of the V1 Video 1 track.

5. Move the mouse cursor to the right side of the clip, then click-and-drag the right edge to the right until it lines up with the 01:00:08:00 point on the timeline. You just created an 8-second-long clip from one frame. (Fig. 10)

Fig. 10 Tiff clip stretched to the right creating an 8 second clip.

6. Click-and-drag the remaining files to the timeline in this order: Bars-ProRes.mov, Bars-DNxHD.mov, Bars-h264.mov. (Fig. 11)

Fig. 11 Color bar clips on the timeline. First 3 shown.

We can now move to the Color correction part of the program to analyze these files. These files were created by exporting the .tif file three times, into each of the codecs indicated within the name of the file. Nothing else was done to them. By comparing each transcoded file to the original, we will detect whether the quality of the transcoded file was damaged.

7. Click-on the Color tab in the lower part of the screen. (Fig. 12)

Fig. 12 Color tab selected.

8. Click-on the Timeline icon in the upper right of the window. (Fig. 13)

Fig. 13 Reveal Timeline button.

This opens up the timeline in the middle of the screen for easy access to each clip. Click-on the first clip in the timeline and it will load into the viewer window.
Using Scopes
1. Click-on the Workspace menu and select the Scopes submenu. (Fig. 14)
Fig. 14 Video Scopes menu.
2. Select ON from the menu. Then go back to the menu and select 2 Up. The video scopes will appear, floating over the screen. You now need to change which scopes to display. NOTE: we will go much deeper into what these mean and how they help our workflow in other exercises. For now, we only need to see the changes when we use them.
3. Click in the upper left corner of the scopes display window and select Vectorscope from the drop-down menu. On the right scope display, select Waveform from the menu.
Fig. 15 Both scopes displayed.
There should now be two scopes displayed. (Fig. 15) We want these two specific displays to get the right information out of the video files. With the first clip highlighted in the timeline, look at the Vectorscope. This display tells us ONLY information about the color in the clip, and nothing about the luminance. Notice how each of the colors in the bars is in or near its respective color box in the scope. 'B' is for blue, 'M' for magenta, 'R' for red, etc. Those boxes are targets for determining if a color is broadcast legal. If the color exceeds its box, the color is too intense and, therefore, not broadcast legal.
What we're interested in is any change, clip-to-clip, compared to the original tiff file. (Fig. 16) Looking at the vectorscope, click on each separate clip and notice the changes in the scope.
Fig. 16 Original .tiff file.
If you see the spikes for each color shifting left or right, there is an overall hue shift in the image. If the color spikes fall farther from the boxes than in the tiff file, the codec has removed chroma information. If the lines between the color spikes change shape, video noise and color noise were introduced when the file was compressed.
The ProRes (Fig. 17) and DNxHD clips look very close to the original. This is because they are very well designed codecs whose primary goal is to create transcoded files that are very close to the original. They also change the file structure from the I-, B-, and P-frame format to all 'I' frames. The quality is maintained.
Fig. 17 ProRes file.
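To make the vectorscope's 'boxes as targets' idea from above concrete, here is a minimal Python sketch, not part of the exercise, that converts RGB values to Rec. 709 Y'CbCr using the standard luma coefficients and measures how far the chroma would plot from the vectorscope's center. The 0.40 threshold is an illustrative cutoff, not a broadcast standard.

```python
# Rough vectorscope-style saturation check for normalized R'G'B' values.
# The Rec. 709 matrix is standard; the 0.40 legality radius is illustrative.

def rec709_ycbcr(r, g, b):
    """Convert normalized (0.0-1.0) R'G'B' to Y', Cb, Cr per Rec. 709."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return y, cb, cr

def chroma_excursion(r, g, b):
    """Distance from the vectorscope center; bigger means more saturated."""
    _, cb, cr = rec709_ycbcr(r, g, b)
    return (cb ** 2 + cr ** 2) ** 0.5

# 75% bars red (as in the SMPTE chart) vs. a fully saturated red.
for name, rgb in [("75% red", (0.75, 0.0, 0.0)), ("100% red", (1.0, 0.0, 0.0))]:
    d = chroma_excursion(*rgb)
    print(f"{name}: excursion {d:.3f} {'(suspect)' if d > 0.40 else '(ok)'}")
```

Run it and the 75% bar lands inside the illustrative radius while the fully saturated red overshoots it, which is exactly the 'spike outside its box' you watch for on the scope.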
The h.264 file isn't so clean. (Fig. 18) The colors shifted a bit, and there's re-shaping of the connecting lines, indicating noise has been introduced. This is a compressor whose goal is to create small, reasonably good-looking files. Information is thrown out, and what remains is not nearly as pure as the original. But that's the trade-off, and the file still looks fairly good visually.
Fig. 18 The scope for the h.264 file.
You're undoubtedly aware of the move by editors to handle files natively within the non-linear editor. This means the program will one day link to the camera file(s) directly, not requiring the transcoding process. We're getting closer with each software update. Vegas, Premiere, and Final Cut seem to have taken the lead on this. AVID's version 7 and newer has made huge strides in offering this feature. It too will allow native, full-resolution linking in further updates. For all of the softwares, there's always something new to incorporate. Arri announced a new 4k camera… and that's all they said. What will the file format be? Codec? We won't know until it's released; then we'll have to adapt.
There are always new file formats emerging, and new tweaks to existing ones that cause less harm to the cameras' original footage all along the pipeline. All you have to do is keep up.
What this tells us is fairly basic information that only the pros fully understand. Flipping a file into a different codec WILL change the image color quality and how the file actually renders the colors. This is an important concept to understand as we move ahead. This color shift will happen with every codec, color space, and piece of equipment connected to your system. To make things more frustrating, you and the next person will have the same computers but different monitors. The same images will not look the same. I mentioned earlier that you will be chasing the 'digital dragon'. This is just part of the dragon and the chase.
There are very deep, technical reasons for this color change, and they are somewhat beyond this book's scope. In the Appendix are very good, highly recommended books on this topic. The more you know and understand color spaces and codecs, the better you will perform in all digital media related endeavors. If you grasp what to look for based on color processing in the hardware and software you are using, it will make your efforts to create great looking images less stressful. As DIT professionals, we should include part of the medical oath, 'do no harm', in our work. Not always possible, but with a little knowledge, we can lessen the damage.
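If you want to put numbers on the drift the scopes are showing, here's a minimal sketch. It assumes you've exported a matching still frame from the original and from one of the transcoded clips, and that Pillow and NumPy are installed; the file names are placeholders.

```python
# Measure average per-channel drift between two exported still frames.
# File names are placeholders -- export your own matching stills first.
import numpy as np
from PIL import Image

original = np.asarray(Image.open("bars_original.png"), dtype=np.float64)
flipped = np.asarray(Image.open("bars_h264.png"), dtype=np.float64)

diff = flipped[..., :3] - original[..., :3]   # per-pixel RGB error
for name, channel in zip("RGB", np.moveaxis(diff, -1, 0)):
    print(f"{name}: mean shift {channel.mean():+.2f}, "
          f"worst error {np.abs(channel).max():.0f} (8-bit code values)")
```

A mean shift near zero with small worst-case errors is what you'd expect from ProRes or DNxHD; the h.264 still should show visibly larger numbers.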
3.2 Exercise 2- Practical LUT Use
In the chapter on LUTs and Camera Profiles, it was made clear that LUTs do have a use, and that there are situations in which they should not be used. In this exercise, you will create a 'luminance only' LUT from one camera clip and apply it to another. Both were shot on the RED One camera using the older REDColor science. Both clips were exposed for flesh tones. The first clip was shot in an area with no daylight falling on the scene, but with a lot of ambient green cast from the fluorescent lights in the ceiling. The second clip was shot just 50' away, in an area with high windows allowing daylight to fill the scene. This added blue to the highlights. With the exposure targeting the flesh tones, the surrounding parts of the scene may appear darker or much lighter. In places, the second shot appears to be totally blown out in the Rec. 709 color space. However, in the Rec. 2020 or ACES color space, there is a surprising amount of detail in the highlights. We won't get into those wide-gamut color spaces in this section. For now we will just focus on what a LUT can do to assist the workflow of a DIT.
This exercise should take 30 min. to accomplish. You will need:
• Course asset files.
• Personal hard drive.
• DaVinci Resolve v12 software.
You will need the following files, located in the RED Assets folder inside the course assets:
- A010_C003_1204RN.RDC
- A010_C033_120404.RDC
You will want to launch Resolve if you don't have it open from the last exercise. If you're unfamiliar with setting up Resolve, refer to the first steps in Exercise 1. What we will do next assumes you have Resolve open to the projects window.
1. In the Project Manager window, click-on the New Project button. Name the project 'LUT Test'.
2. Right-click on the new project icon and select Config from the menu.
3. Change the Timeline Resolution to 2048x1152 2k 16:9. One of the clips we will use is 2k and the other is 4k. It's a good practice to set the project raster to the smaller size. Resolve will down-convert the 4k clip.
4. Change the frame rate to 23.976.
5. Click-on Save in the lower right corner of the window.
6. Double-click on the new project to open it.
Now we need to import the footage. The clips you need are located at: (Fig. 1) yourdrive>CourseAssets>Red Assets.
Fig. 1 RED assets selected.
7. Select (CMND-click) the clips:
A010_C003_1204RN.RDC
A010_C033_120404.RDC
and drag them into the Master media frame just below. If you see a window pop up that asks you to
change the project settings, don't let it change them. We want the project to stay at 2k, even though there's a 4k clip being imported.
8. Move to the Edit window by selecting the Edit tab in the lower-center of the screen.
9. Click-on the A010_C003_1204RN.RDC clip, drag it to the Timeline and drop it. Now do the same for the other clip.
10. Move to the Color window in the program by selecting the Color tab at the bottom of the screen.
11. Open the timeline by clicking-on the Timeline icon in the upper left of the screen.
12. Click-on the first clip on the timeline. This should be the one with the man sitting in the chair reading a book. What we are going to do now is adjust the black levels and white levels ONLY, to create a full-range image.
13. Open the Scopes using the Workspace menu.
14. Make sure one of the scopes is the Waveform display. Looking at the Waveform scope, you can see that the traces do not go very high, which means that there isn't a full range of whites in the image. More accurately stated, 'the whites are low'. And the black levels are just above the bottom line, which means that there are no 'real' solid blacks in the image. (Fig. 2) This is what we will fix with the following adjustments.
Fig. 2 Waveform before luminance corrections.
15. Drag the playhead down the clip until we see the man's face looking up. This is at roughly 09:14:19:00 on the timeline. You can look at the upper right area above the video display to see the timecode position.
16. Using the Lift luminance adjuster (Fig. 3), drag the Lift to the left until the bottom of the traces meets the lower line in the scope.
Fig. 3 Lift adjuster.
17. Using the Gain luminance adjuster (Fig. 4), drag it to the right to 'lift' the whites. Adjust until the white peaks are just below the upper line on the scope.
Fig. 4 Gain adjuster.
The properly adjusted image should look like Figure 5. We now need to save this adjustment in the form of a 3D CUBE format LUT. It's a very simple process.
Fig. 5 Waveform of the image properly adjusted.
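Conceptually, the Lift and Gain moves you just made are a simple luminance remap. This toy model is a simplification — Resolve's actual processing is per-channel and more sophisticated — but it shows why Lift moves the blacks while Gain stretches the whites:

```python
# Toy lift/gain model on normalized luminance (0.0 = black, 1.0 = white).
# A simplification of what a color corrector does, for intuition only.

def lift_gain(y, lift=0.0, gain=1.0):
    """Offset the bottom with lift, scale toward the top with gain."""
    return max(0.0, min(1.0, y * gain + lift))

# Footage with milky blacks (0.08) and low whites (0.80):
for y in (0.08, 0.45, 0.80):
    print(f"{y:.2f} -> {lift_gain(y, lift=-0.10, gain=1.25):.3f}")
```

With those example values, the 0.08 'milky black' lands at 0.0 and the 0.80 whites stretch to 0.90 — the full-range image you just built on the waveform.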
18. Right-click on the clip you have adjusted in the timeline and a menu will open. (Fig. 6)
19. Select Generate 3D LUT (CUBE) from the menu. A typical finder window will open for you to name the LUT. (Fig. 7)
20. Name the LUT RED-to-709. This lets you know that this LUT is specific to adjusting these RED camera files to the Rec. 709 color space for luminance settings.
Notice where your computer is saving this file. Resolve is, at times, still not very good at finding the same folder when you need to recall the LUT.
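For reference, the file Resolve writes is plain text in the common Adobe/IRIDAS .cube layout: an optional TITLE, a LUT_3D_SIZE line, then size³ rows of output R G B values. A minimal sketch that sanity-checks the LUT you just saved (adjust the path to wherever Resolve put it):

```python
# Quick sanity check of a 3D CUBE LUT (plain-text Adobe/IRIDAS layout).

def read_cube(path):
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or line.startswith("TITLE"):
                continue                       # skip comments and the title
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])    # grid points per axis
            elif line[0].isdigit() or line[0] == "-":
                rows.append(tuple(float(v) for v in line.split()))
    return size, rows

size, rows = read_cube("RED-to-709.cube")
print(f"{size}^3 LUT, {len(rows)} rows, complete: {len(rows) == size ** 3}")
```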
Fig. 6 Clip contextual menu.
21. Save this LUT file.
The next step is to apply this LUT to the second clip in the timeline. But first you need to locate the 'hero' shot. When you made your adjustments, you scrolled through the shot and found what could be considered the most important part of the shot. Do the same with the second shot.
22. Scroll into the second shot (the blond girl walking) to the point where she sits down and looks up, at approximately 13:39:36:00 timecode.
Fig. 7 Save LUT window.
23. Right-click on this clip in the timeline and select 3D LUT. This will open the folder where all the LUTs are stored.
24. Look for the LUT you just named RED-to-709. Notice it has the clip name attached to it. That lets you know which clip this LUT was generated from.
25. Click-on this LUT.
Fig. 8 Image before LUT application.
Fig. 9 Image after LUT application.
Immediately the image changes, looking brighter with a more balanced exposure. (Fig. 9) Look at the scopes for this image. Notice that the wall behind her is now off the top of the scope. It's 'clipped'. This means that the Rec. 709 color space could not handle this level of whites, so it 'cut' them off. The camera actually recorded all the detail in the whites, but it is way outside the ability of Rec. 709 to handle.
Scroll through the shot. How does it look overall? At the start where she’s standing in front of the doors, it’s very ‘blown out’ but she looks good. As she walks, she continues to look well exposed. Scrub to the point where she is just emerging from behind the concrete pillar. That concrete is close to middle gray. On the scope, you should see a large mass
about mid-way that represents this gray. It's telling you that the mid-tones are close at this point in the shot, but maybe low when she sits down in a less-illuminated part of the set. As DITs, we don't worry about this. The colorist will take care of that in post production. If the face looks fairly good, we have done our job.
Let's backtrack for a moment to the section on LUTs and the discussion about using them for color correction. These two shots are a perfect example of how carrying color correction from one shot to another would be problematic. The shot of the lady was done in the area of the location where daylight shone on the set from high windows. The overall color cast has a lot of blue in it. On the other hand, the shot of the man has a magenta-green cast caused by the fluorescent lights overhead. If you were to color correct the man and then use that color information on the shot of the lady, it would cause a real problem with how the second shot looks.
You should take a few minutes: go back to the shot of the man, do a color correction to remove the off-color tones, and save that as a new LUT. Then go to the second shot, use the drop-down menu, and select No LUT. This removes the LUT. Then put the new LUT with color correction on this clip and see what it looks like. What would happen if you created the LUT from her shot, with color correction to neutralize the blue cast, then put that LUT on the man? That would be a profound change in that shot. Removing all the blue from the man would create a color balance that is extremely warm/reddish.
So, in our work, we use LUTs to fix luminance levels. You can, in the case of Log C color space files, make some chroma-level changes. Chroma is the 'saturation' of colors. Log C files tend to be flat in both exposure and color saturation. Adjusting those two settings ONLY would be within the margins of quality work.
4 Camera to Editorial:
The Crooked Path
-Required Reading: ‘On Set Workflow’ (for RED footage), http://thepostlab.com/von-thomas-on-set-workflow-formaniac/ Pay close attention to the comments after the article. Very informative.
“…in many cases the job description is shifting from a video engineering role to one more focused on data pipelines, digital color workflow, and bridging the gap between on-set and post.” Ben Cain, DIT and on-set colorist.
A good place to start is by looking at a side-by-side comparison of the typical film workflow and the digital workflow.
Fig. 1 Post Workflow Diagram ©IndependentFilmConsultants.com
What is left out of this diagram (Fig. 1) on the film side (green pathway) is the lab processing of the film stock before telecine. Telecine is the process of scanning the original camera negative into a digital file. In a pure film workflow, a new work print positive of the original camera
negative would be created for editing with a razor blade and glue. For simplicity's sake, this diagram assumes a digital edit process for both formats.
We're going to focus on the digital side and its pathway issues and solutions. The key players in the digital or 'file-based pipeline' have been mentioned previously. The DoP is interested in seeing what has been shot and some on-set color correction. The producers and director want to review the selected takes so they can comment to the editor. The editor needs files he/she can work with right away, which would be a version of the camera original files, in some sort of logical order.
Looking at Figure 1, on the digital side, the box 'File Conversions' (right below digital camera) is the DIT position. In its more basic form, the digital workflow looks like Figure 2.
Fig. 2 Basic digital asset management pipeline.
Post Production Supervisor
The unmentioned player is the Post Production Workflow Supervisor. He or she is focused on all the technical and logistic details of the workflow. On small shoots there's typically the DIT, the editor, the colorist (who might be the editor) and the sound mixer. On bigger-budget productions there will be a supervisory position over these people/job functions. When using established post production services, the post supervisor will be the person that hand-holds the project from the DIT through delivery. The Post Supervisor works with the production company, establishing a file workflow depending on the camera and its file format, so the DIT can work more effectively. The post house doesn't want surprises when the hard drives hit the door. They also allocate resources to manage the job, including: hard drive space, file structures to manage assets along the pipeline, which documentation will be done to keep the project moving, and what type of file will be exported for effects work, audio sweetening and mixing, and the final deliverables. Refer to Appendix B for an example of the criteria that will be placed on the DIT by the post house.
Ultimately, this person is a technical scheduler and problem solver. They are the master keeper of the timetable and the whip to keep the project moving. As a DIT, you must be their friend. Please the post-production supervisor, and you have a lock on work with that production house.
Looking at The Crooked Path
The chapter title mentions a 'crooked path'. It may seem counterintuitive, but what we're striving for is a straight path to delivery, not some scenic tour. There are lots of white papers and flow charts for just about every camera file format and editing software out there, but those road maps don't always apply. Let's take the workflow for the ARRI ALEXA™ camera. There are several workflows, by the way, and which one applies depends on what is actually recorded in camera. The Arri camera can output ProRes 4444 or DNxHD and ARRIRAW™, depending on the camera settings. The ProRes file format can be used directly, without transcoding, within most non-linear softwares. The DNxHD MXF file is native to AVID Media Composer™. Both codecs are a real plus when fast turn-around is required. The ARRIRAW™ file format, like most camera RAW files, requires transcoding. Arri has a free utility called Arri Raw Converter (ARC) to open the raw files for viewing and transcoding (flipping) to a more edit-suitable codec. So the workflow might look like either one of the diagrams on this page. (Images courtesy ©AbleCine Training Resources.)
Fig. 3 Simple straight-to-editing workflow.
Starting with Figure 3: the camera records in the DNxHD or ProRes file format. The camera card is then ejected and handed off to the DIT (not shown), where it can be monitored (reviewed), backed up (not shown) and a one-light color correction applied. One of the backups goes directly to editorial, where it can be connected to the editing computer and work can begin immediately. The finished timeline is then passed on to color grading and sound mixing (not shown), then output as a deliverable. This is a clean, fast workflow.
Fig. 4 ARRIRAW™ workflow.
Figure 4 uses ARRIRAW™ as the camera original. These are big files that must be recorded to T-Link™-certified recording devices (hard drives or SSD media). The DIT will back up the files (shown as Ingest) and save a copy to a RAID array. Part of the ingest is flipping the files for monitoring (review) and one-light dailies. Those files are then passed to editorial, where work begins. After the program edit is 'locked', or approved, the high-resolution files are re-connected to the edit (conforming) and color grading can be done. Then come visual effects (VFX) and compositing. That finished, the resulting high-resolution file is used to create deliverables like DVDs, film negatives for theatrical distribution, TV transmission, or BluRay™.
Fig. 5 On-line/Off-line workflow.
Figure 5 is unique to ARRI, although BlackMagic Digital Cinema Cameras are now touting this workflow. It's both on-line and off-line simultaneously. If you use the T-Link™ recording device, you can record both ProRes or DNxHD and ARRIRAW™ at the same time. You can hand off the compressed files to editorial immediately and process the raw files at a later time, then relink in the last stages of post for optimal image quality. The downside is that this eats up large amounts of hard drive space in a hurry. But the time saved offsets the hard drive volume.
E. Gustavo Peterson, DoP for the movie "Box of Shadows", diagrammed his RED camera workflow as shown in Figure 6. This Final Cut workflow is a bit dated, but the logic and pathways are solid.
Fig. 6 Older RED media workflow using Final Cut Pro editing system.
Take a few minutes to look over this flow chart carefully. There will be terms you are not familiar with, for now. Later in the book you can revisit this diagram and gain a fuller understanding of some of the 3-letter abbreviations peppered throughout. But one interesting note is found just to the middle right of the diagram: the dotted lines marked 'DPX export'. This happens to be the native RAW file format of both the ARRI and the BlackMagic cameras. It's also the typical file format preferred by compositing and special effects houses.
Tips to Understanding Your Workflow
Ultimately, it is important for you to understand the workflow you're going to use on any given production. Von Thomas's article, referred to earlier, helps you understand the importance of figuring this out. Making a flow chart similar to what you've seen is a very good idea. Keep notes on specifics so you will have a reminder. Many DITs make a checklist that sits right beside them, on-set, so no one has to guess what was agreed upon. A tried-and-true practice is to have a bound notebook, like those black and white college notebooks, to keep all your notes from every production you work on. You never know when someone will need some information from last year's TV commercial. Personal organization practices like these separate the professional from the rank amateur.
All workflows are crooked paths. Every production has its uniqueness to deal with. Expect the path to be the scenic tour, not the shortest distance, and you'll be ready for most problems. But you should research as many paths as possible. Forums like Creative Cow, RedUser, and DVX User are great places to learn what others have found useful. Don't re-invent the wheel.
The Cloud
The phrase of the past year is 'The Cloud'. Adobe and AVID now deliver software via the Cloud. Whole data storage businesses have developed around the Cloud concept. It's quite clever, actually. Masses of storage devices with custom software create what appears to be a managed hard drive to your computer, over a large internet connection. What this now offers the media creation industry is a central repository for all the assets from a production project. The assets are then accessible to all the players on the project. The editors, VFX, sound mixing, producers, and studios all can see, and work with, any part of the production, at any time, anywhere in the world where there is a good internet connection.
Ron Burdett, a noted post production workflow guru and emerging technologies consultant in LA, recently stated at a digital media forum, "Hollywood is now all about files, storage, servers and networks." Film is, for all intents and purposes, dead. What little bits of film are being shot move to digital immediately in the 'lab processes', and remain digital through release.
How does this evolution in the pathway to the end product affect the workflow? It makes it simpler, non-site-specific and, fundamentally, faster. The service providers (visual effects, sound, editors) don't have to be together in one building or city any longer. Busy directors, producers and studio executives can monitor the project from anywhere, at any time. It has been common for a director to shoot in one state, fly to LA or New York to look at edits, then fly back to set to continue shooting. Now they can whip out their tablet and watch the output of the camera live, or review the morning's shots, or see how the latest edit is coming together.
There are legitimate concerns about data security. Pretty much any data repository can be hacked. Just because it's digital files for a movie residing on something called a 'cloud' does not make it hack-proof. It probably makes it a target. As this technology rolls out, security concerns are being addressed. The Cloud does not negate the need for backups or standard workflow practices. It's a support tool that has evolved out of technology moving ahead. It's a natural extension of several other existing services re-configured, modified, and customized for a whole new use. As the DIT, if you aren't using the cloud now, you will be pushing your output to a cloud-based workflow very soon.
What You Absolutely Need to Have Clear in Your Head Before You Walk On Set
It seems like there are lots of 'things you MUST know' in this job. It comes with the territory. But let's spell this out in simple terms, because it affects the workflow.
Know the starting point - These are the cameras that will be used and the file formats/codecs they will shoot in. Write it down and do any research that you need to understand the implications for your workflow and software.
Know what they want - This is a multi-part question. For example:
- Do they want dailies? If yes, which format, and with burn-ins or not?
- Do they want the camera files flipped for editorial? Into which codec and at what data rate?
- How are dailies going to be handled? Do you pass them along or do you give them to production for distribution?
- How are files getting to editorial? Will you shuttle a smaller drive with a day or two of work on it back and forth? Who takes care of the shipping? Or is all the
footage going to be delivered daily to an on-site or near-site editor? Or will there be no deliveries until the production is done and one of the backup drives gets sent to the post facility?
- Is there a special file and folder structure they want when organizing the drives and files? Typically, if you have something that seems to work, and it's understandable, they will go with that.
- Who do you call if you have questions? This is a big one.
There are forms included with the book assets that are a starting point for gathering the information needed to answer these questions. Look them over and modify them to meet your needs. Something in writing always wins over 'I thought you said...'
Real World Workflows
Here's a workflow that is typical (if there is such a thing). It starts with an ARRI ALEXA camera.
- Camera shoots ProRes 4444 in Log C. What you know at this point is that ProRes is an easy codec to work with. It does create relatively large files, but not as big as ARRIRAW. The Log C color space is going to be easy to deal with because ARRI already has the LUTs to help with one-light corrections. We'll get more into one-light and what it entails in Chapter 9.
- Transcodes for editorial are ProRes LT. ProRes 4444 files are big, with a lot of datarate bandwidth for editing to handle. So production has decided that you will transcode to ProRes at a reduced datarate. This means you will be applying a one-light correction and syncing the audio files to the video using timecode striped on both files.
- Dailies will be 720p, 24fps, h.264, with specific burn-ins. So now you are exporting two transcoded files for each clip shot; your rendering times just went up. (A rough sketch of this two-export step appears at the end of this section.) Burn-ins are text that is added over the image when transcoded. We will, again, get into this later. Dailies are to be delivered to production on a USB3 128-gig thumb drive, daily.
- Production wants two master backups plus a traveler drive that is shipped to editorial every other day. How many hard drives of what capacity do you need for the production? In Chapter 14 we will learn how to crunch the numbers and figure out how much hard drive space will be needed for the production. The traveler drives are a different story. You will need enough to keep feeding them to editorial. Know that some will be held up on that end and some delayed in shipping back to you. It's very rare that you would amass more than a few hundred gigs of ProRes LT footage on any given day. But you will be keeping a total backup on your personal array as a third backup, so you need to plan for that amount of data storage.
- Day of wrap, what happens next? You should have a clear idea of what will happen the day of production wrap and the days after that. It should be spelled out in writing. How many days do you need to tidy up the backup drives and make sure they are perfectly synced, file for file? What paperwork are you going to provide describing your work and what's on the drives? It's common to deliver at least one full backup drive the next day after wrap. A final, fully conformed backup drive(s) might be a few days out. Of course, that last day's work should be on the traveler/shuttle drive headed to editorial the next morning.
Another workflow happens around a shoot that uses multiple, different cameras. Everything is the same to a point.
- You transcode all camera files into one codec and frame rate.
- You sound-sync all files. This gets a bit more time consuming when one or more of the cameras does not have timecode. You then must hand-sync all shots with their audio files using the visual slate-clap at the start of each shot.
- You still transcode all files for dailies, which is easier once everything is in one codec. Some softwares will allow you to do this without doing the master codec first and then re-transcoding into h.264 for dailies. Big time saver.
- You will still need to deliver footage to editorial. The big rub with multiple different cameras is their rasters, codecs and compliance with the decided-upon frame rate. Just be assured that 2k and 4k are not standard raster sizes between all cameras. And some can't shoot 23.976 fps. This makes your job more focused on how to handle each camera mag that comes to your desk.
Another workflow addition involves a new backup. Often larger productions want a copy struck to LTO tape. In the hardware section we will explain what this tape backup is and how it works. For now, all you need to know is that the machine costs $3000+, the tapes are about $40 each, and they hold about 6 TB of data. Oh, and they copy files at the rate of USB2. This represents a major side task to keep up with while doing everything else.
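As a rough illustration of the 'two exports per clip' load described in the ProRes workflow above, here is a sketch that drives ffmpeg from Python. It assumes ffmpeg is installed and on the PATH; the clip name, data rate, and burn-in text are all placeholders, and on a real show the one-light, LUT and audio sync would be handled by your dailies software rather than a bare command line.

```python
# Illustrative only: one editorial transcode (ProRes LT) plus one
# dailies file (720p h.264 with a text burn-in) per source clip.
import subprocess
from pathlib import Path

SRC = "A001_C001.mov"                       # placeholder clip name
Path("editorial").mkdir(exist_ok=True)
Path("dailies").mkdir(exist_ok=True)

# Editorial: ProRes LT (ffmpeg's prores_ks profile 1) at the source raster.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "prores_ks", "-profile:v", "1",
    "-c:a", "copy",
    "editorial/A001_C001_LT.mov",
], check=True)

# Dailies: 720p, 24 fps, h.264, with a simple burn-in.
# (drawtext may need a fontfile= option on builds without fontconfig.)
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-vf", "scale=1280:720,drawtext=text='A001_C001':x=20:y=20:fontsize=28",
    "-r", "24", "-c:v", "libx264", "-b:v", "8M",
    "-c:a", "aac",
    "dailies/A001_C001.mp4",
], check=True)
```

Two renders per clip is exactly why your render times 'just went up': every camera original passes through the machine at least twice.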
A more current additional duty of the DIT is to back up to a cloud storage provider. These third-party resources are a great boon to the industry when it comes to moving files to all who need them, fast. Well, fast is relative. It won't be two days waiting for a hard drive to arrive, but the upload of gigabytes of data, even over the fastest of internet connections, is a study in patience. There are lots of locations in the US and the world where internet connections are just not fast enough to even attempt a cloud upload.
The final piece of a workflow might be metadata. We will delve into these critical bits of information, attached to each file, later. But what you need to know now is that you might have to manually add scene and take information to each clip shot before it's transcoded. This data is very handy and often required by post production to facilitate searches and organization of the production assets.
Final Thoughts On Workflows
You must understand clearly the workflow for the production. You must get it in writing. You should diagram it out, or at the least, create a checklist.
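That 'study in patience' from the cloud-backup note above is easy to put numbers on. A quick back-of-the-envelope sketch, with hypothetical figures:

```python
# Rough cloud-upload time estimate. All numbers are hypothetical.
day_of_footage_gb = 500        # a busy multi-camera day
upload_mbps = 100              # a good dedicated uplink, in megabits/sec

seconds = day_of_footage_gb * 8_000 / upload_mbps   # GB -> megabits
print(f"about {seconds / 3600:.1f} hours to push one day to the cloud")
# ~11 hours -- before checksums, retries, or a second provider.
```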
4.1 Exercise 3- Find a Workflow
This exercise should take 90 min. to accomplish. You will need: • Access to the web • Simple graphics program to create the flow chart. • Word Processor program to create the synopsis. In this exercise you will search the net to find a published workflow. Select a camera or camera original file format that interests you. This could be: - Canon (5D, 7D, Ti series, Nikon) They all use the same file format. - Canon C500 recording in RAW - Panasonic high end HD camera - JVC high end HD camera (tape or memory device type) - Sony HD camera, XDCAM recording format - Sony F65 recording in RAW format - Sony F5 or F55 - Arri Alexa - RED camera - Black Magic Camera - GoPro (newest models)
What you will do is:
• Search the web and read about the workflow for a specific edit software. You can pick Premiere, AVID, or Final Cut X. The workflow has to be one that meets the standards of full backups, flipping for preview, one-light, delivery to editorial, and output. The assumption is that you will be working in a 1920 raster, at 24 fps, and outputting to BluRay.
• Write up a synopsis of the workflow.
• Create a flow-chart-style diagram of the workflow like you saw earlier in this chapter. Each step must be labeled with information about the technical specs for that step.
___ Submission for grading
Check with your instructor on how this assignment is to be submitted for grading. ______
5
Best Practices On Set
Don’t Ever Break These
“Being organized and knowing which cards have been copied and which have not is essential to large productions. It can get insanely crazy on set (preaching to the choir)…” Dan Montgomery, President, Imagine Software, creator of ShotPut Pro.
YOU handle media. Period. Any questions?
Okay, here’s the drill.
We've spent the first few chapters of this book going over technical stuff. We'll get back to that fairly soon, but for the next few pages we need to talk about actually doing the job of DIT.
Remember that your ultimate boss is the DoP. You are part of the camera department. You will work very closely with the 1st Assnt. Camera folks for each camera unit. Clear communication with all of these people is critical. Most 1st ACs have a specific way of building their cameras for the day's shoot. They have a pattern and process for each take. It works for them, and that's good for the production. Likewise, you must have work habits that will better ensure the success of what you do. Let's go over a few critical points to help your success.
First up, you are a professional entrusted with a very valuable commodity of the production. Act accordingly. People like to be around people they like. It's a real advantage to be 'part of the team' and support their every need. However, there are times when you must be rigidly professional and stand your ground. People respect that. What they do and how they do it is important to them because it's directly tied to their livelihood. The DIT is no different. You will have what seems to be down time while things copy or render. If you can be helpful to your department
during the 'watching paint dry' moments of the day, do so. But don't be afraid to say 'no' to a request if you're busy doing what you do. Actually, I never say 'no'; I give them the options that are presented at the time of the interruption and let them decide what is the more important use of my time. Being distracted is the biggest enemy of your work every day on set.
Pre-production & Prep: It should go without saying that everything isn't anything unless it's in writing. You should have met with the producer or UPM at some point and worked out your 'deal'. The pre-production checklist included with the assets for this book is a good starting point. As mentioned in the Camera to Editorial section, you absolutely have to know the basics of what you're dealing with. Information like:
- what cameras are being used
- the codec they will record in
- how many cameras
- do they want dailies
- what are the deliverables to post
- how many backups, and in what format or media.
The day or so before the shoot, when the camera department is 'prepping', you drop by and snag all the recording media. This 'prep' day may or may not be part of your contract. It's so important to build a relationship with the camera crews. They and you need to feel that you're a part of the same team. But it might be more than just working out camera settings and formatting mags. You might help with camera test shoots, get those files, and work out a custom LUT for the on-set monitors. On some shoots, you will fly or drive in the day before, check your gear and make the early AM set call. Well,
maybe an hour before call. You need to have your gear in place, ready to service the camera department. This is where the attention to detail begins. Make what follows a habit. You want to:
- Mark each mag with a number. Use a Sharpie marker and give each mag a number. Some camera departments will do this as part of their prep. It should also be noted that A cam and B cam will have their own cards and really don't like to exchange them. If you get an A cam card, return it to A cam.
- Each mag needs to be put in the camera to be formatted. You might think that your laptop can format a simple FAT32- or exFAT-based drive, but this is not the case. There are special files and some directory setups that only the camera can accomplish. This is also a good way to weed out bad media. If it won't format, now is the time to find that out, not the first day on location. Bad media will need to be swapped out with the camera rental house as soon as possible. Depending on your location, that could be across town or totally out of country, requiring some sort of expedited shipment.
NOTE ON PROCESSES: Each set is different, this much we know already. Above, we've talked about the 1st Assistant Camera calling you to come and swap out a camera mag. On some sets, the camera department will have a case with a fresh magazine right there with them. And this is fine. When shooting film stock, there is always a fresh magazine of film at the ready; the same can be done for the memory cards or hard drives if you have enough of them. Some productions can only afford two camera storage devices. The workflow then wraps around swapping cards out when the other card is backed up. Now, all this being said about card formatting, each camera has its quirks. The Blackmagic Pocket Camera couldn't format cards in camera. You will have to do that in the computer, and the format is exFAT.
The Arri Alexa uses the UDF format, which macOS can create, but such a card won't be recognized by the camera. Typically the DIT will erase the card to exFAT format, which will trigger the camera to require an in-camera re-format. This takes all of 3 seconds to accomplish once the card is inserted into the camera. And this will be one of those workflow steps you will work out with the 1st AC.
- Put a tab of green paper tape (camera or electrical tape) over the connectors on each mag. Green for 'ready to go'. This is the visual marker that this mag is ready to shoot. The last thing you want to happen is someone questioning 'is this the full mag or is THIS the full mag?' A neat trick learned on a recent shoot was the use of a blank piece of printer label from the typical label printers used by most camera departments. Stick that to the magazine. The camera department will mark it with a Sharpie marker indicating which camera and mag it is. Then, when the mag is offloaded, the DIT will use a black Expo marker to erase the Sharpie markings. This is a trick used on slates all the time. The Sharpie is thought to be permanent, but the chemicals in the black Expo marker will wash it off.
- By doing this process, even with the green tape on the prepped-for-set mag, if the last card number is still on the mag, the camera department will question whether the mag was actually offloaded, and they won't over-write it until it is proven to be backed up. Double fail-safe.
- Offload the mags often. This is a good time to express your feelings about not filling the mag totally full before changing it out. 80% of maximum capacity is as far as you want to go. If you remember the article you read about Von Thomas, he mentioned that he likes to pull the mags "at 30%". If they are using a 128-gig SSD drive, try not to let them amass more than 90 gigs on the drive before it's pulled and backed up. Here's another sobering thought about lots of data in one place: as fast as eSATA is, if you're making 3 backups at the same time, you could realistically see data rates of 1 gig per minute. That could mean 90 minutes to backup (and checksum) those 90 gigs.
Understand that most UPMs budget time and workflows around 1000' film loads. That's roughly 10 min. of run time, or 10 takes of a 30-second scene. This workflow has worked for 100 years and should not be
tampered with. There are apps that will allow you to figure how much digital shooting time will equate to a 1000' film magazine. But with digital and large camera storage devices, directors feel they can leave the camera running endlessly, as well as shoot half the day before offloading the camera magazine. The most vulnerable time for the data on the camera mag is while the mag is still in the camera and not backed up.
- Swapping out camera mags. The camera department calls over the radio, "Camera A, mag change". If you're in control of the mags, you grab a fresh mag and head to set. The 2nd Assnt. Camera person will put a piece of red tape over the connectors on the mag (red means stop before doing anything with that mag) and mark it with the mag number. If it's the first mag of the day for A camera, it would be 'A-01'. You always indicate a mag number with a 0 before the number.
- If you are doing the mag swaps on the camera, as soon as the mag is properly ejected from the camera, put the red tape over the connectors. You do nothing else except mark the mag, immediately. Do NOT pass the new mag off until you've taped the used mag. Never break this rule. Never!
- 2nd Assistant Camera gets the new mag/hard drive with green tape. They remove the green tape before installing the mag.
- Carry the mags/hard drives to and from the camera position in some sort of case. Configure the case for the number of cameras on the shoot. Figure 1 shows a case that holds several different memory formats.
Fig. 1 Protective Hard Drive Cases
Pelican™ Case for Mags.
Think of it this way: the film magazine might have the shot of a multi-thousand-dollar, staged car crash scene that could only be shot once. No retakes. Today you have, in your hot little hands, the 1's and zeros that make up the image for that shot. Damage that digital information and you're in really big trouble. Digital media is far more sensitive to being damaged than film.
I can't count the number of productions I've been on where the camera operator ejects a memory card, puts it in a pocket, digs into another pocket for a card, slaps it into the camera, formats it and continues shooting. You guessed it: several forgot which pocket held the new card and which the full card, erasing all that shooting. The sweat, dirt, and friction inside clothing pockets are disastrous to memory cards. Still, this practice goes on daily by people that put the title 'professional' after their name. The memory card is not magically protecting the data. It will be corrupted at one point or another.
- Back at your DIT station, organize your workspace and keep it that way. Remember the admonition earlier about the DIT being very good with details? Set a way of doing things and do NOT, ever, deviate from that pattern. On the left side of your workspace, keep only the mags that are ready to be ingested. On the right, ONLY the mags that have been fully processed.
- Never remove the red tape from a mag until you insert it into the reader or connect it to the interface cable. There have been cases where a DIT has removed the tape, taken a phone call or been interrupted, set the media down, then can't remember if it was backed up or not. Not a good thing to have happen, a huge time waster, and it totally interrupts your focus. I make it a habit of putting that piece of red tape on the card reader. It will take a while for the backup to complete, and I can easily
see which card is in the reader by referencing the piece of tape.
- Your backup workflow starts now. We'll cover that in detail a bit later (a bare-bones sketch of the core idea follows below).
- Backups still at risk. Your backup went swimmingly. All files are now backed up to your local array and one or two other hard drives. You're not out of the woods yet. Those backup drives are still in one physical location. This next part will depend greatly on what the production you're on requires, or the physical workflow that has been agreed upon. One part of this is for sure: every day, two of those backups need to leave the set for different locations. One needs to be vaulted. This can be a bank vault, a fireproof safe off site, somewhere that will ensure the survival of the hard drives through most catastrophic events. The second backup drive typically goes to the post production house so work can begin. The third backup can stay on set along with your backup on the array.
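Checksumming gets its own treatment in Chapter 7, but the core idea your offload software automates is simple enough to sketch here. This is a bare-bones illustration, not a replacement for a tool like ShotPut Pro (which also handles multiple simultaneous destinations, reporting, and faster hashes); the paths are hypothetical.

```python
# Minimal checksummed copy: hash the source, copy it, hash the copy,
# and refuse to call the backup done unless the two digests match.
import hashlib
import shutil
from pathlib import Path

def md5sum(path, chunk=8 * 1024 * 1024):
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verified_copy(src: Path, dst: Path):
    before = md5sum(src)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)                 # copy with timestamps intact
    if md5sum(dst) != before:
        raise IOError(f"checksum mismatch on {src.name} -- re-copy!")
    return before

digest = verified_copy(Path("A-01/clip_0001.mov"),
                       Path("/Volumes/RAID/A-01/clip_0001.mov"))
print("verified:", digest)
```

Hashing means every byte is read at least twice per destination, which is why the '90 minutes for 90 gigs' estimate from the mag-offload discussion is realistic even on fast buses.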
“My biggest pet peeve regarding on-set data handling is the cavalier attitude people have about it. With film, we hand the negative over to the lowest-paid member of the camera department. With digital data, it's even scarier. Everybody has a computer, and everybody has copied files to a hard drive before, so handling data seems even easier than handling film. Many people assume [the task] can therefore be handed down to the least-experienced person on-set.” Brook Willard, DIT whose credits include The Muppets, The Amazing Spiderman, The Great Gatsby. American Cinematographer, Jan. 2012.
- The vaulted and post production back-up drives go off set daily! Whoever transports them does so in a protective case and signs for them. Transportation (or 'transpo') is the key department to move things, people, equipment, etc. You will make arrangements with the Transportation Captain on how to notify them on a daily basis to have the production drives transported. They might take one to the off-site production offices where they have a safe. The other backup might go to the airport where it's overnighted (counter to counter) to the post production house. This is typically called a 'traveler drive'.
- A cell phone camera shot of the shipping documents can be emailed to you. If the post house is in town, Transpo will deliver there as well.
- The drive to the post house could be delayed a day or so until the files are audio-synced, a one-light is applied, and a format flip is accomplished. It's hard to stay up with the incoming camera media and show the director and director of photography 'quick looks' while you're ingesting, logging, servicing the camera department, etc. The drive that goes to editorial will contain the camera original files AND the conformed files, so they have both in the same place.
Fig. 2 Properly labeled drive from finished production.
If you are traveling internationally, on the return flights when the drives are full of assets, split the drives up between the crew's bags. Never have all sets of drives in one case. If you can ship out of country via FedEx or another quality shipping mode, make sure a full set of asset drives is shipped, insured, and registered.
- Paperwork follows the drive. The camera report, script supervisor ('scripty') notes, and audio department notes will accompany the drive to editorial. If hand delivered, the post house has to sign for the hard drive delivery, and that signed form needs to get back to the DIT on set. If the drive(s) is shipped, then the shipping document is proof that it was delivered.
- If shipped, the shipping case has to be lockable/sealable. Buying good shipping cases like Pelican, Storm, etc., needs to be budgeted for, and they can be reused. Count on a dozen or so protective cases
for shipping. These cases must be able to protect the media in the hands of gorillas. The production (and maybe you) will have to work out a way to get the empty cases back to you for re-use. Often, the week's cases are thrown in a larger shipping box and shipped back to the set. All this 'other' work is part of your daily activities.
NOTE On The Job Duties: Understand that the DIT is part of the camera department and your boss is the DoP. What your boss wants, you do. It is very common for a DoP that is not comfortable with digital cinema cameras to have you, the DIT, set the camera exposure. Many DITs feel this is wrong and outside the scope of the DIT's responsibilities. Generally speaking, the DoP gets paid well to create the look of the production and set exposure on the camera. The key function of determining proper exposure is his alone. That being said, to stay hired, you must please your boss. If they need you to confirm proper exposure, it further re-enforces the importance of you knowing all you can about each camera you will work with.
- ALL drives/mags are numbered. You will need a label maker. If the drives or memory media are not already marked, or the current labeling does not work for the production, you'll need to grab the label maker and create unique identifiers for each magazine. Keep it simple, clearly readable and consistent. Also make sure the labels won't jam the card in the camera or card reader. Some memory card slots on cameras have very tight tolerances.
- Mags are logged each time they are offloaded, to track issues. This can be part of your metadata. A hard log or spreadsheet is part of CYA (Cover Your Ass). Every time a mag comes back to your workstation, log it in. What your log looks like is personal, but something simple (hard copy or spreadsheet) needs to be kept.
- Another bit of information that should be kept is the offloading software's log sheet. This will automatically
be saved with the files when they are backed up, but it would be advisable for you to keep a copy in a folder on your hard drive. There are often questions asked about what got backed up, when, etc. It moves things along if you can answer these questions quickly.
- Prepping the mag for return to set. This is an area of some discussion. Some prefer the mags be erased before returning to the camera for re-use. Others delete just the database file off the card. When the card is mounted in the camera, the camera software will not recognize it and will ask that it be formatted before use. If there is any question as to the integrity of the files off that card BEFORE it's reformatted, there is recovery software that will rebuild the directory, allowing access to the files again. The best safety practice is to have enough recording media for a few days of shooting. This is good practice anyway, as you might not be able to get all the files off the camera media, from a multi-camera shoot, fast enough each day. Even on a single camera shoot, it's reassuring to have the past day's footage at hand, untouched, while you create one-lights and sync audio files. If there's an issue, you can quickly grab the mag and re-load the bad file.
Shelve The Mags, or Reuse?
A Disney feature film looked at the cost of film stock and processing the negative, comparing it to the purchase price of the camera CF memory cards. They were surprised to find the costs very similar. So they opted to never reuse a memory card and instead vault the hard drive backup and the memory card. When the feature film was done, released and out of the production pipeline, those cards were pulled and reused on one of the "Pirates of the Caribbean" franchise films a year later. The cost to the second production… zero dollars for camera mags. A significant savings, even on a huge budget production like Pirates.
You won't find most productions shelving camera mags unless the mags are cheap or the production has the budget. Ultimately, productions see the digital mode as a money saver. Not reusing mags runs counter to their best budget considerations unless the mags are really cheap, like SD cards. They are not going to shelve $1000 SxS or REDmags on any production, no matter what the budget.
End of the production day: you need to get your offload report to production. There's a handy spreadsheet included with the assets for this book that can easily be
modified for your production needs. Once filled out with the day's information, you can email it to production. It always seems that the DIT is the last one off set. Those last camera cards and audio off-loads come in, and you really need to get them backed up and safe before walking off set. Again remember: the data is at risk until it's backed up and secure.
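A hard-copy log works fine, but the same CYA record takes only a few lines of Python if you'd rather keep a spreadsheet. The column layout here is just a suggestion:

```python
# Append one row to the mag offload log for every card that hits the cart.
import csv
import datetime

def log_offload(mag_id, camera, clips, gigabytes, note=""):
    with open("mag_offload_log.csv", "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.now().isoformat(timespec="minutes"),
            mag_id, camera, clips, gigabytes, note,
        ])

log_offload("A-01", "A cam", clips=37, gigabytes=86.4,
            note="green taped, returned to set")
```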
5.1 Where You’ll Hunker Down On-Set
It’s good to be close to the set, but not close enough to allow the DoP and director to camp-out when they are between activities. You have a job to do that requires a lot of concentration and attention to detail. Distractions are not good. You do need to be close enough to camera so that the mags can be run back and forth easily without holding up production. It’s a delicate balance. In some cases you won’t be anywhere near set, for example, with 2nd unit work. You’ll have to work out how
and when mags get moved, and how you communicate. In many productions, the DIT will be at the production offices, which could be blocks or miles away. People in the camera department and the transportation department will need to work with you to move assets in a timely and safe manner. In a more radical, but not all that uncommon, situation, a DIT provider needed to be near camera all day, every day. Nothing new here, except that 'near' was on a small boat close to the floating camera platform. They built an 'away' version of their DIT workstation that would be protected enough to work on a 16' open boat, on the water, day after day without problems. That meant having their own power on board as well. Every show and production will have something new to solve so you can do your job.
Here are the basic considerations for where the DIT workstation should be set up:
• Power.
• Noise.
• Workspace.
• Environment.
Let's take them in order.
-Power. Power is critical. Uninterrupted, clean power. If the electrical department is providing your power, you will need to let them know how much you need (they want to know voltage and amps), and when you need it. If you need to be fired up and working an hour before call and 2 hours after the daily wrap, they need this information. Build a good relationship with them and they will take care of your needs. They also like pastries. Just sayin'.
Fig. 1 DIT station on the set of ‘Hitchcock’. Image courtesy LightIron Systems and fxguide.com.
This being said, bad things may still happen. Generators run out of fuel, cords get unplugged (called a ‘kick out’) and your power can instantly cut off. You must invest in a UPS (Uninterruptible Power Supply) that will run your system long enough to either complete the running tasks or do an orderly shutdown. You need to figure out the amps or watts that your equipment will use,
89
5.1 Where You’ll Hunker Down On-Set and the length of time you want the UPS to support your system. The following link is to a UPS size calculator. (http://www2.middleatlantic.com/UPS/Search.aspx). Although this site sells good UPS systems, you can take the results and find other options that will fit your system’s physical constraints.
Fig. 2 Two different styles of UPS. Lower capacity on the left, higher capacity on the right.

Uninterruptible power supplies come in several different configurations. All of them are heavy because there's a sealed, lead-acid battery inside. The power should be clean. By that we mean the proper voltage and frequency, without spikes or noise. Most UPS's have conditioning circuits inside to filter the power. The better power strips also have spike suppression and some filtering. You don't need power strips with filters in them if you have a UPS, but you do need a filtering power strip if you don't have a UPS. Chaining a filtering power strip and a UPS can cause problems, preventing either from working properly. If you need extra sockets and have a UPS, find a strip that doesn't have the power conditioning.

-Noise. There are two kinds of noise that must be taken into consideration when doing DIT work: the noise you create and the noise the set environment creates. You will bother the set more than they will disturb you. You will be taking phone calls from production, transportation, and editorial to co-ordinate daily tasks. You will be listening to recorded takes while syncing the audio and video clips. It is mandatory that you have headphones as part of your kit so you can work on-set and not disturb the production. Another noise the DIT creates comes from computer fans and hard drives. Although the newer systems and drives are really quiet, you are never sure what the audio boom mic will pick up. If you can be off the recording stage, it's a much better arrangement. Outside noise can interfere with your work as well. People talking, and loud production sounds like effects fans or special effects equipment, will be totally distracting. Side conversations around you are hard to ignore and will break your concentration. You don't need to be a hermit, but being away from distractions is a good thing for the DIT activity.

Fig. 3 DIT Work Stations. Image courtesy RedUser Net forum.

-Environmental Conditions. This should be self-evident, but computers have issues with moisture, dust and heat. You need to set your system up in an environment where none of those problems are present. If you're living outdoors, having a way to mount a large umbrella
on your workstation, shielding you and it from the sun and rain, is a good start.

-Workspace. Workspace is about getting what you need… or making do. Hopefully someone will ask you how much space you need to do your work. That will depend on your kit size and whatever working table space you might require. Ask for more than you need and settle for what you get. Some DIT setups, especially if you're doing more than just asset backups, require more equipment, carts, power… everything. If you fly to the set/location, you'll be traveling light (if carrying a 60 lb.+ Pelican-type case is considered light). You'll have to ask for a folding table and chair(s) to be provided by the production company. Some DITs have a converted van or small motorhome to work out of, so power to the vehicle and a parking spot close to set will be required. Companies like ((Radar)) Mobile Studio (http://radarmobilestudios.com/vehicles/) offer a 40-foot, full-service motorhome with editorial, DIT, coloring and producer areas, all in one package. They also offer smaller, converted Land Rovers that hold the DIT station in the back-seat area, and are self-sufficient, go-anywhere type vehicles, affectionately named ((Radar)) Remote.
Fig. 4 ((Radar)) One Mobile DIT facility with edit and color stations included. Courtesy and ©Radar Mobile Studios
But the move has been back to more portable setups. These can be wheeled racks or Pelican™-type cases. The hybrid is something like a Sprinter van with the interior customized as a working environment, with the ability to roll out the computing core of the DIT workstation for remote or more on-set situations. Be advised that vehicles over 20' long might find it hard to park near set in city environments. As well, the transportation department (TransPo) can demand that they drive your vehicle if the set location is tight. It is also quite common now for DITs to take over the old darkroom space within the camera truck. The good point about this is that the camera truck will be near any location the project is filming. The downside is that these film-loading rooms are small. Very small. And there's extra equipment that might be just as important as the expensive stuff on your cart. Look carefully at Figure 5, showing Von Thomas's DIT workstation. On the right is a vital piece of equipment for him: the coffee maker. Yep, there will be loooong days that you'll have to get through somehow.
Fig. 5 Von Thomas’s DIT workstation on the set. Image courtesy © Von Thomas
5.2 Chapter 5 Review Questions
Answers located in Appendix C

1) The big picture view of asset workflow is...
a. Camera to DIT to Editorial to Color correction to Distribution.
b. Camera to DIT to Production to Vault to Editorial.
c. Camera to DIT to Distribution.

2) Shooting in the ProRes codec means you must flip the files into a more edit-friendly codec before editing.
a. True
b. False

3) The term 'workflow' encompasses which of the following?
a. The camera file's path from the camera to the editorial department.
b. The process by which the DIT gets paid for their work.
c. The total daily hours worked.

4) Green tape put over the connectors of a memory card/mag indicates what?
a. The mag is ready for offload.
b. The mag does not need to be reformatted.
c. The mag is backed up and ready to go back to set.
5) It's good practice, to save time and money, to fill the camera mags as full as possible before offloading.
a. True
b. False

6) When are the camera files/data at the greatest risk?
a. After the third backup but before the drives are shipped off set.
b. From the moment the mag is ejected from the camera, to the point where the first checksummed backup is created.
c. Assets are never at risk because of the high reliability of today's memory devices.

7) It's really not important to log mags when they come into and leave DIT.
a. True
b. False

8) If three backups are made, where does each get moved to?
a. Vault, editorial, production
b. Vault, on set, trunk of DIT's car.
c. Vault, editorial, cloud.

9) The power on set is stable, so you can rely on it to run your workstation.
a. True
b. False
Chapter 6 Heavy Iron, RAW Files and The Great Chase
As referred to earlier, computers have become more powerful and our video files larger and more complex. They seem to leapfrog each other. Take the MPEG2 file format. The sheer structure of this file was overwhelming to non-linear editing systems (NLEs). The solution was a dedicated processing board, plugged into your computer, to handle the coding and decoding of MPEG files. As a further point of reference, computers were just crossing the 1 GHz threshold, processing was 8 bit, and a 'gig' of RAM was an expensive luxury. Comparatively speaking, we have more processing power in a smart phone than the fully dressed-out desktop computer of that day. 'That day' was the late 1990's, by the way. Computer processing power grew at a staggering rate. It soon became possible to process even the most difficult video files in near-real time. But the new codecs and complex file structures threw us back into 'rendering hell'. You'd drop the file on the timeline, and render it before you could play it back. Add an effect, re-render. Want to review the show before making any more changes? Start the render, go to lunch, and hope it wouldn't crash. The cameras generating these files grew in complexity and resolution. Standard Definition (SD, or 720x486) gave way to several new HD file formats. The reason 'several' new HD formats emerged was the standards committee of the Federal oversight organization for broadcasting (the Federal Communications Commission, or FCC). The FCC chose not one but multiple HD standards for the marketplace, among them 720p, 1080i and 1080p. The 'i' stands for interlace and the 'p' denotes progressive scan.
The interlace display format is a holdover from the SD days, where everything shown on the home TV was interlaced, meaning the full-screen image is comprised of two scanning passes. The first screen scan is the 'odd'-numbered scan lines. So lines 1, 3, 5, 7, etc. are 'painted' to the screen first, then the 'even' lines are painted between the lines of the first scan. We see it as one complete image, of course, due to the phosphors on the picture tube of the older TVs and the persistence of vision of our eyes.

HDTV Announced, Standards Set
There were financial and political pressures at play when the new HD standards for broadcast were announced. The manufacturers of all the new cameras, broadcast equipment, TV transmitters, etc. wanted their product to be the standard. The FCC, known for quietly going away and figuring out what single standard would be adopted, blinked and announced that all of them would be acceptable, or legal, to use. For the consumer and the TV manufacturers, this was the worst of all scenarios. Network broadcasters adopted several of the defined formats and cable adopted another. The first consumer HD TVs were expensive and most did not have a tuner in them to receive HD programming, no matter which format was transmitted. As a consumer walking around in the electronics store, which one would you buy? None of them received all the standards. A select few, only one. And they were all expensive. Very expensive. The answer for the consumer was Radio Shack™. They soon came out with a set-top box that would decode at least two of the standards, so you could watch what little HD programming was offered. Like a lot of technology break-outs in the consumer realm, there was the technology, then the patch to make it work, then the next revision that had most of what you needed included. But that still didn't solve the fundamental problem of very little HD programming to actually watch.
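Returning to the interlaced scanning described above, here is a small Python sketch (a toy model of my own, not broadcast code) that splits a frame's scan lines into the two fields:

# Model a frame as a list of numbered scan lines, then split it into
# the two interlaced fields: odd lines painted first, even lines second.
frame = [f"line {n}" for n in range(1, 11)]  # a toy 10-line frame

odd_field = frame[0::2]   # lines 1, 3, 5, 7, 9 (first pass)
even_field = frame[1::2]  # lines 2, 4, 6, 8, 10 (second pass)

print("first pass: ", odd_field)
print("second pass:", even_field)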
Tech-Info: Persistence of vision can best be identified with that spot in your vision just after someone takes a flash picture of you. You see the reversed image of the bright flash in your vision for some time. Persistence of vision is best realized in the movie theater where film is being projected. Each frame of film is projected for one twenty-fourth of a second. Then the shutter on the projector closes. The screen goes black. The room is totally dark. The film projector moves the film ahead one frame and re-opens the shutter for that brief exposure. You don't see the black in-between because the image that was projected is, in a sense, burned into your eye's retina. That image, like the bright camera flash, persists until the next image is shown. The new image wipes away the last one and we have what appears to be a moving image. Old Cathode Ray Tube (CRT) TVs and computer monitors have phosphors on the inside of the screen's glass. When the electrons were shot onto the screen, the phosphors would glow for a brief moment in time, then decay to black. All this happens so fast, you can't see the decay, only the glowing image. To the eye, with its own persistence of vision, all appears to be smooth and fluid. The interlace method of creating an image on the screen relies on the eye's persistence of vision. A fun test for this image 'burn in' is to stare at a photo on the computer screen for 20 seconds, then look at a white sheet of paper. You should see what amounts to a negative of the computer screen. It will fade within seconds.

HD Production
The transmission of HD programming, at the TV station level, became a monumental and very expensive hurdle to overcome. Quite literally, there was not a single part of the old SD system, except for the audio board and speakers, that could be re-used for HD. Everything had to be built from scratch. And that was just the broadcasting side of the signal path. To get this new HD programming to the broadcast outlets, new HD cameras were created by all the major manufacturers. Panasonic's SD line of cameras was upgraded and the DVX-100 emerged as the HVX-200. Canon's XL line of cameras re-emerged as the XL-HD. The race was on. With each camera, new codecs and file structures emerged. The driving force behind good-quality images is data rate: how many ones-and-zeros can we move from the camera sensor to the recording media. The more we can move, the better the image. 50 Mb/sec data rates gave way to 75, then 100. While this was happening, the
codecs that ultimately wrapped the images became more complex, and that complexity needed, you guessed it, more computer horsepower. Seeing a trend here?

Emergence of RAW Files
Then the computer killer appeared: RAW digital cinema files.
Fig. 1 Front projection, 3 CRT TV. Image courtesy Sears Corp.
Almost all the HD cameras were shooting in 720 or 1080 raster sizes (1920x1080 is still the most common HD raster format today). The digital cinema class of cameras weighed in at a minimum of 2000 pixels wide and, the real heavyweight, 10 and 12 bits of color depth.
Tech-Info: With the first over-the-air, commercial HD broadcast in 1996, the adoption of HD was set to a timetable. With standards in place, the FCC stipulated that all SD signal broadcasts would stop around 2006. There was a huge uproar by the broadcasters at the seemingly short timetable and the capital expense required to do the switch-over. The date was moved to June 2009. It was not what one would call a race to get HD programming on the air in local markets. They were the most impacted by the high costs. In the Salt Lake television market, two stations (KTVX-ABC and KUTV-CBS) found it cheaper to buy or build a totally new facility and, one night, turn off the old studios and turn on the new HD studios. Smaller markets, where one mom-and-pop station served the rural communities, simply went 'dark'. They could not afford the transition. To make matters worse, the HD signal bandwidth was so wide that, to carry the new signal, all the TV stations had to move from the VHF frequencies to UHF. That meant totally new transmitters and transmission antennas. What about the world? The target date is 2016 for the entire world to switch over to HD, no matter what the broadcast standard (US is NTSC, Europe is PAL, Russia and other countries are SECAM).

File sizes grew exponentially and so did the complexity of the decoding processing. Uncompressed 1080i high-definition video, recorded at 60 interlaced fields per second, will fill hard drives at a rate of roughly 410 gigabytes per hour of video material. Computers were, again, not powerful enough for the task, and rendering became the retro-new workflow process.
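The arithmetic behind that figure can be checked in a few lines of Python. The sampling assumptions here (8-bit 4:2:2, giving 2 bytes per pixel, with 60 interlaced fields making 30 full frames per second) are mine, not the book's, but they land in the same ballpark as the 410 GB/hour quoted above.

# Estimate the data rate of uncompressed 1080i video.
# Assumptions (mine): 8-bit 4:2:2 sampling = 2 bytes per pixel,
# 60 interlaced fields = 30 full frames per second.
width, height = 1920, 1080
bytes_per_pixel = 2
frames_per_second = 30

bytes_per_second = width * height * bytes_per_pixel * frames_per_second
gb_per_hour = bytes_per_second * 3600 / 1e9
print(f"{gb_per_hour:.0f} GB per hour")  # ~450 GB/hour, in the same
                                         # neighborhood as the 410 quoted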
Media producers, editors and graphic artists are a very minor part of the computer purchasing equation. But they need the biggest, most powerful processing machines they can buy. Most had machines custom built to meet the demands, and those custom integrators made lots of money building them for the new digital film business. A strange thing happened that paused the further development of more powerful consumer computers. Big desktop computer purchasers stopped buying the latest models. When, say, Boeing Aircraft buys computers for an upgrade, we're talking about thousands of machines. These corporate buyers looked at the average use for these machines and found that the new 2.x GHz processor was so far over-the-top in processing power for spreadsheets and word processing, the budget accountants put a stop to new purchases. The ripple effect was financially catastrophic for many computer makers. IBM, one of the first PC manufacturers, dropped out and nearly went bankrupt. Dell hung in, but Gateway fell by the wayside, only to re-emerge with a different computer focus. Many others litter the computer roadway, with the newest, more powerful CPUs in development but no market for them. Media professionals need these fire-breathing monsters badly, but the digital media industry was, and still is, a very small piece of the purchasing pie. Through all this, Apple focused on this target market, the creatives, but struggled with its own identity. They adapted and survived, and remain the media industry's primary platform of choice to this day. Today, the custom computer builders are back at it, creating machines that are tweaked to the highest degree, squeezing out all the processing power potential to be thrown at digital file processing.

Processing Issues
Media professionals are now faced with two significant computer processing issues that require sheer, brute force: CPU power and interconnect bandwidth. All driven by the new 3k (3000-pixel wide), 4k, 5k and soon, 9k raster sizes, some of which are pushing 16-bit depths. Let's break this down, because they are different sides of the dilemma. When it comes to rendering, playing back or decompressing a file, the slowest links in the process are the processor and the computer bus. You need both the 64-bit wide bus and the high CPU clock rates to crunch
the files. We'll go into this 'bit width' in the hardware chapter.

GPUs Leveraged
There is a new wrinkle: GPUs, or Graphics Processing Units. These had been relegated to video display cards until now, but many software developers have found ways to leverage these very, very fast processors, and the memory, residing on the graphics cards. Adobe's new processing engine is GPU-based, and rendering times are very fast, if not eliminated. The rendering, if needed, can be done in the background while you're working on editing. You can now buy cards that have thousands of GPU cores on them, solely for this kind of task. Couple huge amounts of RAM with a fast processor and the computer will operate at its maximum potential, provided the software has been written to take advantage of these features. That is another issue, external to the hardware, that a digital media professional has to take into account. I am writing this on a 4-core, 8-thread laptop. It would stand to reason that processing would be faster on this machine than, say, a single- or dual-core equipped computer. Not so. There are very few software packages that maximize the multiple processing cores, and even fewer that can saturate, or use, each core to its maximum processing level. Code to do this is difficult to write.

Input-Output Bottleneck
Input-Output, or IO, is the next bottleneck in our systems. This comes in many forms. Interfaces like USB and FireWire and eSATA all have limits. The amount of cache memory on the hard drive affects transfer rates. As fast as the hard drives might seem, they are still the slowest link in the chain. Why? They are mechanical: a spinning disk with a mechanical arm that has to move back and forth across the platter, locating data. And there are more physical and technology issues to slow our workflow down. The key to this I-O issue is latency. It's measured as IOPS, or Input/Output Operations Per Second. This takes into consideration the function of the system that controls the storage device: both the hard drive itself and the controller card. The process of writing a bit of information to the drive goes something like this:
• Packet of data heads to the storage device.
• Interface card grabs the packet. 1 cycle.
• Interface card checks the storage device to see if it can take the data. 1 cycle.
• Drive says 'send it to me'. 1 cycle.
• Interface card acknowledges. 1 cycle.
• Data is sent to cache memory. 1 cycle.
• Data is then sent to the recording media. 1 cycle.

We also need to factor in the physical movement of the recording head on a spinning disk drive. More delay. These delays are measured in microseconds, or millionths of a second. The lower the number, the better. But even these numbers are slippery. Some quick checking on the web will show that some drives claim very low numbers and SSDs have the best specs. Within those SSD specifications, some are quite a bit better because the controller on one writes to the memory more efficiently.

Interconnections
The next issue is the interconnect between those drives. You're familiar with USB, USB-3, FireWire (400 and 800), eSATA, ThunderBolt™, SAS, and maybe less familiar with Fibre Channel and some of the new hybrids. We will consider each of these in the hardware section of this book. For now, I have just listed them in order of throughput.

The Big 'System' Picture
In the assembly of a DIT system, everything must be taken into consideration to maximize your processing workflow. But don't make your decision based on manufacturers' specification sheets. They were written by the marketing department with help from the technicians. Numbers taken at face value are slippery at best. You need to do some tests or get direct information from those who have. Getting back to the core of our discussion here, we still need 'heavy iron' (aka computing monsters) to make a dent in the emerging camera-generated data. They are few and far between. Apple, after not upgrading their flagship MacPro desktop for four years, released a new MacPro with jaw-dropping processing abilities. (Fig. 2)

Fig. 2 MacPro computer. Image ©Apple Computer
But lest you think you can buy the lower-end MacPro for DIT work, you will be surprised. Not in a good way. Initial testing does indicate that they are formidable processing machines. But only the fully 'dressed' models, and those are really quite expensive.
Windows OS and DIT Work
Then there are the Windows OS™-based, Intel™-processor machines being used. About 20% of the media industry uses these computers, based on the available software tools. Ultimately that drives the decision: are the software packages you need even written for this or that operating system? But there is a second, often more problematic issue with Windows-based machines. Until just recently, there wasn't a ProRes compressor for Windows OS. The OS could read the codec; it just wasn't able to actually do the compression. Add to that the problem of hard drive formatting not moving easily between Mac and Windows platforms. Mac uses the HFS file system and Windows uses NTFS. Without special software on both machines, neither can read or write to the other's drive format without issues. Most post houses are Mac-based. That should tell you something about your decision. On par in processing power with the Apple MacPro, Windows OS™ machines are both less expensive to build and offer faster processors. Some media professionals have gone to the 'dark side', as it's referred to, and built super-fast Windows-based machines, then run the Mac OS on top of the existing system. These are referred to as 'HackInToshes'. Very fast, but temperamental. Unless you're really comfortable with computer hardware and kernel code hacking, this may not be a good choice to risk your employment on. A second push is emerging, and that is the Unix operating system, or some derivative. Unix, Linux, etc. share a wonderful core code that is very efficient, fairly easy to code and, fundamentally, uses every bit of processing power that is available with little code effort. The Mac OS, since the first v10 release, has been Unix-based. More and more programs are being released with Unix/Linux among their supported OSs. Worth watching. So we are not out of the woods yet when it comes to catching up with the demands of our business. There always seems to emerge a new file structure or data rate or bit depth associated with the digital media files that keeps really fast processing power just out of reach. In the next section, we will look deeply inside the key parts of any machine that has to process digital media files. You need a firm understanding of how these systems work and how all the parts play in harmony, creating a tool that will keep you moving ahead on a daily basis.
6.1 Basic DIT Workstation Hardware
Required reading: Ultimate D.I.T. Cart, http://library.creativecow.net/thomas_von/magazine_31_Ultimate-DIT-Cart/1
Note: As is the case with all articles that mention specific hardware and software, they get dated fast. The basic configuration and system goals don't change.
Defining DIT: The Right Tools For the Job, http://nofilmschool.com/2013/10/dit-table-dit-professional
There are a number of basic configurations for DIT workstations. The key is matching the hardware and software to the job. Some configurations are focused on color correction on set (one-lights). Others are processing behemoths, leveraging two desktop CPUs, incredibly fast internal networks and several high-end (expensive!) processing cards. But they all do the same thing: move camera-original data to backup hard drives for safekeeping or further work. Figure 1 exemplifies a workflow model for RED footage.

Fig. 1 Basic diagram of a simple DIT hardware configuration.

No matter what the workflow or camera, the workstation needs to be purpose-driven. In other words, know what you are trying to accomplish and build to that goal. That being said, some configurations work across a wide variety of camera files or tasks. If you aren't going to work with RED footage, there isn't a need to spend thousands of dollars on the RED Rocket card (Fig. 2), which specifically focuses on render-processing of R3D files in near real time. This card can't be used when crunching AVCHD, XDCAM and other non-RED files.
Fig. 2 RED Rocket card. Image courtesy RED Cinema.
There are generic processing accelerators, however. The Fusion-io™ line of cards offers tremendous processing speeds for any task that can be cached or includes disk caching (Fig. 3). It does NOT have any firmware on board that can specifically compress or decompress codecs. It just moves anything on it, faster. Think of it as really fast memory.
TECH NOTE: The downside of Thunderbolt is its inability to access GPU and CUDA technology through the Thunderbolt interface. This is an issue with Intel and how they have written the Thunderbolt standard. If it gets an update allowing access to external GPU processors via Thunderbolt, it will be a great addition to the DIT workstation. If you look at the new MacPro, it's all Thunderbolt and USB3 connections for external devices. This is a limitation. The new MacPro has 6 Thunderbolt ports but only two controllers. This will cause an issue with maximizing the 20-gigabit potential of Thunderbolt. A single monitor, connected via Thunderbolt, will consume 4 gigabits of throughput. Two monitors will hog up 8 gigabits, leaving only 12 gigabits for data. That is dropping very close to USB3. Again, we'll have to see how all this shakes out over time.
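The bandwidth budget in that note is simple to tabulate. A quick sketch, using the per-controller and per-monitor figures from the text above:

# Thunderbolt bandwidth left for data after monitors take their share.
# Figures from the text: 20 Gbit/s per controller, ~4 Gbit/s per monitor.
CONTROLLER_GBIT = 20
MONITOR_GBIT = 4

for monitors in (0, 1, 2):
    remaining = CONTROLLER_GBIT - monitors * MONITOR_GBIT
    print(f"{monitors} monitor(s): {remaining} Gbit/s left for data")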
Fig. 3 Fusion-io Drive card. Image courtesy Fusion-io
There has been more effort by programmers to leverage the CUDA™ technology found on graphics cards. These GPUs are very fast and relegated to only one task: processing images. This offloads a huge amount of processing pressure from the computer's CPU. RED Cinema and others have begun writing code for their software that will seek out and utilize any CUDA processors. These cards are somewhat expensive, but less costly than the RED Rocket, for example. Today you can put in two of these roughly $1000 graphics processing cards and render files at nearly the same rates as the dedicated processing cards. This is something to keep your eye on as the software gets rewritten. Ultimately, you will put $20,000 or more into a basic workstation and close to $50,000+ into one geared to bigger, high-end productions.

DIT Station, Pieces, and Parts
Now is the time to look under the hood of a typical DIT workstation configuration. It's important that
operators understand how their machine is configured. What goes where, and how it’s wired.
Fig. 4 Typical, simplified, DIT workstation hardware configuration.
Figure 4 represents a typical workstation that will handle a wide variety of workflows. We’ll spend a few pages discussing the individual parts.
Choosing a CPU
CPU: More powerful laptops are developed every 6 months or so. I'm writing this on an 8-core MacBook Retina with really fast memory and an SSD. It's quantum leaps faster than the last fully decked-out MacBook I owned. Same goes for Windows-based laptops, and for 'desktop'-format machines. Which do you choose?
Fig. 5 Road-case-configured DIT station. Image courtesy BigFoot Mobile Cases.

Your CPU decision should be made based on two considerations: pure processing power and software requirements. If the software you need only works on one OS, then buy or build your CPU accordingly.
Fig. 6 Rogue4 DIT portable station. Courtesy DIT Stations.

It should be understood that virtually all DIT stations out there are Mac-based. With the new MacPro computer, the video end of the media business might just have the infusion of pure horsepower it needs.
The downside to the new MacPro is expandability. There is virtually nothing you can do to upgrade the inside of this small-form-factor computer. Any expansion or additional processing cards, hard drives, or interface cards must be connected via a Thunderbolt port. This might be a significant limitation.
Fig. 7 On-set color grading workstation. Image courtesy RedUser Net forum.
What might also drive your decision is portability. Do you need a full system in a box (Fig. 6), or can you move up to something built into a cart as shown in Figure 5? They both have their strengths and shortcomings. For example, I have worked a show with a full system stuffed inside a Pelican case. It worked great, but I hated the ergonomics of the system. Still, it was the perfect system for the job and got me near set where I otherwise would not have been with a cart-based configuration. If you are interested in a larger-format computer, a tower for example, there are some pluses to this decision. One is the case size. This allows you to put more drives, interconnection boards, and better video cards inside the one case. Not only can you have a sizable system drive, but these tower cases typically will have enough room for a 4-drive array, a card reader, and an LTO tape drive. And when a new video card or faster processor comes to market, you can easily swap those into the case, keeping your system fresh and up-to-date. Figure 8 shows a typical, custom-built media processing station with an 8-drive array, and customizable options making it a true processing workhorse.
Fig. 8 ProMAX One+ custom media processing machine.

Custom-built, Windows-based machines that run the Mac OS, or 'HackInToshes', are blisteringly fast, but stability is an issue. The advantages of this configuration, besides the pure processing power, are that the cost to build them is substantially less than the Apple-branded products and they are easily upgradeable. If you decide to venture into the HackInTosh realm, be ready to spend lots of hours troubleshooting, tweaking and researching this alternative before you ever use it professionally. What you are looking for in a CPU, the core of the entire system, can be defined in four points: fast CPUs, fast memory (and lots of it), internal bus speed, and fast connections.

-Processors. Inside the CPUs, the processors are getting faster and offer more 'cores' to process the job at hand. Multi-core CPUs have not been fully leveraged by many software manufacturers, yet. So you might not see a great increase in rendering with an 8-core processor over, say, a 4-core machine. But this is changing rapidly. It can be complicated writing code that maximizes multiple CPU abilities. Some software can leverage multiple processors, but not to saturation. In other words, it will use 7 of the 8 available processors, but only at a rate of 50% of each processor's potential ability. HP's Z840 series of machines offers an upgrade to 32 processor cores. This machine has some interesting design implementations that make it the go-to machine for heavy video processing.

-Bus speeds. These are not well understood by most, nor is it often clear what the real throughput of a machine is unless you do some research. Simplistically put, you can have the fastest hard drives and smoking-fast CPUs, but if they can't get that data moved around on the computer's bus, it's like having an eight-lane highway moving 15 mph in rush-hour traffic. The highway is fully capable of legal speed limits, but the sheer volume of traffic and demand, with lane shifting to get on and off the highway, brings the whole system to a crawl. Bus speeds can be defined in bit numbers (8-, 16-, 32- or 64-bit bus widths) and 'flops'. Let's take the 'bit' numbers first. This concept is actually easy to understand.
-Bits of Data. We talked about the example of a freeway and the number of lanes available to move cars, side by side, down the road. The more lanes, the more cars can move in the same direction at any given time; theoretically, of course. If the cars could be put 6 wide, bumper to bumper, that would be a 6-bit-wide highway. The bit width of a computer bus is very much the same. The more bits it is capable of handling in one cycle, the faster the processing will get done. Now, there are other mitigating factors, just like the freeway. If you have the latest and greatest 64-bit bus and the OS is 64-bit 'able', but the software you're using is only 32-bit, then the processing task won't be faster. This is due to the limitation of the software to utilize the total ability of the system. Another factor is the 'speed' of the bus. That's measured in gigahertz. The higher this number, the faster the data moves down the bus. But this is not the whole story. Interestingly enough, the old Mac Pros (often called the 'cheese graters') were 32 bits on the bus but 64 bits within the processor. That changed about 2008, and those updated versions, with 12 processor cores, are still desirable machines for what we do. And they are cheap on the used market.

-'FLOPS' is another measurement. (Who makes these names up anyway?) This is a measure of CPU speed: floating-point calculations per second. A command or task is received, processed, then sent out, and the rate at which the processor completes those floating-point operations is its FLOPS number. There's lots to understand about this if you're inclined to get very technical. Ultimately, it's a smoke-and-mirrors game when the manufacturers start their marketing. The FLOPS measurement is an accurate determination of what a processor can do. If heavy processing is required, then you might want to research which drive or accelerator board has the faster throughput. It is quite possible that a statistically slower computer motherboard will process data faster than a
6.1 Basic DIT Workstation Hardware statistically faster one. It all depends on the efficient design of the firmware and driver software. All this being said, Apple has announced their new professional range of computers, claiming 7 teraFLOPS in bus speed. That 7 trillion calculations per second! This is, in anyones book, staggering. - Memory. The Random Access Memory (RAM) is used to store and move data inside the computer. It has several criteria to gauge it’s abilities; bit width, amount of ram, and speed. Increasing the amount of RAM in your computer, is the cheapest speed-boost for your computer. Most of the applications you will encounter want 6 gigs of ram to operate. If that RAM is not available, then the system will allocate space on the hard drive. A scratch disk if you will. This is much slower than RAM and there will be a noticeable slowdown in processing if the computer has to use hard drive space. Large, professional color grading programs like Assimilates’ SCRATCH™ can use every bit of 256 gigs of RAM if it’s available! That is a significant investment in computer RAM so you must weigh the benefits with the cost. - IO Speed. This is where there’s a lot of voodoo in the marketplace. Take, for example, USB. The first and second versions were acceptable for thumb drives and ‘consumer speed’ file handling. Then came USB-2, which is faster. Then USB-3 with it’s ‘advertised’ bandwidth above Firewire 400. Now USB-3a has been announced which claims 10Gbit bandwidth. But there is more to the story and it’s wrapped up in how the data goes back and forth along the cable. File handling to-and-from a storage device is managed by the computer and the interface card in the hard drive. It sends (writes) a file to the hard drive. As the file is written, the computer asks the hard drive, ‘did you get what I sent?’ The software on the hard drive, and its interface board, pause the write process to answer ‘yes that was good’ or ‘nope, didn’t get the complete data transfer package, please send again’. The computer will pause, interrupting the continuous stream of video data, then re-send that packet of information (if needed) and the process continues. This is exactly how ethernet and other “packet’ized” data has been handled for decades. USB uses the same bus for both communication directions. Firewire, on the other hand, has a different standard and process. Think of it as a small
pipe inside a larger pipe. The larger pipe carries the stream of data, in our case a video file that is being recorded from the camera's processor, and the smaller pipe is the back channel that handles the 'yes' and 'no' stuff. Video is a continuous stream and can't be interrupted. When it is, we may lose frames of the video. Not good. FireWire was invented for more accurate video and large file transfers. A quick look at this table shows the 'potential', or ideal, speeds of each of the most common connections:

• USB 1.1 = 12 Mbit/s
• FireWire 400 = 400 Mbit/s
• USB 2.0 = 480 Mbit/s
• FireWire 800 = 800 Mbit/s
• USB 3.0 = 5 Gbit/s
• eSATA = up to 6 Gbit/s right now. It does depend on the internal SATA chip.
• ThunderBolt = 10 Gbit/s each direction.
• Thunderbolt 2 = 20 Gbit/s, one direction only.

Real-world tests often show far lower data rates due to all kinds of factors, the biggest of which is the software and firmware design of the controller chips.
FireWire, eSATA, SAS and Thunderbolt
FireWire 800 is faster still, and enjoys the dual communication pathways pioneered in the 400 version. However, the implementation and further development of FireWire is dead.

-eSATA has found favor for interconnection between the computer and the drive arrays. Not only does it have a higher transfer rate, it also has a wider bus. eSATA uses what are called 'lanes', or parallel data paths. By doing so, it moves huge amounts of data very fast, and can maintain this data throughput from several drives on the same bus. eSATA is hard to beat when it comes to proven data handling. The drives themselves can be the bottleneck when it comes to reading and writing information. Take for instance the SAS (Serial Attached SCSI) drive interface, which is capable of 6 Gbit/sec data rates. There are virtually no spinning-platter-based drives in existence that can keep up with this potential throughput. However, if you configure that potential into a RAID of drives, you have the ability to move very, very large amounts of data with no delays caused by the drives or media. The new Thunderbolt™ is all of these on steroids, or so it would seem. Thunderbolt™ has its issues living up
6.1 Basic DIT Workstation Hardware to the advertised transfer rates unless the firmware of the hard drive or connected device is well crafted. The biggest issue for the end user, is finding drives and arrays that are Thunderbolt™ enabled. There has been, for whatever reason, a painfully slow rollout of this interface. But one thing Thunderbolt™ has in spades is the ability to pass data, power and video, plus daisy chain (link in series) up to 6 drives and monitors. The next version of Thunderbolt offers even more abilities. The spec’d throughput for Thunderbolt v1, is 5 Gbits/sec/channel or 20 Gbits/sec. total. Crazy stuff, and the next version offers more of everything, but only two channels and each are 10Gbits/sec. We’ll see how this affects the true through-put and usability of the interface when it hits the streets. -SAS- This stands for Serial Attached SCSI (Small Computer Serial Interface). Without going to deep into this system and sub system standard, it’s basically a hybrid of the SCSI standard that creates an environment for much faster data transfer because the ‘overhead’ needed to actually write data to a storage device is reduced. Some would refer to the ‘overhead’ as ‘latency’ because all the ‘housekeeping’ that needs to be done by the firmware attached to the drive, slows the data transfer down. It is more expensive to implement within a DIT station, but well worth it if you’re handling masses of
Fig. 9 LaCie Rugged Drives. The left drive has USB2, Firewire 400 and 800 ports. The drive on the right has eSATA and USB2 connections.
The ACES workflow standard will require SAS-driven storage devices to keep up with the volume of data.

Maintaining Throughput Speed
There are two keys to maintaining high throughput on your system: segmenting the busses, and latency. First, the busses. Keep them separate whenever possible. You'll see computer specifications listing 3 USB3 ports, 2 FireWire ports, etc. But there is typically one bus for each interface type, and all the 'same' ports share that bus. This is where the throughput can be reduced. Here's an example: you have a FireWire 800 port on your computer. You hook a FireWire 800 drive to that port, and the transfer speeds are good. You can daisy-chain FireWire 800 drives, which is a very nice feature, except each drive incrementally reduces the transfer speed. If you are copying files to four daisy-chained drives, the actual throughput will be one quarter of the potential of each drive. Additionally, the potential for data loss increases. But that's not the total picture. You now know that daisy-chained drives are not a good idea, so you put each backup drive into a different FireWire 800 port on your computer. Inside, both connections share the same bus, so the speed to each port is reduced as you add more drives. This is still far better than daisy-chaining. If you were to build up your own computer, you could put several individual interface cards on the computer bus and keep the throughput as high as possible. To make matters worse, let's say your drive has both FireWire 800 and 400 ports. You connect the first drive in the chain via the 800-speed port. Then you connect a second drive to the first with a FireWire 400 cable. That 400-speed connection now controls the entire transfer process. The entire bus will slow down to the speed of the slowest drive in the chain. In simple terms, the FireWire 800 port is now moving at 400 data speeds. How do we get around this? Simple: keep like drives on separate buses. But remember, the slowest drive on the bus brings the entire bus down to its speed. So if you have two USB-3 ports on your computer and connect a USB-2 drive to one and a USB-3 drive to the other, and both connectors are on the same internal bus (better computers will have a separate bus for each connection), the data transfer will move at USB-2 speed.
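Those two rules (devices on a shared bus split its bandwidth, and the slowest device sets the bus speed) can be captured in a few lines of Python. This is a back-of-napkin model of the behavior described above, not a vendor formula:

# Back-of-napkin model of shared-bus throughput: the slowest device
# drags the bus down to its speed, and the devices then split that
# bandwidth among themselves.
def per_drive_mbit(bus_mbit, device_speeds_mbit):
    bus = min([bus_mbit] + device_speeds_mbit)  # slowest link caps the bus
    return bus / len(device_speeds_mbit)        # ...shared by every device

FW800, FW400 = 800, 400

# Four FW800 drives daisy-chained on one FW800 bus: one quarter each.
print(per_drive_mbit(FW800, [FW800] * 4))     # 200.0 Mbit/s per drive

# One FW400 drive in the chain drags the whole bus down to 400.
print(per_drive_mbit(FW800, [FW800, FW400]))  # 200.0 Mbit/s per drive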
6.1 Basic DIT Workstation Hardware eSATA does not daisy chain. It works like USB, in a star configuration. It’s common for a DIT station to have a Thunderbolt™ connection to a eSATA hub. If you think about this, it makes perfect sense. Very fast main pipeline from the computer to a hub that will then separate out the massive bandwidth to other fast
pipelines. Most eSATA drives have a throughput of 150-500 MB/sec. In this configuration, the connection 'hub' or card is SAS-based and has 6 Gbits/sec potential, and the Thunderbolt™ has 10 Gbits/sec. There is no bottleneck. Figure 10 shows five eSATA drives connected to a 5-lane PCIe card. This is the fastest possible eSATA configuration, if the controller chips are well designed. Each drive is connected to its own data 'lane', which is directly connected to the internal bus of the computer. Even this configuration is not fast enough to handle the massive volume of files emerging from the daily output of today's RAW-file-producing digital cinema cameras.

Fig. 10 eSATA star-hub configuration. Internal computer interface card connected to 5 eSATA drives.
-Latency can best be understood as 'delays'. Every command or request causes a delay in the transfer of data. It's measured in microseconds, or millionths of a second. This seems a very short period of time, but internal processing speeds are counted in billionths of a second. Several microseconds, by comparison, is significant. They add up fast when reading a file from a drive, transcoding that file, then writing it back to a drive. The entire path interjects delays, or latency. And therein lies the beauty of using a flash memory card, like the Fusion-io, to hold files while processing. That card has as close to zero latency as you can get.

-RAID, or Redundant Array of Independent Disks (also known as a Redundant Array of Inexpensive Disks), is the best means for moving and storing large amounts of data, fast. The concept is simple: bind several (2 or 4 or 6) of the same drives together through software and hardware, then write the data across the multiple drives, leveraging the cumulative bandwidth and write speeds of all the drives at once. Data throughput can be very high, often matching the speed of the connection to the array. The drawback is that the data is not all on any one drive, but shared across all the drives. If one drive fails, all the data might be lost. The way around this is to use one of the RAID striping schemes that adds parity protection.
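The trade-off between the RAID levels can be summarized numerically. A minimal sketch using the textbook capacity rules (RAID-0 stripes across all drives with no protection; RAID-5 gives up one drive's worth of space to parity, RAID-6 two):

# Usable capacity and failure tolerance for common RAID levels,
# given n identical drives of a given size (textbook rules of thumb).
def raid_summary(level, n_drives, drive_tb):
    parity_drives = {"RAID-0": 0, "RAID-5": 1, "RAID-6": 2}[level]
    usable_tb = (n_drives - parity_drives) * drive_tb
    survives = parity_drives  # drive failures the array can absorb
    return usable_tb, survives

for level in ("RAID-0", "RAID-5", "RAID-6"):
    usable, survives = raid_summary(level, n_drives=4, drive_tb=3)
    print(f"{level}: {usable} TB usable, survives {survives} failure(s)")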
Fig. 11 Images (l to r) Two drive RAID-0 striping diagram. G-Drive 8TB array with swappable drives and G-Drive 4TB array with Thunderbolt™. Image courtesy G-Drive
Now, back to that HP Z840 series computer mentioned earlier. HP put in two CPU sockets. Each can handle multi-core CPU chips, up to a total of 32 cores. Each CPU has its own bus, and the PCI card slots are split, with some on one CPU bus and the others on the second bus. The internal processing and data-moving speed of these machines is about as good as it gets.
RAID-0 is the fastest, but the least safe for the data. With any corruption or drive failure, all data is at risk. RAID-5 and 6 have a special part of each drive, referred to as a 'parity sector', that tracks the data on the other drives. If one of the drives fails, the failed drive can be replaced and the parity sectors on the other drives will rebuild the replacement drive. This parity arrangement offers data security at the expense of data throughput.
As part of the DIT process, we already have individual backup drives. The speed of the array is important; that speed is used for other DIT tasks where consistently fast read-write performance matters. If a drive fails in the array, we can rebuild the lost data from the backups. It's up to the DIT to determine how much array space is needed, but 12TB is the minimum for shoots where the camera is originating in RAW file formats. Many systems carry 48TB or more attached to the system. Remember, you need to hold all the camera and audio files, plus all the flipped files. If drive space becomes an issue, you can delete the RAW files once they are 'flipped', synced with audio, and the one-light color correction is done. Ultimately, you can never have enough drive space at the ready.

Drives For Backups
The client or production company may or may not have a preference for the type or brand of drives used for backups. That's not their area of expertise and, after all, it's just an expense to them, like buying food for the crew or building a set. You're going to be asked, 'Which drives do you prefer for backups?' That question might be answered by the post-production house. There are a lot of options.

Fig. 12 Images (l to r): bare Samsung™ 1TB SATA drive, multi-interface bare-drive docking station, and plastic album cover for shipping and drive storage.

-Bare drives. There are two options when it comes to hard drives: in a case, or just the bare drive (Fig. 12). Both have their benefits. The bare drive has plusses and minuses:
• Less expensive to buy because it's just the drive, with no power supply, case or interface hardware.
• Less expensive to ship.
• Takes up less space for storage. Remember, each drive in a case has its own 'wall-wart'-style power supply, and those must be dealt with as well.
• More fragile. Without the protection of the metal case, they are more prone to static shock damage, or total destruction when dropped or banged.
• Could require a small desk fan to cool them while in the docking station.
• Require a small case to protect them for shipping and storage.

The cost of a bare drive is less than one configured inside a standalone case, but there might be a catch affecting performance: how good is the drive interface hardware inside the docking station? All the savings and convenience might be lost if the docking station is poorly designed and bottlenecks the data throughput. There are units, like the one in Figure 12, with good support hardware and lots of interface connections, making them quite useful. If the post-production house has issues with bare drives on their end, you can suggest a brand of enclosed drives, then use bare drives for vault storage. Cost-wise, you'll be looking at $60-80 for a good-quality SATA bare drive and well under $100 for a docking station. The drives must have as much memory cache as you can afford. This is dedicated RAM that holds, or buffers, incoming data temporarily while the drive is writing. 8 megs of cache is not enough, but is common with cheaper drives; 64 megs or more will enhance backup speeds. Also look for an Oxford chipset in the specifications for the docking station. This processing chip helps manage the I/O (input/output) data stream and increases the total throughput. Not all docking stations or enclosed drives have this chip.
Fig. 13 The drives shown (left to right) are OtherWorld Computing's Mercury Elite, G-Drive 4TB, and Edit Vault. All images are © their manufacturer.
-External, in-the-case drives. These vary wildly in cost, performance and durability. Here are a few things to consider:
• Noise. Some have fairly loud cooling fans; some, no fan at all. The units that use passive cooling are good for sound-sensitive environments. Personally, listening to fan noise day after day is irritating even if it doesn't affect the sound department on-set.
• Multi-interface. Any drive you purchase should have FireWire™, USB-3 and eSATA interfaces, period. For obvious reasons. Newer drives now have Thunderbolt™, which is a good option for the DIT, but the post house might not have the ability to interconnect. Thunderbolt™ is still an emerging technology for Windows-based machines.
• Size. Some cases are large and heavy; others are very form-fitting to the drive, smaller and lighter. This becomes important when you're packing around 20 or 50 of these for the production.
• Power supplies. Most external drives have external power supplies or 'wall-warts'. You will soon have an additional box full of orphaned power supplies as well. It is not a good idea to power your external drives off the bus power from the computer.
• Expense. It goes without saying that a drive that is self-contained, having its own case, power supply, and interface board, will cost more.

Other considerations: warranty support and ruggedness. The G-Drive™ brand is very popular for DIT stations. They seem to have higher quality control, a case design that stacks well, and they're quiet. Mercury Elite Pros from OtherWorld Computing are proven workhorses in the field or edit bays. MAXX Digital's Edit Vault has stellar performance and extremely good quality control. Remember, this is the safety net for the production's
images. Saving $5 or $10 per drive is not the best area to cut costs in the long run. If you call a distributor for any of these higher-quality drives and order in quantity, the prices typically come down.

LTO Tape Backup
It might seem odd that we're talking about using tape to store our digital information when the industry has moved to solid-state memory cards, hard drives and SSD (Solid State Device) media. But tape has been used for data storage since the 1970's, and it has a long, proven track record for archival purposes.
Fig. 14 LTO-6 tape and tape drive. Image courtesy ©Helmut Kobler.

LTO (Linear Tape-Open) is a tape cartridge that appears to the computer system as a hard drive. (Fig. 14) It has a directory at the beginning of the tape, indexing all the files, which allows the system to find a location on the tape much faster. The latest version, LTO-6, can hold 2.5 TB of uncompressed data or 6.25 TB of compressed files, and access any file at near-USB-3 speeds. The best part of the LTO tape system is its acceptance worldwide. Although the drives are expensive ($2500 to $4500, plus a SAS interface card), the tapes are inexpensive, starting at $35: considerably cheaper per terabyte than spinning disks, with a proven long shelf life. We're confident that they can last 30 years on the shelf, but film has a known track record of 100 years if properly stored.
6.1 Basic DIT Workstation Hardware LTO Backwards Compatibility One of the standards for the LTO tape system is backwards compatibility. If you buy the newest version of the format, it has to read the two versions previous. So the new LTO-6 will read and write LTO-5 and LTO-4 tapes. These systems are quite an investment, so smaller post-production houses are understandably slow to bring on the latest and greatest LTO systems. Contracts May Dictate LTO Deliverables Deliverables contracts by Networks, in many cases now, dictate the use of LTO tape. Finished, edited programs, and the assets are shipped to them on LTO tape. Because of the cost of these drives, it can be a source of income for the DIT to rent the LTO drive to smaller production houses who can’t afford the up-front costs. For more information on LTO and how it can be configured for use in digital production, there’s an article that provides good information from a professional cameraman’s perspective: http://library.creativecow.net/kobler_helmut/LTO6-Re view/1 Bus Interface Normal Speed
Table 1

Bus Interface   Card Type         Bus Speed
Normal Speed    SD, SDHC & SDXC   12.5 MB/s
High Speed      SD, SDHC & SDXC   25 MB/s
UHS-I           SDHC & SDXC       50 MB/s (SDR50, DDR50), 104 MB/s (SDR104)
UHS-II          SDHC & SDXC       156 MB/s & 312 MB/s

Fig. 15 Shown above are two SD memory cards marked with their speeds. A 'C' with a number inside marks the Normal-speed cards; if the card also carries the XC mark, it's High Speed. A 'U' with a number marks the Ultra High Speed versions. Image courtesy SDCard.org.

Fig. 16 SDHC and SDXC run up to Class 10. Class 10 and above are SDHC1 or UHS cards. Image courtesy SDCard.org.

Memory cards are rated in speeds like 2x or 6x or 1000x. The faster the media, the more expensive the card.
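When matching a card to a camera, what matters is whether the card's minimum sustained write speed covers the camera's data rate. A hedged sketch: the camera bit rate below is a hypothetical example, and the class figures are the SD Association's minimum sustained write speeds (Class 10 and UHS U1 at 10 MB/s, UHS U3 at 30 MB/s).

# Check whether an SD card's minimum sustained write speed can keep
# up with a camera's recording bit rate.
MIN_WRITE_MBS = {"Class 10": 10, "UHS U1": 10, "UHS U3": 30}

camera_mbit_per_sec = 100  # hypothetical camera data rate, Mbit/s
camera_mbytes_per_sec = camera_mbit_per_sec / 8

for card_class, write_mbs in MIN_WRITE_MBS.items():
    verdict = "OK" if write_mbs >= camera_mbytes_per_sec else "too slow"
    print(f"{card_class} ({write_mbs} MB/s) for a "
          f"{camera_mbit_per_sec} Mbit/s camera: {verdict}")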
Reading the Camera Media
So far we've focused on the computer, the connections to the computer, and the storage disks. What about the camera cards? They come in several 'flavors', if you will: SxS™ cards (pronounced 'S by S'), pioneered by Sony; CF (Compact Flash) cards, used in a lot of DSLRs and the RED One camera; and SD cards, also used in DSLR cameras and other brands of HD video cameras. Then there are lots of aftermarket adapters that support solid-state media. What separates these from each other is the physical size, the related interface, and the read/write speeds. (Table 1)
Fig. 17 64 GB, Class10 XCSD1 memory card. Image courtesy © RAW Steel.
The core of these cards is flash memory. The classifications are the standard class speeds, High Speed, and Ultra High Speed; this breaks down as shown in Table 1.
The maximum card bus speed is only available if the recording device, or card reader, can support these specifications. You need to look for the SD card markings and logos that certify it is compliant with the card type. (Fig. 15) Notice the card that carries both the High Speed and SDXC (or XC I) markings; this also shows the equivalency between speeds: a Class 10 SD is equal to a Class 1 SDXC UHS card. There are several places on the web that have run benchmark tests on these cards. Not all Class 10 cards have the same read-write speeds. Some are drastically slower than the standards call for. Buyer beware at this point.
Fig. 18 Sony SxS memory card (lt.) and the Sonnet SxS Thunderbolt card reader (rt).
SxS cards are in a class by themselves. (Fig. 18) Sustained data rates are 800 Mbit/sec, with bursts of 2.5 Gbit/s. The new Pro+ cards guarantee a minimum of 1.3 Gbit/sec, catering to the new 4k video recording demands. We will discuss 2k, 4k, etc. later. But these cards come at a high cost: 64 GB cards run $1000. They are used in the Sony XDCAM and CineAlta™ lines of cameras and the Arri Alexa™, to name a few. The cards are robust and physically larger than SD cards, which makes them more production-friendly.

SSD Drives
Solid State Device drives are the hottest drives going. But there are several issues, two of which are cost per gigabyte (much higher) and maximum capacity. These drive cases hold flash memory and no moving parts. They are faster than spinning disk drives. But there's a tradeoff. Cost aside, you can't get SSD drives above 1TB without mortgaging your home. Other issues, not talked about, are media failure and memory mapping. Flash memory has a read-write limit before a particular memory location fails. So if you only use part of the drive over and over, that segment of memory will fail early within the drive's life span. Better SSD drives have sophisticated software on board to continually map incoming data equally across the drive's memory, trying to use it all equally and extending the life of the drive.
There are two classes of SSD drives: the consumer versions and the enterprise-quality drives. The latter cost many times more than the cheaper ones you see on sale online, but the enterprise-class drives will last longer and perform better. DITs have been experimenting with SSD drives in various places along the data pipeline, trying to determine a good placement, hoping to increase overall performance. One significant plus: with no moving parts, they can handle the continuous abuse found on production sets. The jury is still out on the current SSD technology implementation. A major SSD developer just announced a totally new memory construction for thumb drives and SSD implementations which will allow for larger capacities, better read-write speeds and longer life spans. It's a few years off before we will see these for sale. Currently the Black Magic 2.5k and their new 4k camera use removable, off-the-shelf SSD media. They eat through this media fairly fast when recording in their
RAW camera format: 256 gigabytes in just 10 minutes of recording time. Companies like AJA make interfaces that will adapt to many cameras (Fig. 20), allowing recording to SSD drives in their devices, in a variety of formats. This creates additional flexibility in production workflows.
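To put that appetite in perspective, here is a rough back-of-envelope sketch of an uncompressed raw data rate. The raster, bit depth and frame rate are illustrative assumptions, not any particular camera's spec:

```python
# Back-of-envelope raw data rate. Decimal GB are used,
# as drive and card makers do.
width, height = 3840, 2160     # UHD raster (assumption)
bits_per_pixel = 12            # 12-bit raw (assumption)
fps = 30                       # frame rate (assumption)

bytes_per_sec = width * height * bits_per_pixel / 8 * fps
card_bytes = 256e9             # a 256 GB card
minutes = card_bytes / bytes_per_sec / 60

print(f"~{bytes_per_sec / 1e6:.0f} MB/s")           # ~373 MB/s
print(f"256 GB card fills in ~{minutes:.1f} min")   # ~11.4 min
```

With format overhead on top, that lands in the same ballpark as the 10-minute figure quoted above.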
Fig. 19 Kingston USB3 CF/SD card reader (right) and REDMAG SSD reader (left).
PAK & C-Fast media
The Cion camera from AJA, and others, are using newer solid state media packages called PAK or C-Fast. Each is essentially an SSD drive in a very rugged case. They too are expensive but, as you are learning, anything that has to do with production, and can stand up to the abuse production brings, comes with a price.

Card Readers
Most card readers are USB-2 and a few are FireWire™ 800. You will notice the offload speed
increase using the FireWire™ 800 interface. A few USB-3 card readers are now appearing, which would be a good choice IF your computer supports USB-3. Referring back to Table 1 and the bus speeds for these cards: even with several readers on one FireWire bus, the data throughput will hold up, because the cards are not as fast as the interface's potential.
Fig. 20 Image courtesy AJA Video Systems
Some camera cards need special media readers. The RED camera now uses its own proprietary SSD cards and reader, the REDMAG™ Reader. If you're going to service RED cameras on a production, you will need their card reader stations, and since they have two different magazine sizes, you'll need both readers. One more thought on card readers: they can overheat. The folks who create ShotPut Pro software now warn users to monitor the temperature of the memory card when it's ejected. If you push the data transfer rates too high, the card reader will overheat and that heat passes to the memory card. Heat will ruin memory cards. They suggest you have several card readers to swap out if one starts overheating.

Alternate recording devices
AJA (pronounced 'ah-ja') and other manufacturers have external systems that can hook to a camera's HDMI 1.x port and record the full 1920 raster. (Fig. 20) The AJA box will also give you additional playback functions, record the native H.264 files from the camera and flip that file to ProRes at the same time. That ProRes file can go straight into editing. Many of these third-party recording systems record to SSD media and will capture full 4k rasters.
Monitors and Color Rendering
Now that we have the computer side of things under our belts, let's look at the monitors. So much of what we see on the monitor is driven by the graphics card and its abilities. Most of today's video cards operate at 18-bit depth or higher. The 24-bit TrueColor™ cards offer more than 16 million colors. Compared to the human eye, which can discern around 10 million colors and shades, the technical specifications would seem more than adequate in relationship to our sight. The top-end monitors and video cards now support 30-bit or higher (10 bits for each of the R-G-B channels), which renders more than a billion colors and shades. These can work in, and support, several color spaces such as sRGB and YCbCr. This very high color rendering ability is referred to as Deep Color, capable of rendering over a billion colors. The electronics of the monitor may well handle this extended range of colors, but LCD and LED based display screens struggle, and they also struggle to render pure whites and solid blacks. There are precious few (and fairly expensive) monitors that can handle Deep Color output in a manner that is functional for color correction. On the other hand, higher-end plasma displays can display signals very close to the abilities of high bit-depth cards. The new OLED technology shows promise for critical monitoring, but it is just now being scaled up commercially to sizes bigger than 12 inches without costing a fortune; 24" monitors are coming on the market at around $5000+.

The connection to the monitor is critical. (Fig. 21) The old VGA standard was fine for 8- and 16-bit color depth, but limits the potential of newer graphics cards. HDMI connections can handle Deep Color bit depths, multiple monitor spans, and rasters up to 4k. DVI connections have the widest range of connector configurations, and each has a specific limitation or feature set. (Fig. 22) SDI (Serial Digital Interface) can handle rasters and data rates up to 8k.

YUV
Now this whole discussion can get way out of control, with overwhelming detail, very fast. But here is one last piece of information that will help you not be confused when you see this: YUV. It's the digital counterpart of RGB. 'Y' carries the luminance information in the image; 'U' is the blue color-difference signal and 'V' is the red color-difference signal. Green is implied, carried within the 'Y' luminance channel. There is typically no image degradation when RGB is converted into YUV.
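Those color counts come straight from powers of two. A two-line check, purely for illustration:

```python
# Displayable colors = 2 ** (bits per channel x 3 channels).
for bits_per_channel in (6, 8, 10):
    colors = 2 ** (bits_per_channel * 3)
    print(f"{bits_per_channel * 3}-bit: {colors:,} colors")
# 18-bit:       262,144
# 24-bit:    16,777,216   (TrueColor, ~16.7 million)
# 30-bit: 1,073,741,824   (Deep Color, ~1.07 billion)
```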
Some quick definitions (a quick review):

Monitor resolution- Don't confuse this with raster. Resolution is the effective dots per inch (DPI). Typical LED monitors have 72 to 100 DPI. Raster is the image frame size; for example, 1920 x 1080 is the standard raster size for HD video. The Apple Retina displays have 200+ DPI, so they appear sharper.

Fig. 21 Typical monitor connections. Not shown is SDI. Image courtesy www.justmonitors.com.au

Gamma- This is the range of luminance in the middle area of the image's full luminance range. Most of the visual information we see is in the middle of the total range. Figure 23 shows gray steps ranging from white to black; on a properly set up monitor you should see each distinct step. There are several standards for gamma settings. 2.6 is the standard for the new ACES workflow. Mac computers and monitors are based around 2.2 starting with v10.6 of the operating system (older versions were standardized to 1.8). Windows-based machines have been at a gamma of 2.2 for some time. This difference between the two made it hard to create an image that looked the same on both systems. The 2.2 gamma has been the standard for the web and digital photography for some time now.

Fig. 22 DVI interface configurations with specifications. Image courtesy Reddit.com

Gamut- Pronounced 'gam-ut', this is the big one, separating the good monitors from the rest. Gamut is the total range of colors that can be represented by a monitor. A lower-end Dell LED monitor shows specs of:
• Color Gamut (typical): 83% (CIE 1976)
• Color Depth: 16.7 million colors
Their very respected high-end monitor specs out at:
• Color Gamut (typical): Adobe RGB 99%, sRGB 100% and 120% (CIE 1976)
• Color Depth: 1.07 billion colors

Fig. 23 Shown: a 23-step gray scale chart. Courtesy Dpreview.com

Notice the bolded specs. Those percentages are important and the resulting Color Depth is critical. Color depth is comprised of the total colors, and shades of those colors, that can be displayed. The second monitor listed above will cost 10 times the first one, and a true color correction monitor will cost 2 times more than that. But it's important when your job counts on the ability to render colors properly and consistently.

Contrast ratio- All the rage today in spec sheets, this is just that: the maximum difference between black and white. The larger the ratio, the greater the range of blacks and whites displayed. This also affects the luminance range of each color. Marketing claims are slippery at best, so don't be fooled by claims of 10,000,000:1 and higher.

Monitoring On Set
When on set, doing DIT work, the basic monitor connected to your computer will suffice. Whether it's a laptop or desktop, basic ingesting of media does not require more than looking at the software interface and maybe a quick look to see that the video looks fine—in other words, not corrupted. If you are doing one-light color correction and showing video to the client (producers, director or DoP), you will need a better-than-average monitor. It should be a 'monitor' grade display. You have a few, very few, choices and they are all expensive.
Deciding On a Monitor For DIT Work
It's split right now, among those doing DIT work, whether the core machine should be a laptop or a desktop. There are advantages to both. The disadvantage of the laptop is the monitor. You can attach an external monitor, but that's something else to carry around. The new Apple Retina displays have very good specs, but not totally what we're looking for when making image quality judgements. The Retina LCD display has 220 dpi (dots per inch) resolution on the 15" MacBooks; typical monitors before the Retina came out had 72 to 120 dpi. Although the Retina is sharper, sharpness has little to do with color rendition. The Retina display is a quality monitor, somewhere between average and reference quality. There is no doubt that a precision, external monitor is the way to go for proper image viewing. On the high end of the scale are Flanders Scientific monitors. These are custom, purpose-built monitors with the sole goal of rendering a digital image as accurately as possible. Everything about them is targeted at the HD digital image workspace. Many DITs, and colorists, have these with them on-set. The DoP and Director can view a quick color correction as accurately as possible in that environment.

There are other considerations when it comes to the monitors we use in digital imaging, but those are beyond the scope of this course. The typical monitor used on production sets is a Panasonic-brand production monitor like the BT-LH1710W 17" studio monitor. (Fig. 24) It is full HD with lots of built-in features and interface abilities. These monitors are built for digital cinema production needs. There is a downside right now in the current versions of these industry-standard monitors: their inability to display the full raster output of HD and UHD (Ultra High Definition, rasters above 2k) cameras. These monitors are limited to a 1920 x 1080 raster and SDI connections. The 2k and 4k cameras only output 1080 raster anyway, due to the limitations of their internal hardware and output cables.

Fig. 24 On-set monitor options

The strong emergence of 4k cameras across the board will surely necessitate the next generation of 4k display resolutions. Some offerings are now appearing in the marketplace. At the National Association of Broadcasters (NAB) convention in 2015, most major players showed OLED reference monitors that boasted 4k or higher resolutions. The price tag: mid-to-upper 20's. Yes, $20,000+. With HDMI v2, which can carry 4k signals, just announced, the camera outputs and higher resolution monitors will follow.

The Panasonic LH series of monitors will set you back about $2500 plus the mounts, so the monitor can be put on stands commonly used on-set. (Fig. 25) The other option is a high-end computer monitor like the HP Z Series™. In about the same price range, these monitors provide superb resolution and color gamut. What the computer-style monitors lack are important functions commonly required on sets: the ability to turn on a vectorscope or histogram right in the monitor, and to show audio levels.

HP Dreamcolor™ monitor. Image courtesy HP Computer.

Another option is Eizo monitors. Again, a very high quality monitor built for color correction and reference.

The Z-Series Ultra High Definition monitors (mentioned earlier) can move directly into your edit bay for critical color correction, making them doubly useful. However, there are alternatives in the same price range providing great on-set performance. Dell Computer's UltraSharp™ series provides critical monitoring, every bit good enough for work on set; some feel they're fully acceptable for color correction and more critical tasks as well. The cost is well below $1000. Another plus for this monitor over the Panasonic and the HP is that it has a raster of more than 2500 pixels, which might be advantageous in the future. For now, most color grading and one-light work is done in 1920 raster.

Stepping up from these color monitors brings us into the realm of true reference monitors. These are hand-crafted, very accurate color monitoring devices with unbelievable customer support. Widely acclaimed, by some, as a 'gold standard', Flanders monitors are probably best left in the color correction bay, BUT they are robust enough to be hauled from location to location. And if you tweak one or knock it out of whack, the service is free for the life of the monitor. Another very high-end reference monitor is the Dolby Reference Monitor. Yes, the Dolby folks deal with more than just audio. These 42-inch monitors are very expensive (upwards of $40,000) and amongst the most accurate on the planet. If your job depends on accurate color renditions to judge the color grading, this is the monitor to have. But they are not suited for field work. Not because of the price, but the physical size.

More basic information on how to choose a monitor can be found here: http://www.nytimes.com/2012/08/23/technology/personaltech/things-to-consider-when-buying-a-monitor.html?_r=0

What needs to be understood here is that there is a substantial difference between a $200 standard computer monitor and a truly professional tool. Many fathers have told their sons, "If your job depends on it, get the best tools you can afford." There's a different criterion here that our fathers did not take into account. The producer or director standing in front of your monitor typically knows nothing about what they are looking at, except that it has a name on it. Someone told them that XYZ SuperViz was "the only monitor to trust." If you don't have that brand, they will constantly be telling you, "Wish we could have the XYZ SuperViz brand to look at, so I knew what was going on with these images." If you have a good monitor setup tool, and know your gear, you can make a less expensive monitor (not a discount, bargain cheap-o monitor) look every bit as informative (as possible) in a production environment. I know one DIT who puts black grip tape over all the names on the monitors to mitigate the brand-preference issue.

Any monitor you use for location work must have:
• a protective carrying case. Maybe a shipping case if you fly with your gear a lot.
• a mount to connect the monitor to a common production C-stand. (Fig. 25)
• interconnect cables and a longer power cable.
• a hood to block light from around the monitor. (Fig. 26)
• a calibration device of some kind. This allows you to set the monitor to its best output based on your conditions and the computer. (Fig. 27) These range from $200 to $30,000. Needless to say, there are precious few of the higher-priced monitor calibrators around; most are rented, then returned.

Fig. 25 C-stand support bracket for most VESA standard mounts.

Fig. 26 Monitor hood by EZ-hood.

Fig. 27 Several manufacturers' color display setup tools shown above. Image courtesy © Babelcolor.

Other suppliers for monitor calibration include:
• X•rite photo
• Pantone Huey Pro
• DataColor Spyder4™ elite
• CalMan Color Tools
It is to be noted that the cheaper tools are better than nothing at all. Always buy the most you can afford. Your work is reflected in what they see on your monitor. Here's a good video to watch about how to calibrate your monitor with one of these devices: https://www.youtube.com/watch?v=7RUVrRM0Njk

Support Considerations
By "support" we mean the frame or cart or case that holds all the hardware. Earlier in this chapter you saw images of DIT stations on wheels, in travel cases, and even in purpose-built support table structures. It really depends on what you feel you will need or are comfortable working with. Each DIT station seems to be customized in some way to fit the owner's work style or production needs. Other DITs don't own their hardware; they rent from companies like Radiant Images. You order what you need, and they ship it to you, pre-configured, ready to go. This is referred to as 'aggregation' in the industry. The key creatives are not interested in coming to your facility; they want you and your gear to come to them.

Fig. 28 BigFoot Mobile Systems equipment carts. BigFoot Mobile Systems creates these carts specifically for on-set audio, video support and DIT systems.

If you're just starting out, it's as simple as a laptop loaded with the right software, a second viewing monitor, a UPS, a drive array and a folding table. Not very sophisticated, but it works. As the jobs become more complicated or demanding, your system will have to grow to meet the processing demands. There will be big jobs where you will have separate systems for ingesting, one-lights and dailies, cataloging, and pre-edit preparation. That will be a lot of gear to configure and keep running. The pay for those jobs goes up as well, but it will never seem enough.

Priceless Advice
Always purchase the best gear you can afford. Read lots! Read everything you can about what other DITs have found, what works and what's junk. It's not always necessary to have the latest and greatest. It is always necessary to have a system that's up to the task and works flawlessly when needed.

Something Breaks
What if a cable goes bad? What about the Thunderbolt™ cable from your computer to the drive array and the SATA hub? If it's going to fail, it will be on
an overnight shoot when nothing is open. No Best Buy, no Radio Shack, nothing. You're stuck. These cables are not terribly expensive if you know where to look. You should have a stash bag, case or tackle box with every cable, connector and plug you can think of that could fail. Because it will. It's the nature of working in temporary situations where you're setting up and tearing down all the time.

It's always good to do some net-surfing and make contacts with local rental houses where you will be working. They can save your bacon at 2am if you make some friends. It's all about problem solving and planning ahead. Be ready to replace anything you count on to do your job, at any moment.

Software Issues
What if a key piece of software gets corrupted, or the OS on your computer, for that matter? Can you rebuild or re-install fast? A good safety net is to have all your software on a portable drive and a hard copy of all the serial or license numbers handy. A full disk image is the ideal backup to get the OS back online. There's a great little app for the Mac called AppShelf™ by Kendisoft where you can store all your software information: the purchase date, version, serial number, etc. You should still keep a hard copy of the information stored in the AppShelf™ program unless you have it backed up on another drive.

The last solution I would suggest you count on would be putting all this backed-up software on a Cloud somewhere. Don't ever count on wifi access, and think about the time it would take to download a 2-gigabyte program like AVID Media Composer over your cellphone's data connection. It has all the makings of a really bad day, or days.

TECH TIP: Updating operating systems can be fatal to your workflow. The software we use is so sophisticated that a minor OS update can break it. A good example is Scratch Lab. It is certified to work on 10.10.8, but (as of this writing) Apple is at 10.11.1. This broke the GPU acceleration of the video files. Fortunately Assimilate put in tools allowing you to turn the OpenGL rendering off, from within the software, until it is resolved. And it's not an issue with Apple; it's an issue with Intel, who makes the drivers. Many DITs suggest that you keep a detailed spreadsheet of the latest 'certified compatible' OS versions for the software you're using. If there are any questions, you can quickly look it up BEFORE you commit to the update. Don't update the OS unless you are sure the key software you're using will continue to work properly.

Shipping equipment is very hard on everything. Bon Thomas, noted DIT, ships his gear all over the world. He pulls the main system and array hard drives out of the case and hand-carries them on the plane. They are just too valuable to have some baggage handler drop the shipping case and shake the hell out of everything inside. Hard drives do not survive sudden impacts very well.

Checklists
The two professions known for checklists are the camera department and pilots. They live by them because there's just too much at stake. If the camera department forgets or overlooks a simple adapter, the lens might not go onto the camera or the camera battery might not connect up to power the camera. They, and the entire production, are dead in the water at that point until the issue is fixed. Production time is the most expensive part of the whole process. Wasting it gets you fired.

You must create a checklist of your entire mobile kit. Create it in a spreadsheet, and keep it simple. Use this list to go over your kit, bit by bit, before the production. If something is missing or damaged, you'll have a chance to repair or replace it. Remember that kit full of cables and connectors? Those have a way of walking off and never coming back. You pull a cable to charge your phone between productions, then forget to put it back. If the production department discovers you have BNC-to-BNC barrel connectors, they will constantly be asking to 'borrow' one. You will never get it back. At $2.50 each, they add up over a year's time. On the other end of the price scale is that $100 long HDMI monitor cable that gets borrowed, then run over by a grip cart, which ruins it. Somehow you need to get the production to replace it. Good luck with that on low-budget productions. You should use this checklist at the end of the production to see what has been used up, broken or lost. Lots of these items, by the way, are tax deductible, so tracking them is worth the effort.

Recap
Now this was a lot to ingest. The DIT system is just that: a complex combination of bits and pieces that must work together. The total performance is interrelated with all the parts of the system. It would behoove you, and your pocketbook, to become very familiar with the terms and jargon thrown about in this chapter. Take one of them, say the specs and terms used for monitors, and do some web-surfing. Look at the listed specifications and compare them with what would be needed for a DIT workstation. It will all make sense soon.
6.2 Workstation Building
The previous section introduced you to the bits and pieces of an asset management workstation. There is a lot to think about and a lot to figure out. This section should help with that process.

Sub-Systems
Think of the entire system as several integrated sub-systems. You have most likely figured out what they are by now: the electrical system that powers everything, the data pathway system, and the computer itself. Although this might seem simple at first, it's fraught with 'gotchas'. Let's step through them.

Electrical System
Here's what you know at first glance: the entire system will be powered by 120v AC or 220v AC, depending on where you live in the world. The computer will run off that wall power, and so will the monitor. Most drive arrays use wall power. But some sub-systems need power that is not the same as the main system. Enter the 'wall warts', or in-line power supplies. First rule of thumb when putting anything on the data bus/pathway: never use power drawn through the data cable to run the device. Always power each item from
its own power supply. And there can be a lot of them. Here's what we found in a recent kit build (see the table below): each of the peripheral items has its own plug needing a spot on a power strip. But a helpful trend emerged. A number of items needed 12 volts DC, so we did a bit of custom wiring work. We found a single power supply that would handle all the amperage those items needed, roughly 15 amps; the nearest source we could locate was 18A at 12v DC. Then we cut the ends off the factory power supplies for each item and soldered them to the output of the new power supply. We created one source for many items and freed up many sockets on the power strip. This is the kind of thinking you need to simplify your entire system.

All the electricity for the system comes through a UPS with power conditioning. Here's where you need to size the UPS to handle the combined load of your system; everything has to remain powered while you are working. So, much like the inventory we did for the amps needed to run the peripherals, we made a spreadsheet showing every item in the system that plugged into power in any way. That gave us the total current needed to run the system.
Item                       Voltage           Amperage
SxS card reader            12v DC            0.75
RED mag reader             12v DC            0.95
SD/CF card reader          12v DC            0.95
USB3 hub                   5v DC             5
Thunderbolt-to-SATA box    120v / 12v DC     1.5 / 5
Cooling fans               12v DC            0.75
LED work lighting          5v DC             0.1
UPS systems are rated in watts. If you know the amps, you can figure the watts by simply multiplying the amps by 120 (or 220 if you use that line voltage). I typically round the 120 to 100 so I can do it in my head. If you need 12 amps to run your system under full load, then: 12 x 100 = 1200 watts. Now that you know the watts, you can research UPS systems to handle the load. The only variable now is how long you want to run on battery power. Most UPS manufacturers provide a chart to help you decide which unit fits your needs. In our case, the online APC-brand system load calculator indicated we needed a 1200VA (volt-amp) unit to run everything for 20 minutes.
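As a sketch of that spreadsheet exercise, here is the table above summed in a few lines of Python. The laptop and monitor draws are assumed figures, not measurements:

```python
# Rough UPS sizing: watts = volts x amps per device, summed,
# plus the computer and monitor.
peripherals = [                    # (name, volts, amps)
    ("SxS card reader",   12, 0.75),
    ("RED mag reader",    12, 0.95),
    ("SD/CF card reader", 12, 0.95),
    ("USB3 hub",           5, 5.0),
    ("Cooling fans",      12, 0.75),
    ("LED work lighting",  5, 0.1),
]
load_w = sum(v * a for _, v, a in peripherals)
load_w += 300 + 60                 # assumed laptop + monitor draw

print(f"Total load: ~{load_w:.0f} W")
# Then pick a UPS with comfortable headroom and check the
# maker's runtime chart for the minutes you need.
```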
Computer system
This is a big one, to say the least. Not only do you need the processing power and the form factor to meet your needs, you need enough ports to service all the connected 'stuff'. Let's take a MacBook Pro as an example. It has the following connections:
2 x Thunderbolt v1
2 x USB3
1 x HDMI
1 x audio out
That's it. Now the pencil and paper come out to start sketching all the data connection needs for your system and how to pare those down to match the available ports. Let's look at a graphic from before. (Fig. 1) This
Fig. 1 Basic DIT kit layout.

is simplified, but it gives you an idea of what you need to do, in more detail, to plan the data connection side of your system. If it turns out that you need 4 Thunderbolt connections into the MacBook, you need a Thunderbolt hub, and you must make sure the items that require this connection have two ports on them: one for the line in and the other to interconnect (chain) to the next Thunderbolt device. The same goes for USB3 connections. With two available on the computer, and two needed to support the backup drives, plus another for the traveler drive and 3 card readers, you have run out of ports. Remember that four items connected to a hub does not mean they will all work at full speed. There is only one USB3 connection back to the computer; they all have to share that bandwidth.
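A quick back-of-envelope check makes the sharing problem concrete. The throughput figures below are rough assumptions, not measured numbers:

```python
# Will several simultaneous transfers saturate one USB3 link?
# Real-world USB3 tops out well below the 5 Gbit/s spec figure;
# ~400 MB/s is a common practical ceiling (an assumption here).
BUS_MB_S = 400

streams = {                       # hypothetical simultaneous jobs
    "camera mag offload": 160,    # MB/s
    "backup drive write": 160,
    "traveler drive write": 120,
}
demand = sum(streams.values())
verdict = "saturated" if demand > BUS_MB_S else "ok"
print(f"demand {demand} MB/s vs bus {BUS_MB_S} MB/s -> {verdict}")
```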
Is there a solution? Yes. Think about jumping connection protocols. Plug a Thunderbolt cable into a breakout box that converts the single Thunderbolt input into two Thunderbolt outputs, plus an eSATA port or two and maybe a USB3 connection. Now you can run your system array off the eSATA connection, and other USB3 data-hungry connections, over this one computer port. Will you, or could you, saturate this one Thunderbolt connection? Yes. Especially if you are backing things up to several drives at once while reading and writing to any of those drives and doing other activities with the software. This is the bottleneck you are dealing with when it comes to moving vast amounts of data in a timely fashion.

What about monitors? The MacBook has its own screen and one HDMI port to facilitate an outboard monitor. That will typically be all you need, unless you want a three-monitor setup where the software interface spreads across 2 monitors with a separate client/output monitor. This is a more efficient way to work. How this is accomplished will depend on the software you are using. If you are using DaVinci Resolve, you will need an outboard box to move the program output to its own monitor. Scratch will, with the click of a box, send the program output to a second monitor and keep the user interface on the first. In this case, the decision is based on the requirements of the software.

If you decide to wrap your system around a new Mac Pro, then you are in a different problem-solving realm. It has 6 Thunderbolt ports. This leads you to believe that you have full Thunderbolt speed through all ports. This is NOT the case. There are just two Thunderbolt controller chips in the Mac, so three ports share the same total bandwidth. (Fig. 2 Mac Pro connection ports. Image courtesy Apple.) Here's a secret that Apple would rather not have you know about. The
6.2 Workstation Building software running the Thunderbolt protocol has a flaw. If you hook a monitor to the Thunderbolt port, it allocates 6 gigabytes of data bandwidth to that monitor even if it doesn’t need that much. It’s a fixed allocation. If you hook a monitor as well as several drives to the other ports, it is highly likely that the monitor will go black. The data stream to it will be cut off in deference to the other data streams. This problem with the protocols is actually an Intel problem. So a lot of ports does not automatically mean ‘problem solved’. Another big issue is cooling for the Mac Pro. It’s designed on the razor edge of thermal efficiency. It is NOT meant to be set on its side unless you have a special rack mount with extra cooling fans helping move the huge amount of heat generated out of the computer. It is unwise to set this computer in its normal upright stance, sandwiched between shelves. It needs free air to stay alive. This does not seem like a big obstacle, but it will create issues when Fig. 3 Mac Pro mounting putting your system together in its bracket. Image courtesy physical Big Foot Mobile Sysenvironment.
Data Pathways
This has been touched on in both of the previous sub-system discussions; the subjects are all integrated, as you can see. You basically need speed. Think of the problem as a funnel: lots of data flows into the wide end and gets 'funneled' down to one narrow opening. It works both ways for a DIT system. We take in lots of data, then turn the funnel around and send it out from one hole to many. Most DITs spend their first round of capital on getting what they need to start working. Then they focus the next round of money on making the system faster and more efficient. I will tell you that this is much like racing cars. The old saying is, "How fast you want to go? How much money ya got?" Getting your system up and running, with the ability to keep you afloat, using USB3 or eSATA is reasonable.
Thunderbolt adds more expense. Move to SAS or Fibre Channel and you will be shocked at the price jump; the speed increase is significant, but not in relationship to the cost.

What do you do to get started planning your system? Grab a coffee, a pencil, a notepad and a computer that is connected to the Net. You have hours of research ahead of you. Draw the basic blocks of your system. Start with your chosen computer. This is not just any computer at this point; this is THE machine you are going to put at the core of your system. Sketch boxes around the computer that represent each outboard piece of equipment: monitor, drive array, each of the card readers, ports/connections for backup drives, etc. Note which type of connection each needs. Work from the outboard items into the central computer. Look on the web for peripherals that will help get everything connected. Pay attention to reviews and customer ratings if you don't have someone who has already been down this path.

Now that you have the data path worked out, make a second drawing for the electrical connections. Note the voltage each peripheral device needs. Prepare a spreadsheet listing the total power each requires.

At this point, you have two choices, and neither is wrong. You can buy all the stuff and figure out what you need to construct or adapt to hold it all, or you can decide on a form factor for the gear to fit into and then make everything fit. You still have to do some 3D space measuring of whatever you choose to hold everything. This must consider heat, airflow and physical needs, like how cords connect and whether they conflict with something else. Don't ask me how I learned this detail.

The next step is to implement your plan; to build it. This will take longer than you think. There's always something that puts the brakes on progress: wrong screws, a wrong connector, or a cable which is 1 inch too short. I tend to build so that it works for testing. Then I tweak the assembly based on what I have found to be issues. Finally, I go back into the system and dress all the cables. Be sure the whole system will not fall apart when it's moved around. Nothing is worse than a cable coming loose when you are ready to work and you have to tear into the guts to find the open connection.

Now the most important step: testing, testing, testing. If you think that just because all the lights come on when
you fire up, and the computer sees all the connections, you are ready to jump into a paid gig—you are delusional at best. You need to test every bit of the hardware and software. You absolutely need to run data through every part and every pathway. Test what happens when you copy an SD card to your array and to a USB3 backup drive. Does it go smoothly or does it start to crawl? Then you get to figure out why the performance dropped. In one system we found that everything was fine until we hooked the USB3 traveler drive to the USB3 hub; the system seemed to go into low gear. The culprit was the brand of USB3 hub we had installed. It was poorly designed and caused the whole USB3 bus to drop down to sub-USB3 speeds. We replaced the hub with another brand and the whole system was happier. It's little problems like this which will cause you no end of frustration, and they have to be solved before walking on set, under the gun.

Good Money After Good Money
Here's something few will tell you: you will never stop spending money on your system. Not because stuff wears out, but because it becomes obsolete. Speaking with two West Coast DITs at a convention, I heard long and painful stories about the $30,000 this-or-that they had to buy to get work, only to find that the entire system was no longer needed by productions. This expensive gear sits in their garages gathering dust. On the set of a feature film in the summer of 2015, a past student of mine, and one of the first DITs in the state, was talking about just this issue. In his rack was a 36TB array that just takes up space now. It does not work with his current system. It cost a ton of money when he bought it 6 years ago. Now it fills a hole and reminds him daily of how fleeting technology really is. But on the bright side, you get to buy new toys.

Actual Kit Build
These images were taken while a kit-in-a-Pelican-case was being designed and built by students as part of the Digital Cinema program at Utah Valley University. Total cost including software: $18,000. Total weight: 73 lbs. with UPS batteries inside. The batteries for the UPS have since been removed, put in an external case and connected to the main case via a power cable. Current weight: 54 lbs. Software: Scratch, Resolve, Avid MC, Premiere, ShotPut Pro, REDCINE-X, and a dozen utility bits of software.
Breathing Life Into An Older Mac Pro
Which platform (Windows or Mac) to build or purchase is a major decision. But here's an option that most have discounted far too early: the older Mac Pro towers, AKA 'Cheese Graters'. There is quite a bit of activity in Mac-focused forums about revamping the older machines into formidable processing machines. And it can be done on a reduced budget. Here's how it will work:
• Start with a Mac Pro built between 2007 and 2013. The 2009 and newer models are still pricey on the used market if they are well dressed, but they will be cheaper than a new Mac Pro.
• If it's a pre-2009 model, you will need to flash the boot ROM and install a special Boot.efi file so the installation and upgrade of the OS to Yosemite will go smoothly. Apple put code into Yosemite that checks whether the computer is a 64-bit machine. The older Macs are 32-bit on the bus but 64-bit within the CPU; if the Mac OS decides the computer is a 32-bit machine, it will not upgrade the OS. This Boot.efi file tricks the OS into thinking the computer is, in fact, 64-bit. The process is not hard to do, but it does take some patience.
• Install more RAM! There's a sweet spot for the amount of RAM and diminishing returns if you add too much. 16 gigs is the minimum, and any amount over 24 gigs does not show enough performance improvement to warrant the cost.
• Add an SSD boot drive. This is a must for any machine. SSD drives are now cheap and easy to install. On the Mac Pro towers they will not fit the traditional drive trays, but a bit of Velcro will make the process work fine.
• Install a new video card. Nvidia gaming cards work great. What you are looking for is a card with lots of CUDA cores and at least 1 gig of RAM. Ample GPU processing power is a must and is leveraged by more and more programs. The stock Mac video card will cease to work when you upgrade the software, and it is not nearly powerful enough anyway.
• PCI cards. The beauty of the older Macs is expandability. I installed a combo card that has 2 x USB3 and 2 x eSATA connections. You can also install a PCI Thunderbolt card and a RED Rocket card.
• Internal drives. There are four internal drive bays in these machines. It seems popular to put a fast
'scratch' drive in Bay 2. In Bays 3 and 4, install a matched set of high-capacity hard drives and link them together as a RAID. But be warned: with the new El Capitan OS, the ability to build a software-configured RAID within Disk Utility has been removed by Apple. This is bad news for those of us who need to RAID-stripe drives, but Apple thinks differently. There's no reason to upgrade to the latest OS for some time to come.
• Another place to 'stuff' a drive is in the bays for the optical drives. Some of the machines came with two SuperDrives; one or both can be replaced with hard drives or SSDs.
• Upgrade the CPUs. The older Mac Pros came with two dual-core processors (4 cores total). You can buy two 4-core CPUs used and install them yourself. That will give you eight processing cores. Some have put in 12-core processors, but those take some 'playing around' with to make them work properly in the older machines, and they are expensive even on the used market. Some of the 2009 and newer machines have the 12-core processors.
There are other 'tweaks' you can make to breathe more life into these amazing machines, which were relegated to obsolescence by Apple way before their time. A 12-core, updated 'cheese grater' will benchmark very close to the new, full-tilt, $6000 Mac Pros. I have recently found 2013 Mac Pro towers used for under $3000, fully configured for maximum processing power. If you decide to venture into this project, here are some resources:
Yosemite on a 2006 Mac Pro
Upgrading firmware Video
Mac Pro Upgrade Videos
6.3 DIT Software Configurations
The workflow dictates the software. In the section covering best practices on-set, we dabbled in the software side of the process. Now we need to go more in-depth, based on function. The workflow can be block diagrammed as follows:

Fig. 1 Basic workflow options.

There are several softwares that can be plugged into each stage of the process, and the outcome will be the same. Notice in Figure 1 that there are three pathways from ingest to one-light. If you have DaVinci Resolve™, this software can now bring in the native files, re-sync the audio, do one-light color correction and export the files ready for edit. Where things get out-of-whack is when you are working with something different or not widely supported. For example, ARRIRAW files came out well after the camera was released, and none of the main players in editing or color correction software had a solution to read the files, so ARRI created ARC, the Arri Raw Converter. Much like RED did with their initial offerings of software specific to their files, the camera manufacturer created the solution. Sony did this with the XDCAM files by creating XDCAM Transfer. This is bound to happen again in the future with new codecs, cameras and encoding processes. Over time, you'll amass a number of little programs that can do one or two things well, helping the workflow move along. Remember the mobile device cry, 'there's an app for that'? For what we do, there most likely is a program out there to solve a problem.
Which Software for Which Task?
Ingesting is handled by a few commercial software packages on the market and a slew of free ones. We'll go over these in more detail in the chapter Ingesting Assets. Some programs are free, others are very expensive, and all are valuable. The key to the one that fits your workflow is the user interface and the features. The free ones lack sophistication or professional features, but do the job. The larger programs, like DaVinci, Scratch, REDCINE-X, Episode, SpeedGrade and ColorFront, are powerhouses for what we do. But they don't do everything. We often need smaller, 'work-around' software to handle the odd files and camera formats. Think of the software for your daily work as a multi-tool, a Swiss army knife. It has the basic knife blades to do the main tasks, and a raft of other tools for those special needs. It's the same way with DIT software.

Software For Transcoding/Flipping
'Flipping' files has become the new growth area for software companies that already have one part of this chain as part of their solution. Black Magic, for example, has DaVinci Resolve™. This was a very high-end, color-correction-only software package. Black Magic has started to weave in features developed for their other software packages. DaVinci 10 began supporting native file ingesting, native camera file support, audio and video syncing, more-than-basic editing features, flipping to other codecs and full-on color correction. You could use this one piece of software for lots of tasks. Unfortunately, it's not the best at a few of them. It had limited editing tools, for example, until v12 was released; some consider it a full-on editor now. Another issue with Resolve is passing metadata. This is an issue for more softwares than just Resolve, by the way. There isn't a full-on standard for them to follow, so they all tend to do their own thing when it comes to metadata. Using Resolve to feed into an AVID Media Composer workflow is fraught with issues. If you want to ensure that the information contained within the camera
files and linked audio files, plus any new data created by either program, is passed to AVID, then Resolve will cause problems. Metadata is becoming more and more important in the digital cinema world, and this still has to be worked out.

Assimilate's SCRATCH™ will do a lot of what DaVinci does, but it's not an editor by any means. SCRATCH™ is positioned as a pre-edit and post-edit file processor. It is a wonderful sound-syncing tool and color-correcting tool, and it will create LUTs. It's outstanding at creating presets for rendering processed files to various formats. Scratch is also very efficiently written, maximizing your computer and letting you continue to work on other tasks while file processing continues in the background.

FilmLight's program Daylight is one of the high-end dailies and color grading softwares in the industry, with a price point to match. It is finding a place on-set, but for the most part it's relegated to the colorist's suite. Another high-end player is MTI's Cortex Dailies™ software. Ideally suited for on-set and post file management work, it also has the ability to pass its workflow through to editing software that's network- or cloud-connected, with relative ease. As a plus, Cortex Dailies can be purchased with a pre-configured 'lunchbox'-style computer, ready to work on set.

On the lower end of the price spectrum, and still evolving, is RedGiant's Shooter Suite software. I had the opportunity to meet with the developers in late 2013 and look at this software with one of the lead developers. It shows promise as a suitable on-set DIT software for lower-end productions; they are targeting DSLRs first. RedGiant is known for their very clever PluralEyes™ sound-syncing software and Magic Bullet™ 'looks' plugins. The most substantial drawback to the current release of the Shooter Suite is the lack of key tools. But several of its pieces can be helpful with DSLR shoots.

Several of the non-linear editors will ingest camera-native files, sync files, color correct well enough for a one-light, output and, of course, they are full-on editors. One of the shortcomings of editing software is 'batch processing'. If you have a hundred clips to process, it's better done with software that has robust batching
abilities. Another downfall is their inability to pass the exported files on to other software. That being said, if you're going to edit in AVID, you could do the whole workflow within AVID. AVID version 7.x and newer has a very good 'background' processing function: you can set a bin full of clips processing, then continue working with more clips. This background rendering is currently not available, at least as effectively, in other NLE softwares. The final shortcoming of using an NLE for file processing is the cumbersome workflow to export dailies with burn-ins. This task is still better handled within something like Resolve, SCRATCH or other software.

An advantage of some of the flipping softwares is 'watch folders'. These are special folders that the software checks frequently; if a file lands in the folder, the software automatically processes it, saving the output to specified locations. Some will even send you an email when it's done. Any part of the process that can be automated leaves more time for you to take care of other tasks instead of babysitting renders.
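If you're curious what's under the hood, a watch folder is little more than a polling loop. A minimal sketch, where the folder path and the transcode_to_prores() helper are hypothetical stand-ins for whatever tool you actually run:

```python
# Minimal "watch folder" sketch: poll a folder, process new files.
import time
from pathlib import Path

WATCH = Path("/Volumes/Array/watch")   # hypothetical folder
seen = set()

def transcode_to_prores(clip: Path) -> None:
    # Call your real transcoder here (CLI, API, etc.).
    print(f"flipping {clip.name} ...")

while True:
    for clip in sorted(WATCH.glob("*.mov")):
        if clip not in seen:
            transcode_to_prores(clip)
            seen.add(clip)
    time.sleep(10)   # check the folder every 10 seconds
```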
The 'Other Tools'
Here's a short, quick list of software I have not yet mentioned, and what it does:
- 5DtoRGB. A simple program that will take in files from any AVCHD-codec-based camera and flip them to another codec. The free version does one file at a time; the paid version will batch process.
- QtChange. A very valuable tool when working with files that do not have standard frame rates, or that lack timecode and proper file names (GoPro and most DSLRs).
There are more small tools out there, and more are being added all the time.

Creative Workflow Solutions
As this evolution continues, it's up to you to get your hands on some of the new footage/files and create a workflow solution. Some will be awkward for a while, until the main software producers catch up. One example was the workflow for RED. The non-linear editors couldn't handle the R3D files or the 4k raster size; all editors were basically limited to 1920 x 1080 frame sizes. So, to process the files, you would use RED's software to convert/flip the files to a 1920 raster and a codec that could be read by the editing software. ProRes, for example. But what to do after you had finished editing? You needed to color correct, and interestingly enough, Apple's Final Cut Pro™ could not work with the 4k raster, but their Color™ software could. Everyone worked out a re-linking process: edit in one raster, move into Color, relink to the 4k files for color correction and export from there. The newer Final Cut X now handles 4k natively.
Later, programs like DaVinci Resolve™ could bring in both the edit timeline and relink to the 4k files. This is the nature of what we deal with as technology comes into our workspace.

Of the editing software available, Sony Vegas™ was the first to accept RED files natively. They were followed by Adobe Premiere™, and now Final Cut X will do so as well.

TECH NOTE: It can't be overstated that a software's ability to efficiently process files is critical. Here's an example of just such a case. A digital feature movie production had a DIT using DaVinci Resolve. By day four, the DIT had 18 very expensive camera cards waiting to be ingested and processed. The backlog was caused by the inability of Resolve to do anything else once a render was started; all you can do is watch and wait, while more camera cards are being filled up. On the other hand, with SCRATCH and Cortex the render can start and the DIT can go back into the program to continue ingesting, color correcting and audio linking with little or no hit to the user experience. The other important factor is how many frames of video per second the software and hardware will process. Some tests were recently run, with surprising results. On a custom-built, 'fire breathing' computer, the same digital cinema camera file was run:
• Resolve would process 19 frames per second.
• SCRATCH would process 60-80 frames per second.
• Colorfront would process 90-93 frames per second.
With thousands of frames to process each and every day on-set, the fastest hardware is not always an assurance that the job will get done faster. How the software is written plays a HUGE factor in the actual throughput. There is an old saying when building race cars: "How fast do you want to go? How much money do you have to spend?" They are related. Resolve is free (the full version is $1000). SCRATCH Lab is close to $6000. ColorFront is close to $70,000. If you need to crunch through piles of files, you will have to ante up the big bucks.

Mentioned earlier in the book, 'do no harm' is the phrase to remember when moving files through the workflow. There are several things we don't want to do:
- 'Down-rez' a file, then 'up-rez' that same file. If you need a higher resolution, and the camera original is higher than what you're working with, always go back to the camera files for the 'up-rez' process.
- Constantly flip codecs. Moving from the camera codec (if not ProRes or DNxHD already) to ProRes is fairly painless to the image quality. Moving to H.264 is damaging because it's fundamentally a lossy codec, so it stands to reason that re-encoding that H.264 back to ProRes would not be a good work path for keeping the image as good-looking as possible. If you need several output formats, always re-encode from the highest-quality codec. Never create an H.264 for, say, an iPad, then take that file and re-encode it for the web. It will look awful.
- Reducing data rates to save space, then using that file for encoding. There are several versions of ProRes and DNxHD based on data rates. ProRes 4444 is the highest quality and the file sizes are substantial. ProRes LT is a significantly lower data rate and smaller file size, and it's easy on the processor, so using the LT version allows for faster editing with reasonable image quality. On the AVID Media Composer side, DNxHD indicates its quality by stating the data rate right in the name: DNxHD 36, 80, 115, or 175x. The larger the number, the less compression applied. DNxHD 36 is perfect for editing.
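Those data-rate names translate directly into storage needs. A quick sketch, using nominal 1080p data rates from the published spec sheets (exact figures vary with raster and frame rate):

```python
# Storage per hour from a codec's nominal data rate:
# GB/hour = (Mbit/s) / 8 * 3600 / 1000  (decimal GB).
codecs = [("DNxHD 36", 36), ("DNxHD 175x", 175),
          ("ProRes 422 LT", 102), ("ProRes 4444", 330)]
for name, mbit_s in codecs:
    gb_per_hour = mbit_s / 8 * 3600 / 1000
    print(f"{name:>14}: {gb_per_hour:6.1f} GB/hour")
# DNxHD 36 -> ~16 GB/hour; ProRes 4444 -> ~149 GB/hour
```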
Disk drives fragment through the constant writing and erasing of files. This slows down the hard drive's access speeds as it hunts all over the drive looking for the pieces and parts of a file. iDefrag™ and other software will defragment the files. The defragmenting tools that come with most operating systems are marginally functional in a professional environment.

Be warned that defragmenting can take hours, and it ties up all of the hard drive's I/O and a lot of your computer's processing abilities. It is best done overnight or when you're between 'gigs'. The other warning concerns SSD drives. Many of them have sophisticated re-mapping and file optimization software built into the firmware of the drive. The overriding issue with SSD drives is the lifespan of each memory site; there is a finite number of writes to each. The firmware tries to write files in such a way that disk activity is 'leveled' across the entire drive, so every part of the drive is used equally. Don't use disk optimizing software on SSD drives.

Not everything always goes well: hard drive directories get corrupted; files get accidentally deleted. This is life in the digital media lane. You must have a tool kit handy. It must include software tools that will fix things like bad hard drive directories. These tools would include:
- Disk Warrior™
- Tech Tool Pro™
- Disk Drill™
- REDUNDEAD for RED mags (free)

The offerings depend on your computer's operating system. A virus checker is a good thing to have as well. On Mac OS computers, virus attacks are limited, but they do exist. The weak spot for Macs is cookies deposited by web browsers. These don't require a password to install and can wreak havoc if not caught. MacScan™ is a good tool for this task.

Emergency Kit
"I teach DITs to have disaster kits with them. Think of it as a First Aid kit.... they should already have with them a tool to UNDELETE / Restore files accidentally erased. It happens. Also it's a good idea to know the contacts for tech support for the media type you're working with... (e.g. Sony & Panasonic both offer services to recover files from bad or accidentally erased cards.)…(and) it may be common sense but one never wants to update the OS or drivers during a project. I've seen many a user automatically accept Apple's latest OSX patch only to find out the card reader driver isn't compatible with it, etc." (Dan Montgomery, President of Imagine Software, creator of ShotPut Pro software.)

Two other interesting tools. The first is a file duplication finder: DuplicateFinder™, for the Mac OS and Windows OS, is a wonderful tool to inventory your hard drives. I was always running out of hard drive space, no matter how many drives I added. I ran this software on my 5 connected drives and found many copies of the same files hogging tons of space. The photo libraries and music files were the worst offenders. I cleared 1TB of space when all was done; it was like getting a new drive for free. Where this tool is handy is on the system drive(s), not on backup or array drives. In most cases, we want duplicates to exist there.

The second is Renamer™. This batch file-renaming utility is handy if you need to change the names on lots of files.

Best Configuration of Software
This is a loaded question. As mentioned in the very first part of this chapter, it depends on the workflow: where the file is coming from (camera), and where it will end up (NLE software). Here are some thoughts:
• The first step is offloading the files, without corruption, to backups. ShotPut Pro is just under $100 and is very easy to learn and use. DoubleData is another popular tool on sets; the current version supports Arri and RED camera file formats. There are others, some freeware and some paid, that do admirable jobs at the data backup task. This is a step in the process that CAN'T be ignored or skipped. Pick one, learn it and use it on everything. After backing up:
• If your output is NOT going to a professional edit facility with demanding metadata needs, then Resolve is an easy choice. The free version does have some limitations, but those limits do not affect most indie, broadcast or web-destined productions.
• If your output is going to the big screen, and the camera is one of the high-end units creating large raw files, then you'll need Scratch, Cortex or another of the heavy hitters. Your decision is going to be based on sheer throughput efficiency and passing ALL the metadata into post production.
• If you are handling files from a single-system sound production (picture and sound recorded to the same device), then you could use the NLE of your choice, hopefully the one the project will be finished on. Resolve would be a good choice for this workflow as well.
We will delve more deeply into each of these suggestions in the coming chapters.
6.4 Basic Workflow Issues
"How much money you got? How fast do you want to go?"

This is an old adage often quoted when building race cars; the two questions are inextricably linked. Computers and computational speed are likewise linked to the dollars spent on hardware and software. The faster the machine, the more it will cost to build. If those dollars are carefully targeted at certain parts of the computer system, you will be rewarded later. But be warned: this is technology, and as such, every 90 days or so there's something new and more powerful to spend your hard-earned dollars on.

A good part of your work day is the computer moving files from one storage device to another, or processing files. Timing of when you do these functions is critical. Let's look at some real-world examples.

On an average single-camera production day, using a RED Epic shooting 5k, you might receive 1.25 TB of raw camera data. All these large camera files will come to you periodically throughout the day. Now let's say that they are shooting on 256-gig camera mags and delivering them to you about 80% full, which in turn takes 45 minutes to offload using a FireWire 800 connection. If you were using USB3, it would take about 20 minutes. The most important function is to get those camera mags backed up, so you make it a priority to put the card in the reader and start that process. While backing up, you can start processing the previously backed-up files for delivery to editorial and for creating dailies. Remember that backing up files uses little CPU power but a great deal of the available bandwidth to and from the drive array. You put 20 clips into the queue and start transcoding. Then you notice that the offloading of camera mags is slowing down. Why? It's called 'head contention'. It tends to slow all your computer functions down because the core of the entire system, the mechanical hard drives, is working at its maximum abilities.

Here's how head contention works. The platters inside the drive spin at some fixed rate: slower drives spin at 5400 rpm, while digital-media-capable drives whir in the neighborhood of 10,000+ rpm.

Few files are written contiguously on the platters inside the drive. There are bits and pieces of the file scattered all over the drive, maximizing the use of the drive's 'memory sectors'. So, the read-write head has to dash all over the surface of the drive to get the complete file. This is measured as the read-write speed in the drive specifications. At some point the arm holding the read-write head cannot move any faster. The drive is then said to be in 'head contention'; it has reached its physical limits.

The little arms that carry the read-write heads are driven by very powerful electromagnets. The arms are very rigid but lightweight, so they can be moved fast and easily. Go to YouTube and check out a video showing the inside of a working hard drive. It's amazing.

So, if you are backing up files (writing to the drive) and, at the same time, reading files from the drive for the software to process--and then writing those files back to the same drive--the hard drive will saturate and all the processes will seem to be going in slow motion. As a side note, using SSD drives does not entirely relieve you of the contention issue. SSD drives have 'wear leveling' software built in that does much the same juggling as the mechanical drive when handling data: the wear leveling software tracks each memory location on the SSD drive and attempts to assure that each gets the same number of reads and writes. There is another similarity between the mechanical drives and the SSD drives. Mechanical drives break down (fail) because they are mechanical; the parts inside wear out or the surface of the disks fails. The memory sites that comprise the structure of the SSD drive fail as well. There is a
defined number of times a memory location on a solid state drive can be read or written to; then it fails. The processing of the wear leveling software causes latency, a delay from the time the data bits arrive at the drive to the time that data is saved into the memory sites. It is less of a delay, because the SSD drive doesn't have moving parts, but there is a delay, albeit not much. Drive arrays help maintain data speeds because the data is spread across several drives: the read-write times are reduced, and the point where the array maxes out is much higher. Arrays are always preferred as the core of the DIT's workstation.
How Much Time To Make a Backup?
Back to our problem. With file transcoding and mag backups running simultaneously, when do you create the additional data backups required by production? You need to create two more before the day is over. Some DITs wait until the production day is completed, then do the backups. You might want to think long and hard about this practice. If it required an average of 45 minutes per camera mag to back each one up earlier in the day, and you received 10 mags that day, how long do you think it will take to make the final two backups? It's going to depend on your system and the drives production gave you to use for backup. If they are USB3 drives, you will see faster offloads than the original camera-mag-to-array copy, but simple math tells you that it's going to be a long day!
10 cards x 45 min. each = 450 min., or 7.5 hrs. What's the workflow solution? Back up whenever you can: between card ingests, during the lunch break. You will still have some data to back up at the end of the day, but not 7.5 hours' worth. Ultimately you MUST think of this process as an assembly line. Once started, the data will not stop coming to you until the cameras are turned off for the day. And even though the cameras are shut down, those last mags are just arriving at your part of the assembly line.
Simple Math
Part of every job you will do is calculating how you will get the job done within the time allotted. This is directly related to how much you will charge for the job. If you remember those painful math word problems from high school, they are now applicable to what you will be doing every day on-set. Here's what you will need to know to solve the basic question of 'how long will this take?' (a quick sketch of the arithmetic follows this list):
- What are the data rates of the codec being used on the production? This directly relates to how much hard drive space you will need to hold all the data.
- How many cameras are on the shoot? This is fairly straightforward: two cameras, double the amount of data to process.
- How fast will your system process files? This you can't really estimate; you have to run tests. Resolve and Scratch will display the frames per second (FPS) they are processing.
- Interconnect speeds. What is the data throughput of USB3, USB2, FireWire 800, etc.? The Macworld article 'How Fast is USB3 Really?' is a strongly suggested read.
There is a chapter later in this book (Professional Problem Solving) that will put you through real-world situations where all this information will be needed to answer the questions. If you can answer each of the questions correctly, you are ready to wade into the profession. If you can't, it might be time to either do your homework so you can, or look for a different career.
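To make that arithmetic concrete, here is a minimal sketch of the back-of-the-envelope math in Python. The sizes and times are the chapter's example figures; the function names are illustrative, not part of any real tool, and you should plug in rates you have actually measured on your own system.

    # Back-of-the-envelope offload math (a sketch; use your own measured rates).

    GB = 1000  # MB per GB, decimal, as drive vendors count

    def minutes_to_copy(size_gb, rate_mb_s):
        # Minutes to move size_gb at a sustained rate of rate_mb_s.
        return size_gb * GB / rate_mb_s / 60

    def sustained_rate(size_gb, minutes):
        # MB/s actually achieved on a timed copy: the 'true' pipeline speed.
        return size_gb * GB / (minutes * 60)

    # The chapter's example: a 256 GB mag handed over ~80% full,
    # taking about 45 minutes over FireWire 800.
    data_gb = 256 * 0.80
    print(f"Implied FW800 rate: {sustained_rate(data_gb, 45):.0f} MB/s")  # ~76 MB/s

    # The end-of-day problem: 10 mags at 45 minutes each.
    print(f"Final backups: {10 * 45 / 60:.1f} hrs if left to the end of the day")

Run the timing test once with your actual reader, cables and array; after that, estimating any offload is a single multiplication.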
6.5 Chapter 6 Review Questions
Answers located in Appendix C.
1) For the DIT workstation's hardware configuration, what is the most important consideration?
a. A quality output
b. A fast, clean, glitch-free data path(s)
c. A simple drag-and-drop operation
2) GPU stands for what?
a. Central processing units that control the data path of the computer.
b. Processing units that assist rendering of video.
c. General Pixel Units.
3) If we have a USB3 bus with two connections, one a USB3 drive and the other a USB2 drive, which of the following is true?
a. The USB3 bus will run at full speed because the bus auto-shifts data rates on an individual drive basis.
b. The USB3 bus will run at USB2 data rates.
c. The bus won't work because it has a mix of different USB standards.
4) The entire data path speed is limited by?
a. The largest cable in the chain.
b. The power supply to the hard drive.
c. The slowest connection protocol in the chain.
5) 'FLOPS' is a speed reference to what?
a. The CPU processing cycles.
b. The excess data loss because of RAM memory leakage.
c. The speed of the RAM memory.
6) I-O stands for Input-Output.
a. True
b. False
7) LTO backup refers to what standard?
a. Linear Terabyte Optical drive.
b. Last Tape Output.
c. Linear Tape Open tape drive format.
8) SD memory cards are 'speed rated' by what system?
a. Class
b. Megabytes
c. Card Reader
9) SxS (S by S) memory cards are memory storage devices created by Sony.
a. True
b. False
10) SSD drives are not long-term reliable because?
a. The spinning disk inside can fail at any time.
b. They are expensive.
c. The memory locations have a limited number of read-write cycles.
11) RAID stands for?
a. Random Access In Drives
b. Redundant Array of Inexpensive Drives
c. Read Average In Drive array
12) YUV color space is the electronic version of what?
a. CMYK
b. XYZ
c. RGB
13) The ability of a monitor to render a very wide range of colors and shades is referred to as the ____ of the monitor.
a. Gamma
b. Gamut
c. Contrast ratio
14) Because it's your kit system, you don't need a checklist each time you take it out on a job.
a. True
b. False
15) Because power is, or can be, unpredictable on-set, what would you add to your kit to ensure continuous power for your system?
a. Solar cells
b. A second generator
c. UPS
16) You have 650 gigs of data to back up to a USB3 drive. USB3 can move 114 MB/sec. Roughly how long will it take to copy this data?
a. 150 min.
b. 1.5 hrs.
c. 1.5 days
d. 15 hrs.
17) A RAID-0 configuration offers what advantage?
a. Redundant striping of all data
b. Rebuildable configuration if a drive fails
c. Data read-write speed
7 Ingesting Assets From Camera
'If you don't have your data in two places, you don't own it.'
I'm sure we've all heard that before. Make it your mantra. Completion insurance companies have, and they're very strict about this. On a really low budget project, a corporate industrial, or an infomercial, some believe that, due to the nature of the project, backups are not needed; somehow they feel they're impervious to data loss. Hard drives don't differentiate between big-budget and low-budget projects. They just die. And without question, the biggest relative damage from data loss will be on the lower budget projects.
Insurance Coverage Considerations
Production insurance companies now dictate how the data will be handled. Most require the following:
• Two or three separate backups. One can stay on set, one goes to a vault off-set, and the third can shuttle to editorial.
• All files backed up will be checksummed and a record kept.
• An LTO tape copy will be struck as well.
This varies from production to production and between insurance companies, but the core concept is that tape-based storage is a tried-and-true method of protecting data, with a longer track record than hard drives. Remember the discussion about the workstation configuration and the absolute need for the fastest interconnection speeds and read-write abilities of the drives: checksumming leverages some of the CPU power, but copying files saturates the pipeline from the camera mag, through the computer, to the drive array. You will be moving gigabytes, if not terabytes, of data daily before you even get to the one-light, flipping and other tasks needed by the production.
Copying files requires the files to be read from the camera mag, and that speed is a function of the card reader, the camera mag and the connection. Higher speed cards are very helpful, with the highest speed cards required by the larger production cameras. Often the slowest link in the line is the actual connection to the reader. It goes without saying that USB3 is faster than USB2, but everything in the chain has to be up to that maximum speed. Another consideration is what else is accessing the target drive. If you are copying files to an array that is also reading and writing for transcoding, that array of drives is getting saturated. This is also called head contention, explained in detail earlier.
A common and troubling phrase used by inexperienced DITs is "We'll just copy this over and look at it." How long will the copying take? This is a vital calculation that you will do daily as a DIT. Here are some thoughts about how to approach answering that question.
- You must know the true speed of your data pipeline. Not the advertised speed, the true throughput. To determine this, simply take a known file size and copy it. On a recent film production, SD camera mags were being off-loaded through a USB3 card reader to an array. The 24 gigs took, on average, 12 minutes to be MD5-checksum copied using ShotPut Pro 5. Using DaVinci Resolve's Clone function, it took nearly an hour. You must run tests, and avoid reading and believing the marketing hype.
- Know the difference between megabytes and megabits. File structures come in 8-, 10- and 12-bit flavors, but for this conversion all that matters is that 8 bits make a byte: divide a figure in megabits by 8 and you get the equivalent in megabytes. A gigabit (1,000 megabits), for example, works out to 125 megabytes, or 125 MB. The reality of the process becomes clear when the above-mentioned 24 gigabytes took 12 minutes to copy through a USB3 pipeline that advertises multi-gigabit throughput. If one went by the advertised data rate, those files should have copied in about three minutes. But reality, and testing in a real-world situation, speaks volumes.
The rest of this chapter will step through off-loading data to backups.
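As a sanity check, here is that conversion and the production example in a few lines of Python. The 5 Gb/s figure is USB3's advertised signaling rate; the 24 GB and 12 minutes are the numbers from the example above.

    # Measured vs. advertised throughput for the 24 GB offload described above.

    advertised_gbps = 5                           # USB3's advertised rate, gigabits/s
    advertised_mb_s = advertised_gbps * 1000 / 8  # bits -> bytes: 625 MB/s on paper

    copied_gb, minutes = 24, 12                   # the ShotPut Pro MD5 copy above
    measured_mb_s = copied_gb * 1000 / (minutes * 60)

    print(f"Advertised ceiling: {advertised_mb_s:.0f} MB/s")  # never reached in practice
    print(f"Measured:           {measured_mb_s:.1f} MB/s")    # ~33 MB/s: test, don't trust the box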
7.1 Setting Up File and Folder Structures
If you set your file structure up properly, with simple logic and a naming convention, anyone can look at the structure and figure out what is what. Here's a suggestion that might make sense:
• Name of the project
• Day-of-days the associated files were shot
• Camera number the footage was from
• Camera card or hard drive number
With this thought process in mind, here's what the file structure for a project called 'Wicker', using two cameras, would look like. We're starting at the root directory:
Wicker
    Day-1
        Cam-A
            A001
            A002
        Cam-B
            B001
            B002
Fig. 1 Basic folder structure.
If different cameras were used, then the 'Cam' folder names might benefit from more information. Maybe Cam-1 is a RED Scarlet and Cam-2 is a RED One; it might be useful to add that info to the camera folder names:
Cam-1_Scarlet
Cam-2_RedOne
Referring to Figure 1, there are more folders that need to be added because of the workflow and overall organization. Audio will be sending you two offloads during the day, and you will be creating dailies and, maybe, transcoded files for the editor. Figure 2 shows a simple file and folder structure that works for almost any production.
Wicker
    Day-1
        Cam-A
            A001
            A002
        Cam-B
            B001
            B002
        Audio
            Card-01
            Card-02
    Dailies
    Editorial
Fig. 2 Full file and folder structure.
Notice that the individual day folder contains the cameras for that day and the audio for that day. It is best to keep each day's work contained in its own folder structure. The Dailies and Editorial folders will hold the whole of the production's work. The editors do not care what happened on a given day; they typically organize by scene and take, which is on the slate. Dailies can be dumped into one folder as well. Once you have done the day's work, it is rarely referred to later. However, if you find the producers or director wanting to see a specific shot from a specific day, then create a Dailies folder under each day's main folder. Either way works; it's up to the needs of the production and what might make your job easier. Some backup software will help set the folder structure up from within that software. It goes without saying that you should know whether the software will do this before using it on-set. I have found it helpful to establish the folder structure I feel is appropriate early in the production, typically on the morning of the first day, if not the night before. I create these folders on the DIT workstation's drive array for the first day's anticipated work. Based on what actually transpired, and on feedback from the editor(s), I modify that structure for the rest of the shoot. Some post houses appreciate having certain folders first in the folder structure. It's easy to do this by adding an underscore or a tilde (~) as the first character in the folder name, as shown in Figure 3.
TECH TIP: Notice the '_' between the camera number and the camera name. This is important if you're moving your files from one system or environment to another. Some storage environments and OS's will add the dreaded '%20' where a blank space is found. That would make the above-named camera file folder look like this: Cam-1%20RedOne. Not very readable. Make it a habit to add an underscore in place of a space.
"Never rely on editors to 'just get it', or to understand or remember your conversations. It's helpful to include a text file with the drives you send to editorial that starts out 'Per our conversations...'," says Mindy Trim, a production coordinator and asset manager on several reality and drama TV shows. "Then include a quick reference guide to the folder structures and any color code system you have created on the drive. I always include a 'letter to the editor' text file on each and every drive I send forward." Mindy Trim is currently an assistant editor on the 'American Ride' and 'Granite Flats' TV series.
Fig. 3 Folders with a ‘~’ move to the top of the folder structure.
Again, it’s imperative that you communicate with the editor and help by providing them what they need to work efficiently.
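If you'd rather script the structure than click it into existence each morning, here is a minimal sketch using Python's standard library. It builds the Figure 2 layout for the example project 'Wicker'; the names are the chapter's examples, so adapt them per production.

    import os

    root = 'Wicker'
    day = os.path.join(root, 'Day-1')

    for folder in [
        os.path.join(day, 'Cam-A', 'A001'),
        os.path.join(day, 'Cam-A', 'A002'),
        os.path.join(day, 'Cam-B', 'B001'),
        os.path.join(day, 'Cam-B', 'B002'),
        os.path.join(day, 'Audio', 'Card-01'),
        os.path.join(day, 'Audio', 'Card-02'),
        os.path.join(root, 'Dailies'),
        os.path.join(root, 'Editorial'),
    ]:
        os.makedirs(folder, exist_ok=True)  # exist_ok: safe to re-run on Day-2, Day-3...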
7.2 What is Checksumming?
In reality, it's file transfer confirmation. Checksumming is one of the methods used to ensure a file copied to a new location is an exact copy of the original. Here are the basic 'flavors' of file copy confirmation:
- File copy, or Finder copy. It's what we do daily: drag the file from one folder to another, or one drive to another. The only integrity check performed with this kind of copying is based on file size. If the sizes match, the system thinks all is well. But what about the bits inside the file? They can get out of order, for example. The file will still be the same size, but it is fundamentally corrupted, and you won't know it until it's too late. There have been serious issues with files copied between different OS versions on the Mac: Apple changed the code for how it accounts for file size, and copies made by one version of the OS, which were perfectly fine, couldn't be opened by the newer OS version. You have probably moved, or copied, thousands of files this way and never had an issue. I wish I could say that. From my own experience, this is the file copying method of last resort.
- Checksum. This method of file transfer security operates in a different world than a straight file copy. The program looks at the file and generates a series of numbers that represent the bits in the file. When the file is copied, the program looks at the copied file and compares it with the checksum numbers. If they match, the file is an accurate copy. There are various levels of checksum verification. Figure 1 is from ShotPut Pro 5; notice the SHA 256 and SHA 512 checksum listings. The numbers indicate the depth to which the program probes the file and sets 'check data' to reference. The higher the number, the better the checking, and the slower the files will process.
Fig. 1 Typical list of checksums, with several additional methods of file copying.
The MD5 checksum is becoming the standard for most post production houses. You will need to ask which they prefer, because the checksum file you create on-set will follow the files through the pipeline. If post uses MD5, their programs will compare the original and the copy on the basis of the existing checksum data, rather than creating the checksum data file again each time the files are transferred. This is a big savings in time when you consider copying hundreds of video clips.
The CRC32 checksum is a faster way of doing the file verification. Not yet common across this industry, it does copy files much faster than MD5. If you can use it, and post is okay with this form, save time and select this option.
The 'File Size' selection is just like the command in your computer... well, not quite. Imagine Software added a bit more 'smarts' to their version, making it more robust. Still, it's the weakest of all the file verification processes and should only be used as a last resort. Interestingly enough, there's also a 'No Verification' option, used when nothing else works.
It doesn't get any more complicated than this. It's a simple, automated process that you can set, and forget while it's working. If there are errors, the software will alert you.
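Conceptually, a checksummed copy boils down to "hash the source, hash the destination, compare." Here is a bare-bones sketch of that idea using Python's standard hashlib module; the file paths are hypothetical, and real tools like ShotPut Pro layer batching, logging and multi-destination copying on top of this.

    import hashlib

    def md5_of(path, chunk=8 * 1024 * 1024):
        # Hash the file in chunks so multi-gigabyte clips don't fill RAM.
        h = hashlib.md5()
        with open(path, 'rb') as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def verify_copy(src, dst):
        # True only if source and copy produce the identical digest.
        return md5_of(src) == md5_of(dst)

    # Hypothetical paths, for illustration:
    # verify_copy('/Volumes/CamMag/A001_C001.mov', '/Volumes/Backup1/A001_C001.mov')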
7.3 Checksum Softwares
Be advised that, as of v11, DaVinci Resolve does have a 'Clone' (copy) function built in that uses the industry-standard MD5 checksum copy. However, the resulting .txt file does not contain any tracking information, which makes it highly unlikely that it would stand up under the intense finger-pointing that results from a botched file transfer. It's also insanely slow compared to other software on the market.
ShotPut Pro 5 will be used in this book, and by large numbers of DITs worldwide. There are other options out there. The checksum computer code is licensable, and many vendors include it as the core part of their specific or overall package. By 'standard coding' we mean the checksum file meets the agreed-upon standards, so other software can read the checksum data and confirm the integrity of the file. Some manufacturers of hardware backup devices have tweaked the code and the files; these will only work reliably with their own line of products. Not a good thing if you're the middle-man and need to hand off a file to someone else. The basic, bare-bones software packages (usually free) don't offer the more professional, on-set-proven features that the commercial products include. This is an important consideration if you're being paid to do this work. Important features to look for:
• a user interface that is simple to use and understand,
• batch processing,
• multiple destination output from a single file,
• creation of a log file.
The last item is more than just a log file. It should contain information that is critical to tracking the offload. For example, it should have the date and time the offload was done, and the source path and the destination path for each file copied. Some of the bare-bones software just lists the files and the checksum numbers. That will not stand up in an argument with the insurance company or the post house if there is a problem with a file.
Always make sure that whatever software you settle on outputs data that can be read by others in the data pipeline. An often overlooked feature of the Mac OS is the ability to use the Terminal program and do a command-line MD5 checksum. Although not elegant, it works very fast and returns the confirmed checksum numbers for comparison. Software to look at, if you're building your own DIT station or just want better file transfer confidence for your personal files:
• ShotPut Pro™, Imagine Software. Student discount available. Very robust features, easy to use, well supported.
• Double Data™, NightSky Software (used to be R3D Manager). Now handles all file types. Windows and Mac.
• Alexa DataManager, specific to Arri Alexa camera files. Works wonderfully if you're tied to an Alexa-based shoot. Windows and Mac.
• Checksum Checker™, free. Not sure of its total abilities or interface.
• Checksum Tool™ by Soundforge™, in alpha. Free. Windows only.
• Quick Checksum Verifier™, freeware. Windows only.
• MD5 & SHA Checksum Utility™, freeware. Windows only.
7.4 Exercise 4: File & Folder Structures
In this exercise you will build the folder and file structure on your hard drive that will be used for some of the following assignments.
⌛ This exercise should take 10 min. to accomplish.
You will need:
• Course assets on your drive
• Your personal hard drive
Working on your personal hard drive, create the following folders:
1. Inside the folder AssetMgt Course, create a folder called: Balloon_Festival
2. Inside the Balloon_Festival folder, create the following folders:
Day-01
Day-02
Day-03
3. Open the Day-01 folder and create the following folders:
Cam-A
Audio
Dailies
Editorial
Notice that all these asset folders now reside in a 'day'-specific structure.
4. Open the Audio folder and create the following folders:
Audio-01
Audio-02
The finished file structure, inside the Day-01 master folder, should look like Figure 1:
Fig. 1 Folder structure.
Submission for grading
Turn it in for grading, per your instructor's directions.
7.5 Exercise 5: ShotPut Pro Interface
In this assignment you will go through the basics of the ShotPut Pro software interface. It's really simple once you get the drift, and that won't take long.
⌛ This exercise should take 50 minutes to accomplish.
You will need:
• Course assets
• Personal hard drive
• ShotPut Pro software
To get started with ShotPut Pro, download a demo version from the Imagine Software web site; it will allow for 10 file downloads. Install the software on your computer or your external drive. It does not use system-based resources.
Then watch the following videos, located on the thumb drive provided with the course, inside the ShotPutPro tutorials folder:
• ShotPut Pro introduction
• Offload presets
• Preferences
• Logs
They can also be found on the Imagine Software website. Look to the bottom of the page for the video links. http://www.imagineproducts.com/index.php?main_page=product_info&products_id=2
1. Make sure your hard drive is plugged into the computer and available.
2. Plug your thumb drive into the USB port on the computer. The USB connections on the keyboard are not as reliable, and they're not USB3 on many systems. Make sure both the thumb drive and the external drive can be seen on the desktop.
3. Open ShotPut Pro on the computer, most likely located in the special folder you created on your applications dock.
On the upper left side of the ShotPut interface (Fig. 1) you should see the Attached Media listing. It's called 'Attached Media' for a reason. What is available to read from or write to might be any one of several storage devices, including network drives, local drives, thumb drives, memory cards, etc. What you see will differ from the screen capture, but you should see the external drive and the computer drive. These are the drives/media available for you to move media 'from' or 'to'.
Fig. 1 Attached media listing.
ShotPut Interface
Scenario: You're on a location where the production is using multiple 5D DSLRs to capture high definition video images. You have been handed the CF card from one of the DSLR cameras. Your goals are to get those files into a structured file folder arrangement and ensure the files are intact on the backup drive.
This exercise requires your personal HD to be visible in this window. We will be treating the assets on the course thumb drive as the CF card from the Canon 5D DSLR.
Setup the Preferences
The first setups we need to do are in the Preferences part of the program, located under the ShotPut Pro menu.
4. Open Preferences.
Think of these Preference settings as the master control of how the software will work. These do not need to be set or re-set often, but they do affect how the software processes work. Stepping through these settings:
- Offload Mode Preferences has two settings, Manual and Automatic. If you're a one-man band while shooting, Automatic is a good setting to tick. But as DITs, we want control over what happens, when it happens, etc. Typically the setting should be Manual. (Fig. 3)
Fig. 2 The whole Preferences settings window.
Fig. 3 Offload Mode settings.
The next settings, check boxes, are fairly self-explanatory, but there are nuances that need to be understood.
- Begin Offloading... this is part of the automatic functions, separated out for you. You can be in manual mode but set the system to start the offload as soon as you drag a folder into the queue. Typically this is left UN-checked.
- Ignore Hidden Files... there are hidden files on camera memory cards and hard drives that are not needed, so why copy them? It only wastes space and time. Leave this checked.
- Bundle Items Dragged Into Queue... files and folders you drag into the queue will all end up in the same folder on the backup drive. Typically this is a good thing, so leave it checked.
- Buffer Size... this is a feature that allows ShotPut to allocate RAM memory to help accelerate offloading of data from memory cards or hard drives. You must play with this setting and your machine to determine the 'sweet spot' for the processes. If you are using CF, SD or other card readers, it is possible to overheat the card reader if the buffer is set too high, or too large. Always test this and monitor the physical temperature of the card reader. Heat will damage both the memory cards and the card reader. If in doubt, set the buffer to small or medium at first, then monitor the memory card and card reader temperature.
The File Verification Preferences are settings that will be applied, again, to all files processed. The MD5 checksum type is the preferred setting for most production facilities today.
Fig. 4 Checksum verification settings.
The other settings are different algorithms offering higher or lower processing criteria. The bottom two, File Size and No Verification, were discussed earlier in the chapter. Figure 5 lists features to think about before you commit to them.
Fig. 5 Settings for what is to happen after the files are done being copied.
- Automatic Card Eject is just that: it will unmount the card when it's done. It won't unmount the card if the card fails to copy properly. Most prefer a manual eject.
- Auto Format the Card... this is one that should give you pause. If it re-formats the card, all is lost. It's not a good idea in most cases, because there's always some file (or more copies) that needs to be made before the card is re-formatted. However, for reality shows that are tight on budget, short on memory cards, and operating in run-and-gun mode, this feature is handy: the card is read, backed up, formatted, and ready to go back into the camera.
- Play Sound will beep when the process is done. It's a simple and really nice way to get your attention, but be careful with this if one needs to be quiet on set.
- Rename Card. This will work on some cameras and not on others. Many will rename the card when it's re-formatted. If the camera will respect a card being renamed, and leave it be, you can use this feature. Most leave it un-checked.
At the top of this same Preferences window are the Logs setup functions.
5. Click on the Logs icon.
The log file is the documentation that the file was transferred, containing information about what, how, when, and where it was saved.
6. Set the file settings as you see in this screen capture. In the 'Add notes to log' area, put your name instead of the one shown in the screen shot.
A quick note about two of the settings; the output formats for the log file have several selections:
- TXT creates a text file that can be opened by any text editor or word processor.
- XML is not used often.
- CSV is the standard, generic spreadsheet format.
- HTML is just that, a nicely formatted web page document containing the off-load information.
TXT is the one most widely used. We'll use HTML in this assignment; it has a very easy-to-read layout, as you will see.
Fig. 6 Preferences settings for output log file.
- Output Location is cryptic at best. Simply put: where do you want the log file placed? Some DITs like the file put inside the offloaded files' folder. Others like it put into a common folder. The advantage of the common folder structure is simple: you have one place to look for all offload logs. You don't have to search the drives for a specific mag on a certain day; all the logs are in one place on that backup drive. Very handy. We're doing limited file transfers in this assignment, so we'll have the system put the log inside the backup folder.
Set Up Offload Preferences
7. Close the Preferences window.
We now need to set ShotPut Pro's offload preset. ShotPut will use this information to manage file names and where you want the checksummed files to be copied. Think of this function as macros that auto-apply specific settings, making the whole process faster. If you
are using two different cameras, you can set one preset for the 5D and another for the EX-3 camera. By selecting the right preset just before you begin an offload, the system instantly toggles to the preferred settings for that camera's files. This is a huge time saver.
8. In the main window, click on the '+' below the Offload Presets box.
This creates a new Default Offload preset. The red 'X' indicates that it is not active. The pencil icon to the right is for editing the preset.
Fig. 8 Offload Presets window.
9. Click on the pencil icon to open the Preset window.
10. Change the name of the Preset Title from 'Default Offload' to 5D Mags Cam-1, then hit the TAB key on the keyboard. Notice the name change in the Offload Presets list to the left. (Fig. 7)
Fig. 7 Offload Presets ready to edit.
The Convention Type should be changed to Card Name. The Prefix and Suffix boxes are very handy: they can add more information to the folder the camera files will be moved into, making it more understandable. Without the additional information, the file folders will be named 01, 02, 03, etc. It would be far more informative to have a prefix of 'Cam-1' or 'A-Cam' and a suffix representative of the production. In this case, the footage is from the 2013 Panguitch, Utah Balloon Rally, so adding 'Balloon' to the end of the folder name helps link the footage to an event. However, adding this information to the card name can cause problems with post editing later on, because it creates a very long file/folder name, which is unwieldy. Although Figure 9 shows what these fields would look like and the resulting file name preview, we will NOT add this information. Notice that the Preview box gives you instant feedback on what the new file name will look like. Very handy.
Fig. 9 Set custom file prefix and suffixes.
One of the universal features of ShotPut is that all new settings have a red X in front of them. The changes you make are not automatically put in force unless you make that decision. You will need to click on the red X in front of the Prefix and Suffix to make those active. You will know they are being applied by observing the Preview box: it should show an example of the full new name that will be applied to the folder into which the camera files will be copied. Notice the importance of the '-' before or after a prefix and suffix. Without them, the folder name would be Cam-101Balloon. Hard to tell if that's camera 10 and file one, or what it really is: file offload 01 from Cam-1. And remember, a blank space in a file name gets replaced by a '%20' on some file systems. This makes file names hard to read.
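A tiny sketch of this naming logic, with the exercise's example values: separators instead of spaces, plus the zero-padded counters covered in the tech tip that follows. The function is illustrative only, not part of ShotPut.

    def offload_folder(prefix, n, suffix):
        # '-' separators keep the parts readable; :02d pads 1 -> '01'
        name = f'{prefix}-{n:02d}-{suffix}'
        # replace any stray spaces so no file system turns them into '%20'
        return name.replace(' ', '_')

    print(offload_folder('Cam-1', 1, 'Balloon'))  # -> Cam-1-01-Balloon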
TECH TIP: Computers organize files according to the 1's and 0's, in a logical order that makes sense to computers... not necessarily to humans. It's always advisable to name file number one as 01, file number two as 02, and so forth. If not, the listing of files could run 1, 10, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, etc. Putting a '0' in front of the first nine numbers keeps things in order.
Set Up Output Destination
Figure 10 displays the final setting, the Output Destination. This is where the files will be copied when you select this specific offload preset. In fact, you can have multiple destinations for each offload preset. Most completion insurance companies require three separate backup drives; ShotPut will do these backups all at once, faster than doing them individually.
11. Click on the '+' button below the Output Destinations box. (Fig. 11) A navigation Finder
window will open showing all connected drives. Navigate to the folders we set up in class on your personal external hard drive. Locate: ShotPut Backup>5D
12. Click on the Save button and the file path will be loaded into the Output Destinations listing.
Fig. 10 Output Destination listing window.
Fig. 11 Output Destinations list. A green checkmark means a destination is active.
If this is the first destination in the list, it will be automatically checked. If there are other listings, they will have a red X next to them, and you must click on the red X, changing it to a checkmark, to make each one active. The process you have created will now copy to all the selected destinations. On-set, this could be all the backup drives plus your system array. You can have many destinations in this list, but only those 'checked' will be used. Handy if you have one camera's footage going to one drive and another camera's to a different drive. Everything's set for assets from 5D camera #1 for this shoot. You can leave the Offload Preset setup tool open if you like. For now, let's close it by,
13. clicking on the red close-window button in the upper left corner of the window.
14. In the Offload Presets box, make sure to click on the red X to make '5D Mags Cam-1' the active preset. The X will turn to a green check mark.
Selecting the Files You Want Backed Up
In this case, we will be selecting the mock camera mag on the thumb drive.
15. Navigate to the following folder on your course thumb drive: 5D assets>Sp-Assign-1>SP-Assign-1>DCIM
Inside that folder is another sub-folder, 100CANON, that contains all the .mov files and a small image thumbnail of the first frame of each, the .THM files. (Fig. 12)
16. Drag the DCIM folder into the ShotPut Pro interface where it says 'Drop files here'. (Fig. 13)
You now see the DCIM folder listed. If you had the ability to put several camera mags from the same
camera, in card readers connected to your computer, you could drag all the DCIM folders into this window and ShotPut would process them sequentially.
17. Click the Begin button located in the upper right of the ShotPut interface. (Fig. 13) The processing will begin.
Fig. 12 Canon 5D file structure on the camera card.
Fig. 13 Drop files and folders into this area to queue them up for processing.
18. Once the file copy is done, open the folder on your external hard drive and see the results.
• There should be the folder AssetMgt and, inside that, the ShotPut Backup folder. Inside that, the master 5D folder.
• The folder(s) inside the 5D folder will be created by ShotPut. Notice the name is a compilation of the information you entered into ShotPut for a master folder.
• Inside that folder is the DCIM camera card folder, which contains all the contents of that card. Then there is the OffloadLogs folder, which has the transfer log for this card.
Fig. 14 File structure of offloaded files.
19. Double-click on the DCIM_log.html file. It will open in a web browser. (Fig. 15)
TECH TIP: If you remember the discussion in the hardware section of this book about connections and their speed, it's never more apparent than when doing the copying and checksum process. If you do decide to make all three backup copies at once, be aware of the connections and their speed implications. By this I mean that if two of the drives are USB3 on the same bus and the other file destination is your very fast array, the entire process will be held up by the slower USB3 drives. Many DITs will back the camera mags up to the fastest storage on the system, then release the card back to set; when time allows, the system then backs those files up to the additional drives, the goal being to get the camera media back in play as soon as possible. One camera to be aware of, where backing up as fast as possible is very important, is the Phantom. Phantom mags are large and fill fast, faster than they can be off-loaded. If there's one camera and workflow that requires you to back the card up as soon as it's handed to you, it's the Phantom.
TECH NOTE: The speed of the copying and checksumming is largely a factor of the number of processors in your computer, the drive connection speed, and the speed of the drive. eSATA and FireWire 800 drives are substantially faster than USB2. USB3 is potentially faster than both, but actual real-world tests prove otherwise. It is imperative that you have the fastest drives and connection speeds you can afford if you're doing this professionally. RAID arrays are mandatory; with the amount of data being generated by today's HD and UHD cameras, you can get overrun. In ShotPut v5 and newer, you allocate a 'buffer size' setting in the Preferences. This is something you have to play with a bit: if the buffer is set high, the resulting throughput will be high, but this can overheat your card reader. Really, it will overheat the circuits on the reader and damage both the reader and the memory card. Professional DITs often have several card readers and swap them out throughout the day. It is suggested that if your memory card has a data rate of 100 Mb/sec or less, you set the buffer size to small or medium at first and monitor the operating temperature of the card reader.
Look over the information in that log file. If anyone questions you as to whether a card or file was processed, this is the place to double-check.
Submission for grading
Submit your .html file per your instructor's requirements.
Fig. 15 Screen capture of offload log in HTML format.
7.6 Chapter 7 Review Questions
Answers located in Appendix C.
1) The only productions that are at risk of losing their data files are the high budget productions.
A. True
B. False
2) Completion insurance companies typically require how many backups to be made of the digital assets?
A. One to LTO tape
B. Two copies to DVD disks
C. Three copies
3) What does checksumming do when transferring files?
A. Copies the file to the new location and only checks the total file size against the original file.
B. Copies the file to the new location and confirms that the bytes within the file are the same.
C. Copies the file to the new location and sends a confirmation back to the camera that the file has been copied.
4) The standard checksum protocol is fast becoming:
A. SHA256
B. End Point Checksum (EPC)
C. MD5
5) Once a file has been checksummed and the side-car file created, additional copies can be made of the file(s) using that metadata to speed the process.
A. True
B. False
6) Connection speeds between storage devices are not as important as the CPU speed when backing up files.
A. True
B. False
7) File and folder structures are best set up by which scheme?
A. Just one folder containing all the assets.
B. Break it down by camera.
C. Break it down by day and then camera.
D. It really does not matter.
8 Delivering Dailies
As mentioned earlier, 'dailies' is the term used when referring to looking at what was shot the day before. As you'll remember, it was actually three days later in the film-based world. Using digital, we can deliver what was shot in the morning by that afternoon, or early the next day. It depends on the amount of data, what needs to be done to the footage, and the processing power of your DIT workstation.
Von Thomas, noted DIT and workflow educator, talks about his work on the movie Maniac, shot on the RED Epic camera: "....keeping the data recorded to a 128GB card under 30%, I can download that card in about 15-20 minutes using R3D DM. I discourage the prolonged shooting to a card, it's too risky, and NOT a good workflow. Remember if you are shooting multiple setups to a card, and there was a problem, you won't know it until way later. If your production has to reshoot because of a problem not caught in time, it will cost production time, and money. My average day was about 13.5 hours. There would be a cut off point for what went up as dailies. Usually, I have all but the last 2-3 rolls completed color corrected, sunk audio, transcoded to Avid and ProRes for dailies, and finally uploaded to Wiredrive, that way the files are available that evening to view at wrap. The remaining rolls, will be in next days uploads and dailies. This is a good procedure to follow, because, I'm not sitting on the clock (producers like that, they get same day service, they like that too), and I leave shortly after wrap."
The basic workflow for dailies is:
• Offload the camera card/magazine to the system drive array.
• Make additional backups from the array to multiple drives.
• Using one of several pieces of software, sync each shot's audio to the video.
• Apply a simple one-light color correction, or a preset LUT, to each file.
• Put those files in a queue for processing.
• Save the processed files to the drive headed to editorial, along with the camera and audio original files.
The output of that processing might be two-fold: one file for viewing on an iPad or tablet device, and the other for editorial. This means each clip must be processed twice. You don't ever want to process the clips into the editorial format, then re-process that file into the web version. ALWAYS make each new version directly from the original clip(s). The closer you stay to the original file, the less possibility of interjecting image noise or artifacts. Not processing from the original, one-lighted file will only create questions: the producer or DoP will want to know if those 'image issues' are in the camera original.
In upcoming chapters, we will create dailies from a few clips, using different software, so you can experience the workflows. The two scenarios we are going to address are:
• The camera files are going to be edited in AVID.
• The camera files are going to be edited in some other NLE.
8.1 Raster Disaster
Further complicating the dailies workflow is the widespread use of various cameras on the same production. That was not an issue before digital. You could easily mix film cameras like the Arri, Panaflex and Moviecam™ and not have a 'raster' issue; they all shot 35mm film. If the same cropping mask was put in the film gate of each of those cameras, you were done. All the frame sizes (or rasters, as we refer to them for digital cameras) were identical. In the digital realm, all bets are off. Here's an example: you're on a set and they're shooting Arri 2.8K, 5D and GoPro. Oh, and they are shooting some high-frame-rate shots on a Phantom. Here's what you're up against:

Camera               Raster
Arri 2.8K            2880 x 1620 (16:9) @ 23.97 fps
5D                   1920 x 1080
GoPro                1920 x 1080, or 2.7K (2704 x 1440) @ 24 fps
Phantom @ 500 fps    1024 x 1024

Table 1

What you have is a mess. But it's now a normal mess that a DIT must deal with. And what if they are shooting in anamorphic widescreen? Now let's look at the RED camera and its native rasters (Table 2):

Camera               Raster
RED One or Epic      2K = 2048 x 1152
                     4K = 4096 x 2304
                     5K = 5100 x 2700
                     6K = 6000 x 4000

Table 2

How do we fix this issue, when multiple cameras are used on a production and the DIT has to seamlessly prep the assets for post production? You have some prep work to do, and you have to learn how your tools will conform the files.

The magic adjuster in most software is commonly known as 'fit width'. It's not always called that in every software, but there will be something similar; you just have to figure it out. In DaVinci Resolve, for example, it's a check box in the Project settings called "Scale Entire Image to Fit". This seems misleading: it leads you to believe that the height and width are adjusted to fit the project raster. Actually, this setting fits the width and letter-boxes the height if need be. (Fig. 1)

Fig. 1 DaVinci Resolve Project setting for 'fit width'.

Fit width accomplishes two adjustments at the same time. It will take a 2048-pixel-wide image and fit it within a 1920-pixel-wide raster space. It also proportionally resizes the vertical raster size, maintaining the proper image height-to-width ratio.

Step By Step Pre-process
A real-world story will illustrate the importance of pre-processing a sample file to assure your workflow is going to work. A production was being shot in the anamorphic image format. The DIT was merrily processing the files
for dailies and sending them off to editorial. The DoP became concerned with what he was seeing in the dailies. The image did not seem to cover the same area he saw in the camera; the headroom was too small. This is called 'giving a haircut', and it was not the way it was shot. The DIT assured the DoP that what he was seeing was what was shot. What the DIT had actually done was to 'fit frame', or crop. The 16:9 frame is not as wide as the anamorphic frame: if you lay a 16:9 mask over the full anamorphic image, you will see that both sides and some of the top and bottom are excluded. Another company was hired to process the assets. That DIT did a simple test. He brought in one shot and used the 'scale' tool. Think of this tool as zooming back on the entire frame, digitally. He scaled the image back (smaller) until he could see the entire image frame and some black around the edges. At that moment, he knew what was captured by the camera.
He then did a 'fit width' and processed the clip. Matching it to the camera original image, he confirmed that every pixel was in the raster. The DoP was ecstatic, and it re-affirmed that, in fact, he had shot the picture properly. Unfortunately, this is not that uncommon a mistake by the less experienced DIT. You will not make the same mistake if you implement the following steps with a sample clip from each and every camera type being used on the production. (If production is shooting on identical cameras, then only one test is needed, of course.)
1) Read the metadata from the file (Fig. 2). What is the raster size the camera is shooting? This information will be in the metadata within the clip.
Fig. 2 Clip metadata displayed within Resolve.
2) What is your target raster? You have to know this.
3) If the camera image is larger than the target raster, use the Fit Width function to force the software to re-raster the footage.
4) Process a clip (from each camera, if they are different) and compare it with the camera original. If what you see in the processed frame matches the original, you now have the settings to move ahead with your dailies processing. (A quick sketch of the fit-width arithmetic follows this list.)
This part of the daily job on-set is becoming more vital as camera manufacturers create new cameras. It would be advisable to keep a log of what each camera can shoot; use a spreadsheet or database of some kind for easy reference. And sure enough, someone will want to shoot with a smartphone of some kind. Then you could be stuck with a really weird raster that you will have to make work. Fortunately, you now know how to make it fit in with the rest. It can't be emphasized too strongly or too often: be ready to test everything and ready to receive a strange mix of 'stuff' on any given production. The software you will use is only so smart. You have to be smarter with your tools.
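Here is the fit-width arithmetic as a minimal Python sketch: scale to the target width, and let the height follow proportionally. The 1920 x 1080 target matches the Resolve example above; the function name is illustrative only.

    def fit_width(src_w, src_h, target_w):
        # Scale so the full source width lands inside the target width;
        # the height scales by the same factor, preserving the aspect ratio.
        scale = target_w / src_w
        return target_w, round(src_h * scale)

    print(fit_width(2048, 1152, 1920))  # -> (1920, 1080): every pixel kept, no crop
    print(fit_width(1024, 1024, 1920))  # -> (1920, 1920): taller than 1080, so a
                                        #    square raster like the Phantom's needs
                                        #    fit-height (pillarboxing) instead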
8.2 Basic Color Grading On-set
"Color is a sensation that certain wavelengths of light cause in our brain." Sareesh Sudhakaran, WolfCrow Blog.
As noted cinematographer Shane Hurlbut, ASC, states: "Lift-Gamma-Gain / Shadows-Midtones-Highlights / Blacks-Mids-Whites... they are all the same. I like to remember it with the short hand 'BMW'. They are the first areas we attack in any shot and can be called the one-light or pre-grade. It's done fast, maybe 60 seconds spent per shot, then you move on."
NOTE: At this point, we're going to step into the very basic color grading that will be used daily on-set. In a later chapter, we'll delve deeper into color theory, how we see and perceive color, and what equipment a DIT needs to do color work effectively.
All colorists seem to agree with the famous line from The Wizard of Oz. When Dorothy asked where to start on her road to Oz, one of the Munchkins answered, "We found it better to start at the beginning."
• Luminance: the black levels (or pedestal, if you're familiar with broadcast terminology) and the white levels, and all the range in between.
• Saturation: the amount of color. Not the number of colors, but the intensity of any one color. Consider this the 'richness' of color(s).
• Hue: simply put, the proper color for any given object in the scene.
Let's back-pedal for a moment. Our color space for digital video is Red, Green, Blue, or RGB. If we had full intensity of each (R=255, G=255, B=255 in the digital or Photoshop world), we would have pure white. Likewise, pure black (0, 0, 0) is the absence of all luminance. This is fine for photography and other uses, but in the film and broadcast world we have different maximum and minimum standards. In fact, there are Federally-mandated standards we must follow for broadcast.
Fig. 1 Full range image represented on Waveform scope.
The Federal standards dictate that black is really 16, 16, 16 and white is really 235, 235, 235: just slightly gray on the dark end, and just off pure white on the bright end of the range. This has to do with TV sets, transmitters and enough techno-babble to make engineers giddy. All you need to know is that your end product has to fit within these ranges. In the one-light grade, we're aiming for a 'quick fix' to the images, with basic grading, that will take them from the rather flat, desaturated RAW profile to something closer to what we originally saw on set.
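For reference, the squeeze from full-range 8-bit RGB (0-255) into the legal video range (16-235) is a simple linear mapping; a small sketch:

    def full_to_legal(code):
        # Linearly map a full-range 8-bit value (0-255) into legal range (16-235).
        return round(16 + code * (235 - 16) / 255)

    print(full_to_legal(0))    # 16  -- legal black
    print(full_to_legal(255))  # 235 -- legal white
    print(full_to_legal(128))  # 126 -- mid-gray stays near the middle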
Fig. 2 Three steps of one-light on the same image.
The images in Figure 2 show a frame from a Blackmagic Cinema Camera. The top frame is directly off the camera. The middle frame has the basic Rec. 709 settings. The bottom image has a one-light color correction where the black levels and the white levels are set, and a 'titch' of color correction is added to bring the skin tones in line. This is about all we have time for on set.
So, how do we attack this process and what do we use? All non-linear editing software has some color correction abilities built in. Some are more sophisticated than others. AVID Media Composer with the Symphony option is quite complete, with both primary and extensive secondary color grading features. Adobe Premiere incorporates a new, full-featured color grading upgrade. But none of the non-linear software holds a candle to a full-fledged color grading tool like DaVinci Resolve, Scratch and others. That does not mean that you can't use AVID for complete color grading if that's the NLE used in the workflow. You can, and it will work fine.
Where Do We Start?
To determine this, we need to look at an image from the camera. Figure 3 is an image from an Arri Alexa set to the LAB color profile. The LAB profile is one of the standards and records more luminance information across the spectrum, keeping it well within our limits of maximum blacks and whites. In fact, it looks quite flat and typically doesn't have any solid blacks or full whites.
Fig. 3 Arri 2k RAW image with LAB colorspace. Image ©Arri
Fig. 4 Histogram of Fig. 3. All luminance information is grouped in the middle.
In the image's histogram (Fig. 4) you can clearly see that there are no blacks; the histogram waveform doesn't stretch all the way to the left of the screen. The lack of whites is also easy to spot, because the waveform does not go very far to the right. It's all bunched up in the middle. The first goal in one-light color correction is to stretch the total luminance range of the image to encompass the full available range. That process begins with the black, or pedestal, or setup levels. All three mean the same thing, but they are commonly referred to as 'black levels' in our working world. In the software you're using, you would employ the tools to bring the black level down to an acceptable level, just above absolute black. Then you would use the luminance tool to raise the whites to 100%. (There is also 'super white', which can push the limit to 110%, if the destination output will handle that level of white.) Once both of these are done, the image will look a lot better. (Fig. 5)
TECH NOTE: There might be confusion between '100%' and '235' as representations of 'legal white'. 235 is the RGB luminance reference; 100% refers to the maximum voltage used to create white. The Waveform monitor displays the voltage for the brightness of an object in the scene. It's a 1-volt scale: 100% = 1 volt.
Fig. 5 Arri RAW with Rec.709 colorspace added. Image ©Arri
Fig. 6 Histogram for Arri RAW image with Rec.709 colorspace added.
This color correction is still not quite where we want it on the white end of the luminance range. A bit more tweaking will be needed to get the whites to about 100%. But the improvement is clearly visible.
Quick Color Balance
The next correction would be color balance, or white balance. Cameras which shoot RAW really don't record a set color balance. They do record metadata about what the camera was set to, but it's not 'burned in' or 'baked'. This simply means it can be changed after the fact, and that is one of the beauties of RAW camera formats.
Fig. 7 Same ARRI shot with mid-range, or gamma, lifted. Image © Arri
AVCHD and other compressed recording formats are driven by the camera settings. Because the resulting camera files are highly compressed, the color balance and look of the image IS baked in. It's very important for the camera crew to get the settings right when they shoot in a compressed format, because changing it in post adds noise or unwanted artifacts. Some color shifts can be so far off that the scene would have to be re-shot. Not so, typically, for RAW recording formats. Many color correction tools have an 'eye dropper' or auto white balance tool to help with color balance. These often work great. When they don't, they are totally unacceptable. You will have to learn the strengths and weaknesses of the software you are using. But some tips when using auto correction tools: don't select a blown-out white area; there are no color pixels to reference in blown-out areas. Neutral grays are good, if there are any in the scene. Try to find a true, properly exposed white if possible. If the auto function doesn't work, then you have to use scopes, and that's pretty easy. It just takes more time. We'll go through all this in the exercises.
Fig. 8 Final comparison of original LOG and simple corrected image.
What's In The Middle?
Mid-tones, or gamma, are where most of the visual information is stored. A shot can still look flat with perfect whites and rock-solid blacks. To improve this we need to work the middle of the luminance range. This is totally subjective of course, and it has the strongest effect on the image. In the shot in Figure 7, of the German band walking in the parade, the scene is all about the people. They are in shadows, with bright buildings in the background. We need to bring the people out. By lifting the mid-tones, the people will stand out from the background. This, of course, doesn't mean that it's perfect, but if we do a split between the LOG image and the one-light, you can see that it's a drastic change. There are a few more quick fixes to make it look even better, but we'll leave those for the exercise. I don't want to beat the drum too loudly here, but please remember that dailies and the one-light process are not focused on perfection. It's simply quick adjustments getting the image within legal limits. Nothing more. Then move on to the next clip.
Is Your Reference Correct?
It is important that your monitor (discussed in the hardware section) is calibrated. Here's the first of a series of articles on this topic. Sony OLED Calibration part 1 http://bennettcain.com/blog/2013/4/22/sony-oledcalibration.html
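Before leaving this section, the 'eye dropper' auto balance mentioned above is simple enough to sketch: sample a region that should be neutral, then scale each channel so the sample averages out gray. This is a hypothetical minimal version, not any particular tool's algorithm:

```python
import numpy as np

def eyedropper_balance(img, patch):
    """Scale R, G, B so a sampled patch (which should be a neutral gray or a
    properly exposed white) ends up with equal channel means."""
    means = patch.reshape(-1, 3).mean(axis=0)   # average R, G, B of the patch
    scale = means.mean() / means                # boost weak channels, trim strong ones
    return np.clip(img * scale, 0.0, 1.0)
```

Note that a blown-out patch is clipped to the same value in all three channels, so the computed scale is 1.0 and nothing gets corrected, which is exactly why the tip above says not to sample blown-out whites.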
9 One-Light & Scope Basics

It is vital that you understand what the software is telling you. So many people look at the scope displays, find them too complicated, and don't see a need for them. After all, it's digital; what can go wrong? Lots! In this chapter we will go over the basic color correction functions you will do on set, then explain basic color grading. From there, we'll step through what the scopes can tell us about the image and how vital they are to getting the image properly prepped for post. This will also allow us to get DaVinci Resolve set up for the following workflow.
What Is A 'One-Light'?
The term 'One-Light' or 'Best Light' comes from the film processing days. We still process film, but in far less volume. The camera negative is processed, then a working print (work print) is created. The camera original negative is passed through an optical printer (Fig. 1), projecting the negative onto positive or reversal film. Exposures done on-set are not consistent across the entire roll of film. The film technician at the lab looks at several places in the roll of film, picks an average density, and sets the printer to that 'averaged' exposure. When projected, some shots would be a bit dark, others a bit light, but all were totally viewable as dailies. Today, projects shot on film stock receive somewhat the same treatment, with one striking difference. The negative is projected onto a digital sensor, or scanner, creating a 'DI', or Digital Intermediate. This, many feel, allows the look of the project to be film-like, but with digital working files.
Fig. 1 Oxberry 1600 aerial printer for 16 and 35mm film. Image ©Oxberry
Producers like the Disney Channel, until recently, shot all their made-for-TV movies on film stock, transferred them to digital, then went straight into editing with those digital files. Due in large part to the image quality of today's digital cinema cameras, Disney and other studios creating content for TV, cable and satellite delivery now shoot digitally. Major shows like Defiance, House of Cards, Warehouse 13 and Arrested Development are shot on the RED camera platform. Likewise, Game of Thrones, Homeland, The Mob Doctor, Smash, Grimm, Downton Abbey, Elementary and others are shot on the Arri Alexa.
The Timed Print
The other end of the film process is a 'timed print', where each shot is carefully corrected for color balance and exposure so the entire film appears cohesive. But that last process is applied to the print before it's released to the theaters. Of note, in our production-world vernacular, much like we call an audio
CD released by a music artist an 'album' (a holdover from the days when you bought a vinyl disk inside a large cardboard cover, or album), we still call the final output from the digital imaging process a 'print'. One-light color correction is way upstream from the timed print, and is meant only as a way of bringing slightly out-of-balance exposures to something more viewable. Some would call this a 'normalized' exposure. We need to do the one-light on-set now to compensate for the 'flat' looking images derived from RAW file formats or cameras set for 'flat' exposure profiles. This became such an issue when monitoring the output of the new digital cinema cameras on-set that RED and other camera makers now provide basic one-light processing when looking at the live output of the camera. It became just too annoying to hear producers and directors keep asking, 'that image isn't going to look like that, is it?'
Fig. 2 Raw, Log and Rec. 709 comparison. Courtesy AbelCine Technical Resources
In Figure 2 we see the same frame from an Arri Alexa camera. The portion on the left is the raw image right from the file, as recorded. The middle is the Log output, which simply means it's a logarithmic representation of the colors and luminance range. On the right is the same image with the industry-standard Rec. 709 color profile added. By far, the Rec. 709 image is more viewable, and closer to the final look, than the raw or Log image. Still, it is a long way off from a final grade. What we're going to do for the rest of this chapter is grade a few shots using two different softwares, DaVinci Resolve and SCRATCH. SCRATCH and Blackmagic DaVinci Resolve are the most widely used for this task currently. However, as was mentioned previously,
MTI’s Dailies, ColorFront, and a few others are being used with great success for DIT functions.
TECH NOTE: There is a problem that will cause you no end of grief with Apple's OS 10.x. There is a very nice function built into the OS that allows the preview of almost any file just by highlighting the file and pressing the space-bar. If the OS can open the file, it will show you a larger image of the file and allow you to play the audio or video, or read the text document. This is called Quick Look. Quick Look has a very ugly downside when it comes to AVID's DNxHD files. Because AVID has to keep these file structures backwards compatible, the files are very fragile. One industry DIT recently said, "I have yet to work on a show that requires DNxHD output for editorial where at least one file in a day's rendering has not been corrupted…and you'll never know it." The tip here is NEVER, EVER open a folder that is being rendered to. Some would think it would be okay to open the folder but not the files. When you open a folder, the operating system immediately goes after each file in the folder and preps it for viewing. This interrupts the rendering process for any files that are not completed. Another 'gotcha' is processor usage while transcoding to DNxHD. It seems that if the processors are being split between several CPU-intensive activities, the likelihood of DNxHD file corruption goes up. Offloading or backing up files is more of a computer bus-intensive activity and does not seem to bother the integrity of DNxHD files. ProRes and other file formats are not as sensitive to these issues. As well, you should always double-check DNxHD file exports 100%. When an editor gets them, a file with corruption will load just fine and play until the program hits the dropped or corrupted frame(s). The program SCRATCH will load corrupted files into the Construct but display a black frame for the thumbnail if any frames or part of the clip is bad. It's a very quick way to see if all your transcoded files are intact.
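One practical way to 'double check exports 100%' is to force a full decode of every rendered file and watch for decoder errors. A minimal sketch using ffmpeg follows; it assumes ffmpeg is installed and on your PATH, the folder name is hypothetical, and a clean decode proves the frames are readable, not that they are bit-perfect:

```python
import subprocess
from pathlib import Path

def decode_check(path):
    """Fully decode a clip, discarding the output; return True only if
    ffmpeg reports no errors while reading every frame."""
    result = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", str(path), "-f", "null", "-"],
        capture_output=True, text=True)
    return result.returncode == 0 and result.stderr.strip() == ""

for clip in sorted(Path("renders").glob("*.mxf")):  # hypothetical render folder
    print(clip.name, "OK" if decode_check(clip) else "SUSPECT")
```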
9.1 Scoping the Image
Required Reading: Luma and Waveforms http://bennettcain.com/blog/2009/10/12/luma-and-waveforms.html
Within all NLEs there are several signal monitoring assists helping us analyze the image without relying on our eyes alone. Most colorists glance at the image, then do their work using just the scopes. In fact, it is the goal of most successful colorists to spend 60 seconds on each shot, and only part of that total, albeit short, time period actually looking at the image. Why? We have the ability to look at a bad image and convince ourselves that it looks good. The longer you look, the easier it is to talk yourself into a bad color grade. We'll delve into that subject more in the section on how we see color. Steve Hullfish's book, 'The Art and Technique of Digital Color Correction', stresses this concept of glancing at the image and using the scopes, so as not to fry your eyes. So, let's take a few minutes to discuss what each scope measures and what it tells us.
Basic Definitions
Luminance- the brightness and darkness of the image. This has everything to do with ranges of gray and nothing to do with color.
Fig. 1 SMPTE color bars test pattern and what that looks like on the Vectorscope.
Fig. 2 Scope targets indicated with colors. Courtesy ©http://www.cir-engineering.com
Chroma- This is color. It has intensity, or saturation, and hue, or difference in shades of the color.
Crushed- Typically a term used when talking about black levels. To 'crush' the blacks is to lower them to the point that detail is lost.
Blow-out or Clip- Typically used in reference to the white levels. The camera can clip off the bright white levels that exceed what the imaging chip can handle. In post you can run the whites to a level where they appear to be clipped or 'blown out', which adds more contrast.
Vectorscope: This tells us everything about color and nothing about luminance. It can tell you what color an object is, how much color that object has, and which direction (hue shift) it is, so it can be corrected. In Figure 1, the images show standard SMPTE (Society of Motion Picture and Television Engineers) color bars on the left and the vectorscope on the right, displaying those color bars. Each color swatch falls in the associated color target box on the scope. If you look closely at the upper left part of the scope signal, you see the target box for red, marked on the illustration with 'R'. To be precise, the red color should be right in the middle of the smaller box; however, there is a larger box around the target, and any red intensity falling inside the outer, larger box is 'legal'. By legal we mean that it is acceptable within the Federal broadcast standards.
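The luminance the scopes work from can be computed directly from RGB. For Rec. 709 material the standard weighting is Y' = 0.2126 R' + 0.7152 G' + 0.0722 B'; here is a quick sketch:

```python
import numpy as np

def rec709_luma(rgb):
    """Compute Rec. 709 luma (Y') from normalized R'G'B' values. Green
    dominates the mix because our eyes are most sensitive to it."""
    weights = np.array([0.2126, 0.7152, 0.0722])
    return rgb @ weights

print(rec709_luma(np.array([1.0, 1.0, 1.0])))  # 1.0    -> pure white
print(rec709_luma(np.array([1.0, 0.0, 0.0])))  # 0.2126 -> pure red reads dark
```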
TECH NOTE: Figure 3 breaks the basic quadrants down on the scale of the vectorscope. The primary colors, Red, Green, and Blue, and the secondary colors, Magenta, Cyan and Yellow, are shown.
Fig. 3 Color-phase vector diagram placing the specific colors within their targets.
All the lines meet in the middle of the scope pattern. This is pure white, pure black and all the grays in between, or the absence of other colors. It's a perfect mixing of colors. Again, this tool only tells us information about color and its intensity.
Waveform Monitor: On the other hand, the waveform monitor tells us everything about luminance and nothing about color. The image in Figure 4 is an 11-step Gray Scale chart. On the right is the associated signal represented on a waveform monitor. The white squares are represented by the blocks of image (referred to as 'traces') on the scope that stop at 100%. The black swatches bottom out at 0% on the scope. One of the first changes we make when color grading is to make sure the blacks in the image are at 0% and the whites are at 100%, so the image has as much range as 'legally' possible.
Fig. 4 Standard Gray Scale 'chip' chart and its representation on the Waveform scope, shown right.
RGB Parade: This scope gives us two important pieces of information at once: the brightness of the image and the brightness of the R, G, and B channels. Understanding the relationship of the brightness of each primary color channel allows us to correct color shifts in the image. Hold on to that thought for now. We'll demonstrate this in the assignment to come. Figure 5 shows a frame from a production at a balloon festival, and what the RGB luminance levels look like. If we compare the image with both the RGB Parade and the Y Waveform (Y is the overall luminance channel in any image), this balloon shot looks like the image in Figure 6.
Fig. 5 Color image as it's represented in an RGB Parade Scope.
Fig. 6 Left to right: the Y Waveform, the source image, and the RGB Parade waveforms.
One of the nice features of an image represented in the waveform monitor is that the image can be read left to right. The left side of the shot is darker. In the Y Waveform (the left scope displayed) you see the overall levels sloping down to the left.
The right side of the sky is lighter, so the scope shows levels higher on the right. The oval-shaped object in the left scope, in the lower third, about midway in the trace, is the inflated balloon you see almost centered in the shot. Finding parts of an image in the waveform monitor view will be important when color correcting.
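Conceptually, the waveform display is just a per-column tally of pixel values, which is why the trace can be 'read' left to right against the picture. Here is a rough sketch of how such a trace could be built; it's a simplified model, not any vendor's implementation:

```python
import numpy as np

def waveform_trace(channel, bins=256):
    """For each image column, histogram the pixel values (0.0-1.0) so the
    result plots brightness vertically against horizontal position."""
    height, width = channel.shape
    trace = np.zeros((bins, width), dtype=np.int32)
    for x in range(width):
        hist, _ = np.histogram(channel[:, x], bins=bins, range=(0.0, 1.0))
        trace[::-1, x] = hist  # flip so 100% white sits at the top
    return trace

# An RGB Parade is simply three of these side by side, one per channel:
# parade = np.hstack([waveform_trace(img[..., c]) for c in range(3)])
```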
Fig. 7 (l to r) Hardware-based image test scope by Tektronix and Blackmagic Design's UltraScope software-based test scope. Images courtesy © Tektronix Inc. and Blackmagic Design.
Internal or External Scopes
All professional non-linear editing (NLE) systems have some level of color correction and associated scopes to help with grading. At best, they are marginally useful. Most serious color correction is done with software outside of the editing system, using either hardware- or software-based external scopes. In Figure 7, the left image is a hardware-based video monitoring box. Small and portable, it can go from your edit suite to the field quickly. It's a very precise tool costing some $4,000 and more. The VFM5200 model shown is $7,000. It's a big investment, but this is a top-of-the-line tool that will last decades and provide you with absolute, rock-solid information. The right image is Blackmagic Design's UltraScope™ software-based monitoring solution. Much less expensive (under $700), it is very accurate and provides an outstanding tool to assist any NLE or color correction software. Don't discount the affordable price point. This software package is every bit as accurate as the hardware solution; it's just not portable unless you have the extra room for an additional monitor. It's also designed for post-production work only. It would be a lot of effort to set this software up with the associated hardware and connectors to monitor the image directly off the camera. The hardware solutions, like the Tektronix boxes, are designed for both applications, thus the higher price. Ultimately, it is better to have a separate box or monitor displaying the scope information so the screen controlling the image software can be less cluttered. The scopes provided with editing software are harder to read as well. That being said, if the editing software you are using (AVID Media Composer with Symphony option, Final Cut with Color, or Premiere) has scopes, those will get you by. But it's by far easier on the eyes, and faster, to use some form of external scopes.
9.2 Exercise 6: Setting Up DaVinci Resolve
For this exercise, we'll get into Blackmagic Design's DaVinci Resolve™ v12. (With v12, Lite has been renamed DaVinci Resolve, and the full paid version is DaVinci Resolve Studio.) If you are using v11, the interface will not match the screen shots you see in this tutorial, but they should be close enough to be workable. The DaVinci Resolve program is free from Blackmagic, so you might want to install it on your computer to do the assignments and future media processing. The Studio version is $995. In this series of exercises we will go over the following:
• Setting up DaVinci's databases.
• Setting up a user account within DaVinci.
• Setting up a project.
• Ingesting media into the program.
Getting Started With DaVinci: Like any software that's new to you, it will take some time to get used to how it works. All softwares have their quirks, and DaVinci is not unique. There are certain setup steps that must be done in a specific order to get everything working properly. Here are the steps we will go through:
• Opening the software.
• Setting up a new database on your hard drive.
• Setting the project settings.
• Locating the project video and audio clips.
• Importing those clips into the workspace or project.
• Linking the audio and video clips.
The processing order listed above is important. One of the 'gotchas' of DaVinci is the project settings. If you start the project and the settings are improperly configured, there will be issues down the road. The second concern is computer RAM and video RAM memory. DaVinci cannot tolerate fragmented memory. It is highly recommended that you restart your computer before going into a DaVinci session.
This assignment will take 20 minutes. You will need:
• Your course media files.
• Your external hard drive.
• Blackmagic's DaVinci Resolve Lite.
• REDCINE-X PRO
Locate DaVinci on the application launcher bar, or in your Applications folder, and launch the program. This opens the Project Manager window. (Fig. 1) The next settings configure the program. We want to set the operational file paths so the program can cache and write working files to the proper place on your computer. A note here about the environments you might be working in with this program: if you are in a classroom lab setting, the computers may be locked down in such a way that the default paths for working files are not available to you. In that case, direct those settings to your external drive. If you are working on your personal computer, you can leave the system default settings. But it is good to know where these settings are located. First we need to set up the database that holds all the information about your setup.
Fig. 1 Project Manager window.
1. Click-on the icon that looks like a tick mark '⋎' pointing down (Fig. 2) and select Database from the menu. This opens the Database Manager window, displaying any previously created databases and their locations. (Fig. 3) We want to create a new one.
Fig. 2 Database access menu.
Fig. 3 Database Manager window.
2. Click-on the Create button at the bottom of this window.
3. Enter a name for the new database. Use your last name.
Now the next instructions need to be followed carefully. Refer to Figure 4.
Fig. 4 Create New Database window.
Label = your name
4. Click-on the field next to Host. This will open a typical navigation window.
Host = browse to the CourseAssets folder on your hard drive and select that folder.
Driver = Disk
5. Click-on the Create New Database button.
The window will close and you will see the new database in the list. It's important when using Resolve that you select the database you want to use when starting your session.
Fig. 5 Project Config. menu selection.
If you have more than one database, or you switch computers, it's preferable to have the database and projects on your portable drive. When you select a database from the list, it will turn white and the others will turn gray. Now we can focus on the setup of a project.
6. Right-Click-on the Untitled Project icon in the upper right of the screen. A menu will drop down. (Fig. 5)
7. Select Config from the menu. The project configuration settings window will open. (Fig. 6)
Fig. 6 Project Settings window.
The top portion, shown in Figure 6, holds the settings for the timeline. The defaults are shown, and they are good for all the projects we will do as exercises in this book. However, if you are going to work with some other raster (Timeline resolution) or frame rate, it is important to set those here before you begin. That being said, with Resolve v12.x the timeline is 'smart', in the same way most non-linear editors are. If the first clip you drop on the timeline does NOT match the configured settings, the program will ask if you want to change the timeline settings to match that clip. If that first clip IS representative of the files you will be using, confirm the change. If it is NOT representative of most of your media, you have two choices: stop what you are doing and find a clip that is, or drop it on the timeline anyway and let the program change the settings, which could create issues later on. Resolve is good at down-sizing larger rasters into smaller rasters, but it will 'window' smaller rasters dropped into a larger-raster timeline. You can set the program to automatically 're-raster' any files dropped into a timeline that doesn't match that clip's settings.
8. Click-on the General Options selection from the list on the left of the window. We are interested in the Working Folders settings. (Fig. 7)
Fig. 7 Working Folders settings.
You need to direct the Cache files location and the Gallery stills location to your external hard drive if you are working in a classroom lab environment. If this is your personal computer, leave these settings alone.
9. Click-on the Browse button and navigate to the CourseAssets folder on your external hard drive for both of these settings.
10. Click-on the Auto Backup selection on the left side of the window. This is a handy setting that will automatically save versions of your project as you work. You can't turn this setting on until after the project is saved for the first time. We'll do that later.
11. Click-on the Save button in the lower right corner of this window.
You have set up the new database for your projects and set the project default drive mapping. The program has just created a database with this first project saved inside. When Resolve saves projects, they all reside in the database.
Fig. 8 DaVinci Resolve database file structure on hard drive.
The system will now create a new set of folders on your hard drive. If you look at the file structure of your hard drive, you should see the following new folders. (Fig. 8)
NOTE: The free version of Resolve has only a few limitations compared to the full, paid 'Studio' version. One of those is raster export size. It can bring in any raster size, but is limited to less than 4k when exporting. This is more than adequate for most things you will do for assignments and your personal projects. If you really need 4k or larger raster sizes, the full version comes with the purchase of a Blackmagic camera (a good sales incentive), or you have to shell out $995 for the software only. It's a great value at that price, by the way.
NOTE: This is a point where you can take a break and return at another time if you need to.
NOTE: You will need REDCINE-X installed on your computer to proceed. Not only does this software install the full R3D file asset program, but also a smaller app called RED Player. Like QuickTime, it opens R3D files for a quick look without launching the full REDCINE-X Pro program. It also installs the codecs for the RED camera files.
TECH NOTE: The term ‘twirl down’ is fairly specific to the Mac OS. It is a function that involves the triangle shaped tool to the left of a folder, ‘▷’. When you click on the triangle, it will rotate (twirl), pointing down, opening the contents of the folder. This feature is written into some Windows OS programs for consistency. This will open the RED RAW camera file in the RED PLAYER software. The image will be much larger than your display but this software reduces it to a more manageable screen size. 4. Using the View menu, select Metadata. Lots of good information here. (Fig. 9) We’re interested in the FPS, which is 23.976 and the Resolution, which is 4098x2304 (or 4k). Remember that, we’ll need it in a few minutes.
Project Setting Configuration When starting any project whether it be in Resolve or you favorite non-linear editor, you must know some information about the footage you are working with. The program needs to know the frame rate of the incoming footage. There’s a difference between 24 fps and 23.978 fps. It also needs to know the raster size. How do we find this information? You must have the REDCINE-X software installed to do the following. 1. From the Finder, navigate to your hard drive, and into the RED Assets folder. your drive> CourseAssets>Red Assets 2. Twirl down (or double click) the RDM folder and then twirl down the first clips folder. 3. Double-click on the first file in that folder. It will have a .R3D extension.
Fig. 9 RED camera file metadata.
Take a few moments to scroll down this long list of available metadata. You will see data on the color space, the color temperature the camera was set to, and lots of empty fields that can be filled in, storing information for later reference. Metadata is king in the digital world right now. The beauty of this form of information storage is that it stays with the file as it moves down the pipeline.
5. Close RED PLAYER.
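Outside of RED Player, frame rate and raster size can also be pulled from camera files with a command-line probe. A sketch using ffprobe follows; it assumes ffprobe is installed, and note that it reads common formats such as ProRes, DNxHD and H.264, but not R3D, which still needs RED's own tools:

```python
import json
import subprocess

def probe_video(path):
    """Return (width, height, frame_rate) for the first video stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_streams", path],
        capture_output=True, text=True).stdout
    streams = json.loads(out)["streams"]
    video = next(s for s in streams if s["codec_type"] == "video")
    return video["width"], video["height"], video["r_frame_rate"]

# e.g. probe_video("A001_C001.mov") might return (1920, 1080, '24000/1001'),
# where 24000/1001 is exactly the 23.976 fps discussed above.
```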
Back in Resolve,
6. Restart DaVinci Resolve, if you closed the program earlier.
We need to set the frames-per-second information in the project before bringing in any media. What we're going to do at this point can either be done before you open the new project, or afterwards, BUT before you bring media into a project. We'll open a project, then change the project settings.
7. Double-click on the Untitled Project icon.
8. Drop down the File menu and select Project Settings. The Project Settings window will open--the same one we worked with earlier.
9. Select the Master Project Settings.
10. Drop-down the Timeline resolution menu. (Fig. 10)
12. Drop-down the Timeline frame rate menu and select 23.976. (Fig. 11) This matches the frame rate the camera recorded.
Fig. 11 Timeline frame rate setting.
Just below these settings are the Video Monitoring settings, and we need to change one of them.
13. Drop-down the Video format menu and select 1080PsF 23.976. This will set the playback you see on the screen to the same frame rate that was recorded in-camera.
The next setting to look at is the playback quality. Large-raster RAW files can bring your computer to its knees; playback could be jerky. If this is the case, the next setting can be changed to lower the rendering quality of the video so it will play back normally. The RED footage we are going to work with here is demanding and would require a proprietary RED Rocket card to keep up with the huge data load. If you don't have one of those, you can lower the play quality until you are ready to color correct. Then you want full resolution.
Fig. 10 Setting Timeline resolution.
Notice there isn’t a listing for 4098x2304 in this list. This is one of the shortcomings of the free version. If you want full 4k or larger rasters, you will need to shell out the $995 for the Studio version. We will select something close. 11. Select 3840 x 2160 Ultra HD. This raster is the ‘4k’ for new consumer TVs. It’s really not ‘full 4k’ but then there are several different rasters for 4k floating around, so you just have to be aware. Now select the Timeline frame rate.
14. Click-on the Camera Raw selection on the left side of the window. Notice the Play Quality setting. It defaults to 'As Above'. This is where you would change the setting to Half Res. if your computer is having difficulty playing the footage back smoothly.
Now we need to tell the program to pass all the metadata forward. We're going to turn on way more than is needed, but it's not a problem to have unused data fields.
15. Click-on the Metadata selection on the left side of the window.
16. Click-on the New button in the upper section of the window.
17. Name this setup 'RED Metadata'.
Notice that all the grayed text below is now white and selectable. The check box above each category is a shortcut to turning on all the selections within that category.
Select all the categories EXCEPT 'Stereo 3D VFX' and 'Reviewed by'. The next feature we want to set is autosave. As we all know, this is important to bail us out when things crash. And they will. But we can't set this feature until we have the project saved.
18. Save these changes using the Save button in the lower left of this window. The window will close.
19. Save the project by using the shortcut keys CMND-S.
20. Using the File menu, re-open the Project Settings window.
21. Select Autosave from the left side of the window.
22. Click-on the Yes button.
The defaults are set to save 8 versions of what you are working on before it over-writes the oldest version. You can change that number as you see fit. I typically set it to 20. In addition, the default backup timing is 10 minutes. The best way to think of this setting is simply 'what can I afford to lose and not commit suicide'. I prefer 5-minute intervals.
23. Save the changes and we're done with the setup.
There are lots of other tweaks to further refine the system. You will learn these on your own as you need them.
9.3 Exercise 7: Working With Scopes
“Waveform=Luminance. Vectorscope=Chrominance. Parade=Red, Green, Blue values. I can’t stress enough how critical and essential it is to use these tools. Once you embrace the SCOPES, you will be confident to plow through footage and have instant visual feedback to confirm you are making the right decisions. I won’t broach the calibrated monitor issue that is always lurking ($$$) and will just say that understanding and trusting the SCOPES will get you 95% of the way home.”
Shane Hurlbut, ASC.
In this exercise we will go over the following:
• Looking at several images using internal scopes.
• Working with controls in the color correction tool.
This assignment will take 20 minutes. You will need:
• Your course media files
• Your external hard drive and
• DaVinci Resolve software.
NOTE: This exercise can be done with any software that has the ability to display a video clip and show the Waveform and Vectorscope. You could use Scratch Lab, for example, and discover the same information.
☞ If you have set up DaVinci from Exercise 6, you can start this assignment. If you have not done Exercise 6, or you don't have DaVinci already set up on your computer, do steps 1-13 from Exercise 6, then return here for this exercise.
1. Open the DaVinci Resolve program. At the Project window do the following:
2. Click-on the New Project tool in the lower right corner of the window. It's the '+' icon. (Fig. 1)
Fig. 1 New Project button.
3. Enter the project name "Scopes". (Fig. 2)
Fig. 2 Name the Project 'Scopes'.
4. Click Create at the bottom of the window.
5. Double-click on the new project thumbnail.
The DaVinci project will open. At this point we're going to leave all the settings in the preset mode. In the Media navigator window on the left,
6. Navigate to the Course Assets on your hard drive and open the Color Correction folder. (Fig. 3)
7. Select the following files by holding down the Command key and clicking on each file individually:
- SMPTE_Bars.tiff
- SmokeyBallon.tiff
- Gray_Scale.tiff
- Gray_Scale_opposed.jpg
Fig. 3 Color Correction folder selected from the CourseAssets folder.
8. Drag the highlighted files down to the Master media bin window just below. Refer to Figure 4. Save the project by using Save under the File menu, or Command-S on the Mac.
Fig. 4 Clips in Master Bin.
On the lower part of the DaVinci screen are shortcut icons to various parts of the program. With the clips in the bin, we can now move to the Editor and put them on a timeline within the project.
9. Click-on the Edit tab in the lower center of the screen.
This moves us to the editor functions in the Resolve suite. Resolve's timeline is not accessible until a Timeline is created.
10. Use Cmnd-N, or go under the File menu and select New Timeline.
11. Enter the following into the New Timeline Properties window:
- Timeline Name = Scopes
- Empty Timeline = checked ✔
- Click-on Create New Timeline.
NOTE: In version 12 of DaVinci you can select clips in the Media Pool, right-click, and select 'Create timeline from selected clips', or just drag them to the timeline.
These clips are just one frame long, and it would be more convenient to have them longer.
12. Click-on the SMPTE_Bars.tiff clip and drag it to the timeline.
13. Select the right edge of the clip. The cursor will turn into twin brackets. Refer to Figure 5. Drag the right edge of the clip to the right, creating an 8-second clip.
Fig. 5 Expanding clip to 8 seconds long.
14. Do this with each of the remaining clips, one at a time.
We now need to move into the color correction part of the program.
15. Click-on the Color tab at the bottom of the screen.
The color correction tools and windows will open with the selected clips on the timeline. If you do not see the timeline open in the middle of the screen, Click-on the Timeline icon in the upper right of the screen. At this point we need to explain some of the tools available in this window. If you did Exercise 6, this will be a bit redundant. There's some new information, so hang in there.
Fig. 6 Color Wheels and controls.
In Figure 6, the area where the color correction tools live is displayed, with the 'wheels' shown. If you don't see these, select the icon that looks like a target from the tool bar just above this area. The cursor in Figure 6 points at this tool. From left to right there are the Lift (adjusts the black levels), Gamma (adjusts the mid-range of the image) and Gain (adjusts the white levels). The Offset is what is called a secondary or 'gross' adjustment, which affects the entire image and not just a specific tonal range.
Fig. 7 Color Wheels, with callouts for the color function icons, luminance adjuster, color adjuster, and color channel controls.
Figure 7 shows the control tools for this window. We will work with all of them during this exercise, explaining them as we go along. Now you need to open the Scopes so we can analyze the images and see graphically what the controls do when adjusted.
16. Click-on the Workspace menu and select the Scopes sub menu. (Fig. 8)
Fig. 8 Video Scopes sub menu.
17. Select ON from the menu. Then go back to the menu and select 2 Up.
The video scopes will appear floating over the screen. You now need to change which scopes to display.
18. Click in the upper left corner of the scopes display window and select Vectorscope from the drop-down menu. On the right scope display, select Waveform from the menu.
There should now be two scopes displayed. (Fig. 9)
Fig. 9 The Vectorscope on the left and the Waveform on the right.
We want these two specific displays to get the right information out of the video files.
With the first clip highlighted in the timeline, look at the Vectorscope. This display tells us ONLY information about the color in the clip and nothing about the luminance. Notice how each of the colors in the bars is in or near its respective color box in the scope. 'B' is for blue, 'M' for magenta, 'R' for red, etc. Those boxes are targets for determining if a color is broadcast legal. If the color exceeds its box, the color is too intense and, therefore, not broadcast legal. The Waveform scope on the right gives us information about the luminance and nothing about color. Notice the bar at the top of the scale represents the white box in the bars graphic. The black areas are shown
on the scope at the bottom of the scale. There is something you should know about the scale on the left of this display: these numbers are relative. Zero on the scale is absolute black; 1023 represents 100% white within the RGB 10-bit color space. See, knowing something about color spaces pays off here. There are two other scopes we need to look at. In the left scope drop-down menu, select Parade, and in the right, select Histogram.
The Parade is a Waveform monitor with each color channel's luminance displayed separately. This is very useful for color balancing within the image. The Histogram is the display that shows the range of luminance in the image. It also represents the quantity of pixels in each part of the luminance range. This display is more valuable on-set for exposure than in post image processing, but it does offer information that might help steer you to a decision on how to fix an otherwise bad image. Now set the scopes back up for the color bars analysis:
19. Click-on the drop-down menu for the left scope and select Vectorscope.
20. Click-on the drop-down menu for the right scope and select Parade.
21. Click-on the Color bars clip in the timeline.
The scopes shown in Figure 9 represent the color bars image. Notice in the left scope how each color block falls just below its representative color square in the Vectorscope. The red color in the image causes the scope trace to fall at the edge of the box marked with 'R'. The other colors, yellow, magenta, green and cyan, all touch their respective target boxes within the scope. The boxes are the areas where that color should reside but not go outside of. More importantly, the colors can't exceed or go wider than these boxes. At that point, the color is considered over-saturated and 'illegal' by broadcast standards. If the colors were shifted in one direction, that would represent a hue shift. A person looking at an image with a strong hue shift might comment that the image looks overall red or magenta or green. That would mean that the entire color space (or Hue) has moved in one direction or another. This scope is great for picking up this problem. The right scope, the Parade, represents the luminance levels of the image. Here we see the brightness of the colors. It is important to understand that the brightness of a color is different than the 'saturation' of that color. The white blocks are the three shapes towards the top of the trace. The black and gray blocks are located in the lower part of the trace. The darkest black box is considered absolute black, or '0' (zero) black.
Fig. 10 Gray Bars displayed in Waveform scope.
To further understand these scopes,
22. Click-on the clip in the timeline that has two rows of gray swatches.
This is where the Waveform scope comes alive. (Fig. 10) Each shade of gray is shown as a different luminance point. Now, this representation of the gray scale is not perfect. If it were, the middle gray (approximately #9, or the number 127 on the chart) would be in the center of the scope. Both the ramp up and down of each row of grays would cross just above the middle, or close to 60% gray. This is also known as 18% gray in the still photography world. You can see a slight curve in the lines that create the X shape, and the horizontal line across the red, green, and blue channels. The reason is that there is a color cast to the image. This color information crept in when the file was converted for use in the tutorial. Remember the lesson on issues with flipping codecs? This is a very graphic example. The grayscale image looks all gray to the eye. The scopes tell a different story. This is digital chroma noise, something that is to be avoided at all costs when processing files. The horizontal lines on the scope are SMPTE luminance steps. Each line going up represents a gray that is twice as bright as the last gray.
9.2 Exercise 7: Working With Scopes When the grayscale is examined on the Vectorscope (Fig. 11), you notice the dot in the middle. There is no color in the grays, whites and blacks so they all cluster in
Fig. 11 Gray Bars displayed on Vectorscope. the middle of the scope trace. The three traces that do expand out, are the three letters at the top of the chart Y, M, C, or Yellow, Magenta, and Cyan. The dot in the middle would be smaller if the color cast or digital color noise, was not in the image.
Understanding the Color Adjustment Controls
We want to work with two basic controls when doing a one-light correction:
• Luminance
• Color
The color controls are broken down into R, G, B, which is the native color space for all our images. The luminance is broken down into its own separate controls:
• Lift, Pedestal, or Black level (Fig. 12)
• Gamma
• Gain, White level, Luminance (Fig. 13)
The reason for the different names for similar controls is that different software manufacturers choose their own terminology. They are all the same no matter what they are called, and you'll just have to get used to different labels in different softwares. Basically, the layout of the color wheels window (shown in Figure 7) is straightforward. The color wheels for Lift, Gamma, and Gain are across the mid-section. Below the wheels are sliders that control the luminance gain or reduction within each of the three areas; these are highlighted in Figures 12 and 13 by the cursor. Below the Luminance adjuster are the Y, R, G, and B readouts. These display the numbers of the color mix when you move the center target control within the color wheel. The 'Y' represents the luminance value, not the color yellow. At the bottom of the window are the controls for Saturation, Hue and Lum Mix. You can click on the numbers, hold the mouse down, and drag left or right to change these. We will play with all these controls in the next few steps.
Fig. 12 Lift or black level adjuster.
Fig. 13 Gain or white level adjuster.
Adjusting An Image
Select the image in the timeline that has one single gray scale. Notice the Parade scope now has nice, clean stair steps representing each shade of gray. The Vectorscope displays one bright dot in the middle. Each of the three controls, Lift, Gamma, and Gain, affects a particular part of the gray scale. These controls ONLY work with the luminance of the image and not the color.
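Before the hands-on steps, here is one common mathematical model of what these controls do to normalized luminance. Resolve's internal math is its own; this sketch only shows why Lift moves the blacks while white stays pinned, and why Gamma bends the mid-tones without moving either end:

```python
import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """Apply a simple lift/gamma/gain model to pixel values in [0, 1]."""
    x = np.clip(np.asarray(x, dtype=np.float32), 0.0, 1.0)
    x = x + lift * (1.0 - x)   # lift: raises 0.0 up to `lift`, leaves 1.0 alone
    x = x * gain               # gain: scales all values, strongest at white
    x = np.clip(x, 0.0, None) ** (1.0 / gamma)  # gamma: bends mids; 0 and 1 stay put
    return np.clip(x, 0.0, 1.0)

print(lift_gamma_gain(np.array([0.0, 0.5, 1.0]), lift=0.1))   # black rises to 0.1, white stays 1.0
print(lift_gamma_gain(np.array([0.0, 0.5, 1.0]), gamma=2.0))  # mid-tone brightens (~0.71), ends unchanged
```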
The Lift control focuses its efforts on the black levels.
1. Click-and-hold the mouse cursor over the Lift slider below the Lift color wheel. (Fig. 12)
2. Drag the mouse to the right.
Notice the Parade scope. The blacks are now lifted above the bottom of the display. The blacks in the image are now moving towards gray. If you move the slider in the
opposite direction, this lowers the blacks. This is commonly called 'crushing' the blacks. Also observe that as you lift the blacks, the whites stay put. Even some of the upper mid-grays don't move. Each of the three controls works with approximately 1/3 of the total luminance range. This gives you great control over specific areas within the image's range.
3. Command-Z to reset the changes you have made back to the way they were originally.
4. Click-and-hold the mouse over the Gamma slider and drag it to the right.
The image mid-range now gets brighter. The black levels stay fairly close to the bottom, and the whites should not move much either. The Gamma adjustment is the most powerful control you have over the image. 80% of the visual information we see is in the middle grays of any given image. It can also ruin an image faster than any other control, so be judicious in its use.
5. Command-Z to reset the changes you have made back to the way they were originally.
6. Click-and-hold the mouse over the Gain slider and drag it to the left.
This adjusts the whites 'down' on the scope, more towards the gray areas. When you do the opposite, adding more gain to the whites and sending them over the 100% level, this is referred to as 'blowing out' the whites. Try moving the slider to the right and see what happens to the scopes and the image. When the whites go past 100%, all detail is lost in that image area and the scope cuts off the trace. Other software, like AVID, SCRATCH, etc., will display the blown-out traces by changing the color of the trace. This clearly warns you that there is damage being done to the image.
7. Command-Z to reset the changes you have made back to the way they were originally.
USER TIP: At this point within the entire image processing workflow, blowing-out the whites or crushing the blacks is not acceptable DIT practice. What goes to the Editor must be the entire range of the image. If, in final color grading, the Director decides that the 'look' of the film requires more radical changes, that is where those kinds of adjustments should be done.
Working On An Actual Image
Adjusting charts is one thing; working on an actual image is more rewarding and a bit more complicated.
1. Click-on the image of the hot air balloons in the timeline.
Fig. 14 Balloon image represented in the scopes.
This image of the balloons is well exposed as it is, but there are some slight tweaks we can make to improve it. Look at the scopes carefully. (Fig. 14) As many would say, the devil is in the details. The first thing that is noticeable is in the Parade scope. The center part of each trace, which represents the sky, is ramped up from left to right. This is just as you would expect. The sky is blue, so the blue trace should be higher. But there is an issue with this image that is represented in the red trace. Notice how it's slightly lifted from the black line at the bottom and reaches higher into the whites than the other colors. This tells us, even without looking at the image, that there is a red cast over the entire image. You can see the blue trace does not go all the way down to the black line, because there is less blue in the blacks than there should be.
The first correction the balloon shot needs is to balance the blacks. The order in which we do the following steps IS the proper method to approach any shot you process.
2. Click-on the Lift luminance adjustment wheel and drag it to the left until the bottom of the red trace touches the lower part of the scope.
All three traces will come down, and a small part of the blue will go off the bottom. This is something we'll fix in the one-light chapter.
3. Click-on the Gain luminance adjustment wheel and drag it to the right until the red trace touches the upper part of the scope.
All three traces will move up. Even this very slight adjustment makes a difference in the quality of the image. This image already has a fairly good mid-range spread of luminance, and we most likely would not make any adjustments to the Gamma. But this image is a good one for quickly testing the Gamma control and what it does to an image.
4. Click-on the Gamma luminance adjustment wheel and drag it to the left until the numbers for the Y, R, G, B all read -.020.
Remember that the Gamma adjustment controls about 80% of the luminance in an image. The image should look darker. This is, in fact, more like the look of the early morning light that was originally seen as these balloons lifted off at sunrise. When you made this last adjustment, you should have noticed that the white and black levels stayed where they were set in the previous steps. Resolve is very good at not 'dragging' those along with the Gamma adjustment. The green and blue levels do not go all the way to the 100% level. I'm guessing that there is a green and red cast to the image. We'll work on fixing these later. For now, you should have a basic understanding of the scopes and what they tell us.
To see what you just did, press the CMND-D keys. This toggles the correction OFF; pressing CMND-D again puts the correction back on. It's a good idea to do this frequently when making adjustments so you get a clear idea of what you have done. If you want to spend a minute or two more playing with the controls we have just adjusted, do so. It's one thing to see a chart and quite another to see the controls associated with a real image. Work with the Saturation control as well. Place the cursor on the number box and click-drag left or right. Watch the Vectorscope when you do this. Remember that this scope shows color saturation adjustments.
USER TIP: Although 'fine tuning' the color balance across all ranges is NOT something typically done on-set by the DIT, the blacks in a scene are a dead giveaway. If there are problems with the color balance of a shot, they show up in the blacks. You can go to most amateur videos on Vimeo or YouTube and see that the blacks are typically not consistent shot-to-shot. It's more pronounced when you scrub through the video fairly quickly.
Submission for grading: Turn in the assignment for grading per the instructor's directions.
______________________________________
9.4 Scopes Review Questions
Answers located in Appendix C.
1. The waveform scope displays what information about an image?
a. Color saturation
b. Luminance
c. The red color channel
2. The vectorscope displays what information about an image?
a. Luminance
b. Chroma/Saturation
c. Gamma
3. Lift, Gamma and Gain controls change the _____ of the image.
a. Luminance
b. Chroma levels
c. Color Saturation
4. In what way does the Hue control change the look of an image?
a. Raises the luminance levels, making the image brighter.
b. Increases the color/chroma saturation.
c. Shifts the entire color of the image towards warmer or cooler colors, depending on how it's adjusted.
5. The RGB Parade scope displays what information?
a. The color saturation of each channel.
b. The luminance levels of each color channel.
c. Nothing of value for color correction.
6. When you click and drag the dot inside the wheel (shown), what does it change in the image?
a. The mix of colors.
b. The saturation of colors.
c. The luminance of colors.
7. If an image has too much of one color, how do you adjust the color wheel to correct the image?
a. Towards the color cast.
b. Opposite the color cast.
c. Towards the tertiary color.
8. The horizontal lines on the waveform scope represent each increment of the SMPTE gray scale chart, and the maximum and minimum levels of luminance in an image.
a. True
b. False
10 On-Set Workflows
At this point, we’re done with the quick overview of the nuts and bolts part of digital asset management. The
remainder of the book will take you though three specific workflows. One based around DaVinci Resolve software, the next based around Assimilate SCRATCH. For RED assists we’ll go though the use of RedCine-X. The goal will be for you to step through the workflow that seems more appropriate for your situation. It will be good for you to be conversant with all of them, and very skilled in at least one. The reason the book breaks at this point into this format is based on practical learning curves. The first parts of the book are basic to any workflow. The next parts are specific to a central piece of software that you might choose to adopt. Remember, the goal is to pass off the footage in your hands to editorial in the configuration they desire. That decision might be based on your resources or the camera’s native format. Most of this will be addressed in each section. For now, lets get started with the DiVinci Resolve, which we have just been working with. If you’re skipping ahead in the book to wade into workflows, please review the first steps in Exercise 1 on how to setup Resolve for the next assignment. If you’re already familiar with Resolve, feel free to continue on.
Exercise 8: DaVinci Resolve One-Light Workflow
DaVinci One-Light Workflow
As we have mentioned in earlier chapters, the one-light is one of the important on-set tasks for the DIT. The following exercises will take you through this process with DaVinci Resolve (the free version), now known as just Resolve with the release of v12. The most important concept to remember, and it can't be emphasized too strongly: 'do no harm' to the image. It's not the job of the DIT to correct an image to what he/she thinks it should look like. It IS the DIT's job to correct luminance problems and do MINOR corrections in color balance if that is noticeably off. The final exercises in this section will step you through exporting the processed files for:
• Editorial
• Dailies
This chapter will take you through the process as if you were on-set. The assets are provided as part of this exercise. If you have similar assets of your own to work with, all the better. It's always good to work on footage for which you have first-hand knowledge.
Please note: As of this printing, v12 had just been released in the 4th quarter of 2015. There are significant changes in the look of the program and the placement of some tools/icons, and the Edit portion of the program is now greatly expanded in functionality. One of the new features is the ability to ingest files from the camera mag using an MD5 checksum process. They call this function 'Clone'. Potentially, you would not need a piece of software like ShotPut to offload camera mags with the required checksum process. Personally, I would still use ShotPut for this function, because Resolve does not export a confirmation .txt file with information about date, time and file destination. Secondly, ShotPut and other similar software are much faster at this task.
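The checksum idea itself is straightforward. Here is a minimal sketch of MD5 offload verification in Python; hashlib is in the standard library, the function names are mine, and a real offload tool would also write a report file with dates, times and destinations:

```python
import hashlib
from pathlib import Path

def md5sum(path, chunk=1 << 20):
    """Hash a file in 1 MB chunks so multi-gigabyte clips don't fill RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_offload(src_dir, dst_dir):
    """Compare every file on the source mag against its offloaded copy."""
    for src in Path(src_dir).rglob("*"):
        if src.is_file():
            dst = Path(dst_dir) / src.relative_to(src_dir)
            ok = dst.exists() and md5sum(src) == md5sum(dst)
            print("OK " if ok else "BAD", src.relative_to(src_dir))
```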
In this exercise we will go over the following: • Apply one-light correction using DaVinci Resolve • Work with controls in the color correction tool and scopes • Export dailies and properly formatted clips for editorial This assignment will take approximately 30 minutes. Exercise goals: • Learn how to start a new project • Establish the project settings properly for the incoming video • Use the controls to correctly accomplish a one-light • Output the files per the production requirements
Required Reading/viewing: Setting up a one-light in Resolve: http://www.youtube.com/watch?v=r406FZ8Wtow#t=589
This video is focused on v10 of Resolve, but the basics covered are applicable to the current versions.
You will need:
• Your course media files
• Your external hard drive and
• DaVinci Resolve software.
DaVinci Resolve is a very powerful tool on-set for one-light processing and transcoding of files for editorial and dailies viewing, and again after the project has been edited. At that point, the whole program can be brought back into Resolve for final color correction prior to
release. These next exercises will focus specifically on the on-set work.
On-Set Folder Structure
As mentioned earlier in this book, it is important to set up a workable folder structure on the drives destined for editorial. That structure might look like this:
- Production name (at the root of the drive)
  - Day-1
    - Cam-A
      - A001
      - A002
      - A003
      - …
    - Cam-B
      - B001
      - B002
      - B003
      - …
    - Audio
    - Editorial
    - Dailies
Note that the Editorial and Dailies folders are not buried within each camera and mag offload. This makes it easier for those who need these files to find them.
You need to create a folder structure on your hard drive for this assignment.
1. Open your hard drive in the computer's Finder and navigate to the Course Assets folder.
2. Create the following folders and sub-folders within the Course Assets folder:
Course Assets
  - Black_birds
    - Day-1
      - Cam-A
        - A001
      - Cam-B
        - B001
Notice the two extra folders, Dailies-264-1080p and Editorial-ProRes, shown in Figure 1. These will be created automatically when we start the render process. I wanted to show you what the finished folder structure will look like after rendering.
3. Open the DaVinci Resolve program.
NOTE: if you have not set up DaVinci, refer to Exercise 6 and go through steps 1-13, then return here and continue.
TECH NOTE: It is wise to check with the post production facility BEFORE using DaVinci for your on-set workflow. There are several post facilities that do not like, or won’t accept, assets that have been flipped or one-light corrected through DaVinci. It seems that, as of v10, there is still an issue with passing important metadata. The biggest issue comes with exporting for AVID. As of version 10.1, DaVinci will not export the proper .mxf, .aaf, and .ale files needed for a feasible workflow within AVID. You will need to find another tool for DIT work when asked for AVID deliverables. If the assets will be edited by you or by a smaller post house where extensive VFX or audio work is not needed, DaVinci is a wonderful tool to use.
Fig. 1 Project folder structure on your hard drive. 4. Click-on the New Project tool in the lower right corner of the window. 5. Enter One_Light for the project name. Save the project. 6. Double-click-on the new One_Light project icon in the Project Manager window. The clips we’re going to use come from the new Blackmagic Pocket Camera. They were recorded at 24 fps, 1920x1080, and are ProRes native from the camera. This is the default setting for new projects in Resolve. We’re good to go. Loading New Media Into The Project In the Media Storage window, in the upper left part of the screen, 7. Click-on your local drive and navigate into the Course Assets, then into the BM-PocketCamera folder.
YourDrive>CourseAssets>BM-PocketCamera
Fig. 2 BlackMagic Pocket Camera Assets.
8. Shift-click on the two file names in that folder. One ends in C0000 and the other in C0016. Refer to Figure 2. 9. Drag the two files down into the Master Bin. This is located right below. (Fig. 3)
Fig. 3 Music note indicates audio with the clip.
Notice the music note in the lower left corner of the clip. This is a visual indicator that there’s audio attached or recorded with this clip. Pulling Clips Into The Timeline Even with the clips in the Media Pool, they are not available for other operations until they’re attached to a timeline. Using the tools at the bottom of the screen, 10. Click-on the Edit tool. 11. Click-and-drag the cursor over the two clips in the Media Pool. This selects the clips. You could also use Shift-Click on each clip. 12. Click-on one of the highlighted clips and drag it to the Timeline, in the lower part of the screen. The clips will now populate the timeline, in the order in which they appeared in the Master Clip bin. (Fig. 4)
One-Light Grading Again, using the navigation tools at the bottom of the screen, 13. Click-on the COLOR tool. The next few instructions will be brief. We have already gone over these steps, so hopefully this will be a refresher. If the scopes are not visible, 14. Click-on the Workspace menu and select the Scopes sub-menu. 15. Select ON from the menu. Then go back to the menu and select 2 Up. The video scopes will appear floating over the screen. You now need to change which scopes to display. 16. Click in the upper left corner of the scopes display window and select Vectorscope from the drop-down menu. On the right scope display, select Parade from the menu. There should now be two scopes displayed. We want these two specific displays to get the right information out of the video files. With the second clip highlighted in the timeline, look at the Vectorscope. It’s fairly obvious from the image alone that it looks quite flat. The scopes confirm this. The whites are barely above the halfway mark. The blacks are lifted off the bottom, and there’s little color saturation. The lady is wearing a red blouse. The image in the monitor is a chroma key shot and the background should be a strong green. Let’s fix all these issues. 17. Click-on the image of the lady, next to the monitor, looking screen-right. It is critical at this point that you select a frame from the shot that fairly represents the scene, a ‘hero frame’. This shot pans from left to right and the overall levels change. The start of the scene is the brightest, but it would be nice to see more of her red blouse to set color saturation. 18. Click-on the Play icon under the clip in the preview window and stop the playback when she has both hands in the air.
Fig. 4 Clips placed on the Timeline.
19. Make sure the Color Wheels are displayed by selecting the Color Wheels icon shown in Figure 5.
Fig. 5 Color wheels tool.
20. Use the Lift luminance control wheel and adjust the black levels so the blue trace just touches the bottom. (Fig. 6)
Fig. 6 Lift adjuster.
21. Click-on the Gain luminance control wheel and drag it to the right until the red trace just touches the top of the white levels on the scope. Instantly this image looks so much better. There is still one more tweak, and that’s to the Saturation. The flesh tones look a bit pale and her blouse is not as red as I think it was in reality. Below the color wheels are three more controls. The Saturation adds intensity to the color. The Hue shifts the entire color appearance of the image. The Lum Mix adjusts the amount of luminance that is mixed between the various channels when setting Lift, Gamma, and Gain. The Contrast is better left alone. It’s an 800 lb. gorilla when it comes to adjustments. Contrast is better done using Lift and Gain.
Fig. 7 Saturation adjustment tool. Place the cursor over the numbers to the right of the Saturation. Notice that the cursor changes into a two-headed arrow. (Fig. 7) This is a visual cue. If you click-on the number, you can scroll left or right to decrease or increase the intensity of the color in the image. This adjustment affects ALL colors in the image. There are ways and tools to select a specific area or color, and to adjust only that area, but
that’s not something we would do on-set. Those very specific color corrections are left to post production only. 22. Click-on the numbers to the right of Sat. 23. Click-and-drag to the right to add just a bit more color intensity to the image. Setting it to 72 is about all the input we would want to do at this point.
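Before moving on to the second image, it may help to see these controls as arithmetic. The sketch below is a toy model, not Resolve’s actual internal math (every grading program uses its own transfer functions): Lift offsets the blacks, Gain scales the whites, and Gamma bends the mid-tones.

```python
def apply_lgg(x, lift=0.0, gamma=1.0, gain=1.0):
    """Toy lift/gamma/gain on a normalized 0.0-1.0 pixel value.

    One common simplified model; NOT Resolve's internal formula.
    """
    x = min(max(x, 0.0), 1.0)
    y = gain * (x ** (1.0 / gamma)) + lift
    return min(max(y, 0.0), 1.0)

# A 'flat' clip: blacks sitting at 0.1, whites only reaching 0.6.
# Pulling Lift down and pushing Gain up stretches it toward full range.
print(apply_lgg(0.1, lift=-0.12, gain=1.55))  # shadows move toward 0.0
print(apply_lgg(0.6, lift=-0.12, gain=1.55))  # highlights move toward 1.0
```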
Working With The Second Image The second image is taken from a different camera angle but seems to be shot in the same basic area with similar lighting. The same basic corrections should create good results. 24. Click-on the image of the man in the knit cap, looking frame right. The scopes display the same issues we saw in the first image. • Use the Lift and Gain luminance controls to fix these issues. After you have finished with the Lift and Gain, • adjust the Saturation to match the first image. That would be close to a setting of 72. This image still looks a bit washed out at this point. Adjusting the Gamma will fix this problem. 25. Click-on the Gamma luminance control wheel and drag to the left. The image mid-range now picks up some character. Don’t overdo this. A setting of −0.08 is about all you can do before it causes issues with the overall look of the image. To compare the images and see if your work created a uniform look between both images, 26. Click-on the play head just below the Timeline. 27. Scroll it left and right to view both clips. Do they look very similar? My guess is no. It’s the difference in the Gamma adjustment you made in the second image. 28. Click-on the clip with the lady. 29. Scrub in to the same point at which you made the color correction. 30. Click-on the Gamma luminance control and bring it down to the −0.08 number you set on the shot of the man. If there isn’t something that appears in both shots, like her blouse or a prop, we can’t get this color
correction perfect at this time. But it does look a lot better than before, when comparing both shots. 39. Click-on the File menu and Save the Project. (CMND-S) Creating Burn-ins Burn-ins are the text graphics that are overlaid on each clip with information that might be important to the production company. It will be up to the production company whether they want burn-ins, where they will be placed on the screen, and what information they want. Typically the required info is Source File Name and Source TC (timecode). Figure 8 shows what you will be creating.
Fig. 10 Data Burn Metadata.
Fig. 8 Typical burn-ins shown over clip. With most programs geared to on-set work, you can create the burn-in and save it for later use. It is important to understand that you NEVER put burn-ins on the files headed to editorial unless they ask for it. The tool we are looking for is in the center of the screen, below the timeline. It looks like a flame. Resolve refers to this as the ‘Data Burn’ tool. The symbolism is not lost on us. (Fig. 9)
Fig. 9 Data Burn tool. This tool opens in the lower-center of the screen and it has two parts: the metadata tools (Fig. 10) and the graphics/placement tools (Fig. 11). The metadata tools in Figure 10 capture information that is part of the clip. In our case, the client wants the
timecode that was generated by the camera and recorded into the clip. They also want the clip name so they can refer back to it later.
Fig. 11 Text layout tools for Burn-ins.
Notice there are two timecodes listed. The Record Timecode is similar to your old VHS VCR: no matter where you put the tape in, it starts at 00:00:00. This is not helpful for referring to a specific spot within a shot. We need Source Timecode. This is the actual TC recorded on each frame of video, provided the camera can record TC. Most DSLRs can’t, and this BlackMagic Pocket Camera wasn’t able to when this was recorded. There are also several types of clip or file names. The one that almost always reflects the name of the file created by the camera is ‘Source Clip Name’. Select the two metadata items you see checked in Figure 10 for this assignment.
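To see why Source Timecode is so handy for referencing an exact frame, here is a small sketch of the standard conversion between an HH:MM:SS:FF timecode and an absolute frame count. It assumes non-drop-frame TC at an integer frame rate; drop-frame timecode needs extra handling.

```python
def tc_to_frames(tc, fps=24):
    """Convert non-drop-frame 'HH:MM:SS:FF' timecode to a frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames, fps=24):
    """Convert a frame count back to 'HH:MM:SS:FF'."""
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(tc_to_frames("01:00:21:08"))  # 86912 at 24 fps
print(frames_to_tc(86912))          # round-trips to '01:00:21:08'
```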
Two graphics open over the clip displayed in the viewer. But they need to be placed in specific locations on the screen. The production company will dictate where these should go. If not, the upper or lower part of the screen works well. To move these around the screen (refer to Figure 12): 1. Click-on the Source Timecode words in the list. This will focus your work on that graphic. 2. Click-on the ‘move up or down’ tool indicated by the cursor in Figure 12. Drag the mouse left or right and the graphic will move in the frame. You want to move the graphic right to the top edge of the frame. Make sure it’s inside the frame line. 3. Click-on the ‘horizontal move’ tool just to the left of the vertical tool. Drag this to the left and the graphic will move to the left edge of the screen. Again, keep it mostly to the left edge of the frame but not going outside the frame line. 4. Click-on the Source Clip Name from the list on the left, and repeat the process. Make them match what you see in Figure 8. We will now save this setup for later use. Above the graphic tools is the Create button. Refer to Figure 13. 5. Click-on the Create button. 6. Enter the following: BlackBirds-TC+SFN then click OK. This preset is now saved for recall in the next steps.
Fig. 12 Graphic placement tools.
Fig. 13 Burn In Preset naming window.
Save your project.
Exporting Clips For Dailies Resolve is well suited for the next function: exporting the clips for both editorial (non-AVID) and review as a Dailies one-light. We are now going to move into the DELIVER mode of the program. Refer to Figure 14.
Fig. 14 Deliver Tab.
This will require a number of settings. The great feature of this program, and others like it, is that we can save the settings for re-use at any time in the future. 1. Click-on the DELIVER tool at the bottom of the interface (Fig. 14). Take a minute to look over this interface. In the upper left are the Render Settings. In the upper right is the Render Queue, where you can stack, or ‘queue’, the files to be rendered. The program will then step through them until completed. The Master Timeline display window is across the bottom of the screen. We’re going to move from top to bottom through the Render Settings so you can better understand the implications of each option.
TECH NOTE: One of the downsides of using DaVinci Resolve on-set, is that while the program is rendering the clips in the Render Queue, you are stuck doing nothing. You can’t go back and import more clips and start to process for new, incoming media. You would need to use Assimilate SCRATCH for that feature. SCRATCH is far more expensive than even the paid, full version of Resolve. If your workflow is anything more than a casual weekend shoot, or music video, Resolve could cause you to fall behind in short order.
First we need to select the clips to which we want to apply the settings during the render. 2. Right-click-on the first clip in the timeline. By default, the playhead is at the start of the clip. 3. Holding down the Shift key, click on the second clip. If you had a hundred clips in the timeline, this would work the same way: select the first clip as the in-point and the end of the last clip as the out-point for the render. You could also use the Select All button just above the timeline on the left side. We’re going to build a preset for dailies first. These will be QuickTime (.mov) files using the h.264 compressor. In the Render Settings window on the left side of the screen (the following steps will refer to Fig. 15): 4. Click-on the button to the left of Individual Source Clips. We want the program to render each clip on the timeline as its own clip.
5. Click-on each field, starting at the top and make sure your settings match.
Fig. 15 Render settings.
• Render timeline as: Individual source clips • Video Format: QuickTime • Codec: H.264 • Set compression quality to: Medium • Data Rate: Automatic • Key Frames: Automatic • Resolution: 1920x1080 HD • Set to video or data level: Auto • Audio: ✔ Export Audio • Render: 2 channels of audio • Set audio bit depth to: 16 Scroll the window down and set the following:
• Save as: Use Source Filename When you get to the Render to: field, click the Browse button and point the program to the Day-1 folder on your hard drive:
Your drive>Black-Bird>Day-1>Dailies
6. Click-on the +More options. 7. Scroll-down until you see Data burn-in. Drop this menu down and select the name of the burn-in you just created. You could have selected ‘Same as project’ because there’s only one preset saved. But if there were several custom presets, you will want to select a specific one. This export preset is finished. There’s a way to save these for recall later, saving a great deal of time. In the upper right corner of the Render pane is the Create button that allows this preset to be saved. (Fig. 16)
Fig. 16 Render Preset name field.
8. Click-on the Create button and enter the following in the Enter preset name field: h.264-1080p+TC+SFN With this naming style you immediately know the basics of the preset. It will render with the h.264 compressor in the 1080p raster, and Timecode plus Source File Name will be burned on the file. 9. Click OK to save the file. 10. Scroll-down to the point where you can see Add to Render Queue and click-on this button. A file location requester window opens asking you to navigate to the basic drive and folder where you want these renders to be saved. In this case we want to navigate to: Your Drive>CourseAssets>BlackBirds
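If it helps to think of a render preset as plain data, here is a hedged sketch of the dailies preset we just built. The field names are mine, not Resolve’s internal ones; the small helper only shows how the preset-name convention above can be derived from the settings.

```python
# Hypothetical representation of the dailies preset built above.
dailies_preset = {
    "render_timeline_as": "Individual source clips",
    "video_format": "QuickTime",
    "codec": "H.264",
    "quality": "Medium",
    "resolution": "1920x1080 HD",
    "export_audio": True,
    "save_as": "Use Source Filename",
    "data_burn_in": "BlackBirds-TC+SFN",
}

def preset_name(p):
    """Build a name like 'h.264-1080p+TC+SFN' so the contents are obvious."""
    burn = p["data_burn_in"].split("-", 1)[-1]        # 'TC+SFN'
    lines = p["resolution"].split("x")[1].split()[0]  # '1080'
    return f"{p['codec'].lower()}-{lines}p+{burn}"

print(preset_name(dailies_preset))  # h.264-1080p+TC+SFN
```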
After clicking OK, the render you created is on the right side of the screen in the Render Queue listing. While we’re in the Render Settings area, we can build another preset to create the files for editing. This creates a queue or batch setup for these files or project and avoids the need to render one configuration, then set up another and render that one, and then... You get the idea. But there is a catch that doesn’t let you totally off the hook. A few notes about the settings before we move on. • The Video Format: setting has lots of file format outputs to choose from. It defaults to QuickTime. • The Codec: setting. Again, lots to choose from here. It is typical to set the output to the highest compression, which creates a smaller file. In editing, the editor can re-link later to the camera original for the best quality. • Most clips will have audio (either linked or from the camera originally) so make sure the Audio box is selected. If you have a timeline with ALL MOS clips (clips with no sound) you can leave this box un-checked. But if you have just one clip with sound, you must check the box, otherwise that clip will be exported without sound. • Render timeline as: is really important. ‘Individual source clips’ is what you want. Otherwise you will render one long clip, and that will make the editors really angry. ‘Save as’ should always be set to Source File Name. If you don’t preserve the camera original file name on these rendered files, then editorial can’t relink or cross-match later in the process. • Render to: directs the final render of the output files to a specific folder. That would normally be inside the Day folder, and then into the appropriate Dailies or Editorial folders to keep things organized. The Sub folder: field is a convenient way to set the final destination on the fly. The program will create that folder for you before it renders the first file. • Data burn in preset: we can choose to set it to NONE, which will not put any burn-ins over the images, or we can set it to Project. This will automatically select the burn-in preset for the project and apply it to all the clips in the timeline. There are lots more ‘tweaks’ that can be set depending on the production requirements. These basic ones we have gone through are what you will use 90% of the time.
Render Setup For Editorial Output
Fig. 17 Additional Output settings window. The next output setup will be the files for editorial. This ‘shortcut’ created in the software has its problems. Scroll down to the bottom of the Render Settings pane and select Add Additional Output. This is a short list of settings you can change, as you can see in Figure 17. Here’s the gotcha for this additional outputs tool: the only settings you can change are those shown. If you want to render the files out in a different raster size, maybe for release to a web site, then you WILL have to create a whole new preset for that output. But if all the settings are the same except for the Video Format and Codec, this is a great shortcut. Set the various fields to the following settings: • Video Format: QuickTime • Codec: QuickTime ProRes 422(LT) • Data burn in: None • Sub folder: Editorial Starting The Render At the bottom of the Render Settings window, on the right side, is the Add to Render Queue button. This will add both presets to the queue. 1. Click-on the Add to Render Queue button. A file location requester window opens asking you to navigate to the basic drive and folder where you want these renders to be saved. In this case we want to navigate to: Your Drive>CourseAssets>BlackBirds
After clicking OK, the render you created is on the right side of the screen in the Render Queue listing. 2. Click-on the Start Render button in the lower right corner of this pane.
The process will begin with the software giving you the status of the render and how many frames-per-second it’s able to process. This FPS is a good number to remember. Some codecs take longer and some zip right through. Resolve uses the GPUs on the video card for this processing. The better the card, the faster it will render. The free version is limited to using only one graphics card, unless you have a new Mac Pro computer that has two GPUs. Those are integrated into the machine and Resolve can’t just turn one off. If you want to run more than one (up to four) GPUs, you must purchase the full Studio version of the program.
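A quick sanity check, using the FPS numbers I report just below as assumptions, shows how render speed translates into time on the clock: footage plays at roughly 24 fps, so a 23 FPS render is barely real time, and a second output pass roughly doubles the cost.

```python
def render_hours(footage_minutes, shoot_fps=23.976, render_fps=23.0):
    """Rough render time for one output pass over the day's footage."""
    frames = footage_minutes * 60 * shoot_fps
    return frames / render_fps / 3600

day = 120  # two hours of footage; an assumption for illustration
print(f"H.264 pass:  {render_hours(day, render_fps=23):.1f} h")  # ~2.1 h
print(f"ProRes pass: {render_hours(day, render_fps=18):.1f} h")  # ~2.7 h
```

Nearly five hours of locked-out rendering for a two-hour day is exactly the bottleneck described in the tech note below.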
On my upgraded 2007 Mac Pro Tower with 8 cores and one older GPU card, it averaged 23 FPS for the h.264 renders and 18 FPS for the ProRes LT files. It is slower because the camera files and the files being rendered are on the same USB3 drive. If I put the camera files on one drive and rendered files to another drive, the FPS would be higher. Still, this is not good enough to keep up with a day’s work. To increase the FPS I could install a much more robust graphics card. With the full Studio version of Resolve, you can have up to four graphics cards. Here’s the ‘deal breaker’ for using DaVinci Resolve on-set: once you start a render, you can’t do anything else in the program until it’s finished. On a busy, multi-camera, RAW-shooting day, you will get run over fast. On a simple DSLR shoot, this program will work fine. Reviewing The Output It’s a very good idea to double check your work from time to time, until you are sure there are no problems. First make sure the files were saved to the proper folders, as seen in Figure 18.
Fig. 18 Folder and files created by rendering settings.
These files would be handled on-set in the following manner: • Files transcoded for editorial go on the drive that will be handed off to editorial. On that drive you should save the camera raw files, daily audio files, and the transcoded files. • The one-lighted files with burn-ins will go on your local drive array.
TECH NOTE: There is a good reason why we would want to have Resolve export the dailies first, then the same files for editorial. There is a feature called ‘Create Additional Output’ in the Output area of the Render Settings. By clicking on this button, you can add several additional outputs that Resolve will process in the same pass. But there is a problem with this feature. If you need to limit the data rate or change the raster size of the output, those selections are not available with this tool. In our case, and it is very typical of on-set work, we will want the Dailies to be rendered with the H.264 codec and, to keep the file sizes small, we want to set the render quality to Medium. When we render the files for editorial, using the ProRes (LT) codec, there isn’t a quality setting available. That codec has the data rate pre-set internally. Knowing this, you want to first create the render that will need more ‘tweaking’. Additional renders, from the same render settings preset, will have to be ones that don’t require any modifications other than the basic codec. If you need dailies in 1080p and 720p, you will have to create a separate render setting, from scratch, to get the downsized H.264 file properly rendered.
Production handles the distribution of these files to producers and directors on a typical production. They should provide you with a thumb drive or other portable drive to copy them onto for later use. Some productions will provide access to a cloud-based file-sharing arrangement. You will be required to upload the day’s work before you call it a day. The next part of this tutorial will go over syncing audio to video. This is actually something you would do before you do color correction. We did the one-light first in this case so you could get a bit more experience with color correction and the tools used to do it. It’s often the most time-consuming part, and practice is the only exercise that will make it easier and faster. Chapter Recap You might be thinking “Wow, that was a lot of back and forth within the program.” It was. This is typically not the workflow you would use on-set. The way it was presented here was to facilitate teaching flow. On-set, or in any other asset workflow using DaVinci, you would do as much of the process at any given time (or window) as possible before moving on to the next. An example would be the addition of the burn-ins. Once you are in the color grading part of the program, you would create the project preset BEFORE moving on. In fact, most DITs make that step part of the pre-production functions, much like creating the folder structures ahead of the first day’s production. The more you can accomplish before the stress of production, the further ahead you will be. If this workflow stuff seems confusing, it may help you to create a written checklist. The first few times through, use the list. It will become second nature after that.
Fig. 19 Blackmagic Pocket Camera. Courtesy © Blackmagic Camera
Submission for grading Submit this assignment per your instructor’s requirements.
NOTE: The reason we’re using the title ‘Black Bird’ is that the footage for this project was shot for a short film called The Ravens, directed by Jennifer Perrott and shot by Cinematographer John Brawley. The primary footage for the project used the RED camera. These behind-the-scenes shots (BTS, as they are called in the business) were captured with an early version of the Blackmagic Pocket Camera (Fig. 19). They then offered the footage for us to play with, to see what the brand-new camera could do.
Where does audio fit into this whole process? These clips have audio on them from the camera microphone. Double-system sound productions will need an extra step in the Edit area of the program to sync up the files. We will go over that process in the next few exercises.
Exercise 9: DaVinci Resolve Audio Sync
In this exercise we will go over the following: • Linking audio with video files • Apply one-light correction using DaVinci Resolve • Work with controls in the color correction tool and scopes • Export dailies and properly formatted clips for editorial This assignment will take approximately 30 minutes. Exercise goals: • Learn how to add media to a project • Find sync for audio and video clips • One-light color correct • Create a LUT • Export dailies. Required Reading/viewing: Syncing Audio in Resolve: https://www.youtube.com/watch?v=GRKoJuRjajE You will need: • Your course media files • Your external hard drive and • DaVinci Resolve software.
We’re now going to create a new project, add the media to the project just as you would on set, and then link the audio to the video clips. We’re going to do the ‘syncing’ process two ways: a) the ‘worst case’, where the audio does NOT have timecode, so we have to manually bring both clips into alignment, and then b) using the auto-sync abilities of the program. Then we will export dailies files. It should be said at the start that DaVinci is not the smoothest tool to do this with, but it works just fine once you get the flow down. As well, the manual for DaVinci is not very clear at explaining this process. We’ll clear that up.
NOTE: What is shown here is from v12.2 of DaVinci Resolve. It is similar to v11 in workflow, but v11 has different interface icons. Importing New Media Into Existing Project With the DaVinci program open to the Projects window, 1. Create a new project called ‘audio sync’. 2. Open the new project by double-clicking it.
Fig. 1 RED Assets folder opened in the Media Storage pane.
3. Navigate to the Course Assets folder on your hard drive and then into the folder titled Red Assets. (Fig. 1) At this point we need to create some basic organization for the incoming assets from the filming set. In Resolve, you can create ‘Bins’ to hold the day’s work. It is noteworthy at this point that, as a DIT, you really don’t care if audio and video are mixed in the same folder. Actually, Resolve requires the assets that are to be linked to reside in the same folder, so the program can find them when doing the auto-syncing process.
4. Right-click on the Master Bin in the lower left bin area and select Add Bin from the menu. (Fig. 2)
Fig. 2 Add New bins.
5. Click-on the first bin (named ‘Bin 1’) and rename it Day-1.
6. Right-click-on the Day-1 bin, highlighting the bin, and select New Bin from the menu.
7. Rename the new bin ‘Cam-A’.
8. Do the same for the bin named ‘Bin 2’: click on the bin name and rename it Day-2.
9. Right-click on the Day-2 bin and create a new bin called ‘Cam-A’. (Fig. 3)
Fig. 3 New bins created.
10. Click-on the Day-1 Cam-A bin to select it.
Fig. 4 Day-1 Cam-A bin with video assets inside.
11. Click and drag the following video files from the Red Assets folder within the Media Storage pane into the Day-1 Cam-A bin (Fig. 4):
A001_C014_102873.mov
A001_C025_102873.mov
12. Click-on Day-2 Cam-A, then click and drag the following video files from the Red Assets folder within the Media Storage into the Day-2 Cam-A bin:
A001_C003_1204RN.R3D
A001_C015_1204XE.R3D
13. Click and drag the following audio files from the Red Assets/Audio folder within the Media Storage into the Day-1 Cam-A bin (Fig. 5):
Fig. 5 Day-1 Cam-A bin with video and audio assets inside.
SC004c_T004.wav
SC004g_T004.wav
14. Click and drag the following audio files from the Red Assets/Audio folder within the Media Storage into the Day-2 Cam-A bin:
Scene01D_007.wav
Scene01A_002.wav
NOTE: These audio and video clips come from two separate productions, both using the RED One camera. The first video clips were converted to
Exercise 9- DaVinci Resolve Audio Sync .mov files for ease of use in these tutorials. The second set of RED assets, are the camera RAW files, as indicated by the R3D file extension. As well, notice the different audio file names. Each audio person will have their own way of naming files. It won’t vary much from what you see here. Audio mixers typically name the file to indicate the scene and take. That information is visible on the camera slate, and hopefully, it matches. Syncing Non-timecode Matched Clips The first task will be to link audio with video clips, that don’t have matching timecode (TC). This would be the same process used if the production shot with a DSLR camera or recorded audio with something like an H4N. Neither generates useable or ‘sync-able’ TC. The camera crew should use a camera clap-slate for each shot so lining up the clips (syncing) can be easily accomplished. The interface needs to set up so we can easily see the both the slate and the audio wave form. 15. Click-on the musical note located in the top bar of the right hand corner of the screen, until you see either audio waveforms or audio meters displayed. 16. Click-on the Waveform button, located above the audio mixer display pane. (Fig. 6) The interface should now have the video display on the center part of the screen and the audio display to the right. The audio display is divided into two parts. The top is a wider display of Fig. 6 Audio waveform display. the audio clip’s
waveform. The lower part of the split is a highly magnified section of the waveform. This allows you to see small portions of the audio file making it easier to locate that very short moment in time, when the slate claps shut. 17. Click-on the video clip in the Day-2 Cam-A bin A001_C003_1204RN.R3D . 18. Click-on the audio clip in the Day-2 Cam-A bin Scene01A_002.wav . 19. Click-on the play icon ▷ under the video clip (you can use the space bar instead), and stop the playback right when the slate claps shut. Using the left and right arrow keys you can move one frame left or right to locate the exact spot where the slate is shut. This will be difficult for this clip. The 2nd assnt. camera did not have the slate fully within the frame when the clapper was closed. In situations like this, you will have to look for visual cues. In this case, the slate will move sharply down, right at the moment the slate is closed. 20. Click-on the play icon ▷ under the audio waveform. Play the clip until you hear the slate clap shut. Make sure you have the right slate marker. Listen to the audio file further in and see if there is a second slate clap. NOTE: What you hear in this audio file is very typical of the production cadence on a set. The female voice you hear is the First Assnt. Director taking control of the shot. You will typically hear the audio slate by the audio mixer or slate operator (2nd assnt. camera person), then the slate clap. Good 2nd assnt. camera persons will say ‘marker’ right before they clap the slate, letting you know that the next sound you hear, should be the slate. If the slate is close to the actor, they will clap the slate quietly out of respect for the actor. These are called ‘soft slates’ and they can be hard to see in the wave form. Once the video clip is paused on the slate clap and the audio clip is also paused on the waveform indicating the sound from the slate, you’re ready to link both clips. In the lower right corner of the waveform display pane is an icon that looks like a chain link. (Fig. 7)
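Before the final link step, an aside: if you ever want to pre-mark probable slate claps programmatically, the crude version of ‘look for the clap in the waveform’ is to find the loudest transient near the head of the file. A minimal sketch, assuming a mono 16-bit WAV and NumPy installed; as noted above, soft slates can defeat this kind of trick.

```python
import wave
import numpy as np

def find_clap_seconds(wav_path, search_seconds=30):
    """Return the time of the loudest sample near the head of a mono 16-bit WAV."""
    with wave.open(wav_path, "rb") as w:
        rate = w.getframerate()
        n = min(w.getnframes(), rate * search_seconds)
        raw = w.readframes(n)
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64)
    return float(np.argmax(np.abs(samples)) / rate)

# Hypothetical usage with one of the course audio files:
# print(find_clap_seconds("SC004c_T004.wav"))
```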
21. Click-on the link icon.
Fig. 7 Audio link icon. The video and audio clips are now linked. The visual indicator is the music note in the lower left corner of the clip thumbnail. Now, you will notice that the other clip has the same indicator. Actually, the original footage had in-camera audio recorded on-set. This is not a bad idea. There are times when the in-camera audio can help with those clips that are more difficult to sync. It’s time to save what we have done. 22. Save the project (CTRL-S). 23. If a file name requester window opens, name the project ‘Audio Sync’. 24. Click-on Save. The project is now saved within the DaVinci database. It would be good, at this point, for you to repeat this process with the other video and audio clip in the bin. This clip will be easier: the slate is clearly in-frame and there’s only one slate clap on the audio file. Syncing Timecode Matched Clips The best position to be in is having both the video and audio files delivered with matching timecode. Almost all video editing and DIT-focused software has some form of auto-sync function. Resolve and others have batch-syncing functions which can sync any number of clips very quickly. You will use the clips from Day-1, Cam-A. 1. Click and drag over all the clips in the Day-1 Cam-A bin. This will highlight them, indicating they are selected. The visual cue is that each clip will have a light-colored border around it. You could also hold the Shift key and click on each clip’s icon to select them. 2. Right-click on any clip in the bin. 3. Select Auto-sync Audio Based on Timecode. (Fig. 8)
Just that fast, the program will look at the timecode on each video clip, then sort through the audio clips in the bin and match them together. Each linked video clip will now have the music note in the lower left corner, indicating it has its associated audio file. A note here about on-set workflow rhythms. Camera cards will come to you at the lunch break. At that point, you will get your first audio files. It’s typical for you to bring in all the video files, organized into one Resolve bin for Day-1. You can then put all those video files on a timeline, do the one-light color correction and hold any further processing until you get the audio files. When you are handed the card from audio, you will make a backup(s), then, using the media browser in Resolve, bring all the individual audio files into the same bin with the video files. Provided all video and audio files have matching timecode, you can follow the previous steps to auto-sync all the clips. At that point, you go into the Edit area of the program. All the video files on the timeline should have their ‘synced’ audio attached. Your next step would be to export the clips for editorial and dailies delivery. Because of the limitations of Resolve, it’s best to get this rendering started before you step away for lunch. Once the program starts rendering, you can’t do anything else but watch it render.
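Conceptually, ‘Auto-sync Audio Based on Timecode’ just pairs each video clip with the audio file whose timecode range overlaps it. Here is a hedged sketch of that matching idea; the clip names echo this exercise but the frame numbers are invented, and real software reads the TC from file metadata.

```python
def overlaps(a_start, a_end, b_start, b_end):
    """True when two frame ranges share at least one frame."""
    return a_start < b_end and b_start < a_end

# (name, start frame, end frame) - invented numbers for illustration
video = [("A001_C014", 86_400, 88_000), ("A001_C025", 90_000, 91_500)]
audio = [("SC004c_T004", 86_350, 88_100), ("SC004g_T004", 89_900, 91_600)]

for v_name, v_in, v_out in video:
    match = next((a_name for a_name, a_in, a_out in audio
                  if overlaps(v_in, v_out, a_in, a_out)), None)
    print(v_name, "->", match)
```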
Fig. 8 Auto sync menu selection.
Color Correction, One-Light With the previous project still open, click-on the Edit
button at the bottom of the screen. This will take you to the Edit functions of the program. A Timeline will be open but empty.
It’s fairly obvious from the image alone that it looks somewhat flat. The scopes confirm this. The whites are low. The blacks are lifted off the bottom, and the color saturation is a bit low. Let’s fix all these issues.
1. Drag the video clips (two men sitting inside an ambulance) to the open timeline and drop the clips on the V1 video track within the Timeline. The video clips should have audio displayed on the audio track of the timeline. Play the timeline to ensure sync. Notice that in the Media Pool in the upper left, the new Timeline thumbnail is visible with the name ‘Timeline 1’. Rename the Timeline: 2. Click-on the words ‘Timeline 1’ below the thumbnail image. This will highlight and select the text. 3. Enter ‘Audio Sync’. In an actual production environment you would name the Timeline for the individual mags. In this case it would be A-001. 4. Save the project by using the shortcut keys CMND-S. 5. Click-on the COLOR tool located at the bottom of the screen. The next few instructions will be brief. We have already gone over these steps, so hopefully this will be a refresher. If the scopes are not visible, 6. Click-on the Workspace menu and navigate to the Video Scopes. 7. Select ON. 8. Click-on the upper right corner of the scopes and select 2-Up. (Fig. 9) The scope in the right window should show the Parade and, in the left window, the Waveform. Refer to Figure 10.
Fig. 9 2-up Scopes display selection icon.
Fig. 10 Selected clip with Parade and Waveform scopes.
Again, it’s critical at this point that you select a frame from the shot that fairly represents the scene. This shot stays static (no pans or camera movement), but we should always scroll through the scene and find the part of the shot that represents the whole look of the shot. In this case it would be good to select a frame where both of their faces are clearly seen. 9. Click-on the Play icon under the clip in the preview window and stop the playback when both actors’ faces are visible. The timecode in the upper right of the preview window should read 1:00:21:08. 10. Use the Lift luminance control wheel and adjust the black levels so the blue trace just touches the bottom. 11. Click-on the Gain luminance control wheel and drag it until the three traces just touch the top of the white levels on the scope. Instantly this image looks much better. There is still one final tweak: adjusting the red level in the highlights, which is low. 12. Click and drag the control located inside the color wheel portion of the Gain control until the red channel’s highlights are equal to the levels of the green and blue channels. It will take a moment for you to move the control around until you get a feel for how moving in one direction affects all channels. That’s all we need to do at this point. The next step is to save a LUT from this shot to apply to other similar shots.
13. Click on the File Menu, and Save Project. CMND+S is the shortcut key stroke. Creating LUTs The biggest time saver for this quick color correction process is to save what you just did and apply it to other similar takes in the same scene. LUTs (Look Up Tables) are metadata files containing all the corrections done to a clip. As you learned in the chapter on LUTs, it is highly unlikely that a LUT from one scene can be applied to shots in another unless they are shot with very similar lighting and camera setups. That being said, if you have more than two clips from the same scene, it’s worth the time to create and save the LUT. It’s really easy.
14. Right-click-on the thumbnail of the clip. (Fig. 11)
Fig. 11 Generate 3D LUT menu selection.
15. Select Generate 3D LUT (cube).
16. Name the LUT something that will help you remember what it’s for. In this case we’ll call it ‘Ambulance-cab-1-light’.
To check that the LUT was saved and is available in Resolve, move over to the Node pane located in the upper right of the screen and,
17. Right-click to drop down the Node menu. (Fig. 12)
Fig. 12 Node Menu.
18. Select the 3D LUT menu item.
The first item in the list is the new LUT you created. Notice it appended the clip name. This is a handy automatic feature that lets you know the clip originally used to establish this LUT. With this LUT now created, you can select the second clip thumbnail on the timeline, right-click and select 3D LUT from the menu. The LUT you created will be in the list. Select that LUT. It will now apply the corrections contained in the LUT to the shot. How does it look? The scopes tell you what you need to know. Looks fine, so let’s move on. LUT didn’t save: If you are in an educational lab environment, it is quite possible that DaVinci will NOT save the LUT to the proper folder. The LUT should be saved to the following folder on a Mac: System>Library>Blackmagic Design>LUT
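The .cube file that ‘Generate 3D LUT (cube)’ produces is plain text, so it is easy to inspect in any editor. As an illustration of the format only (this writes a valid but do-nothing identity LUT; Resolve’s own output carries your actual correction), a minimal sketch:

```python
from itertools import product

def write_identity_cube(path, size=17):
    """Write a minimal identity 3D LUT in the plain-text .cube format."""
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        # .cube convention: the red coordinate varies fastest, blue slowest.
        for b, g, r in product(range(size), repeat=3):
            f.write(f"{r/(size-1):.6f} {g/(size-1):.6f} {b/(size-1):.6f}\n")

write_identity_cube("Identity-1-light.cube")  # hypothetical file name
```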
Here’s how to fix this issue: • Re-save the LUT to the computer’s desktop or to the root of your hard drive. • Go to the menu: File>Project Settings. • Select Lookup Tables from the left side of this window. • Look for the button Open LUT Folder. This folder is buried deep in the system folders of your computer. Leave this Finder window open. • Open a new Finder window and navigate to where you saved the LUT you created (Desktop or your hard drive). • Drag the LUT into the DaVinci LUT folder window. Make sure it does not get saved into another folder. • Close all the Finder windows and the Project Settings window.
• SAVE your project and close DaVinci. • Re-launch DaVinci and re-open your project. • The LUT you created should be the first one in the listing. Render With Burn-ins The export we will create will be for the director and producers. They want MPEG4/h.264 compressed clips with timecode and clip name burned into the image for easy reference.
To set this up we need to stay in the COLOR mode of the program. There are a lot of tools we have not explored, just above the color wheels and centered on the screen. 1. Click-on the far right icon, which looks like a flame. Refer to Figure 13.
Fig. 13 Color Mode, sub-mode tools.
This is where we set the information to burn-in over the image. The tools are, in some respects, clunky, but they get the job done. There are just a few bits of metadata we want burned in: the Source Timecode and the file name. Refer to Figure 14.
Fig. 14 Placement of the burn-ins.
The Source Clip Name is preferred over other clip or file name identifiers. This can be a long list of letters and numbers, depending on the camera. This is the real, camera-generated file name. In the Data Burn window (Fig. 15), 2. Click-on Source Timecode and Source Clip Name in the list.
Fig. 15 Data Burn In settings.
Just to the right of the list are tools and modifiers. We’ll leave all of them alone except
the ones that position the burn-in. The [◁▷] control moves the position left and right. The controller to the right moves the position up or down in the clip frame. Refer to Figure 16, where the cursor is pointing. In the metadata list to the left (Fig. 15), select Source Timecode. The placement tools will now focus their actions on the Source Timecode graphics. Using these controls, move the Source Timecode to the lower-left part of the screen. 4. Click-on the numbers next to the Left-right tool and drag the mouse to the left. The position of the displayed timecode should move to the left side of the screen. 5. Click-on the Up-down tool and drag it to the left until the Source Timecode information is just above the bottom edge of the frame. Make sure it is in-frame or it won’t be seen when rendered. Next, move the Source Clip Name to the right and down. 6. Click-on the numbers next to the Left-right tool and drag the mouse to the right. The position of the displayed clip name should move to the right side of the screen. Using the Up-down controls,
7. Move the burn-in to the lower right corner of the clip frame. The finished burn-ins should look like the image in Figure 14. Using the Create button (upper right corner of Fig. 16),
3. Click-on the Individual source clips button. 4. Move down the settings and select the following: • Render timeline as: Individual source clips • Video Format: QuickTime • Codec: H.264 • Set compression quality to: Medium • Data Rate: Automatic • Key Frames: Automatic • Resolution: 1920x1080 HD • Frame rate: 23.976 • Audio: ✔ Export Audio • Render: 2 channels of audio • Set audio bit depth to: 16 On the left side of the screen is a +More Options toggle. Click on this to reveal more options. • Data burn-in: Same as project
Fig. 16 Burn tools and modifiers. 8. Click-on the Create button and enter the following into the Data Burn Preset window: STC+SCN 9. Click-on the Okay button. The STC stands for Source Timecode and the SCN equals Source Clip Name. You can now recall this at any time for use on any clip. Exporting Clips for Editorial Again, this is a review of Exercise 8, so we’ll move through the process quickly for practice. 1. Click-on the DELIVER tool at the bottom of the interface. 2. Click-on the Select All Clips button right above the clips in the timeline. We’re going to build a preset for H.264 renders. In the Render Settings window there are a dozen or more settings that can either help or totally mess up the rendered output. Some of these settings are not straightforward as to what they do, so pay careful attention to what you select or set. When the possibly hours of rendering are done, it could all be wrong. So, starting from the top of the window, we’ll set up a render preset. The first item is how we want the files exported. We always want Individual source clips selected.
NOTE: if you have more than one burn-in created within this project, you will need to drop the menu down and select the one you need. If you drop the menu down now, you will see the one you created. Scroll the window down and set the following: • Save as: Use Source Filename • Render job to: Set this to render to your hard drive into the Completed Assignments folder. You can use the Browse button to navigate right to the folder. On the left side of the screen is a +More Options toggle. Click on this to reveal more options. • Sub folder: enter A001-4G. This is the folder inside the Completed Assignments folder that will be created to hold the newly rendered files. On set, it’s a good idea to specifically place rendered files into folders that represent a scene. It’s up to you to decide if you will put each camera’s output in a separate folder for that scene, or combine all of the cameras’ files in one folder. The remainder of the settings in this part of the window can be left alone. 5. Scroll back to the top of the Render Settings window and click-on the Create button. 6. Enter 47-min_h.264-w-burns. 7. Click OK. This new setup is now saved in the Easy Setup listing for recall at a later time.
Starting The Render To select the clips to render: 8. Click-on the Select All Clips button right above the clips in the timeline. 9. Click-on the Add to Render Queue button located in the lower right corner of the Render Settings window. It might warn you that it could over-write what’s in the folder already. Allow the system to do it. On the right side of the screen is the Render Queue. This is a listing of all the renders queued to be done or that have been completed. The render you just created is the first item in the queue. 10. Click-on the Start Render button located in the lower right corner of the Render Queue pane.
Submission for grading Submit the assignment per your instructor’s instructions.
The program will now process the files. You will be surprised how long it takes to render h.264 files; they are very processor-intensive. Resolve is designed to make use of GPU processors. The more you have in your video card, the faster it will render. After the render is completed, look in the Completed Assignments folder and then into the A001-4G folder. There you should see the file with the original clip name. Double-click on the file and review your work. Reviewing The Output It’s a very good idea to double check your work from time to time, until you are sure there are no problems. As a review, on-set these files would be handled in the following manner: • Files transcoded for editorial go on the drive that will be handed off to editorial. That drive should contain the camera raw and audio files, and the transcoded files. • The one-lighted files with burn-ins will go on your local drive array for distribution as the production dictates. Typically, production handles the distribution of these files to producers and directors. They should provide you with a thumb drive or other portable drive to copy them onto for later use. Some productions will provide access to a cloud-based file-sharing arrangement. You will be required to upload (or start the upload) before you call it a day.
DaVinci Review Questions Answers located in Appendix C. 1. The key to setting up a project in Resolve is correctly establishing the: A. Raster and frame rate. B. The file or project name. C. The color space. 2. In Resolve and other color correction programs, the color wheels listed from left to right are: A. Gain, Lift, Gamma B. Lift, Gamma, Gain C. Gamma, Lift, Gain 3. Match the following:
A. Lift      __ Black levels
B. Gamma     __ White levels
C. Gain      __ Mid-range Gray scale
4. Putting clips in the Media Pool allows you to move directly to the Color Correction part of the program and work with those clips. A. True B. False 5. To set up the Burn-in tool, you will look in which area of the program? A. Color B. Delivery C. Edit
6. Resolve allows you to continue working on new files once you start a render. A. True B. False 7. When asked to burn-in the ‘name’ of the camera files and the ‘timecode’ recorded on the clips, you would select: A. Source Clip Name & Source Timecode B. Record File Name & Record Timecode C. Source File Name & Keycode 8. If all clips (audio and video) have matching timecode, using Resolve you would select all the audio and video clips that are in the same bin, and use the audio sync tool (a chain link icon). A. True B. False 9. A basic Rec. 709 color correction adjusts the following (select all that apply): A. Gain B. Black Levels C. Saturation D. Hue 10. A video clip with audio, either recorded with it in camera or synced up, is visually denoted by what in the clip thumbnail? A. A folded edge in the clip thumbnail. B. A box with lines in the edge of the thumbnail. C. A music note in the corner. D. A gold highlight around the thumbnail of the clip.
SCRATCH
SCRATCH™ by Assimilate is currently one of the most popular pieces of software for doing several DIT tasks and for final, high-end color grading. It is very good at linking audio and video files, creating one-light dailies, and exporting several versions of a clip efficiently. SCRATCH™ is formidable finishing software. It’s also expensive, at $3000 for a permanent license or $650/year on a rental agreement. Recently, Assimilate has made a bold move in upgrading their product. SCRATCH used to offer a full version and a Lite version. Both are now merged into one software package with other great, DIT-specific tools, such as Live View™, which allows for camera-to-monitor (through SCRATCH) color corrections and previews, live on set. They have included their Play and Web products as a total package. And, if that weren’t enough, they have a cloud-based product that will put your dailies up for client review, anywhere, anytime, with a web connection.
Be advised that the SCRATCH products have a very different workflow and interface from most software you have encountered. Once you get the hang of it, there is a method to their seeming madness. It’s all centered around working fast and efficiently. In the following tutorials, we will introduce you to SCRATCH setup, then simple audio syncing, followed by one-light color correction, and exporting for review. The term(s) used within the program will be listed on the left, and the more commonly used term they equate to will be listed on the right.
The software is fully ACES compliant and supports a very wide array of cameras, codecs, and video cards. And if all this still weren’t enough, it processes at speeds not available from any other software.
NOTE: There are many terms used in the SCRATCH interface that either make little sense in the manner they are used, or are not representative of what is commonly used in the industry. I will include the following ‘secret decoder’ text within the descriptions to clear up as much misunderstanding as possible. It will appear as follows: Statistics = Scopes Separate folders* = separate files
Exercise 10: SCRATCH Project Setup
SCRATCH has a trial version that can be downloaded to your computer and used in these assignments. Go to: http://www.assimilateinc.com/scratch_trials/registration.aspx?lab=1 ⌛ This assignment will take 15 minutes to accomplish, plus the time to view the video. This tutorial represents version 8, which has much-requested user interface improvements. Those changes will be reflected in these tutorials. Required viewing: Scratch Tutorial 101 http://vimeo.com/98121609 You will need: • A computer with SCRATCH installed • Your external hard drive with the course assets installed on it.
We’re going to start as if you’re working on-set, for a real production. The first steps will help you get things organized before the craziness of working on-set begins. We will begin by setting up some folders on your hard drive. On your hard drive, inside the Course Assets folder, create the following folder: SCRATCH
NOTE: The following folder structure and folder placement is recommended if you’re working in a classroom lab environment. Often the lab computers have system-specific folders and pathways locked down so they can’t be written to. If this is the environment you’re working in, please create the following folder on your personal hard drive to circumvent the write-permissions issue. Once you start SCRATCH and point its system and project folders to this folder, SCRATCH will auto-create the following folders inside: (Fig. 1)
Defaults
Plugins
Project
Render
Settings
Users
Fig. 1 Proper file structure for SCRATCH.
These folders will support the files required by SCRATCH. Normally, these would be set up automatically on your computer’s system drive. In a computer lab environment, and at many professional DIT stations, configuring them on the external drive is more the norm. Setting Up DIT Render Folders The folder structure you see below is a very common one for on-set work. It’s the one we will use for the next few exercises. In order to make use of it, we need to create it on your hard drive. The folder structure will look like this:
- Project or program name
  - Date or day of production
    - Cam-A
      - A001
      - A002
      - A003
      - etc.
    - Cam-B
      - B001
      - B002
      - B003
      - etc.
    - Audio
    - Editorial
    - Dailies
To do this, open your hard drive with the computer OS’s Finder. Within the Course Assets folder create the following folders:
Day-01
  Cam-A
    A001
    A002
  Cam-B
    B001
  Audio
  Editorial
  Dailies
With these set up, we’re ready to work. Setting Up SCRATCH The first setting is the System. SCRATCH has some files it needs to read and write to, located in the system folder on your computer.
If you’re working on classroom lab computers, most are set up (or locked) to keep the tinkering within the system folder to a minimum. We need to direct where SCRATCH writes these files. Opening SCRATCH 1. Launch the SCRATCH program, most likely located in your computer’s Applications folder. If you’re using v7, the program will be called ‘Assimilator’. If you’re using v8, it will be called ‘Scratch’. These tutorials are based on v8.1; there were big changes in the interface between v7 and v8, and minor changes in the latest v8.3. On the left side of the interface are the four basic buttons allowing us to set up the System, Project, User and Session. We need to go into the System area first. 2. Click-on the System Settings button. Refer to Figure 2. This will open new fields where we need to make the first changes. These setup changes are NOT covered in the video you watched.
Fig. 2 SCRATCH’s primary setup buttons. The first task on the list (Fig. 3) is to point the program to the drive where it will save any support files. This is your external hard drive. Under the User Folder, 3. Click-on the Set.. button. 4. In the left column scroll down and select Volumes. This should reveal all the hard drives attached to the computer. The next row that opens is the listing of storage devices. 5. Navigate to the SCRATCH folder you created previously and select that folder. 6. Click-on Select in the lower part of this window.
Fig. 3 User and Project directory settings.

7. Do the same with the Project Folder and the Shared Folder settings. You have just pointed the SCRATCH system to your hard drive for the storage of User information.

Session Settings
This is SCRATCH's version of what others call a 'project'. Think of it as the production you're working on. Inside of that Session you will bring in the files for processing.
1. Click-on the Session button. Under the 'Enter Project' button located in the lower third, center screen, is a blank field with a grayed-out 'Add' button to the right.
2. Click-in the blank field and type in the new project name 'Scratch Workflow'. (Fig. 4)
3. Click-on the ADD button. This creates a new project in the database. Any other projects you create in the future will be listed here for quick access. Notice that the Project Settings button on the left side of the screen just became accessible.

Project Settings
We're now going to set up the Media, Render and Cache file pointers.
1. Click-on the Project Settings button in the left part of the window.
2. Click-on the first Set.. button for the Media, Render and Cache Folder. (Refer to Figure 5.)
3. Navigate to your hard drive and select the SCRATCH folder.
4. Click-on the word Select in the lower part of the window. The software will auto-fill the other fields by default.
Fig. 5 Media and Render pathway settings.

This selection sets the Media, Render, and Cache to the same drive. If you are rendering to a local or attached drive which is different from the media drive, you can click-on the SET.. button next to the one you want to change and direct it to the proper drive. In the lower-right side of the Project Settings window (Fig. 6) are the fields where we tell the program what raster, frame rate and Gamma settings we want to use. As with most software, you need to know what the camera files' specifications are before you set up the work environment.
Fig. 4 Adding new project field.

USER TIP: One of the 'different' user interface behaviors within SCRATCH is that fields are ready to accept information as soon as they appear. They are 'modal', or already active: you type first, and the ADD button then commits the action. In most pieces of software, you click on Add, then enter information. SCRATCH is backwards in this workflow. Almost all fields in SCRATCH operate this way.
Everything should be as you see it in Figure 6. If it isn't, click-in the field areas where the settings are, and select the settings that match. It is vital that we set the format to HD 1080-23. Most of the work you will do on dramatic productions will be 23.976 or 24 fps. If you're in Europe, the frame rate would typically be 25. Although SCRATCH can and will deal with varied frame rates, if you don't set this properly here, there will be problems when you get to the render part of the process.

Fig. 6 Project Settings, Default Media Settings.

Before we leave this window, look to the far right and you will see a drop-down menu titled 'ACES versions'. If you remember the reading on ACES, this will start to make sense. SCRATCH keeps up to date on the latest released version of ACES. Now that you have done this exercise, watch this video; it explains some deeper tricks and techniques for dealing with raster sizes, and you will also get a glimpse into the hidden power features of SCRATCH. https://vimeo.com/134191206
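A quick aside on why '23.976' keeps appearing: it is the NTSC-derived rate 24000/1001, not a rounded 23.98. A short, purely illustrative calculation shows the exact value and why mislabeling it as a flat 24 fps causes trouble at render time:

    from fractions import Fraction

    film = Fraction(24000, 1001)    # the exact value behind "23.976"
    print(float(film))              # 23.976023976023978

    # Treating 23.976 material as a flat 24 fps loses about 86 frames per hour:
    print(24 * 3600 - float(film) * 3600)   # ~86.3 frames of drift per hour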
Exercise 11: SCRATCH Syncing Audio
SCRATCH has a trial version that can be downloaded to your computer and used for these assignments. Go to: http://www.assimilateinc.com/scratch_trials/registration.aspx?lab=1
This exercise will take 25 minutes to accomplish.
Required viewing: http://vimeo.com/102211492
This is an overview of what we will go through step by step over the next few chapters. In the video, the presenter glosses over some important information, or at least goes very fast.
You will need:
• A computer with SCRATCH installed
• Your external hard drive with the course assets
PROCESS NOTE: We are going to sync the audio clips first, before we delve into color correction (the one-light). This might not be the way the workflow happens on-set; it is more typical to do the one-light color correction and then re-link the audio. The reason is simply the rhythm of production sets: it is not uncommon for audio to hand DIT its assets at lunch or at the end of the day. You can, however, prep all the shots for linking and rendering without audio. I have presented this workflow out of order so that you get a good overview of what some consider the most difficult part of the process.
☞ If you are continuing from the previous exercise, skip to Naming the Group and the Construct.

Opening SCRATCH and Launching a Project
Make sure your external drive is connected to the computer.
1. Launch the SCRATCH program.
2. Click-on the Session button located on the left side of the screen. (Fig. 1)
3. From the User listing, select your name.
The video assets used in this assignment are from a Utah Valley University Digital Media senior projects group short film ’47 Minutes’. The film is an adaptation of a stage play looking at three groups of people whose lives intersect when the planes crash into the Twin Towers on 9/11.
Fig. 1 Session User and Project selection window. 4. From the Project list, select the Scratch Workflow project.
Then the Enter Project button, located above the New Project list, will highlight.
5. Click-on Enter Project.
In place of a timeline, SCRATCH has what it calls a 'CONstruct'. It's sufficiently different from the behavior of a typical timeline that the name is quite fitting. One of the fundamental parts of the DIT job is organization. We've done it with folders on the hard drive, and we'll continue here. In the upper left part of the CONstruct screen are the organizational tools for projects (CONstructs). Think of a 'Group' as the day you are working: if this is a three-day production, then the first day (Group) would be 'Day 1'. Within that day we will have multiple camera offloads, maybe several from more than one camera. A CONstruct can be used for each camera. It might seem overkill to create so many CONstructs, but in the end you will see the logic behind compartmentalization as we progress through the process. That's what counts on-set.

Fig. 2 Project Group and Construct Listing.

User Note: This is a good pre-production process to go through. When you know how many cameras they are going to use on a typical shot or scene, you can pre-build several days' worth of Groups and Constructs for several magazines from each camera. You then have a head start on the job.

In the upper left of the screen in Figure 2, you will see the basic Group and the default Construct 1. We need to change the name of both. Directly below the Group and Construct 1 list,
6. Click-on the Edit button. A little icon will appear next to the Group and Construct 1 fields.

☞ Naming The Group and The Construct
7. Click-on the word Group to select the field.
8. The word Group should highlight, and you can now enter Day-01. (Remember the discussion about numbers and how computers order things. The '01' instead of just '1' is important; the sketch below shows why.)
9. Click-on the Construct 1 field.
10. The word Construct 1 will highlight, and you can now enter A001. This indicates Camera A and magazine 1 from that camera. We need to create another Construct for a second camera magazine.
11. Click-on the ADD button to the left of the Edit button. A new Construct field is opened. We're still in the Edit mode, so we can directly rename the new Construct.
12. The word Construct is active, so enter A002. We want to bring the camera footage into the A001 Construct first, so
13. Click-on the A001 Construct to make it active.

NOTE: Make sure you click on the Edit button to close out the editing function. Otherwise you can cause issues with Construct names, because when in Edit mode they are all active for change.

Fig. 3 Project Media and Tools buttons.
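Here is why the zero-padding matters: computers sort names character by character, so without the leading zero, 'Day-10' lands before 'Day-2'. A quick illustrative sketch:

    days = ["Day-1", "Day-2", "Day-10"]
    print(sorted(days))    # ['Day-1', 'Day-10', 'Day-2'] -- Day-10 jumps ahead of Day-2

    padded = ["Day-01", "Day-02", "Day-10"]
    print(sorted(padded))  # ['Day-01', 'Day-02', 'Day-10'] -- order matches the shoot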
Importing Media
Media is imported using the Load Clip(s).. button (Fig. 3), located to the right of the Media button. You will be able to import multiple clips or just one at a time. We'll import them as individual clips. There are only two clips for this exercise, but you saw in the video a way of importing lots of clips in one sitting, by configuring the system to look for a specific file type and to look deep into folder structures for those files.
1. Click-on the Load Clips.. button. The upper part of the screen will now display the file locator window. (Fig. 4)

Fig. 4 File Locator Window.

You will drill down to the RED Assets folder: Your hard drive > Course Assets > RED Assets. The two .mov files are what we need to import. Although we can bring in both at once, let's do one at a time so you can see how the program handles importing. It's a bit different than most.
2. Click-on the file A001-C025-1028M6.mov.
3. Click-on the Open button in the lower part of the window. The clip will now be attached to your cursor.
4. Move the clip over the first column in the Construct and click. The clip will drop into the first column in the Construct. We need to repeat the process for the next clip.
5. Click-on Load Clip(s)..
6. Click-on the next clip, A001-C014-102873.mov.
7. Click-on the Open button in the lower part of the window.
8. Move the clip over the second column in the Construct and click.
You should have two clips loaded, one in each column (Fig. 5).

Fig. 5 Construct window with two clips in two columns.
Fit Width
This is a really important setting for your clips. As mentioned earlier, Fit Width assures you are seeing the entire frame within your project settings. Note that some DITs do this process as soon as the media is imported; others wait until they are doing the one-light. It's your choice. For now, we'll do it at this point in the process. Here's how it's done.
1. Move to the Matrix part of the program by pressing F5 on the keyboard.
2. On the left side of the lower third of the screen, select the Config button. (Fig. 6)

Fig. 6 Config button within the Matrix window.

About mid-way across this lower section of the screen is Framing, and this controls the Fit Width selections. There is an order to this process that must be adhered to. The Fit Width tool, much like other tools, can apply its changes to just the clip the playhead is on, or to all the clips in the timeline. You can also use the Paste Forward function, but we will get to that later.

Fig. 7 All and Fit Width selected.

To apply the Fit Width function to all the clips in the timeline, do the following:
3. Click-on the ALL button. (Fig. 7) It will highlight in the upper right corner.
4. Click-on the field just to the left of the ALL button and select Fit Width from the menu. (Fig. 7)
You won't see any changes to these clips in the timeline, because they match the project settings already. However, if you have files from a GoPro or DSLR, there will be a noticeable jump in all the clips as the program adjusts the raster to match the project settings.
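What Fit Width is doing under the hood is simple scaling math: the clip is scaled so its width matches the project raster, and any height difference becomes letterboxing (or a slight crop). A hedged sketch of that arithmetic, with example rasters assumed:

    def fit_width(clip_w, clip_h, proj_w=1920, proj_h=1080):
        # Scale so the clip's width fills the project raster exactly.
        scale = proj_w / clip_w
        return scale, clip_h * scale

    print(fit_width(3840, 2160))  # (0.5, 1080.0)     -- UHD fills 1080p exactly
    print(fit_width(4096, 2160))  # (0.46875, 1012.5) -- 4K DCI leaves a slight letterbox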
Syncing Audio and Video
In a 'dual-system' recording scenario, audio and video are recorded to separate devices. Both should have the same timecode, which makes marrying them together fairly easy. If they are not timecode-matched, there is still a simple process to link the two.
Whether we have one or 100 clips in the Construct, we can attempt to link all video clips to their respective timecode-matched audio clips with a few mouse clicks.
1. Shift-click-on both clips to select them. They should highlight around the edges of the thumbnail images. (The faster and much better method for selecting all the clips in the timeline is the shortcut CTRL + a.)
2. Click-on the Media Browser button (Media Browser..) located in the right side of the window (Fig. 8). This will open the Media Browser (Fig. 9), where we can link the audio files.

Fig. 8 Media Browser.. button.

Fig. 9 Media Browser Window.

In the top part of the window are tabs.
3. Click-on the Audio tab.
In the lower part of the window is the Find Audio button (Fig. 10).
4. Click-on Find Audio..

Fig. 10 Find Audio button.

Drill down into the RED Assets folder, then into the Audio folder. There are four clips there. The system will try to match any clips with the same timecode as the video clips (Fig. 11).

Fig. 11 Located matched audio files. Files are not linked at this point.

5. Click-on the Select button. The matching clips will appear lined up with their video clips. They will be bright yellow, indicating they are what the program feels is the best match, but they are NOT linked to the video files yet. To link the files,
6. Click-on the Close button in the lower right part of the window. The clips will change color to match the video files' color in the listing.
The clip thumbnails in the Construct now have a gold icon in their top bar, indicating that audio has been linked.

User Note: SCRATCH will import RED's .r3d files without transcoding. The files we're using for this exercise have been transcoded for ease of use.
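Conceptually, the matching SCRATCH performs here is a lookup of audio start timecodes against video start timecodes. A simplified, purely illustrative sketch follows; SCRATCH's actual matching logic is internal to the program (a real matcher compares overlapping timecode ranges, not just exact starts), and the file names and timecodes below are made up:

    videos = {"A001-C014": "10:28:17:03", "A001-C025": "10:41:02:11"}
    audios = {"S108-T1.wav": "10:28:17:03", "S108-T2.wav": "10:31:55:19",
              "S109-T1.wav": "10:41:02:11", "S109-T2.wav": "10:44:30:02"}

    # Invert the audio table so each start timecode points at its file,
    # then pair every video clip with the audio that starts on the same TC.
    by_tc = {tc: name for name, tc in audios.items()}
    matches = {clip: by_tc.get(tc) for clip, tc in videos.items()}
    print(matches)  # {'A001-C014': 'S108-T1.wav', 'A001-C025': 'S109-T1.wav'}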
Confirming Audio Link and Checking Sync
SCRATCH has an interesting and fast workflow from this point on. Remember, it's all about speed on-set. We can go into the Player part of the program to make a quick check of the audio sync. If it's not in sync, we can quickly move to the Editor functions and adjust the alignment of the audio and video clips. To move into the Player part of the program, find the icons right below the columns of the Construct (Fig. 12). They look like the traditional play, rewind, and fast-forward icons found in most editing programs.

Fig. 12 Editor Timeline Navigation Tool Bar.

1. Click-on the Play icon ▶. The screen will change to the Editor part of the program. The EDITor window will open with the first clip displayed. Look over the interface for a few minutes. There is a timeline, of sorts, across the bottom. To play the clips, you must click on the right arrow '▶' under the clip in the View Port. SCRATCH does not use the typical 'spacebar = play' modality of other programs; however, you can use the RETURN/Enter key to start and stop playback.
View Port = video preview window
Return = Play, stop playing
You can also click and drag the playhead (refer to Figure 12 where the cursor is located). Click on the playhead and drag the cursor back and forth on the timeline for rapid playback.
2. Play back the video clips to check if the video matches the audio.

Note: If you don't hear audio, look at the speaker icon on the playback tool bar. If there are no sound lines emanating from the speaker, the sound is turned off. Simply click on the speaker to turn the playback sound on.

NOTE: What if the speaker does not highlight? Somewhere in the computer you are working on, there is a setting that controls how audio is input and output. SCRATCH has the same feature, and here's how to check that it's set correctly. Referring to Figure 16, there's a Settings tab. Click it to open the settings window. There's a field labeled 'Audio Device'. The field right below it displays where, or how, SCRATCH is sending out the audio from the computer. It looks at your computer settings and offers the same choices, but it might not be set to the one you have selected in your computer preferences. Drop this menu down and select the output that goes to the headphone jack of your computer. For an Apple Mac that would be 'Apple Inc. - Built-In Output'. This should solve your problem with not hearing audio.

Note: There are several windows where the actions you take will NOT be applied until you click on Apply Changes. Just be aware that if you leave a window without clicking Apply Changes, your changes will not be applied.

The sync should be fine for these clips, because care was taken on-set to assure the camera and audio recorder were recording the same timecode. But let's assume they were out of alignment. The Editor part of the program is where we can fine-tune the alignment. To move into the Editor part of the program,
3. Right-Click-on to the right or left of the View Port. The View Port is the window displaying the clip.

Fig. 13 Floating Navigation window.
A navigation window will open (the shortcut key to the Editor is F4). (Fig. 13) It might already have been open when you entered the Editor window; SCRATCH remembers your preferences from project to project, so if you left it open, it will be open when you return. If you want to keep it open over most windows, click on the KEEP button in the lower left of the floating window.
4. Play back '▶' the video clips and check if the video matches the audio.
Notice that there aren't any audio tracks visible in the timeline located in the lower part of the screen. On the lower-left side of the screen are several function buttons (Fig. 14).
5. Click-on the Audio button.
Fig. 14 Audio reveal tool.

The waveform opens near the video clip. This is confirmation that your linking of the clips worked. The next check is to confirm that the slate clap falls at the same point in the video clip and the audio clip. As you saw from the videos, SCRATCH uses 'swipe' gestures. If you swipe the mouse up (no mouse click, just move the cursor up in a short move that goes off the top of the screen), a menu will appear. This is where we open the 16-channel audio mixer and other features we will visit later.
6. Click-on the Mixer button at the top (Fig. 15) and the Audio Mixer detail window will open (Fig. 16).

Fig. 15 Top-Swipe Tool Bar.

Fig. 16 Audio Mixer Detail display.

The Audio Mixer is multi-functional. Across the bottom of the display are numbered buttons which represent the channels. You can click on any active channel and listen to just that channel, or select all the available/recorded channels at the same time.
7. Click-on the Detail tab located on the top part of the Mixer window.
Detail = Waveform display
This will display the waveform at the point where the playhead is sitting on the timeline. RESIST the temptation to drag the playhead around on the audio waveform; that is reserved for adjusting the audio position relative to the video. You can tweak the audio here to align out-of-sync relationships very easily. Use the cursor on the video timeline to scrub the first clip to the point where the slate is just closing. If you need to move frame-by-frame, use the left and right arrow keys on the keyboard. Look at the audio waveform. There should be a spike in the audio when the slate closes. Are they in the same place?

TECH INFO: The increments of adjustment within the Audio Mixer Waveform are NOT in frames. They are in 'samples', a much finer adjustment. If you need to move by frames, use the left or right arrow keys to move a full frame back or forward. If you use the cursor and mouse to move the audio waveform, it moves in very small, precise increments. This allows for high accuracy when needed.
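To put 'samples' in perspective, assuming standard 48 kHz production audio, one frame of 23.976 fps picture spans roughly two thousand audio samples. The arithmetic:

    sample_rate = 48000        # Hz, standard production audio
    frame_rate = 24000 / 1001  # 23.976 fps

    print(round(sample_rate / frame_rate))  # 2002 samples inside one frame of picture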
TECH TRIVIA: The author of SCRATCH felt that his love of both what he was doing with the software and his favorite movie should be represented in the program. The name SCRATCH came from his frustration with other, similar programs: he decided to write something from 'scratch'. It's no more complicated than that. The color scheme, and the specific area where you do color correction within the program, came from his favorite movie, The Matrix. The 'Construct' is his vision of the base from which all work is done. In editing software it would be the timeline, but that's not what this program does; the Construct is where you construct your project for further work. There are some other interesting, albeit weird, associations with some of the names. Typically, waveform monitors and vectorscopes are called 'scopes'. He chose to buck the industry standard and call them 'Statistics'. Technically they do represent visual statistics about the image, but the rest of the world calls them scopes.
The audio waveform from the clip, shown in Figure 16, has the playhead position indicator just slightly before the slate clap.
8. Click and drag the waveform to the right, so the position indicator is located just at the start of the sharp audio spike.
9. Click-on the playhead in the video timeline and drag it back to before the slate claps.
10. Click-on the right arrow (play button) below the mixer, or use the RETURN key, and listen to the clip.
Do the slate and the sound match? Let it play until the actors talk. Do their lips match the audio? If so, you're done. If not, scrub back on the video clip to the slate clap and move the cursor on the mixer audio waveform just a bit one way or the other to fix the out-of-sync problem. Let's do the same to the next clip in the timeline.
11. Drag the playhead on the timeline to the next clip and find the slate clap in the shot. The shortcut keystrokes are CMND + right arrow key to move to the next clip. Look at the waveform. Does it line up? If the indicator line is anywhere inside of the audio spike caused by the slate, it's close enough. If they don't match up, do the same as you did with the last clip and drag the mixer audio waveform so the spike caused by the slate clap sits closer to the peak of the sound wave.
We now have clips sound-synced and ready for export. This is the same process you would perform day-in and day-out on-set with double-system sound from any digital camera. The only difference is when the timecodes don't match, or there aren't timecodes on the files. You can use this same tool to match the slate clap with the audio sound wave. If there isn't a slate clap, you then need to find a spoken word that you can match with the actor's lips. Maybe there is a door closing, or something being set down that makes a distinctive sound and is visible in the frame. All these are tricks for matching 'wild', non-synced audio with video shots.

Submission for grading
Follow your instructor's instructions for submission of this assignment.
Exercise 12: Syncing Non-Timecode Matched Files in SCRATCH
⌛ This exercise will differ from the others you have done: it is conceptual. From the past exercises you now know the tools and the workflow. The change here is the method by which you select an audio file. Unlike the previous section, where you linked timecode-matched files and SCRATCH found the files for you, this is a totally manual method.
This is where things get a bit weird. You have to play the timeline and listen for the slate clap, and make a mental note of where it falls in the audio waveform. It is hoped that there isn't a long delay between the time the audio rolls and the video rolls. As a DIT, you can offer some politically correct feedback to the crew, making them more sensitive to the effect a long gap has down the road.
The process begins the same:
- Open SCRATCH.
- Set up a CONstruct.
- Import the camera files.
- Use the Media Browser to locate the audio files folder. This is where it differs from the automatic relinking: without timecode, you must select a single clip on the Construct instead of selecting all clips.
- Then go into the Media Browser and locate the folder for the audio files.
- Now select the audio file for that shot. This is where proper file naming on-set really helps. The audio technician should have named each file with scene and take. Typically this looks something like S108-T1 for scene 108, take 1. It should match the information on the camera slate.
- After you select the audio file that matches the video file's scene and take, click on APPLY, like you did when the program found the clips automatically.
- Do this linking with each clip on the CONstruct.
Once all files are linked,
- Move into the Edit area of the program.
- Reveal the Audio Mixer and set its display to Detail, revealing the audio waveform. (Remember: to reveal the Audio Mixer, you swipe up to the top of the screen to reveal the menus.)
- Click-on the Mixer button at the top.
Fig. 1 Top-Swipe Tool bar.
- Now click-on the Detail tab located on the top part of the Mixer window (or on the left side, depending on the version of SCRATCH).
Detail = Waveform display
- Play the clip (Return key or the play icon) until you see the slate close.
- Scrub the audio waveform back or forward until you find the slate clap's audio spike. Sometimes this is off by some distance, and it's better to play the clip until you see the sound signature of the slate. This will give you a better idea of where the sound is in reference to the visual slate clap.
- Once the slate clap point and the corresponding audio point are aligned, you're done with that clip.
You simply move down the clips, one by one, and bring the two together. Laborious? You bet, but that's the nature of the process. I check each clip even if it was auto-aligned with timecode. Fundamentally, both should be synchronized if the timecode is the same. In reality, camera clocks drift, and as the day progresses the alignment can be off by as much as a half-second, which is 12 frames at 24 fps. We notice a lack of lip synchronization when the timing is off by just 3 frames.

TECH INFO: Just a reminder: the increments of adjustment within the Audio Mixer Waveform are NOT in frames. They are in 'samples', a much finer adjustment. If you need to move by frames, use the left and right arrow keys to move a full frame back or forward. If you use the cursor and mouse to move the audio waveform, it moves in very small, precise increments. This allows for high accuracy when needed.

The keys to a non-timecode production working efficiently in DIT and post are:
- Rolling audio and camera as close as possible to each other.
- Naming the audio files properly, with direct reference to the slate (a parsing sketch follows below).
- Making sure the slate is clearly visible in frame.
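Since those scene/take names are the only link between picture and sound on a non-timecode show, it's worth seeing how predictable naming pays off. A hypothetical sketch (the S108-T1 pattern follows the convention described above; the regular expression and file names are illustrative, not anything SCRATCH itself runs):

    import re

    # Names like 'S108-T1.wav': scene number, then take number.
    pattern = re.compile(r"S(\d+)-T(\d+)", re.IGNORECASE)

    for name in ["S108-T1.wav", "S108-T2.wav", "MVI_0042.wav"]:
        m = pattern.search(name)
        if m:
            print(f"{name}: scene {m.group(1)}, take {m.group(2)}")
        else:
            print(f"{name}: does not follow the slate naming convention")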
Exercise 13: SCRATCH One-Light Workflow
In this exercise we will go over the following:
• Apply one-light correction using SCRATCH
• Work with the controls in the color correction tool and the scopes
When it comes to one-light work, SCRATCH is a very powerful tool on-set. It can create, apply and manage LUTs with little effort. The ability to recall those LUTs for repurposing is unique, offering lots of flexibility throughout the process.
Exercise goals:
• Use the controls to correctly accomplish a one-light
• Output the files per the production requirements
We’ll open the same project we used to sync audio and video files in Exercises 11 and 12.
This assignment will take approximately 30 minutes. You will need:
• Your course asset files
• Your external hard drive, and
• the SCRATCH software.
1. Make sure your drive is connected to the computer.
2. Open the SCRATCH program. If the program does not find your project, here's how to relink everything.
3. Click-on the System Settings button.
NOTE: It is assumed that you either have the last project open or will open it before starting this exercise.
Fig. 1 User and Project directory settings.
Set the following directory paths (refer to Figure 1):
• User Folder: your drive/Course Assets/Scratch/
• Project Folder: your drive/Course Assets/Scratch/
• Share Folder: your drive/Course Assets/Scratch/
4. Click-on the Project Settings button. Set the following directory paths (refer to Figure 2):
• Media and Render Directories: your drive/Course Assets/Scratch
NOTE: The screen shots you see from the SCRATCH program, are from the full SCRATCH v8.3 software.
Fig. 2 Media and Render pathway settings
As an aside, SCRATCH is also touch screen compliant. It is quite fast to abandon the mouse and interact with the screen, much like a tablet device.
When you click on the Sessions button, you should see the 'Scratch Workflow' project.
5. Click-on Enter Project. We're going to color correct the RED footage for this exercise.
6. Click-on the A001 Construct selection in the left Construct column.

One-Light Color Correction
The color correction tools are found in the area called the Matrix. If you are in the Edit window, right-click on the upper part of the screen and the mode navigation menu will appear. Select Matrix (Fig. 3), or press the F5 key, which is the shortcut.
Fig. 3 SCRATCH Mode selection Menu
Fig. 4 Color tool selection.
We do not have time to go through each of the buttons shown in Figure 4 right now. The one that helps the most for our goal of a fast and efficient one-light is the Color button. If it's not already selected, click-on it now. The bottom of the screen displays the 'wheels' (Fig. 5). This name is a reference to the physical color correction surfaces often used in place of a mouse or keyboard. These surfaces are a combination of programmable buttons, trackballs, and controls that spin around the outside of the trackballs, like wheels. Refer to Figure 6.
They allow the colorist to work quickly and more organically than the mouse and keyboard interface. SCRATCH supports several brands and models.
Fig. 5 Color wheels.

Fig. 6 Euphonix MC Color control surface.

The color correction wheels shown in Figure 5, going from left to right, are:
• Color-A, or input controls. This control affects the color and luminance of the image 'pre' the rest of the controls. It is not typically used by DITs, but it is a real power tool for full-on color correction. What is different about this tool is that the wheel surrounding the color circle actually controls color saturation.
• Lift, Gamma, Gain are the same controls you have experienced in DaVinci.
• Color-B works on the color signal after, or 'post', the previous controls. These would be referred to as 'secondary' color controls. Like the Color-A wheel, the adjustment ring on the outside adjusts color saturation.
• R, located at the bottom right of each control, is used to reset the settings you have dialed in for that tool. Click on the letter and the changes you just made will revert to neutral. (Note: an M is shown here due to a problem with the screen capture software.)
Fig. 7 SCRATCH’s Lift color wheel.
The center of the color control area is called a trackball (Fig. 7) and has a small '+', or crosshairs, in the middle. When you click-and-drag this around, it will shift the image toward the color you drag it to. If you drag towards the red part of the circle, the red in the image will be increased.
The outside of the color control is the actual wheel. When you click on it and move your mouse in a circle, it adjusts the luminance gain of that particular part of the image. In the case of the Lift control, it raises or lowers the black levels: click and circle right, and the black levels will rise.
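Under the hood, these controls are just arithmetic on pixel values. Formulations vary from program to program, so treat the following as a hedged, simplified model of what lift, gamma, and gain do to a normalized 0-1 pixel value, not as SCRATCH's exact math:

    def grade(v, lift=0.0, gamma=1.0, gain=1.0):
        # v is a normalized 0..1 pixel value.
        v = v * gain + lift               # gain scales the signal, lift offsets the blacks
        v = min(max(v, 0.0), 1.0)         # clamp to legal range
        return v ** (1.0 / gamma)         # gamma bends the midtones

    print(grade(0.05, lift=0.05))  # 0.10 -- a deep shadow doubles in brightness
    print(grade(0.90, lift=0.05))  # 0.95 -- a highlight barely moves, relatively

This is why a small Lift move reads so strongly in the shadows while leaving the highlights nearly alone.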
Opening The Scopes
Scopes are somewhat hidden in this program, both in location and in how they're labeled. To access the menu, swipe the mouse up to the top of the screen.
Statistics = Scopes
The second quirk is that the scopes are labeled 'Statistics'. With the top menu displayed,
1. Click-on the word Statistics. (Refer to Figure 8.)

Fig. 8 Statistics button opens scopes.

The scopes should then be displayed. This new window floats, allowing you to drag it around the screen, even onto a second monitor.
2. Click-on the scope traces and drag them where it is more convenient.
There are four scopes that can be displayed, as shown in Figure 9. Your system will most likely open with just the histogram displayed. If you close any one of the displays, the program will remember your last settings as a user preference when you open the program again.
Fig. 9 All four scope displays open.
The scopes displayed, from left to right, are:
• RGB Histogram. This is a typical histogram with the three primary color channels displayed on top of each other, each delineated with its specific color as an outline. Not the friendliest display, but if you just look at it as a typical histogram, it's very valuable for setting luminance levels.
• RGB Parade. SCRATCH labels this button 'W'. This is a familiar scope from other programs you have seen. In this printed representation none of the traces have the color they represent, but they do in the program.
• Vectorscope. Again, this is the same scope you have seen before. SCRATCH labels this button 'S'.
• Curves. This is a tool we have not discussed thus far in the book. Curves control of color and luminance is very powerful, allowing for adjustments focused on narrow ranges within the image. It's beyond the scope of what a DIT would work with, so we'll omit it for now.
In the lower left corner of the Scopes window are buttons allowing you to turn the individual displays off or on:
• H is for Histogram.
• W will turn the RGB Parade on or off.
• S will display the Vectorscope.
• C will display the Curves scope.
• + will widen the Histogram so it's easier to make fine adjustments.
For one-light work we only need two: the Histogram and the RGB Parade. To turn off the other displays,
3. Click-on the S and the C. (Refer to Figure 10.)
Fig. 10 Scope displaying the uncorrected first image.

Making Basic Adjustments
In the first image, where the paramedics are looking frame right, you'll notice they look fairly good as-is if you were to just consider the shot. Looking at the scopes tells a different story: the blacks are a bit high and the whites are already close to being topped out. The first adjustment is always setting the black levels. No matter what program you're using, always adjust the blacks first. In SCRATCH, the black level adjustment is called Lift.
1. Click-on the wheel area that surrounds the Lift control, and move your mouse in a circle to the left (Fig. 11). Circling to the left will reduce the levels; to the right will increase the levels.

Fig. 11 Lift trackball and wheel.

The black levels should move down in the RGB Parade, and the left side of the Histogram trace should expand to the left. Bring the black levels to the point where they just touch the thin line in the lower part of the RGB Parade display. If you go below that line, the blacks are being crushed. The Histogram should display the traces now touching the thin vertical line on the left side of the display.
2. Click-on the wheel area that surrounds the Gain control, and move your mouse in a circle to the right. Keep adjusting until you bring the whites to the upper thin line at the top of the RGB Parade. This is the 100% line. Remember, when you adjust the whites, it pulls the blacks up a bit. Think of there being a rubber band between the two. So once you have the whites set, recheck
the blacks (Lift). Going back and forth between the two will get your image's luminance range in proper order. Some details to notice in the RGB Parade: the top of the left trace, for the color red, is lower than the other traces (Fig. 12). In fact, if you drew a line from the tops of the traces in the red channel to the tops in the blue channel, it would ramp up to the right. This is the opposite of what is normal for proper color rendition of flesh tones.
Fig. 12 First image scopes display with correction applied.

In normal circumstances we would leave the shot alone as it is right now. But if you wanted to fix this, it can be done with the Gain color correction wheel. You can click on the center '+' and drag it up until the red traces move closer to the 100% line. This is a small adjustment, but the flesh tones and the white shirts appear more natural. Now move the cursor to the second clip, the reverse angle for this scene, and make the same or similar adjustments so that the shots match. Also notice that in this shot the blue trace in the RGB Parade touches the bottom line sooner than the red trace. This tells us that there is an unbalanced color condition in the blacks. It's really slight, and not worth fixing at this point; in final color correction, these little corrections will be made. For practice, one-light color correct the second clip of the ambulance scene.

NOTE: There is a really nice function in SCRATCH that allows you to apply the color changes made on the first clip to all the clips down the line. It's referred to as 'Paste Forward'. This might not work well for some clips shot at different locations or times of day, but it could get you in the ballpark faster.
NOTE: It is a good political move to make your one-light corrections on a few clips and let the DoP see them. Let him/her know that what you want is guidance on the one-light look at this point. This is NOT color correction; it is just a quick fix to make the files look presentable for editing. The formal color correction will happen after the edit is done. There are DoPs who want a darker or lighter image than you might have targeted, because that's the 'look' they were ultimately going for. Once you have their input, you can create a LUT and name it 'DP's LUT'. Do NOT apply this LUT to the dailies or the files headed to Editorial. You want the footage forwarded to edit as close to a 'linear' Rec. 709 LUT as possible. The additional corrections by the DoP (we'll call them a 'look') should be forwarded as a separate 3D LUT file. There is a deeper explanation of this concept in the chapter 'Naked Workflow'.
Submission for grading
Submit this assignment per your instructor's instructions.
Exercise 14: SCRATCH LUTs
In this exercise we will go over the following:
• Creating and saving a LUT in SCRATCH
Exercise goals:
• Using the current color settings, save them as a LUT
This assignment will take approximately 15 minutes. You will need:
• Your course media files
• Your external hard drive, and
• the SCRATCH software.
NOTE: It is assumed that you either have the last project open or will open it before starting this exercise.
Creating a color preset, known as a LUT, is both easy and a time-saver later on. As you read earlier, LUTs can be used in various places in the production workflow. LUTs can be applied to the output of some cameras. Some newer high-end reference monitors will utilize a LUT to help with accurate image display if the camera is not capable of handling a LUT on its output. A LUT can be passed along to the editor for reference and applied if desired. As a DIT, the best use of a LUT is to save time by NOT making the same adjustments to each clip in a similar scene. But a LUT preset is not a magic bullet for fixing all shots throughout the entire program. Color and contrast will vary significantly from indoor shots to night shots to mid-day shots. The same LUT can't be applied to a different camera (e.g., DSLR and RED mixed) shooting the same scene; the 'looks' of the cameras are far too different. But you can quickly create a LUT, with only two or three extra mouse clicks, that can then be used over and over. REMEMBER, passing color clip-to-clip is not a wise choice when using LUTs; luminance-only LUTs are the safest bet. Cameras like the ARRI ALEXA have preset LUTs which are available for pre-load into SCRATCH and other programs. These LUTs are ISO specific: you choose the LUT based on the exposure ISO of the camera.

Opening The Project
We'll open the same project we used to sync audio and video files in Exercises 11 and 12.
1. Make sure your drive is connected to the computer.
2. Open the SCRATCH program. If the program does not find your project, here's how to relink everything.
3. Click-on the System Settings button. Set the following directory paths (refer to Figure 1):
• User Folder: your drive/Course Assets/Scratch/
• Project Folder: your drive/Course Assets/Scratch/
• Share Folder: your drive/Course Assets/Scratch
Fig. 1 User and Project directory settings.
Fig. 3 Scratch Mode selection Menu
4. Click-on the Project Settings button. Set the following directory paths (refer to Figure 2):
• Media and Render Directories: your drive/Course Assets/Scratch
Fig. 4 Color tool selection.
Fig. 2 Media and Render pathway settings.

When you click on the Sessions button, you should see the 'Scratch Workflow' project and your user name.
5. Click-on Enter Project. We're going to continue working with the RED footage for this exercise.

Getting Into The Matrix
The color correction tools live in the area called the Matrix. This is where LUTs and other color presets can be saved.
1. Press the F5 key, which is the shortcut for the Matrix. With the color corrected clips on the timeline,
2. Click-on the first clip in the timeline, which will place the playhead on that clip. The color information is now loaded for that clip. We just need to save it to a file.

Saving The LUT
In the lower left of the screen (refer to Figure 4) there is a Save.. button.
1. Click-on the Save button. The file manager window opens for you to select the type of color correction data file and where you will save it.
2. Mouse-click on the default menu selection, Color Settings (*.ccr). There are several file types listed in this menu. For this exercise we want to use the 3D LUT (*.3dl) file type.
3. Click-on the 3D LUT (*.3dl) from the menu. (Refer to Figure 5.)
You now need to direct where the program saves the file. It is best to have a folder on your drive named LUTS. The assets included with this book, which were copied to your hard drive at the outset, have this folder already created for you.
Fig. 5 Color LUT file types menu.
Fig. 6 LUT file type save window.
In Figure 6 the file name defaults to the name of the file it is referencing, in this case A001_c014_102873.3dl. You need to give it a more meaningful name.
4. Click-on the current file name and enter 47_min_amb_scene.
You also need to tell it where to save the file. In the upper part of the screen is the file navigator that you have seen before.
5. Navigate to the LUTs folder inside your hard drive>DGM2340 folder.
6. Click-on the Create button. The LUT is now saved in an easy-to-find place, with a name that makes sense.

Applying a Saved LUT To a New Clip
This LUT can now be applied to a single clip or groups of clips, or exported for another use.

Fig. 7 LUT application tool.

1. Click-on the LUT button located in the lower left corner of the screen. Refer to Figure 7.
2. Click-on the Load.. button to the right. The same file navigation window will open. You want to point the program to the folder you saved the LUT in: YourHardDrive>DGM2340>LUTs
3. Click-on the 47_min_amb_scene.3dl file.
4. Click-on the OPEN button. The LUT is opened and applied to the shot that is under the playhead on the timeline. This shot is now corrected to match the other, similar shot.
Some things to think about at this point: if the lighting is different or the camera exposure is not the same, this LUT will not give the results you might desire. At this point, however, the heavy lifting has been done. You can easily click on the Color button and do minor tweaks to bring it in line with the other clip. The other option, which is used a great deal in the industry, is to simply select a preset Rec. 709 LUT and apply it to everything. This is a look that is 'sort of' expected from on-set, quick color correction. However, time permitting, you should create your own LUT. If the DoP is allowed to provide input, then you will be his friend and he'll trust your judgment.

TECH NOTE: There is a good explanation of what the various color information file types are within the SCRATCH User's Manual, along with a few more good links to read. It is important that you know this stuff inside and out. For now, here's the short version:
• .ccr is specific to SCRATCH. If you're only going to work within SCRATCH and do NOT need to pass anything along to another system, this is the format to use. Avoid it otherwise.
• .lut is a 1D LUT. A LUT is not the complete color correction information but an approximation of it. The 1D LUT is the lowest form of this information, but the most widely accepted right now.
• A 3D LUT has many of the same limitations as a 1D LUT, but it carries more information. It will be a more accurate selection ONLY if the other systems can support it. You will always want to ask.
• CDL is a Color Decision List. This is an industry standard. It contains lots of information and, if supported, is a great way to move that information down the pipeline.

The next step is to export the synced and one-light corrected files in the formats required by editing and for the dailies.

LUTs and The AVID Workflow
The handling of LUTs in Media Composer (version 7 or newer) is, by any standard, genius. If you create editorial output headed for an AVID Media Composer workspace, you will be loved by the editor. Here's the overview:
• On-set, apply your quick LUT or a Rec. 709 LUT to the dailies and the files destined for Editorial.
• Save any other LUTs you create under the direction of the DoP or Director as a separate 3D LUT. DO NOT burn this Look/LUT into the files to be sent to edit.
• In the edit bay, the editor will get the Rec. 709 corrected files. They can add the DoP Look/LUT if they like, or turn it off, with a mouse-click.
Here's what this looks like graphically:
DoP/Director's Look/LUT file ➠ to edit as a separate file
Clip w/Rec. 709 LUT applied ➠ burned-in to flipped clips, on-set
Audio file attached to clip
In post using AVID, the Look/LUT can be applied to any clips in the bin. Because it's now 'layered', AVID has a checkbox that can turn it on or off. The next stroke of genius in the Media Composer workflow revolves around later changes. If the DoP or Director wants to change that Look/LUT, or create a different look for different parts of the project, it's as simple as modifying the LUT and sending it to edit. When the editor drags it into the LUT folder, all instances of that LUT are updated immediately. No other editing software offers this capability. Many editors using Media Composer don't even know this power exists within their software.
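To make the 1D LUT and CDL ideas from the tech note concrete, here is a hedged, illustrative sketch of the underlying math. The gamma value, table size, and CDL numbers are made-up examples; real files carry their own values. The ASC CDL transfer function shown (output = (input x slope + offset) ^ power, per channel) is the published industry formula:

    # A tiny 1D LUT: one output value per input code (per channel in a real file).
    SIZE = 1024
    lut = [(i / (SIZE - 1)) ** (1 / 2.2) for i in range(SIZE)]  # example gamma curve

    def apply_1d_lut(v):
        return lut[round(v * (SIZE - 1))]   # nearest-entry lookup, 0..1 input

    # The ASC CDL slope/offset/power transform, applied per channel:
    def apply_cdl(v, slope=1.1, offset=-0.02, power=0.95):
        return max(v * slope + offset, 0.0) ** power

    print(apply_1d_lut(0.18))  # ~0.46 -- mid-gray lifted by the example curve
    print(apply_cdl(0.18))     # ~0.19

The lookup-table nature is the reason a LUT is only an approximation: it stores sampled outputs, not the full correction math that produced them.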
Exercise 15: SCRATCH Output for Edit and Dailies
In this exercise we will go over the following:
• Work with the tools within SCRATCH that allow for multiple file exports
Exercise goals:
• Output the files per the production requirements
This assignment will take approximately 45 minutes, plus render times. You will need:
• Your course media files
• Your external hard drive, and
• the SCRATCH software.
The standout ability of SCRATCH is the way it handles output. It's very easy to understand, both in process and interface. The beauty of the highly crafted computer code that underlies this program really shines in this function. The program is written in such a way that it will maximize any and all processors AND allow you to keep working within the program while it renders in the background, with almost no reduction in responsiveness. This is HUGE on-set when you are facing deadlines.

Setting Up Folders For Exported Files
There is a way within SCRATCH to auto-build folders during exporting. For this exercise, we'll build the folders manually first.
1. Make sure your drive is connected to the computer.
2. Navigate to the Completed Assignments folder on your hard drive.
3. Create a folder named Scratch One-lights.
☞ If you're continuing from the last exercise, skip to Setting Up The Output.
Fig. 1 User and Project directory settings.

Opening a Past Project In SCRATCH
We'll open the same project we used to sync audio and video files in Exercise 11.
4. Open the SCRATCH program. If the program does not find your project, here's how to relink everything.
5. Click-on the System Settings button.
Set the following directory paths:
• User Directory: your drive/Course Assets/Scratch/Users
• Project Directory: your drive/Course Assets/Scratch/Project
6. Click-on the Project Settings button. Set the following directory paths:
• Media and Render Directories: your drive/Course Assets/Scratch
When you click on the Sessions button, you should see the 'Scratch Workflow' project and your user name.
7. Click-on Enter Project.

Fig. 2 Media and Render pathway settings.

☞ Setting Up The Output
The two green screen clips with the men driving in the ambulance should be in the Construct. The next steps are to move to the parts of the program where we can export the files per the production requirements. The part of SCRATCH that is the most confusing for new users is its output workflow. If diagrammed, it seems very logical. It is. But the implementation might be confusing once you're in the interface. In addition, many of the 'gotchas' are not defined in the Scratch User Guide.

Physically within the program you move from:
Import files into Construct ➠ Editor
Editor ➠ Process (Output)
Within Process, add nodes and Queue those nodes for processing.
Graphing the Process/Output workflow looks something like this:
Originals ➠ Audio sync ➠ Color Correction (LUTs) ➠ Plug-ins (burn-ins) ➠ Plug-ins (AVID mxf and ProRes) or other output nodes

Simple enough. But in actuality you're bouncing between several screens to set up each Node, and the Nodes must be in a certain order. This proper order is not clearly defined in the manuals or most online tutorials, including the selected ones on the Assimilate web site. It would be worth your time to watch THIS video: http://www.assimilateinc.com/tutorial/scratchlab/scratch-tutorial-output-node
Omitted from the above tutorial:
• Adding burn-ins like file name and timecode to the output
• Turning on the audio tracks for the output files
• Setting the output file name
• Setting up the output file paths
These are easy to add, and we'll go through the process step by step. It is important to get this process figured out. You will do it daily on set, with dozens of files distributed to different hard drives, in lots of folders. The real power-key to this process is the ability to save the output Node configuration as a template, as shown in the video.

If you're at the Construct,
1. Click-on the Outputs button located just below the Construct columns, on the left side of the screen. (Fig. 3)

Fig. 3 Output selection button.

The screen will change to the output mode. This is a very graphic interface where you quite literally build nodes, one after another from the first node, to create the processing pathways. It's very easy to follow visually. The first box, or node, attached to the center of the screen represents the clips/file(s) you have just been working on. It might be one clip or 100 clips, but it will be shown as one thumbnail in this view. We need to add Nodes to process the file(s) for exporting. The order they are in is important. The order should be as follows:
• Master file ——> Burn-ins ——> codec ——> output
Burn-ins are the overlaid graphics or text that give the viewer some information about the clip. Typically this includes the file name, the timecode of the file, and other requested information. These burn-ins should ONLY be applied to the dailies, so producers and directors can make notes about a shot or scene that will link back to their original files.
The Codec Node is the transcoding process that moves the camera files to something more friendly for playback or editing. If it's for dailies that will be viewed on an iPad or mobile device, then the H.264 codec within the .mov file format is typical. For editing, it will depend on the editing software being used: AVID will use DNxHD36; Premiere and Final Cut Pro or X will use ProRes LT. The post production supervisor or line editor will dictate which codec and data rate they prefer.

Adding a Node
The first workflow pathway we want to create is the output for editing. We will assume it is for AVID. We'll add one node that will contain all the setup information to process the clips into individual DNxHD36 files. These will be ready-to-import .mxf files, which are native to AVID. There will also be the .aaf audio files and the .ale metadata files required for a smooth AVID workflow. In the lower left of the screen,
1. Click-on the Add Output drop-down menu. (Fig. 4) A menu will open (Fig. 5) with several output options.
2. Click-on the MXF AMT Export.
It should be noted at this point that SCRATCH will create more than just the DNxHD file. It will create the full AVID media package for direct drag-and-drop into AVID's media-managed bin structure. An extremely efficient workflow. Notice in Figure 6 that a Node labeled MXF AMT Export has been added to the right of the master node. The next step is to set the raster size and data rate (amount of compression applied) for the exported .mxf files. In the lower part of the screen are the Output Settings, General, and Post Render tabs. The information displayed in the Output Settings tells us what will be applied to the exported clips. (Fig. 7)
Fig. 4 Add single Output selector button.

What is not displayed is the type (codec) and amount of compression. The Format Settings tab is where we can change these settings.
3. Click-on the Format Settings tab. (Fig. 8)
Fig. 5 Add Output format selection menu.
Fig. 6 Primary Node with AVID MXF Node attached.

Fig. 7 General settings tab.

Fig. 8 Format Settings tab information.

There are three settings that can be selected here:
• Codec
• Audio channels to be exported
• Range. Think of this as the gamut of the image. We will leave this alone for now.
What we're after is the codec data rate and raster size for editorial to use. They will want something with good (not perfect) quality that's easy on the computer. The DNxHD codec (like its cousin ProRes) is high quality and not so demanding on the CPU, allowing for fast editing. However, there are very high quality versions of this codec that can be used for finishing and final output if needed. The typical workflow is for the editor to work with the low-resolution files while cutting the show, then re-link to the camera original files for color correction and other finishing touches. We will want to apply the DNxHD 36 1080p/23.976.
4. Click-on the drop-down menu to the right of the word Codec. Refer to Figure 8.
5. Click-on the 1080p/23.976 DNxHD 36 selection. Refer to Figure 9. That raster, compression, and frame rate will be applied to the output file. A word of caution here: there are a LOT of selections in this list. Be extra careful that you select the right one!

Fig. 9 AVID MXF Codec and Raster settings menu.

The audio channels need to be selected for export at this point. The clips you are using do NOT have audio attached; we did, however, link the recorded audio tracks, so they now do have audio. In most cases that would be Ch. 1 and Ch. 2, but there are cases where multiple mics are used and kept on separate channels for mixing later. For now, select audio channels 1 and 2. (Fig. 10)

Fig. 10 Audio channels 1 and 2 selected for output.

The next step is to set the destination path for the rendered files. By default, the render pathway is set to the Render folder created by the program when it was first installed. This is not what we want. Unfortunately, SCRATCH calls this pathway Media. To the right of the Media pathway shown in Figure 11 is the Browse.. button.
6. Click-on the Output settings tab again. (Fig. 11)
7. Click-on the Browse.. button. The typical file and folder navigation window will open. We want to send these files to very specific places.

Fig. 11 Output settings tab, Media pathway.

You will be creating an Output for each day's cards (the circled or good takes only). Those rendered/transcoded files will go into their specific folders based on the shoot day and the camera. For this exercise we'll put this file into the Day-1 and Cam-A folders. The folder structure should look like this:

- Project or program name
  - Date or day of production
    - Cam-A
      - A001
      - A002
      - A003
      - etc.
    - Cam-B
      - B001
      - B002
      - B003
      - etc.
    - Audio
      - Day-1
      - Day-2
      - etc.
    - Editorial
    - Dailies
We want to put the files in this folder pathway:
Your Hard_drive: 47_min>Day-1>Editorial>Cam-A
It is good practice to create a new Construct for each day and camera card. Not only does this compartmentalization make it easier to find shots and takes later if need be, but it also makes it easier to export just what you want to a specific folder.
8. Click-on the Cam-A folder, then the Select button in the lower part of the file navigation window. The pathway is now set for this Construct Node.
At this point, we're done building a rendering pipeline to create AVID-ready files from all the pre-processed clips. The next export is for dailies. This will include a node that adds burn-in information and another node that transcodes the files to the H.264 codec within the .mov file format.

Burn-in Node Creation
Before we get started, there's some 'splainin' to do, Lucy', if you remember the old I Love Lucy TV show. SCRATCH, like DaVinci Resolve, has a very visual node-based interface in some areas, but making it work properly is less than straightforward. As mentioned before, the proper order of this process is critical. Figure 6, a few pages back, shows the primary node created by the clips in the timeline, which are selected from the CONstruct. All output nodes attach to this primary node, and they can quite literally fan out like tree branches, creating separate rendering paths for each output. Here's the trick to the SCRATCH logic when it comes to burn-ins: the burn-in node has to come before the compression node. For example, say you need an H.264 QuickTime render for dailies, with burn-ins of the timecode and file/clip name. You will put a new branch off the primary node with the burn-in first, then the H.264 compression node. (A sketch of why this ordering matters follows below.)
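As a purely conceptual illustration (none of this is SCRATCH code), think of each node as a function applied to the frames flowing down the branch. Once the codec node has baked the frames into a delivery stream, nothing downstream can draw into the picture anymore, so the burn-in has to sit upstream of the codec:

    def burn_in(frame):
        return frame + " +timecode/clip-name text"   # draws into the image data

    def encode_h264(frame):
        return f"h264[{frame}]"                      # bakes the image into a delivery stream

    frame = "graded frame"
    print(encode_h264(burn_in(frame)))  # h264[graded frame +timecode/clip-name text]
    print(burn_in(encode_h264(frame)))  # wrong order: the text never reaches the picture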
off the primary node with the burn-in first, then the h.264 compression node. There’s some great logic behind this structure which will become apparent when we work more with the output side of SCRATCH. 1. Click-on the Master clip node titled ‘A001’ This process is a bit different from the last node creation. It’s just something, that once learned, you’ll have to remember. The burn-in can be accessed through the File format node selection menu. The same one you used to select the AVID file format from. Or through the Matrix window. We’ll do the latter. 2. Click -on on the Matrix selection in the navigation menu (F5). 3. Swipe right in the upper half of the window. A new display slides open. This is the Versions manager. (Fig. 12) In a nutshell, this is a tool set that allows you to see which nodes and versions of those nodes have been created without going back and forth to the Output window. The importance of this tool set for us now is that we can create a new node, directly attached to the primary node, without going to the Output window. Here’s how this works. 4. Select the SHOT button Fig. 12 Options tab for in the lower left area of the window. (Fig. 13) the Versions pallet. 5. In the Versions pallet, located on the far right side of the screen (might be hidden under the floating navigation pallet, select the Outputs tab. 6. Select primary node (the lower or first node in the stack). It will highlight in gold. The Insert.. button now becomes available and can be found in the lower left part of the button cluster where you clicked on the Shot buton. 7. Click-on the Insert.. button. This will open a listing of plug-ins that can be applied to a new Node, that will be created when we’re done here.
Fig. 13 Matrix window, Shot button selected.

8. Click-on the SPA Plug-ins from the list on the left.
9. Now, Click-on the Burn-in plug-in to the right. (Fig. 14)

Fig. 14 SPA Plug-ins selection window.

In the lower part of this new window you see a video window displaying the clip. Below that is the Apply Selection button.

10. Click-on the Apply Selection button. (Fig. 15)

Fig. 15 Apply Selection button.

Think of what we have created at this point as an empty Node with the ability to take text and superimpose it over the video frames of the clip(s). What text we use is our choice.

With the plug-in window now closed, the bottom of the Matrix window has changed. The Text tab is highlighted (Fig. 16) and there's a box open below it to enter text. On the clip in the ViewPort, there's a box with two handles, much like you would see in PhotoShop for resizing an object. There's now a way to enter text and change the color, size and position of the burn-ins. The production company will dictate what they want, but typically it's:

• Timecode from the clip
• The clip name
• Watermark (Not For Distribution, production company name, etc.)

Where these are placed on the screen will also be determined by the production company. Always ask what they want to see and where on the screen it's to be placed. What you're going to create can be seen in Figure 16.

Fig. 16 Burn-in displayed over clip image.

VideoPort = video preview window

The box on the bottom of the white rectangle area will increase the font size of whatever is in the rectangle. The box on the lower right will change the width of the area the font will be displayed in. Think of it as the right margin setting in a word processor.

The first information we want to add is the company name in the upper left of the screen. (Fig. 17)

Fig. 17 Text entry button.

11. Click-on in the dark gray rectangle box located just below the Text tab. The cursor will be flush left in the box.
12. Type-in the following: 47 Min. Prod.

You will see that displayed now in the rectangle box over the video clip. To make the text larger,
13. Click-on the small box in the VideoPort window, in the lower part of the white rectangle, and drag it down. The box will resize and the font will get bigger. Make it just a bit larger than it was before.

Fig. 18 Burn-in display tool.

14. Click inside the rectangle box and drag it down so it's within the video framing and in the upper left corner.

To create another burn-in,

15. Click-on the New button located just below the Text tab. This will create a new entry box and a new layer over the image.

Now we want the system to auto-fill some information based on the metadata within each clip. We want the displayed information to change with each clip, so it's 'clip-specific'. The data we need to capture is the timecode from that clip and the name of that clip. To the right of the dark gray box we have entered text into, and just below the Guides tab, is the Insert Metadata menu.

16. Click-on the Insert Metadata button. A window will open with lots of preset metadata selections. (Fig. 19)
17. Click-on the Source TC selection.

In the text entry window you will see #stc. This is a placeholder for metadata that tells the program, 'fill this area with the source timecode from this clip'. Now we need to put in the file name metadata.

NOTE: if the information shown on the clip looks like little boxes, a usable font has not been selected. This is easy to fix. Do the following: to the right of the metadata button you just used is a field showing the chosen font. (Fig. 20)

18. Click-on the Select button and a listing of all the fonts you have on your system will be displayed. (Fig. 21)
19. Scroll down and find something neutral like Helvetica.
20. Click-on OK to close the window. The font should be applied to the burn-in and now be readable.

We now need to add more fields to the burn-in.

21. Click-on the Insert Metadata button again.
22. Click-on the Source Filename selection.
Fig. 19 Source TC metadata selection.
Fig. 20 Font selection field for text used in burn-ins.
Fig. 21 List of fonts on your system.

You will see #sfn right next to #stc. Looking at the image in the ViewPort, you now see what looks like a mess: the timecode runs right into the long file name created by the RED camera. To fix this, you simply place the cursor between the 'c' and the '#', and

23. Click.
24. Press the space bar a few times until the timecode and the file name are displayed on both sides of the image.
25. Click-in the new rectangle text box and drag the timecode and filename display down to the lower part of the image area. Refer to Figure 16 a few pages back.
We're done with burn-in creation. To see the new Node,

26. Click-on the CONstr… button on the left side of the screen, about midway down.

The new Node is directly attached to the Primary node and labeled 'Burn-in.Day-1' (Fig. 22). This naming is not that helpful, since we may want to reuse the setup on this project, or on any other project with the same requirements. To rename the node,

27. Click-on the Burn-in Node to select it. The Node will highlight around the edges.
28. Click-on the Output Settings tab. (Fig. 23)
29. Click-and-drag over the name text 'Burn-In.Day-1' right next to the Output field. This is located right below the Output Settings tab.
30. Type in '47 min. +STC+SFN'.
31. Press the Tab key to leave the field. The Node name is now updated.

To save the node setup for later use, look to the lower left of the screen, where you'll see a field named Output Templates.

32. Click-on the field and enter '47 min. +STC+SFN'.

Fig. 22 Primary Node with AVID MXF and Burn-in Nodes attached.

33. Click-on the Save button. The node setup is now saved for later use on this same production.

Adding a Node for Dailies Output

Again, we need to add a node by doing what we did before. The difference this time is that the Node will follow the Burn-in Node.
Fig. 23 Output Settings tab for access to renaming the selected node.

1. Click-on the Add Output button. (Fig. 24)

Fig. 24 Add Output menu button.

From the list,

2. Click-on the QuickTime-Export menu selection.

You have now set that output to be a file that will use the QuickTime wrapper. We need to set the codec and the destination for the processed files. In the lower middle part of the screen there is an entry field titled Media. (Fig. 25)

Media = Path to where exported files will be saved
The information in the Media field is the pathway to where the processed file will be saved. It will default to the Render folder within SCRATCH's self-configured folder structure. That's not what we need for DIT work. The file must be 'sent' to a prescribed folder for proper organization. Remember the folder structure mentioned earlier in the book? This is where it comes into play.
Fig. 25 Node information display.

The common structure would be:

Date or day of production
    Cam-A
        A001
        A002
        A003
        etc.
    Cam-B
        B001
        B002
        B003
        etc.
    Audio
        Day-1
        Day-2
        etc.
    Editorial
    Dailies

The target locations for the outputs will be the Editorial and the Dailies folders. To set this,

3. Click-on the Browse button next to the current render file path.

The typical file navigation window will open. We need to direct the program to save the file to the proper folder. In the 'real, on-set' world, this would most likely be the drive array attached to your system, or another hard drive destined for editorial. For now, we'll save the files into the folder structure we created earlier. In the file navigation window,

4. Click-on the folder Dailies, within the following pathway:

Your drive: Course Assets>47_min>Day-1>Dailies

The system will now place all files processed off the Dailies Node into this folder. The next process is to set the proper codec. Referring to Figure 25, there is a second tab titled General that will take us to the codec and other output modifiers.

5. Click-on the General tab.

Fig. 26 General Tab information display.

6. Click-on the Motion JPEG A menu selection to the right of the word Compression.

Most of the settings here are fairly self-evident. The Compression setting is a huge list of the possible configurations and codecs available with .mov files. From this list,

7. Click-on the H.264 menu selection. (Fig. 27)

Fig. 27 Codec selection window.

The system will pick preset settings that will output good-quality files, but you can change those settings if you like. Next to the H.264 codec setting is the Quality. This is set to Normal by default, but it can be set higher for better quality renders (which will take longer to render), or lower for lower quality outputs. Unless someone is complaining about what they see, leave it at 'Normal'.

The audio channels, if there are any, might self-select. But if they don't, you can turn them on by clicking on the channel number. The right side of each button will highlight, visually telling you the channel is selected. In most cases we would have channels 1 and 2 selected, unless the audio's daily report indicates more channels were recorded. Follow the report.
You can also set the audio output bit depth, the stereo/mono configuration and the audio codec (called 'Quality'). For now, leave all these other settings alone.

The resulting file name will be taken from the node name by default. This is a problem, because all clips would be named 'QuickTime export', or whatever name you entered for that node. This is an issue for SCRATCH, and the software will stop the process so it will not over-write your previous work. There is a metadata tag we can use to automatically apply the original file name to the exported clip. But first, we need to tell the program to export each clip in the timeline or CONstruct as 'single files'. SCRATCH defaults to combining all the clips into one long file. Not what is needed for DIT work.

This is not an intuitive process. You can search the program's screens until you go blind and you won't find a button or icon that will give you any clue as to how this process is started. If there's a glaring fault to SCRATCH, it's the non-intuitive interface. Here's the key: the next field to the right of the Output Node name should say something like 'QuickTime-Export.mov'. That is what you selected for the Node type. If left alone, each file would be named 'QuickTimeExport.mov', and that won't work.
8. Click-on the QuickTime-Export.mov words.

The File Name Specifications window opens up. (Fig. 29)

Fig. 28 Completed Output nodes.
Fig. 29 File Name Specifications window.

Notice that the top field has pre-filled-in metadata code. #name is the name you gave the node. We'll change that, and it will be very much like what you did creating burn-ins. But first you must tell SCRATCH how you want the files exported. On the right side of this window are the Mask Templates. The field right below this is the menu allowing for the export requirements.

9. Click-on this field and select Separate folders*.
10. Click-on the Apply button. (Fig. 30)

Mask Templates = File name structure when saved
Separate folders* = Separate files when saved
Single File Sequences* = Save all clips into one large file

Fig. 30 Mask Templates tool.

Referring to Figure 31, we now need to change the metadata tags so the resulting processed file will have the same name as the camera original. In the field where you see '#name.mov',

11. Click-and-drag over the words in that field.
12. Click-on the field that displays 'Code: Reel ID'. The metadata tags window opens up.
13. Select the File Name tag from the list.
14. Click the Insert button.
15. Click-on the word Insert that now appears where 'Code: Reel ID' used to be.
16. Select File Extension from the list. If there's not a period '.' between the two codes, add one.

The resulting code should look like what you see in Figure 31.

Fig. 31 File Specifications with proper metadata codes.

17. Click-on the OK button at the bottom of this window.

What you now see in the Node information area should reflect the file name that the playhead is resting on. Each transcoded clip will now be saved, with the original file name, to the folder you specified. You should do this same process with the DNxHD node, so the files being exported for editorial will remain named the same as the camera originals for relinking later. In order for AVID, and other software, to relink with camera original files for finalization, the file names between the camera originals and these transcoded files must match.
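Those #-tags are simple per-clip text substitutions, and the same placeholder idea drives the burn-ins you built earlier. As an illustration only (this is not SCRATCH's actual code, and the tag names here are just modeled on the ones used in this exercise), the idea in a few lines:

# Toy model of SCRATCH-style name masks: each #tag is replaced per clip from
# its metadata. Not SCRATCH's real code; tag names modeled on this exercise.
def expand_mask(mask: str, clip: dict) -> str:
    tags = {
        "#name": clip["node_name"],     # the name you gave the output node
        "#sfn":  clip["source_file"],   # source file name (no extension)
        "#stc":  clip["source_tc"],     # source timecode (used for burn-ins)
        "#ext":  clip["extension"],
    }
    for tag, value in sorted(tags.items(), key=lambda t: -len(t[0])):
        mask = mask.replace(tag, value)
    return mask

clip = {"node_name": "QuickTime-Export", "source_file": "A101_C015_1204XE",
        "source_tc": "14:22:31:07", "extension": "mov"}
print(expand_mask("#sfn.#ext", clip))   # -> A101_C015_1204XE.mov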
Other Outputs

When would you want to have other Nodes, besides the output for editorial or dailies? Special requests: the VFX or compositing post-production people will want very high quality output to do their 'magic'. They may request DPX, TIFF or JPEG sequences instead of standard EXR sequences. It's good to note at this point that OpenEXR is the basis for the ACES workflow. We're going to spend a dedicated chapter on this later in the book. To accommodate the VFX folks, you would simply create a new Node attached directly to the Primary Node. This would be an Image Files type of Node, where each frame would be a separate image.

Just an aside at this point: Arri RAW and BlackMagic DPX RAW files are comprised of separate frames. VFX might be happy just getting a copy of these files without any transcoding. Something to ask about when starting the first day's work on-set.

The file conversion pathway for RED .r3d files looks like this: 12-bit .R3D camera files ➠ 16-bit OpenEXR files (internal to SCRATCH) ➠ exported to your choice of formats and file types.

TECH NOTE: The key to understanding how SCRATCH works when it outputs is fairly simple. You always want a new codec output to link directly to the Primary node. It is not good practice to have the system convert to one codec, then use that output to make a new output from an already compressed file. But there is a nice way to use this ability to your advantage. For instance, when the production company wants both 1080p and 720p H.264 outputs, with burn-ins, for dailies: now that you have created the burn-in Node, you would link two more Nodes to the Burn-in Node. Each would be set to a different codec or raster size. There are many more advanced output tricks possible because of this unique and very flexible output pipeline created within SCRATCH.
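To make the TECH NOTE concrete, here is a toy model of that output tree, with invented function names (SCRATCH's real pipeline is internal to it): the burn-in is stamped on the uncompressed frames once, then each codec branch encodes independently off that one stamped stream.

# Sketch of the output-tree idea: one primary source, burn-in applied to the
# uncompressed frames, then several codec branches hang off that single
# burn-in node. Names are invented for illustration.
def burn_in(frame):              # stamp TC/file name onto the raw frame
    return frame + "+burnin"

def encode(frame, codec):        # each branch compresses independently
    return f"{frame} -> {codec}"

primary = "frame"                # decoded, full-quality frame off the Primary node
stamped = burn_in(primary)       # burn-in BEFORE any compression

for codec in ("H.264 1080p", "H.264 720p"):
    print(encode(stamped, codec))        # dailies branches share one burn-in

print(encode(primary, "DNxHD 36"))       # editorial branch: no burn-in, off Primary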
Queuing The Render

There are two modes for outputting the files/Nodes. You can select a Node and immediately process that Node, or you can put all the Nodes in a queue and the program will handle all the various outputs.

As mentioned earlier, SCRATCH will leverage all CPUs and GPUs, allowing multiple files to be processed at the same time (parallel processing), and will allocate some computer power for you to continue working on a new CONstruct's worth of files. This is the real on-set power of SCRATCH. Other programs will lock you out of doing more work within them while rendering. It is very easy to get way behind with processing files if you have to wait hours for a batch to process. There is no 'free lunch' in this part of the business: you either buy a second computer and network it with the first (extra expense), or you pay the roughly $650/year for the SCRATCH software. In the end, both choices will cost close to the same.

To add a Node to the output Processing Queue,

1. Click-on the H.264 Node.
2. Click-on the Add to queue button located in the lower right part of the window.

To add the AVID export,

3. Click-on the MXF AMT Export Node.
4. Click-on the Add to queue button located in the lower right part of the window.
Both files are now added to the queue. To see what's in the queue,

5. Click-on the Process queue button. (Fig. 32)

The Process Queue will open, listing all the files, or jobs, ready to be processed. Refer to Figure 34. Double-check that the output paths are correct. If not, you can cancel the queue, reset the path, then come back to the queue. The output path will then be updated. In the lower left corner of the Queue screen are the buttons to start the job (Fig. 33). You can either press 'Start' to get the processing going immediately, or you can set a timer to delay the start of processing.
Fig. 32 Process Queue listing button.
Tech Note: There is a shortcut to start processing an individual file. Referring to Fig. 31, there is a Process button. This will immediately start the transcoding of any highlighted Node(s) in the Pipeline graphic display. This saves a few steps if you are looking for a one-off output for testing, or to give to production.
WARNING: The AVID MXF file format is very fragile and easily corrupted by the MacOS. There is a feature called 'Quick Look' in every OS version since v10. If you open a folder, the OS opens each file in the background, creating a thumbnail and offering you the ability to simply click on a clip and press the space bar to quickly view the file. If the file is not fully written at that point, the file will be corrupted. This can happen even if you just open a folder with .mxf files still being written into it. Always wait until the rendering and file writing is totally done before opening any folder containing AVID .mxf files.
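If you ever script around this, one generic defensive trick (not an AVID or Apple API, just a common hedge) is to wait until a file's size has stopped changing before anything touches it:

# Generic defensive trick: don't touch an .mxf until its size has been stable
# for a few polls, i.e. the render has very likely finished writing it.
# The path below is hypothetical.
import os, time

def wait_until_stable(path: str, polls: int = 3, interval: float = 2.0) -> None:
    last, stable = -1, 0
    while stable < polls:
        size = os.path.getsize(path)
        stable = stable + 1 if size == last else 0
        last = size
        time.sleep(interval)

wait_until_stable("/Volumes/YourDrive/47_min/Day-1/Editorial/Cam-A/A101_C015.mxf")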
Fig. 33 Process Queue start or delay start controls.
Fig. 34 Process Queue with the two export files ready to process.

Why would you want to delay the processing? It might just be that there's so much work coming in that you don't want to be slowed down. You can queue the work and have it start over the lunch break, or after you leave for the day. The processing will proceed, unrestricted by any other computer activity. For now,

6. Click-on the Start button.

The visual display of the program is quite intuitive. A progress bar will start to appear under the Status column for each clip (Fig. 35). The program looks at your computer and allocates processing resources based on its power. If you have lots of RAM and many processors, several clips may start processing at the same time. If your machine does not have a lot of power, the program might decide to take each clip individually and process the queue from top to bottom.
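Resource-wise, that behavior resembles an ordinary worker pool. A minimal stand-in, where the transcode function is only a placeholder for the real render work:

# Minimal worker-pool analogy for the render queue: a powerful machine gets
# several clips in flight at once; a weak one effectively runs top-to-bottom.
import os
from concurrent.futures import ThreadPoolExecutor

def transcode(job: str) -> str:      # placeholder for the real render work
    return f"{job}: done"

queue = ["A001 -> DNxHD 36", "A001 -> H.264 dailies"]
workers = max(1, (os.cpu_count() or 1) // 2)   # leave headroom to keep working

with ThreadPoolExecutor(max_workers=workers) as pool:
    for result in pool.map(transcode, queue):
        print(result)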
Saving a Node Preset

Any Node configuration can be saved for later 'one-click' use. This is an obvious time saver on a production where you might be working for days or weeks, doing the same processes on all camera files passing through your hands. In the lower left of the Outputs screen is a cluster of buttons representing the Node Templates and output side of the process. Right below the Output Templates is a dark gray box.

1. Click-in the box.
TECH NOTE: SCRATCH Lab does not like writing over a file with the same name. It will cause SCRATCH to crash during the render process. In fact, it will crash as it starts to render over the 'same-named' file, but before it actually does. This could be a feature or a bug, depending on how you view it. Virtually anytime SCRATCH crashes while rendering, it's because it's trying to over-write an existing file with the same name. Start your troubleshooting here.
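A simple pre-flight check along those lines is to scan the destination folder for any planned output name that already exists before you press Start. A sketch, with a hypothetical path:

# Pre-flight check before a render: flag any planned output name that already
# exists in the destination folder, since over-writes crash the render.
from pathlib import Path

def collisions(dest: Path, planned_names: list[str]) -> list[str]:
    existing = {p.name for p in dest.iterdir() if p.is_file()}
    return [n for n in planned_names if n in existing]

dest = Path("/Volumes/YourDrive/47_min/Day-1/Dailies")   # hypothetical path
clash = collisions(dest, ["A101_C015_1204XE.mov", "A101_C003_1204RN.mov"])
if clash:
    print("Rename or move these before rendering:", clash)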
Fig. 35 Process Queue status display during processing.

2. Type in Burn-in TC+SourceName.
3. Click-on Save.

Above where you entered the new template name, the newly created template displays under the Output Templates. This is a listing of all created templates that can be selected, then Load(ed), for any Node.

Checking Your Work

We always want to spot-check our work, just to make sure everything is processing as planned. To do this, navigate to your hard drive, and into the Completed Assignments folder.
Fig. 36 AVID export file structure.

Inside the DNxHD-36 folder (Fig. 36), which was created by SCRATCH based on the Node name you entered, are the .mxf files (the actual video files) and the AAF and XML files. Give this entire folder structure to an AVID-based editor and they will be able to quickly bring all the transcodes in for immediate use.

The H.264 file(s) will stand on their own within the directory you specified. But there is a potential for problems here. If you export a number of H.264 clips and they all have the same name, the system can over-write them. That would be a bad plan. Make sure to set the file names and paths so the files will all have unique names.

Quick Review

This has been a long chapter with lots of twists and turns within the software. Here are the key points to remember:

• Import video files into the CONstruct.
• Move into the Editor to sync sound and trim clips if needed.
• The Matrix is the workspace to add/create LUTs and do color correction.
• Output will configure your various deliverables. Be aware of the order you create nodes: all LUTs, burn-ins and other plug-ins go before the codec and raster format node.
• You can output a single selected node using the Process button.
• You can review all the outputs in the Process Queue window.

Submission for grading
Turn in the assignment per your instructor's instructions.
SCRATCH Review Questions
Answers located in Appendix C.

1. The area within SCRATCH that new media is imported into is called the:
A. Timeline
B. Construct
C. Node

2. Before you can move into a project within SCRATCH, you must configure the:
A. User interface
B. Output Nodes
C. Scratch and Media folders

3. SCRATCH refers to Scopes as:
A. Data
B. Output indicators
C. Statistics

4. To access the scopes in SCRATCH, you must do what?
A. Locate them in the floating shortcuts menu.
B. Swipe left in the Matrix.
C. Swipe up in the Matrix.
D. Swipe down in the EDIT area.

5. If you can't hear sound on a file you know has sound, you click on the:
A. Speaker icon
B. Track identifiers
C. Audio tab in the tool bins

6. When manually syncing audio and video using a slate, you align the audio slate clap with the visual slate clap, and then do what?
A. Nothing; once aligned they are in sync.
B. Click on the 'sync' icon.
C. Use the TC sync tool.

7. To reveal the audio tracks attached to a clip in the EDIT area of the program, you click on the Audio button in the tool area to the left.
A. True
B. False

8. Referring to the image below, the timecode burn node will be applied to which output?
A. The Dailies Node (the one without the image)
B. The Editorial Node (the one with the image in the lower right)
C. Both Nodes
D. Neither Node

9. Referring to the image below, you can adjust the black levels in an image by:
A. Clicking on and moving the outer area marked A.
B. Clicking on and moving the cross hairs in the middle, marked B.
C. Clicking on and dragging left and right over the word Lift, marked C.
Chapter 11 REDCINE-X Workflow

It might seem that the preferred software for handling RED footage is REDCINE-X Pro. However, with the new versions of DaVinci Resolve, SCRATCH, and others, you no longer need REDCINE-X. Well, maybe you do anyway. REDCINE-X Pro is the massive re-incarnation of the small software tool that RED created to handle their RAW files back in the day. It is a powerful, well-featured tool that allows you to do color correction, basic clip in-and-out editing, and exporting to various codecs and containers. So far, all the software choices are the same. One of the limitations of DaVinci Resolve (the free version) is raster size. If you need to export at a higher raster size (e.g., 4K), you'll have to lay out $995, or use the free REDCINE-X. SCRATCH retails for $650/yr., so that might not be an option if you're just starting out. SCRATCH can be rented by the day, week, or month. Like any of these software tools, REDCINE-X has its own workflow, button placement and quirks. For example, the workflow video you watched is only a year or so old. It's already way out of date, as you will see when you step through this video. But it gives you an idea of the basics of how the software flows. The user manual for REDCINE-X is not of much help either. They are, after all, a camera company, and the supporting software is free, for now. The manual lacks good explanations and detailed processes. It often refers to a tool set but never shows you a screen shot of what you're looking for. A good example is the Audio Tool. Good luck finding a labeled image of that part of the screen. These processes will be done in a manual mode, which means that the timecodes of the two files do not match. So the use of the slate is important. If the timecodes did match, this software has an auto-sync feature that will link the two files together with one button click. That being said, the RED camera has a terrible track record for keeping solid timecode sync once jam-synced with the audio recorder. It may start to drift out of sync as the day progresses. Many productions use a Lockit Box to feed accurate timecode into the RED camera, circumventing the issue.
With all the quirks aside, REDCINE-X is hard to beat for doing the task we're about to step through. The process steps we will follow in these tutorials will be:
- Launch the software
- Interface overview (v35.1.38xxx)
- Navigate to the RED RAW video files
- Set a slate marker point in the video file
- Locate the related audio file
- Sync the audio file to the video file
- Do one-light color correction
- Export for dailies and AVID workflow

It would be very helpful for you to download the software, install it on your computer, and have it open when following along. You can get the latest version HERE. There are Windows and MacOS versions available. They're free.
Software Overview

With v35 there have been some major interface layout changes, and they are really nice additions. Like any of these bits of software, they have their differences, which you just have to learn. Let's step through the interface (Fig. 1) for a few minutes and then wade into using it for the tasks thrown at the DIT.

Fig. 1 REDCINE-X Edit interface.

Much like DaVinci Resolve, REDCINE-X (RCX) now has buttons/tabs to take you directly to the major areas or functions of the program. At the top of the interface are the EDIT and EXPORT buttons. (Fig. 2)

Fig. 2 Edit and Export buttons.

This screen is broken up into five basic areas. Moving from the upper left to the bottom:
- Media Pane
- Color correction scopes
- Player window (below the color correction scopes)
- Image control tools pane (far right)
- Timeline (across the bottom)

We'll get to the Export window after going through this window's functions. The upper left, the Media Pane, is where you locate the footage and audio files that you're going to process (Fig. 3). It's a simple drill-down process, where you select a drive, then a folder on that drive where the assets are stored.

Fig. 3 Media Pane.

Here, the RED clips are shown when the RED Assets folder within the Course Assets folder is opened. If you look at this folder on your hard drive, you will see several .mov files and a folder named Audio Assets. RCX will only show .R3D files and audio files. It is not able to process anything else. Much like Arri's Viewer tool, which specifically handles only Arri RAW files, RCX is RED-file specific. You can look at the files as thumbnails (the default setting) or as a list. The list view is often more convenient when you have a lot of files. The Show menu at the top of this pane gives you access to all the files you can display, or filter out from being displayed.

With both the File Browser and Project panes closed (Fig. 4), it looks like there's nothing there. By using the 'twirl-down' tool ▶ you can open a pane, or collapse it for more working space when needed.

Fig. 4 Collapsed File Browser and Project panes.

If you open both and then select shots/clips from the Media Pane, dragging them down into the Project Pane, those shots are now selected for use later on (Fig. 5). If the shots are not in the Project Pane, they can't be processed. You can double-click on a clip to look at it and apply
color correction, and even export the clip. But if you are working with several clips, you must add them to a project to use the power-output features of this software. Notice on the left side there's a folder titled 'Project 01'. (Fig. 5) If you were using this software on a RED-camera-specific project, you could rename the folder to reflect the name of the project. Then you could add more folders to that master folder and treat them as bins to hold each camera offload. For example: A001, A002, B001, Audio, etc.
Below the Clip Viewer are the various controls for the clip. (Fig. 7) You can drag the playhead (the gray bar with the white vertical bar), or click on the very typical player controls you have seen in other software. In RCX, the space bar stops and starts playback as you would expect. The right and left arrow keys move one frame forward or back. There are other tools available here, and we will get to the ones important for your daily work in the exercises.
Fig. 7 Clip viewer controls.
Fig. 5 Project Pane expanded in list view.
The right side of the interface holds many tools for image control. Like the Media Pane area, there are several sub-panes, accessible by twirling down the tools on the left of each of the tool sets. (Fig. 8)
If you double-click on a clip, either in the File Browser or in the Project Pane, it will be displayed in the Clip Viewer. The associated exposure information will be displayed above, in the scopes. (Fig. 6)
Fig. 8 Tools Pane with tools collapsed.

Across the top of the Tools Pane are tabs giving further access to other options. And if one is not listed there, the tool cog ☼ on the right side will allow you to add a custom tab or select one that is not currently displayed, like Audio for example. Across the bottom of the screen is the timeline. Yes, you do have some very basic editing functions within this program. For the DIT it's basically useless, unless the client wants all of the day's shots exported as a single clip. Not likely, but the option exists.
Fig. 6 Scopes and Clip displays.

Take a few minutes to click around the interface and see what's hidden under the tabs and twirl-downs.
It should be noted at this point, as was mentioned in the hardware chapters, that RED makes a card that accelerates the processing of these files: the RED Rocket card. It's expensive at $4700+, and really needed if you are processing a lot of RED footage. However, RED seems to be making an attempt to phase out this card and replace it with software code that leverages GPUs. Some of that code is already in place. If you go into the Preferences part of the program, there are options to select OpenGL or CUDA processing. There could be more options in that list, depending on what graphics or GPU card you have installed. If you plan on using this software, keep an eye on the RED user forums for the latest GPU updates. In the next few exercises, we will pull in a few RED One clips and run them through the whole DIT process. This process is the same for all RED camera footage, no matter which RED camera it comes from.
Exercise 16: REDCINE-X Audio Sync
Required Viewing: Basic Workflow - a brief, RED-created overview of the interface. Understand that these videos are for v10 of the software, but the flow and basics are still applicable. Also: Audio Syncing with REDCINE-X.

This assignment will take approx. 60 min.

You will need:
• Course Assets
• Personal hard drive
• REDCINE-X Pro software installed

Objectives for this assignment:
• Learn how to open RED camera files in REDCINE-X Pro software.
• Apply simple one-light color correction to a specific clip.
• Link audio from the on-set recorder.
1. Make sure your hard drive is connected and available on your computer.
2. Launch the REDCINE-X PRO software.

In the upper left of the interface are the File Browser tools. This seems fairly standard, as we have seen with programs like Resolve and others.

3. Locate your hard drive under the Devices pane. (Fig. 1)
4. Click-on your hard drive in the listing. In this example, the drive is named 'Tartus'.
5. Twirl down the drive and you will see all the folders on that drive.
6. Locate the Course Assets folder on your drive.
7. Twirl down the folder and locate the RED Assets folder.
8. Click-on the folder. The RED assets will be displayed in the pane to the right. (Fig. 2)

Fig. 1 Drives and files Browser in REDCINE-X.

The software automatically looks in the folder and finds any RED-related folders with the camera clips in them. RED cameras save each clip in its own folder with the associated metadata.
Factoid: The three shots in this folder are courtesy of the Utah Valley University Digital Cinema program, shot for a student capstone workshop project. Under the direction of student director Paul Hunt, the film 'Foreign Exchange' was entirely produced, crewed, and post-produced by Digital Cinema students, and shot in one 12-hr. production day.
Link to finished 'Foreign Exchange' movie.
Fig. 2 RED camera files folder opens all clips to the right.
9. When you click-on the folder containing the RED camera assets, the program will look for all .RDM folders and automatically load the clip files in those folders into the File Browser.
We'll be working with two of the clips in the listing: A101_C003_1204RN and A101_C015_1204XE.
In order for our work to be successfully completed when all is said and done, the two clips we will work with need to be added to the Project. This is just like DaVinci Resolve. The similarities end there, however. In RCX you can export just the clip in the Viewer, all selected clips, or the entire bin. You don't have to put them on the timeline to be able to work with them.

Fig. 3 Timeline tabs and Timeline tracks opened.

At the bottom of the interface are several tabs that change that window accordingly. (Fig. 3) The proper workflow here is fairly simple:
- Locate the media (files)
- Add them to a project
- (optional) Drag the desired clips from the project to the timeline

Now, it's not necessary to put the clips on the timeline to do DIT work. DITs don't edit; they just pass the files along, processed as needed. So any files that are either displayed in the Source Clip Viewer or in a project can be processed and exported. Let's move the two clips we're going to work with down into the project, with some organization.

10. Twirl-down the Project area. We can't change the name of the project, but we can add bins for each camera offload.
11. Right-click-on the name of the default project, Project01*, and select ADD Bin from the contextual menu.
12. Double-click-on the new bin. A new window opens allowing you to rename the bin. (Fig. 4) Name the new bin A001 and click-on Accept.

Fig. 4 Renamed bin to A001.

13. Drag the two clips into the area next to the bin. (Fig. 5)
Fig. 5 Clips moved into new bin.
The next items are the related audio clips. You could move the audio into the video clips bin, but remember,
you get audio twice a day, and that audio dump is for any and all cameras up to that point. It will be better to put the audio into its own folder. Inside the RED Assets folder is the folder containing the audio files from part of that day's work. But let's create a bin for them first.

14. Right-click again on the Project01* folder and Add bin.
15. Rename the bin Audio.
16. Click-on the Audio folder in the File Browser area. All the audio clips in that folder should appear to the
right. (Fig. 6)

Fig. 6 Audio clips shown in list view.

17. Click-and-drag the following audio files into the Project bin: Scene01D-T007.wav and Scene01A-002.

In the project bin, select the A001 bin and

18. Double-click on A101_C015_1204XE and it will open in the Preview window.

It does not matter if you link audio first or do the one-light color correction first. Either way will work. If you are ingesting clips before you get the first audio card for the day, you could do one-light color correction on all the clips, which gets you ahead for the day. Let's sync the audio.

NOTE: There is a specific order for this file linking. You MUST set the video slate clap mark first, then the audio slate clap mark. If that does not seem to line them up properly, clear the audio marker and reset it. RCX does not seem to behave properly if you try to reset the new marker position without first clearing the old position.

19. If you press the space bar, the video clip should play. Here, the man in the shot is reading a book, looks up, sees something, then stands.

Below the Preview window displaying the clip are buttons/icons that represent the most common features needed to work with video clips. (Fig. 7)
Fig. 7 Preview window tool bar and playhead.

Just below the Preview window tool bar is the play/scrub head. You can click on the playhead and drag it back and forth through the clip. Note that the typical J, K, L keys work as in other editors, and the left and right arrow keys move one frame forward or backward.

J = play backward
K = stop playing
L = play forward
◀ = move one frame back
▶ = move one frame forward
There are two ways of syncing the audio: manually, if you don't have matching timecode on the audio recorder, and Auto Match, where the program automatically locates the proper clip with the matching timecode. We will do the first clip manually and the second clip using the Auto Match function. To start the audio syncing process, the first order of business is to set a marker where the slate claps on the video clip, then do the same for the audio clip.
Fig. 8 Slate closed.
20. Use the mouse and scroll down the timeline to the point where the slate just slaps shut (see Figure 8).

Note: if you don't see the slate at the start of the clip, that means an 'in-point' has been set on this clip. This In point effectively trims the clip (non-destructively), so the clip will start at the In point and not at the true start of the clip. In the lower left of the tool bar is what looks like a bit of film with a slash through it. If this is highlighted, that indicates an in-point has already been set. Right-click on this Set In-point tool and select Clear in-point. You should now be able to scroll to the start of camera roll for this clip.

21. Click-on the Set Slate Point tool on the right side of the tool bar, below the clip preview window. (Fig. 9)

Fig. 9 Set Video Slate Point tool/icon.

This places a marker point on the video clip that the audio file marker will match when we set it.

On the right side of the REDCINE-X interface is a vertical tool box area. The tools displayed are driven by the tabs and the drop-down menu across the top. (Fig. 10)
The gear menu ☼ has all the available tabs plus ones that are not displayed.
22. Select the Etc tab from the tabs at the top.

The ETC tool box area can display information about Commands History (think of this as the History in PhotoShop), a listing of markers, and the audio tools we need to sync to the video clip. The first setting will be pointing the software to the audio file location on the hard drive. In this case, it's inside the RED Assets folder.
Fig. 11 Audio Control External Audio File Selection Tools.
23. Click the Control twirl-down to display the External file linking tool. (Fig. 11)

The program automatically creates tracks for the audio channels recorded by the camera, even if there isn't audio on the camera channels. The audio recorded to a separate audio system will be treated as 'external' audio tracks. The program will analyze the external audio file, then add the needed extra audio tracks.
Fig. 10 Image and Audio tools.
For example, if the on-set audio mixer recorded two isolated tracks and two mixed tracks (four tracks total), REDCINE-X will create 4 external tracks when the file is imported. To locate this take's audio file,

24. Click-on the '…..' button to the right of the External Audio File: field.
25. Navigate to the RED Assets folder on your hard drive. Inside that folder is another folder, 'Audio Files'. (Fig. 12)

Fig. 12 Choose Audio Match folder on your hard drive.

26. Double-click-on the file named Scene01D-T007.wav. That audio file is now loaded into the program.
27. Twirl down the Playback Channels bar, drop down the L Channel output selector, and select Track 5 (External). (Fig. 13)
28. Do the same for the R Channel, selecting Track 6 (External) from the track selector menu.

Fig. 13 Audio Auto Match tools.

Notice that the program reserves the first four channels of audio for the camera. Most professional cameras have the ability to record 4 audio channels internally. If the camera didn't record sound, you could 'map' the external audio to channels one and two without issue.
Now, what we have done is a perfectly fine workflow for single files, but you will be doing dozens of files each day. Here's where the power tools come into play, and the benefit of setting up bins.

29. Twirl down the Auto Match bar. (Fig. 14)

Fig. 14 Auto Match tools.

By checking Preferred Auto Match Directory, you can point to the single folder containing all the audio files. You can even tell the program how deep into the folder structure to look. Once you point the program to the audio folder, you can tell it to Match by 'Clip in Viewer', 'Clips in Bin' or 'Selected Clips in Bin'. The ultimate key to this working properly is that the audio and the video files MUST have matching timecode. This can be a very powerful tool set and a great time saver in these situations.
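Conceptually, this kind of auto-match pairs each clip with the audio file whose timecode range contains the clip's start TC. A rough sketch of the idea with invented data; real matching also has to account for frame rate and roll:

# Rough model of what a timecode auto-match does: pair each video clip with
# the audio file whose TC range contains the clip's start TC. Invented data.
def tc_to_frames(tc: str, fps: int = 24) -> int:
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

audio_files = {                       # name -> (start TC, end TC)
    "Scene01D-T007.wav": ("14:20:00:00", "14:24:30:00"),
    "Scene01A-002.wav":  ("09:02:10:00", "09:04:55:00"),
}

def auto_match(clip_start_tc: str) -> str | None:
    start = tc_to_frames(clip_start_tc)
    for name, (a, b) in audio_files.items():
        if tc_to_frames(a) <= start <= tc_to_frames(b):
            return name
    return None

print(auto_match("14:22:31:07"))      # -> Scene01D-T007.wav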
But what if this particular filming project did not have the audio and camera timecode synced? We're going to continue with a manual re-sync. At the very bottom of the REDCINE-X interface are tabs that change the display of the lower area, around the Timeline. (Fig. 15)

Fig. 15 Timeline display selection tabs.

30. Click-on the Audio tab.
The waveform of the camera audio clip should be displayed. As you can see from the straight line, there is no audio. We now need to select the audio channels to be displayed, using the Audio Tool Box located to the right of this display. Remember, we assigned the external audio to both the left and right channels earlier.

31. Drop down the Track 1 audio track selector menu and highlight External Audio 1. (Fig. 16) Do the same for Track 2, and select External Audio 2.

Fig. 16 Audio Track playback selection for Tracks 1 and 2.

The audio timeline track display now has visible waveforms representing both externally recorded audio tracks. If you drag the playhead back to before the slate claps on the video clip, then press the space bar, the video clip will play but you will NOT see the audio waveform move with the playback. This is an irritation with the current versions of RCX. However, you can improve how you see the audio timeline by expanding the area you can see in the timeline.

On the lower right of the window is a drop-down list that allows you to select the number of frames displayed in the timeline. Change this to +/- 75 frames. (Fig. 17)

Fig. 17 Timeline view drop down menu.

You can now drag the video playhead to just before the slate clap. I will give you a hint here: don't always assume that the first slate clap you hear is the right one.

32. Scroll into the clip to a point a few seconds before the slate claps.
33. Press the space bar and listen to the audio track. Don't pay any attention to the video playback. When you hear the slate clap, stop the playback. You should see the spike in the waveform (Fig. 18).

Fig. 18 Slate Clap waveform.
34. Click-in the audio timeline and you will see a vertical line appear where you clicked. This is the sync marker. Place the mouse pointer right over the audio spike and click.
35. Click the Audio-slate-point-to-scrubber-position linking tool. It looks like a little slate. (Fig. 19)
Fig. 19 Audio Sync at Scrubber point button.

The audio should now be linked to the video slate point. If it worked, the audio waveform most likely jumped to a totally different place in the timeline.
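The clap you hunt for by eye is just the loudest transient near the head of the file, which is easy to model. A hedged sketch with numpy; note that, exactly like the 'second sticks' problem described below, the biggest spike is not always the right slate:

# The slate clap is usually the loudest transient near the head of the file.
# Sketch only: real takes (second sticks, door slams) can fool this.
import numpy as np

def find_clap_frame(samples: np.ndarray, sample_rate: int, fps: float = 23.976) -> int:
    head = samples[: sample_rate * 30]          # search the first 30 seconds
    clap_sample = int(np.argmax(np.abs(head)))  # biggest spike = likely clap
    return int(clap_sample / sample_rate * fps) # as a frame number

# Fake audio: quiet noise with a 'clap' spike 2.5 seconds in.
rate = 48000
audio = np.random.uniform(-0.05, 0.05, rate * 10)
audio[int(rate * 2.5)] = 1.0
print(find_clap_frame(audio, rate))             # ~ frame 59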
36. Drag the playhead on the video clip back to the start of the shot. Press the space bar and listen to the
audio. When you hear the slate clap, stop the playback by hitting the space bar.

Does it match? Play the clip a bit further and see if the talent's lips match with the audio track. If they don't, you set the audio slate point to the wrong slate. Remember the warning earlier? To fix this, you MUST do the following to break the links you set, so you can reset sync.

37. Play the clip to the point where you hear the second slate (in this case), and stop the playback.
38. Select the slate clap spike waveform in the audio timeline.
39. Click-on the audio sync slate icon (Fig. 19).

Review what you have done by playing the clip from the start. Make sure you let it play to a point where you can see the actor's mouth say words. It should match at this point.

Hint: listen carefully to the audio file first, at least to where you hear 'action'. This file has two slates. The 2nd AC got ahead of the Assistant Director and had to re-slate. This is called 'second sticks' and should be noted on the script by the Script Continuity person. Also, if the actor's voice sounds like it is across the room and not close to the microphone, then you do not have the audio file linked properly.

It would be good for you to practice with the other clip you brought into the Project bin. Look at the slate to figure out which audio clip you should select to sync with this clip. To save this project,

40. Go to the File menu and Click-on Save. Name the project 'Foreign Exchange'.
41. Save it in the Completed Assignments folder on your hard drive.

You can imagine how time consuming this will be on-set if you are faced with a lot of these sloppy slates. This manual syncing of files is more the norm than an occasional activity when working with RED camera footage. As mentioned before, RED cameras drift and can lose TC sync in a few takes. By lunch break on a normal production, the files can be more than a second (24 or 30 frames) out of sync. The Auto sync will get you close if the TCs are matching between the camera and the audio recorder, but be ready to double-check each and every clip before releasing the footage.

Submission for grading
Check with your instructor on how to submit this assignment for grading.
Exercise 17: Flipping RED Footage Using REDCINE-X Pro
Required Viewing: Watch this video. It's a longer overview of the entire process from import to export.
Basic Workflow (this video will take you from video ingest to setting up a one-light): http://www.youtube.com/watch?v=n2E44PoVngI&list=PLD527AE320C28636F
These videos demo older versions of the software, but the processes are basically the same. And this video from RED, focusing just on the export settings: https://vimeo.com/47204496

⌛ This assignment will take approx. 60 min.

You will need:
• Course Assets
• Personal hard drive
• REDCINE-X Pro software

Objectives for this assignment:
• Learn how to open RED camera files in REDCINE-X software.
• Apply simple one-light color correction to a specific clip.
• Build two custom export presets: one for AVID editing and the other for iPad viewing of the clip.
• Export the same clip with both and discover the files created by the software.
REDCINE-X is hard to beat for doing the task we're about to step through.

NOTE: This tutorial follows on from the last assignment, where you synced audio with the video clips. We'll use the same project to continue on.

1. Make sure your hard drive is connected and available on your computer.
2. Launch the REDCINE-X PRO software. In the upper left of the interface are the clip browser tools. This seems fairly standard, as we have seen with programs like Resolve and others.
3. Open the last project by going to the File menu and selecting Open Recent Project.
4. Select the Foreign Exchange project. If everything on your hard drive remained the same, all the clips should be in the project window and we're ready to proceed with one-light color correction and exporting. If you saved the project, quit, and are re-opening the software for this exercise, use the File menu and select the 'Foreign_Exchange' project from the Open Recent Project menu selection. In the left part of the screen, make sure the Project is selected.
5. Double-click on the clip in the Project bin where the slate reads 1D take 7.
Fig. 1 Open Color Correction Scopes above the Preview window by clicking on the arrow above.
Right above the preview window is an arrow ▽ that will open the Scopes pane. (Fig. 1) The clip should now be visible in the Waveform and the RGB Parade scopes. You will be using these in a few minutes to give a quick color correction to the clip. (Fig. 2)

Fig. 2 Waveform scope on the left, RGB Parade scope on the right.
6. If you press the space bar, the clip should play.
7. Use the mouse and scroll down the timeline to the point where the slate just slaps shut (see Figure 3).

Fig. 3 Drag playhead to the point where the slate is closed.

We now need to do some quick color correction, or a one-light. On the right side of the interface we need to change to the Look tools. (Fig. 4)

Fig. 4 Look tab selected.

8. Click-on the Look tab.

The color and image correction tools are under the Look: Image portion of the interface. If you don't see what is displayed above, twirl down the triangle next to the word Look. Notice that REDcolor is already applied on both the Color Space and Gamma Curve. (Fig. 5) The color space is just that: the rendition of the colors, based on a preset mix of color intensities. The gamma curve is the luminance range, or the difference between blacks and whites and how they are spread across the total luminance range.

Fig. 5 Color Space and Gamma Curve settings.

Like most camera manufacturers, RED has its own way of encoding color and luminance. To decode this properly, you must have the proper 'default' settings as part of the software. These are referred to as SDKs, and most camera manufacturers provide them to the software companies so the files can be opened and properly displayed.

These images were shot with an early model RED One, with the associated early color science. RED cameras have come a long way since then. The software will look at the metadata created by the camera and select the default color science for that camera for both of these settings.

9. Drop-down the Gamma Curve menu and select REDlogFilm.

Depending on the power of your computer, this could take a while to render. Notice how flat the image looks, and the change in the readout of the scopes. This is how the camera actually records the image.

10. Change the Gamma Curve back to REDspace.
If you don't see REDspace in the list, Click-on the [M] just above the words 'Color Space'. This will reset these menus.

We're now going to do a 'quick-fix' on the color of the image. Cameras that record in RAW file formats do not really color balance. You can set a color balance in the camera, and the image in the monitor will reflect that change, but the camera file does not actually change. What is recorded, as metadata along with the image, are the camera settings. The white balance setting is there, and applied, but it's easy to change without causing more image problems, like added visual noise.

Below the Gamma Curve menu are lots of adjustments. You can see all the metadata points collected about the color, exposure, ISO and more. We are going to do an auto white balance to see how good the internal tools are at correcting the image. There's a tool that looks like a rifle scope sight ⊕. This is just like the eye-dropper selection tools in many programs.
Fig. 6 Place the cursor over the white stripe on the slate clapper and click.

11. Click-on the ⊕ and the cursor will change into the same icon.
12. Put the ⊕ over the white bar on the slate clapper and click to select that as the reference white, as shown in Figure 6.

The image color balance should shift, and most of the green cast will be gone. Overall, it should look warmer. Automatic tools can help or hinder color correction and should be used with caution.
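For the curious, the math behind that eye-dropper is roughly this: scale the channels so the sampled patch comes out neutral. This is generic color math, not RED's actual color science:

# Sketch of what the eye-dropper white balance does: scale R and B so the
# sampled patch (the slate's white bar) becomes neutral. Generic math only,
# not RED's actual color science.
import numpy as np

def white_balance(img: np.ndarray, patch_rgb: np.ndarray) -> np.ndarray:
    r, g, b = patch_rgb
    gains = np.array([g / r, 1.0, g / b])   # normalize to the green channel
    return np.clip(img * gains, 0.0, 1.0)

img = np.random.rand(4, 4, 3)               # stand-in image, 0..1 floats
patch = np.array([0.7, 0.8, 0.9])           # greenish-blue 'white' off the slate
balanced = white_balance(img, patch)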
TECH NOTE: The original video output of the RED camera, for viewing on-set, looked just like what you see when switching to REDspace inside the REDCINE-X software. Producers, Directors and DoPs complained. They did not understand, and could not envision, what the final image would look like. The Directors and Producers got tired of hearing 'it will look just fine, trust me.' RED then updated the software, creating a REC709-standard 'look' that could be applied to the output video going to the monitors. Although not perfect, it was far better to look at than the flat Log look. The questioning went away. Most cameras that produce high-end digital cinema images now have REC709, or a variant, for monitoring on-set.

Now we want to set the next two important aspects of the image for the simple one-light color correction: the black and the white levels, or Lift and Gain. If you look at the scopes now (Fig. 7), the whites are quite low, barely 50%, and the blacks are just above the bottom line, or 0%.

Fig. 7 Scopes of image before changing the Lift or Gain settings.

But we're only looking at the slate. Rule number one when doing color correction of any kind: find a 'hero shot'. That's typically the most important framing in the shot, the close-up of the actor for example. Scroll through the shot until you see his face on the screen.

13. Scroll down the tools window on the right until you see the Look: Lift:Gamma:Gain section. If you don't see the color wheels, twirl down the Wheels tool reveal section. (Fig. 8)
Exercise 17: Flipping RED Footage Using REDCINE-X The top wheel is for LIFT or the level of the blacks in the shot. The slider below the first wheel will raise or lower the overall black levels of the scene. This is just like the Resolve interface controls.
SPEED TIP: There is a time-saver that you could perform at this point. If there are a number of shots in the same location with similar lighting, you can spend more time making the first shot look better, then save those settings as a LUT (Look Up Table). For the next shot you open from that same scene, just click on the prebuilt LUT and all the settings are applied instantly. If it doesn’t quite work, it’s typically easier to fix one or two of the settings than all of them, each time on every clip.
The wheel itself is for adjusting the balance of colors in the black parts of the scene. For now, we'll leave that alone.
14. Use the slider below the Lift wheel and move it to the left.
Fig. 8 Color and luminance adjustments
The black levels in the shot should get darker. Look at the RGB Parade scope above the image while you move the slider. Bring the blacks down so they just touch the 0% line on the scale.
15. Now, click and drag the Gain slider to the right, moving the highlight regions (the bright windows in the shot) up to 100%.
This makes the image look somewhat better already. Look again at the black levels. Are they still down at 0%, or have they drifted up? If they have moved up, you need to drag them down again. This is a typical process in all color correction software. Think of the blacks and whites as connected with a rubber band. Move one and the other gets 'pulled' just a bit. It's normal to tweak these adjustments a few times to get them settled in.
There are other things we could fix at this point, but the basis of a one-light is quick and dirty corrections, then move on to the next clip. Your adjustments should look like Figure 9.
Fig. 9 RGB Parade displaying the AFTER Lift and Gain adjustments.
There is one other setting we want to look at before we move on. Because the RED files are so large and complex, it takes all the computing power of the fastest machines to keep up. This slows our work down, and that's not a good thing to have happen on set. REDCINE-X has the ability to set the preview resolution of the clips as you work. This saves processing power, which can be focused on your work rather than spent rendering the image. See Figure 10.
Fig. 10 Display resolution setting for viewing. Lower settings allow for easy, fast playback, but lower image quality.
Above the preview window are several drop down menus.
16. Select the menu that displays '1/8' and change it to Full.
Notice that the image looks better and there's more information displayed in the scopes above the image. If you press the space bar now, the playback will most likely be choppy. There's just too much data for the computer to process. You would change this setting to Full for final color correction so all the visual information can be clearly seen. For now, change the setting back to 1/8.
For practice, bring the next video clip into the Preview window and repeat the process, correcting that clip to match the one you just finished.
Exporting For Edit and Dailies
We need to export this clip for two purposes: editing in AVID Media Composer and viewing on an iPad. We'll use the built-in compression abilities of REDCINE-X and create some reusable export settings. Although we are doing this for just one clip, you can batch dozens of clips into the export queue and let the computer process automatically until the queue is empty.
Export Settings
At the top center of the screen you see the EDIT and EXPORT tabs. We've been working in the EDIT tab so far.
Fig. 11 REDCINE-X mode tabs.
1. Select the Export tab. (Fig. 11)
The interface changes quite a lot. The Project Media panel now occupies the entire left third of the screen. The Timeline also changes, but not so noticeably yet. The right side of the screen will now contain the presets you create for exporting. At the bottom of the Presets area is a plus sign ✚ that starts the creation of a new export preset. (Fig. 12)
Fig. 12 Create a new preset button.
2. Click-on the '+' button below the list to create a new preset.
The Create Preset window opens (Fig. 13), giving access to lots of settings, depending on what is required.
Fig. 13 Create Preset window.
The first requirement is to determine the file type we need to export. In our case, we need an AVID file structure, which is MXF (DNxHD) for editorial to use. At the top of this window is a field to enter a name for the Preset.
3. Enter AVID DNxHD36 for the Preset name.
4. Drop down the File Format menu and select AVID AAF & MXF.
The MXF is the video part of the file structure; the AAF is the audio. We now need to select the codec that will go inside this MXF container. To the right of the menu you just selected is a Setup... button.
5. Click-on the Setup... button to the right of the menu you just worked with. This will open a long list of the DNxHD codecs and their data and bit rates.
6. Scroll down and select 1080p/24 DNxHD 36 8-bit from this long list. Then click-on OK.
The codec and the container are now set for compressing the file. We still need to select which audio channels to export, what to add to the image when exporting, and what raster size to use.
The Output Resolution settings we will leave at their defaults, but we will change the Output Location. You want these rendered files to end up in a specific location on the client's drives. By default it displays 'Ask for Output Filename'.
Fig. 14 Custom Output menu selection.
7. Click-on this drop down menu and select Custom Output. (Fig. 14) The Setup... button to the right is now selectable.
8. Click-on the Setup... button. A new window called Output Settings opens.
9. Click-on the Browse... button and navigate to your hard drive. Inside your CourseAssets folder, create a new folder called Foreign_Exchange-Avid-Renders. Then click the Select button.
All the rendered files will now be saved to this folder. Editorial is not so concerned about which camera they came from; they can read the slate for that information. They just want the files for import. You can leave the rest of the settings alone and click-on the OK button. There are more settings that can be selected depending on the RED camera used. For now we'll move on to the Audio tab.
10. Click-on the Audio tab at the top of this window. There are two settings that are important here: one is that the audio is Enabled, and the second is the number of tracks to be exported.
There are 12 output channels available and all might be selected. Typically we only want the first 4 channels exported. In the case of our files here, there wasn't camera audio on channels 1 and 2, and we put the external audio on channels 5 and 6. We could export just 5 & 6, but let's do the first 4. The easy way to do this is:
11. Click-on the Clear All button.
12. Select channels 1 thru 4. (Fig. 15)
13. Select the Save tab.
Fig. 15 Audio Output channel selection.
Exporting Dailies
We're set for the AVID export. Now let's set up the export for the dailies that may be viewed on iPads or any device that will play back a .mov file. The process is the same as the one we have done before, so the instructions will be brief.
1. Click-on the new preset icon ✚.
2. Name the preset H.264 720p.
3. Select Quicktime from the File Format drop down menu.
4. Under Setup... select the Video button. (Fig. 16)
The Video button opens the typical QuickTime set-up window (Fig. 17) for the video portion of the file. What we need is the H.264 codec at a high data rate and high quality output.
Fig. 16 QuickTime setting window.
TECH NOTE: Because RED R3D files are so hard to process, RED has a special processing card that can be purchased for DIT systems. It will off-load the processing to that card and compress these files in near real time. That means a 3-min. clip will take about 3 min. to compress. The file we are compressing here is about 30 seconds long, yet takes more than 5 min. to process with a multi-core i7 processor. In reality, only 4 of the 8 cores are working, and even those are only processing at 80% of their potential. Welcome to the efficiency of multicore processing, or lack thereof. If you're going to be working with RED files, the $4700-plus price tag for the RED Rocket processing card is well worth it. Otherwise you will never process a day's worth of shooting overnight. There are new improvements with the REDCINE-X software that will leverage the GPU cores on some specific video cards. These will be far less expensive than the RED Rocket card. You would be well advised to keep track of this development for RED file processing.
Fig. 17 QuickTime Compression Settings window.
5. Set the compression settings the same as you see in Figure 18. Make sure you select H.264 and not X.264 from the Compression type list. (Fig. 18)
Fig. 18 Compression selection listing.
I would run a test to see if the output file looks good, with no visible compression artifacts. If it looks the least bit blocky or has visible noise, raise the limit data rate to 5000.
6. Click OK to close the video compression window.
7. You don't need to change anything on the audio side. The defaults are fine.
8. Under the Output Resolution settings, drop down the menu displaying 'clip' and select 1280x720-HD720.
9. Output Location should be set to Custom and directed to a new folder on your hard drive named Dailies.
10. The Audio tab should be set the same as before. Select the first four audio channels.
We need to add Burn-ins to these files.
11. Select the Burn In tab.
This opens the burn-in setting window. (Fig. 19) Here we can add visual information to the shot that might be requested by the producers or director. They almost always have TCW (Time Code Window) burn-ins or overlays on the selected shots. In this way, the Director can refer to a specific place in the shot by timecode (TC).
Fig. 19 Burn-In settings window.
Another good bit of information is the clip name. Some productions request Frame Number. This is the frame number within the clip, starting at the first frame
of that clip. Visual Effects (VFX) folks like frame numbers. The major difference between this Burn In tool and the others is that the placement of the burns is not as flexible, and there are only four boxes available for information. Typically, this limitation is not an issue. Start at the top of this window and click the Enabled box. In the lower part of the window are four drop-down menus that allow for the selection of preset readouts. There are many more custom settings, but what we need is within the basic presets.
12. In the first menu on the left, select clip name.
13. In the next one to the right, select preferred timecode.
This will place this information in the lower left and right portions of the frame, respectively.
Everything else can be left alone. We're done with this export setting.
14. Click-on Save.
The new iPad preset will now be displayed in the list. If the iPad setting is not highlighted, single-click on the setting to select it. The other setting that's important is what you want to export. In the lower right corner of the Presets pane is a drop down menu that defaults to Bin (all clips). If you have just one clip to process, then the Clip in Viewer selection is just fine. But you could have created a timeline (rough edit) and exported the entire timeline, or the clips in the timeline separately, or all the selected clips in the bin. This could be very helpful. We want to export each clip separately.
15. In the lower right corner is a drop down menu displaying Bin (all clips). Change this to Clip in viewer.
16. Hold down the Shift key and click both the presets we created, displayed in the Presets list.
17. In the lower right corner is the Export button. Click-on it to start the process.
The rendering queue will now display both the AVID and iPad renders and start the processing. (Fig. 20)
Fig. 20 Render queue showing the AVID export preset being processed.
The rendering process will start with the first preset, then move down the list. This will take a few minutes to process, but we can still continue working on other clips.
Looking at The Output:
We now need to look at the exported files from the program and see what the processing actually did.
1. Navigate to the Avid Render folder on your hard drive. You should see something similar to what's shown in Figure 21.
Fig. 21 Files exported for AVID editing.
The files in the bin are AVID-specific. The MXF folder contains the original clip, now converted to something that AVID understands. Actually, you can drop these clips directly into the AVID MediaFiles folder on the editing hard drive and AVID will open them straight away. The AAF folder contains the audio files for that clip. The AvidExport.ale file (ALE stands for AVID Log Exchange) is a database file that tells AVID, and other editing software that supports this format, about these clips. Having these files in this configuration makes AVID editors very happy. A lot of their work is done. If you then navigate into the Dailies folder, the QuickTime .mov file should be there with the original clip name. This is the iPad version. Double-click on it and you can view your work with the QuickTime player. Notice the burn-in information at the bottom of the clip (Fig. 22). This is very useful for those who have to make notes and reference parts of the clip.
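Schematically, the hand-off from this exercise lands in a structure roughly like this (folder names follow the steps above; actual clip names will vary):

    Foreign_Exchange-Avid-Renders/
        MXF/              (converted video media; AVID reads these directly)
        AAF/              (the audio files for each clip)
        AvidExport.ale    (AVID Log Exchange database describing the clips)
    Dailies/
        <clip name>.mov   (H.264 QuickTime with burn-ins, for iPad viewing)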
Fig. 22 A frame from the QuickTime exported file.
Now, in reality, we would have transcoded (or flipped) these files into a folder structure much like what we discussed in the SCRATCH and Resolve tutorials. The iPad version would have gone into a folder named 'Dailies'. The AVID-centric files would have gone into the 'Editorial' folder.
Submission for grading
Check with your instructor on how this assignment is to be submitted for grading.
Exercise 18- REDCINE-X Pro LUTs
Objectives for this assignment:
• Learn how to do a quick color grade in REDCINE-X software.
• Save the color correction as a Look/LUT.
• Apply the Look/LUT to other clips.
⌛ This assignment will take approx.: 60 min.
You will need:
• Course Assets
• Personal hard drive
• REDCINE-X Pro software installed
Starting a New Project in REDCINE-X
The first part of this exercise is a repeat of what we did in Exercises 16 and 17. Lots of learning has gone on since then, so we'll refresh the process.
1. Make sure your hard drive is connected and available on your computer.
2. Launch REDCINE-X PRO software.
3. Click-on the REDCINE-X PRO menu and select Preferences.
4. Make sure Automatically Save RMD Files is checked. Refer to Figure 1.
Fig. 1 REDCINE-X Preferences setting.
5. Click-on the OK button to confirm the change.
This program setting allows any Looks/LUTs to be saved for exporting to the camera itself. In the upper left of the interface are the File Browser tools. This seems fairly standard, as we have seen with programs like Resolve and others.
6. Locate your hard drive and Twirl-down the drive link.
7. Twirl-down the Course Assets folder and then the RED Assets folder.
8. When you click-on the RED Assets folder, the program will automatically load the clip files in that folder into the bin to the right. (Fig. 2)
Fig. 2 Drives and files browser in CINE-X.
We'll be working with the first and third clips in the listing: A010_C003_1204RN and A010_C033_120404.
9. Double-click on the A010_C033_120404 clip and it will open in the Preview window.
This shot was filmed in 2K at a high frame rate, so it appears to be slow motion. The young lady walks to the left, and the camera pans with her as she sits down. This is what the man, reading his book, was distracted by. Looking at the scopes (Fig. 3) above the image, you can see there are several issues.
Fig. 3 REDCINE-X scopes display.
First, the image is flat looking and there's a lot of blue in the highlights. (Fig. 4) We'll do a quick one-light, then save that LUT for application to another clip, shot in the same general area for a different part of this scene.
10. Click-and-drag the playhead below the clip to the point in the shot where she's seated and looking more towards the camera. Refer to Figure 4.
Fig. 4 REDCINE-X Preview monitor display of the clip.
On the right side of the program window are all the controls for processing the image. Select the Look Tab and all the tools will be revealed. (Fig. 5)
Fig. 5 Look tools collapsed.
11. Twirl-down the Look:Image bar to reveal the color tools.
The first two items in the listing are Color Space and Gamma Curve (Fig. 6). The default settings are RedSpace for both. This is RED’s older color technology used for the RED One camera, which this scene was shot with. The other selections within these menus are better color science for RED footage, but only available
for use with footage from the RED Scarlet and Epic cameras.
TECH NOTE: RED Cinema has done a great deal of work with color science as it relates to their cameras' image files. They have offered all this effort for free. The science behind these color spaces is solid and worth doing some research about. Because the .R3D raw image files are proprietary, it's in their best interest to do whatever it takes to pull the best image possible out of these files. As a secondary note, the footage you are working with is from a RED One (pre-MX chip). The newer Epic and Scarlet cameras are able to use the latest color space RED offers. You would select those if using the newer cameras.
Fig. 6 REDCINE-X Look:Image tools.
12. Click-on the Gamma Curve drop down menu and select REDspace.
13. Click-on the Color Space drop down menu and select REDspace.
Scroll down further in the list of tools and
14. Twirl-down the Look:Lift:Gamma:Gain bar to reveal the color wheel tools. (Fig. 7)
Fig. 7 REDCINE-X Color Wheels tool.
These look familiar from previous exercises. Using the knowledge you have gained from working with Resolve and SCRATCH, adjust the Lift, Gain and Gamma controls to open up the luminance range of the image.
15. Adjust the Lift and Gain controls by using the sliders below the color wheels.
Once you have set the black levels close to the 0% level, raise the Gain so that the highest point in the green trace is just about at 80%. The blue trace should now be off the top of the 100% line. You noticed that the blue trace for the highlights is quite a bit higher than the other color traces. This trace represents the white wall in the upper right corner of the shot. The wall is being lit by outside daylight streaming in from windows high up in the space where this was shot. That blue tint, and the slight green tint, need to be corrected in the flesh tones.
16. Scroll-down the tool area to reveal the color wheel slider tools.
These are more intuitive controls for color balance, and easier to use with a mouse than the color target wheels. The first adjustment is to bring the blue level down.
17. Click-and-drag the Gain blue slider to the left, reducing the blue cast in the highlights. Bring it down to around 80% on the RGB Parade scope. Refer to Figure 8.
Fig. 8 REDCINE-X Gain sliders
The image looks quite a bit better already. There are some minor adjustments we can do with the Gamma (mid-tones) that will give the overall image some lift.
18. Click-and-drag the Gamma Global slider to the right. This will lift or brighten the mid-tones. Bring it up to a readout of .116
The blacks could use just a bit of work now. Using the Red Lift slider,
19. Click-and-drag the Red Lift slider to the left, reducing the red to bring it down to match the green and blue.
If you want to see a better representation of this clip's image, just above the image is a drop down quality menu.
20. Drop down the menu and select Full. Refer to Figure 9.
Fig. 9 REDCINE-X Render Resolution selector menu.
Saving The LUT or Look For Later Use
All the color correction settings are now set for the one-light. Saving those for later use is very easy.
1. Scroll-up the tool area to the top where you see Look Presets.
2. Twirl-down the Look Presets tool bar to reveal the Look/LUT listing. (Fig. 10)
Fig. 10 RED Cine-X Look Presets.
In the lower left corner are the tool icons to open 📁, add '+' and remove '−' presets. (Fig. 10) There is also the Apply to: drop down. This allows you to apply any preset to the clip in the viewer or to all the clips in the timeline/project. We must first create a new Look.
3. Click-on the + icon.
4. Name the preset Atrium.
All the settings within the Look tools are now saved under this one name.
Adding Preset Look/LUT To Another Clip
Now the test. Will this LUT work for other clips filmed in the same set area, on the same day? Will the settings be close enough to save time, even with minor tweaks needed, or will it be so far off that it would be faster to do it from scratch?
1. Click-on the A010_C003_1204RN clip in the Browser on the left side of the screen to make it active.
2. Double-click on the A010_C003_1204RN clip thumbnail to the right of the Browser listing, and the clip will open in the preview window.
3. Click-on the Atrium preset you just created, in the Look Presets listing window.
4. Click-on the Apply button in the Look Presets listing window.
5. Using the play head just below the clip, scroll into the clip to a frame where you can see his face clearly over the book.
What's the verdict? Look better? Worse? Even close to what is needed? I would think not. Even though these shots were done just 40 feet from each other in the same set area, the lighting is different enough that the LUT preset did not even come close. However, in this particular short movie, more than half the scenes take place where the lady is sitting. The rest happens where the man is reading foreign language books. You could save time by creating one LUT for each area.
Exporting Look/LUT For Use in Camera
These looks or LUTs can be exported from the software and then imported into cameras that will accept the uploading of such a file. RED cameras are able to use LUTs for both recording and for monitoring. Watch this video about the workflow for getting a LUT back into a RED camera.
A word of caution here. Think twice about using a LUT to alter the color setting for the recorded image on a camera that records RAW images. The beauty of recording RAW is the great latitude you have in post to correct the image. If you 'burn in' the look at the time of recording, it will be much harder to undo those settings in post; possibly impossible without doing great damage to the image.
The safe bet is to load the LUT into the camera and use it to drive the video output to the on-set monitor. That way, the image seen on set will represent the future color corrections that might occur in post production. To save a Look/LUT for import into the camera,
1. Click-on the original file we color corrected in the Browser listing: A010_C033_120404. Does it still look color corrected? If not, apply the Look we created by,
2. Double-clicking on the A010_C033_120404 clip thumbnail to the right of the Browser listing, and the clip will open in the preview window.
3. Click-on the Atrium preset you created, in the Look Presets listing window.
4. Click-on the Apply button in the Look Presets listing window.
5. Right-click on the clip's thumbnail in the browser listing. (Fig. 11)
6. Click-on the Show RMD in Folder menu selection.
Fig. 11 RED Cine-X Show RMD in Folder menu selection.
If the RMD file has been saved, you will see a file path window open. (Fig. 12)
Fig. 12 File path for RMD file.
The RMD file is saved with the camera original file. This file can be saved to the camera's memory card and accessed once the card is re-mounted in the camera. There is a menu setting in the camera that allows this RMD file to be uploaded to the camera and selected for reference when outputting the image to the on-set monitor.
In Resolve, SCRATCH, AVID, etc., there is a specific output setting referring to LUTs in either 2D or 3D. Because REDCINE-X Pro is specific to the RED camera, it only exports its LUT/Looks in a format usable by their cameras. If you need a more generic LUT, maybe to share with other cameras on-set, then using another program will be the best way to approach the task. This wraps up our DIT overview of working with RED camera files in REDCINE-X. Resolve, SCRATCH and other on-set tools can work with RED files as well. But few of them are free, and none of them were written by the camera manufacturer, which some would say is closer to the knowledge well.
Submission for grading
Submit the RMD file for grading, per the instructor's directions.
12 Independent Workflow Keeping It As Simple As Your Production Requires
Independent productions often have a simpler workflow due to the lack of production complexity or lack of budget. Either way, it's all still data, and it's the culmination of everyone's hard work. If data is lost or mishandled on a $100 million production or a $50,000 production, it's still gone. It's no less painful to those involved. So how can we cut some corners but still maintain workflow and data integrity?
The workflow is still the same: Camera➠ Backup software➠ Several hard drive copies➠ Transcoding for editorial➠ Output
Here's what you can save money on in this workflow:
- Cloning. You can't risk losing the data during the copying process from the camera mag to the backup drives. You still need checksum software. Most are under $100. The one in Resolve, albeit slow, works fine.
- Two backups. You still need at least two copies of all your footage, so no savings on hard drives. However, most post editing situations are better off working from a RAID for mass storage. The second backup drive can be a less costly USB3, multi-terabyte drive.
- Transcoding software can be the NLE you're going to use for editing. This can be a significant savings in dollar outlay, and it's one less piece of software to learn. Even though the newest incarnations of most NLE software will ingest lots of camera original files without transcoding, there is typically a performance hit. The software and computer struggle to keep up with on-the-fly transcoding. It's frustrating to be slowed down when the creative juices flow. Most editors will transcode their clips even though the software can work with the camera files. As mentioned earlier in this book, the lower resolution files created for off-line editing can be relinked later with the higher resolution camera files for finalization.
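The checksum step above is worth demystifying. Below is a minimal sketch in Python of what verification software actually does; the dedicated tools add queueing, reporting and MHL logs on top of this core idea, and the file paths shown are purely hypothetical:

    import hashlib

    def md5_of(path):
        """Hash a file in chunks so large camera files don't exhaust RAM."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8 * 1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_copy(source, destination):
        """A copy is only trusted when source and destination hashes match."""
        return md5_of(source) == md5_of(destination)

    # Hypothetical paths, for illustration only:
    # print(verify_copy("/Volumes/MAG_A001/clip.R3D",
    #                   "/Volumes/BACKUP_1/A001/clip.R3D"))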
No matter what editing software you're going to use, something like the free version of DaVinci Resolve can make transcoding, the application of a one-light, and syncing audio files far less of a pain to deal with. Here's a real world example of what problems could happen: A production was shooting with the Arri Alexa camera, which was saving the camera files in ProRes 4444. This is fine for lots of NLE software except the one they were using---AVID. The DIT was ill-equipped to handle the project and only had Resolve in his software toolbox. At the time, Resolve did not export AVID-compliant files, even though it could produce .mxf files. The other two components, containing the metadata connection between the video and audio files, were missing. This is a huge issue for Editorial to deal with. At the end of the first week of shooting, the DIT still had not produced any files for editorial to use. They spent so much time trying to troubleshoot that the to-be-offloaded camera mags were stacking up. If the DIT owned, and knew how to use, AVID Media Composer, they could have processed all the files for AVID, in AVID. However, Media Composer is very time consuming to create dailies with burn-ins; Resolve could have still done that part of the deliverable with little pain. AVID is also very slow, compared with more capable transcoding software, when it comes to exporting files---painfully slow rendering, even though v.7.03 introduced background rendering. What this story reveals is that it's always best to work out the workflow before the production starts. It might be
a good idea to work in and stay in the editing software of choice. But you must work this out before the first day of production.
Data Management Warning
There is a bad trend in small, low budget productions to skip some of the steps. You will always pay for this somewhere down the road. This is nowhere as important as in asset management.
This book has stressed that proper folder and file structures are vital to keeping things understandable the further you get from that day's shooting. Weeks or months later, no one will remember what they were thinking about in the heat of battle. Always set up logical folder structures on your hard drives. Name your hard drives. Keep a log of what's on those drives. AVID is the industry leader in managing your assets. Their motto has long been 'let AVID manage the assets. You just edit.' Premiere and other NLE software allow the whole asset process to get very ugly. It becomes quite easy to have assets splattered all over multiple drives and locations and not even know it. Lose a connection to one of those drives and your project comes to an untimely halt until you fix the problem. It's guaranteed this will happen at exactly the wrong time. NEVER, never start the DIT part of the production without thinking through and implementing a course of action, and method, for organizing your assets.
Simple Indy Configuration
So here's what a totally workable indy DIT setup might look like:
• Computer- A good laptop is the minimum. As current a CPU as possible. 16 GB of RAM. FireWire 800, eSATA or USB3 connections. Thunderbolt would be preferred for maximum data transfers. HDMI or Thunderbolt display port for a second monitor. Don't underestimate the power of the Mac Mini computer. Properly configured, there are few computers out there with more bang-for-the-buck, and it's small.
• Storage- How much you need depends on the camera file sizes and the format for transcoding for Editorial. Then you need two copies, so double that number of hard drives. It's better to have several than to put all your eggs in one basket. If you figure it will take 16 TB to cover the camera assets for the show, then you need 32 TB of drives. For your on-set master, you could get away with several 4 TB drives, but I would get eight 2 TB drives for the backups that will go off-set. The chances of both the on-set drive and the smaller backup drive, containing the same data, failing are fairly small if you buy good drives.
To figure out how much drive space will be needed based on a specific camera, there are free, or cheap, iOS and Android apps that will help predict what amount of data will be amassed. A rough estimate can also be worked out by hand, as sketched below.
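A back-of-envelope version of that calculation in Python (the 1.25 hrs. of footage per day figure is the rule of thumb used elsewhere in this book; the data rate is a placeholder you must replace with your camera's real published number):

    def show_storage_tb(days, hours_per_day, rate_mbps, cameras=1, copies=2):
        """Total drive space (TB) a shoot will need, including backup copies."""
        hours = days * hours_per_day * cameras
        gb = hours * 3600 * (rate_mbps / 8) / 1000   # Mbps -> MB/s -> GB
        return gb * copies / 1000                    # GB -> TB

    # Hypothetical 20-day, two-camera shoot recording at 300 Mbps:
    print(show_storage_tb(20, 1.25, 300, cameras=2))  # ~13.5 TB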
These drives need to be as fast as you can get: 7200 rpm spindle speeds with 16 MB or more of cache RAM. They should be USB3 or Thunderbolt capable. If you can get drives with both, this is a bonus. Remember the discussion about DIT hardware and the bus speeds? The MacBook Pro I am writing this on has two Thunderbolt and two USB3 connections. Both the Thunderbolt connections share the same driver board. Same goes for the USB3 ports. If I hook two drives to either, the data flow is half of one port's speed. But if I hook one drive to Thunderbolt and one to USB3, the copy speeds will stay higher, very close to maximum potential specifications. When you have to move gigabytes on a daily basis, this increased data-pathway speed is very important.
• UPS/Power- A UPS (uninterruptible power supply) is a heavy beast to lug around. But if the power to your workstation is pulled/lost, that weight means nothing. Your laptop will continue on battery power, but the hard drives won't. And you should never power your hard drives off your computer. Getting the proper capacity, but not too big a UPS, is fairly easy to calculate. Add up all the watts or amps your equipment will draw, then add 100%; this gives you the size of UPS to buy (for example, gear drawing 400 watts calls for an 800 watt UPS). It's such a good feeling when the lights go out and you're still able to work without data loss or downtime.
• External monitor- This might be a luxury for some indy productions. Most laptops have average quality screens. Apple's Retina displays are quite good if calibrated. You will still want to get a decent monitor. There are a few in the $500 to $800 range that are quite acceptable for on-set work. As mentioned in the hardware part of this book, a 'real' monitor will set you back $2000+. They are well worth the price in the long run. Save your pennies and make that one of your first big purchases.
Minimum Software-
• Checksum software. DoubleData, ShotPut Pro, and others can be had for under $100. A key to these pieces of
software is their automated functions. On indy shoots, someone without any DIT experience is going to pinch-hit doing some of the functions. If the copying software you own can be configured so that when a card is inserted into the card reader it automatically backs it up, this is one less process to distract your already short-handed crew.
• 5DtoRGB. This little free bit of software will transcode DSLR and other H.264-based files to any codec you have installed on your computer (except AVID files). Great for creating H.264 and ProRes files. The paid version ($50 from the Mac App Store) does rendering in batches.
• REDCINE-X (free) if you're working with RED footage.
• Arri Raw Converter (free) if you're working with Arri Raw files. You will have to register with them to get it, but it's painless and they don't send any emails.
• Blackmagic DaVinci Resolve (free). If you're not going to output for AVID, this software will work as an acceptable 'core' to a DIT station.
• NLE software. Premiere is a minimum. If you're in an AVID-centric region, Media Composer is recommended. However, unless you see a need for it, most DITs don't have it even if the editors are using AVID.
• MPEG Streamclip (free). Another really good transcoding software that is efficient and exports good quality. You will have to buy the paid version to get batch processing functions---which is a must on-set.
• PluralEyes- This Red Giant software tool is amazing at auto-syncing separate sound and video files from almost any camera. It shines with DSLR files.
There are half a dozen other software utilities that you will want to gather as your DIT efforts expand. If you handle XDCAM footage, you will need a tool to un-wrap that codec. Sony has two that are either free or cheap. Get this extra software as needed to further build a professional tool box. Overall, don't make it complicated. Just make it a solid data workflow path that will get the job done, with safeguards in place assuring the integrity of the data files.
13 ACES Workflow
We touched on ACES in the codecs chapter because the over-arching ACES standards address a great deal of the issues of compression, bit depth, color space, etc. For the first time in the digital workflow, we have an all-encompassing process. It can start at the camera and follow through the entire pipeline to your hand-held device.
A simplified representation of this workflow looks like Figure 1. This video should be watched before continuing this chapter: ACES Workflow Overview. As well, watch this video.
Fig. 1 ACES workflow.
Based on what you just watched, the process breaks down to the following core points:
• IDT - Input Device Transform. This is the point where the camera files are converted to the ACES format. The IDT is largely driven by the camera manufacturers, in that they have the most accurate understanding of the inner workings of their codec. At this point, the media is within the ACES envelope.
• The 'media' will stay in this format for the rest of its journey. Understand that you will most likely transcode into a codec that is more 'friendly' to the editor's system. The editor can then relink back to the ACES files for color correction.
• ACES- The OpenEXR file format will contain and
maintain the quality of the image/file.
• RRT- Reference Render Transform. This part of the process is where any work that was done to the files (color, style) is preserved. This is all metadata, and the actual 'tweaks' to the image/file are now preserved in one place. As well, the RRT is 'tweaked' for the way we see color. ACES has such a wide potential for gamma and gamut display that transcoding down to a smaller color gamut is complex, with the possibility of altering the image look in undesirable ways. The RRT part of the equation works hand-in-hand with the core of ACES, creating an image that we can understand visually.
• ODT- Output Display Transform. This standard assures that the original look, created on a specific monitor and software, will look the same on the next monitor or display device. Once adopted, media will look as close to the original as possible if that device can support the ACES LUT. Of course, to work properly the device must have the ACES drivers or code built-in.
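Conceptually, the IDT, RRT and ODT chain is just a composition of transforms. The toy Python below illustrates only that structure; the arithmetic inside each function is invented for the example, and the real transforms are defined by the ACES specification:

    def idt(camera_value):
        # Input Device Transform: camera-native code value -> scene-linear ACES.
        return camera_value * 2.0               # stand-in math, not a real IDT

    def rrt(aces_value):
        # Reference Render Transform: scene-linear -> display-referred image.
        return aces_value / (1.0 + aces_value)  # stand-in tone curve

    def odt_rec709(display_value):
        # Output Display Transform: encode for a specific display (Rec. 709 here).
        return display_value ** (1 / 2.4)       # stand-in gamma encode

    # One pixel value's trip through the pipeline:
    print(odt_rec709(rrt(idt(0.18))))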
The strength of film stock is that the 'look' of the image is preserved in the original negative. The color correction phase of the film process further 'encodes' the creative vision of the original artists. Make a copy/duplicate of that finished negative and it will look like the master negative. This is not true today in the digital realm. Each re-encoding changes the color space and appearance. What you view the file on causes the most drastic visual changes. All this would be leveled out with the ACES standard. The ODT is the point where all the media is in one colorspace and color corrected; you can output to any format or file type. Think of this as the film negative that other 'prints' are struck from for distribution to theaters.
Some of the characteristics of ACES are:
• Image Container is OpenEXR, with support for many channels that can hold not only the wider gamut but considerable amounts of metadata.
• Color space greater than the gamut of the human eye
• 16-bit color bit depth (floating point)
• Ability to hold more than 25 stops of dynamic range
Remember the discussion we had in the chapters about color, how we see it and how it's rendered? There, you saw the CIE color space graphic with the Rec. 709 overlaid (Fig. 2).
Fig 2. CIE RGB chroma chart.
In Figure 3, the same chart is shown with the ACES color space displayed. It's virtually impossible to 'blow out' colors. It's that large.
Fig. 3 ACES color space compared to RGB varieties and DCI.
To learn more about ACES, presented in a straightforward manner, read: What is ACES?. And read: ACES-Academy Color Encoding Specification. The preceding information was taken from these articles. The importance of ACES going forward cannot be overstated. It will be part of the DIT workflow very soon.
OpenEXR files
“OpenEXR is a high dynamic-range (HDR) image file format developed by Industrial Light & Magic for use in computer imaging applications. OpenEXR is used by ILM on all motion pictures currently in production. The first movies to employ OpenEXR were Harry Potter and the Sorcerer's Stone, Men in Black II, Gangs of New York, and Signs. Since then, OpenEXR has become ILM's main image file format.” 'OpenEXR', Openexr.com
ILM gives this format away. It is implemented in most software that manages digital file formats. Created as an answer to the limitations of 8 and 10 bit image formats, EXR provides a container that can universally handle wide-gamut, compressed and uncompressed codecs, is universal across computer platforms, and is natively supported in Nvidia graphics cards.
The greatest hurdle to ACES implementation is the size of the data pile when done. OpenEXR files are big. 5K RED footage converted to OpenEXR creates a file of around 20 megabytes for each frame of video. It takes massive CPU power to churn through each frame during the conversion.
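To put that per-frame figure in perspective, a quick back-of-envelope calculation (using the roughly 20 MB per frame quoted above and 24 fps):

    mb_per_frame = 20                      # approx. 5K RED frame as OpenEXR
    fps = 24
    gb_per_min = mb_per_frame * fps * 60 / 1000
    print(gb_per_min)                      # ~28.8 GB per minute of footage
    print(gb_per_min * 60 / 1000)          # ~1.7 TB per hour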
Considerations For Implementation
Right now ACES is unevenly supported across the software spectrum, as might be expected for leading-edge technology. Take the three pieces of software you have used in this book. REDCINE-X does not have an ACES selection. The two reasons one can surmise for the lack of this ability within their software: they feel their color science is already good enough, or they don't want to take the time to create the supporting code (IDT) for each sensor in their camera line. But RED files can be opened and worked with in the other software, with one notable issue. DaVinci Resolve has the settings for the IDT and ODT but lacks the robust ability to process the files properly unless the camera manufacturer provides the IDT. As of v11 of Resolve, it supports their camera raw files (DNG), Arri, and Sony F65 and F55 cameras. Pulling REDRAW files into the software does not offer as great an advantage as if there were specific 'encoders and decoders' within the software.
SCRATCH and ColorFront are different animals. Their programmers have taken the time to build the ACES code into their programs. Almost any format can be 'adapted' to that container. RAW files will realize the greatest benefit, but compressed formats like ProRes 4444 and DNxHD will find greater latitude for color correction once in the EXR format. It will not harm any format to be converted. The EXR container will never restrict a file's chroma and luminance range, but a compressed file format already has limits imposed by the compressor and the bit-depth of that format.
Good tutorials on how to implement ACES can be found with these links:
• for SCRATCH v8.1 and newer
• for Resolve v9.x and newer
So what is the best way to implement this workflow if there are obstacles like this? Simply re-use one of the proven workflows we have in place already.
• Convert camera files to an NLE-friendly codec for editing.
• Convert camera files to the ACES (EXR) format during or after the production is done. This means setting the IDT to ACES and the ODT (what you are monitoring) to Rec. 709.
• Re-link the edited NLE files to the ACES files for color correction and final output.
• The final re-linked project can then be mastered to an ACES-compliant master file for creation of distribution copies.
Fundamentally, the core design of ACES is to create a master file, early in the file creation process, that will last for decades without concern that future improvements in picture quality (bit depth, raster sizes or color space) will outdate the ability for it to be decoded, re-used or played back on future systems. But for now, all the different 'end uses' of our media have different requirements. The magic bullet is this---always deliver a file that is as high a resolution and quality as possible to the head-end of any compression stream. ACES is designed to do this. Remember the old saying about data entry, “garbage in, garbage out.”
ACES is very important to follow right now. As it gains favor, and the bugs are worked out, it will become an
integral part of your DIT life. The TV show 'Justified' shot its 2013-14 season using ACES. This is the first episodic show to embrace this workflow. Several more have followed suit. The sci-fi feature 'Chappie' used the ACES workflow to help streamline the process. Chappie was shot with RED Epic cameras. The raw R3D (REDCODE) files were brought into the DCI-P3 color space to take advantage of the wide color gamut of digital cinema projection. (See more at: http://www.vizworld.com/2015/08/the-academy-promotes-aces-color-workflow-at-nab-2/)
Implementation of ACES is strongly in the hands of both the camera manufacturers and the software developers for the NLEs and other image handling tools. Once new updates to the ACES standard are released (and they are released about once a year), the down-stream users have to adapt their wares to the new features and changes. Like anything else in our digital world, it's in flux most of the time; but in its current state, very beneficial.
14 The Naked Workflow This is the most advanced workflow idea to date for data movement from camera to editorial. It’s proven under the rigors of full-on feature productions.
Daren Smith of Radar Mobile Studios spent months testing this workflow with the assistance and feedback of editors and colorists. The basic ideas are:
• to lighten the load of the DIT when creating assets for use in editing.
• that the 'look' of the files can be changed in 30 seconds if the DoP decides that his or her 'intent' needs to be altered. The word 'intent' is key and we will get into that.
• to stop the long-standing complaint from the DoP that their work looks terrible in post, and nothing like what they created on-set.
The biggest complaint by DoPs when they get into editing is 'this stuff looks awful'. It now falls, in a bad way, on the DIT. I will state this as a matter of fact right now: most people creating, handling and processing digital images have no clue what a LUT really is and how it is to be used. If you skipped ahead to this chapter because you could not resist the title, stop here and go back to the chapter on LUTs for a full foundation.
Here's how this Naked workflow works (refer to Figure 1):
Fig. 1 Naked workflow flow chart.
• Files from the camera are transcoded for editorial without the traditional Rec. 709 (one-light) luminance adjustments. Figure 1, number 1.
• A DoP and the DIT sit down, pick a shot and create an 'intent' LUT. This is a color corrected shot that represents the vision of the DoP. This specific shot and that LUT are saved into a folder named for the shot and scene. This 'side-car' LUT file, along with that corrected shot, is sent to editorial and DI (Digital Intermediate).
• Dailies are created with the Rec. 709 corrections and burn-ins per production's requirements. The LUT for this phase is a modified camera LUT. Figure 1, item 3. The director's or DoP's LUT is not applied to all shots. If you read the section on LUTs, you know why.
• Transcodes to VFX are .dng or .exr files (as requested) without any corrections. The sidecar LUT created by the DoP is sent to VFX along with the shot the LUT was created for. Figure 1, number 5.
Graphically, this looks like Figure 1. Stepping through the process with the software of your choice:
1. Camera and audio files are synced.
2. Transcodes are created; one for editorial and one for VFX. These are NAKED. They are not corrected in any way. If the camera shot in the Log-C, S-Log or REDcolor color spaces, they stay in those color spaces. The 'look' is not changed during the transcode to DNxHD or ProRes file formats.
3. A modified camera LUT is sent with the transcoded files. This will be a Contrast and Saturation adjustment LUT. It is vital that NO color correction be done before creating this LUT.
4. That modified LUT will be used to create the dailies. Dailies are not critical color-correct files, and the basic Rec. 709 correction will suffice.
5. The DoP-created 'look' or intent LUT for the one selected clip will be sent with that clip, in a separate folder. (Fig. 2) This will go to both edit and DI/Color correction. The DoP might spend time with the DIT creating specific LUTs for shots in different scenes. Those will be saved in the same fashion: a folder indicating the shot and scene, containing the reference shot and the LUT that created that look.
Fig. 2 DoP Looks folder with each look organized by the shot the LUT was created from.
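On disk, that might look something like the tree below (scene and shot names are hypothetical; .cube is one common LUT file extension, used here only as an example):

    DoP_Looks/
        Scene012_Shot04A/
            Scene012_Shot04A_reference.mov    (the graded reference clip)
            Scene012_Shot04A_intent.cube      (the LUT built from that clip)
        Scene014_Shot01B/
            Scene014_Shot01B_reference.mov
            Scene014_Shot01B_intent.cube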
Here are the underpinnings of this workflow and why it is so wonderful: time, speed and versatility. It also helps with problems in the signal chain.
Editorial and The Signal Chain
The signal chain is computers and monitors. The DIT has their computer and monitors that are set up, hopefully, to standards. The editor sits in front of their monitors; but these could be different monitors in brand or quality. They may be set up with different calibration tools so, in the end, they are close but not exactly the same as what the DIT is seeing.
If the DIT applies a Rec. 709 LUT to all the files before transcoding, that look is 'burned-in' at that point and cannot be changed unless all the files are transcoded again. If the DIT instead creates a basic luminance and saturation LUT, conforming the camera files to Rec. 709, then sends that file along with the transcoded files, the editor can choose to apply that LUT or leave it off. If the LUT is close but not quite what the editor would like, they can modify/adjust the LUT on their system in short order. This is a huge reduction in transcoding work for the DIT if changes happen. And they will.
The Blind Leading The Uninformed This has been and currently is one of the biggest problems with the traditional workflows mentioned earlier in this book. It is spawned by ignorance about what a LUT is and what it does. Few DITs and DoPs understand the following concepts: • A LUT contains the differences between what the shot looked like and how we want it to look. Read that again so you fully understand.
Think of it this way: there is a given amount of luminance, saturation and hue in the basic shot. Using color correction tools, you adjust that shot so it has full whites, blacks and the color/saturation to your liking. The LUT file ONLY contains the differences from the original file, the changes you just made. That's it. The LUT does NOT contain any of the color and luminance information from the original shot. Therein lies the limitation to using the same LUT on multiple shots. The next shot or scene might be significantly different from the shot you used to create the LUT. 'Adding' the changes/differences held in the LUT data file can make the next shot look totally wrong.
• A LUT is not sentient. It cannot think, nor can it analyze what it's working with. The LUT can't analyze the next image and adjust the whites down a bit if the white level adjustment it contains will push the whites in the shot out of range. It's absolute arithmetic. If the LUT is created with the following assumptions:
X = the amount of white in the original shot
Y = the amount of white 'change' needed to reach the desired white level
X + Y = LOOK
The LUT contains only the difference, Y. A new shot, Z, has a different white level to start with, so:
Z + Y ≠ desired outcome
Or we can look at it this way: 50 units of white in the original shot + 10 units of white change from the LUT = 60 units of white. The LUT contains JUST the additional 10 units. If the next shot starts with 60 units of white and you add the LUT's 10 units, it will equal 70 units of white total, and the shot will look 'over' bright.
• Any LUT is based on the shot it is created from. The common 'uninformed' workflow is the DoP creating a look they like on-set with the DIT, then saying "apply that to everything." With what you now know about how LUTs work, it is all too clear that the outcome is going to be inconsistent at best. When the DoP sits down in editing, they will be confronted with clips that won't look the way they wanted. Their 'intent'. The little sketch below makes the arithmetic concrete.
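A minimal illustration of that 'absolute arithmetic' in Python (toy numbers only; a real LUT is a table of values per channel, not a single offset, but the failure mode is the same):

    def build_lut_offset(original_white, desired_white):
        """The 'LUT' stores only the change needed for the shot it came from."""
        return desired_white - original_white

    def apply_lut_offset(shot_white, offset):
        """Applying a LUT is blind addition; it never analyzes the new shot."""
        return shot_white + offset

    offset = build_lut_offset(50, 60)      # graded on a shot with 50 units of white
    print(apply_lut_offset(50, offset))    # original shot: 60 units, as intended
    print(apply_lut_offset(60, offset))    # brighter shot: 70 units, over-bright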
DoP Intent
Here's what must be understood about the DIT's interactions with the DoP when creating a 'Look'. That look is the visual intention of the creative outcome when the edited project makes it to color correction. That's a ways down the road from the work done on-set. A colorist wants to know the intention of the director and DoP regarding the visual look of the project. If the DoP can communicate 'this is what I want this whole scene to look like' to the colorist, they are both working toward the same visual outcome. This is where that on-set 'look' LUT communicates the DoP's vision clearly.
When the locked-cut of the project hits the desk of the colorist, they will link the edited timeline back to the camera original files. Those files are just as they came from the camera; maybe flat, maybe full range, depending on the camera settings and file types. If the DoP sat down with the DIT and created a 'look' LUT for each scene or stage of the visual story, the LUT is tied to a specific shot in that scene; the shot from which the LUT was created. The colorist can now pull up that shot, apply the DoP's intent LUT for that shot, and instantly they know what the DoP is striving for. They then use that reference to match all shots. If there are different or modified looks for different parts of the storyline, the DoP can work with the DIT to create a new 'intent look' LUT. That shot and that new LUT are put in a folder, clearly named for the scene, and passed along. There is a huge plus to this workflow; if the DoP changes their mind (and they do, often), a new LUT is all that needs to be sent to editorial and color correction. It's a small file with the new intent inside. Unlike days of re-transcoding the files, a LUT can be applied in a few clicks of the mouse, and the whole workflow process keeps moving ahead.
It is important to do two things when implementing this workflow:
• Study and fully understand Figure 1.
• Be ready to explain to the DoP what you are going to do (and to editorial as well, for that matter). Be kind to the DoP's lack of understanding about this new digital workflow world. Educate them about what a LUT really is and why it will NOT work the way they imagined.
Now you know how to implement the most versatile and adaptable workflow to date. The outcome is that you won't be on the other end of the comment 'that DIT screwed up all the work I did'.
15 Professional Problem Solving
What few in the industry will tell you up front is that getting camera mags, processing files and handing off full hard drives is the easy part of your daily job. Figuring out how long it will take, how many hard drives you will need, etc. is time consuming and often a head scratcher. What follows are real world examples of what you need to figure out as part of your job. Accuracy is important. Miss a calculation and you might miss a deadline or underbid your job, causing you to lose money.
Each problem presented gets progressively harder to solve. The information is typically what you will get when presented with the job. Spend some time figuring these out. The answers are in Appendix A.
Problem One-
The production wants ProRes 422 LT, 1080p deliverables for editing. What is the data rate per second of ProRes 422 LT? How many minutes of material will fit on a 2 TB drive?
Problem Two-
The average shoot day produces 1.25 hours of footage per camera. Shooting an Arri Alexa at ProRes 4444, how much data is that per day on a two camera shoot?
Problem Three-
How many minutes can fit on a 256GB mag for the RED EPIC with the MX sensor shooting at 4K? Based on the average of a production creating 1.25 hrs. of footage per day, how many cards will a single camera give you in a day?
Problem Four-
You've been hired on a single day shoot for a commercial. They are shooting a single Sony F65 camera in SQ with a basic S-Log LUT applied. The footage is MOS. Deliverables are DNxHD36. You will be receiving 1.5 hrs. of camera footage. Call time is 7am and wrap is 9pm.
1. How much total storage will Production need to provide you?
2. Assuming you can transcode at 20 FPS, how long will the transcoding take?
3. When will you be able to make final delivery?
Hints: Think about the entire process and function of the DIT on-set. This is a higher-level question that has to be solved on every production you go on, no matter which camera is being used.
Problem Five-
You can't leave set until everything is backed up to your system array and two copies struck.
How long will you be waiting to leave set after the last cards are handed to you?
Knowns:
- Shooting RED Epic, 4K raw footage.
- Camera dept. is using 64 gig mags.
- Typically a mag is given to you 85% full.
- Off-load times have been running 45 min. per mag at 85% capacity.
- At wrap, you receive 2 mags. One is 85% full and the other is 35% full.
- You have one mag reader.
- The mag currently being offloaded has 15 min. left before it's done.
How long will you be on-set before all mags are backed up?
The answers to these questions are in Appendix A.
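The arithmetic behind most of these problems reduces to a few unit conversions. A small Python helper like this can be used to check your work (the rates you plug in are placeholders; always look up the real published figure for the codec, resolution and frame rate in question):

    def gb_per_hour(rate_mbps):
        """Footage volume in GB per hour, from a data rate in megabits/sec."""
        return rate_mbps / 8 * 3600 / 1000

    def minutes_on_storage(size_gb, rate_mbps):
        """Minutes of footage that fit in a given number of gigabytes."""
        return size_gb * 1000 / (rate_mbps / 8) / 60

    def transcode_hours(footage_minutes, process_fps, shoot_fps=24):
        """Wall-clock hours to transcode, given a processing frame rate."""
        return footage_minutes * shoot_fps / process_fps / 60

    # e.g., one hour of footage transcoded at 20 fps takes 1.2 hours:
    print(transcode_hours(60, 20))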
16 Final Thoughts Planning, Planning, Panicking The job is never what it seems in the pre-production meetings. Trust me on this.
The full scope of this reality will come with experience. Suffice it to say, be ready for anything you can imagine. For example, the production agrees to your services being limited to:
• supporting the camera department
• just backing up the footage to two backup drives and
• an occasional look-see at a few shots.
The backup part is going fine, but within a few days the Director and DoP are hanging at your workstation between setups and scene changes. They now want to look at every take. Then they are distracted by the lack of synced audio, so they ask for audio and video files to be synced. You're now spending a lot of time in another piece of software, something you paid good money for, and are now giving away for free. What you're doing is a service, and that service should be paid for. Then---editorial hears you're doing file syncing and they're behind, so they ask if you can do the job of an assistant editor and prep the files. It's time to find the Unit Production Manager (UPM) and ask for the ability to bill for these new services, or not do them at all. Then a big one: the production company brings on a RED camera and wants to see the dailies. Without a $4700 RED Rocket card, the processing of those files takes a long time with just CPU power. The production is very clear about this. They want the dailies and are willing to pay the additional fees. Where do you get a RED Rocket
card, as a rental, and how do you hook it to your system? All good questions and all of them take time on the phone.
TECH NOTE: The latest version of REDCINE-X now leverages GPU processors. It is limited to just a few compliant card models at this point, but the cost is far lower than the RED Rocket and the cards are readily available. This is a very smart marketing move for RED and will help your system budget as well. However, this transition is not fully reliable as yet. As an aside, Adobe has been leveraging GPUs for several revisions of their products; if the same card will work for both REDCINE-X and Adobe products, it's a two-for-one investment. Other software makers are also writing code to take advantage of these very fast, reasonably priced hardware solutions to speed the render process. To give you an idea of the issue you are confronted with when using RED footage without a RED Rocket card: a new-model MacBook Pro with lots of RAM and multi-core CPUs will process about 4 frames a second at its best. For 24 fps footage, that is 6 seconds of processing for every second of footage, so one 4-minute take requires about 24 minutes to transcode. With a RED Rocket card doing the processing, you will see the frames processed at 28-32 frames per second; the same 4-minute clip will be transcoded in under 3-1/2 minutes.
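To generalize that arithmetic, here is a quick sketch of my own; the processing speeds are the ball-park figures quoted in the note above, not benchmarks:

def transcode_minutes(clip_minutes, shoot_fps, transcode_fps):
    """Machine time, in minutes, to transcode a clip of a given length."""
    total_frames = clip_minutes * 60 * shoot_fps
    return total_frames / transcode_fps / 60

# A 4-minute RED take shot at 24 fps:
print(transcode_minutes(4, 24, 4))    # CPU only (~4 fps):       24.0 minutes
print(transcode_minutes(4, 24, 30))   # RED Rocket (~28-32 fps):  3.2 minutes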
As you venture into the world of asset management for productions, you will need to build contacts with others who might have hardware that you can rent. I say 'rent' in this case because RED Rocket cards are custom built, always seem to be backordered, and are only available from RED. If they have one in stock, you will be putting out close to $5,000 for the card, the PCI expansion chassis (to hook it to your laptop), shipping and such, to get up and running in 48 hours or less. Are you able to do that? Surprisingly, it's okay to say the word that is never supposed to be said on set: "No. Sorry, but no." Film production is a 'get it done' business, so having a plan and the resources to react to seeming emergencies makes your services even more attractive to production companies. We've covered simple things in this text that can stave off a crisis:
• making lists of your equipment kit
• having spares of cables, connectors and batteries
• having a backup of key software, even your whole system drive
• keeping a ready phone directory of people who can help you out
• having a services contract ready for the production company to sign
• testing your system and the software. Know your tools!!!
• staying very focused and professional
• owning up to your mistakes when you make them
• delivering what you promise
• managing your income. This is beyond the scope of this book, but if you're in business doing DIT work to make a living, run it like a business. If you don't know how, take a basic business accounting class. Do it sooner rather than later.
Set Politics
And the final and most important point of all: always be positive. Production sets are full of negative, 'woe is me' attitudes. People like to work with happy, positive people. I've seen many cases where a person is only sort-of good at what they do but has a wonderful attitude and personality; they get hired far more often than the highly qualified person who is grumpy and negative. Production days are really long, mind-numbing ordeals. It will take all you have to stay focused. Everyone around you will be just as stressed and tired, and it can be really hard not to bite someone's head off when things get tense. Take a deep breath. Think before you speak.
There is a pecking order on sets that should be respected. There are people you can and can't talk to. Learn these rules fast:
• The Executive Producers can talk with you. You can't talk to them.
• The Director can talk to you. If you need to talk with him, do it through the DoP or UPM.
• You can talk with the DoP and UPM anytime, and you should have a very open line of communication with both.
• You can't talk with talent. They can talk with you.
• Make sure you have good working relationships with the grips and electricians.
• Never give advice unless you're sure it's correct.
• This project is always the 'best project you've worked on' (you'll be forgiven for little white lies).
• There will be lots of people in high positions who don't know what they are doing. They are related to, or a friend of, someone higher up. Just go with the flow.
• Never tell someone else how to do their job.
• Never gossip.
• If crew call is 7:00 a.m. and you arrive on set at 7:00 a.m., you're late. That said, a DIT's work typically doesn't start until the first camera offload, unless you are providing other services or the location is such that you have to be there at call.
• Never show the footage to anyone who is not cleared. A gaffer, key grip, special effects supervisor, or perhaps the head of wardrobe or makeup can ask for a look without permission from higher-ups, but that's as low as you dare go on the organization chart.
• Never share any footage with anyone, even if the production has been over for a month or a year. It's not yours to show or share. Hell hath no fury like a producer or actor who finds a take of themselves loose on the wild 'interwebs' without permission. If it's tracked back to you, you are done career-wise and most likely headed to court for a financial draining.
Multi-Tasking Is The Name Of The Game
It's not enough that you're organizing and processing all the incoming and outgoing mags, setting up batches of offloads, managing the hard drives for backups, and keeping records; you will also be asked to create one-light color grades from time to time. This is a service that should be billed separately and agreed to before the production starts. There is a lot of discussion in the emerging DIT ranks about what to charge for and when.
A DIT's billing for services and equipment (in the DIT's case, more software than hardware) has a parallel with the grip, camera and electric department crews and their personal gear. In the industry it's called a 'kit' or 'box,' and hiring contracts will include the line item 'box rental.' For example, an electrician might own a special light kit. The production company might like to use it on some shots but isn't sure when or where. The day arrives and the DoP decides that the light package will be perfect for the shot. That electrician then gets a daily kit or 'box' rental rate: a pre-determined fee for the use of the equipment in the crew member's personal box, paid in addition to the daily employment rate (or 'day rate') for working.
The DIT's 'box' should be no different, but some producers and UPMs see it differently. They seem to feel that software is different from a physical light kit, and that because it's on the computer 'system' they rented, they should be able to use it at will. It's the physical versus the virtual: they can see and touch the lights, but the software, somehow, isn't real. Even so, if they want a one-light color correction and it was not agreed to at the outset, you can legitimately ask for a rental and service fee over and above the day rate you quoted. Some DITs, like ((RADAR)) Mobile Studios, have a one-price, everything-included format. They do custom pricing if the budget is tight, but their day rate includes anything and everything on their rig. This does make billing easier.
A Process of Patterns
It is strongly recommended that you set up a pattern of work and NEVER break it. With a pattern to follow, you can be interrupted, then come back and know where you left off. Here's an example of one DIT's workflow on a show that shot Arri Alexa cameras and required sound-synced dailies along with the backups. Each camera card, once offloaded, needed to be reformatted before returning to set.
- "Camera A reload" is called over the radio.
- The camera department puts red tape over the end of the mag, labels it with the camera and mag number, and locks the mag so it can't be recorded over accidentally.
(e.g., mag A014; typically red tape is camera A, blue is camera B, white is camera C.)
- The DIT (or data wrangler) moves to camera and retrieves the mag.
- The red tape is removed and the mag is inserted into the card reader. You do nothing else; once the tape is removed, the mag must go straight into the reader.
- The mag backup is started using the software of choice.
- Once the backup is complete, the data wrangler opens the target drive and verifies that the files are there (see the checksum sketch after this list).
- The mag is ejected from the reader.
- The record lock is turned off on the mag.
- The mag is re-inserted into the reader.
- Disk Utility (on a Mac) is used to erase the mag and format it as exFAT. This is the only time of the day when I won't let anyone talk to me. I won't answer phones or the radio. I'm erasing the master files from the camera magazine and don't want to make any mistakes at this point.
- The mag is ejected from the reader.
- Disk Utility is then closed. Don't leave this program open; misguided clicks can do catastrophic damage.
- Green tape is put over the pin-end of the mag, indicating it is now ready for re-use.
- The mag can be returned to set.
- Inside SCRATCH, the mag is loaded into a construct named for the mag, in this case A014.
- Each clip has 'fit width' applied so that the maximum image size is correctly fit into the frame.
- Each clip has a basic LUT applied, in this case the standard Rec. 709 for the given ISO of the camera.
The day's process is now on hold until lunch, when audio will give you a memory card or thumb drive with a copy of the morning's audio recordings.
- The audio files are copied to your system array and an additional backup drive.
- Within SCRATCH, media-manage the files so you can link the audio and video files automatically based on timecode.
- Go through each shot, making sure they are in sync. Find the slate and look at the waveform for a match. If it's off, tweak the position of the waveform to correspond with the visible slate slap in the frame.
Once the day is wrapped, you will get the last cards from the camera(s) and the final audio offload. These cards are ingested as before.
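The backup-and-verify step above is normally handled by ShotPut Pro or similar tools, but it's worth knowing what they do under the hood. Here is a bare-bones sketch of a verified offload; this is my own illustration, not any product's actual method, and the paths at the bottom are hypothetical:

import hashlib
import shutil
from pathlib import Path

def md5sum(path, chunk=1 << 20):
    """Hash a file in 1 MB chunks so large camera files don't fill RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def offload(mag_root, dest_root):
    """Copy every file from the mag to the destination, then verify by checksum."""
    for src in Path(mag_root).rglob("*"):
        if not src.is_file():
            continue
        dst = Path(dest_root) / src.relative_to(mag_root)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)              # copy, preserving timestamps
        if md5sum(src) != md5sum(dst):      # re-read BOTH sides to verify
            raise IOError(f"Checksum mismatch: {src}")

# offload("/Volumes/A014", "/Volumes/ARRAY/Day_04/A014")  # hypothetical paths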
With all the day's footage sized, LUTs applied and sound synced, you can start the render process that creates the dailies. If at all possible, I render the morning's work as soon as it is ready. That way, only the last half of the day remains to complete after the final cards are handed to you. Even with that split in rendering, one show I was on generated 4 camera-A cards and 2 camera-B cards at the end of the day. That amounted to 1 hour of ingesting and file prep for export, then 4.5 hours of rendering, which I let go overnight. There are other patterns you will create to keep yourself on track. What is valuable about this almost religious attention to a step-by-step methodology is that if something gets out of sync, it sticks out like a red flag, causing you to pay attention. And it can't be stated loudly or long enough: your basic, fundamental job is to DO NO HARM. Along with: don't screw up. Keep those two targets in your sights and you will do well.
You now have the digital dragon by the tail. Run with it.
A very good article about the DIT, by a DIT: Defining DIT: What You Need To Get Hired. http://nofilmschool.com/2013/10/dit-table-dit-professional
16.1 Paperwork, Forms and CYA Stuff
We’ve spent a great deal of time on the technical and
mechanics of the DIT. The one area that needs just as much attention is the business side. This is a business and you can make a good living. But like so many photographers, graphic designers and artists, the business side of the brain gets pushed back for the creative that we so much like to do. The road is littered with great, talented media creatives that could not manage day-to-day business affairs. In so many words, they didn’t manage their money and could not pay the bills. Being your own business owner is not for the faint of heart. A total look at the neuonces of running a business is beyond the scope of this book, however, here’s some really good advice handed to me by my mentors over the years. - Take a course in basic business and accounting from your local community college or community education program. If you don’t know how a checking account or basic banking functions work, you won’t survive as a business. The Better Business Bureau and the Small Business Adminstration typically have great resources for new business owners. And they are often free. - Work for someone else for at least a year. What can be learned on the job, without the pressure of running a business, is invaluable. There are tricks and processes that can only be learned from someone who has been there, done that. You will also get a really good grasp on what gear to buy and what not to waste you money on. - Be professional from day one. This production business is a very small, tight, network, and you should act as you would like to be perceived. Respect and you will be respected. Always be helpful and up beat. - Learn and understand the on-set politics and ediquitte. Nothing gets you in trouble, or fired, faster than breaking set ediquite. Paperwork and Forms Mentioned early on was the need for a written understanding between the production company and your business. Services and rates need to be clearly defined. If not, what was verbally agreed apon as a simple ingest,
sound sync, and transcode will grow into full-on dailies with color correction and more.
"We have a pre-production meeting form here at Radar. It allows us to pick the brain of the producer, trying to understand, and help them understand, what they need. Rule of thumb: if they say they're shooting 2 cameras but have a third camera body in reserve, we plan on media from three cameras. To date, we've never been wrong." - Darin Smith, CEO, ((RADAR)) Mobile Studios.
The basic pre-production form can be made as a spreadsheet listing the services you offer. For example:
- Ingest. From what camera, using what media? SD cards, SxS or SSD mags?
- Who's providing the card/mag readers? Typically the camera department, as part of the camera rental package, has readers you can use.
- Who's providing the offload hard drives? This is strictly a business call. If you're providing the drives and they will reimburse you later, you had better have a very good feeling about their funding and ability to pay you back; once those drives leave your hands, with their work on them, it's very hard to regain possession. If they are providing the drives (the preferred route), you will need to estimate the number and size of the drives. It is very important to specify the manufacturer and interface of those drives. If you don't, they will buy cheap USB 2 drives and you're doomed when it comes to offloading times.
- What is the workflow? Which editing software are they using: AVID (DNxHD), Premiere or Final Cut Pro X (ProRes)?
- Where is post happening? How are you getting the assets to them, and how often?
- Do they want dailies, and how are they to be viewed? H.264 is the standard codec right now for dailies delivery.
- Do they want burn-ins on the dailies?
- Do they want a custom LUT applied, or is the stock Rec. 709 LUT adequate?
- How are the dailies being delivered? Thumb drives distributed? Do you upload to iPads and distribute them?
- Is there going to be a prep/camera-test day?
- Is the DoP going to want you to set exposure? This is a big one right now. Some older DoPs are wary of digital exposure and want the DIT to set it. If so, you're going to need a data wrangler to handle card offloading throughout the day. That's an additional expense the production company has to be aware of and agree to.
- Lodging and travel. Even if the locations are within your state, it might be an hour or more drive to and from set, so lodging and mileage reimbursement will be a negotiating point. If you are required to fly or travel long distances, it must be stipulated who pays for the travel, room and food.
End Of Day Reports
The production company tracks all kinds of stuff: how many went through the lunch line, when the first shot got off, and so forth. You will have to submit an accounting of what you did as a DIT. This is typically simple data: how many cards/mags, how many gigs/TB of data were handled, and your in and out times for the day. Here's an example from a feature film where there was a data wrangler and a DIT:
Bob 0900 - 2300
Daren 0800 - 2400
Data - 287 gigs
Military time is preferred and easier to understand. The gigs of data represent the total handled from all camera mags, audio cards and any other recording devices you were handling. Typically these are text-messaged to the 2nd AD at the end of the day or first thing the next, as sketched below.
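Since the format never changes, the report is trivial to generate from a template or a few lines of script. A toy sketch of my own; the names and figures are placeholders:

def eod_report(crew_times, total_gb):
    """Format the end-of-day text for the 2nd AD. Times are military."""
    lines = [f"{name} {t_in:04d} - {t_out:04d}" for name, t_in, t_out in crew_times]
    lines.append(f"Data - {total_gb} gigs")
    return "\n".join(lines)

print(eod_report([("Bob", 900, 2300), ("Daren", 800, 2400)], 287))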
16.2 The Digital Dilemma
In 2013, the Academy of Motion Picture Arts and Sciences (AMPAS) released a two-part report on the ever-mounting problem of digital assets that comes with the move away from mastering on traditional film stock to all-digital. The overriding issues are twofold: how to preserve assets shot on film, and how to preserve new productions shot digitally. As someone who will be adding to the mountain of digital assets, you must have an understanding of the long-term storage of what you produce. We're talking time frames far past the 'next month' deadline of your production. We know film stock will last 100 years if held in a controlled environment. We also know the average life span of a hard drive is five years. If, after five years, the hard drive is still alive, what do you do then? What do you move those assets to? Will it last longer? A case in point for a current archive solution is LTO tape-based storage. LTO stands for Linear Tape Open. Magnetic tape has proven to last a long time on the shelf; like film stock, old technology is often the best solution. The issue with LTO is that every year or so a new version comes out, offering more features, and the standard only requires that drives be backward-compatible for two generations. If your facility uses LTO-3 and LTO-6 is the current standard, you can't play the LTO-3 tapes back on the new machines if the older ones fail. What does this mean for a production facility that wants or needs to keep what it has produced accessible for a long period of time? It has to implement a plan to periodically move all the older archives to the new storage-media version. This is a commitment to many man-hours and to capital investment in new tape stock and recorder/players, an ongoing financial overhead for the company long into the future.
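The compatibility rule is simple enough to state in code. A small sketch, my own illustration of the two-generation read guarantee described above:

def drive_can_read(drive_gen, tape_gen):
    """An LTO drive reads its own generation and the two before it."""
    return 0 <= drive_gen - tape_gen <= 2

print(drive_can_read(6, 4))  # True:  an LTO-6 drive reads LTO-4 tapes
print(drive_can_read(6, 3))  # False: LTO-3 tapes need an older drive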
Film Still Viable
What we know for sure is that film stock will last 100 years. Nothing digital comes close to that proven record. DVD media claimed a 50-year life span based on short-term testing; that has proven to be way off target. Ten years seems to be the maximum, even when the discs are meticulously handled and burned onto the best-quality stock. It is now clear, with current laser scanning technology, that a digital project can be moved to film stock successfully for long-term storage. Conversion to film stock is expensive short term, and rather inexpensive long term. To scan the digital assets of a feature film to film stock costs around $80,000. The long-term storage costs, factoring in the storage facility, man-power, etc., run about $1,200 per year. If you were to keep the digitally created movie in digital storage rather than on film stock, the costs are astronomical. First you need three backups of all assets; that will be several petabytes of hard drives. Every five years, all those drives have to be copied to new hard drives or some other storage medium. Average all those costs plus environmentally controlled storage, and it runs about $12,800 per year for a digitally mastered production to stay digital in an archive. This is an ongoing cost that, for most, is prohibitive. Independent productions are most at risk. They have neither the inclination nor the financial resources to take care of their projects once they are completed. As one indie filmmaker said, "We're on to the next project." Many have revisited an older project, dutifully stored on its own drive, only to find that the drive has failed or the operating system will no longer read the files. The project is, for all intents and purposes, lost forever. I highly recommend that you take the time to read the Digital Dilemma reports found on the oscars.org web site. If nothing else, you will gain a new appreciation of the issues now facing the digital media production industry. You will be asked to sign in to get the reports; the Academy uses this information to track interest in the research and nothing more.
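Using the figures quoted above, the trade-off is easy to model. A sketch of my own; the variable names are mine, but the dollar amounts are the ones in this section:

film_out_scan = 80_000     # one-time cost to scan a feature to film stock
film_per_year = 1_200      # yearly vaulting cost for the film elements
digital_per_year = 12_800  # yearly cost to keep the master digital

def cumulative_costs(years):
    """Total archive cost after a given number of years, each way."""
    film = film_out_scan + film_per_year * years
    digital = digital_per_year * years
    return film, digital

for y in (5, 7, 25):
    print(y, cumulative_costs(y))
# The curves cross around year 7 (~$88,400 film vs. ~$89,600 digital);
# over 25 years it is roughly $110,000 against $320,000.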
As the author of the reports puts it in his presentations, "I chose the word 'dilemma' for a specific reason. The word 'problem', by its nature, is something that might be hard to do, but obtainable. The word 'dilemma' is a choice between two paths of action where both have undesirable outcomes. In other words, the lesser of two evils." Digital IS more expensive, but it is the future. Someday, film will be gone. We need to think about this long and hard.
Appendix
Appendix A - Professional Problem Solving
Problem One
What is the data rate per second of ProRes 422?
126 Mb/sec. However, when being asked to deliver ProRes files to editorial, be sure to ask whether they want 422 or 422 LT. The LT (or light) version is 70 Mb/sec. The second gotcha is frame rate: the numbers above are for 23.97 fps. Corporate industrials, documentaries and reality shows typically shoot at 29.97 fps, which bumps the LT version of the codec up to 87 Mb/sec.
How many minutes of material will fit on a 2 TB drive?
Again, at what frame rate? We'll assume 23.97 fps and that they want the LT version of the codec: 79.3 hours of material.
Problem Two
The average shoot day produces 1.25 hours of footage per camera. Shooting an Arri Alexa at ProRes 444, how much data is that per day on a two-camera shoot?
We're assuming 23.97 fps in a 2K raster. Did you take into account that this is 12-bit footage (4+4+4 = 12 bits)? 247.5 gigs/day.
Problem Three
How many minutes can fit on a 256 GB mag for the RED EPIC with the MX sensor shooting at 4K?
This is not a trick question; however, you must remember that a 256 gig card, when formatted, has about 248 gigs of recordable space. That would hold 72 min. at 8:1 compression. Again, compression is key: RED cameras have several compression settings, which means you need to ask what the camera settings are to accurately figure out the data rates.
Based on the average of a production creating 1.25 hrs. of footage per day, how many cards will a single camera give you in a day?
One 256 gig card. However, this is very risky. If you can convince the camera department, have them send you a card every few scenes. This spreads the risk over smaller chunks of data, and it allows you to work on footage throughout the day rather than getting all of it at the end of the day, which would mean working all night to back up and process the data.
Problem Four
You've been hired on a single-day shoot for a commercial, shooting a single Sony F65 in SQ mode with a basic S-Log LUT applied. The footage is MOS, deliverables are DNxHD36, and you will receive 1.5 hrs. of camera footage. Call time is 7 a.m. and wrap is 9 p.m.
1. How much total storage will they need to provide you?
239.7 gigs for the SR footage and 23.3 gigs for the DNxHD footage: roughly 260 gigs for all footage. Three backups are required, so three 500 gig drives. Time to copy 128 gigs via USB 3 is about 50 min., BUT it's slower if you're making three copies at a time. A good rule of thumb for this calculation is 2 hrs. to duplicate the 260 gigs of files, plus 10% for overhead: approximately 2 hrs. 15 min. for just the backups.
2. Assuming you can transcode at 20 FPS, how long will the transcoding take?
1.5 hrs. of footage is about 129,600 frames at 24 fps. At 20 frames per second of transcode speed, that works out to roughly 6,480 seconds: about 108 min., or 1 hr. 48 min.
3. When will you be able to make final delivery?
This part of the question is designed to make you think about your contract with the production company. The last cards arrive at the 9 p.m. wrap and the backups alone take over two hours, so if you promised delivery at the end of the shoot day, the answer is no, you won't make it. You could, however, promise mid-day the following day. Once you put the render into the queue, it can process overnight IF your system has proven to work without error, unattended. Otherwise, it will be a long, sleepless night.
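Pulling the Problem Four arithmetic together in one place; this sketch is my own consolidation, using the copy speed, 10% overhead and 20 fps transcode rate assumed in the answer above:

footage_hours = 1.5
shoot_fps = 24
transcode_fps = 20
total_gb = 260              # SR masters plus DNxHD36 deliverables
usb3_gb_per_min = 128 / 50  # ~128 GB in 50 min over USB 3

copy_min = total_gb / usb3_gb_per_min * 1.10   # three simultaneous copies, +10%
transcode_min = footage_hours * 3600 * shoot_fps / transcode_fps / 60

print(round(copy_min))       # ~112 min of copying; the rule of thumb
                             # above pads this to about 2 hrs. 15 min.
print(round(transcode_min))  # ~108 min of transcoding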
Problem FiveYou can’t leave set until everything is backed up to your system array and two copies struck. How long will you be waiting to leave set after the last cards are handed to you?
the hard drive heads are now at maximum rate. Even if data could be written faster, the mechanics of the drive won’t allow for it. The typical process is to copy the camera mag to the system array first. Then make the other two backups. With it being the end of the day, we’re assuming that you have made the additional two backups for all other mags during the day and the only ones needing full, 3 drive backup are the one currently downloading and the two new cards. It would be advisable to start the other two backups of the current mag right after it’s done. Then the next two mags, strike the three copies at the same time. With the system copying to three drives at once, the process down a bit as well. About 10%. So what’s the answer? 15 min + 55 min for the current mag. 55 min. for the next mag. 25 min. for the final mag. Total time= 150 min. or 2 hrs. 10 min.
Knowns:
- Shooting RED Epic, 4k raw footage.
- Camera dept. is using 64 gig mags.
- Typically a mag is given to you 85% full.
- Off load times have been running 45 min. per mag at 85% capacity.
- At wrap, your receive 2 mags. One is 85% full and the other is 35% full.
- You have one mag reader.
- The mag currently being offloaded has 15 min. left before it’s done.
How long will you be on-set before all mags are backup up? This brings another reality into the mix. Head contention and connection pathway saturation. As you remember, head-contention is where you’re system is reading and writing at such a rate that the mechanical movement of
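And the same treatment for Problem Five; this is my own restatement of the arithmetic, using the 10% multi-drive penalty assumed above:

base_offload = 45     # min per mag at 85% capacity, single copy
multi_penalty = 1.10  # copying to three drives at once runs ~10% slower

current_remaining = 15                               # array copy already running
current_extra = base_offload * multi_penalty         # its two backup copies
full_mag = base_offload * multi_penalty              # 85% mag, 3 copies at once
part_mag = base_offload * (35 / 85) * multi_penalty  # 35% mag, 3 copies at once

total = current_remaining + current_extra + full_mag + part_mag
print(round(total))  # ~134 min; the answer above rounds each step up to 150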
Appendix B - Post House Specifications
The following post-house spec sheet gives you some idea of the requirements post houses have. It's fairly well laid out, and they take seriously one of the last lines on the sheet: "If anything is unclear, contact us."
WORK DIGITAL TECH SPECS 2013
THIS OUTLINES OUR TECHNICAL SPECS FOR: RED CAM / EPIC RED / PHANTOM / CANON 5D & 7D / ARRI ALEXA.
IT IS ESSENTIAL FOR US TO RECEIVE BACKUP DRIVES AS OPPOSED TO MASTERS, AS WE CANNOT BE HELD RESPONSIBLE FOR DATA LOSS / DRIVE FAILURE. THE MASTER COPY MUST ALWAYS BE SENT TO THE POST HOUSE!!!
WE REQUIRE MAC FORMATTED HARD DRIVES WITH FIREWIRE 800 CAPABILITY. THIS ALLOWS THE FASTEST IMPORT POSSIBLE. PLEASE NOTE A STANDARD USB CONNECTION TAKES UP TO THREE TIMES AS LONG FOR US TO IMPORT AND WILL SEVERELY DELAY THE LOADING PROCESS.
ON THE SHOOT, PLEASE ENSURE THE CAMERAMAN STOPS THE CAMERA AFTER EACH TAKE TO CREATE A NEW FILE FOR EACH TAKE.
• EACH INDIVIDUAL FILE SHOULD HAVE A UNIQUE FILENAME AND TIMECODE THAT WILL NOT BE DUPLICATED AT ANY POINT DURING THE SHOOT.
• ALL RUSHES MUST BE TRANSCODED TO DNxHD 36 .MXF FILES BY THE DIT OR POST HOUSE. THIS IS THE FASTEST METHOD FOR US TO BEGIN LOADING THE RUSHES. CERTAIN CAMERA FORMATS (EG PHANTOM AND R3D) CANNOT BE INGESTED BY THE AVID AND MUST BE TRANSCODED.
• THE MXF FILES MUST BE AN EXACT DUPLICATE OF THE MASTER RAW FILE INCLUDING ALL METADATA AND FULL UNTRUNCATED FILE NAME. IN PARTICULAR BE AWARE THAT DAVINCI SOFTWARE CAN ALTER METADATA.
• FOR THOSE CAMERAS THAT DO NOT CREATE TIMECODE NATURALLY (e.g. Canon 5D/7D) WE REQUIRE THE NATIVE RAW FILES TO BE STRIPED WITH TIMECODE BEFORE CONVERTING TO MXF.
• WITH REGARDS TO THE ALEXA, WE WOULD PREFER TO RECEIVE MXF FILES WITH REC 709, ALTHOUGH THIS SHOULD BE CHECKED WITH PRODUCTION BEFOREHAND.
• IF SHOOTING SOUND, IT IS VITAL THAT CONSISTENT DIGI-SLATES ARE USED. IF TAKES ARE NOT SLATED OR THE DIGITAL CLOCK IS NOT IN VIEW OF THE CAMERA IT WILL EXTEND THE LOADING TIME DRAMATICALLY. IN
ORDER TO SYNC THIS FOOTAGE AS SWIFTLY AS POSSIBLE, THE SOUND RECORDIST MUST SYNC THE SOUND TO THE CAMERA TO THE FRAME WITH MATCHING TIME-OF-DAY TIMECODE.
IF ANYTHING IS UNCLEAR, IT IS VITAL THAT EITHER PRODUCTION OR DIT CONTACT US BEFORE THE SHOOT.
Work
10-11 St Martin's Court London WC2N 4AJ
T + 44 (0) 207 845 6220 F + 44 (0) 207 240 5415 www.workpost.tv
Appendix C - Chapter Review Questions
The answers for the chapter reviews are below.
Chapter 1
1. C 2. B 3. True 4. False 5. A 6. B 7. True
Chapter 2
1. B 2. True 3. False 4. B 5. True 6. True 7. A 8. C 9. False 10. False 11. False 12. False 13. True 14. C 15. D 16. C 17. D 18. B 19. C 20. B 21. A 22. B
23. Intraframe has all 'I' frames; each frame is a full image. Interframe uses the last frame, or a series of frames, to create the difference and only saves that difference.
Chapter 5
1. A 2. False 3. A 4. C 5. False 6. B 7. False 8. A 9. False
Chapter 6
1. B 2. B 3. B 4. C 5. A 6. True 7. C 8. A 9. True 10. C 11. B 12. C 13. B 14. False 15. C 16. B 17. C
Chapter 7
1. False 2. C 3. B 4. C 5. True 6. False 7. C
Chapter 10, Section 3 - Resolve Review
1. B 2. B
3. Lift = black levels, Gamma = mid-range, Gain = white levels
4. False 5. A 6. B 7. A 8. True 9. A, B, C 10. C
Chapter 11
1. A
2. Lift = blacks, Gamma = mid-range, Gain = white levels
3. False 4. A 5. False
Appendix D - Web Links & Resources
The following links might prove helpful in your research for more supporting information.
DIT forums and general-topic web information: Lift Gamma Gain, DITuser, CreativeCOW.net. Dork In a Tent (a fun revamping of the DIT letters) is a blog updated on occasion by a very busy DIT; he uses another popular software, LiveGrade.
International Cinematographers Guild, Local 600: DITs fall under this union.
TAO of Color: although more of a color-correction web site and blog, they have a wonderful newsletter with links to great articles every Sunday.
Software
LiveGrade: software for the creation of LUTs that can then be uploaded into LUT boxes which reside between the camera and the on-set monitor. The monitor can then display the 'look' the director or DP is after.
RED: camera and image-handling software downloads.
RED GIANT: DSLR-specific tools (right now). Shooter Suite includes Pluraleyes for relinking audio, Denoiser II to clean up images, Instant 4K to up-convert footage, Frames to de-interlace images, and LUT Buddy for the creation of LUTs for use in post production.
Silverstack: software that can ingest, checksum and organize all assets on-set. It will export in limited formats, depending on the codecs you have installed.
Velarium: ingest and quick-export software. A new and developing offering.
ARRI: camera support software.
Assimilate: creators of SCRATCH, SCRATCH Lab and Play software.
Clip Browser: Sony software for use with their XDCAM file formats. View, edit and output.
ClipHouse: RAW/DNG camera file-handling software. Ingest with checksum, minor color grading (one-light) and export to H.264 and ProRes file formats.
Colorfront: Express Dailies, On-Set Dailies, and Transkoder software. Expensive and really solid offerings.
CORTEX Dailies: DIT on-set software. Same price and feature range as SCRATCH Lab; one of the only Windows OS based offerings available.
Episode: transcoding software. Very powerful if you need to move quantities of files from one format to another.
Imagine Software: creators of ShotPut Pro and several other great tools for data management.
Hardware
Light Iron: configures DIT workstations for sale or rental. However, they have been moving away from hardware and are now considered more of a DIT service provider.
PostBox Systems: has migrated the DIT station in a Pelican-type case to something still very portable yet crazy cool.
ditworld: they offer DIT services and complete system builds.
DIT workstation builds
MacBook Pro centric build
Windows centric build
Another Windows build
Breathing Life Into An Older Mac Pro
Glossary
ACES (Academy Color Encoding System)
A new solution for long-term archival of digital images within a very large color space, using a universal encoding format.
Basic job tasks for a DIT
Backing up camera digital files, making backups of those files, syncing audio and video, minor one-light color corrections, and transcoding for delivery to editorial and dailies viewing.
CMYK
The color model used in color references for printing; a color reference for reflective color rendition. CMYK stands for: C = Cyan, M = Magenta, Y = Yellow, K = Black.
Constant Bit Rate (CBR)
A compression setting that forces the compression software to use the same level of compression on each frame, no matter what the frame might require. This creates a constant quality for the entire file/shot/scene.
CPU
Central Processing Unit. The core processor of the computer.
Dailies
The term used to identify the quickly processed files that are viewed by the director and producer within the next day or two. Clips have audio and video synchronized and a simple one-light (Rec. 709) color correction.
DIT job
Handling digital assets and quality control of the camera image; fulfilling asset backup requirements for insurance. (See also Basic job tasks for a DIT.)
DNG
Short for Digital Negative. DNG camera files are comprised of a series of still frames, each one a complete image. This is typically found only in RAW camera recording formats.
DNxHD
The native codec format for AVID and Lightworks editing systems. It is very similar to ProRes in quality and file sizes.
DoP
Director of Photography, sometimes called Cinematographer. Responsible for the visual image of the program.
Editorial
Another name for editing.
exFAT
An update of the Windows FAT file system that allows files to exceed the 4 gig limit imposed by the older FAT32 format.
Flipped (aka Transcoded)
The process of changing a media file from one format or codec to another; for example, flipping a video file from AVCHD to the DNxHD codec.
Gain
The term used to indicate adjustment of the white levels in an image; the 'gain' control affects the upper third of the luminance range. The term 'gain' is also used with cameras, indicating the control that electronically increases the sensitivity of the camera; it also induces digital noise into the image. The film-world equivalent is changing film ISO/ASA.
Gamut
The term used to indicate the range of colors and luminance available. The higher the number, the better the image potential.
Gigabyte
A thousand megabytes equals one gigabyte.
Head contention
The term used to describe a slowing of hard drive I/O when the drive is being asked to read and write for several computer-driven tasks at once. The moving read-write head inside the drive can't keep up with the system's demands for data, so the system seems to slow down.
Lift
The term used to indicate adjustment of the black levels, or pedestal, of the luminance range of the image. Lift controls the lower third of the luminance range.
LTO
Linear Tape Open. A universal-standard tape backup system where digital data is stored on a tape cartridge. Noted for its long shelf life, but at the expense of slower access times.
Metadata
Information that describes something. Metadata about a camera file would include the f-stop, shutter angle or speed, frames per second, raster size, date and time shot, etc.
One-light
Very basic corrections to the overall color balance and luminance levels, typically to Rec. 709 levels.
OS
Short for Operating System: Mac OS, Windows OS, Linux OS, etc.
ProRes
Standard codec created by Apple for their Final Cut Pro editing software. Also friendly to Adobe Premiere and other non-linear editors. Until recently, not a good codec to use if editing on Windows OS systems because it's not natively supported.
Raster
The height and width of the camera image. For example, 1080p is 1080 pixels high by 1920 pixels wide.
RAW
A camera file format comprised of data 'about the image,' not the image itself. RAW images need special software to decode the data and create an image. These files carry very little compression and contain the 'raw' data off the sensor, giving them the most complete representation of the camera image possible.
Rec. 709
Industry-standard luminance range, with whites set to 100% and blacks set to 0%. The image will either be compressed or expanded into this range.
Shoulder
The upper curve of the exposure range: the area of the exposure density curve where light gray areas 'roll' to total white. Image courtesy http://fotogenetic.dearingfilm.com
sRGB
RGB and its derivatives are used for light, or transmitted, color rendering. Due to the standardization of sRGB on the Internet, on computers, and on printers, many low- to medium-end consumer digital cameras and scanners use sRGB as the default (or only available) working color space. As the sRGB gamut meets or exceeds the gamut of a low-end inkjet printer, an sRGB image is often regarded as satisfactory for home use. However, consumer-level CCDs are typically uncalibrated, meaning that even though the image is labeled sRGB, one can't conclude that the image is color-accurate sRGB. Much software is now designed with the assumption that an 8-bit-per-channel image file placed unchanged onto an 8-bit-per-channel display will appear much as the sRGB specification recommends. LCDs, digital cameras, printers, and scanners all follow the sRGB standard. Devices which do not naturally follow sRGB (as older CRT monitors did) include compensating circuitry or software so that, in the end, they also obey this standard. For this reason, one can generally assume, in the absence of embedded profiles or any other information, that any 8-bit-per-channel image file or any 8-bit-per-channel image API or device interface can be treated as being in the sRGB color space. However, when the correct display of an RGB color space is needed, color management usually must be employed. (Wikipedia, http://en.wikipedia.org/wiki/SRGB)
Terabyte
A thousand gigabytes.
Toe
The lower curve of the exposure density range: the area of image density that 'rolls' from dark gray to blacks with no detail. Image courtesy http://fotogenetic.dearingfilm.com
UHD
Ultra High Definition. Raster sizes above 2K.
Variable Bit Rate (VBR)
A compression setting that tells the compression software to analyze each frame of footage, determining the maximum compression that can be applied while still creating a stable image. The compressed file will display data rates that vary shot to shot, raising the data rate (reducing compression) or lowering it (increasing compression) based on the contents of the frame and scene.