Thursday, December 2, 2010
A DIT Tent that Comes in Camo? Uber-Awesomeness!
This is cool. Paying out $1,500 to Village Blackout is not.
So what's a cash-strapped Digital Imaging Tech to do? How about buying a Hunting Blind instead?
Is the Gaffer giving you a hard time 'cause your DIT tent is bouncing more light than his 20x? I suggest getting the Undercover 360 in Mossy Oak.
Working on a feature deep in the heart of Siberia? You won't go wrong with the addition of a Snow Camo Cover.
Shooting on the prairies of Nebraska and you need room for three? Don't sweat it. Beavertail's got you covered. Literally.
By the way, the above is probably the best picture ever taken and/or created in Photoshop.
I bought my camo rig at the Bass Pro Shop in Rancho Cucamonga for a buck and a quarter. It's got windows, a roof flap, and a hook for hanging up any fresh meat. Best of all, I don't have to move out of the wide shot when shooting on a golf course.
Hey! Check out those extras hanging out next to that bush! Nice!
I'm gonna spend the $1,375 I saved by seeing Scott Pilgrim vs. the World at the Academy in Pasadena over and over again 'cause it's the BEST MOVIE EVA! And if you haven't seen it you should be ashamed of yourself for not supporting a studio film chock full of creativity, originality, and awesomeness.
Thursday, September 30, 2010
Own an ALEXA and Put It On Your iPad... For Free!
Don't have 75K in your back pocket? Still at the bottom of the waiting list? Just can't get enough cool toys for your tablet?
Well give it up to ARRI once again. Now you can have your own ALEXA in cyberspace. Or better yet, download the ALEXA Camera Simulator right to your iPad. Now all you'll have to do is figure out how to attach that 27mm Master Prime.
Tuesday, September 28, 2010
What Does ALEXA's Log C Look Like?
I've been really nutso busy lately and am still deciding how I'm gonna make sense of the ALEXA ASA/Gain test I did a week back. But for now I thought I'd share a short clip of the ALEXA's Log C capture with and without a color-corrected LUT applied. This was a 3D LUT made in the field as a quick reference color grade and not a simple transform LUT. You can really see (even on the small, small screen) how much detail the ALEXA picks up in the shadow areas of the frame (pay close attention to our lady's hair).
ALEXA Log C + LUT from Adrian Jebef on Vimeo.
And check out this article from ARRI's website. It's a quick Q&A with DP Anna Foerster on her experience using the ALEXA for the first time.
Monday, September 13, 2010
ALEXA Initial Impressions
I have been lucky enough to get my hands on Arri's new ALEXA digital cinema camera. Actually, I've done three separate jobs with the ALEXA so far and am starting another one tomorrow. For about six months I had been waiting with bated breath in anticipation of this camera system. I had hoped it would deliver. I had pleaded that it would deliver on at least 75% of its promises. I can tell you right now, unequivocally, that this camera, simply stated: over-delivers.
Everything you may have heard is true... and then some. This is the new standard for digital acquisition. I can say, for the first time, that this camera looks as good as 35mm film. In fact, I doubt that anyone could tell the difference between properly lit and exposed 35mm negative and properly lit and exposed ALEXA Log C capture. This is the first digital cinema camera. Period. It does not look like a video camera. It is smooth and silky. Much like film. And has such unbelievable dynamic range that you will crap yourself if you haven't already.
Arri has outdone themselves. This is a company that I have always admired. That I have always put at the top of any list. This company does not sit still. They are constantly evolving and pushing the boundaries. However, they are German. And being German requires a certain level of commitment to excellence. Sometimes this commitment can be an Achilles heel. For example, the 35mm BL was a workhorse of a camera, but it needed to be replaced. So Arri introduced the 535A, a 35mm film camera built like a tank and weighing almost as much. It had an electronically adjustable shutter, speed ramping, ArriGlow, a huge and bright viewfinder, etc. All of the specs screamed perfection, but the camera was just too damn big and bulky. Have you ever tried a hand-held shot with a 535A and a 400ft mag? How about riding one on a Steadicam? The thing just kills. About ten years later Arri introduced the Arricam ST and LT. C'mon. We can say it together: the Arricam LT is the best 35mm sync-sound motion picture camera ever designed and built. Period. And so it was with the Arri D-20. A great design on paper, but not so much on location. The D-21 proved Arri's point that it could produce a digital camera that looked great. As long as it was a day exterior or you had an uber budget for G&E.
So here we are. It's September of 2010 and Christmas has come early. Arri has produced the standard. Nothing else even comes close. Out of the box the ALEXA is stunning. It is small, compact, well-designed, intuitive, simple, and elegant. It also happens to weigh quite a bit more than you'd think (I've gotten accustomed to calling it "The Brick"). But the weight adds confidence. This is a motion picture camera. It is not made of plastic and rubber. It will not crack or throw a tantrum. It was built like a tank (in a good way this time). And it has a soft side. The ALEXA's pictures are simply gorgeous. There is an incredible smoothness to this ALEV-III sensor that I've never seen before in digital cameras. The ALEXA is sharp, of course, but not like an F35. It is a new world. And we should be happy.
This was a bit of school-boy gushing on my part but I just had to get it out. My next post will include an ASA/Gain test that I just finished this afternoon. With ALEXA's native 800 ASA sensitivity I wanted to objectively measure any reduced dynamic range under middle grey when the camera's ASA is set lower (like to 400 or even 200). From just observing the camera in the field I suspect that any reduction of DR is going to be purely academic, but I'll have the real data up soon and we can take a better look.
One last thought... I know there are those out there amongst us who may be impressed with the ALEXA but are still waiting on the next generation of digital cameras from RED. I admire your loyalty. I'm sure the Epic or the 83.2K Super Iliad will be just awesome. I am intrigued. That being said, there is not a lot that I would change on the ALEXA. I do not see much room for improvement. Resolution? Sure. But as any film professional will tell you, there is more to resolution than mere pixel count. Dynamic range, sharpness, and contrast all affect perceived resolution. A higher K doesn't always add up. But one thing is for sure. When the history books are written our film scholars will no doubt note the impact that RED had on the future of digital cinema. Because without the RED ONE dropping into our laps those few short years ago we would never have Arri's ALEXA. Here's to more industry shake-ups!
And here are a few screen grabs from my last few ALEXA jobs...
These stills were captured from ALEXA's Log C out, routed through a Cine-tal 24" Class 1 HD monitor, color-corrected via SpeedGrade OnSet, and output to a QuickTime movie file. They represent a quick and dirty snapshot from a much more beautiful machine.
ALEXA @ ASA 320
ALEXA @ ASA 800
ALEXA @ ASA 400
ALEXA @ ASA 1600 with a Cooke S5i 24mm lens at T1.4, lit entirely with a Source Four backlight and a Kino Flo BarFly about 15ft from our star.
Saturday, August 7, 2010
Is Your Waveform Stuck in the Past?
Having a professional-grade waveform monitor on the set to analyze exposure is absolutely critical when dealing with digital capture. And if you were to take a survey of all the sets across the entire country on any one day you'd find that 99.9% of the waveform monitors used to judge exposure are set to an IRE percentage scale. Unfortunately, the IRE scale is simply the wrong tool to use for digital capture. Let me explain...
The IRE scale is named after the Institute of Radio Engineers, which was first formed in 1912. The IRE merged with another engineering society in 1963 and is now known as the Institute of Electrical and Electronics Engineers, or IEEE. IREs as a unit of measurement are designed to measure an incoming composite video signal in millivolts (mV) and then convert it to a 0-100 percentage scale where 0% is black and 100% is white. The reason IREs use a 100% scale is that a composite video signal is an analog signal. There are no discrete points in an analog signal; by definition it can be thought of as a continuous wave, not as specific points on a graph. An amplitude scale from 0-100 makes a lot of sense when dealing with analog signals because this scale gives you a relative understanding of video gain. The brighter the pixel in the image the closer to 100 IRE. The darker the pixel the closer to 0 IRE. But because this is a relative scale each luminance value in a pixel is simply an approximation when transferred onto an IRE % waveform. The pixel's luminance does not correspond directly with a specific IRE % because the pixel is not discretely sampled like it is in digital video. The IRE scale is still a great way to monitor your video images. As long as they are analog video images. Which is another way of saying that the IRE scale was designed and is intended for standard definition video only.
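If you're curious about the underlying voltages, here's a quick sketch of the IRE-to-millivolt math in Python, assuming the NTSC flavor of IRE where 140 IRE units span the full 1 V peak-to-peak composite signal:

```python
# NTSC composite video spans 1 V peak-to-peak across 140 IRE units
# (-40 IRE at the sync tip up to +100 IRE at peak white), so one
# IRE unit is roughly 7.14 mV. Back-of-the-envelope only.

MV_PER_IRE = 1000 / 140  # millivolts per IRE unit

def ire_to_mv(ire: float) -> float:
    """Convert an IRE level to millivolts above blanking."""
    return ire * MV_PER_IRE

print(f"100 IRE ~ {ire_to_mv(100):.0f} mV")  # ~714 mV of picture
```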
So why do we still use an SD scale for HD video? Simple: legacy. It's human nature. Once you get used to something, why change? Many of the engineers that were designing this stuff twenty years ago are still around designing it today. All of your brand-spankin' new Leader waveform monitors will come equipped with an IRE scale. We've all grown used to it. Our DPs are finally accustomed to asking that skin tones settle in at 40 IRE. What will they think when we tell them this whole thing has been a sham right from the beginning?
Don't sweat it, just correct it. In digital video a camera sensor's ability to capture differences in illumination is dependent on its quantization level, or bit depth. The higher the bit depth the more discrete steps of luminance a sensor can discern. An 8-bit sensor can make out 256 discrete differences in luminance. A 10-bit sensor 1024. 12-bit 4096. And 14-bit 16,384. Depending on the bit-depth range, each pixel on a digital sensor will assign a discrete value for every step of luminance difference within a scene. This number is known as a coded value.
When dealing with digital video you want the ability to use a waveform monitor to identify discrete differences in luminance within a scene. A coded value scale is the only scale that will give you these discrete readings. For most applications a 10-bit scale is standard and user-friendly. Again, there are 1024 discrete steps of luminance in a 10-bit image. Looking at a waveform coded value scale we would find 0 at the bottom of the scale representing total black and 1023 at the top of the scale representing total white. This scale will usually also be identified with 100/75/50/25/0% markings. A coded value of 940 will correspond to 100%. 721 is 75%. 502 is 50%. 283 is 25%. And 64 is 0%. Here's a scale that includes both IRE % and 10-bit coded values for Sony's S-Log gamma curve.
We can see that these 10-bit coded values get close to an IRE % scale. But for what I do, close is not good enough.
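If you like your conversions in code, the percentage-to-coded-value mapping above is simple arithmetic over the legal 10-bit range (64 to 940). A quick sanity-check sketch, not a vendor tool:

```python
# Map between a percentage scale and 10-bit coded values using the
# legal range quoted above: 64 = 0% (black), 940 = 100% (white).

BLACK_CV = 64
WHITE_CV = 940

def percent_to_cv(percent: float) -> int:
    """0-100% level -> 10-bit coded value."""
    return round(BLACK_CV + (WHITE_CV - BLACK_CV) * percent / 100)

def cv_to_percent(cv: int) -> float:
    """10-bit coded value -> 0-100% level."""
    return (cv - BLACK_CV) / (WHITE_CV - BLACK_CV) * 100

for p in (0, 25, 50, 75, 100):
    print(f"{p:3d}% -> {percent_to_cv(p)}")
# 0% -> 64, 25% -> 283, 50% -> 502, 75% -> 721, 100% -> 940
```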
It's about time that we start embracing the coded value scale when we discuss luminance within a digital video signal. It is the only correct way to truly monitor HD video. One British company that insists on using code values on all of its waveform scales is OmniTek. They're propelling HD test measurement into the future where it belongs, not leaving it in the past where just about everyone else is stuck.
So the next time your DP asks for skin tones to hit 40 IRE let her know that they look fantastic at 420d.
Tuesday, July 6, 2010
Why the Weisscam is My New Best Friend
So you wanna shoot high-speed HD? Get a Phantom Gold, right? Sure, if you enjoy headaches. Now I don't want to knock Vision Research too much here, but truth be told: the Phantom line of high-speed digital cameras is a pain to work with. I like to compare a Phantom HD Gold to a 2-year-old toddler: everything's just dandy until you take away their ice cream or forget to perform a black balance every ten seconds.
These cameras can produce stunning, high-resolution images BUT you MUST baby them. Their sensors are extremely susceptible to temperature changes. This is why it is a requirement to constantly check your image and make sure it is clean before every take. If you don't, you'll end up with superbad mojo in post and even worse juju in life. I'm also not a huge fan of the Windows operating system that is the foundation of the Phantom's capture software. It works, but then it doesn't. Freezes. Reboot. Not a fan. Remember, Arri never had me boot up Windows XP in order to shoot 150fps on a 435. I really don't see the need for Vision Research to insist on such a weak interface.
OK, enough about the Phantom. Like everything else, it's a tool. Until it fails...
Which brings me to the Weisscam HS-2. As far as I know there are only two HS-2's in LA via Clairmont Camera. Tentative steps...
The Weisscam solves some of the Phantom's shortcomings by running an automatic black reference calibration that constantly, and on the fly, evaluates the dark noise in an image and adjusts for the best image quality. This is a huge advantage as there is no need to interrupt shooting to perform any "Wizard-of-Oz" techie tricks.
The Weisscam's workflow approach is similar to the Phantom's in that the camera is meant to constantly store frames (at a specified frame rate) in an internal RAM module while waiting for the operator to tell it to stop. Once a recording is stopped the preceding footage must be output to either the Weisscam Digital Magazine or via the camera's HD-SDI outputs, where it can be captured in real time (1000fps played back at 24) with an SRW-1, nanoFlash, or other HD recording device. The internal RAM buffer stores image frames as sort of free-floating information. None of it is actually recorded onto any media or hard drive. It simply holds your shot there temporarily until you send it out to a recorder or begin recording right over it again. That's why if the camera loses power you've also lost your shot. It was never actually stored anywhere, it was just present for a fleeting moment...
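Conceptually, that internal RAM behaves like a ring buffer. Here's a toy sketch in Python of the idea (the names are mine, not Weisscam's):

```python
from collections import deque

class HighSpeedRingBuffer:
    """Toy model of a high-speed camera's RAM buffer: frames are
    continuously overwritten until the operator stops, and nothing
    survives unless it's offloaded to real media."""

    def __init__(self, capacity_frames: int):
        # Oldest frames silently fall off the back once we're full.
        self.frames = deque(maxlen=capacity_frames)

    def capture(self, frame):
        self.frames.append(frame)  # recording right over the oldest frame

    def stop_and_offload(self, recorder):
        # Only now does the shot land on actual media (digital mag,
        # HD-SDI recorder, etc.). Lose power before this and the shot
        # never existed anywhere.
        for frame in self.frames:
            recorder.write(frame)
```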
That's precisely why there are two power inputs on the camera body. In case of a battery failure, any shot stored on the internal RAM buffer won't vanish into thin air like it would if you had to reboot Windows (hint). Power redundancy is a good thing to have in high-speed digital capture. Does the new Phantom Flex have two power inputs? Yes, I think it does. About time, boys.
And did I mention that the Weisscam comes with this small, super-sweet, touch-sensitive LCD remote Bluetooth controller so you can turn the camera on and off effortlessly when you've got it mounted on a 50ft Technocrane? Oh, right, VR also introduced their RCU. About time, boys...
Tech Tips with Your Nano
Here are a few observations from the field when using the nanoFlash on high-end digital cinema cameras like the Sony F35 and Arri D-21.
The nano is a very smart machine. It will automatically detect an incoming HD video signal and display its format at the bottom of the LCD screen. If you are unsure of what a camera is sending you via its HD-SDI out make sure you take a moment to check with the nano.
I've used the nanoFlash to record directly off a Sony F35's interface box with no problems. But the "monitor out" SDI outputs will NOT pass any Timecode or audio information. Take note because this is important. If you need embedded audio and TC then you must connect the nanoFlash to one of the SDI outs on the interface box. And if you are shooting 4:4:4 then your best bet is to separate the SRW-1 from the camera body, connect it to the SRPC-1 Video Processor (aka: The Toaster!), and use the monitor out SDI output. Just be sure to take note that this signal is set at a default 59.94i frame rate. This means that if you have set the F35 at 23.98PsF you will NOT be getting a true 24P signal coming out of the toaster. If you've enabled your nanoFlash to record PsF but have not removed the 3:2 pulldown from the 29.97PsF out you'll end up with motion artifacts.
Fortunately, you can remove the 3:2 pulldown after the fact to get your 23.98PsF back. But you're best off either setting the output format via the SRW-1's video menu or enabling 3:2 pulldown removal in the nanoFlash itself.
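If pulldown cadences make your head spin, here's a schematic illustration in Python of how 3:2 (really 2:3) pulldown spreads four progressive frames across ten interlaced fields, and why skipping the removal step bites you:

```python
# Four 23.98p frames (A, B, C, D) become ten 59.94i fields.
CADENCE = (2, 3, 2, 3)  # fields contributed by each source frame

def add_pulldown(frames):
    fields = []
    for frame, repeats in zip(frames, CADENCE):
        fields.extend([frame] * repeats)
    return fields

print(add_pulldown(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
# Paired into interlaced frames: AA, BB, BC, CD, DD. The BC and CD
# frames mix fields from two different source frames -- those are the
# motion artifacts you see if the pulldown isn't removed before
# treating the stream as progressive.
```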
Also of great interest is the nano's ability to accept a Timecode Trigger to begin recording. If you are working in a Record Run TC mode this means that the nanoFlash will begin recording when you hit your camera's record button (as soon as the TC advances). If you're working with a single-system (audio + video are one) this is a sleek and efficient process. If you're running a dual-system (audio + video are separate and will be matched later in post) then you're most likely using a Time-of-Day TC mode and you'll have to trigger your nano manually.
Some more tidbits: A 32GB CF card will easily hold more than 50min. worth of 23.98PsF material at 50Mbps Long-GOP (HDCAM SR anyone?). At 100Mbps you'll get about 40min.
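The record-time math is easy enough to sanity-check yourself. A rough sketch (real-world numbers come in a little lower once you account for audio, file overhead, and formatting, which is why the figures above are conservative):

```python
# Back-of-the-envelope record time for a CF card.
def record_minutes(card_gb: float, video_mbps: float) -> float:
    megabits = card_gb * 8 * 1000  # GB -> megabits (decimal, as card makers count)
    return megabits / video_mbps / 60

print(f"{record_minutes(32, 50):.0f} min at 50 Mbps")    # ~85 min ceiling
print(f"{record_minutes(32, 100):.0f} min at 100 Mbps")  # ~43 min ceiling
```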
You can record any Log Gamma modes with a nanoFlash including S-Log, Panalog, etc. as long as you're sending the nano a 4:2:2 signal. Your D-21 set to Log-C 4:4:4 will have you seeing funny things in post production.
You may need to add the XDCAM plugin to QuickTime Player in order to view nano QTs properly. Check: Sony Support or Calibrated Software or Perian.
Be sure to consult Convergent Design's approved media page when picking out CF cards to use with your nanoFlash. It never hurts to test something before you push the button and sink the Titanic.
Finally, the nano3D rig is right around the corner. Having two synced nanoFlashes for 3D television and commercial work will be just awesome. Plus I bet you can record 4:4:4...
Is the nanoFlash the Best Thing Eva?
OK, so this product has been on the market for about a year now. It was aimed at prosumer video shooters as a way to bypass a camera's internal, and most likely inferior, recording codec by capturing an HD stream via the robust Sony XDCAM codec at a very impressive max of 280 Mbps.
There are many things to like about the nanoFlash. But the most important is the fact that this little device (measuring approx. 4"x4"x1" and weighing in at under a pound) records your choice of QuickTime or MXF files (and a couple others) directly to solid state media. In this case, pro-level CompactFlash cards. What this means is that for a mere $3K you can enter the fantastic realm of a simple data-centric workflow and leave all your tapes (and tape decks) behind. Because the nano is built and engineered by an independent company, Convergent Design, you are not tied into any proprietary recording format or recording media. And because this unit accepts both SDI and HDMI (mini) inputs you can use it to record just about anything that will send out an HD signal.
A few tech specs: the nanoFlash is an 8-bit video recording device that delivers 4:2:2 color sampling at selectable compression bit rates. For max quality you want I-Frame at 280 Mbps. What does this mean for a 4K Digital Intermediate? Capturing at 8-bit instead of 10-bit cuts your code values from 1024 down to 256 per channel, which limits how hard you can push the image in the grade. And 4:2:2 won't give you all the color information that a 4:4:4 recording will. But if you've got the cash for a 4K DI then you're probably shooting film anyway.
So what about the small screen? Last time I checked the max bit rate of HDTV broadcasts was a mere 19.4 Mbps. Not to mention that what's sent over the air is usually 4:2:0 at 8 bits. The nanoFlash has the ability to deliver fantastic picture quality for episodics and commercials. 'Nuff said.
Why spend the cash on HDCAM SR when you can Velcro a nano on the side of your F35? Shooting Panalog on a Genesis? The nano can record it. What about RED? Though the R3D workflow has dramatically improved it can still be a pain. So if you're OK with 720p, why not dial in a good lookin' image via the RED Video Menu and capture it with a nanoFlash? And though the current Canon 5D Mark II doesn't output full raster images via its HDMI out, perhaps the next iteration will. And then, nano-it!
The truth is this little box can cut your post-production costs dramatically. With new high-end digital cinema cameras hitting the streets every day the nanoFlash can be your secret weapon. A file-based workflow is inevitable. It holds the promise of immediacy, redundancy, and affordability. Flash memory will get cheaper. CPUs will get faster. And 1s and 0s can already be infinitely copied and stored.
Panasonic had it right when they introduced their P2 workflow. RED got it right when they decided to record directly to CF cards. Everyone is excited about Arri's ALEXA because it can record Apple ProRes files directly to flash memory. And Sony may be late to the game, but they'll be updating their HDCAM SR format to record directly to solid-state media as well.
The thing is, the nanoFlash is already here, it's non-proprietary, it's inexpensive, small, draws almost no power (6.5 watts when recording), and works like a charm. No wonder I own two of them...
Thursday, April 15, 2010
Sony F35 Menu Control Over the Wire
Not many out there know this, but Sony added Ethernet menu support for the F35 through a recent firmware update. It's an intuitive and simple way to keep consistent control without the need for an MSU (master set-up unit). This will work on a Mac running OS X 10.5 and later or on a PC running Windows XP or Windows Vista.
To access the F35's menu system via a computer over an Ethernet cable you first need to define or find out your laptop's IP address. On a Mac, since that's what I use, open up "System Preferences" and click on "Network". Open up your Ethernet preferences and note or write in your IP address.
Then open up the F35's main menu system by holding down the click-wheel and pressing the VF Menu / Display button. Open the "Network" menu and choose "IP Addr Set". You need to enter your computer's IP address here, incremented by one. For example, if your laptop's IP address is 192.168.0.1 you should enter 192.168.0.2 as the F35's IP address.
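If you want to double-check the arithmetic, Python's standard ipaddress module happens to make the increment explicit (purely illustrative, nothing Sony-specific here):

```python
import ipaddress

laptop = ipaddress.ip_address("192.168.0.1")  # your Mac's Ethernet address
camera = laptop + 1                           # the F35 gets the next address up

print(camera)              # 192.168.0.2
print(f"http://{camera}")  # what you'll type into the browser below
```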
Now comes the fun part. Connect a crossover Ethernet cable from your computer to the camera and open up your web browser. This will work with Internet Explorer 6 or 7, Firefox 3, and Safari 3. In your browser's address bar type in http:// plus the IP address you entered into the camera. In our example above, you would enter http://192.168.0.2 and hit enter. Now, behold, the almighty awesomeness of navigating the menu from your laptop!
Note that while this is indeed pretty rad, any changes you make through this Ethernet menu tend to be a bit slow. If I'm painting with the color matrix I tend to select a channel and hold the "up" or "down" keys on my Mac. This seems to take the edge off and make it a bit easier. Of course, if you need direct control and production is willing to pay for it, get an MSU.
Wednesday, April 14, 2010
SRW-1 Colors
The Sony F35 recording to HDCAM SR with an SRW-1 has been the de facto camera of choice on most episodics shooting here in LA. With that in mind here's a little tip to trick out your "A" and "B" cameras.
You can color-code the LCD panel display on the SRW-1 by accessing the diagnosis menu. Simply hold down the "Home" key and then push the "System" button. Use the thumb wheel to navigate the menu and click on "LCD Color" near the bottom. This will give you access to a bunch of different character displays on the LCD. Note that once you make a selection you then have to manually dial in your color preference by mixing different values of RGB. It takes a bit of practice, but it's well worth the effort to pimp yer tape recorder.
Thursday, April 1, 2010
So You Wanna Be a DIT?
Though the International Cinematographers Guild, IATSE Local 600, has yet to officially recognize and support the Digital Imaging Technician classification, our friends over in Minnesota already do. Why pay an $8000 initiation fee when you can begin taking Introduction to Digital Imaging for a tenth of the price? Get certified! It's the quickest way to tenure.
Thursday, March 18, 2010
Video vs. Raw: Log vs. Lin.
As our world keeps replacing reality with 1's and 0's it's a good idea to get everyone on the same page when we discuss certain words and/or processes. I've found through experience that just because we have gotten accustomed to using terms like RAW and S-Log does not mean we actually understand the choice we're about to make. Here is a conversational rather than technical explanation to clear the air.
When it comes to digital acquisition we have three specific choices that we can make on-set which will directly determine the quality of our image and our ability to make changes to it in post-production. Do we record Raw, Log, or straight HD Video?
First we must decide in what form we want to capture our image. We have two choices here: Raw or Video. Raw is a digital camera sensor's unprocessed linear luminance values. Because this is just data, a Raw file will not include any white balance adjustment, color and saturation adjustments, gamma correction, or debayering (if the camera uses a Bayer-pattern sensor). Simply stated: a Raw file is NOT an image. It is just the luminance values from every pixel on the sensor. It is data: 1's and 0's. A digital Raw file is analogous to 35mm negative. Though you can interpret an image by looking at a film negative you must color time, print from the neg to a release stock, and finally project the print to view your image as intended. It is the same with digital Raw files. You can interpret an image from the coded luminance values in a Raw file but you won't actually see your image until it has been processed through either a camera's firmware (like the Sony F35, which is a Video camera, not a Raw camera) or post-production software (like Speedgrade DI or RedAlert).
Once a Raw file is processed to create an image it ceases to be Raw and instead becomes Video. Raw is data, Video is the image. RED R3D files are your Raw files, but once you open a R3D in RedAlert it becomes a HD Video clip. You will never be able to actually view your Raw file as an image. If you did, it would look something like this:
The reason that Raw is preferable as a capture choice over HD Video is that any processing manipulation that is done to your image during post will be done directly from the source data (each pixel’s luminance values). During the Digital Intermediate process any change to the Raw file simply transcodes the original values into the newly “timed” values while also adding other mathematical transfer functions to the data set such as gamma curves and gain reduction to achieve your overall “look”. This is often referred to as Rendering. If you decide to change your “look” all of your changes will be rendered from the original Raw file and not your previous rendered “look” thereby maintaining the true values found in the Raw file. This is why color-timing Raw files is considered “non-destructive” to your original captured image.
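Here's the non-destructive idea boiled down to a few lines of toy Python (single channel, made-up numbers and names, nothing from any real grading app):

```python
raw_values = [0.05, 0.18, 0.60, 0.95]  # stand-in for untouched sensor data

def render(source, gain, gamma):
    """Apply a 'look' -- a gain plus a transfer curve -- to source values."""
    return [min(1.0, v * gain) ** gamma for v in source]

# Raw workflow: every new look is rendered from the original data.
look_one = render(raw_values, gain=1.2, gamma=0.45)
look_two = render(raw_values, gain=0.9, gamma=0.50)  # re-rendered from source
# raw_values is never modified, so a full "reset" is always possible.

# Baked-in Video workflow: each new look stacks on the previous render.
video   = render(raw_values, gain=1.2, gamma=0.45)   # burnt in at capture
revised = render(video, gain=0.9, gamma=0.50)        # the original values are gone
```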
Here's our Raw file after color timing:
Color-timing from a HD Video source is destructive. Any changes made to your image are “burnt in” to your new “look”. Once these changes are made they cannot be unmade. There is no reset button. This is why some cameras are capable of recording in a log video mode. By recording in log the camera attempts to stretch and squeeze the image to capture as closely as possible each pixel’s luminance values within the context of a HD Video signal. But more on that in just a bit.
A great way to think of Raw vs. Video is to imagine constructing a house. A Raw file is your blueprint. Though the overall shape and size of your new home is set you still retain the ability to make a wide range of choices. From adding closets to raising the ceiling height any changes can be redrafted and visualized before construction begins. With Video you assume the role of the foreman. Not only has the final blueprint been drafted, but the foundation has been laid and the house has been framed. Though you still have the ability to make changes, every window you add or room you extend requires physically cutting into your home's frame and either throwing away or adding material. Any change, big or small, will fundamentally weaken your home's foundation. Too many radical revisions can bring the whole structure right down into a pile of junk. Then again, a bad artistic eye can be just as fatal...
Okay, let's take a breath. Much better. Now for something completely different: what is the difference between a linear (commonly used, though the technically proper term would be "gamma-corrected") and a log video signal? First off, it would help to briefly explain how a digital sensor "sees" light vs. how our eye "sees" light. The distinction is between the physical process the camera sensor uses to interpret light in a scene and the fundamentally different process that human vision uses. All CCD and CMOS sensors "see" luminance in linear form while human vision has a logarithmic response.
So what exactly is lin and log?
In digital photography we are fundamentally concerned with brightness (luminance) in a scene that needs to be converted into a coded value (dependent on bit depth) of video signal strength (sometimes represented in millivolts: mV) in order to reproduce an image. To make it simple we can say that a digital camera will assign a number to a specific amount of brightness in a scene and that number will be output as voltage. On-set we can view the intensity of this voltage by running our video signal through a waveform monitor and noting its IRE value. A digital camera's ability to interpret variations in light intensity within a scene is directly related to its bit depth. The bigger the bit depth the more luminance values a camera can discern. An 8-bit camera can discern 256 intensity values per pixel per color channel (RGB). A 10-bit camera can discern 1024 values. A 12-bit: 4096. And a 14-bit sensor: 16,384. It's easy to see why bit depth has a huge role in a camera's dynamic range.
A digital camera encodes these luminance values linearly. That is, for every discrete step of difference in luma, the camera will output an equal step of difference in voltage or video signal.
The human eye is sensitive to relative, not discrete, steps of difference in luma. For example, during a full moon your eye will have no problem discerning your immediate surroundings. If you were to light a bonfire the relative illumination coming from the flames would certainly overpower the moonlight. Conversely, if you were to light that same bonfire at high noon you would be hard pressed to notice any discernible increase in illumination. This is why we use f-stops (the doubling or halving of light) to interpret changes in exposure.
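F-stops, in other words, are a logarithmic scale. Counting the stops between two luminance levels is just a base-2 logarithm, as this two-line sketch shows:

```python
import math

def stops_between(lum_a: float, lum_b: float) -> float:
    """Number of f-stops separating two luminance levels."""
    return math.log2(lum_b / lum_a)

print(stops_between(1, 16))  # 4.0 -- sixteen times the light is only four stops
```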
What we can learn from the difference between linear and logarithmic responses to luminance is that a linear approach will be able to discern more discrete values in the highlights of an image while a logarithmic approach can discern more subtleties in the shadows. This is because a digital camera only has a finite number of bits in which to store a scene's dynamic range and most of those bits are used up to capture the brightest portions of the scene. Art Adams over at ProVideo Coalition did the math for us with a 14-bit sensor that will distribute 16,384 discrete luminance values over 3 channels (RGB):
As Art points out: "The first four stops of dynamic range get an enormous number of storage bits--and that’s just about when we hit middle gray. As we get toward the bottom of the dynamic range there are fewer steps to record each change in brightness, and we’re also a lot closer to the noise floor." Well said.
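You can reproduce Art's point with a few lines of Python. In a linear encoding, each stop down the scale contains half the light of the stop above it, so it gets half of the remaining code values (a sketch of the arithmetic, using a hypothetical 14-bit sensor):

```python
levels = 2 ** 14  # 16,384 code values on a 14-bit sensor
remaining = levels
for stop in range(1, 9):
    in_this_stop = remaining // 2  # the top stop owns half of whatever is left
    print(f"stop {stop} below clip: {in_this_stop} code values")
    remaining -= in_this_stop
# stop 1: 8192, stop 2: 4096, stop 3: 2048, stop 4: 1024...
# by stop 8 only 64 values are left to describe a whole stop of shadow detail.
```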
Because our eyes respond to relative changes in luminance we don't perceive much difference in highlights. If the sun beaming down on Death Valley is hot and bright at 11:00am it's not going to appear hotter and brighter at noon. It's just still gonna be sweltering. But in low-luminance environments our eyes will give greater weight to slight changes in brightness. Turn the lights off at home at night and your eyes will soon adjust to begin making out your surroundings. Ambient streetlight, moonlight, and the soft green glow coming from your charging iPhone will all contribute significantly to your visual perception.
Now let's take a look at a scene captured with a digital sensor. The linear pixel values, when processed into a video stream without any gamma correction applied, will look something like this:
Notice the extreme contrast and lack of shadow detail. Since most of the sensor's bits are used to capture highlight detail, the mid-tones and shadows appear almost black when rendered linearly (or literally). This is because there simply aren't the same fine incremental differences of data in the shadow areas as there are in the highlights. Like a 50-cent Ace comb with fine teeth on one end and large on the other.
This brings to mind a scene from one of the great comedy masterpieces...
Alright, in order to view our scene as intended the linear light values need to be gamma corrected. That is, the sensor's linear output needs to have a logarithmic transfer function applied in order to stretch out the shadows and mid-tones while pushing back all the highlights. We're basically taking a straight line and making it into a curve by adding relative brightness to the areas of the image that need it most to appear correct to our eye. A gamma curve of .45 has become the standard as it closely approximates our eye's own logarithmic response.
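In code, gamma correction is a one-liner: raise the normalized linear value to the power of 0.45. Watch how much the shadows get lifted while the highlights barely move (illustrative values only):

```python
GAMMA = 0.45  # the standard display gamma discussed above

for linear in (0.02, 0.10, 0.18, 0.50, 1.00):
    corrected = linear ** GAMMA
    print(f"linear {linear:.2f} -> gamma-corrected {corrected:.2f}")
# linear 0.02 -> 0.17, 0.10 -> 0.35, 0.18 -> 0.46, 0.50 -> 0.73, 1.00 -> 1.00
# A deep shadow at 2% jumps to 17%, while pure white stays pinned at 100%.
```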
So here's our image with gamma correction applied:
Much better. This, of course, brings us to our next question: what is log in a HD video camera and why should we care?
The S-Log record mode in a Sony F35 is simply a unique gamma correction. The difference is that log mode is gamma correction for post-production purposes while every other gamma correction is intended to deliver an image with the proper contrast for viewing. The F35 allows you to select a standard gamma correction (rec709), a number of hypergammas (variations on rec709 that will lift the mids or pull back the highlights), S-Log for post-production, or you can create your own gamma curve.
The reason that recording in log mode is so beneficial during post production is that instead of providing an image with proper contrast for immediate viewing, log seeks to preserve as many of the fine differences in an image's dynamic range as possible for later manipulation in post. Those differences are shades of grey. This is why a log HD video signal always seems to appear washed out and flat. Once you take all those shades of grey and crush them in post to make deep blacks or burn them out to create specular highlights you inherently lose your image's original dynamic range, but you gain a gorgeous, contrasty, and sharp image.
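To see why log footage looks flat straight off the camera, here's a generic log curve in Python (NOT Sony's actual S-Log math, just the shape of the idea): the shadows get handed far more code values than a linear encoding would give them, at the cost of on-screen contrast:

```python
import math

def generic_log_encode(linear: float, black: float = 0.01) -> float:
    """A made-up log curve, normalized so 0.0 maps to 0.0 and 1.0 maps to 1.0."""
    return math.log(linear / black + 1) / math.log(1 / black + 1)

for lin in (0.01, 0.05, 0.18, 0.50, 1.00):
    print(f"linear {lin:.2f} -> log {generic_log_encode(lin):.2f}")
# 0.01 -> 0.15, 0.05 -> 0.39, 0.18 -> 0.64, 0.50 -> 0.85, 1.00 -> 1.00
# Middle grey lands way up at 64% and deep shadows at 15%: washed out and
# flat on a monitor, but packed with gradations for the colorist to mine.
```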
Here's 10 stops from a log image:
And here's the remaining 5 stops from the final color time:
Let's finish this off with the pros and cons of our three choices. Raw gives us the best image quality and dynamic range because our image is built directly from each pixel's discrete luminance values. Because a Raw file has not been processed or compressed (unless you're a RED ONE owner), shooting Raw produces the largest data sets and will require the largest storage capacity for all of the files generated. Raw data also usually requires intensive post-production manipulation to arrive at your final image. This process can be expensive and time-consuming.
HD Video can give us a fantastic image with the understanding that post-production manipulation is limited. It is best understood that what is seen on-set during the shooting day is what will be shown to an audience later. An experienced D.I.T. and colorist can work with the DP to provide the final visual look of the project during recording. This can mean very little to no post-production color-timing and can easily shave thousands off a budget. Of course, any radical deviation from the set look during post can severely degrade your image. A Video stream can also be easily compressed to tape (HDCAM SR) or through a video codec to cut down a project's archival requirements.
HD Video captured in a log record mode is sort of the best of both worlds. Though you are recording a video image that can still be tweaked and tuned by the DP and D.I.T. the image recorded makes the best use of the sensor's dynamic range. Your footage also will not need to be rendered as it is already a usable image. However, your images will need to be manipulated in post-production to arrive at your project's final look. This can easily add expense and time to a project's turn-around.
Most HD cameras on the market today have the ability to record in either Raw or Video modes but not both. There is really only one that has a proven track record of being able to record Raw, log, and straight gamma-corrected HD video. And that's the Arri D-21.
I did a commercial recently for Texas Energy. The production company was small and wanted to ingest the footage immediately after wrap, cut on Final Cut Pro, and deliver the spot within a few days. Using the D-21 in a 4:2:2 HD video mode recording to HDCAM SR was the obvious choice. This allowed for a great image on-set that could be approved on the spot, an in-house edit and color-time, and a super-fast turn-around.
"Lie to Me", the television series on Fox, uses D-21's in log mode to get the most out of their images while still keeping their post-production budget and episodic turn-around time in check.
And if you're David Fincher, I'm sure you'll insist on using the D-21 to record straight Raw files to an S-2 Digimag or Codex Recorder. Because Justin Timberlake and Tobey Maguire wouldn't have it any other way...
Hey, did I mention I'm looking forward to Arri's new line of digital cameras?
Thursday, January 7, 2010
ARRI DCS
Stephan Ukas-Bradley over at ARRI Burbank has clued me in that the company's new line of digital cinema cameras will be available for sale this summer. But it seems the betas will be out and about here in LA sometime in the next month or two for some test shoots. I'm doing my best to be on hand for this first round of west-coast R&D.
Personally, I'm extremely excited about these cameras. I believe ARRI is going to catch RED off guard, not to mention force SONY to reconsider its high-end camera strategy.
I'll keep you posted on developments.