
HDR DISCUSSION: Is High Dynamic Range more important than 4K for broadcast and cinematography?

Thursday, December 17, 2015 12:05 PM

Last Wednesday WTS hosted a discussion forum focussing on high dynamic range, looking at the impact it is expected to have, and the challenges that it throws up. The event at WTS' central London TIMA TV studio facilities at New Zealand House brought together a packed room of industry experts eager to discuss HDR with our panellists Michael Byrne (WTS’ workflow specialist), WTS sales manager Duncan Payne, colourist and Sony 4K workflow specialist Pablo Garcia Soriano and WTS production specialist and freelance cameraman Patrick van Weeren. In this blog, Patrick discusses some of the pressing issues that arose during the discussion.

Feel free to join the conversation using the comments section at the end of the article, or via twitter – @wts_broadcast and @Cam_Patrick_WTS


Panellists at WTS' HDR discussion forum

At IBC 2013, Hans Hoffmann, the Vice President of Standards for the SMPTE, said in an interview that “the industry, driven by the consumer part, is only focusing on resolution… We seek a big improvement [that] includes four parameters: higher dynamic range, higher frame rate, wider colour space and then concentrate on the questions of resolution.”

Making the step from 2K to 4K, the average human eye can still discern a difference. For instance, when Spider-Man 2 was scanned and finished in 4K, cinematographer Bill Pope ASC ran a blind, 'Pepsi challenge'-style comparison of 2K and 4K input and output for the film's producers, editors and director, and all of them preferred the 4K output (according to the July 2004 edition of American Cinematographer magazine). But will we benefit significantly from a leap to 6K or 8K? That seems more dubious to me.

Of course, shooting in higher resolution certainly has its benefits, and we’ve seen producers gradually warm to the idea of having the safety net of 4K. Yes, the higher cost of data storage needs to be taken into account, and it might be hard to measure the value of the added flexibility in post and grading. But especially in the case of colour grading, SFX, stabilising or re-framing, the benefits seem to be worth it.

However the difficulties increase when pushing the 4K envelope further and attempting to broadcast content in 4K. Here, there are bigger hurdles to overcome. In a lot of situations, especially in consumers’ living rooms, the ideal set-up for watching 4K hasn’t been – and may not be – achievable, and the question arises as to whether an increase in resolution is actually justified by the benefits it brings to viewers.

With 4K resolution, the best viewing position is at a distance of approximately 1.5 times the height of the screen. Just to clarify, according to Sony’s website:

  • HD – 30-inch screen: best viewing distance 4 feet or closer
  • 4K – 30-inch screen: best viewing distance 2 feet or closer
  • 8K – 30-inch screen: best viewing distance 1 foot or closer

Even if we assume that a couch is only 5 feet from the screen (and I would guess that many people have theirs a good bit further away), the equivalent of an average HD screen size in 4K would be 80 inches – and in 8K, 160 inches, if such a thing exists. (Source: SAM/Snell)
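To make the arithmetic concrete, here is a minimal Python sketch of those screen-size sums. It assumes 16:9 panels and the commonly quoted picture-height multipliers of roughly 3x for HD, 1.5x for 4K and 0.75x for 8K – rules of thumb, not Sony's published figures – so treat the output as indicative:

```python
import math

def screen_height(diagonal_in, aspect=(16, 9)):
    """Height of a screen from its diagonal, in inches."""
    w, h = aspect
    return diagonal_in * h / math.hypot(w, h)

# Assumed rules of thumb: ideal distance ~3x picture height for HD,
# ~1.5x for 4K, ~0.75x for 8K.
COUCH_FT = 5.0
for label, mult in [("HD", 3.0), ("4K", 1.5), ("8K", 0.75)]:
    # smallest diagonal whose ideal viewing distance reaches the couch
    diag = COUCH_FT * 12 / (mult * screen_height(1.0))
    print(f"{label}: ~{diag:.0f}-inch screen fills the view at {COUCH_FT} ft")
```

Run it and you land within a couple of inches of the 80-inch and 160-inch figures quoted above.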

Taking the next steps to higher resolution and beyond

Stepping up to 4K and UHD seems to be accepted, but going beyond 4K to 6K or even 8K – cameras capable of this are already on the market – might prove too big a challenge for some eyes. Canon has already started developing cameras with more pixels than there are photoreceptors in the human eye, and manufacturers keep developing new technologies as we speak.

As usual, industry standards need to keep up with the technology to achieve some conformity (even in deciding on a resolution standard, we've ended up with two – 4K and UHD) and to avoid rising costs and a mismatch of formats. While this has been hard in the past, we now look set to absorb several different new technologies simultaneously in the near future – if we can keep up.

UHD implementation schedule

As I highlighted in the first paragraph of this blog, it’s not all about resolution, even though the buzzword status of ‘4K’ seems to throw the focus onto resolution. Higher dynamic range (contrast), higher frame rate and wider colour spaces are somewhat overshadowed by the omnipresence of 4K in the discussion, but there’s a strong argument to say that these are the most feasible and important parts of the upgrade from HD to 4K.

New technology provides springboard for leap to 4K, HDR, high frame rates and wider colour spaces

With online streaming services such as Amazon and Netflix blazing a trail, 2015 has seen a great swathe of technological advancements aimed at realising not just 4K for broadcast (aka UHD) but also high dynamic range, wider colour spaces and increased frame rates.

Codecs such as HEVC (H.265) have emerged. Sony with the HDC-4300, Grass Valley with its LDX 86 and other OB camera manufacturers have introduced UHD broadcast cameras. Canon and Fujinon have developed and launched complete ranges of UHD-compatible B4 glass. Most manufacturers – including newcomers – have already implemented 4K technology in camcorders and even cellphones.

Sony brought the impressive BVM-X300 to the professional market, its OLED panel capable of displaying stunning 4K and HDR, while Canon showed up with two different-sized 4K LCD monitors (the 30-inch DP-V3010 and the 24-inch DP-V2410), Dolby with the 42-inch PRM-4220 and Panasonic with the BT-4LH310. The big players have all introduced professional broadcast monitors boasting 4K resolution, high dynamic range and wider colour space.

Wider Colour Space

Interestingly, the industry standards for a wider colour space are probably the first to have been established, even though we expect the possibilities to keep expanding in future. Ironically, it was this same process of establishing standards that delayed technological advancement in colour space for more than ten years, even though cameras and monitors were already able to capture and display wider colour spaces.

Implementing a wider colour space for broadcast to consumers is more difficult than you might think. All these changes have to be backwards-compatible in some way, as you can't simply make older screens redundant. The current Rec. 709 colour space for HD television was approved as long ago as 1990, when Betacam cameras were only four years old and Digibeta would not be launched for another three years. We watched Rec. 709 pictures on CRT screens, delivered mainly via antennas.

The problem is that colour spaces are not designed to convert easily from one to another. So apart from agreeing the standard for the colour space itself, there is also the need to set some kind of standard for how to convert from one colour space to another.

We could look to learn from the graphics, print and stills camera industries, were it not for the fact that they also have a mismatched colour space system. To fix these problems for future platforms and technology, the Academy of Motion Picture Arts and Sciences has introduced the ACES colour space – a volume large enough to make conversion more accurate and less dependent on any particular technology.
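For the curious, a colour space conversion is, in linear light, just a change of primaries routed through the device-independent CIE XYZ space. Here is a minimal sketch using the commonly published Rec. 709 and Rec. 2020 matrices (rounded to four decimals); a real pipeline would also undo and reapply the transfer functions and gamut-map any colours that land out of range:

```python
import numpy as np

# Linear-light RGB -> CIE XYZ matrices (D65 white), values as commonly
# published for each colour space, rounded to four decimals.
M709_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
M2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                         [0.2627, 0.6780, 0.0593],
                         [0.0000, 0.0281, 1.0610]])

# Route one set of primaries to the other via the XYZ hub.
M709_TO_2020 = np.linalg.inv(M2020_TO_XYZ) @ M709_TO_XYZ

rgb709 = np.array([1.0, 0.0, 0.0])   # pure Rec. 709 red, linear light
print(M709_TO_2020 @ rgb709)         # same colour in Rec. 2020 primaries
```

Going the other way – squeezing Rec. 2020 colours into Rec. 709 – produces negative values for colours the smaller gamut cannot represent, which is exactly the conversion headache ACES is designed to manage.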

Higher frame rates

Higher frame rates are a somewhat more dangerous proposition – especially after Peter Jackson bravely burned his fingers on The Hobbit's 48fps experiment.

Any movement to a higher frame rate will have to deal with a cultural generation gap. We are used to certain image cadences for certain types of storytelling. Test it yourself: when you flick through TV channels, the strobing quality of the content immediately gives away what it is – American sitcom (converted 24fps/29.97fps), European TV news (50i), Hollywood film (24p) or European commercial (25fps). We immediately relate to the content through its frame rate. Consequently it will take a while – perhaps even a generation – before people accept certain types of storytelling at another frame rate.

Here is another problem for the standards committees: higher frame rates mainly exist to solve the technical problem of motion blur. Above 66fps, humans can barely see the difference on smaller screens, but the jerky motion blur of fast-moving images on technically challenged equipment can be a give-away, so there may be a need for even higher frame rates to help deal with these artefacts.

Why higher dynamic range might demand higher frame rates

The perception of judder is worse on HDR monitors, which display images at much higher brightness (more nits). When we start using brighter screens, our eyes' sensitivity to the strobing of 50i or 25p may become too annoying, and the need for higher frame rates may become more pressing. At the WTS HDR discussion forum, Sony colourist Pablo Garcia noted that he had to get used to the brighter screen, but now prefers it over normal grading monitors as it is less straining on the eyes.

However, we have to overcome a creative issue too. Our eyes are very sensitive to frame rates, and we consider 24fps a 'cinematic' storytelling tool. The broadcast 'flicker' of 50i has its own connotations, and these cultural and inherited interpretations of the value and quality of a frame rate may take viewers longer to overcome.

One of the reasons Peter Jackson's 48fps experiment got mixed reviews might be that it sat too close to television's 50i frame rate. Even in the face of technological advances, we cherish the romanticised strobing of cinema projection over TV's 50i – let alone frame rates of 120 or 240fps.

The fact that our lights flicker with the 50Hz mains (or 60Hz in some parts of the world) is also a point of debate for the standards committees. While 100fps would suit the UK, as it is compatible with the flicker of the lights, 60Hz countries such as the USA would prefer 120fps.
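The mains argument boils down to simple divisibility: strictly speaking, AC-driven lighting pulses at twice the mains frequency, and a capture rate avoids visible beating when each frame spans a whole number of flicker cycles. A quick sketch (a simplification that ignores shutter angle, which broadcasters also use to dodge flicker):

```python
def flicker_free(fps, mains_hz):
    """AC-driven lighting pulses at twice the mains frequency; a frame
    rate avoids visible beating when each frame spans a whole number
    of flicker cycles, i.e. the flicker rate is a multiple of the fps."""
    return (2 * mains_hz) % fps == 0

for fps in (25, 50, 100, 120):
    print(f"{fps}fps  UK 50Hz: {flicker_free(fps, 50)}  "
          f"USA 60Hz: {flicker_free(fps, 60)}")
```

As the output shows, 100fps sits happily under 50Hz lighting but not 60Hz, and 120fps the reverse – hence the transatlantic disagreement.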

Higher frame rates also arguably cost us sensitivity, as exposure times have to be shorter, which in turn changes the motion blur and strobing between images. The loss of sensitivity isn't a problem in the bright outdoors, but it can create substantial issues when filming indoors or under 'flickering' lights (e.g. 50Hz/60Hz).
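The sensitivity cost is easy to quantify: at a fixed shutter angle, exposure time scales inversely with frame rate, and every halving of the exposure costs one stop of light. A quick back-of-the-envelope calculation:

```python
import math

def stops_lost(base_fps, new_fps):
    """At a fixed shutter angle, exposure time scales inversely with
    frame rate; every halving of exposure costs one stop of light."""
    return math.log2(new_fps / base_fps)

print(stops_lost(25, 100))   # 2.0 stops less light per frame
print(stops_lost(25, 240))   # ~3.3 stops less
```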

The UHD Alliance, EBU, ITU, SMPTE, MPEG, BBC and other standards bodies and influencers need to decide quickly how content will be delivered. Two competing compression methods have been presented: H.265 (HEVC) and Google's VP9 – and HEVC seems to be gaining popularity.

HDR – High Dynamic Range

The consumer industry has always been larger and more powerful than the acquisition side of the business (meaning 'our' industry – cinema, broadcast, production and post-production). Manufacturers still make more money selling TVs than cameras, and we have had to live with their urge to replace the screens in people's houses every week. 'Sell! Sell! Sell!' seems to be their motto. This time, though, even the manufacturers of TV screens don't know what their displays should be able to do when it comes to HDR – but they certainly want to sell them.

Example of high dynamic range

For HDR screens, the debate centres on nits of brightness. Should it be considered HDR at 750-800 nits, 1,000 nits or Dolby's 4,000 nits? If the standards aren't set soon, will we be stuck again with something like 'HD-ready' – but this time called 'HDR-ready' or 'wide colour space-ready'? How many different iterations of consumer TV sets will the divided industry allow before we end up with an accepted standard for high dynamic range, high frame rate and wider colour space?
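Since each stop doubles luminance, the displayed dynamic range implied by those nit figures is just a base-2 logarithm of the contrast ratio. A rough sketch – the black levels here are invented for illustration, as real panel specs vary widely:

```python
import math

def displayed_stops(peak_nits, black_nits):
    """Displayed dynamic range in stops: each stop doubles luminance."""
    return math.log2(peak_nits / black_nits)

# Black levels are assumed values for illustration only.
print(displayed_stops(100, 0.05))    # ~11 stops: a typical SDR target
print(displayed_stops(1000, 0.01))   # ~16.6 stops
print(displayed_stops(4000, 0.01))   # ~18.6 stops: Dolby's proposal
```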

The SMPTE has to make sure that the consumer industry doesn't try to deliver too little too quickly, as BSkyB chief engineer Chris Johns said in an interview at IBC 2013. We should set the standards for colour spaces and HDR before we start trying to adjust the industry.

The powerful consumer industry only has to display the ‘new’ HDR and colour spaces, but the smaller and more budget-squeezed content industry has to be able to deliver this HDR, wider colour space content.

We need these standards to know how to implement the transport and manufacture of content and equipment. We need them to know how to make content that can be delivered both to older TVs and to newer HDR screens while still using the same pipeline. Without these standards we cannot even begin to estimate the budgets needed to work in HDR and HFR (high frame rate).

The good thing about HDR is that, of all the new technologies (high frame rate, wider colour space, and high resolution), it is probably the cheapest to deliver and easiest to appreciate for the consumer.

IP-based HDR content providers are blazing a trail and raising consumer awareness

“4K Ultra HD picture resolution was just the beginning – we’re excited that Prime members will soon be able to view movies and TV shows including Amazon Originals in HDR quality,” said Michael Paull, Vice President of Digital Video at Amazon. “HDR is the natural next step in our commitment to premium entertainment, and we can’t wait for customers to have even more choice in how they watch their favourite titles on Amazon Prime Instant Video.” (Source: businesswire.com)

Netflix, Amazon and BT are among the many content providers who are very keen on HDR. The small increase in data – approximately 20 per cent compared to 'normal' UHD – is well-suited to streaming and IP-based systems, but probably far more complicated for 'old-fashioned' delivery methods, which are still struggling to deliver HD without too much compression. HDR is therefore an easy step for 'new-style' content providers looking to raise awareness amongst consumers.

Camera technology for capturing HDR content

Let’s start at the beginning of producing HDR content – the cameras. Once again, the standards haven’t been set. We could be looking at 12 stops, 15 stops or even more; it’s up to the standards committees to sort this out. The problem here is that conventional systems in the Rec. 709 world could only work in an 8-bit system, which inherently limited the ability to deliver a higher dynamic range.
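The bit-depth point is worth spelling out: each extra bit doubles the number of code values available to describe the signal, which is what creates the room for a wider luminance range without visible banding. Trivially, in Python:

```python
# Code values available per channel at each bit depth.
for bits in (8, 10, 12):
    print(f"{bits}-bit: {2 ** bits:5d} levels")
# 8-bit: 256, 10-bit: 1024, 12-bit: 4096. Stretching 256 levels across
# an HDR luminance range leaves visible banding; the extra code values
# of 10- and 12-bit are what make a higher dynamic range usable.
```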

Camera technology is now slowly moving into a 10-bit or even 12-bit world. But while transmission and other downstream systems still mostly rely on 8-bit, we are not reaping the benefits of 10-bit signals the way we should to improve our dynamic range. For comparison, we gained similarly impressive amounts of dynamic range within the 8-bit world when the knee was added to the Rec. 709 OETF. The knee compresses contrast above the level of reflective highlights, squeezing the light-emitting highlights on the basis that they matter least to the viewing experience.
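As a sketch of that idea, here is the standard Rec. 709 OETF with an illustrative knee bolted on. The OETF formula is as published in ITU-R BT.709; the knee parameters are invented for the example, since real cameras expose knee point and slope as adjustable settings:

```python
def rec709_oetf(L):
    """ITU-R BT.709 OETF: linear scene light L to a 0-1 signal value."""
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

def with_knee(L, knee_start=0.8, max_scene=4.0):
    """Illustrative camera knee (parameters invented): signal above
    knee_start is linearly squashed so scene light up to 400% white
    lands below 1.0 instead of clipping."""
    V = rec709_oetf(min(L, max_scene))
    if V <= knee_start:
        return V
    headroom = rec709_oetf(max_scene) - knee_start
    return knee_start + (1.0 - knee_start) * (V - knee_start) / headroom

print(rec709_oetf(1.0))   # 1.0: straight Rec. 709 clips here
print(with_knee(4.0))     # 1.0: the knee folds 4x white under the ceiling
```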

Cameras are becoming ‘HDR-ready’. Almost 90% of the professional cameras launched in the last two years can capture an HDR image, and about half of those give you the option of recording in 10-bit or 12-bit. The great question is how the camera reproduces it. In terms of HDR reproduction, you can basically split the market in two: cameras with high-compression codecs (below 10-bit), and cameras with RAW and lower-compression codecs.

The Log options on cameras try to fit the sensor's wide dynamic range into a classic curve at 10- or 12-bit. These Log options all have different names from the different manufacturers: Panasonic has V-Log, Sony S-Log, Canon C-Log, Arri Log-C and so on. Log images are a great solution provided the image is properly exposed and recorded in at least a 10- or 12-bit codec.
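The shared idea behind all these curves can be shown with a generic log encode – emphatically not any manufacturer's published formula – that gives each stop of scene light a roughly equal slice of code values, assuming a 14-stop capture range for the sketch:

```python
import math

# A generic log encode - NOT any manufacturer's published curve -
# illustrating the idea all Log formats share: give each stop of
# scene light a roughly equal slice of code values.
STOPS = 14               # assumed capture range for the sketch
FLOOR = 2.0 ** -8        # scene level that maps to code value 0

def log_encode(L):
    return min(1.0, max(0.0, math.log2(L / FLOOR) / STOPS))

for stop in (-8, -4, 0, 4, 6):   # exposure relative to an arbitrary reference
    print(f"{stop:+d} stops -> code value {log_encode(2.0 ** stop):.3f}")
```

Spread across 14 stops, an 8-bit signal offers only around 18 code values per stop, while 10-bit offers around 73 – which is why Log really wants at least a 10-bit codec.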

The challenges of working with HDR content

The bigger problem with high dynamic range is that the colourist doesn't have as much room for manoeuvre. Previously he would compress a 12-stop image into a six-stop Rec. 709 dynamic range, and could feel very comfortable doing so. Now he loses his margin for error. The CCU operator in the OB truck or multi-cam studio set-up faces the same issue when shading for HDR.

The problem for a shader is that an HDR image and an 'SDR' image have to be approached in two completely different ways. You can't expose the cameras for both simultaneously because, apart from the exposure itself, the image quality, sharpness and saturation all change when exposure decisions are made. So the best solution is to shade for HDR and convert to 'SDR' afterwards. It just needs to be done carefully, as a single 'one size fits all' conversion LUT won't work equally well for all the different shots involved in a multi-camera shoot.
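To illustrate why a single conversion falls short, here is a naive, shot-agnostic HDR-to-SDR tone map in Python (all parameters invented for the sketch): it passes the picture through unchanged below a knee and rolls the highlights off linearly above it. Acceptable on an average shot, poor on anything with critical highlight detail – which is exactly why a real multi-camera workflow trims the conversion per shot:

```python
def hdr_to_sdr(nits, knee=80.0, hdr_peak=1000.0, sdr_peak=100.0):
    """Naive shot-agnostic tone map (all parameters invented): pass
    the image through unchanged below the knee, then roll the HDR
    highlights off linearly into the remaining SDR headroom."""
    if nits <= knee:
        return nits
    t = (nits - knee) / (hdr_peak - knee)
    return knee + (sdr_peak - knee) * t

print(hdr_to_sdr(50.0))     # mid-tones pass through: 50.0
print(hdr_to_sdr(1000.0))   # peak HDR lands exactly at SDR peak: 100.0
```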

When shooting my first HDR content, it was obvious that the extra margin for error I used to have with shooting Log for a Rec.709 delivery was gone. With Rec. 709 I had a gap of six or more stops to my final delivery format – so twice the dynamic range. This safety net is gone when shooting HDR for HDR viewing.

Monitoring HDR content on a small screen on set was difficult. You have to keep a careful eye on your shadow detail and even more so on your highlights. Your eyes can deal with a reasonable dynamic range of about 15 stops at once; your brain shifts the eye's dynamic range to whichever part of the image it wants to 'focus' on, expanding it into an incredible HDR viewing experience. However, it takes a while for the eye to adjust its dynamic range. We can shift from darkness to brightness in just a few minutes, but going from highlights to darkness can take anything from eight to 30 minutes (source: BT / neuroscientist Jenny Read, DVB-EBU HDR Workshop, June 2014). So there are human factors we need to take into account too.

Handling the data for 4K transmission

4K might be the buzzword, but it will take another two-letter combination to deliver it: IP. With IP able to deal with the increased amount of data, the chances are high that UHD will gain market share in the near future. Added to that, 4K compression in the delivery part of the industry seems to be slowly settling on the HEVC codec – but having learned from the past, we have to stay flexible enough to adjust to new players in the market.

Nevertheless, the other aspects of this technological advance – higher dynamic range, higher frame rates and wider colour space – are the ones likely to prove more convincing for end-users. Slowly, we are becoming able to combine all the different evolving technologies that this development needs – and that have so far held it back.
