Guest Posted March 11 (edited)

1 hour ago, RomiK said: I don't quite agree there... here is the Sony LUT for HDR conversion... it shows less information in the highlights than PQ. Choosing PQ to view does not mean I am going to deliver PQ; I am going to grade HLG. But since I have no option to reshoot or to work with lights underwater, I need to know as much as possible about what I am recording. Empirically, the PQ conversion of S-Gamut shows the most truthful image of what is being recorded, and that is important to me. Edit: this image was taken several minutes after the first two and the sun is going down, so... it doesn't change anything about why PQ is better than the Sony LUTs for review; just explaining the slightly different image.

The LUT to use is Sgamut3CineSlog3 to Cine+, or if you prefer low contrast, the LC-709. Either way, the screen you show is, as I see it, more correct for monitoring than the other examples.

Use picture profile PP8, not PP9. There are no LUTs to work with PP9, as that is supposed to work on BT.2020 devices, which in effect do not exist, let alone an Atomos. Slog3/Cine instead fits within an existing, wide colour space without the risk of out-of-gamut colours that cannot be predicted.

What Sony does not clearly say is that everybody is using Slog3 with the Cine conversion and nothing else, even when shooting HDR. Their Slog3/Sgamut3 is more 'experimental': as it does not contain the camera gamut, you sometimes end up with an unexpected cast. https://pro.sony/en_GB/support-resources/software/00263050 The Cine+ is designed for monitoring with Slog3/Cine; you should never use Slog3/Sgamut3 on consumer devices like an Atomos.

Edited March 11 by Interceptor121
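[For readers following the PQ discussion: the PQ curve mentioned above is standardized as SMPTE ST 2084. A minimal Python sketch of its inverse EOTF (absolute luminance in nits to normalized signal), using the constants from the specification, shows why a PQ preview leaves so much room for highlights: SDR reference white (100 nits) lands only around half-way up the signal range.]

```python
# Sketch of the SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance -> signal.
# Constants are taken from the ST 2084 specification.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Map absolute luminance (0..10000 nits) to a normalized PQ signal (0..1)."""
    y = min(max(nits / 10000.0, 0.0), 1.0)
    return ((c1 + c2 * y**m1) / (1 + c3 * y**m1)) ** m2

# SDR reference white (100 nits) sits near 51% of the PQ signal range,
# leaving roughly the upper half of the code values for highlights.
print(round(pq_encode(100), 3))    # ~0.508
print(round(pq_encode(10000), 3))  # 1.0
```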
Guest Posted March 11 (edited)

This is the same image taken with no picture profile and with PP8, at identical exposure settings. It is clear that the Slog3 shot is 3 stops underexposed, which is why you don't take photos in Slog3. You can of course expose to the right, but that creates other issues. Picture profiles alter the raw response and make it non-linear; this cannot be recovered in post. It has been explained by various people, even Gerald Undone got to this, so it is not hard. This also explains why log is noisy: it is indeed underexposed.

Edited March 11 by Interceptor121
RomiK (Author) Posted March 11 (edited)

4 hours ago, Interceptor121 said: This is the same image taken with no picture profile and with PP8, at identical exposure settings. It is clear that the Slog3 shot is 3 stops underexposed, which is why you don't take photos in Slog3. You can of course expose to the right, but that creates other issues. Picture profiles alter the raw response and make it non-linear; this cannot be recovered in post. It has been explained by various people, even Gerald Undone got to this, so it is not hard. This also explains why log is noisy: it is indeed underexposed.

You've got it wrong... it's not that Slog3 underexposes; it's that you don't know how to expose properly for Slog3. Slog3 is not meant to be exposed on autopilot; do some research on exposing with grey cards, etc. Of course underwater we can't adjust exposure using other tools, and that's where waveforms come in handy: forget about exposure compensation and just expose based on the waveform and what you see on the monitor. And there you have it: without an HDR monitor, and at least some interpretation of the log colour gamut on that monitor, you can't do much. Zebras etc. might help, but it's like shooting blind. You might want to give some credit to the Atomos folks; they might know just a little more than you do.

I will add that LUTs are just a starting point in underwater colour grading anyway, given the highly variable light and white balance conditions. So Atomos's interpretation of the Slog gamut to PQ gives you that starting point during the creative process.

With Slog3 still pictures there is a different problem: current popular software tools like Lightroom can't interpret raw files shot in Slog3 very well, especially if shot with an aggressive custom white balance, as when shooting at 30 m depth. Sony's own imaging desktop app can do this, but the workflow is not as smooth.
You need to play with those apps to see for yourself, and please don't dismiss this idea right away, as you usually do, based on your understanding of the world. There is a whole galaxy beyond that 🙂. Happy shooting.

Edited March 11 by RomiK
Guest Posted March 11 (edited)

28 minutes ago, RomiK said: You've got it wrong... it's not that Slog3 underexposes; it's that you don't know how to expose properly for Slog3. Slog3 is not meant to be exposed on autopilot; do some research on exposing with grey cards, etc. Of course underwater we can't adjust exposure using other tools, and that's where waveforms come in handy: forget about exposure compensation and just expose based on the waveform and what you see on the monitor. And there you have it: without an HDR monitor, and at least some interpretation of the log colour gamut on that monitor, you can't do much. Zebras etc. might help, but it's like shooting blind. You might want to give some credit to the Atomos folks; they might know just a little more than you do. With Slog3 still pictures there is a different problem: current popular software tools like Lightroom can't interpret raw files shot in Slog3 very well, especially if shot with an aggressive custom white balance, as when shooting at 30 m depth. Sony's own imaging desktop app can do this, but the workflow is not as smooth. You need to play with those apps to see for yourself, and please don't dismiss this idea right away, as you usually do, based on your understanding of the world. There is a whole galaxy beyond that 🙂. Happy shooting.

I am not sure whether I should laugh, try to explain it, or ignore your lack of knowledge altogether. A camera sensor in a digital consumer camera operates in a linear mode, not in a logarithmic or any other mode; in fact, logarithmic ADCs do not exist. The signal is converted on a linear scale to a number of values dictated by the bit depth. For a Sony full-frame camera that is 14 bits, so each pixel has 16384 values. Pixels are monochromatic, in an RGGB pattern. The photos I have taken have exactly the same exposure settings.
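[A numeric aside on the linear-sensor point above: in a linear 14-bit raw encoding, each stop down from clipping gets half as many code values as the stop above it, so the top stop alone owns half of all 16384 codes. That uneven allocation is exactly what a log curve redistributes before the 10-bit encode. An illustrative sketch:]

```python
# Illustrative: how a linear 14-bit ADC distributes its code values per stop.
# Stop -1 is the brightest stop (clipping down to half of clipping).
total_codes = 2 ** 14  # 16384 values per pixel

codes_per_stop = [
    (total_codes >> s) - (total_codes >> (s + 1))  # codes in stop s below clip
    for s in range(6)
]

for s, n in enumerate(codes_per_stop, start=1):
    print(f"stop -{s}: {n} codes")
# stop -1: 8192 codes, stop -2: 4096 codes, ... stop -6: 256 codes
```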
Video files produced by a camera have 10 bits of colour depth; in order to compress from 16384 to 1024 values, a logarithmic curve is used. However, as the clipping points have to be the same, this curve alters the highlights and the shadows differently: shadows are lifted while highlights are compressed. In practical terms this is done first by underexposing and then expanding, because the sensor is still linear; it did not change just because you shoot log. So when you look at the raw image, it is 3 stops underexposed. Lightroom does not have any log curve to apply, as at sensor level that does not exist, and returns the same -3, which if you lift it gives you roughly the same thing with noise on top.

Waveforms are based on the composite video signal, which is obviously irrelevant to an RGB data file, and they are also in the same 10-bit space. Again, they need some form of manipulation to give you something to work with; and since reading a clipping point at 80% is not intuitive, you may use a LUT so that the clipping point sits exactly where it would in a normal video signal.

Because raw editors operate at 32 bits and then compress back into a gamma-encoded space, or in newer cases even HDR, there is no need to use log profiles at all. I have done several write-ups on this subject; one is here: https://interceptor121.com/2021/02/07/the-truth-about-v-log/

Having looked at this issue for the last 5 years, and having started using (and then abandoning) HDR video in 2018, I find your comment laughable at a minimum and offensive at the extreme. With regards to Atomos, those products are buggy and have lots of issues, from incorrect and false specification claims to poor software support to a lack of reliability.
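[To make the "lifted shadows, compressed highlights" claim concrete: Sony publishes the S-Log3 transfer function in its technical documentation. A sketch of the encode side (formula as given in Sony's S-Log3 technical summary; treat the exact constants as subject to that document) shows 18% mid grey landing at about 41% of the signal range and sensor black lifted to roughly 9%, far above where a linear mapping would put them.]

```python
import math

def slog3_encode(x: float) -> float:
    """Sony S-Log3 OETF: linear scene reflectance (0.18 = mid grey) -> signal (0..1).

    Formula per Sony's published S-Log3 technical summary: a pure log segment
    above a small linear toe near black.
    """
    if x >= 0.01125000:
        return (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0

# Mid grey is pushed up to ~41% of the 10-bit signal range,
# and black does not start at 0 but at ~9% (the lifted shadows).
print(round(slog3_encode(0.18), 3))  # 0.411
print(round(slog3_encode(0.0), 3))   # ~0.093
```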
If you had ever attended a session on HDR (there are some good ones by Sony Professional), you would see that everybody monitors in Rec.709, for a variety of reasons, even in studio, even for actual movie production, on any system. These devices have a contrast ratio of 1000:1, which is the same as a professional SDR monitor (100 nits down to 0.1 nits: a contrast ratio of 1000:1); they have no ability to display HDR signals correctly and are a long way from a consumer TV set of the last generation. The various PQ/HLG modes are uncontrolled emulations; however, any monitor has to be 100% Rec.709 compliant, and this is why you use a LUT and ignore the monitor settings. This gives you the look that is closest to reality and to your end product.

With regards to your idea of using waveforms for photos: waveforms map a video signal, so they are irrelevant for photos. In a raw image you can easily use masks; all you want to make sure is that you don't clip and that your image is not too dark. A waveform is not practical to use in scenes with action; it is a good tool for studio work, but personally I do not find them useful either, I prefer false colour for a quick check. I would recommend you do some reading, stop believing fairy tales, and get more knowledgeable on the topic.

With regards to your initial question: there is no solution on Sony to shoot with a monitor and have photo and video coexist at best performance. The best option is to use the EVF or the LCD. On other brands it would be possible, although of course it is not required.

Edited March 11 by Interceptor121
RomiK (Author) Posted March 12

13 hours ago, Interceptor121 said: I am not sure whether I should laugh, try to explain it, or ignore your lack of knowledge altogether.

I too am perplexed looking at some of your answers 🤣 it's like you'd been using ChatGPT for them 🤣. Listen, I don't doubt you have read the theory; you have a nice, extensive blog site for that. But throwing numbers around is not the same as understanding those numbers and being able to use them. Your underwater footage from Malpelo, for example, is outright awful. Let me guess: you were chasing sharks and eagle rays with your mask on a viewfinder so bright you had no idea you were underexposing all the time. And the resulting footage is so shaky it reminds me of my students' first tries with a GoPro... There goes an answer for why you might benefit from using an external monitor too: an external viewfinder obstructs the rear camera display so much it's unusable.

You can review (so you don't feel like I'm bashing your work without throwing in mine) some of my footage from Cocos, where diving conditions are similar to Malpelo, also with an A1, also no lights, with custom WB, plus some GoPro mixed in here and there. Better to watch it on an HDR-capable screen, and make sure YT shows the HDR symbol.

What I want to say is that it's fine to debate numbers where appropriate, but this thread was about hands-on experience in dealing with specific challenges; it wasn't really a question for theoretical mentors. Still, it's great you have shown that throwing numbers around does not necessarily translate to end results.
TimG Posted March 12

1 hour ago, RomiK said: it's great you have shown that throwing numbers around does not necessarily translate to end results.

A very good point, well made. Discussions of the technical aspects of equipment and combinations can be fascinating. But bottom line, it's the image that counts. Does it create an emotion?
RomiK (Author) Posted March 12

8 minutes ago, TimG said: A very good point, well made. Discussions of the technical aspects of equipment and combinations can be fascinating. But bottom line, it's the image that counts. Does it create an emotion?

This should be set in stone 👏👏👏
Guest Posted March 12 (edited)

1 hour ago, RomiK said: I too am perplexed looking at some of your answers 🤣 it's like you'd been using ChatGPT for them 🤣. Listen, I don't doubt you have read the theory; you have a nice, extensive blog site for that. But throwing numbers around is not the same as understanding those numbers and being able to use them. Your underwater footage from Malpelo, for example, is outright awful. Let me guess: you were chasing sharks and eagle rays with your mask on a viewfinder so bright you had no idea you were underexposing all the time. And the resulting footage is so shaky it reminds me of my students' first tries with a GoPro... There goes an answer for why you might benefit from using an external monitor too: an external viewfinder obstructs the rear camera display so much it's unusable. You can review (so you don't feel like I'm bashing your work without throwing in mine) some of my footage from Cocos, where diving conditions are similar to Malpelo, also with an A1, also no lights, with custom WB, plus some GoPro mixed in here and there. Better to watch it on an HDR-capable screen, and make sure YT shows the HDR symbol. What I want to say is that it's fine to debate numbers where appropriate, but this thread was about hands-on experience in dealing with specific challenges; it wasn't really a question for theoretical mentors. Still, it's great you have shown that throwing numbers around does not necessarily translate to end results.
Malpelo is not like Cocos; it is very, very dark and there are no sandy bottoms to reflect light. Your comments about shots being underexposed are particularly interesting, as you have no idea what I was shooting or how, and neither have you seen a waveform, which by the way shows there is no contrast, not that it is underexposed. Your conditions are better, but your shots are clipped and the sand is blue, which is a side effect of trying to use Slog3/Sgamut for HLG HDR, which, as explained, results in a cast. Try going to Malpelo and see what you come back with, instead of coming out with these rubbish observations. By the way, you can only come up with 1 minute out of one week of diving?

Edited March 12 by Interceptor121
Guest Posted March 12 (edited)

45 minutes ago, TimG said: A very good point, well made. Discussions of the technical aspects of equipment and combinations can be fascinating. But bottom line, it's the image that counts. Does it create an emotion?

No, it is not. His post is about ergonomics, not image quality, and its foundation is based on flawed knowledge. To make it more irrelevant, he posted a link to footage from a totally different environment and made statements about how I took my shots without knowing... that's because he has nothing left to say and therefore moves to the attack. Basically he has hit the end of the line on all dimensions, including arrogance, and you are backing him up?

Edited March 12 by Interceptor121
TimG Posted March 12

You're entitled to your view, Massimo; I have mine. On with the story.
Guest Posted March 12 (edited)

35 minutes ago, TimG said: You're entitled to your view, Massimo; I have mine. On with the story.

The story was not about creating emotions but about using a monitor to take photos and the challenges that presents with a Sony camera; on top of that, there were some considerations about using Slog3 to take photos (as the set-up has limitations).

With regards to the image quality statements: look at the video of the OP on a proper HDR set and you will see that 1. the whites are clipped and 2. sand and other whitish objects look blue. This indicates that whatever he is doing, either with the monitor underwater or in post-processing, has issues. Hence he should abstain from telling others (me), who had noise in the frame because it was super dark, who were finning in a 3-knot current with 5-10 metres of visibility and no light, and who then got a comment on stability, that there was some technical error when indeed there was not and the issues were different. I guess the OP has qualified himself with his statement. I have now looked at his videos, and all are clipped and with colours that are off, which is an outcome of wanting to use Slog3/Sgamut.

Looking at the Sony documentation, there are a couple of relevant notes about Slog3 on the A1:

When using S-Log2 or S-Log3 gamma, the noise becomes more noticeable compared to when using other gammas. If the noise still is significant even after processing pictures, it may be improved by shooting with a brighter setting. However, the dynamic range becomes narrower accordingly when you shoot with a brighter setting. We recommend checking the picture in advance by test shooting when using S-Log2 or S-Log3. Setting [ITU709(800%)], [S-Log2] or [S-Log3] may cause an error in the white balance custom setup. In this case, perform custom setup with a gamma other than [ITU709(800%)], [S-Log2], or [S-Log3] first, and then reselect [ITU709(800%)], [S-Log2], or [S-Log3] gamma. S-Gamut, S-Gamut3.Cine, and S-Gamut3 are color spaces exclusive to Sony.
However, this camera's S-Gamut setting does not support the whole S-Gamut color space; it is a setting to achieve a color reproduction equivalent to S-Gamut.

What Sony is saying here is that S-Gamut in a consumer camera is an approximation, that white balance may be off or fail outright (all of us have experienced that), and that you can overexpose to reduce noise (the point of the OP), but the dynamic range is then reduced (because Slog3 is internally underexposed). This is why most people shoot Slog3/Cine: all the colour channels remain more or less in check and you don't end up with blue sand. It is still possible that the white balance fails, though, something that does not happen with other picture profiles.

Edited March 12 by Interceptor121