If you are working with scene linear floating point values, make sure that 'Input Range' is set to 'Legal' (i.e. 0.0 = 0%, 0.18 = 18% and 1.0 = 100%), then set the min and max inputs you are after, e.g. min 0, max 16.3 in the example you gave, or min -1, max 16.3 if you want to go into the superblacks. If you set the output gamma to Scene Linear Reflectance, then make sure that the output range is also set to 'legal' and generate a 1D cube LUT; you should end up with a file which smoothly and linearly progresses from 0.0 to 1.0 (if you are using the 'general' preset), or from the minimum to the maximum values if you are using a preset which offers input scaling. For floating point linear values as input, I would expect that 'input range' should always be set to 'legal' - I've just tested that myself to be sure. What the output range should be depends upon the output gamma/tone curve and the choice of software.

Testing on various bits of hardware and software, I have come to realise that (when it comes to LUTs) different manufacturers take different interpretations of what legal / full / data / extended means for input and output values. I've been made aware that my interpretation (legal: floating point 0 = 0%, floating point 1.0 = 100%; data: floating point 0 = -7%, floating point 1.0 = 109%, whether on input or output) is not the same as can be found in other software. My feeling is that as long as I am consistent with it, it is the approach I have taken in explaining LUTCalc usage, and it is logical and makes sense to me. With that in mind, Resolve, Lumetri and Color Finale all seem to have differing expectations of the input and output ranges of LUTs, so over time I have added presets that hopefully give consistent results. I do notice that setting the input gamma to scene linear does not automatically change the range to 'legal'.
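To make that legal/data interpretation concrete, here is a small sketch of the mapping as I describe it above. This is my own illustration, not LUTCalc code; it simply takes the 'data' endpoints as exactly -7% and 109% (the approximate 10-bit values (0-64)/876 and (1023-64)/876):

```python
# Sketch of the legal vs data range interpretation described above.
# Assumption: 'data' endpoints are taken as exactly -7% and 109%.

def legal_to_percent(x):
    """Legal range: floating point 0.0 = 0%, 1.0 = 100%."""
    return x * 100.0

def data_to_percent(x):
    """Data range: floating point 0.0 = -7%, 1.0 = 109%."""
    return -7.0 + x * (109.0 - -7.0)

def legal_to_data(x):
    """Re-express a legal-range value on the data (full range) scale."""
    return (legal_to_percent(x) - -7.0) / 116.0

print(legal_to_percent(0.18))  # 18% grey
print(data_to_percent(0.0))    # superblack floor, -7.0
print(legal_to_data(1.0))      # legal 100% sits below data 1.0
```

The point of the third function is the incompatibility itself: a value that one piece of software reads as 100% white, another will read as roughly 0.92 on its own scale.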
First off, I have to say that the cube format is the one I work with most, and the one which LUTCalc was initially designed around. As I have learned about other formats I have written parsers for them and added builders, which I then test out (although, as you found with spi3d, I have been known to change things which cause breakages :-) ), but .cube is the most mature format in there, so it is the one I'm going to use as an example. There are at least two different and incompatible ways to express input values in .cube files: one used by Adobe/Lumetri, the other by Resolve. As each bit of software has a hissy fit if you use the wrong approach, I have set the default 'General Cube LUT' to not include input scaling. If you choose 'DaVinci Resolve (.cube)' or 'Lumetri/Speedgrade (.cube)', an 'input scaling' box should come up. For other bits of software it is, for now, a matter of 'suck it and see' between those two approaches (Color Finale takes the Lumetri approach in its preset).
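As an illustration of the two conventions (a sketch of my own, not LUTCalc's actual builder code): the Adobe/IRIDAS-style .cube header declares the input domain with DOMAIN_MIN/DOMAIN_MAX lines, while Resolve-style files use a LUT_1D_INPUT_RANGE (or LUT_3D_INPUT_RANGE) line instead:

```python
# Minimal 1D .cube writer showing the two incompatible input-scaling
# header styles. Function name and structure are my own illustration.

def build_cube_1d(size, lo, hi, style="adobe"):
    lines = ['TITLE "identity"', f"LUT_1D_SIZE {size}"]
    if style == "adobe":
        # Adobe/IRIDAS convention: per-channel domain keywords
        lines.append(f"DOMAIN_MIN {lo} {lo} {lo}")
        lines.append(f"DOMAIN_MAX {hi} {hi} {hi}")
    elif style == "resolve":
        # Resolve convention: a single input range line
        lines.append(f"LUT_1D_INPUT_RANGE {lo} {hi}")
    # identity ramp from lo to hi across the table
    for i in range(size):
        v = lo + (hi - lo) * i / (size - 1)
        lines.append(f"{v:.6f} {v:.6f} {v:.6f}")
    return "\n".join(lines) + "\n"

print(build_cube_1d(5, 0.0, 16.3, style="resolve"))
```

Feed the "wrong" header style to either application and you get exactly the hissy fit described above: the scaling line is ignored or rejected, and the LUT is applied over a 0-1 domain instead of the intended one.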