RadianceTest
The RadianceTest recipe tests renderer outputs for consistency with physical principles. It varies parameters such as the distance between the light and the target, the distance between the target and the camera, the orientation of the target, and the nature of the light source, and checks for the expected changes in output magnitude.
(Image: the scene as rendered by PBRT.)
(Image: the scene as rendered by Mitsuba.)
The parent scene contains a point light with unit intensity at all wavelengths, shining on the surface of a square reflector. The vector connecting the point light to the center of the reflector is normal to the reflector's surface. The reflector has a matte material with perfect reflectance at all wavelengths. A camera views the reflector from "above" and at an angle. The vector connecting the center of the camera to the center of the reflector makes a 45 degree angle with the reflector's surface normal.
The scene has 8 conditions that affect how much light should reach the reflector and the camera (the expected radiance ratios are tallied in the sketch after this list):
- The "reference" condition establishes a baseline for comparison, with several parameters set to "usual" values:
- The point light is 100 meters (or arbitrary distance units) from the reflector.
- The vector connecting the point light to the center of the reflector is normal to the reflector's surface.
- The camera is 7.1 meters from the reflector.
- The vector connecting the center of the camera to the center of the reflector makes a 45 degree angle with the reflector's surface normal.
- The light uses a uniform "white" spectrum with unit intensity at all wavelengths, sampled every 5nm between 300nm and 800nm.
- The "2xFarLight" condition doubles the distance from the point light to the reflector. As a consequence, the light that reaches the reflector is 1/4 as intense, and the reflection towards the camera has 1/4 of its usual radiance.
- The "2xFarCamera" condition restores the point light to its usual distance, but moves the camera to twice its normal distance from the reflector. The result is that the light reaching the camera has its usual radiance, but covers 1/4 of its usual area.
- The "rotateReflector" condition restores the camera to its usual distance, but rotates the reflector about its center by 41.4 degrees, so that the vector connecting the point light to the center of the reflector makes a 41.4 degree angle with the reflector's surface normal. The result is that the reflector catches less illumination from the point light, and the reflection towards the camera has about 3/4 of its usual radiance. Note that cos(41.4) approximately equals 3/4, so this result is consistent with Lambert's cosine law for the illumination falling on the reflector.
- The "rotateCamera" condition restores the reflector to its usual orientation, but rotates the camera in a circular orbit about the reflector, so that the vector connecting the center of the camera to the center of the reflector makes a 10 degree angle with the reflector's surface normal. The distance from the camera to the reflector is unchanged. The reflector appears larger because the camera views it nearly straight-on, but the reflection towards the camera has the usual radiance.
- The "sparseSpectrum" condition restores all objects to their usual locations, but replaces the light spectrum with a "sparse" spectrum sampled every 10nm instead of every 5nm. The light reflected towards the camera has its usual radiance, over its usual area. This result suggests that the renderers interpret illuminant spectra in units of Power per Unit Wavelength, as opposed to Power per Wavelength Band.
- The "unitAreaLight" condition restores the illuminant spectrum to the usual 5nm sampling, but replaces the point light with a flat, circular, diffuse area light with unit area. The vector connecting the center of the area light with the center of the reflector is normal to both the area light and the reflector. The total power emitted towards the reflector is approximately the same as in the "reference" condition. The light reflected towards the camera has its usual radiance, over its usual area.
- The "halfAreaLight" condition reduces the surface area of the area light by half. The light reflected towards the camera has 1/2 of its usual radiance, over its usual area.
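The expected scale factors in this list follow from the inverse square law and Lambert's cosine law. As a sanity check, here is a minimal Matlab sketch that tallies the expected radiance ratio for each condition relative to the "reference" condition (the struct and field names are ours, for illustration only):

```matlab
% Expected radiance towards the camera, relative to the "reference" condition.
expected.reference       = 1;
expected.x2FarLight      = 1/4;          % inverse square law: (1/2)^2
expected.x2FarCamera     = 1;            % radiance unchanged; image area shrinks to 1/4
expected.rotateReflector = cosd(41.4);   % Lambert's cosine law, about 3/4
expected.rotateCamera    = 1;            % Lambertian radiance is view-angle independent
expected.sparseSpectrum  = 1;            % Power per Unit Wavelength convention
expected.unitAreaLight   = 1;            % same total power towards the reflector
expected.halfAreaLight   = 1/2;          % half the emitting area, half the power
disp(expected)
```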
For PBRT and Mitsuba, the results of all conditions are consistent with physical expectations for point lights, area lights, diffuse reflectors, and pinhole cameras.
The executive script MakeRadianceTest.m produced the images above. It is located here:
(path-to-RenderToolbox3)/ExampleScenes/RadianceTest/MakeRadianceTest.m
The executive script MakeRadianceTestFigure.m produces a figure with a summary of results from the RadianceTest conditions.
Renderings and data from each condition are shown in separate rows. The column on the left shows renderings from PBRT. The middle column shows renderings from Mitsuba. By visual inspection, the renderers produce similar renderings.
The column on the right shows power reflected towards the camera, at one wavelength, across the width of each rendering, at the location of the dashed orange or blue line. The power has a central plateau in all conditions. The width and height of the plateau are consistent with the physical expectations for each condition: remaining constant or shrinking by a predictable factor compared to the "reference" condition, as described above.
We would like to know the specific radiometric units used by the renderers PBRT and Mitsuba, including the units they expect for input spectra and the units they write to output files. Determining these units relies on some of the things we learned from this RadianceTest and also the ScalingTest.
A key fact we learned in this RadianceTest is that the output of both PBRT and Mitsuba behaves (as it should) like illuminance (units of Power per Unit Area) in the sensor plane, for a rather simple camera model, as long as pixel size and the properties of the samplers/integrators are held fixed. (See ScalingTest for information on how those factors affect the output; they are renderer dependent.) Below, we use the term "fixed camera properties" for the case where we hold all the camera, pixel size, and sampler/integrator properties fixed, because what we want to do here is figure out the radiometric units that go with such "fixed camera properties".
For fixed camera properties, there is a one-to-one mapping between sensor illuminance and the radiance (Power per Area per Sr) of surfaces in the scene. This means that for fixed camera properties, the radiance of surfaces in the scene is related to the image produced by the renderer by a single multiplicative scale factor. We'll call this factor the rendererRadiometricUnitFactor, and what we want to do here is figure out what that factor is.
In this RadianceTest, we render a reference condition that contains a perfect Lambertian diffuser with unit reflectance at all wavelengths, under a point source with unit power at the same set of wavelengths. The point source is 100 distance units from the diffuser, and the camera is 7.1 distance units from the surface. The camera distance and angle don't matter, however, as the "2xFarCamera" and "rotateCamera" conditions above demonstrate.
To calculate the rendererRadiometricUnitFactor, we start with the power of the point light source in our rendered reference scene. This is in units of Power per Unit Wavelength.
```matlab
pointSource_PowerPerUnitWavelength = 1;
```
We also need the distance from the point source to the Lambertian diffuser.
```matlab
distanceToPointSource = 100;
```
Now we can compute the irradiance arriving at a unit area on the diffuser. The light from the point source is spread out over a sphere of area 4*pi*distanceToPointSource^2, so the Power per Unit Area on the diffuser is the point source power divided by this quantity.
```matlab
unitAreaOnDiffuser = 1;
irradiance_PowerPerUnitAreaUnitWl = pointSource_PowerPerUnitWavelength/(4*pi*(distanceToPointSource^2));
```
The light coming off the diffuser scatters over a hemisphere. We want to know how much goes through one steradian. This involves integrating the light reflected from a Lambertian surface over the hemisphere. For the derivation, see for example Wyszecki and Stiles, Color Science, 2nd edition, pp. 273-274. The key result is Equation 29(4.3.6), which gives that the irradiance coming off a Lambertian surface is pi times its radiance.
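For reference, that relation follows from integrating a constant radiance L over the hemisphere, weighted by the cos(theta) projected-area factor:

```latex
M \;=\; \int_{\Omega} L \cos\theta \, d\omega
  \;=\; L \int_{0}^{2\pi} \!\! \int_{0}^{\pi/2} \cos\theta \, \sin\theta \, d\theta \, d\phi
  \;=\; \pi L
```

Solving M = pi*L for L gives the division by pi in the next line.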
```matlab
radiance_PowerPerAreaSrUnitWl = irradiance_PowerPerUnitAreaUnitWl/pi;
```
What we want is to scale the output image so that the number in the image within the rendered Lambertian surface corresponds to that surface's radiance. At least, that is what I currently think we want to do. That way, if one simply displays the rendered image on a monitor, treating the numbers as radiance directly, the radiance coming off the monitor will exactly match what we would have gotten were we standing and looking at the scene.
To put it another way, if we do this then we believe the following should be the case. Suppose we took some physical scene and measured the spectrum of the light coming off all of the light sources in physical units, say in Watts/nanometer for point sources and Watts/(meter^2*nanometer) for uniform area lights. And then we measured the position, size, and reflectance of every object in the scene, using meters as the unit of length. Then we could render the scene, apply the computed scale factor, and display the result on the monitor. What comes off the monitor from any angular position should then be a metamer to what would have reached the eye from the same angular position in the original scene.
```matlab
renderedIrradiance_PixelValue = GetTheValueOfAPixelInsideTheLambertianDiffuser();
rendererRadiometricUnitFactor = radiance_PowerPerAreaSrUnitWl/renderedIrradiance_PixelValue;
```
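Plugging in the reference values gives concrete numbers (the pixel value in the final comment is hypothetical, for illustration only):

```matlab
% Worked numbers for the reference condition:
irradiance = 1 / (4*pi*100^2)   % about 7.9577e-06, Power/(Area*Wl)
radiance = irradiance / pi      % about 2.5330e-06, Power/(Area*Sr*Wl)
% A hypothetical rendered pixel value of 0.01 would then imply
% rendererRadiometricUnitFactor = radiance/0.01, about 2.5e-4.
```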
The rendered irradiance is in arbitrary units determined by the camera properties of the renderer as well as other renderer conventions. The corresponding radiance is in units of power/(area*sr*wl). The rendererRadiometricUnitFactor brings the rendered irradiance into physical radiance units.
**Particular units of power, area, and wavelength.** The above carries through the logic without specifying particular units for Power, Area, and Wavelength. That is because these can be anything the user chooses, as long as the choice is consistent. For example, the renderers specify scene dimensions in arbitrary units of length, so the user can decide whether these represent mm, cm, m, feet, etc., as long as the choice is consistent throughout a project. Note that once the length units are set (e.g., cm), the area units are implied (e.g., cm^2). Similarly, the user can decide whether the numbers passed to the renderer to specify light power are Watts, uWatts, quanta/sec, etc., and what the wavelength numbers represent.
**Note on wavelength sampling convention.** The renderers are happy to spline input spectra to the wavelength sampling that they have been compiled to use. And, they do this using a Power per Unit Wavelength convention, so that the value at a particular wavelength does not change with the wavelength sample spacing. Thus, for the renderers, the natural convention with respect to wavelength sampling is to treat light power as being in units of Power per Unit Wavelength. That is the convention we've adopted above and more generally for RenderToolbox3. Typically we think in nanometers, but the renderers really don't care.

Note that this convention differs from the Psychtoolbox, which uses the convention that light power is in units of Power per Wavelength Band. This difference will matter to us (a lot) when we convert images produced by RTB into (e.g.) XYZ or LMS images for use with PTB calibration and display routines, as we will have to handle this convention switch properly in the code that integrates the output hyperspectral images from RTB against sensor sensitivities. Basically, in the code that converts to XYZ or other sensor coordinates, we should multiply the answer we get from RTB by the delta wavelength sampling that the renderer is using. We would typically omit that factor in a PTB summation over wavelength when computing sensor responses, because the PTB expects Power per Wavelength Band.
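As a concrete sketch of that convention switch, here is a minimal Matlab example (the variable names are ours, not part of RenderToolbox3 or PTB; T_xyz stands for color matching functions already splined to the renderer's wavelength sampling):

```matlab
% Convert an RTB hyperspectral image to XYZ, handling the convention switch.
%   hyperspectralImage -- [nRows x nCols x nWls] radiance from RTB,
%                         in Power per Unit Wavelength
%   T_xyz              -- [3 x nWls] color matching functions, splined to
%                         the renderer's wavelength sampling
%   deltaWl            -- the renderer's wavelength spacing, e.g. 5 (nm)
[nRows, nCols, nWls] = size(hyperspectralImage);
radianceList = reshape(hyperspectralImage, nRows*nCols, nWls)';
% Multiply by deltaWl: Power per Unit Wavelength -> Power per Wavelength Band,
% which is what a PTB-style summation against sensitivities expects.
xyzList = T_xyz * (radianceList * deltaWl);
xyzImage = reshape(xyzList', nRows, nCols, 3);
```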
**Automatic conversion to physical radiance units.** RenderToolbox3 automatically scales the outputs from PBRT and Mitsuba into physical radiance units. This makes it easier to compare outputs between renderers or to swap one renderer for the other. RenderToolbox3 initially uses a default rendererRadiometricUnitFactor for each renderer. These defaults should be sufficient for configurations that follow the basic installation instructions and for scenes that are similar to the RenderToolbox3 example scenes. New radiometric unit factors can be computed for unusual situations, using the ComputeRadiometricScaleFactors() function, located here:
(path-to-RenderToolbox3)/BatchRender/ComputeRadiometricScaleFactors.m
This will store a new radiometric unit factor for each renderer as a Matlab preference value. These will be applied automatically by the BatchRender() function. They can also be applied to renderer outputs "by hand" using the functions PBRTDataToRadiance() and MitsubaDataToRadiance(), located here:
(path-to-RenderToolbox3)/BatchRender/PBRTDataToRadiance.m
(path-to-RenderToolbox3)/BatchRender/MitsubaDataToRadiance.m
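A hedged usage sketch, assuming these functions accept the raw multispectral data array as their first argument (the argument lists here are assumptions; check each function's help text for the exact signature):

```matlab
% Hypothetical usage -- argument lists assumed, see "help PBRTDataToRadiance"
% and "help MitsubaDataToRadiance" for the real signatures.
pbrtRadiance = PBRTDataToRadiance(rawPBRTData);
mitsubaRadiance = MitsubaDataToRadiance(rawMitsubaData);
```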