
PBRT_V2 Summary Notes <24>: Film and ImageFilm

The Film Class

class Film {
public:
	// Film Interface
	Film(int xres, int yres)
		: xResolution(xres), yResolution(yres) { }
	virtual ~Film();

	
	virtual void AddSample(const CameraSample &sample,
		const Spectrum &L) = 0;
	virtual void Splat(const CameraSample &sample, const Spectrum &L) = 0;

	virtual void GetSampleExtent(int *xstart, int *xend,
		int *ystart, int *yend) const = 0;

	virtual void GetPixelExtent(int *xstart, int *xend,
		int *ystart, int *yend) const = 0;

	
	virtual void UpdateDisplay(int x0, int y0, int x1, int y1, float splatScale = 1.f);

	virtual void WriteImage(float splatScale = 1.f) = 0;

	// Film Public Data
	const int xResolution, yResolution;
};

Purpose of the class:

(The Film class models the film in a camera; its main job is to produce the final image.)

The type of film or sensor in a camera has a dramatic effect on the way that incident light is eventually transformed into colors in an image. In pbrt, the Film class models the sensing device in the simulated camera. After the radiance is found for each camera ray, a Film implementation determines the sample’s contribution to the nearby pixels and updates its representation of the image. When the main rendering loop exits, the Film typically writes the final image to a file on disk.

This section only provides a single Film implementation. It applies the pixel reconstruction equation to compute final pixel values and writes the image to disk with floating-point color values. For a physically based renderer, creating images in a floating-point format provides more flexibility in how the output can be used than if a typical image format with 8-bit unsigned integer values is used; floating-point formats avoid the substantial loss of information that comes from quantization to 8-bit image formats.

In order to display such images on modern display devices, however, it is necessary to map these floating-point pixel values to discrete values for display. For example, computer monitors generally expect the color of each pixel to be described by an RGB color triple, not an arbitrary spectral power distribution. Spectra described by general basis function coefficients must therefore be converted to an RGB representation before they can be displayed. A related problem is that displays have a substantially smaller range of displayable radiance values than the range present in many real-world scenes. Therefore, the pixel values must be mapped to the displayable range in a way that causes the final displayed image to appear as close as possible to the way it would appear on an ideal display device without this limitation.
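As a rough illustration of that final mapping step, here is a minimal sketch (not pbrt code; the 2.2 gamma is a crude stand-in for a real display transfer function):

#include <algorithm>
#include <cmath>
#include <cstdint>

// Map a floating-point pixel component to an 8-bit display value:
// clamp to the displayable range, apply an approximate display gamma,
// and quantize to 256 discrete levels.
inline uint8_t ToDisplayByte(float v) {
    v = std::max(0.f, std::min(1.f, v));   // clamp out-of-range radiance
    v = std::pow(v, 1.f / 2.2f);           // approximate gamma encoding
    return (uint8_t)(v * 255.f + 0.5f);    // quantize to 8 bits
}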

1. Constructor

Film(int xres, int yres)
    : xResolution(xres), yResolution(yres) { }

Purpose:

The Film constructor must be given the overall resolution of the image in the x and y directions; these are stored in the public member variables Film::xResolution and Film::yResolution. The Cameras in Chapter 6 need these values to compute some of the camera-related transformations, such as the raster-to-camera-space transformations.

2. virtual void AddSample(const CameraSample &sample, const Spectrum &L) = 0;

Purpose:

(This method provides data to the Film: each sample and its corresponding radiance contribute to the nearby pixels, and a pixel's final color is a weighted average of the contributions of the samples around it.)

There are two methods for providing data to the film. The first is driven by Samples generated by the Sampler; for each such sample, Film::AddSample() is called. It takes a sample and corresponding radiance value, applies the reconstruction filter, and updates the stored image. The expectation with this method is that the sampling density around a pixel is not related to the pixel's final value. (In other words, the final pixel value is effectively a weighted average of the nearby samples.) Under this assumption, the pixel filtering equation can be used to compute the final values.
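As a small worked example of that weighted average: if three samples near a pixel have filter weights 0.5, 1.0, and 0.5 and radiance values 2, 4, and 2, the pixel filtering equation gives (0.5·2 + 1.0·4 + 0.5·2) / (0.5 + 1.0 + 0.5) = 6 / 2 = 3.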

3. virtual void Splat(const CameraSample &sample, const Spectrum &L) = 0;

Purpose:

(This method also updates pixel colors from samples, but instead of taking a weighted average the contributions are simply summed: the more samples land near a pixel, the brighter that pixel becomes.)

The second method for providing data to the film is Splat(). Splatting similarly updates pixel values, but rather than computing the final pixel value as a weighted average, splats are simply summed. Thus, the more splats that are around a given pixel, the brighter the pixel will be. This method is used by light transport algorithms like the one in the MetropolisRenderer, where more samples are generated in areas where the image is brighter. It would also be used by bidirectional light transport algorithms that followed paths starting from lights and then projected illuminated points onto the film.
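Continuing the small example above: splatting the same three samples would simply add 2 + 4 + 2 = 8 to the pixel, with no normalization by filter weights, so doubling the number of splatted samples would double the pixel's brightness.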

4. virtual void GetSampleExtent(int *xstart, int *xend, int *ystart, int *yend) const = 0;

Purpose:

(Returns the range over which samples should be generated for the Film.)

The Film is responsible for determining the range of integer pixel values that the Sampler is responsible for generating samples for. While this range would be from (0, 0) to (xResolution − 1, yResolution − 1) for a simple film implementation, in general it is necessary to sample the image plane at locations slightly beyond the edges of the final image due to the finite extent of pixel reconstruction filters. This range is returned by the GetSampleExtent() method.

5. virtual void GetPixelExtent(int *xstart, int *xend, int *ystart, int *yend) const = 0;

Purpose:

(Returns the Film's pixel extent.)

GetPixelExtent() provides the range of pixels in the actual image.

The ImageFilm Class

class ImageFilm : public Film {
public:
    // ImageFilm Public Methods
    ImageFilm(int xres, int yres, Filter *filt, const float crop[4],
              const string &filename, bool openWindow);
    ~ImageFilm() {
        delete pixels;
        delete filter;
        delete[] filterTable;
    }

	// Base class interface: purpose as described above
	// Approach: p. 408
    void AddSample(const CameraSample &sample, const Spectrum &L);
    void Splat(const CameraSample &sample, const Spectrum &L);
    void GetSampleExtent(int *xstart, int *xend, int *ystart, int *yend) const;
    void GetPixelExtent(int *xstart, int *xend, int *ystart, int *yend) const;

	// Base class interface: purpose as described above
	// Approach: p. 412
    void WriteImage(float splatScale);
    void UpdateDisplay(int x0, int y0, int x1, int y1, float splatScale);
private:
    // ImageFilm Private Data
    Filter *filter;
    float cropWindow[4];
    string filename;
    int xPixelStart, yPixelStart, xPixelCount, yPixelCount;
    struct Pixel {
        Pixel() {
            for (int i = 0; i < 3; ++i) Lxyz[i] = splatXYZ[i] = 0.f;
            weightSum = 0.f;
        }
        float Lxyz[3];
        float weightSum;
        float splatXYZ[3];
        float pad;
    };
    BlockedArray<Pixel> *pixels;
    float *filterTable;
};

Purpose of the class:

(ImageFilm uses a filter to compute a weighted average of the samples that form each pixel, and finally writes the image to disk.)

The ImageFilm is a Film implementation that filters image sample values with a given reconstruction filter and writes the resulting image to disk.

1. Constructor

ImageFilm::ImageFilm(int xres, int yres, Filter *filt, const float crop[4],
                     const string &fn, bool openWindow)
    : Film(xres, yres) {
    filter = filt;
    memcpy(cropWindow, crop, 4 * sizeof(float));
    filename = fn;
    // Compute film image extent
    xPixelStart = Ceil2Int(xResolution * cropWindow[0]);
    xPixelCount = max(1, Ceil2Int(xResolution * cropWindow[1]) - xPixelStart);
    yPixelStart = Ceil2Int(yResolution * cropWindow[2]);
    yPixelCount = max(1, Ceil2Int(yResolution * cropWindow[3]) - yPixelStart);

    // Allocate film image storage
    pixels = new BlockedArray<Pixel>(xPixelCount, yPixelCount);
	
	
    // Precompute filter weight table
#define FILTER_TABLE_SIZE 16
    filterTable = new float[FILTER_TABLE_SIZE * FILTER_TABLE_SIZE];
    float *ftp = filterTable;
    for (int y = 0; y < FILTER_TABLE_SIZE; ++y) {
        float fy = ((float)y + .5f) *
                   filter->yWidth / FILTER_TABLE_SIZE;
        for (int x = 0; x < FILTER_TABLE_SIZE; ++x) {
            float fx = ((float)x + .5f) *
                       filter->xWidth / FILTER_TABLE_SIZE;
            *ftp++ = filter->Evaluate(fx, fy);
        }
    }

    // Possibly open window for image display
    if (openWindow || PbrtOptions.openWindow) {
        Warning("Support for opening image display window not available in this build.");
    }
}

Purpose:

(Besides the image resolution, the constructor stores several extra parameters: a filter function, a crop window that specifies which part of the image to render, and the filename of the output image.)

The ImageFilm constructor takes a number of extra parameters beyond the overall image resolution, including a filter function, a crop window that specifies a subrectangle of the pixels to be rendered, and the filename for the output image. On some systems, the ImageFilm can be configured to open a window and show the image as it's being rendered; the openWindow parameter to the constructor controls this.

Details

a. 

xPixelStart = Ceil2Int(xResolution * cropWindow[0]);
xPixelCount = max(1, Ceil2Int(xResolution * cropWindow[1]) - xPixelStart);
yPixelStart = Ceil2Int(yResolution * cropWindow[2]);
yPixelCount = max(1, Ceil2Int(yResolution * cropWindow[3]) - yPixelStart);

Purpose:

(Computes the actual rendered region from xResolution, yResolution, and cropWindow.)

In conjunction with the overall image resolution, the crop window gives the extent of pixels that need to be actually stored and written out. Crop windows are useful for debugging or for breaking a large image into small pieces that can be rendered on different computers and reassembled later. The crop window is specified in NDC space, with each coordinate ranging from zero to one (Figure 7.46). ImageFilm::xPixelStart and ImageFilm::yPixelStart store the pixel position of the upper-left corner of the crop window, and ImageFilm::xPixelCount and ImageFilm::yPixelCount give the total number of pixels in each direction.

Figure 7.46: The image crop window specifies a subset of the image to be rendered. It is given in NDC space, with coordinates ranging from (0, 0) to (1, 1). The ImageFilm class only allocates space for and stores pixel values in the region inside the crop window.
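As a concrete example (numbers chosen for illustration): with xResolution = 640 and an x crop range of [0.25, 0.75], xPixelStart = Ceil2Int(640 × 0.25) = 160 and xPixelCount = max(1, Ceil2Int(640 × 0.75) − 160) = 480 − 160 = 320, so only the middle 320 columns of pixels are allocated and stored.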

b.

pixels = new BlockedArray<Pixel>(xPixelCount, yPixelCount);

Purpose:

(Allocates the Pixel array for the output image.)

Given the pixel resolution of the (possibly cropped) image, the constructor allocates an array of Pixel structures, one for each pixel.
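BlockedArray stores the 2D pixel array in small square blocks rather than in scanline order, so that pixels that are close in both x and y also tend to be close in memory. The following is a minimal sketch of that idea (an illustration only, not pbrt's actual BlockedArray):

// A 2D array laid out in (1 << logBlockSize)-wide square blocks.
template <typename T, int logBlockSize = 2>
class Blocked2D {
public:
    Blocked2D(int nu, int nv) {
        int blockSize = 1 << logBlockSize;
        uBlocks = (nu + blockSize - 1) >> logBlockSize;
        int vBlocks = (nv + blockSize - 1) >> logBlockSize;
        data = new T[uBlocks * vBlocks * blockSize * blockSize]();
    }
    ~Blocked2D() { delete[] data; }
    T &operator()(int u, int v) {
        int bu = u >> logBlockSize, bv = v >> logBlockSize;  // block indices
        int ou = u & ((1 << logBlockSize) - 1);              // offsets in block
        int ov = v & ((1 << logBlockSize) - 1);
        return data[((uBlocks * bv + bu) << (2 * logBlockSize)) +
                    (ov << logBlockSize) + ou];
    }
private:
    T *data;
    int uBlocks;
};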

c.

struct Pixel {
    Pixel() {
        for (int i = 0; i < 3; ++i) Lxyz[i] = splatXYZ[i] = 0.f;
        weightSum = 0.f;
    }
    float Lxyz[3];
    float weightSum;
    float splatXYZ[3];
    float pad;
};

Purpose:

(The pixel filtering equation that forms a pixel's value:

$$ I(x, y) = \frac{\sum_i f(x - x_i,\, y - y_i)\, L(x_i, y_i)}{\sum_i f(x - x_i,\, y - y_i)} $$

where L(x_i, y_i) is the radiance value of the ith sample located at (x_i, y_i), and f is a filter function.

This is the weighted average of the samples under the filter, which can be understood as:

I = (f1 * L1 + f2 * L2 + f3 * L3) / (f1 + f2 + f3)

  = ( f1 / (f1 + f2 + f3) ) * L1 + ( f2 / (f1 + f2 + f3) ) * L2 + ( f3 / (f1 + f2 + f3) ) * L3

so f1 / (f1 + f2 + f3), f2 / (f1 + f2 + f3), and so on are the weights.

Within a Pixel, Lxyz stores the numerator of the equation above and weightSum stores its denominator, while splatXYZ stores the plain sum of all splatted XYZ values, with no weighted averaging involved.)

The running weighted sums of pixel radiance values are stored in the Lxyz member variable, and weightSum holds the sum of filter weight values for the sample contributions to the pixel. splatXYZ holds the (unweighted) sum of sample splats. The pad member is unused; its sole purpose is to ensure that the Pixel structure is 32 bytes large, rather than 28 as it would be otherwise. This ensures that a Pixel won’t straddle a cache line, so that no more than one cache miss will be incurred when a Pixel is accessed (as long as the first Pixel in the array is allocated at the start of a cache line).
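The 32-byte claim is easy to verify with a quick standalone check (a hypothetical snippet, not part of pbrt, using C++11's static_assert; Pixel itself is private to ImageFilm, so an identical struct is checked instead):

// 3 floats + 1 float + 3 floats + 1 padding float = 8 floats = 32 bytes.
struct PixelLayout {
    float Lxyz[3];
    float weightSum;
    float splatXYZ[3];
    float pad;
};
static_assert(sizeof(PixelLayout) == 32, "expected 8 floats = 32 bytes");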

d.

(ImageFilm stores pixel colors as XYZ values.)

The ImageFilm uses XYZ color values to store pixel colors

Two natural alternatives would be to use Spectrum values, or to store RGB color. Here, it isn't worthwhile to store complete Spectrum values, even when doing full spectral rendering. Because the final colors written to the output file don't include the full set of Spectrum samples, converting to a tristimulus value here doesn't represent a loss of information versus storing Spectrums and converting to a tristimulus value on image output. Not storing complete Spectrum values in this case can save a substantial amount of memory if the Spectrum has a large number of samples.

We have chosen to use XYZ color rather than RGB to emphasize that XYZ is a display-independent representation of color, while RGB requires assuming a particular set of display response curves (Section 5.2.2). (In the end, we will, however, have to convert to RGB, since few image file formats store XYZ color.)

e.

#define FILTER_TABLE_SIZE 16
    filterTable = new float[FILTER_TABLE_SIZE * FILTER_TABLE_SIZE];
    float *ftp = filterTable;
    for (int y = 0; y < FILTER_TABLE_SIZE; ++y) {
        float fy = ((float)y + .5f) * filter->yWidth / FILTER_TABLE_SIZE;
        for (int x = 0; x < FILTER_TABLE_SIZE; ++x) {
            float fx = ((float)x + .5f) * filter->xWidth / FILTER_TABLE_SIZE;
            *ftp++ = filter->Evaluate(fx, fy);
        }
    }

Purpose:

(The filter's extent (width, height) is divided into a 16×16 grid, and the filter weight for each cell is precomputed and stored. This is mainly to reduce cost: AddSample() does not have to call filter->Evaluate() repeatedly and can look the weight up in the table instead. Within this 16×16 array the indices run from 0 to 255, and a larger index corresponds to a point farther from the filter's center.)

With typical filter settings, every image sample may contribute to 16 or more pixels in the final image. Particularly for simple scenes, where relatively little time is spent on ray intersection testing and shading computations, the time spent updating the image for each sample can be significant. Therefore, the ImageFilm precomputes a table of filter values so that the Film::AddSample() method can avoid the expense of virtual function calls to the Filter::Evaluate() method as well as the expense of evaluating the filter and can instead use values from the table for filtering. The error introduced by not evaluating the filter at each sample's precise location isn't noticeable in practice.

2. void AddSample(const CameraSample &sample, const Spectrum &L);


void ImageFilm::AddSample(const CameraSample &sample,
                          const Spectrum &L) {
    // Compute sample's raster extent
    float dimageX = sample.imageX - 0.5f;
    float dimageY = sample.imageY - 0.5f;

    int x0 = Ceil2Int (dimageX - filter->xWidth);
    int x1 = Floor2Int(dimageX + filter->xWidth);
    int y0 = Ceil2Int (dimageY - filter->yWidth);
    int y1 = Floor2Int(dimageY + filter->yWidth);

    x0 = max(x0, xPixelStart);
    x1 = min(x1, xPixelStart + xPixelCount - 1);
    y0 = max(y0, yPixelStart);
    y1 = min(y1, yPixelStart + yPixelCount - 1);

    if ((x1-x0) < 0 || (y1-y0) < 0)
    {
        //PBRT_SAMPLE_OUTSIDE_IMAGE_EXTENT(const_cast<CameraSample *>(&sample));
        return;
    }

    // Loop over filter support and add sample to pixel arrays
    float xyz[3];
	// Convert the sample's Spectrum (RGB by default) to XYZ
    L.ToXYZ(xyz);

    // Precompute $x$ and $y$ filter table offsets
    int *ifx = ALLOCA(int, x1 - x0 + 1);
    for (int x = x0; x <= x1; ++x) {
        float fx = fabsf((x - dimageX) *
                         filter->invXWidth * FILTER_TABLE_SIZE);

        ifx[x-x0] = min(Floor2Int(fx), FILTER_TABLE_SIZE-1);
    }

    int *ify = ALLOCA(int, y1 - y0 + 1);
    for (int y = y0; y <= y1; ++y) {
        float fy = fabsf((y - dimageY) *
                         filter->invYWidth * FILTER_TABLE_SIZE);
        ify[y-y0] = min(Floor2Int(fy), FILTER_TABLE_SIZE-1);
    }

    bool syncNeeded = (filter->xWidth > 0.5f || filter->yWidth > 0.5f);
    for (int y = y0; y <= y1; ++y) {
        for (int x = x0; x <= x1; ++x) {
            // Evaluate filter value at $(x,y)$ pixel
            int offset = ify[y-y0]*FILTER_TABLE_SIZE + ifx[x-x0];
            float filterWt = filterTable[offset];

            // Update pixel values with filtered sample contribution
            Pixel &pixel = (*pixels)(x - xPixelStart, y - yPixelStart);
            if (!syncNeeded) {
                pixel.Lxyz[0] += filterWt * xyz[0];
                pixel.Lxyz[1] += filterWt * xyz[1];
                pixel.Lxyz[2] += filterWt * xyz[2];
                pixel.weightSum += filterWt;
            }
            else {
                // Safely update _Lxyz_ and _weightSum_ even with concurrency
                AtomicAdd(&pixel.Lxyz[0], filterWt * xyz[0]);
                AtomicAdd(&pixel.Lxyz[1], filterWt * xyz[1]);
                AtomicAdd(&pixel.Lxyz[2], filterWt * xyz[2]);
                AtomicAdd(&pixel.weightSum, filterWt);
            }
        }
    }
}

Details:

a. Overall approach:

(The method computes each pixel's value by accumulating the numerator and the denominator of the pixel filtering equation separately, exactly as in the Pixel analysis above.)

To understand the operation of ImageFilm::AddSample(), first recall the pixel filtering equation:

$$ I(x, y) = \frac{\sum_i f(x - x_i,\, y - y_i)\, L(x_i, y_i)}{\sum_i f(x - x_i,\, y - y_i)} $$

It computes each pixel's value I(x, y) as the weighted sum of nearby samples' radiance values, using a filter function f to compute the weights. Because all of the Filters in pbrt have finite extent, this method starts by computing which pixels will be affected by the current sample. Then, turning the pixel filtering equation inside out, it updates two running sums for each pixel (x, y) that is affected by the sample. One sum accumulates the numerator of the pixel filtering equation and the other accumulates the denominator. When all of the samples have been processed, the final pixel values are computed by performing the division.

b.

// Compute sample's raster extent
float dimageX = sample.imageX - 0.5f;
float dimageY = sample.imageY - 0.5f;

int x0 = Ceil2Int (dimageX - filter->xWidth);
int x1 = Floor2Int(dimageX + filter->xWidth);
int y0 = Ceil2Int (dimageY - filter->yWidth);
int y1 = Floor2Int(dimageY + filter->yWidth);

x0 = max(x0, xPixelStart);
x1 = min(x1, xPixelStart + xPixelCount - 1);
y0 = max(y0, yPixelStart);
y1 = min(y1, yPixelStart + yPixelCount - 1);

if ((x1-x0) < 0 || (y1-y0) < 0) {
    //PBRT_SAMPLE_OUTSIDE_IMAGE_EXTENT(const_cast<CameraSample *>(&sample));
    return;
}

Purpose:

(Film::AddSample() receives a single Sample; the first step is to determine which pixels that sample affects. The code above computes the affected image region, (x0, y0) to (x1, y1).)

To find which pixels a sample potentially contributes to, Film::AddSample() converts the continuous sample coordinates to discrete coordinates by subtracting 0.5 from x and y. It then offsets this value by the filter width in each direction (Figure 7.47) and takes the ceiling of the minimum coordinates and the floor of the maximum, since pixels outside the bound of the extent are guaranteed to be unaffected by the sample.

Figure 7.47: Given an image sample at some position on the image plane (solid dot), it is necessary to determine which pixel values (empty dots) are affected by the sample’s contribution. This is done by taking the offsets in the x and y directions according to the pixel reconstruction filter’s width (solid lines) and finding the pixels inside this region.

c.

// Precompute $x$ and $y$ filter table offsets
int *ifx = ALLOCA(int, x1 - x0 + 1);
for (int x = x0; x <= x1; ++x) {
    float fx = fabsf((x - dimageX) * filter->invXWidth * FILTER_TABLE_SIZE);
    ifx[x-x0] = min(Floor2Int(fx), FILTER_TABLE_SIZE-1);
}

int *ify = ALLOCA(int, y1 - y0 + 1);
for (int y = y0; y <= y1; ++y) {
    float fy = fabsf((y - dimageY) * filter->invYWidth * FILTER_TABLE_SIZE);
    ify[y-y0] = min(Floor2Int(fy), FILTER_TABLE_SIZE-1);
}

Purpose:

(The constructor has already precomputed the 16×16 filter weights into a table, so obtaining a weight here only requires computing the table index:

fabsf((x - dimageX) * filter->invXWidth) lies in [0, 1]

fabsf((x - dimageX) * filter->invXWidth) * FILTER_TABLE_SIZE lies in [0, 16] )

Instead, the implementation retrieves the appropriate filter weight from the table.

To find the filter weight for a pixel (x′, y′) given the sample position (x, y), this routine computes the offset (x′ − x, y′ − y) and converts it into coordinates for the filter weights lookup table.

This can be done directly by dividing each component of the sample offset by the filter width in that direction, giving a value between zero and one, and then multiplying by the table size.

This process can be further optimized by noting that along each row of pixels in the x direction, the difference in y, and thus the y offset into the filter table, is constant. Analogously, for each column of pixels, the x offset is constant.

Therefore, before looping over the pixels here it’s possible to precompute these indices, saving repeated work in the loop.

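As a concrete example (values chosen for illustration): with FILTER_TABLE_SIZE = 16, filter->xWidth = 2 (so invXWidth = 0.5), a sample at dimageX = 10.3, and a pixel at x = 12, fx = |12 − 10.3| × 0.5 × 16 = 13.6, so ifx = min(Floor2Int(13.6), 15) = 13; the weight for that pixel is read from column 13 of the table.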

d.

bool syncNeeded = (filter->xWidth > 0.5f || filter->yWidth > 0.5f);

Purpose:

(First check whether the filter's extent is wider than a single pixel. If it is, samples from multiple rendering tasks may affect the same pixel at the same time, so the updates must be synchronized. If the filter fits within one pixel, no two tasks will ever update the same pixel and no synchronization is needed.)

Before the loop starts, it checks to see if the filter is greater than one pixel wide. If this is true, then sample values from more than one SamplerRendererTask from the Sampler Renderer may affect a particular pixel’s value. (Samples near the boundary of their image tiles will affect pixels in other image tiles due to the filter’s extent.) In this case, we must ensure that multiple threads entering this method at the same time correctly coordinate their updates of pixel values. On the other hand, if the filter is a pixel wide (or less), then we know that only a single SamplerRendererTask will be updating any pixel’s value, and that therefore no synchronization is needed for pixel updates here.
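For reference, a lock-free floating-point add can be built on a compare-and-swap loop. The sketch below uses C++11 atomics purely as an illustration; pbrt-v2's actual AtomicAdd() operates on raw floats via platform compare-and-swap intrinsics:

#include <atomic>

// Atomically add delta to *dst by retrying until no other thread has
// modified the value between our read and our write.
inline void AtomicAddFloat(std::atomic<float> *dst, float delta) {
    float oldVal = dst->load();
    // On failure, compare_exchange_weak reloads oldVal with the current
    // value, so the loop simply retries with fresh data.
    while (!dst->compare_exchange_weak(oldVal, oldVal + delta))
        ;
}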

e.

int offset = ify[y-y0]*FILTER_TABLE_SIZE + ifx[x-x0];
float filterWt = filterTable[offset];

Purpose:

(ifx and ify give the offset into the table, which yields the filter weight.)

Now at each pixel, the x and y offsets into the filter table can be found for the pixel, leading to the offset into the array and thus the filter value.

f. 

Pixel &pixel = (*pixels)(x - xPixelStart, y - yPixelStart);
if (!syncNeeded) {
    pixel.Lxyz[0] += filterWt * xyz[0];
    pixel.Lxyz[1] += filterWt * xyz[1];
    pixel.Lxyz[2] += filterWt * xyz[2];
    pixel.weightSum += filterWt;
}

Purpose:

(Accumulate the numerator and the denominator of the filtering equation above.)

For each affected pixel, we need to add the weighted XYZ color and the filter weight. If no synchronization is needed, standard addition suffices.

3.

void Splat(const CameraSample &sample, const Spectrum &L);


void ImageFilm::Splat(const CameraSample &sample, const Spectrum &L) {
    if (L.HasNaNs()) {
        Warning("ImageFilm ignoring splatted spectrum with NaN values");
        return;
    }
    float xyz[3];
    L.ToXYZ(xyz);
    int x = Floor2Int(sample.imageX), y = Floor2Int(sample.imageY);
    if (x < xPixelStart || x - xPixelStart >= xPixelCount ||
        y < yPixelStart || y - yPixelStart >= yPixelCount) return;
    Pixel &pixel = (*pixels)(x - xPixelStart, y - yPixelStart);
    AtomicAdd(&pixel.splatXYZ[0], xyz[0]);
    AtomicAdd(&pixel.splatXYZ[1], xyz[1]);
    AtomicAdd(&pixel.splatXYZ[2], xyz[2]);
}

Purpose:

(Splat() only accumulates the XYZ values; no weighted average is taken.)

The implementation of the Splat() method here just has to compute which pixel the sample maps to and safely add the value. It always uses atomic operations under the expectation that, in general, it’s likely that multiple threads may need to splat values to the same pixel. The implementation here effectively uses a pixel-wide box filter to filter the samples; Exercise 7.3 at the end of the chapter discusses how to address this shortcoming.

4. 

void GetSampleExtent(int *xstart, int *xend, int *ystart, int *yend) const;

void ImageFilm::GetSampleExtent(int *xstart, int *xend,
                                int *ystart, int *yend) const {
    *xstart = Floor2Int(xPixelStart + 0.5f - filter->xWidth);
    *xend   = Ceil2Int(xPixelStart - 0.5f + xPixelCount +
                       filter->xWidth);

    *ystart = Floor2Int(yPixelStart + 0.5f - filter->yWidth);
    *yend   = Ceil2Int(yPixelStart - 0.5f + yPixelCount +
                       filter->yWidth);
}

void GetPixelExtent(int *xstart, int *xend, int *ystart, int *yend) const;

void ImageFilm::GetPixelExtent(int *xstart, int *xend,
                               int *ystart, int *yend) const {
    *xstart = xPixelStart;
    *xend   = xPixelStart + xPixelCount;
    *ystart = yPixelStart;
    *yend   = yPixelStart + yPixelCount;
}

Purpose:

Because the pixel reconstruction filter spans a number of pixels, the Sampler must generate image samples a bit outside of the range of pixels that will actually be output. This way, even pixels at the boundary of the image will have an equal density of samples around them in all directions and won't be biased toward values from the interior of the image. This is also important when rendering images in pieces with crop windows, since it eliminates artifacts at the edges of the subimages.
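As a concrete example (numbers chosen for illustration): for an uncropped image with xPixelStart = 0, xPixelCount = 640, and a filter with xWidth = 2, GetSampleExtent() returns *xstart = Floor2Int(0 + 0.5 − 2) = −2 and *xend = Ceil2Int(0 − 0.5 + 640 + 2) = Ceil2Int(641.5) = 642, so samples are generated slightly beyond both edges of the final image.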

5.

void WriteImage(float splatScale);

void ImageFilm::WriteImage(float splatScale) {
    // Convert image to RGB and compute final pixel values
    int nPix = xPixelCount * yPixelCount;
    float *rgb = new float[3*nPix];
    int offset = 0;
    for (int y = 0; y < yPixelCount; ++y) {
        for (int x = 0; x < xPixelCount; ++x) {
            // Convert pixel XYZ color to RGB
            XYZToRGB((*pixels)(x, y).Lxyz, &rgb[3*offset]);

            // Normalize pixel with weight sum
            float weightSum = (*pixels)(x, y).weightSum;
            if (weightSum != 0.f) {
				// multiplying by invWt computes the weighted average
                float invWt = 1.f / weightSum;
                rgb[3*offset  ] = max(0.f, rgb[3*offset  ] * invWt);
                rgb[3*offset+1] = max(0.f, rgb[3*offset+1] * invWt);
                rgb[3*offset+2] = max(0.f, rgb[3*offset+2] * invWt);
            }

            // Add splat value at pixel
            float splatRGB[3];
            XYZToRGB((*pixels)(x, y).splatXYZ, splatRGB);
            rgb[3*offset  ] += splatScale * splatRGB[0];
            rgb[3*offset+1] += splatScale * splatRGB[1];
            rgb[3*offset+2] += splatScale * splatRGB[2];
            ++offset;
        }
    }

    // Write RGB image
    ::WriteImage(filename, rgb, NULL, xPixelCount, yPixelCount,
                 xResolution, yResolution, xPixelStart, yPixelStart);

    // Release temporary image memory
    delete[] rgb;
}

Purpose:

After the main rendering loop finishes, SamplerRenderer::Render() calls the Film::WriteImage() method to store the final image in a file.

Details:

a.

XYZToRGB((*pixels)(x, y).Lxyz, &rgb[3*offset]);

// Normalize pixel with weight sum
float weightSum = (*pixels)(x, y).weightSum;
if (weightSum != 0.f) {
    // multiplying by invWt computes the weighted average
    float invWt = 1.f / weightSum;
    rgb[3*offset  ] = max(0.f, rgb[3*offset  ] * invWt);
    rgb[3*offset+1] = max(0.f, rgb[3*offset+1] * invWt);
    rgb[3*offset+2] = max(0.f, rgb[3*offset+2] * invWt);
}

Purpose:

(This mainly converts XYZ to RGB. Since the stored XYZ values are the numerator of the pixel filtering equation, they still have to be divided by weightSum to obtain the final pixel color. The resulting RGB values may be negative, so they are clamped to zero.)

Given information about the response characteristics of the display device being used, the pixel values can be converted to device-dependent RGB values from the device-independent XYZ tristimulus values. This conversion is another change of spectral basis, where the new basis is determined by the spectral response curves of the red, green, and blue elements of the display device. Here, weights to convert from XYZ to the device RGB based on the HDTV standard are used. This is a good match for most modern display devices.

As the RGB output values are being initialized, their final values from the pixel filtering equation are computed by dividing each pixel sample value by Pixel::weightSum. This conversion can lead to RGB values where some components are negative; these are out-of-gamut colors that can’t be represented with the chosen display primaries. Various approaches have been suggested to deal with this issue, ranging from clamping to zero, offsetting all components to lie within the gamut, or even performing a global optimization based on all of the pixels in the image. In addition to out-of-gamut colors, reconstructed pixels may also end up with negative values due to negative lobes in the reconstruction filter function. In order to handle both of these cases, their components are clamped to zero here.
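The conversion itself is just a 3×3 matrix multiply. The coefficients below are the standard ITU-R Rec. 709 (HDTV) XYZ-to-linear-RGB matrix, matching the HDTV standard the text says is used; a minimal sketch for illustration:

// Convert device-independent XYZ tristimulus values to linear RGB
// using the Rec. 709 primaries and D65 white point.
inline void XYZToRGB_Rec709(const float xyz[3], float rgb[3]) {
    rgb[0] =  3.240479f * xyz[0] - 1.537150f * xyz[1] - 0.498535f * xyz[2];
    rgb[1] = -0.969256f * xyz[0] + 1.875991f * xyz[1] + 0.041556f * xyz[2];
    rgb[2] =  0.055648f * xyz[0] - 0.204043f * xyz[1] + 1.057311f * xyz[2];
}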

b.

// Add splat value at pixel
float splatRGB[3];
XYZToRGB((*pixels)(x, y).splatXYZ, splatRGB);
rgb[3*offset  ] += splatScale * splatRGB[0];
rgb[3*offset+1] += splatScale * splatRGB[1];
rgb[3*offset+2] += splatScale * splatRGB[2];
++offset;

Purpose:

It’s also necessary to add in the splatted values for this pixel to the final value.