A Gentle Introduction to DirectX Raytracing 14

Introduction

In Tutorial 12, we showed how to do recursive bounces for path tracing using a diffuse Lambertian material model. This tutorial explores how to swap out that Lambertian material model for a more complex one: the GGX model commonly used today in film and games. Additionally, we extend the one-bounce global illumination to an arbitrary number of bounces.

Changes to the C++ Code

The C++ code barely changes in this tutorial, other than changing the pass names. Inside GGXGlobalIllumination.cpp, we ask for a few additional fields from our G-buffer in the GGXGlobalIlluminationPass::initialize() method, including the MaterialSpecRough and Emissive fields, which carry the properties needed to render specular materials and surfaces that emit light directly. This means we can render scenes with no light sources (but materials marked “emissive” will act as lights).

Additionally, the method GGXGlobalIlluminationPass::execute() passes a few additional parameters to our shader, including a maximum ray depth (gMaxDepth) and our additional G-buffer textures.

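On the shader side, these values typically arrive through a constant buffer. As a rough sketch (not the exact listing from the tutorial code), a declaration matching the variable names used in the shader snippets below might look like the following; the buffer name RayGenCB is an assumption, while gMinT, gDoDirectGI, gDoIndirectGI, gMaxDepth, and gEmitMult all appear later in this tutorial:

cbuffer RayGenCB
{
    float gMinT;          // Ray offset to avoid self-intersection when spawning rays
    bool  gDoDirectGI;    // Evaluate explicit direct lighting?
    bool  gDoIndirectGI;  // Shoot indirect (bounce) rays?
    uint  gMaxDepth;      // Maximum number of indirect bounces along a path
    float gEmitMult;      // Scale factor applied to emissive surface colors
};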

New Microfacet Functions in the HLSL Code

A new file in this tutorial's shader directory is microfacetBRDFUtils.hlsli, which includes a number of utility functions for rendering a GGX material. The GGX BRDF has the form D * G * F / (4 * NdotL * NdotV). This form was introduced by Cook and Torrance and is widely used across many of the microfacet BRDF models in use today.

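Written as an equation, with n the surface normal, v the view direction, l the light direction, and h = normalize(v + l) the half vector, the Cook-Torrance form is:

$$ f(l, v) = \frac{D(h)\,G(l, v)\,F(l, h)}{4\,(n \cdot l)\,(n \cdot v)} $$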

A microfacet BRDF model assumes the surface is made up of a large number of very tiny planar facets that are all perfectly reflective (i.e., they reflect along the mirror direction). On rough, diffuse surfaces these facets are oriented nearly uniformly at random, so light is reflected evenly around the hemisphere. On glossy surfaces, these facets are much more likely to lie flat along the underlying geometry. The D term is the microfacet distribution, which controls the probability that an incoming ray sees a facet of a particular orientation.

We use a standard form of the GGX normal distribution for D:

float ggxNormalDistribution( float NdotH, float roughness )
{
    float a2 = roughness * roughness;
    float d = ((NdotH * a2 - NdotH) * NdotH + 1);
    return a2 / (d * d * M_PI);
}
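In equation form, where α is the roughness parameter passed to the function (so α² is a2 in the code), this computes:

$$ D(h) = \frac{\alpha^2}{\pi\left[(n \cdot h)^2(\alpha^2 - 1) + 1\right]^2} $$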

Note: When building path tracers, it is important to maintain numerical robustness to avoid NaNs and Infs. In some circumstances, the last line in the ggxNormalDistribution() function may cause a divide-by-zero, so you may wish to clamp the denominator.

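For example, one simple guard (a sketch, not the tutorial's code; the epsilon value 1e-7f is an arbitrary choice) is to clamp the denominator before dividing:

float ggxNormalDistribution( float NdotH, float roughness )
{
    float a2 = roughness * roughness;
    float d = ((NdotH * a2 - NdotH) * NdotH + 1);
    // Clamp the denominator away from zero to avoid NaNs/Infs
    return a2 / max(d * d * M_PI, 1e-7f);
}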

The G term in the Cook-Torrance BRDF model represents geometric masking of the microfacets. That is, facets of various orientations will not always be visible; they may get occluded by other tiny facets. The model for geometric masking we use comes from Schlick’s BRDF model (or [direct PDF](http://www.cs.virginia.edu/~jdl/bib/appearance/analytic models/schlick94b.pdf)). Usually other masking terms are used with GGX (see Naty Hoffman’s SIGGRAPH Notes), but this model plugs in robustly without a lot of code massaging, which keeps the tutorial code simpler to understand. This formulation of the Schlick approximation comes from Karis’ SIGGRAPH 2013 notes from the Physically Based Shading course:

float ggxSchlickMaskingTerm(float NdotL, float NdotV, float roughness)
{
    // Karis notes they use alpha / 2 (or roughness^2 / 2)
    float k = roughness * roughness / 2;

    // Compute G(v) and G(l). These equations come directly from Schlick 1994
    // (though note, Schlick's notation is cryptic and confusing.)
    float g_v = NdotV / (NdotV * (1 - k) + k);
    float g_l = NdotL / (NdotL * (1 - k) + k);
    return g_v * g_l;
}
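In equation form, with k = roughness² / 2 as in the code above, the masking term evaluates one Schlick factor per direction and multiplies them together:

$$ G_1(x) = \frac{n \cdot x}{(n \cdot x)(1 - k) + k}, \qquad G(l, v) = G_1(l)\,G_1(v) $$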

Finally, the F term in the Cook-Torrance model is the Fresnel term, which describes how materials become more reflective when viewed at a grazing angle. Renderers rarely implement the full Fresnel equations, which account for the wave nature of light. Since most real-time renderers assume geometric optics, we can ignore wave effects, and most renderers use Schlick’s approximation, which comes from the same paper referenced above:

float3 schlickFresnel(float3 f0, float lDotH)
{
    return f0 + (float3(1.0f, 1.0f, 1.0f) - f0) * pow(1.0f - lDotH, 5.0f);
}
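In equation form, Schlick's approximation interpolates from the normal-incidence reflectance f0 toward white at grazing angles:

$$ F(l, h) \approx f_0 + (1 - f_0)\,(1 - l \cdot h)^5 $$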

Finally, in addition to the three functions representing D, G, and F, microfacetBRDFUtils.hlsli also includes the function getGGXMicrofacet(), which returns a random microfacet orientation (i.e., a facet normal) that follows the distribution described by ggxNormalDistribution(). This allows us to randomly choose the direction a ray bounces when it leaves a specular surface:

// When using this function to sample, the probability density is:
//     pdf = D * NdotH / (4 * HdotV)
float3 getGGXMicrofacet(inout uint randSeed, float roughness, float3 hitNorm)
{
    // Get our uniform random numbers
    float2 randVal = float2(nextRand(randSeed), nextRand(randSeed));

    // Get an orthonormal basis from the normal
    float3 B = getPerpendicularVector(hitNorm);
    float3 T = cross(B, hitNorm);

    // GGX NDF sampling
    float a2 = roughness * roughness;
    float cosThetaH = sqrt(max(0.0f, (1.0 - randVal.x) / ((a2 - 1.0) * randVal.x + 1)));
    float sinThetaH = sqrt(max(0.0f, 1.0f - cosThetaH * cosThetaH));
    float phiH = randVal.y * M_PI * 2.0f;

    // Get our GGX NDF sample (i.e., the half vector)
    return T * (sinThetaH * cos(phiH)) +
           B * (sinThetaH * sin(phiH)) +
           hitNorm * cosThetaH;
}

Shading a Surface Point

When shading a point on a surface, we need to invoke these microfacet BRDF functions. To reduce the chance of error, we combine these into a function and call this function from multiple locations. In particular, inside the ray generation shader ggxGlobalIllumination.rt.hlsl, shading looks as follows:

// Add any emissive color from primary rays
shadeColor = gEmitMult * pixelEmissive.rgb;

// Do explicit direct lighting to a random light in the scene
if (gDoDirectGI)
    shadeColor += ggxDirect(randSeed, worldPos.xyz, worldNorm.xyz, V,
                            difMatlColor.rgb, specMatlColor.rgb, roughness);

// Do indirect lighting for global illumination
if (gDoIndirectGI && (gMaxDepth > 0))
    shadeColor += ggxIndirect(randSeed, worldPos.xyz, worldNorm.xyz, V,
                              difMatlColor.rgb, specMatlColor.rgb, roughness, 0);

Basically, the color at any hit point is: the color the surface emits, plus any light arriving directly from light sources, plus light arriving indirectly via additional bounces along the path.

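In other words, the estimate accumulated at each hit point is:

$$ \text{color} = L_e + L_{\text{direct}} + L_{\text{indirect}} $$

where L_e is the emissive term (gEmitMult * pixelEmissive), L_direct comes from ggxDirect(), and L_indirect comes from ggxIndirect().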

When we fire indirect rays (see indirectRay.hlsli), we shade the closest hit using a similar process:

[shader("closesthit")]
void IndirectClosestHit(inout IndirectRayPayload rayData,
BuiltInTriangleIntersectionAttributes attribs)
{
// Run a helper functions to extract Falcor scene data for shading
ShadingData shadeData = getHitShadingData( attribs, WorldRayOrigin() );

// Add emissive color
rayData.color = gEmitMult * shadeData.emissive.rgb;

// Do direct illumination at this hit location
if (gDoDirectGI)
{
rayData.color += ggxDirect(rayData.rndSeed, shadeData.posW,
shadeData.N, shadeData.V, shadeData.diffuse, shadeData.specular,
shadeData.roughness);
}

// Do indirect illumination (if we haven't traversed too far)
if (rayData.rayDepth < gMaxDepth)
{
rayData.color += ggxIndirect(rayData.rndSeed, shadeData.posW,
shadeData.N, shadeData.V, shadeData.diffuse, shadeData.specular,
shadeData.roughness, rayData.rayDepth);
}
}

Direct Lighting Using a GGX Model

Direct lighting using a GGX model looks very similar to the direct lighting using the Lambertian model from Tutorial 12. In particular, we start by picking a random light, extracting its information from the Falcor scene representation, and tracing a shadow ray to determine whether that light is visible.

Note that, as with many BRDFs, our GGX model consists of a specular lobe and a diffuse lobe. The math for the diffuse lobe is identical to that in Tutorial 12; we're just adding a new specular lobe on top of the diffuse term.

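Putting the two lobes together, the BRDF evaluated below is:

$$ f(l, v) = \frac{\text{dif}}{\pi} + \frac{D(h)\,G(l, v)\,F(l, h)}{4\,(n \cdot l)\,(n \cdot v)} $$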

If the light is visible, we perform shading using the GGX model (i.e., D * G * F / (4 * NdotL * NdotV)). In this case, numerical robustness is improved significantly by cancelling NdotL terms in the GGX lobe to avoid a potential divide-by-zero when light hits geometry at a grazing angle. I left these cancelled NdotL terms in comments to make the math clear.

float3 ggxDirect(inout uint rndSeed, float3 hit, float3 N, float3 V,
                 float3 dif, float3 spec, float rough)
{
    // Pick a random light from our scene to shoot a shadow ray towards
    int lightToSample = min( int(nextRand(rndSeed) * gLightsCount),
                             gLightsCount - 1 );

    // Query the scene to find info about the randomly selected light
    float distToLight;
    float3 lightIntensity;
    float3 L;
    getLightData(lightToSample, hit, L, lightIntensity, distToLight);

    // Compute our Lambertian term (N dot L)
    float NdotL = saturate(dot(N, L));

    // Shoot our shadow ray to our randomly selected light
    float shadowMult = float(gLightsCount) *
                       shadowRayVisibility(hit, L, gMinT, distToLight);

    // Compute half vectors and additional dot products for GGX
    float3 H = normalize(V + L);
    float NdotH = saturate(dot(N, H));
    float LdotH = saturate(dot(L, H));
    float NdotV = saturate(dot(N, V));

    // Evaluate terms for our GGX BRDF model
    float D = ggxNormalDistribution(NdotH, rough);
    float G = ggxSchlickMaskingTerm(NdotL, NdotV, rough);
    float3 F = schlickFresnel(spec, LdotH);

    // Evaluate the Cook-Torrance Microfacet BRDF model
    // Cancel NdotL here to avoid catastrophic numerical precision issues.
    float3 ggxTerm = D * G * F / (4 * NdotV /* * NdotL */);

    // Compute our final color (combining diffuse lobe plus specular GGX lobe)
    return shadowMult * lightIntensity * ( /* NdotL * */ ggxTerm +
                                           NdotL * dif / M_PI);
}

Indirect Lighting Using a GGX Model

Bouncing an indirect ray is somewhat more complex. Since we have both a diffuse lobe and a specular lobe, we need to sample them somewhat differently; the cosine sampling used for Lambertian shading does not have particularly good characteristics for GGX. One approach would be to shoot two rays: one in the diffuse lobe and one in the specular lobe. But this gets costly, and the two lobes converge at different rates.

Instead, we randomly pick whether to shoot an indirect diffuse or indirect glossy ray (see ggxIndirect()):

// We have to decide whether we sample our diffuse or specular/ggx lobe.
float probDiffuse = probabilityToSampleDiffuse(dif, spec);
bool chooseDiffuse = (nextRand(rndSeed) < probDiffuse);

In this case, we choose between the specular and diffuse lobes based on their relative diffuse and specular albedos, though this isn't a particularly well-thought-out or principled approach:

float probabilityToSampleDiffuse(float3 difColor, float3 specColor)
{
    float lumDiffuse  = max(0.01f, luminance(difColor.rgb));
    float lumSpecular = max(0.01f, luminance(specColor.rgb));
    return lumDiffuse / (lumDiffuse + lumSpecular);
}

Going back to ggxIndirect(), if we sample our diffuse lobe, the indirect ray looks almost identical to that from Tutorial 12. We shoot a cosine-distributed ray, return the color, and divide by the probability of selecting this ray.

if (chooseDiffuse)
{
    // Shoot a randomly selected cosine-sampled diffuse ray.
    float3 L = getCosHemisphereSample(rndSeed, N);
    float3 bounceColor = shootIndirectRay(hit, L, gMinT, 0, rndSeed, rayDepth);

    // Accumulate the color: (NdotL * incomingLight * dif / pi)
    // Probability of sampling this ray: (NdotL / pi) * probDiffuse
    return bounceColor * dif / probDiffuse;
}
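Writing out the Monte Carlo estimator makes the cancellation described in the comments explicit: the cosine term and the 1/π in the diffuse BRDF cancel against the cosine-weighted sampling density, leaving only the division by the lobe-selection probability:

$$ \frac{(n \cdot l)\, L_{\text{in}}\, \frac{\text{dif}}{\pi}}{\frac{n \cdot l}{\pi}\, p_{\text{diffuse}}} = \frac{L_{\text{in}}\,\text{dif}}{p_{\text{diffuse}}} $$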

If we choose to sample the GGX lobe, the behavior is fundamentally the same even though the code is more complex: select a random ray, shoot it and return a color, and divide by the probability of selecting this ray. The key difference is that when we sample according to getGGXMicrofacet(), the probability density of our rays is different (it is given by D * NdotH / (4 * LdotH)).

// Otherwise we randomly selected to sample our GGX lobe
else
{
    // Randomly sample the NDF to get a microfacet in our BRDF
    float3 H = getGGXMicrofacet(rndSeed, rough, N);

    // Compute the outgoing direction based on this (perfectly reflective) facet
    float3 L = normalize(2.f * dot(V, H) * H - V);

    // Compute our color by tracing a ray in this direction
    float3 bounceColor = shootIndirectRay(hit, L, gMinT, 0, rndSeed, rayDepth);

    // Compute some dot products needed for shading
    float NdotL = saturate(dot(N, L));
    float NdotH = saturate(dot(N, H));
    float LdotH = saturate(dot(L, H));
    float NdotV = saturate(dot(N, V));

    // Evaluate our BRDF using a microfacet BRDF model
    float D = ggxNormalDistribution(NdotH, rough);
    float G = ggxSchlickMaskingTerm(NdotL, NdotV, rough);
    float3 F = schlickFresnel(spec, LdotH);
    float3 ggxTerm = D * G * F / (4 * NdotL * NdotV);

    // What's the probability of sampling vector H from getGGXMicrofacet()?
    float ggxProb = D * NdotH / (4 * LdotH);

    // Accumulate color: ggx-BRDF * lightIn * NdotL / probability-of-sampling
    //    -> Note: Should really cancel and simplify the math above
    return NdotL * bounceColor * ggxTerm / (ggxProb * (1.0f - probDiffuse));
}
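As the final comment notes, this estimator could be simplified algebraically; the D term and one factor of n·l cancel between the BRDF and the sampling density, which also sidesteps some potential divide-by-zero issues:

$$ \frac{(n \cdot l)\,\frac{D\,G\,F}{4\,(n \cdot l)(n \cdot v)}}{\frac{D\,(n \cdot h)}{4\,(l \cdot h)}\,(1 - p_{\text{diffuse}})} = \frac{G\,F\,(l \cdot h)}{(n \cdot v)\,(n \cdot h)\,(1 - p_{\text{diffuse}})} $$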

What Does it Look Like?

That covers the important points of this tutorial. When running, you get the following result:

With this tutorial, you now have Falcor running a fairly feature-rich path tracer, even if the sampling is extremely naive. Moving forward (and left as an exercise for the reader), you could add better importance sampling, multiple importance sampling, and next-event estimation for better explicit direct lighting. Additionally, we haven't handled refractive materials in this set of tutorials, though as described in Pete Shirley's Ray Tracing in One Weekend, this is fairly straightforward to add.
