Anyone who has rendered a large scene has likely had to fight issues of depth buffer resolution and the effects of the non-linearity of Z-buffering (for a bit of numeric background, see Using W-Buffers). The resulting depth artifacts often must be dealt with by limiting the depth range of the viewing frustum, a solution that is not always ideal, especially in large outdoor scenes. W-buffering offered some promise, with a better distribution of depth values, but hardware support has been limited and does not look to be forthcoming in future hardware.
In this article we'll look at an easy way to implement linear depth values while still using a Z-buffer, by performing the transformation in a programmable vertex shader. This gives the even depth distribution of a W-buffer without requiring any special hardware support.
Projection Transform, Perspective Division, and Non-Linearity
To begin with, let's take a look at the transformation process, and how depth values are manipulated to arrive at the final value written to the depth buffer. If we are using the fixed function pipeline, we can consider the process in three parts:

1. Transformation from model space into view space by the combined world and view matrices.
2. Transformation from view space into homogeneous clip space by the projection matrix.
3. Perspective division, in which the X, Y, and Z components of the clip-space position are divided by its W component.
Unfortunately, it is this final division that causes the non-linearity of transformed depth values. For example, given a near plane of 10.0 and a far plane of 10000.0, a point at a view-space depth of 100.0 (only 1% of the far distance) already maps to a depth value of roughly 0.9, leaving just 10% of the depth buffer's range to cover the remaining 99% of the scene.
Customizing the Projection in a Vertex Shader
When using a programmable vertex shader, we have direct control of the transformation process and can implement our own. The vertex position can be read from the input registers, manipulated however we like, then output as a 4D homogeneous coordinate to the output position register. However, there is one apparent problem in handling our linearity issue: the output from the shader is still homogeneous, and will be divided by W in the same manner as the output of the fixed-function transformation. So how do we handle this, if we can't eliminate the division operation?
The answer is actually pretty simple - just multiply Z by W prior to returning the result from the vertex shader. The net effect is that Z*W/W = Z! If we first divide Z by the far distance, to scale it to the range of 0.0 -> 1.0, we've got a linear result that will survive perspective division. A simple HLSL implementation might look (in part) something like this:
float4 vPos = mul(Input.Pos,worldViewProj);
vPos.z = vPos.z * vPos.w / Far;
Output.Pos = vPos;
To simplify this, instead of dividing by the far plane distance in the shader to scale Z, we can pre-scale the values in the Z column of the projection matrix:
D3DXMATRIX mProj;
D3DXMatrixPerspectiveFovLH(&mProj, fFov, fAspect, fNear, fFar);
mProj._33 /= fFar;
mProj._43 /= fFar;
//...set to shader constant register or concatenate
//...with world and view matrices first as needed
This removes the division from the vertex shader, reducing the transformation to:

float4 vPos = mul(Input.Pos,worldViewProj);
vPos.z = vPos.z * vPos.w;
Output.Pos = vPos;
Going back to our previous scenario (near = 10.0, far = 10000.0), and assuming the projection matrix is scaled as noted previously, the resulting depth values now progress linearly: a view-space depth of 10.0 maps to 0.0, 1000.0 to roughly 0.099, 5005.0 to exactly 0.5, and 10000.0 to 1.0.