Unity Shaders – Depth and Normal Textures (Part 3)

This is a continuation of a series of posts on shaders: Part 1, Part 2

In the previous two parts, I talked about using the depth texture in Unity. Here, I will discuss using the depth+normals texture through DepthTextureMode.DepthNormals, which is basically the depth and view space normals packed into one texture.

Below is the effect we will create. What you’re seeing is the scene rendered first with the view space normals as colors, and then with the depth values as colors.

[Image: DepthNormals – Depth+Normal Texture]

If you remember from Part 1, we can tell the camera in Unity to generate a depth texture using the Camera.depthTextureMode variable. According to the docs, there are actually two modes you can set this variable to:

  • DepthTextureMode.Depth: a depth texture.
  • DepthTextureMode.DepthNormals: depth and view space normals packed into one texture.

We are already familiar with DepthTextureMode.Depth, so the question is: how exactly do we get the values of depth and normals from DepthTextureMode.DepthNormals? 

It turns out you need to use the function DecodeDepthNormal. This function is defined in the UnityCG.cginc include file, which, by the way, can be found on Windows at this path: <program_files>/Unity/Editor/Data/CGIncludes/

Below is the definition:

inline void DecodeDepthNormal( float4 enc, out float depth, out float3 normal )
{
   depth = DecodeFloatRG (enc.zw);
   normal = DecodeViewNormalStereo (enc);
}

So what is going on here? The function takes one input, float4 enc, and two output parameters, out float depth and out float3 normal. It decodes the packed information in enc: DecodeFloatRG unpacks the depth value from enc.zw, and DecodeViewNormalStereo reconstructs the view space normal from the xy channels, writing the results into depth and normal.

This is what it will look like in our code:

DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthValue, normalValues);

depthValue is a float which will contain the depth value of the scene, and normalValues is a float3 that will contain the view space normal. As for the first argument, tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), what’s going on there? _CameraDepthNormalsTexture is of type sampler2D, but DecodeDepthNormal needs a float4, so we call tex2D, a function which performs a texture lookup in a given sampler and returns the sampled value as a float4.

The first input that tex2D takes is the sampler, in our case _CameraDepthNormalsTexture, and the second is the coordinates at which to perform the lookup, which in our case is the screen position, i.scrPos. However, i.scrPos is a float4 and the coordinate input needs to be a float2, so we take only the xy components.
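
Putting those pieces together, the sampling and decoding step is only a few lines. Here is a minimal sketch of just that step, using the same variable names as the full shader below:

sampler2D _CameraDepthNormalsTexture;

//inside the fragment shader
float depthValue;
float3 normalValues;
float4 enc = tex2D(_CameraDepthNormalsTexture, i.scrPos.xy); //sample the packed depth+normals texture
DecodeDepthNormal(enc, depthValue, normalValues); //unpack a 0-1 depth value and a view space normal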

The Shader

Here’s the code for the shader. Let’s call it “DepthNormals.shader”.

Shader "Custom/DepthNormals" {
Properties {
   _MainTex ("", 2D) = "white" {}
   _HighlightDirection ("Highlight Direction", Vector) = (1, 0,0)
}

SubShader {
Tags { "RenderType"="Opaque" }

Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

sampler2D _CameraDepthNormalsTexture;
float _StartingTime;
float _showNormalColors = 1; //when this is 1, show normal values as colors. when 0, show depth values as colors.

struct v2f {
   float4 pos : SV_POSITION;
   float4 scrPos: TEXCOORD1;
};

//Our Vertex Shader
v2f vert (appdata_base v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.scrPos=ComputeScreenPos(o.pos);
   o.scrPos.y = 1 - o.scrPos.y;
   return o;
}

sampler2D _MainTex;
float4 _HighlightDirection;

//Our Fragment Shader
half4 frag (v2f i) : COLOR{

float3 normalValues;
float depthValue;
//extract depth value and normal values

DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthValue, normalValues);
if (_showNormalColors == 1){
   float4 normalColor = float4(normalValues, 1);
   return normalColor;
} else {
   float4 depth = float4(depthValue, depthValue, depthValue, 1); //spread the depth value across r, g, and b so it displays as grayscale
   return depth;
}
}
ENDCG
}
}
FallBack "Diffuse"
}

Remember that the normal values are in view space, so when you move the camera, the normals, and thus the colors, change.

[Image: DepthNormalsCamera]

The script to attach to the camera

Let’s call it “DepthNormals.cs” just to keep things consistent. What the script does is, every time the user presses the “E” key, it switches the shader between showing the depth values and the normal values.

using UnityEngine;
using System.Collections;

public class DepthNormals : MonoBehaviour {

public Material mat;
bool showNormalColors = true;

void Start () {
   camera.depthTextureMode = DepthTextureMode.DepthNormals;
}

// Update is called once per frame
void Update () {
   if (Input.GetKeyDown (KeyCode.E)){
      showNormalColors = !showNormalColors;
   }

   if (showNormalColors){
      mat.SetFloat("_showNormalColors", 1.0f);
   } else {
      mat.SetFloat("_showNormalColors", 0.0f);
   }
}

// Called by the camera to apply the image effect
void OnRenderImage (RenderTexture source, RenderTexture destination){
   //mat is the material containing your shader
   Graphics.Blit(source,destination,mat);
}
}

Conclusion

Now you know how to get the depth and normal values from the depth+normals texture. Please remember that this is not meant to be a definitive guide on how to work with shaders. It is simply a summary of my experience working with the depth texture and vertex/fragment shaders in Unity during the past few days. Hopefully you found some of the information useful for your own development projects.

 

Unity Shaders – Depth and Normal Textures (Part 2)

This post is a continuation of an earlier post: Unity Shaders – Depth and Normal Textures (Part 1). The post after this is Part 3, which covers using both depth and normal textures.

Working with Depth Texture

Now that we’ve learned how to get the depth texture and display its values as a grayscale image, let’s do something interesting with it. I’m going to make a simpler version of the ring of light that passes through the environment in Quantum Conundrum’s dimension shift effect. Instead of starting the ring from the center of wherever I’m looking, I’m just going to have it start from the farthest point I can see and travel linearly past the camera. Additionally, as the ring passes through objects, they will be left with a slight color tint.

Here is what it looks like:

[Image: DepthRingPass]

Get Rendered Image Before Post-Processing

The effect we want to create is going to be superimposed on top of the original rendered image from the camera. As such, we will need to get from the camera an image of the scene right after it is rendered, but before any special effects are applied. To do this, we will need to use the Properties block in the shader. The rendered image from the camera will be brought in as _MainTex, so the Properties block will look like this:

Properties {
   _MainTex ("", 2D) = "white" {}
}

You’ll also need to remember to declare the variable inside the Pass of your shader so you can use it:

sampler2D _MainTex;

Time

Since this is an animated effect, it will require a time variable. Fortunately, Unity provides a built-in value we can use: float4 _Time : Time (t/20, t, t*2, t*3). If this looks confusing, let me explain. The property name is _Time, and its type is float4, meaning a vector with 4 float components. The x component is “t/20”, meaning it gives the value of time divided by 20; the y component is “t”, so normal time; the z component is time multiplied by 2; and the w component is time multiplied by 3.

In this case, we want to use time to give us a value that goes from 0 to 1. In the following code, _RingPassTimeLength is the length of time we would like the ring to take to traverse the scene, _StartingTime (which we will set in a .cs script) is the time when the ring first begins to move, and _Time.y is the time at the current moment.

float _RingPassTimeLength = 2;
float t = 1 - ((_Time.y - _StartingTime)/_RingPassTimeLength );

When the ring first starts to move, the starting time is the current time, so _Time.y - _StartingTime = 0 and t = 1. Then, as _Time.y increases, the value of t decreases. For example, with _RingPassTimeLength = 2, one second after the ring starts t = 1 - 1/2 = 0.5, so the ring has swept halfway through the depth range. So if we use the variable t to adjust which depth values we are looking at, we can traverse the scene.
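
In the fragment shader, sweeping the scene just comes down to comparing each pixel’s depth against t. Here is a stripped-down sketch of the comparison used in the full shader further down (_RingWidth, the thickness of the ring, is introduced in the next section):

//pixels whose depth falls in the narrow band just below t are on the ring
if (depthValue < t && depthValue > t - _RingWidth){
   return lightRing; //draw the ring color
} else if (depthValue < t){
   return orgColor; //the ring hasn't reached this depth yet
} else {
   return newColor; //the ring has already passed: the original color with a red tint
}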

User-Specified Uniforms

There are a few variables that would be nice for a user to adjust in the editor, instead of having to change them in code. To do this, we will need to use user-specified uniforms. In our case, some values that would be nice for the user to be able to adjust are the length of time it takes for the ring to pass through the scene (_RingPassTimeLength) and the width of the ring itself (_RingWidth), like this:

[Image: uniform_variables_inspector]

What we need to do is declare these properties, and then define the uniforms using the same names.
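
As a quick sketch of what that pairing looks like (these lines are taken from the shader below), the ring width and ring pass time are each declared once in the Properties block and once as a uniform with the same name inside the Pass:

Properties {
   _RingWidth("ring width", Float) = 0.01
   _RingPassTimeLength("ring pass time", Float) = 2.0
}

//inside the Pass, matched by name
uniform float _RingWidth;
uniform float _RingPassTimeLength;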

So, let’s call our new shader DepthRingPass.shader. Here is the code:

Shader "Custom/DepthRingPass" {

Properties {
   _MainTex ("", 2D) = "white" {} //this texture will have the rendered image before post-processing
   _RingWidth("ring width", Float) = 0.01
   _RingPassTimeLength("ring pass time", Float) = 2.0
}

SubShader {
Tags { "RenderType"="Opaque" }
Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

sampler2D _CameraDepthTexture;
float _StartingTime;
uniform float _RingPassTimeLength; //the length of time it takes the ring to traverse all depth values
uniform float _RingWidth; //width of the ring
float _RunRingPass = 0; //use this as a boolean value to trigger the ring pass; it is set from the script attached to the camera.

struct v2f {
   float4 pos : SV_POSITION;
   float4 scrPos:TEXCOORD1;
};

//Our Vertex Shader
v2f vert (appdata_base v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.scrPos=ComputeScreenPos(o.pos);
   o.scrPos.y = 1 - o.scrPos.y;
   return o;
}

sampler2D _MainTex; //Reference in Pass is necessary to let us use this variable in shaders

//Our Fragment Shader
half4 frag (v2f i) : COLOR{

   //extract the depth value for each screen position from _CameraDepthTexture
   float depthValue = Linear01Depth (tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.scrPos)).r);

   fixed4 orgColor = tex2Dproj(_MainTex, i.scrPos); //Get the original rendered color
   float4 newColor; //the color after the ring has passed
   half4 lightRing; //the ring of light that will pass through the depth values

   float t = 1 - ((_Time.y - _StartingTime)/_RingPassTimeLength );

   //the script attached to the camera will set _RunRingPass to 1 and then will start the ring pass
   if (_RunRingPass == 1){
      //this part draws the light ring
      if (depthValue < t && depthValue > t - _RingWidth){
         lightRing.r = 1;
         lightRing.g = 0;
         lightRing.b = 0;
         lightRing.a = 1;
         return lightRing;
      } else {
          if (depthValue < t) {
             //the ring hasn't passed through this part yet
             return orgColor;
          } else {
             //the ring has already passed through this part
             //basically taking the original colors and adding a slight red tint to it.
             newColor.r = (orgColor.r + 1)*0.5;
             newColor.g = orgColor.g*0.5;
             newColor.b = orgColor.b*0.5;
             newColor.a = 1;
             return newColor;
         }
      }
    } else {
        return orgColor;
    }
}
ENDCG
}
}
FallBack "Diffuse"
}

And below is the script to be attached to the camera. Let’s call it DepthRingPass.cs. In addition to passing the depth texture and an image of the rendered scene from the camera to the shader, DepthRingPass.cs also triggers the ring to start. Whenever the user presses the “E” key, it sets the uniform variable _StartingTime to the current time and sets _RunRingPass to 1 (which tells the shader to draw the ring). The reason I’m using SetFloat here is that there isn’t a SetBool function (at least not that I’m aware of).

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
using UnityEngine;
using System.Collections;

[ExecuteInEditMode]
public class DepthRingPass : MonoBehaviour {

public Material mat;

void Start () {
    camera.depthTextureMode = DepthTextureMode.Depth;
}

void Update (){
   if (Input.GetKeyDown(KeyCode.E)){
      //set _StartingTime to current time
      mat.SetFloat("_StartingTime", Time.time);
      //set _RunRingPass to 1 to start the ring
      mat.SetFloat("_RunRingPass", 1);
  }
}

// Called by the camera to apply the image effect
void OnRenderImage (RenderTexture source, RenderTexture destination){
   //mat is the material containing your shader
   Graphics.Blit(source,destination,mat);
}
}

Remember, you’ll need to attach the shader we wrote to a material, and then set that material as the mat variable in the above script, DepthRingPass.cs, which is in turn attached to your camera object.

[Image: depthringpass_inspector]

We will end here for now. In Part 3, I will talk about using both depth and normals in your shader, specifically, how to work with DepthTextureMode.DepthNormals.