Unity Shaders – Depth and Normal Textures (Part 3)

This is a continuation of a series of posts on shaders: Part 1, Part 2

In the previous two parts, I talked about using the depth texture in Unity. Here, I will discuss using depth+normal textures through DepthTextureMode.DepthNormals, which is basically depth and view space normals packed into one texture.

Below is the effect we will create. What you’re seeing is the scene rendered with the view space normals as colors, and then with the depth values as colors.

[Image: DepthNormals (Depth+Normal Texture)]

If you remember from Part 1, we can tell the camera in Unity to generate a depth texture using the Camera.depthTextureMode variable. According to the docs, there are actually two modes you can set this variable to:

  • DepthTextureMode.Depth: a depth texture.
  • DepthTextureMode.DepthNormals: depth and view space normals packed into one texture.
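For reference, requesting either mode from a script is a one-liner. Here’s a minimal sketch (the class name is mine, and it assumes the script sits on the camera, just like the scripts later in this post):

using UnityEngine;

public class RequestDepthNormals : MonoBehaviour {
   void Start () {
      //ask the camera to render depth + view space normals into _CameraDepthNormalsTexture
      camera.depthTextureMode = DepthTextureMode.DepthNormals;
      //or, for the depth-only texture from Parts 1 and 2:
      //camera.depthTextureMode = DepthTextureMode.Depth;
   }
}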

We are already familiar with DepthTextureMode.Depth, so the question is: how exactly do we get the values of depth and normals from DepthTextureMode.DepthNormals? 

It turns out you need to use the function DecodeDepthNormal. This function is defined in the UnityCG.cginc include file, which, by the way, can be found on Windows at this path: <program_files>/Unity/Editor/Data/CGIncludes/

Below is the definition:

inline void DecodeDepthNormal( float4 enc, out float depth, out float3 normal )
{
   depth = DecodeFloatRG (enc.zw);
   normal = DecodeViewNormalStereo (enc);
}

So what is going on here? The function takes one input, float4 enc (a raw texel from the depth+normals texture), and fills in two outputs: out float depth and out float3 normal. It runs DecodeFloatRG on enc.zw to recover the depth, and DecodeViewNormalStereo on enc to recover the normal. In other words, the depth is packed into the texture’s z and w channels, and the view space normal is encoded in its x and y channels.

This is what it will look like in our code:

DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthValue, normalValues);

depthValue is a float which will contain the depth value of the scene, and normalValues is a float3 that will contain the view space normals. As for the first argument, tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), what’s going on? Well, _CameraDepthNormalsTexture’s type is sampler2D, but what DecodeDepthNormal requires is a float4. So we apply tex2D, a function which performs a texture lookup in a given sampler and returns the texel as a float4.

The first input that tex2D takes is the sampler, in our case _CameraDepthNormalsTexture, and the second input is the coordinates at which to perform the lookup, which in our case is the screen position, or i.scrPos. However, i.scrPos is a float4, and the input needs to be a float2, so we take only the xy coordinates. (Strictly speaking, ComputeScreenPos returns homogeneous coordinates meant to be divided by w, but for a full-screen image effect w is effectively 1, so sampling with xy works.)
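As an aside, the equivalent lookup can be done with tex2Dproj (the function used in Parts 1 and 2), which performs the divide by w for you. A minimal sketch, where enc is just a local name I made up:

float4 enc = tex2Dproj(_CameraDepthNormalsTexture, UNITY_PROJ_COORD(i.scrPos));
DecodeDepthNormal(enc, depthValue, normalValues);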

The Shader

Here’s the code for the shader. Let’s call it “DepthNormals.shader”.

Shader "Custom/DepthNormals" {
Properties {
   _MainTex ("", 2D) = "white" {}
   _HighlightDirection ("Highlight Direction", Vector) = (1, 0, 0, 0)
}

SubShader {
Tags { "RenderType"="Opaque" }

Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

sampler2D _CameraDepthNormalsTexture;
float _StartingTime;
float _showNormalColors = 1; //when this is 1, show normal values as colors. when 0, show depth values as colors.

struct v2f {
   float4 pos : SV_POSITION;
   float4 scrPos: TEXCOORD1;
};

//Our Vertex Shader
v2f vert (appdata_base v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.scrPos=ComputeScreenPos(o.pos);
   o.scrPos.y = 1 - o.scrPos.y;
   return o;
}

sampler2D _MainTex;
float4 _HighlightDirection;

//Our Fragment Shader
half4 frag (v2f i) : COLOR{

float3 normalValues;
float depthValue;
//extract depth value and normal values

DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthValue, normalValues);
if (_showNormalColors == 1){
   float4 normalColor = float4(normalValues, 1);
   return normalColor;
} else {
   float4 depth = float4(depthValue, depthValue, depthValue, 1); //grayscale: depth in rgb, alpha 1
   return depth;
}
}
ENDCG
}
}
FallBack "Diffuse"
}

Remember that the normal values are in view space, so when you move the camera, the normals, and thus the colors, change.

[Image: DepthNormalsCamera]

The script to attach to the camera

Let’s call it “DepthNormals.cs” just to keep things consistent. What the script does is, every time the user presses the “E” key, it switches the shader between showing the depth values and the normal values.

using UnityEngine;
using System.Collections;

public class DepthNormals : MonoBehaviour {

public Material mat;
bool showNormalColors = true;

void Start () {
   camera.depthTextureMode = DepthTextureMode.DepthNormals;
}

// Update is called once per frame
void Update () {
   if (Input.GetKeyDown (KeyCode.E)){
      showNormalColors = !showNormalColors;
   }

   if (showNormalColors){
      mat.SetFloat("_showNormalColors", 1.0f);
   } else {
      mat.SetFloat("_showNormalColors", 0.0f);
   }
}

// Called by the camera to apply the image effect
void OnRenderImage (RenderTexture source, RenderTexture destination){
   //mat is the material containing your shader
   Graphics.Blit(source,destination,mat);
}
}

Conclusion

Now you know how to get the depth and normal values from the depth+normal texture. Please remember that this is not meant to be a definitive guide on how to work with shaders. It is simply a summary of my experience working with the depth texture and vertex/fragment shaders in Unity during the past few days. Hopefully you found some of the information useful for your own development projects.


Unity Shaders – Depth and Normal Textures (Part 2)

This post is a continuation of an earlier post: Unity Shaders – Depth and Normal Textures (Part 1). The post after this is Part 3, which covers using both depth and normal textures.

Working with Depth Texture

Now that we’ve learned how to get the depth texture and display its values as a grayscale image, let’s do something interesting with it. I’m going to do a simpler version of the dimension shift effect from Quantum Conundrum: a ring of light that passes through the environment. Instead of starting the ring from the center of wherever I’m looking at, I’m just going to have it start from the farthest point I can see, and travel linearly past the camera. Additionally, as the ring passes through objects, they will be left with a slight color tint.

Here is what it looks like:

[Image: DepthRingPass]

Get Rendered Image Before Post-Processing

The effect we want to create is going to be superimposed on top of the original rendered image from the camera. As such, we will need to get from the camera an image of the scene right after it is rendered, but before any special effects are applied. To do this, we will need to use the Properties block in the shader. The rendered image from the camera will be brought in as _MainTex, so the Properties block will look like this:

Properties {
   _MainTex ("", 2D) = "white" {}
}

You’ll also need to remember to declare the variable inside the pass of your shader so you can use it:

sampler2D _MainTex;

Time

Since this is an animated effect, it will require a time variable. Fortunately, Unity provides a built-in value we can use: float4 _Time : Time (t/20, t, t*2, t*3). If this looks confusing, let me explain. The property name is _Time, and its type is float4, meaning a vector with 4 float components. The x component is t/20, meaning it gives the value of time divided by 20; the y component is t, normal time; the z component is time multiplied by 2; and the w component is time multiplied by 3.
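To see _Time in action on its own, here’s a minimal fragment shader sketch (not part of our effect; it assumes the same v2f struct and UnityCG.cginc include as the shaders in this series) that pulses the screen using _Time.y:

//pulses the whole screen between black and white over time
half4 frag (v2f i) : COLOR{
   float pulse = 0.5 + 0.5 * sin(_Time.y); //_Time.y is time in seconds
   return half4(pulse, pulse, pulse, 1);
}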

In this case, we want to use time to give us a value that goes from 1 to 0. In the following code, _RingPassTimeLength is the length of time we would like the ring to take to traverse the scene. _StartingTime (which we will set in a .cs script) is the time when the ring first begins to move, and _Time.y is the time at the current moment.

float _RingPassTimeLength = 2;
float t = 1 - ((_Time.y - _StartingTime)/_RingPassTimeLength );

When the ring first starts to move, the starting time is the current time, so _Time.y - _StartingTime = 0 and t = 1. Then, as _Time.y increases, the value of t decreases. For example, with _RingPassTimeLength = 2, one second after the trigger t = 1 - 1/2 = 0.5, and after two seconds t reaches 0. So if we use the variable t to adjust which depth values we are looking at, we can traverse the scene.

User-Specified Uniforms

There are a few variables that would be nice for a user to adjust in the editor, instead of having to change them in code. To do this, we will need to use user-specified uniforms. In our case, some values that would be nice for the user to be able to adjust are the length of time it takes for the ring to pass through the scene (_RingPassTimeLength) and the width of the ring itself (_RingWidth), like this:

[Image: uniform_variables_inspector]

What we need to do is declare these properties, and then define the uniforms using the same names.
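In miniature, the pattern looks like this (just a sketch; the full shader below does exactly this for both variables):

Properties {
   _RingWidth("ring width", Float) = 0.01 //shows up in the Inspector
}

//...and inside the CGPROGRAM block, a uniform with the matching name:
uniform float _RingWidth;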

So, let’s call our new shader DepthRingPass.shader. Here is the code:

Shader "Custom/DepthRingPass" {

Properties {
   _MainTex ("", 2D) = "white" {} //this texture will have the rendered image before post-processing
   _RingWidth("ring width", Float) = 0.01
   _RingPassTimeLength("ring pass time", Float) = 2.0
}

SubShader {
Tags { "RenderType"="Opaque" }
Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

sampler2D _CameraDepthTexture;
float _StartingTime;
uniform float _RingPassTimeLength; //the length of time it takes the ring to traverse all depth values
uniform float _RingWidth; //width of the ring
float _RunRingPass = 0; //used as a boolean value to trigger the ring pass; set from the script attached to the camera

struct v2f {
   float4 pos : SV_POSITION;
   float4 scrPos:TEXCOORD1;
};

//Our Vertex Shader
v2f vert (appdata_base v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.scrPos=ComputeScreenPos(o.pos);
   o.scrPos.y = 1 - o.scrPos.y;
   return o;
}

sampler2D _MainTex; //Reference in Pass is necessary to let us use this variable in shaders

//Our Fragment Shader
half4 frag (v2f i) : COLOR{

   //extract the value of depth for each screen position from _CameraDepthTexture
   float depthValue = Linear01Depth (tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.scrPos)).r);

   fixed4 orgColor = tex2Dproj(_MainTex, i.scrPos); //Get the original rendered color
   float4 newColor; //the color after the ring has passed
   half4 lightRing; //the ring of light that will pass through the depth

   float t = 1 - ((_Time.y - _StartingTime)/_RingPassTimeLength );

   //the script attached to the camera will set _RunRingPass to 1 and then will start the ring pass
   if (_RunRingPass == 1){
      //this part draws the light ring
      if (depthValue < t && depthValue > t - _RingWidth){
         lightRing.r = 1;
         lightRing.g = 0;
         lightRing.b = 0;
         lightRing.a = 1;
         return lightRing;
      } else {
          if (depthValue < t) {
             //the ring hasn't passed through here yet
             return orgColor;
          } else {
             //the ring has already passed through here:
             //take the original colors and add a slight red tint
             newColor.r = (orgColor.r + 1)*0.5;
             newColor.g = orgColor.g*0.5;
             newColor.b = orgColor.b*0.5;
             newColor.a = 1;
             return newColor;
         }
      }
    } else {
        return orgColor;
    }
}
ENDCG
}
}
FallBack "Diffuse"
}

And below is the script to be attached to the camera. Let’s call it DepthRingPass.cs. In addition to passing the depth texture and the rendered scene image from the camera to the shader, DepthRingPass.cs also triggers the ring to start. Whenever the user presses the “E” key, it sets the uniform variable _StartingTime to the current time and sets _RunRingPass to 1 (which tells the shader to draw the ring). The reason I’m using SetFloat here is that there isn’t a SetBool function (at least not that I’m aware of).

using UnityEngine;
using System.Collections;

[ExecuteInEditMode]
public class DepthRingPass : MonoBehaviour {

public Material mat;

void Start () {
    camera.depthTextureMode = DepthTextureMode.Depth;
}

void Update (){
   if (Input.GetKeyDown(KeyCode.E)){
      //set _StartingTime to current time
      mat.SetFloat("_StartingTime", Time.time);
      //set _RunRingPass to 1 to start the ring
      mat.SetFloat("_RunRingPass", 1);
  }
}

// Called by the camera to apply the image effect
void OnRenderImage (RenderTexture source, RenderTexture destination){
   //mat is the material containing your shader
   Graphics.Blit(source,destination,mat);
}
}

Remember, you’ll need to attach the shader we wrote to a material, and then set the material as our mat variable in the above script DepthRingPass.cs, which is then attached to your camera object.

[Image: depthringpass_inspector]

We will end here for now. In Part 3, I will talk about using both depth and normals in your shader, specifically, how to work with DepthTextureMode.DepthNormals.

Unity Shaders – Depth and Normal Textures (Part 1)

This is Part 1 of a 3 part series on working with depth and normal textures in Unity. Here’s Part 2 and Part 3.

I spent the last three days learning to write shaders in Unity. For the most part, this isn’t a terribly difficult task, as there is quite a lot of documentation that goes over the basics. However, when it comes to depth buffers, which are useful for post-process special effects, there’s definitely a shortage of information, and the Unity docs are not super helpful. For example, if you’re trying to understand how depth and normal textures are used, the advice in the Unity docs is to “refer to the EdgeDetection image effect in the Shader Replacement example project or SSAO Image Effect.” While this may be sufficient for someone who already has a firm grasp of shaders, it isn’t very helpful for a beginner.

Anyway, after many hours of coding through trial and error, and hunting down rare blog posts and forum discussions concerning the topic, I eventually did figure out how to work with depth and normal textures in Unity. As the learning process was such a frustrating one, I thought it’d be a good idea to write down what I did while my memory is still fresh because:

  1. In a few months, I will have forgotten what I did and won’t be able to understand my own code.
  2. In case somebody out there is having the same problem, the information will hopefully be helpful. The few blog posts I found about depth textures were incredibly useful to me, and I was really glad those developers took the time to write things down.

So, here we go.

Inspiration

I had started dabbling with shaders about six months ago. I remember going through a lot of tutorials explaining the graphics pipeline, different kinds of shaders, etc. At the time, I didn’t understand any of it and the topic of shaders just seemed very intimidating. I did manage to get a few things done by starting with an existing shader and tweaking things around until I got kind of what I wanted.

This time around, I wanted to recreate this dimension shifting effect from the game Quantum Conundrum:

[Image: quantum_conundrum_dimension_shift2]

In case you haven’t played Quantum Conundrum yet, I’ll explain what’s going on. Basically, your character has the ability to shift between a number of different dimensions: fluffy dimension, heavy dimension, slow-motion dimension, and reverse-gravity dimension. In each dimension, the shape of the environment and objects stays constant, but they have different physical properties. For example, in the fluffy dimension, everything is very lightweight, so you can pick up couches and other items you normally can’t pick up, and in the heavy dimension, everything becomes really heavy, so a cardboard box, which normally wouldn’t weigh down a button, becomes heavy enough to do so.

In addition to changing properties, the look of everything changes. In the fluffy dimension, everything looks like clouds, while in the heavy dimension, everything has a metallic texture to it. In the gif above, the player is shifting from the normal dimension to the heavy dimension, then to fluffy, back to heavy, and then to normal again.

Here’s a still frame of the transition:

[Image: quantum_conundrum_dimension_shift]

A few key things I noticed about this effect:

  1. The ring of light that passes through the room always starts from whichever object you’re looking at and spreads outwards from there. My guess is that it’s a sphere expanding in radius in all directions, since you can see a bit of the ring behind the glass as well.
  2. The ring of light is superimposed on the environment as well as on any objects.
  3. The ring splits up the textures of the dimensions, so the textures of the new dimension are not actually put in place until the ring has passed through. This means that at certain points, objects actually have two textures (e.g. the painting: look closely and you’ll see the bottom right part of the painting is the heavy dimension painting, while the rest is in the normal dimension).

First Step – Ask for Depth Texture

I had no idea how to approach this effect, and wasn’t even sure where to start looking. After posting the question on some forums and twitter, I was informed that it’s a post-processing effect shader that utilizes the depth buffer to give it that “spatially aware” sense.

I had forgotten most things I learned about shaders at this point, so I started off by going through the basics again. I won’t go into this part too much, except to point you to this explanation of the difference between surface shaders and vertex/fragment shaders, and a list of resources that I found really helpful. This stuff might seem really confusing and intimidating at first, but just read it over a few times and practice writing shaders, and I promise it’ll all make sense eventually. I do encourage you to at least have a look over these links before you continue reading, especially if you’re still new to shaders.

In Unity, to get the depth buffer, you actually have to use a render texture, which is a special type of texture that’s created and updated in realtime. You can use it to create something like a TV screen that’s showing something happening in one area of your game. The depth buffer, or depth texture, is actually just a render texture that contains values of how far objects in the scene are from the camera. (I should note that render textures are only available in Unity Pro).

So how do you get the depth texture? It turns out you just have to ask for it. First, you need to tell the camera to generate the depth texture, which you can do with Camera.depthTextureMode. Then, to pass it to your shader for processing, you’ll need to use the OnRenderImage function.

Your script, let’s call it PostProcessDepthGrayscale.cs, will therefore look like this:

using UnityEngine;
using System.Collections;

//so that we can see changes we make without having to run the game

[ExecuteInEditMode]
public class PostProcessDepthGrayscale : MonoBehaviour {

   public Material mat;

   void Start () {
      camera.depthTextureMode = DepthTextureMode.Depth;
   }

   void OnRenderImage (RenderTexture source, RenderTexture destination){
      Graphics.Blit(source,destination,mat);
      //mat is the material which contains the shader
      //that the rendered image is run through on its way to destination
   }
}

You will then need to attach this script to the camera object.

The Shader

Now, we will create a shader to process the depth texture and display it. It will be a simple vertex and fragment shader. Basically, it will read the depth texture from the camera, then display the depth value at each screen coordinate.

Let’s call the shader DepthGrayscale.shader:

Shader "Custom/DepthGrayscale" {
SubShader {
Tags { "RenderType"="Opaque" }

Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

sampler2D _CameraDepthTexture;

struct v2f {
   float4 pos : SV_POSITION;
   float4 scrPos:TEXCOORD1;
};

//Vertex Shader
v2f vert (appdata_base v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.scrPos=ComputeScreenPos(o.pos);
   //for some reason, the y position of the depth texture comes out inverted
   o.scrPos.y = 1 - o.scrPos.y;
   return o;
}

//Fragment Shader
half4 frag (v2f i) : COLOR{
   float depthValue = Linear01Depth (tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.scrPos)).r);
   half4 depth;

   depth.r = depthValue;
   depth.g = depthValue;
   depth.b = depthValue;

   depth.a = 1;
   return depth;
}
ENDCG
}
}
FallBack "Diffuse"
}

So as you can see, it’s a pretty basic vertex and fragment shader. The one thing I want to draw your attention to is this line in the vertex shader:

o.scrPos.y = 1 - o.scrPos.y;

For some reason, my depth texture kept coming out inverted. I couldn’t find anyone else who had the same problem, and could not figure out what was causing this, so I just inverted the y value as a fix. If you’re finding that your image is inverted vertically with the above script, then you can delete this line.
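One likely culprit, though I haven’t confirmed it, is the platform difference in render texture coordinates: on Direct3D-like platforms the texture origin is at the top, while on OpenGL it’s at the bottom, which would also explain why not everyone sees the inversion. Unity exposes this as the UNITY_UV_STARTS_AT_TOP define, so a more careful sketch of the fix would guard the flip with it instead of flipping unconditionally:

#if UNITY_UV_STARTS_AT_TOP
o.scrPos.y = 1 - o.scrPos.y; //only flip on platforms where the origin is the top
#endif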

Now, create a new material, call it DepthGrayscale, and set its shader to “DepthGrayscale” that we just created. Then, set DepthGrayscale as the material variable on the PostProcessDepthGrayscale.cs script that you attached to your camera.

What you should see

Your scene should look something like this (obviously with different objects; my scene is just a bunch of boxes spaced out so that you can see the change in color, which is just the depth value):

[Image: depth_texture]

Also, if your image is coming out like the image below, try lowering the far clipping plane setting on the camera object. It could be that the value is set too high, and so all your objects fall into a small band of the depth spectrum, and therefore all appear black. If you lower the far clipping plane value, then the depth spectrum gets smaller, and the objects would fall along more of a gradient in terms of depth values. I spent quite a long time thinking my code wasn’t working, when it turned out I just had the far clipping plane set too high.

[Image: depth_texture_far_clipping]
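If you want to try this from code rather than the Inspector, here’s a quick sketch (the class name is mine, and the values are arbitrary):

using UnityEngine;

[ExecuteInEditMode]
public class LowerFarClip : MonoBehaviour {
   void Start () {
      //the default far plane of 1000 squashes nearby objects into a
      //tiny slice of the 0..1 depth range; a smaller value spreads them out
      camera.farClipPlane = 50f;
   }
}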

This post is getting to be quite long, so I’m going to stop for now, and continue in Part 2.

Just a quick recap, this is what we’ve done so far:

  • Learned to use Camera.depthTextureMode to generate a depth texture.
  • Wrote a script to tell the camera to send the rendered image (in this case the depth texture) to a render texture, which is then passed to a shader.
  • Wrote a shader to display the depth values as a grayscale scene.

Unity Shader Programming Resources

I’m starting to work on making Relativity look good, and that basically translates to writing shaders. It was a little difficult to pick up at first, but after searching around the web for a bit, I finally found a couple of tutorials that explained the basics pretty well. I just wanted to list them here, in case anyone else is in the same position.

  • Shader Programming (Unite ’08) – If you don’t know anything about shaders, this is a good place to start. It’s a pretty long video (around 2 hours), and parts of it can be a little slow, but it’s a good practical intro to the topic.
  • Cg Programming Wikibook – This is a fantastic resource, and I recommend going through the tutorials in order to get a firm grasp of shader programming. It breaks down all the example scripts, so it’s pretty easy to follow along. The writing can also be quite funny at times.
  • Special Effects with Depth – Slides from a presentation given by Kuba Cupisz and Ole Ciliox at SIGGRAPH 2011, which gives a nice overview of using the depth buffer to create special effects shaders.
  • Getting Started With Custom Post-Processing Shaders in Unity3D – Post-processing shaders are a bit different from regular shaders in Unity. This blog post goes over the basics.