
Unity Shaders – Depth and Normal Textures (Part 3)

This is a continuation of a series of posts on shaders: Part 1, Part 2

In the previous two parts, I talked about using the depth texture in Unity. Here, I will discuss using the depth+normal texture through DepthTextureMode.DepthNormals, which is basically the depth and the view-space normals packed into one texture.

Below is the effect we will create. What you’re seeing is the scene rendered first with the view-space normals as colors, and then with the depth values as colors.

[Image: Depth+Normal Texture]

If you remember from Part 1, we can tell the camera in Unity to generate a depth texture using the Camera.depthTextureMode variable. According to the docs, there are actually two modes you can set this variable to:

  • DepthTextureMode.Depth: a depth texture.
  • DepthTextureMode.DepthNormals: depth and view space normals packed into one texture.
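For reference, switching the camera into this mode is a single assignment from a script attached to it. Here is a minimal sketch (the class name is arbitrary; the full script we actually use appears later in the post):

using UnityEngine;

public class EnableDepthNormals : MonoBehaviour {
    void Start () {
        // Ask the camera to render a packed depth + view-space-normals texture.
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.DepthNormals;
    }
}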

We are already familiar with DepthTextureMode.Depth, so the question is: how exactly do we get the values of depth and normals from DepthTextureMode.DepthNormals? 

It turns out you need to use the function DecodeDepthNormal. This function is defined in the UnityCG.cginc include file, which, by the way, can be found on Windows at this path: <program_files>/Unity/Editor/Data/CGIncludes/

Below is the definition:

inline void DecodeDepthNormal( float4 enc, out float depth, out float3 normal )
{
    depth = DecodeFloatRG (enc.zw);
    normal = DecodeViewNormalStereo (enc);
}

So what is going on here? The function takes three parameters: float4 enc, out float depth, and out float3 normal. It takes the encoded information in enc, runs DecodeFloatRG on its zw channels to recover the depth, runs DecodeViewNormalStereo on it to recover the view-space normal, and writes those results into the two out parameters.
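For the curious, here is roughly what those two helpers look like in UnityCG.cginc (paraphrased, so check the copy that ships with your Unity version for the exact code): the depth is packed into the texture’s zw channels as two 8-bit values, and the normal is stored in xy with a stereographic-style encoding.

inline float DecodeFloatRG( float2 enc )
{
    // Depth is stored as two 8-bit channels; recombine them into one float in [0,1].
    float2 kDecodeDot = float2(1.0, 1.0/255.0);
    return dot( enc, kDecodeDot );
}

inline float3 DecodeViewNormalStereo( float4 enc4 )
{
    // The xy channels hold a stereographically projected normal;
    // unproject it back into a unit-length view-space normal.
    float kScale = 1.7777;
    float3 nn = enc4.xyz * float3(2.0*kScale, 2.0*kScale, 0) + float3(-kScale, -kScale, 1);
    float g = 2.0 / dot(nn.xyz, nn.xyz);
    float3 n;
    n.xy = g * nn.xy;
    n.z = g - 1;
    return n;
}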

This is what it will look like in our code:

DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthValue, normalValues);

depthValue is a float which will contain the depth value of the scene, and normalValues is a float3 that will contain the view-space normals. As for the first argument, tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), what’s going on there? _CameraDepthNormalsTexture is declared as a sampler2D, but DecodeDepthNormal expects a float4, so we apply tex2D, a function which performs a texture lookup in a given sampler.

The first input tex2D takes is the sampler, in our case _CameraDepthNormalsTexture, and the second input is the coordinate at which to perform the lookup, which in our case is the screen position, i.scrPos. However, i.scrPos is a float4 and the input needs to be a float2, so we take only the xy coordinates.
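One caveat worth noting (my own addition, not part of the original shader): ComputeScreenPos returns homogeneous coordinates, so in the general case you would divide xy by w before sampling. For a full-screen image effect drawn through Graphics.Blit, w is effectively 1, which is why sampling i.scrPos.xy directly works here. A more defensive version of the same lookup would be:

// Perspective divide: maps the homogeneous screen position into the [0,1] UV range.
float2 uv = i.scrPos.xy / i.scrPos.w;
DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, uv), depthValue, normalValues);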

The Shader

Here’s the code for the shader. Let’s call it “DepthNormals.shader”.

Shader "Custom/DepthNormals" {
Properties {
_MainTex ("", 2D) = "white" {}
_HighlightDirection ("Highlight Direction", Vector) = (1, 0,0)
}

SubShader {
Tags { "RenderType"="Opaque" }

Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

sampler2D _CameraDepthNormalsTexture;
float _StartingTime;
float _showNormalColors = 1; //when this is 1, show normal values as colors. when 0, show depth values as colors.

struct v2f {
float4 pos : SV_POSITION;
float4 scrPos: TEXCOORD1;
};

//Our Vertex Shader
v2f vert (appdata_base v){
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
o.scrPos=ComputeScreenPos(o.pos);
o.scrPos.y = 1 - o.scrPos.y;
return o;
}

sampler2D _MainTex;
float4 _HighlightDirection;

//Our Fragment Shader
half4 frag (v2f i) : COLOR{

float3 normalValues;
float depthValue;
//extract depth value and normal values

DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthValue, normalValues);
if (_showNormalColors == 1){
float4 normalColor = float4(normalValues, 1);
return normalColor;
} else {
float4 depth = float4(depthValue);
return depth;
}
}
ENDCG
}
}
FallBack "Diffuse"
}

Remember that the normal values are in view space, so when you move the camera, the normals, and thus the colors, change.


The script to attach to the camera

Let’s call it “DepthNormals.cs” just to keep things consistent. What the script does is, every time the user presses the E key, it switches the shader between showing the depth values and the normal values.

using UnityEngine;
using System.Collections;

public class DepthNormals : MonoBehaviour {

    public Material mat;
    bool showNormalColors = true;

    void Start () {
        camera.depthTextureMode = DepthTextureMode.DepthNormals;
    }

    // Update is called once per frame
    void Update () {
        if (Input.GetKeyDown (KeyCode.E)){
            showNormalColors = !showNormalColors;
        }

        if (showNormalColors){
            mat.SetFloat("_showNormalColors", 1.0f);
        } else {
            mat.SetFloat("_showNormalColors", 0.0f);
        }
    }

    // Called by the camera to apply the image effect
    void OnRenderImage (RenderTexture source, RenderTexture destination){
        //mat is the material containing your shader
        Graphics.Blit(source, destination, mat);
    }
}

Conclusion

Now you know how to get the depth and normal values from the depth+normal texture. Please remember that this is not meant to be a definitive guide on how to work with shaders. It is simply a summary of my experience working with the depth texture and vertex/fragment shaders in Unity during the past few days. Hopefully you found some of the information useful for your own development projects.

13 Comments

  1. Hello! Thank you for your wonderful post! I want to know: is it possible to render just the back-facing surfaces? I tried “Cull Front” but it didn’t work.

  2. Yes, I do believe it is possible to only render the back facing surfaces. “Cull Front” should work. Do you have lighting on? I’d recommend checking out the section on culling here, and also the Culling and Depth Testing page on Unity Docs.

    Hope this helps.

  3. In case anyone is interested, here’s how to convert those view-space normals to world space:

    C#

    Matrix4x4 MV = camera.cameraToWorldMatrix;
    aoMaterial.SetMatrix("_CameraMV", MV);

    Shader

    float4x4 _CameraMV;

    float3 GetWorldNormal(float2 screenspaceUV)
    {
        float4 dn = tex2D(_CameraDepthNormalsTexture, screenspaceUV);
        float3 n = DecodeViewNormalStereo(dn);
        float3 worldN = mul((float3x3)_CameraMV, n);
        return worldN;
    }

  4. Thanks a lot man! I’m trying to implement SSAO shader in Unity3D. This is definitely going to help me 🙂

  5. Help me please. I want to get the same result, but with the back faces.
    I tried changing the code, but it does not work.
    I want to implement this method in unity: http___://www._uraldev.ru/articles/id/39
    To calculate the caustics I need five textures:
    1. Object_depth
    2. Object_normal
    3. BackFacesObject_depth
    4. BackFacesObject_normal
    5. Scene_depth
    Once again. How to get the normal and depth of the backFaces?

  6. The URL link does not go through in its pure form.

  7. float4 depth = float4(depthValue);
    is no longer valid.

    Would it be
    float4 depth = float4(depthValue,1,1,1);
    float4 depth = float4(depthValue,0,0,0);
    or
    float4 depth = float4(depthValue,depthValue,depthValue,depthValue);

    I guess none would matter if only the 1st component is being read 🙂

  8. Thanks!!

    Some advice

    float4 depth = float4(depthValue);
    change to
    float4 depth = float4(depthValue,depthValue,depthValue,1.0);

    camera.depthTextureMode = DepthTextureMode.DepthNormals;
    change to
    GetComponent<Camera>().depthTextureMode = DepthTextureMode.DepthNormals;

  9. wow thanks a lot for this.
    This saved me a ton of time!! 😀

  10. Saved my day! Needed a bit of modification as JackWu mentioned, but otherwise worked like a charm!

  11. First off, thanks for a great post! How can you convert the view normal to a world normal that won’t be affected by the rotation of the camera? I tried Peter’s method above and it didn’t work 🙁

  12. Vikas Reddy Katta

    Awesome tutorial!

  13. Juan Camilo Quintero

    Hey! This is great, but I don’t understand how DecodeDepthNormal works. I mean, to my understanding, tex2D is a function which returns a color value from a texture at a specified coordinate, so it would return the final color that the camera rendered. But how can DecodeDepthNormal return the values of depth and normals out of that?? What would happen if I input an arbitrary color such as float4(0,0,0,0)? Does something else happen if you use tex2D as a float4 instead of a fixed4 as I usually do? I can understand that the last value is unused in the camera rendering so it can store other information, but why and how do the other functions DecodeFloatRG and DecodeViewNormalStereo use the other values to decode the information?
