Manifold Garden – State of Development

The last few weeks have been crazy busy. I really need to get better at posting in the devlog more frequently.

I’m taking this weekend to write an update on everything: tools, game design, related projects, etc. It’s going to be quite extensive, so I will break it up into parts.

Let’s get started.

Tools Programming

David Laskey came on board to the project earlier this year, initially to work on optimization and the PlayStation 4 port. Soon after, David started building a set of custom tools to help streamline the design process.

I didn’t quite realize it at the time, but the project was basically going from pre-production to production. As in, the prototyping stage was more or less over, and it was time to refine the development process and trim inefficiencies.

I also started learning to write Unity3D editor extensions as a result of working with David, and it has been a huge help to production. So many processes that used to be super tedious and time-consuming have now been streamlined.

The problem with tedious processes isn't just the time they take up (although that is definitely a big factor); they also make you dread the work, because it's just not fun. I'd be really in the zone making a level, iterating on areas, moving stuff around, and then all of a sudden I'd have to make a window, which meant an hour of tedium that really killed the mood.

It also meant I was reluctant to iterate. If a window was good enough, but not great, I would just leave it at good enough, because the time it would take to get it up to great didn't feel worth it.

The window-making process is just one example. There were a lot of similar tasks that were incredibly tedious for me to perform, and over the last several months we've put a lot of time into trimming these inefficiencies.

I’m going to start talking about all these tools in detail in these next few updates.

We’ll start with the window generator.

Window Generator

Here's a quick timelapse video comparing the old and new ways of making windows in Unity for Manifold Garden:

The old way: 

windowBuilding_old

Here’s how I built windows the old way (everything is done with ProBuilder here, just FYI):

1. Make a “backboard” that is the dimension of the window I want. This gives me a reference for the size.

2. Start putting in frame pieces. Almost every straight segment is a separate piece.

3. Horizontal and vertical pieces are colored differently so I can tell them apart.

4. Place the window pieces. These are also colored differently than the frame pieces.

5. Apply the glass material to the outside faces of the window pieces.

6. Merge the frame pieces and the glass pieces (but first save a version with the separate pieces in case I want to come back and make changes).

For a complicated design, this can easily take an hour or more. In the gif, I was just randomly putting pieces in place without actually thinking about the design, and that still took 10 minutes.

Also, if I wanted to make changes to a design, it was almost like rebuilding the entire window. Even a small change involved moving a bunch of pieces out of the way and readjusting their sizes. It was not fun.

The new way: 

windowBuilding_new

One of the first tools that David worked on when he joined was the window generator. I showed him the old process and we both agreed that it needed to go.

It felt like the most natural way to design the windows, since they are basically 2D designs, was to draw them in Photoshop, and then extrude the 2D shape into 3D.

The process now: I lay out the design on a grid in Photoshop, where each pixel is 0.25 units, then open the window generator tool in Unity, which automatically builds a 3D version of the window, saves it as a prefab, and aligns it to the grid.

For the image, grey means frame, white means glass, and black means cutout.

Using Photoshop means that I can take advantage of all the Photoshop features (layers, invert, etc.) when doing the actual design.
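To give a sense of how a tool like this can work, here is a minimal sketch of the general idea (this is my own hypothetical illustration, not David's actual generator; all names and details here are assumptions). It walks the pixels of the design image and classifies each cell using the grey/white/black convention above:

using UnityEngine;
using UnityEditor;

//Hypothetical sketch of an image-to-window generator.
//Not the actual Manifold Garden tool; names and details are made up.
public class WindowGeneratorSketch : EditorWindow {

   const float CellSize = 0.25f; //each pixel in the source image is 0.25 units

   Texture2D source;

   [MenuItem("Tools/Window Generator Sketch")]
   static void Open () {
      GetWindow<WindowGeneratorSketch>();
   }

   void OnGUI () {
      source = (Texture2D)EditorGUILayout.ObjectField("Design", source, typeof(Texture2D), false);
      if (source != null && GUILayout.Button("Generate")){
         Generate(source);
      }
   }

   static void Generate (Texture2D tex) {
      //note: the texture must be import-flagged as readable for GetPixel to work
      GameObject root = new GameObject("Window");
      for (int y = 0; y < tex.height; y++){
         for (int x = 0; x < tex.width; x++){
            Color c = tex.GetPixel(x, y);
            if (c.grayscale < 0.1f) continue; //black = cutout, so skip
            bool isGlass = c.grayscale > 0.9f; //white = glass, grey = frame

            GameObject cell = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cell.name = isGlass ? "Glass" : "Frame";
            cell.transform.parent = root.transform;
            cell.transform.localScale = Vector3.one * CellSize;
            cell.transform.localPosition = new Vector3(x * CellSize, y * CellSize, 0);
         }
      }
      //a real tool would merge adjacent cells into larger pieces, assign the
      //frame/glass materials, snap the result to the grid, and save it as a prefab
   }
}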

An entire window, even a complicated one, now takes just minutes instead of hours.

It’s easily my favorite tool in the engine.

If you’re interested in how the window tool works, David actually came on the stream a few weeks ago to talk about the tech behind it. It was storming in Chicago that day, so there were some internet issues, and the stream got cut up into 2 parts.

Here’s part 1:

Here’s part 2:

Line Drawer Tool Basic UI

Got a lot done today on the tool.

It now works from within an editor window instead of needing a script on an object.

I also have rectangles that project onto the geometry, showing where subsequent markers can be placed.

The brush itself also changes color to show you where you can place the next marker.

manifoldgarden_lineDrawerToolBasicUI

Started working on mesh generation. It's pretty much the same system we use for water mesh generation, so I'm starting by copying that over.

The line mesh is a little less complicated, so I'll delete the parts I don't need, clean it up, and go from there.

Should have basic line mesh generation done by tomorrow.
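For anyone curious what basic line mesh generation looks like, here's a minimal sketch of the general approach (my own simplified illustration, not the actual water/line system): build a quad strip by offsetting each marker sideways to give the line width, then stitch two triangles per segment.

using UnityEngine;

//Minimal sketch of generating a flat line mesh between placed markers.
//Not the actual Manifold Garden system; just the general idea.
[RequireComponent(typeof(MeshFilter))]
public class LineMeshSketch : MonoBehaviour {

   public Transform[] markers; //points placed by the tool (needs at least 2)
   public float width = 0.25f;

   void Start () {
      Vector3[] verts = new Vector3[markers.Length * 2];
      for (int i = 0; i < markers.Length; i++){
         //direction along the line at this marker
         Vector3 dir = (i < markers.Length - 1)
            ? (markers[i + 1].position - markers[i].position).normalized
            : (markers[i].position - markers[i - 1].position).normalized;
         //sideways offset to give the line width
         Vector3 side = Vector3.Cross(dir, Vector3.up).normalized * (width * 0.5f);
         verts[i * 2] = markers[i].position - side;
         verts[i * 2 + 1] = markers[i].position + side;
      }

      //two triangles per segment, forming a quad strip
      int[] tris = new int[(markers.Length - 1) * 6];
      for (int i = 0; i < markers.Length - 1; i++){
         int v = i * 2;
         int t = i * 6;
         tris[t] = v;         tris[t + 1] = v + 2; tris[t + 2] = v + 1;
         tris[t + 3] = v + 1; tris[t + 4] = v + 2; tris[t + 5] = v + 3;
      }

      Mesh mesh = new Mesh();
      mesh.vertices = verts;
      mesh.triangles = tris;
      mesh.RecalculateNormals();
      GetComponent<MeshFilter>().mesh = mesh;
   }
}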

Pagoda Pillar Level

Last night's stream started off as an attempt to put the finishing touches on a level, and ended up as a debug session in which we uncovered some changes to Unity's instantiation code in the latest update. All in all, another typical night of gamedev.

Part 1: https://youtu.be/YqT1O4WKwRU

Part 2: https://youtu.be/EjJTEL0-kCo

Anyway, we did manage to solve the weird bug, but then Unity crashed pretty hard, so I ended the stream there.

Afterwards, I was able to get the level running again. It took some tweaking, but I think I finally got the sense of scale I wanted in order to convey some mystery:

Relativity_01 Relativity_02 Relativity_03 Relativity_04

Development Update – Edge-Detection + Render Textures

I finally got my edge-detection shader to work on render textures! This took a really long time to figure out, so I’m really happy to have solved this issue.

Basically, for a long time, I didn’t know how to get shaders applied to render textures. Since the portals in the game use render textures to create the illusion of a world on the other side, this meant an inconsistency in visual style when looking through a portal, like this:

Relativity_Game_Screenshot-2014-05-22_04-30-58


You can see that nothing inside the portal has edge-detection applied. This didn't affect gameplay, but I knew it would definitely need to be fixed for the final release of the game, and I had no idea how to address the problem.
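For context, the basic render-texture portal setup looks roughly like this (a hedged sketch of the general technique, not the game's actual portal code; all names here are my own):

using UnityEngine;

//Rough sketch of how a render-texture portal works in general.
//Not the actual Manifold Garden portal code.
public class PortalSketch : MonoBehaviour {

   public Camera portalCamera;    //secondary camera looking out of the linked portal
   public Renderer portalSurface; //the surface the player sees

   void Start () {
      //the portal camera renders its view into a texture...
      RenderTexture tex = new RenderTexture(Screen.width, Screen.height, 24);
      portalCamera.targetTexture = tex;

      //...and that texture is displayed on the portal surface,
      //creating the illusion of a world on the other side
      portalSurface.material.mainTexture = tex;
   }
}

In my case, the portal camera was created and hidden internally at runtime, so the image effect applied to the main camera never reached it, which is exactly the inconsistency in the screenshot above.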

A few weeks ago, I finally decided to roll up my sleeves and really figure out how render textures work. Up until then, the portal system was just hacked together, and I only knew enough to get things barely working.

I knew I would need the shader to get applied to a camera, but for a long time, I just couldn’t find where that camera was!

Eventually, I discovered this line of code:

go.hideFlags = HideFlags.HideAndDontSave;

“go” is the game object with the camera attached. This line tells the engine to hide the object from the editor hierarchy (so it isn't seen) and not to save it to the scene.

I changed it to this:

go.hideFlags = HideFlags.DontSave;

So now I could see the camera created at runtime inside the editor hierarchy.

From here, I just added the edge-detection shader to the runtime-generated camera.
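In code, that amounts to something like this (a hedged sketch; “EdgeDetection” stands in for whatever image-effect component the main camera uses, not a specific Unity API):

//hedged sketch: once the runtime-generated camera is visible in the hierarchy,
//give it the same image-effect component the main camera has
//("EdgeDetection" is a stand-in name)
Camera portalCam = go.GetComponent<Camera>();
if (portalCam.GetComponent<EdgeDetection>() == null){
   portalCam.gameObject.AddComponent<EdgeDetection>();
}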

This is what it looks like now:

Relativity_Game_Screenshot-2014-05-22_04-25-28

This still isn't perfect. Shadows aren't rendered on render textures, which makes the lighting look inconsistent.

However, I’m really happy to have been able to cross a big item off of the bug list.

Unity Shaders – Depth and Normal Textures (Part 3)

This is a continuation of a series of posts on shaders: Part 1, Part 2

In the previous two parts, I talked about using depth texture in Unity. Here, I will discuss using depth+normal textures through DepthTextureMode.DepthNormals, which is basically depth and view space normals packed into one.

Below is the effect we will create. What you're seeing is the scene rendered first with the view-space normals as colors, and then with the depth values as colors.

DepthNormals

Depth+Normal Texture

If you remember from Part 1, we can tell the camera in Unity to generate a depth texture using the Camera.depthTextureMode variable. According to the docs, there are actually two modes you can set this variable to:

  • DepthTextureMode.Depth: a depth texture.
  • DepthTextureMode.DepthNormals: depth and view space normals packed into one texture.
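Switching the camera into this second mode is a one-liner (the script at the end of this post does exactly this in Start):

//tell the camera to generate the packed depth + view-space normals texture
camera.depthTextureMode = DepthTextureMode.DepthNormals;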

We are already familiar with DepthTextureMode.Depth, so the question is: how exactly do we get the values of depth and normals from DepthTextureMode.DepthNormals? 

It turns out you need to use the function DecodeDepthNormal. This function is defined in the UnityCG.cginc include file, which, by the way, can be found on Windows at this path: <program_files>/Unity/Editor/Data/CGIncludes/

Below is the definition:

inline void DecodeDepthNormal( float4 enc, out float depth, out float3 normal )
{
   depth = DecodeFloatRG (enc.zw);
   normal = DecodeViewNormalStereo (enc);
}

So what is going on here? The function takes 3 parameters: float4 enc, out float depth, out float3 normal. It decodes the depth from enc.zw with DecodeFloatRG, decodes the view-space normal from enc.xy with DecodeViewNormalStereo, and writes the results into the out parameters depth and normal.

This is what it will look like in our code:

DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthValue, normalValues);

depthValue is a float that will contain the depth value of the scene, and normalValues is a float3 that will contain the view-space normals. As for the first argument, tex2D(_CameraDepthNormalsTexture, i.scrPos.xy): _CameraDepthNormalsTexture is of type sampler2D, but DecodeDepthNormal requires a float4, so we apply tex2D, a function that performs a texture lookup in a given sampler.

The first input tex2D takes is the sampler, in our case _CameraDepthNormalsTexture, and the second is the coordinates to perform the lookup, which in our case is the screen position, i.scrPos. However, i.scrPos is a float4 and the input needs to be a float2, so we take only the xy components.

The Shader

Here’s the code for the shader. Let’s call it “DepthNormals.shader”.

Shader "Custom/DepthNormals" {
Properties {
   _MainTex ("", 2D) = "white" {}
   _HighlightDirection ("Highlight Direction", Vector) = (1, 0,0)
}

SubShader {
Tags { "RenderType"="Opaque" }

Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

sampler2D _CameraDepthNormalsTexture;
float _StartingTime; //unused here; leftover from the ring effect in Part 2
float _showNormalColors = 1; //when this is 1, show normal values as colors. when 0, show depth values as colors.

struct v2f {
   float4 pos : SV_POSITION;
   float4 scrPos: TEXCOORD1;
};

//Our Vertex Shader
v2f vert (appdata_base v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.scrPos=ComputeScreenPos(o.pos);
   o.scrPos.y = 1 - o.scrPos.y;
   return o;
}

sampler2D _MainTex;
float4 _HighlightDirection;

//Our Fragment Shader
half4 frag (v2f i) : COLOR{

float3 normalValues;
float depthValue;
//extract depth value and normal values

DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthValue, normalValues);
if (_showNormalColors == 1){
   float4 normalColor = float4(normalValues, 1);
   return normalColor;
} else {
   //smear the single depth value across all channels so it displays as grayscale
   float4 depth = float4(depthValue, depthValue, depthValue, 1);
   return depth;
}
}
ENDCG
}
}
FallBack "Diffuse"
}

Remember that the normal values are in view space, so when you move the camera, the normals, and thus the colors, change.

DepthNormalsCamera

The script to attach to the camera

Let's call it “DepthNormals.cs” just to keep things consistent. Every time the user presses the “E” key, the script switches the shader between showing the depth values and the normal values.

using UnityEngine;
using System.Collections;

public class DepthNormals : MonoBehaviour {

public Material mat;
bool showNormalColors = true;

void Start () {
   //"camera" is the built-in component shortcut of the time;
   //on newer Unity versions, use GetComponent<Camera>() instead
   camera.depthTextureMode = DepthTextureMode.DepthNormals;
}

// Update is called once per frame
void Update () {
   if (Input.GetKeyDown (KeyCode.E)){
      showNormalColors = !showNormalColors;
   }

   if (showNormalColors){
      mat.SetFloat("_showNormalColors", 1.0f);
   } else {
      mat.SetFloat("_showNormalColors", 0.0f);
   }
}

// Called by the camera to apply the image effect
void OnRenderImage (RenderTexture source, RenderTexture destination){
   //mat is the material containing your shader
   Graphics.Blit(source,destination,mat);
}
}

Conclusion

Now you know how to get the depth and normal values from the depth+normals texture. Please remember that this is not meant to be a definitive guide on how to work with shaders. It is simply a summary of my experience working with the depth texture and vertex/fragment shaders in Unity over the past few days. Hopefully you found some of the information useful for your own development projects.


Unity Shaders – Depth and Normal Textures (Part 2)

This post is a continuation of an earlier post: Unity Shaders – Depth and Normal Textures (Part 1). The post after this is Part 3, which covers using both depth and normal textures.

Working with Depth Texture

Now that we've learned how to get the depth texture and display its values as a grayscale image, let's do something interesting with it. I'm going to do a simpler version of the ring of light that passes through the environment in the Quantum Conundrum dimension-shift effect. Instead of starting the ring from the center of wherever I'm looking, I'm just going to have it start from the farthest point I can see and travel linearly past the camera. Additionally, as the ring passes through objects, they will be left with a slight color tint.

Here is what it looks like:

DepthRingPass

Get Rendered Image Before Post-Processing

The effect we want to create is going to be superimposed on top of the original rendered image from the camera. As such, we will need to get from the camera an image of the scene right after it is rendered, but before any special effects are applied. To do this, we will need to use the Properties block in the shader. The rendered image from the camera will be brought in as _MainTex, so the Properties block will look like this:

Properties {
   _MainTex ("", 2D) = "white" {}
}

You'll also need to declare the variable inside your shader's Pass so you can use it:

sampler2D _MainTex;

Time

Since this is an animated effect, it requires a time variable. Fortunately, Unity provides a built-in value we can use: float4 _Time, with components (t/20, t, t*2, t*3). If this looks confusing, let me explain. The property name is _Time, and its type is float4, meaning a vector with 4 float components. The x component is t/20 (time divided by 20), the y component is t (normal time), the z component is time multiplied by 2, and the w component is time multiplied by 3.

In this case, we want to use time to give us a value that goes from 1 down to 0. In the following code, _RingPassTimeLength is the length of time we would like the ring to take to traverse the scene, _StartingTime (which we will set in a .cs script) is the time when the ring first begins to move, and _Time.y is the current time.

float _RingPassTimeLength = 2;
float t = 1 - ((_Time.y - _StartingTime)/_RingPassTimeLength );

When the ring first starts to move, the starting time is the current time, so _Time.y - _StartingTime = 0 and t = 1. Then, as _Time.y increases, the value of t decreases. For example, with _RingPassTimeLength = 2, half a second after the start t = 1 - 0.5/2 = 0.75. So if we use the variable t to select which depth values we are looking at, we can traverse the scene from the farthest point (depth 1) back toward the camera (depth 0).

User-Specified Uniforms

There are a few variables that would be nice for a user to adjust in the editor, instead of having to change them in code. To do this, we will need to use user-specified uniforms. In our case, the values worth exposing are the length of time it takes for the ring to pass through the scene (_RingPassTimeLength) and the width of the ring itself (_RingWidth), like this:

uniform_variables_inspector

What we need to do is declare these properties, and then define the uniforms using the same names.
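Concretely, the pairing looks like this (these lines are excerpted from the full shader below):

Properties {
   _RingWidth("ring width", Float) = 0.01
   _RingPassTimeLength("ring pass time", Float) = 2.0
}

//...and inside the Pass:
uniform float _RingPassTimeLength;
uniform float _RingWidth;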

So, let's call our new shader DepthRingPass.shader. Here is the code:

Shader "Custom/DepthRingPass" {

Properties {
   _MainTex ("", 2D) = "white" {} //this texture will have the rendered image before post-processing
   _RingWidth("ring width", Float) = 0.01
   _RingPassTimeLength("ring pass time", Float) = 2.0
}

SubShader {
Tags { "RenderType"="Opaque" }
Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

sampler2D _CameraDepthTexture;
float _StartingTime;
uniform float _RingPassTimeLength; //the length of time it takes the ring to traverse all depth values
uniform float _RingWidth; //width of the ring
float _RunRingPass = 0; //use this as a boolean value, to trigger the ring pass. It is called from the script attached to the camera.

struct v2f {
   float4 pos : SV_POSITION;
   float4 scrPos:TEXCOORD1;
};

//Our Vertex Shader
v2f vert (appdata_base v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.scrPos=ComputeScreenPos(o.pos);
   o.scrPos.y = 1 - o.scrPos.y;
   return o;
}

sampler2D _MainTex; //Reference in Pass is necessary to let us use this variable in shaders

//Our Fragment Shader
half4 frag (v2f i) : COLOR{

   //extract the depth value for each screen position from _CameraDepthTexture
   float depthValue = Linear01Depth (tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.scrPos)).r);

   fixed4 orgColor = tex2Dproj(_MainTex, i.scrPos); //get the original rendered color
   float4 newColor; //the color after the ring has passed
   half4 lightRing; //the ring of light that will pass through the depth

   float t = 1 - ((_Time.y - _StartingTime)/_RingPassTimeLength );

   //the script attached to the camera will set _RunRingPass to 1 and then will start the ring pass
   if (_RunRingPass == 1){
      //this part draws the light ring
      if (depthValue < t && depthValue > t - _RingWidth){
         lightRing.r = 1;
         lightRing.g = 0;
         lightRing.b = 0;
         lightRing.a = 1;
         return lightRing;
      } else {
          if (depthValue < t) {
             //the ring hasn't passed through this part yet
             return orgColor;
          } else {
             //the ring has passed through this part
             //basically taking the original colors and adding a slight red tint to it.
             newColor.r = (orgColor.r + 1)*0.5;
             newColor.g = orgColor.g*0.5;
             newColor.b = orgColor.b*0.5;
             newColor.a = 1;
             return newColor;
         }
      }
    } else {
        return orgColor;
    }
}
ENDCG
}
}
FallBack "Diffuse"
}

And below is the script to attach to the camera. Let's call it DepthRingPass.cs. In addition to passing the depth texture and the rendered scene image from the camera to the shader, DepthRingPass.cs also triggers the ring to start. Whenever the user presses the “E” key, it sets the uniform variable _StartingTime to the current time and sets _RunRingPass to 1 (which tells the shader to draw the ring). The reason I'm using SetFloat here is that there isn't a SetBool function (at least none that I'm aware of).

using UnityEngine;
using System.Collections;

[ExecuteInEditMode]
public class DepthRingPass : MonoBehaviour {

public Material mat;

void Start () {
    //"camera" is the built-in component shortcut of the time;
    //on newer Unity versions, use GetComponent<Camera>() instead
    camera.depthTextureMode = DepthTextureMode.Depth;
}

void Update (){
   if (Input.GetKeyDown(KeyCode.E)){
      //set _StartingTime to current time
      mat.SetFloat("_StartingTime", Time.time);
      //set _RunRingPass to 1 to start the ring
      mat.SetFloat("_RunRingPass", 1);
  }
}

// Called by the camera to apply the image effect
void OnRenderImage (RenderTexture source, RenderTexture destination){
   //mat is the material containing your shader
   Graphics.Blit(source,destination,mat);
}
}

Remember, you'll need to attach the shader we wrote to a material, set that material as the mat variable in DepthRingPass.cs, and attach the script to your camera object.

depthringpass_inspector

We will end here for now. In Part 3, I will talk about using both depth and normals in your shader, specifically, how to work with DepthTextureMode.DepthNormals.