Saving Image

Worked on optimizing saving today.

At the start of the day, saving took about 1000 ms, since everything was running in the main thread.

I had it run in a separate thread, and also split up Texture2D.ReadPixels to happen over multiple frames.

I’m saving a small screenshot of the game to use in the load game screen.

Instead of doing ReadPixels all at once, I read the texture in chunks over the course of several frames. Much better performance.

Code here:

// Saving by one slice each frame
// (RenderTexture.active is assumed to already be set to renderTex here)
screenshot.ReadPixels(new Rect(0, 0, width / 4, height), 0, 0);
yield return null;

for (int i = 1; i < 4; i++)
{
    current_tex = RenderTexture.active;
    RenderTexture.active = renderTex;
    // Each slice is width / 4 pixels wide, offset i slices from the left.
    screenshot.ReadPixels(new Rect(width * (i / 4f), 0, width / 4f, height), (int)(width * (i / 4f)), 0);
    RenderTexture.active = current_tex;
    screenshotCamera.targetTexture = null;

    yield return null;
}

The screenshotCamera.targetTexture = null line clears the render target so the player camera draws to the screen normally.

Took a while to figure this one out. Without it, the screen kept going black momentarily.
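For context, here's a minimal sketch of how the whole thing could fit together, with the file write pushed onto a background thread. The setup details and names (savePath, CaptureAndSave, ScreenshotSaver) are my assumptions here, not the exact game code:

using System.Collections;
using System.IO;
using System.Threading;
using UnityEngine;

public class ScreenshotSaver : MonoBehaviour
{
    public Camera screenshotCamera; // camera used just for the save thumbnail
    public string savePath;         // where the PNG ends up (assumption)

    public IEnumerator CaptureAndSave(int width, int height)
    {
        // Render the screenshot camera into a temporary render texture.
        RenderTexture renderTex = RenderTexture.GetTemporary(width, height, 24);
        screenshotCamera.targetTexture = renderTex;
        screenshotCamera.Render();

        Texture2D screenshot = new Texture2D(width, height, TextureFormat.RGB24, false);

        // Read one vertical slice per frame, as in the snippet above.
        for (int i = 0; i < 4; i++)
        {
            RenderTexture current_tex = RenderTexture.active;
            RenderTexture.active = renderTex;
            screenshot.ReadPixels(new Rect(width * (i / 4f), 0, width / 4f, height),
                                  (int)(width * (i / 4f)), 0);
            RenderTexture.active = current_tex;
            screenshotCamera.targetTexture = null; // keep the player camera rendering normally
            yield return null;
        }

        screenshot.Apply();
        RenderTexture.ReleaseTemporary(renderTex);

        // EncodeToPNG has to happen on the main thread, but the slow disk
        // write doesn't, so that part goes to a worker thread.
        byte[] png = screenshot.EncodeToPNG();
        ThreadPool.QueueUserWorkItem(_ => File.WriteAllBytes(savePath, png));
    }
}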

Manifold Garden – State of Development

The last few weeks have been crazy busy. I really need to get better at posting in the devlog more frequently.

I’m taking this weekend to write an update on everything: tools, game design, related projects, etc. It’s going to be quite extensive, so I will break it up into parts.

Let’s get started.

Tools Programming

David Laskey came on board to the project earlier this year, initially to work on optimization and the PlayStation 4 port. Pretty soon after, David started working on a bunch of custom tools to help streamline the design process.

I didn’t quite realize it at the time, but the project was basically going from pre-production to production. As in, the prototyping stage was more or less over, and it was time to refine the development process and trim inefficiencies.

I also started learning to write Unity3D editor extensions as a result of working with David, and it has been a huge help to production. So many processes that used to be super tedious and time-consuming have now been streamlined.

The problem with tedious processes isn't just the time they take up (although that is definitely a big factor). They also make you dread the work, because it's just not fun. I'd be in the zone making a level, iterating on areas, moving stuff around, and then all of a sudden I'd have to make a window, which meant an hour of tedium that really killed the mood for me.

It also meant I was reluctant to iterate. If a window was good enough, but not great, I would just leave it at good enough, because the time it would take to get it up to great just didn't feel worth it.

The window making process is just one example. There were a lot of similar tasks that were incredibly tedious to perform, and over the last several months we've put a lot of time into trimming these inefficiencies.

I’m going to start talking about all these tools in detail in these next few updates.

We’ll start with the window generator.

Window Generator

Here’s a quick video showing a timelapse comparing the old and the new ways of making windows in Unity for Manifold Garden:

The old way: 

[gif: windowBuilding_old]

Here’s how I built windows the old way (everything is done with ProBuilder here, just FYI):

1. Make a "backboard" with the dimensions of the window I want. This gives me a reference for the size.

2. Start putting in frame pieces. Almost every straight segment is a separate piece.

3. Horizontal and vertical pieces are colored differently so I can tell them apart.

4. Place the window pieces. These are also colored differently from the frame pieces.

5. Color the outside faces of the window with the glass material.

6. Merge the frame pieces and the glass pieces (but first save the version with the separate pieces in case I want to come back and make changes).

For a complicated design, this can easily take an hour or more. In the gif, I was just randomly putting pieces in place without actually thinking of the design, and that still took 10 minutes.

Also, if I wanted to change a design, it was a lot like rebuilding the entire window. Even a small change meant moving a bunch of pieces out of the way and readjusting their sizes. It was not fun.

The new way: 

[gif: windowBuilding_new]

One of the first tools that David worked on when he joined was the window generator. I showed him the old process and we both agreed that it needed to go.

Since the windows were basically 2D designs, it felt like the most natural workflow was to design them in Photoshop and then extrude that 2D design into a 3D shape.

The process now: I have a grid in Photoshop where each pixel is 0.25 units, I make the design there, and then open the window generator tool in Unity, which automatically builds a 3D version of the window, prefabs it, and aligns it to the grid.

For the image, grey means frame, white means glass, and black means cutout.

Using Photoshop means I can take advantage of all its features (layers, invert, etc.) when doing the actual design.
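If you're curious what that image-to-window step looks like in code, here's a rough sketch of just the pixel classification (this is not David's actual tool; all the names and thresholds are hypothetical):

using UnityEngine;

// Hypothetical sketch: classify each pixel of the design image.
// Grey pixels become frame, white become glass, black become cutouts.
// Each pixel corresponds to a 0.25-unit cell in the generated window.
public enum WindowCell { Cutout, Frame, Glass }

public static class WindowImageReader
{
    public const float CellSize = 0.25f; // world units per pixel (from the workflow above)

    public static WindowCell[,] Classify(Texture2D design) // texture must be import-flagged readable
    {
        var cells = new WindowCell[design.width, design.height];
        for (int x = 0; x < design.width; x++)
        {
            for (int y = 0; y < design.height; y++)
            {
                float g = design.GetPixel(x, y).grayscale;
                if (g < 0.25f)      cells[x, y] = WindowCell.Cutout; // black
                else if (g > 0.75f) cells[x, y] = WindowCell.Glass;  // white
                else                cells[x, y] = WindowCell.Frame;  // grey
            }
        }
        return cells;
    }
}

From a grid like this, the generator can then extrude the frame and glass cells into geometry and wrap everything up as a prefab aligned to the grid.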

An entire window, even a complicated one, now takes minutes instead of hours.

It’s easily my favorite tool in the engine.

If you're interested in how the window tool works, David actually came on the stream a few weeks ago to talk about the tech behind it. It was storming in Chicago that day, so there were some internet issues, and the stream got cut into two parts.

Here’s part 1:

Here’s part 2:

Line Drawer Tool Basic UI

Got a lot done today on the tool.

It now works from within an editor window instead of needing a script on an object.

I also have rectangles that project onto the geometry to show where subsequent markers can be placed.

The brush itself also changes color to show you where you can place the next marker.

[gif: manifoldgarden_lineDrawerToolBasicUI]

Started working on mesh generation. It's pretty much the same system we use for water mesh generation, so I'm starting by copying that over.

The line mesh is a little less complicated, so I'll delete a bunch of stuff I don't need, clean it up, and go from there.

Should have basic line mesh generation done by tomorrow.

Pagoda Pillar Level

Last night's stream started off as an attempt to put the finishing touches on a level, and ended up as a debugging session in which we uncovered some changes to Unity's instantiation code in their latest update. All in all, another typical night of gamedev.

Part 1: https://youtu.be/YqT1O4WKwRU

Part 2: https://youtu.be/EjJTEL0-kCo

Anyway, we did manage to solve the weird bug, but then Unity crashed pretty hard, so I ended the stream then.

Afterwards, I was able to get the level running again. It took some tweaking, but I think I finally got the sense of scale I wanted in order to convey some mystery:

[images: Relativity_01, Relativity_02, Relativity_03, Relativity_04]

Development Update – Edge-Detection + Render Textures

I finally got my edge-detection shader to work on render textures! This took a really long time to figure out, so I’m really happy to have solved this issue.

Basically, for a long time, I didn’t know how to get shaders applied to render textures. Since the portals in the game use render textures to create the illusion of a world on the other side, this meant an inconsistency in visual style when looking through a portal, like this:

[image: Relativity_Game_Screenshot-2014-05-22_04-30-58]


You can see that nothing inside the portal has edge-detection applied. This didn't affect gameplay or anything, but I knew it would definitely need to be fixed for the final release of the game, and I had no idea how to address the problem.

A few weeks ago, I finally decided to roll up my sleeves and really figure out how render textures work. Up until then, the portal system was just hacked together, and I only knew enough to get things barely working.

I knew I would need the shader to get applied to a camera, but for a long time, I just couldn’t find where that camera was!

Eventually, I discovered this line of code:

go.hideFlags = HideFlags.HideAndDontSave;

"go" is the game object with the camera attached. This line tells the engine to hide it from the editor hierarchy (so it isn't seen) and not to save it after runtime.

I changed it to this:

go.hideFlags = HideFlags.DontSave;

Now I could see the camera that gets created at runtime inside the editor hierarchy.

From here, I just added the edge-detection shader to the runtime-generated camera.
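In code, that last step amounts to something like this (EdgeDetection stands in for whatever image-effect component carries the shader; the actual component name is an assumption on my part):

// "go" is the camera object created at run time, as above.
Camera portalCam = go.GetComponent<Camera>();
if (portalCam.GetComponent<EdgeDetection>() == null)
{
    // EdgeDetection is a placeholder name for the image-effect
    // MonoBehaviour that applies the edge-detection shader.
    portalCam.gameObject.AddComponent<EdgeDetection>();
}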

This is what it looks like now:

[image: Relativity_Game_Screenshot-2014-05-22_04-25-28]

This still isn't perfect. Shadows aren't rendered on render textures, which makes the lighting look inconsistent.

However, I’m really happy to have been able to cross a big item off of the bug list.

Unity Shaders – Depth and Normal Textures (Part 3)

This is a continuation of a series of posts on shaders: Part 1, Part 2

In the previous two parts, I talked about using depth texture in Unity. Here, I will discuss using depth+normal textures through DepthTextureMode.DepthNormals, which is basically depth and view space normals packed into one.

Below is the effect we will create. What you're seeing is the scene rendered with the view space normals as colors, and then with the depth values as colors.

[gif: DepthNormals]

Depth+Normal Texture

If you remember from Part 1, we can tell the camera in Unity to generate a depth texture using the Camera.depthTextureMode variable. According to the docs, there are actually two modes you can set this variable to:

  • DepthTextureMode.Depth: a depth texture.
  • DepthTextureMode.DepthNormals: depth and view space normals packed into one texture.

We are already familiar with DepthTextureMode.Depth, so the question is: how exactly do we get the values of depth and normals from DepthTextureMode.DepthNormals? 

It turns out you need to use the function DecodeDepthNormal. This function is defined in the UnityCG.cginc include file, which on Windows can be found at: <program_files>/Unity/Editor/Data/CGIncludes/

Below is the definition:

inline void DecodeDepthNormal( float4 enc, out float depth, out float3 normal )
{
   depth = DecodeFloatRG (enc.zw);
   normal = DecodeViewNormalStereo (enc);
}

So what is going on here? The function takes three parameters: float4 enc, out float depth, out float3 normal. It decodes the packed data in enc, running DecodeFloatRG on enc.zw to recover the depth and DecodeViewNormalStereo on enc to recover the view space normal, and writes the results to the two out parameters.

This is what it will look like in our code:

DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthValue, normalValues);

depthValue is a float that will contain the depth value of the scene, and normalValues is a float3 that will contain the view space normals. As for the first argument, tex2D(_CameraDepthNormalsTexture, i.scrPos.xy): _CameraDepthNormalsTexture is a sampler2D, but DecodeDepthNormal requires a float4, so we apply tex2D, a function that performs a texture lookup in a given sampler.

The first input tex2D takes is the sampler, in our case _CameraDepthNormalsTexture, and the second is the coordinates at which to perform the lookup, which in our case is the screen position, i.scrPos. However, i.scrPos is a float4 and the input needs to be a float2, so we take only the xy coordinates.

The Shader

Here’s the code for the shader. Let’s call it “DepthNormals.shader”.

Shader "Custom/DepthNormals" {
Properties {
   _MainTex ("", 2D) = "white" {}
   _HighlightDirection ("Highlight Direction", Vector) = (1, 0,0)
}

SubShader {
Tags { "RenderType"="Opaque" }

Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

sampler2D _CameraDepthNormalsTexture;
float _StartingTime;
float _showNormalColors = 1; //when this is 1, show normal values as colors. when 0, show depth values as colors.

struct v2f {
   float4 pos : SV_POSITION;
   float4 scrPos: TEXCOORD1;
};

//Our Vertex Shader
v2f vert (appdata_base v){
   v2f o;
   o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
   o.scrPos=ComputeScreenPos(o.pos);
   o.scrPos.y = 1 - o.scrPos.y;
   return o;
}

sampler2D _MainTex;
float4 _HighlightDirection;

//Our Fragment Shader
half4 frag (v2f i) : COLOR{
   float3 normalValues;
   float depthValue;

   //extract depth value and normal values
   DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthValue, normalValues);

   if (_showNormalColors == 1){
      float4 normalColor = float4(normalValues, 1);
      return normalColor;
   } else {
      float4 depth = float4(depthValue, depthValue, depthValue, 1);
      return depth;
   }
}
ENDCG
}
}
FallBack "Diffuse"
}

Remember that the normal values are in view space, so when you move the camera, the normals, and thus the colors, change.

[gif: DepthNormalsCamera]

The script to attach to the camera

Let's call it "DepthNormals.cs" just to keep things consistent. Every time the user presses the E key, the script switches the shader between showing the depth values and the normal values.

using UnityEngine;
using System.Collections;

public class DepthNormals : MonoBehaviour {

public Material mat;
bool showNormalColors = true;

void Start () {
   // Tell the camera to generate the depth + view space normals texture
   camera.depthTextureMode = DepthTextureMode.DepthNormals;
}

// Update is called once per frame
void Update () {
   if (Input.GetKeyDown (KeyCode.E)){
      showNormalColors = !showNormalColors;
   }

   if (showNormalColors){
      mat.SetFloat("_showNormalColors", 1.0f);
   } else {
      mat.SetFloat("_showNormalColors", 0.0f);
   }
}

// Called by the camera to apply the image effect
void OnRenderImage (RenderTexture source, RenderTexture destination){
   //mat is the material containing your shader
   Graphics.Blit(source,destination,mat);
}
}
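To see the effect, attach this script to your main camera and assign a material that uses the DepthNormals shader above to the mat slot in the inspector.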

Conclusion

Now you know how to get the depth and normal values from the depth+normal texture. Please remember that this is not meant to be a definitive guide on how to work with shaders. It is simply a summary of my experience working with the depth texture and vertex/fragment shaders in Unity during the past few days. Hopefully you found some of the information useful for your own development projects.