Vignetting + Noise

Spent the last two days getting the game ready to demo at Indy Pop Con in Indianapolis this weekend.

This involved making builds and playing through them to make sure everything was working. Now that the game is so much bigger, this turned out to be quite time-consuming. I was able to rush through the entire build in about 30 minutes, knowing how to solve the puzzles, where the shortcuts are, etc. My guess is that a first-time playthrough by someone not familiar with the game would take somewhere around 2 to 3 hours.

I’ve also added vignetting and a little bit of noise to the player camera. I think this makes it look a little more dream-like:


Revisiting Edge-Detection

Yesterday, I decided to revisit edge-detection. I was reading through the devlog of Lucas Pope’s newest game, Return of the Obra Dinn, when I saw this post about the visual style of the game.

Even though the look of Obra Dinn is very different from the look I’m going for, his shader does require edge-detection. When I saw the image he shared of just the edge-detection effect, I was blown away. It was so clean, with pixel-perfect thin lines and no artifacts.

In Pope’s technique, he uses object position and face normals to give each poly face a random color, and then draws lines separating the different color areas. Funnily enough, this is very similar to the technique used by Thomas Eichhorn in his sketch shader, which was the initial inspiration for my shader effect.
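The core idea can be sketched outside of a shader. Here’s a hypothetical Python version of the coloring step (my own illustration, not Pope’s actual code): hash each face’s object position and normal into a stable pseudo-random color, so edges can later be drawn wherever neighboring pixels disagree.

```python
import hashlib

def face_color(object_pos, face_normal):
    """Map an (object position, face normal) pair to a stable RGB color.

    Hypothetical sketch of the idea: faces of the same object that point
    the same way get the same color, so edges can be drawn wherever the
    color changes between neighboring pixels.
    """
    # Quantize the inputs so tiny float noise doesn't change the hash.
    key = tuple(round(v, 3) for v in (*object_pos, *face_normal))
    digest = hashlib.md5(repr(key).encode()).digest()
    # Use the first three bytes as an RGB color in [0, 255].
    return digest[0], digest[1], digest[2]

# Two faces of the same object pointing the same way share a color...
a = face_color((1.0, 2.0, 3.0), (0.0, 1.0, 0.0))
b = face_color((1.0, 2.0, 3.0), (0.0, 1.0, 0.0))
# ...while a differently oriented face almost certainly gets another one.
c = face_color((1.0, 2.0, 3.0), (1.0, 0.0, 0.0))
```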

At the time, I didn’t have a lot of experience with shader programming (I still don’t, but I had less back then), and I couldn’t figure out how to use color areas for edge detection. Eventually, I just went with comparing normal and depth values of pixels directly to draw the edges. You can read about the details of what I did here.

Anyway, I decided to switch over to using color areas for edge-detection. It seemed to be much more accurate and to produce fewer artifacts (further down I’ll explain some of the problems with my previous edge-detection shader).

In my shader, I follow Eichhorn’s method: for each pixel, I set its color according to this formula:

color.r = normal.x;
color.g = normal.y;
color.b = depth;
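
The edge pass itself then just compares each pixel’s encoded color against its neighbors. Here’s a rough plain-Python sketch of that idea (the function names and threshold are my own, not the actual Unity fragment shader):

```python
def find_edges(colors, threshold=0.1):
    """Mark a pixel as an edge when its encoded color
    (normal.x, normal.y, depth) differs from the pixel to its
    right or below. Plain-Python sketch of the color-area idea."""
    h, w = len(colors), len(colors[0])

    def differs(a, b):
        return any(abs(x - y) > threshold for x, y in zip(a, b))

    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if x + 1 < w and differs(colors[y][x], colors[y][x + 1]):
                edges[y][x] = True
            if y + 1 < h and differs(colors[y][x], colors[y + 1][x]):
                edges[y][x] = True
    return edges

# Two flat regions: a wall facing +x on the left, one facing +y on the right.
left, right = (1.0, 0.0, 0.5), (0.0, 1.0, 0.5)
image = [[left, left, right, right] for _ in range(3)]
edges = find_edges(image)  # edges only along the column where the regions meet
```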

Here are two images from my first go at using normals and depth to color poly faces, and then drawing edges between the color areas:

Relativity_Game_Screenshot-2014-05-26_03-44-39 Relativity_Game_Screenshot-2014-05-26_03-44-46

As you can see, while the shader is able to draw lines between different color areas, it is missing a ton of edges! The problem is that there isn’t enough depth sensitivity, so areas with the same normal, regardless of distance, all look the same to the shader.

So, how to increase depth sensitivity? Well, one way is to lower the far clipping plane of the camera. Since depth values are spread out from 0 to 1 across the camera view distance, by lowering the far clipping plane, and therefore shortening the camera view distance, you can increase the variation in depth values.
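To see the trade-off in numbers, here’s a quick Python sketch under the simplifying assumption that depth is spread linearly from 0 to 1 over the view distance (real depth buffers are nonlinear, but the spreading effect is the same):

```python
def depth01(z, far):
    # Normalized depth, assuming a linear 0..1 spread from the camera
    # to the far plane (a simplification for illustration).
    return min(z / far, 1.0)

# Two surfaces 5 units apart, fairly close to the camera:
gap_800 = depth01(15, 800) - depth01(10, 800)  # ~0.00625 -- tiny difference
gap_70  = depth01(15, 70)  - depth01(10, 70)   # ~0.071 -- over 10x larger

# ...but anything past the shortened far plane is simply clipped:
clipped = depth01(100, 70)  # 1.0
```

A closer far plane stretches the same scene over the 0..1 range, so nearby surfaces get more distinct depth values, at the cost of losing everything beyond it.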

This is what it looks like with the camera far clipping plane set to 70:


You can see there is strong variation in color, and the shader is picking up a lot more edges. Unfortunately, this also causes a lot of clipping. Because of the size of my levels, I need the far clipping plane to be at least 800.

Basically, what I needed was more sensitivity at depth values closer to the camera, and less at depth values farther away. I’m a bit embarrassed to say I couldn’t think of an equation right away (my math has gotten a little rusty, having been out of school for some time), but Nick Udell suggested using a logarithm function (of course!).

I changed the color assignment to this:

color.r = normal.x;
color.g = normal.y;
color.b = log(depth);

and noticed improvements right away (just look at all those edges!):


What’s interesting is that you still can’t see the color variation with your eye, but for the shader, there’s enough difference to distinguish between pixels with the same normals but different depth values.
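The effect of the log is easy to check in a few lines of Python (illustrative numbers, not actual shader values): equal-sized depth steps produce much larger changes in log(depth) near the camera, since the slope of log z is 1/z.

```python
import math

# Two depth steps of the same raw size (0.01), one near the camera,
# one far away. Illustrative values only.
near_step = math.log(0.11) - math.log(0.10)  # ~0.095
far_step  = math.log(0.81) - math.log(0.80)  # ~0.012
# near_step is roughly 7-8x larger, so nearby surfaces that previously
# shared a depth value now get visibly different colors, while distant
# ones are compressed together.
```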

The new version of the shader is better than the one I had before, but still not perfect. It’s also still not quite the same technique Pope is using, as he also uses the world-space coordinates of the objects (I asked him about the approach, and he goes into some detail here).

I’m still learning shaders, so I haven’t quite figured out how to actually do this yet, but it appears it should happen in the vertex shader, not the fragment shader, which is where I had been focusing.

Below is a comparison of the new shader vs. the old one.

Here’s the old shader:
Relativity_Edge_Compare_001B

Here’s the new one:
Relativity_Edge_Compare_001A

Here’s a gif comparing them:


You can see that the older shader drew many more thick lines. This may not look that bad in the gif, but in the game, it was quite cluttered and distracting.

You can also see in this close-up that there were a ton of artifacts in the old version:
Relativity_Edge_Compare_001B1

I know this problem comes from using normals, not depth values. I’m not exactly sure of the cause, but I think it has something to do with floating-point precision. The artifacts tend to occur at places where primitives are supposed to align; I’m guessing there’s some tiny difference, and the shader detects it as a gap. During gameplay, these would flicker in and out all the time, which really bothered me.

In the new version, I’ve managed to improve on this:


Some of the artifacts are still there, but they’re much smaller and less noticeable.

There are also some problems with the new shader. For example, it is missing the edges on the stairs, as shown below:


I’m not too sure what’s happening. I know it has something to do with depth sensitivity, but I haven’t nailed it down yet.

Here’s a gif comparing a close up of the two versions:


Anyway, I’m going to leave the shader alone for now and work on other, more pressing aspects of development. Next time, I’m going to try incorporating object position into the shader, like Pope does, as this seems to be much more precise and effective.

Development Update – OnGUI Optimization

For the past few days, I have been struggling a lot with optimization problems in the game.

I’d be running the game from the Unity editor, and for the most part, the game would run at around 100 FPS. However, every once in a while, the framerate would drop down to 30 or 40, causing very noticeable lag in gameplay.

I opened up the Unity profiler and noticed these massive spikes in CPU usage:

performance spike

When I clicked on the spikes to see what exactly was causing them, most of the time was being spent in GC.Collect (garbage collection) under GUI.repaint.

This was really confusing to me, as I have very minimal UI in the game.

However, I started doing research, and it turns out that just having a void OnGUI() in your script, even if it isn’t drawing anything, causes allocations every frame. At some point, all of this garbage has to be collected.

I then remembered that earlier in development, when trying to debug different triggers and switches, I would use OnGUI to let me know when different states had changed. After I got everything working, I commented out what was inside OnGUI, but left the method itself there.

So in a bunch of scripts, I had something like this:

void OnGUI(){
  //draw something
}
And of course, each of these was generating garbage, which then had to be collected.

After deleting this in all the scripts I could find, performance increased significantly!

So all I did was delete a bunch of code that wasn’t doing anything anyway, and it fixed my performance problem.


Development Update – Notebook Sketches

I’ve been meaning for some time to make a post sharing some of my notebook sketches. I’m finally getting around to doing it now.

World Map
level_structure

This is an early version of the map of the game. I was trying to lay out how the different hub worlds would connect to one another. The current structure of the game is quite different, but you can get a sense of the complexity I have in mind.

You can also see that previously I had specific themes for different levels: Downtown, The Library, Museum, Industrial, etc. This isn’t really that relevant to the game, at least in its current state. Mostly, this was a way to help me think about how levels would evolve. Coming up with reasons for why specific structures existed gave me a starting point from which to design the levels.

Opening Level Design
relativity_sketch_001

An early draft of the layout of the first level in the game. You can see that on the right I was playing around with different arrangements for how the space would fit together in 3D.

Advanced Hub World Design
sketch_003

This was a sketch intended for a Hub level that would appear in the later stage of the game. I’m not sure if I will still put it in the game. Different elements from this design have found their way into early levels. Anyway, it gives you a sense of the design process.

Water Logic
sketch_004

This is a flow chart I drew up to help me work out the logic of water in the game. I still haven’t finished implementing that mechanic yet. I got it working on a basic level several months ago, and then decided I needed to focus my efforts on more immediate problems, such as the early levels.


Development Update – Edge-Detection + Render Textures

I finally got my edge-detection shader to work on render textures! This took a really long time to figure out, so I’m really happy to have solved this issue.

Basically, for a long time, I didn’t know how to get shaders applied to render textures. Since the portals in the game use render textures to create the illusion of a world on the other side, this meant an inconsistency in visual style when looking through a portal, like this:



You can see that nothing inside the portal has edge-detection applied. This didn’t affect gameplay, but I knew it would definitely need to be fixed for the final release of the game, and I had no idea how to address the problem.

A few weeks ago, I finally decided to roll up my sleeves and really figure out how render textures work. Up until then, the portal system was just hacked together, and I only knew enough to get things barely working.

I knew I would need the shader to get applied to a camera, but for a long time, I just couldn’t find where that camera was!

Eventually, I discovered this line of code:

go.hideFlags = HideFlags.HideAndDontSave;

“go” is the game object with the camera attached, and this line tells the engine to hide it from the editor hierarchy (so that it isn’t seen) and not to save it after runtime.

I changed it to this:

go.hideFlags = HideFlags.DontSave;

Now I could see the camera created at runtime inside the editor hierarchy.

From here, I just added the edge-detection shader to the runtime-generated camera.

This is what it looks like now:


This still isn’t perfect. Shadows still aren’t rendered on render textures, which makes the lighting look inconsistent.

However, I’m really happy to have been able to cross a big item off of the bug list.

Development Update – Giant list of scenes

I was having trouble finding a specific scene file I had worked on about a week ago, and so I started talking to some people on twitter about different naming conventions and systems.

I mentioned how many scenes I thought were in the project, and some people seemed pretty shocked at how many there were. I hadn’t expanded the scene folder in quite some time, and was a bit curious myself, so I decided to take a snapshot of the entire folder.

As you can see, it’s a bit of a mess…


Development Update – New Door and Progress

Trying out a new design for the door. Glass panels allow the player to see what’s on the other side. This actually solves a ton of game design problems.



Previously, if the switch was located on the other side of the door, players had no way of knowing this. From where they were standing, it just looked like a door with no way to open it. Now, players can see that there is a switch to open the door; it’s just located somewhere they can’t get to at the moment. And later, when they do get there, they make the connection that this was the previously locked door.

Also, the other day, I played an early build of Relativity in preparation for a talk I was giving at an art center. I was surprised to see how primitive the game looked back then, and decided to do a quick screenshot comparison. Amazing the difference a year can make!