Orthographic Edge Detection

Started working on orthographic projection edge detection.

Here were the first few passes. The lines are too thick:

[images 1–3]

The glitchy effect is due to my other edge-detection shader not working well with the orthographic camera, but it still looks really cool!

[images 4–5]

You merely adopted the madness. I was born in it, molded by it:

[images 6–9]

Playing around with adding background color:

[image 10]

Getting close:

[images 11–13]

The geometry above is probably too bland and repetitive, but the lines are getting there.

Stepwell level, still a bit glitchy:

[image 14]

Future poster?

[image 15]

Development Update – TIGForums’ Devlogs, Edge Detection Improvement

4th Best Devlog of 2014

The RELATIVITY DevLog came in 4th place and got a mention in this year’s Best Devlog Thread.

The top 3 devlogs are all super fantastic. Congratulations to Lucas Pope’s Return of the Obra Dinn, Eigenbom’s Moonman, and JLac’s Project Rain World!

I’m incredibly honored to have even been nominated. Really awesome to see that people are enjoying the posts here. Definitely has inspired me to continue posting updates here. Thank you all for your support!

Edge Detection Improvement

A few weeks ago, my friend Devon, who is part of the Young Horses, took a look at my edge-detection shader code and made some improvements.

Basically, what he did was increase the number of sample offsets, and also tighten the threshold at which a pixel counts as an edge.

You can see that there are fewer artifacts (those random black dots) after his changes:

Box Redesign

I decided to take some time to redesign the boxes, which, up until now, just had placeholder art.

I changed the arrows to black for when the box is in the active state, and also added an outline to the arrow for when the box is in the inactive state, so that it’s much easier to see (especially on the yellow box).

I think the outline and the darker arrows make the symbols feel much more intentional, and do a better job of attracting the player’s attention.

Active Box:

Inactive Box:

Level Design Tweaks

Finally, I’ve also been making minor tweaks to old levels. These will be rebuilt eventually, but I wanted to test out a few ideas for positioning. Here are a few shots of the level:

Revisiting Edge-Detection

Yesterday, I decided to revisit the topic of edge-detection. I was reading through the devlog of Lucas Pope’s newest game, Return of the Obra Dinn, when I saw this post about the visual style of the game.

Even though the look of Obra Dinn is very different from the look I’m going for, his shader does rely on edge-detection. When I saw the image he shared of just the edge-detection effect, I was blown away. It was so clean, with pixel-perfect thin lines and no artifacts.

In Pope’s technique, he uses object position and face normals to give each poly face a random color, and then draws lines separating the different color areas. Funny enough, this is very similar to the technique used by Thomas Eichhorn in his sketch shader, which was actually the initial inspiration for my shader effect.

At the time, I didn’t have a lot of experience with shader programming (I still don’t, but I had less back then), and I couldn’t figure out how to use color areas for edge detection. Eventually, I just went with comparing normal and depth values of pixels directly to draw the edges. You can read about the details of what I did here.

Anyway, I decided I would switch over to using color areas for edge-detection. It seemed to be much more accurate, with fewer artifacts (further down, I’ll explain some of the problems with my previous edge-detection shader).

In my shader, I follow Eichhorn’s method: for each pixel, I set its color according to this formula:

color.r = normal.x;
color.g = normal.y;
color.b = depth;

Here are two images from my first go at using normals and depth to color poly faces, and then drawing edges between the color areas:

Relativity_Game_Screenshot-2014-05-26_03-44-39 Relativity_Game_Screenshot-2014-05-26_03-44-46

As you can see, while the shader is able to draw lines between different color areas, it is missing a ton of edges! The problem here is that there isn’t enough depth sensitivity, so areas with the same normal, regardless of distance, all appear the same to the shader.

So, how do you increase depth sensitivity? Well, one way is to lower the far clipping plane of the camera. Since depth values are spread from 0 to 1 across the camera’s view distance, lowering the far clipping plane shortens that distance and therefore increases the variation in depth values.

This is what it looks like with the camera far clipping plane set to 70:

Relativity_Game_Screenshot-2014-05-26_05-04-35

You can see there is strong variation in color, and the shader is picking up a lot more edges. Unfortunately, this is also causing a lot of clipping. Because of the size of my levels, I need the clipping plane to be at least 800.

Basically, what I needed was more sensitivity at depth values closer to the camera, and less sensitivity at depth values farther away. I’m a bit embarrassed to say I couldn’t think of an equation right away (my math has gotten a little rusty, having been out of school for some time), but Nick Udell suggested using a logarithm function (of course!).

I changed the color assignment to this:

color.r = normal.x;
color.g = normal.y;
color.b = log(depth);

and noticed improvements right away (just look at all those edges!):

Relativity_Game_Screenshot-2014-05-26_05-35-50

What’s interesting is that you still can’t see the color variation with your eye, but for the shader, there’s enough difference to distinguish between pixels with the same normals but different depth values.

The new version of the shader is better than the one I had before, but still not perfect. It’s also still not quite the same technique Pope is using, as he uses the world-space coordinates of the objects as well (I asked him about the approach, and he goes into some detail here).

I’m still learning shaders, so I haven’t quite figured out how to actually do this yet, but it appears it should happen in the vertex shader, not the fragment shader, which is where I had been focusing.

Below is a comparison of the new shader vs. the old one.

Here’s the old shader:

Relativity_Edge_Compare_001B

Here’s the new one:

Relativity_Edge_Compare_001A

Here’s a gif comparing them:

Relativity_Edge_Compare_GIF

You can see that the older shader drew many more thick lines. This may not look that bad in the gif, but in the game, it’s quite cluttered and distracting.

You can also see in this close-up that there were a ton of artifacts in the old version:

Relativity_Edge_Compare_001B1

I know these artifacts come from using normals, not depth values. I’m not exactly sure of the root cause, but I think it has something to do with floating-point precision. They tend to occur where primitives are supposed to align; I’m guessing there’s some tiny gap, and the shader is detecting it as an edge. During gameplay, these would flicker in and out all the time, which really bothered me.

In the new version, I’ve managed to improve on this:

Relativity_Edge_Compare_001A1

Some of the artifacts are still there, but they’re much smaller and less noticeable.

There are also some problems with the new shader. For example, it is missing the edges on the stairs, as shown below:

Relativity_Edge_Compare_001A2

I’m not too sure what’s happening. I know it has something to do with depth sensitivity, but haven’t nailed it down yet.

Here’s a gif comparing a close up of the two versions:

Relativity_Edge_Compare_001

Anyway, I’m going to leave the shader alone for now to work on other more pressing aspects of development. I’m going to try incorporating object position into the shader next time, like what Pope is doing, as this seems to be much more precise and effective.

Development Update – Edge-Detection + Render Textures

I finally got my edge-detection shader to work on render textures! This took a really long time to figure out, so I’m really happy to have solved this issue.

Basically, for a long time, I didn’t know how to get shaders applied to render textures. Since the portals in the game use render textures to create the illusion of a world on the other side, this meant an inconsistency in visual style when looking through a portal, like this:

Relativity_Game_Screenshot-2014-05-22_04-30-58

You can see that nothing that appears inside the portal has edge-detection applied. This didn’t affect gameplay or anything, but I knew it would definitely need to be fixed for the final release of the game, and I had no idea how to address this problem.

A few weeks ago, I finally decided to roll up my sleeves and really figure out how render textures work. Up until then, the portal system was just hacked together, and I only knew enough to get things barely working.

I knew I would need the shader to get applied to a camera, but for a long time, I just couldn’t find where that camera was!

Eventually, I discovered this line of code:

go.hideFlags = HideFlags.HideAndDontSave;

“go” is the game object with the camera attached, and what this line did was tell the engine to hide it from the editor hierarchy (so that it wasn’t seen), and to not save it after runtime.

I changed it to this:

go.hideFlags = HideFlags.DontSave;

So now, I could see the camera created at runtime inside the editor hierarchy.

From here, I just added the edge-detection shader to the runtime-generated camera.

This is what it looks like now:

Relativity_Game_Screenshot-2014-05-22_04-25-28

This still isn’t perfect. There’s still the problem of shadows not being rendered on render textures, which makes the lighting look inconsistent.

However, I’m really happy to have been able to cross a big item off of the bug list.

Development Update – Edge Detection

One of the things I’ve decided to focus on these past few days is to refine the look of the game, and try to develop a unique visual identity. Up until now, pretty much every visual decision has been made based on functional reasons.

Since architecture has been, and still is, a key theme of the game, I thought that would be a good place to start looking for inspiration. Eventually, I came across this post by Thomas Eichhorn about a shader inspired by old architectural drawings. Eichhorn originally wrote it for vvvv, but looking at the image of the final result, I thought this would be a good place for me to start.

I took his image of the final result of the shader, and added a layer of blue highlights to the upward facing surfaces:

relativity_look

I actually quite like the look. Immediately, it provides a sense of atmosphere, a warm, nostalgic feeling that takes me back to reading illustrated adventure books as a kid. I thought it would be a great style to offset the clinical/sterile nature of the game at the moment. Also, it didn’t look like any other 3D game out there, so it would help in establishing a visual identity.

But first, I had to roll up my sleeves and dive into shader programming.

Edge-Detection

I started off by looking at the edge-detection image effect script that comes packaged with Unity Pro. After a day of being totally confused, with a failed attempt at learning node-based shader programming with Shader Forge, I was eventually able to understand what the script was doing.

There are 5 different modes in Unity’s edge-detection script. For my purposes, the closest one to what I was looking for was “RobertsCrossDepthNormals”, which basically selects one pixel, and then checks whether the surrounding pixels have similar normals or depth values. If not, an edge is drawn. However, there were a few problems; namely, it wasn’t able to pick up several important edges.

Here’s a shot of a set of stairs, which is pretty common throughout Relativity:

Edge_Detection-2014-03-31_18-36-28

With Unity’s edge detection applied, this is what it looks like:

Edge_Detection-2014-03-31_18-35-50

So you can see the problem here: the edges of the steps on the higher section of the staircase are getting lost. This is because the algorithm uses both the normals and the scene depth to figure out the lines, and on the higher sections, since you’re only seeing the front face of the steps, and not the top face, the normals are all the same.

You can increase the depth sensitivity, which does pick up the edges of the steps higher up, but you also end up with these black artifacts in distant areas, where there’s a large change in depth value. You can see the same issue happening on the side of the cube in the middle of the frame:

Edge_Detection-2014-03-31_19-49-50

Another problematic area was when I had staircases on the side:

Edge_Detection-2014-03-31_18-37-27

From this angle, Unity’s edge-detection works really well, since you can see very clearly both the front face as well as the top face of the steps:

Edge_Detection-2014-03-31_18-37-15

However, from another angle, the edges disappear completely:

Edge_Detection-2014-03-31_18-37-40

I decided, therefore, to create my own edge-detection algorithm, using what Unity has done as a starting point. The main difference is that instead of a single check comparing whether both the normals and the depth values are similar, I break it into two steps.

First, I do a check comparing only the normal values of surrounding pixels. The selection of pixels is actually from the original Unity script. Basically, if the pixel we are examining at the moment is “A”, then compare the normal value of pixel “B” vs “E”, and then “C” vs “D”.

pixel_layout

The reason I start with normals is that, in my case, there are no false positives. In other words, when you’re only using normals for edge-detection, you will only miss edges; you won’t pick up wrong edges. Of course, this wouldn’t work if you had curved surfaces, but since all the angles in Relativity are 90-degree angles, and everything is made up of boxes, this was no problem.

So I draw a first set of edges that pass the normal test.

For the second step, I take everything else and run it through a depth test. This time, I add up the depth values of pixels “B”, “C”, “D”, and “E”, then divide by 4 to get the average depth value of the surrounding pixels. I then subtract this from the depth value of pixel “A”, and if the difference is greater than 0.001, it’s determined to be an edge.

In the following images, the blue lines are edges drawn in the first round by comparing normals, and the red lines are edges drawn in the second round by comparing depth values.

Edge_Detection-2014-03-31_18-59-35

Edge_Detection-2014-03-31_18-59-22

Edge_Detection-2014-03-31_19-00-04

You can see that where the normal test misses edges, the depth test is able to catch them. And the threshold at which the depth test is set lets me pick up the edges without getting any of the weird artifacts from the default Unity shader.

Here’s what it looks like with all black lines:

Edge_Detection-2014-03-31_18-39-11

Of course, there are still some issues, such as the normal lines being thicker than the depth lines, and I still need to fade out the lines in the distance to help with player depth perception. But overall, I think it’s a pretty good start, especially considering that yesterday morning I had no idea how to even approach this.

DevLog Update – Ambient Occlusion + Edge Detection

Yesterday, I experimented with a couple of other grid textures. In approaching this, I wanted there to be a standard unit in the size of the grid (i.e. the size of the grid would be a multiple of the length of the cube).

This is because the grid exists primarily to help the player gauge distance for puzzles, and if the lines weren’t following a standard unit, it wouldn’t be very helpful.

I didn’t want to get too carried away at this point, so just tried varying the size of grid boxes. This already made a huge difference to the look. I also tried narrowing the lines and lightening the color. I figured that the grid doesn’t need to be very dominating, especially if I already have ambient occlusion defining the volume. It just needs to be visible enough that the player can see it when they need to.

Relativity_Game_Screenshot-2014-01-08_00-24-25

I also took Juan Raigada’s suggestion and made the ambient occlusion (AO) radius larger. At first, I set the radius to 5, but it made everything too dark, as I had the contrast set at 1.8. So I played around with the values a bit, and eventually found a sweet spot with the AO radius at 3 and contrast at 1.

I also raised the ambient light a bit, and increased the intensity of the directional light. It made a huge difference! So much so, in fact, that the AO alone seems to define the geometry quite well without any edge-detection.

Anyway, I took a couple more comparison screenshots of edge-detection on vs. off. To be fair, I’m still using the default Unity edge-detection shader, which, as previously discussed, is not sufficient for my purposes, so these shots are not meant for final judgment calls.

In any case, they look pretty cool, and I definitely feel like I’m getting closer to finding the appropriate art style for Relativity.

Relativity_Game_Screenshot-2014-01-08_00-30-32

Relativity_Game_Screenshot-2014-01-08_00-30-43

Relativity_Game_Screenshot-2014-01-08_00-31-02

Relativity_Game_Screenshot-2014-01-08_00-30-51

Relativity_Game_Screenshot-2014-01-08_00-31-40

Relativity_Game_Screenshot-2014-01-08_00-31-33

Relativity_Game_Screenshot-2014-01-08_00-34-24

Relativity_Game_Screenshot-2014-01-08_00-34-29

Relativity_Game_Screenshot-2014-01-08_00-35-28

Relativity_Game_Screenshot-2014-01-08_00-35-43