Best Curve Combo, Illuminated States, GDC

Best Curve Combination

After playing around with the animation curves controlling position and rotation during the gravity switching, I’ve decided that this is the combination that works best:

Rotation – Custom when moving, S when stationary
Position – Convex

I’m going to put this as the default settings in the game.

Of course, I will have the option in the settings for players to choose different animation curves. I think there will even be an option that skips the transition completely and just places the player in the new orientation (it will still take the same amount of time, so as not to mess up the mechanic). This is mostly for people who get motion sickness from the game. I’d like the game to be as accessible as possible, so I’d hate for people to not be able to play it because of that.

Also, I would love to make a VR version of the game. And I think for that format, it would require a lot of tweaking as well, so it’s good to have the options in place.

Light Bleedthrough

Was noticing an issue with light bleeding through the wall:

It turns out it’s because the light isn’t set to cast shadows. I could set the light to cast shadows, but I think that’s more performance heavy than is needed for this. I ended up just reducing the range of the light.

Illuminated State of Boxes

Working on the designs for the illuminated state of the boxes (when they’re placed on the correct switches)

This was the original design. I thought by turning the outline around the shapes to white, it would make for a stronger effect:

I posted these images on twitter, and several people complained that the white arrows on the yellow box were pretty hard to see (yes, it is pointing in the wrong direction). It did seem pretty inconsistent, especially given that the arrows on the other color boxes were quite clear.

I decided to add the outlines back. Here’s what the illuminated boxes look like now:

Head Bobbing

Finally got around to tweaking the head bobbing settings.

When you land, there’s a stronger head bob effect. I’ve made it so that how much head bobbing occurs is dependent on the speed of the fall right before you land. So the faster you were falling when you landed, the greater the head bobbing. The relationship is quadratic, so that the really big head bob effect doesn’t happen unless you’re falling near terminal speed.
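Sketched out, the mapping might look something like this (a minimal sketch; terminalSpeed, maxBob, and the method name are my assumptions, not the actual implementation):

```csharp
// Sketch: head bob strength scales quadratically with fall speed at landing.
// terminalSpeed and maxBob are assumed tuning values.
float LandingBobStrength(float fallSpeed, float terminalSpeed, float maxBob)
{
    float t = Mathf.Clamp01(fallSpeed / terminalSpeed); // 0..1 fraction of terminal speed
    return maxBob * t * t; // quadratic: stays small until near terminal speed
}
```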

Here’s a video showing the effect:

Water Redirection

Got water redirection working in the new build! Excited to show it off at GDC, have people try it, and get feedback on it. It’s pretty broken, but it works well enough to convey the idea.

Finally, I’m going to be heading out to San Francisco later today for GDC. Very excited to catch up with friends and also meet new people.

Last year was my first GDC, and my experience then had a tremendous impact on the development of the game. A lot of the feedback I got at the time was that the art style needed to improve. I was actually a bit frustrated by this, because I thought I had really great mechanics and that was all that mattered. I have come to realize since that I was wrong, and that art and mechanics are closely linked, and both matter a lot for the final game.

Anyway, last GDC inspired me to dive into shaders and constantly improve the art style, and I think all the work has paid off.

Here’s a comparison of what the game looks like back then and what it looks like now:

Of course, there’s still plenty of room for development, so I’m excited to see how the art style evolves moving forward!

Separating Position and Rotation Curves

Quote from: Juan Raigada on February 26, 2015, 07:29:18 AM

Question: are you using the curve both for rotation and movement, or are you just stopping the movement while rotating? (I’m trying to make sense of the stop and go effect you mention).


Juan’s comment in response to my last update made me realize that rotation and position are indeed separate. I have been working on this for so long that the action of switching to a wall just seemed like one action.

This is what the code looks like:

float y = rotationCurve.Evaluate(t); // sample the curve at normalized time t

transform.position = Vector3.Lerp(orgPos, dstPos, y); // position uses the same value
transform.rotation = Quaternion.Lerp(orgRot, dstRot, y); // rotation uses the same value

As you see, both the position and rotation changes are reading from the same animation curve.

I decided to try decoupling that and having separate animation curves for each.
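Decoupled, the update might look something like this (a sketch; I’m assuming a second AnimationCurve field alongside the existing one):

```csharp
// Sketch: each channel samples its own curve at the same normalized time t.
float p = positionCurve.Evaluate(t); // assumed new field for position
float r = rotationCurve.Evaluate(t);

transform.position = Vector3.Lerp(orgPos, dstPos, p);
transform.rotation = Quaternion.Lerp(orgRot, dstRot, r);
```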

Here’s the comparison test video:

Again, it’s very difficult for me to tell, because of how close I am to the project, but I think separate animation curves do help a little.

I made the position curve end slightly earlier than the rotation curve. The thinking here is that the player is already moving by the time the rotation finishes.

Anyway, I’ll need to start playtesting to really figure it out.

Rotation Curve Comparison Test

I made a video comparing the different curves I’m testing for the rotation movement:

I think S curve and Custom curve feel the best.

The problem with S curve is that because the curve plateaus in the end, you get a drop in speed. So if you’re holding down the forward button and pushing the joystick forward while rotating, you get this stop and go effect. Custom curve doesn’t have this problem, and instead goes straight into movement. However, S curve feels much better when stationary.

As such, my solution is to use a combination of S and Custom curve. When you’re stationary and you rotate, we use the S curve. If you’re moving and you rotate, then we use the custom curve.
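In code, the selection could be as simple as this sketch (isMoving and the curve field names are my assumptions):

```csharp
// Sketch: choose the rotation curve based on the player's movement state.
AnimationCurve curve = isMoving ? customCurve : sCurve;
float y = curve.Evaluate(t);
```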

Perhaps a more precise solution would be to see if the forward button is held down to decide which curve, but I think this works pretty well for now.

Console Design Improvement

Improved the design of the console. Darkened the window, changed the typography to a more typical “computer font”, and increased the font size.

Also added a hack to make it scroll (basically set the y value of the rect to 30000). This does cause a problem later if you type in enough commands (it’ll stop scrolling), but it’s fine for now.
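For reference, the hack amounts to something like this (a sketch; the post only says the rect’s y value is set to 30000, so the field name and the GUI.BeginScrollView usage are my assumptions):

```csharp
// Sketch: force the scroll position far down so the newest lines stay visible.
// scrollPosition is an assumed Vector2 field passed to GUI.BeginScrollView.
scrollPosition = new Vector2(scrollPosition.x, 30000f);
```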

Tweaking Rotation Movement In-Game

One issue I’ve been dealing with on and off is trying to get the feel of rotating to a wall right.

Problem is that I’ve been working on the game for so long and looking at it so closely that I honestly don’t know what feels good and what doesn’t anymore. I’ve gotten pretty used to the placeholder rotation movement that I implemented a long time ago, but that doesn’t mean that that’s the right feel.

I’m going to start testing it soon, and at GDC next week, I’d like to be able to show it to people and get feedback.

To this end, I’ve set up the console to allow for in-game tweaks of the rotation.


I switched to using an animation curve for the rotation, instead of using slerp (which as it turns out, I was actually using incorrectly). I could have written a smoothing function, but I decided to go with a curve since that allows me more control, and I don’t have to worry about the math.

For testing purposes, I have made five curves:

Concave, convex, linear, and S are all standard animation curve shapes that come with Unity. I include them primarily as reference points. Custom is one I made that starts off concave and then ends linear. It’s the one I feel works best, but it’s always important to have comparisons.

S curve feels pretty good as well. The problem I find is that because the curve plateaus out at the end, when the player comes out of the rotation, their speed drops to 0, which creates this stop and go effect. With custom, concave, and linear, it goes straight into forward movement from rotation.

I also take into account initial speed (the faster you’re moving when you rotate, the faster the rotation is), as well as a time divisor (just a way to slow down or speed up the rotation time overall). Both the weight given to the initial speed and the time divisor can be changed as well.
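Put together, advancing the rotation might look like this sketch (speedFactor, timeDivisor, and the other names are my assumptions about the tuning variables described above):

```csharp
// Sketch: advance the rotation's normalized time, scaled by initial speed
// and divided by a global time divisor.
float speedScale = 1f + speedFactor * initialSpeed; // faster entry => faster rotation
t += (Time.deltaTime * speedScale) / timeDivisor;
float y = rotationCurve.Evaluate(Mathf.Clamp01(t));
```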

Debug Console or: How Did No One Tell Me About Implementing This Incredibly Useful Tool Two Years Ago

I spent the past few days setting up a debug console in the game. I actually started working on this back at the beginning of January, but didn’t get past implementing the basic code. Also, that was in the old version of the game, and now I’ve set it up so that it’s in the version I’m making in Unity 5, in which I’m going through all the basic mechanics and refining them.

Before I get to the details, I’d like to talk about the inspiration that finally got me to take the time and set up this debug console. There are three specific moments:


1) I was in LA during IndieCade, hanging out at Glitch City. Brendon Chung was showing me his game Quadrilateral Cowboy. At one point, he needed to make some changes or fixes, so he pulled up the console, typed in some commands, and right away, those changes happened. Having not played a lot of PC FPS games when I was younger, I had never seen this before. I was like “whoa, what was that!?”. Brendon explained that this is actually a feature of the Doom 3 Engine.

2) I was having a conversation on twitter about playtesting with my friend Frank, who used to do QA at Harmonix. Frank explained that Harmonix took playtesting very seriously, and for each session, they would have an engineer as well as a designer on site. They would ask the player to try something, and then the engineer would change the values right away, and ask the player to try again until they got the feel right.

This was pretty mindblowing to me. The idea of making real-time changes during a playtest session had never occurred to me. I would implement something, put out a build, have people play it, then take their feedback afterwards and try to address it with new changes. Then I’d output another build, schedule another playtest session, and see if this was fixed.

3) I did a group critique session last month with a bunch of Chicago game devs. I was the first to present, and during my demo, I ran into this bug that got me stuck in a wall. There was no way to fix this without restarting the game, and replaying through all the previous areas again, so I ended up describing my problem on paper. It worked, but it really wasn’t ideal.

Then Sean Hogan demoed his game Even the Ocean, and as we were giving him suggestions, he was able to pull up an in-game editor and start tweaking values, moving objects around, and get immediate feedback right away. It was pretty incredible.

Because I’m working on game feel aspects now, there is a lot of tweaking. Having the ability to make real-time changes to variables is invaluable.

Old Method

I should clarify that even in the past, I was displaying values and variables when debugging. In the above picture, you can see that I was printing various values in the top left corner of the screen.

However, most of these were one liners, like

GUI.Label(new Rect(10, 10, 100, 20), "Gravity Normal: " + gravityNormal.ToString());

When I no longer needed it, I would comment it out.

Invariably, this led to a bunch of commented out lines at the bottom of the script printing out different variables. And at some point, I would just delete it because it was making the code look all cluttered.

I would also use hot keys, such as hitting ‘p’ to increase movement speed. But these do get pretty hard to remember when you have a bunch of them, and there’s also the risk of players hitting a key accidentally and screwing themselves up.


I got the code for the console from here:

It was pretty straightforward to set up.

Only thing was I had to comment out this line:

GUI.TextField(bounds, "", 0);

in the script ConsoleNamedKeyBugFix.cs

because it was causing this really small black square to appear in the upper left corner when the console was closed:

The image above is magnified and cropped; the square was actually quite small, but it did bother me tremendously.

Anyway, the way it’s set up is that now, I have a series of commands registered:

repo.RegisterCommand("help", ShowHelpText);
repo.RegisterCommand("list_commands", ShowCommandList);
repo.RegisterCommand("show_carry_method_text", ShowCarryMethodText);
repo.RegisterCommand("show_player_states", ShowPlayerStates);
repo.RegisterCommand("set_rotate_speed_divisor", SetRotateSpeedDivisor);

A command like show_player_states turns on a bool in the script controlling player movement, which then displays the values on the screen.

I also have commands that allow me to change values, like set_rotate_speed_divisor.

I have a variable which I divide time with to speed up and slow down the rotation speed, and this allows me to change it in the build of the game.
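A handler for that kind of command might look like this (a sketch; the console library’s handler signature and the field name are my assumptions based on the RegisterCommand calls above):

```csharp
// Sketch: parse the command's argument and update the divisor in the build.
void SetRotateSpeedDivisor(string[] args)
{
    float value;
    if (args.Length > 0 && float.TryParse(args[0], out value))
    {
        rotateSpeedDivisor = value; // assumed field read by the rotation code
    }
}
```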

It doesn’t save it, but I am at least able to note it down, which helps a lot.

Here’s a gif showing the console in action:

Object Carry System Overhaul (Yes, Again) – Spring-Raycast Raycast System

I continued to think about the object carry system, and decided to overhaul it one more time. I’ve actually lost count of how many times I’ve done this, but I think this time I’ve finally gotten it right.

The last system I was using was the Spring Raycast System, which uses the spring method when the player was on the ground, and used the raycast system when the player was in free fall. More about that in this post.

However, when using the spring system, there was still the problem of the box you’re carrying penetrating other objects. In the last post, I wrote about how collision behaves very differently depending on whether the rigidbody is kinematic or not, so I had to have a system which made all non-moving boxes kinematic so that the box you’re carrying doesn’t get pushed into them.

But even with the rigidbody set to kinematic, it wasn’t always 100% reliable. It’s incredibly difficult to rely on this, because even if it happens 1 out of 1000 times, that’s still a problem. And because it’s a bug that happens only occasionally and is difficult to replicate, it’s pretty hard to debug.

I decided to be safe and just create a foolproof system.

Spring-Raycast Raycast System

I’m calling it the Spring-Raycast Raycast System, because now the spring system actually incorporates raycasts as well. Like the Spring Raycast system, we use Spring-Raycast while the player is grounded, and Raycast when the player is in free fall.

How does the Spring-Raycast system work?

We still use the spring to control the movement of the box, and when the box isn’t in contact with anything, it just sits at the end of the spring.

However, on the box, we’re also sending out a series of raycasts from each face. On each face, we shoot out a raycast from the center, and if that doesn’t hit anything, we shoot out raycasts from the corners.

The raycasts are very short: 0.6 in length. Given that the box is 1x1x1, this means each ray protrudes out of the box by only 0.1.

As soon as the box comes close enough to an object for the raycasts to detect it, we position the box based on the distance to the hit. This allows the box to be perfectly aligned with the object. There is a slight magnetic effect at close enough range, which I think is actually a bonus.
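A per-face check could look roughly like this (a sketch; the 0.6 ray length and 1x1x1 box size are from above, the method and field names are my assumptions, and the corner rays are omitted for brevity):

```csharp
// Sketch: center ray for one face of the carried 1x1x1 box.
// The ray is 0.6 long, so it protrudes 0.1 past the face.
bool SnapFace(Vector3 faceNormal)
{
    RaycastHit hit;
    if (Physics.Raycast(transform.position, faceNormal, out hit, 0.6f))
    {
        // Place the box center 0.5 back from the hit, so the face sits flush.
        transform.position = hit.point - faceNormal * 0.5f;
        return true;
    }
    return false; // corner rays (not shown) would be tried next
}
```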

Anyway, this basically takes the best part of the raycast system, and doesn’t have the problems I mentioned in this post, such as the carried box being positioned in small spaces and getting drawn on top of meshes.

We’ve taken the overall movement of using actual collision, and added the precision of raycasts. Best of both worlds!

This is what it looked like before with just the Spring Raycast system. The yellow box is kinematic, while the green box is not:

With the new Spring-Raycast Raycast system, it doesn’t matter whether the box is kinematic or not:

Box Raycast System

There was also the problem of boxes occasionally falling through things. A very mild case is something like this:

I thought implementing grid snapping would fix the problem.

In the above case, this would probably work, and the grid snapping would cause the box to align to the wall. However, grid snapping doesn’t fix the problem if the box penetrates deeply enough, which is really the root of the problem.

If the box penetrates pretty far into what’s below it, grid snapping doesn’t fix it, like here:

Sometimes, even when grid snapping did correct the position of the box, there was one frame in which the box penetrated before being snapped back. Even though it was barely noticeable, once you knew it was there, you’d see it all the time, and it was visually quite jarring.

There was also the issue where occasionally the box wouldn’t detect that what’s below has moved, and would end up floating in mid-air.

Since raycasts worked really well for carrying the box, I decided to implement it in the boxes as well.

Whenever the box is in free fall, I send raycasts downward. Like the system for object carrying, it sends one ray downward from the center; if that doesn’t hit anything, then we send rays downward from the corners.

If any of those rays hit something, we align the box based on distance to the hit point.

And if the box is stationary, meaning that there is something beneath the box, it’ll also conduct a series of tests with the rays. But as soon as any of the rays detects something, it’ll break out of the test. We only need one positive hit to confirm that the box is still supported.

If none of the rays detect anything, that means nothing is below the box, and so it needs to fall.
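The steps above could be sketched like this (the field names, the ray length, and the rayOrigins list are my assumptions):

```csharp
// Sketch: ground check for a box. One positive hit is enough, so we
// return as soon as any ray detects something.
bool SomethingBelow()
{
    foreach (Vector3 origin in rayOrigins) // center first, then corners
    {
        if (Physics.Raycast(origin, gravityDir, 0.6f))
        {
            return true; // break out on the first detection
        }
    }
    return false; // nothing below: the box needs to fall
}
```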

Here’s a gif showing the new boxes in action:

Development Update – Trigger Problem

I was dealing with a very weird bug yesterday that was very difficult to pin down. At its basis, the bug has to do with the way Unity handles collisions. It was incredibly frustrating trying to figure out what the exact problem was and I was nearly going crazy.

I think I’ve finally solved the issue… Here’s the write up explaining what the problem was and how I solved it.

Trigger Problem

Here’s the problem: I place the first blue box on the switch. On the switch, there is a trigger which detects the blue box and turns it on (signaled by the light). When I take the second blue box and bump it against the first, somehow OnTriggerExit() is called, even though the first blue box hasn’t moved.

I should clarify it’s not the size of the trigger. Obviously if it was a big trigger, and the second box entered and exited, OnTriggerExit() would be called.

This is what the trigger looks like:

As you can see, it’s 0.25 x 0.25 in the center of the square, so the first blue box covers it completely.

Collision of Kinematic Rigidbody vs Non-Kinematic Rigidbody

As I started to dig into the issue, I realized again that Unity handles collision with kinematic and non-kinematic rigidbodies very differently. I say again because I actually noticed this back when I first started working on RELATIVITY (hence why all non-moving boxes are set to kinematic).

Here’s a gif demonstrating the difference. The green box is a non-kinematic rigidbody that has constraints on position and rotation (so that’s not moving), while the yellow box is a kinematic rigidbody. Look at how much the blue box penetrates the green box:

Box Set Up

With the above info in mind, I’m going to take a small detour and explain how the boxes are set up (this all comes together, I promise).

The boxes in the game are 1x1x1 cubes, with 6 plane detectors (known as “Bottom Collision Detectors”) on the surface of each face.

Each box can belong to one of six gravity fields. Its gravity orientation depends on which gravity field it belongs to.

To get the box to “fall”, instead of using Unity’s default physics, I apply an acceleration force using ForceMode.Acceleration in FixedUpdate() in the direction of the box’s gravity.
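That amounts to something like this sketch (rb, gravityDir, gravityStrength, and the landed flag are my assumed names):

```csharp
// Sketch: custom gravity applied each physics step until the box lands.
void FixedUpdate()
{
    if (!landed)
    {
        rb.AddForce(gravityDir * gravityStrength, ForceMode.Acceleration);
    }
}
```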

Only one Bottom Collision Detector is active at a time, depending on which direction the box is falling: namely, the detector at the bottom of the box. Its purpose is to let the box know when it has contacted something beneath it, i.e. when it has landed.

I do this for a couple of reasons:

1) So the acceleration force doesn’t keep getting applied. I’ve had cases where the box got pushed past whatever was beneath it and fell through the floor.

2) So that the box knows to set itself to kinematic, so that when the player pushes another box against it, it doesn’t penetrate.

So what was going on?

I started to look at the state of the rigidbody of the box, and realized what was happening was that it was becoming non-kinematic when the second box hit it.

And then I realized that this was happening because the second box was entering and exiting the Bottom Collision Detector, tricking it into thinking that the box was not on the ground.

This allowed the second box to penetrate the first box, and that set off the trigger.

At least that is my diagnosis.


I decided to use OnCollisionEnter on the box to detect when something is below.

I get all the collision contact points, and if those points have a normal that is the inverse of the gravity direction, then something is below.
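That check might look like this sketch (gravityDir and the OnLanded handler are my assumed names; the 0.9 dot-product threshold is an assumption too):

```csharp
// Sketch: detect ground contact from collision normals instead of a trigger.
void OnCollisionEnter(Collision collision)
{
    foreach (ContactPoint contact in collision.contacts)
    {
        // A normal opposing gravity means this contact is beneath the box.
        if (Vector3.Dot(contact.normal, -gravityDir) > 0.9f)
        {
            OnLanded(); // assumed handler: stop gravity, set kinematic
            break;
        }
    }
}
```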

Also, I offset the detectors from the faces of the box slightly:

This way, the bottom collision detector is below the ground, so when another box penetrates the box, it doesn’t hit the trigger.

This seems to fix the problem so far.

My code now is a mess from debugging, so I’m going to clean up the code and try to see if there are any more edge cases in which the system breaks down.