Before we get to the title topic, we’re thrilled to announce that N+ will be the Deal of the Week next week on XBLA (July 6-12 2009). 50% off: Hooray! Just in time for summer vacation for a lot of you! Everyone else: quit your job for the summer!
If for some crazy reason you or someone you love (or even someone you hate) has yet to download N+ XBLA — the gold standard and best of the various versions of N+ — now is your chance to save some money.
And now, the main attraction: it’s been a long time coming, but we finally have a video to show!
Don’t get too excited, though: the graphics are all debug/placeholder, so it’s nothing much to look at.
(Hopefully we’ll figure out how to get screen capture and exporting working a bit better in the future; this initial vid is unfortunately a bit choppy — sorry about that.)
It’s just a simple demonstration of how robots in the 2D world will change direction in a physically valid way; typically this sort of movement is faked (for instance, in Super Mario Bros objects just “flip” instantaneously; however, this only works because their simulated shapes are symmetrical and unchanging — only the graphics actually “flip”), or relies on movement in 3D.
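To make that contrast concrete, here’s a tiny Python sketch of the graphics-only flip (the shapes and names are made up for illustration, not from any actual game). Mirroring only the render geometry is free, but it silently assumes the collision shape is symmetric:

```python
# Sketch of the "graphics flip" trick: render vertices are mirrored by
# negating x, while the simulated collision shape is left untouched.
# For a symmetric shape the two stay in agreement; for an asymmetric
# one they diverge, which is why this trick doesn't work for our robots.

def flip_graphics(vertices):
    """Mirror render vertices about the body's vertical axis."""
    return [(-x, y) for (x, y) in vertices]

symmetric = [(-1.0, 0.0), (1.0, 0.0), (0.0, 2.0)]   # isoceles triangle
asymmetric = [(-1.0, 0.0), (2.0, 0.0), (0.0, 2.0)]  # lopsided triangle

# A symmetric shape is unchanged (as a set of points) by the flip...
print(sorted(flip_graphics(symmetric)) == sorted(symmetric))    # True
# ...but an asymmetric one no longer matches its collision geometry.
print(sorted(flip_graphics(asymmetric)) == sorted(asymmetric))  # False
```

Since our robots are asymmetric and articulated, a flipped sprite over an unflipped physics body would visibly disagree with how the robot collides and hangs from things.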
We wanted to try something different. Since our goal is for all movement to be physically-based, we needed a solution which would allow robots to change direction while their movement remained valid in the 2D “flatland” simulation. This way, any external constraints (such as grappling hooks) that are interacting with a robot will continue to behave normally, with no popping or other glitches as the robot changes facing direction.
Our solution is to morph the collision/graphics geometry (which defines the shapes of each rigid body), while simultaneously morphing the physical constraints (which define the way the body moves). “Morphing” is just a non-rigid transformation: moving around vertices or joint angles or whatever. Currently this deformation is purely kinematic; we decided that driving the morph via physical motors was one step too far in terms of complexity, although it would be possible to do. We’re already somewhat behind schedule, so keeping it simple is a priority. Since the actual rigid bodies are free to move during the morph, the robot continues to be responsive and reactive regardless of its current deformation state. Nice!
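In pseudocode-ish Python, the core of a kinematic morph is roughly something like this (a simplified sketch, not the actual game code): each vertex stores a rest position for each facing direction in its body’s local frame, and a single morph parameter blends between them while the rigid bodies keep simulating underneath.

```python
# Sketch of a kinematic morph: every body-local vertex has two rest
# positions (one per facing direction), and a parameter t in [0, 1]
# blends between them. The rigid bodies themselves keep moving freely
# the whole time, so the robot stays responsive mid-morph.

def smoothstep(t):
    """Ease the morph so it starts and ends gently."""
    return t * t * (3.0 - 2.0 * t)

def morph_vertex(left_pose, right_pose, t):
    """Blend a body-local vertex between its two facing-direction poses."""
    s = smoothstep(t)
    return tuple((1.0 - s) * l + s * r for l, r in zip(left_pose, right_pose))

# Example: a vertex at local (0.5, 1.0) when facing left mirrors to
# (-0.5, 1.0) when facing right; halfway through, it passes through
# the body's centerline.
print(morph_vertex((0.5, 1.0), (-0.5, 1.0), 0.5))  # (0.0, 1.0)
```

The same blending applies to joint rest angles and constraint anchor points, which is what keeps grappling hooks and other external constraints from popping: the anchors move continuously instead of teleporting.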
This particular test biped is representative of how the larger robots will be modeled; it’s made of over 100 points bound to 14 rigid bodies. For smaller robots, we’ll be morphing the graphics geometry, but the dynamics model will probably be simpler or symmetrical (for instance, the 6-particle + 5-stick model we used for the ragdoll in N).
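For the curious, binding a point to a rigid body is just a change of coordinate frames; here’s a simplified Python sketch of the idea (illustrative names, not the actual game code). At bind time the point is expressed in the body’s local frame; every frame after that, it’s pushed back into world space using the body’s current position and rotation:

```python
import math

# Sketch of binding graphics points to rigid bodies: each point lives in
# one body's local frame, and follows that body as it translates/rotates.

def bind_point(world_point, body_pos, body_angle):
    """Express a world-space point in a body's local frame (at bind time)."""
    dx = world_point[0] - body_pos[0]
    dy = world_point[1] - body_pos[1]
    c, s = math.cos(-body_angle), math.sin(-body_angle)
    return (c * dx - s * dy, s * dx + c * dy)

def skin_point(local_point, body_pos, body_angle):
    """Transform a body-local point back into world space each frame."""
    c, s = math.cos(body_angle), math.sin(body_angle)
    return (c * local_point[0] - s * local_point[1] + body_pos[0],
            s * local_point[0] + c * local_point[1] + body_pos[1])
```

Round-tripping a point through `bind_point` and `skin_point` with the same body transform returns it unchanged; as the body moves, the bound point follows along rigidly — and the morph described above simply animates the local coordinates themselves.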
The process of getting this to work nicely was a lot more complicated than we anticipated (hence the two-month blog-posting delay — sorry!), but it was a great experience since it made all the unanticipated problems obvious, and we managed to find solutions to most of them. It also impressed upon us the need for better tools — that biped is defined by almost 800 lines of code, painstakingly hand-transcribed from graph paper and Flash mockups! Ouch. We thought that would be quick-and-dirty; instead it was long-and-painful-and-dirty.
The good news is that during implementation it became obvious that morphing could be used for much more than simply changing the facing direction of robots; it could effect any sort of shape change, allowing us to model transforming robots. We’re still not sure how far in that direction we want to go, though (there are enough technical challenges as it is), but it sure would be cool.
That’s it for now — Happy Canada Day!