First, some quick news:
N+ is a finalist for “Best Downloadable Game” in the Game Developers Choice Awards! We feel so honoured to have been nominated.
Also, we’re part of a group that just started in Toronto, the Hand Eye Society. The first meeting was last Thursday and it was great! Looking forward to more. If you’re in the area, you should definitely check out future meetings.
Anyway, continuing with our review of the current state of development, here are some of the things currently “on the operating room table”:
OpenGL-based Vector Graphics
We were getting a bit burnt out on the editor, so for the past couple of weeks we changed gears and have been working on the graphics system. Our two main goals are smooth, Flash-quality vector shapes and pixel-perfect fonts (i.e. no ugly, blurry text); so far things are going well. Nailing down the specifics of how the graphics system will work internally is a bit daunting, since we have very little experience with that sort of thing, but all of our tests indicate that the look we’re trying to achieve is definitely possible in “plain” OpenGL (fixed-function pipeline, no shaders).
Right now it looks like it’s going to be hard to get lines less than 2 pixels wide looking nice and smooth; this means we’ll have to revise some of our mockups to see if we can get by with thicker lines. Or maybe we can live with a few jaggies.
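To give a flavour of what we’re experimenting with: one common fixed-function trick is to draw each line segment as a thin screen-space quad extruded along the segment’s normal, then add “feather” quads whose outer edge fades to zero alpha for anti-aliasing. A rough sketch of the core quad math (the function and names here are ours, purely for illustration, not final code):

```python
import math

def line_quad(x0, y0, x1, y1, width):
    """Return the four corners of a screen-space quad covering the
    segment (x0,y0)-(x1,y1), expanded to `width` pixels, in CCW order.
    Assumes a non-degenerate (non-zero-length) segment."""
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    # unit normal, perpendicular to the segment direction
    nx, ny = -dy / length, dx / length
    h = width / 2.0
    return [
        (x0 + nx * h, y0 + ny * h),
        (x0 - nx * h, y0 - ny * h),
        (x1 - nx * h, y1 - ny * h),
        (x1 + nx * h, y1 + ny * h),
    ]
```

Feathering is then just two more quads per segment, offset a little further along the normal with the outer vertices at alpha 0, which hints at why sub-two-pixel lines are awkward: at that width the feather regions start to dominate the solid core.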
In the future we’re definitely going to look into shader-based vector rendering, since the quality of thin-line-drawing can definitely be improved beyond what’s possible in fixed-function land. For Robotology we decided we didn’t want to have to deal with the testing mess involved in making sure our shaders work on everyone’s card.
Robot Representation
We’ve started fleshing out exactly how the robots are going to be represented in-game. Our initial “ideal” plan was that their representation would be identical to that of the world — they would simply be geometric shapes moving around, and the only difference between a sliding door and a walking robot would be the relative complexity of the physical model (and of the controllers driving each model’s motors).
The motivation was to remove the artificial separation between “world” and “objects” which many games impose — a separation which is a necessary optimization for some games — and in doing so to encourage emergent interaction between entities. However, some special cases seem to be necessary. In Little Big Planet, the world/object duality is avoided by making all of the objects out of pieces of “world”, but the result is enemies which are substantially different from the player character.
Since we’re aiming for robots which are similar to the player character, the only workable solution so far seems to be to abandon our initial “purist” plan. For instance, robots are at a much smaller scale than the world itself, and this smaller scale lends itself to different collision representations — such as using circles and capsules rather than polygons to describe the limbs of smaller robots. There are also some behaviours which are specific to robots: for instance, “changing facing direction” (flipping across the y-axis). This was initially going to be handled by morphing the shape, but since we cut non-rigid animation/geometry we had to address this one specific case separately.
The different scale of robots, combined with the specialized collision shapes and animation problems (flipping “facing” direction), suggested a specific robot body model which will let us automate some of the body-creation and animation process. Sacrificing generality for the sake of this optimization seemed like a fair trade.
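As a concrete sketch of why circles and capsules are attractive for limbs: a capsule is just a line segment swept by a radius, so overlap tests reduce to simple distance checks. A hypothetical circle-vs-capsule test (names and layout ours, for illustration only):

```python
import math

def closest_point_on_segment(px, py, ax, ay, bx, by):
    """Closest point to (px,py) on segment A-B, clamped to the endpoints."""
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    if denom == 0.0:
        return ax, ay  # degenerate segment: A and B coincide
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    return ax + t * abx, ay + t * aby

def circle_vs_capsule(cx, cy, r, ax, ay, bx, by, cap_r):
    """A capsule is the segment A-B swept by radius cap_r; the circle
    overlaps it iff the circle centre is within r + cap_r of the segment."""
    qx, qy = closest_point_on_segment(cx, cy, ax, ay, bx, by)
    return math.hypot(cx - qx, cy - qy) <= r + cap_r
```

Compared with polygon-vs-polygon tests, this is both cheaper and much better behaved at the small scales robot limbs live at.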
There are a few parts which we’ve researched and sketched out, but have yet to plan in detail — usually because they depend on some other unfinished system, and so are subject to change:
Animation
We’re going for no kinematic animation: everything will be driven by motors. This is an idealistic goal like “no separation of world/bodies”, which we’re hoping will make the simulation more stable (kinematic bodies interacting with dynamic bodies are a recipe for problematic behaviour) and development simpler, since there will be a single path for all animation. Some parts of the animation system will be more “procedural” (e.g. balance controllers will have to use the current state of the world when calculating a target pose), while other things will be animated more traditionally (i.e. with keyframes driving the target pose). We’re hoping we’ll be able to achieve something like this.
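To make the motor-driven idea concrete, here’s a toy sketch of the usual technique: keyframes (or a balance controller) produce a target pose, and a PD (proportional-derivative) motor per joint applies torque toward it. The gains and the unit-inertia integration below are made up purely for illustration:

```python
def pd_motor_torque(angle, velocity, target_angle, kp=50.0, kd=5.0):
    """Torque steering a joint toward target_angle: a spring toward the
    target, damped by the current angular velocity. kp/kd are made-up gains."""
    return kp * (target_angle - angle) - kd * velocity

# Crude semi-implicit Euler loop with unit inertia: the joint settles
# at the keyframed target pose instead of being snapped there kinematically.
angle, velocity = 0.0, 0.0
dt = 0.01
for _ in range(1000):
    torque = pd_motor_torque(angle, velocity, target_angle=1.0)
    velocity += torque * dt
    angle += velocity * dt
```

The appeal is that the physics engine never sees a teleporting body: the motor pushes, the world pushes back, and interactions stay consistent.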
There are a couple systems which look like they might not make it — at least, we’re going to leave them out of the first release (though we’ll keep them in mind and try to make sure they’re feasible to add later):
Dynamic Music
We aren’t going to try anything too fancy — we just thought that, rather than static songs or loops, we could make a simple system which would mix together various loops to form songs on-the-fly. Obviously this sort of thing is a much better fit for more abstract electronic music, where it’s possible to assemble multiple loop “palettes” which work together in various combinations. For the time being we’re just planning on the usual looping music tracks; hopefully later we’ll have a bit of time to add something more dynamic.
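For what it’s worth, the system we have in mind could be as simple as this sketch (the palette layout and loop names are hypothetical, purely illustrative): each “layer” either keeps its loop or swaps to another loop from the same palette between sections, so the song varies while every combination still fits:

```python
import random

# Hypothetical loop palette: every loop in a layer is written to work
# with every loop from the other layers in the same palette.
PALETTE = {
    "drums": ["drums_a", "drums_b", "drums_c"],
    "bass":  ["bass_a", "bass_b"],
    "lead":  ["lead_a", "lead_b", "lead_c"],
}

def next_section(current, swap_chance=0.5, rng=random):
    """Advance the song one section: each layer keeps its current loop
    or swaps to another from the same palette, so variety never breaks fit."""
    section = {}
    for layer, loops in PALETTE.items():
        if layer not in current or rng.random() < swap_chance:
            section[layer] = rng.choice(loops)
        else:
            section[layer] = current[layer]
    return section

# Assemble a four-section "song" on the fly.
song = [next_section({})]
for _ in range(3):
    song.append(next_section(song[-1]))
```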
Procedural Level Generation
This is more of a “blue sky” feature, but something we’re extremely interested in (especially given the amazing Spelunky). Existing work on procedural generation of graphic design and architectural layouts suggests that generating platformer levels on-the-fly is a definite possibility; however, we haven’t given this feature much time or attention since it’s a huge and open-ended problem. We’re planning on getting the core game working, then focusing on stuff like this which builds upon the core.
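As a toy example of the Spelunky-style approach (hypothetical, nothing like final code): divide the level into a grid of rooms and random-walk a guaranteed solution path from the top row to the bottom, then fill the path rooms with templates that have open exits and everything else with whatever:

```python
import random

def solution_path(cols=4, rows=4, rng=random):
    """Spelunky-style sketch: walk across a grid of rooms, dropping down
    at random (or when a wall is hit), so a solvable corridor from top
    to bottom is guaranteed. Returns the list of (col, row) rooms visited."""
    col, row = rng.randrange(cols), 0
    path = [(col, row)]
    while row < rows - 1:
        move = rng.choice([-1, 1, 0])  # step left, step right, or drop
        if move == 0 or not 0 <= col + move < cols:
            row += 1        # drop to the next row of rooms
        else:
            col += move
        path.append((col, row))
    return path

path = solution_path()
```

The nice property is that solvability is enforced by construction; everything off the path is free space for the generator to play with.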
That’s it for now… Part 3 coming soon!