Before we get started on the continuing saga of Robotology, we should quickly give you an update on Contesque #3, The Creativity Contesque. Thanks so much to everyone who entered! We saw some really great work. However, since we received far fewer entries than expected, instead of immediately awarding a prize, we thought we’d keep the Contesque open longer, to give more of you a chance to show off 🙂 So, everyone, until further notice, send your art, crafts or costumes related to N+ to PDLC2 [at] metanetsoftware [dot] com — if we like what we see, we’ll send you a copy of whichever version of N+ you want. Hooray for creativity!
And don’t forget, the Metanet Etsy shop is open and ready for your holiday orders.
Alright, on to Robotology.
When we first started working on the simulator, we didn’t yet know how the editor would work. We knew that we wanted to use a higher-level editing paradigm than “user must manually place polygon vertices”, but we had no idea what the solution would be. So, to be on the safe side, we just assumed that the editor would export geometry in a generic format: simple (non-self-intersecting, no holes) polygons. That way we could start working on the sim without having designed the editor.
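To make that interface concrete, here’s a rough sketch of the sort of data we’re talking about (the Vec2 and SimplePolygon names are purely illustrative, not actual Robotology code): each solid is nothing more than an ordered list of vertices, assumed to be non-self-intersecting and hole-free.

```cpp
#include <vector>

// Illustrative sketch of a "generic simple polygon" interface:
// the editor exports each solid as a bare, ordered vertex list.
struct Vec2 {
    float x, y;
};

struct SimplePolygon {
    // Vertices in counter-clockwise order; edge i runs from
    // vertices[i] to vertices[(i + 1) % vertices.size()].
    std::vector<Vec2> vertices;
};

// The simulator sees only this: no convexity flags, no adjacency,
// no record of how the shape was authored in the editor.
```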
This decision led to a lot of hard work on collision detection, since developing algorithms that work with general concave polygons is quite complicated compared to simpler, more constrained cases like boxes or convex shapes. We learned a lot — which we can’t wait to share via tutorials once this game is done! — and things are working well so far. Unfortunately, as is the case in many simulators, collision is definitely a bottleneck in our physics system.
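To give a sense of the convex-vs-concave gap: for two convex polygons, the separating axis theorem gives a short, direct overlap test, sketched below using the illustrative types from above. No such single-axis shortcut exists for concave shapes, which generally have to be decomposed into convex pieces first.

```cpp
#include <vector>

// ...continues the illustrative Vec2 / SimplePolygon sketch above...

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Project a polygon's vertices onto an axis, returning the interval [lo, hi].
static void project(const SimplePolygon& poly, Vec2 axis, float& lo, float& hi) {
    lo = hi = dot(poly.vertices[0], axis);
    for (Vec2 v : poly.vertices) {
        float d = dot(v, axis);
        if (d < lo) lo = d;
        if (d > hi) hi = d;
    }
}

// Separating Axis Theorem overlap test. This only works when BOTH
// polygons are convex: two convex shapes are disjoint iff some edge
// normal of one of them separates their projections. Concave shapes
// offer no such guarantee, which is why the general case usually
// means decomposing into convex pieces first.
bool convexOverlap(const SimplePolygon& a, const SimplePolygon& b) {
    const SimplePolygon* polys[2] = { &a, &b };
    for (const SimplePolygon* poly : polys) {
        size_t n = poly->vertices.size();
        for (size_t i = 0; i < n; ++i) {
            Vec2 p = poly->vertices[i];
            Vec2 q = poly->vertices[(i + 1) % n];
            Vec2 axis = { p.y - q.y, q.x - p.x }; // edge normal (unnormalized)
            float aLo, aHi, bLo, bHi;
            project(a, axis, aLo, aHi);
            project(b, axis, bLo, bHi);
            if (aHi < bLo || bHi < aLo)
                return false; // separating axis found: no overlap
        }
    }
    return true; // no separating axis exists: the polygons overlap
}
```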
Using simple polygons (the most unstructured, generic data format possible) as the interface between simulator and editor was a safe bet in that it allowed the two systems to be decoupled and thus developed at different times. One nuance we failed to appreciate is that scene editors are often privy to lots of extra information — information which could make the collision system’s job easier — and that by reducing the interface between the two systems to generic polygons, we were throwing away lots of useful info!
In hindsight this should have been obvious, since we know of many existing game engines that exploit extra data generated automatically by the editing paradigm. Quake 3 is a good example: the editor is CSG-based, with users creating shapes by cutting and welding convex “brushes”. Any sort of scene can be created with these tools, but since the building blocks are more structured than “triangle soup”, they contain lots of great info that can be exploited. For instance, covering a scene made of “triangle soup” with convex primitives is a very hard problem in the general case, but since the editor works directly with convex primitives (brushes), a set of convex shapes covering the scene already exists “automatically”, without having to build one up from scratch. Bonus!
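As a rough 2D analogue of what a brush buys you (again, just an illustrative sketch reusing the hypothetical Vec2 from earlier — not actual Quake or Robotology code): a convex brush can be stored as an intersection of half-planes, which makes queries like point containment a handful of dot products.

```cpp
#include <vector>

// 2D analogue of a Quake-style convex brush: the solid region is the
// intersection of half-planes, one per face.
struct HalfPlane {
    Vec2 normal;   // points away from the solid interior
    float offset;  // the face lies on the plane dot(normal, p) == offset
};

struct Brush {
    std::vector<HalfPlane> faces;
};

// Point containment against a convex brush: no winding tests, no
// decomposition. A scene built by cutting and welding brushes is
// therefore already covered by convex pieces — exactly the structure
// that exporting "triangle soup" throws away.
bool inside(const Brush& brush, Vec2 p) {
    for (const HalfPlane& f : brush.faces) {
        if (f.normal.x * p.x + f.normal.y * p.y > f.offset)
            return false; // p is on the empty side of this face
    }
    return true;
}
```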
Anyway, now that we’ve (mostly) figured out how the editor will be working, we’re aware that a much more efficient collision detection solution would be possible, given the extra info that’s made available by our editing paradigm. But since our current collision system is working just fine, and in the interest of actually getting this game done at some point, we’re going to be leaving things as-is until they become a problem.
The editor UI/frontend is still only partially planned — and it will no doubt be an ongoing task for the rest of development, as we iteratively figure out what works and what doesn’t. However, the design of the editor’s backend (the world model, and operations which manipulate this model) is complete and we’re starting to build it. This is pretty exciting as it is the most up-front planning we’ve ever done, and it’s really paid off — what started as an intractable, frustrating mess of desired features is now a realizable, fleshed-out plan of action, complete with pseudocode for all of the major algorithms.
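One common way to structure “a world model plus operations which manipulate that model” is the command pattern, where every edit is a reversible operation object — the sketch below is just a generic illustration of that pattern (using the earlier hypothetical types), not necessarily how our backend actually works. A nice side effect of this structure is that an undo stack falls out almost for free.

```cpp
#include <memory>
#include <utility>
#include <vector>

// Generic command-pattern sketch: the world model is plain data, and
// every edit is an operation object that can apply and undo itself.
struct World {
    std::vector<SimplePolygon> solids; // reusing the earlier sketch type
};

struct EditOp {
    virtual ~EditOp() = default;
    virtual void apply(World& w) = 0;
    virtual void undo(World& w) = 0;
};

struct AddSolid : EditOp {
    SimplePolygon solid;
    explicit AddSolid(SimplePolygon s) : solid(std::move(s)) {}
    void apply(World& w) override { w.solids.push_back(solid); }
    // Safe in a linear history: undo always runs on the most recent op.
    void undo(World& w) override { w.solids.pop_back(); }
};

class Editor {
public:
    void perform(std::unique_ptr<EditOp> op) {
        op->apply(world_);
        history_.push_back(std::move(op)); // record for undo
    }
    void undoLast() {
        if (history_.empty()) return;
        history_.back()->undo(world_);
        history_.pop_back();
    }
private:
    World world_;
    std::vector<std::unique_ptr<EditOp>> history_;
};
```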
Figuring out how the editor will work has let us make headway in other areas, such as the graphics/rendering system, as we nail down implementation details concerning how geometry will be represented and manipulated. It’s also the realization of several years of rumination and discussions we’ve had concerning 2D geometry and what sort of paradigms might be worth exploring beyond the usual tile-based and CSG methods. Hooray!!