The past few weeks have been fairly unproductive, at least in terms of writing code; now that the editor back-end is mostly fleshed-out, the minutiae have begun their inevitable soul-sucking death-by-a-thousand-cuts. After the first 90% comes the last 90%, as they say 🙂
We’re still in the “figuring things out” phase of design for the graphics and animation systems; we’ve learned the hard way that planning is just as important as the actually-writing-code phase, if not more so. So, lots of little snippets and tests have been made, but nothing substantial has been built.
In order to keep this blog trickling along, we thought it might be a good idea to review all the ideas that aren’t going to make it into Robotology. As we get further into development (or pre-production really, since at this point we’re still prototyping systems) the need to simplify by ruthlessly cutting features increases… and as much as we really like some ideas, getting the game done comes first. Having our ideas shot down is softened somewhat by the knowledge that we’re the ones who would have to implement all this stuff!
Anyway, here’s a list of some of the bigger ideas which were scrapped… hopefully they will be revived at some point in the service of future games:
Our constraint solver can handle constraints between non-rigid shapes; combined with “smooth skinned” geometry (each vertex is bound either to a blend of rigid frames, or to a frame which is itself a blend of rigid frames, rather than to a single rigid frame) this would allow a lot of interesting behaviours to be modeled, such as physically-based morphing.
Unfortunately, the software engineering component of this feature (i.e. how to support rigid and non-rigid constraints without having to explicitly write a combinatorial explosion of constraint types) proved to be too much trouble, so we’re sticking to rigid-body physics only.
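For readers unfamiliar with the “smooth skinned” idea mentioned above, here’s a minimal sketch of linear blend skinning: each vertex position is a weighted average of the positions it would have under several rigid frames. All names and details here are our own illustration, not Robotology’s actual code.

```python
import math

def apply_frame(frame, local_pt):
    """Transform a local-space point by a rigid frame (angle, tx, ty)."""
    angle, tx, ty = frame
    c, s = math.cos(angle), math.sin(angle)
    x, y = local_pt
    return (c * x - s * y + tx, s * x + c * y + ty)

def skin_vertex(local_pt, frames, weights):
    """Blend the rigidly-transformed positions of one vertex.

    Basic linear blend skinning: transform the vertex by each rigid
    frame, then take the weighted average of the results.
    """
    wx = wy = 0.0
    for frame, w in zip(frames, weights):
        px, py = apply_frame(frame, local_pt)
        wx += w * px
        wy += w * py
    return (wx, wy)

# A vertex bound half-and-half to two frames: the identity frame, and a
# frame rotated 90 degrees and translated by (2, 0).
frames = [(0.0, 0.0, 0.0), (math.pi / 2, 2.0, 0.0)]
pos = skin_vertex((1.0, 0.0), frames, [0.5, 0.5])
```

The “frame which is itself a blend of rigid frames” variant would instead blend the frames first and transform the vertex once; the two approaches give different (and differently-behaved) results.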
Originally we intended to support rigid, smooth-skinned and deforming/morphing polygonal shapes; as with non-rigid physics, this ended up being a big pain in the ass in terms of exponentially increasing the work involved. Plus, once non-rigid physics was scrapped, this was basically useless (since it would be a purely graphical effect rather than making an actual difference in the game world).
Additionally, non-rigid shapes cause further complications, such as having to maintain a valid triangulation for concave shapes, or having to ensure that deforming polygons never became self-intersecting. In the end, this feature was abandoned with no small amount of relief. Still, this is something we’d really like to revisit, especially since our collision system can handle deforming/soft shapes just as easily as rigid bodies.
The circular-arc-shaped tiles in N were really fun to move across, and we were planning on bringing the same sort of shapes to Robotology: all objects would be defined as chains of line segments and circular arcs. Unfortunately, moving from tile-based to more generic “anything goes” geometry made supporting circular arcs a lot more complicated — they were awkward to work with in many ways, from editing (what’s the best interface for manipulating the properties of an arc?) to collision detection (moving-arc-vs-moving-arc isn’t directly solvable).
We’re still using circular shapes for some things (circles are great collision shapes for limb joints, and rounding the corners of motion paths makes belts and point-on-path constraints behave much more smoothly) but they had to be cut from collision/world geometry in order to maintain our sanity.
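To illustrate why full circles survived the cut while arcs didn’t: two moving circles reduce to a closed-form problem (a quadratic in time), which is exactly the kind of direct solution that moving-arc-vs-moving-arc lacks. This is a generic sketch of swept circle-circle collision, not code from our engine.

```python
import math

def swept_circle_collision(p1, v1, r1, p2, v2, r2):
    """Earliest time t in [0, 1] at which two moving circles touch, or None.

    In the frame of circle 1, the condition is |dp + t*dv| = r1 + r2,
    which expands to a quadratic a*t^2 + b*t + c = 0 in t.
    """
    dp = (p2[0] - p1[0], p2[1] - p1[1])
    dv = (v2[0] - v1[0], v2[1] - v1[1])
    r = r1 + r2
    a = dv[0] ** 2 + dv[1] ** 2
    b = 2.0 * (dp[0] * dv[0] + dp[1] * dv[1])
    c = dp[0] ** 2 + dp[1] ** 2 - r * r
    if a == 0.0:
        # No relative motion: touching now, or never.
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # earlier root
    return t if 0.0 <= t <= 1.0 else None

# Two unit circles 6 apart, closing at speed 8: they touch at t = 0.5.
t = swept_circle_collision((0.0, 0.0), (8.0, 0.0), 1.0,
                           (6.0, 0.0), (0.0, 0.0), 1.0)
```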
Heterogeneous Geometry Generation
The original plan for the editor was to support multiple paradigms for generating polygonal shapes: tiles, platonic/regular polygons, inflated-skeletons, etc. These “shape generators” would be black boxes which would output generic polygonal geometry that users could then further modify (manipulating individual vert positions, performing CSG union/subtraction between two shapes, etc) and annotate (defining special surface properties for some line segments, such as “slippery” or “conveyor-belted”).
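A toy sketch of the “shape generator” idea, with hypothetical names of our own invention: the generator owns an easy-to-edit representation (here, a grid of tiles) and bakes it down to generic annotated polygons. Note that the tile grid itself is not part of the output, which is the root of the one-way-workflow problem discussed below.

```python
from dataclasses import dataclass, field

@dataclass
class Polygon:
    verts: list                      # [(x, y), ...] in order
    # per-edge surface annotations, e.g. "slippery" or "conveyor-belted"
    edge_materials: list = field(default_factory=list)

class TileGenerator:
    """Black box that bakes a grid of filled tiles into generic polygons."""

    def __init__(self, tile_size=1.0):
        self.tile_size = tile_size
        self.tiles = set()           # set of (col, row) pairs

    def add_tile(self, col, row):
        self.tiles.add((col, row))

    def bake(self):
        # Toy bake: one quad per tile; a real generator would merge
        # adjacent tiles into a single outline.
        polys = []
        s = self.tile_size
        for (c, r) in sorted(self.tiles):
            verts = [(c * s, r * s), ((c + 1) * s, r * s),
                     ((c + 1) * s, (r + 1) * s), (c * s, (r + 1) * s)]
            polys.append(Polygon(verts, ["default"] * 4))
        return polys

gen = TileGenerator()
gen.add_tile(0, 0)
baked = gen.bake()   # from here on, only generic polygons exist
```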
One of the reasons for this initial plan was to mitigate the risk of attempting the “inflated-skeleton” geometry generator — we didn’t know if it would even work, let alone be useful, so it was nice to have a simpler fallback (i.e. tiles). The whole idea behind using “generators” was that we wanted to avoid, as much as possible, the agony of having to build polygonal shapes point-by-point; using higher-level building blocks like tiles is much easier and faster.
This plan separated the geometry-definition from the annotation phase, so that regardless of how a shape was generated, we could use a single tool to “paint” different materials onto its surface. Also, regardless of the constraints imposed by the generating phase (i.e. tile-based editing will generate only shapes that look tile-based) users would be free to manually screw around with the polygons.
The problem with this plan was that the editing workflow sucked — while the representation of shapes inside a generator was designed for ease-of-editing (such as placing tiles in the tile-based editor), this representation was lost when the generic polygon was output. Once a shape has been “baked” into explicit geometry, there’s no way to reverse the baking (i.e. to recover a set of tiles, given an input polygon).
Of course, users could load the original tile data, but they would then lose all their annotations! The system created a one-way workflow: make shapes, then tweak and annotate them (i.e. set material/surface properties). Users could always edit the shapes by manipulating them like generic polygons, but could never get back to the nice/easy representations (tiles, skeletons) without having to totally redo all their material-editing. This discourages the sort of iteration that we think is vital to making fun platforming levels.
In the end we decided to focus only on skeleton-based geometry, and to integrate material/property editing into this model. Another motivation for this was that many 2D games are tile-based, so by abandoning that common modeling paradigm we’ll hopefully end up steering Robotology into a more unique space “automatically”.
While the editor is still an ongoing task (we’ve discovered that a skeletal representation doesn’t lend itself perfectly to surface-editing, since the surface is implicit in the skeletal representation), it seems like it will work out in the end… fingers crossed! 😉
That concludes our review of what’s NOT going to be in Robotology. Stay tuned for posts on stuff which will be included, coming soon!