Another last-minute post — and this time it’s a blast from the past!
The year was 1996. The world’s political climate was tepid, and the masses were growing uneasy at the silence that filled the streets nightly. The signals were undeniable. The robots were coming. We decided to finish some of the posts we left half-written during the development of Robotology before getting back to what we’re currently working on; this one details our motorized animation system. It’s a bit long, but then again it’s a complex problem 🙂
So now that the simulator is working, we’re in the process of adding “controller” functionality: controllers are AI/logic which drive motors in order to move things around.
The code responsible for moving robots around the world will be made of three layers of functionality working together, each layer delegating details to the layer below it.
At the highest level, there is what people typically consider “game AI” — code which is responsible for choosing an action to perform (e.g. “move left”). The high-level layer is where the typical AI stuff like path-finding, target selection, etc. lives. About a year ago we sketched out a simple solution for this layer; we’re leaving the implementation for later because our needs are more or less trivial, since we only need simple “robotic” entities rather than complex life-like human agents, and there are copious papers and solutions available which deal with game AI.
The mid-level layer is responsible for transforming the abstract high-level command “move left” into a concrete command such as “move right foot to position x,y”. This will vary depending on the type of robot — you can imagine that “move left” could involve vastly different activities depending on whether the robot is wheeled, or a quadruped, or a biped, or has a jetpack, etc. For example, walking bipeds typically have some sort of state machine which tracks which phase of the walking cycle the robot is currently in (i.e. right foot planted, left foot swinging forward) and applies varying motor forces based on the current phase.
The mid-level is thus a sort of animation system, in that it produces sequences of target states (positions, angles, etc). In some cases this might be explicitly animation-based (the target states are defined as keyframes which are played back at a certain speed), and in others the target states might be generated procedurally (for instance, by raycasting to determine the current height of the ground where the foot should be placed).
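As a heavily simplified sketch of what such a mid-level state machine might look like (the phase names and the touchdown-triggered transition below are our own illustration, not a full gait controller):

```python
# Toy walking-cycle state machine: two phases, and a transition that fires
# when the swinging foot touches down. A real controller would track more
# phases and blend motor targets; this just shows the structure.

LEFT_PLANTED = "left_planted_right_swinging"
RIGHT_PLANTED = "right_planted_left_swinging"

def next_phase(phase, swinging_foot_touched_down):
    """Advance the walk cycle when the swinging foot lands."""
    if not swinging_foot_touched_down:
        return phase
    return RIGHT_PLANTED if phase == LEFT_PLANTED else LEFT_PLANTED

def target_for_phase(phase):
    """Each phase maps to a concrete command for the low-level motors."""
    if phase == LEFT_PLANTED:
        return "swing right foot forward"
    return "swing left foot forward"

phase = LEFT_PLANTED
phase = next_phase(phase, swinging_foot_touched_down=True)
```

The high-level command (“move left”) would sit above this, deciding whether to keep cycling phases at all.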
The low-level layer takes these target states and tries to drive the simulation towards them via motors. One popular example of such low-level control is a PID Controller.
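For reference, here’s a minimal sketch of a PID controller driving a toy joint toward a target angle. The gains and the unit-inertia “joint” are made up for illustration, not tuned values from our simulator:

```python
class PID:
    """Proportional-integral-derivative controller for a scalar target."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, target, current, dt):
        """Return a force/torque that drives `current` toward `target`."""
        error = target - current
        self.integral += error * dt  # accumulated error (I term)
        # rate of change of error (D term); zero on the first step
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a unit-inertia "joint" toward an angle of 1.0 radian.
pid = PID(kp=20.0, ki=0.5, kd=4.0)
angle, velocity, dt = 0.0, 0.0, 1.0 / 60.0
for _ in range(300):  # five simulated seconds at 60Hz
    torque = pid.step(target=1.0, current=angle, dt=dt)
    velocity += torque * dt
    angle += velocity * dt
```

The P term pushes toward the target, the D term damps the push so the joint doesn’t oscillate forever, and the I term cleans up any steady-state offset.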
So, low-level controllers are more or less trivial to implement, but how they’re actually used (the mid-level layer) is hideously complicated — getting a robot to balance and walk is far from a solved problem in reality. And even we humans, with our crazy super-computer brains, take many years to learn the delicate control strategies required for walking!
So those are the basics. Given that this is such a hard problem, we’re trying to figure out different ways to simplify things as much as possible, without cheating too much and in doing so causing obviously un-physically-plausible behaviours to arise… and that gets us to the title of this post. Now on to the juicy details!
Our primary “faking” strategy is to exploit the fact that we don’t care about whether something is physically possible so long as it’s visually plausible. For instance, consider the problem of getting a robot to move its hand to the left:
In real life, a robot is controlled using motorized joints — each joint links two pieces of the robot, and controls their relative orientation (for instance, the upper and lower arm are joined by a motorized elbow which controls the angle between upper and lower arm).
Controlling the position of the hand is complicated because the bodies and joints making up the robot are always interacting with each other at the same time. Kinematically, the position of each body part depends on the position of the part(s) to which it is connected — the position of the hand depends on the position of the lower arm, which depends on the position of the upper arm, which depends on the position of the shoulder, etc. Dynamically, forces generated by each joint affect the movement of every piece of the robot’s body — bending the wrist will transmit force to the lower arm, which will transmit that force to the upper arm, which transmits it to the shoulder, etc.
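The kinematic half of that chain is easy to see in code. In this 2D forward-kinematics sketch (the link lengths are made-up values), the hand’s position is computed by walking down the chain from the shoulder, so every upstream joint angle affects where the hand ends up:

```python
import math

def hand_position(shoulder, angles, lengths):
    """2D forward kinematics for a chain of links.

    shoulder: (x, y) anchor; angles: per-joint angles in radians, each
    relative to the previous link; lengths: link lengths (e.g. upper arm,
    lower arm, hand).
    """
    x, y = shoulder
    total = 0.0
    for angle, length in zip(angles, lengths):
        total += angle  # each link inherits its parent's rotation
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

# A straight arm pointing along +x: the hand lands at the sum of the lengths.
hx, hy = hand_position((0.0, 0.0), angles=[0.0, 0.0, 0.0], lengths=[0.3, 0.25, 0.1])
```

Inverting this (given a desired hand position, find the angles) is the hard direction, and the dynamic version, with forces flowing both ways, is harder still.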
(Possibly this is the lesson that the song about funny-bones-connected-to-arm-bones was meant to be teaching us. The more you know!)
As you might imagine, all of these interactions mean that figuring out how to coordinate the movement of all the motors simultaneously (for instance, to move a hand 30cm to the left) is a very big and complicated mess. In fact, it’s so complicated that one of the most popular solutions is to give up on trying to find a solution, and instead to use machine learning to evolve a controller that can find a solution. Madness! As far as we can determine, this is the approach that Natural Motion uses for its products. Sadly this is a bit more complex than we’re willing to get at this point.
So, how can this be simplified while still retaining physically-valid behaviour? One idea is to consider what’s really happening at a higher level when all those motors start moving — ultimately, all movement of the body is caused by reaction forces from the ground. If the robot was suspended in zero-gravity, it wouldn’t be able to move. (It could thrash its limbs around, but its body as a whole wouldn’t move anywhere)
So, when the robot moves its hand forward, this movement is the result of a complicated network of interacting forces, with each motor “pushing” off of the two bodies it connects: the wrist pushes the hand and lower arm; the elbow pushes the upper and lower arm; the shoulder pushes the upper arm and torso; the hip pushes the torso and upper leg; the knee pushes the upper and lower leg; the ankle pushes the lower leg and foot. Ultimately, the foot’s movement pushes the ground it’s resting on, which creates a reaction force that travels back up this network of joints, reaching the hand and moving it forward.
(At least, this is how we imagine it working — it’s possible we’re way off here, but this simple mental model of forces flowing through the joints seems at least somewhat valid.)
To simplify this complicated process, rather than explicitly modelling every step of this process, we instead considered condensing it so that the higher level behaviour is modelled directly: the hand is moved by the ground reaction force. If we could define a constraint which coupled the hand movement with the foot’s movement, we could potentially use this as a motor to drive the movement of the hand in a physically valid way — by pushing off of the ground!
Of course, such a constraint is a bit hard to imagine since in reality you can’t just couple one physical property to another like that; you would need a mechanism which physically connected the hand to the foot. But from the point of view of the simulator, such a coupling is perfectly valid — it can be expressed mathematically just like any other constraint (such as more “realistic” constraints like ball-and-socket or friction).
In this way, we can avoid having to ever deal with the mess of internal forces that normally propagate inside the robot’s body from ground to hand — instead it’s as if we have a perfect controller which knows exactly how to control all of the joint motors in order to achieve a complete transmission of force from ground to hand.
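A toy version of what such a coupling constraint might look like, expressed as a single impulse between two bodies. The body representation and the numbers are our own illustration, not our actual solver code:

```python
# "Fake" coupling constraint: instead of routing force through every joint,
# apply an equal-and-opposite impulse directly between the hand and the foot,
# as if a perfect transmission connected them.

def couple_hand_to_foot(hand, foot, target_relative_vx):
    """Drive the hand's velocity relative to the foot toward the target.

    The reaction lands on the foot (and from there, in the full simulation,
    on the ground it rests on). Like any pairwise constraint, the impulse is
    split by inverse mass, so momentum is conserved.
    """
    error = target_relative_vx - (hand["vx"] - foot["vx"])
    impulse = error / (1.0 / hand["mass"] + 1.0 / foot["mass"])
    hand["vx"] += impulse / hand["mass"]
    foot["vx"] -= impulse / foot["mass"]

hand = {"mass": 1.0, "vx": 0.0}
foot = {"mass": 4.0, "vx": 0.0}
couple_hand_to_foot(hand, foot, target_relative_vx=1.0)
```

In the full simulation the foot’s share of the reaction is resisted by its contact constraint with the ground, which is what makes the hand’s movement come “from the ground” rather than from nowhere.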
This is in fact the strategy we used for the robots in the walking video: http://www.metanetsoftware.com/?p=508
Another strategy which we are still in the process of implementing is to exploit the fact that our physics simulation is driven by a special kind of constraint solver — a non-linear optimizer. Given a set of constraints that describe how “wrong” the current state of the world is, our solver adjusts the world state until it finds a configuration which is the “most un-wrong”…in other words a least-squares fit to minimize the error.
We started using this to model physical constraints — pins and ropes connecting bodies together, collisions pushing bodies apart — but it can actually be used to find the solution for any sort of problem that is described as a measurement of error (“wrongness”).
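To make that concrete, here’s a toy “most un-wrong” solver: it nudges the state downhill on the sum of squared constraint errors using numerical gradient descent. Our real solver is more sophisticated; only the structure matters here:

```python
# Each constraint is just a function returning an error ("wrongness").
# The solver adjusts the state to minimize the sum of squared errors.

def solve(state, constraints, rate=0.1, iterations=500, eps=1e-6):
    """Least-squares fit via numerical (central-difference) gradient descent."""
    for _ in range(iterations):
        for i in range(len(state)):
            state[i] += eps
            up = sum(c(state) ** 2 for c in constraints)
            state[i] -= 2 * eps
            down = sum(c(state) ** 2 for c in constraints)
            state[i] += eps  # restore
            state[i] -= rate * (up - down) / (2 * eps)  # step downhill
    return state

# Two soft constraints on a point (x, y): "stay at distance 1 from the
# origin" (a rope) and "keep y at 0" (stay on the ground line).
constraints = [
    lambda s: (s[0] ** 2 + s[1] ** 2) ** 0.5 - 1.0,
    lambda s: s[1],
]
state = solve([0.3, 0.4], constraints)
```

Starting from (0.3, 0.4), the fit settles near (1, 0): the configuration that best satisfies both constraints at once. Anything you can phrase as an error measurement can be dropped into the same machinery.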
In fact, many robotics papers use this sort of optimization in order to compute a valid pose for the robot — they define their desired movement in terms of error measurements (How much energy is used? How well-balanced is the robot? How closely does the robot’s state match a given keyframe?) and ask the optimizer to find a pose which ideally meets all their requirements.
They then take the pose calculated by the optimizer and feed it to their control system, which attempts to guide the robot to that pose.
Our main intuition behind a simplified control strategy is that we don’t actually have a real robot to control — this suggests that so long as the optimizer used to generate a target pose obeys the physical constraints of our simulation, generating a valid pose and moving the robot to that pose are in fact the exact same problem, because the robot model that the optimizer uses to generate the pose IS the actual robot. This means that, hopefully, we can avoid most of the super-complicated control problems. At least, this is our current theory. Fingers crossed!
We’ve glossed over many of the tricky problems, such as: how do we simultaneously model the hard limits which define the structure of the body? The elbows, shoulders, etc. all need to be simulated so that the body stays together in the desired shape; we just don’t want those joints to be actively motorized. In a proper robotics simulation, getting the motors to obey the joint constraints (for instance, knees can’t bend backwards) is simple, because the joints and motors are the same mechanism — you can just clamp the motor’s angle to the angle limits of the joint. In our “fake” system, where the network of motors is completely different from the network of structural pin joints, it’s not so easy to ensure that the motors respect the limits of movement on each joint’s angle.
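For contrast, the easy case really is one line. Assuming a conventional motorized joint with a target angle, the clamp looks like this (the angles here are made-up values):

```python
# When the motor and the joint are the same mechanism, respecting joint
# limits is just clamping the motor's target to the joint's allowed range.

def clamp_motor_target(target_angle, joint_min, joint_max):
    """e.g. keep a knee motor from asking the knee to bend backwards."""
    return max(joint_min, min(joint_max, target_angle))

# A knee that can flex from 0 to 2.4 radians refuses a backwards bend.
clamped = clamp_motor_target(target_angle=-0.4, joint_min=0.0, joint_max=2.4)
```

In our fake system there is no single motor angle to clamp, which is exactly the open problem.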
Aside: So far our progress with Robotology has been slower than we anticipated — in part this is due to some of the problems being much more complicated than we initially assumed. However, it’s also due to our development process: rather than trying to find a solution that works while simultaneously figuring out the best way to implement this solution, we start by just trying to get something to work.
Once everything is working, we try to use it — typically, what we discover is that since it was implemented with an almost total disregard for software design, it’s horribly awkward to use and a big mess to maintain. Restructuring a system that works — but is a mess — is a much simpler problem than trying to figure out the ideal structure before you even know what a working solution looks like. And sometimes we get lucky and the proof-of-concept implementation is good enough to use as-is.
This does mean that we typically write everything twice — once to get it working, and once to make it cleaner, maintainable, and convenient to use. However, the alternative so far seems to be days of not writing anything, trying to plan out the “best” implementation, only to implement it and find that there are problems despite all that planning. Some amount of basic planning is required, but beyond that it seems more productive to go ahead and write a straightforward version, and then later try to figure out how to restructure it to avoid code duplication or to clean up the interface.
It does make it difficult to stomach as the previously immaculate code-base becomes bloated and full of obviously non-optimal solutions, but maybe we need to learn to tolerate code-mess a little more.
We hope to return to Robotology development next year, but will bring you more posts on the subject before then. You’ll definitely want to stay tuned for that.
So that’s that for this month — whew, just made it. We need to get better at writing on time and not leaving things until the last minute. But we’ve stuck to our promise of at least one post per month so far, and surely we can all agree that’s progress.