Another last-minute post — and this time it’s a blast from the past!
The year was 1996. The world’s political climate was tepid, and the masses were growing uneasy at the silence that filled the streets nightly. The signals were undeniable. The robots were coming. We decided to finish some of the half-written posts from the development of Robotology before getting back to what we’re currently working on; this one details our motorized animation system. It’s a bit long, but then again it’s a complex problem 🙂
—–
So now that the simulator is working, we’re in the process of adding “controller” functionality: controllers are AI/logic which drive motors in order to move things around.
The code responsible for moving robots around the world will be made of three layers of functionality working together, each layer delegating details to the layer below it.
At the highest level, there is what people typically consider “game AI” — code which is responsible for choosing an action to perform (e.g. “move left”). The high level is where the typical AI stuff like path-finding and target selection lives. About a year ago we sketched out a simple solution for this layer; we’re leaving the implementation for later because our needs are more or less trivial (we only need simple “robotic” entities rather than complex life-like human agents), and there are copious papers and solutions available which deal with game AI.
The mid-level layer is responsible for transforming the abstract high-level command “move left” into a concrete command such as “move right foot to position x,y”. This will vary depending on the type of robot — you can imagine that “move left” could involve vastly different activities depending on whether the robot is wheeled, or a quadruped, or a biped, or has a jetpack, etc. For example, walking bipeds typically have some sort of state machine which tracks which phase of the walking cycle the robot is currently in (i.e. the right foot is planted, the left foot is swinging forward) and applies varying motor forces based on the current phase.
The mid-level is thus a sort of animation system, in that it produces sequences of target states (positions, angles, etc.). In some cases this might be explicitly animation-based (the target states are defined as keyframes which are played back at a certain speed), and in others the target states might be generated procedurally (for instance, by raycasting to determine the current height of the ground where the foot should be placed).
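To make the mid-level idea concrete, here’s a minimal sketch of a two-phase walking state machine that emits target foot positions. It’s written in Python, every name in it (WalkCycle, ground_height, the step constants) is hypothetical, and it’s not our actual code:

```python
# A minimal sketch of a mid-level "animation" controller: a two-phase
# walk cycle that emits target foot positions for the low-level layer.
# All names are hypothetical; ground_height() stands in for a raycast.

STEP_LENGTH = 0.4   # how far the swinging foot reaches past the planted one
STEP_TIME = 0.5     # seconds spent in each phase

def ground_height(x):
    """Stand-in for a raycast against the terrain at horizontal position x."""
    return 0.0

class WalkCycle:
    def __init__(self):
        self.phase = "left_swinging"   # i.e. the right foot is planted
        self.timer = 0.0

    def update(self, dt, left_foot, right_foot):
        """Advance the cycle; return (foot_name, target_position)."""
        self.timer += dt
        if self.timer >= STEP_TIME:    # time to switch feet
            self.timer = 0.0
            self.phase = ("right_swinging" if self.phase == "left_swinging"
                          else "left_swinging")

        # The swinging foot targets a point one step ahead of the planted
        # foot, at whatever height the ground happens to be there.
        if self.phase == "left_swinging":
            target_x = right_foot[0] + STEP_LENGTH
            return "left_foot", (target_x, ground_height(target_x))
        else:
            target_x = left_foot[0] + STEP_LENGTH
            return "right_foot", (target_x, ground_height(target_x))
```

The target position it returns is exactly the sort of thing that gets handed down to the low-level layer described next.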
The low-level layer takes these target states and tries to drive the simulation towards them via motors. One popular example of such low-level control is a PID Controller.
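The low-level layer really is tiny; here’s the generic textbook PID in Python (a sketch, not Robotology code):

```python
class PID:
    """Textbook PID controller: produces a corrective output (e.g. a motor
    torque) from the error between a target and a measured value."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0       # accumulated error (the "I" term)
        self.prev_error = None    # last error, for the derivative ("D") term

    def step(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Each tick you’d feed it the target joint angle produced by the mid-level layer plus the measured angle, and apply the return value as motor torque. The hard part isn’t this code; it’s tuning the gains and, more importantly, choosing the targets.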
So, low-level controllers are more or less trivial to implement, but how they’re actually used (the mid-level layer) is hideously complicated — getting a robot to balance and walk is far from a solved problem in reality. And even we humans, with our crazy super-computer brains, take many years to learn the delicate control strategies required for walking!
So those are the basics. Given that this is such a hard problem, we’re trying to figure out different ways to simplify things as much as possible, without cheating too much and in doing so causing obviously un-physically-plausible behaviours to arise… and that brings us to the title of this post. Now on to the juicy details!
Our primary “faking” strategy is to exploit the fact that we don’t care about whether something is physically possible so long as it’s visually plausible. For instance, consider the problem of getting a robot to move its hand to the left:
In real life, a robot is controlled using motorized joints — each joint links two pieces of the robot, and controls their relative orientation (for instance, the upper and lower arm are joined by a motorized elbow which controls the angle between upper and lower arm).
Controlling the position of the hand is complicated because the bodies and joints making up the robot are all interacting with each other at the same time. Kinematically, the position of each body part depends on the position of the part(s) to which it is connected — the position of the hand depends on the position of the lower arm, which depends on the position of the upper arm, which depends on the position of the shoulder, etc. Dynamically, forces generated by each joint affect the movement of every piece of the robot’s body — bending the wrist will transmit force to the lower arm, which will transmit that force to the upper arm, which will transmit that force to the shoulder, etc. (There’s a toy sketch of the kinematic half of this below.)
(Possibly this is the lesson that the song about funny-bones-connected-to-arm-bones was meant to be teaching us. The more you know!)
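Here’s the promised toy sketch of the kinematic half of that dependency: a planar forward-kinematics function (hypothetical, illustration only) in which the hand’s position accumulates every joint angle up the chain, so changing any single joint moves everything downstream of it.

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar chain: each link's direction is the running sum of all joint
    angles before it, so the endpoint (the 'hand') depends on every joint."""
    x = y = 0.0
    heading = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        heading += angle                  # each joint rotates the rest of the chain
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# shoulder, elbow, wrist angles (radians); upper arm, lower arm, hand lengths (m)
print(forward_kinematics([0.5, -0.3, 0.1], [0.30, 0.25, 0.08]))
```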
As you might imagine, all of these interactions mean that figuring out how to coordinate the movement of all the motors simultaneously (for instance, to move a hand 30cm to the left) is a very big and complicated mess. In fact, it’s so complicated that one of the most popular solutions is to give up on trying to find a solution analytically, and instead use machine learning to evolve a controller that can find one. Madness! As far as we can determine, this is the approach that NaturalMotion uses for its products. Sadly this is a bit more complex than we’re willing to get at this point.
So, how can this be simplified while still retaining physically-valid behaviour? One idea is to consider what’s really happening at a higher level when all those motors start moving — ultimately, all movement of the body is caused by reaction forces from the ground. If the robot were suspended in zero gravity, it wouldn’t be able to move. (It could thrash its limbs around, but its body as a whole wouldn’t move anywhere.)
So, when the robot moves its hand forward, this movement is the result of a complicated network of interacting forces, with each motor “pushing” off of the two bodies it connects: the wrist pushes the hand and lower arm; the elbow pushes the upper and lower arm; the shoulder pushes the upper arm and torso; the hip pushes the torso and upper leg; the knee pushes the upper and lower leg; the ankle pushes the lower leg and foot. Ultimately, the foot’s movement pushes the ground it’s resting on, which creates a reaction force that travels back up this network of joints, reaching the hand and moving it forward.
(At least, this is how we imagine it working — it’s possible we’re way off here, but this simple mental model of forces flowing through the joints seems at least somewhat valid.)
To simplify things, rather than explicitly modelling every step of this process, we considered condensing it so that the higher-level behaviour is modelled directly: the hand is moved by the ground reaction force. If we could define a constraint which coupled the hand’s movement to the foot’s movement, we could potentially use this as a motor to drive the movement of the hand in a physically valid way — by pushing off of the ground!
Of course, such a constraint is a bit hard to imagine, since in reality you can’t just couple one physical property to another like that; you would need a mechanism which physically connected the hand to the foot. But from the point of view of the simulator, such a coupling is perfectly valid — it can be expressed mathematically just like any other constraint (such as more “realistic” constraints like ball-and-socket or friction).
In this way, we can avoid having to ever deal with the mess of internal forces that normally propagate inside the robot’s body from ground to hand — instead it’s as if we have a perfect controller which knows exactly how to control all of the joint motors in order to achieve a complete transmission of force from ground to hand.
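As a rough illustration of what such a coupling could look like in solver terms, here’s a tiny velocity-level impulse solver in Python. The two-body setup, the constraint function, and the numbers are all hypothetical; this isn’t our solver’s actual API:

```python
# Sketch of the "fake" coupling: a constraint whose error ties the hand's
# velocity to the foot's, as if force were transmitted perfectly through
# the body. Purely illustrative; bodies are just dicts of velocity and mass.

def coupling_error(hand_vx, foot_vx, ratio=1.0):
    """Zero when the hand moves forward exactly as the foot pushes backward."""
    return hand_vx + ratio * foot_vx

def solve_coupling(hand, foot, ratio=1.0, iterations=8):
    """Apply corrective impulses to both bodies until the error is ~zero,
    the same way a pin or friction constraint would be solved."""
    for _ in range(iterations):
        err = coupling_error(hand["vx"], foot["vx"], ratio)
        # Effective inverse mass of the constraint: 1/m_hand + ratio^2/m_foot.
        inv_mass = 1.0 / hand["m"] + ratio * ratio / foot["m"]
        impulse = -err / inv_mass
        hand["vx"] += impulse / hand["m"]           # hand driven forward...
        foot["vx"] += ratio * impulse / foot["m"]   # ...as the foot pushes back

hand = {"vx": 0.0, "m": 1.0}
foot = {"vx": -2.0, "m": 5.0}   # foot pushing backward against the ground
solve_coupling(hand, foot)
print(hand["vx"])               # the hand has been pulled forward
```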
This is in fact the strategy we used for the robots in the walking video: https://www.metanetsoftware.com/?p=508
Another strategy, which we are still in the process of implementing, is to exploit the fact that our physics simulation is driven by a special kind of constraint solver — a non-linear optimizer. Given a set of constraints that describe how “wrong” the current state of the world is, our solver adjusts the world state until it finds a configuration which is the “most un-wrong”; in other words, a least-squares fit which minimizes the error.
We started using this to model physical constraints — pins and ropes connecting bodies together, collisions pushing bodies apart — but it can actually be used to find a solution to any sort of problem that can be described as a measurement of error (“wrongness”).
In fact, many robotics papers use this sort of optimization in order to compute a valid pose for the robot — they define their desired movement in terms of error measurements (How much energy is used? How well-balanced is the robot? How closely does the robot’s state match a given keyframe?) and ask the optimizer to find a pose which best meets all their requirements.
They then take the pose calculated by the optimizer and feed it to their control system, which attempts to guide the robot to that pose.
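Here’s a minimal sketch of that pose-as-optimization idea, using the off-the-shelf least-squares routine from scipy (the error terms and weights are made-up placeholders, not any paper’s actual objective):

```python
# Describe the desired pose purely as error terms and let a least-squares
# optimizer find the pose that best satisfies all of them at once.
# Toy example: 3 joint angles, made-up weights and error functions.
import numpy as np
from scipy.optimize import least_squares

KEYFRAME = np.array([0.4, -0.2, 0.1])    # desired joint angles (radians)

def residuals(pose):
    keyframe_err = pose - KEYFRAME        # how closely does it match the keyframe?
    energy_err = 0.1 * pose               # how much movement/energy is used?
    balance_err = np.array([pose.sum()])  # toy stand-in for "how well-balanced?"
    return np.concatenate([keyframe_err, energy_err, balance_err])

result = least_squares(residuals, x0=np.zeros(3))
print("optimized pose:", result.x)
```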
Our main intuition behind a simplified control strategy is that we don’t actually have a real robot to control — this suggests that so long as the optimizer used to generate a target pose obeys the physical constraints of our simulation, generating a valid pose and moving the robot to that pose are in fact the exact same problem, because the robot model that the optimizer uses to generate the pose IS the actual robot. This means that, hopefully, we can avoid most of the super-complicated control problems. At least, this is our current theory. Fingers crossed!
We’ve glossed over many of the tricky problems, such as: how do we simultaneously model the hard limits which define the structure of the body? The elbows, shoulders, etc. all need to be simulated so that the body stays together in the desired shape; we just don’t want those joints to be actively motorized. In a proper robotics simulation, getting the motors to obey the joint constraints (for instance, knees can’t bend backwards) is simple, because the joints and motors are the same mechanism — you can just clamp the motor’s angle to the angle limits of the joint. In our “fake” system, where the network of motors is completely different from the network of structural pin joints, it’s not so easy to ensure that the motors respect the limits of movement on each joint’s angle.
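One direction we could imagine (emphasis on imagine; we haven’t settled on anything) is to express each joint limit as yet another error term for the optimizer: zero inside the legal range, growing once the angle strays outside it. A purely illustrative sketch:

```python
def limit_error(angle, lo, hi, stiffness=10.0):
    """One-sided penalty: zero while the joint angle is within [lo, hi],
    growing linearly with the violation outside it. Illustrative only."""
    if angle < lo:
        return stiffness * (lo - angle)
    if angle > hi:
        return stiffness * (angle - hi)
    return 0.0

# e.g. a knee that can't bend backwards: legal range 0 to 2.4 radians
print(limit_error(-0.3, 0.0, 2.4))   # violated: positive error to minimize
print(limit_error(1.0, 0.0, 2.4))    # legal: contributes nothing
```

Whether soft penalties like this would be stiff enough to keep knees from visibly bending backwards is exactly the sort of thing we’d have to test.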
Aside: So far our progress with Robotology has been slower than we anticipated — in part this is due to some of the problems being much more complicated than we initially assumed. However, it’s also due to our development process: rather than trying to find a solution that works while simultaneously figuring out the best way to implement this solution, we start by just trying to get something to work.
Once everything is working, we try to use it — typically, what we discover is that since it was implemented with an almost total disregard for software design, it’s horribly awkward to use and a big mess to maintain. Restructuring a system that works — but is a mess — is a much simpler problem than trying to figure out the ideal structure before you even know what a working solution looks like. And sometimes we get lucky and the proof-of-concept implementation is good enough to use as-is.
This does mean that we typically write everything twice — once to get it working, and once to make it clean, maintainable, and convenient to use. However, the alternative so far seems to be days of not writing anything, trying to plan out the “best” implementation, only to implement it and find that there are problems despite all that planning. Some amount of basic planning is required, but beyond that it seems more productive to go ahead and write a straightforward version, and then later figure out how to restructure it to avoid code duplication or to clean up the interface.
It does make it difficult to stomach as the previously immaculate code-base becomes bloated and full of obviously non-optimal solutions, but maybe we need to learn to tolerate code-mess a little more.
We hope to return to Robotology development next year, but will bring you more posts on the subject before then. You’ll definitely want to stay tuned for that.
—–
So that’s that for this month — whew, just made it. We need to get better at writing on time and not leaving things until the last minute. But we’ve stuck to our promise of at least one post per month so far, and surely we can all agree that’s progress.
I miss these posts. I can’t wait until you get back to Robotology! You’re really doing things with 2D physics I’ve never seen before.
I definitely sympathize with the messy code dilemma. Still, I think writing code twice definitely beats writing perfect code only to discover there’s something fundamentally wrong with the concept.
A year ago I wanted to post a suggestion, but I felt that you would go that way anyway. You mentioned layers – so what about physics layers? Let’s say you want a 4-legged robot to walk from left to right. Problem is, it has to keep its body in the air. Why not suspend its body on an invisible collision plane? So its legs touch the ground, but its body is suspended by a second invisible ground n pixels up. Then its legs can be used to push/pull and not have to keep the body afloat as well. The invisible plane is like a sheet of ice: moving left and right is completely effortless, but you can still employ some friction so that moving and stopping looks realistic, as the body is a mass.
A problem this raises, however: map design. If this visual trickery were employed, how would maps be created? The simple answer at the moment is to create the maps as you would anyway, but have the “floors” and inclines generate that second, physical floor above them. The steeper the incline, the less vertical space, until zero at a 90-degree vertical incline – in other words, a wall.
So then this would work inside elliptical shapes as well. Now the bodies of the robots can never hit the floor – unless that action is made intentional, such as real tripping, or the player falling from too high, etc.
This way, you can then focus on just the actions of the limbs – which are the primary motors of the robots – and thus can program in safeties. For instance, a robot that flips over its hands and, for a brief period, suspends its body higher – again, an example would be to put an invisible “growing” shape between its body and the boundary of the invisible ground. This shape sticks with the robot (always vertically aligned) and its default size is 0x0, unless needed. Then when the shape is needed, you expand it and change its dimensions to fit between the ground and the body – so it’s always like a buffer.
Anyway, back to my boring old PHP coding. 🙁
–Snow
You said nothing about your progress on Nv1.5 🙁
I’m with Destiny here. So eager to play the next version of N, because the current version doesn’t work quite right on my laptop (running Windows 7).
But it’s great to hear you are making good progress on Robotology, although it seems like you’re planning on releasing Nv1.5 and Robotology within a few months of each other!
These are such tantalizing challenges. You make me want to play around with teaching jointed critters to walk.
I can’t wait to see what you come up with. I hope you make a lot of robots with weird body plans. And while walking is the holy grail, I hope that when you are designing characters you play around with some simpler, more mechanical actions as well. There are a lot of wonderful simple possibilities, like the inchworm in your video.
Great stuff, I always love hearing about this project 🙂
*YAWN* I hate your new topic SO much… time for a segment called “REALLY?!” from otherworld99.
All that blogging… and you waste some of it on the past and that Robotology crap??? REALLY?!
And then there’s the time of the topic “On the road agaiN…” then that music (have NOT checked them out, nor ever will!) then you go to robotology?!?!?! REEEEEEEEEEALLLLLLLLLYYYYYYYYYYYY?!?!?!?!?!?!!?!?!?!?!?!?!?!?!
THEN YOU MENTION IT BEING A GAME, EVEN THOUGH IT’S NOT!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
REEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYAYYYYYYYYYYYYYAYYYYYYYYYYYYYYYYYAYYYYYYYYYYYYYYYAYAYAYAYAYAYAYAYAYYYYYYYYYYYYYYY!?!?!?!?!?!?!!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?
THEN!!!!!!!!! YOU DIDN’T SAY ANYTHING ABOUT N V2.0 (OR 1.5, DEPENDING!!!)
RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRREEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY?!?!?!?!!??!?!!?!?!?!?!!?!?!!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!?!!?!?!!?!?!?!?!?!?!?!?!?!?!?!??!?!?!?!!!?!?!?!?!?!?!?!?!
*deeeeeeeeep breath*
This has been REALLY!? from otherworld99!
This is redundantly redundant to the point of redundancy.
But enough of my long ranting of EEEEEEEEEVILLLL!
What will Robotology run on?
This is redundantly redundant to the point of redundancy
Hello,
nice to hear Robotology is on its way, and I’m sure the problems you’re encountering are interesting, but…
What about N? Some info about the new version would be really appreciated, or at least some attention to the current version (cleaning the highscore boards? I’m sure this would take like 1 hour max of your time; you could even hand this off to the community.)
So my main question is: can we expect a new version of N, and if so, when?
It hardly feels like two weeks since I read the last post, but it has indeed been a month D:
/me craves for updates as a zombie craves brains
N WAS THE GAME THAT STARTED IT ALL, I WILL NOT HAVE IT MALIGNED!!!
Sorry, I… I got so wrapped up in the old game, I– I NEED A HUG!!!
This is redundantly redundant to the point of redundancy.
Was Robotology abandoned?
@Ben: no, just put on hold while we figure out solutions to a few problems.
Hi, my name is Ruby Walser, aka rubywalser on the N game. I beat the whole N game on the Mac and have done a lot of the user levels. Soon after playing most of the levels, I needed something more, something different, but still as exciting as the N game. So I went on this website for answers, looking to see if you have made something different. Seeing that you guys are in the middle of making Robotology and Office Yeti, I cannot wait any longer, and I would love to know how much longer until you have made the finishing touches on those two games. Please respond. Also, you can contact me at [email protected] or [email protected].
Thanks,
Ruby Walser
@Ruby: Thanks, so glad you enjoyed N! You should really try N++, it’s even better than N! We’re hoping to be able to bring N++ to Steam next year, we’ll see how that goes.
We do not know when Robotology or Office Yeti will be done — all we know is, when they feel right, we’ll release them. I know that’s not the answer you’re looking for, but that’s how we make games — when they feel good and we feel proud of them, that’s how we know they’re done 😉 Please email us at metanetATmetanetsoftware.com if you want to chat more.