Once you can control the basic features of a Kinect device (or any time-of-flight camera), the next thing many users look for is skeleton tracking. Skeleton tracking means retrieving the positions of the different joints of the human skeleton from depth images.
The best-known solutions (arguably the only ones) are the Microsoft Kinect SDK and the OpenNI framework. If you are looking for a Free Software solution, though, you are out of luck. Microsoft’s SDK, apart from obviously being closed source, does not even allow commercial use. The OpenNI framework does allow that, but this is as far as the meaning of the word “open” in OpenNI goes… You cannot adapt or improve their code, nor learn from it. To solve this problem, Igalia has just created Skeltrack.
Free Software Skeleton Tracking
Skeltrack is a Free and Open Source Software library whose goal is to provide easy to use human skeleton tracking.
Skeltrack’s implementation is based on a paper by Andreas Baak but, apart from other internal differences, it does not use a pose database. This means that the extraction of skeleton joints relies only on mathematics and heuristics: no calibration pose and no pose database are needed.
It provides an asynchronous API written with GLib, supports single-user tracking (one skeleton only) and currently tracks up to 7 joints: head, shoulders, elbows and hands.
Take a look at the video below to get an idea of what it can do:
How to use it
We took a more modular approach than the two projects mentioned above: instead of connecting directly to a Kinect device, Skeltrack expects to be given a depth buffer (i.e. a depth image from the Kinect) containing nothing but the user.
Still, three easy steps are all that is needed to get Skeltrack ready to use:
1) Use the GFreenect library we released a couple of months ago, connect to the depth stream signal and get the depth buffer;
2) If there are other objects (chairs, tables, walls, etc.) apart from the user, remove them by performing background subtraction or by excluding everything outside a depth threshold;
3) Skeltrack performs calculations that can be heavy depending on the machine and the buffer size, so reduce the size of the buffer given to Skeltrack by setting its dimension-reduction property to the desired reduction factor.
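Step 2 can be sketched in plain C. The helper below is illustrative only (it is not part of Skeltrack’s or GFreenect’s API): it zeroes every depth sample outside a near/far window, which is a crude but effective form of background subtraction when the user is the closest object in the scene. Treating zero as “no data” is the usual convention for Kinect depth frames, but check what your pipeline expects.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative helper, not Skeltrack API: keep only depth samples inside
 * a [near_mm, far_mm] window (same units as the buffer, e.g. millimeters)
 * and zero everything else, so only the user remains in the buffer. */
static void
threshold_depth_buffer (uint16_t *buffer, size_t n_samples,
                        uint16_t near_mm, uint16_t far_mm)
{
  for (size_t i = 0; i < n_samples; i++)
    {
      if (buffer[i] < near_mm || buffer[i] > far_mm)
        buffer[i] = 0;
    }
}
```

You would call this on each depth buffer received from GFreenect, before handing the (dimension-reduced) buffer to Skeltrack.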
Take a look at the test-kinect example (shown in the video) shipped with Skeltrack, where the steps above are implemented.
The documentation for the project is available here.
Skeltrack is still in its early stages; we want to detect more skeleton joints and work on stabilizing the results, so its features and API might change in the future.
Feel free to get the code, file bugs and send patches at its GitHub repository.
Hopefully more people will contribute to it until we finally have a rock solid, easy to use, Free Software skeleton tracking library.
I need to thank Søren Hauberg, who was kind enough to point me in the right direction when I needed it.
Absolutely awesome!
Thanks a lot for bringing skeleton tracking to the FLOSS world.
Amazing.
This is an important advance of the FLOSS world.
You are a great man!!
As a biomechanist working on a budget, largely due to a constant overload of teaching, this is very welcome news to me. I can’t wait to start playing with this! Thanks folks!
Thanks for this nice piece of code!
Hello,
Do you know the OpenNI project ??
http://openni.org/Documentation/ProgrammerGuide.html#Overview
It has similar tracking capabilities and also the capability of combining other inputs, to create a more comprehensive user experience.
Your API is a nice idea, but there is no need to reinvent the wheel 🙁
Cheers!
Hi Randy,
Of course I knew about the OpenNI project. I even mentioned that in this very post together with the reason why we developed Skeltrack.
Basically, OpenNI’s skeleton tracking is not Open Source. That might not make a difference if you’re just a user, but imagine you’re a company relying on a skeleton-tracking solution and you run into some problem: if the solution is Open Source, you can fix it yourself or hire someone to do it right away; with a proprietary solution, only its authors can fix it, which carries several risks: their price, their availability, whether they’re still in business, etc.
This is one of the reasons why Free and Open Source Software is better for business, and it’s one you shouldn’t underestimate.
I plan to show this to my buddy we were just talking about this recently!
This is fantastic work.
Are you able to track the angle of the neck? I need to measure how much the head bends or rotates to take care of patients.
Hi Florence,
Currently Skeltrack does not provide a way to get the neck’s angle, only the positions of the head and the other joints.
I understand that tracking head rotation would be difficult because the Kinect does not track landmarks on the face.
However the skeleton that is displayed shows a stick for the neck. Is it possible to get the coordinates of each end of that stick? Couldn’t one get the neck angle by comparing the neck stick’s position vs. the position of the line connecting the shoulder points?
You could probably get an angle close to the real neck angle by comparing the head with the shoulders’ line, as you say.
Please note that we want to add more joints in the future (like the shoulders’ center, spine center, etc.), which could help you figure out that angle even better.
We just imaged me, and the stick to my head does not change angle relative to my shoulder-to-shoulder line when I bend my head down toward a shoulder. So now my question is whether the head is being tracked at all, or is it just at a constant position relative to the body?
Of course the head is being tracked and it is not a constant position…
How did you connect the stick to the head? In the video I don’t have one, I have a circle.
I am afraid that, because of the way the extrema are tracked, it would be difficult to get accurate angles like the ones you’re trying to measure.
We were just experimenting with the OpenNI code, Joaquim. Yesterday we observed that it tracks the stick to the head only in the beginning, then settles down to a constant 90-degree angle to the shoulder-to-shoulder line. We are guessing that the head tracking only occurs during some sort of initial calibration to the subject’s body.
In your video, when your subject bent his head down, the circle did not seem to go down too. We played your video over and over, and that is our impression. So you are tracking the head’s position over the body, but it doesn’t look like you are detecting when the head bends forward.
By the way, your use of a circle is a much nicer presentation than the neck stick.
The skeleton joints are tracked as positions, and to check whether there’s a “bend” in the head you would need at least two positions (head bottom and head top, for example), so it’s obvious that the circle doesn’t bend.
The head is calculated taking the shoulders into account, and this might not capture bending, but it doesn’t mean the head is fixed, as you said before (it is not).
1. Why can’t the head be tracked accurately as one of the extrema?
2. Can you “subtract” the floor so that a person lying on the floor can be tracked?
Hi Florence,
Why do you say the head can’t be tracked? It is tracked as one of the extrema.
About the floor, no. It expects a person to be standing.
Cheers,
Thanks a lot for providing this open source code. Is it possible to use CMake build system? I have trouble with the configuration of Skeltrack. The autogen.sh fails.
Hi Alireza,
Skeltrack uses the GNU build system, just like most libraries based on GNOME technologies. We have no intention to use CMake.
If autogen fails, it is probably because of missing dependencies.
Hi Joaquim,
I have a research project where we want to study how kids really behave when they are restrained in their car seat and how the seat belt falls on their shoulder. We are about to instrument a car with a Kinect and are looking for the best software to use for our data analysis. One issue we have with Microsoft’s algorithm is that the spine joint is only available in standing mode, which is not really an option for us.
We are OK with post-processing as long as the data collected does not get out of proportion. Does your algorithm work in near and seated mode? What are the joints available for the upper body? Thanks. Helen
Hi Helen,
Skeltrack currently tracks the following upper body joints: head, shoulders, elbows and hands.
It was not designed to track a sitting pose with the whole body in the frame, and I also think the car’s interior surroundings could make the detection difficult.
I’m sorry that it might not be of great help for your project at this point. If you would like Igalia to help with some improvements in Skeltrack or in the processing of the Kinect’s depth buffer please send me an email.
Thank you for your contact and my best wishes for that interesting project.
Hi. You mention time-of-flight cameras. I believe the new Kinect One (The second Kinect) is time-of-flight, but the old one (The first Kinect for Xbox 360 and Kinect for Windows) was not. Is this correct?
Hi Troy,
As you say, the old one was not TOF; I don’t know about the new one, though.
Cheers,