


Voodoo Camera Tracker: A tool for the integration of virtual and real scenes

Version 1.1.0 beta for Linux and Windows
Copyright (C) 2002-2010 Laboratorium für Informationstechnologie

This non-commercial software tool is developed for research purposes at the Laboratorium für Informationstechnologie, University of Hannover. Permission is granted to any individual or institution to use, copy, and distribute this software, provided that this complete copyright and permission notice is maintained, intact, in all copies and supporting documentation. University of Hannover provides this software "as is" without express or implied warranty.



The Voodoo Camera Tracker estimates camera parameters and reconstructs a 3D scene from image sequences. The estimation algorithm offers a fully automatic and robust solution for estimating camera parameters of video sequences. The results are useful for many applications, such as film production, 3D reconstruction, and video coding. The estimated parameters can be exported to the 3D animation packages 3D Studio Max, Blender, Lightwave, Maya, and Softimage.

The Voodoo Camera Tracker works very much like commercially available camera trackers (also called match movers), e.g. 3D-Equalizer by Science-D-Visions, boujou by 2d3, Matchmover by RealViz, PFTrack by The Pixel Farm, SynthEyes by Andersson Technologies, or VooCAT by Scenespector Systems. If you need a camera tracker with professional support, please consider buying a commercial product.

The estimation method consists of five processing steps:
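
The five steps themselves are not listed in this copy of the page. As a hedged sketch only, the control flow of a feature-based camera tracker of this kind can be pictured as below; the stage names follow common structure-from-motion practice and are not taken from the Voodoo documentation.

```python
# Hedged sketch of a generic feature-based camera tracking pipeline.
# All stages are stubbed; the names are illustrative, NOT Voodoo's
# actual processing steps.

def detect_features(frame):
    """Find interest points (e.g. corners) in one frame (stubbed)."""
    return [(10 * frame, 20 * frame)]  # placeholder coordinates

def match_features(features_per_frame):
    """Link features across consecutive frames into tracks (stubbed)."""
    return [list(pair) for pair in zip(features_per_frame, features_per_frame[1:])]

def reject_outliers(tracks):
    """Discard inconsistent correspondences (RANSAC-style in practice)."""
    return tracks  # no-op in this sketch

def estimate_cameras(tracks):
    """Recover an initial camera parameter set per frame (stubbed)."""
    return [{"frame": i, "params": None} for i in range(len(tracks) + 1)]

def refine(cameras, tracks):
    """Jointly refine cameras and 3D points (bundle adjustment in practice)."""
    return cameras

def track_sequence(num_frames):
    features = [detect_features(f) for f in range(num_frames)]
    tracks = reject_outliers(match_features(features))
    return refine(estimate_cameras(tracks), tracks)

cameras = track_sequence(5)
print(len(cameras))  # one camera estimate per input frame
```

In a real tracker each stub is a substantial algorithm; the sketch only shows how the stages feed into one another, ending in a per-frame camera estimate that could then be exported to an animation package.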


FaceTracker is a C/C++ API for real time generic non-rigid face alignment and tracking.


Non-rigid face alignment and tracking is a common problem in computer vision. It is the front end to many algorithms that require registration, for example face and expression recognition. However, those working on these higher-level tasks are often unfamiliar with the tools and peculiarities of non-rigid registration (e.g. pure machine learning scientists, psychologists, etc.). Even those working directly on face alignment and tracking often find implementing an algorithm from published work a daunting task, not least because there is no baseline code against which performance claims can be assessed. As such, the goal of FaceTracker is to provide source code and pre-trained models that can be used out of the box, for the dual purpose of:

  1. Promoting the advancement of higher level inference algorithms that require registration.

  2. Providing baseline code to promote quantitative improvements in face registration.


  1. Real time: 20-30 fps (depending on processor, compiler and use of OpenMP)

  2. Generic: designed to work for most people under most conditions

  3. No training required: a pre-trained model is provided

  4. Detection based initialisation: no user intervention required

  5. Automatic failure detection: requires no user re-initialisation

  6. Camera or video input
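
Detection-based initialisation and automatic failure detection (items 4 and 5 above) together form a simple control loop: detect to initialise, track frame-to-frame, and fall back to detection when tracking fails. The sketch below illustrates that loop with entirely hypothetical function names; it does not use FaceTracker's actual C/C++ API.

```python
# Hedged sketch of a detection-initialised tracking loop with automatic
# failure recovery. Every function here is a made-up stand-in, NOT the
# real FaceTracker interface.

def detect_face(frame):
    """Stand-in for a face detector used to (re)initialise the tracker."""
    return {"landmarks": "initial", "frame": frame}

def track_frame(state, frame):
    """Stand-in for per-frame landmark tracking from the previous state."""
    return {"landmarks": "tracked", "frame": frame}

def tracking_failed(state):
    """Stand-in for the tracker's internal failure check."""
    return state is None

def run(frames):
    state, results = None, []
    for frame in frames:
        if tracking_failed(state):
            state = detect_face(frame)   # detection-based (re)initialisation
        else:
            state = track_frame(state, frame)
        results.append(state["landmarks"])
    return results

print(run(range(3)))  # ['initial', 'tracked', 'tracked']
```

The point of the design is that no user intervention is ever needed: the same detector that bootstraps the tracker also recovers it after a failure.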


FaceTracker is available for download (for research purposes only). The library includes the C/C++ API, example code for interfacing with the API, a pre-trained model and documentation. To download it, please send an email to Jason Saragih (Jason.saragih@csiro.au).

The Tracker:

The code requires OpenCV 2.0 and the provided model was trained using the MultiPIE database. The tracker is based on a modified version of the constrained local model described in:

J. Saragih, S. Lucey and J. Cohn, "Deformable Model Fitting by Regularized Landmark Mean-Shifts", International Journal of Computer Vision (IJCV)
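
As a hedged outline of the approach (notation mine, not reproduced from the paper), a constrained local model fits a parametrised shape by trading off local detector responses at each landmark against a prior over plausible shapes:

```latex
\hat{\mathbf{p}} \;=\; \arg\min_{\mathbf{p}} \left[ \mathcal{R}(\mathbf{p})
  \;+\; \sum_{i=1}^{n} \mathcal{D}_i\big(\mathbf{x}_i(\mathbf{p});\, \mathcal{I}\big) \right]
```

where \(\mathbf{p}\) are the shape parameters, \(\mathbf{x}_i(\mathbf{p})\) is the image location of the \(i\)-th landmark, \(\mathcal{D}_i\) measures local misalignment according to the detector response in image \(\mathcal{I}\), and \(\mathcal{R}\) penalises implausible shapes. The regularized landmark mean-shift method of the paper optimises an objective of this general form.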

People Using FaceTracker:

Some people are using FaceTracker to do some really cool stuff:

  1. FaceOSC

  2. Face Cloning

  3. A Music Video!

  4. Face Projection

  5. Being John Malkovich

  6. pkmFace


Below are some examples of difficult YouTube videos that were processed using FaceTracker with the pre-trained model. There was absolutely no user intervention in any of the videos.