Augmented reality (AR) is limited by how you can interact with the displayed virtual object. Typically, the object can only be moved by moving the AR marker. In this project, we used the Microsoft Kinect sensor to let the user virtually touch and manipulate the displayed object. To do this, we use the sensor's skeletal tracking to let the user rotate and resize the object with their hands. While doing so, the user's hands will inevitably come between the virtual object and the camera. To prevent the virtual object from being drawn incorrectly over the hands, the Kinect depth map is used to detect any obstruction and block the rendering of occluded pixels. This is done at the GPU level, using modified shaders.
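As an illustrative sketch only (the project's actual gesture code is not public), the rotate/resize manipulation can be derived from the two tracked hand joints: the change in the angle of the hand-to-hand vector drives rotation, and the change in hand separation drives scale. All names below are placeholders.

```cpp
// Minimal sketch, assuming skeletal tracking reports 3D positions for both
// hands; the mapping from hands to rotation/scale is an assumption here.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Angle (radians) of the hand-to-hand vector in the camera's XY plane.
static float handsAngle(const Vec3& left, const Vec3& right) {
    return std::atan2(right.y - left.y, right.x - left.x);
}

// Euclidean distance between the hands, used to drive resizing.
static float handsDistance(const Vec3& left, const Vec3& right) {
    float dx = right.x - left.x, dy = right.y - left.y, dz = right.z - left.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

int main() {
    // Hand positions when the manipulation gesture started (reference pose).
    Vec3 left0{-0.20f, 0.00f, 1.5f}, right0{0.20f, 0.00f, 1.5f};
    // Hand positions in the current skeleton frame.
    Vec3 left1{-0.25f, -0.05f, 1.5f}, right1{0.25f, 0.05f, 1.5f};

    // Rotate the object by the change in hand-to-hand angle ...
    float deltaAngle = handsAngle(left1, right1) - handsAngle(left0, right0);
    // ... and scale it by the ratio of hand separations.
    float scale = handsDistance(left1, right1) / handsDistance(left0, right0);

    std::printf("rotate by %.3f rad, scale by %.3f\n", deltaAngle, scale);
    return 0;
}
```

The occlusion test itself runs per pixel in the modified shaders. The CPU sketch below shows the equivalent logic with OpenCV, assuming a Kinect depth map registered to the color view and a read-back depth buffer for the virtual object; both inputs and the tolerance value are assumptions for illustration.

```cpp
// CPU sketch of the per-pixel occlusion test the shaders perform: a rendered
// pixel of the virtual object is discarded when the Kinect depth map reports
// real geometry (e.g. a hand) closer to the camera at that pixel.
#include <opencv2/core.hpp>

// kinectDepth: measured scene depth in metres, registered to the color view.
// objectDepth: depth of the virtual object per rendered pixel, metres;
//              0 where the object was not rendered.
// Returns a mask that is 255 where the object may be drawn, 0 elsewhere.
cv::Mat occlusionMask(const cv::Mat& kinectDepth, const cv::Mat& objectDepth,
                      float toleranceM = 0.02f) {
    CV_Assert(kinectDepth.size() == objectDepth.size());
    cv::Mat mask(kinectDepth.size(), CV_8UC1, cv::Scalar(0));
    for (int y = 0; y < kinectDepth.rows; ++y) {
        const float* kd = kinectDepth.ptr<float>(y);
        const float* od = objectDepth.ptr<float>(y);
        uchar* m = mask.ptr<uchar>(y);
        for (int x = 0; x < kinectDepth.cols; ++x) {
            bool rendered = od[x] > 0.0f;
            // Invalid Kinect samples (0) never occlude; otherwise the object
            // is hidden wherever real geometry lies in front of it.
            bool occluded = kd[x] > 0.0f && kd[x] + toleranceM < od[x];
            m[x] = (rendered && !occluded) ? 255 : 0;
        }
    }
    return mask;
}
```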
The AR system uses the Unity3D game engine to display the 3D models, together with a custom plugin created at the CIMMI research center to enable the AR. The plugin computes the pose of the Kinect in real time from the color image, while the depth map enables occlusion detection and hand tracking. The plugin was developed with the OpenCV library, which makes the image analysis straightforward. This work was carried out at the CIMMI research center (cimmi.qc.ca) by two interns, Alexis Legare-Julien and Renaud Simard, under the supervision of Jean-Nicolas Ouellet, Ph.D., Eng.
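For concreteness, here is a sketch of marker-based camera pose estimation with OpenCV, of the kind the plugin could run on the Kinect color image. The marker size, camera intrinsics, and the upstream corner-detection step are placeholders; the actual plugin internals are not public.

```cpp
// Sketch: recover the camera pose (rvec, tvec) from the four detected
// corners of a square AR marker of known side length, using cv::solvePnP.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

bool markerPose(const std::vector<cv::Point2f>& imageCorners,
                const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                float markerSideM, cv::Mat& rvec, cv::Mat& tvec) {
    if (imageCorners.size() != 4) return false;
    const float h = markerSideM / 2.0f;
    // Marker corners in its own coordinate frame, centred on the marker.
    std::vector<cv::Point3f> objectCorners = {
        {-h,  h, 0.0f}, { h,  h, 0.0f}, { h, -h, 0.0f}, {-h, -h, 0.0f}};
    // solvePnP yields the rotation/translation taking marker points into the
    // camera frame; Unity can then place its virtual camera accordingly.
    return cv::solvePnP(objectCorners, imageCorners, cameraMatrix, distCoeffs,
                        rvec, tvec);
}
```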