This video shows the setup of the Leaf++ augmented reality performance, the second component of the Leaf++ project: a collaborative environment that uses computer vision and augmented reality to recognize and identify leaves, allowing them to be used to share information and media (added to them through a mobile application) and in performative environments. Here, leaves placed on a lightbox are recognized by a webcam and a computer vision algorithm, using the leaf descriptors created with the mobile application. Once a leaf is recognized, its contour drives a sound synthesizer algorithm that can be played live.
(via YouTube)
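
A minimal sketch of the lightbox pipeline described above, assuming OpenCV for segmentation and shape matching and a wavetable-style mapping from contour to sound. The names (LEAF_DB, contour_to_wavetable, the output file) and the specific descriptor (Hu-moment shape distance) are illustrative assumptions, not the project's actual code.

```python
import cv2
import numpy as np
from scipy.io import wavfile

# Hypothetical database of known leaves: name -> reference contour
# (e.g. captured earlier with the mobile application).
LEAF_DB = {}  # {"oak": reference_contour, ...}

def largest_contour(frame):
    """Segment the leaf silhouette on the bright lightbox and return its contour."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The leaf appears dark against the backlit surface, so invert the threshold.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return max(contours, key=cv2.contourArea) if contours else None

def identify(contour):
    """Match the contour against stored leaf shapes using Hu-moment distance."""
    best_name, best_score = None, float("inf")
    for name, ref in LEAF_DB.items():
        score = cv2.matchShapes(contour, ref, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best_name, best_score = name, score
    return best_name, best_score

def contour_to_wavetable(contour, size=2048):
    """Turn the contour's radius-vs-angle profile into one cycle of a waveform."""
    pts = contour.reshape(-1, 2).astype(np.float32)
    cx, cy = pts.mean(axis=0)
    radii = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    table = np.interp(np.linspace(0, len(radii), size, endpoint=False),
                      np.arange(len(radii)), radii)
    table -= table.mean()
    return table / (np.abs(table).max() + 1e-9)

def render_tone(table, freq=220.0, seconds=2.0, sr=44100):
    """Read the wavetable at the given frequency to synthesize a short tone."""
    phase = (np.arange(int(seconds * sr)) * freq / sr) % 1.0
    return table[(phase * len(table)).astype(int)].astype(np.float32)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # webcam looking down at the lightbox
    ok, frame = cap.read()
    cap.release()
    if ok:
        contour = largest_contour(frame)
        if contour is not None:
            name, score = identify(contour)
            print("best match:", name, "score:", score)
            wavfile.write("leaf_tone.wav", 44100,
                          render_tone(contour_to_wavetable(contour)))
```

In a live setting the same mapping would run per frame, so moving or swapping leaves on the lightbox continuously reshapes the waveform being played.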