(This post describes a project I completed for MAS.S62 Interactive Machine Learning at the MIT Media Lab. The assignment was to create a simple interactive machine learning system that gave the user visibility into the learning algorithm.)

GestuRe is a mixed-initiative interactive machine learning system for recognizing hand gestures. It attempts to give the user visibility into the classifier’s prediction confidence, and control over the conditions under which the system actively requests labeled gestures when its predictions are uncertain.

Training object or gesture recognition systems is often a tedious and uncertain process. The slow loop from gathering images to training a model to testing the classifier separates the process of selecting training samples from the situation in which the system is actually used.

In building such systems, I’ve frequently been frustrated by the inability to add corrective samples at the moment a mistaken classification occurs. I’ve also struggled to feel confident in the state of the classifier at any point in the training process. This system attempts to address both problems, producing a recognition system that is fluid to train and whose reliability can be understood and improved.

GestuRe creates a feature vector from each input image from a live camera using Histogram of Oriented Gradients [Dalal & Triggs 2005]. The user selects a class label for these input images, and the system then trains a Support Vector Machine-based classifier on the labeled samples [Vapnik 1995].
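The post doesn’t include GestuRe’s source, but the HOG step can be illustrated with a simplified NumPy sketch. This version divides the image into cells and builds an orientation histogram per cell, weighted by gradient magnitude; it omits the overlapping block normalization of the full Dalal & Triggs descriptor, which a real implementation (e.g. scikit-image’s `hog` or OpenCV’s `HOGDescriptor`) would include. All function and parameter names here are my own illustration, not GestuRe’s.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Simplified HOG: per-cell orientation histograms, magnitude-weighted."""
    img = img.astype(float)
    gy, gx = np.gradient(img)                  # image gradients
    mag = np.hypot(gx, gy)                     # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180 # unsigned orientation, 0..180
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            # histogram of orientations in this cell, weighted by magnitude
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)      # L2-normalize the descriptor

# A 64x64 grayscale frame yields 8x8 cells x 9 bins = 576 features
f = hog_features(np.random.default_rng(0).random((64, 64)))
```

The resulting vector, one per camera frame, is what a probabilistic SVM (e.g. scikit-learn’s `SVC(probability=True)`) would be trained on, and its per-class probabilities are the kind of confidence signal GestuRe surfaces to the user.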