A Co-training Framework for Visual Tracking with Multiple Instance Learning
Authors | Huchuan Lu, Qiuhong Zhou, Dong Wang, Xiang Ruan |
Journal/Conference Name | IEEE International Conference on Automatic Face and Gesture Recognition (FG) |
Paper Category | ECE |
Paper Abstract | This paper proposes a Co-training Multiple Instance Learning (CoMIL) algorithm. Our framework builds on the co-training approach, which labels incoming data continuously and uses the predictions of each classifier to enlarge the training set of the other. The discriminative classifier is implemented with online multiple instance learning (MIL), which tolerates inaccurate positive samples during updating and allows some flexibility in finding a decision boundary. First, the two classifiers in our CoMIL tracking system improve each other mutually. Second, the update mechanism draws on multiple potential positive samples through MIL, which reduces the update errors that arise when only a single positive example is extracted. Experiments show that our CoMIL tracking algorithm outperforms several state-of-the-art trackers on challenging sequences. |
Date of publication | 2011 |
Code Programming Language | MATLAB |
Comment |
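The abstract describes two key ideas: co-training, where each view's classifier selects confident samples to update the other, and MIL-style updating, where several candidate patches are kept as a positive bag instead of trusting a single one. The MATLAB sketch below is only a toy illustration of that loop, not the authors' released code; the synthetic feature views, the Gaussian naive-Bayes weak classifiers, the learning rate, and the helper functions (initModel, score, updateModel) are all assumptions made for this example.

```matlab
function comil_cotraining_sketch()
% Toy sketch of a co-training loop with a MIL-style bag update.
% Hypothetical illustration only; not the paper's implementation.
rng(0);
nPos = 20; nNeg = 40; d1 = 5; d2 = 5;

% Synthetic data: two feature "views" of candidate target patches and
% background patches (in a tracker these would be e.g. Haar features).
posV1 = randn(nPos, d1) + 1.0;  negV1 = randn(nNeg, d1);
posV2 = randn(nPos, d2) + 1.0;  negV2 = randn(nNeg, d2);

% Initialise one Gaussian naive-Bayes classifier per view from seed samples.
m1 = initModel(posV1(1:5,:), negV1(1:10,:));
m2 = initModel(posV2(1:5,:), negV2(1:10,:));

for round = 1:3
    % Each classifier scores the candidate pool in its own view.
    s1 = score(m1, posV1);
    s2 = score(m2, posV2);

    % Co-training: confident candidates under view 1 become positives for
    % view 2's update, and vice versa. MIL flavour: keep several
    % high-scoring candidates as a bag rather than a single best patch.
    [~, idx1] = sort(s1, 'descend');  bagFor2 = posV2(idx1(1:5), :);
    [~, idx2] = sort(s2, 'descend');  bagFor1 = posV1(idx2(1:5), :);

    m1 = updateModel(m1, bagFor1, negV1, 0.85);
    m2 = updateModel(m2, bagFor2, negV2, 0.85);

    fprintf('round %d: mean pos score view1 = %.3f, view2 = %.3f\n', ...
        round, mean(score(m1, posV1)), mean(score(m2, posV2)));
end
end

function m = initModel(pos, neg)
% Per-dimension Gaussian models for the positive and negative classes.
m.muP = mean(pos,1); m.sdP = std(pos,0,1) + 1e-3;
m.muN = mean(neg,1); m.sdN = std(neg,0,1) + 1e-3;
end

function s = score(m, X)
% Log-likelihood ratio of the positive vs. negative Gaussian models.
lp = -0.5*sum(((X - m.muP)./m.sdP).^2, 2) - sum(log(m.sdP));
ln = -0.5*sum(((X - m.muN)./m.sdN).^2, 2) - sum(log(m.sdN));
s = lp - ln;
end

function m = updateModel(m, posBag, neg, lr)
% Running-average update of the class models, in the spirit of online
% MIL-style trackers; lr is an assumed forgetting factor.
m.muP = lr*m.muP + (1-lr)*mean(posBag,1);
m.sdP = lr*m.sdP + (1-lr)*(std(posBag,0,1) + 1e-3);
m.muN = lr*m.muN + (1-lr)*mean(neg,1);
m.sdN = lr*m.sdN + (1-lr)*(std(neg,0,1) + 1e-3);
end
```

Keeping the top-scoring candidates as a bag, rather than committing to the single best patch, is what the abstract refers to when it says the MIL update reduces the error caused by extracting only one positive example.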