Goal:
Train a CRF model for one morning of indoor activities with the Virtual Evidence Boosting (VEB) method. Evaluate the performance of the algorithm and gain a deeper understanding of the boosting methodology and the process of training Conditional Random Fields.
Method:
Training Conditional Random Fields using Virtual Evidence Boosting
Virtual Evidence Boosting extends the LogitBoost algorithm to make use of virtual evidence from neighbors, i.e., the neighbors' label distributions, rather than their true labels, and hence avoids over-estimating neighborhood dependencies (see the sketch after the references below). In addition, VEB performs feature selection and parameter estimation in a unified and efficient manner.
Lin Liao, Tanzeem Choudhury, Dieter Fox, and Henry Kautz. "Training Conditional Random Fields using Virtual Evidence Boosting." In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 2007), Hyderabad, India, January 2007.
Parameter Estimation in CRF using Virtual Evidence Boosting (this paper shows how to extend the binary-label case to multiple classes).
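To make the distinction concrete, here is a minimal MATLAB sketch (all variable names are hypothetical) contrasting a pairwise feature computed from a neighbor's hard label with the same feature computed as an expectation under the neighbor's virtual evidence, i.e. its current belief from belief propagation, which is the quantity VEB passes to the weak learners.

    % Minimal sketch: hard neighbor label vs. virtual evidence (hypothetical names).
    K = 3;                                % number of label states (here 3)
    k = 2;                                % state whose compatibility feature we compute

    % Hard evidence: the neighbor's label is known exactly.
    yNeighbor = 2;
    fHard     = double(yNeighbor == k);   % indicator feature, 0 or 1

    % Virtual evidence: only a belief over the neighbor's label is available,
    % e.g. the marginal produced by a belief propagation pass.
    veNeighbor = [0.1 0.7 0.2];           % distribution over the K states
    fVirtual   = veNeighbor(k);           % expectation of the indicator under the belief

    fprintf('hard feature = %.1f, virtual-evidence feature = %.2f\n', fHard, fVirtual);

Using the belief rather than a hard label keeps the weak learners from treating uncertain neighbors as if they were perfectly known, which is how VEB avoids over-estimating neighborhood dependencies.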
Dataset:
An MIT PlaceLab public dataset (PLIA1): a small, four-hour test dataset of relatively intensive home activity.
Platform:
All implementation and experiments will be done in MATLAB.
Timeline:
Weeks 1-2 (April 19th-May 2nd): Implement the BP stage to obtain the virtual evidence; compute the likelihood weights and working responses (see the sketch after the timeline).
Weeks 3-4 (May 3rd-May 16th): Implement the feature selection part of the algorithm; polish the code over the boosting iterations (also covered in the sketch after the timeline).
Goal by the milestone: finish the first four weeks' work described above.
Week 5 (May 17th-May 23rd): Run experiments on the PLIA1 dataset, collect the results, and evaluate performance.
Week 6 (May 25th-May 30th): Final write-up, make the poster, and prepare the presentation.
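As a rough guide for Weeks 1-4, the following MATLAB sketch shows one boosting iteration in the binary case (labels in {0,1}), using the standard LogitBoost weights and working responses that VEB builds on, and a simple weighted least-squares stump for feature selection. In VEB itself the probabilities p would come from the per-node beliefs computed by the BP stage, and the exact update rules are those given in Liao et al. (2007); everything here, including the variable names and the single median-threshold stump, is an illustrative assumption, not the paper's implementation.

    % One boosting iteration, binary labels in {0,1} (hypothetical, simplified).
    N = 200; D = 5;
    X = randn(N, D);                        % observation features, one row per node
    y = double(rand(N,1) > 0.5);            % node labels
    F = zeros(N,1);                         % current ensemble output per node

    p = 1 ./ (1 + exp(-2*F));               % stand-in for the BP beliefs b_i(y_i = 1)
    w = max(p .* (1 - p), 1e-6);            % LogitBoost weights
    z = (y - p) ./ max(p .* (1 - p), 1e-6); % working responses

    % Feature selection: fit a regression stump to (z, w) on each feature by
    % weighted least squares and keep the one with the smallest weighted error.
    bestErr = inf; bestFeat = 0; bestThr = 0; bestLR = [0 0];
    for d = 1:D
        thr  = median(X(:,d));              % single candidate split, for brevity
        left = X(:,d) <= thr;
        muL  = sum(w(left)  .* z(left))  / sum(w(left));
        muR  = sum(w(~left) .* z(~left)) / sum(w(~left));
        pred = muL * left + muR * (~left);
        err  = sum(w .* (z - pred).^2);
        if err < bestErr
            bestErr = err; bestFeat = d; bestThr = thr; bestLR = [muL muR];
        end
    end

    % Add the selected weak learner to the ensemble (scaled by 1/2 as in LogitBoost).
    left = X(:,bestFeat) <= bestThr;
    F = F + 0.5 * (bestLR(1) * left + bestLR(2) * (~left));

Each boosting round adds one such weak learner to the ensemble; the multi-class case replaces the binary p with per-class beliefs, as discussed in the second reference above.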
Rong Yang April 19th 2009