Dictionary training for sparse representation as generalization of K-means clustering

Bibliographic Details
Main Authors: Sahoo, Sujit Kumar, Makur, Anamitra
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2013
Online Access: https://hdl.handle.net/10356/96655
http://hdl.handle.net/10220/9970
Institution: Nanyang Technological University
Description
Abstract: Recent dictionary training algorithms for sparse representation such as K-SVD, MOD, and their variations are reminiscent of K-means clustering, and this letter investigates such algorithms from that viewpoint. It shows that, although K-SVD is sequential like K-means, it fails to simplify to K-means because it destroys the structure in the sparse coefficients. In contrast, MOD can be viewed as a parallel generalization of K-means, which simplifies to K-means without perturbing the sparse coefficients. Keeping memory usage in mind, we propose an alternative to MOD: a sequential generalization of K-means (SGK). While experiments suggest comparable training performance across the algorithms, complexity analysis shows MOD and SGK to be faster under a dimensionality condition.
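
To illustrate the two update styles the abstract contrasts, here is a minimal NumPy sketch (not the authors' code): a MOD-style update refits the whole dictionary in one batch least-squares solve, while an SGK-style update refits atoms one at a time against the residual of the signals that use them, keeping the sparse coefficients fixed. The sparse-coding stage (e.g. OMP) and atom normalization are omitted, and the function names and matrix shapes are illustrative assumptions.

```python
import numpy as np

def mod_update(Y, X):
    """MOD-style update: refit all atoms at once by solving
    min_D ||Y - D X||_F^2, i.e. D = Y X^T (X X^T)^{-1}."""
    # Solve via lstsq for numerical stability; Y is (n, N), X is (K, N).
    return np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T

def sgk_update(Y, D, X):
    """SGK-style update: refit atoms one at a time against the residual
    of the signals that use them, with the sparse coefficients X held
    fixed (K-SVD, by contrast, also re-estimates those coefficients)."""
    D = D.copy()
    for k in range(D.shape[1]):
        users = np.nonzero(X[k, :])[0]            # signals that use atom k
        if users.size == 0:
            continue                              # unused atom: leave as-is
        # Residual of those signals with atom k's own contribution removed.
        E_k = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        x_k = X[k, users]
        D[:, k] = E_k @ x_k / (x_k @ x_k)         # 1-D least-squares fit
    return D
```

If each column of X is constrained to a single entry equal to 1 (every signal assigned to exactly one atom), both updates collapse to averaging each cluster's signals, i.e. the K-means centroid step; this is the sense in which MOD and SGK generalize K-means.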