Dictionary training for sparse representation as generalization of K-means clustering
Format: Article
Language: English
Published: 2013
Online access: https://hdl.handle.net/10356/96655 and http://hdl.handle.net/10220/9970
Summary: Recent dictionary training algorithms for sparse representation, such as K-SVD, MOD, and their variations, are reminiscent of K-means clustering, and this letter investigates such algorithms from that viewpoint. It shows that, although K-SVD is sequential like K-means, it fails to simplify to K-means because it destroys the structure of the sparse coefficients. In contrast, MOD can be viewed as a parallel generalization of K-means, which simplifies to K-means without perturbing the sparse coefficients. With memory usage in mind, we propose an alternative to MOD: a sequential generalization of K-means (SGK). While experiments suggest comparable training performance across the algorithms, complexity analysis shows MOD and SGK to be faster under a dimensionality condition.
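To make the contrast concrete, below is a minimal numpy sketch of the two dictionary-update styles the abstract compares, assuming the sparse codes `X` have already been computed by some sparse coding stage (e.g., OMP). The function names and the toy demo are illustrative, not the authors' reference implementation; atom renormalization is omitted here so that the reduction to K-means in the clustering case is exact.

```python
import numpy as np

def mod_update(Y, X):
    """MOD-style parallel update: refit the whole dictionary at once by
    least squares, D = Y X^T (X X^T)^+, leaving the sparse codes X untouched."""
    return Y @ X.T @ np.linalg.pinv(X @ X.T)

def sgk_update(Y, D, X):
    """SGK-style sequential update: refit one atom at a time by least squares,
    again keeping X fixed. (K-SVD instead rewrites both the atom and its
    coefficients via an SVD of the same residual.)"""
    D = D.copy()
    for k in range(D.shape[1]):
        users = np.flatnonzero(X[k])        # training signals that use atom k
        if users.size == 0:
            continue                        # unused atom: leave it alone
        xk = X[k, users]
        # residual of those signals with atom k's own contribution added back
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], xk)
        D[:, k] = E @ xk / (xk @ xk)        # one-dimensional least squares
    return D

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((8, 50))        # 50 training signals in R^8
    labels = np.arange(50) % 4              # clustering case: 4 "clusters"
    X = np.zeros((4, 50))
    X[labels, np.arange(50)] = 1.0          # one-hot codes, one atom per signal
    centroids = np.stack([Y[:, labels == k].mean(axis=1) for k in range(4)], axis=1)
    # With one-hot codes, both updates reduce to the K-means centroid step.
    assert np.allclose(mod_update(Y, X), centroids)
    assert np.allclose(sgk_update(Y, rng.standard_normal((8, 4)), X), centroids)
```

In the one-hot case, `X @ X.T` is a diagonal matrix of cluster sizes and `Y @ X.T` sums each cluster's members, so both least-squares updates collapse to the K-means centroid computation. K-SVD's SVD-based atom update rewrites the coefficients as well, which is the step that breaks this reduction.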