MAP approximation to the variational Bayes Gaussian mixture model and application

Learning in variational inference can broadly be seen as first estimating the class assignment variable and then using it to estimate the parameters of the mixture model. The estimation is mainly performed by computing the expectations of the prior models. However, learning is not restricted to expectation: several authors report other configurations that use different combinations of maximization and expectation for the estimation. For instance, variational inference is generalized under the expectation-expectation (EE) algorithm. Inspired by this, another variant, known as the maximization-maximization (MM) algorithm, has recently been applied to various models such as the Gaussian mixture, the Field-of-Gaussians mixture, and the sparse-coding-based Fisher vector. Despite this recent success, MM is not without issues. First, theoretical studies comparing MM to EE are rare. Second, the computational efficiency and accuracy of MM are seldom compared to those of EE. Hence, it is difficult to justify using MM over a mainstream learner such as EE or even Gibbs sampling. In this work, we revisit the learning of EE and MM on a simple Bayesian GMM case. We also compare MM theoretically with EE and find that they in fact obtain near-identical solutions. In the experiments, we perform unsupervised classification, comparing the computational efficiency and accuracy of MM and EE on two datasets. We also perform unsupervised feature learning, comparing a Bayesian approach such as MM with maximum-likelihood approaches on two datasets.
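To make the EE-versus-MM contrast above concrete, here is a minimal toy sketch in Python. It is not the paper's exact algorithm: it fits a two-component Bayesian GMM with a Dirichlet prior on the mixing weights and Gaussian priors on the component means, assuming a known, shared variance. The EE branch uses digamma-based posterior expectations and soft responsibilities; the MM branch plugs in posterior modes and hard MAP assignments. All hyperparameters, the initialization, and the simplifications are illustrative assumptions.

# Minimal sketch (illustrative assumptions, not the paper's exact algorithm):
# EE vs MM learning of a toy Bayesian GMM with known, shared variance.
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])  # toy 1-D data
N, K, sigma2 = len(x), 2, 1.0       # number of points, components, known variance
alpha0, m0, tau0 = 2.0, 0.0, 1e-2   # Dirichlet concentration (>1 so the mode exists); mean prior

def fit(mode, iters=50):
    m = np.quantile(x, np.linspace(0.1, 0.9, K))   # deterministic initial component means
    v = np.full(K, sigma2)                         # posterior variances of the means
    a = np.full(K, alpha0 + N / K)                 # Dirichlet posterior counts
    for _ in range(iters):
        # assignment step
        if mode == "EE":                           # expectations under q(pi), q(mu)
            log_pi = digamma(a) - digamma(a.sum())
            quad = (x[:, None] - m) ** 2 + v       # E_q[(x - mu_k)^2]
        else:                                      # MM: plug in posterior modes instead
            log_pi = np.log((a - 1) / (a.sum() - K))
            quad = (x[:, None] - m) ** 2
        log_r = log_pi - 0.5 * quad / sigma2
        if mode == "EE":                           # soft responsibilities
            r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
            r /= r.sum(axis=1, keepdims=True)
        else:                                      # hard MAP assignments
            r = np.eye(K)[log_r.argmax(axis=1)]
        # parameter step (conjugate updates; mean and mode of q(mu_k) coincide)
        nk = r.sum(axis=0)
        a = alpha0 + nk
        v = 1.0 / (tau0 + nk / sigma2)
        m = v * (tau0 * m0 + (r * x[:, None]).sum(axis=0) / sigma2)
    return m, a / a.sum()

for mode in ("EE", "MM"):
    means, weights = fit(mode)
    print(mode, "means:", np.round(np.sort(means), 2), "weights:", np.round(weights, 2))

On this well-separated toy data both variants should recover very similar means and weights, which mirrors the abstract's observation that EE and MM reach near-identical solutions.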

Bibliographic Details
Main Authors: Lim, Kart-Leong, Wang, Han
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Subjects: Engineering::Electrical and electronic engineering; Variational Bayes; Gaussian Mixture Model
Online Access:https://hdl.handle.net/10356/138544
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-138544
record_format dspace
spelling sg-ntu-dr.10356-138544 2020-05-08T01:33:44Z
title MAP approximation to the variational Bayes Gaussian mixture model and application
author Lim, Kart-Leong; Wang, Han
author2 School of Electrical and Electronic Engineering
topic Engineering::Electrical and electronic engineering; Variational Bayes; Gaussian Mixture Model
date_issued 2017
type Journal Article
citation Lim, K.-L., & Wang, H. (2018). MAP approximation to the variational Bayes Gaussian mixture model and application. Soft Computing, 22(10), 3287-3299. doi:10.1007/s00500-017-2565-z
issn 1432-7643
url https://hdl.handle.net/10356/138544
doi 10.1007/s00500-017-2565-z
scopus 2-s2.0-85017139654
journal Soft Computing, vol. 22, no. 10, pp. 3287-3299
language en
rights © 2017 Springer-Verlag Berlin Heidelberg. All rights reserved.
institution Nanyang Technological University
building NTU Library
country Singapore
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering
Variational Bayes
Gaussian Mixture Model
description Learning in variational inference can broadly be seen as first estimating the class assignment variable and then using it to estimate the parameters of the mixture model. The estimation is mainly performed by computing the expectations of the prior models. However, learning is not restricted to expectation: several authors report other configurations that use different combinations of maximization and expectation for the estimation. For instance, variational inference is generalized under the expectation-expectation (EE) algorithm. Inspired by this, another variant, known as the maximization-maximization (MM) algorithm, has recently been applied to various models such as the Gaussian mixture, the Field-of-Gaussians mixture, and the sparse-coding-based Fisher vector. Despite this recent success, MM is not without issues. First, theoretical studies comparing MM to EE are rare. Second, the computational efficiency and accuracy of MM are seldom compared to those of EE. Hence, it is difficult to justify using MM over a mainstream learner such as EE or even Gibbs sampling. In this work, we revisit the learning of EE and MM on a simple Bayesian GMM case. We also compare MM theoretically with EE and find that they in fact obtain near-identical solutions. In the experiments, we perform unsupervised classification, comparing the computational efficiency and accuracy of MM and EE on two datasets. We also perform unsupervised feature learning, comparing a Bayesian approach such as MM with maximum-likelihood approaches on two datasets.
author2 School of Electrical and Electronic Engineering
format Article
author Lim, Kart-Leong
Wang, Han
title MAP approximation to the variational Bayes Gaussian mixture model and application
publishDate 2020
url https://hdl.handle.net/10356/138544