Bayesian inference for finite mixtures of generalized linear models with random effects
dc.contributor.author | DeSarbo, Wayne S. | en_US |
dc.contributor.author | Lenk, Peter J. | en_US |
dc.date.accessioned | 2006-09-11T16:26:00Z | |
dc.date.available | 2006-09-11T16:26:00Z | |
dc.date.issued | 2000-03 | en_US |
dc.identifier.citation | Lenk, Peter J.; DeSarbo, Wayne S.; (2000). "Bayesian inference for finite mixtures of generalized linear models with random effects." Psychometrika 65(1): 93-119. <http://hdl.handle.net/2027.42/45757> | en_US |
dc.identifier.issn | 0033-3123 | en_US |
dc.identifier.issn | 1860-0980 | en_US |
dc.identifier.uri | https://hdl.handle.net/2027.42/45757 | |
dc.description.abstract | We present a hierarchical Bayes approach to modeling parameter heterogeneity in generalized linear models. The model assumes that there are relevant subpopulations and that, within each subpopulation, the individual-level regression coefficients have a multivariate normal distribution. However, class membership is not known a priori, so the heterogeneity in the regression coefficients becomes a finite mixture of normal distributions. This approach combines the flexibility of semiparametric latent class models, which assume common parameters within each subpopulation, with the parsimony of random effects models, which assume normal distributions for the regression parameters. The number of subpopulations is selected to maximize the posterior probability of the model being true. Simulations are presented that document the performance of the methodology for synthetic data with known heterogeneity and number of subpopulations. An application is presented concerning preferences for various aspects of personal computers. | en_US |
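The generative structure described in the abstract — latent class membership, class-specific multivariate normal distributions on individual-level coefficients, and a GLM response — can be sketched as follows. This is a minimal illustrative simulation, not the authors' code; all dimensions, mixture weights, class means, and the choice of a logit link are hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: S = 2 latent subpopulations, p = 3 regression coefficients,
# 50 subjects with 20 observations each.
S, p, n_subjects, n_obs = 2, 3, 50, 20
weights = np.array([0.6, 0.4])               # mixture weights (unknown a priori in the model)
class_means = np.array([[1.0, -0.5, 0.2],
                        [-1.0, 0.5, 0.8]])   # class-specific mean coefficient vectors
class_cov = 0.1 * np.eye(p)                  # within-class covariance of the coefficients

# Draw each subject's latent class, then its coefficient vector from that class's
# multivariate normal — i.e., the coefficients follow a finite mixture of normals.
z = rng.choice(S, size=n_subjects, p=weights)
beta = np.array([rng.multivariate_normal(class_means[k], class_cov) for k in z])

# Generate binary responses through a logit link (one GLM family the framework covers).
X = rng.normal(size=(n_subjects, n_obs, p))
eta = np.einsum('ijp,ip->ij', X, beta)       # linear predictor per subject/observation
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

print(beta.shape, y.shape)
```

In the paper's setting, `z`, `beta`, the class means and covariances, and the mixture weights would all be inferred from `y` and `X` via Markov chain Monte Carlo rather than fixed as they are here.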
dc.format.extent | 2050876 bytes | |
dc.format.extent | 3115 bytes | |
dc.format.mimetype | application/pdf | |
dc.format.mimetype | text/plain | |
dc.language.iso | en_US | |
dc.publisher | Springer-Verlag; The Psychometric Society | en_US |
dc.subject.other | Psychology | en_US |
dc.subject.other | Latent Class Analysis | en_US |
dc.subject.other | Consumer Behavior | en_US |
dc.subject.other | Generalized Linear Models | en_US |
dc.subject.other | Statistical Theory and Methods | en_US |
dc.subject.other | Statistics for Social Science, Behavioral Science, Education, Public Policy, and Law | en_US |
dc.subject.other | Psychometrics | en_US |
dc.subject.other | Assessment, Testing and Evaluation | en_US |
dc.subject.other | Bayesian Inference | en_US |
dc.subject.other | Finite Mixtures | en_US |
dc.subject.other | Heterogeneity | en_US |
dc.subject.other | Markov Chain Monte Carlo | en_US |
dc.title | Bayesian inference for finite mixtures of generalized linear models with random effects | en_US |
dc.type | Article | en_US |
dc.subject.hlbsecondlevel | Psychology | en_US |
dc.subject.hlbtoplevel | Social Sciences | en_US |
dc.description.peerreviewed | Peer Reviewed | en_US |
dc.contributor.affiliationum | University of Michigan Business School, 701 Tappan Street, 48109-1234, Ann Arbor, MI | en_US |
dc.contributor.affiliationother | Pennsylvania State University, USA | en_US |
dc.contributor.affiliationumcampus | Ann Arbor | en_US |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/45757/1/11336_2005_Article_BF02294188.pdf | en_US |
dc.identifier.doi | http://dx.doi.org/10.1007/BF02294188 | en_US |
dc.identifier.source | Psychometrika | en_US |
dc.owningcollname | Interdisciplinary and Peer-Reviewed |