{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Discrete Mixture Models\n",
"\n",
"Discrete mixture models: if $p_m(\\theta)$, for $m=1,\\ldots,M$, are conjugate prior densities for the sampling model $y|\\theta$, show that the class of finite mixture prior densities given by\n",
"$$\n",
"p(\\theta)=\\sum_{m=1}^{M} \\lambda_m p_m(\\theta)\n",
"$$\n",
"is also a conjugate class, where the $\\lambda_m$’s are nonnegative weights that sum to 1. This can provide a useful extension of the natural conjugate prior family to more flexible distributional forms. As an example, use the mixture form to create a bimodal prior density for a normal mean that is thought to be near $1$, with a standard deviation of $0.5$, but has a small probability of being near $−1$, with the same standard deviation. If the variance of each observation $y_1,\\ldots,y_{10}$ is known to be $1$, and their observed mean is $\\bar{y}=−0.25$, derive your posterior distribution for the mean, making a sketch of both prior and posterior densities. Be careful: the prior and posterior mixture proportions are different.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we derive the general result (the posterior under a mixture prior is again a mixture of the component posteriors), then turn to the numerical example.\n",
"\n",
"We have\n",
"$$\n",
"p(\\theta|y_1,\\ldots,y_{10})\\propto p(y_1,\\ldots,y_{10}|\\theta)p(\\theta)\n",
"$$\n",
"so\n",
"$$\n",
"p(\\theta|\\{y_{i}\\})\\propto \\sum_{m=1}^{M} \\lambda_{m}p(\\{y_{i}\\}|\\theta)p_{m}(\\theta)\n",
"$$\n",
"\n",
"Each of the terms $p_{m}(\\theta)p(\\{y_{i}\\}|\\theta)$\n",
"is equal to $p_{m}(\\theta|\\{y_{i}\\})p_{m}(\\{y_{i}\\})$, where $p_{m}(\\{y_{i}\\})=\\int p(\\{y_{i}\\}|\\theta)p_{m}(\\theta)\\,d\\theta$ is the marginal likelihood under component $m$. Since each $p_m$ is conjugate, each component posterior $p_{m}(\\theta|\\{y_{i}\\})$ lies in the same family as $p_m$, so the mixture class is closed under updating.\n",
"\n",
"Therefore the total posterior density is a weighted sum\n",
"of the individual posteriors:\n",
"\n",
"$$p(\\theta|\\{y_{i}\\})=\\sum_{m=1}^{M} c_{m}p_{m}(\\theta|\\{y_{i}\\})$$\n",
"where\n",
"$$\n",
"c_{m}=\\frac{\\lambda_m p_{m}(\\{y_{i}\\})}{\\sum_{m'=1}^{M} \\lambda_{m'} p_{m'}(\\{y_{i}\\})}\n",
"$$"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Numerical example (a sketch): the problem does not state the prior weights,\n",
"# so lambda = (0.9, 0.1) is an assumption ('small probability of being near -1').\n",
"import numpy as np\n",
"\n",
"mu = np.array([1.0, -1.0])        # prior component means\n",
"tau = 0.5                          # prior component standard deviation\n",
"lam = np.array([0.9, 0.1])        # assumed prior mixture weights\n",
"n, sigma, ybar = 10, 1.0, -0.25   # sample size, known sd, observed mean\n",
"\n",
"# Conjugate normal update within each component\n",
"post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)\n",
"post_mean = post_var * (mu / tau**2 + n * ybar / sigma**2)\n",
"\n",
"# Posterior weights c_m proportional to lambda_m * p_m(ybar),\n",
"# with p_m(ybar) = N(ybar; mu_m, tau^2 + sigma^2/n)\n",
"s2 = tau**2 + sigma**2 / n\n",
"marg = np.exp(-(ybar - mu) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)\n",
"c = lam * marg\n",
"c = c / c.sum()\n",
"\n",
"print('component posterior means:', post_mean)   # approx [0.107, -0.464]\n",
"print('component posterior sd:', np.sqrt(post_var))\n",
"print('posterior weights:', c)                   # approx [0.683, 0.317]\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}