From 8e76a6b3b1770c250e246d97bcfbce6de7f51bae Mon Sep 17 00:00:00 2001
From: Jeremy Teitelbaum
Date: Fri, 27 Apr 2018 14:42:03 -0400
Subject: [PATCH] 5.9.8 added (in progress)

---
 BDA 5.9.8.ipynb | 124 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 124 insertions(+)
 create mode 100644 BDA 5.9.8.ipynb

diff --git a/BDA 5.9.8.ipynb b/BDA 5.9.8.ipynb
new file mode 100644
index 0000000..9a60fc0
--- /dev/null
+++ b/BDA 5.9.8.ipynb
@@ -0,0 +1,124 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Discrete Mixture Models\n",
+    "\n",
+    "Discrete mixture models: if $p_m(\\theta)$, for $m=1,\\ldots,M$, are conjugate prior densities for the sampling model $y|\\theta$, show that the class of finite mixture prior densities given by\n",
+    "$$\n",
+    "p(\\theta)=\\sum_{m=1}^{M} \\lambda_m p_m(\\theta)\n",
+    "$$\n",
+    "is also a conjugate class, where the $\\lambda_m$’s are nonnegative weights that sum to 1. This can provide a useful extension of the natural conjugate prior family to more flexible distributional forms. As an example, use the mixture form to create a bimodal prior density for a normal mean that is thought to be near $1$, with a standard deviation of $0.5$, but has a small probability of being near $-1$, with the same standard deviation. If the variance of each observation $y_1,\\ldots,y_{10}$ is known to be $1$, and their observed mean is $\\bar{y}=-0.25$, derive your posterior distribution for the mean, making a sketch of both prior and posterior densities. Be careful: the prior and posterior mixture proportions are different.\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's first do the general computation, then turn to the example.\n",
+    "\n",
+    "We have\n",
+    "$$\n",
+    "p(\\theta|y_1,\\ldots,y_{10})\\propto p(y_1,\\ldots,y_{10}|\\theta)p(\\theta),\n",
+    "$$\n",
+    "so\n",
+    "$$\n",
+    "p(\\theta|\\{y_{i}\\})\\propto \\sum_{m=1}^{M} \\lambda_{m}p(\\{y_{i}\\}|\\theta)p_{m}(\\theta).\n",
+    "$$\n",
+    "\n",
+    "Each of the terms $p_{m}(\\theta)p(\\{y_{i}\\}|\\theta)$\n",
+    "is equal to $p_{m}(\\theta|\\{y_{i}\\})p_{m}(\\{y_{i}\\})$,\n",
+    "where $p_{m}(\\{y_{i}\\})=\\int p(\\{y_{i}\\}|\\theta)p_{m}(\\theta)\\,d\\theta$ is the marginal likelihood of the data under the $m$-th prior component.\n",
+    "\n",
+    "Therefore the total posterior density is a weighted sum\n",
+    "of the individual posteriors:\n",
+    "\n",
+    "$$p(\\theta|\\{y_{i}\\})=\\sum_{m=1}^{M} c_{m}p_{m}(\\theta|\\{y_{i}\\}),$$\n",
+    "where\n",
+    "$$\n",
+    "c_{m}=\\frac{\\lambda_m p_{m}(\\{y_{i}\\})}{\\sum_{m'=1}^{M} \\lambda_{m'} p_{m'}(\\{y_{i}\\})}.\n",
+    "$$\n",
+    "Since each $p_{m}$ is conjugate, each $p_{m}(\\theta|\\{y_{i}\\})$ lies in the same family as $p_{m}(\\theta)$, so the posterior is again a finite mixture from the conjugate class. Note that the posterior weights $c_{m}$ differ from the prior weights $\\lambda_{m}$."
+   ]
+  },
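+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now the example. Below is a minimal numerical sketch; the mixture weights $\\lambda_1=0.9$ and $\\lambda_2=0.1$ are an assumed choice, since the problem only says that the probability of $\\theta$ being near $-1$ is small.\n",
+    "\n",
+    "With prior components $N(1,0.5^2)$ and $N(-1,0.5^2)$, $n=10$ observations of known variance $\\sigma^2=1$, and $\\bar{y}=-0.25$, the usual conjugate normal update gives posterior components $N(\\mu_m',V)$ with\n",
+    "$$\n",
+    "V=\\left(\\frac{1}{\\tau^2}+\\frac{n}{\\sigma^2}\\right)^{-1}=\\frac{1}{14},\\qquad \\mu_m'=V\\left(\\frac{\\mu_m}{\\tau^2}+\\frac{n\\bar{y}}{\\sigma^2}\\right).\n",
+    "$$\n",
+    "Up to a factor common to both components (which cancels in $c_m$), the marginal likelihood is $p_m(\\{y_{i}\\})\\propto N(\\bar{y}\\mid\\mu_m,\\tau^2+\\sigma^2/n)$."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "from scipy.stats import norm\n",
+    "import matplotlib.pyplot as plt\n",
+    "\n",
+    "# Data summary from the problem statement\n",
+    "n, sigma2, ybar = 10, 1.0, -0.25\n",
+    "\n",
+    "# Prior mixture: components N(mu_m, tau^2); the weights lams are an\n",
+    "# assumed choice -- the problem only says Pr(theta near -1) is small\n",
+    "mus = np.array([1.0, -1.0])\n",
+    "tau2 = 0.5 ** 2\n",
+    "lams = np.array([0.9, 0.1])\n",
+    "\n",
+    "# Conjugate normal update for each component\n",
+    "V = 1.0 / (1.0 / tau2 + n / sigma2)\n",
+    "mu_post = V * (mus / tau2 + n * ybar / sigma2)\n",
+    "\n",
+    "# Posterior mixture weights: c_m proportional to\n",
+    "# lambda_m * N(ybar | mu_m, tau^2 + sigma^2/n);\n",
+    "# the theta-independent factor of the likelihood cancels in the ratio\n",
+    "marg = norm.pdf(ybar, loc=mus, scale=np.sqrt(tau2 + sigma2 / n))\n",
+    "c = lams * marg\n",
+    "c = c / c.sum()\n",
+    "print('posterior means:', mu_post)\n",
+    "print('posterior weights:', c)\n",
+    "\n",
+    "# Sketch the prior and posterior mixture densities\n",
+    "theta = np.linspace(-3, 3, 500)\n",
+    "prior = sum(l * norm.pdf(theta, m, np.sqrt(tau2)) for l, m in zip(lams, mus))\n",
+    "post = sum(w * norm.pdf(theta, m, np.sqrt(V)) for w, m in zip(c, mu_post))\n",
+    "plt.plot(theta, prior, label='prior')\n",
+    "plt.plot(theta, post, label='posterior')\n",
+    "plt.xlabel(r'$\\theta$')\n",
+    "plt.legend()\n",
+    "plt.show()"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.4"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}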