
3. Gibbs Sampler Example in SAS


The Gibbs sampler is an algorithm that sequentially samples from the joint distribution of two or more random variables. It is a special case of the Metropolis and Metropolis-Hastings algorithms in which the proposal distributions exactly match the posterior conditional distributions, so proposals are accepted 100% of the time.
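To see why the acceptance rate is 100% (a sketch of the standard argument, which the paragraph above states but does not derive): when the proposal for component theta_j is its own full conditional, with the remaining components theta_{-j} held fixed, the Metropolis-Hastings ratio cancels to one:

        \alpha
        = \min\left(1,\ \frac{p(\theta^{*})\, q(\theta \mid \theta^{*})}{p(\theta)\, q(\theta^{*} \mid \theta)}\right)
        = \min\left(1,\ \frac{p(\theta_j^{*} \mid \theta_{-j})\, p(\theta_{-j}) \cdot p(\theta_j \mid \theta_{-j})}{p(\theta_j \mid \theta_{-j})\, p(\theta_{-j}) \cdot p(\theta_j^{*} \mid \theta_{-j})}\right)
        = 1 .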

Gibbs sampling requires you to decompose the joint posterior distribution into the full conditional distribution of each parameter in the model and then sample from these conditionals in turn. The sampler is efficient when the parameters are not highly dependent on each other and the full conditional distributions are easy to sample from.
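For the normal model used in the SAS example below, the decomposition works out as follows (a sketch inferred from the code's draws, which are consistent with the noninformative prior p(\mu, \sigma^2) \propto 1/\sigma^2; here \bar{y} is the sample mean yb and n the sample size):

        \mu \mid \sigma^{2}, y \sim N\left(\bar{y},\ \sigma^{2}/n\right),
        \qquad
        \sigma^{2} \mid \mu, y \sim \text{Inv-Gamma}\left(\frac{n}{2},\ \frac{1}{2}\sum_{i=1}^{n}(y_i - \mu)^{2}\right).

One sweep draws mu from the first conditional, recomputes sum((yi-mu)^2), then draws sg2 from the second; iterating yields a Markov chain whose stationary distribution is the joint posterior.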

Here is a SAS code example for Gibbs sampling; the credit belongs to Dr. Siva.

        data gibbs;
        * sufficient statistics of the observed data:
          n = sample size, sy = sum(y), yb = sample mean,
          ssy = sum(y**2), ss = sum((y-yb)**2) (not used below);
        n=10; sy=50; yb=5; ssy=286; ss=36;
        * initial values for mean mu and variance sg2;
        * i = iteration number;
        do; i=0; mu=0; sg2=1; output; end;
        seed1=138929; seed2=982374;

        do i=1 to 2500;
        * draw mu given sg2: mu | sg2 ~ Normal(yb, sg2/n);
        retain seed1;
        call rannor(seed1,r1);
        /* call rannor generates a standard normal sample in r1 */
        mu=yb+sqrt(sg2/n)*r1;
        ssmu=ssy-2*mu*sy+n*mu*mu; * ssmu = sum((yi-mu)**2);
        * draw sg2 given mu: sg2 | mu ~ Inverse-Gamma(n/2, ssmu/2);
        retain seed2;
        call rangam(seed2,n/2,r2);
        /* call rangam generates a Gamma(n/2, scale 1) sample in r2 */
        sg2=ssmu/(2*r2);
        output;
        end;
        run;
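A note on the sg2 draw (my reading; the original does not spell it out): CALL RANGAM returns a Gamma(shape, scale 1) variate, and inverting a gamma variate yields an inverse-gamma one, so

        r_2 \sim \text{Gamma}\left(\frac{n}{2},\ 1\right)
        \;\Longrightarrow\;
        \text{sg2} = \frac{\text{ssmu}}{2 r_2} \sim \text{Inv-Gamma}\left(\frac{n}{2},\ \frac{\text{ssmu}}{2}\right),

which is exactly the full conditional for sg2 stated earlier.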

        /* Plot mu versus i to check whether
        the Markov chain has reached a stationary stage */
        symbol interpol=join value=star height=1;
        proc gplot; plot mu*i; run;
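On installations without SAS/GRAPH, a roughly equivalent trace plot can be drawn with ODS Graphics (a minimal sketch, assuming PROC SGPLOT is available, as it is in Base SAS 9.2 and later):

        proc sgplot data=gibbs;
        series x=i y=mu;   /* trace plot: mu across iterations */
        run;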

Burn-in: discarding an initial portion of a Markov chain sample so that the effect of the initial values on the posterior inference is minimized. Thinning: keeping every kth simulated draw from each sequence in order to reduce sample autocorrelations. The step below applies burn-in only; a thinning sketch follows it.

        /* The first 500 draws are eliminated to account
        for the burn-in time of the Markov chain */
        data gibbs; set gibbs; if i >= 500; keep i mu sg2; run;
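A minimal sketch of the thinning step (a hypothetical addition, not part of Dr. Siva's original code; here k=5, keeping every 5th post-burn-in draw):

        data gibbs_thin; set gibbs;
        if mod(i,5)=0;   /* keep every 5th draw to reduce autocorrelation */
        run;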

        /* Find the Bayes point estimates and credible intervals for mu and sg2:
        mean_mu = Monte Carlo (MC) estimate of the posterior mean of mu
        mean_sg2 = MC estimate of the posterior mean of sg2 */
        proc univariate data=gibbs noprint; var mu sg2;
        output out=AA mean=mean_mu mean_sg2
        pctlpts=2.5 97.5 pctlpre=meanci varci; run;
        proc print data=AA; run;
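The OUTPUT statement writes a one-row dataset AA holding the posterior means and the 2.5th and 97.5th percentiles of each chain, which form equal-tailed 95% credible intervals. To judge whether thinning (or a longer run) is needed, one common diagnostic, added here and not part of the original tutorial, is the sample autocorrelation of each chain:

        proc arima data=gibbs;
        identify var=mu nlag=20;   /* autocorrelations of the mu chain */
        run;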

