EZ Study


4. Bayesian Procedure in SAS

0. Intro to Bayesian   1. Bayesian vs. Frequentist   2. Metropolis Algorithm Example   3. Gibbs Sampler Example
4. Bayesian Proc in SAS   5. Intuitive Example for the Beta Distribution   6. Trick to Compare Two Baseball Players

Bayesian Example in Proc Genmod
To demonstrate, we use the data described in Ibrahim, Chen, and Lipsitz, "Monte Carlo EM
for Missing Covariates in Parametric Regression Models".

        proc genmod data=Liver descending; /* sort the response from high to low; reference = high */
        class alcohol(desc);
        model Y = X1-X6 alcohol
        / dist=poisson link=log plots=none;
        /* for a 0/1 binary response, use instead:
           model Y = X1-X6 alcohol / dist=binomial link=logit;  */
        lsmeans alcohol / diff oddsratio plots=all cl;
        /* report differences of LS-means as odds ratios, with confidence bounds and default plots */
        bayes seed=1234 coeffprior=normal sampling=arms nmc=25000
        outpost=bayes_prob1 plots(smooth)=all diag=all;
        /* nmc= sets the number of iterations after burn-in; arms = adaptive rejection Metropolis sampling */
        /* run proc means on the output data set bayes_prob1 to see the % of draws > 0 */
        /* coeffprior=normal alone is a noninformative normal prior with mean 0 and variance 10^6;
           to read a pre-specified normal prior from a data set instead: */
        bayes seed=1234 coeffprior=normal(input=NormalPrior); run;

        /* the prior data set must be created before the PROC GENMOD step that reads it */
        data NormalPrior;
        input _type_ $ Intercept X1-X6;
        datalines;
        Var 1e6 0.0005 1e6 1e6 1e6 1e6 1e6
        Mean 0.0 0.1385 0.0 0.0 0.0 0.0 0.0
        ;
        run;

Maximum likelihood estimates of the model parameters are computed by default. The initial analysis uses noninformative independent normal priors with mean zero and variance 10^6 for each coefficient.
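To illustrate the comment in the code above about summarizing the OUTPOST= draws, here is a minimal Python sketch. The draws are simulated stand-ins, not output from the Liver data; the point is only how the posterior mean and the fraction of draws above zero are computed.

```python
import random

random.seed(0)
# hypothetical posterior draws for one regression coefficient, standing in
# for a column of the OUTPOST= data set (e.g. bayes_prob1)
draws = [random.gauss(0.5, 0.3) for _ in range(10000)]

# posterior mean, and the posterior probability that the coefficient is
# positive: simply the fraction of draws above zero (what running PROC MEANS
# on the output data set would report)
post_mean = sum(draws) / len(draws)
p_positive = sum(d > 0 for d in draws) / len(draws)
print(round(post_mean, 2), round(p_positive, 2))
```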

Bayesian Example in Proc Phreg
PROC PHREG fits the Cox proportional hazards model (Cox model) and the piecewise exponential model.
The key quantity in survival models is the baseline hazard function, Lambda_0(t), which describes how the hazard (risk) changes over time at baseline levels of the covariates. In the Cox model, the hazard for covariate vector x is Lambda(t|x) = Lambda_0(t) * exp(x'beta).

        ods graphics on;

        proc phreg data=VALung;
        class Prior(ref='no') Cell(ref='large') Therapy(ref='standard');
        model Time*Status(0) = Kps Duration Age Prior Cell Therapy;
        /* censored data: Time is the survival time, Status=0 marks censored observations */
        bayes seed=1 coeffprior=normal(input=Prior)
        diagnostic=(autocorr ess) plots=trace;
        /* Prior is a data set of prior means and variances (like NormalPrior above);
           request autocorrelation and effective-sample-size diagnostics plus trace plots */
        hazardratio 'Hazard Ratio Statement 1' Therapy;
        hazardratio 'Hazard Ratio Statement 2' Age / unit=10;
        hazardratio 'Hazard Ratio Statement 3' Cell; run;
        proc phreg data=sasuser.methadone;
        class clinic (param=ref ref='2');
        model time*status(0)=clinic dose prison / ties=exact;
        bayes seed=27513 coeffprior=normal(input=Prior) diag=all
        plots(smooth)=all sampling=rwm thin=2 nmc=100000;
        /* random walk Metropolis(RWM) algorithm */
        hazardratio "HR1" clinic;
        hazardratio "HR2" dose / units=10;
        hazardratio "HR3" prison; run;
        proc phreg data=Rats;
        model Days*Status(0)=Group;
        bayes seed=1 piecewise=hazard; run;

        ods graphics off;
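Because the baseline hazard cancels in any ratio of Cox-model hazards, the HAZARDRATIO statements above reduce to exponentiating the coefficient times the unit change. A minimal Python sketch, using a hypothetical Age coefficient rather than a fitted value from VALung:

```python
import math

# Cox model: hazard(t | x) = Lambda_0(t) * exp(x'beta), so Lambda_0(t)
# cancels in a hazard ratio between two covariate profiles.
beta_age = 0.03  # hypothetical Age coefficient (not fitted to the data)

# "hazardratio ... Age / unit=10" reports exp(10 * beta_age):
hr_per_10_years = math.exp(10 * beta_age)
print(round(hr_per_10_years, 3))
```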


The acceptance rate is closely tied to the sampling efficiency of a Metropolis chain. For random walk Metropolis, a high acceptance rate means that most proposals fall right around the current point; although they are frequently accepted, the chain moves slowly and does not explore the parameter space fully. Conversely, a low acceptance rate means that proposals are often rejected, so the chain again barely moves. An efficient Metropolis sampler has an acceptance rate that is neither too high nor too low; a common rule of thumb targets roughly 20%-50%, depending on the dimension of the problem.
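This trade-off can be seen in a minimal random walk Metropolis sketch in Python, run against a toy standard normal target (the function names here are illustrative, not anything from SAS):

```python
import math
import random

def rw_metropolis(log_target, x0, scale, n, seed=1):
    """Random walk Metropolis with a symmetric uniform proposal.
    Returns the chain and the observed acceptance rate."""
    rng = random.Random(seed)
    x, chain, accepted = x0, [x0], 0
    for _ in range(n):
        prop = x + rng.uniform(-scale, scale)  # symmetric random-walk step
        # accept with probability min(1, target(prop) / target(x))
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x, accepted = prop, accepted + 1
        chain.append(x)
    return chain, accepted / n

log_norm = lambda x: -0.5 * x * x  # standard normal log-density, up to a constant

_, rate_tiny = rw_metropolis(log_norm, 0.0, scale=0.05, n=20000)
_, rate_good = rw_metropolis(log_norm, 0.0, scale=2.5,  n=20000)
_, rate_huge = rw_metropolis(log_norm, 0.0, scale=50.0, n=20000)
print(rate_tiny, rate_good, rate_huge)
```

Tiny steps are almost always accepted but barely move the chain; huge steps are almost always rejected; an intermediate scale gives a moderate acceptance rate and the best mixing.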

Bayesian Example in Proc MCMC
        proc mcmc data=sasuser.birth outpost=birthout diag=all dic
        propcov=quanew nbi=5000 ntu=5000 nmc=200000 thin=5
        mchistory=brief plots(smooth)=all seed=27513;
        /* nbi= burn-in iterations, ntu= tuning iterations;
           propcov=quanew builds the initial proposal covariance via quasi-Newton optimization */
        parms (beta0 beta1 beta2 beta3 beta4) 0;
        /* either one vague prior shared by all coefficients ... */
        prior beta: ~ normal(0, var=100);
        /* ... or an informative prior on beta1 and vague priors on the rest:
           prior beta1 ~ normal(1.0986, var=0.00116);
           prior beta0 beta2 beta3 beta4 ~ normal(0, var=100);  */
        p = logistic(beta0 + beta1*alc + beta2*hyp + beta3*mo_wt + beta4*pretrm);
        model low ~ binary(p);
        preddist outpred=pred;
        ods output predsummaries=prediction_summaries;
        run;
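The logistic link used in the PROC MCMC program maps the linear predictor onto a probability. A minimal Python sketch, with hypothetical coefficient values rather than posterior estimates from the birth data:

```python
import math

def logistic(eta):
    # inverse logit: maps the linear predictor onto a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-eta))

# hypothetical coefficients (not posterior estimates from the birth data)
beta0, beta1 = -1.2, 1.1
alc = 1  # alcohol-use indicator
p = logistic(beta0 + beta1 * alc)
print(round(p, 3))
```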


Related links:
Continue to Intuitive example to understand Bayesian Analysis   Statistics Home
Back to Linear Regression tutorial home     Time Series Modeling     Homepage