EZ Study


Experimental Design & Data Analysis


Think of experimental design like a fork in the road. Down one path, you implement the program and observe the outcome. Down the other path, you don't implement the program and the outcome doesn't occur. But, how do we take both paths in the road in the same study? How can we be in two places at once?

Ideally, what we want is to have the same conditions -- the same people, context, time, and so on -- and see whether when the program is given we get the outcome and when the program is not given we don't. Obviously, we can never achieve this hypothetical situation. If we give the program to a group of people, we can't simultaneously not give it! So, how do we get out of this apparent dilemma?

Perhaps we just need to think about the problem a little differently. What if we could create two groups or contexts that are as similar as we can possibly make them? If we could be confident that the two situations are comparable, then we could administer our program in one (and see if the outcome occurs) and not give the program in the other (and see if the outcome doesn't occur). And, if the two contexts are comparable, then this is like taking both forks in the road simultaneously! We can have our cake and eat it too, so to speak.

That's exactly what an experimental design tries to achieve. In the simplest type of experiment, we create two groups that are "equivalent" to each other. One group (the program or treatment group) gets the program and the other group (the comparison or control group) does not.

In all other respects, the groups are treated the same. They have similar people, live in similar contexts, have similar backgrounds, and so on. Now, if we observe differences in outcomes between these two groups, then the differences must be due to the only thing that differs between them -- that one got the program and the other didn't.
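The two-group logic above can be sketched in a few lines of code. The following is a minimal illustration, not a real study: the participant "scores" and the 5-point program effect are made-up numbers, and the random split stands in for proper random assignment.

```python
import random
import statistics

def randomize(participants, seed=0):
    """Randomly split a list of participants into two equal-sized groups."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical baseline outcome scores for 20 participants.
rng = random.Random(42)
baseline = [rng.gauss(50, 10) for _ in range(20)]

# Random assignment makes the two groups comparable on average.
treatment, control = randomize(baseline, seed=1)

# Suppose the program raises outcomes by 5 points on average.
treated_outcomes = [x + 5 for x in treatment]

# Because only the program differs between the groups, the difference
# in mean outcomes estimates the program's effect.
diff = statistics.mean(treated_outcomes) - statistics.mean(control)
print(f"Observed difference in means: {diff:.2f}")
```

Because assignment is random, any remaining difference between the groups before treatment is due to chance alone, which is exactly what the statistical tests in the lectures below are designed to account for.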

Lecture Notes
We provide example-driven lecture notes with step-by-step explanations.

  Lecture01: Introduction to Experimental Design
  Lecture02: Statistical hypothesis testing
  Lecture03: Two sample T-test, Paired T-test
  Lecture04: Analysis of Variance
  Lecture05: Least Square Estimation
  Lecture06: Fisher's Test, Tukey's Test
  Lecture07: Orthogonal Contrasts
  Lecture08: Variable Transformation, Box-Cox Method, Bartlett's test
  Lecture09: Power and Sample size determination
  Lecture10: Nonparametric Kruskal-Wallis Test

  Lecture11: Blocking principle, Hardness testing principle
  Lecture12: Latin Square Design
  Lecture13: Introduction to Factorial Design
  Lecture14: Blocking in Factorial Design
  Lecture15: 2^k Factorial Design
  Lecture16: Unreplicated 2^k factorial design
  Lecture17: Blocking and Confounding in 2^k Factorial Design
  Lecture18: Random Effect Modelling
  Lecture19: Two-way Fixed Effects
  Lecture20: Mixed Model, F-Test
  Lecture21: Two-stage nested design
  Lecture22: Split-Plot Design
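
As a small taste of the material in Lecture03, here is a sketch of Welch's two-sample t statistic computed from scratch with the Python standard library. The group data are invented illustrative numbers, and a real analysis would compare the statistic against a t distribution to get a p-value.

```python
import math
import statistics

def two_sample_t(x, y):
    """Welch's two-sample t statistic and approximate degrees of freedom.

    Welch's version does not assume the two groups share a common variance.
    """
    nx, ny = len(x), len(y)
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)  # sample variances
    se2 = vx / nx + vy / ny                  # squared standard error of the difference
    t = (mx - my) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2**2 / ((vx / nx)**2 / (nx - 1) + (vy / ny)**2 / (ny - 1))
    return t, df

# Hypothetical outcome scores for a treatment and a control group.
treated = [51.2, 48.9, 55.3, 52.0, 49.8, 53.5]
control = [47.1, 45.6, 50.2, 46.8, 48.0, 44.9]

t, df = two_sample_t(treated, control)
print(f"t = {t:.3f}, df = {df:.1f}")
```

In practice one would use a library routine such as `scipy.stats.ttest_ind`, but writing the statistic out makes the formula in the lecture concrete.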

Practice is the best of all instructors. We all learn by doing, by experimenting
(and often failing), and by asking questions.

Related links:

Back to Statistics home     Linear Regression home     Time Series Modeling home
Back to SAS tutorial     Regression tutorial home