Sunday, June 7, 2009

A Brief Lesson.....

My educational background is in epidemiology, and even though I've never truly used this training professionally (exactly), it is definitely useful when trying to make sense of all of the information out there on a medical condition (autism, in this case). With this background I can read a study and not only understand its results but also judge its strengths and weaknesses.

In order to understand what will probably be my next several posts, I feel it's important to briefly walk you through some basic statistics. After all, almost all research is presented using statistics, and we've all heard that statisticians can make the numbers say anything they want them to. That is true in many respects, but it comes at a great cost.

Because the purpose of statistics in research is to generalize findings from a sample to the population it was drawn from, the people conducting the research have to be confident that their results warrant that generalization. Therefore, statisticians consider two types of errors -- the "alpha" error and the "beta" error (yeah, real original names). No matter how well a study is conducted and how sound the findings, both of these errors always exist, and well-done studies report or account for them in their results. They are also considered in both the study design and the discussion sections of the research article, since that is where the study's weaknesses get acknowledged (or explained away).
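
To make that concrete, here is a quick simulation (a minimal sketch in Python with NumPy and SciPy -- my choice of tools, not anything taken from a specific study). It compares two groups drawn from the exact same population over and over; with an alpha level of 5%, roughly 5% of the comparisons come out "significant" purely by chance, which is the alpha error in action:

```python
# Minimal sketch: even with NO real difference between groups,
# an alpha level of 0.05 means ~5% of tests will look "significant."
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_trials = 10_000
false_positives = 0

for _ in range(n_trials):
    # Both samples come from the same population (mean 0, sd 1),
    # so any "significant" difference is an alpha (Type I) error.
    group_a = rng.normal(0, 1, size=30)
    group_b = rng.normal(0, 1, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_trials:.3f}")  # ~0.05
```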

The alpha error has to do with how confident the researchers are that the study results reflect something real and are not just due to chance (an unusual draw of participants, for example). Whenever a sample is selected, it's always possible to end up with a group of individuals who, just by luck, don't represent the population at large. Usually there are only a couple of participants like that, and most of the sample should be representative of the general population, allowing the researchers to report their results. From the alpha error, a degree of confidence is calculated as (1 - [alpha error]). This is often presented as a "confidence interval," which shows the range in which the true result of the statistical test(s) is likely to lie. When reading these results, assuming the alpha error is 5%, readers can take it that the researchers are 95% confident their results are accurate. The smaller the alpha error, the higher the confidence.
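
Here is what that calculation looks like in practice (again a minimal sketch in Python with NumPy/SciPy; the measurements are made up purely for illustration). With an alpha error of 0.05, we build a 95% confidence interval around a sample mean:

```python
# Minimal sketch: confidence = 1 - alpha, presented as an interval
# around the sample mean. Sample values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample = rng.normal(loc=100, scale=15, size=50)  # made-up measurements

alpha = 0.05
confidence = 1 - alpha            # 0.95
mean = sample.mean()
sem = stats.sem(sample)           # standard error of the mean
low, high = stats.t.interval(confidence, df=len(sample) - 1,
                             loc=mean, scale=sem)

print(f"Sample mean: {mean:.1f}")
print(f"{confidence:.0%} confidence interval: ({low:.1f}, {high:.1f})")
```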

The beta error has to do with how "powerful" the study is -- that is, how good it is at detecting a real effect when one actually exists. In other words, if 10 studies with the same design were run on a real effect, a study with 90% power would be expected to detect that effect in about 9 of the 10. The higher the power, the more reproducible the study's findings. Power is reported as (1 - [beta error]), and a commonly used target is 80% or 90%. One of the best ways of increasing a study's power is to increase its sample size.
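
To show the sample-size point, here is one more minimal sketch (Python with NumPy/SciPy, using a made-up effect of half a standard deviation). Simulating the same study at several sample sizes, the estimated power -- the share of simulated studies that detect the effect -- climbs as the groups get bigger:

```python
# Minimal sketch: for a fixed real difference between groups,
# estimated power (1 - beta) rises with the sample size per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
true_difference = 0.5   # hypothetical effect: half a standard deviation
n_simulations = 2_000

for n in (10, 30, 50, 100):
    detections = 0
    for _ in range(n_simulations):
        control = rng.normal(0.0, 1.0, size=n)
        treated = rng.normal(true_difference, 1.0, size=n)
        _, p_value = stats.ttest_ind(control, treated)
        if p_value < alpha:
            detections += 1
    power = detections / n_simulations  # estimated 1 - beta
    print(f"n per group = {n:>3}: estimated power = {power:.2f}")
```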

So, hopefully, this will give everyone a basis for understanding much of what is coming next. I know I'm posting a lot right now, but I'm still trying to get myself caught up a bit.....
