TU-L0022 - Statistical Research Methods D, Lecture, 2.11.2021-6.4.2022
Coefficient alpha ("Cronbach's alpha") or tau-equivalent reliability (9:05)
What is coefficient alpha or "Cronbach's alpha", and is it a good estimate of reliability? What other alternatives for estimating reliability exist, and how should you go about choosing the best one?
Transcript
Coefficient alpha, sometimes called Cronbach's alpha or tau-equivalent reliability, is one of the most commonly used reliability indices in social science research.
It may not be the best index for every scenario, but it is commonly used, and therefore it is important to understand what it quantifies and under which assumptions it is a good reliability index.
Coefficient alpha is sometimes referred to as internal consistency reliability. What that means is that it is calculated based on how internally consistent the indicators are, that is, how highly the indicators are correlated. The index is calculated from the correlation matrix of the indicators, or the covariance matrix, depending on which equation you apply.
Coefficient alpha quantifies the reliability of a scale score calculated as a sum of the scale items. So if you have a scale of five items, or five measures that are supposed to measure the same thing, and you take a mean or a sum of those five items, then alpha quantifies the reliability of that sum.
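To make the calculation concrete, here is a minimal NumPy sketch (not from the lecture; the five-item data are simulated under tau-equivalence) of the covariance-matrix formula for alpha:

```python
import numpy as np

def coefficient_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (observations x items) data matrix."""
    cov = np.cov(items, rowvar=False)   # item covariance matrix
    k = cov.shape[0]                    # number of items
    item_variances = np.trace(cov)      # sum of the item variances
    total_variance = cov.sum()          # variance of the sum score
    return k / (k - 1) * (1 - item_variances / total_variance)

# Example: 200 simulated observations of a five-item scale where every
# item is the same true score plus independent noise (tau-equivalence)
rng = np.random.default_rng(1)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(size=(200, 5))
print(coefficient_alpha(items))   # close to the theoretical 25/30 = 0.83
```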
The part about internal consistency is very important, because while alpha is an internal consistency measure in the sense that it quantifies how strongly the indicators are correlated, it doesn't test whether internal consistency actually holds.
So here's an example of six indicators. Indicators x1, x2, and x3 are designed to measure one thing, so they're highly correlated because they measure the same thing. Indicators x4, x5, and x6 are designed to measure something else, and they are highly correlated with one another because they measure the same thing. But these two groups of indicators, x1-x3 and x4-x6, are uncorrelated with one another because they are designed to measure two distinct things that are not correlated.
So this set of six indicators measures two different dimensions, two different things. Yet if we calculate coefficient alpha from this correlation matrix, we get the value 0.7. So the fact that we got an alpha value that is acceptable to some researchers doesn't guarantee that the scale is a unidimensional, internally consistent one.
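The transcript doesn't give the exact correlations, but a within-block correlation of 0.7 (an assumed value) reproduces the lecture's result. The sketch below shows that this clearly two-dimensional matrix still yields alpha = 0.7:

```python
import numpy as np

# Two uncorrelated three-item blocks, within-block correlation 0.7 (assumed)
R = np.zeros((6, 6))
R[:3, :3] = 0.7      # x1-x3 measure one thing
R[3:, 3:] = 0.7      # x4-x6 measure another thing
np.fill_diagonal(R, 1.0)

k = R.shape[0]
mean_r = R[np.triu_indices(k, 1)].mean()     # average inter-item correlation
alpha = k * mean_r / (1 + (k - 1) * mean_r)  # standardized alpha
print(round(alpha, 3))                       # 0.7 despite two dimensions
```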
The important thing here is that internal consistency is something that alpha assumes; it is based on that assumption. A high alpha doesn't guarantee that you have an internally consistent, unidimensional scale.
So alpha is a reliability index, and it relies on the classical test theory assumptions. Basically this means that the indicators that go into alpha must be unidimensional, and the only error present is unreliability, which is random noise.
Alpha itself doesn't provide any test of the classical test theory assumptions. So before you apply alpha you have to at least check unidimensionality, and you can also check tau-equivalence, which basically means checking whether the indicators are equally reliable.
Classical test theory states that the measured score X is the sum of the true score T and the measurement error E, that is, X = T + E. In Greek letters T is tau, and therefore the model is called tau-equivalent: every indicator has the same amount of true-score variation in it. That's what tau-equivalence means.
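As a compact summary (my notation, not shown in the lecture), the tau-equivalent model and the alpha formula it motivates can be written as:

```latex
% Tau-equivalent model: every item is the same true score tau plus noise
\[
  X_i = \tau + E_i, \qquad
  \operatorname{Var}(X_i) = \operatorname{Var}(\tau) + \operatorname{Var}(E_i),
  \qquad i = 1, \dots, k
\]
% Coefficient alpha; under tau-equivalence it equals the reliability
% of the sum score
\[
  \alpha = \frac{k}{k-1}
  \left(1 - \frac{\sum_{i=1}^{k}\operatorname{Var}(X_i)}
                 {\operatorname{Var}\!\bigl(\sum_{i=1}^{k} X_i\bigr)}\right)
\]
```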
While alpha is commonly used, there are also common misconceptions that you see in research. The first misconception is that it was developed by Cronbach. That's not the case: it was developed a couple of decades before Cronbach's widely cited paper. It just happens to be the first index in his paper, where he discussed many different reliability indices, and therefore it got his name. But Cronbach himself said that this index probably shouldn't carry his name. That's why we call it coefficient alpha instead of Cronbach's alpha.
Also, alpha does not necessarily equal reliability. It is an estimate of reliability under the assumptions of classical test theory. If those assumptions don't hold, then alpha can underestimate reliability. When you calculate alpha, your statistical software also gives you an estimate of what alpha would be if you omitted one of the indicators.
For example, it might say that if you have five indicators going into the alpha, dropping one of them would increase the alpha. Should you drop the indicator? Not necessarily, because while dropping an indicator increases the alpha value, it can also mean that you're just capitalizing on chance factors, so the actual reliability doesn't increase.
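Here is a rough sketch (simulated data, helper names of my own) of the "alpha if item deleted" table that such software prints:

```python
import numpy as np

def coefficient_alpha(items):
    cov = np.cov(items, rowvar=False)
    k = cov.shape[0]
    return k / (k - 1) * (1 - np.trace(cov) / cov.sum())

def alpha_if_deleted(items):
    """Alpha of the remaining scale after dropping each item in turn."""
    return np.array([
        coefficient_alpha(np.delete(items, j, axis=1))
        for j in range(items.shape[1])
    ])

# Five tau-equivalent items: any increase from dropping one is chance
rng = np.random.default_rng(1)
items = rng.normal(size=(200, 1)) + rng.normal(size=(200, 5))
print(alpha_if_deleted(items))
```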
Remember that alpha is not equal to reliability; it is an estimate of reliability. We could simply have an alpha value that is slightly overestimated because of random factors. Then there is the misconception about cut-offs: if alpha equals 0.7 you're okay, but if it's 0.69 then your study is unpublishable.
That of course doesn't hold true. What kind of reliability you require depends on the context. If you are measuring something that no one has ever measured before, then it's perhaps acceptable to have an alpha of even less than 0.7. If you're studying something that others have studied using scales with a reliability of 0.85, then 0.7 is not going to cut it, because better scales are available.
Also, it is not a yes-or-no decision. You have to explain what a reliability of 70% or 80% means for your study results: what kind of systematic error do you expect when your measures are unreliable? So it's not about the cutoff. The final misconception is that alpha is the best reliability coefficient. People may think it's the best because it's widely used, but sometimes we use a statistic simply because it has been used in the past, and if we only use what has been used before, it means we use the oldest thing. There are many other reliability indices, introduced after coefficient alpha, that are more modern and better than alpha.
Here's a list of some from a paper by McNeish in Psychological Methods. He starts with alpha, then explains omega, which relaxes some of the assumptions, and then goes on to explain others that relax the assumptions further.
One important assumption in alpha was that the indicators are unidimensional. If you relax that assumption, you can go with one of the hierarchical omega coefficients. Another important assumption is that the indicators are about equally reliable. If you relax that assumption, you can go with the omega coefficient, which is also known as composite reliability.
Cho's paper presents a nice decision diagram of which reliability index to choose. He starts by checking whether the scale is unidimensional. If the scale is unidimensional and only measures one thing, then you check whether the indicators are about equally reliable, that is, whether they carry the same amount of true score. If yes, then you're going to be OK with coefficient alpha.
If not, then in the second step you go with the coefficient omega or composite reliability index. If, on the other hand, you don't have unidimensionality, then you apply a factor model, in which case you use hierarchical omega, or a variant of alpha that doesn't use the factor model.
So it's not that you would always use alpha. Instead, you have to make a decision based on the nature of your data. If you use alpha, you have to justify the unidimensionality assumption, which you do with factor analysis, and you have to justify the tau-equivalence assumption, which basically means that all the indicators are equally reliable, and which you also check with factor analysis. If those assumptions don't hold, then alpha is not ideal, and you have to look at these other coefficients instead.
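As an illustration of the main alternative, here is a minimal sketch (illustrative loading values assumed, not from the lecture) of coefficient omega, also known as composite reliability, computed from the standardized loadings of a one-factor model:

```python
import numpy as np

def coefficient_omega(loadings: np.ndarray) -> float:
    """Omega from standardized one-factor loadings, uncorrelated errors:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam_sum = loadings.sum()
    error_var = (1 - loadings ** 2).sum()   # standardized error variances
    return lam_sum ** 2 / (lam_sum ** 2 + error_var)

# Unequal loadings violate tau-equivalence, which is when alpha and
# omega diverge; these values are assumed for illustration only
loadings = np.array([0.9, 0.8, 0.7, 0.5, 0.4])
print(coefficient_omega(loadings))   # about 0.80
```

In practice the loadings would come from a confirmatory one-factor model estimated on your data, which is also how you would check the unidimensionality and tau-equivalence assumptions discussed above.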