The word "significant" has two meanings.
In everyday English, "significant" means "important". But when scientists analyse their data with statistical tests and find that their results are "significant", they are using the word in its statistical sense, which merely means that the results are "very probably not due to chance".
Statistical significance should only ever be an unemotive indication of how likely it is that the results are due to chance variation in the sample of the population or events being studied. Several conventional levels are used:
- p<0.05 – the probability that the results are due to chance is less than 5%. This is the most commonly used level.
- p<0.01 – the probability that the results are due to chance is less than 1%. This level is used in the social sciences when contradicting previously held theory; scientists may also use it in a field study where there is only a one-off opportunity to collect data.
- p<0.001 – the probability that the results are due to chance is less than 0.1%, less than one in a thousand. This level is used when testing drugs and foods for unwanted effects, where we have to be more certain than ever that chance is not affecting the results (the short sketch after this list shows these levels in use).
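To make the thresholds concrete, here is a minimal sketch in Python using SciPy's two-sample t-test. The measurements and group names are invented for illustration; the point is only that a test yields a p-value, which is then compared against the conventional levels above.

```python
from scipy import stats

# Hypothetical measurements from a control group and a treated group.
control = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.9]
treated = [5.6, 5.4, 5.8, 5.5, 5.3, 5.7, 5.6, 5.4]

# Two-sample t-test: the p-value estimates how likely a difference
# at least this large would be if chance alone were at work.
result = stats.ttest_ind(control, treated)
print(f"p-value: {result.pvalue:.4f}")

# Compare against the conventional significance levels described above.
for level in (0.05, 0.01, 0.001):
    verdict = "significant" if result.pvalue < level else "not significant"
    print(f"at p < {level}: {verdict}")
```

Note that a tiny p-value here would say only that chance is an unlikely explanation for the difference between the two groups; it would say nothing about whether the difference matters.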
Reporting science correctly
If a set of data is found to be significant in the statistical sense, this in no way implies that the results are somehow important. Journalists wishing to publicise a claim by a team of scientists occasionally confuse the two meanings and stress that the results are "significant" as if this meant something substantial or noteworthy, making the discovery seem more relevant than it is.