Did you know that Kabul, Afghanistan is safer than New York City for kids? Well, that depends on how you measure “safer,” as with a lot of things. This report from STATS.org shows how you could arrive at that conclusion. Here are the basics: look at child homicide. The child homicide rate in Kabul is quite likely lower than that of NYC (check the article for the math). The problem, of course, is that far more kids in Kabul die of other causes, with malnutrition and infectious disease being the biggest culprits. Roughly 20% of children there die before age 5. That figure is staggering, and it is reason enough not to buy the argument that kids in Kabul are safer by any meaningful measure.
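To see how the choice of measure drives the conclusion, here is a minimal sketch of the rate arithmetic. All of the population and death counts below are hypothetical placeholders, not the article’s actual figures; the point is only that a narrow metric (homicide) and a broad metric (all-cause deaths) can point in opposite directions.

```python
# Illustrative only: every number here is a made-up placeholder,
# not data from the STATS.org article.

def rate_per_100k(deaths, population):
    """Deaths per 100,000 members of the population."""
    return deaths / population * 100_000

# Hypothetical child populations:
kabul_children = 1_500_000
nyc_children = 1_800_000

# Two ways to measure "safety" (hypothetical annual counts):
metrics = {
    "child homicide": (15, 60),            # narrow measure
    "all-cause child deaths": (30_000, 500),  # broad measure
}

for name, (kabul_deaths, nyc_deaths) in metrics.items():
    kabul_rate = rate_per_100k(kabul_deaths, kabul_children)
    nyc_rate = rate_per_100k(nyc_deaths, nyc_children)
    safer = "Kabul" if kabul_rate < nyc_rate else "NYC"
    print(f"{name}: Kabul {kabul_rate:.1f} vs NYC {nyc_rate:.1f} "
          f"per 100k -> {safer} looks safer")
```

With these placeholder numbers, Kabul “wins” on the homicide metric while NYC wins overwhelmingly on all-cause mortality, which is exactly the sleight of hand the article dissects.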
When looking at the results of a statistical study, we should always ask how the data was obtained. How was it gathered? Were subjects randomly chosen, or simply volunteers? What questions, exactly, were asked? What options were given for answers? Did the researchers define their terms, or is it possible that some respondents had different definitions in mind than the researchers did? These are the things that good researchers and statisticians worry about and try to address.
In scholarly literature, it is much more common to see these issues addressed than it is in the general press. As consumers of information, however, we should insist that the general press (whether online, video, or print) give us either the information directly or links/citations that let us find it in the scholarly literature on which the article or piece is based. Anything less means we are acting as if the answers don’t matter, when in reality they are of utmost importance. If the answers are wrong, the conclusion is not valid, and what could matter more than that?