Comment from: Kalvin Grubs [Visitor]
Does anyone see a problem with the fact that the sample size of this study was so small? They followed 18 people. Can you realistically extrapolate the news-consuming habits of the enormous, ever-changing and ever-expanding community of web 3.0 youth from such a small sample? I’m neither a scientist nor a statistician, but this doesn’t add up to me. Anyone with a background in research methodology and statistics want to chime in on this one? Sounds to me like the researchers designed this study to affirm what has now become conventional wisdom, without applying the true rigors of scientific research.
Comment from: [Member]
Yes, you’re right, it’s a very small sample. But you often get better results (i.e. closer to real life) by studying 18 people in depth than by sending a questionnaire to 18,000 people. By the way, they followed 24 people. I quote:
To gather as broad a group of participants as possible, 24 participants were recruited from ages 18 to 55 (with an emphasis on the 18 to 34 age group), representing a mix of ethnicity, gender and household income. Each participant had to have access to the Internet and had to report interacting with advertising and accessing news through both traditional and non-traditional means. In addition, participants had to report checking the news at least once a day.
The participants were selected from a mix of urban and suburban neighborhoods in four cities in the United States: Atlanta, Kansas City, New York and San Francisco. The locations were chosen to provide a broad geographical sweep and to capture a full range of traditional and non-traditional advertising and news consumption.
Thank you for this great post, Lorenz - this is truly exciting, both in terms of results and as an example of a contribution from applied anthropology. I love applied anthro!
Kalvin, you have a good question here. Being critical of any kind of results (scientific or otherwise) is a healthy attitude. Government agencies, private corporations and the media throw so many so-called results at us that it’s important to always ask how facts and figures were created in the first place and what they stand for.
I’m not familiar with that particular company, but the first thing to consider is that we are talking about an applied research protocol - essentially market research informed by anthropological theories and methods. Market research companies do not pursue the same goals as (usually publicly funded) research labs, nor do they work to the same scientific standards, for reasons ranging from strategic objectives and time frames to funding and professional culture - broadly speaking, the private market space is a much faster-paced and more rationalised world than publicly funded research. Some companies are obviously more rigorous than others, but all in all it would not be fair to expect the same from applied research and academic research - they don’t do the same job.
Now, the next thing is that we are looking at qualitative research methods here. Science has many different methods in its toolbox. Each method is essentially a solution to a problem (usually an imperfect solution, by the way - but that’s another story!). Put very simply, your choice of method depends on which question you are trying to answer.

So if you pick quantitative methods, it could be because you want to identify and describe global phenomena (= “trends”). Here they used qualitative methods, plausibly because they were after why and how a previously identified trend plays out, rather than after demonstrating the existence of said trend (for example, by showing that it bears statistically significant weight within a representative population sample). Your choice of method also reflects the scale you are working on and whether you want to answer a macro-level question or a micro-level question. It is common to combine various methods to attack your question from different angles and/or scales. Given the methods used here, it could be that the client (AP) already knew about the trend and was after in-depth qualitative data to really get to grips with it. It does make sense, in methodological terms, to restrict the size of the sample to reach that kind of vastly detailed data, so I’d second what Lorenz said earlier on that point.
By the way, in statistical and otherwise quantitative research, what matters is not so much the size of the sample you’re looking at but rather how representative it is of the population. Very few statistical research programmes are based on extremely large groups, because that is time-consuming and expensive, and thus remains the privilege of a few select government agencies (e.g. the population census). For everyone else, there are various methods you can use to achieve and maximize the representativeness of your research sample. Put simply, it’s not about being big, it’s about being accurate.
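To make that point concrete, here is a minimal sketch in Python (standard library only). All the figures are invented for illustration and have nothing to do with the AP study: a small but properly randomized sample lands close to the true population proportion, while a far larger but biased sample misses it completely.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 100,000 people, 30% of whom check news online.
# Note the list is deliberately ordered: all the online users come first.
population = [1] * 30_000 + [0] * 70_000

# A modest (n=500) but properly randomized sample.
small_random = random.sample(population, 500)
est_small = sum(small_random) / len(small_random)

# A much larger (n=20,000) but biased sample: just the first 20,000
# entries of the ordered list, i.e. only online users get picked.
big_biased = population[:20_000]
est_biased = sum(big_biased) / len(big_biased)

print(f"true proportion:   0.30")
print(f"random, n=500:     {est_small:.2f}")   # close to 0.30
print(f"biased, n=20,000:  {est_biased:.2f}")  # 1.00 - wildly off
```

The random sample of 500 typically estimates the true 30% to within a couple of percentage points, while the biased sample of 20,000 reports 100% - forty times bigger, and useless. Accuracy comes from how the sample is drawn, not from its size.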
Hope this helped!
***Does anyone see a problem with the fact that the sample size of this study was so small? They followed 18 people***
No, not as long as the study is open about the # of people involved & approach. A lot of qualitative research is about observation & interpretation, not statistical validity.
Speaking as a businessperson – that’s OK. Most business data isn’t statistically valid – that doesn’t mean it isn’t useful data for other reasons (directional, etc.). & a lot of statistically valid data can lead to poor decisions if it’s not evaluated in a broader context.
Of course if this were medical research or something like that, it would be important to follow a much more structured & statistically rigorous process.