Social science is a soft science. You could do the exact same study with different samples and come up with different results. Good studies in social science have really solid methods, which is something very difficult, almost impossible, for a lay person to evaluate. And the studies really should be replicated, but unfortunately social science journals don't like publishing replications; they always want new stuff building off old stuff. IMO social science really should slow the eff down and ask more researchers to replicate studies. Once a study like the one above is replicated with the same or very similar results three or four times with different samples, you can probably start feeling OK about the results. Another problem, though, is time lag, especially on issues like this, so if you wait too long between samples you may also end up with different results. There are big sections of experimental design books dedicated to all the various threats to validity in studies like this, but that's just one I'll mention here. Bottom line: when you see some survey or study mentioned, especially in the popular press or on social media, interpret the findings with extreme caution, especially if it agrees with your world view. You're way more likely to believe BS if it's something you intuitively agree with.
You are a teacher/professor, right? Mind me asking what field? I think you've mentioned it before and I can't remember.
Nice. I was thinking in the back of my mind that you might be a professor of philosophy - probably based on your temperament and intellectual curiosity.
IMO, this is not so much about soft vs. hard science as it is about empirical research in general; it seems to be an issue with empirical research across the board. Further, when dealing with empirical research most people focus on and examine the statistical techniques used, not the data/metrics themselves. Here's an example of questionable data from hard science. Four years ago I spent about a month at a research station in the Budongo rain forest. It was a citizen science deal through Earthwatch. My wife, daughter, son, and I collected data on birds, budding of trees, chimp behavior, snares, and other illegal stuff going on in the forest. That data, collected by us rank amateurs, is now being used. Other amateurs we met had done the same thing elsewhere: measuring butterfly wings at different elevations for global warming data, etc. This data is super suspect, but no one knows or questions how it was collected and measured. So, there are at least three problems with empirical research common to hard and soft science. 1. The nature of the data is suspect in many cases, but it's typically taken as a given that it measures what it's supposed to measure. 2. Statistical manipulations, including perfectly "legal" ones that are still questionable. In fact, simply getting more data points can turn an insignificant result into a statistically significant one, but all science should be concerned with whether the result is significant in a scientific sense. (I recall a paper with 250,000 data points where a coefficient of 0.0000 was flagged as significantly different from 0 at p < .01.) 3. Replicability. This is certainly an issue in both empirical and experimental research and is not unique to the soft sciences.
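Quick illustration of point 2, since it's easy to show with a toy example (this is just a sketch I threw together; the numbers are made up purely for illustration, not from any real study): with a quarter million observations per group, a difference of about 0.01 standard deviations will usually clear p < .01 even though the effect is scientifically negligible.

```python
# Minimal sketch: statistical vs. scientific significance at large n.
# All values are hypothetical, chosen only to illustrate the point above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 250_000  # same order of magnitude as the paper mentioned above

# Two groups whose true means differ by a trivial 0.01 on a scale with sd = 1
group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.01, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Cohen's d: standardized effect size, which stays tiny no matter how big n gets
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p-value:   {p_value:.4g}")   # typically well below .01 at this sample size
print(f"Cohen's d: {cohens_d:.3f}")  # ~0.01, i.e., practically meaningless
```

The p-value keeps shrinking as n grows, but the effect size doesn't budge, which is exactly why judging a result by the asterisks alone is misleading.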
Also well deduced. I am definitely philosophically minded and have recently even convinced the college to approve my teaching of a philosophy of science course, basically exploring the question of how we know what we know. Side note: I had the seemingly insurmountable disagreements of this board near the top of my mind when designing the course.
$150,000/year is not even close to the top 1% of earners. It takes $650,000 a year to reach America's top 1%.
Kinda depressing to think hard science struggles with that as much as soft science; I had heard similar things from colleagues in those areas too. Don't even get me started on stats. Half of social scientists don't even understand the statistical methods they employ; they often completely ignore basic assumptions and then focus entirely on significance without considering whether the effect is meaningful. I'm hardly the first person to mention all that, and I don't want to pretend to be some smarty-pants ahead of the curve on this stuff; plenty of people smarter than me have written about it. I just recently saw a student submit a qualifying exam using ChatGPT to write up the methods, and of course it just made shit up lol. The faculty reviewing it didn't even notice right away. So it's probably going to get worse before it gets better.
Like, it's not specifically what you are calling for, but it would be super easy for a professor who has published such a study to have his grad students work at replicating the results, and it would be a good way to learn the process. There is probably too much focus on "new" or "original" studies, but that is the effect of introducing the market into education: everyone has to market themselves and their research if they want a job, and replicating someone else's work doesn't do that for you at all.
I think it's natural that social scientists (a broad label) struggle with statistical methods and analyses. It may be easy to place a number on things such as attitudes and perceptions, but it's near impossible to derive meaning from the numbers. You may have stated the same point in a previous post. That complication does not diminish the value of good research, though. I was taught quantitative research methods as a musician-educator. Imagine the difficulty of ascribing numbers to the quality of musical performance and attempting to link the results with treatment. It's what you would probably call "soft science," albeit an unfortunate label. Measurement of musical outcomes can be fraught with validity potholes, and generalizability is almost always mentioned in the study limitations. These days I find qualitative methods more valuable in my discipline. Study results are still not generalizable, but that's not the purpose. Sorry for what I assume is a continued diversion from the topic. I haven't read the thread, except for your compelling exchange with docspor.
Qualitative is a lot of fun, and can be an elegant way to investigate human feelings, attitudes, and behaviors. I've been reading a lot about narrative methods recently and we plan to employ them more in the area we're studying. I've also been trying to plan more mixed methods stuff, just because IMO when you can do that in social science the numbers become more meaningful and I feel like you are able to learn more. I feel like this topic is way more interesting than the original post, so don't feel bad lol.
One of the challenges is that replicated studies simply don't carry the cachet of novel results, even though novel findings are usually discarded over time. As a grad student, psychologist Stuart Ritchie worked to replicate the infamous study by Daryl Bem showing that college students displayed precognition of pornographic images. (Really.) Unsurprisingly, Ritchie's attempt did not replicate the psychic result. Ritchie submitted the manuscript to the same journal that published Bem's original result and, wait for it, was rejected! Journals aren't as interested in publishing replications. People are working to try to change this culture, but that is not an easy task. That said, Nature Human Behaviour announced just yesterday, while you were posting, that they are partnering in an effort to replicate the studies they publish. Kudos to them. Promoting reproduction and replication at scale | Nature Human Behaviour
Guessing it was a typo, but the goal should never be "working at replicating the results." That implies you have a particular result in mind. The science is replicating the study and documenting whether the results happen to be the same or similar. Critical distinction.