One of the key criticisms of psychology is that it’s not, and by some arguments could never be, a “hard science” like math or physics, where you can express a hypothesis, test it, and then have clear indications as to whether your supposition was supported or not.
This isn’t an official term, and it’s often presented alongside the opposite colloquial term, “soft science,” which is frequently leveled (at times, pejoratively) at fields in which hypotheses can be tested, but where even a completed test doesn’t let you say for certain that the study shows what it seems to show, or even that it was testing what it seemed to be testing.
This isn’t generally due to a lack of assiduousness or competency on the part of the researchers: there are just so many variables that can influence how a person thinks and behaves that experiments meant to assess such things cannot (with existing methods and metrics, at least) concretely establish why we do the things we do in different contexts, and how our behaviors might change as the variables acting upon us change.
One of the consequences of this fuzziness (and one of the forces that might be partly responsible for perpetuating it) is what’s sometimes called “The Toothbrush Problem,” which refers to the common practice, in the field of psychology, of inventing a new theory for each new body of work.
The name is derived from the researchers’ supposed avoidance of frameworks that their peers invented; they avoid them like they might avoid another person’s toothbrush.
And the theory is that because many such researchers are keen to carve their own paths within academia (where many of them work, and where many of them are hoping to attain tenure someday), they don’t want to piggyback on anyone else’s ideas, methods, or templates. They want to be seen as independent minds capable of coming up with their own headline concepts and theories, and that incentivizes them to avoid using research methods other psychologists have innovated, coming up with their own instead, even when existing models might be perfect for their purposes.
Thus, psychology has become a field awash in orthogonal theories, few of them interacting with the others or using comparable metrics, and that makes it difficult to benefit from the “cumulative science” we see in other research-focused fields, where individual theorists build atop each other’s work, iterating and changing it sometimes, but doing so within a framework that expounds upon existing models rather than ignoring them, and setting things up so that their contemporaries and future scientists can build atop their own.
There are efforts to address this issue, including a relatively recent program out of Germany called the Standardization of Behavior Research Methods (or SOBER) that’s aiming to establish standards that would make psychology research more interoperable and cross-compatible with other research, potentially making findings more robust, replicable, and build-atop-able.
This will likely be a global, generational effort, though, similar to what we went through trying to standardize shipping containers: not something that happens overnight, and likely with a fair bit of pushback and a number of competing ideas about how best to do it arising in the meantime.
This is a great model of criticism of our collective efforts. I have a very idiosyncratic style or idiolect, but not because I am carving out a career, nor a brand as an influencer, but because I am mildly autistic and trying to find some commonality. I have the opposite vector on the same issue.
The toothbrush problem relates to something I've noticed about what I call science essay books: whether or not they introduce yet another theory of eeeeeveeeeerythingggg, they cover the same ground, and many of them could just refer to a common consensus code-base and then fork it with their little perspective. It would save a few trees and a lot of bits, and time. I know the publishing industry requires this in order to have an excuse to actually publish a 'monograph', but really, we could all save time if we coalesced the commonalities (the introductory matter is too long) into a git-style repository model and forked the novelty. Too many of these books go over the same ground for too long; admittedly, I might have read too many of them.
A current example of this is "A Theory of Everyone: Who We Are, How We Got Here, and Where We're Going" by Michael Muthukrishna (Basic Books, London, 2023; ISBN 9781399810630). It's like 90% introductory matter, good if you want an introduction, but couldn't it come with a number, a fork-label, and a weighting? I skimmed it, but which bit was the interesting bit, which was their bit? Didn't see it.
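The fork-the-novelty idea can be sketched in git terms. This is a hypothetical toy (the repository, branch, and file names are all invented) showing how a book's contribution might appear as a diff against a shared base:

```shell
# Hypothetical sketch: a shared "consensus base" repo holds the common
# introductory matter; each new book is a branch carrying only its novelty.
mkdir consensus-psych && cd consensus-psych
git init -q -b main
git config user.email "author@example.org" && git config user.name "Author"

echo "Shared introductory matter (the 90%)" > intro.md
git add intro.md && git commit -qm "Consensus base"

git checkout -q -b book-2023           # the book's "fork label"
echo "This book's novel claim" > novelty.md
git add novelty.md && git commit -qm "The new contribution"

git diff main --name-only              # readers see only what is new
```

On this model, reading a new book would amount to reading its diff: the fork label tells you which base it assumes, and the changed files tell you what is actually new.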
The same could be done with flow-chart-style methodology/ontology/epistemology frameworks in these softer sciences. It would work a bit like those POW jokes where prisoners just say a number and everybody laughs, because everyone knows the jokes so well that only an index is required...
This could be mapped to would-be attempts to toothbrush the known jokes with a rebranding: "oh, that's just joke 22 with a new Gen Z skin... let's call it 22z."
It would be a cross between git- and Wikipedia-style resources (looks over his shoulder at AI).