Pandora's box in neuropsychology?

It's been a long time since I began to wonder about the widely overlooked problem of measurement in my beloved field, neuropsychology. "Measurement" does look like a dusty, boring topic from our old days as undergrads - yes, the yawning during statistics courses, I'm sure you all remember it. It took me a long time, and a sudden passion for popular physics books, to understand that measurement lies at the core of real, solid scientific development. The classic academic example is Tycho Brahe, the Danish astronomer who had the intuition that obsessive precision in measurement might help decide which scientific theory is true, without swinging forever between philosophically incompatible positions. It took me another while to realize that measurement is even more deeply linked to scientific discovery - far from being a boring technicality, measurement only makes sense within a good theory, and if one can measure some hypothetical phenomenon, this automatically means that a good (testable) theory of the phenomenon is available. We are speaking of the very gears and wheels of scientific discovery, not of peripheral, auxiliary knowledge.
So, the humble tailor working behind the scenes of a theatre turns out to be one of the main actors on stage. Relegating the problem of measurement to the dark meanders of the theatre is a real disservice done by current, misleading university programs!
Some years ago, guess what, I was analyzing some data and realized I could not decide whether I should compute a dissociation by subtracting raw scores or by subtracting standardized scores. It looked like a peripheral, technical problem to me. It was not. The deeper I dug into the problem, the more I understood that it concerned the very nature of neuropsychology as a science. I suddenly understood that I was only scratching the surface of a huge mountain of overlooked puzzles, rich in structure and connections. Tim Shallice did hit on the problem in 1988, but his hint was left largely unheeded. I believe this is the key to a big jump in the quality of scientific research in cognitive neuropsychology. I tried to tackle some of the nuances of this Pandora's box in a long, passionate paper that just came out in Neuropsychologia. Those of you who are interested will see that I merely tried to sketch the map of a gigantic anthill with a disturbing number of branches. I particularly like the branch about test reliability, and the mystery of within-patient statistical tests.
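For readers who have never run into the raw-versus-standardized question, here is a minimal sketch in Python with entirely made-up control and patient numbers - a toy illustration of the two candidate computations, not the analysis in the paper:

```python
# Toy illustration: two ways of quantifying a dissociation between two tasks
# in a single patient. All numbers are invented for the example.

import statistics

# Hypothetical control samples for two tasks (e.g., accuracy scores)
controls_task_a = [52, 55, 49, 51, 54, 50, 53]
controls_task_b = [18, 20, 17, 19, 21, 18, 20]

# Hypothetical single patient's raw scores on the same tasks
patient_a = 38
patient_b = 17

def z_score(score, sample):
    """Standardize a score against a control sample (mean and SD)."""
    return (score - statistics.mean(sample)) / statistics.stdev(sample)

# Option 1: dissociation as a difference of raw scores
raw_dissociation = patient_a - patient_b

# Option 2: dissociation as a difference of standardized (z) scores
z_dissociation = z_score(patient_a, controls_task_a) - z_score(patient_b, controls_task_b)

print(f"raw difference: {raw_dissociation}")
print(f"z-score difference: {z_dissociation:.2f}")
# The two quantities need not agree, which is exactly why the choice
# is not a mere technicality.
```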
Some links below. Enjoy!
Alessio


https://www.researchgate.net/publication/359050359_Dissociations_in_neuropsychological_single-case_studies_Should_one_subtract_raw_or_standardized_z_scores
https://www.academia.edu/73716182/Dissociations_in_neuropsychological_single_case_studies_Should_one_subtract_raw_or_standardized_z_scores
https://pubmed.ncbi.nlm.nih.gov/35247434/