Analyzing metacognition in an SDT framework

How should we measure metacognitive sensitivity (i.e., the efficacy with which confidence ratings distinguish between correct and incorrect judgments)? This question faces two central difficulties: (1) How can we separate response bias effects (e.g., an overall tendency toward high or low confidence) from true sensitivity effects? (2) Given the empirical observation that metacognitive sensitivity improves as performance on the basic (“type 1”) task improves, how can we assess metacognitive sensitivity independently of basic task performance?
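Both difficulties can be made concrete with a small simulation. The Python sketch below (with illustrative parameter values of my own choosing) simulates an equal-variance SDT observer who reports high confidence when the internal evidence is far from the decision criterion. With sensitivity d’ held fixed, merely shifting the confidence criterion changes a naive type 2 statistic (here, the difference between type 2 hit and false alarm rates); and with the confidence criterion held fixed, that same statistic rises as d’ rises.

```python
import numpy as np

rng = np.random.default_rng(0)

def type2_hr_far(d, conf_crit, n=100_000):
    """Simulate an equal-variance SDT observer with an unbiased type 1
    criterion; confidence is 'high' when |evidence| exceeds conf_crit."""
    stim = rng.integers(0, 2, n)                         # 0 = S1, 1 = S2
    x = rng.normal(np.where(stim == 1, d / 2, -d / 2), 1.0)
    correct = (x > 0).astype(int) == stim                # type 1 accuracy
    high = np.abs(x) > conf_crit                         # confidence report
    # Type 2 hit rate = P(high conf | correct);
    # type 2 false alarm rate = P(high conf | incorrect)
    return high[correct].mean(), high[~correct].mean()

# Difficulty 1: with sensitivity fixed, shifting only the confidence
# criterion changes the naive type 2 statistic HR2 - FAR2.
for cc in (0.5, 1.0, 1.5):
    hr2, far2 = type2_hr_far(d=1.5, conf_crit=cc)
    print(f"d' = 1.5, conf criterion = {cc}: HR2 - FAR2 = {hr2 - far2:.3f}")

# Difficulty 2: with the confidence criterion fixed, type 2
# performance improves as type 1 sensitivity d' improves.
for d in (0.5, 1.5, 2.5):
    hr2, far2 = type2_hr_far(d=d, conf_crit=1.0)
    print(f"d' = {d}, conf criterion = 1.0: HR2 - FAR2 = {hr2 - far2:.3f}")
```

A satisfactory measure of metacognitive sensitivity should therefore be invariant to the first manipulation and correctable for the second.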

In collaboration with Hakwan Lau, I addressed these questions by developing an extension of conventional signal detection theory (SDT) that capitalizes on a theoretical link between type 1 task performance (classifying the stimulus) and expected type 2 task performance (rating confidence in that classification) (Maniscalco & Lau, 2012; Maniscalco & Lau, 2014). The resulting measure of metacognitive sensitivity, meta-d’, (1) is independent of response bias effects, and (2) allows metacognitive efficiency to be computed by comparing it to the SDT measure of type 1 sensitivity, d’ (e.g., as the ratio meta-d’/d’). This analysis framework thus addresses the two central difficulties of measuring metacognition described above, and it has become a standard tool in the scientific study of metacognition.
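To illustrate the logic (though not the maximum likelihood fitting procedure developed in the papers), here is a minimal Python sketch using hypothetical trial counts of my own choosing. It fits a response-specific meta-d’ for “S2” responses by a crude least-squares grid search: holding the relative type 1 criterion fixed (meta-c = c × meta-d’/d’, one common convention), it looks for the meta-d’ whose predicted type 2 hit and false alarm rates best match the observed ones, then reports efficiency as the M-ratio, meta-d’/d’.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical trial counts, chosen only for illustration.
n_S1 = n_S2 = 200
hits = 150          # "S2" responses to S2 stimuli (correct)
false_alarms = 80   # "S2" responses to S1 stimuli (incorrect)

HR, FAR = hits / n_S2, false_alarms / n_S1
d_prime = norm.ppf(HR) - norm.ppf(FAR)         # type 1 sensitivity
c = -0.5 * (norm.ppf(HR) + norm.ppf(FAR))      # type 1 criterion

# Observed type 2 data for "S2" responses (again hypothetical):
HR2_obs = 90 / hits            # P(high conf | correct "S2" response)
FAR2_obs = 24 / false_alarms   # P(high conf | incorrect "S2" response)

def predicted_type2(meta_d, c2):
    """Type 2 hit/false alarm rates for 'S2' responses under an
    equal-variance SDT observer with sensitivity meta_d, holding the
    relative type 1 criterion fixed (meta_c = c * meta_d / d')."""
    meta_c = c * meta_d / d_prime
    hr2 = norm.sf(c2 - meta_d / 2) / norm.sf(meta_c - meta_d / 2)
    far2 = norm.sf(c2 + meta_d / 2) / norm.sf(meta_c + meta_d / 2)
    return hr2, far2

# Grid search for the meta-d' (and type 2 criterion) whose predicted
# type 2 rates best match the observed ones.
best_err, meta_d_hat = np.inf, None
for meta_d in np.linspace(0.05, 3.0, 150):
    meta_c = c * meta_d / d_prime
    c2_grid = meta_c + np.linspace(0.0, 3.0, 150)  # type 2 criteria >= meta_c
    hr2, far2 = predicted_type2(meta_d, c2_grid)
    err = np.min((hr2 - HR2_obs) ** 2 + (far2 - FAR2_obs) ** 2)
    if err < best_err:
        best_err, meta_d_hat = err, meta_d

print(f"d' = {d_prime:.2f}, meta-d' = {meta_d_hat:.2f}, "
      f"M-ratio (efficiency) = {meta_d_hat / d_prime:.2f}")
```

A meta-d’ equal to d’ (M-ratio of 1) indicates metacognition as good as the SDT-ideal expectation given the observed task performance; values below 1 indicate metacognitive inefficiency.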

MATLAB code for performing meta-d’ analysis is available here.

References

Maniscalco, B., & Lau, H. (2012). A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition, 21(1), 422–430. https://doi.org/10.1016/j.concog.2011.09.021 [supplementary material]

Maniscalco, B., & Lau, H. (2014). Signal detection theory analysis of type 1 and type 2 data: meta-d’, response-specific meta-d’, and the unequal variance SDT model. In S. M. Fleming & C. D. Frith (Eds.), The Cognitive Neuroscience of Metacognition (pp. 25–66). Berlin, Heidelberg: Springer.

Lau, H., & Maniscalco, B. (2010). Should confidence be trusted? Science, 329(5998), 1478–1479. https://doi.org/10.1126/science.1195983