Spotted a nice paper by Iris van Rooij (2008) today, a study of how computational tractability can constrain the choice of computational-level theories of cognition. She notes that many cognitive scientists assume a theory is tractable only if it is computable in polynomial time (the P-cognition thesis). She argues instead for fixed-parameter tractability (the FPT-cognition thesis): a problem may be intractable in the size of the whole input yet still feasible when some small parameter of the input is bounded.
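To make the distinction concrete, here is a textbook fixed-parameter tractable algorithm, a sketch of my own and not an example from the paper: Vertex Cover is NP-hard in general, but deciding whether a graph has a cover of size at most k takes O(2^k · |E|) time, exponential only in the parameter k, not in the input size.

```python
# Bounded-search-tree algorithm for Vertex Cover, a classic FPT example.
# Illustration only; not drawn from van Rooij (2008).

def has_vertex_cover(edges, k):
    """Return True if some set of at most k vertices touches every edge."""
    if not edges:
        return True   # every edge is covered
    if k == 0:
        return False  # edges remain but no budget left
    # Any cover must include at least one endpoint of an uncovered edge,
    # so branch on both choices. Depth of recursion is at most k.
    u, v = edges[0]
    rest_u = [e for e in edges if u not in e]  # edges left if we pick u
    rest_v = [e for e in edges if v not in e]  # edges left if we pick v
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

# A 4-cycle needs exactly 2 vertices to cover all edges:
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(has_vertex_cover(edges, 2))  # True
print(has_vertex_cover(edges, 1))  # False
```

If k is small in practice, the 2^k factor is harmless even for large graphs, which is exactly the kind of argument the FPT-cognition thesis invites: identify the input parameters that are small in reality, and demand tractability only for those ranges.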
The general problem faced by researchers is what to do on discovering that one's computational-level theory is, say, NP-hard. The author categorises three alternatives:
- Reject the framework, choose something else.
- Revise the theory.
- Devise heuristics.
For instance, she cites Oaksford and Chater's rejection of logicist cognitive science in favour of Bayesian networks, which are themselves intractable in general.
It sounds commonsensical, and as van Rooij points out, many polynomial-time algorithms can still be horrendously slow in both theory and practice. It's great to have this all spelled out in detail, with the maths, examples, and a careful analysis of the literature.
Here are the author's recommendations, as a step-by-step procedure:

1. Make explicit assumptions about (bounds on) the range of values that relevant input parameters can take in reality.
2. Analyze the parameterized complexity of the theory for all relevant input parameters that can reasonably be assumed to be much smaller than the whole input size.
3. Derive predictions from the function's restriction to tractable parameter ranges and put them to an empirical test.
4. If no tractable parameter ranges can be found, or none that passes empirical tests, then revise the theory and return to Step 1.
Iris van Rooij (2008). The Tractable Cognition Thesis. *Cognitive Science: A Multidisciplinary Journal*, 32(6), 939-984. DOI: 10.1080/03640210801897856