Argumentum ad intractableum


Any readers good at Latin?

Argumentum ad intractableum: the fallacy of arguing that a cognitive model is poor because it is computationally intractable (in general).

(I presume “intractableum” is incorrect…)


5 thoughts on “Argumentum ad intractableum”

  1. I’m not convinced that this is a fallacy. Consider Herbert Simon’s idea of bounded rationality:

    http://en.wikipedia.org/wiki/Bounded_rationality

    I believe that resource bounds have played an immense role in shaping cognition. A model of cognition that ignores the importance of resource bounds is like a model of the internal combustion engine that ignores the second law of thermodynamics.

  2. I really meant general results, e.g., about computability: we can’t compute x in general by method y, therefore people don’t have method y in their heads. The problem is that it may work very well for limited cases of x.

  3. I agree that a problem P being intractable in general (i.e., without its domain being somehow constrained) does not mean that it is intractable for all cases. Special cases of P may yet be tractable.

    On the other hand, however, I do think that a problem P is a poor cognitive model if it is intractable, even if a special case of P, let’s call it P’, may be a good and tractable one. If anything is a good model, it will be P’ rather than P.

    I am curious: What motivated the formulation of the (purported) fallacy? Was it something you read in the cognitive science literature? If so, I’d be very interested to hear what that was.

  4. I’m not sure now what I was thinking about at the time, but the post came after a conversation with a colleague. One thing was general results from Turing and Gödel—decidability, incompleteness, and so forth—and how they’re abused. Something else that annoyed me was how people sometimes argue that a particular model is too computationally complex without giving serious consideration to how it might be implemented. It appears they’ve found one poor implementation, a bubble sort analogue, and inferred from that—to continue the analogy—that all sorting algorithms are poor and should be excluded from cognitive models.

    I suspect you’ve found good concrete examples—will get back to you when I’ve digested your paper properly!

  5. Andy wrote: “It appears they’ve found one poor implementation, a bubble sort analogue, and inferred from that—to continue the analogy—that all sorting algorithms are poor and should be excluded from cognitive models.”

    Ah, yes, that is a common fallacy I have encountered a lot as well (both in discussions and in the literature).

    Perhaps call it “Intractability Argument from One Bad Algorithm”. (Sorry, I would not know how to translate it into Latin.)
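
    The sorting analogy can be made concrete. The sketch below (illustrative only, not part of the original discussion) counts comparisons for a naive bubble sort and for a merge sort on the same input: the problem of sorting is not expensive in general just because one algorithm for it is.

    ```python
    import random

    def bubble_sort(xs):
        """Naive O(n^2) sort: the 'one bad algorithm' of the analogy."""
        xs = list(xs)
        comparisons = 0
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                comparisons += 1
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs, comparisons

    def merge_sort(xs):
        """O(n log n) sort of the very same problem, at far lower cost."""
        comparisons = 0

        def merge(a, b):
            nonlocal comparisons
            out, i, j = [], 0, 0
            while i < len(a) and j < len(b):
                comparisons += 1
                if a[i] <= b[j]:
                    out.append(a[i]); i += 1
                else:
                    out.append(b[j]); j += 1
            return out + a[i:] + b[j:]

        def sort(ys):
            if len(ys) <= 1:
                return list(ys)
            mid = len(ys) // 2
            return merge(sort(ys[:mid]), sort(ys[mid:]))

        return sort(xs), comparisons

    data = random.sample(range(10_000), 1_000)
    _, slow = bubble_sort(data)
    _, fast = merge_sort(data)
    print(slow, fast)  # the bubble sort needs many more comparisons
    ```

    Judging the task of sorting by the bubble sort alone would be exactly the inference the commenters object to: the cost of one implementation says little about the cost of the problem.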

Comments are closed.