This is a very quick reply to a colleague who queried the validity of “logical accounts of logical operations”.

At least two issues there: the validity of using logics to give a functional characterisation of (measurable) behaviour versus using them to provide a causal mechanism for the generation of that behaviour. Can’t see any problem with the former: it’s just a linguistic analysis and allows you to sort people into categories, e.g. to enable associations across tasks or with individual-difference measures. Or to put it another way, it’s just a very structured Likert scale! Whether the causal-mechanism construction is a valid enterprise collapses to an argument about what can be measured experimentally and what particular logical mechanism is used. If all you can get at is outputs, then it doesn’t much matter what your model looks like. Add RTs and things become a bit more constrained. If your experiments give you sequences of outputs, then the temporal structure of the model matters more.

There’s more to logic than, say, Gentzen natural deduction with trees that look like:
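For instance (a representative little derivation, since any small tree makes the point — this one derives B ∧ A from A and A → B):

```latex
\[
\dfrac{\dfrac{A \qquad A \to B}{B}\;(\to\!E) \qquad A}
      {B \wedge A}\;(\wedge I)
\]
```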


There is unlikely to be a one-to-one mapping between these beasties, or the theorem provers that build them, and the brain, but that doesn’t imply the non-existence of another logical mechanism which is more closely related to the grey stuff. If the word “logic” causes you grief, then replace it with “computational”.

I am interested in equivalences between logics, for instance how nonmonotonic logics may be embedded in classical logic, in classical probability, or in connectionist networks. I’m convinced these theoretical results can point us in the right direction for building cognitive models. For instance you can fix on a particular computational mechanism (whatever you like) and then argue, using the maths, how everything you’ve done maps across to the other formalisms. So one model is just a point in an equivalence class of models.
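A toy sketch of that equivalence-class point (my own illustration, not something from the theoretical results mentioned above): the same input–output behaviour realised in two formalisms, a symbolic rule and a threshold (“connectionist”) unit. If all your experiments measure is outputs, the two models are indistinguishable.

```python
# Two realisations of the same behaviour: conjunction as a logical rule
# and as a single threshold unit. Weights and threshold here are
# illustrative choices, not fitted to anything.

def and_rule(p: int, q: int) -> bool:
    """Symbolic/logical formulation: p AND q."""
    return bool(p and q)

def and_unit(p: int, q: int, w1: float = 1.0, w2: float = 1.0,
             theta: float = 1.5) -> bool:
    """Threshold unit: fires iff the weighted input sum exceeds theta."""
    return (w1 * p + w2 * q) > theta

# The two models agree on every input, so at the level of outputs they
# sit in the same equivalence class.
for p in (0, 1):
    for q in (0, 1):
        assert and_rule(p, q) == and_unit(p, q)
```

Distinguishing between the two would need richer measurements than bare outputs, which is the methodological point above.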