Thoughts after a talk by Michelle Dawson

Some thoughts, not yet expanded…

1. Autistic people have been shown to be more emotionally expressive than non-autistics, contrary to some stereotypes. In one experiment, they were also less susceptible to a framing effect.

2. Seemingly narrow abilities can get one very far, e.g., spotting weird interpretations of results in papers; systematically cataloguing results. They are only “narrow” if judged that way.

3. Everyone needs to find their own talents, and to spot and help cultivate talents in others. Autism is just a more visible instance of this general need.

4. “Interventions” are often poor substitutes for mentoring relationships, which have been found to be so important in, e.g., apprenticeships, Oxbridge undergrad supervision, and PhD supervision elsewhere.

5. Opportunities to try things can be the best intervention.

6. Judgmental observation is a kind of interaction: when you see something (a trait, a behaviour) that you assess as negative, it’s difficult to avoid broadcasting your opinion, even if only in a brief facial expression. This affects the person you’ve just observed.

7. Verbal fluency is still over-emphasised in academia. Visuospatial processing, rapid categorisation, and implicit learning – all computationally complex cognitive processes – are often undervalued.

8. Everyone has biases, e.g., results they want to be true, including those who point out biases in others. That’s where debate and criticism from others who are less invested become crucial.

More on “context aware” systems

Erickson (2002) argues that “context awareness” is motivated by a desire for systems to take action autonomously, leaving us out of the loop. Doing so accurately requires a lot of intelligence to draw inferences from the available sensors, and Erickson reckons the project is doomed to failure. However, he thinks we might make some progress if humans are brought back into the loop and given the contextual data in rawer form, so they can interpret it and take appropriate action themselves. I’m not sure. The example he gives can easily be modified to reveal potentially damaging information about a user’s whereabouts and actions:

“Lee has been motionless in a dim place with high ambient sound for the last 45 minutes. Continue with call or leave a message.”
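Erickson’s human-in-the-loop alternative amounts to the device summarising raw-ish sensor readings for a person to interpret, rather than acting on them itself. A toy sketch of that idea, where the sensor fields, thresholds, and wording are all invented for illustration:

```python
# Toy sketch of an Erickson-style "human in the loop" context display:
# instead of the phone deciding whether to put a call through, it
# renders raw sensor readings as a summary and lets the caller decide.

def describe_context(light_lux, sound_db, minutes_still):
    """Render hypothetical sensor readings as a human-readable summary.

    The thresholds (50 lux, 70 dB) are invented for illustration.
    """
    light = "dim" if light_lux < 50 else "bright"
    sound = "high ambient sound" if sound_db > 70 else "quiet"
    return (f"Contact has been motionless in a {light} place with "
            f"{sound} for the last {minutes_still} minutes. "
            "Continue with call or leave a message?")

print(describe_context(light_lux=10, sound_db=80, minutes_still=45))
```

Even this toy version makes the privacy worry obvious: the “raw” summary leaks exactly the sort of whereabouts-and-activity information a user might not want broadcast to every caller.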

This reminds me of the impressive-looking thesis by Nora Balfe (2010) on safety-critical railway signalling systems. For instance, from the conclusions:

“Feedback from [the system] was … found to be very poor, resulting in low understanding and low predictability of the automation. As signallers cannot predict what the automation will do in all situations they do not feel they can trust it to set routes and frequently step in to ensure trains are routed in the correct order. In the observation study, the differences found between high and low interveners in terms of feedback, understanding and predictability confirm the importance of good mental models in the development and calibration of trust…”


Balfe, N. (2010). Appropriate automation of rail signalling systems: a human factors study. PhD thesis, University of Nottingham.

Erickson, T. (2002). Some problems with the notion of context-aware computing: Ask not for whom the cell phone tolls. Communications of the ACM, 45(2), 102-104.

What is cognition? Again

A while back I posted a list of quotations attempting to define cognition. In discussing and searching for these, I came to the conclusion that definitions of these kinds of global, general concepts are only useful as department labels. They allow people to work out, vaguely, to whom they could talk to learn about a topic that interests them. Concepts like “cognition” should be defined with that goal in mind, in a way that causes as little confusion as possible. For instance, it seems likely that separating cognition and emotion, or cognition and perception, or equating cognition with conscious deliberative thought, are all bad ideas.

Adams (2010) likes definitions.  He suggests that philosophers ask cognitive scientists:

of the processes which are cognitive, what (exactly) makes them cognitive? This is the question that will really irritate, and, I’ve discovered, really interest them. It will interest them because it is a central question to the entire discipline of the cognitive sciences, and it will irritate them because it is a question that virtually no one is asking.  [emphasis original]

He gives some examples of processes which, to him, are clearly not cognitive, e.g., processes that regulate blood sugar levels and thermoregulatory processes such as capillary constriction and dilation.

He also provides a list of necessary conditions for a process to be cognitive:

  1. Cognitive processes involve states that are semantically evaluable.
  2. The contents carried by cognitive systems do not depend for their content on other minds.
  3. Cognitive contents can be false or even empty, and hence are detached from the actual environmental causes.
  4. Cognitive systems and processes cause and explain in virtue of their representational content.

This all left me rather cold.  I don’t understand what this list helps to explain.  I’m not sure it’s even wrong.

Why not, for instance, allow cognitive processes to regulate blood sugar levels? If, at an abstract level of analysis, this turns out to be useful (for instance, if performing a task which may be analysed in a cognitive fashion seems to influence blood sugar levels), then why not call it a cognitive process?

The word “cognitive” seems to cause more trouble than it’s worth so maybe we should stop talking about “cognitive processes” altogether.  As I wrote in a previous post:

It used to be considered bad form to refer to something as a neural process unless it referred to synapses, but is this still the case? There are various levels of “neural” from absence of neural due to lesions and BOLD activation patterns, down to vesicle kissing and gene expression. Maybe behavioral neuroscience is allowed up another level to more abstract representations currently called “mental” or “cognitive”, and the mental can be returned to refer to the what-it-feels-like.  Similarly maybe psychologists are behavioral neuroscientists focusing on an abstract level of explanation.

That probably wouldn’t help either.  If only we could find a pill to take which makes us less anxious about the meaning of individual words and phrases.


Adams, F. (2010). Why we still need a mark of the cognitive. Cognitive Systems Research, 11, 324-331.

What is a mental process?

What is a “mental” process? Is it the stuff we’re conscious of, or a limbo between real, wet neural processes and observable behavior?

A well-known analogy is the computer. The hardware (the stuff you can kick) is analogous to the brain; the stuff you see on the screen is, I suppose, the phenomenology; and the software, all of which correlates with processes you could detect in the hardware if you looked hard enough, some but not all of which affects the screen, is cognition.

Forget for a moment about minds and consider the engineering perspective; then the point of the levels is clear. When you want, say, to check your email, you probably don’t want to fiddle around directly with the chips in your PC. It’s much less painful to rely on years of abstraction and just click or tap on the appropriate icon. You intervene at the level of software, and care very little about what the hardware is doing behind the scenes.

What is the point of the levels for understanding a system? Psychologists want to explain, tell an empirically grounded story about, people-level phenomena, like remembering things, reasoning about things, understanding language, feeling and expressing emotions. Layers of abstraction are necessary to isolate the important points of this story. The effect of phonological similarity on remembering or pragmatic language effects when reasoning would be lost if expressed in terms of (say) gene expression.

I don’t understand when the neural becomes the cognitive or the mental. There are many levels of neural, not all of which you can poke. At the top level I’m thinking here about the sorts of things you can do with EEG where the story is tremendously abstract (for instance event-related potentials or the frequency of oscillations) though dependent on stuff going on in the brain. “Real neuroscientists” sometimes get a bit sniffy about that level: it’s not brain science unless you are able to talk about actual bits of brain like synapses and vesicles. But what are actual bits of brain?

Maybe a clue comes from how you intervene on the system. You can intervene with TMS, you can intervene with drugs, or you can intervene with verbal instructions. How do you intervene cognitively or mentally?  Is this the correct way to think about it?

Levels of description — in the Socialist Worker

The mainstream media is notoriously rubbish at explaining the relationships between brain, feelings, and behaviour. Those of a suspicious disposition might argue that the scientists don’t mind, as often the reports are very flattering — pictures of brains look impressive — and positive public opinion can’t harm grant applications.

The Socialist Worker printed a well chosen and timely antidote: an excerpt of a speech by Steven Rose about levels of description.

… brains are embedded in bodies and bodies are embedded in the social order in which we grow up and live. […]

George Brown and Tirril Harris made an observation when they were working on a south London housing estate decades ago.

They said that the best predictor of depression is being a working class woman with an unstable income and a child, living in a high-rise block. No drug is going to treat that particular problem, is it?

Many of the issues that are so enormously important to us—whether bringing up children or growing old—remain completely hidden in the biological levels.

You can always find a brain “correlate” of behaviour, and what you’re experiencing, what you’re learning, changes the brain. For instance, becoming an expert London taxi driver — a cognitively extremely demanding task — is associated with a bit of your brain getting bigger (Maguire et al., 2000). These kinds of data have important implications for (still laughably immature) theories of cognition, but, as Steven Rose illustrates with his example of depression, the biological level of analysis often suggests misleading interventions.

It’s obvious to all that would-be taxi drivers are unlikely to develop the skills they need by having their skull opened by a brain surgeon or by popping brain pills. The causal story is trickier to untangle when it comes to conditions such as depression. Is it possible that Big Science, with its fMRI and pharma, is pushing research in completely the wrong direction?


Maguire, E. A., Gadian, D. G., Johnsrude, I. S., Good, C. D., Ashburner, J., Frackowiak, R. S. and Frith, C. D. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences of the United States of America, 97, 4398-4403.

On the inseparability of intellect and emotion (from 1933)

“[…] Imagine that we are engaged in a friendly serious discussion with some one, and that we decide to enquire into the meanings of words. For this special experiment, it is not necessary to be very exacting, as this would enormously and unnecessarily complicate the experiment. It is useful to have a piece of paper and a pencil to keep a record of the progress.

“We begin by asking the ‘meaning’ of every word uttered, being satisfied for this purpose with the roughest definitions; then we ask the ‘meaning’ of the words used in the definitions, and this process is continued usually for no more than ten to fifteen minutes, until the victim begins to speak in circles—as, for instance, defining ‘space’ by ‘length’ and ‘length’ by ‘space’. When this stage is reached, we have come usually to the undefined terms of a given individual. If we still press, no matter how gently, for definitions, a most interesting fact occurs. Sooner or later, signs of affective disturbances appear. Often the face reddens; there is bodily restlessness; sweat appears—symptoms quite similar to those seen in a schoolboy who has forgotten his lesson, which he ‘knows but cannot tell’. […] Here we have reached the bottom and the foundation of all non-elementalistic meanings—the meanings of undefined terms, which we ‘know’ somehow, but cannot tell. In fact, we have reached the un-speakable level. This ‘knowledge’ is supplied by the lower nerve centres; it represents affective first order effects, and is interwoven and interlocked with other affective states, such as those called ‘wishes’, ‘intentions’, ‘intuitions’, ‘evaluation’, and many others. […]

“The above explanation, as well as the neurological attitude towards ‘meaning’, as expressed by Head, is non-elementalistic. We have not illegitimately split organismal processes into ‘intellect’ and ’emotions’.”


Korzybski, A. (1933). Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. Institute of General Semantics.

Žižek, on Malabou, on the brain sciences

Any Hegel scholars around? Žižek (2006, pp. 208–209):

“Where, then, do we find traces of Hegelian themes in the new brain sciences? The three approaches to human intelligence—digital, computer-modeled; the neurobiological study of brain; the evolutionary approach—seem to form a kind of Hegelian triad: in the model of the human mind as a computing (data-processing) machine we get a purely formal symbolic machine; the biological brain studies proper focus on the “piece of meat,” the immediate material support of human intelligence, the organ in which “thought resides”; finally, the evolutionary approach analyzes the rise of human intelligence as part of a complex socio-biological process of interaction between humans and their environment within a shared life-world. Surprisingly, the most “reductionist” approach, that of the brain sciences, is the most dialectical, emphasizing the infinite plasticity of the brain.”

This is the beginning of an interesting (or at least confusing) section on relationships between society, brain, mind, free-will (and so on, and so forth). A reading group would be tremendously helpful. (Page 13 discusses fisting, if that acts as a motivator.)


Slavoj Žižek (2006). The Parallax View. The MIT Press.

Mathematics of the Brain

The Defense Advanced Research Projects Agency (DARPA) released its call for research proposals a couple of days ago.  A couple of topics look interesting:

  • “The Mathematics of the Brain: Develop a mathematical theory to build a functional model of the brain that is mathematically consistent and predictive rather than merely biologically inspired.”
  • “The Dynamics of Networks. Develop the high-dimensional mathematics needed to accurately model and predict behavior in large-scale distributed networks that evolve over time occurring in communication, biology and the social sciences.”

I like the use of “merely” above.

An old rant from 2nd year PhD me. But is it true? Do I believe it now? *Chin stroke*

[I can’t remember what I was responding to.]

There’s nothing “cartesian” about the language of cognitivism. Information processing is just a viewpoint on phenomena which doesn’t give a damn about ion flows or gene expression. It just posits that there’s something transforming what’s perceived into actions, and whether it’s a set of cogs or a Turing machine isn’t particularly interesting. These guys need to go back to Neisser!

I imagine a load of these conceptual analysts (using a priori wisdom received from where?!) pounding their fists on a table, some of them agreeing it is a table, some of them arguing that, no, that’s ridiculous, it’s a collection of atoms, electrons, and protons, … There are multiple levels of analysis, and somewhere those levels have to connect to what it feels like to be a person and how people communicate with each other about what they’re doing. I agree that sometimes the language we use at the personal level gets applied, by analogy, to what the brain’s doing at the sub-personal level, but often that’s just to try to tell a story about what’s going on. For instance, today [a fairly famous researcher] talked about a parietal area “caring” about something or other. It was just a cheap way to get an idea across instead of saying, “We were able to reject the null hypothesis that there is no difference in BOLD activation between the two conditions (with alpha = 0.05).”
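The null-hypothesis phrasing at the end can be made concrete. A minimal sketch of the kind of comparison behind such a claim, using Welch’s two-sample t-test on made-up “activation” values (the data are invented; real fMRI analyses involve far more than a single t-test):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    # Sample variances with Bessel's correction (divide by n - 1).
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Made-up "BOLD activation" values for two experimental conditions.
cond_a = [1.2, 1.4, 1.1, 1.5, 1.3]
cond_b = [0.9, 1.0, 0.8, 1.1, 0.9]

t = welch_t(cond_a, cond_b)
print(f"t = {t:.2f}")  # a large |t| is what licenses "rejecting the null"
```

The researcher’s “this area cares about X” is shorthand for little more than the sign and size of a statistic like `t`.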

Many of the theories used by fMRI folk seem not far from the folk-psychological vernacular and thus are much in need of refinement to make them more consistent with what a charming Italian professor termed the “meat machine” is up to. That’s the point, to me, of fMRI et al.: improving consistency between what the brain’s up to and our models of information processing.