Introduction to mainstream Cognitive Science or: How we learned to stop worrying and love the representation

My first post will set out a very brief history of cognitive science.  While woefully incomplete as a comprehensive historical account, the aim is to focus on some key movements that will hopefully shed light on the ideological shifts responsible for its current incarnation.

Psychology is generally accepted to have begun experimentally in 1879 in the laboratory of Wilhelm Wundt, a man who focused his efforts on discovering the basic units of consciousness, such as sensations, through phenomenological report or “introspection”.

The early 1900s saw a shift in practice in the face of increasing criticism of the unreliability of introspective methods, in a world where science aimed to champion the “objective”.  This led to the eventual development of behaviourism, an approach which switched the focus from fuzzy, undefined conceptions of mind and its constructs (e.g. beliefs and feelings) to overt behaviour, which could be empirically tested and more easily experimentally controlled.  Figures such as Watson and Skinner pioneered this science of behaviour, in which attempts were made to characterise human capacities in terms of stimulus-response associations and reinforcement schedules.

However, in the 1950s the American linguist Noam Chomsky famously reviewed Skinner’s account of language acquisition, arguing that operant conditioning could not explain the complex and productive nature of language competence.  A fundamental part of the picture was missing, he argued, as mere exposure to natural language would be insufficient to account for the robust knowledge of language humans display.  He highlighted concerns about the sparseness of information available in speech signals and the lack of negative evidence (feedback on errors in production).  This is known as the poverty of the stimulus argument, and the conclusion drawn was that internal mental structures must exist to enrich the impoverished input.

This came during a period when the increasing prominence of information processing led to parallels being drawn between the way computers deal with information and how a mind might process sensory input.  The computer metaphor captivated the cognitive science community and had a profound impact on the direction Psychology would take up to the present day, in what is now termed the “cognitive revolution”.

The focus had come full circle in under a century and was squarely back on the empirically inaccessible mind and its internal causal contents, whose nature could only be inferred.  It was not solely language that received this treatment.  In the study of perception, the basic building blocks were taken to be sense data.  Since stimulation of a sense cell carries no information about the stimulus that caused it, the job of the brain was conceived to be deriving and inferring the content of the world on the basis of this sparse input.  The role of the environment in any explanatory sense was theoretically dispensable; the conception was that it was not the world, but the world-as-represented, which influenced behaviour.

This approach has enjoyed a lengthy period of stability in the history of the science, and countless accounts of human behaviour take the form of input/output models in which representations are computed over by something akin to a computer’s central processor.  Representations have done a lot of the heavy lifting for a wide range of cognitive tasks, with highly complex calculations ascribed to them in order to account for both simple and complex human behaviour.  It is rarely detailed how these calculations are performed, but such explanations are frequently believed to be on the horizon and thought unlikely to pose a problem for the frighteningly complex human brain.

Mental representations nominally play the necessary explanatory role in most accounts of human behaviour and have to be presupposed, since they cannot be empirically accessed.  As a consequence, research is frequently dedicated not to demonstrating their existence, but to indirectly probing their supposed character and modelling how they might be put to work.

Is this a problem or a scientific necessity?  For example, are mental representations providing a similar function to the hypothesised role of dark matter in astrophysics?  Can models simply not function without invoking mental representations?

One obvious benefit of invoking mental representations is their decouplability.  They offer a simple explanation of how we achieve competent behaviour when a stimulus is absent from the real world.  Things do not come in and out of existence for us just because they are not in contact with our senses at any one time, so the brain must surely contain some form of stand-in for the world.  This also seems to explain highly competent behaviour that appears to require knowledge of future states of the world, such as knowing when and where to be to catch a fly ball.  How else could this be achieved, but through some hidden computation that allows prediction?

Language is perhaps the most obvious domain begging for a representational narrative.  With no lawful relationship between arbitrary sounds and their referents, a mental stand-in seems called for to explain how language and concepts can be used to control action.  How could we produce sensible but novel utterances if there were not abstract structures in place to scaffold this?  Why else would children overgeneralise regular grammatical rules to irregular forms they had already mastered?

In essence, the resounding chorus in the cognitive science community was and continues to be: What else could it be?

A future post will look at an alternative approach to the current paradigm and why assumptions of mental representations are not as benign as they initially appear.