Presenting the World

My previous post was a brief introduction to the history of mainstream cognitive science and gave some background on the notions of ‘mental representation’ and the computational metaphor, both of which will be examined critically here.

The Computational Metaphor

Anthony Chemero, author of Radical Embodied Cognitive Science (2009), argues in his opening chapter that a description or metaphor in science is acceptable only as long as it furthers our understanding of a problem.

Sticking points arise, however, when a metaphor becomes so entrenched in the intellectual community that it becomes the object of study itself. Within cognitive science, the computational metaphor has successfully embedded and reinforced itself by rewriting the central aim of Psychology. Cognitive science has become preoccupied with elucidating the nature of representations (supposedly functionally invoked entities), rather than examining critically whether the metaphor is fit for purpose. Metaphors not only constrain our understanding of the behaviour under scrutiny, but also constrain the questions that are being asked.

This is a trivial issue if, like Fodor (1987), we believe that a representational language of thought is theoretically non-negotiable when explaining human perceptual and cognitive capabilities. While Fodor (2003) himself admits that this approach currently lacks a psychosemantic theory of content (which I will get to later), the lack of urgency in finding such an account is implicitly vindicated by the lazy mantra of ‘what else could it be?’. Traditional cognitivism will eventually have to bear this burden of explanation in the face of any viable alternative approach to studying human behaviour.

One such alternative is Radical Embodied Cognitive Science (RECS), which studies behaviour from a dynamical systems perspective. This post will look at what such a view entails and, having considered this alternative account of complex behaviour, will turn to the metaphysical consequences of assuming that we have contentful mental representations which are computed over, as well as the feasibility of “information transmission”. Hopefully this will go some way towards questioning whether representationalism warrants its status as the default theoretical stance in cognitive science.

Dynamical Systems

The “radical” in “Radical Embodied Cognitive Science” refers to its anti-representationalist stance: it seeks instead to explain behaviour in terms of the interactions of a system comprising brain, body and environment.

Van Gelder’s (1995) seminal paper “What Might Cognition Be If Not Computation?” offers the instructive example of the centrifugal governor, a device employed by the mechanical engineer James Watt to regulate steam engines. The aim was to maintain the speed of the driving flywheel smoothly in the face of large fluctuations in steam pressure and workload. Speed could be controlled by turning the throttle valve, the gateway for the steam.

From the perspective of a computational designer, we might regulate the speed of the flywheel by measuring the speed, comparing it to the desired speed, calculating how to adjust the throttle that restricts or facilitates steam flow, and then implementing the change. This appears to be a task which perfectly illustrates the need to posit some sort of information processing mechanism, in which information about current and desired speed (content) is transmitted, interpreted and computed over.
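To make the contrast vivid, here is a minimal sketch of what such a computational solution might look like: a discrete measure-compare-calculate-adjust cycle. This is purely illustrative; the names, gain and units are my own assumptions, not anything specified by van Gelder.

```python
# A hypothetical sketch of the computational governor: an explicit
# measure-compare-calculate-adjust cycle. All names and values are
# illustrative assumptions, not taken from van Gelder (1995).

TARGET_SPEED = 100.0  # desired flywheel speed (arbitrary units)
GAIN = 0.05           # how strongly each cycle corrects the error (assumed)

def control_step(measured_speed: float, throttle: float) -> float:
    """One cycle: measure the speed, compare it to the target, then
    calculate and apply a throttle adjustment."""
    error = TARGET_SPEED - measured_speed     # internal token standing in for the discrepancy
    new_throttle = throttle + GAIN * error    # computed adjustment
    return max(0.0, min(1.0, new_throttle))   # throttle confined to [0, 1]
```

The structure is the point: discrete internal states standing in for speed and error, manipulated step by step according to rules.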

However, the actual Governor does not perform any of these complex calculations at all, but still regulates the engine – and it does so incredibly well.

In the governor, a spindle is geared to the flywheel, so that the spindle’s rotation speed directly depends on the speed of the flywheel. Attached by hinges to the spindle are two arms, each with a metal ball at the end, and these in turn are connected to the throttle itself. The rotation of the spindle creates a centrifugal force, driving the balls up and out. This means that when speed increases, the rising balls immediately begin restricting the flow of steam (and vice versa). The solution to the problem is immediate, continuous and smooth; the system is robust to large changes in pressure and load and maintains the desired speed effectively.
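For those who want the mathematics, the arm dynamics can be captured in a single differential equation, roughly of the form van Gelder (1995) presents, where θ is the arm angle, ω the engine speed, n a gearing constant, g gravity, l the arm length and r friction at the hinges:

$$\frac{d^2\theta}{dt^2} = (n\omega)^2 \cos\theta \sin\theta \;-\; \frac{g}{l}\sin\theta \;-\; r\,\frac{d\theta}{dt}$$

The first term is the centrifugal contribution, the second is gravity pulling the arms down, and the third is frictional damping; the throttle’s effect on ω closes the loop.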

[Figure: Watt’s centrifugal governor]

The Nature of Representation

The Watt governor is a dynamical system in which the arm angle covaries with the speed of the flywheel. This could conceivably be called the system’s “representation” of speed; however, the label is misleading. The system is not taking a measure of speed and performing a computation on it. In fact, there are no identifiable sub-components which perform discrete operations, whereas the computational solution clearly has such identifiable modules. The governor does not have a schedule of rules to follow and its task is not one of translation in any meaningful sense of the word. Its activity is continuous and there is no point in time at which any part of the system is not influencing the behaviour of all other parts.

Van Gelder raises the point that the correlative relationship between arm angle and flywheel speed in fact breaks down when the system is outside an equilibrium state, meaning that the supposed representational relationship (specified as correlation) is not even an enduring one1. However, when the system is described within the framework of dynamical systems, a mathematical description of the coupling of the parts characterises the relationship between the system’s components over time in its entirety. A representational narrative not only adds nothing, but encourages us to ask misleading questions, such as how the elements ‘communicate’, how information is ‘processed’ or how a proposed ‘algorithm’ might be implemented.
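As a toy illustration of that claim, the whole coupled system can be written down and integrated directly, with no component anywhere that stores, compares or computes over a ‘speed token’. Only the form of the arm equation follows van Gelder; the parameter values and the linear engine coupling below are my own assumptions, chosen simply to give a stable run.

```python
import math

# Illustrative constants (assumed values, not from van Gelder's paper)
n, g, l, r = 6.0, 9.8, 0.3, 5.0   # gearing, gravity, arm length, hinge friction
k, load = 5.0, 2.0                # engine gain and workload (assumed coupling)
dt = 0.001                        # Euler integration step (seconds)

theta, dtheta, omega = 1.0, 0.0, 1.4  # arm angle (rad), its velocity, engine speed

for _ in range(20000):  # simulate 20 seconds
    # Arm dynamics: centrifugal term vs. gravity vs. friction
    ddtheta = (n * omega) ** 2 * math.cos(theta) * math.sin(theta) \
              - (g / l) * math.sin(theta) - r * dtheta
    # Engine: rising arms close the throttle (assumed linear coupling)
    domega = k * math.cos(theta) - load
    theta += dt * dtheta
    dtheta += dt * ddtheta
    omega += dt * domega

print(f"arm angle: {theta:.3f} rad, engine speed: {omega:.3f}")
```

Every variable influences every other continuously; at no stage is anything measured, represented and then acted upon.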

Having laid out that alternative approaches to studying behaviour do in fact exist, it is also worth examining traditional computationalism’s conceptual commitments and its viability.

The Hard Problem of Content

Hutto and Myin (2013) label the classic representationalist stance CIC (the view that Cognition necessarily Involves Content) and characterise its biggest credibility hurdle as the Hard Problem of Content. This challenge holds for any theory which aims to characterise cognition within the bounds of explanatory naturalism while maintaining the CIC stance that cognition is a matter of manipulating contentful representations.

Problems arise when attempting to explain how information maintains its integrity through transmission across different physical media. Invariably, attempts to ground representations in the physical world lead to fuzzy distinctions between representational vehicles (“information carriers”, which are potentially amenable to physical description) and their contents. If our cognitive architecture is specialised to deal with the physical vehicles that hold the content, then what or who is the attached content meaningful for?

While covariance relations could be sufficient to constitute information, Hutto and Myin (2013) argue that covariance is insufficiently constrained to account for meaningful content, mirroring van Gelder’s concern that representation-as-correlation opens the term “representation” up to trivialisation. Covariance does not make a state carry information about something else without an external interpretive process imposed on it (e.g. using the number of tree rings to derive the age of the tree). For a state to be contentful, it must have conditions of satisfaction. Covariance is not a semantic relationship: it asserts nothing that could be true or false of states in the world. While organisms might respond to natural signs, it does not follow that they respond to them as stand-ins for something else.

RECS does not deny that organisms are informationally sensitive (Hutto & Myin, 2013, p. 82), in that they exploit correspondences in their environments to guide their behaviour adaptively. However, this is fundamentally distinct from claiming that information is transmitted as semantic content, which presupposes some form of internal language to support the required interpretive process.

For those interested in reading about these problems in detail, their book ‘Radicalizing Enactivism’ goes on to discuss potential CIC rebuttals and the consequent revisions of the notions of ‘representation’ and ‘content’, laying out the inevitable dilution of concepts that occurs during this redefinition. These terms appear to be rendered empirically implausible at worst and, at best, explanatorily irrelevant.

While this post has served to introduce a systems approach as a potential alternative to computationalism, I will later discuss in more detail a particular theoretical approach which characterises organisms as dynamical systems coupled with their environment through information.  This approach does not rely on the concepts of representation or information transmission and therefore, unlike cognitivist theories, avoids the troublesome and persistent demand to provide a coherent theory of content.

1There is a small but significant discrepancy between arm angle and speed when the flywheel slows quickly. While the flywheel can slow almost instantly, the rate at which the arms can fall is dictated by gravity, and it is during this fall that their angle does not track the speed of the flywheel.

References

Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: MIT Press.

Fodor, J. (1987). Psychosemantics. Cambridge, MA: MIT Press.

Fodor, J. (2003). Hume variations. Oxford: Oxford University Press.

Hutto, D. D., & Myin, E. (2013). Radicalizing enactivism: Basic minds without content. Cambridge, MA: MIT Press.

Van Gelder, T. (1995). What might cognition be, if not computation? The Journal of Philosophy, 92(7), 345-381.

Introduction to mainstream Cognitive Science or: How we learned to stop worrying and love the representation

My first post is just going to set out a very brief history of cognitive science. While this will be woefully incomplete as a comprehensive historical account, the aim is to focus on some key movements that will hopefully shed some light on the ideological shifts responsible for its current incarnation.

Psychology is generally accepted to have begun as an experimental science in 1879, in the laboratory of Wilhelm Wundt, a man who focused his efforts on discovering the basic units of consciousness, such as sensations, through phenomenological report or “introspection”.

The early 1900s saw a shift in practice in the face of increasing criticism concerning the unreliability of introspective methods in a world where science aimed to champion the “objective”. This led to the eventual development of behaviourism, an approach which switched the focus from fuzzy, ill-defined conceptions of mind and its constructs (e.g. beliefs and feelings) to overt behaviour, which could be empirically tested and more easily experimentally controlled. Figures such as Watson and Skinner pioneered this science of behaviour, in which attempts were made to characterise human capacities in terms of stimulus-response associations and reinforcement schedules.

However, in 1959 the American linguist Noam Chomsky famously reviewed Skinner’s account of language acquisition, critiquing the capacity of operant conditioning to explain the complex and productive nature of language competence. He argued that a fundamental part of the picture was missing, as mere exposure to natural language would be insufficient to account for the robust knowledge of language humans display. He highlighted both the paucity of information available in speech signals and the lack of negative evidence (feedback on errors in production). This is known as the poverty of the stimulus argument, and the conclusion drawn was that internal mental structures must exist to enrich the impoverished input.

This came during a period when the increasing prominence of information-processing technology led to parallels being drawn between the way computers deal with information and the way a mind might process sensory input. The computer metaphor captivated the cognitive science community and had a profound impact on the direction Psychology would take up to the present day, in what is now termed the “cognitive revolution”.

The focus had come full circle in under a century: it was squarely back on the empirically inaccessible mind and its internal causal contents, whose nature could only be inferred. It was not solely language that received this treatment. In the study of perception, the basic building blocks were taken to be sense data. Since stimulation of a sense cell contains no information about the causal stimulus itself, the job of the brain was conceived to be to derive and infer the content of the world on the basis of this sparse input. The role of the environment in any explanatory sense was theoretically dispensable; the conception was that it was not the world, but the world-as-represented, which influenced behaviour.

This approach has enjoyed a lengthy period of stability in the history of the science, and countless models of human behaviour take the form of input/output systems in which representations are computed over by something akin to a computer’s central processor. Representations have done a lot of the heavy lifting for a wide range of cognitive tasks, with highly complex calculations ascribed to them in order to account for both simple and complex human behaviour. It is rarely detailed how these calculations are performed, but such explanations are frequently believed to be on the horizon and thought unlikely to be a problem for the frighteningly complex human brain.

Mental representations nominally play the necessary explanatory role in most accounts of human behaviour and have to be presupposed, since they cannot be empirically accessed. As a consequence, research is frequently dedicated not to proving their existence, but to indirectly probing their supposed character and modelling how they might be put to work.

Is this a problem or a scientific necessity?  For example, are mental representations providing a similar function to the hypothesised role of dark matter in astrophysics?  Can models simply not function without invoking mental representations?

One obvious benefit of invoking mental representations is their decouplability. They appear to offer a simple explanation of how we achieve competent behaviour when a stimulus is absent from the immediate environment. Things do not come in and out of existence for us just because they are not in contact with our senses at any one time, so the brain must surely contain some form of stand-in for the world. This also seems to explain highly competent behaviour that appears to require knowledge of future states of the world, such as knowing when and where to be to catch a fly ball. How else could this be achieved, but through some hidden computation that allows prediction?

Language is perhaps the most obvious domain begging for a representational narrative.  With no lawful relationship between arbitrary sounds and their referents, a mental stand-in seems called for to explain how language and concepts can be used to control action.  How could we create sense-making but novel utterances, if there are not abstract structures in place to scaffold this?  Why else would children overgeneralise regular grammatical rules to irregular forms they had already mastered?

In essence, the resounding chorus in the cognitive science community was and continues to be: What else could it be?

A future post will look at an alternative approach to the current paradigm and at why assumptions of mental representations are not as benign as they initially appear.