Saturday, March 11, 2017


Psychology in the 1950s: a personal view

Alan Baddeley

DOI:10.1093/acprof:oso/9780199228768.003.0003

Abstract and Keywords

This chapter is concerned with developments in psychology in the 1950s. It suggests that this period witnessed the demise of Gestalt psychology, a distinctive approach to experimental psychology strongly influenced by the Gestalt principles of perception. During this period the influence of behaviourism in universities was very strong, and the major focus of most psychology courses was theories of learning. Another significant development came through the information-processing approach to the study of human cognition, which reflected a number of separate but related sources. These include communication theory and the attempt by Claude Shannon to measure the flow of information through an electronic communication channel in terms of the capacity of a message to reduce uncertainty.
I was initially asked to write about the development of the concept of working memory. However, having just completed such an article (Baddeley and Hitch, 2007) for a symposium marking the 30th anniversary of our 1974 paper, I opted for a different task, that of trying to give an overview of psychology in the 1950s, in the hope that this might provide a useful background to the rest of the book. It is a personal view, but one reasonably widely based on time spent during the 1950s in five different research centres: University College London (UCL), Princeton, the University of Southern California, the Burden Neurological Institute in Bristol, and the Medical Research Council (MRC) Applied Psychology Unit (APU) in Cambridge. In each case I was a student or graduate student, and I try to give a flavour of that experience.
I began my study of psychology as a first year student at UCL in 1953. It was an exciting time, in which ‘schools of psychology’ were being overtaken by new developments, both empirical and theoretical. In my first year, our basic text was Woodworth's Experimental Psychology, published in 1938 and comprising 889 flimsy pages (paper was scarce) that covered an enormous range of experimental work, much of it using methodology that was no longer regarded as acceptable. Things changed with the publication of Osgood's (1953) Method and Theory in Experimental Psychology, an exciting but even-handed blend of the old and the new.
So, what had happened in the years intervening between Woodworth and Osgood? First came the demise of Gestalt psychology, a distinctive approach to experimental psychology, strongly influenced by the Gestalt principles of perception, such as continuity and proximity, and their extensions to cover animal behaviour (Köhler 1925), reasoning (Wertheimer 1945), social psychology (Lewin 1951), and memory (Katona 1940). Theoretically, Gestalt psychology was strongly influenced by developments in physics, adopting a ‘field’ theory within which stimuli had a tendency to form structured wholes, the Gestalt principle. However, with the rise of the Nazis, most of the influential German psychologists, many of whom were Jewish, were forced to flee the country, principally to North America. Although the USA provided a welcome haven, Gestalt psychology became fragmented and did not flourish in the neo-behaviourist atmosphere that characterized the USA at that time.
Gestalt psychology did, however, form an important, although not large, part of the syllabus at UCL, principally reflected through translations of books by the major Gestalt psychologists such as Koffka (1935), Köhler (1925, 1940), and Wertheimer (1945). I myself found it an attractive approach to psychology, although I was less convinced by the proposed physiological basis in terms of electrical fields on the surface of the cortex. This was tested by Lashley, Chow, and Semmes (1951), who placed strips of highly conductive gold foil over the visual cortex of a monkey and found no evidence of disturbed perception, a result that had few implications for the basic psychological principles, which remain valid, but one that cast doubt on the whole Gestalt enterprise. In my own case it has made me cautious about tying psychological concepts too firmly to physiological speculation.
By far the most active region in experimental psychology at this time was the USA, where the influence of behaviourism was still very strong. A major focus of our course at UCL was theories of learning, of which there were a number of prominent contenders, summarized in Hilgard's (1948) classic text, and virtually all based on experiments performed on rats. The most influential was that of Clark L. Hull, whose theory involved the establishment of stimulus-response associations based on reward, and was expressed in terms of postulates and equations explicitly aimed to imitate Newton's Principia. Hull's principal opponent was Edward C. Tolman, who argued that rats learned mazes, not by establishing stimulus-response associations, but by developing mental maps. By the early 1950s, the controversy had moved on to the next generation, with Hull's position being defended by Spence (1956), and challenged, among others, by Bitterman (1957). For an undergraduate, this controversy was a godsend, given that the broad outline of Hull's model was easy to learn, and the critical experiments could readily be generated by imagining oneself in the position of the rat and selecting experimental paradigms for which one's own response would be inconsistent with Hull's principles.
My first published experiment, carried out during a year at Princeton, was generated on this basis, and used hooded rats borrowed from Bitterman. Tolmanians tended to use hooded rats, which were considerably more visually oriented, and perhaps less intellectually challenged, than the albino rats favoured by Hullians. My rats had to learn to choose one of two doors for a food reward. In the crucial condition, whenever they made an error they could see food being delivered to the correct side but could not, of course, reach it. Would the sight of food act as a secondary reinforcer for the response, as Hull's theory would predict, and simply make the rats even more likely to make the wrong response in future? Or would they behave like sensible Tolmanians? True to form, my rats proved cleverer than they ought to be, and I published a paper to this effect, anticipating that the Hullians would come down on me from a great height. In fact, absolutely nothing happened, and this is, I believe, my paper's third citation (Baddeley 1960). By this time the whole controversy appeared to have been abandoned as a nil-nil draw. The Tolmanian maps implied some form of unseen internal representation, whereas Spence's alternative also involved invisible internalised stimuli and responses. Neither kind of internal representation was regarded as respectable within the neo-behaviourist canon, so, instead of agreeing to investigate the nature of such representations, theorists abandoned the field.1
The theoretical approaches discussed so far were already highly active in the 1930s. There were, however, some very exciting new ideas that had developed during the war years, when psychologists had been required to move away from their ivory towers and tackle practical problems such as how to train a pilot, why radar operators showed a decline over time in detection rate, and how physiological stressors influenced human performance. One effect was to focus attention on the richness of the real-world environment and the importance of taking this into account, a view strongly advocated by the German psychologist Egon Brunswik (1947), who argued vigorously, though in somewhat turgid prose, for ecological validity based on ‘representative design’, a method that required the investigator to go out into the field and measure the environment in considerable detail before designing experiments that reflected this complexity. Sadly, Brunswik died before taking his views further, although they have continued to have an influence in the area of decision-making (Hammond 2007). Considerably more broadly influential was the work of J. J. Gibson (1950), who also emphasized the need to measure and specify the stimulus, taking as one of his inspirations the task of a pilot attempting to land a plane, utilizing the expanding flow of information in his visual field as the plane approached the ground, an approach that continues to be highly influential through his enthusiastic disciples (Turvey et al. 1981).
One of the most influential new developments came through the information-processing approach to the study of human cognition. This reflected a number of separate but related sources. One of these was communication theory and the attempt by Claude Shannon (Shannon and Weaver 1949) to measure the flow of information through an electronic communication channel in terms of the capacity of a message to reduce uncertainty, which was in turn measured in terms of binary choices or bits. This led to the concept of the human as a limited-capacity information-processing device, an approach that led to Hick's (1952) classic multi-choice reaction time experiment, in which he showed that the time to respond was a logarithmic function of the number of response alternatives. This finding, tagged as ‘Hick's Law’, suggested that perception involved the flow of information through a channel of limited capacity, while Paul Fitts (1954) showed a comparable function for motor behaviour, ‘Fitts' Law’. The attempt to measure channel capacity was also applied to a wide range of other topics, such as perceptual judgements (Miller 1956) and immediate memory (Davis et al. 1961; Miller 1956).2
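The two logarithmic laws are simple enough to sketch in a few lines. The fragment below is my own illustration rather than anything from the chapter; the intercept and slope constants `a` and `b` are invented placeholders, not empirically fitted values.

```python
import math

def hicks_law_rt(n_alternatives, a=0.2, b=0.15):
    """Hick's Law: mean choice reaction time grows with the log2 of
    the number of equally likely response alternatives, i.e. with the
    information in bits that the response conveys.
    a and b are illustrative constants (seconds), not fitted values."""
    # The +1 reflects Hick's formulation, which includes the
    # uncertainty about whether any stimulus will occur at all.
    return a + b * math.log2(n_alternatives + 1)

def fitts_law_mt(distance, width, a=0.1, b=0.1):
    """Fitts' Law: movement time grows with the log2 of the ratio of
    movement distance to target width (the 'index of difficulty'),
    an analogous limited-capacity relation for motor behaviour."""
    return a + b * math.log2(2 * distance / width)

# Doubling the effective number of alternatives adds a constant
# increment, one bit's worth of processing time, to reaction time:
print(hicks_law_rt(1))  # 1 bit of uncertainty
print(hicks_law_rt(3))  # 2 bits
print(hicks_law_rt(7))  # 3 bits
```

On this view, each extra bit of stimulus uncertainty costs a fixed increment of processing time, which is what made the limited-capacity-channel metaphor so attractive.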
A related source of excitement during the early 1950s was the development of cybernetics, stimulated by attempts to optimize the automatic control of weapons. In North America its most influential advocate was Norbert Wiener (1950), whose book The Human Use of Human Beings speculated on the possible social implications of such developments. In Britain, the most influential cyberneticist was probably W. Grey Walter, a very creative and persuasive physiologist, whose inventions ranged from developing methods of using electroencephalography (EEG) to locate epileptic foci by triangulation, to inventing a machine called ‘the tortoise’ that was capable of searching a room in order to find a source of electricity to recharge its batteries (Walter 1953). One advantage of this ingenious toy was that it allowed one to refute the objection to the concept of purpose made by many philosophers at the time. If a simple machine could show purposeful behaviour, why not people and animals? In my own case, however, the strongest influence ultimately was one that I did not encounter until I had left UCL and gone to Cambridge, where I encountered the ideas of Kenneth Craik (1943) on the development of models of human behaviour based on computing and the information-processing metaphor. I will return to this later.
When I arrived at UCL, it was in the process of transition from a department chaired by Sir Cyril Burt, emphasizing the psychometric study of intelligence, to one chaired by Roger Russell, an American experimental psychologist who had worked in London during the war. He had appointed a bright and enthusiastic group of young lecturers, most of them pre-PhD, who managed to convey a sense of the excitement in the field.
We learned about Bartlett's concept of schema, which was respectfully but sadly dismissed as probably untestable, although Carolus Oldfield in Oxford subsequently suggested that computers might be programmed to simulate schemata, a view also proposed by George Miller, and of course subsequently developed by Schank and Abelson (1977).
The question of the testability of theories loomed large at the time. A. J. Ayer's (1936) Language, Truth and Logic presented a readily accessible version of the approach to philosophy taken by the Vienna Circle of logical positivists, arguing that a theory that was not verifiable was not meaningful. Gilbert Ryle's (1949) book The Concept of Mind was also influential in applying the approach to philosophy, based on a careful analysis of everyday language, that was dominant in Oxford at the time and for many years afterwards. Although this was not, I think, a very constructive approach to psychology in the long term, it did at that time provide a very useful training in the need to avoid conceptual traps set by the unthinking use of language.
There was also a great deal of interest in the philosophy of science at the time and how it related to psychology. We were presented with two contrasting approaches, one by Braithwaite (1953), who took Newton's Principia as a model, and appeared to evaluate a science by the extent to which it fitted this mathematico-deductive pattern. The other, rather more modest, approach was presented by Stephen Toulmin (1953), who argued that theories were essentially like maps—useful as ways of capturing what we know, steering us through the world, and helping us develop better maps. I myself was, and have remained, a Toulmin cartographer.
I was therefore fortunate to arrive at UCL at an exciting time, into a rejuvenated department that combined a range of British approaches to psychology with those of teachers who were refugees from Europe, such as Hans Eysenck, who was developing his psychometrically based approach to personality at the time, and Karl Flugel, a charming psychoanalyst who taught us about pre-war European psychology. Among other things, he wrote a book on the psychology of clothes, proposing that in future people would wear rather fewer of them. On his death, he was commemorated on the front page of one of the tabloid newspapers as ‘Author of the brave nude world dies’. Because of Roger Russell's influence, we also got a good grounding in a range of North American approaches to psychology, which I subsequently found invaluable, particularly when I later became involved in the controversy over trace-decay versus a stimulus-response interference interpretation of short-term forgetting.
After graduating from UCL, I spent a year at Princeton, where I learned more about the history of psychology from Carroll Pratt, a psychophysicist who had been trained as an introspectionist by Titchener. Dr Pratt was responsible for ensuring that all graduate students could translate psychological papers from both French and German. The latter involved retranslating a page from a translation into German of one of Titchener's introspectionist tomes, which taught us such useful vocabulary as Eindringlichkeit, ‘thingness’. I also learned to run rats in Skinner boxes and on jumping stands, failed psychometrics, but passed psychoanalysis, and got by on my UCL training, being duly awarded an MA. I spent the summer in Los Angeles, paid by the US Office of Naval Research to do a literature search for research on human-computer interaction (there wasn't any), before returning to England, where I hoped to do a PhD on partial reinforcement in rats. Instead, after a summer on a grant from Guinness concerned with finding some positive effects of alcohol, I found myself in 1958 with a job investigating postal codes at the MRC APU in Cambridge, experiencing at first hand what has subsequently been called the cognitive revolution.
Although the term ‘cognitive revolution’ is widely used, opinions differ as to where and when it started. My impression is that, in North America, the seminal events are seen as Chomsky's (1959) review of Skinner's (1957) book Verbal Behavior and Neisser's (1967) book Cognitive Psychology. In the UK, my impression is that Skinner's views on language were never taken very seriously, and I see Chomsky's work as an unfortunate but temporary distraction from the scientific study of language.3 Neisser's book was certainly very important. It unified a wide range of exciting new developments under the term ‘cognitive psychology’, brilliantly conveying the excitement of the new field. However, as Neisser himself makes clear, he was able to do so because such developments were already taking place. They had their roots in the 1940s and 1950s, and one place where such roots developed was in Cambridge at the MRC APU.4
A seminal figure was the first director of the APU, Kenneth Craik, tragically killed in a cycling accident in 1944. Craik was a remarkable scientist whose book The Nature of Explanation (Craik 1943) introduced the concept of the model as an approach to theory development, and who saw the potential of the computer as a way of developing such models. Digital computers were just being developed, but analogue computers were available, and were used by Craik to model empirical data from gun-aiming in what was probably the earliest computational model in experimental psychology, published after his death (Craik and Vince 1963).
When I joined the APU in 1958, it was clear that Craik's and related ideas permeated the unit. They were reflected most coherently in Donald Broadbent's (1958) book Perception and Communication, which was published just before I arrived. Donald had just become director, and must have found rather tedious my constant attempts to buttonhole him to discuss this view or that claim from this exciting new book; ‘friends’ told me he regarded me as rather immature. I still am. My badgering was concerned largely with theories of vigilance, though in retrospect I have been influenced much more by Donald's model of short-term memory.
I recently decided to re-read the relevant section of Perception and Communication, and found it surprisingly hard going. It reflects detailed arguments that attempt to make sense of a complex range of data on attention and memory, using a conceptual framework that was still developing. His model includes two components, the p system and the s system. Somewhat confusingly, the p system involves serial and the s system parallel processing. I suspect the confusing labels reflect the fact that the terms serial and parallel processing were not common at the time, and that p probably refers to perception and s to storage. A further complication came from the fact that the initial diagram in the 1958 book was printed upside down, turning the p system into a d system. It is only in the final chapter summary that all becomes clear.
Information is held in a short-term store with a very limited time span. From this store it may be passed selectively by filter, through some mechanism of limited capacity from which it is returned to the store…Only information that passes the filter can be stored for long periods
(Broadbent 1958, p. 242)
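The summary quoted above can be caricatured in a few lines of code. The sketch below is entirely my own toy rendering of the filter model, with invented names and parameters; it is not anything from Broadbent's book, but it captures the flow he describes: parallel registration into a decaying short-term store, a selective filter, and a serial limited-capacity channel feeding long-term storage.

```python
class FilterModel:
    """A toy sketch of Broadbent's (1958) filter model.
    All class and parameter names here are illustrative inventions."""

    def __init__(self, decay_steps=3):
        self.decay_steps = decay_steps
        self.short_term = []   # (item, age) pairs; limited time span
        self.long_term = []    # only items passing the filter arrive here

    def perceive(self, items):
        # Parallel registration: everything enters the short-term store.
        self.short_term.extend((item, 0) for item in items)

    def step(self, attend_to):
        # The filter selects one channel; the limited-capacity channel
        # passes a single item per step (serial processing).
        for i, (item, age) in enumerate(self.short_term):
            if item.startswith(attend_to):
                self.long_term.append(item)
                del self.short_term[i]
                break
        # Unattended items age and are lost after decay_steps.
        self.short_term = [(it, a + 1) for it, a in self.short_term
                           if a + 1 < self.decay_steps]

m = FilterModel()
m.perceive(["left:7", "right:3"])
m.step(attend_to="left")
# "left:7" passes the filter into long-term storage, while "right:3"
# lingers briefly in the short-term store and then decays away.
```

The point of the caricature is the asymmetry Broadbent insisted on: selection happens before, not after, the limited-capacity stage, so unattended material survives only as long as the short-term trace lasts.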
Neisser (1967) explains Broadbent's model, by then further developed, much more clearly—a model that continues to influence the field, as elaborated by Atkinson and Shiffrin (1968), and in my own work (Baddeley 2007; Baddeley and Hitch 1974).
Although the work of Broadbent and the APU was influential, as Neisser (1967) illustrates, similar ideas were developing elsewhere, typically in laboratories with an applied link, notably Bell Labs, with the work of Sperling (1960) and Sternberg (1966) being highly influential. Within Europe, the link between information-processing theory and its practical application was very well represented by the TNO Laboratory at Soesterberg in the Netherlands, as described by Andries Sanders in Chapter 19.
Within the US university sector, the closest in spirit to the APU was the group led by Paul Fitts at Ohio State. Sadly, Fitts died at an early age, but fortunately the tradition was carried on by his young colleague Michael Posner, who was, of course, instrumental not only in promulgating the information-processing approach within mainstream experimental psychology, but also in developing the next major revolution in the field, cognitive neuroscience. But that is another story, happily one that is told by Mike Posner himself in Chapter 15.
