A perception of perception Richard L. Gregory

DOI:10.1093/acprof:oso/9780199228768.003.0010

Abstract and Keywords

This chapter looks at developments in the study of perception during the past fifty years. It describes the different schools of perception, the study of illusions as a physiological and cognitive phenomenon, and so-called artificial perception. It highlights the discovery that the eye is as sensitive as theoretically possible and speculates on the future of perceptions research.
Keywords: perception, illusions, eye, research, psychology
It is hard to know which early experiences initiate lifelong interests. My father was an astronomer, who spent most of his life measuring distances of stars to scale the space of the universe.1 I have tried to understand how visual space is scaled, for seeing sizes and distances of earthly objects. Perhaps it was looking through my father's telescopes that made me question what is really out there, beyond the flickering images of light captured by telescopes and eyes. And isn't the inside of our heads even more mysterious than the surrounding universe?
The science of psychology has its roots in philosophy and, like a tree, would die if it lost its roots. Yet present-day philosophers and psychologists are generally at odds. I read philosophy and psychology at Cambridge just after World War II, after nearly six years in the RAF (signals). This was 1947, when, sadly, Wittgenstein left Cambridge terminally ill, so I just missed him. I did attend Bertrand Russell's seminars.2 The lectures of C. D. Broad, Richard Braithwaite, and John Wisdom were truly memorable in their distinctive ways, and I remain in debt to Dr A. C. Ewing for criticizing my puerile weekly essays. I have continued to write no doubt puerile essays ever since, any rare exceptions reflecting Alfred Ewing's patience of sixty years ago. There seems to be no substitute for the Oxford and Cambridge tradition of individual tuition, with its rigour avoiding rigor mortis, though it is impossible with the student numbers of most universities. This challenge is being met with new technologies of communication, the remarkably successful Open University showing the way.
This post-war, post-Wittgenstein period of Cambridge philosophy is captured, with its fascinating characters, by David Edmonds and John Eidinow in Wittgenstein's Poker (2001). The title refers to a Moral Sciences Club meeting, when Karl Popper visited from London, speaking to this hallowed centre of Cambridge philosophy for the first and only time.3
Whether a poker was really raised against him is just one of many unanswered questions. Although as students we were concerned mainly with contemporary analytical philosophy, we did read some classics. The eighteenth-century empiricists—John Locke, George Berkeley, David Hume—excited interest in epistemology and perception, although the accounts of perception by contemporary philosophers were not so inspiring. For example, a coin's looking circular from one position and elliptical from another was explained by ‘sense data’: entities neither matter nor mind, supposed to exist between objects and eyes. (These strange entities were a product of Oxford rather than Cambridge philosophy!) Seeing this as unsatisfactory initiated a lifelong interest in how perceptions are related to objects, and in the significance of illusions. Wittgenstein did have very interesting ideas on perception, as on everything else. But I read him later, as his unpublished ‘Brown’ and ‘Blue’ books were closely guarded secrets, available only to John Wisdom's students. Philosophical Investigations, with its deep thoughts, including interesting discussion of vision and its ambiguities, was not published until five years later.
Reading experimental psychology in the third year, I was one of Sir Frederic Bartlett's last students, being greatly influenced by him then and revering his memory ever since. Sir Fred (as we called him) was a genuinely great man. It was through his influence that we escaped the Behaviourism of the time, by accepting that the mindful brain is knowledge driven and driving, within Bartlett's favourite phrase ‘effort after meaning’. So we escaped the tyranny of reflexes, seeing stimuli as informing rather than commanding behaviour. American Behaviourism was however useful, as it stressed the importance of objective methods for experiments, and provided concepts such as operant conditioning, which remain significant. But rejecting consciousness (largely for tactical reasons, to make psychology look more like physics and so scientifically respectable) was not only throwing out both the baby and the adult with the bath water, it was cheating. Behaviourists admitted to toothache, to finding criticism painful, and even to enjoying art. Behaviourism rejected not only consciousness, but also meaning. Bartlett's comment on the use of nonsense syllables for learning experiments was that they provide nice graphs, and so look ‘scientific’, but tell us nothing about psychology.4
This was in the Cambridge Psychology Department in the Downing Street laboratory site, then as now sharing a building with Physiology. Physiology was presided over by Lord Adrian, discoverer of the neural all-or-none code, and filled by luminaries with higher perches in the scientific pecking order than us in Psychology. Their work in vision was outstanding though confined to the retina, as the brain was practically inaccessible to physiological techniques at that time. There were, however, some physiologically plausible theories, especially the insights of Donald Hebb in Canada (Hebb 1949), suggesting active mechanisms for learning, that gave at least potential substance to Bartlett's dynamic ‘schema’ concepts of memory (Bartlett 1932).
Lessons from the experiences of psychologists in applying their knowledge during the recent war focused experiments and theories on the ‘human operator’, in control tasks such as flying and gun aiming, so experiments on tracking and anticipation were seen as important, and linked to attention and vigilance, as well as the limited capacity of information channels. Many experiments were carried out in the associated Applied Psychology Unit of the Medical Research Council—the APU—which moved at that time from the Psychology Department to a large house in Chaucer Road just outside central Cambridge.
Remarkably, it was found that vigilance would fail after as little as twenty minutes in service conditions, even while under genuine threat. Fortunately, laboratory experiments generally gave similar results to real-life conditions when subjects (now ‘participants’) entered into the game in imagination. Indeed, game playing was a major interest of psychologists, as well as economists, and there were simulators of many kinds. From the simple but effective Link Trainer, flight simulators became ever more elaborate until they could cost more than the aircraft they simulated. As an example of applied projects, I was seconded to the Royal Navy at Portsmouth for a year to run experiments on improving escape procedures from stricken submarines, following the Affray disaster in which two crews were lost. The conditions were simulated in a large pressure chamber with controlled atmosphere, gradually decreasing oxygen and increasing carbon dioxide, to find how long the crew could wait to be found by surface ships before attempting to escape one person at a time from the gun-tower hatch. If a person passed out or died, the remaining crew would be trapped. I designed, and with my technician built, a printing time-event recorder we called Thoth (after the Egyptian god of wisdom and language), to record dummy escape performance over ten or so hours. The inspiration was avoiding the labour of reading off marks on miles of moving paper.
A key figure was Kenneth Craik (1914–1947), who tragically died in a cycle accident outside his Cambridge college on the last day of the war. His ideas lived on, and his presence was felt in the Psychology Department for many years afterwards. Craik undertook experiments of lasting value on visual performance, including dark adaptation. It was found that the eyes could be fully dark adapted in normal lighting by wearing red filters—especially useful for submariners. Very differently, he wrote an essentially philosophical book, The Nature of Explanation (Craik 1943), suggesting that perception works with physical ‘internal models’ in the brain, representing the world of objects. This simple idea had lasting impact, though we would see them more as symbolic software.
Wartime technology was important for experimental techniques (we used to build our own apparatus from aircraft components, such as electrical relays and uniselector switches), as well as suggesting theoretical concepts. The new ideas for transmitting and processing information had major effects on theories of brain function, first from analogue devices of cybernetics, considerably later from digital devices which transformed computing and much of technology. Being fast though with slow components, parallel-processing analogue systems look much more like the brain. Specialized analogue processors may not be dead, and certainly biological feedback controls of cybernetics are essential for life.
A recurring theme was localization of function. Oliver Zangwill was a founder of localizing brain functions in Neuropsychology, working originally with wartime head injuries. I thought about localization of function from wartime experience of electronics, suggesting that it is not logically possible to localize functions without knowing what they are, which means understanding how the machine or brain works. For example, one can say, dangerously like the phrenologists, that memory is in the parietal and visual processing is in the occipital cortex; but it is functions producing memory storage and vision that matter, and when interactive the functions cannot be simply localized, for they result from the activities of many components. Further, these functions are sure to be very different from what they produce, at least if electronic systems are any guide to the brain.
Famously, Karl Lashley claimed, from the failure of ablation experiments to find specialized regions, that the brain works by ‘mass action’. This was like assuming from not seeing in a fog that there is nothing to be seen. I wrote several papers on these issues (Gregory 1958), sometimes taken as criticisms of colleagues' ablation experiments, but this was far from intended, as I simply tried to point out that conceptual models of how systems work are necessary for interpreting such experimental results. An analogy was localizing functions in a radio from the effects of removing components. The radio might howl when a resistor is removed—but it does not follow that this component was a ‘howl inhibitor’. The rest of the circuit can acquire new properties when a part is removed, as it is now a different circuit. Negative feedback can easily become positive, changing an amplifier into a howling oscillator. Where the brain is modular, these problems are not so severe—but the frontal lobes?
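The howling-radio analogy can be sketched numerically. The toy first-order feedback loop below uses illustrative constants, not values from any real circuit: with the feedback term in place the output settles to a steady value, while flipping the sign of the loop, as when the ‘resistor’ is removed, makes the very same circuit run away.

```python
def amplifier(feedback, steps=200, gain=10.0, x=1.0, dt=0.05):
    """Toy first-order model of an amplifier with feedback:
        dy/dt = -y + gain * (x - feedback * y)
    feedback > 0 : negative feedback; the output settles near
                   gain * x / (1 + gain * feedback).
    feedback < 0 : the loop has become positive; if gain * |feedback| > 1
                   the output grows without bound -- the circuit 'howls'.
    (Illustrative constants only, not a real circuit simulation.)"""
    y = 0.0
    for _ in range(steps):
        y += dt * (-y + gain * (x - feedback * y))
    return y

stable = amplifier(0.2)   # 'resistor' in place: settles near 10/3
howl = amplifier(-0.2)    # loop sign flipped: output blows up
```

The point of the analogy survives the simplification: the howl is a property of the altered circuit as a whole, not of the removed component.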
The electronics concept of random noise, limiting the sensitivity of detectors, was central to communications and radar, and became a central idea for thinking about sensory discrimination. Horace Barlow developed elegant theories based on retinal noise (Barlow 1956). Violet Cane and I tried to develop a signal/noise account of sensory thresholds (Gregory and Cane 1955), with the raised thresholds and slowing of behaviour associated with ageing, seen as due to increased neural noise, which we tried to measure (Gregory 1958). This idea does not seem to be generally accepted now (though at the age of 80+ I do seem to be noise-masked!). Measuring the effects of neural noise led to a speech-processing hearing aid (Gregory and Drysdale 1976), and to later papers with Alan Drysdale and Tom Troscianko. Unfortunately, the hearing aid was not manufactured.5 It was exceedingly difficult to get university laboratory work accepted by industry, but this has greatly changed.
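The signal/noise account of thresholds can be put into a toy Gaussian model (the criterion and noise values below are arbitrary, chosen only for illustration): a signal is detected when signal plus internal noise exceeds a criterion, so increased neural noise raises the signal strength needed for any fixed detection probability.

```python
import math
from statistics import NormalDist

def detection_prob(signal, noise_sd, criterion=1.0):
    """Probability that signal + Gaussian internal noise exceeds a fixed
    criterion, i.e. P(X > criterion) where X ~ N(signal, noise_sd)."""
    z = (criterion - signal) / noise_sd
    return 0.5 * math.erfc(z / math.sqrt(2))

def threshold(noise_sd, criterion=1.0, p=0.75):
    """Signal strength needed for detection probability p.
    For p > 0.5 this rises linearly with the noise level: more internal
    noise means a stronger signal is needed to reach the same performance."""
    z_p = NormalDist().inv_cdf(1 - p)  # z with P(Z > z_p) = p
    return criterion - z_p * noise_sd
```

On this account, the raised thresholds of ageing fall out directly: doubling `noise_sd` raises the 75%-correct threshold, with no change to the signal itself.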
Ideas of probability and statistics were seen as fundamentally important for understanding perception, as with Claude Shannon's (Bell Labs) mathematical theory of communication (Shannon and Weaver 1949). A notable Cambridge contribution was W. E. (William) Hick's (1952) ‘rate of gain of information’ experiment, producing Hick's Law: that choice reaction time increases with log2(n + 1), where n is the number of available choices.6 These were very exciting ideas, dramatically changing psychological thinking, although they tended to divide the subject into many specialized ‘disciplines’ without shared aims or paradigms. This had the merit that as a ‘psychologist’ one could work on almost anything!
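Hick's Law is easily sketched; the intercept and slope below are illustrative values, not fitted data, and the +1 inside the logarithm is often read as allowing for the ‘no signal’ alternative.

```python
import math

def hick_rt(n, a=0.2, b=0.15):
    """Predicted choice reaction time (seconds) for n equally likely
    alternatives, per Hick's Law: RT = a + b * log2(n + 1).
    a (intercept) and b (seconds per bit) are illustrative, not fitted."""
    return a + b * math.log2(n + 1)

# Reaction time grows logarithmically, not linearly, with the choices:
for n in (1, 3, 7):
    print(n, round(hick_rt(n), 3))
```

Each doubling-plus-one of the alternatives adds one bit, and so one fixed increment b to the predicted reaction time, which is why the law was read as a ‘rate of gain of information’.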

‘Schools’ of perception

Psychology at that time was riddled with violently opposed and hotly defended ‘schools’ of, one has to say, beliefs. Indeed, psychology was hardly secular as evidence was tenuous (and often ignored), so beliefs were held with the insular fervour of religions. This is not, indeed, unknown in science, especially when there are large uncertain questions with small chances of finding reliable answers. The history of cosmology is not so different. And the history of medicine? The many schools of psychotherapy cartoon the science of mind.
Perception has the benefit of striking phenomena, many readily measured. So, although perceptions are ‘subjective’, they attract objective experiments. Paradoxically, illusions are among the most attractive phenomena—and have been ever since Aristotle, who discussed several and appreciated their significance. As dreams were the stuff of psychoanalysis, so illusions woke philosophy and science to realities of perception. The fact that perceptions can depart so clearly from physical reality challenged established ways of thinking, and showed that, though subjective, the mind can be the object of experiments. Although perception was not the major topic of research that it is today, we were introduced as students to the wonderful phenomena of illusions, to stereoscopic pictures, to the delights of colour and sound, and to attempts to understand them.
Most broadly, there were ‘passive’ and ‘active’ accounts. ‘Active’ accounts included Gestalt theory, from pre-war German psychology (moved to America by quite direct pressure from Hitler); the ‘passive’ side was the Direct Perception of the American psychologist James J. Gibson, at Cornell. It is important to include the wonderful visual demonstrations of Adelbert Ames (Ittelson 1952). Neither Ames nor, as we shall see, Helmholtz was ashamed of introducing ‘childish’ illusions into serious science for casting light, with a light touch, into studies of vision. For many psychologists and philosophers, phenomena of illusions showed how tenuous are the relations between the physical world and perceptions. Gibson, however, tried to deny illusions (Gibson 1950), for they should not occur if perceptions are related directly to objects. When pushed, he would say they only occur in artificial laboratory conditions, and so can be ignored. Yet normal conditions are rich with illusions. (Compare the vertical mast of a boat to when it is lying horizontal on the ground: the vertical appears much longer.)
Gibson had a remarkable following, with, one has to say, religious fervour, although a vocal minority of experimental psychologists (including myself) saw his ideas as setting back understanding, not only to before Helmholtz but back to the ancient Greeks, who did not appreciate that eyes have optical images (read, though not seen, by the brain). Gibson abandoned Helmholtz's Unconscious Inference from the evidence of retinal images, and even denied retinal images altogether. (He was quite upset when shown a photograph of a retinal image, though he did finally accept them, if not their importance.) Although I was opposed to his ‘school’ of psychology, I liked Gibson and his family immensely, and he did important work and wrote influential books, though I do think his philosophy was wrong. Science benefits from people who are clearly wrong!
The founding father of psychological–physiological experiments on vision and hearing is the German polymath (physiologist, psychologist, physicist, and philosopher), Hermann von Helmholtz (1821–1894). Helmholtz's immense contribution was less in evidence fifty years ago than now. With many others, I see Helmholtz as the Master.
Whereas Gibson thought of perceptions as related directly to objects, Helmholtz and his followers saw the retina and other physiological complexities as lying between objects and perception, making vision indirectly related to the world of objects. As physiological links of neural channels are present for all the senses, this indirectness applies to them all, including touch. The notion is that sense organs are (in the language of electronic instruments) transducers, providing coded messages to the brain via neural channels, which introduce considerable delay, as measured by Helmholtz in 1850. As sensory information comes from the recent past, perception cannot possibly be ‘direct’; yet behaviour is in real time—making ping-pong possible. Perceptions are richer than sensory data, and are predictive—into the immediate future, and to many unsensed features of objects. One might say that the brain is a knowledgeable detective, working along the lines of Sherlock Holmes, using small clues to suggest and test working hypotheses, which are our reality (Gregory 2007). I like to think of perceptions as hypotheses, essentially like hypotheses of the sciences (Gregory 1958, 1981). Both are greatly affected by probabilities; both are subject to fashion; both are tenuously related to ‘truth’. Both, indeed, are bedevilled and enriched by illusions.
This is very different from J. J. Gibson's direct ‘pick up of information’, especially as his notion of information is different from that of Helmholtz and his followers, which at least implicitly follows Thomas Bayes' eighteenth-century formulation of statistical inference, for selecting and testing hypotheses. Only recently has Bayesian theory been seen as a useful model for thinking and for perception. All Helmholtzian perceptual theory is, at least implicitly, Bayesian. I came to think in this way from Bertrand Russell's lectures at Cambridge, and from his book Human Knowledge: Its Scope and Limits (Russell 1948), writing an essay along these lines initially when a student (Gregory 1952/1974).
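The Helmholtzian–Bayesian idea can be sketched in a few lines: a posterior over perceptual hypotheses, proportional to likelihood times prior. The numbers below are purely illustrative, using the earlier coin example: an elliptical retinal image is about equally compatible with a slanted circular coin and with a genuinely elliptical object, so the prior decides what is seen.

```python
def posterior(priors, likelihoods):
    """Bayes' rule over a discrete set of perceptual hypotheses:
    P(H | data) is proportional to P(data | H) * P(H)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

# Illustrative numbers, not measurements: circular coins are common,
# face-on elliptical objects rare; both fit the elliptical image well.
priors = {"circular coin, slanted": 0.95, "elliptical object, face-on": 0.05}
likelihoods = {"circular coin, slanted": 0.9, "elliptical object, face-on": 1.0}

print(posterior(priors, likelihoods))
```

The winning hypothesis is the perception; on this view an illusion is simply a case where the priors, normally so useful, are misleading.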
In the Special Senses laboratory in Cambridge, we worked on a variety of topics, ‘pure’ and ‘applied’, including perceptual problems anticipated for the moon landing of 1969, for the US Air Force, just before NASA took over. The American government was incredibly generous in funding European research, getting science and so much else going after the war. This allowed us to build a simple space simulator, an electrically driven carriage running along the darkened corridor of the laboratory, with electronically linked displays for measuring Size Constancy dynamically. There was also a large parallelogram swing introducing small acceleration forces. We measured Constancy by shrinking the display while approaching, and vice versa for receding, so it appeared constant—the required change giving a measure of visual scaling. (No required change would indicate 100% scaling.) The principle of visual scaling, going back at least to Descartes, became a key concept for explaining many visual illusions, when set inappropriately.
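The logic of constancy scaling can be sketched as follows. The model and the `scaling` parameter are illustrative only, not the actual simulator procedure: perceived size is taken as the retinal angle multiplied by the brain's estimate of distance, blended with the raw retinal measure according to how much scaling is applied.

```python
import math

def retinal_angle(size, distance):
    """Visual angle (radians) subtended by an object of a given size."""
    return 2 * math.atan(size / (2 * distance))

def perceived_size(size, distance, scaling=1.0):
    """Toy model of size constancy (a sketch of the principle, not
    Gregory's measurement procedure).  scaling = 1.0 means full (100%)
    constancy; scaling = 0.0 means size is read straight off the retina."""
    angular = retinal_angle(size, distance)
    no_scaling = angular              # raw retinal measure
    full_scaling = angular * distance # fully distance-compensated
    return (1 - scaling) * no_scaling + scaling * full_scaling

# With full constancy a 1 m object 'looks' about the same at 2 m and 10 m,
# though its retinal image at 2 m is nearly five times larger:
near, far = perceived_size(1.0, 2.0), perceived_size(1.0, 10.0)
```

Under full scaling no change to the display is needed as the carriage approaches, which is exactly why ‘no required change would indicate 100% scaling’ in the corridor experiment.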

Illusions

Which phenomena are ‘physiological’ and which ‘cognitive’? This seemed a fundamental issue for physiological-psychology (or at least for the hyphen between these words). It was generally thought that distortion illusions, such as the Müller-Lyer ‘arrows’, were due to peripheral physiological effects such as lateral inhibition's neural sharpening of borders. This was important for sensory physiology, but for various reasons I did not believe that these distortions were related so directly to peripheral physiology. It seemed to me that we should look at what the physiology is doing in order to achieve perception of the sizes and shapes of objects, and then ask whether these procedures can work appropriately in situations of illusion. In particular, how can procedures for three-dimensional objects work correctly for flat representations in pictures?
There are dangers, in any science, of seeing with some sort of tunnel vision. The tunnels change over time and also place, as they are set by what is prestigious at a given time and place. For neuroscience there is the temptation to see what is overtly visible as most ‘real’ and most reliable, such as nerves one can see with a microscope, and now brain regions shown as active by coloured regions of functional magnetic resonance imaging (fMRI) pictures. Of course, these are very important, but so too are less tangible, more abstract, features and phenomena, such as reflex arcs, servo loops, inhibitors, and activators. Indeed, brain regions seen as active by fMRI may be either inhibiting or activating, so they are ambiguous, to be interpreted by conceptual models of what may be going on.
An interesting example of tunnel vision in the recent history of perception is illusory or ‘subjective’ contours. These were described and illustrated in the first years of the twentieth century by Schumann, but ignored almost entirely for half a century, until Gaetano Kanizsa's beautiful examples in Scientific American (1976) attracted worldwide attention to these wonderful visual phenomena. Schumann's original example appeared in the most-read text—R. S. Woodworth's Experimental Psychology (1938)—as Figure 191 on page 637, yet was ignored. It showed a ghostly rectangle with clear contours, though without brightness differences. The physicist Ernst Mach had appreciated in 1865 that contours are not simply changes of brightness or colour. Yet, for a long time, physiology and psychology viewed them through a tunnel of peripheral neural interactions, these being a popular research topic. People now see them as striking examples of Bayesian inference. I was not alone in thinking along these lines, following Kanizsa's striking examples (Gregory 1972). It is unlikely that the missing slices of the ‘cakes’ would line up exactly—more likely there is some nearer, triangle-shaped, occluding surface, which is conjured up by Bayesian inference and seen, though there is nothing there. It is visible fiction. Of course there will be an underlying physiology, but this account of what the physiology is doing is a useful explanation, although not complete.
I spent a lot of time thinking of distortion illusions as due to constancy scaling, when set inappropriately, by perspective or other clues signalling depth. This was different from physiological accounts or errors of neural signalling, as in lateral inhibition, for it concerned what perceptual processes are doing, for seeing objects in external space, rather than how the physiology works, or how it malfunctions. Although, of course, knowledge of physiology is essential, the action for explaining many phenomena is in what it is doing. Thus strategies can win or lose wars. Of course there must be weapons, but where they are directed is as important as what they can do. So, there are many kinds of illusory phenomena, which I tried to classify with a ‘Peeriodic Table’ (Gregory 2005). The major division is between ‘bottom-up’ signals from the senses and ‘top-down’ knowledge, for reading neural signals as evidence for what might be out there. Rules for reading objects from signals may be called ‘sideways’.7
This approach to considering perception and illusions started with the fortunate experience of studying a rare case of adult recovery from blindness at birth—SB—with my research assistant Jean Wallace (Gregory and Wallace 1962). Following a corneal graft operation, SB had surprisingly good vision, with surprising lack of the usual distortion illusions. After fifty-two years of blindness, SB could immediately see a great deal not only from his new eye, but also from his years of touch experience. Still in the hospital he could tell the time visually, and read upper-case letters, from previous touch experience of feeling the hands of a watch and letters engraved on blocks of wood, taught in the blind school. All of this suggested that perception is largely cognitive, knowledge based.
SB disliked his wife's face, and his own in a mirror! Mirrors did, however, fascinate him, as the opposite of blindness, touch without sight. As described in the book Mirrors in Mind (Gregory 1997) I came to see mirrors through SB's eyes, to realize how amazing they are, as indeed is vision itself. Our finding that immediately after receiving sight, and later, SB had only small or no distortion illusions suggested that these are cognitive phenomena. Inspecting the various well known distortion–illusion figures showed that though they appeared flat they were perspective drawings of objects or scenes in depth. Whenever distance was represented, illusory expansion occurred.8 The notion that distortions in pictures might be related to depth was not obvious at that time.
It is still sometimes resisted, perhaps because it implies and requires a particular way of thinking about perception. (It fits a Helmholtzian, but not a Gibsonian, approach.) A surprise was that size scaling could be set by clues to distance, even when distance was not seen, as when countermanded by the surface texture of the figure. This suggested that scaling can be set rather directly by depth clues, and also that there is more to perception than the conscious experience, so experiments are required to ‘see’ what is going on.
To identify bottom-up and top-down scaling experimentally, depth-ambiguous figures and objects (such as a wire cube) are useful, as they flip in seen depth without any change of bottom-up clues. These issues occupied my attention for several years, and still do. They do not seem trivial, if only because they are a magnifying glass to what the physiology is doing, as well as how it works. This can now be investigated with techniques of brain imaging. But MRI pictures of brain activity need related observations and experiments, with theoretical concepts to interpret them. Phenomena cannot speak for themselves!
The conclusion was that some phenomena of illusions are due to malfunctions of physiology, others, very differently, to misleading knowledge. These are very different kinds of cause, with different ‘cures’, although not all are easy to classify. I attempted to classify illusions in terms of Kinds and Causes. Why trouble with illusions? Many are highly suggestive phenomena, revealing principles of normal perception free from the restraints of the physical world. We learn to see from handling objects, but perception is not limited to experienced objects, as we can see paradoxes and even impossible fiction in illusions. Although probabilities are very important, remarkably we can see the impossible. This is the wonder of creative perception, allowing discoveries and empowering art.
It was a great privilege to work with the art historian Sir Ernst Gombrich. We set up a major exhibition at the Institute of Contemporary Arts in London, Illusion in Nature and Art, with a book of the same name (Gregory and Gombrich 1973). This included the distinguished neuroscientist Colin Blakemore's first published paper, ‘The baffled brain’, which remains interesting to this day, describing ‘physiological’ illusions arising from the properties of visual channels. I wrote a companion chapter, ‘The confounded eye’ (Gregory 1973). This introduced the Hollow Face—the concave back of a mask, which appears convex like a normal face. Page 84 says of a photograph of the Hollow Mask (Figure 34):
The nose is not sticking out, as it appears to be: it is hollow, going inwards. This extremely powerful effect holds for any lighting, and against a great deal of countermanding sensory data – provided one hypothesis only is extremely likely. This is best demonstrated not with a photograph of, say, a hollow face; but with the hollow mould itself. It continues to appear as a normal face until closely approached, with both eyes and full stereo depth information. When the observer then withdraws a little, it will suddenly return to appearing as a normal face – though he knows it is hollow. So we discover that intellectual knowledge of such perceptual situations does not always correct perceptual errors. In other words, perceptual hypothesis making is not under intellectual control. If the perceptually depth-reversed face is rotated, or if the observer moves round it, the face apparently rotates in the wrong direction. Motion parallax is being interpreted according to the false hypothesis – to generate a powerful illusion, which is improbable. So both the texture depth data from the hollow face, and the resulting illusion of motion, are inadequate as data to correct the hypothesis – against the extreme improbability of a face being hollow.
Gregory 1973, p. 84
Appreciating this playing with probabilities surely gives us significant insights far removed from early stimulus-driven accounts of perception.
The Hollow Face may have been the first demonstration of this power of top-down knowledge. Although so dramatic, it took a surprisingly long time to be taken seriously and absorbed into perceptual theory. It has turned out to be a useful phenomenon for several experiments, including providing further evidence for David Milner and Mel Goodale's (1995) important notion of two cortical streams for visual processing—the dorsal stream for rapid unconscious behaviour (here, flicking targets on the hollow mask) and the ventral stream (slow hand tracing of the illusorily seen convex face)—streams which separate, the first being stimulus driven, the second fed by conscious perception. We may say that, much as bizarre phenomena of physics are keys to unlock secrets of matter, so illusions can reveal hidden processes of brain and mind.

Artificial perception

I left Cambridge in 1967, with the distinguished theoretical chemist Christopher Longuet-Higgins FRS (we were both Fellows of Corpus Christi College), to join Donald Michie in Edinburgh to help to start a new subject: Artificial Intelligence.
The dream of intelligent robots roused the imagination over forty years ago, to become serious research projects in America and Britain. The first AI department in Europe, at the University of Edinburgh, was run by Donald Michie, Christopher Longuet-Higgins and myself, the founding professors. A robot, called Freddie, was built by Steven Salter. Guided by its television-camera eye, it assembled parts into a model boat or a model car. It had some learning, with flexibility in its behaviour. This was only part of the work of the department, which included neural nets, game theory, linguistics, and machine-aided design, and it initiated the careers of a number of very talented people.
Perhaps naively and certainly optimistically, we thought that by making computers perceive and learn and think intelligently, we would discover the tricks of the brain. The computers of that time were not up to it and, with exceptions, neither were we. Programming was a difficult art, which I never mastered. The joke was that a PhD student at a distinguished American university was asked to spend a summer term programming a computer to recognize objects from signals provided by a television camera—his supervisor apologizing that this was too easy and would not take all summer. Teams of brilliant computer engineers have been struggling ever since! Effective AI still looks fifty years ahead. But what we and others at that time discovered from relative failure was surprising and important: that the mind-brain is far more complicated and hard to understand than anyone had appreciated. The emphasis of most AI research was on algorithms (rules) for thinking and seeing. The idea was to describe mind by operating rules that might be carried out by brains or computers. This strategy had the great merit of making theories of mind explicit. So early AI served psychology better than the psychological theories of the time served AI. To the rescue came Kenneth Craik's notion of Internal Models—physical representations of sensed and imagined realities.
Some of us wanted to augment AI algorithms by developing computer models of mind along these lines. Rather as a joke, from irritation with the clumsy computers of the time, Christopher Longuet-Higgins made a model car steered with a slowly rotating cardboard cutout of the path it was to follow, the cutout being its ‘internal model’. This joke worked, as it annoyed the computer buffs who claimed too much, and demonstrated very simply that sensed inputs are far more powerful when used, not directly for behaviour, but via even simple predictive internal models. I came to this from a rather different direction, from disagreeing with J. J. Gibson's account of perception as ‘direct pick-up of information’. For I saw perceptions as brain-created hypotheses, predictive into the immediate future, so that behaviour anticipates what is likely to happen and works in real time in spite of physiological delays in sensory signals and commands. Perceptions are also predictive of many non-sensed properties of objects, though not always correctly. A perhaps uncomfortable consequence for psychology is that, as perceptions are not at all closely linked to stimuli, they are not readily described or explained with psychophysics. Perception is creative history.
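The advantage of even a crude predictive internal model over direct use of delayed sensory input can be sketched in a few lines. This is a hypothetical toy, not anything from the Edinburgh work: the function name `track` and all its parameters are invented for illustration. A tracker following a constant-velocity target with delayed observations lags badly if it reacts to the stale reading, but cancels the lag entirely if it extrapolates a simple position-plus-velocity model over the delay.

```python
def track(delay_steps, use_model, n_steps=200):
    """Track a target moving at constant velocity when sensed
    positions arrive delay_steps late. With use_model=True the
    tracker extrapolates a simple internal model (position plus
    estimated velocity) forward over the delay; otherwise it acts
    on the stale reading directly. Returns mean absolute error."""
    target = [0.1 * t for t in range(n_steps)]  # constant-velocity target
    err = 0.0
    for t in range(delay_steps + 1, n_steps):
        sensed_now = target[t - delay_steps]        # delayed observation
        sensed_prev = target[t - delay_steps - 1]
        if use_model:
            velocity = sensed_now - sensed_prev     # model: estimate velocity
            estimate = sensed_now + velocity * delay_steps  # predict ahead
        else:
            estimate = sensed_now                   # react to stale data
        err += abs(target[t] - estimate)
    return err / (n_steps - delay_steps - 1)
```

For constant velocity the prediction is exact, so the modelled tracker's error is essentially zero while the direct tracker lags by velocity times delay at every step — a minimal version of the point the cardboard cutout made physically.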
In 1970 I left Edinburgh for the University of Bristol, with a laboratory in the medical school—the Brain and Perception Laboratory, funded by the Medical Research Council. Here, we more or less continued the Cambridge philosophy, with an emphasis on perceptions as hypotheses of brain and mind. This had clinical implications, initially for Ken Flowers' work on Parkinson's disease, using eye-hand tracking with a moving target and a joystick. The idea was to continue tracking while the target was made invisible. Although no one knew it then, as the secret was guarded for decades, his father Tommy Flowers had built Colossus, the computer that broke the German Lorenz (‘Tunny’) codes in World War II.
Human visual phenomena we studied included the effects of isoluminance—neighbouring regions of different colour with the same luminance—which have implications for the evolution of vision, as colour came late in the evolution of the mammalian brain. It might be said that primates have ‘colour by numbers’ added to ancient monochromatic form perception. With only colour there are striking losses of motion and form perception, and stereo vision (especially for random-dot stereograms) is much impaired. So is visual stability, because borders and edges become uncertain and labile, with loss of what printers call ‘registration’ at borders. This led us to suggest that normally colour regions are locked to common luminance borders; these are missing at isoluminance, so ‘border locking’ fails. This seemed to relate to one of our favourite distortion illusions—the Café Wall—found in the tiles of a nineteenth-century café near our laboratory in Bristol. From models with sliding parts we derived laws of the Café Wall illusion, relating to characteristics of neural channels. The dramatic distortions disappeared at isoluminance, presumably because border locking is lost. This is not a ‘cognitive’ but rather a ‘physiological’ illusion; the various channels for position, motion, and stereo have different characteristics, which we could measure (Gregory and Heard 1983).

The future?

Aristotle moved seamlessly from physics, to natural history, to psychology. I like this. Much of engineering and even quantum physics is relevant to how the brain works. In a classic experiment, Hecht, Shlaer, and Pirenne (1942) measured the minimum number of quanta that could produce a conscious flash of light. It turned out that, apart from not being perfectly transparent, the eye is as sensitive as theoretically possible, responding to a single quantum. Current brain sciences hardly notice quantum physics, although electronic components such as transistors are quantum devices. This needs watching.
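The logic of that experiment can be made concrete with the standard Poisson model it rested on: quantal absorptions in a dim flash follow Poisson statistics, and a flash is "seen" when at least some threshold number of quanta are absorbed. The sketch below is illustrative only — the function name and the particular numbers are mine, not from the 1942 paper, though the frequency-of-seeing formula is the textbook one.

```python
from math import exp, factorial

def prob_seeing(mean_quanta, threshold_k):
    """Poisson frequency-of-seeing model in the Hecht-Shlaer-Pirenne
    style: the flash is seen if at least threshold_k quanta are
    absorbed, where the number absorbed is Poisson-distributed
    with mean mean_quanta (proportional to flash intensity)."""
    p_below = sum(exp(-mean_quanta) * mean_quanta**j / factorial(j)
                  for j in range(threshold_k))  # P(fewer than k absorbed)
    return 1.0 - p_below

# The steepness of the seeing-probability curve against intensity
# depends only on threshold_k, which is how fitting measured curves
# let the experimenters infer a threshold of just a few quanta.
```

Because the curve's shape is fixed by the threshold alone, comparing it to observers' frequency-of-seeing data gave the quantal threshold without needing to know the eye's absolute absorption efficiency.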
The human visual system is just one of many possible engineering solutions, and many have been tried through evolution. We had a lot of fun investigating the single-channel scanning eye (as it turned out) of a microscopic copepod, Copilia quadrata, at Naples (Gregory et al. 1962). Other scanning eyes, with a few more channels, have been discovered and studied more fully by the biologist Michael Land (Land and Nilsson 2001).
Kenneth Craik's notion of brain models (Craik 1943) can be realized very usefully in engineering terms. This occurred to me in a darkroom at Cambridge, while thinking about the difference between the two eyes' images in stereoscopic vision. In the enlarger, I placed a photographic negative from one lens of a stereo camera as a sandwich on a positive transparency from the other lens. The sandwich gave a difference picture, as the dark regions of the negative occluded the transparent parts of the positive where they matched, leaving transparent difference regions. It struck me that this could be used to improve astronomical telescope images degraded by atmospheric disturbance. For if the fluctuating image was projected, through a long-exposure photographic negative, to a photomultiplier opening the shutter of a second camera when the image matched its negative, the shutter would open when the disturbance was small. So the second camera should build up a better picture, each time its shutter opened, than the first unsampled photograph.
We tried this in the laboratory with simulated disturbance (Gregory 1964), then, with the engineer Steven Salter, on the smallest, 100-year-old refractor in the Cambridge Observatory, and on a large telescope in New Mexico. This was before there were adequate computers, and there were annoying problems with tracking, but the method is now used very effectively by amateur astronomers with CCD cameras and graphics computers.9 As this system made its own decisions in the light of evidence, it was possibly an early example of AI. We see the power of internal models most dramatically in satellite navigation (GPS), the map being essential for making the received data effective.
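In modern terms this is the frame-selection step of 'lucky imaging': score each short exposure against a reference, keep only the frames taken in moments of low disturbance, and average those. The sketch below is a hypothetical miniature of that idea — the function name, the one-dimensional "images", and the scoring are all invented for illustration, with the long-exposure reference playing the role of Gregory's photographic negative.

```python
def lucky_average(frames, reference, keep_fraction=0.2):
    """Frame selection in miniature: score each frame (a list of
    pixel values) by squared mismatch against a reference image
    (analogous to the negative opening the shutter), keep the
    best-matching fraction, and average only those frames."""
    def mismatch(frame):
        return sum((f - r) ** 2 for f, r in zip(frame, reference))
    ranked = sorted(frames, key=mismatch)           # best matches first
    kept = ranked[:max(1, int(len(ranked) * keep_fraction))]
    n = len(kept)
    return [sum(col) / n for col in zip(*kept)]     # pixel-wise mean
```

Averaging only the well-matched frames recovers a sharper picture than averaging everything, which is exactly why the second camera's sampled photograph beat the first unsampled one.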
Hints and analogies from engineering are very useful but stop short of consciousness, for so far artificial intelligences are zombies. For all the clever people thinking about consciousness, we do not have a handle on the qualia of sensations. John Locke suggested that sensations are tokens of physical events—although the sky looks blue, it is no more blue than the word ‘cat’ is like a cat. We need more insights to unlock the outsight of vision—from this understanding perhaps to make conscious robots. I hope we will recognize their artificial qualia, and be kind to sentient machines.
As the future rests with coming generations, education is all-important. The evident role of interactive experience in learning to see suggests that hands-on learning matters for schools. Accepting this, we started and ran the Exploratory hands-on science centre in Bristol. This was the first in Britain, following Frank Oppenheimer's very successful Exploratorium in San Francisco.10 Two million children and adults visited our Exploratory during its final ten years, before it was taken over by other people, with a rather different philosophy and major funding. Whether a change is advance or retreat can be hard to judge for historical events—impossible for the future. It is no more than a hope that science will become central to human culture and reason will prevail.

References
