Tuesday, October 08, 2019

TECHNOLOGY
Computers Aren’t So Smart, After All

I was looking for some old articles after seeing so much written about AI recently.


During the "computer craze" of the 1950s and 1960s some people envisioned the machine replacing the human brain. It hasn't happened and, says the author, it probably never will. So we must still think for ourselves

FRED HAPGOOD
AUGUST 1974 ISSUE
In the late sixties a chess-playing computer program was written at MIT and was entered into some local tournaments, where it won a number of games and caught the interest of the local newspapers. I had been curiously following the portentous visions that arose out of articles on the "cybernetic revolution" and was still unsure what to make of the Computer. Since I play chess, this new program seemed to offer a chance to sample its mysteries firsthand. I called some friends at MIT, and they arranged for me to play MacHack, as the program was known.

The room in which the computers were kept lacked all signs of diurnal rhythm. There were no windows. The illumination was low, so as not to interfere with the phosphor screens. The only sound was the clatter of high-speed readout printers, and underneath that, the hum of air conditioners and circulators. People quietly came and went with perfect indifference to the hour. I found the scene—the rapt and silent meditations of the programmers hunched over their terminals, the background hum with its suggestion of unceasing activity, the hushed light, the twenty-four-hour schedules—subtly exhilarating.

I was shown how to code the moves and enter them into a terminal. The game itself began with a stock opening line: both the computer and I knew the standard chess moves, and so far as I could tell, to about the same depth. I had decided on what I thought would be a winning strategy. Any programmer, I reasoned, would try to make the positions which his program had to evaluate simple ones and would assign a priority to clarifying exchanges. I therefore set out to make the position as complex as possible, hoping that the machine would lose its way among the options and commit a common strategic blunder, entering into a premature series of exchanges that would end only by increasing the activity of my pieces. Instead, in a flurry of exchanges, I lost a pawn and nearly the game. The trick of playing with MacHack, I learned, is to keep the position free from tension. The program's strong point is tactics; it places priorities on piece mobility and material gain, and in the nature of chess these values generate local, tactical give-and-take.
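
The design Hapgood is describing, scoring each position by material and piece mobility and then searching the move tree a few plies deep for the best outcome, is the classic minimax recipe. Below is a minimal sketch of that recipe, emphatically not MacHack's actual code (Richard Greenblatt wrote MacHack in assembly for the PDP-6); it assumes the third-party python-chess library, and the piece values and the 0.05 mobility weight are arbitrary choices made for the example.

```python
# A minimal sketch of evaluation-plus-search in the spirit the article
# describes (priorities on material gain and piece mobility).
# NOT MacHack's code. Assumes: pip install python-chess
import chess

# Classical material values, in units of pawns.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

MATE = 1000.0  # sentinel score for checkmate


def evaluate(board: chess.Board) -> float:
    """Material plus a small mobility bonus, from White's point of view."""
    score = 0.0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    mobility = board.legal_moves.count()  # legal moves for the side to move
    score += 0.05 * mobility if board.turn == chess.WHITE else -0.05 * mobility
    return score


def minimax(board: chess.Board, depth: int) -> float:
    if board.is_checkmate():
        return -MATE if board.turn == chess.WHITE else MATE  # side to move is mated
    if board.is_game_over():
        return 0.0  # stalemate or other draw
    if depth == 0:
        return evaluate(board)
    values = []
    for move in board.legal_moves:
        board.push(move)
        values.append(minimax(board, depth - 1))
        board.pop()
    return max(values) if board.turn == chess.WHITE else min(values)


def best_move(board: chess.Board, depth: int = 2) -> chess.Move:
    """Pick the legal move whose minimax value is best for the side to move."""
    white = board.turn == chess.WHITE
    best, best_val = None, None
    for move in board.legal_moves:
        board.push(move)
        val = minimax(board, depth - 1)
        board.pop()
        if best is None or (val > best_val if white else val < best_val):
            best, best_val = move, val
    return best
```

A priority scheme like this also explains the behavior Hapgood goes on to describe: the material and mobility terms reward captures and active pieces, so in quiet positions, where neither term offers anything to grab within a shallow search horizon, the program has no stable sense of direction.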



So my strategy was to play away from the program's abilities and to steer the game into slow-paced, stable, balanced positions. Whenever I did this, MacHack's game seemed to become nervous and moody. The program would lose its concentration, begin to shift objectives restlessly, and launch speculative attacks. This is not an unfamiliar style; every chess club has some players—they are called "romantics"—whose joy is found in contact and tension, in games where pieces flash across the board and unexpected possibilities open up with each new move. Put them in slow positions, and, like MacHack, they grow impatient and try to force their game.

We played no more than five times; eventually, beating it became too easy. The winning formula was mechanically simple: develop cautiously, keep contact between the two sides restricted, let the pawns lead out the pieces. MacHack would always develop in a rush and send its knights and bishops skittering about the board trying to scare up some quick action; denied that action, its position would collapse in confusion. The only way to lose to MacHack, I concluded, would be to play as though the dignity of Man somehow required one to crush the machine in the first dozen moves. If, instead, one just played away from it, the computer would barrel by and fall in a heap. I was far more bored than I would have been playing a human of similar strength, and I came to feel that even if MacHack had been good enough to win most, or all, of its games I still would have felt I was wasting my time. In the middle of the nineteenth century, an enterprising showman hid a chess-playing dwarf in a cabinet and toured Europe, claiming that he had invented a chess-playing automaton. Large crowds were awed by the phony machine. My experience with MacHack suggested that the crowds must have come not only because the "automaton" appeared to be a machine but because the dwarf was a master, and could consistently win.

During the last two games I played, MacHack refused to give its moves when I was about to checkmate it. My curiosity was piqued by this sullenness, and I stayed, trying to wait the machine out and get a reply. MacHack just hummed at me. Finally a programmer, becoming interested in this delay, extracted the record of MacHack's deliberations. It had been working over the mate variations, just looking at them, over and over. "Must be a bug somewhere," the programmer said.
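
The programmer's shrug invites a guess at the mechanism. What follows is pure speculation on my part, not a reconstruction of MacHack: a move-selection loop that refuses any line scored as a forced loss, and responds by simply searching deeper, will never return a move once mate is unavoidable; it just re-walks the same losing variations, "over and over," exactly as the extracted trace showed.

```python
# Speculative illustration only, reusing minimax and MATE from the sketch
# above and assuming the program plays White. When every legal move leads
# to a forced mate, no move ever passes the filter, so the loop hums along
# forever, re-examining the same mate variations at greater and greater depth.
def choose_move_buggy(board: chess.Board, depth: int = 2) -> chess.Move:
    while True:
        for move in board.legal_moves:
            board.push(move)
            val = minimax(board, depth - 1)
            board.pop()
            if val > -MATE:  # refuse any move whose line ends in mate
                return move
        depth += 1  # "look harder" -- futile when the mate is forced
```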

Every culture has its juvenile embarrassments: misdirected enthusiasms which fail dramatically and in retrospect seem to say something humiliating about the civilization that pursued them. The great computer craze of the late fifties and the sixties is such a case. From the erecting of the machine, any number of respected thinkers derived a vision of society. Edward Teller foresaw an automatic world, ruled by machines. Gerard Piel, publisher of Scientific American, wrote and spoke about the "disemployment of the nervous system." C. P. Snow thought that automation would be a revolution with effects "far more intimate in the tone of our daily lives ... than either the agricultural transformation in Neolithic times or the early industrial revolution." "Is the handwriting on the wall for the labor movement?" the Wall Street Journal asked, looking at the matter from its own perspective. ("Their membership may dwindle, their strike power weaken, and their political strength fade. And some of unionism's biggest names may be lesser names tomorrow.") The Ad Hoc Committee for the Triple Revolution (weaponry, automation, human rights), a study group composed of social luminaries like Gunnar Myrdal, Linus Pauling, A. J. Muste, Michael Harrington, Bayard Rustin, Irving Howe, Robert Heilbroner, and Tom Hayden and Todd Gitlin of SDS, saw the coming of automation as an argument for a guaranteed minimum income. "In twenty years," wrote Donald Michael in a Center for the Study of Democratic Institutions book, "most of our citizens will be unable to understand the cybernated world in which we live ... the problems of government will be beyond the ken even of our college graduates. Most people will have had to recognize that, when it comes to logic, the machines by and large can think better than they.... There will be a small, almost separate society of people in rapport with the advanced computers. These cyberneticians will have established a relationship with their machines that cannot be shared with the average man. Those with the talent for the work probably will have to develop it from childhood and will be trained as extensively as classical ballerinas." Professor John Wilkinson of the University of California called for the founding of human sanctuaries "as we establish refuges for condors and whooping cranes."



The pragmatists among those who worried about "America in the 'Automic' Age" thought about unemployment. The Bureau of Labor Statistics estimated that 300,000 workers were replaced annually by machines; the American Foundation of Employment and Automation calculated that 2 million jobs a year vanished. President Kennedy said in 1962 that adjusting to automation was America's greatest domestic "challenge" of the sixties, which puts his negative prescience quotient as high as anyone else's. Harry Van Arsdale won the New York electricians a five-hour day, and there was strong feeling that this was just a beginning. "The only question," said George Meany, "is how short the work week is to be."

But there was a visionary wing as well, and one which achieved, to judge by the number of scare stories which ran in the media, remarkable impact. Very roughly, two scenarios were discernible. The first was that automation would proceed at an ever accelerating rate until computers had entirely displaced the working and lower-middle classes. (I find it stimulating that Robbie the Robot, the famous automaton from the movie Forbidden Planet, whose capable and compliant nature earned him his own TV series, had ebony skin.) Those classes, once thrown out of work, would mill about in proletarian discontent. Then, depending on the perspective of the seer, they would either sponsor a revolution themselves or force a revolutionary response from the established order. Andrew Hacker of Cornell warned about "the contraction of the corporate constituency" and predicted a Luddite rampage. Margaret Mead proposed protecting certain jobs by law: "dustman, the night watchman, the postman." She was particularly worried about the problem of the lowest intelligence "brackets," and did not, at least for this class, favor a minimum income: "I am not sure whether good pay in idleness would be a very healthy thing just for the least intelligent, who are least able to make good use of their leisure." This scenario concluded with the feeling that if America did, by one route or the other, successfully manage its entry into "The Age of Abundance," the result would be a classless world in which all lived in a leisurely upper-middle class style, devoting themselves to the arts and public improvements.

The other line of thought, often found in journals like Argosy, National Enquirer, and Popular Mechanics, was that the new brain machines would displace the upper-middle class. The writers who held this second view were impressed with the machine's potential for autonomy and its inscrutable authoritativeness. ("Harvard Computer Finds English Language Fuzzy"—Science Digest.) While it was not clear that unemployment would be a problem ("Wanted: 500,000 Men to Feed Computers"—Popular Science), what did emerge was the feeling that everyone would be forced, by the unappealability of the computer's decisions, into the essence of the lower-middle class experience, which is to be ordered about by those "who know what they're doing."



Nearly fifteen years have passed since these specters first became popular, and clearly we are no further down either of these roads; instead, there has been a perceptible loss of conviction that we are on any road at all. The rates of increase in productivity per man-hour, one of the classic measures of automation, were no different in the sixties than in the fifties, though nearly 200,000 computers were installed during the last decade. Unemployment has held roughly stable. Computers have assumed a number of functions, some of which have been historically white-collar jobs: reservations, credit and billing, processing checks, payroll operations, inventory scheduling; and some blue-collar: freight routing, and especially flow monitoring and process control in the metallurgical, petrochemical, paper, and feed industries. But while what the computers do is important, it certainly does not appear to add up to a revolution. If computers posed, and pose, a threat, it lies not in rendering less significant those decisions humans make but, as in the privacy issue, in enlarging the impact of, and the opportunities for, the staple villainies of the Old Adam.

Why were so many illustrious thinkers so wrong? Or, perhaps a simpler question, why have we been so reluctant to learn from their mistakes? "Latest Machines See, Hear, Speak and Sing—And May Outthink Man" is the headline of a Wall Street Journal story that appeared in June, 1973, but it could as easily have been the head on any number of stories over the last fifteen years.

What is striking about these stories is the determination of their authors to believe. They seem never to notice the highly artificial environments or the extremely simplified nature of the problems which allow the computer programs they describe to show even the modest success they have to date. Do the authors ever ask why it is that assembly line jobs, whose tediousness made them famous targets of opportunity for computers, remain virtually untouched by automated hands?

The vatic winds which blew some fifteen years ago were more comprehensible: America had just emerged from the fifties, an extraordinary decade. Never before had we delighted in such a rain of innovations with such an immediate and intimate effect on our daily lives. Television took root everywhere. The Polaroid camera, the Aqualung, the transistor radio, and the birth-control pill came on the market. The hi-fi and stereo industry sprang up. Commercial jet travel became standard. Polio was controlled. The hydrogen bomb, the ICBM, space satellites, and the computer all were significant public issues, altering patterns of discourse and attention if nothing else. Xerox brought out its first office copier in 1959; the first working model of the laser was announced in 1960.

We took these inventions, some boon, some bane, as evidence that a high level of innovation was a settled feature of America, and assumed that that level would, if anything, rise still higher over the decades to come. In that atmosphere no technological achievement seemed beyond us and no forecast too fantastic. It was felt only realistic to advance bold speculations.



Actually, one promise of the "soaring sixties" came spectacularly true—the moon-landing program. But it came to seem increasingly anomalous, not representative of our national direction, certainly not emblematic of our national mood. The sixties was a decade in which apprehensions about the effects of technology became widespread, and glittering inventions ceased to enhance our daily lives. Indeed, aside from the pocket calculator, the introduction of new products has fallen off drastically in the last ten years. The promise of robotics is not the only promise unkept. Cancer and the common cold have not been cured; nuclear power through fusion seems more distant than ever. Cheap desalinization has not been achieved. One of the pioneering computers, ENIAC, built by Eckert and Mauchly, was invented in the hope that it would facilitate long-range weather forecasting. Almost certainly John Mauchly thought he was closer to that goal in 1943 than meteorologists do today.
