An AI winter is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. AI researchers who had survived the "winter" of the 1970s warned the business community that enthusiasm for AI had spiraled out of control in the '80s and that disappointment would certainly follow.
Three years later, the billion-dollar AI industry began to collapse. Each downturn damaged the perception of AI by government bureaucrats and venture capitalists. Despite the rise and fall of AI's reputation, the field has continued to develop new and successful technologies. The AI researcher Rodney Brooks complained in 2002 that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."
Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry. As Ray Kurzweil writes, "the AI winter is long since over."

The US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports, and it aggressively supported efforts at machine translation starting in 1954. At the outset, the researchers were optimistic. It turned out, however, that in order to translate a sentence, a machine needed to have some idea what the sentence was about; otherwise it made mistakes.
An anecdotal example was "the spirit is willing but the flesh is weak." Translated into Russian and back, it became "the vodka is good but the meat is rotten." Similarly, "out of sight, out of mind" became "blind idiot." A committee of the National Research Council concluded, in a famous 1966 report, that machine translation was more expensive, less accurate and slower than human translation. After spending some 20 million dollars, the NRC ended all support. Careers were destroyed and research ended.
Some of the earliest work in AI used networks or circuits of connected units to simulate intelligent behavior. The best-known champion of this approach was Frank Rosenblatt, who kept the field alive with his salesmanship and the sheer force of his personality. He optimistically predicted that the perceptron "may eventually be able to learn, make decisions, and translate languages." After Marvin Minsky and Seymour Papert's 1969 book Perceptrons highlighted the limitations of single-layer networks, connectionist approaches were abandoned for the next decade or so.
In 1973, Professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the utter failure of AI to achieve its "grandiose objectives." He concluded that nothing being done in AI couldn't be done in other sciences, and that AI's most successful algorithms would grind to a halt on real-world problems and were only suitable for solving "toy" versions. The report was contested in a debate broadcast in the BBC "Controversy" series in 1973. The report led to the complete dismantling of AI research in England.
(The later UK Alvey Programme had a number of UK-only requirements which did not sit well internationally, especially with US partners, and it lost Phase 2 funding.)

During the 1960s, DARPA had provided millions of dollars for AI research with almost no strings attached. That changed with the Mansfield Amendment of 1969, which required DARPA to fund "mission-oriented direct research, rather than basic undirected research". Pure undirected research of the kind that had gone on in the '60s would no longer be funded by DARPA. Researchers now had to show that their work would soon produce some useful military technology. AI research proposals were held to a very high standard, and reviewers judged that AI research was unlikely to produce anything truly useful in the foreseeable future.
DARPA's money was directed at specific projects with identifiable goals, such as autonomous tanks and battle management systems. By 1974, funding for AI projects was hard to find. Many researchers were caught up in a web of increasing exaggeration: their initial promises to DARPA had been much too optimistic, and, of course, what they delivered fell considerably short of them.
But they felt they couldn't promise less in their next proposal than in the first one, so they promised more. The result, the roboticist Hans Moravec claims, was that some of the staff at DARPA had lost patience with AI research. DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at Carnegie Mellon University; it had hoped for, and felt it had been promised, a system that could respond to voice commands from a pilot.