3. AI Winter and Funding Challenges

a. Early 1970s and Late 1980s: Initial optimism about AI set high expectations, but technological limitations brought disappointment and reduced funding, a period known as the “AI winter.” During the 1970s, AI faced criticism and financial setbacks as researchers struggled to tackle increasingly complex problems. A major obstacle was the limited computational power of the era: machines lacked the memory and processing speed needed to achieve substantial practical results. One setback was encapsulated in Moravec’s paradox: computers excelled at tasks considered difficult for humans, like theorem proving (a subfield of automated reasoning and mathematical logic) and geometry problem solving, but struggled with tasks that humans find easy, such as motor and social skills like facial recognition or navigating a room without collisions (Arora, 2023; Zador, 2019).

In the mid-1980s, the intersection of statistical mechanics and learning theory came into focus, and statistical learning from examples took precedence over traditional logic- and rule-based AI. This shift was marked by two influential papers: Valiant’s (1984) “A Theory of the Learnable,” which laid the groundwork for rigorous statistical learning in AI, and Hopfield’s (1982) neural network model of associative memory. Hopfield’s work sparked the widespread application of concepts borrowed from spin glass theory to neural network models. A pivotal moment in this evolution was the calculation of the memory capacity of the Hopfield model by Amit, Gutfreund, and Sompolinsky (1985), followed by further research along the same lines. A more focused application to learning models emerged in the pioneering work of Elizabeth Gardner, who harnessed the replica trick (Gardner, 1987, 1988) to compute volumes in the weight space of simple feed-forward neural networks, covering both supervised and unsupervised learning models. Concurrently, backpropagation was popularized and became the dominant method for training neural networks (Rumelhart, Hinton, & Williams, 1986).
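To make the associative-memory idea concrete, here is a minimal sketch of a Hopfield network, assuming binary ±1 patterns, the standard Hebbian storage rule, and asynchronous updates; the function names and sizes are illustrative assumptions, not drawn from the papers cited above.

```python
# Minimal Hopfield associative memory: Hebbian storage + asynchronous recall.
import numpy as np

def store(patterns):
    """Build the weight matrix W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu,
    with the diagonal zeroed (no self-connections)."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=10, rng=None):
    """Asynchronous updates: flip one unit at a time toward its local
    field until the state stops changing (a fixed point of the energy)."""
    if rng is None:
        rng = np.random.default_rng(0)
    state = state.copy()
    N = len(state)
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(N):
            s = np.sign(W[i] @ state)
            if s != 0 and s != state[i]:
                state[i] = s
                changed = True
        if not changed:
            break
    return state

# Usage: store a few random patterns, corrupt one, and recover it.
rng = np.random.default_rng(42)
N, P = 200, 10
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = store(patterns)

noisy = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)  # corrupt 10% of the bits
noisy[flip] *= -1

recovered = recall(W, noisy, rng=rng)
overlap = (recovered @ patterns[0]) / N  # 1.0 means perfect recall
print(f"overlap with stored pattern: {overlap:.3f}")
```

Recall tends to succeed only while the number of stored patterns P stays small relative to the number of units N; the capacity analysis of Amit, Gutfreund, and Sompolinsky (1985) places the breakdown near P ≈ 0.138 N.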
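In the same spirit, the following is a minimal sketch of backpropagation for a one-hidden-layer network, assuming tanh hidden units, a squared-error loss, and plain gradient descent on toy data; the data and all names are illustrative assumptions, not the setup of Rumelhart, Hinton, and Williams (1986).

```python
# Backpropagation sketch: the chain rule applied layer by layer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                       # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]  # XOR-like target

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

for epoch in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                      # dLoss/dout for 0.5 * MSE

    # Backward pass: propagate the error through each layer
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h**2)       # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)

    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {0.5 * np.mean((out - y) ** 2):.4f}")
```

The backward pass is nothing more than the chain rule evaluated layer by layer, which is what made gradient-based training of multi-layer networks practical.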

Despite these notable advancements, the initial optimism had set unrealistic expectations. When the promised results failed to materialize, funding for AI dwindled again in the late 1980s, leading to a second period of decline.
