a. 2010s: Advances in neural networks led to the development of deep learning. In 2006, Hinton and colleagues introduced “Deep Learning.” Deep learning is a subset of machine learning that focuses on algorithms inspired by the workings of the human brain, known as artificial neural networks. These networks play a pivotal role in tasks such as image classification, speech recognition, object detection, and content description. The emergence of deep learning marked a turning point, prompting a resurgence in neural network research, often dubbed the “new-generation neural networks.” This resurgence stems from the remarkable success of deep networks in addressing a diverse range of classification and regression tasks (see, e.g., Fong et al., 2021, on how a deep convolutional neural network can be used to predict emotional responses to music).
b. Several important tools and platforms emerged in the 2010s to make AI development more accessible. Google’s TensorFlow and Facebook’s PyTorch (an open-source machine learning library based on the Torch library) became prominent open-source deep learning frameworks, simplifying the creation of AI models and putting powerful model-building tools in the hands of a much wider audience (Tyson, 2022).
Keras, integrated into TensorFlow, offered a high-level API for rapid prototyping. Scikit-Learn remained valuable for data analysis and machine learning (Géron, 2022). Jupyter Notebooks and Google Colab provided interactive environments for code, data, and visualization. AutoML tools like Google’s AutoML and H2O.ai’s Driverless AI automated aspects of model development. Microsoft Azure ML offered cloud-based machine learning services, while IBM Watson provided AI capabilities for businesses. These tools collectively empowered researchers, data scientists, and developers to engage in AI development and research, as the brief sketch below illustrates.
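To give a flavor of the rapid prototyping these frameworks enabled, the following is a minimal sketch using the Keras API bundled with TensorFlow. The dataset and network architecture are invented purely for illustration and are not drawn from any of the works cited above.

```python
# A minimal sketch of the high-level Keras API (tf.keras), showing how a small
# classifier can be defined, trained, and evaluated in a few lines.
# The data here is synthetic and purely illustrative.
import numpy as np
import tensorflow as tf

# Synthetic dataset: 1,000 samples with 20 features and binary labels.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("int32")

# Define a small feed-forward network with the Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Compile with a standard optimizer and loss, then train briefly.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("Training accuracy:", model.evaluate(X, y, verbose=0)[1])
```

The same workflow could be run interactively in a Jupyter Notebook or Google Colab, which is precisely the combination of high-level APIs and interactive environments that lowered the barrier to entry during this period.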