Glossary of AI Terms in Advertising Research

1. Algorithm: A set of rules or instructions given to an AI or machine learning model to perform a specific task. The IAB (2021) defines it as follows: An algorithm is a sequence of well-defined computer instructions that solve a problem or perform a computation such as calculations, data processing, or automated reasoning.

2. Analytics: The discovery, interpretation, and communication of meaningful patterns in data.

3. Artificial Neural Network (ANN): A computing system made up of interconnected nodes that processes information in a way loosely modeled on how neurons in the human brain work. Commonly used in deep learning. IAB (2021) definition: Artificial neural networks (ANN) combine algorithms and computational power to process problems by mimicking the form and function of biological neural networks, like our brains.

4. Backpropagation: A method used in training neural networks by adjusting weights based on the error from the predicted output.
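For illustration, a minimal NumPy sketch of the idea in its simplest, single-layer case: the prediction error is turned into a gradient and the weights are nudged against it. The synthetic data, learning rate, and iteration count are arbitrary assumptions made for brevity.

```python
# A minimal sketch of weight updates driven by prediction error (the
# single-layer case of backpropagation); data and hyperparameters are toy values.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # 100 samples, 3 input features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)   # noisy target values

w = np.zeros(3)                                    # weights to be learned
lr = 0.1                                           # learning rate (assumed)

for _ in range(200):
    y_hat = X @ w                                  # forward pass: prediction
    error = y_hat - y                              # prediction error
    grad = X.T @ error / len(y)                    # gradient of mean squared error w.r.t. w
    w -= lr * grad                                 # adjust weights against the gradient

print(w)                                           # should approach [1.5, -2.0, 0.5]
```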

5. Bias: Pre-existing beliefs or distortions present in data or in the design of an algorithm that can lead to unfair or skewed results. In machine learning, bias also refers to an algorithm's systematic error arising from erroneous assumptions in the learning process.

6. Big Data: Extremely large datasets that may be analyzed computationally to reveal patterns, trends, and associations.

7. Chatbot: A software application designed to simulate conversation with human users, especially over the internet.

8. Classification: A supervised learning task where the goal is to predict the categorical class labels of new instances.
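As a small illustration, a scikit-learn sketch; the iris dataset and the logistic regression model are arbitrary choices standing in for any labeled data and any classifier.

```python
# A tiny classification sketch: learn class labels from labeled examples.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                  # features and known class labels
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict(X[:5]))                        # predicted class labels for new instances
```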

9. Clustering: An unsupervised learning task where the aim is to group similar instances based on certain features.
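For illustration, a minimal scikit-learn sketch using K-means; the synthetic data and the choice of three clusters are assumptions made purely for the example.

```python
# A minimal clustering sketch: group similar points without any labels.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)  # unlabeled points
model = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = model.fit_predict(X)                      # assign each point to a cluster
print(labels[:10])
print(model.cluster_centers_)                      # learned cluster centers
```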

10. Convolutional Neural Networks (CNNs): A class of deep neural networks, highly effective for processing data with a grid-like topology, such as images. CNNs utilize convolutional layers, pooling layers, and fully connected layers to automatically and adaptively learn spatial hierarchies of features from input images. They excel in tasks like image and video recognition, image classification, medical image analysis, and natural language processing. The strength of CNNs lies in their ability to detect local patterns, such as edges in images, and their capacity to handle high-dimensional data, making them a cornerstone of computer vision and deep learning applications.
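A minimal Keras sketch of the architecture described above, assuming 28×28 grayscale inputs and ten output classes; the layer sizes and filter counts are illustrative, not prescriptive.

```python
# A small convolutional network: convolution, pooling, and fully connected layers.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),  # convolutional layer: detects local patterns
    layers.MaxPooling2D(pool_size=2),                     # pooling layer: downsamples feature maps
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),               # fully connected output layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```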

11. Data Mining: The process of discovering patterns and knowledge from large datasets.

12. Deep Learning: A subset of machine learning that uses neural networks with many layers.

13. Feature: An individual measurable property of the phenomenon being observed.

14. Genetic Algorithm: A search heuristic inspired by the process of natural selection.

15. Generalization: The model’s ability to give accurate predictions for previously unseen data.

16. Generative Adversarial Network (GAN): A class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. GANs consist of two neural networks, a generator and a discriminator, which compete against each other. The generator creates data samples, while the discriminator evaluates them against real data. The goal of the generator is to produce data indistinguishable from real data, and the discriminator’s goal is to correctly differentiate between the two. This adversarial process enhances the performance of both networks, enabling GANs to generate high-quality, realistic synthetic data, widely used in image, video, and voice generation.
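A compressed PyTorch sketch of the adversarial loop on one-dimensional data, not a production GAN; the network sizes, noise dimension, learning rates, and step count are all assumptions made for brevity.

```python
# Generator vs. discriminator: G learns to produce samples that D cannot
# distinguish from "real" data drawn from a Gaussian around 2.0.
import torch
from torch import nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())    # sample -> probability "real"
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # "real" data: N(2, 0.5)
    fake = G(torch.randn(64, 8))                   # generated data

    # Train the discriminator to separate real from generated samples.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())       # should drift toward 2.0
```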

17. Hyperautomation: The application of advanced technologies such as robotic process automation, AI, machine learning, and process mining to augment human workers and extend automation beyond traditional capabilities. Unlike simpler tools such as macros or isolated scripts, hyperautomation tackles more complex, cognitive tasks, leading to more impactful automation processes.

18. Image Recognition: The ability of software to identify objects, places, people, or actions in images.

19. Keras: An open-source neural network library written in Python, designed to enable fast experimentation with deep neural networks and focused on being user friendly, modular, and extensible. Keras has supported multiple backends over time, including TensorFlow, Microsoft Cognitive Toolkit (CNTK), and Theano, and is most commonly used today as TensorFlow's high-level API (tf.keras). Keras simplifies the process of building and training deep learning models with its high-level building blocks for creating and training neural networks. It is widely used in both academia and industry and is known for its ease of use and flexibility, making it a popular choice for beginners and experts in machine learning and deep learning.
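For a flavor of the high-level workflow, a minimal sketch: define, compile, and fit a small network on toy data. The random data, layer sizes, and epoch count are assumptions chosen only to keep the example short.

```python
# The typical Keras workflow: define a model, compile it, fit it, evaluate it.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(200, 4)                         # 200 samples, 4 features (toy data)
y = (X.sum(axis=1) > 2).astype(int)                # toy binary target

model = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)               # high-level training loop in one call
print(model.evaluate(X, y, verbose=0))             # [loss, accuracy]
```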

20. Knowledge Graph: A knowledge base that uses a graph-structured data model or topology to represent and link information.

21. Jupyter Notebooks: An open-source web application that allows users to create and share documents containing live code, equations, visualizations, and narrative text. Widely used in data science, scientific computing, and machine learning, Jupyter Notebooks support various programming languages, including Python, R, and Julia. They are particularly valued for their interactivity, ease of use, and ability to integrate code, explanatory text, and visual outputs in a single document. This makes them an excellent tool for exploratory data analysis, collaborative projects, educational purposes, and presenting computational workflows.

22. Machine Learning (ML): A subset of AI that allows computers to learn from data without being explicitly programmed.

23. Natural Language Processing (NLP): A field of AI that focuses on the interaction between computers and humans through natural language.

24. Overfitting: A modeling error that occurs when a machine learning model is tailored too closely to the training data and performs poorly on new, unseen data.
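A small scikit-learn sketch of the symptom: an unconstrained decision tree memorizes its training data and scores noticeably worse on held-out data. The synthetic dataset and the choice of model are arbitrary.

```python
# Overfitting in miniature: near-perfect training accuracy, weaker test accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=None, random_state=0)  # unconstrained depth
model.fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))         # typically ~1.0
print("test accuracy: ", model.score(X_test, y_test))           # noticeably lower
```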

25. Perceptual Maps: Also known as positioning maps, these are visual representations used in marketing to show how consumers perceive a product or brand in comparison to its competitors. These maps are typically based on consumer perceptions of certain attributes (such as quality, price, or performance) and help marketers understand how their products are positioned in the market. By plotting products on a two-dimensional grid, perceptual maps illustrate the relationships between market competitors and provide insights into potential market gaps or areas for competitive advantage. This tool is invaluable for strategic planning in product development, branding, and advertising.
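A minimal matplotlib sketch of such a map; the brand names and the perceived-price/perceived-quality scores are entirely fictitious placeholders standing in for survey data.

```python
# A toy perceptual (positioning) map: brands plotted on two perceived attributes.
import matplotlib.pyplot as plt

brands = ["Brand A", "Brand B", "Brand C", "Brand D"]
perceived_quality = [7.5, 4.0, 6.0, 8.5]           # e.g., mean survey ratings (1-10)
perceived_price = [6.0, 3.5, 7.5, 8.0]

fig, ax = plt.subplots()
ax.scatter(perceived_price, perceived_quality)
for name, x, y in zip(brands, perceived_price, perceived_quality):
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))
ax.set_xlabel("Perceived price")
ax.set_ylabel("Perceived quality")
ax.set_title("Perceptual map (illustrative data)")
plt.show()
```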

26. Predictive Analytics: Using historical data to predict future outcomes.

27. PyTorch: An open-source machine learning library developed by Facebook's AI Research lab. It's widely used for applications in computer vision, natural language processing, and deep learning. PyTorch is known for its flexibility and ease of use, particularly in the development and training of deep learning models. It provides dynamic computation graphs, allowing for intuitive and straightforward model building and modification. PyTorch also features strong GPU acceleration support and has a growing community, making it a popular choice among researchers and developers for both experimentation and production in AI and deep learning projects.
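A minimal sketch of PyTorch's define-by-run style: a linear model, a loss, and an autograd-driven training loop. The toy data and hyperparameters are illustrative.

```python
# A tiny PyTorch training loop: forward pass, loss, backward pass, update.
import torch
from torch import nn

X = torch.randn(100, 3)
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(100)

model = nn.Linear(3, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for _ in range(200):
    pred = model(X).squeeze(-1)                    # forward pass builds the graph dynamically
    loss = loss_fn(pred, y)
    optimizer.zero_grad()
    loss.backward()                                # autograd computes gradients
    optimizer.step()

print(model.weight.data)                           # should approach [1.0, -2.0, 0.5]
```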

28. Regression: A type of supervised learning where the aim is to predict continuous values.
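A tiny scikit-learn sketch, assuming a toy relationship between advertising spend and sales; all the numbers are invented for illustration.

```python
# Regression in miniature: predict a continuous value (sales) from ad spend.
import numpy as np
from sklearn.linear_model import LinearRegression

ad_spend = np.array([[10], [20], [30], [40], [50]])  # e.g., thousands of dollars
sales = np.array([25, 44, 67, 82, 105])              # continuous target

model = LinearRegression().fit(ad_spend, sales)
print(model.predict([[60]]))                         # predicted sales for an unseen spend level
```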

29. Reinforcement Learning: A type of machine learning where agents learn by interacting with their environment and receiving feedback in the form of rewards or penalties.

30. Sentiment Analysis: Using NLP to determine whether a piece of text is positive, negative, or neutral in tone.
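A minimal sketch using NLTK's VADER analyzer, one of several off-the-shelf options; it assumes NLTK is installed and downloads the vader_lexicon resource on first run.

```python
# Rule-based sentiment scoring of short texts with NLTK's VADER.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)         # one-time lexicon download
sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("This new campaign is brilliant!"))    # mostly positive
print(sia.polarity_scores("The ad felt dull and forgettable."))  # mostly negative
```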

31. Supervised Learning: A type of machine learning where the model is trained using labeled data.

32. Support Vector Machine (SVM): A powerful and versatile supervised machine learning algorithm, primarily used for classification and regression challenges. It works by finding the best hyperplane that separates data points into different classes in the feature space. The strength of SVM lies in its ability to handle linear and non-linear data using kernel functions. SVMs are popular in applications like image classification, text categorization, and bioinformatics, due to their effectiveness in handling high-dimensional data and their robustness against overfitting, especially in cases where the number of dimensions exceeds the number of samples.
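A minimal scikit-learn sketch using an RBF kernel on a non-linearly separable toy dataset; the dataset and the parameters (kernel, C) are illustrative choices rather than recommendations.

```python
# An SVM with a kernel function handling a non-linear decision boundary.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0)                     # kernel trick for non-linear data
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```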

33. TensorFlow: An open-source software library for numerical computation using data flow graphs. Developed by the Google Brain team, it’s widely used in machine learning and deep learning for building and training neural networks. TensorFlow’s flexible architecture allows for easy deployment of computation across various platforms (CPUs, GPUs, and TPUs), facilitating both research and production use. It supports a wide range of tasks, primarily focused on training and inference of deep neural networks in areas like computer vision, natural language processing, and predictive analytics. TensorFlow’s user-friendly interface and extensive community support have made it a popular tool in the AI field.
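A minimal sketch of the core idea, numerical computation with automatic differentiation, fitting a toy linear relationship with GradientTape; the data and learning rate are assumptions.

```python
# TensorFlow in miniature: record a computation, differentiate it, update variables.
import tensorflow as tf

X = tf.random.normal((100, 1))
y = 3.0 * X + 0.5 + 0.1 * tf.random.normal((100, 1))

w = tf.Variable(0.0)
b = tf.Variable(0.0)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(200):
    with tf.GradientTape() as tape:                # records the operations performed
        loss = tf.reduce_mean((w * X + b - y) ** 2)
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))

print(w.numpy(), b.numpy())                        # should approach 3.0 and 0.5
```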

34. Text-to-Speech/Speech-to-Text: Complementary technologies for converting between spoken and written language: speech-to-text accurately recognizes spoken language and transcribes it into text, while text-to-speech transforms written text into audible speech.

35. Time Series Forecasting: Using machine learning to predict future values based on previously observed values.
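A minimal sketch using lagged values as features for an ordinary regression model; the synthetic trend-plus-seasonality series and the 12-step lag window are assumptions made for the example.

```python
# Forecast the next value of a series from its previous 12 observations.
import numpy as np
from sklearn.linear_model import LinearRegression

t = np.arange(120)
series = 10 + 0.5 * t + 5 * np.sin(2 * np.pi * t / 12)    # trend + seasonality

n_lags = 12
X = np.array([series[i - n_lags:i] for i in range(n_lags, len(series))])  # past 12 values
y = series[n_lags:]                                                        # next value

model = LinearRegression().fit(X[:-1], y[:-1])             # hold out the last point
print("forecast:", model.predict(X[-1:]), "actual:", y[-1])
```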

36. Training Data: The data on which a machine learning model is trained. A separate test dataset is then used to assess the model's performance on unseen data.

37. Turing Test: The Turing Test, proposed by Alan Turing in 1950, is a method for determining whether a machine exhibits intelligent behavior equivalent to, or indistinguishable from, that of a human. In this test, a human judge engages in a natural language conversation with one human and one machine, both of which try to appear human. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The Turing Test has been a fundamental concept in the philosophy of artificial intelligence, prompting debates about the nature of intelligence and the potential of machines to exhibit human-like consciousness.

38. Unsupervised Learning: A type of machine learning where the model learns from data without labeled responses.

39. Voice Recognition: The ability of a machine or program to receive and interpret dictation or to understand and carry out spoken commands.

40. Weights: Learnable parameters in a neural network that scale the inputs to each node, transforming data as it passes through the network's layers; training adjusts the weights to reduce prediction error.
