General AI Terms
- Artificial Intelligence (AI): The study and design of intelligent agents that perceive, reason, learn, and act in an environment.
- Agent: An entity that perceives its environment through sensors and acts upon that environment through actuators.
- Rational Agent: An agent that always selects the action that maximizes its expected performance measure, based on its knowledge.
- Autonomous Agent: An agent that operates without human intervention, continuously and adaptively.
- Environment: Everything external to the agent that it can sense and interact with.
- Percept: An agent’s input from the environment at a given moment.
- Percept Sequence: The complete history of everything an agent has perceived.
- Actuator: The mechanism by which an agent affects the environment (e.g., wheels, arms, output signals).
- Sensor: The mechanism by which an agent receives input from the environment (e.g., cameras, microphones).
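The agent/percept/actuator vocabulary above can be made concrete with a minimal sketch of a simple reflex agent. The two-location vacuum world used here (locations "A"/"B", actions "Suck"/"Left"/"Right") is an illustrative toy domain, not part of the definitions:

```python
# A simple reflex agent: maps the current percept directly to an
# action via condition-action rules, with no internal state.
def reflex_vacuum_agent(percept):
    """percept is a (location, status) pair; returns an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# The agent's percept sequence drives its actions one step at a time:
percepts = [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")]
actions = [reflex_vacuum_agent(p) for p in percepts]
```

A rational agent would additionally be judged against a performance measure (e.g., amount of dirt cleaned per time step), which this sketch leaves implicit.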
Search and Problem Solving
- Search Algorithm: A method for exploring possible states or actions to find a path from a start state to a goal state.
- State Space: A representation of all possible configurations or situations in a search problem.
- Initial State: The starting point in a search or planning problem.
- Goal State: A desired state or set of states that the agent aims to reach.
- Transition Model: Describes how an action changes the state of the environment.
- Path Cost: The cumulative cost of a sequence of actions.
- Uninformed Search: Search algorithms that use no information other than the problem definition (e.g., BFS, DFS, UCS).
- Informed Search: Search algorithms that use domain-specific heuristics to guide the search (e.g., A*).
- Heuristic: An estimate of the cost to reach the goal from a given state.
- Optimality: A property of a search algorithm that guarantees finding the least-cost solution.
- Completeness: A property that ensures the algorithm will find a solution if one exists.
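Several of these terms come together in a short sketch of A* informed search. The toy graph and the heuristic values below are made up for illustration; the heuristic is admissible (it never overestimates the remaining cost), which is what gives A* its optimality guarantee:

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbor, cost), ...]}; h: heuristic estimates to goal."""
    frontier = [(h[start], 0, start, [start])]  # (f = g + h, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                      # solution path and its path cost
        if node in explored:
            continue
        explored.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier,
                               (g + cost + h[neighbor], g + cost,
                                neighbor, path + [neighbor]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
h = {"S": 4, "A": 3, "B": 2, "G": 0}            # admissible estimates to G
path, cost = a_star(graph, h, "S", "G")
```

With `h` set to zero everywhere, the same code degenerates to uniform-cost search, an uninformed strategy.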
Game and Adversarial Search
- Minimax Algorithm: A decision strategy for two-player games that assumes an optimal opponent and selects the move maximizing the agent’s minimum guaranteed payoff.
- Alpha-Beta Pruning: An optimization of minimax that prunes branches that cannot affect the final decision.
- Utility Function: A numerical representation of the desirability of a state, used in game theory.
- Zero-Sum Game: A game where one player’s gain is another’s loss.
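Minimax and alpha-beta pruning can be sketched compactly on a game tree written as nested lists, where leaves are utility values for the maximizing player. The three-branch example tree is the standard textbook shape, chosen only for illustration:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, list):      # leaf: return its utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # prune: MIN will never allow this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:               # prune: MAX will never allow this branch
            break
    return value

# MAX chooses among three MIN nodes; the best guaranteed payoff is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

In a zero-sum game this one value suffices: MIN’s payoff is simply its negation.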
Logic and Knowledge Representation
- Propositional Logic: A formal language where statements are either true or false.
- First-Order Logic (FOL): An extension of propositional logic that includes quantifiers and predicates to express relationships among objects.
- Knowledge Base (KB): A set of logical sentences representing facts and rules.
- Inference: The process of deriving new logical conclusions from known facts.
- Entailment: A relationship where a sentence logically follows from others (e.g., KB ⊨ α).
- Model (Logical): An interpretation that assigns meaning to logical expressions and determines their truth.
- Satisfiability: A sentence (or KB) is satisfiable if there exists at least one model in which it is true.
- Resolution: A refutation-complete inference rule that operates on sentences in clausal form; used in both propositional and first-order logic.
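Entailment (KB ⊨ α) can be checked directly by enumeration: α is entailed iff it is true in every model of the KB. A minimal sketch, representing sentences as Python predicates over a truth-assignment dict (the KB below is an arbitrary modus-ponens example):

```python
from itertools import product

def entails(kb, alpha, symbols):
    """Check KB ⊨ alpha by enumerating all truth assignments (models)."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False        # a model of the KB where alpha is false
    return True

# KB: (P → Q) ∧ P.  Does the KB entail Q?
kb = lambda m: (not m["P"] or m["Q"]) and m["P"]
alpha = lambda m: m["Q"]
```

This model-checking approach is exponential in the number of symbols; resolution achieves the same entailment checks by syntactic manipulation of clauses instead.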
Learning and Machine Learning
- Machine Learning (ML): The study of algorithms that improve automatically through experience.
- Supervised Learning: Learning from labeled examples (input-output pairs).
- Unsupervised Learning: Learning patterns or structure in data without labels.
- Reinforcement Learning (RL): Learning to make sequences of decisions by receiving rewards or penalties from the environment.
- Classifier: An algorithm that assigns a label to input data.
- Neural Network: A computational model inspired by the brain, consisting of layers of nodes (neurons) that learn representations.
- Overfitting: A condition where a model performs well on training data but poorly on unseen data.
- Underfitting: A condition where a model is too simple to capture underlying patterns in the data.
- Loss Function: A function that measures the error between predicted and actual values.
- Gradient Descent: An optimization algorithm used to minimize a loss function by iteratively adjusting parameters.
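Loss functions and gradient descent can be illustrated with the smallest possible supervised-learning setup: fitting a one-parameter model y = w·x by minimizing squared error. The data are made up (generated exactly by y = 2x), and the learning rate is chosen by hand:

```python
def gradient_descent(xs, ys, lr=0.1, steps=100):
    w = 0.0
    for _ in range(steps):
        # loss = mean((w*x - y)^2); its gradient with respect to w:
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad          # step against the gradient
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # labeled examples from y = 2x
w = gradient_descent(xs, ys)
```

Overfitting does not arise here because the model class (a single slope) is far too simple to memorize noise; with a richer model and noisy labels, the train/test gap in the definitions above would appear.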
Probabilistic Reasoning
- Probability Distribution: A function that assigns probabilities to different outcomes.
- Bayesian Network: A directed acyclic graph representing probabilistic dependencies among variables.
- Conditional Probability: The probability of an event given that another event has occurred.
- Markov Decision Process (MDP): A framework for modeling decision-making in fully observable, stochastic environments.
- Partially Observable Markov Decision Process (POMDP): An extension of MDPs where the agent cannot fully observe the environment.
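The MDP definition can be made concrete with value iteration on a two-state toy problem (states, rewards, and transitions are invented for illustration). Each sweep applies a Bellman-style backup: the value of a state is its reward plus the discounted value of the best successor:

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-8):
    """P[s][a] = [(prob, next_state), ...]; R[s] = reward received in s."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: R[s] + gamma * max(
                     sum(p * V[s2] for p, s2 in P[s][a]) for a in actions)
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new

states, actions = ["A", "B"], ["stay", "go"]
P = {"A": {"stay": [(1.0, "A")], "go": [(1.0, "B")]},
     "B": {"stay": [(1.0, "B")], "go": [(1.0, "A")]}}
R = {"A": 0.0, "B": 1.0}
V = value_iteration(states, actions, P, R)
```

At the fixed point, V(B) = 1 + 0.9·V(B) = 10 and V(A) = 0.9·V(B) = 9; a POMDP would replace the known state s with a belief distribution over states.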
Natural Language Processing (NLP)
- Tokenization: Splitting text into words, phrases, or symbols.
- Parsing: Analyzing the grammatical structure of a sentence.
- Named Entity Recognition (NER): Identifying and classifying entities in text, such as person names, organizations, locations, and dates.
- Language Model: A probabilistic model that assigns likelihoods to sequences of words.
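Tokenization and language modeling connect directly: split text into tokens, then estimate token probabilities from counts. A minimal sketch using crude whitespace tokenization and a maximum-likelihood unigram model (real systems use subword tokenizers and neural models):

```python
from collections import Counter

def tokenize(text):
    """Crude scheme: lowercase, then split on whitespace."""
    return text.lower().split()

corpus = "the cat sat on the mat"
tokens = tokenize(corpus)
counts = Counter(tokens)
# Maximum-likelihood unigram probabilities:
prob = {w: c / len(tokens) for w, c in counts.items()}
```

The unigram model assigns a sequence the product of its token probabilities, ignoring order; n-gram and Transformer models condition on context to do better.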
Evolutionary Algorithms
- Genetic Algorithm (GA): A search heuristic that mimics the process of natural selection.
- Chromosome: A candidate solution represented as a sequence of genes.
- Gene: A component of a chromosome representing part of a solution.
- Mutation: A random change to an individual’s genes to introduce diversity.
- Crossover: Combining genes from two parents to form offspring.
- Fitness Function: Evaluates how good a solution is for the current problem.
- Selection: The process of choosing individuals from a population to breed the next generation.
- Mixing Number: The number of parents involved in producing an offspring (1 = asexual, 2 = typical crossover, 3+ = multi-parent recombination).
- Stochastic Universal Sampling (SUS): A selection method that places evenly spaced selection points to proportionally sample individuals based on fitness.
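Stochastic universal sampling can be sketched in a few lines: one random offset generates n evenly spaced pointers over the cumulative fitness, so each individual receives a number of offspring close to its expected (fitness-proportional) share. The population and fitness values below are made up:

```python
import random

def sus(population, fitnesses, n):
    """Select n individuals with evenly spaced pointers over cumulative fitness."""
    total = sum(fitnesses)
    spacing = total / n
    start = random.uniform(0, spacing)          # one random offset, n pointers
    pointers = [start + i * spacing for i in range(n)]
    selected, cumulative, i = [], 0.0, 0
    for individual, fitness in zip(population, fitnesses):
        cumulative += fitness
        while i < n and pointers[i] < cumulative:
            selected.append(individual)
            i += 1
    return selected

random.seed(0)
parents = sus(["a", "b", "c", "d"], [1.0, 2.0, 3.0, 4.0], n=4)
```

Unlike repeated roulette-wheel draws, SUS guarantees each individual at least ⌊expected copies⌋ and at most ⌈expected copies⌉ selections, which keeps selection variance low.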
Planning and Acting
- Plan: A sequence of actions designed to achieve a goal.
- Planning Agent: An agent that constructs a plan before acting.
- STRIPS: A formal language used for automated planning.
- Effect: The resulting state or outcome of performing an action.
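A STRIPS-style action can be sketched as a set of preconditions plus an add list and a delete list over ground facts; applying the action yields its effect. The robot/door domain and fact strings below are illustrative, not part of STRIPS itself:

```python
def applicable(state, preconditions):
    """An action is applicable when its preconditions hold in the state."""
    return preconditions <= state

def apply_action(state, preconditions, add, delete):
    """Return the successor state: the action's effect on the world."""
    assert applicable(state, preconditions)
    return (state - delete) | add

state = {"at(robot, room1)", "door_open"}
pre = {"at(robot, room1)", "door_open"}
add, delete = {"at(robot, room2)"}, {"at(robot, room1)"}
new_state = apply_action(state, pre, add, delete)
```

A planner searches for a sequence of such applications (a plan) that transforms the initial state into one satisfying the goal.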
Advanced Learning and Reasoning
- PAC Learning (Probably Approximately Correct): A framework that analyzes whether a learning algorithm can, with high probability, learn a function that is approximately correct given a certain amount of data.
- Inductive Logic Programming (ILP): A technique that combines logic programming with machine learning by inducing logical rules from examples.
- Statistical Relational Learning (SRL): Combines probabilistic reasoning with relational logic to model complex, structured data (e.g., social networks, biological systems).
- Ensemble Learning: Combines multiple models to produce better predictive performance (e.g., random forests, boosting).
- Bayesian Inference: A method of updating probabilities based on evidence using Bayes’ Theorem.
- Variational Inference: An optimization-based method for approximating complex probability distributions, commonly used in deep generative models.
- Expectation-Maximization (EM): An iterative algorithm for finding maximum likelihood estimates in models with hidden variables.
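Bayesian inference is a one-line application of Bayes’ Theorem. A minimal sketch on a classic diagnostic-test example; the prior, sensitivity, and false-positive numbers are invented for illustration:

```python
def bayes_update(prior, likelihood, likelihood_not):
    """P(H|e) = P(e|H)P(H) / [P(e|H)P(H) + P(e|~H)P(~H)]"""
    numerator = likelihood * prior
    evidence = numerator + likelihood_not * (1 - prior)
    return numerator / evidence

# A test with 90% sensitivity, 5% false-positive rate, and a 1% prior:
posterior = bayes_update(prior=0.01, likelihood=0.9, likelihood_not=0.05)
```

Even with a fairly accurate test, the low prior keeps the posterior near 15%, which is the kind of exact computation that variational inference approximates when the evidence term is intractable.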
Planning and Acting Under Uncertainty
- Contingency Planning: Creating plans that include branches for different possible future states, useful in uncertain environments.
- Policy: A mapping from states to actions in reinforcement learning or decision processes.
- Value Function: Estimates the expected return (future reward) from a state or state-action pair.
- Bellman Equation: A recursive equation that defines the value of a state in terms of the values of successor states.
- Temporal Difference (TD) Learning: A class of reinforcement learning algorithms that learn predictions based on the difference between successive estimates.
- Q-Learning: An off-policy TD algorithm that learns the expected utility of actions without a model of the environment.
- SARSA (State-Action-Reward-State-Action): An on-policy TD learning algorithm that updates Q-values based on the actual action taken.
- Deep Q-Network (DQN): Combines Q-learning with deep neural networks to approximate value functions in high-dimensional spaces.
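Tabular Q-learning ties together the policy, value, Bellman, and TD entries above. A sketch on a tiny deterministic chain of states 0..3, where only reaching state 3 pays reward; the environment and hyperparameters are made up:

```python
import random

def step(state, action):
    """Deterministic chain: 'right' moves toward the goal at state 3."""
    next_state = max(0, min(3, state + (1 if action == "right" else -1)))
    reward = 1.0 if next_state == 3 else 0.0
    return next_state, reward, next_state == 3

random.seed(1)
actions = ["left", "right"]
Q = {(s, a): 0.0 for s in range(4) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2          # step size, discount, exploration

for _ in range(500):                            # episodes
    s = 0
    while True:
        # epsilon-greedy action selection:
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        # TD update toward r + gamma * max_a' Q(s', a'):
        target = r if done else r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if done:
            break

policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(3)}
```

Because the max in the target ignores which action was actually taken, this is off-policy; SARSA would instead bootstrap from the action the epsilon-greedy policy chose, and a DQN would replace the table `Q` with a neural network.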
Multi-Agent Systems
- Multi-Agent System (MAS): A system composed of multiple interacting intelligent agents, which may cooperate or compete.
- Cooperative Game: A multi-agent setting where all agents share the same reward function.
- Competitive Game (Non-Cooperative): A setting where agents have opposing goals, often modeled as zero-sum.
- Nash Equilibrium: A solution concept where no agent can benefit by changing its strategy while others keep theirs unchanged.
- Joint Policy: A policy that defines a combined strategy for multiple agents acting together.
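A pure-strategy Nash equilibrium can be checked by brute force in a two-player normal-form game: a cell is an equilibrium when neither player can improve by unilaterally deviating. The example payoffs are the conventional Prisoner’s Dilemma values:

```python
def pure_nash(payoff1, payoff2):
    """Return (row, col) cells where neither player gains by deviating."""
    n_rows, n_cols = len(payoff1), len(payoff1[0])
    equilibria = []
    for i in range(n_rows):
        for j in range(n_cols):
            best_row = all(payoff1[i][j] >= payoff1[k][j] for k in range(n_rows))
            best_col = all(payoff2[i][j] >= payoff2[i][k] for k in range(n_cols))
            if best_row and best_col:
                equilibria.append((i, j))
    return equilibria

# Rows/cols: 0 = cooperate, 1 = defect (Prisoner's Dilemma payoffs).
p1 = [[-1, -3], [0, -2]]
p2 = [[-1, 0], [-3, -2]]
```

Note the game is not zero-sum: mutual defection (1, 1) is the unique equilibrium even though mutual cooperation would pay both players more.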
Advanced Knowledge Representation
- Description Logic: A family of formal languages used to represent structured knowledge and support reasoning, especially in semantic web and ontology systems.
- Ontology: A formal specification of a set of concepts and relationships within a domain.
- Belief-Desire-Intention (BDI) Model: A model for rational agents that distinguishes between beliefs (what is known), desires (goals), and intentions (committed plans).
- Nonmonotonic Reasoning: Reasoning systems where adding new knowledge can invalidate previous conclusions (e.g., default reasoning).
- Default Logic: A formal system for reasoning with defaults or assumptions that hold in the absence of contrary information.
Advanced Probabilistic Models
- Hidden Markov Model (HMM): A statistical model where the system is assumed to be a Markov process with hidden (unobservable) states.
- Kalman Filter: An algorithm that estimates the state of a dynamic system from noisy observations, assuming linearity and Gaussian noise.
- Particle Filter: A nonparametric algorithm that estimates the posterior distribution of state by maintaining a set of samples (particles).
- Factor Graphs: Bipartite graphs representing how a global function factors into a product of local functions; used in probabilistic inference.
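The HMM entry can be made concrete with the forward algorithm, which computes the likelihood of an observation sequence by summing over all hidden-state paths. The weather/umbrella states and probabilities below are invented for illustration:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Return P(obs) under the HMM, via the forward recursion."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[s0] * trans_p[s0][s] for s0 in states)
                 for s in states}
    return sum(alpha.values())

states = ["Rain", "Sun"]
start_p = {"Rain": 0.5, "Sun": 0.5}
trans_p = {"Rain": {"Rain": 0.7, "Sun": 0.3}, "Sun": {"Rain": 0.3, "Sun": 0.7}}
emit_p = {"Rain": {"umbrella": 0.9, "none": 0.1},
          "Sun":  {"umbrella": 0.2, "none": 0.8}}
likelihood = forward(["umbrella", "umbrella"], states, start_p, trans_p, emit_p)
```

The Kalman filter performs the same predict-then-update recursion for continuous Gaussian states, and a particle filter replaces the exact `alpha` table with a set of weighted samples.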
Deep Learning & Neural Architectures
- Autoencoder: A neural network trained to reconstruct its input, often used for dimensionality reduction or anomaly detection.
- Variational Autoencoder (VAE): A probabilistic generative model that learns a latent variable representation of input data.
- Generative Adversarial Network (GAN): A generative model where two neural networks (generator and discriminator) compete to produce realistic synthetic data.
- Transformer: A deep learning architecture based on self-attention mechanisms, widely used in NLP (e.g., GPT, BERT).
- Attention Mechanism: Allows a model to focus on specific parts of input data, improving performance in sequence modeling tasks.
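Scaled dot-product attention, the core operation of the Transformer, can be sketched without any deep-learning library: each query is scored against every key, the scores are softmax-normalized, and the output is the weighted sum of values. The tiny matrices are hand-written, with no learned parameters:

```python
import math

def softmax(xs):
    m = max(xs)                         # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of row vectors."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)       # attention distribution over keys
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]                        # one query
K = [[1.0, 0.0], [0.0, 1.0]]            # two keys
V = [[1.0, 2.0], [3.0, 4.0]]            # their associated values
out = attention(Q, K, V)
```

Because the query aligns with the first key, the output is pulled toward the first value vector; in a real Transformer, Q, K, and V are learned linear projections of the input, computed across multiple heads.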
Ethics, Safety, and Explainability
- Explainable AI (XAI): Designing AI systems whose decisions can be understood and trusted by humans.
- AI Alignment: Ensuring that AI systems pursue goals that are aligned with human values and intentions.
- Fairness in AI: The study and implementation of algorithms that avoid bias and ensure equitable treatment across demographic groups.
- Adversarial Example: A perturbed input designed to fool a machine learning model into making incorrect predictions.
- Robustness: The ability of a model to maintain performance under noisy, incomplete, or adversarial inputs.

