Abstract
Backpropagation (BP) is fundamental to deep learning, yet it suffers from well-known limitations such as vanishing gradients and backward locking. This talk explores localized gradient methods, including Associated Learning (AL), Supervised Contrastive Parallel Learning (SCPL), and Decoupled Supervised Learning with Information Regularization (DeInfoReg), which decouple end-to-end BP into independent local objectives. These approaches improve training efficiency, robustness, and generalization. We discuss their theoretical foundations, practical applications, and experimental results, highlighting their potential as scalable alternatives to traditional BP.