Abstract
Modern AI is built on a data-driven approach, in which AI problems are solved by deep neural networks (e.g., CNNs, ResNets, and Transformers). Do neural networks possess human-like intelligence? To answer this question, I will relate "modern AI" to "heavily supervised learning" (or weak AI) and "neural networks" to "data-fitting machines," respectively. This view offers deeper insight into the working principles of neural networks, clarifying what they can and cannot do. They are fundamentally different from human brains. The next question is whether neural networks are the only data-fitting machinery for huge collections of input-output data pairs. If not, what is the alternative, and is it a better one? I have researched this topic since 2014, developed an alternative data-fitting machinery, and coined the name "green learning" (GL) for this emerging field. It is called "green" because it demands low power consumption in both training and inference. GL has many attractive characteristics, such as small model sizes, fewer training samples, mathematical transparency, and ease of incremental learning. GL adopts signal processing and statistical tools such as filter banks, linear algebra, and probability theory. We recently used the wavelet transform in representation learning to handle input images of higher resolution. Furthermore, we derived ways to assign weights to wavelet coefficients. The weighted wavelet (W2) coefficients offer highly discriminant features for decision learning. These new GL developments will be introduced in the second half of my talk.
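To make the wavelet idea concrete, the sketch below shows a single-level 2D Haar wavelet transform followed by a simple variance-based weighting of the resulting coefficients. This is an illustrative toy, not the speaker's actual W2 method: the function names and the variance-based weighting rule are assumptions chosen only to convey how wavelet coefficients might be weighted so that the more discriminant positions count more.

```python
# Illustrative sketch (NOT the speaker's actual W2 method): one level of a
# 2D Haar wavelet transform, plus a hypothetical variance-based weighting
# of coefficient positions across a set of images.

def haar_1d(row):
    """One level of the 1D Haar transform: pairwise averages, then differences."""
    avg = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    diff = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return avg + diff

def haar_2d(image):
    """One level of the 2D Haar transform: transform rows, then columns."""
    rows = [haar_1d(r) for r in image]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]  # transpose back to row-major order

def coefficient_weights(images):
    """Weight each coefficient position by its variance across the images,
    so positions that vary more between samples are weighted more heavily
    (a stand-in for a learned discriminability measure)."""
    coeffs = [haar_2d(img) for img in images]
    n, h, w = len(coeffs), len(coeffs[0]), len(coeffs[0][0])
    weights = []
    for i in range(h):
        row = []
        for j in range(w):
            vals = [c[i][j] for c in coeffs]
            mean = sum(vals) / n
            row.append(sum((v - mean) ** 2 for v in vals) / n)
        weights.append(row)
    return weights

if __name__ == "__main__":
    imgs = [
        [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]],
        [[16, 15, 14, 13], [12, 11, 10, 9], [8, 7, 6, 5], [4, 3, 2, 1]],
    ]
    w = coefficient_weights(imgs)
    print(len(w), len(w[0]))  # prints: 4 4
```

In practice, a library such as PyWavelets would replace the hand-rolled transform, and the weighting would be derived from a supervised discriminability criterion rather than raw variance; the sketch only fixes the overall shape of the pipeline.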
Bio