NeoChainDaily
31.12.2025 • 19:58 Research & Innovation

Researchers Quantify Spectral Bias of Regularizers in Convolutional Neural Networks

A team of machine learning researchers has introduced a visual diagnostic framework to monitor how weight frequencies evolve during the training of modern convolutional neural networks (CNNs). The work, posted on arXiv in December 2025, examines how regularization methods such as L2 weight decay and dropout bias the spectral characteristics of learned features. By focusing on the selection of low- versus high-frequency components, the study aims to close a long-standing gap in the theoretical understanding of regularization.

Visual Diagnostic Framework

The proposed framework captures the dynamic distribution of weight frequencies by applying discrete radial profiling to convolutional kernels, thereby addressing aliasing concerns that arise with small kernels such as 3×3. This approach enables researchers to visualize the gradual shift of spectral energy throughout training epochs.
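The idea of radial profiling with anti-aliasing padding can be sketched in NumPy. The function name `radial_profile`, the padding size, and the integer radial binning are illustrative assumptions, not the paper's exact implementation: zero-padding a small kernel before the FFT samples its frequency response on a finer grid, which mitigates the coarse-sampling problem of transforming a 3×3 kernel directly.

```python
import numpy as np

def radial_profile(kernel, pad_to=32):
    """Radially binned power spectrum of a small conv kernel.

    Zero-padding (pad_to is an illustrative choice) samples the kernel's
    frequency response on a finer grid than a direct 3x3 FFT would,
    addressing the aliasing concern mentioned for small kernels.
    """
    k = np.zeros((pad_to, pad_to))
    k[:kernel.shape[0], :kernel.shape[1]] = kernel
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(k))) ** 2

    # Bin spectral power by integer distance from the DC component.
    cy = cx = pad_to // 2
    y, x = np.indices(spectrum.shape)
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2).astype(int)
    power = np.bincount(r.ravel(), weights=spectrum.ravel())
    counts = np.bincount(r.ravel())
    return power / np.maximum(counts, 1)  # mean power per radius

# Example: a 3x3 averaging (low-pass) kernel concentrates energy at low radii.
box = np.ones((3, 3)) / 9.0
prof = radial_profile(box)
```

Tracking such profiles across epochs is what lets the framework visualize the gradual shift of spectral energy during training.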

Spectral Suppression Ratio

To quantify the degree of low‑pass filtering imposed by a regularizer, the authors define the Spectral Suppression Ratio (SSR). SSR measures the proportion of high‑frequency energy retained relative to a baseline model without regularization, offering a single‑value indicator of spectral bias.
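A minimal sketch of such a ratio, under stated assumptions: the abstract defines SSR only as high-frequency energy retained relative to an unregularized baseline, so the cutoff at half the Nyquist radius, the summation over kernels, and the helper names `high_freq_energy` and `ssr` are hypothetical choices for illustration.

```python
import numpy as np

def high_freq_energy(kernel, pad_to=32, cutoff_frac=0.5):
    """Spectral energy above a radial cutoff (fraction of Nyquist).

    The cutoff_frac=0.5 threshold is an assumption for illustration.
    """
    k = np.zeros((pad_to, pad_to))
    k[:kernel.shape[0], :kernel.shape[1]] = kernel
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(k))) ** 2
    cy = cx = pad_to // 2
    y, x = np.indices(spectrum.shape)
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    return spectrum[r >= cutoff_frac * (pad_to // 2)].sum()

def ssr(reg_kernels, base_kernels):
    """High-frequency energy of regularized kernels relative to an
    unregularized baseline; values well below 1 indicate that the
    regularizer acts as a strong low-pass (suppressive) bias."""
    e_reg = sum(high_freq_energy(k) for k in reg_kernels)
    e_base = sum(high_freq_energy(k) for k in base_kernels)
    return e_reg / e_base

# A low-pass averaging kernel retains far less high-frequency energy
# than a high-pass Laplacian, so its SSR against that baseline is < 1.
box = np.ones((3, 3)) / 9.0
lap = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
```

By construction the ratio equals 1 when regularized and baseline kernels match, which makes deviations from 1 directly interpretable as spectral bias.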

Experimental Setup

Empirical evaluations were conducted using a ResNet‑18 architecture trained on the CIFAR‑10 dataset. Both L2 regularization and dropout were applied separately, and their effects were compared against an unregularized control. The analysis focused on the evolution of spectral content in the early layers where kernel size is typically 3×3.

Frequency Suppression Results

Results indicate that L2 regularization suppresses high‑frequency energy accumulation by over 3× compared with the unregularized baseline, as reflected in a markedly lower SSR value. Dropout exhibited a more moderate reduction in high‑frequency components.

Accuracy‑Robustness Trade‑off

The study uncovers a trade‑off between accuracy and robustness. Models trained with L2 regularization displayed heightened sensitivity to broadband Gaussian noise, suggesting over‑specialization in low‑frequency features. Conversely, these models achieved superior robustness against high‑frequency information loss, outperforming baselines by more than 6% in scenarios involving image blurring or reduced resolution.

Implications for Generalization

By framing regularization effects in signal‑processing terms, the research confirms that common techniques impose a strong spectral inductive bias toward low‑frequency structures. This perspective may inform the design of new regularizers that balance frequency suppression with resilience to diverse perturbations.

This report is based on the abstract of the research paper, posted to arXiv as an open-access academic preprint. The full text is available via arXiv.
