NeoChainDaily
27.01.2026 • 05:05 Research & Innovation

Hierarchical Spectral Composition Achieves Exact Boolean Logic Synthesis via Gradient Descent

Researchers publishing on arXiv in January 2026 introduced a new differentiable architecture called Hierarchical Spectral Composition (HSC) that enables precise Boolean logic synthesis using gradient‑descent training. The method selects spectral coefficients from a frozen Boolean Fourier basis and assembles them through Sinkhorn‑constrained routing with column‑sign modulation, addressing long‑standing difficulties in quantizing neural‑network‑derived logic.

Background and Motivation

Traditional neural networks often converge to fuzzy approximations of Boolean functions, which degrade when converted to discrete representations. Prior work on Manifold‑Constrained Hyper‑Connections demonstrated that projecting routing matrices onto the Birkhoff polytope can preserve identity mappings, yet it did not support Boolean negation. The new HSC framework extends this line of research by adding column‑sign modulation, thereby allowing both positive and negative Boolean terms within a doubly stochastic routing structure.

Architecture Overview

HSC operates by fixing a Boolean Fourier basis and learning a set of spectral coefficients that are selected differentiably. These coefficients are routed through a matrix constrained to the Birkhoff polytope via Sinkhorn iterations, ensuring the routing remains doubly stochastic. Column‑sign modulation introduces a binary sign per column, effectively implementing logical negation without breaking the stochastic constraints.
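To make the routing step concrete, here is a minimal NumPy sketch of Sinkhorn normalisation onto the Birkhoff polytope with a per-column sign applied afterwards. The function names, matrix sizes, and the fixed sign vector are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sinkhorn(logits, n_iters=50):
    # Project a square matrix of logits toward the Birkhoff polytope
    # (doubly stochastic matrices) by alternating row/column normalisation.
    M = np.exp(logits - logits.max())      # positive matrix, numerically stable
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)  # normalise rows to sum to 1
        M /= M.sum(axis=0, keepdims=True)  # normalise columns to sum to 1
    return M

# Hypothetical routing: 4 spectral coefficients routed to 4 output slots.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4))
P = sinkhorn(logits)

# Column-sign modulation: one sign per column implements logical negation
# while P itself remains doubly stochastic.
signs = np.array([1.0, -1.0, 1.0, -1.0])   # illustrative, would be learned
routed = P * signs                          # signed routing matrix
```

In a trained model the logits and signs would be learned parameters; the point here is only that the sign factors live outside the stochastic constraint, so negation does not disturb the Sinkhorn projection.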

Experimental Evaluation

The authors evaluated HSC across four increasingly complex phases. For n=2 (16 Boolean operations over a 4‑dimensional basis), gradient descent reached 100% accuracy with zero routing drift and lossless ternary mask quantization. In the n=3 case (10 three‑variable operations), gradient descent achieved 76% accuracy, but exhaustive enumeration of 3^8 = 6561 configurations revealed optimal ternary masks that deliver 100% accuracy with 39% sparsity. For n=4 (10 four‑variable operations over a 16‑dimensional basis), a combined approach of exact Walsh‑Hadamard coefficient synthesis, ternary quantization, and MCMC refinement with parallel tempering attained 100% accuracy on all functions.
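The exact coefficient synthesis mentioned for n=4 rests on the Walsh-Hadamard transform, which maps a Boolean function's ±1 truth table to its spectrum over the parity basis. The following is a generic fast-transform sketch for illustration, not code from the paper; the ±1 encoding convention and the AND example are assumptions.

```python
import numpy as np

def walsh_hadamard(f_vals):
    # Fast Walsh-Hadamard transform of a Boolean function's +/-1 truth
    # table; returns its Fourier coefficients over the parity (XOR) basis.
    a = np.array(f_vals, dtype=float)
    n = a.size
    h = 1
    while h < n:  # butterfly passes, O(n log n) overall
        for i in range(0, n, 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a / n  # normalise so coefficients lie in [-1, 1]

# Example: 2-variable AND with True -> +1. Truth-table order:
# (0,0), (0,1), (1,0), (1,1); AND is true only on (1,1).
and_pm1 = [-1, -1, -1, 1]
coeffs = walsh_hadamard(and_pm1)
# coeffs -> [-0.5, -0.5, -0.5, 0.5]
```

Ternary quantization in this setting would then round each coefficient to {-1, 0, +1}; for n=3 the basis has 2^3 = 8 coefficients, giving the 3^8 = 6561 mask configurations the authors enumerate exhaustively.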

Hardware Implications

All tested operations were executed in a single combinational logic cycle at a throughput of 10,959 MOps/s on a GPU, demonstrating that HSC can support hardware‑efficient neuro‑symbolic inference. The ternary polynomial threshold representations identified by the method suggest a pathway to compact, high‑speed logic blocks suitable for ASIC or FPGA implementation.

Future Directions

The results indicate that while gradient descent alone suffices for low‑dimensional problems, higher‑dimensional Boolean synthesis benefits from hybrid strategies that incorporate exhaustive search, spectral analysis, and stochastic refinement. Ongoing work aims to scale the approach to larger variable counts and to integrate the architecture into end‑to‑end design flows for neuromorphic hardware.

This report is based on the abstract of the research paper, an open-access preprint on arXiv; the full text is available via arXiv.
