Fourier Analysis Networks (FANs) Are Here To Break Barriers In AI
A deep dive into the Fourier Analysis Network (FAN), a novel neural network architecture that outperforms the popular baselines (MLP, LSTM, KAN, Transformer, and Mamba), along with a walkthrough of building one from scratch.

Multi-layer Perceptrons, or MLPs, are the dominant architecture for AI models today.
Their foundation is the Universal Approximation Theorem, which states that an MLP with enough hidden units can approximate any real continuous function to any desired accuracy.
MLPs have recently been challenged by Kolmogorov-Arnold Networks (KANs) and XNets.
But something is still lacking at the core of all these architectures: they cannot model periodicity from data.
They may fit periodic training points well, yet they do not capture the underlying repeating pattern, so their performance on periodic data, especially beyond the training range, remains poor. A quick sketch below makes this concrete.
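To see this failure mode yourself, here is a minimal sketch (assuming PyTorch is installed) that fits a small MLP to a sine wave and then evaluates it outside the training interval; the architecture and hyperparameters are illustrative choices of mine, not taken from the paper.

```python
import math
import torch
import torch.nn as nn

# Fit a small MLP to sin(x) on [-2π, 2π].
torch.manual_seed(0)
x_train = torch.linspace(-2 * math.pi, 2 * math.pi, 512).unsqueeze(1)
y_train = torch.sin(x_train)

mlp = nn.Sequential(nn.Linear(1, 64), nn.GELU(),
                    nn.Linear(64, 64), nn.GELU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(mlp(x_train), y_train)
    loss.backward()
    opt.step()

# Evaluate far outside the training range: the MLP has no built-in
# notion of periodicity, so the learned curve stops oscillating there.
x_test = torch.linspace(4 * math.pi, 8 * math.pi, 512).unsqueeze(1)
with torch.no_grad():
    print("in-domain MSE    :", nn.functional.mse_loss(mlp(x_train), y_train).item())
    print("out-of-domain MSE:", nn.functional.mse_loss(mlp(x_test), torch.sin(x_test)).item())
```

Typically the in-domain error is tiny while the out-of-domain error is large: the network interpolates the training interval instead of learning the repeating pattern.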
This problem has now been addressed by a new type of neural network architecture called the Fourier Analysis Network (FAN).
Published on arXiv, this research introduces FANs, which use the principles of Fourier analysis to encode periodic patterns directly within the neural network.
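Before going deeper, here is a minimal sketch of the core idea in PyTorch, based on the paper's formulation of a FAN layer; the class name, the `p_ratio` split, and the GELU activation are my own illustrative assumptions, not fixed by the paper.

```python
import torch
import torch.nn as nn

class FANLayer(nn.Module):
    """Sketch of one FAN layer: part of the output comes from cos/sin of a
    linear projection (the periodic branch), the rest from a standard
    activated linear transform, and the two are concatenated."""

    def __init__(self, in_dim: int, out_dim: int, p_ratio: float = 0.25):
        super().__init__()
        p_dim = int(out_dim * p_ratio)   # width of the periodic branch
        g_dim = out_dim - 2 * p_dim      # cos and sin each take p_dim outputs
        self.periodic = nn.Linear(in_dim, p_dim, bias=False)  # W_p
        self.general = nn.Linear(in_dim, g_dim)               # W_g, B_g
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.periodic(x)
        # Output: [cos(W_p x) || sin(W_p x) || act(W_g x + B_g)]
        return torch.cat([torch.cos(p), torch.sin(p), self.act(self.general(x))], dim=-1)

# Stacking these layers the way MLP layers are stacked yields a FAN.
fan = nn.Sequential(FANLayer(1, 64), FANLayer(64, 64), nn.Linear(64, 1))
print(fan(torch.randn(8, 1)).shape)  # torch.Size([8, 1])
```

Because the cos/sin branch is built into every layer, periodic structure is represented natively rather than approximated piecewise, and this is the key difference from an MLP.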