Fourier Analysis Networks (FANs) Are Here To Break Barriers In AI

A deep dive into the Fourier Analysis Network (FAN), a novel neural network architecture that outperforms the baselines (MLP, LSTM, KAN, Transformer, and Mamba), along with a guide to building one from scratch.

Dr. Ashish Bamania
Published in Level Up Coding · 12 min read · Dec 6, 2024

Image generated with DALL-E 3

Multi-layer Perceptrons, or MLPs, are the dominant architecture for AI models today.

They are based on the Universal Approximation Theorem, which guarantees that a network with enough hidden units can approximate any continuous function on a compact domain to any desired accuracy.
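As a quick illustration (a sketch, not code from any paper), here is what such an approximator looks like in PyTorch. The widths and the GELU activation are arbitrary choices:

```python
import torch
import torch.nn as nn

# A minimal MLP: stacked affine maps separated by nonlinearities.
# The widths (64) and the GELU activation are arbitrary choices; the
# Universal Approximation Theorem says a wide enough hidden layer
# suffices to approximate a continuous function on a compact set.
mlp = nn.Sequential(
    nn.Linear(1, 64),   # input -> hidden
    nn.GELU(),
    nn.Linear(64, 64),  # hidden -> hidden
    nn.GELU(),
    nn.Linear(64, 1),   # hidden -> output
)

x = torch.linspace(-3.0, 3.0, 100).unsqueeze(-1)  # shape (100, 1)
y_hat = mlp(x)                                    # shape (100, 1)
```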

MLPs have recently been challenged by Kolmogorov-Arnold Networks (KANs) and XNets.

But there’s something that is still lacking in the core of these architectures.

They cannot explicitly model periodicity in data.

As a result, their performance on periodic data, especially beyond the range seen during training, remains poor.
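You can see this failure mode in a few lines. In this sketch (the setup is illustrative, not from the paper), an MLP trained to fit sin(x) on a bounded interval fits it well there, but its predictions stop looking like a sine wave on an interval it has never seen:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Train a small MLP to fit sin(x) on [-2*pi, 2*pi] only.
model = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
x_train = torch.linspace(-2 * math.pi, 2 * math.pi, 512).unsqueeze(-1)
y_train = torch.sin(x_train)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = F.mse_loss(model(x_train), y_train)
    loss.backward()
    opt.step()

# In-domain error is tiny, but on an unseen interval the error blows
# up: the MLP memorised the shape locally instead of the period.
x_test = torch.linspace(4 * math.pi, 6 * math.pi, 512).unsqueeze(-1)
with torch.no_grad():
    print("train MSE:", F.mse_loss(model(x_train), y_train).item())
    print("extrapolation MSE:", F.mse_loss(model(x_test), torch.sin(x_test)).item())
```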

This has been solved by a new type of neural network architecture called Fourier Analysis Networks (FANs).

Published on arXiv, this research introduces FANs, which use the principles of Fourier analysis to encode periodic patterns directly within the neural network.
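Concretely, a FAN layer concatenates the cosine and sine of a learned projection of the input (the periodic branch) with an ordinary activated linear transform (the non-periodic branch): φ(x) = [cos(Wp·x) ‖ sin(Wp·x) ‖ σ(Bg + Wg·x)]. Here is a minimal PyTorch sketch of that idea; the `p_ratio` split and the GELU activation are my assumptions for illustration, not fixed by the paper:

```python
import torch
import torch.nn as nn

class FANLayer(nn.Module):
    """Sketch of a FAN layer: part of the output comes from cos/sin of
    a learned projection (periodic branch), and the rest from a
    standard activated affine map (non-periodic branch)."""

    def __init__(self, d_in: int, d_out: int, p_ratio: float = 0.25):
        super().__init__()
        d_p = int(d_out * p_ratio)  # width of the periodic projection
        d_g = d_out - 2 * d_p       # width of the standard branch
        self.proj_p = nn.Linear(d_in, d_p, bias=False)  # feeds cos and sin
        self.proj_g = nn.Linear(d_in, d_g)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.proj_p(x)
        # Concatenate cos/sin features with the ordinary branch.
        return torch.cat(
            [torch.cos(p), torch.sin(p), self.act(self.proj_g(x))], dim=-1
        )

layer = FANLayer(d_in=1, d_out=64)
out = layer(torch.randn(8, 1))  # shape (8, 64)
```

Because cos and sin are periodic by construction, whatever frequencies the periodic branch learns inside the training range repeat automatically outside it, which is exactly what a plain MLP cannot do.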


