Brain Lenses

Overfitted Brain Hypothesis

Colin Wright
Jan 12, 2023

In the world of artificial intelligence research and development, "overfitting" refers to the (usually accidental) over-specialization of a machine learning model: the model becomes so narrowly tuned to its training data that it ends up less useful for most other tasks.

Said another way: one approach to creating AI software is "training" it on a huge collection of data, like hundreds of millions of photos of cats if you're hoping to make an AI that can accurately tell the difference between photos of cats and photos of things that are not cats.

In this context, overfitting means training the model with such specificity that it can't later be fed images of dogs and achieve the analogous goal (identifying photos of dogs vs. photos of not-dogs); its training is too specific to one type of animal, and it's thus useless for any other application.
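A toy sketch of the idea in code (not from the Brain Lenses piece; the models and data here are purely illustrative): a "memorizer" model that stores its training examples verbatim scores perfectly on the data it was trained on but fails completely on new inputs, while a simpler model that learns the underlying pattern generalizes.

```python
# Illustrative sketch of overfitting: a memorizer vs. a model that
# learns the underlying rule. All names and data are made up.

def train_memorizer(data):
    """Overfit: store every training pair verbatim."""
    table = dict(data)
    # Unseen inputs get an arbitrary default -- nothing general was learned.
    return lambda x: table.get(x, 0.0)

def train_linear(data):
    """Learn the underlying rule: least-squares fit of y = a*x + b."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def mean_sq_error(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# True relationship is roughly y = 2x; train and test inputs don't overlap.
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]
test = [(5, 10.1), (6, 11.8)]

memorizer = train_memorizer(train)
linear = train_linear(train)

print(mean_sq_error(memorizer, train))  # 0.0 -- perfect on training data
print(mean_sq_error(memorizer, test))   # large -- useless on new inputs
print(mean_sq_error(linear, test))      # small -- generalizes
```

The memorizer is the extreme case of what the article describes: flawless on exactly what it has seen, and no better than guessing on anything else.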

© 2025 Colin Wright