Overfitted Brain Hypothesis
In the world of artificial intelligence research and development, "overfitting" refers to the (usually accidental) over-specialization of a machine learning model: the resulting AI becomes ultra-focused on its training data, and consequently less useful for most other things.
Said another way: one approach to creating AI software is "training" it on a huge collection of data, like hundreds of millions of photos of cats if you're hoping to make an AI that can accurately tell the difference between photos of cats and photos of things that are not cats.
In this context, overfitting means training this model with such specificity that it can't later be fed images of dogs and achieve the same goal (identifying photos of dogs vs. photos of not-dogs)—its training is too specific to one type of animal, and it's thus useless for any other application.
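The idea above can be sketched in a few lines of code. This is a minimal, illustrative example (not from the article itself): we fit two models to noisy samples of a simple underlying trend. The flexible model memorizes its training data almost perfectly, while the simple model makes small errors on the training data but generalizes better; the function names, degrees, and sample counts are all arbitrary choices for the demonstration.

```python
import numpy as np

# Noisy training samples of a simple underlying trend (y = x).
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.1, size=10)

# Fresh test points drawn from the same underlying trend, noise-free.
x_test = np.linspace(0.05, 0.95, 10)
y_test = x_test

def fit_and_errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# A straight line: can't memorize the noise, so it captures the trend.
simple_train, simple_test = fit_and_errors(1)

# A degree-9 polynomial through 10 points: memorizes the training data
# (noise included), which is exactly what overfitting looks like.
complex_train, complex_test = fit_and_errors(9)

print(f"line:     train MSE={simple_train:.5f}, test MSE={simple_test:.5f}")
print(f"degree 9: train MSE={complex_train:.5f}, test MSE={complex_test:.5f}")
```

The overfit model scores better on the data it was trained on, but that "success" is memorization of one specific dataset, noise and all, which is why it fares worse on anything new, the same way a cats-only classifier is useless for dogs.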