Ord Clouds
Toby Ord, a senior researcher at Oxford University’s AI Governance Initiative, recently wrote an essay addressing the concept of artificial general intelligence (AGI) and how we might know if and when we’ve achieved it.
The traditional definition of AGI is an artificial intelligence system that matches or exceeds the most intelligent humans across a range of measures, or that is at least as cognitively capable as most humans.
A theoretical smarter-than-all-humans AI is sometimes referred to as artificial superintelligence (ASI), while AI systems that are capable in just one or a few fields or tasks are often called artificial narrow intelligence (ANI).
While today’s (arguably ANI) AI systems are already useful for all sorts of purposes, the real benefits (and potential dangers) have long been associated with the development of synthetic intelligences that match or beat most or all humans at basically everything. As such, entire fields of research and organizations have been established to track the development of such systems, and to be prepared for if and when they arrive.
In that piece, Ord posits that it will be difficult for us to recognize when such systems have arrived. He uses the analogy of hikers seeing a cloud-crowned mountain in the distance and asking themselves how they’ll know when they have definitively entered that cloud. As they climb higher and higher up the mountain, they notice a faint haze, and visibility progressively drops, but the change is relative, and it’s never obvious that they are definitively within that cloud halo that was so clearly differentiated from not-cloud-space when viewed from a distance.
His claim, then, is that some changes are easier to perceive from far away, when we’re speculating and predicting, and much harder to distinguish with clarity up close, because the day-to-day gradient of change is so gradual.
The definitions we’ve come up with for this class of non-human intelligences (ANI, AGI, ASI) become less useful the closer we get to them. Now that we are arguably within the realm of ANI and possibly, maybe, approaching some kind of AGI, these distinctions are blurring: the difference between one model and the next can bear hallmarks of all or none of these categories (or even several within the same update), yet still, in aggregate, seem to nudge us forward along that predicted trajectory.
The author behind AGI Friday, Daniel Reeves, called this analogy the Ord Cloud (not to be confused with the Oort Cloud), and it would seem to be a useful framing for a field of research that’s muddled by rapidly changing (and often business-motivated) definitions for those terms, by a jagged frontier of capabilities in the best-available models, and by the reality that the most powerful systems are not yet publicly available.