The term “chronowashing” was coined by Michelle Bastian, a senior lecturer, in a preprint article in Environmental Humanities, in which she posits that some public intellectuals may be using the concept of long-term thinking to excuse negative short-term behavior.
In other words, it may be possible to chronowash the destruction of irreplaceable wetlands if you claim that killing those ecosystems will allow for the building of more residences, which will help future generations afford their own homes.
You might also be able to justify extracting huge amounts of oil from the ground by claiming these fossil fuel resources will allow us to build more solar panels, which ultimately will help us enjoy practically unlimited clean, renewable energy.
Some chronowashing is more pernicious than the examples above: claims about advanced artificial intelligence systems, for instance, are difficult to assess, as are claims about nearly anything in the social sciences, because it's currently impossible to accurately project how various investments and sacrifices will pan out over the long term.
Still, this general concept seems like it might be useful as a way to weigh claims about future outcomes against the short-term downsides the claimants are discounting (and, as is often the case, when those same claimants stand to profit in the short term).
It may be that some destruction and sacrifice today will lead to immensely better, absolutely worthwhile outcomes a hundred years hence. But it may also be that some of these claims are purely (or mostly) self-serving: they allow those who wish to avoid scrutiny, to profit without criticism, or to receive fanfare for their efforts to enjoy personal benefits up front while claiming (unprovable) long-term payouts for everyone. Such claimants may also assume that because they, and everyone who doubts them, will be dead by the time those payouts are meant to arrive, their accuracy doesn’t really matter.
Long-term thinking is arguably important, as it allows us to avoid the downsides of fixating on today’s issues to the exclusion of those that won’t arrive until tomorrow.
Lacking future-facing perspectives, we would never address major problems like pollution, climate change, and ideological extremism, because it's easier to just keep kicking those cans down the road, leaving them for the next generation to handle; and that's assuming we can even perceive those problems clearly in the first place, using only the evidence available to us day-to-day, without chronological context to draw upon.
But sacrificing the present in favor of a theoretical future may also be fraught if we don’t consider the needs and priorities of those who exist right now.
We don’t have a solid system of ethics for balancing the value of an existing human’s life against that of a potential human a hundred or a thousand or ten thousand years from now (the “discount rate” on such lives), so all of this remains speculative for the moment.
That said, it’s probably useful to have more terminology of this kind in our vocabularies if we want to spark and usefully frame these sorts of conversations, moving forward.