Considered a virtue by many cultures, historical and contemporary, “altruism” refers to the concept and practice of caring for other humans and/or entities, and concerning oneself with their wellbeing.
This concept varies in specifics from place to place, and is given different levels of prominence in different countries, societies, and families.
Some flavor of altruism (or approach to it) is included in most religious and ideological worldviews, though, and it's often thought of as a component of a good, moral life.
Part of what's interesting about this concept is that it would seem to give the lie to the claim that human beings are, and must be, selfish in order to compete evolutionarily.
That apparent contradiction diminishes when we look at a tribal model of evolution, which includes care for the group and concepts like pack-hunting and collaboration: through this lens, it makes perfect sense that we might sometimes sacrifice our personal wellbeing for the betterment of the group, which in some cases will mean the betterment of another individual in our group.
Altruism is distinct from concerns related to existing relationships, in that it's not predicated on helping out a family member or someone we owe some kind of loyalty—altruism, by definition, involves personal sacrifice so that someone else (usually someone we don't even know) will benefit.
It's been theorized that part of what compels us to perform such acts—which again, would seem to put us at a personal, biological disadvantage, and which thus don't seem to align with other evolutionary motivations—is that we feel good (experience pleasure) when we are altruistic.
This surge of positive feelings triggered by doing altruistic things might be a fitness-related adaptation: when we help other people, we can strengthen society in a big-picture sense, which in turn strengthens the foundations of our own survival and fitness.
Because altruistic acts set personal gratification aside, and in some cases even hinder it, the little surge of satisfaction we feel from being altruistic might be biology's way of incentivizing us to do things that benefit us and those like us over the longer term.
A modern iteration of raw altruism, called “effective altruism,” expands on this basic concept to focus on data-backed approaches to doing the most good for the most people.
It may be personally gratifying and externally beneficial to give $1,000 to someone in need, but that same $1,000 might save several lives if instead spent on mosquito netting and vaccines for folks in poorer parts of the world.
The theory is that altruism is trying to nudge us toward big-picture thinking in the sense that we invest in the wellbeing of all humanity, and perhaps non-human entities as well—life beyond ourselves.
Biology accomplishes this by encouraging us to spend our time and money and other resources on entities other than ourselves by making us feel good when we do so.
But there may be more effective and efficient options that don't trigger this same biological surge of satisfaction, and the EA approach is to identify such opportunities (as objectively as possible) and persuade folks to commit to those sorts of efforts as well, not just the latently pleasure-triggering ones.
There's little pushback against the concept of altruism in general—again, most belief systems have some reverence for the concept, even if the specifics differ from group to group—but EA is more controversial because of how callous it can sometimes seem, opting for an at times brutally utilitarian approach rather than something more up-close-and-personal, warm-seeming, and satisfaction-stimulating.
Both approaches would seem to help at different scales, though, and as with most things it seems likely that the best fit for most people will be somewhere in the middle: our contributions helping in different ways, with all sorts of factors influencing our relative commitments to the wellbeing of others.
Paid Brain Lenses subscribers receive twice as many essays and podcast episodes each week. They also fund the existence and availability of all the free stuff.
You can become a paid subscriber for $5/month or $50/year.
You can also support all my work (and receive gobs of bonus content) via Understandary.