Algorithmic Management
Note: It’s my 39th birthday today! It’s also the day my new book, fittingly called How To Turn 39, goes on pre-sale (it officially launches in paperback, ebook, and audiobook form in a month, on May 16).
This is a book about aging in general (not just aging as it relates to late-30-somethings), and it addresses how growing older can distort our sense of ourselves, our lives, and the world around us—so very Brain Lenses-related.
Snagging a copy is also a great way to support my work, so if you’re finding some value in what I’m doing here, consider pre-ordering a copy (and thanks in advance if you do) :)
—
In the modern business world, “Algorithmic Management” refers to practices and principles that use complex software to make decisions that humans previously had to make themselves.
This might mean using a digital tool to determine whether to buy a particular stock or to pick up a high-demand concert ticket at a given price, using historical data and mathematical formulae to compute the likelihood of making one’s money back on the stock or ticket resale market.
But it can also refer to the systems used to divvy out rides to individual rideshare drivers, and in fact that’s where the term originates: it was coined in a 2015 paper that looked into how this approach to management impacts the humans who are guided by these algorithms.
Complex versions of these tools crunch all sorts of variables and parameters to determine the optimal distribution of responsibilities across a huge number of human workers. In the rideshare context, these systems take into account the distance from driver to customer, the vehicle the driver is using, their reviews and rankings, the time of day, local regulations, and how likely a driver is to be tempted by a given potential payout, among many other data points.
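As a toy illustration of how those variables might be combined (the real dispatch systems are proprietary, so every field name and weight below is invented for the sketch), a simple weighted-scoring assignment could look like this:

```python
# Toy sketch of algorithmic ride assignment. All fields and weights are
# hypothetical; real dispatch systems are proprietary and far more complex.
from dataclasses import dataclass

@dataclass
class Driver:
    name: str
    distance_km: float   # distance from the waiting customer
    rating: float        # average review score, 0-5
    accept_rate: float   # estimated chance they accept this payout, 0-1

def score(d: Driver) -> float:
    # Closer drivers, better-rated drivers, and drivers likely to accept
    # the offer all score higher; the weights here are arbitrary.
    return (-0.5 * d.distance_km) + (0.3 * d.rating) + (2.0 * d.accept_rate)

def assign_ride(drivers: list[Driver]) -> Driver:
    # Pick the driver with the highest composite score.
    return max(drivers, key=score)

drivers = [
    Driver("A", distance_km=1.2, rating=4.9, accept_rate=0.6),
    Driver("B", distance_km=4.0, rating=4.7, accept_rate=0.9),
    Driver("C", distance_km=0.8, rating=3.9, accept_rate=0.4),
]
print(assign_ride(drivers).name)  # → A
```

The point of the sketch isn't the particular numbers; it's that whoever sets those weights decides what "optimal" means, which is exactly where the biases discussed below creep in.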
This term has since gone on to be used to refer to corporate policies that, for instance, may dictate the use of software to determine who to fire, hire, and promote based on similar jumbles of data, which ostensibly helps protect the firing, hiring, and promoting companies from human bias-related accidents and prejudices (and thus, related lawsuits).
One potential downside of this approach, though, in both gig economy and traditional workplace setups, is that while algorithms can theoretically be programmed to be objective in some regards, they’re biased toward some outcomes over others by their very nature, and that typically means they’ll prioritize whatever their makers want them to prioritize, even if those outcomes are achieved at the expense of the workers they’re managing.
As a result, the algorithms that connect rideshare drivers with customers may offer lower wages to drivers they know will accept smaller payouts, over time incentivizing all drivers to accept less money for their labor, lest they find themselves slowly elbowed aside in favor of workers willing (and able) to reduce their own pay.
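That wage-lowering dynamic can be sketched in a few lines. Everything here is hypothetical (the acceptance model is a made-up threshold; real platforms would use learned models of driver behavior), but it shows the basic move: offer each driver the smallest payout the system predicts they'll still accept.

```python
# Toy sketch of "personalized" payout offers. The acceptance model is a
# made-up threshold, purely for illustration; real systems would use
# learned models of driver behavior.

def will_accept(payout: float, historical_min: float) -> bool:
    # Hypothetical model: a driver accepts anything at or above the
    # lowest payout they've accepted before.
    return payout >= historical_min

def lowest_offer(historical_min: float, fare: float, step: float = 0.50) -> float:
    # Walk upward from a floor price until the model predicts acceptance,
    # never exceeding the full fare.
    offer = step
    while offer < fare and not will_accept(offer, historical_min):
        offer += step
    return round(offer, 2)

# Two drivers competing for the same $20 fare: the one with a history of
# taking smaller payouts gets the smaller offer.
print(lowest_offer(historical_min=8.00, fare=20.00))    # → 8.0
print(lowest_offer(historical_min=12.50, fare=20.00))   # → 12.5
```

Run at scale, that logic rewards whoever will work for the least, which is the ratchet described above: drivers who hold out get fewer offers, and everyone's floor drifts downward.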
Similarly, such systems might recommend firing waves of employees in order to optimize profits, even if doing so would harm the company’s reputation and the morale of the workers who survive the cull. That, in turn, might hinder the company’s ability to hire talent in the future. But if these variables aren’t accounted for and weighted appropriately (in terms the software can understand and incorporate into its math), they won’t be legible to the decision-making components of these systems, which can lead to a slew of negative outcomes even when the goal that was programmed into the algorithm (maximizing profits, say) is ultimately attained.
Goodhart’s Law is an adage that says, in essence, as soon as a measure becomes a target, it’s no longer a useful or desirable measure. And Campbell’s Law, another adage, says that efforts to quantify social indicators for decision-making purposes will tend to be corrupted by concomitant efforts to optimize based on that quantification.
In other words, the more we convert difficult-to-quantify elements of human activity and complex work environments into figures that are legible to software, the more likely it is those controlling said software will attempt to optimize for outcomes that serve their purposes, whatever the (unimportant to them, but maybe vital to others) downsides.
Thus, unless external factors (like regulations and laws) force businesses using these sorts of tools to consider variables beyond the dollars and cents in their bank accounts (and the other numbers that inform those numbers), they tend to endlessly tweak these algorithms to optimize for those priorities, potentially corrupting aspects of their industries over time, and hindering or harming those whose priorities are not worked into (and highlighted by) their algorithms.