Algorithm Aversion
In the growing field of human-AI interaction, “Algorithm Aversion” refers to the tendency of some people to substantially favor recommendations and judgments from humans over those generated by algorithms (including artificial intelligence systems), even when the algorithmic output is demonstrably superior.
This might manifest as a preference for human music curators over automated playlist recommendations, or as mistrust of AI systems used to detect cancers in X-rays or to determine one’s insurance payments.
The rationales for this preference are still being sorted out, but the research available at the moment indicates that many people with an aversion to algorithms mistrust their opacity: while a human decision-maker (or recommender, or medical practitioner) has seemingly transparent motivations, sources of knowledge, and so on, a piece of software does not.
Thus, an algorithm’s decisions or recommendations may lack the perceptual weight of (for instance) a determination made by a doctor with years of experience, or a music critic who has a clear set of genre preferences and a long history of curating specific sorts of playlists.
There’s also evidence that algorithms operating in spaces that feel more “human,” like those that require some kind of moral decision-making or empathy, are less likely to be trusted than humans doing the same work, and that we may be more likely to trust algorithmic systems when there’s a “human-in-the-loop”: someone working with an AI tool, for instance, rather than the tool operating independently (even if the human involved does little or nothing most of the time).
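To make the “human-in-the-loop” idea concrete, here’s a minimal sketch in Python of what such an arrangement can look like: an automated recommender proposes a decision, and a person must accept or override it before anything is acted on. Every name, threshold, and risk score here is hypothetical, invented purely for illustration; this isn’t any particular system’s real interface.

```python
# A toy sketch of a "human-in-the-loop" arrangement: the algorithm proposes,
# a person disposes. All names and values are hypothetical.

def algorithmic_recommendation(case: dict) -> str:
    """Stand-in for any automated recommender or classifier."""
    # Pretend the model flags anything with a risk score of 0.5 or higher.
    return "deny" if case.get("risk_score", 0.0) >= 0.5 else "approve"

def human_in_the_loop_decision(case: dict) -> str:
    """Show the model's suggestion to a person, who can accept or override it."""
    suggestion = algorithmic_recommendation(case)
    answer = input(f"Model suggests '{suggestion}' for {case['id']}. Accept? [y/n] ")
    if answer.strip().lower().startswith("y"):
        return suggestion  # The human rubber-stamps the algorithm's call...
    return input("Enter your decision instead: ").strip()  # ...or overrides it.

if __name__ == "__main__":
    print(human_in_the_loop_decision({"id": "case-42", "risk_score": 0.7}))
```

Notably, even when the reviewer approves nearly every suggestion, the research mentioned above suggests the mere presence of that approval step can make the overall system feel more trustworthy.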
Maybe the most potent means of alleviating algorithm aversion, though, and in some cases even converting it into “algorithm appreciation,” is to increase a person’s experience with the tools in question. Familiarity makes those tools and their operation more transparent, increases the user’s sense of their capabilities, and in some cases lets them tweak the outputs; all of these have been shown to increase trust in such tools, while also making it more likely that users will discount periodic flubs on the part of the algorithms (because they better understand how and why those flubs happened, and how the systems adjust, over time, to flub less).
A mistrust of new tools, especially those that can seem borderline magical (or obscure and nefarious), isn’t unique to algorithms, and it’s not necessarily a bad thing, as some such tools truly aren’t very good at the tasks to which they’re applied, and many are deployed before they’re fully baked: not ready for prime time, but still used for important tasks for all sorts of reasons (often, the desire to save money).
That said, the appropriate use of such tools can amplify our capacities and capabilities (individually and societally), and it’s therefore probably prudent to ensure our concerns are of the warranted, rational variety, rather than the sort that’s based on fear and ignorance. And that means understanding and accounting for our knee-jerk biases in this space, be they positive or negative.