Bulverism
In 1941, the author and religious philosopher C.S. Lewis wrote about a type of circular reasoning that dismisses the larger position or character of the person on the other side of an argument or disagreement, rather than addressing the issue at hand.
Circular reasoning is a logical fallacy that uses the end-point of an argument as the origin point of that same argument.
One common use of circular reasoning is found within the realm of spiritual belief: these holy people claim this scroll is sacred; this scroll says that what these holy people say is important and should be respected; and we should take what the scroll says seriously because the holy people say we should.
This argument is self-reinforcing and never achieves a logical basis beyond its circle of support.
Pluck any one element from the circle and the whole thing collapses, lacking support from external, non-circular elements: Why should we trust these people? Why should we invest any significance in what this scroll (or book, or YouTube video, or public speaker, or online forum post) says?
Objective methods for establishing facts are one means of escaping circular reasoning: if we can establish that the scroll was indeed written by someone possessing knowledge the rest of us lack, demonstrated by means beyond this circle (beyond the confirmation of people whose sole authority is derived from what the scroll says), then this sequence of significance might have some actual bearing.
Lacking this more objective grounding from beyond the circle, though, people remain free to believe whatever they like about the scroll and these holy people, but such belief will tend to be traditional or emotional in nature, not logical.
Bulverism refers to the application of circular reasoning for rhetorical purposes, including persuasion during a conversation or argument.
In practice, this tends to mean beginning with the assumption that the other person is wrong, and then using a type of circular logic predicated on that assumption to explain why they are wrong.
This may involve the use of a circular logic chain, as previously mentioned, but it may also involve some type of ad hominem attack (arguing against the character or motives of the other person, including who they are, their presumed misunderstanding of the topic in question, or their biases) or straw man arguments (replacing the other person's actual arguments with weaker versions that are easier to argue against).
This approach to discourse is common, even amongst otherwise quite rational, well-meaning people.
It makes sense that so many of us succumb to this approach at times, as it can be soothing to assume, by default, that people with whom we disagree are wrong.
This assumption allows us to get on with our day and not feel pressured to go through the cognitively taxing process of assessing each and every possible argument from an objective, rational point of view; we default to assuming our beliefs are justified, even if they have never been properly assessed, or haven't recently been examined through a critical lens.
Bulverist thinking flatters us and denigrates others, and it can distort our perception of people who may have something valid to say, but whom we assume are speaking from a lesser position than ourselves: they are wrong, and our cognitive powers are tasked with explaining why they're wrong, rather than assessing the validity of what they're saying from a neutral starting point (which is more difficult and time-consuming).
The solution to the problem of Bulverism, according to Lewis, is to accept the possibility that there is such a thing as unbiased reasoning, and that it's therefore possible to determine what is and isn't true, even though the people attempting to determine this truth are normal human beings with flaws, prejudices, and all the other issues that influence our perception in countless ways.
Lacking this acceptance, we find ourselves in a rational space in which nothing is demonstrably true, and thus anything is both possible and believable, because there's no distinction between fib and fact. We become immensely gullible, in short, because we cannot distinguish between arguments resting upon the relatively solid ground of objectivity and arguments predicated on some type of self-reinforcing, and often self-serving, pseudo-logic.
Brain Lenses is part of the Understandary portfolio of projects.
You can find the Brain Lenses podcast at brainlenses.com or wherever you get your podcasts.