LLM Writing
Artificial intelligence systems, especially those that use large language models (or LLMs) to produce human-seeming outputs, are creating a low-level ruckus across essentially the whole of society.
Tools like ChatGPT, Claude, Gemini, and even open-source options like Llama and DeepSeek’s R1 are being baked into many common platforms and applications (including social networks, office tools like Google Docs, and Apple’s newer iPhones), and that’s putting them front and center when users of said tools want to write an email, respond to an invitation, or produce an essay for class.
While some of these tools excel at churning out paragraphs with nothing more than a prompt (“Write me a scholarly essay on the geopolitical context in which War and Peace was published”), many of them are also leveraged to increase (or enable) the user’s understanding of something that might otherwise be beyond their ken.
So you might feed an AI tool a link to a long essay and ask it to summarize said essay in five bullet points, or you might ask it to explain a research paper that you don’t understand in language you’ll be more likely to comprehend.
These tools, then, can both expand and contract existing information: they can take a core idea and spin up a whole email, essay, or book out of it, but they can also compress a book's worth of writing down into just a few bullet points.
Recent research (from early 2025) suggests that, as of late 2024, about 18% of financial consumer complaints were at least partially LLM-produced and around 10% of job postings were LLM-assisted. The same paper says that up to 24% of corporate press releases are now produced (entirely or partly) by LLMs, and about 14% of United Nations press releases are generated or modified the same way.
Another study found that in scientific publishing, something like 17.5% of computer science papers and around 16.9% of peer review text included at least some LLM-generated copy.
Yet another study has suggested that something like half of all social media content will be produced by generative AI tools by 2026—up from about 39% in 2024.
These numbers should be considered incomplete and perhaps even suspect, as the tools we use to detect LLM activity are not very reliable. The AI systems they target are evolving rapidly, which makes keeping up with them tricky, and the detection methods developed so far are quite leaky, generally reporting their findings with more confidence than their actual success rates can support.
That said, there are some interesting implications associated with our rapid adoption of these technologies.
Foremost among them is that it's devilishly difficult to determine whether text we read (and images we view, videos we watch, and so on) was produced by a human being, written by software that fabricates "new" text based on text it has previously ingested, or some combination of both. This can lead to manipulation, as we might swoon over the seemingly heartfelt words of an online suitor, only to find they are not real (and that we've perhaps been conned).
A seemingly heartfelt memoir or firsthand account of a war or other historic event might likewise tug at our heartstrings, despite having been cobbled together by an LLM system. Learning this is the case could leave us questioning other such accounts in the future, even if they're legitimate.
We might also continue to believe these narratives are real, though, never discovering their synthetic origins. This could influence our sense of history and facts, and perhaps even cause us to feel deep empathy for people who didn’t exist, and to develop strong beliefs about situations that never happened.
At a more basic level, we might feed an LLM bullet points we want to present to a business associate, only to have that system add details that aren’t true (or which are incomplete in a meaningful way).
On the flip-side, we might be handed a book full of valuable information, ask an AI for a summary of it, and then have that system leave out vital points because its assumptions about what's vital differ from our own. Consuming books (or other written content) in this way can leave us with a sense of accomplishment, making us feel as though we've learned all the important bits from a given tome, even though we've actually overlooked the components most vital to us and our priorities.
It’s a complicated moment right now, then, as our communication spaces (person to person, platform-based, and organizational) are being heavily influenced by this collection of increasingly powerful (and useful) tools, but we haven’t yet figured out how to filter for them when appropriate, how to intuit what might be missing from the expanded or contracted content we consume or communicate, or how we might moderate our reflexive responses to a narrative’s potential artificiality—and how we might do so without numbing ourselves to the same sorts of narratives shared by actual humans.