The Next Chapter: What AI in Publishing Means for Authors, Readers, and the Future of Books
Books on White Wooden Shelves. Source: Pexels
Soon, your trips to Waterstones may involve more than reading between the lines and admiring flashy covers. You may find yourself reaching for a novel with a small, unassuming sticker declaring it was written by artificial intelligence. Would you buy it? Should you? How did we get here, and who exactly gets paid?
James Daunt, the chain's managing director, has said he would stock AI-authored books if customers want them - but he has also noted the "disdain" many booksellers feel toward the idea. There is a telling distinction here. Waterstones happily uses AI behind the scenes for inventory management and logistics. However, when it comes to consumer-facing AI, there is hesitation. This reveals something important: we are comfortable with algorithms optimising the hidden logistics of a business, but become very uneasy when they start replacing human creativity to our faces.
That unease is understandable, and it raises the question business leaders will inevitably face: should market demand alone decide how AI integrates into publishing, or do we need guardrails to protect the humans who actually create the work?
The Training Data Problem Nobody Wants to Talk About
Recent research from the University of Cambridge has found that more than half of published authors fear being replaced by AI, and roughly 59% believe their work has been used to train language models without consent or payment. These are rational responses to a business model drifting away from human creativity, away from the raw human talent that makes the publishing industry so special.
The publishers now experimenting with AI-generated content are often the same ones who own the catalogues that trained the algorithms in the first place. They profited from human authors when they first sold the books; now they may profit again by feeding those books into systems that could replace those very authors. This double-dipping poses a real risk to authenticity, and it tends to hide behind opaque contracts with tech companies.
This is unacceptable if we believe in fair markets. Publishers that use AI should be required to secure and publicly disclose licensing agreements for training data. We need to know whose work is being used, and authors deserve to be compensated when their words become raw material for systems designed to replace them. Otherwise, we face a very real crisis in the industry, driven in large part by the current era of free PDFs for nearly any book you can imagine.
These illicit digital repositories provide the foundational data that allows AI models to learn without compensation. Some have proposed an "AI levy" or collective licensing scheme, similar to how musicians receive royalties when their songs are played publicly. It is worth asking whether our current copyright frameworks, designed for an era of printing presses and photocopiers, are remotely fit for purpose when dealing with large language models trained on billions of scraped texts.
Meanwhile, a new type of worker is emerging: those whose job it is to "fix" what AI generates. They correct the clumsy wording, fact-check the hallucinations, adjust the tone, and smooth out the unnatural repetitions that characterise machine-generated literature. Publishers are already shifting from large editorial teams to small groups of individuals who clean up AI drafts.
Is this genuinely skilled work, or a race to the bottom?
On one hand, these positions require expertise - knowing when an AI summary has lost nuance, spotting fabricated citations, and understanding where human touches bring text to life. On the other, the shift concentrates more labour into fewer positions, resulting in lower pay and less autonomy. Entry-level staff who once learned their craft by editing manuscripts, for example, now find that work done by algorithms. Career ladders are being knocked down even as people attempt to ascend them.
We must be honest about what is being developed here: not new types of expertise, but more stressed workers performing more duties for less money. When we talk about "AI augmentation," we frequently mean increasing corporate profit margins by degrading working conditions for the majority.
Can a Sticker Save Reader Trust?
Daunt insists any AI-authored book should be "clearly labelled." Is a sticker really sufficient? It puts the choice in the customer's hands, but it does not address the root issue.
A simple "AI-written" label raises more questions than it answers. Did AI write the entire manuscript, or did it only suggest plot points? Did a human heavily edit or only run spell-checks? When you read an AI-generated summary of a news article, should you be aware that you are missing out on the original journalism that could have challenged you, surprised you, or introduced you to a writer's particular voice?
Individual deception is not the only risk; there is also systemic flatness. If we encourage readers to accept AI summaries rather than seeking out original work, we undermine the financial stability of the journalists, critics, and essayists whose work feeds our information ecosystem. Traffic slows, publications close, and diversity of thought narrows. AI trains on human writing and produces summaries that stop readers from reading the humans, who then stop getting paid to write, leaving AI to train on... what? Its own output?
Meaningful disclosure must go further. Readers deserve to know not just that AI was involved, but how, and what they miss out on by consuming artificial text instead of engaging with the messy, irreplaceable complexity of human thought.
Who Gains When AI Rewrites the Rules?
Publishers are already complaining that AI-generated overviews in search results divert traffic away from their websites. The irony is laughable: they are upset about tech companies monetising their content without consent, even as some of them do precisely the same to authors by deploying AI trained on uncompensated writing.
But the business dynamics go deeper. Recent data shows that publishers have reported traffic losses ranging from 10% to 25% as a result of Google's AI Overviews, with some experiencing click-through rate drops of as much as 89%. Big publishers and retail chains that implement AI aggressively gain negotiating power and cost advantages. Smaller presses that cannot afford the same tools lose out. Mid-list authors (working writers who are not bestsellers but make a living) have their advances cut as publishers wonder why they should pay real people when a machine can generate something "good enough" for a fraction of the cost.
The strategic question is stark: is leaning into AI a defensive requirement in a changing market, or a short-term cost-cutting solution that could ultimately diminish the value of trusted publishing brands? If readers realise that they cannot rely on the quality or authenticity of what they buy, they will stop buying. Trust, once lost, is expensive to recover.
Competition regulators should intervene. When a few AI platforms dominate access to the training data, the distribution channels, and reader attention, we are not looking at innovation; instead, we are looking at market consolidation that may exclude those who cannot pay to participate at that scale.
At what point does assistance become substitution? What kind of industry do we want to build?
If AI can flood the market with satisfactory content at minimal cost, the human advantage shifts from production and curation to just curation. Booksellers, editors, and critics become less valuable as creators but more useful as filters, the people who can tell you why this book matters, who connect readers to writing that challenges or engages them, who build communities around shared literary experiences.
Another dimension worth considering is why publishers and bookshops are floating these ideas at all: the slow decline of physical media, and of books in particular, began long before the advent of AI. From a purely financial perspective, it is rational for booksellers to seek cost-cutting measures through automation. However, a compelling alternative route is to lean into the niche of physical reading as a luxury experience, comparable to attending the theatre or cinema in the age of Netflix. By positioning the physical book as a premium, tactile artefact, the industry can reject AI mediocrity by sitting above it. This strategy transforms the act of reading from commoditised consumption of data into a high-status, intentional experience that generative models cannot replicate.
Curation is valuable only if there is something worth curating. If we let AI hollow out the economics of human authorship, we will not have a vibrant marketplace of ideas to sift through. We will be drowning in an ocean of competent mediocrity, and the truly original voices will have starved before anyone noticed they were gone.
We could accept that market forces will decide: if readers are willing to buy AI-generated books and publishers can cut costs by replacing human authors, then let that be the case. Human writing becomes a premium product for those who can afford to care about the difference. Everyone else gets algorithmically produced content, and we accept the lost jobs and narrowed stream of thought as the price of efficiency.
The other option is harder. It means designing guardrails, including mandatory compensation for training data, transparent labelling that actually informs, antitrust enforcement to prevent monopolistic control, and a collective commitment that originality, voice, and the unique perspective of human experience should not be sacrificed just because an algorithm can mimic them at a reduced cost. The tools we build and the markets we design are not inevitable; they are choices.

