The implications of developments in generative AI technology have sent shockwaves across the publishing industry, and we have already seen their consequences across various components of both the physical and digital publishing communications circuits (Darnton, 1982; Ray Murray and Squires, 2012). In 2025, a Bookseller investigation found that twenty-two percent of industry respondents were ‘very much encouraged’ to use generative AI, whilst sixty-eight percent said their company had a policy on the use of generative AI (Wood, 2025).

Fast-forward to Spring 2026, and we find continued concern over protecting human creativity and intellectual property. Just weeks after Mia Ballard’s novel Shy Girl was pulled from UK booksellers by Hachette due to suspected AI use (Brown, 2026), two new Bookseller articles report that editors have been uploading confidential manuscripts to OpenAI software in order to ‘summarise’ authors’ work (Wood, 2026; Snow, 2026). 

Gordon Wise, a literary agent at Curtis Brown, said, ‘the aims of publishers, agents and authors should be the same […] to protect human creativity while also recognising the usefulness of artificial intelligence technologies, if used responsibly’ (Wood, 2026). Meanwhile, the Authors Guild in the US has issued a statement that those with access to unpublished material ‘should not upload any manuscript to or otherwise prompt consumer-facing chatbots’ without ‘the author’s written permission’ (Snow, 2026).

Andrew Gray writes that AI is ‘a perfect storm brewing for the integrity of scholarly publishing’ (Gray, 2026). I would argue that the issues he describes – researchers ‘generat[ing] large portions of papers’ with AI and using it to ‘cut corners’ whilst peer-reviewing papers – evidently reflect the state of publishing as a whole, considering the previously mentioned Bookseller articles (Gray, 2026; Wood, 2026; Snow, 2026).

If this trend continues, the resulting erosion of trust will inevitably damage author–editor relationships. One commenter on the article, identifying themselves as an unpublished author, wrote, ‘this is so depressing to read. It’s a challenge to stay positive and trust the process’ (Wood, 2026). They continued, ‘the idea that someone to whom you’ve entrusted [your manuscript] might upload it to OpenAI where it could be exploited is beyond disappointing’ (ibid.). Another commenter, however, took a more resigned view: ‘to my mind it would be inconceivable that somebody in publishing would NOT [upload manuscripts to AI to reduce workload]! Which doesn’t imply the practice is legitimate, it is just inevitable’ (ibid.).

And so: the editor suspects the author of using AI to write their work; the author suspects the editor of uploading their livelihood to software that absorbs and appropriates intellectual property in its training processes. What reassurances can be made to both parties? Well, Clare Alexander from Aitken Alexander states that rights teams are trying to ‘arrive at robust wording both to safeguard author’s content and to understand to what extent different publishers are or are not using AI tools,’ although she admits that this has been ‘difficult’ (Wood, 2026). In the US, the Authors Guild has recommended a clause in publishing contracts that would ‘prevent objectionable use’ – a clause that might be worth considering in the UK as well (Snow, 2026). How would such a clause be enforced? That remains to be seen.

Written by Tom Rae.

Bibliography

Brown, L. (2026). ‘Hachette pulls initially self-published horror novel over suspected AI use’, The Bookseller [online]. Available at: https://www.thebookseller.com/news/hachette-pulls-initially-self-published-horror-novel-over-suspected-ai-use [accessed 29/04/2026].

Gray, A. (2026). ‘How AI use in scholarly publishing threatens research integrity, lessens trust, and invites misinformation’, Bulletin of the Atomic Scientists [online]. Available at: https://thebulletin.org/premium/2026-03/how-ai-use-in-scholarly-publishing-threatens-research-integrity-lessens-trust-and-invites-misinformation/ [accessed 05/05/2026].

Snow, M. (2026). ‘Editors should not upload manuscripts to AI without permission, says Authors Guild’, The Bookseller [online]. Available at: https://www.thebookseller.com/news/editors-should-not-upload-manuscripts-to-ai-without-permission-says-authors-guild [accessed 29/04/2026].

Wood, H. (2025). ‘“Incredibly useful” or “hallucinatory and racist”: how book professionals have found using AI at work’, The Bookseller [online]. Available at: https://www.thebookseller.com/news/incredibly-useful-or-hallucinatory-and-racist-how-book-professionals-have-found-using-ai-at-work [accessed 29/04/2026].

Wood, H. (2026). ‘Some editors “uploading confidential manuscripts to ChatGPT to read quickly”, agent claims’, The Bookseller [online]. Available at: https://www.thebookseller.com/news/some-editors-uploading-confidential-manuscripts-to-chatgpt-to-read-quickly-agent-claims [accessed 29/04/2026].