Welcome to ‘Definite articles’, our pick of recent editing-related internet content, most of which is definitely articles. This time, our theme is the impact of artificial intelligence (AI) on editing and proofreading. It’s a hot topic of conversation among editorial professionals, which is why some of the links in this article were sourced from a CIEP forum thread about ChatGPT. Thank you to the CIEP members who shared them.
Because nothing related to discussions about AI can be guaranteed a long shelf life, you should know that this edition of ‘Definite articles’ was put together at the beginning of June 2023. It covers:
- What’s been happening?
- What can AI actually do?
- How can editorial professionals move forward with AI?
What’s been happening?
On 30 November 2022, the AI chatbot ChatGPT was released by OpenAI. Since then, people who work with words, including editors, proofreaders and writers, have had the unnerving feeling that the fundamentals of what they do might change, at least in some areas. If you haven’t been keeping a close eye on events, Forbes has written a short history of ChatGPT, and two professors have summarised some of the implications of ChatGPT in usefully easy-to-understand terms. You can get an overview of Microsoft’s Copilot, an AI assistance feature being launched this summer, from CNN and Microsoft itself.
As well as the obvious nervousness about whether AI would replace various categories of worker, concerns were quickly raised about the effects of AI on assessing student work and what AI might mean for copyright.
By late spring 2023, loud noises were being made about the regulation of AI. As lawmakers in Europe worked on an AI Act, workers in the UK reported that they would like to see generative AI technologies regulated.
It’s a subject that’s currently being written and thought about on a daily, if not hourly, basis. But in practice, and at this point in time, what can AI actually do?
What can AI actually do?
If you didn’t catch Harriet Power’s CIEP blog, ‘ChatGPT versus a human editor’, it’s an enlightening and entertaining read that went down well with our social media followers on LinkedIn, Facebook and Twitter. Harriet instructed ChatGPT to take a proofreading test, write a blog post, and edit some fiction and a set of references. In the proofreading and editing tasks, it did ‘pretty well’ and was impressive in simplifying a fiction passage while keeping its main points. It also wrote a serviceable blog draft.
The two main problems Harriet noticed in the technology were a distinct lack of sparkle in ChatGPT’s writing and editing, and its ‘tendency to “hallucinate”: it’s very good at making stuff up with complete confidence’. (This tendency was also written about by Susanne Dunlap for Jane Friedman’s website, in an article called ‘Using ChatGPT for book research? Take exceeding care’.) Weighing up her test run, Harriet concluded:
ChatGPT apparently struggles to remain coherent when responding to much longer pieces of text (like whole books). It isn’t always factually accurate: you can’t entirely trust anything it’s saying. I can’t imagine how it’d make a good development editor, or how it’d handle raising complex, sensitive author queries. It can’t track changes well. It can’t think like a human, even when it can convincingly sound like one.
However, Harriet added the caveat that in her view it may be only ‘years or even months’ before ChatGPT can start competing with human editors. So, how should we respond to that?
How can editorial professionals move forward with AI?
Perhaps there’s no choice but to look at the possible upsides of the AI debate. Anne McCarthy for the New York Book Forum starts us off in ‘The potential impact of AI on editing and proofreading’ by reminding us that lightbulbs and the ‘horseless carriage’ inspired dire predictions in their day. She concludes: ‘Books always have (and always will) require a human touch: it’s what draws us readers to them.’
Amanda Goldrick-Jones, in an article for the Editors Toronto blog called ‘ChatGPT and the role of editors’, offers some wise and hopeful advice: there’s a point at which we, as editorial professionals, have to trust ourselves.
If anyone is well-positioned to explore and critique the possibilities and challenges of AI-generated writing, it’s an editor … So, as with other communication technologies, editors must self-educate about its affordances, propose clear ethical boundaries, and critically engage with its limitations. It’s a tool, not our robot overlord.
Part of this consideration and engagement is understanding AI’s risks, and Michelle Garrett lays these out very effectively in a blog post from March, ‘The realities of using ChatGPT to write for you – what to consider when it comes to legalities, reputation, search and originality’.
Moving one step further, a Q&A with writer Elisa Lorello on Jane Friedman’s website talks about actively using ChatGPT to become ‘creatively fertile’. Lorello testifies that when she started using the technology in earnest, ‘It’s like I suddenly gained an edge in productivity, organization, and creativity’.
And finally, Alex Hern in The Guardian described what happened when he spent a week using ChatGPT to enhance his leisure activities. If you’re not ready to use AI at work, perhaps you could at least get a couple of recipes out of it.
With thanks to the users of the CIEP’s forums for the links they shared in recent discussions.
About the CIEP
The Chartered Institute of Editing and Proofreading (CIEP) is a non-profit body promoting excellence in English language editing. We set and demonstrate editorial standards, and we are a community, training hub and support network for editorial professionals – the people who work to make text accurate, clear and fit for purpose.
Photo credits: robot hand by Tara Winstead on Pexels; OpenAI screen by Jonathan Kemper on Unsplash.
Posted by Sue McLoughlin, blog assistant.
The views expressed here do not necessarily reflect those of the CIEP.