Can ChatGPT hold its own against a human editor? Can it tackle typical editorial tasks? Harriet Power puts it through its paces.
ChatGPT describes itself as an ‘AI language model’: it’s essentially a clever chatbot that you can have human-like conversations with. It’s been trained on enormous amounts of text data (such as websites and books) to learn how language works. ChatGPT tells me that when someone gives it a question or prompt, it ‘breaks down the text into individual words and analyses them to understand the user’s intent. It then generates a response by predicting the most likely next words based on its training data’.
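That idea of ‘predicting the most likely next words’ can be illustrated with a deliberately simplified toy sketch. The snippet below (my own illustration, nothing like the scale or neural-network architecture of a real language model) counts which word follows which in a few words of text, then predicts the most frequent follower:

```python
from collections import Counter, defaultdict

# A toy 'bigram' model: for each word, count which words follow it.
# Real language models do something far more sophisticated, but the
# core task -- predict the likeliest next word -- is the same.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' follows 'the' most often in this corpus
```

Chain those predictions together, one word at a time, and you get generated text; scale the training data up to a large slice of the internet and you get something like ChatGPT.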
It turns out that ChatGPT is quite good at a variety of things, from writing marketing copy to summarising policy documents to creating computer code, and there’s been a fair bit of noise online about how it’s going to take over all our office-based jobs. But could it take over editing and proofreading? Could it start writing posts for the CIEP blog? I decided to put ChatGPT-4 through its paces with some typical editorial tasks.
Take a proofreading test
How does ChatGPT fare with the CIEP’s proofreading test? The test, available on the CIEP website, is a 270-word piece of text containing 20 ‘common’ errors. I pasted the text into ChatGPT with the prompt ‘Please point out the proofreading errors in this text’ (it’s hard to let go of ingrained politeness, even when talking to a machine).
If we ignore the three formatting errors that didn’t copy across, ChatGPT caught 15 out of 17 errors. It did well at spotting spelling mistakes (such as ‘peaking’ rather than ‘peeking’) and repeated words (‘There had certainly had been one or two eccentric characters’). It spotted that Anne’s cup of tea had morphed into a cup of coffee three paragraphs later, which according to my programmer boyfriend is an impressive catch to make.
It missed a hyphen that should have been an en dash, and didn’t change ‘Jones’ geraniums’ to ‘Jones’s geraniums’, although that’s arguably just a style choice (as the test itself acknowledges). Another thing it didn’t do was query how Ann spells her name: it assumed ‘Anne’ was right (probably because it was spelled this way where it first occurs in the text).
So perhaps ChatGPT’s main failing here was that it didn’t question the text in the way a human would: asking about style or spelling preferences. Plus, of course, it can’t see formatting errors or track changes in a useful way, but both of those issues might be solved once Microsoft’s Copilot launches sometime ‘in the months ahead’. Copilot will essentially integrate ChatGPT into Microsoft programs. At that point it might be perfectly possible for Copilot to proofread text in Word, marking up corrections with Track Changes and consulting a style guide for reference. Scary, right?
Write a blog post
How about writing posts for the CIEP blog? ChatGPT is certainly much quicker at this than us slow and plodding humans, taking under a minute to write a post. The results are quite bland and generic, but are also pretty serviceable. For example, here’s the start of a blog post about how to start out as a freelance copyeditor:
Embarking on a freelance copyediting career can be both an exciting and daunting prospect. With the right preparation and know-how, however, you can set yourself up for success in this rewarding field. In this blog post, we’ll explore the essential training and resources you’ll need to get started, as well as tips for finding your first copyediting jobs.
Part A: Training for Success
Understand the basics
Before diving into the world of freelance copyediting, it’s essential to familiarize yourself with the basic principles of the English language. This includes understanding grammar, punctuation, syntax, and spelling rules. There are numerous online resources and books available to help you strengthen your foundation in these areas.
While it’s not strictly necessary to have formal training to become a freelance copyeditor, it can help you refine your skills and lend credibility to your services. Many organizations offer courses and certifications to help aspiring copyeditors learn the ropes. Some recommended courses and programs include:
- The Chartered Institute of Editing and Proofreading (CIEP): CIEP offers a range of courses, from basic proofreading and copyediting to more advanced courses on specific topics. Their Introduction to Copyediting and Proofreading courses are great starting points.
- The Publishing Training Centre (PTC): The PTC provides a comprehensive copyediting course called ‘Basic Editing by Distance Learning’ that covers grammar, punctuation, and syntax, as well as editing techniques and best practices.
- The Editorial Freelancers Association (EFA): EFA’s ‘Copyediting: Beginning to Intermediate’ course is designed to help you build skills and knowledge in various areas of freelance editing, including copyediting, with a focus on practical applications and exercises.
With a little tweaking that could turn into a passable (if not sparkling) blog post. The main problem with it is that no courses exist with the exact titles ‘Basic Editing by Distance Learning’ or ‘Copyediting: Beginning to Intermediate’. This illustrates one of ChatGPT’s main limitations, which is its tendency to ‘hallucinate’: it’s very good at making stuff up with complete confidence, rather than simply admitting when it doesn’t know something.
Improve some terrible fiction

I amused myself by writing some terrible fiction, then asked ChatGPT to improve it. Here’s what we started with:
Outside, the trees swayed ominously in the howling wind while dark, menacing clouds raced overheard. Rachel was gazing wistfully out of the window longing to be out in the paddock with her horse Ophelia irregardless of the fetid weather, when there was a resoundingly loud crash and a blood curdling screaming from above and she jumped ten feet in the air. Panicked thoughts flew threw her mind as she wrenched open the kitchen door, which persistently stuck as it grazed the uneven flagstones, and pelted downstairs where she found her brother tangled on the floor of the hallway in the rich velveteen curtains with the bust of Artemis, the goddess of love, shattered and lying in forlorn pieces around his beautiful golden-flecked hair.
“Jasper!” whispered Rachael frantically. “What happened!”
She roughly shaked him until he groaned and groggily opened one eye, gazing not at her but staring inwards at the visage that had shaken him to his very core.
“Rachel”, he moaned. “I saw him.”
“Saw who? Who did you see?” she exclaimed fearfully.
Asking ChatGPT to ‘line edit’ this led to a simple copyedit, where it corrected things like ‘overheard’ to ‘overhead’, ‘fetid weather’ to ‘foul weather’, ‘Artemis’ to ‘Aphrodite’ and so on. I then asked ChatGPT to ‘simplify the text a little’, and questioned whether the fact that Rachel heard the crash from upstairs but then ran downstairs is inconsistent. Here’s ChatGPT’s reply:
Yes, it is inconsistent. The story indicates that the crash and screams come from above, suggesting that Rachel should run upstairs instead of downstairs. Here’s the revised text to fix the inconsistency:
Outside, the trees swayed in the strong wind as dark clouds raced overhead. Rachel gazed out of the window, wishing she could be in the paddock with her horse, Ophelia, despite the poor weather. Suddenly, a loud crash and a scream from above startled her. Panicked, she opened the kitchen door, which stuck as it scraped the uneven flagstones, and rushed upstairs. She found her brother on the floor, entangled in the curtains, with the broken bust of Aphrodite scattered around his golden-flecked hair.
“Jasper!” whispered Rachel anxiously. “What happened?”
She gently shook him until he groaned and opened one eye, not looking at her but appearing deeply shaken.
“Rachel,” he murmured. “I saw him.”
“Saw who? Who did you see?” she asked nervously.
This clearly isn’t prize-winning fiction, but I was impressed that ChatGPT could take my original text and rewrite it while still keeping the essential bits of the narrative, and present them in a coherent way. It’s an example of how powerful ChatGPT can potentially be when it comes to simplifying text.
Edit some references

I took a list of references, introduced a bunch of mistakes, and then asked ChatGPT to edit them so that they were consistent.
ChatGPT did pretty well. It made corrections like replacing ‘and’ with ‘&’, moving the year to the correct place, changing the volume and issue number from ‘18:2’ to ‘18(2)’, and deleting the word ‘pages’ before the page range. What impressed me most was that when I gave it a reference from which I’d deleted one of the author’s initials, ChatGPT filled it in correctly. It managed to do that a couple more times, so it didn’t feel like a fluke. (Though if ChatGPT is simply making educated guesses about how to fill in the blanks, then it’s far from infallible.)
If ChatGPT could track changes (and my guess is that it will be able to do this in Microsoft’s Copilot) then I’d happily use it to help me edit references.
Can ChatGPT replace me?
ChatGPT is remarkable: it can edit and write text, and you can hold a semi-intelligent conversation with it, even though its basic function is simply to guess what the next word should be.
As things stand, there are some stumbling blocks when it comes to using it as an editor or proofreader. ChatGPT apparently struggles to remain coherent when responding to much longer pieces of text (like whole books). It isn’t always factually accurate: you can’t entirely trust anything it says. I can’t imagine it making a good development editor, or handling complex, sensitive author queries. It can’t track changes well. And it can’t think like a human, even when it can convincingly sound like one.
So I don’t think that it currently competes with a human proofreader or editor (or blog writer), but I do wonder how many years or even months it might be before it can. Things are moving at speed: in the time it’s taken to write this blog post, OpenAI has already announced two upgrades to ChatGPT (one from ChatGPT-3.5 to ChatGPT-4, and one launching plugins). And when Microsoft’s Copilot arrives sometime in the coming months, that’ll be another giant leap forwards.
John Naughton, professor of the public understanding of technology at the Open University, has described ChatGPT like this: just as Google became a prosthesis for memory, so will ChatGPT (or similar AI) become ‘a prosthesis for something that many people find very difficult to do: writing competent prose’. I’m certain that it’s going to have a seismic influence on an industry that helps people to write and publish competent prose; the question is just how quickly, and what editors and proofreaders can or should do about it.
 When I asked ChatGPT later on, ‘Do you appreciate it if I start my requests with the word “Please”?’ its answer was: ‘As an AI language model, I do not have the ability to feel emotions. However, using “please” in your requests is a courteous and polite way to ask for something, and it can help make the conversation more pleasant and respectful.’
About Harriet Power
Harriet Power develops and copyedits nonfiction books and educational materials. She is a commissioning editor for the CIEP information team, and a Professional Member of the CIEP.
About the CIEP
The Chartered Institute of Editing and Proofreading (CIEP) is a non-profit body promoting excellence in English language editing. We set and demonstrate editorial standards, and we are a community, training hub and support network for editorial professionals – the people who work to make text accurate, clear and fit for purpose.
Posted by Sue McLoughlin, blog assistant.
The views expressed here do not necessarily reflect those of the CIEP.
What an excellent piece! It has prompted me to see if I can get ChatGPT to write (really tedious to do) blog posts for work.
Thank you Harriet for this really illuminating blog post on ChatGPT which I was directed to by @sophieplayle’s recent Liminal Pages article. I admit to being more impressed with ChatGPT’s capacity, as reported in your tests, than I thought I would be. One to watch. We’ll need to learn (rapidly) how best to work with it, and differentiate our (more nuanced, human) intervention.
Thanks go to Harriet Power for her legwork and thoughtfulness on this one.