Definite articles: AI

Welcome to ‘Definite articles’, our pick of recent editing-related internet content, most of which are definitely articles. This time, our theme is the impact of artificial intelligence (AI) on editing and proofreading. It’s a hot topic of conversation among editorial professionals, which is why some of the links in this article were sourced from a CIEP forum thread about ChatGPT. Thank you to the CIEP members who shared them.

Because nothing related to discussions about AI can be guaranteed a long shelf life, you should know that this edition of ‘Definite articles’ was put together at the beginning of June 2023. It covers:

  • What’s been happening?
  • What can AI actually do?
  • How can editorial professionals move forward with AI?

What’s been happening?

On 30 November 2022, the AI chatbot ChatGPT was released by OpenAI. Since then, people who work with words – including editors, proofreaders and writers – have had the unnerving feeling that the fundamentals of what they do might change, at least in some areas. If you haven’t been keeping a close eye on events, Forbes has written a short history of ChatGPT, and two professors have summarised some of its implications in usefully easy-to-understand terms. You can get an overview of Microsoft’s Copilot, an AI assistance feature launching this summer, from CNN and from Microsoft itself.

As well as the obvious nervousness about whether AI would replace various categories of worker, concerns were quickly raised about the effects of AI on assessing student work and what AI might mean for copyright.

By late spring 2023, loud noises were being made about regulation of AI. As lawmakers in Europe worked on an AI Act, workers in the UK reported that they would like to see the regulation of generative AI technologies.

It’s a subject that’s currently being written and thought about on a daily, if not hourly, basis. But, in practice, and at this point in time, what can AI actually do?

What can AI actually do?

If you didn’t catch Harriet Power’s CIEP blog, ‘ChatGPT versus a human editor’, it’s an enlightening and entertaining read that went down well with our social media followers on LinkedIn, Facebook and Twitter. Harriet instructed ChatGPT to take a proofreading test, write a blog post, and edit some fiction and a set of references. In the proofreading and editing tasks, it did ‘pretty well’ and was impressive in simplifying a fiction passage while keeping its main points. It also wrote a serviceable blog draft.

The two main problems Harriet noticed in the technology were a distinct lack of sparkle in ChatGPT’s writing and editing, and its ‘tendency to “hallucinate”: it’s very good at making stuff up with complete confidence’. (This tendency was also written about by Susanne Dunlap for Jane Friedman’s website, in an article called ‘Using ChatGPT for book research? Take exceeding care’.) Weighing up her test run, Harriet concluded:

ChatGPT apparently struggles to remain coherent when responding to much longer pieces of text (like whole books). It isn’t always factually accurate: you can’t entirely trust anything it’s saying. I can’t imagine how it’d make a good development editor, or how it’d handle raising complex, sensitive author queries. It can’t track changes well. It can’t think like a human, even when it can convincingly sound like one.

However, Harriet added the caveat that, in her view, it may be only a matter of ‘years or even months’ before ChatGPT can start competing with human editors. So, how should we respond to that?

[Image: computer screen showing OpenAI logo and text]

How can editorial professionals move forward with AI?

Perhaps there’s no choice but to look at the possible upsides of the AI debate. Anne McCarthy for the New York Book Forum starts us off in ‘The potential impact of AI on editing and proofreading’ by reminding us that lightbulbs and the ‘horseless carriage’ inspired dire predictions in their day. She concludes: ‘Books always have (and always will) require a human touch: it’s what draws us readers to them.’

Amanda Goldrick-Jones, in an article for the Editors Toronto blog called ‘ChatGPT and the role of editors’, offers some wise and hopeful advice: there’s a point at which we, as editorial professionals, have to trust ourselves.

If anyone is well-positioned to explore and critique the possibilities and challenges of AI-generated writing, it’s an editor … So, as with other communication technologies, editors must self-educate about its affordances, propose clear ethical boundaries, and critically engage with its limitations. It’s a tool, not our robot overlord.

Part of this consideration and engagement is understanding AI’s risks, and Michelle Garrett lays these out very effectively in a blog post from March, ‘The realities of using ChatGPT to write for you – what to consider when it comes to legalities, reputation, search and originality’.

Moving one step further, a Q&A with writer Elisa Lorello on Jane Friedman’s website talks about actively using ChatGPT to become ‘creatively fertile’. Lorello testifies that when she started using the technology in earnest, ‘It’s like I suddenly gained an edge in productivity, organization, and creativity’.

And finally, Alex Hern in The Guardian described what happened when he spent a week using ChatGPT to enhance his leisure activities. If you’re not ready to use AI at work, perhaps you could at least get a couple of recipes out of it.

With thanks to the users of the CIEP’s forums for the links they shared in recent discussions.

About the CIEP

The Chartered Institute of Editing and Proofreading (CIEP) is a non-profit body promoting excellence in English language editing. We set and demonstrate editorial standards, and we are a community, training hub and support network for editorial professionals – the people who work to make text accurate, clear and fit for purpose.

Photo credits: robot hand by Tara Winstead on Pexels; OpenAI screen by Jonathan Kemper on Unsplash.

Posted by Sue McLoughlin, blog assistant.

The views expressed here do not necessarily reflect those of the CIEP.

ChatGPT versus a human editor

Can ChatGPT hold its own against a human editor? Can it tackle typical editorial tasks? Harriet Power puts it through its paces.

ChatGPT describes itself as an ‘AI language model’: it’s essentially a clever chatbot that you can have human-like conversations with. It’s been trained on enormous amounts of text data (such as websites and books) to learn how language works. ChatGPT tells me that when someone gives it a question or prompt, it ‘breaks down the text into individual words and analyses them to understand the user’s intent. It then generates a response by predicting the most likely next words based on its training data’.
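That idea of ‘predicting the most likely next words based on its training data’ can be illustrated with a deliberately tiny sketch. The Python below is nothing like ChatGPT’s real architecture (which uses a vast neural network, not simple word counts), but it shows the same underlying principle: tallying which word tends to follow which, then predicting the likeliest next word.

```python
from collections import Counter, defaultdict

# A toy 'next word' predictor - illustrative only; ChatGPT itself uses a
# large neural network trained on enormous amounts of text.
training_text = (
    "the editor read the text and fixed the text then checked the text again"
)

# Count which word follows each word in the training data.
words = training_text.split()
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    seen = followers.get(word)
    return seen.most_common(1)[0][0] if seen else None

print(predict_next("the"))  # 'text' follows 'the' most often in this sample
```

Scaled up to billions of parameters and incomparably more training text, this ‘guess the next word’ idea is essentially what lies behind ChatGPT’s fluent-sounding responses.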

It turns out that ChatGPT is quite good at a variety of things, from writing marketing copy to summarising policy documents to creating computer code, and there’s been a fair bit of noise online about how it’s going to take over all our office-based jobs. But could it take over editing and proofreading? Could it start writing posts for the CIEP blog? I decided to put ChatGPT-4 through its paces with some typical editorial tasks.

Take a proofreading test

How does ChatGPT fare with the CIEP’s proofreading test? The test, available on the CIEP website, is a 270-word piece of text containing 20 ‘common’ errors. I pasted the text into ChatGPT with the prompt ‘Please point out the proofreading errors in this text’ (it’s hard to let go of ingrained politeness, even when talking to a machine).[1]

If we ignore the three formatting errors that didn’t copy across, ChatGPT caught 15 out of 17 errors. It did well at spotting spelling mistakes (such as ‘peaking’ rather than ‘peeking’) and repeated words (‘There had certainly had been one or two eccentric characters’). It spotted that Anne’s cup of tea had morphed into a cup of coffee three paragraphs later, which, according to my programming boyfriend, is an impressive catch to make.

It missed a hyphen that should have been an en dash, and didn’t change ‘Jones’ geraniums’ to ‘Jones’s geraniums’, although that’s arguably just a style choice (as the test itself acknowledges). Another thing it didn’t do was query how Ann spells her name: it assumed ‘Anne’ was right (probably because it was spelled this way where it first occurs in the text).

So perhaps ChatGPT’s main failing here was that it didn’t question the text in the way a human would: it didn’t ask about style or spelling preferences. And of course it can’t see formatting errors or track changes in a useful way, but both of those issues might be solved once Microsoft’s Copilot launches sometime ‘in the months ahead’. Copilot will essentially integrate ChatGPT into Microsoft programs. At that point it might be perfectly possible for Copilot to proofread text in Word, mark up corrections using Track Changes and refer to a style guide as it works. Scary, right?

Write a blog post

How about writing posts for the CIEP blog? ChatGPT is certainly much quicker at this than us slow and plodding humans, taking under a minute to write a post. The results are quite bland and generic, but are also pretty serviceable. For example, here’s the start of a blog post about how to start out as a freelance copyeditor:

Embarking on a freelance copyediting career can be both an exciting and daunting prospect. With the right preparation and know-how, however, you can set yourself up for success in this rewarding field. In this blog post, we’ll explore the essential training and resources you’ll need to get started, as well as tips for finding your first copyediting jobs.

Part A: Training for Success

Understand the basics

Before diving into the world of freelance copyediting, it’s essential to familiarize yourself with the basic principles of the English language. This includes understanding grammar, punctuation, syntax, and spelling rules. There are numerous online resources and books available to help you strengthen your foundation in these areas.

Formal training

While it’s not strictly necessary to have formal training to become a freelance copyeditor, it can help you refine your skills and lend credibility to your services. Many organizations offer courses and certifications to help aspiring copyeditors learn the ropes. Some recommended courses and programs include:

    • The Chartered Institute of Editing and Proofreading (CIEP): CIEP offers a range of courses, from basic proofreading and copyediting to more advanced courses on specific topics. Their Introduction to Copyediting and Proofreading courses are great starting points.
    • The Publishing Training Centre (PTC): The PTC provides a comprehensive copyediting course called ‘Basic Editing by Distance Learning’ that covers grammar, punctuation, and syntax, as well as editing techniques and best practices.
    • The Editorial Freelancers Association (EFA): EFA’s ‘Copyediting: Beginning to Intermediate’ course is designed to help you build skills and knowledge in various areas of freelance editing, including copyediting, with a focus on practical applications and exercises.

With a little tweaking that could turn into a passable (if not sparkling) blog post. The main problem with it is that no courses exist with the exact titles ‘Basic Editing by Distance Learning’ or ‘Copyediting: Beginning to Intermediate’. This illustrates one of ChatGPT’s main limitations, which is its tendency to ‘hallucinate’: it’s very good at making stuff up with complete confidence, rather than simply admitting when it doesn’t know something.

[Image: a pair of glasses and a smartphone with ChatGPT open, lying on a desk]

Edit fiction

I amused myself with writing some terrible fiction, then asked ChatGPT to improve it. Here’s what we started with:

Outside, the trees swayed ominously in the howling wind while dark, menacing clouds raced overheard. Rachel was gazing wistfully out of the window longing to be out in the paddock with her horse Ophelia irregardless of the fetid weather, when there was a resoundingly loud crash and a blood curdling screaming from above and she jumped ten feet in the air. Panicked thoughts flew threw her mind as she wrenched open the kitchen door, which persistently stuck as it grazed the uneven flagstones, and pelted downstairs where she found her brother tangled on the floor of the hallway in the rich velveteen curtains with the bust of Artemis, the goddess of love, shattered and lying in forlorn pieces around his beautiful golden-flecked hair.

“Jasper!” whispered Rachael frantically. “What happened!”

She roughly shaked him until he groaned and groggily opened one eye, gazing not at her but staring inwards at the visage that had shaken him to his very core.

“Rachel”, he moaned. “I saw him.”

“Saw who? Who did you see?” she exclaimed fearfully.

Asking ChatGPT to ‘line edit’ this produced a straightforward copyedit, correcting things like ‘overheard’ to ‘overhead’, ‘fetid weather’ to ‘foul weather’, ‘Artemis’ to ‘Aphrodite’ and so on. I then asked ChatGPT to ‘simplify the text a little’, and queried whether it was inconsistent that Rachel heard the crash from above but then ran downstairs. Here’s ChatGPT’s reply:

Yes, it is inconsistent. The story indicates that the crash and screams come from above, suggesting that Rachel should run upstairs instead of downstairs. Here’s the revised text to fix the inconsistency:

Outside, the trees swayed in the strong wind as dark clouds raced overhead. Rachel gazed out of the window, wishing she could be in the paddock with her horse, Ophelia, despite the poor weather. Suddenly, a loud crash and a scream from above startled her. Panicked, she opened the kitchen door, which stuck as it scraped the uneven flagstones, and rushed upstairs. She found her brother on the floor, entangled in the curtains, with the broken bust of Aphrodite scattered around his golden-flecked hair.

“Jasper!” whispered Rachel anxiously. “What happened?”

She gently shook him until he groaned and opened one eye, not looking at her but appearing deeply shaken.

“Rachel,” he murmured. “I saw him.”

“Saw who? Who did you see?” she asked nervously.

This clearly isn’t prize-winning fiction, but I was impressed that ChatGPT could take my original text and rewrite it while still keeping the essential bits of the narrative, and present them in a coherent way. It’s an example of how powerful ChatGPT can potentially be when it comes to simplifying text.

Edit references

I took a list of references, introduced a bunch of mistakes, and then asked ChatGPT to edit them so they were consistent.

ChatGPT did pretty well. It made corrections like replacing ‘and’ with ‘&’, moving the year to the correct place, changing the volume and issue number from ‘18:2’ to ‘18(2)’, and deleting the word ‘pages’ before the page range. The thing that impressed me was when I gave it a reference where I’d deleted one of the author’s initials, and ChatGPT filled them in correctly. It managed to do that correctly a couple more times, so it didn’t feel like a fluke. (Though if ChatGPT is simply making educated guesses about how to fill in the blanks then this is far from infallible.)

If ChatGPT could track changes (and my guess is that it will be able to do this in Microsoft’s Copilot) then I’d happily use it to help me edit references.

[Image: close-up of a ChatGPT screen]

Can ChatGPT replace me?

ChatGPT is remarkable: it can edit and write text, and you can hold a semi-intelligent conversation with it, even though its basic function is simply to guess what the next word should be.

As things stand, there are some stumbling blocks when it comes to using it as an editor or proofreader: ChatGPT apparently struggles to remain coherent when responding to much longer pieces of text (like whole books). It isn’t always factually accurate: you can’t entirely trust anything it’s saying. I can’t imagine how it’d make a good development editor, or how it’d handle raising complex, sensitive author queries. It can’t track changes well. It can’t think like a human, even when it can convincingly sound like one.

So I don’t think that it currently competes with a human proofreader or editor (or blog writer), but I do wonder how many years – or even months – it might be before it can. Things are moving at speed: in the time it’s taken to write this blog post, OpenAI has already announced two upgrades (one from ChatGPT-3.5 to ChatGPT-4, and one launching plugins for ChatGPT). And when Microsoft’s Copilot launches sometime in the coming months, that’ll be another giant leap forwards.

John Naughton, professor of the public understanding of technology at the Open University, has described ChatGPT like this: just as Google became a prosthesis for memory, so will ChatGPT (or similar AI) become ‘a prosthesis for something that many people find very difficult to do: writing competent prose’. I’m certain that it’s going to have a seismic influence on an industry that helps people to write and publish competent prose; the question is just how quickly, and what editors and proofreaders can or should do about it.


[1] When I asked ChatGPT later on, ‘Do you appreciate it if I start my requests with the word “Please”?’ its answer was: ‘As an AI language model, I do not have the ability to feel emotions. However, using “please” in your requests is a courteous and polite way to ask for something, and it can help make the conversation more pleasant and respectful.’


About Harriet Power

Harriet Power develops and copyedits nonfiction books and educational materials. She is a commissioning editor for the CIEP information team, and a Professional Member of the CIEP.



Photo credits: Motherboard by Pixabay on Pexels; Glasses and smartphone by Mateus Bertelli on Pexels; ChatGPT screen by Jonathan Kemper on Unsplash.

Posted by Sue McLoughlin, blog assistant.

The views expressed here do not necessarily reflect those of the CIEP.