
What do editors and proofreaders think of taking tests?

We asked our freelance members for their opinions about editing and proofreading tests: tests set by a potential client to assess the freelancer’s suitability for the job. Here, four CIEP members share their thoughts on the topic.

Alex Mackenzie

This was a timed proofread I’d been asked to do as part of a job application. The email said they’d send me a document at 10am; I was to proofread it and return it an hour and a half later. Heart racing, I began, wide-eyed and focused. Ninety-one minutes later, of course, reassessing my sent work, I spotted a mistake! The test had been simple enough, basically a simulation of the company’s typical report: one table (misaligned and missing information), one graphic (valuable seconds lost trying to edit that insert), logos and company branding (creative capitalisation), finishing with a wordy biography (university degrees and excessive work experience). But that ticking clock, I thought, was an unnecessary pressure. And my obvious mistake wouldn’t have escaped a re-read or a PerfectIt scan. Oh well, I huffed – I’ll put that down to experience. Three months later I was hired – they’re now a regular client, paying the CIEP’s suggested minimum rates!

Other than that, I’ve completed test edits for academic proofreading agencies. I remember Janet MacMillan’s words in one of my early Cloud Club meetings: give it an hour, there’s nothing to lose. True. This approach has landed me two more jobs, meaning (low-paid) editing work pops into my inbox weekly. I consider it all training!

Laurie Duboucheix-Saunders

In the best of all possible worlds, editors with a proven record of training and experience should not have to take editorial tests.

However, personally, I have found that such tests have opened doors for me that would otherwise have remained closed. English is not my first language, and my degrees in English literature from the Sorbonne and my Advanced Professional Membership of the CIEP since 2016 are not enough to stop people from asking whether I can, or even should, edit or proofread an English text, because I am not a native speaker.

I use that term even though it has become controversial in more enlightened editorial circles – where ‘multilingual’ is preferred – because those editorial tests have allowed me to get work based on my abilities and expertise alone, regardless of where I come from. As with exams that are marked blind, they are an objective assessment of a candidate’s skills and knowledge.

As I write this, I worry that my argument could be used to support the use of editorial tests specifically for non-native editors. This is a million miles away from what I’m trying to say. A strong CV and proven training in editing English material should be enough for people like me to get the job. As that is not yet the case, I’d rather take a test than be ruled out systematically simply because I am not originally from an anglophone country.

Louise Bolotin


I’m not opposed to sitting editing tests, but I’m not wild about them. I believe my many years of experience and my Advanced Professional Member status in the CIEP should speak for themselves and indicate my competence. My CIEP directory entry, my website and my LinkedIn profile are all places where a prospective client can look at my CV, a sample client list and client testimonials. On that basis, I won’t take a test for a small company or a private individual such as an independent author. However, I might – if the job looks really interesting and I really want it – offer to do a free one-page sample. No more, and beyond that I’m happy to agree to a week’s (paid) trial or some such, to see how we work together. I do sit editing tests, if required, where a major rolling contract is involved. I recently took one for a government inspectorate, where passing the test was a prerequisite for joining the freelance pool. I’ve also taken tests for publishing companies for the same reason, but these tests should never be more than a couple of pages and should never take more than around 90 minutes to complete. If they are any longer, you should be paid for your time.

Caroline Petherick

While it’s reasonable for a client to want to have some indication of competence in a freelance editor or proofreader they’re thinking of hiring, the idea of testing a qualified and experienced editor is, in my view, not just out of date but near enough insulting.

As I understand it, the concept arose in the publishing industry in the mid-20th century, an era when many publishing houses were getting rid of their in-house editors in favour of freelance editors and proofreaders, who required less commitment, both financial and pastoral. That was (and still is) seen as an important move in this cut-throat industry.

Until the mid-20th century, copyeditors and proofreaders were quite likely to have been wives of publishers – and unpaid. That view of the profession lingered long in the publishing industry, to its financial advantage. Throughout the 20th century (and in some cases into the 21st), it clearly suited the publishing houses to continue to perceive the proofreader and copyeditor as lowly individuals at the bottom end of the status ladder.

But with the new freelance system, how could a publisher establish editorial competence? A test designed to include the classic traps was obviously deemed appropriate.

Now that we editors and proofreaders have made our mark as independent professionals, thanks to the formation of the SfEP – now the CIEP, a chartered organisation with registered members and clear status – a potential client asking a CIEP member to take a test is as incongruous as asking a qualified lawyer, designer or accountant – or even a Gas Safe engineer! – to do so. (To be fair, though, it’s probably only a few publishing houses that might still ask for a test. The myriad other types of client are highly unlikely to do so.)

The balance of power has changed. And not before time.

A sample edit is, however, as I see it, crucially important. And here, editorial professionals have an advantage over lawyers, designers, accountants and plumbers in their dealings with clients. In working on a sample, a CIEP member will not only be showing the client what work they’d carry out, but they will be assessing the text and, to some degree, the client. And then, having seen what the client produces, they’ll also be able to quote a fair price for the work. All part of the professional package.

Wrapping up: editing tests

This post has presented the opinions of four CIEP members on editing tests, and it shows the breadth of views there are on the topic.

Have you taken an editing or proofreading test? Or have you set one as a way to assess a freelancer’s abilities? Tell us more in the comments.

Ayesha Chari has written about her experiences of editing tests.


About the CIEP

The Chartered Institute of Editing and Proofreading (CIEP) is a non-profit body promoting excellence in English language editing. We set and demonstrate editorial standards, and we are a community, training hub and support network for editorial professionals – the people who work to make text accurate, clear and fit for purpose.

Photo credit: clock by Samantha Gades on Unsplash 

Editing tests, clients and the editor

We asked our freelance members for their opinions about editing and proofreading tests: tests set by a potential client to assess the freelancer’s suitability for the job. Here, Ayesha Chari shares what she sees as the pros and cons of editing tests.

You know that butterfly-in-the-stomach feeling? The dreadful one before your first big school exam or right before that driving test. Or the tingly one on the quick descent of the Ferris wheel, maybe even while going up and down those weightless lifts to floor infinity. I love the thrilling flutter as much as I hate the panicky one.

It may come as a surprise, then, that my academic and non-fiction editing career has been built almost solely on voluntarily subjecting myself to the anxiety of editing tests. The first one, for an in-house copyediting role and my first job straight out of university, I’ll never forget: ‘edit this in 1 hour’ was the instruction. This was a 30-page, two-column, single line-spaced printed article of geospatial-satellite gobbledygook. I had no idea whether ‘edit’ meant correcting spelling and grammar, making sense of the content or something else. I also had no idea what any of what I was reading meant, or where and how I was supposed to write in the thumb-width margins. I got the job.

Since then, particularly since becoming self-employed, I’ve taken every editing test that has come my way – more often seeking them out from publishers and other organisations – definitely too many to keep count of. Some with a vague one-line brief, others with a 50-page style guide and dozens of accompanying files and instructions. Some where I’ve set the deadline and the client has agreed, others that have invaded my browser and set running counters. Some that I’ve flunked, several others where I’ve come out on top. Why? Because the pros far outweigh the cons.

Pros

Taking editing tests for potential clients can help to:

  • quickly demonstrate expertise, training and professionalism to the client/employer
  • plant the seed for a relationship of trust with a new client/employer
  • self-assess skills objectively, whether you have formal editorial training and are starting out or are experienced and need to be kept on your toes
  • build self-confidence (no pain, no gain, plus who doesn’t like a random ego boost?!)
  • identify areas that need to be strengthened or updated, subject/genre-wise or in technical expertise
  • acquire new knowledge or reinforce previous learning, both useful professionally regardless of performance on the test
  • improve time management and organisational abilities (e.g. following instructions of the test brief, editing to style specified, labelling files as asked, meeting deadline agreed or completing the test in the set time)
  • account for some unconscious bias in recruitment processes, very useful if you fit any non-traditional/non-dominant label as an individual but also as a language user (you can have multiple first languages and choose to use only one of them in your editing career – the test eliminates having to justify ‘otherness’)
  • find (and get!) the next big gig or dream editing job
  • lay the ground for negotiating your next big raise or upping your freelance fees.

Cons

Taking editing tests for potential clients does not:

  • guarantee work, particularly if passing the test is a route to getting on a freelance list of editors (catch-22 situation: those more experienced on the list will invariably get offered work more often/first; without getting on the list and getting work, you can’t become more experienced)
  • give the whole picture of your expertise or transferable abilities to the client (I picked the wrong file topic on a timed test once, and everything went downhill from there)
  • help imposter syndrome on a bad hair test day (we’ve all been there)
  • save time when you already have little to spare (test results can take a few days, weeks, or months even, a contract several more, and a first assignment even longer)
  • account for all bias, unconscious or otherwise (so you may not bag the job even if you do well on the test)
  • alleviate fear of failure.

I’m unlikely to overcome the panic of taking a test any time soon, especially those rotten timed ones. But for the joy of a flutter, I highly recommend taking them. You might just get hooked (and make a career as I have)!

Wrapping up: editing tests

Tests are used by organisations to assess a potential freelancer’s or employee’s editing or proofreading abilities. They can:

  • build self-confidence
  • highlight development needs
  • reduce the influence of unconscious bias
  • form the foundation of a successful business relationship.

Have you taken an editing or proofreading test? Or have you set one as a way to assess a freelancer’s abilities? Tell us more in the comments.

We’ll be sharing more members’ opinions in a future post.

About Ayesha Chari

Ayesha Chari is a sensitive academic and non-fiction editor, who (much to the amusement of both the driving instructor and her partner) was once terrified of taking a test for a UK driving licence despite having driven in India for a decade, but who passed both the theory and practical tests on the first attempt.

 

 


Photo credits: butterfly by Patrick Lockley; ferris wheel by Michael Parulava, both on Unsplash.

The A to D of writing multiple choice tests

By Julia Sandford-Cooke

Multiple choice tests are hard to get right. And I’m not just thinking of the time I scored 19% in a school physics test – statistically worse than if I’d just guessed every answer. It’s actually really tricky to write high-quality questions and answer options that genuinely assess knowledge and understanding. As with a lot of the topics discussed on this blog, it’s a type of writing and editing that seems easy until you try it.

What do I mean by multiple choice (or multi-choice) questions and answers? They’re the ones with a standalone question (the stem) where the correct answer (the key) is hidden among three or four wrong answers (distractors). The people responding (let’s call them students) have to choose one or more answers from the options given. For example:

What noise does a cat make?

      1. Woof
      2. Moo
      3. Meow [key]
      4. Baa

And what do I know about multiple choice questions? Well, quite a bit. I have edited hundreds, maybe thousands, of them for one of the UK’s biggest test providers over the past 15 years. I’ve also written and edited them for, well, multiple other contexts, including textbooks, revision guides, workbooks and online learning materials.

A good multi-choice test is an objective measurement of a student’s knowledge, which can be taken and marked online, with instant feedback. However, in my experience, authors usually don’t know what a good – or bad – multi-choice test looks like. They might be experts in their subject but they’ve never been taught how to actually write a test. And there’s a lot they should know, involving some pretty complex pedagogical concepts. I don’t have space to go into Bloom’s Taxonomy here but the goal is to ensure that the test is an unobtrusive channel for assessing the student’s knowledge.

So here’s a quick primer, covering four common problems.

Problem A: The question doesn’t make sense

The question must be pitched appropriately for who is taking the test. Unless it’s a Key Stage 2 SATs test, the aim is to find out what students know, not how well they can read or understand long words. Clarity is vital. The wording of question and answers should be concise and unambiguous, assessing knowledge, not literacy skills. There is usually no need to fill the question with irrelevant and confusing information:

Pet cats may be kept inside or outside, or be able to move freely between the house and garden. Sometimes neighbouring cats can enter the house in this way but owners can allow only their cat to come in by installing a special cat flap. How?

A clearer version:

What type of cat flap prevents the wrong cats from entering the house?

Students shouldn’t have to waste time under exam conditions trying to work out what they are being asked. The question should be self-contained so that it makes sense without the answers.

My cat Pixel is:

      1. tortoiseshell.
      2. black and white. [key]
      3. ginger.
      4. tabby.

A self-contained version:

What colour is my cat Pixel?

      1. Tortoiseshell
      2. Black and white [key]
      3. Ginger
      4. Tabby

Avoid colloquialisms and unnecessarily complex language. Of course, you might want to find out whether students know a particular technical term, but the structure of the question should make that intention clear and direct.

A cat is a digitigrade. What does this mean?

      1. It has a different number of toes on its front and back paws.
      2. It walks on its toes. [key]
      3. It stands with its toes flat on the ground.
      4. It has claws.

Technical terms applied in the wrong context might also make for credible distractors.

Opinions differ on negatively phrased questions. Some people argue that they’re confusing, while others say they make students read the question more carefully. I think they’re fine under the right circumstances, and as long as the negative word (eg ‘not’) is obvious (eg formatted in bold).

Problem B: The distractors are too obvious

I see this issue more than any other. The author knows what they want the students to know but struggles to think of plausible distractors.

What is the common name for the species Felis catus?

      1. Cat
      2. Dog
      3. Elephant
      4. Human

If the correct answer can be easily guessed without any background knowledge, the question has failed in its purpose. And a test isn’t the time to try to be funny.

If it’s too hard to think of wrong answers, perhaps it’s the wrong question. Try asking it in a way that allows the distractors to be worth considering. They could be frequent misconceptions, commonly asked questions, otherwise true statements or other related terms or concepts that the student might know. For example:

What is the Latin term for the domestic cat?

      1. Felidae [the cat family]
      2. Felis catus [key]
      3. Panthera [the genus of cats that roar]
      4. Felis silvestris [European wild cat]

All the answer options should have a similar sentence structure that follows on logically from the question. It’s the same principle as wording bullet lists to follow platform sentences – errors may unintentionally draw attention to the wrong (or right) answers.

Cats are crepuscular because they:

      1. they like to knead your laps with their paws.
      2. of their rough tongues.
      3. like to go out at dawn and dusk. [key]
      4. prefers to go out during the day.

Option lengths should be consistent – often, the correct answer is obvious because it is much longer or shorter than the distractors, and phrased slightly differently.

Where does Pixel most like to be stroked?

      1. On his back
      2. Around his face, ears, chin and at the base of his tail, where his scent glands are [key]
      3. On his tummy
      4. On his paws


Avoid ‘All of the above’ – it’s a copout. Students only need to realise that more than one answer could be right to reasonably guess that ‘All of the above’ is the correct answer.

What is a cat’s favourite pastime?

      1. Sleeping
      2. Being stroked
      3. Sitting on laps
      4. All of the above

With this example, you could also argue that ‘favourite’ implies a single pastime that the cat enjoys more than any other. ‘All of the above’, therefore, is doubly confusing.

‘None of the above’ is also a meaningless option, as it does not identify whether the student knows the correct answer.

On a related note, avoid acronym questions. Not only could a student successfully argue that a collection of letters stands for anything you want it to, but it’s also hard to write realistic distractors for a specific acronym.

What does RSPCA stand for?

      1. Really Special People’s Cats Association
      2. Royal Society for the Protection of Cats and Animals
      3. Royal Society for the Prevention of Cruelty to Animals
      4. Running Short of Possible Cat Answers

If the test isn’t delivered via software that randomises the position of the answers each time it’s administered, vary the placement of the key throughout the test, to avoid any patterns.
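
Where the delivery software does handle this, it typically works along the lines of the short Python sketch below. This is a minimal illustration, not any real test platform’s API: the list format and function name are made up, and the key is tracked by index so it follows the correct answer wherever the shuffle puts it.

```python
import random

def shuffle_options(options, key_index):
    """Shuffle answer options; return (shuffled options, new position of the key)."""
    order = list(range(len(options)))
    random.shuffle(order)  # randomise the display order in place
    shuffled = [options[i] for i in order]
    return shuffled, order.index(key_index)

# The key ('Meow') starts at index 2 but may land anywhere after shuffling.
opts, key = shuffle_options(["Woof", "Moo", "Meow", "Baa"], key_index=2)
print(opts, "key at position", key)
```

However the options end up ordered, `opts[key]` is always the correct answer, which is exactly the guarantee a human question-setter has to provide by hand when the delivery format is fixed.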

Problem C: The questions and/or answers are ambiguous

This is the opposite problem to the obvious distractors. A student may find that more than one option could be correct, but a multi-choice test doesn’t give the opportunity for students to answer ‘it depends’.

What noise does a cat make?

      1. Woof
      2. Moo
      3. Meow [key?]
      4. Purr [key?]

Authors are sometimes advised to ask students to find the ‘best’ answer rather than the ‘correct’ answer but this rather skates over the need for precise wording. In this case, it would be better to ask a more specific question that tests a higher level of understanding:

What noise do cats make to communicate with humans?

      1. Woof
      2. Moo
      3. Meow [key]
      4. Purr

Don’t ask ‘What would you do?’, as the student could easily defend any answer with ‘Well, I would do that!’. Similarly, avoid anything that could be seen as subjective or absolute:

Why are cats so cute?
Why do cats love fish?
Why does Pixel only come into my office when I’m in a Zoom meeting?

But it’s also important not to be too specific. Avoid closed questions – they limit the distractors:

Are whiskers a type of hair?

      1. Yes
      2. No
      3. Sometimes
      4. Meaningless fourth distractor

Problem D: The test isn’t tested

It’s not always possible to try out the questions before using them, but they should at least be run past a colleague. You might know what you mean but other people might not.

As with any edited text, develop a style guide that encompasses any aspects that could be inconsistent – the use of numbers, units and punctuation, for example.

Remember to provide students with clear instructions on how you expect them to take the test. Ensure they know what learning objectives, topics or concepts are being tested, and whether they can refer to notes or use aids such as a calculator.

Tests that are to be administered live (as opposed to being used as self-revision in a textbook) should be kept on a spreadsheet that states clearly when and how the questions have been used.

If possible, keep anonymised data on how students answered each question. There’s quite a bit of analytical science relating to this but, for general tests, all that’s really important is to ask the following:

  • Were there any distractors that nobody chose?
  • Were there any answers that everyone got right?
  • Can variations in students’ results be explained by their different levels of knowledge alone?
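
With anonymised responses stored as simple records, the first two checks take only a few lines. Here is a sketch in Python; the data layout (one dictionary per student, mapping a question id to the option chosen) is an assumption for illustration, not a standard format.

```python
from collections import Counter

# Hypothetical anonymised data: one record per student.
responses = [
    {"q1": "Meow", "q2": "Felis catus"},
    {"q1": "Meow", "q2": "Felidae"},
    {"q1": "Woof", "q2": "Felis catus"},
]
keys = {"q1": "Meow", "q2": "Felis catus"}
options = {
    "q1": ["Woof", "Moo", "Meow", "Baa"],
    "q2": ["Felidae", "Felis catus", "Panthera", "Felis silvestris"],
}

report = {}
for qid, opts in options.items():
    counts = Counter(r[qid] for r in responses)
    report[qid] = {
        "unchosen": [o for o in opts if counts[o] == 0],   # distractors nobody picked
        "all_right": counts[keys[qid]] == len(responses),  # did everyone get it right?
    }

print(report)
```

A distractor in the ‘unchosen’ list is doing no work and is a candidate for rewriting; a question everyone answers correctly may be too easy to tell students apart.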

Learn from the data and revisit the test to change elements as necessary. Consider, too, whether a multi-choice test format is suitable for assessing everything that needs to be assessed. A bit like this blog post, some topics lend themselves to longer, more evaluative responses, and can’t be properly examined within the constraints of a few options.

But, done right, are multiple choice tests effective tools for assessing learning, useful revision aids and direct channels for measuring knowledge? Well, yes – all of the above …

Julia Sandford-Cooke of WordFire Communications has more than 20 years’ experience of publishing and marketing. When she’s not hanging out with other editors (virtually or otherwise), she writes and edits textbooks, proofreads anything that’s put in front of her and posts short, often grumpy, book reviews on her blog, Ju’s Reviews.

 


Photo credits: multiple cats – The Lucky Neko; hand and paw – Humberto Arellano; whiskers – Kevin Knezic, all on Unsplash

Proofread by Alice McBrearty, Entry-Level Member.
Posted by Abi Saffrey, CIEP blog coordinator.

The views expressed here do not necessarily reflect those of the CIEP.