The Problem with Cambridge English Exams

General-purpose language proficiency tests can only give a loose indication of a candidate’s actual proficiency in the situations in which they use the language – even if they are a native speaker. But different approaches to assessment can reflect general proficiency to a greater or lesser degree, making a difference both to the outcome of a given person’s exams and to the lives of students who must study for them.

Cambridge English claims that its exams focus on real-life communication skills. I argue that no part of the exam – not even the ostensibly communicative parts – relates to such skills. If anything, the Cambridge exams are strikingly uncommunicative in their format and content.

My comments refer to B2 First and C1 Advanced, which overlap considerably in format, but many of them also apply to C2 Proficiency.

Reading and Use of English

It is reasonable to expect that if you are going to be tested on your knowledge of some specific area, there will be a syllabus indicating what might come up. For Parts 1-4 of the Reading and Use of English test, Cambridge does not provide any such indication. In Part 1 (multiple-choice cloze), any vocabulary associated with the relevant CEFR level could come up, so learners have no way of preparing for it. Even if they could prepare, the questions sometimes require the learner to make distinctions that not even native speakers would be able to make. Take this example from a B2 practice test:

For question 1, the answer is C, but the lightest or the weakest sound isn't obviously objectionable. At most, you could say they're jarring, though the meaning is still clear. For question 2, the answer is B, 'revealed'. But the difference between 'revealed' and 'uncovered' in this context is barely detectable. And 'exposed' would only sound slightly unnatural. Wrong answers in Part 1 are almost always wrong because they are not idiomatic; in other words, these questions are expressly focussed on form, not meaning. So, in addition to being impossible to prepare for, the questions here do not assess one's ability to communicate.

Part 2 (open cloze) tests knowledge of grammar (things like conjunctions, determiners and pronouns) and lexicogrammar (things like phrasal verbs and linkers). It is useful to know these words, obviously. But being able to recall them for a gap-fill isn't equivalent to knowing how to interpret them when reading, nor how to use them in your own writing. So, Part 2 is simply a test of your ability to conjure up certain words from contextual cues – not a skill required in any real-life communicative activities.

Part 3 (word formation) has a similar problem: learners might, in a different and more meaningful context, be perfectly capable of both interpreting and producing the elicited words, but not have the metalinguistic knowledge to make connections between the missing words and their ‘stems’ as the task requires. So here again, the test is not testing communication skills.

Part 4 (keyword transformation) is even worse in this regard than Part 2. Again, instead of testing ability to use language in a meaningful context, it merely tests whether students can think up a phrase based on a prompt. Often the phrase or the way it is prompted is rather obscure – as in this example:

It’s like a crossword puzzle: fun as a game, but horrible as a high-stakes exam.

Parts 5-8 for C1 (or 5-7 for B2) involve reading longer passages, so one might hope that there would be a greater focus on meaning in this part of the exam. Yet, in fact, the questions largely test the ability to find specific information in a text – which amounts to identifying parallel expressions and not much else. This sort of question is common in the IELTS reading test, too, although at least IELTS sometimes involves drawing subtler inferences about the texts.

Part 7 (gapped text – part 6 in B2) is another ‘word game’ kind of task: getting the knack for solving such puzzles might be enjoyable, but that is not equivalent to being able to do anything in English. One could read, appreciate, and follow the story of the text in an ordinary context, yet still be unable to perform the task. This is because the way one reads while trying to perform the task is completely different from real-life reading. One is not reading out of curiosity, nor to follow an argument, nor to get information, but instead scanning for clues about how the text might be structured. Again, it is divorced from any meaningful notion of language proficiency.

Speaking

To some extent, there is no avoiding the fact that speaking tests are uncomfortable. It is not easy to have a normal conversation in which one freely shares one’s ideas while also knowing that one is being tested. But the Cambridge exams take this inherent awkwardness and go out of their way to make sure it’s felt by everyone involved. The main way they do this is by having two students interviewed by one examiner. In Part 2, each candidate comments on a set of pictures – normally of people engaged in everyday activities – and then answers a general question about the other candidate’s pictures. There is no imaginable real-life context in which one might be expected to do anything similar to this. It is not communicative.

Candidates then have to do a 'collaborative task' in Part 3, in which they 'come to a decision' about some question given to them by the examiner. The thought is that they might have a discussion in which they agree on some ideas, disagree on others, and negotiate towards an outcome. Materials created for B2 and C1 preparation courses often have sections devoted to useful language for agreeing, disagreeing, and negotiation, with this part of the exam in mind. Inevitably, during the exam, no one dares to make any of the critical comments for which such language would be useful. Instead, candidates reluctantly affirm everything their fellow candidate says and offer up generic remarks of their own.

Writing

My main objection to the Part 1 essay task is that it has overly detailed instructions. It is almost as if the question were intentionally convoluted so that students fall into the trap of misunderstanding it – which is what almost anyone would do, regardless of their English level, unless they were trained not to. Once you do get your head around all the requirements of the question, the essay is effectively planned for you, and you have no leeway to structure your writing in the way you want, or to express the opinions you actually hold about the topic. At the start of the prompt, they even add a statement of the form: “you have been discussing topic X in class with your teacher.” So, just in case you had any reasons of your own to be interested in the topic, they make sure to take that away from you as well.

Part 2 does offer some choice of what to write about, a fact likely made necessary by the tasks being only loosely connected to reality. Before teaching C1, I never knew that so many international magazines wanted information from me personally on transport facilities in my hometown, that magazines were requesting reviews of a historical drama with a character who influenced my views on modern life, or that town councils wanted my opinion on how to create more green spaces in the neighbourhood. There is an attempt to engineer a communicative context here, but it is a feeble and out-of-touch one.

Listening

In the land of Cambridge Listening, casual friends have lengthy exchanges about the secret motivations behind college publicity material. People leave minute-long answering machine messages (conveniently fitting the format of Part 1). Photographers accost actresses in clear, full sentences, and are then fobbed off in clear, full sentences. In contrast, the IELTS listening test at least involves the plausibly relatable contexts of training and education. It is at least believable that you might need to take notes on a university lecture or to understand a conversation about train times. The Cambridge listening texts do not have a clear focus – they could be about anything so long as it fits the exam format. For the monologues, any person talking will do, and there is rarely any communicative purpose to their talk. It will be things like ‘people talking about their friendships,’ or ‘a person describing his job.’

Even an authentic listening text can be made inauthentic if it is too far removed from its original context. The task which students are asked to do with it – for example, answering comprehension questions – can also make listening materials less authentic. The Cambridge listening tests are inauthentic in both of these respects, and worse: the recordings themselves are designed around sets of questions, the answers to which are predictably nested among distractors so as to catch out the less exam-savvy candidates. It is not a test of listening skills but of exam technique.

Conclusion

Why does it matter if the Cambridge exams are not communicative? It matters because it means that learners who have had a large amount of explicit instruction are at an advantage in the exam compared to those who have not. The problem is that most of the purposes for which we value proficiency in a language – forming relationships, communicating at work, educating and entertaining ourselves – do not require any of that explicit language knowledge. More importantly, the issues with the format and content of these exams mean that students who invest time and money in preparing for them spend little time actually learning English. Instead, they have to spend their time highlighting distractors in listening transcripts and discussing whether bullet points are acceptable in the Proposal genre.

To solve these problems, Cambridge could, for instance, minimise or remove the Use of English component, simplify the Speaking and Writing sections, and prioritise authenticity over standardisation of the format. As they are, the exams may still have value for students who want to check their level or work towards a goal while learning English. But it is false to say that they assess real-life communication skills.
