Clarifying the Notion of Coherence in Standardised Oral Proficiency Tests using Systemic-Functional Linguistics
Primary Supervisor
Fenton-Smith, Ben
Other Supervisors
Kirkpatrick, Thomas A
Abstract
Coherence is a construct used in many high-stakes oral proficiency (OP) tests to assess the speaking proficiency of test-takers whose first language is not English. The speaking sections of many OP tests are conducted in the form of a language proficiency interview and often include a monologue task with some planning time given. Coherence is commonly assessed in these tests in combination with other constructs such as fluency. Descriptors tend to be vague in how coherence is defined; this vagueness, combined with overlaps with other constructs during rating, raises issues of rating scale validity. Since coherence lacks a universal definition, it is vital to define what contributes to coherence in the specific context of monologues in language proficiency interviews in order to develop a valid and reliable rating scale. Systemic-functional linguistics (SFL) describes how language is used to make meaning in different contexts, and how the producers of a text (in this case, speakers) make language choices to express themselves in those contexts. SFL is an appropriate framework for analysing test-taker responses in oral proficiency monologues because its focus on choice in context implies that test-takers have considered how their lexico-grammatical and thematic choices allow them to clearly communicate their experiences with, and knowledge of, a given subject. This study analysed 13 samples of Part 2 of the IELTS speaking test (the short monologue) using the SFL tools of clause complex analysis, Theme and thematic progression, and lexical cohesion analysis. These aspects were considered the most likely to contribute to a coherent response and to topic development, two concepts valued in the current IELTS rating scale. Results that indicated higher coherence were integrated into an alternative rating scale for coherence, which was subsequently trialled with raters. Many of the findings from these analyses contrasted sharply with currently used scale descriptors, showing the need for these scales to be revised in order to ensure the validity of these tests. In some cases, the findings uncovered unexpected implications for other commonly used scale constructs, which should be investigated in future studies. The main findings relating to coherence as an independent rating scale category were incorporated into a new scale for coherence in short monologues as part of an OP test. The rating scale was developed according to Galaczi et al.'s (2011) multiple-methods approach, which combines performance data (IELTS samples), expert consultation, and quantitative and qualitative analysis. Successive scale versions were discussed with a focus group to provide expert evaluation, and revisions were made to develop the fourth version of the scale, which was trialled. Ten raters were recruited from various language testing, specialist, and general English as an Additional Language (EAL) teaching backgrounds to trial the new scale. A brief training session was conducted, and any unclear wording was explained. The rater interviews consisted of discussing the raters' current views on coherence, listening to the 13 samples and assigning a score to each, and providing immediate feedback on the usability of the scale in a testing situation. After the interviews were completed, Cronbach's alpha was used to measure the inter-rater reliability of the assigned scores, and the results showed very high reliability at α = 0.95.
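For reference, the reliability statistic reported above is the standard Cronbach's alpha; the general formula below is common knowledge rather than quoted from the thesis:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

where k is the number of raters (here, 10), \(\sigma^{2}_{Y_i}\) is the variance of rater i's scores across the 13 samples, and \(\sigma^{2}_{X}\) is the variance of the summed scores across raters. Values approaching 1, such as the reported 0.95, indicate that the raters scored the samples highly consistently.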
The qualitative feedback regarding the usability of the scale was excellent, with raters considering the scale very easy to apply. Certain elements, such as speakers' deliberate use of repetition and their use of implicit conjunction, were especially highly regarded; many of the raters had not considered these aspects before and, through applying the scale to the samples, could see how these resources contributed to coherence. Despite the small scale of this study, the results show that SFL has potential for clarifying the notion of coherence in OP tests, mitigating issues with band descriptor overlaps, and understanding the role of cohesion in these contexts. This understanding may lead to more valid assessment in language proficiency interviews and has wider implications for test developers, for decisions made on the basis of proficiency test results, and for the promotion of SFL as a useful tool for language assessment in various contexts.
Thesis Type
Thesis (PhD Doctorate)
Degree Program
Doctor of Philosophy (PhD)
School
School of Humanities, Languages and Social Science
Rights Statement
The author owns the copyright in this thesis, unless stated otherwise.
Subject
oral proficiency tests
systemic-functional linguistics
IELTS speaking tests