Development and Validation of a Multimedia Computer Package for the Assessment of Oral Proficiency of Adult ESL Learners: Implications for Score Comparability

Primary Supervisor
Levy, Mike
Other Supervisors
Kennedy, Claire
O’Neill, Shirley
Date
2008
Abstract

This thesis is about the conceptualization, design, development, trial, and validation of a multimedia package for the computer-based administration of an interview for testing the general English language proficiency of adult ESL learners. The research is significant at both theoretical and practical levels. Theoretically, it fills a gap in comparability studies of computer-based tests and conventional face-to-face interviews. It also sheds light on the usability and validity of a new mode of presentation, one that treats the language production driven by the computerized test as an aspect of target language use in assessing performance and interpreting test scores. The study employs a quantitative and qualitative survey to investigate the interaction between test-takers and computer-delivered tests. It explores the effects of test-taker characteristics (such as age, gender, language background, and computer familiarity) on test performance and draws on qualitative feedback from examinees in interpreting test usefulness and substantiating validity generalizations.

At a practical level, the project contributes to the language testing industry by capturing the potential of the computer and digital media for developing tests and tasks and by introducing a new set of innovative tasks for the assessment of speaking. It further formulates a practical process model for future test developers. The end product of this research is a working prototype of a multimedia language testing instrument that uses video and audio to present the tasks and may serve as an entry/exit, gate-keeping, accreditation, certification, or placement mechanism. Along with substantial findings on the comparability of computer-based tests with face-to-face interviews, the study provides a set of practical guidelines for researchers embarking on the design and development of computer-based language tests.

Given the rate of innovation in digital media, natural language processing, and voice recognition technology, the present era must be considered a transitional one, and the future is difficult to predict. The thesis therefore concludes with two principal suggestions for further research at the conceptual and practical levels. First, given the complex nature of human-machine interaction, researchers in language testing (particularly speaking tests) are advised to exercise caution in validity generalizations, because modifications to the delivery mode can change the quality and nature of the task and, as a consequence, the quality of the speaking performance. Second, this study produced a small-scale prototype and a working example of the use of digital video in oral testing. The results showed that, for large-scale test development projects, language testing professionals need to engage a team of information technology experts when developing tests of speaking proficiency, with a view to increasing the number and variety of tasks as well as enhancing the security and usability of the test.

Thesis Type
Thesis (PhD Doctorate)
Degree Program
Doctor of Philosophy (PhD)
School
School of Languages and Linguistics
Rights Statement
The author owns the copyright in this thesis, unless stated otherwise.
Subject
English language proficiency
Adult ESL learners
Computer-based tests
Language testing
Speaking assessment
Oral proficiency testing
Computer adaptive tests
Web-based tests
Computers in language testing