Author Identifier

Thi Bich Hiep Vu

https://orcid.org/0000-0003-2202-9624

Date of Award

2021

Document Type

Thesis

Publisher

Edith Cowan University

Degree Name

Doctor of Philosophy

School

School of Education

First Supervisor

Dr Anne Thwaite

Second Supervisor

Dr Alistair Campbell

Third Supervisor

Dr Jeremy Pagram

Abstract

The speaking performance of students studying English as a Foreign Language (EFL) has mainly been assessed through face-to-face speaking tests. While such tests are undoubtedly interactive and authentic, they have been criticised for subjective scoring, inefficient test delivery, and the absence of recordings for later review.

Technology has increasingly been integrated into speaking tests over the last decade, in what has become known as computer-assisted or computer-based assessment of speaking. Although this method is widely acknowledged to measure certain aspects of spoken language effectively, such as pronunciation and grammar, it has not yet proved a successful option for assessing interactive skills. An effective testing method should maintain the interactivity and authenticity of live speaking tests, deliver tests quickly and efficiently, and provide recordings of performances for multiple marking and review.

This study investigated digital representation of EFL speaking performance as a viable form of student assessment. The feasibility of digital representation has previously been examined in relation to authenticity and reliability in assessment of different subjects in Western Australia, including Italian, Applied Information Technology, Engineering Studies, and Physical Education Studies. However, as far as the researcher is aware, no studies have yet assessed EFL speaking performance using digital representation. In an attempt to bridge this gap, this study explored the feasibility of digital representation for assessing EFL speaking performance in a university in Vietnam, the researcher’s home country.

Data collection was undertaken in two phases using a mixed methods approach. In Phase 1, data related to English teachers’ and students’ perceptions of Computer-Assisted English Speaking Assessment (CAESA) were collected. Their perceptions were analysed in relation to the outcomes of a digital speaking assessment trial using the Oral Video Assessment Application (DMOVA). In Phase 2, student participants took an English speaking test while being video and audio recorded. English teachers invigilated and marked the trial test using the current method, followed by the digital method. Data were collected via Qualtrics surveys, interviews, observations and databases of student performance results. The feasibility of digital representation in assessing EFL speaking performance was analysed according to the Feasibility Analysis Framework developed by Kimbell, Wheeler, Miller, and Pollitt (2007).

The findings from Phase 1 indicated that both teachers and students had positive attitudes towards computer-assisted assessment (CAA). They were confident with computer-assisted English assessment (CAEA) and preferred this testing method to the current paper-and-pencil process. Both cohorts believed that CAEA enhanced the precision and fairness of assessments and was efficient in terms of resources. However, some participants were sceptical about the authenticity of computer-assisted EFL speaking tests because such tests did not foster conversations and interactions in the same way as face-to-face assessments. In spite of this scepticism, teachers and students indicated their willingness to trial DMOVA.

Phase 2 identified the feasibility dimensions of DMOVA. This method of digital assessment was perceived to enhance fairness, reliability and validity, with scores from the live interview and digital tests showing some correlation. Teachers found it easy to manage the speaking tests with DMOVA and recognised the logistical advantages it offered. DMOVA was also credited with generating positive washback effects on the learning, teaching and assessment of spoken English. In addition, the digital technology was compatible with the existing facilities at the university and required neither technical support nor advanced ICT knowledge. Overall, the benefits of the new testing method were perceived to outweigh its limitations.

The study confirmed that digital representation of EFL speaking performances for assessment would be beneficial for Vietnam for the following reasons: (a) it has the potential to enhance the reliability and accuracy of the current English speaking assessment method, (b) it retains evidence of students’ performance for later assessment and review, and (c) it facilitates marking and administration. These changes could boost EFL teaching, learning and assessment, as witnessed in the trial, leading to increased motivation among teachers and students and, ultimately, enhancement of students’ English communication skills. The findings also have implications for English speaking assessment policies and practices in Vietnam and in similar contexts where English is taught, spoken and assessed as a foreign language.

Access Note

Access to Appendices G, H and I of this thesis is not available.
