Abstract
This study seeks to design and validate a computer-mediated speaking test that measures the interactional competence (IC) of L2-Chinese speakers. The researcher first conducted a task-based needs analysis, which was translated into a nine-item IC test. The items covered three language-use domains (study, work and everyday interaction) and were delivered through three testing methods of varying degrees of interactiveness (first pair-part voice messaging, second pair-part voice messaging and video chat). After the IC test was developed, the researcher elicited everyday-life domain experts’ indigenous criteria to develop a rating scale with five rating categories. The rating scale was theorised using concepts from Conversation Analysis and Membership Categorisation Analysis. The five final rating categories were disaffiliation management, affiliative resources, morality, reasoning and role enactment, covering the affective, moral, logical and categorical dimensions of interaction. 105 test-takers of differing proficiency in Chinese sat the test. Their test performances were scored by two raters in a fully-crossed design. Findings from the Rasch analyses of the rating results show that raters can spread test-takers across a wide range of abilities, with high Rasch test-taker reliability, item reliability, and intra-rater and inter-rater reliability indices. Correlational analyses between test-takers’ IC test scores and their HSK Chinese proficiency scores show that the IC test measures variance unaccounted for by traditional measures of linguistic competence (LC). The implications of this study are that it is feasible to develop a highly reliable IC test instrument and to measure a broader IC construct that covers aspects not addressed in traditional speaking assessment.
| Original language | English |
|---|---|
| Place of Publication | Australia |
| Number of pages | 7 |
| Publication status | Published - 18 Feb 2021 |