Does the Type of Multiple-choice Item Make a Difference? The Case of Testing Grammar

Document Type: Original Article

Authors

Department of Foreign Languages and Linguistics, College of Literature and Humanities, Shiraz University, Shiraz, Iran.

Abstract

Given the widespread use of multiple-choice (MC) tests, even though many practitioners disapprove of them, investigating the performance of such tests and their resulting features is desirable. The focus of this study was a modified version of the multiple-choice test known as the multitrak format. The study compared the multitrak test scores of about 60 students against their scores on standard MC and constructed-response (CR) tests. All the tests employed in the study assessed English grammar and had identically worded stems. The results showed that multitrak items are more difficult than the other formats. They suggest that such items can be used to test more advanced aspects of grammatical competence, since to identify the unacceptable option the test taker must go beyond mere syntactic knowledge and command the range of alternatives used in communication. The multitrak test is therefore better geared to higher levels of proficiency and can provide better information about more proficient test takers. Finally, the implications of the study for test constructors and test users, as well as directions for future research, are discussed.

Keywords