Document Type : Original Research Article
University of Foreign Language Studies, The University of Danang
Combining Bachman’s (1990) conceptualization of content validity with Messick’s (1989) unifying model of construct validity, this study attempted to fill the gap in research on content validation in the context of a university reading achievement test by (a) examining traditional content validity evidence and (b) analyzing the test scores to either back up or rebut that evidence and to explore the underlying construct structure of the test. Drawing on a sample of 477 third-year English-major test takers at a Central Vietnamese University (CVU), the study was conducted in two stages: a pre-test stage, in which the test content was compared with the test specification, and a post-test stage, in which the test scores were processed via Rasch and CFA competing-model analyses. Results showed that while content relevance was satisfactory and the test construct was, to a large extent, not threatened by construct-irrelevant variance, content coverage of the test remained problematic, with instances of construct underrepresentation. Moreover, the study supported the one-factor model of general reading ability as the underlying structure of the reading achievement test over the correlated three-sub-skill and higher-order factor models. The study has implications both for test writers designing and for test takers performing L2 reading tests.