eng
Tabaran Institute of Higher Education
International Journal of Language Testing
2476-5880
2012-03-01
2
1
1
19
114357
Application of Confirmatory Factor Analysis in Construct Validity Investigation: The Case of the Grammar Sub-Test of the CEP Placement Exam
Payman Vafaee
1
Nesrine Basheer
2
Reese Heitner
3
Teachers College, Columbia University, New York, USA.
Teachers College, Columbia University, New York, USA.
Teachers College, Columbia University, New York, USA.
An important assumption in language testing is that test items, or observable variables, tap the underlying latent traits hypothesized in the theoretical model or constructs governing the design of the testing instrument (e.g., Shin, 2005). Accordingly, the present study investigated the extent to which scores from the grammar sub-test of the Columbia University Community English Program (CEP) placement test can be interpreted as indicators of test takers’ grammatical knowledge. The authors adopted Purpura’s (2004) theoretical model of grammatical knowledge, which hypothesizes that grammatical knowledge consists of two underlying traits: form and meaning. To this end, the authors conducted a confirmatory factor analysis (CFA) to investigate whether the CEP grammar test data (n = 144) match the hypothesized theoretical model. Since the test items were not discrete-point but were nested within one of four tasks (each with its own theme), and in line with an interactionist view of construct definition, the effects of these four themes (contexts) on individual items were also investigated. A multitrait-multimethod (MTMM) model achieved the best model fit based on substantive and parsimony considerations. It included the two underlying traits of grammatical form and meaning and four method (context) factors, and confirmed that the CEP test measured grammatical knowledge and included the effect of context as part of its construct. These findings support the interpretive argument presented for the construct validity of the CEP grammar test and the appropriateness of the explanation inference made on the basis of the test’s scores. Further implications are discussed.
https://www.ijlt.ir/article_114357_f1a661bf04ee1c2c887586ffd6ba6398.pdf
Construct validity
Confirmatory Factor Analysis
Explanation inference
Grammatical knowledge
eng
Tabaran Institute of Higher Education
International Journal of Language Testing
2476-5880
2012-03-01
2
1
20
27
114358
Theoretical Misconceptions and Misuse of Statistics: A Critique of Khodadady and Hashemi (2011) and Some General Remarks on Cronbach’s Alpha
Rüdiger Grotjahn
1
Seminar für Sprachlehrforschung (Department of Foreign Language Research), Ruhr-Universität Bochum, Bochum, Germany.
This article comments on theoretical misconceptions and misuses of statistics in Khodadady & Hashemi’s (2011) paper “Validity and C-Tests: The Role of Text Authenticity”. Firstly, it is pointed out that the pertinent C-Test literature is not adequately dealt with. Then, it is argued that the authors misconstrue the notion of the C-Test when they apply the term to a single (longer) C-Test text such as their AC-Test (Authentic C-Test). Subsequently, it is shown that there are serious flaws in the data analysis and interpretation. Here, the main focus is on local item dependence, which is not taken into account, and misconceptions with regard to Cronbach’s Alpha, an issue of relevance to a wider audience.
https://www.ijlt.ir/article_114358_a87878d865b167f94c1931376a4eb201.pdf
C-test
Authenticity
Reliability
Cronbach’s alpha
Guttman’s Lambda2
Local Stochastic Dependence
LID
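The alpha-related misconceptions discussed in this article can be made concrete with the underlying formulas. As a minimal sketch (not taken from the article; function names and the simulated data are mine), the following NumPy code computes Cronbach’s alpha and Guttman’s Lambda2 from an examinees-by-items score matrix; Lambda2 is also a lower bound to reliability and is never smaller than alpha.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n examinees) x (k items) score matrix."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # per-item variances
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def guttman_lambda2(X):
    """Guttman's Lambda2: uses the off-diagonal item covariances and
    always satisfies lambda2 >= alpha."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    C = np.cov(X, rowvar=False)                # k x k item covariance matrix
    off = C - np.diag(np.diag(C))              # zero out the diagonal
    total_var = C.sum()                        # equals var of the total score
    return (off.sum() + np.sqrt(k / (k - 1) * (off ** 2).sum())) / total_var

# Illustration on simulated data: one common factor plus item noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1)) + rng.normal(size=(200, 5))
print(cronbach_alpha(X), guttman_lambda2(X))
```

Both coefficients are lower bounds to reliability under classical test theory; reporting alpha as if it *were* the reliability, or applying it to locally dependent items (as with C-Test gaps nested in the same text), inflates the apparent precision, which is the kind of misuse the article critiques.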
eng
Tabaran Institute of Higher Education
International Journal of Language Testing
2476-5880
2012-03-01
2
1
28
58
114359
How Does “Sentence Structure and Vocabulary” Function as a Scoring Criterion Alongside Other Criteria in Writing Assessment?
Vahid Aryadoust
1
Centre for English Language Communication, National University of Singapore, Singapore.
Several studies have evaluated sentence structure and vocabulary (SSV) as a scoring criterion in assessing writing, but no consensus on its functionality has been reached. The present study offers evidence that this scoring criterion may not be appropriate in writing assessment. Scripts by 182 ESL students at two language centers were analyzed with the Rasch partial credit model. Although the other scoring criteria functioned satisfactorily, SSV scores did not fit the Rasch model, and analysis of residuals showed that SSV scoring on most test prompts loaded on a benign secondary dimension. The study proposes that a lexico-grammatical scoring criterion has potentially conflicting properties and therefore recommends considering separate vocabulary and grammar criteria in writing assessment.
https://www.ijlt.ir/article_114359_2abb5f4726a26b5630f3996225a8c658.pdf
Lexico-grammatical scoring
Rasch model
Partial credit model (PCM)
L2 writing
Writing models
eng
Tabaran Institute of Higher Education
International Journal of Language Testing
2476-5880
2012-03-01
2
1
59
92
114361
A Comparison of the Performance of Analytic vs. Holistic Scoring Rubrics to Assess L2 Writing
Cynthia Wiseman
1
City University of New York.
This study compared the performance of a holistic and an analytic scoring rubric for assessing ESL writing for placement and diagnostic purposes in a community college basic skills program. The study used many-facet Rasch measurement to investigate the performance of both rubrics in scoring second language (L2) writing samples from a departmental final examination. Rasch analyses were used to determine whether the rubrics successfully separated examinees along a continuum of L2 writing proficiency. The study also investigated whether each category in the two six-point rubrics was useful. Both scales appeared to measure a single latent trait of writing ability. Raters rarely used the lowest category of the holistic rubric, suggesting that it might be collapsed to create a five-point scale. The six-point scale of the analytic rubric, on the other hand, separated examinees across a wide range of strata of L2 writing ability and might therefore be the better instrument for diagnostic and placement purposes.
https://www.ijlt.ir/article_114361_9544f0e7ef140d3731098f945f34a848.pdf
L2 writing assessment
Analytic scoring
Holistic scoring
Rubrics
Rasch
MFRM