Developing a Model of Analytic Rating Scales to Assess College Students’ L2 Chinese Oral Performance
Guangyan
Chen
Department of Modern Language Studies, Texas Christian University, USA.
author
text
article
2016
eng
This study develops a model of analytic rating scales to assess L2 Chinese oral performance. It uses Exploratory Factor Analysis (EFA) to identify a model and employs Confirmatory Factor Analysis (CFA) on a separate dataset to test the degree of model fit. The researcher videotaped ten speeches, and ACTFL professional raters assessed the oral performances in these samples. The researcher then selected three samples (Samples 1, 2, and 3) to represent the proficiency levels of Novice High, Intermediate High, and Advanced Low, and developed 20 rating items by interviewing ten experienced L2 Chinese teachers and running an EFA. The 20 items were descriptors that Chinese teachers used to assess oral performance in two studies: Study 1 and Study 2. In Study 1, 45 teachers assessed Sample 1 using the 20 items, 62 teachers rated Sample 2, and 49 teachers rated Sample 3. In Study 2, 104 teachers assessed all three samples. The EFA indicated a four-factor model of analytic rating scales: “fluency,” “conceptual understanding,” “communication clarity,” and “communication appropriateness.” In this model, the correlations between the analytic rating scales were relatively high, and teachers weighted “fluency” as most important. Together the four scales explained 65.5% of teachers’ holistic judgments of oral performance. The CFA did not show a strong model fit to the data, but the fit was acceptable. This model advances our understanding of the relationship between analytic rating scales and holistic ratings in the context of L2 Chinese. These findings give Chinese teachers a reference with which to assess U.S. college students’ L2 Chinese oral performance.
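The core quantity in the abstract — four analytic scales jointly explaining 65.5% of holistic judgments — is a multiple-regression R². A minimal sketch of that computation, using entirely made-up teacher ratings and a hand-rolled normal-equations solver (not the study's actual data or software):

```python
# Illustrative sketch with HYPOTHETICAL ratings: regress a holistic oral score
# on four analytic-scale ratings and report R^2, the "variance explained"
# statistic the abstract refers to.

def solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def r_squared(X, y):
    """Ordinary least squares via the normal equations (X'X)beta = X'y; returns R^2."""
    Xi = [[1.0] + row for row in X]  # prepend an intercept column
    p = len(Xi[0])
    xtx = [[sum(r[i] * r[j] for r in Xi) for j in range(p)] for i in range(p)]
    xty = [sum(Xi[r][i] * y[r] for r in range(len(Xi))) for i in range(p)]
    beta = solve(xtx, xty)
    yhat = [sum(b * v for b, v in zip(beta, row)) for row in Xi]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical ratings per rater: fluency, conceptual understanding,
# communication clarity, communication appropriateness -> holistic score.
analytic = [[4, 3, 4, 3], [2, 2, 3, 2], [5, 4, 4, 5], [3, 3, 2, 3], [4, 5, 5, 4], [1, 2, 2, 1]]
holistic = [4, 2, 5, 3, 5, 1]
print(f"R^2 = {r_squared(analytic, holistic):.3f}")
```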
International Journal of Language Testing
Tabaran Institute of Higher Education
2476-5880
6
v.
2
no.
2016
50
71
https://www.ijlt.ir/article_114430_e62777151d6f622b96de87ab6b412e67.pdf
Development and Validation of a Researcher Constructed Psycho-motor Mechanism Scale for Evaluating the Quality of Translation Works
Sepeedeh
Hanifehzadeh
Science and Research Branch, Islamic Azad University, Tehran, Iran.
author
Farzaneh
Farahzad
Allameh Tabatabai University, Tehran, Iran.
author
text
article
2016
eng
The present study was designed to develop a psycho-motor mechanism scale based on the theory of translation competence proposed by PACTE (2003) and then to assess the validity and reliability of the constructed scale. In this quantitative study, after the scale was designed, two translation tasks were given to 90 M.A. students majoring in translation studies at four branches of Islamic Azad University. Based on the ratings of two experienced raters, the reliability and validity of the scale were determined. Two types of validity, concurrent and construct, were assessed. The correlation between TOEFL PBT, as a test of linguistic ability, and the researcher-constructed psycho-motor scale was checked and verified. Next, the correlation between the holistic scale for translation quality assessment developed by Waddington (2001) and the researcher-constructed psycho-motor scale was examined and confirmed. To establish the construct validity of the scale, factor analysis was run to probe the underlying constructs of the eight components of the researcher-constructed psycho-motor mechanism scale. As for reliability, the correlation between the two raters’ ratings on the constructed scale was calculated, and the scale was found to be reliable. The present findings, supported by the validity and reliability of the researcher-constructed scale, can contribute to the field of translation studies, which seems to be in great need of objective and communicative scales for translation tasks grounded in an anchored and consolidated theory of translation quality assessment such as that of PACTE (2003).
International Journal of Language Testing
Tabaran Institute of Higher Education
2476-5880
6
v.
2
no.
2016
72
91
https://www.ijlt.ir/article_114431_2ac99bfd9f2f279acd8405687cb8b7f4.pdf
Cognitive Diagnostic Modeling of L2 Reading Comprehension Ability: Providing Feedback on the Reading Performance of Iranian Candidates for the University Entrance Examination
S. Jamal
Hemmat
English Department, Islamic Azad University, Mashhad Branch, Mashhad, Iran.
author
Purya
Baghaei
English Department, Islamic Azad University, Mashhad Branch, Mashhad, Iran.
author
Masoumeh
Bemani
English Department, Islamic Azad University, Mashhad Branch, Mashhad, Iran.
author
text
article
2016
eng
Research has been conducted on linguistic aspects of L2 reading comprehension ability, but studies concerning the cognitive aspects of this skill are rather scarce. Efforts should be made to gain a sound understanding of the attributes and subskills needed for successful performance on L2 reading comprehension tests. In this study, the reading attributes underlying the reading comprehension section of the Iranian National University Entrance Examination (Konkoor) held in 2011 are investigated. The G-DINA model, a general cognitive diagnostic model, is applied, and the CDM package in R is used to run the analyses. The findings indicate problematic areas in reading comprehension at a national level. Students who wish to pursue their studies at national universities, the teachers who train candidates at high schools, and the Ministry of Education can benefit from such feedback.
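The study itself fits the G-DINA model with the CDM package in R; as a rough intuition for what such a model computes, here is a sketch of the DINA model — the constrained special case of G-DINA — with made-up slip/guess parameters and Q-matrix:

```python
# DINA response-probability sketch (a special case of the G-DINA model used in
# the study). All parameters and the Q-matrix row below are HYPOTHETICAL.

def dina_probability(mastery, q_row, slip, guess):
    """P(correct) for one examinee on one item under the DINA model.

    mastery: examinee's attribute profile, e.g. [1, 0, 1]
    q_row:   attributes the item requires (one row of the Q-matrix)
    slip:    P(incorrect | all required attributes mastered)
    guess:   P(correct | at least one required attribute missing)
    """
    # eta = 1 iff the examinee masters every attribute the item requires
    eta = all(m >= q for m, q in zip(mastery, q_row))
    return 1 - slip if eta else guess

# Hypothetical reading item requiring attributes 1 and 3 (of 3).
q_row = [1, 0, 1]
print(dina_probability([1, 1, 1], q_row, slip=0.1, guess=0.2))  # master: 0.9
print(dina_probability([1, 0, 0], q_row, slip=0.1, guess=0.2))  # non-master: 0.2
```

G-DINA relaxes the all-or-nothing `eta` above, letting each subset of required attributes contribute its own increment to the success probability.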
International Journal of Language Testing
Tabaran Institute of Higher Education
2476-5880
6
v.
2
no.
2016
92
100
https://www.ijlt.ir/article_114432_d0c792401cf6619358a88c4591172ab6.pdf
An in-depth Insight into EFL University Students’ Cognitive Processes of C-Test and X-Test: A Case of Comparison
Hamid
Ashraf
English Department, Torbat-e-Heydarieh Branch, Islamic Azad University, Torbat-e-Heydarieh, Iran.
author
Mona
Tabatabaee-Yazdi
English Department, Torbat-e-Heydarieh Branch, Islamic Azad University, Torbat-e-Heydarieh, Iran.
author
Aynaz
Samir
English Department, Torbat-e-Heydarieh Branch, Islamic Azad University, Torbat-e-Heydarieh, Iran.
author
text
article
2016
eng
The literature on the C-test and cloze test in a second language provides few accounts of the actual mental processes test-takers engage in, processes which indicate the real nature of what these tests measure. The literature has also paid little attention to the cognitive processes involved in X-Tests. Therefore, the purpose of this study was to discover the extent to which the C-Test and the X-Test tap participants’ use of mental strategies. To this end, a C-Test and an X-Test were administered to eight EFL learners in Mashhad, Iran. All participants took part in the introspective methods of think-aloud and retrospective interviews throughout the test administration; think-aloud protocols were used to collect the required data. The results showed that the two tests were similar with respect to mental processes, except for two strategies that participants applied only when filling the gaps of the X-Test. Moreover, the X-Test appeared to be more difficult for the subjects.
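For readers unfamiliar with the instrument, the classic C-test mutilation rule (delete the second half of every second word) can be sketched as below. The deletion convention for odd-length words varies across instruments, and the X-test applies a different deletion rule, so only the C-test rule is shown; the input sentence is invented:

```python
# Sketch of one common C-test mutilation rule: the second half of every second
# word is deleted (shown here as underscores). Real C-tests add further
# conventions, e.g. leaving the first sentence of the passage intact.

def c_test(text):
    """Blank out the second half of every second word in `text`."""
    out = []
    for i, word in enumerate(text.split(), start=1):
        if i % 2 == 0:
            keep = len(word) // 2  # keep the first half (smaller half for odd lengths)
            word = word[:keep] + "_" * (len(word) - keep)
        out.append(word)
    return " ".join(out)

print(c_test("Language testing research examines what learners actually know"))
```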
International Journal of Language Testing
Tabaran Institute of Higher Education
2476-5880
6
v.
2
no.
2016
101
112
https://www.ijlt.ir/article_114433_926b4c8268d580cde4fb9d8c83cbff11.pdf
Investigating Factors of Difficulty in C-Tests: A Construct Identification Approach
Fahimeh
Khoshdel
English Department, Islamic Azad University, Mashhad Branch, Mashhad, Iran.
author
Purya
Baghaei
English Department, Islamic Azad University, Mashhad Branch, Mashhad, Iran.
author
Masoumeh
Bemani
English Department, Islamic Azad University, Mashhad Branch, Mashhad, Iran.
author
text
article
2016
eng
In this paper we attempt to demonstrate the validity of the C-Test using the construct identification approach. In this approach to construct validation, the factors that contribute to item difficulty are identified; the assumption is that the factors which make items difficult constitute the construct underlying the test. For the purposes of this study, 11 item-level and sentence-level factors deemed to affect item difficulty were entered into a regression analysis to predict classical item p-values. The 11 factors explained only 8% of the variance in item difficulties. This finding shows that lexical and sentential factors explain only a very small portion of the variance in p-values; a large amount of the variation in item difficulties should instead be attributed to above-sentence and text-level factors. The implications of the study for C-Test construct validity are discussed.
International Journal of Language Testing
Tabaran Institute of Higher Education
2476-5880
6
v.
2
no.
2016
113
122
https://www.ijlt.ir/article_114434_79f82febccce968b555b9271961854df.pdf